Date: Fri, 27 Oct 2023 11:19:33 +0200
From: Fiona Ebner
To: "DERUMIER, Alexandre", pve-devel@lists.proxmox.com, aderumier@odiso.com
Subject: Re: [pve-devel] [PATCH v4 qemu-server 2/2] remote-migration: add target-cpu && target-reboot params

On 25.10.23 at 18:01, DERUMIER, Alexandre wrote:
>>> Unused disks can just be migrated offline via storage_migrate(), or?
> currently, unused disks can't be migrated through the http tunnel for
> remote-migration:
>
> 2023-10-25 17:51:38 ERROR: error - tunnel command
> '{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow_rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}'
> failed - failed to handle 'disk-import' command - no matching
> import/export format found for storage 'preprodkvm'
> 2023-10-25 17:51:38 aborting phase 1 - cleanup resources
> tunnel: -> sending command "quit" to remote
> tunnel: <- got reply
> tunnel: CMD channel closed, shutting down
> 2023-10-25 17:51:39 ERROR: migration aborted (duration 00:00:01): error
> - tunnel command
> '{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow_rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}'
> failed - failed to handle 'disk-import' command - no matching
> import/export format found for storage 'preprodkvm'
> migration aborted

Well, yes, they can. But there needs to be a common import/export format
between the storage types. Admittedly, that is a bit limited for certain
storage types: e.g. ZFS only supports ZFS, and RBD does not implement
import/export at all yet (because it wasn't needed within a single
cluster).

>>> If we want to switch to migrating disks offline via QEMU instead of
>>> our current storage_migrate(), going for QEMU storage daemon + NBD
>>> seems the most natural to me.
>
> Yes, I'm more in favor of this solution.
>
>>> If it's not too complicated to temporarily attach the disks to the
>>> VM, that can be done too, but is less re-usable (e.g. pure offline
>>> migration won't benefit from that).
>
> Not sure about attaching/detaching them temporarily one by one, or
> attaching all devices at once (but that needs enough controller slots).

I think you can attach them to the VM without attaching them to a
controller by using QMP blockdev-add, but...

> qemu storage daemon seems to be a less hacky solution ^_^

...sure, this should be nicer and more re-usable.
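Just to sketch what I have in mind (everything below is made up and
untested: paths, node names, socket and export name; it also assumes a
raw image on a file-based storage): the target could expose a
not-yet-attached disk via the storage daemon like

    qemu-storage-daemon \
        --blockdev driver=file,node-name=file0,filename=/var/lib/vz/images/1112/vm-1112-disk-1.raw \
        --blockdev driver=raw,node-name=fmt0,file=file0 \
        --nbd-server addr.type=unix,addr.path=/run/qsd-mig-1112.sock \
        --export type=nbd,id=export0,node-name=fmt0,name=drive-unused0,writable=on

and the source could then write into that export, e.g.

    qemu-img convert -n -f raw /src/vm-1112-disk-1.raw -O raw \
        'nbd+unix:///drive-unused0?socket=/run/qsd-mig-1112.sock'

assuming the socket can be forwarded through the tunnel. But again, just
a sketch.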
>> but if it works, I think we'll need to add config generation in
>> pve-storage for the different block drivers, like:
>>
>> --blockdev driver=file,node-name=file0,filename=vm.img
>>
>> --blockdev driver=rbd,node-name=rbd0,pool=my-pool,image=vm01

>>> What other special cases besides (non-krbd) RBD are there? If it's
>>> just that, I'd much rather keep the special handling in QEMU itself
>>> than burden all other storage plugins with implementing something
>>> specific to VMs.
>
> not sure, maybe glusterfs, .raw (should work for block devices like
> lvm,zfs), .qcow2

There's a whole lot of drivers:
https://qemu.readthedocs.io/en/v8.1.0/interop/qemu-qmp-ref.html#qapidoc-883
But e.g. for NFS, we don't necessarily need it and can just use
qcow2/raw. Currently, with -drive, we also just treat it like any other
file.

I'd like to keep the logic for how to construct the -blockdev command
line option (mostly) in qemu-server itself. But I guess we can't avoid
some amount of coupling. Currently, for -drive, we have the coupling in
path(), which can e.g. return rbd: or gluster:, and QEMU then parses
which driver to use from that path.

Two approaches that make sense to me (no real preference at the moment):

1. Have a storage plugin method which tells qemu-server about the
necessary driver and properties for opening the image. E.g. return the
properties as a hash, have qemu-server join them together and then add
the generic properties (e.g. aio, node-name) to construct the full
-blockdev option.

2. Do everything in qemu-server and special-case the storage types that
have a dedicated driver. That still needs to get info like the pool name
from the RBD storage of course, but this should be possible with
existing methods.

Happy to hear other suggestions/opinions.

>>> Or is there a way to use the path from the storage plugin somehow
>>> like we do at the moment, i.e.
>>> "rbd:rbd/vm-111-disk-1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/rbd.keyring"?
>
> I don't think it's possible just like this. I need to do more tests,
> looking at libvirt, because there is not much documentation about it.

Probably they decided to get rid of this magic for the newer -blockdev
variant. I tried to cheat by using driver=file and specifying the
"rbd:"-path as the filename, but it doesn't work :P
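For comparison, this is what I mean by the magic path vs. explicit
properties (the -blockdev line is just my reading of the QAPI schema,
untested): currently, the driver choice is hidden in the pseudo-path
returned by path(),

    -drive file=rbd:rbd/vm-111-disk-1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/rbd.keyring,if=none,id=drive-scsi0

while with -blockdev, each property needs to be spelled out explicitly,
e.g.

    -blockdev driver=rbd,node-name=drive-scsi0,pool=rbd,image=vm-111-disk-1,conf=/etc/pve/ceph.conf,user=admin

with the authentication key then coming from the referenced conf/keyring
or a separate secret object rather than from the path. That's the kind
of information qemu-server would need to get from (or construct with the
help of) the storage layer.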