Date: Wed, 26 Jan 2022 13:42:56 +0100
From: Fabian Grünbichler
To: Fabian Ebner, pve-devel@lists.proxmox.com
References: <20220113100831.34113-1-f.ebner@proxmox.com> <20220113100831.34113-8-f.ebner@proxmox.com>
Message-Id: <1643200113.0ad0cpn1af.astroid@nora.none>
User-Agent: astroid/0.15.0 (https://github.com/astroidmail/astroid)
Subject: Re: [pve-devel] [RFC v10 qemu-server 6/7] api: support VM disk import
List-Id: Proxmox VE development discussion
On January 26, 2022 12:40 pm, Fabian Ebner wrote:
> On 13.01.22 at 11:08, Fabian Ebner wrote:
>> +
>> +    if (my $source = delete $disk->{'import-from'}) {
>
> I'm adding a comment here in v11, because otherwise it's not clear where
> volume activation happens:
> +	# abs_filesystem_path also calls activate_volume when $source is a volid
>
> I'm also adding "The source should not be actively used by another
> process!" to the description of the import-from parameter in v11.

sounds good

>
>> +    $source = PVE::Storage::abs_filesystem_path($storecfg, $source, 1);
>
> But there are a couple of issues here:
>
> 1. There's no protection against using a source volume that's actively
> used by a guest/other operation. While it's not possible to detect in
> general, I wonder if we should behave more like a full clone and lock
> the owning VM?
>
> 1a. We could check if the volume is referenced in the config/snapshots,
> but migration picks up everything, so it might be preferable not to.
>
> 1b. The volume might be configured in a VM that doesn't own it...
>
> 2. Related: avoiding concurrent activation of volumes on a shared LVM.
>
> 3. Related: we cannot deactivate any volumes, as they might be used by
> something else.
>
> 4. abs_filesystem_path does not work for RBD when krbd=0, because the
> plugin produces an "rbd:XYZ" path and the -f || -b check doesn't like
> that. But a full clone does work, passing the volid to qemu_img_convert,
> and that's likely what we should do here as well when we are dealing
> with an existing volid rather than an absolute path.
>
> 5. Add your own ;)
>
> TL;DR: I'd like to behave much more like full clone, when we are dealing
> with a volid rather than an absolute path.

yeah. it sounds to me like we could do most of that properly by just
(iff the import source is a volume) checking whether the owning VM resides
on the current node, and locking it if so (and failing if not?). not sure
whether we'd also want to require it to be stopped if the volume is
referenced in the current config? for full clones we pass the 'running'
state to the storage layer's volume_has_feature - but that seems to not
use the information in any way?

that way we can skip deactivation altogether (it's only relevant for
shared storages that require it for migration, and by locking the owning
VM and having a requirement for it to be on the same node at import time,
no migration can happen in parallel anyway..). or we could deactivate if
an owning VM exists and is not running, like we do at the end of full
clones.

1b is 'all bets are off' territory anyway IMHO - there is no sane way to
handle all the edge cases..

>
>> +    my $src_size = PVE::Storage::file_size_info($source);
>> +    die "Could not get file size of $source" if !defined($src_size);
>> +
>> +    my (undef, $dst_volid) = PVE::QemuServer::ImportDisk::do_import(
>
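For illustration, the flow discussed above (only lock when the import source is a volid whose owning VM resides on the current node, and hand the volid rather than a filesystem path onward so the RBD `rbd:XYZ` case works) could be sketched roughly like this. This is a minimal, self-contained sketch: `parse_volume_id`, `owner_vmid`, and `import_source_check` are hypothetical stand-ins, not the real PVE::Storage / PVE::QemuConfig API, and the locking itself is only indicated in a comment.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in: a volid looks like "storeid:volname";
# absolute paths start with '/' and are passed through untouched.
sub parse_volume_id {
    my ($source) = @_;
    return if $source =~ m!^/!;    # absolute path, not a volid
    return split(/:/, $source, 2); # ("storeid", "volname")
}

# Hypothetical stand-in: PVE volnames conventionally embed the owning
# VMID, e.g. "vm-123-disk-0".
sub owner_vmid {
    my ($volname) = @_;
    return $volname =~ /vm-(\d+)-/ ? $1 : undef;
}

# Decide how to treat the import source: absolute paths are used as-is;
# for volids, require the owning VM to reside on the current node
# ($local_vmids is a set of VMIDs present here) and report which VM
# config would need to be locked.
sub import_source_check {
    my ($source, $local_vmids) = @_;

    my ($storeid, $volname) = parse_volume_id($source);
    return { path => $source } if !defined($storeid);

    my $vmid = owner_vmid($volname);
    die "owning VM $vmid does not reside on this node\n"
        if defined($vmid) && !$local_vmids->{$vmid};

    # At this point the caller would lock the owning VM's config (e.g.
    # via PVE::QemuConfig->lock_config) and pass the volid - not a
    # filesystem path - to the qemu-img convert step, so no concurrent
    # migration can race the import and krbd=0 RBD sources still work.
    return { volid => $source, lock_vmid => $vmid };
}
```

A usage note: because the owning VM is required to be local and locked for the duration of the import, no deactivation step is needed afterwards, matching the "skip deactivation altogether" reasoning above.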