Date: Wed, 03 May 2023 11:17:20 +0200
From: Fabian Grünbichler
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
Message-Id: <1683104902.ds4ntmeubl.astroid@yuna.none>
In-Reply-To: <20230502131732.1875692-2-a.lauterer@proxmox.com>
References: <20230502131732.1875692-1-a.lauterer@proxmox.com>
 <20230502131732.1875692-2-a.lauterer@proxmox.com>

On May 2, 2023 3:17 pm, Aaron Lauterer wrote:
> Scan the VM config and store the volid and full path for each storage.
> Do the same when we scan each storage. Then we can have these
> scenarios:
> * multiple storage configurations might point to the same storage
>   The result is that, when scanning the storages, we find the disk image
>   multiple times.
>   -> we ignore them
>
> * a VM might have multiple disks configured, pointing to the same disk
>   image
>   -> we fail with a warning that two disk configs point to the same disk
>   image

this is not a problem for VMs, and can actually be a valid case in a
test lab (e.g., testing multipath). I am not sure whether that means we
want to handle it properly in live migration though (or whether there is
a way to do so? I guess that since starting the VM with both disks
pointing at the same volume works, the same would be true for having two
such disks on the target side, with an NBD export + drive mirror on
each?).

for offline migration, the same solution as for containers would apply -
migrate the volume once, then update the volid for all references.
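something along these lines (rough, untested sketch - @volids, $conf,
$storecfg are assumed to be in scope, and migrate_volume() is an invented
placeholder for the actual transfer, not existing qemu-server code):

    # dedup volumes by resolved path before copying, then rewrite every
    # config reference to point at the volume that was actually migrated
    my $path2volid = {}; # resolved path -> volid that gets migrated
    my $remap = {};      # duplicate volid -> migrated volid

    for my $volid (@volids) { # @volids: local volumes found by the scan
	my $path = PVE::Storage::path($storecfg, $volid);
	if (my $migrated = $path2volid->{$path}) {
	    $remap->{$volid} = $migrated; # same image, skip the copy
	    next;
	}
	$path2volid->{$path} = $volid;
	migrate_volume($volid); # hypothetical helper doing the transfer
    }

    # afterwards, point all drive keys at the migrated volume
    PVE::QemuConfig->foreach_volume_full($conf, { include_unused => 1 }, sub {
	my ($key, $drive) = @_;
	my $volid = $drive->{file};
	return if !$volid || !defined($remap->{$volid});
	$drive->{file} = $remap->{$volid};
	$conf->{$key} = PVE::QemuServer::Drive::print_drive($drive);
    });

that way a shared backing image only gets transferred once, and the
target config stays consistent no matter which of the aliased volids the
copy ended up under.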
> Without these checks, it was possible to multiply the number of disk
> images with each migration (with local disks) if at least one other
> storage was configured pointing to the same place.
>
> Signed-off-by: Aaron Lauterer
> ---
>  PVE/QemuMigrate.pm | 33 +++++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 09cc1d8..bd3ea00 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -301,6 +301,10 @@ sub scan_local_volumes {
>      my $other_errors = [];
>      my $abort = 0;
>
> +    # store and map already referenced absolute paths and volids
> +    my $referencedpath = {}; # path -> volid
> +    my $referenced = {}; # volid -> config key (e.g. scsi0)
> +

the same comments as for pve-container apply here as well AFAICT.

>      my $log_error = sub {
>          my ($msg, $volid) = @_;
>
> @@ -312,6 +316,26 @@ sub scan_local_volumes {
>          $abort = 1;
>      };
>
> +    # reference disks in config first
> +    PVE::QemuConfig->foreach_volume_full($conf, { include_unused => 1 }, sub {
> +        my ($key, $drive) = @_;
> +        my $volid = $drive->{file};
> +        return if PVE::QemuServer::Drive::drive_is_cdrom($drive);
> +        return if !$volid || $volid =~ m|^/|;
> +
> +        my $path = PVE::Storage::path($storecfg, $volid);
> +        if (defined $referencedpath->{$path}) {
> +            my $rkey = $referenced->{$referencedpath->{$path}};
> +            &$log_error(
> +                "cannot migrate local image '$volid': '$key' and '$rkey' ".
> +                "reference the same volume. (check guest and storage configuration?)\n"
> +            );
> +            return;
> +        }
> +        $referencedpath->{$path} = $volid;
> +        $referenced->{$volid} = $key;
> +    });
> +
>      my @sids = PVE::Storage::storage_ids($storecfg);
>      foreach my $storeid (@sids) {
>          my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> @@ -342,6 +366,15 @@ sub scan_local_volumes {
>          PVE::Storage::foreach_volid($dl, sub {
>              my ($volid, $sid, $volinfo) = @_;
>
> +            # check if image is already referenced
> +            my $path = PVE::Storage::path($storecfg, $volid);
> +            if (defined $referencedpath->{$path} && !$referenced->{$volid}) {
> +                $self->log('info', "ignoring '$volid' - already referenced by other storage '$referencedpath->{$path}'\n");
> +                return;
> +            }
> +            $referencedpath->{$path} = $volid;
> +            $referenced->{$volid} = 1;
> +
>              $local_volumes->{$volid}->{ref} = 'storage';
>              $local_volumes->{$volid}->{size} = $volinfo->{size};
>              $local_volumes->{$volid}->{targetsid} = $targetsid;
> --
> 2.30.2
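as an aside, the duplication is trivial to reproduce - two storage
definitions backed by the same directory are all it takes (hypothetical
storage.cfg excerpt, not taken from the report):

    dir: local-a
        path /mnt/images
        content images

    dir: local-b
        path /mnt/images
        content images

a migration with local disks then finds the same image once via each
storage, which is exactly the "multiply the number of disk images with
each migration" effect the commit message describes.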