From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Tue,  2 May 2023 15:17:31 +0200
Message-Id: <20230502131732.1875692-2-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230502131732.1875692-1-a.lauterer@proxmox.com>
References: <20230502131732.1875692-1-a.lauterer@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk
 images multiple times

Scan the VM config and store the volid and full path of each referenced
disk image. Do the same when scanning each storage. With that information
we can handle the following scenarios:
* Multiple storage configurations might point to the same underlying
  storage. As a result, when scanning the storages, we find the same disk
  image multiple times.
  -> We ignore the duplicates.

* A VM might have multiple disks configured that point to the same disk
  image.
  -> We fail with an error stating which two disk configs reference the
  same disk image.

Without these checks, each migration with local disks could multiply the
number of disk images if at least one other storage configuration pointed
to the same place.
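
The following is a minimal, self-contained sketch of that two-map
bookkeeping (plain Perl; the volids and paths are made-up example data
standing in for what PVE::QemuConfig->foreach_volume_full() and
PVE::Storage::foreach_volid() deliver in the patch below):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $referencedpath = {};    # absolute path -> volid
    my $referenced     = {};    # volid -> config key (e.g. scsi0)

    # pass 1: disk images referenced in the guest config
    my @config_disks = (
        [ 'scsi0', 'local-lvm:vm-100-disk-0', '/dev/pve/vm-100-disk-0' ],
    );
    for my $disk (@config_disks) {
        my ($key, $volid, $path) = @$disk;
        if (defined $referencedpath->{$path}) {
            # two config entries resolve to the same image -> abort
            my $rkey = $referenced->{ $referencedpath->{$path} };
            die "cannot migrate local image '$volid': '$key' and '$rkey' "
                . "reference the same volume\n";
        }
        $referencedpath->{$path} = $volid;
        $referenced->{$volid}    = $key;
    }

    # pass 2: volumes found while scanning the storages; the second entry
    # is the same image seen through a second storage configuration
    my @storage_volumes = (
        [ 'local-lvm:vm-100-disk-0',  '/dev/pve/vm-100-disk-0' ],
        [ 'local-lvm2:vm-100-disk-0', '/dev/pve/vm-100-disk-0' ],
    );
    for my $vol (@storage_volumes) {
        my ($volid, $path) = @$vol;
        if (defined $referencedpath->{$path} && !$referenced->{$volid}) {
            # same path already seen under another volid -> skip it
            print "ignoring '$volid' - already referenced by "
                . "'$referencedpath->{$path}'\n";
            next;
        }
        $referencedpath->{$path} = $volid;
        $referenced->{$volid}    = 1;
    }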

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/QemuMigrate.pm | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 09cc1d8..bd3ea00 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -301,6 +301,10 @@ sub scan_local_volumes {
 	my $other_errors = [];
 	my $abort = 0;
 
+	# store and map already referenced absolute paths and volids
+	my $referencedpath = {}; # path -> volid
+	my $referenced = {}; # volid -> config key (e.g. scsi0)
+
 	my $log_error = sub {
 	    my ($msg, $volid) = @_;
 
@@ -312,6 +316,26 @@ sub scan_local_volumes {
 	    $abort = 1;
 	};
 
+	# reference disks in config first
+	PVE::QemuConfig->foreach_volume_full($conf, { include_unused => 1 }, sub {
+	    my ($key, $drive) = @_;
+	    my $volid = $drive->{file};
+	    return if PVE::QemuServer::Drive::drive_is_cdrom($drive);
+	    return if !$volid || $volid =~ m|^/|;
+
+	    my $path = PVE::Storage::path($storecfg, $volid);
+	    if (defined $referencedpath->{$path}) {
+		my $rkey = $referenced->{$referencedpath->{$path}};
+		&$log_error(
+		    "cannot migrate local image '$volid': '$key' and '$rkey' ".
+		    "reference the same volume. (check guest and storage configuration?)\n"
+		);
+		return;
+	    }
+	    $referencedpath->{$path} = $volid;
+	    $referenced->{$volid} = $key;
+	});
+
 	my @sids = PVE::Storage::storage_ids($storecfg);
 	foreach my $storeid (@sids) {
 	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
@@ -342,6 +366,15 @@ sub scan_local_volumes {
 	    PVE::Storage::foreach_volid($dl, sub {
 		my ($volid, $sid, $volinfo) = @_;
 
+		# check if image is already referenced
+		my $path = PVE::Storage::path($storecfg, $volid);
+		if (defined $referencedpath->{$path} && !$referenced->{$volid}) {
+		    $self->log('info', "ignoring '$volid' - already referenced by other storage '$referencedpath->{$path}'\n");
+		    return;
+		}
+		$referencedpath->{$path} = $volid;
+		$referenced->{$volid} = 1;
+
 		$local_volumes->{$volid}->{ref} = 'storage';
 		$local_volumes->{$volid}->{size} = $volinfo->{size};
 		$local_volumes->{$volid}->{targetsid} = $targetsid;
-- 
2.30.2