public inbox for pve-devel@lists.proxmox.com
 help / color / mirror / Atom feed
* [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases
@ 2023-06-16  9:56 Aaron Lauterer
  2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 1/12] migration: only migrate disks used by the guest Aaron Lauterer
                   ` (12 more replies)
  0 siblings, 13 replies; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:56 UTC (permalink / raw)
  To: pve-devel

Oh boy did this version explode in number of commits.

This patch series changes the behavior during guest migrations:

Don't scan all storages for potential images belonging to the guest.
Only migrate images referenced in the config.
This made it necessary to handle pending changes explicitly, which had
previously been covered by the storage scan.

We also added checks for aliased volids and fail the migration if any
are detected.

The qemu-server part consists of quite a lot more commits, since we had
to change a few things; for a cleaner history, those changes are mostly
split into their own patches.

There is also a small patch for the documentation to add a hint that
aliased storages should be avoided.


changes since v3:
- split it into many smaller commits
- changed / improved the handling of the 'ref's in
  QemuMigrate::scan_local_volumes()
- fixed tests and added error handling in new mock functions
- rephrased the documentation hint

v2:
- style fixes
- qemu-server: mainly change the handling of pending changes
- container: use NEW_DISK_RE to check if the volume is a not-yet-created
  pending volume

More in each patch.

qemu-server: Aaron Lauterer (9):
  migration: only migrate disks used by the guest
  qemuserver: foreach_volid: include pending volumes
  qemuserver: foreach_volid: always include pending disks
  qemuserver: foreach_volid: test regular config last
  migration: add target_storage_check_available
  migration: scan_local_volumes: adapt refs handling
  tests: add migration test for pending disk
  migration: fail when aliased volume is detected
  tests: add migration alias check

 PVE/QemuMigrate.pm                    | 109 ++++++++-----------
 PVE/QemuServer.pm                     |  13 ++-
 test/MigrationTest/QemuMigrateMock.pm |  10 ++
 test/run_qemu_migrate_tests.pl        | 150 +++++++++++++++++++++++++-
 4 files changed, 208 insertions(+), 74 deletions(-)


container: Aaron Lauterer (2):
  migration: only migrate volumes used by the guest
  migration: fail when aliased volume is detected

 src/PVE/LXC/Migrate.pm | 51 +++++++++++++++++-------------------------
 1 file changed, 21 insertions(+), 30 deletions(-)


docs: Aaron Lauterer (1):
  storage: add hint to avoid storage aliasing

 pvesm.adoc | 6 ++++++
 1 file changed, 6 insertions(+)

-- 
2.39.2




^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 1/12] migration: only migrate disks used by the guest
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
@ 2023-06-16  9:56 ` Aaron Lauterer
  2023-06-16 12:16   ` Fiona Ebner
  2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 2/12] qemuserver: foreach_volid: include pending volumes Aaron Lauterer
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:56 UTC (permalink / raw)
  To: pve-devel

When scanning all configured storages for disk images belonging to the
VM, the migration could easily fail if a storage is enabled but not
available. That storage might not even be used by the VM at all.

By not scanning all storages and only looking at the disk images
referenced in the VM config, we can avoid unnecessary failures.
Some information that used to be provided by the storage scan needs
to be fetched explicitly (size, format).

Behaviorally, the biggest change is that unreferenced disk images will
no longer be migrated. Only images referenced in the config or in a
snapshot will be migrated.

The tests have been adapted accordingly.
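
A minimal sketch of the per-volume lookup that replaces the scan
(volume_size_info() returns size and format in list context):

    # fetch what the storage scan used to provide, per referenced volume
    my ($size, $format) = PVE::Storage::volume_size_info($storecfg, $volid);
    $local_volumes->{$volid}->{size} = $size;
    $local_volumes->{$volid}->{format} = $format;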

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: now it only removes the storage scanning

 PVE/QemuMigrate.pm                    | 49 ++++-----------------------
 test/MigrationTest/QemuMigrateMock.pm | 10 ++++++
 test/run_qemu_migrate_tests.pl        | 11 +++---
 3 files changed, 22 insertions(+), 48 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 09cc1d8..5f4f402 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -312,49 +312,6 @@ sub scan_local_volumes {
 	    $abort = 1;
 	};
 
-	my @sids = PVE::Storage::storage_ids($storecfg);
-	foreach my $storeid (@sids) {
-	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
-	    next if $scfg->{shared} && !$self->{opts}->{remote};
-	    next if !PVE::Storage::storage_check_enabled($storecfg, $storeid, undef, 1);
-
-	    # get list from PVE::Storage (for unused volumes)
-	    my $dl = PVE::Storage::vdisk_list($storecfg, $storeid, $vmid, undef, 'images');
-
-	    next if @{$dl->{$storeid}} == 0;
-
-	    my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
-	    if (!$self->{opts}->{remote}) {
-		# check if storage is available on target node
-		my $target_scfg = PVE::Storage::storage_check_enabled(
-		    $storecfg,
-		    $targetsid,
-		    $self->{node},
-		);
-
-		die "content type 'images' is not available on storage '$targetsid'\n"
-		    if !$target_scfg->{content}->{images};
-
-	    }
-
-	    my $bwlimit = $self->get_bwlimit($storeid, $targetsid);
-
-	    PVE::Storage::foreach_volid($dl, sub {
-		my ($volid, $sid, $volinfo) = @_;
-
-		$local_volumes->{$volid}->{ref} = 'storage';
-		$local_volumes->{$volid}->{size} = $volinfo->{size};
-		$local_volumes->{$volid}->{targetsid} = $targetsid;
-		$local_volumes->{$volid}->{bwlimit} = $bwlimit;
-
-		# If with_snapshots is not set for storage migrate, it tries to use
-		# a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
-		# back to qcow2 is currently not possible.
-		$local_volumes->{$volid}->{snapshots} = ($volinfo->{format} =~ /^(?:qcow2|vmdk)$/);
-		$local_volumes->{$volid}->{format} = $volinfo->{format};
-	    });
-	}
-
 	my $replicatable_volumes = !$self->{replication_jobcfg} ? {}
 	    : PVE::QemuConfig->get_replicatable_volumes($storecfg, $vmid, $conf, 0, 1);
 	foreach my $volid (keys %{$replicatable_volumes}) {
@@ -407,6 +364,12 @@ sub scan_local_volumes {
 	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
 	    $local_volumes->{$volid}->{ref} = 'generated' if $attr->{is_tpmstate};
 
+	    $local_volumes->{$volid}->{bwlimit} = $self->get_bwlimit($sid, $targetsid);
+	    $local_volumes->{$volid}->{targetsid} = $targetsid;
+
+	    ($local_volumes->{$volid}->{size}, $local_volumes->{$volid}->{format})
+		= PVE::Storage::volume_size_info($storecfg, $volid);
+
 	    $local_volumes->{$volid}->{is_vmstate} = $attr->{is_vmstate} ? 1 : 0;
 
 	    $local_volumes->{$volid}->{drivename} = $attr->{drivename}
diff --git a/test/MigrationTest/QemuMigrateMock.pm b/test/MigrationTest/QemuMigrateMock.pm
index 94fe686..9034d10 100644
--- a/test/MigrationTest/QemuMigrateMock.pm
+++ b/test/MigrationTest/QemuMigrateMock.pm
@@ -240,6 +240,16 @@ $MigrationTest::Shared::storage_module->mock(
 
 	delete $source_volids->{$volid};
     },
+    volume_size_info => sub {
+	my ($scfg, $volid) = @_;
+	my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+
+	for my $v ($source_vdisks->{$storeid}->@*) {
+	    return wantarray ? ($v->{size}, $v->{format}, $v->{used}, $v->{parent}) : $v
+		if $v->{volid} eq $volid;
+	}
+	die "could not find '$volid' in mock 'source_vdisks'\n";
+    },
 );
 
 $MigrationTest::Shared::tools_module->mock(
diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
index 090449f..7a9d7ea 100755
--- a/test/run_qemu_migrate_tests.pl
+++ b/test/run_qemu_migrate_tests.pl
@@ -708,7 +708,6 @@ my $tests = [
 	},
     },
     {
-	# FIXME: Maybe add orphaned drives as unused?
 	name => '149_running_orphaned_disk_targetstorage_zfs',
 	target => 'pve1',
 	vmid => 149,
@@ -729,10 +728,11 @@ my $tests = [
 	},
 	expected_calls => $default_expected_calls_online,
 	expected => {
-	    source_volids => {},
+	    source_volids => {
+		'local-dir:149/vm-149-disk-0.qcow2' => 1,
+	    },
 	    target_volids => {
 		'local-zfs:vm-149-disk-10' => 1,
-		'local-zfs:vm-149-disk-0' => 1,
 	    },
 	    vm_config => get_patched_config(149, {
 		scsi0 => 'local-zfs:vm-149-disk-10,format=raw,size=4G',
@@ -765,10 +765,11 @@ my $tests = [
 	},
 	expected_calls => $default_expected_calls_online,
 	expected => {
-	    source_volids => {},
+	    source_volids => {
+		'local-dir:149/vm-149-disk-0.qcow2' => 1,
+	    },
 	    target_volids => {
 		'local-lvm:vm-149-disk-10' => 1,
-		'local-dir:149/vm-149-disk-0.qcow2' => 1,
 	    },
 	    vm_config => get_patched_config(149, {
 		scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 2/12] qemuserver: foreach_volid: include pending volumes
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
  2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 1/12] migration: only migrate disks used by the guest Aaron Lauterer
@ 2023-06-16  9:56 ` Aaron Lauterer
  2023-06-16 12:25   ` Fiona Ebner
  2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 3/12] qemuserver: foreach_volid: always include pending disks Aaron Lauterer
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:56 UTC (permalink / raw)
  To: pve-devel

Make it possible to optionally iterate over disks in the pending section
of VMs, similar to how snapshots are already handled.

This is useful, for example, for migration: since the storages aren't
scanned anymore, we need to pick up disk images referenced only in the
pending section.

All calling sites are adapted and enable it, except for
QemuConfig::get_replicatable_volumes(), as including pending disks would
change the replication behavior.

The following lists the calling sites and whether they should be fine
with the change (source [0]):

1. QemuMigrate: scan_local_volumes(): needed to include pending disk
   images
2. API2/Qemu.pm: check_vm_disks_local() for migration precondition:
   related to migration, so more consistent with pending
3. QemuConfig.pm: get_replicatable_volumes(): would change the behavior
   of the replication, will not use it for now.
4. QemuServer.pm: get_vm_volumes(): is used multiple times by:
4a. vm_stop_cleanup() to deactivate/unmap: should also be fine with
    including pending
4b. QemuMigrate.pm: in prepare(): part of migration, so more consistent
    with pending
4c. QemuMigrate.pm: in phase3_cleanup() for deactivation: part of
    migration, so more consistent with pending

[0] https://lists.proxmox.com/pipermail/pve-devel/2023-May/056868.html
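
A minimal sketch of how a caller opts in to pending volumes with the new
flag (the second parameter), based on the change below:

    # iterate over all volids, including those in the pending section
    PVE::QemuServer::foreach_volid($conf, 1, sub {
        my ($volid, $attr) = @_;

        # set for volumes referenced in the pending section
        print "pending: $volid\n" if $attr->{referenced_in_pending};
    });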

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3:

added as intermediate patch for a better git history as suggested

 PVE/API2/Qemu.pm   |  2 +-
 PVE/QemuConfig.pm  |  2 +-
 PVE/QemuMigrate.pm |  2 +-
 PVE/QemuServer.pm  | 17 ++++++++++++-----
 4 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c92734a..37f78fe 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4164,7 +4164,7 @@ my $check_vm_disks_local = sub {
     my $local_disks = {};
 
     # add some more information to the disks e.g. cdrom
-    PVE::QemuServer::foreach_volid($vmconf, sub {
+    PVE::QemuServer::foreach_volid($vmconf, 1, sub {
 	my ($volid, $attr) = @_;
 
 	my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 10e6929..5e46db2 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -161,7 +161,7 @@ sub get_replicatable_volumes {
 	$volhash->{$volid} = 1;
     };
 
-    PVE::QemuServer::foreach_volid($conf, $test_volid);
+    PVE::QemuServer::foreach_volid($conf, undef, $test_volid);
 
     return $volhash;
 }
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 5f4f402..d979211 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -413,7 +413,7 @@ sub scan_local_volumes {
 		if PVE::Storage::volume_is_base_and_used($storecfg, $volid);
 	};
 
-	PVE::QemuServer::foreach_volid($conf, sub {
+	PVE::QemuServer::foreach_volid($conf, 1, sub {
 	    my ($volid, $attr) = @_;
 	    eval { $test_volid->($volid, $attr); };
 	    if (my $err = $@) {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6cbaf87..33acef6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2424,7 +2424,7 @@ sub destroy_vm {
 
     if ($purge_unreferenced) { # also remove unreferenced disk
 	my $vmdisks = PVE::Storage::vdisk_list($storecfg, undef, $vmid, undef, 'images');
-	PVE::Storage::foreach_volid($vmdisks, sub {
+	PVE::Storage::foreach_volid($vmdisks, 1, sub {
 	    my ($volid, $sid, $volname, $d) = @_;
 	    eval { PVE::Storage::vdisk_free($storecfg, $volid) };
 	    warn $@ if $@;
@@ -4855,12 +4855,12 @@ sub set_migration_caps {
 }
 
 sub foreach_volid {
-    my ($conf, $func, @param) = @_;
+    my ($conf, $include_pending, $func, @param) = @_;
 
     my $volhash = {};
 
     my $test_volid = sub {
-	my ($key, $drive, $snapname) = @_;
+	my ($key, $drive, $snapname, $pending) = @_;
 
 	my $volid = $drive->{file};
 	return if !$volid;
@@ -4876,11 +4876,13 @@ sub foreach_volid {
 	$volhash->{$volid}->{shared} = 1 if $drive->{shared};
 
 	$volhash->{$volid}->{referenced_in_config} //= 0;
-	$volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname);
+	$volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname) && !defined($pending);
 
 	$volhash->{$volid}->{referenced_in_snapshot}->{$snapname} = 1
 	    if defined($snapname);
 
+	$volhash->{$volid}->{referenced_in_pending} = 1 if defined($pending);
+
 	my $size = $drive->{size};
 	$volhash->{$volid}->{size} //= $size if $size;
 
@@ -4902,6 +4904,11 @@ sub foreach_volid {
     };
 
     PVE::QemuConfig->foreach_volume_full($conf, $include_opts, $test_volid);
+
+    if ($include_pending && defined($conf->{pending}) && $conf->{pending}->%*) {
+	PVE::QemuConfig->foreach_volume_full($conf->{pending}, $include_opts, $test_volid, undef, 1);
+    }
+
     foreach my $snapname (keys %{$conf->{snapshots}}) {
 	my $snap = $conf->{snapshots}->{$snapname};
 	PVE::QemuConfig->foreach_volume_full($snap, $include_opts, $test_volid, $snapname);
@@ -6149,7 +6156,7 @@ sub get_vm_volumes {
     my ($conf) = @_;
 
     my $vollist = [];
-    foreach_volid($conf, sub {
+    foreach_volid($conf, 1, sub {
 	my ($volid, $attr) = @_;
 
 	return if $volid =~ m|^/|;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 3/12] qemuserver: foreach_volid: always include pending disks
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
  2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 1/12] migration: only migrate disks used by the guest Aaron Lauterer
  2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 2/12] qemuserver: foreach_volid: include pending volumes Aaron Lauterer
@ 2023-06-16  9:56 ` Aaron Lauterer
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 4/12] qemuserver: foreach_volid: test regular config last Aaron Lauterer
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:56 UTC (permalink / raw)
  To: pve-devel

All calling sites except for QemuConfig.pm::get_replicatable_volumes()
already enabled it. Making it the non-configurable default results in a
change for VM replication: a disk image only referenced in the pending
section will now also be replicated.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: added as dedicated patch for a better git history

 PVE/API2/Qemu.pm   | 2 +-
 PVE/QemuConfig.pm  | 2 +-
 PVE/QemuMigrate.pm | 2 +-
 PVE/QemuServer.pm  | 8 ++++----
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 37f78fe..c92734a 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4164,7 +4164,7 @@ my $check_vm_disks_local = sub {
     my $local_disks = {};
 
     # add some more information to the disks e.g. cdrom
-    PVE::QemuServer::foreach_volid($vmconf, 1, sub {
+    PVE::QemuServer::foreach_volid($vmconf, sub {
 	my ($volid, $attr) = @_;
 
 	my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 5e46db2..10e6929 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -161,7 +161,7 @@ sub get_replicatable_volumes {
 	$volhash->{$volid} = 1;
     };
 
-    PVE::QemuServer::foreach_volid($conf, undef, $test_volid);
+    PVE::QemuServer::foreach_volid($conf, $test_volid);
 
     return $volhash;
 }
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index d979211..5f4f402 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -413,7 +413,7 @@ sub scan_local_volumes {
 		if PVE::Storage::volume_is_base_and_used($storecfg, $volid);
 	};
 
-	PVE::QemuServer::foreach_volid($conf, 1, sub {
+	PVE::QemuServer::foreach_volid($conf, sub {
 	    my ($volid, $attr) = @_;
 	    eval { $test_volid->($volid, $attr); };
 	    if (my $err = $@) {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 33acef6..d485495 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2424,7 +2424,7 @@ sub destroy_vm {
 
     if ($purge_unreferenced) { # also remove unreferenced disk
 	my $vmdisks = PVE::Storage::vdisk_list($storecfg, undef, $vmid, undef, 'images');
-	PVE::Storage::foreach_volid($vmdisks, 1, sub {
+	PVE::Storage::foreach_volid($vmdisks, sub {
 	    my ($volid, $sid, $volname, $d) = @_;
 	    eval { PVE::Storage::vdisk_free($storecfg, $volid) };
 	    warn $@ if $@;
@@ -4855,7 +4855,7 @@ sub set_migration_caps {
 }
 
 sub foreach_volid {
-    my ($conf, $include_pending, $func, @param) = @_;
+    my ($conf, $func, @param) = @_;
 
     my $volhash = {};
 
@@ -4905,7 +4905,7 @@ sub foreach_volid {
 
     PVE::QemuConfig->foreach_volume_full($conf, $include_opts, $test_volid);
 
-    if ($include_pending && defined($conf->{pending}) && $conf->{pending}->%*) {
+    if (defined($conf->{pending}) && $conf->{pending}->%*) {
 	PVE::QemuConfig->foreach_volume_full($conf->{pending}, $include_opts, $test_volid, undef, 1);
     }
 
@@ -6156,7 +6156,7 @@ sub get_vm_volumes {
     my ($conf) = @_;
 
     my $vollist = [];
-    foreach_volid($conf, 1, sub {
+    foreach_volid($conf, sub {
 	my ($volid, $attr) = @_;
 
 	return if $volid =~ m|^/|;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 4/12] qemuserver: foreach_volid: test regular config last
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (2 preceding siblings ...)
  2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 3/12] qemuserver: foreach_volid: always include pending disks Aaron Lauterer
@ 2023-06-16  9:57 ` Aaron Lauterer
  2023-06-16 12:40   ` Fiona Ebner
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 5/12] migration: add target_storage_check_available Aaron Lauterer
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:57 UTC (permalink / raw)
  To: pve-devel

after snapshots and the pending section, to make sure that volids
referenced here are marked correctly, e.g. 'is_unused'.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: new patch and change, wasn't there before

 PVE/QemuServer.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index d485495..86f8f10 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4903,8 +4903,6 @@ sub foreach_volid {
 	include_unused => 1,
     };
 
-    PVE::QemuConfig->foreach_volume_full($conf, $include_opts, $test_volid);
-
     if (defined($conf->{pending}) && $conf->{pending}->%*) {
 	PVE::QemuConfig->foreach_volume_full($conf->{pending}, $include_opts, $test_volid, undef, 1);
     }
@@ -4914,6 +4912,8 @@ sub foreach_volid {
 	PVE::QemuConfig->foreach_volume_full($snap, $include_opts, $test_volid, $snapname);
     }
 
+    PVE::QemuConfig->foreach_volume_full($conf, $include_opts, $test_volid);
+
     foreach my $volid (keys %$volhash) {
 	&$func($volid, $volhash->{$volid}, @param);
     }
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 5/12] migration: add target_storage_check_available
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (3 preceding siblings ...)
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 4/12] qemuserver: foreach_volid: test regular config last Aaron Lauterer
@ 2023-06-16  9:57 ` Aaron Lauterer
  2023-06-16 13:11   ` [pve-devel] applied: " Fiona Ebner
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 6/12] migration: scan_local_volumes: adapt refs handling Aaron Lauterer
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:57 UTC (permalink / raw)
  To: pve-devel

We run this check in a few places. By factoring it into its own
function, we can avoid slightly different variants of the same check in
various places.
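
A sketch of what a call site then looks like (the function dies when the
content type is not available on the target):

    # dies with e.g. "content type 'images' is not available on storage '...'"
    eval { $self->target_storage_check_available($storecfg, $targetsid, $volid) };
    warn $@ if $@; # real call sites die or collect the error instead of warning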

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: added as its own dedicated patch for a better git
history

 PVE/QemuMigrate.pm | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 5f4f402..5f61bcd 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -149,6 +149,22 @@ sub lock_vm {
     return PVE::QemuConfig->lock_config($vmid, $code, @param);
 }
 
+sub target_storage_check_available {
+    my ($self, $storecfg, $targetsid, $volid) = @_;
+
+    if (!$self->{opts}->{remote}) {
+	# check if storage is available on target node
+	my $target_scfg = PVE::Storage::storage_check_enabled(
+	    $storecfg,
+	    $targetsid,
+	    $self->{node},
+	);
+	my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
+	die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
+	    if !$target_scfg->{content}->{$vtype};
+    }
+}
+
 sub prepare {
     my ($self, $vmid) = @_;
 
@@ -236,18 +252,7 @@ sub prepare {
 
 	$storages->{$targetsid} = 1;
 
-	if (!$self->{opts}->{remote}) {
-	    # check if storage is available on target node
-	    my $target_scfg = PVE::Storage::storage_check_enabled(
-		$storecfg,
-		$targetsid,
-		$self->{node},
-	    );
-	    my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
-
-	    die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
-		if !$target_scfg->{content}->{$vtype};
-	}
+	$self->target_storage_check_available($storecfg, $targetsid, $volid);
 
 	if ($scfg->{shared}) {
 	    # PVE::Storage::activate_storage checks this for non-shared storages
@@ -353,12 +358,8 @@ sub scan_local_volumes {
 		$targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
 	    }
 
-	    # check target storage on target node if intra-cluster migration
-	    if (!$self->{opts}->{remote}) {
-		PVE::Storage::storage_check_enabled($storecfg, $targetsid, $self->{node});
-
-		return if $scfg->{shared};
-	    }
+	    $self->target_storage_check_available($storecfg, $targetsid, $volid);
+	    return if $scfg->{shared} && !$self->{opts}->{remote};
 
 	    $local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
 	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 6/12] migration: scan_local_volumes: adapt refs handling
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (4 preceding siblings ...)
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 5/12] migration: add target_storage_check_available Aaron Lauterer
@ 2023-06-16  9:57 ` Aaron Lauterer
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 7/12] tests: add migration test for pending disk Aaron Lauterer
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:57 UTC (permalink / raw)
  To: pve-devel

Since we don't scan all storages for matching disk images anymore, a
migration will not have any images found via a storage alone; they will
all be referenced in the config somewhere.

Therefore, there is no need for the 'storage' ref anymore.
A new ref, 'config_unused', is introduced to distinguish between a disk
image that is attached and one that is not (unusedX in the config).

We can now also detect if a disk image is only referenced in the pending
section.

The refs are mostly informational, used to explain in the logs why a
disk image is part of the migration. The exception is the 'config' case:
it is used to determine whether the disk image needs to be live-migrated
if the VM is currently running. Therefore, only set the ref to 'config'
if the disk image is actually used/attached.
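
The resulting precedence can be summarized as follows (a sketch; later
assignments win):

    my $ref;
    $ref = 'pending'  if $attr->{referenced_in_pending};
    $ref = 'snapshot' if $attr->{referenced_in_snapshot};
    $ref = $attr->{is_unused} ? 'config_unused' : 'config'
        if $attr->{referenced_in_config};
    $ref = 'generated' if $attr->{is_tpmstate};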

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3:
Parts of this were touched by patch 1; now it is its own dedicated
patch. The changes differ from the previous version, reworking the whole
refs handling a lot more.

 PVE/QemuMigrate.pm | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 5f61bcd..9e80866 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -361,8 +361,11 @@ sub scan_local_volumes {
 	    $self->target_storage_check_available($storecfg, $targetsid, $volid);
 	    return if $scfg->{shared} && !$self->{opts}->{remote};
 
-	    $local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
-	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
+	    $local_volumes->{$volid}->{ref} = 'pending' if $attr->{referenced_in_pending};
+	    $local_volumes->{$volid}->{ref} = 'snapshot' if $attr->{referenced_in_snapshot};
+	    if ($attr->{referenced_in_config}) {
+		$local_volumes->{$volid}->{ref} = $attr->{is_unused} ? 'config_unused' : 'config';
+	    }
 	    $local_volumes->{$volid}->{ref} = 'generated' if $attr->{is_tpmstate};
 
 	    $local_volumes->{$volid}->{bwlimit} = $self->get_bwlimit($sid, $targetsid);
@@ -425,14 +428,16 @@ sub scan_local_volumes {
 	foreach my $vol (sort keys %$local_volumes) {
 	    my $type = $replicatable_volumes->{$vol} ? 'local, replicated' : 'local';
 	    my $ref = $local_volumes->{$vol}->{ref};
-	    if ($ref eq 'storage') {
-		$self->log('info', "found $type disk '$vol' (via storage)\n");
-	    } elsif ($ref eq 'config') {
+	    if ($ref eq 'config') {
 		&$log_error("can't live migrate attached local disks without with-local-disks option\n", $vol)
 		    if $self->{running} && !$self->{opts}->{"with-local-disks"};
 		$self->log('info', "found $type disk '$vol' (in current VM config)\n");
+	    } elsif ($ref eq 'config_unused') {
+		$self->log('info', "found $type, unused disk '$vol' (in current VM config)\n");
 	    } elsif ($ref eq 'snapshot') {
 		$self->log('info', "found $type disk '$vol' (referenced by snapshot(s))\n");
+	    } elsif ($ref eq 'pending') {
+		$self->log('info', "found $type disk '$vol' (pending change)\n");
 	    } elsif ($ref eq 'generated') {
 		$self->log('info', "found generated disk '$vol' (in current VM config)\n");
 	    } else {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 7/12] tests: add migration test for pending disk
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (5 preceding siblings ...)
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 6/12] migration: scan_local_volumes: adapt refs handling Aaron Lauterer
@ 2023-06-16  9:57 ` Aaron Lauterer
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 8/12] migration: fail when aliased volume is detected Aaron Lauterer
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: none

 test/run_qemu_migrate_tests.pl | 64 ++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
index 7a9d7ea..583ea7d 100755
--- a/test/run_qemu_migrate_tests.pl
+++ b/test/run_qemu_migrate_tests.pl
@@ -130,6 +130,25 @@ my $vm_configs = {
 	'startup' => 'order=2',
 	'vmgenid' => '4eb1d535-9381-4ddc-a8aa-af50c4d9177b',
     },
+    111 => {
+	'bootdisk' => 'scsi0',
+	'cores' => 1,
+	'ide0' => 'local-lvm:vm-111-disk-0,size=4096M',
+	'ide2' => 'none,media=cdrom',
+	'memory' => 512,
+	'name' => 'pending-test',
+	'net0' => 'virtio=4A:A3:E4:4C:CF:F0,bridge=vmbr0,firewall=1',
+	'numa' => 0,
+	'ostype' => 'l26',
+	'pending' => {
+		'scsi0' => 'local-zfs:vm-111-disk-0,size=103M',
+	},
+	'scsihw' => 'virtio-scsi-pci',
+	'snapshots' => {},
+	'smbios1' => 'uuid=5ad71d4d-8f73-4377-853e-2d22c10c96a5',
+	'sockets' => 1,
+	'vmgenid' => '2c00c030-0b5b-4988-a371-6ab259893f22',
+    },
     149 => {
 	'agent' => '0',
 	'bootdisk' => 'scsi0',
@@ -306,6 +325,13 @@ my $source_vdisks = {
 	    'vmid' => '341',
 	    'volid' => 'local-lvm:vm-341-disk-0',
 	},
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' =>  4294967296,
+	    'vmid' => '111',
+	    'volid' => 'local-lvm:vm-111-disk-0',
+	},
     ],
     'local-zfs' => [
 	{
@@ -322,6 +348,13 @@ my $source_vdisks = {
 	    'vmid' => '105',
 	    'volid' => 'local-zfs:vm-105-disk-1',
 	},
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 108003328,
+	    'vmid' => '111',
+	    'volid' => 'local-zfs:vm-111-disk-0',
+	},
 	{
 	    'format' => 'raw',
 	    'name' => 'vm-4567-disk-0',
@@ -1529,6 +1562,37 @@ my $tests = [
 	    },
 	},
     },
+    {
+	name => '111_running_pending',
+	target => 'pve1',
+	vmid => 111,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-zfs:vm-111-disk-0' => 1,
+		'local-lvm:vm-111-disk-10' => 1,
+	    },
+	    vm_config => get_patched_config(111, {
+		ide0 => 'local-lvm:vm-111-disk-10,format=raw,size=4G',
+		pending => {
+		    scsi0 => 'local-zfs:vm-111-disk-0,size=103M',
+		},
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
 ];
 
 my $single_test_name = shift;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 8/12] migration: fail when aliased volume is detected
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (6 preceding siblings ...)
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 7/12] tests: add migration test for pending disk Aaron Lauterer
@ 2023-06-16  9:57 ` Aaron Lauterer
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 9/12] tests: add migration alias check Aaron Lauterer
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:57 UTC (permalink / raw)
  To: pve-devel

Aliased volids can lead to unexpected behavior in a migration.

An aliased volid can occur if two storage configurations point to the
same place. The resulting 'path' for a disk image will then be the same.
Therefore, stop the migration in such a case.

The check works by comparing the path returned by the storage plugin.

We decided against checking whether the storages themselves are aliased.
It is not possible to infer that reliably from the storage configuration
options alone.
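
A standalone sketch of the check (assuming @local_volids holds the
guest's local volids; the real code collects the paths while iterating):

    my $path_to_volid = {};
    for my $volid (@local_volids) {
        # volids from aliased storages resolve to the same path
        my $path = PVE::Storage::path($storecfg, $volid);
        $path_to_volid->{$path}->{$volid} = 1;
    }
    for my $path (keys %$path_to_volid) {
        my @volids = keys $path_to_volid->{$path}->%*;
        die "aliased volumes detected: " . join(', ', @volids) . "\n"
            if scalar(@volids) > 1;
    }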

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: none

 PVE/QemuMigrate.pm | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 9e80866..d29e1aa 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -299,12 +299,12 @@ sub scan_local_volumes {
 
     my $storecfg = $self->{storecfg};
     eval {
-
 	# found local volumes and their origin
 	my $local_volumes = $self->{local_volumes};
 	my $local_volumes_errors = {};
 	my $other_errors = [];
 	my $abort = 0;
+	my $path_to_volid = {};
 
 	my $log_error = sub {
 	    my ($msg, $volid) = @_;
@@ -392,6 +392,8 @@ sub scan_local_volumes {
 	    die "owned by other VM (owner = VM $owner)\n"
 		if !$owner || ($owner != $vmid);
 
+	    $path_to_volid->{$path}->{$volid} = 1;
+
 	    return if $attr->{is_vmstate};
 
 	    if (defined($snaprefs)) {
@@ -425,6 +427,12 @@ sub scan_local_volumes {
 	    }
         });
 
+	for my $path (keys %$path_to_volid) {
+	    my @volids = keys $path_to_volid->{$path}->%*;
+	    die "detected not supported aliased volumes: '" . join("', '", @volids) . "'"
+		if (scalar(@volids) > 1);
+	}
+
 	foreach my $vol (sort keys %$local_volumes) {
 	    my $type = $replicatable_volumes->{$vol} ? 'local, replicated' : 'local';
 	    my $ref = $local_volumes->{$vol}->{ref};
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 9/12] tests: add migration alias check
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (7 preceding siblings ...)
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 8/12] migration: fail when aliased volume is detected Aaron Lauterer
@ 2023-06-16  9:57 ` Aaron Lauterer
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 container 10/12] migration: only migrate volumes used by the guest Aaron Lauterer
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3:
- changed names of mock storages
- fixed broken and plainly wrong tests and mock data

 test/run_qemu_migrate_tests.pl | 75 ++++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
index 583ea7d..c06d44f 100755
--- a/test/run_qemu_migrate_tests.pl
+++ b/test/run_qemu_migrate_tests.pl
@@ -88,6 +88,24 @@ my $storage_config = {
 	    path => "/some/other/dir/",
 	    type => "dir",
 	},
+	"zfs-alias-1" => {
+	    content => {
+		images => 1,
+		rootdir => 1,
+	    },
+	    pool => "aliaspool",
+	    sparse => 1,
+	    type => "zfspool",
+	},
+	"zfs-alias-2" => {
+	    content => {
+		images => 1,
+		rootdir => 1,
+	    },
+	    pool => "aliaspool",
+	    sparse => 1,
+	    type => "zfspool",
+	},
     },
 };
 
@@ -149,6 +167,24 @@ my $vm_configs = {
 	'sockets' => 1,
 	'vmgenid' => '2c00c030-0b5b-4988-a371-6ab259893f22',
     },
+    123 => {
+	'bootdisk' => 'scsi0',
+	'cores' => 1,
+	'scsi0' => 'zfs-alias-1:vm-123-disk-0,size=4096M',
+	'scsi1' => 'zfs-alias-2:vm-123-disk-0,size=4096M',
+	'ide2' => 'none,media=cdrom',
+	'memory' => 512,
+	'name' => 'alias-test',
+	'net0' => 'virtio=4A:A3:E4:4C:CF:F0,bridge=vmbr0,firewall=1',
+	'numa' => 0,
+	'ostype' => 'l26',
+	'pending' => {},
+	'scsihw' => 'virtio-scsi-pci',
+	'snapshots' => {},
+	'smbios1' => 'uuid=5ad71d4d-8f73-4377-853e-2d22c10c96a5',
+	'sockets' => 1,
+	'vmgenid' => '2c00c030-0b5b-4988-a371-6ab259893f22',
+    },
     149 => {
 	'agent' => '0',
 	'bootdisk' => 'scsi0',
@@ -380,6 +416,24 @@ my $source_vdisks = {
 	    'volid' => 'rbd-store:vm-1033-cloudinit',
 	},
     ],
+    'zfs-alias-1' => [
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 4294967296,
+	    'vmid' => '123',
+	    'volid' => 'zfs-alias-1:vm-123-disk-0',
+	},
+    ],
+    'zfs-alias-2' => [
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 4294967296,
+	    'vmid' => '123',
+	    'volid' => 'zfs-alias-2:vm-123-disk-0',
+	},
+    ],
 };
 
 my $default_expected_calls_online = {
@@ -1593,6 +1647,27 @@ my $tests = [
 	    },
 	},
     },
+    {
+	name => '123_alias_fail',
+	target => 'pve1',
+	vmid => 123,
+	vm_status => {
+	    running => 0,
+	},
+	opts => {
+	    'with-local-disks' => 1,
+	},
+	expected_calls => {},
+	expect_die => "detected not supported aliased volumes",
+	expected => {
+	    source_volids => local_volids_for_vm(123),
+	    target_volids => {},
+	    vm_config => $vm_configs->{123},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
 ];
 
 my $single_test_name = shift;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 container 10/12] migration: only migrate volumes used by the guest
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (8 preceding siblings ...)
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 9/12] tests: add migration alias check Aaron Lauterer
@ 2023-06-16  9:57 ` Aaron Lauterer
  2023-06-16 13:58   ` Fiona Ebner
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 container 11/12] migration: fail when aliased volume is detected Aaron Lauterer
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:57 UTC (permalink / raw)
  To: pve-devel

When scanning all configured storages for volumes belonging to the
container, the migration could easily fail if a storage is enabled but
not available. That storage might not even be used by the container at
all.

By only looking at the volumes referenced in the config instead, we can
avoid such failures.
We need additional checks for pending volumes to see whether they
actually exist: changing an existing mountpoint to a new volume will
only create the volume on the next start of the container.

The big change regarding behavior is that volumes not referenced in the
container config will be ignored. They are already orphans that used to
be migrated as well, but are now left where they are.
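
The pending-volume check is a simple pattern match (a sketch;
NEW_DISK_RE is assumed here to match the 'STORAGE:SIZE' form of a volume
that has not been created yet):

    # a pending mountpoint like 'local-lvm:4' has no volume on disk yet,
    # so there is nothing to migrate for it
    if ($volid =~ $PVE::LXC::NEW_DISK_RE) {
        $self->log('info', "volume '$volid' does not exist (pending change?)");
        return;
    }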

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: hopefully caught all style nits now ;)

 src/PVE/LXC/Migrate.pm | 42 ++++++++++++------------------------------
 1 file changed, 12 insertions(+), 30 deletions(-)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index ccf5157..e3fa7a7 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -195,6 +195,12 @@ sub phase1 {
 
 	return if !$volid;
 
+	# check if volume exists, might be a pending new one
+	if ($volid =~ $PVE::LXC::NEW_DISK_RE) {
+	    $self->log('info', "volume '$volid' does not exist (pending change?)");
+	    return;
+	}
+
 	my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
 
 	# check if storage is available on source node
@@ -256,42 +262,18 @@ sub phase1 {
 	&$log_error($@, $volid) if $@;
     };
 
-    # first unused / lost volumes owned by this container
-    my @sids = PVE::Storage::storage_ids($self->{storecfg});
-    foreach my $storeid (@sids) {
-	my $scfg = PVE::Storage::storage_config($self->{storecfg}, $storeid);
-	next if $scfg->{shared} && !$remote;
-	next if !PVE::Storage::storage_check_enabled($self->{storecfg}, $storeid, undef, 1);
-
-	# get list from PVE::Storage (for unreferenced volumes)
-	my $dl = PVE::Storage::vdisk_list($self->{storecfg}, $storeid, $vmid, undef, 'rootdir');
-
-	next if @{$dl->{$storeid}} == 0;
-
-	# check if storage is available on target node
-	my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
-	if (!$remote) {
-	    my $target_scfg = PVE::Storage::storage_check_enabled($self->{storecfg}, $targetsid, $self->{node});
-
-	    die "content type 'rootdir' is not available on storage '$targetsid'\n"
-		if !$target_scfg->{content}->{rootdir};
-	}
-
-	PVE::Storage::foreach_volid($dl, sub {
-	    my ($volid, $sid, $volname) = @_;
-
-	    $volhash->{$volid}->{ref} = 'storage';
-	    $volhash->{$volid}->{targetsid} = $targetsid;
-	});
-    }
-
-    # then all volumes referenced in snapshots
+    # first all volumes referenced in snapshots
     foreach my $snapname (keys %{$conf->{snapshots}}) {
 	&$test_volid($conf->{snapshots}->{$snapname}->{'vmstate'}, 0, undef)
 	    if defined($conf->{snapshots}->{$snapname}->{'vmstate'});
 	PVE::LXC::Config->foreach_volume($conf->{snapshots}->{$snapname}, $test_mp, $snapname);
     }
 
+    # then all pending volumes
+    if (defined $conf->{pending} && $conf->{pending}->%*) {
+	PVE::LXC::Config->foreach_volume($conf->{pending}, $test_mp);
+    }
+
     # finally all current volumes
     PVE::LXC::Config->foreach_volume_full($conf, { include_unused => 1 }, $test_mp);
 
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 container 11/12] migration: fail when aliased volume is detected
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (9 preceding siblings ...)
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 container 10/12] migration: only migrate volumes used by the guest Aaron Lauterer
@ 2023-06-16  9:57 ` Aaron Lauterer
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 docs 12/12] storage: add hint to avoid storage aliasing Aaron Lauterer
  2023-06-16 14:12 ` [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Fiona Ebner
  12 siblings, 0 replies; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:57 UTC (permalink / raw)
  To: pve-devel

Aliased volumes (the same volume being referenced multiple times) can
lead to unexpected behavior in a migration.

Therefore, stop the migration in such a case.

The check works by comparing the path returned by the storage plugin.
This means we should be able to catch the common situations in which it
can happen:

* the same volid being referenced multiple times
* a different volid due to an aliased storage: a different storage name
  pointing to the same location

We decided against checking whether the storages themselves are aliased.
It is not possible to infer that reliably from the storage configuration
options alone.

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: none

 src/PVE/LXC/Migrate.pm | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index e3fa7a7..e7c6736 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -182,6 +182,7 @@ sub phase1 {
     my $volhash = {}; # 'config', 'snapshot' or 'storage' for local volumes
     my $volhash_errors = {};
     my $abort = 0;
+    my $path_to_volid = {};
 
     my $log_error = sub {
 	my ($msg, $volid) = @_;
@@ -231,6 +232,8 @@ sub phase1 {
 	die "owned by other guest (owner = $owner)\n"
 	    if !$owner || ($owner != $self->{vmid});
 
+	$path_to_volid->{$path}->{$volid} = 1;
+
 	if (defined($snapname)) {
 	    # we cannot migrate shapshots on local storage
 	    # exceptions: 'zfspool', 'btrfs'
@@ -277,6 +280,12 @@ sub phase1 {
     # finally all current volumes
     PVE::LXC::Config->foreach_volume_full($conf, { include_unused => 1 }, $test_mp);
 
+    for my $path (keys %$path_to_volid) {
+	my @volids = keys $path_to_volid->{$path}->%*;
+	die "detected not supported aliased volumes: '" . join("', '", @volids) . "'"
+	    if (scalar @volids > 1);
+    }
+
     # additional checks for local storage
     foreach my $volid (keys %$volhash) {
 	eval {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] [PATCH v4 docs 12/12] storage: add hint to avoid storage aliasing
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (10 preceding siblings ...)
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 container 11/12] migration: fail when aliased volume is detected Aaron Lauterer
@ 2023-06-16  9:57 ` Aaron Lauterer
  2023-06-16 14:12 ` [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Fiona Ebner
  12 siblings, 0 replies; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16  9:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: rephrased it more harshly and with a bit more
background information

 pvesm.adoc | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/pvesm.adoc b/pvesm.adoc
index d250e0b..46a50f0 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -177,6 +177,12 @@ zfspool: local-zfs
 	content images,rootdir
 ----
 
+CAUTION: It is problematic to have multiple storage configurations pointing to
+the exact same underlying storage. Such an _aliased_ storage configuration can
+lead to two different volume IDs ('volid') pointing to the exact same disk
+image. {pve} expects that the disk images referenced by volume IDs are unique.
+Choosing different content types for _aliased_ storage configurations can be
+fine, but is not recommended.
 
 Common Storage Properties
 ~~~~~~~~~~~~~~~~~~~~~~~~~
-- 
2.39.2





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [pve-devel] [PATCH v4 qemu-server 1/12] migration: only migrate disks used by the guest
  2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 1/12] migration: only migrate disks used by the guest Aaron Lauterer
@ 2023-06-16 12:16   ` Fiona Ebner
  0 siblings, 0 replies; 21+ messages in thread
From: Fiona Ebner @ 2023-06-16 12:16 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

On 16.06.23 at 11:56, Aaron Lauterer wrote:
> When scanning all configured storages for disk images belonging to the
> VM, the migration could easily fail if a storage is enabled but not
> available. That storage might not even be used by the VM at all.
> 
> By not scanning all storages and only looking at the disk images
> referenced in the VM config, we can avoid unnecessary failures.
> Some information that used to be provided by the storage scan needs
> to be fetched explicitly (size, format).
> 
> Behaviorally, the biggest change is that unreferenced disk images will
> no longer be migrated. Only images referenced in the config or in a
> snapshot will be migrated.
> 
> The tests have been adapted accordingly.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> changes since v3: now it only removes the storage scanning

Ideally, this patch should come after recognizing pending disks in
foreach_volid(), because as-is, it breaks picking up pending disks
during migration until the later patch is in place. But if it's too much
work to reorder, feel free to leave it.

> 
>  PVE/QemuMigrate.pm                    | 49 ++++-----------------------
>  test/MigrationTest/QemuMigrateMock.pm | 10 ++++++
>  test/run_qemu_migrate_tests.pl        | 11 +++---
>  3 files changed, 22 insertions(+), 48 deletions(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 09cc1d8..5f4f402 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -312,49 +312,6 @@ sub scan_local_volumes {
>  	    $abort = 1;
>  	};
>  
> -	my @sids = PVE::Storage::storage_ids($storecfg);
> -	foreach my $storeid (@sids) {
> -	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> -	    next if $scfg->{shared} && !$self->{opts}->{remote};
> -	    next if !PVE::Storage::storage_check_enabled($storecfg, $storeid, undef, 1);
> -
> -	    # get list from PVE::Storage (for unused volumes)
> -	    my $dl = PVE::Storage::vdisk_list($storecfg, $storeid, $vmid, undef, 'images');
> -
> -	    next if @{$dl->{$storeid}} == 0;
> -
> -	    my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
> -	    if (!$self->{opts}->{remote}) {
> -		# check if storage is available on target node
> -		my $target_scfg = PVE::Storage::storage_check_enabled(
> -		    $storecfg,
> -		    $targetsid,
> -		    $self->{node},
> -		);
> -
> -		die "content type 'images' is not available on storage '$targetsid'\n"
> -		    if !$target_scfg->{content}->{images};
> -
> -	    }
> -
> -	    my $bwlimit = $self->get_bwlimit($storeid, $targetsid);
> -
> -	    PVE::Storage::foreach_volid($dl, sub {
> -		my ($volid, $sid, $volinfo) = @_;
> -
> -		$local_volumes->{$volid}->{ref} = 'storage';
> -		$local_volumes->{$volid}->{size} = $volinfo->{size};
> -		$local_volumes->{$volid}->{targetsid} = $targetsid;
> -		$local_volumes->{$volid}->{bwlimit} = $bwlimit;
> -
> -		# If with_snapshots is not set for storage migrate, it tries to use
> -		# a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
> -		# back to qcow2 is currently not possible.
> -		$local_volumes->{$volid}->{snapshots} = ($volinfo->{format} =~ /^(?:qcow2|vmdk)$/);

Sorry, I only noticed it now: this one is dropped, but it shouldn't be.
See 5eca0c36 ("sync_disks: Always set 'snapshots' for qcow2 and vmdk
volumes") for the rationale. You can just re-do it further down once you
have the format; please also copy the comment.
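
I.e., something along these lines once the format has been assigned
(sketch, with the original comment copied over):

    # If with_snapshots is not set for storage migrate, it tries to use
    # a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
    # back to qcow2 is currently not possible.
    $local_volumes->{$volid}->{snapshots} =
        ($local_volumes->{$volid}->{format} =~ /^(?:qcow2|vmdk)$/);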

> -		$local_volumes->{$volid}->{format} = $volinfo->{format};
> -	    });
> -	}
> -
>  	my $replicatable_volumes = !$self->{replication_jobcfg} ? {}
>  	    : PVE::QemuConfig->get_replicatable_volumes($storecfg, $vmid, $conf, 0, 1);
>  	foreach my $volid (keys %{$replicatable_volumes}) {
> @@ -407,6 +364,12 @@ sub scan_local_volumes {
>  	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
>  	    $local_volumes->{$volid}->{ref} = 'generated' if $attr->{is_tpmstate};
>  
> +	    $local_volumes->{$volid}->{bwlimit} = $self->get_bwlimit($sid, $targetsid);
> +	    $local_volumes->{$volid}->{targetsid} = $targetsid;
> +
> +	    ($local_volumes->{$volid}->{size}, $local_volumes->{$volid}->{format})
> +		= PVE::Storage::volume_size_info($storecfg, $volid);

If you want it to fit on one line:

    $local_volumes->{$volid}->@{qw(size format)} =
        PVE::Storage::volume_size_info($storecfg, $volid);

> +
>  	    $local_volumes->{$volid}->{is_vmstate} = $attr->{is_vmstate} ? 1 : 0;
>  
>  	    $local_volumes->{$volid}->{drivename} = $attr->{drivename}
> diff --git a/test/MigrationTest/QemuMigrateMock.pm b/test/MigrationTest/QemuMigrateMock.pm
> index 94fe686..9034d10 100644
> --- a/test/MigrationTest/QemuMigrateMock.pm
> +++ b/test/MigrationTest/QemuMigrateMock.pm
> @@ -240,6 +240,16 @@ $MigrationTest::Shared::storage_module->mock(
>  
>  	delete $source_volids->{$volid};
>      },
> +    volume_size_info => sub {
> +	my ($scfg, $volid) = @_;
> +	my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
> +
> +	for my $v ($source_vdisks->{$storeid}->@*) {
> +	    return wantarray ? ($v->{size}, $v->{format}, $v->{used}, $v->{parent}) : $v

Above line should end with ": $v->{size}"?
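
I.e., the mock would then look like this (sketch):

    volume_size_info => sub {
        my ($scfg, $volid) = @_;
        my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);

        for my $v ($source_vdisks->{$storeid}->@*) {
            # scalar context should return just the size
            return wantarray ? ($v->{size}, $v->{format}, $v->{used}, $v->{parent}) : $v->{size}
                if $v->{volid} eq $volid;
        }
        die "could not find '$volid' in mock 'source_vdisks'\n";
    },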

> +		if $v->{volid} eq $volid;
> +	}
> +	die "could not find '$volid' in mock 'source_vdisks'\n";
> +    },
>  );
>  
>  $MigrationTest::Shared::tools_module->mock(
> diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
> index 090449f..7a9d7ea 100755
> --- a/test/run_qemu_migrate_tests.pl
> +++ b/test/run_qemu_migrate_tests.pl
> @@ -708,7 +708,6 @@ my $tests = [
>  	},
>      },
>      {
> -	# FIXME: Maybe add orphaned drives as unused?
>  	name => '149_running_orphaned_disk_targetstorage_zfs',
>  	target => 'pve1',
>  	vmid => 149,
> @@ -729,10 +728,11 @@ my $tests = [
>  	},
>  	expected_calls => $default_expected_calls_online,
>  	expected => {
> -	    source_volids => {},
> +	    source_volids => {
> +		'local-dir:149/vm-149-disk-0.qcow2' => 1,
> +	    },
>  	    target_volids => {
>  		'local-zfs:vm-149-disk-10' => 1,
> -		'local-zfs:vm-149-disk-0' => 1,
>  	    },
>  	    vm_config => get_patched_config(149, {
>  		scsi0 => 'local-zfs:vm-149-disk-10,format=raw,size=4G',
> @@ -765,10 +765,11 @@ my $tests = [

There's another FIXME above here, that should be removed

>  	},
>  	expected_calls => $default_expected_calls_online,
>  	expected => {
> -	    source_volids => {},
> +	    source_volids => {
> +		'local-dir:149/vm-149-disk-0.qcow2' => 1,
> +	    },
>  	    target_volids => {
>  		'local-lvm:vm-149-disk-10' => 1,
> -		'local-dir:149/vm-149-disk-0.qcow2' => 1,
>  	    },
>  	    vm_config => get_patched_config(149, {
>  		scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [pve-devel] [PATCH v4 qemu-server 2/12] qemuserver: foreach_volid: include pending volumes
  2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 2/12] qemuserver: foreach_volid: include pending volumes Aaron Lauterer
@ 2023-06-16 12:25   ` Fiona Ebner
  2023-06-16 12:37     ` Thomas Lamprecht
  0 siblings, 1 reply; 21+ messages in thread
From: Fiona Ebner @ 2023-06-16 12:25 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

On 16.06.23 at 11:56, Aaron Lauterer wrote:
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 6cbaf87..33acef6 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -2424,7 +2424,7 @@ sub destroy_vm {
>  
>      if ($purge_unreferenced) { # also remove unreferenced disk
>  	my $vmdisks = PVE::Storage::vdisk_list($storecfg, undef, $vmid, undef, 'images');
> -	PVE::Storage::foreach_volid($vmdisks, sub {
> +	PVE::Storage::foreach_volid($vmdisks, 1, sub {

This is a different function.

>  	    my ($volid, $sid, $volname, $d) = @_;
>  	    eval { PVE::Storage::vdisk_free($storecfg, $volid) };
>  	    warn $@ if $@;

(...)

> @@ -4902,6 +4904,11 @@ sub foreach_volid {
>      };
>  
>      PVE::QemuConfig->foreach_volume_full($conf, $include_opts, $test_volid);
> +
> +    if ($include_pending && defined($conf->{pending}) && $conf->{pending}->%*) {
> +	PVE::QemuConfig->foreach_volume_full($conf->{pending}, $include_opts, $test_volid, undef, 1);
> +    }

Style nit: wouldn't be 101 chars as a post-if ;)

> +
>      foreach my $snapname (keys %{$conf->{snapshots}}) {
>  	my $snap = $conf->{snapshots}->{$snapname};
>  	PVE::QemuConfig->foreach_volume_full($snap, $include_opts, $test_volid, $snapname);





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [pve-devel] [PATCH v4 qemu-server 2/12] qemuserver: foreach_volid: include pending volumes
  2023-06-16 12:25   ` Fiona Ebner
@ 2023-06-16 12:37     ` Thomas Lamprecht
  0 siblings, 0 replies; 21+ messages in thread
From: Thomas Lamprecht @ 2023-06-16 12:37 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fiona Ebner, Aaron Lauterer

On 16/06/2023 at 14:25, Fiona Ebner wrote:
>> @@ -4902,6 +4904,11 @@ sub foreach_volid {
>>      };
>>  
>>      PVE::QemuConfig->foreach_volume_full($conf, $include_opts, $test_volid);
>> +
>> +    if ($include_pending && defined($conf->{pending}) && $conf->{pending}->%*) {
>> +	PVE::QemuConfig->foreach_volume_full($conf->{pending}, $include_opts, $test_volid, undef, 1);
>> +    }
>
> Style nit: wouldn't be 101 chars as a post-if 😉

OTOH, the check is a bit complex, and for complex checks the "normal" if
block often feels a bit easier to read. No really hard feelings either
way ^^




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [pve-devel] [PATCH v4 qemu-server 4/12] qemuserver: foreach_volid: test regular config last
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 4/12] qemuserver: foreach_volid: test regular config last Aaron Lauterer
@ 2023-06-16 12:40   ` Fiona Ebner
  2023-06-16 14:36     ` Aaron Lauterer
  0 siblings, 1 reply; 21+ messages in thread
From: Fiona Ebner @ 2023-06-16 12:40 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

On 16.06.23 at 11:57, Aaron Lauterer wrote:
> after snapshots and the pending section to make sure that volids that
> are referenced here will be marked correctly, e.g. 'is_unused'.
> 

The flag is set as follows:

>         $volhash->{$volid}->{is_unused} //= 0;
>         $volhash->{$volid}->{is_unused} = 1 if $key =~ /^unused\d+$/;

So why would the order matter? Once it's set to 1 it won't change.

On the other hand

>         my $size = $drive->{size};
>         $volhash->{$volid}->{size} //= $size if $size;

the first size wins, so with your change, the size in a snapshot will
win. I'm not sure we use this attribute anywhere, though; it might be worth
checking and dropping it if not.
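A quick standalone illustration of why the two attributes behave differently
(my example, not from the code):

    my %vol;
    $vol{is_unused} //= 0;     # initialize to 0 only if still undefined
    $vol{is_unused} = 1;       # plain assignment - a later unusedX hit
                               # always sticks, so order doesn't matter
    $vol{size} //= '4G';       # defined-or assignment - first value wins
    $vol{size} //= '8G';       # no-op, size stays '4G'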




^ permalink raw reply	[flat|nested] 21+ messages in thread

* [pve-devel] applied: [PATCH v4 qemu-server 5/12] migration: add target_storage_check_available
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 5/12] migration: add target_storage_check_available Aaron Lauterer
@ 2023-06-16 13:11   ` Fiona Ebner
  0 siblings, 0 replies; 21+ messages in thread
From: Fiona Ebner @ 2023-06-16 13:11 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

On 16.06.23 at 11:57, Aaron Lauterer wrote:
> We use this in a few places. By factoring it into its own function, we
> can avoid running slightly different checks in various places.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---

applied this one, thanks!

Not too important right now, but it could be improved further by turning it
into a function that takes all required parameters, rather than having it as
a method, and by adapting QemuServer.pm's check_storage_availability() to
use it. Still, it's certainly a start as-is and improves readability.
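Something along these lines, roughly (untested sketch of the suggested
follow-up; the parameter list is assumed):

    # plain function instead of a method, so QemuServer.pm can reuse it:
    sub target_storage_check_available {
	my ($storecfg, $targetsid, $volid, $node) = @_;
	my $scfg = PVE::Storage::storage_check_enabled($storecfg, $targetsid, $node);
	die "content type 'images' is not available on storage '$targetsid'\n"
	    if !$scfg->{content}->{images};
    }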




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [pve-devel] [PATCH v4 container 10/12] migration: only migrate volumes used by the guest
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 container 10/12] migration: only migrate volumes used by the guest Aaron Lauterer
@ 2023-06-16 13:58   ` Fiona Ebner
  0 siblings, 0 replies; 21+ messages in thread
From: Fiona Ebner @ 2023-06-16 13:58 UTC (permalink / raw)
  To: Aaron Lauterer, pve-devel

On 16.06.23 at 11:57, Aaron Lauterer wrote:
> changes since v3:
> hopefully caught all style nits now ;)

I'm sorry to tell you:

> +    if (defined $conf->{pending} && $conf->{pending}->%*) {

missing parentheses around defined() :P
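i.e., the style rule asks for:

    if (defined($conf->{pending}) && $conf->{pending}->%*) {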




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases
  2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
                   ` (11 preceding siblings ...)
  2023-06-16  9:57 ` [pve-devel] [PATCH v4 docs 12/12] storage: add hint to avoid storage aliasing Aaron Lauterer
@ 2023-06-16 14:12 ` Fiona Ebner
  12 siblings, 0 replies; 21+ messages in thread
From: Fiona Ebner @ 2023-06-16 14:12 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer


On 16.06.23 at 11:56, Aaron Lauterer wrote:
> Oh boy did this version explode in number of commits.

That made it very easy to apply one patch already, review the others and
catch the few remaining issues :)

As already discussed off-list, switching the attribute in foreach_volid
from referenced_in_config to is_attached, and not having unused disks
count as is_attached, might be best: whether the disk is attached or not
is what migration, the only user of this attribute, cares about.

Apart from the 'ref' handling, there's also

>                     my $msg = "can't migrate local cdrom drive";
>                     if (defined($snaprefs) && !$attr->{referenced_in_config}) {
>                         my $snapnames = join(', ', sort keys %$snaprefs);
>                         $msg .= " (referenced in snapshot - $snapnames)";
>                     }

but that can also just s/referenced_in_config/is_attached/
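A rough sketch of what the is_attached handling could look like inside
foreach_volid's helper (hypothetical; names and parameters assumed, not
taken from the series):

    $volhash->{$volid}->{is_attached} //= 0;
    # attached = present in the current config proper: not only in a
    # snapshot, not only pending, and not an unusedX entry
    $volhash->{$volid}->{is_attached} = 1
	if !defined($snapname) && !$pending && $key !~ /^unused\d+$/;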

For patches 7/12 through 12/12:

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [pve-devel] [PATCH v4 qemu-server 4/12] qemuserver: foreach_volid: test regular config last
  2023-06-16 12:40   ` Fiona Ebner
@ 2023-06-16 14:36     ` Aaron Lauterer
  0 siblings, 0 replies; 21+ messages in thread
From: Aaron Lauterer @ 2023-06-16 14:36 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion



On 6/16/23 14:40, Fiona Ebner wrote:
> On 16.06.23 at 11:57, Aaron Lauterer wrote:
>> after snapshots and the pending section to make sure that volids that
>> are referenced here will be marked correctly, e.g. 'is_unused'.
>>
> 
> The flag is set as follows:
> 
>>          $volhash->{$volid}->{is_unused} //= 0;
>>          $volhash->{$volid}->{is_unused} = 1 if $key =~ /^unused\d+$/;
> 
> So why would the order matter? Once it's set to 1 it won't change.

Yep. I did check the behavior after we talked: 'unusedX' will not end up in a
snapshot, and if we detach a disk from a running VM with disk hotplugging
disabled, it will become a pending change in the form:
delete: scsi4
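For illustration, the resulting config would look roughly like this (VMID
and volume made up):

    scsi4: local-lvm:vm-100-disk-1,size=4G

    [PENDING]
    delete: scsi4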

So we should also not expect 'unusedX' there. Therefore I think we can drop this 
patch.

> 
> On the other hand
> 
>>          my $size = $drive->{size};
>>          $volhash->{$volid}->{size} //= $size if $size;
> 
> the first size wins, so with your change, the size in a snapshot will
> win. I'm not sure we use this attribute anywhere though, might be worth
> checking and dropping if not.




^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads: [~2023-06-16 14:37 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-16  9:56 [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Aaron Lauterer
2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 1/12] migration: only migrate disks used by the guest Aaron Lauterer
2023-06-16 12:16   ` Fiona Ebner
2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 2/12] qemuserver: foreach_volid: include pending volumes Aaron Lauterer
2023-06-16 12:25   ` Fiona Ebner
2023-06-16 12:37     ` Thomas Lamprecht
2023-06-16  9:56 ` [pve-devel] [PATCH v4 qemu-server 3/12] qemuserver: foreach_volid: always include pending disks Aaron Lauterer
2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 4/12] qemuserver: foreach_volid: test regular config last Aaron Lauterer
2023-06-16 12:40   ` Fiona Ebner
2023-06-16 14:36     ` Aaron Lauterer
2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 5/12] migration: add target_storage_check_available Aaron Lauterer
2023-06-16 13:11   ` [pve-devel] applied: " Fiona Ebner
2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 6/12] migration: scan_local_volumes: adapt refs handling Aaron Lauterer
2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 7/12] tests: add migration test for pending disk Aaron Lauterer
2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 8/12] migration: fail when aliased volume is detected Aaron Lauterer
2023-06-16  9:57 ` [pve-devel] [PATCH v4 qemu-server 9/12] tests: add migration alias check Aaron Lauterer
2023-06-16  9:57 ` [pve-devel] [PATCH v4 container 10/12] migration: only migrate volumes used by the guest Aaron Lauterer
2023-06-16 13:58   ` Fiona Ebner
2023-06-16  9:57 ` [pve-devel] [PATCH v4 container 11/12] migration: fail when aliased volume is detected Aaron Lauterer
2023-06-16  9:57 ` [pve-devel] [PATCH v4 docs 12/12] storage: add hint to avoid storage aliasing Aaron Lauterer
2023-06-16 14:12 ` [pve-devel] [PATCH v4 qemu-server, container, docs 0/12] migration: don't scan all storages, fail on aliases Fiona Ebner
