From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v5 qemu-server 3/10] migration: only migrate disks used by the guest
Date: Mon, 19 Jun 2023 11:29:30 +0200
Message-ID: <20230619092937.604628-4-a.lauterer@proxmox.com>
In-Reply-To: <20230619092937.604628-1-a.lauterer@proxmox.com>

When scanning all configured storages for disk images belonging to the
VM, the migration could easily fail if a storage is enabled but not
available, even though that storage might not be used by the VM at all.

By not scanning all storages and instead only looking at the disk
images referenced in the VM config, we can avoid these unnecessary
failures. Some information that used to be provided by the storage scan
(size, format) now needs to be fetched explicitly.
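
As a rough sketch of the new approach (simplified, not the exact code
in scan_local_volumes()): for each volume referenced in the config,
size and format are queried directly instead of being taken from a
storage-wide vdisk_list() scan. The @referenced_volids list below is
hypothetical:

    foreach my $volid (@referenced_volids) {
	my ($size, $format) = PVE::Storage::volume_size_info($storecfg, $volid);
	$local_volumes->{$volid}->{size} = $size;
	$local_volumes->{$volid}->{format} = $format;
    }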

The biggest behavioral change is that unreferenced disk images are no
longer migrated; only images referenced in the config will be migrated.

The tests have been adapted accordingly.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v4:
- reordered
- kept the check whether snapshots exist on qcow2 images
- fixed the return values of the mocked volume_size_info() (see the note below)
- removed a leftover FIXME in the tests that no longer applies
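
Note on the mocked volume_size_info(): the real function is
context-sensitive, returning size, format, used space and parent in
list context, but only the size in scalar context, which is what the
wantarray handling in the mock below mirrors. A minimal usage sketch
(not part of the patch):

    # list context: size, format, used space, parent (if any)
    my ($size, $format, $used, $parent) =
	PVE::Storage::volume_size_info($storecfg, $volid);

    # scalar context: only the size in bytes
    my $size_only = PVE::Storage::volume_size_info($storecfg, $volid);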

 PVE/QemuMigrate.pm                    | 52 +++++----------------------
 test/MigrationTest/QemuMigrateMock.pm | 10 ++++++
 test/run_qemu_migrate_tests.pl        | 12 +++----
 3 files changed, 25 insertions(+), 49 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b10a515..4f6ab64 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -317,49 +317,6 @@ sub scan_local_volumes {
 	    $abort = 1;
 	};
 
-	my @sids = PVE::Storage::storage_ids($storecfg);
-	foreach my $storeid (@sids) {
-	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
-	    next if $scfg->{shared} && !$self->{opts}->{remote};
-	    next if !PVE::Storage::storage_check_enabled($storecfg, $storeid, undef, 1);
-
-	    # get list from PVE::Storage (for unused volumes)
-	    my $dl = PVE::Storage::vdisk_list($storecfg, $storeid, $vmid, undef, 'images');
-
-	    next if @{$dl->{$storeid}} == 0;
-
-	    my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
-	    if (!$self->{opts}->{remote}) {
-		# check if storage is available on target node
-		my $target_scfg = PVE::Storage::storage_check_enabled(
-		    $storecfg,
-		    $targetsid,
-		    $self->{node},
-		);
-
-		die "content type 'images' is not available on storage '$targetsid'\n"
-		    if !$target_scfg->{content}->{images};
-
-	    }
-
-	    my $bwlimit = $self->get_bwlimit($storeid, $targetsid);
-
-	    PVE::Storage::foreach_volid($dl, sub {
-		my ($volid, $sid, $volinfo) = @_;
-
-		$local_volumes->{$volid}->{ref} = 'storage';
-		$local_volumes->{$volid}->{size} = $volinfo->{size};
-		$local_volumes->{$volid}->{targetsid} = $targetsid;
-		$local_volumes->{$volid}->{bwlimit} = $bwlimit;
-
-		# If with_snapshots is not set for storage migrate, it tries to use
-		# a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
-		# back to qcow2 is currently not possible.
-		$local_volumes->{$volid}->{snapshots} = ($volinfo->{format} =~ /^(?:qcow2|vmdk)$/);
-		$local_volumes->{$volid}->{format} = $volinfo->{format};
-	    });
-	}
-
 	my $replicatable_volumes = !$self->{replication_jobcfg} ? {}
 	    : PVE::QemuConfig->get_replicatable_volumes($storecfg, $vmid, $conf, 0, 1);
 	foreach my $volid (keys %{$replicatable_volumes}) {
@@ -408,6 +365,11 @@ sub scan_local_volumes {
 	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
 	    $local_volumes->{$volid}->{ref} = 'generated' if $attr->{is_tpmstate};
 
+	    $local_volumes->{$volid}->{bwlimit} = $self->get_bwlimit($sid, $targetsid);
+	    $local_volumes->{$volid}->{targetsid} = $targetsid;
+
+	    $local_volumes->{$volid}->@{qw(size format)} = PVE::Storage::volume_size_info($storecfg, $volid);
+
 	    $local_volumes->{$volid}->{is_vmstate} = $attr->{is_vmstate} ? 1 : 0;
 
 	    $local_volumes->{$volid}->{drivename} = $attr->{drivename}
@@ -420,6 +382,10 @@ sub scan_local_volumes {
 		}
 		die "local cdrom image\n";
 	    }
+	    # If with_snapshots is not set for storage migrate, it tries to use
+	    # a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
+	    # back to qcow2 is currently not possible.
+	    $local_volumes->{$volid}->{snapshots} = ($local_volumes->{$volid}->{format} =~ /^(?:qcow2|vmdk)$/);
 
 	    my ($path, $owner) = PVE::Storage::path($storecfg, $volid);
 
diff --git a/test/MigrationTest/QemuMigrateMock.pm b/test/MigrationTest/QemuMigrateMock.pm
index 94fe686..1efabe2 100644
--- a/test/MigrationTest/QemuMigrateMock.pm
+++ b/test/MigrationTest/QemuMigrateMock.pm
@@ -240,6 +240,16 @@ $MigrationTest::Shared::storage_module->mock(
 
 	delete $source_volids->{$volid};
     },
+    volume_size_info => sub {
+	my ($scfg, $volid) = @_;
+	my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+
+	for my $v ($source_vdisks->{$storeid}->@*) {
+	    return wantarray ? ($v->{size}, $v->{format}, $v->{used}, $v->{parent}) : $v->{size}
+		if $v->{volid} eq $volid;
+	}
+	die "could not find '$volid' in mock 'source_vdisks'\n";
+    },
 );
 
 $MigrationTest::Shared::tools_module->mock(
diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
index 090449f..fedbc32 100755
--- a/test/run_qemu_migrate_tests.pl
+++ b/test/run_qemu_migrate_tests.pl
@@ -708,7 +708,6 @@ my $tests = [
 	},
     },
     {
-	# FIXME: Maybe add orphaned drives as unused?
 	name => '149_running_orphaned_disk_targetstorage_zfs',
 	target => 'pve1',
 	vmid => 149,
@@ -729,10 +728,11 @@ my $tests = [
 	},
 	expected_calls => $default_expected_calls_online,
 	expected => {
-	    source_volids => {},
+	    source_volids => {
+		'local-dir:149/vm-149-disk-0.qcow2' => 1,
+	    },
 	    target_volids => {
 		'local-zfs:vm-149-disk-10' => 1,
-		'local-zfs:vm-149-disk-0' => 1,
 	    },
 	    vm_config => get_patched_config(149, {
 		scsi0 => 'local-zfs:vm-149-disk-10,format=raw,size=4G',
@@ -745,7 +745,6 @@ my $tests = [
 	},
     },
     {
-	# FIXME: Maybe add orphaned drives as unused?
 	name => '149_running_orphaned_disk',
 	target => 'pve1',
 	vmid => 149,
@@ -765,10 +764,11 @@ my $tests = [
 	},
 	expected_calls => $default_expected_calls_online,
 	expected => {
-	    source_volids => {},
+	    source_volids => {
+		'local-dir:149/vm-149-disk-0.qcow2' => 1,
+	    },
 	    target_volids => {
 		'local-lvm:vm-149-disk-10' => 1,
-		'local-dir:149/vm-149-disk-0.qcow2' => 1,
 	    },
 	    vm_config => get_patched_config(149, {
 		scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',
-- 
2.39.2




