From mboxrd@z Thu Jan 1 00:00:00 1970
From: Aaron Lauterer
To: pve-devel@lists.proxmox.com
Date: Mon, 19 Jun 2023 11:29:30 +0200
Message-Id: <20230619092937.604628-4-a.lauterer@proxmox.com>
In-Reply-To: <20230619092937.604628-1-a.lauterer@proxmox.com>
References: <20230619092937.604628-1-a.lauterer@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH v5 qemu-server 3/10] migration: only migrate disks used by the guest
List-Id: Proxmox VE development discussion
When scanning all configured storages for disk images belonging to the
VM, the migration could easily fail if a storage is enabled but not
available. That storage might not even be used by the VM at all.

By only looking at the disk images referenced in the VM config, instead
of scanning all storages, we can avoid those unnecessary failures. Some
information that used to be provided by the storage scan needs to be
fetched explicitly (size, format).

The biggest behavioral change is that unreferenced disk images will not
be migrated anymore; only images referenced in the config will be
migrated. The tests have been adapted accordingly.

Signed-off-by: Aaron Lauterer
---
changes since v4:
- reordered
- kept check if snapshots on qcow2 exist
- fixed return values in mock volume_size_info()
- removed missed fixme in tests that doesn't apply anymore

 PVE/QemuMigrate.pm                    | 52 +++++----------------------
 test/MigrationTest/QemuMigrateMock.pm | 10 ++++++
 test/run_qemu_migrate_tests.pl        | 12 +++----
 3 files changed, 25 insertions(+), 49 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b10a515..4f6ab64 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -317,49 +317,6 @@ sub scan_local_volumes {
 	    $abort = 1;
 	};
 
-	my @sids = PVE::Storage::storage_ids($storecfg);
-	foreach my $storeid (@sids) {
-	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
-	    next if $scfg->{shared} && !$self->{opts}->{remote};
-	    next if !PVE::Storage::storage_check_enabled($storecfg, $storeid, undef, 1);
-
-	    # get list from PVE::Storage (for unused volumes)
-	    my $dl = PVE::Storage::vdisk_list($storecfg, $storeid, $vmid, undef, 'images');
-
-	    next if @{$dl->{$storeid}} == 0;
-
-	    my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
-	    if (!$self->{opts}->{remote}) {
-		# check if storage is available on target node
-		my $target_scfg = PVE::Storage::storage_check_enabled(
-		    $storecfg,
-		    $targetsid,
-		    $self->{node},
-		);
-
-		die "content type 'images' is not available on storage '$targetsid'\n"
-		    if !$target_scfg->{content}->{images};
-
-	    }
-
-	    my $bwlimit = $self->get_bwlimit($storeid, $targetsid);
-
-	    PVE::Storage::foreach_volid($dl, sub {
-		my ($volid, $sid, $volinfo) = @_;
-
-		$local_volumes->{$volid}->{ref} = 'storage';
-		$local_volumes->{$volid}->{size} = $volinfo->{size};
-		$local_volumes->{$volid}->{targetsid} = $targetsid;
-		$local_volumes->{$volid}->{bwlimit} = $bwlimit;
-
-		# If with_snapshots is not set for storage migrate, it tries to use
-		# a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
-		# back to qcow2 is currently not possible.
-		$local_volumes->{$volid}->{snapshots} = ($volinfo->{format} =~ /^(?:qcow2|vmdk)$/);
-		$local_volumes->{$volid}->{format} = $volinfo->{format};
-	    });
-	}
-
 	my $replicatable_volumes = !$self->{replication_jobcfg} ? {} :
 	    PVE::QemuConfig->get_replicatable_volumes($storecfg, $vmid, $conf, 0, 1);
 
 	foreach my $volid (keys %{$replicatable_volumes}) {
@@ -408,6 +365,11 @@ sub scan_local_volumes {
 	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
 	    $local_volumes->{$volid}->{ref} = 'generated' if $attr->{is_tpmstate};
 
+	    $local_volumes->{$volid}->{bwlimit} = $self->get_bwlimit($sid, $targetsid);
+	    $local_volumes->{$volid}->{targetsid} = $targetsid;
+
+	    $local_volumes->{$volid}->@{qw(size format)} = PVE::Storage::volume_size_info($storecfg, $volid);
+
 	    $local_volumes->{$volid}->{is_vmstate} = $attr->{is_vmstate} ? 1 : 0;
 
 	    $local_volumes->{$volid}->{drivename} = $attr->{drivename}
@@ -420,6 +382,10 @@ sub scan_local_volumes {
 		}
 		die "local cdrom image\n";
 	    }
+
+	    # If with_snapshots is not set for storage migrate, it tries to use
+	    # a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
+	    # back to qcow2 is currently not possible.
+	    $local_volumes->{$volid}->{snapshots} = ($local_volumes->{$volid}->{format} =~ /^(?:qcow2|vmdk)$/);
 
 	    my ($path, $owner) = PVE::Storage::path($storecfg, $volid);
 
diff --git a/test/MigrationTest/QemuMigrateMock.pm b/test/MigrationTest/QemuMigrateMock.pm
index 94fe686..1efabe2 100644
--- a/test/MigrationTest/QemuMigrateMock.pm
+++ b/test/MigrationTest/QemuMigrateMock.pm
@@ -240,6 +240,16 @@ $MigrationTest::Shared::storage_module->mock(
 	delete $source_volids->{$volid};
     },
+    volume_size_info => sub {
+	my ($scfg, $volid) = @_;
+	my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+
+	for my $v ($source_vdisks->{$storeid}->@*) {
+	    return wantarray ? ($v->{size}, $v->{format}, $v->{used}, $v->{parent}) : $v->{size}
+		if $v->{volid} eq $volid;
+	}
+	die "could not find '$volid' in mock 'source_vdisks'\n";
+    },
 );
 
 $MigrationTest::Shared::tools_module->mock(
diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
index 090449f..fedbc32 100755
--- a/test/run_qemu_migrate_tests.pl
+++ b/test/run_qemu_migrate_tests.pl
@@ -708,7 +708,6 @@ my $tests = [
 	},
     },
     {
-	# FIXME: Maybe add orphaned drives as unused?
 	name => '149_running_orphaned_disk_targetstorage_zfs',
 	target => 'pve1',
 	vmid => 149,
@@ -729,10 +728,11 @@ my $tests = [
 	},
 	expected_calls => $default_expected_calls_online,
 	expected => {
-	    source_volids => {},
+	    source_volids => {
+		'local-dir:149/vm-149-disk-0.qcow2' => 1,
+	    },
 	    target_volids => {
 		'local-zfs:vm-149-disk-10' => 1,
-		'local-zfs:vm-149-disk-0' => 1,
 	    },
 	    vm_config => get_patched_config(149, {
 		scsi0 => 'local-zfs:vm-149-disk-10,format=raw,size=4G',
@@ -745,7 +745,6 @@ my $tests = [
 	},
     },
     {
-	# FIXME: Maybe add orphaned drives as unused?
 	name => '149_running_orphaned_disk',
 	target => 'pve1',
 	vmid => 149,
@@ -765,10 +764,11 @@ my $tests = [
 	},
 	expected_calls => $default_expected_calls_online,
 	expected => {
-	    source_volids => {},
+	    source_volids => {
+		'local-dir:149/vm-149-disk-0.qcow2' => 1,
+	    },
 	    target_volids => {
 		'local-lvm:vm-149-disk-10' => 1,
-		'local-dir:149/vm-149-disk-0.qcow2' => 1,
 	    },
 	    vm_config => get_patched_config(149, {
 		scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',
-- 
2.39.2
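[Editor's note] The approach the patch switches to can be modeled in a short, self-contained sketch. This is Python for illustration only; the function and parameter names below are hypothetical and are not the qemu-server (Perl) API — the real code does this inside scan_local_volumes() via PVE::Storage::volume_size_info().

```python
# Illustrative model of the new approach: instead of listing every image on
# every enabled storage (which fails when an unrelated storage is enabled but
# unavailable), only volumes referenced by the VM config are visited, and
# size/format are fetched per volume. All names here are hypothetical.

def scan_referenced_volumes(conf_volids, volume_size_info):
    """Collect migration info only for config-referenced volumes.

    conf_volids      -- volids referenced in the VM config (incl. unused disks)
    volume_size_info -- callable volid -> (size, format); the per-volume
                        lookup that replaces the old full-storage scan
    """
    local_volumes = {}
    for volid in conf_volids:
        size, fmt = volume_size_info(volid)
        local_volumes[volid] = {
            'size': size,
            'format': fmt,
            # qcow2/vmdk must be streamed with snapshot support, since
            # on-the-fly qcow2 -> raw+size -> qcow2 conversion is not possible
            'snapshots': fmt in ('qcow2', 'vmdk'),
        }
    return local_volumes
```

A truly orphaned image is simply never visited, which is why the adapted tests now expect the orphaned `local-dir:149/vm-149-disk-0.qcow2` to remain in `source_volids` rather than be migrated.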