Date: Fri, 16 Jun 2023 14:16:38 +0200
From: Fiona Ebner
To: Proxmox VE development discussion, Aaron Lauterer
Message-ID: <3eea743f-35e3-9d3a-e19d-a754e958a2c5@proxmox.com>
In-Reply-To: <20230616095708.1323621-2-a.lauterer@proxmox.com>
Subject: Re: [pve-devel] [PATCH v4 qemu-server 1/12] migration: only migrate disks used by the guest

On 16.06.23 at 11:56, Aaron Lauterer wrote:
> When scanning all configured storages for disk images belonging to the
> VM, the migration could easily fail if a storage is not available, but
> enabled. That storage might not even be used by the VM at all.
>
> By not scanning all storages and only looking at the disk images
> referenced in the VM config, we can avoid unnecessary failures.
> Some information that used to be provided by the storage scanning needs
> to be fetched explicitly (size, format).
>
> Behaviorally, the biggest change is that unreferenced disk images will
> not be migrated anymore. Only images in the config or in a snapshot will
> be migrated.
>
> The tests have been adapted accordingly.
>
> Signed-off-by: Aaron Lauterer
> ---
> changes since v3: now it only removes the storage scanning

Ideally, this patch should come after the one that makes foreach_volid()
recognize pending disks, because as it stands, this patch breaks picking
up pending disks during migration until that later patch is applied as
well.
But if it's too much work to reorder, feel free to leave it.

>
>  PVE/QemuMigrate.pm                    | 49 ++++-----------------------
>  test/MigrationTest/QemuMigrateMock.pm | 10 ++++++
>  test/run_qemu_migrate_tests.pl        | 11 +++---
>  3 files changed, 22 insertions(+), 48 deletions(-)
>
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 09cc1d8..5f4f402 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -312,49 +312,6 @@ sub scan_local_volumes {
>          $abort = 1;
>      };
>
> -    my @sids = PVE::Storage::storage_ids($storecfg);
> -    foreach my $storeid (@sids) {
> -        my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> -        next if $scfg->{shared} && !$self->{opts}->{remote};
> -        next if !PVE::Storage::storage_check_enabled($storecfg, $storeid, undef, 1);
> -
> -        # get list from PVE::Storage (for unused volumes)
> -        my $dl = PVE::Storage::vdisk_list($storecfg, $storeid, $vmid, undef, 'images');
> -
> -        next if @{$dl->{$storeid}} == 0;
> -
> -        my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
> -        if (!$self->{opts}->{remote}) {
> -            # check if storage is available on target node
> -            my $target_scfg = PVE::Storage::storage_check_enabled(
> -                $storecfg,
> -                $targetsid,
> -                $self->{node},
> -            );
> -
> -            die "content type 'images' is not available on storage '$targetsid'\n"
> -                if !$target_scfg->{content}->{images};
> -
> -        }
> -
> -        my $bwlimit = $self->get_bwlimit($storeid, $targetsid);
> -
> -        PVE::Storage::foreach_volid($dl, sub {
> -            my ($volid, $sid, $volinfo) = @_;
> -
> -            $local_volumes->{$volid}->{ref} = 'storage';
> -            $local_volumes->{$volid}->{size} = $volinfo->{size};
> -            $local_volumes->{$volid}->{targetsid} = $targetsid;
> -            $local_volumes->{$volid}->{bwlimit} = $bwlimit;
> -
> -            # If with_snapshots is not set for storage migrate, it tries to use
> -            # a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
> -            # back to qcow2 is currently not possible.
> -            $local_volumes->{$volid}->{snapshots} = ($volinfo->{format} =~ /^(?:qcow2|vmdk)$/);

Sorry, I only noticed this now, but this line is dropped even though it
shouldn't be. See 5eca0c36 ("sync_disks: Always set 'snapshots' for
qcow2 and vmdk volumes") for the rationale. You can simply re-add it
further down, once the format is available, and please also copy the
comment.

> -            $local_volumes->{$volid}->{format} = $volinfo->{format};
> -        });
> -    }
> -
>      my $replicatable_volumes = !$self->{replication_jobcfg} ? {}
>          : PVE::QemuConfig->get_replicatable_volumes($storecfg, $vmid, $conf, 0, 1);
>      foreach my $volid (keys %{$replicatable_volumes}) {
> @@ -407,6 +364,12 @@ sub scan_local_volumes {
>          $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
>          $local_volumes->{$volid}->{ref} = 'generated' if $attr->{is_tpmstate};
>
> +        $local_volumes->{$volid}->{bwlimit} = $self->get_bwlimit($sid, $targetsid);
> +        $local_volumes->{$volid}->{targetsid} = $targetsid;
> +
> +        ($local_volumes->{$volid}->{size}, $local_volumes->{$volid}->{format})
> +            = PVE::Storage::volume_size_info($storecfg, $volid);

If you want it to fit on one line:

    $local_volumes->{$volid}->@{qw(size format)} = PVE::Storage::volume_size_info($storecfg, $volid);
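
Untested, but re-adding the dropped snapshots line further down, once
the format is available, could then look something like this (the regex
and the comment are copied verbatim from the removed storage-scanning
code above):

    # If with_snapshots is not set for storage migrate, it tries to use
    # a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
    # back to qcow2 is currently not possible.
    $local_volumes->{$volid}->{snapshots} =
        ($local_volumes->{$volid}->{format} =~ /^(?:qcow2|vmdk)$/);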
> +
>          $local_volumes->{$volid}->{is_vmstate} = $attr->{is_vmstate} ? 1 : 0;
>
>          $local_volumes->{$volid}->{drivename} = $attr->{drivename}
> diff --git a/test/MigrationTest/QemuMigrateMock.pm b/test/MigrationTest/QemuMigrateMock.pm
> index 94fe686..9034d10 100644
> --- a/test/MigrationTest/QemuMigrateMock.pm
> +++ b/test/MigrationTest/QemuMigrateMock.pm
> @@ -240,6 +240,16 @@ $MigrationTest::Shared::storage_module->mock(
>
>          delete $source_volids->{$volid};
>      },
> +    volume_size_info => sub {
> +        my ($scfg, $volid) = @_;
> +        my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
> +
> +        for my $v ($source_vdisks->{$storeid}->@*) {
> +            return wantarray ? ($v->{size}, $v->{format}, $v->{used}, $v->{parent}) : $v

Shouldn't the above line end with ": $v->{size}"?

> +                if $v->{volid} eq $volid;
> +        }
> +        die "could not find '$volid' in mock 'source_vdisks'\n";
> +    },
>  );
>
>  $MigrationTest::Shared::tools_module->mock(
> diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
> index 090449f..7a9d7ea 100755
> --- a/test/run_qemu_migrate_tests.pl
> +++ b/test/run_qemu_migrate_tests.pl
> @@ -708,7 +708,6 @@ my $tests = [
>          },
>      },
>      {
> -        # FIXME: Maybe add orphaned drives as unused?
>          name => '149_running_orphaned_disk_targetstorage_zfs',
>          target => 'pve1',
>          vmid => 149,
> @@ -729,10 +728,11 @@ my $tests = [
>          },
>          expected_calls => $default_expected_calls_online,
>          expected => {
> -            source_volids => {},
> +            source_volids => {
> +                'local-dir:149/vm-149-disk-0.qcow2' => 1,
> +            },
>              target_volids => {
>                  'local-zfs:vm-149-disk-10' => 1,
> -                'local-zfs:vm-149-disk-0' => 1,
>              },
>              vm_config => get_patched_config(149, {
>                  scsi0 => 'local-zfs:vm-149-disk-10,format=raw,size=4G',
> @@ -765,10 +765,11 @@ my $tests = [

There's another FIXME above here that should be removed.

>          },
>          expected_calls => $default_expected_calls_online,
>          expected => {
> -            source_volids => {},
> +            source_volids => {
> +                'local-dir:149/vm-149-disk-0.qcow2' => 1,
> +            },
>              target_volids => {
>                  'local-lvm:vm-149-disk-10' => 1,
> -                'local-dir:149/vm-149-disk-0.qcow2' => 1,
>              },
>              vm_config => get_patched_config(149, {
>                  scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',
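
For completeness, with that scalar-context return fixed, the mocked
volume_size_info() could read as follows (untested sketch, otherwise
identical to the hunk above):

    volume_size_info => sub {
        my ($scfg, $volid) = @_;
        my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);

        # look the volume up in the mocked disk list and mimic the
        # list/scalar return behavior of PVE::Storage::volume_size_info()
        for my $v ($source_vdisks->{$storeid}->@*) {
            return wantarray
                ? ($v->{size}, $v->{format}, $v->{used}, $v->{parent})
                : $v->{size}
                if $v->{volid} eq $volid;
        }
        die "could not find '$volid' in mock 'source_vdisks'\n";
    },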