From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Fri, 16 Jun 2023 11:56:58 +0200
Message-Id: <20230616095708.1323621-3-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230616095708.1323621-1-a.lauterer@proxmox.com>
References: <20230616095708.1323621-1-a.lauterer@proxmox.com>
Subject: [pve-devel] [PATCH v4 qemu-server 2/12] qemuserver: foreach_volid: include pending volumes

Make it possible to optionally iterate over disks in the pending section
of VMs, similar to how snapshots are handled already.

This is useful for migration, for example: since the storages are no
longer scanned, disk images that are referenced only in the pending
section need to be picked up as well.

All call sites are adapted and enable it, except for
QemuConfig::get_replicatable_volumes(), as including pending disks would
change the behavior of replication.

The following lists the call sites and whether they should be fine with
the change (source [0]):

1. QemuMigrate: scan_local_volumes(): needed to include pending disk
   images
2. API2/Qemu.pm: check_vm_disks_local() for the migration precondition:
   related to migration, so more consistent to include pending volumes
3. QemuConfig.pm: get_replicatable_volumes(): would change the behavior
   of replication, so it will not use it for now
4. QemuServer.pm: get_vm_volumes(): is used multiple times by:
4a. vm_stop_cleanup() to deactivate/unmap: should also be fine with
    pending volumes included
4b. QemuMigrate.pm: in prepare(): part of migration, so more consistent
    to include pending volumes
4c. QemuMigrate.pm: in phase3_cleanup() for deactivation: part of
    migration, so more consistent to include pending volumes

[0] https://lists.proxmox.com/pipermail/pve-devel/2023-May/056868.html

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v3: added as intermediate patch for a better git history as
suggested

 PVE/API2/Qemu.pm   |  2 +-
 PVE/QemuConfig.pm  |  2 +-
 PVE/QemuMigrate.pm |  2 +-
 PVE/QemuServer.pm  | 17 ++++++++++++-----
 4 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c92734a..37f78fe 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4164,7 +4164,7 @@ my $check_vm_disks_local = sub {
     my $local_disks = {};

     # add some more information to the disks e.g. cdrom
-    PVE::QemuServer::foreach_volid($vmconf, sub {
+    PVE::QemuServer::foreach_volid($vmconf, 1, sub {
         my ($volid, $attr) = @_;

         my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 10e6929..5e46db2 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -161,7 +161,7 @@ sub get_replicatable_volumes {
         $volhash->{$volid} = 1;
     };

-    PVE::QemuServer::foreach_volid($conf, $test_volid);
+    PVE::QemuServer::foreach_volid($conf, undef, $test_volid);

     return $volhash;
 }
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 5f4f402..d979211 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -413,7 +413,7 @@ sub scan_local_volumes {
             if PVE::Storage::volume_is_base_and_used($storecfg, $volid);
     };

-    PVE::QemuServer::foreach_volid($conf, sub {
+    PVE::QemuServer::foreach_volid($conf, 1, sub {
         my ($volid, $attr) = @_;
         eval { $test_volid->($volid, $attr); };
         if (my $err = $@) {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6cbaf87..33acef6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2424,7 +2424,7 @@ sub destroy_vm {

     if ($purge_unreferenced) { # also remove unreferenced disk
         my $vmdisks = PVE::Storage::vdisk_list($storecfg, undef, $vmid, undef, 'images');
-        PVE::Storage::foreach_volid($vmdisks, sub {
+        PVE::Storage::foreach_volid($vmdisks, 1, sub {
             my ($volid, $sid, $volname, $d) = @_;
             eval { PVE::Storage::vdisk_free($storecfg, $volid) };
             warn $@ if $@;
@@ -4855,12 +4855,12 @@ sub set_migration_caps {
 }

 sub foreach_volid {
-    my ($conf, $func, @param) = @_;
+    my ($conf, $include_pending, $func, @param) = @_;

     my $volhash = {};

     my $test_volid = sub {
-        my ($key, $drive, $snapname) = @_;
+        my ($key, $drive, $snapname, $pending) = @_;

         my $volid = $drive->{file};
         return if !$volid;
@@ -4876,11 +4876,13 @@ sub foreach_volid {
         $volhash->{$volid}->{shared} = 1 if $drive->{shared};

         $volhash->{$volid}->{referenced_in_config} //= 0;
-        $volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname);
+        $volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname) && !defined($pending);

         $volhash->{$volid}->{referenced_in_snapshot}->{$snapname} = 1
             if defined($snapname);

+        $volhash->{$volid}->{referenced_in_pending} = 1 if defined($pending);
+
         my $size = $drive->{size};
         $volhash->{$volid}->{size} //= $size if $size;

@@ -4902,6 +4904,11 @@ sub foreach_volid {
     };

     PVE::QemuConfig->foreach_volume_full($conf, $include_opts, $test_volid);
+
+    if ($include_pending && defined($conf->{pending}) && $conf->{pending}->%*) {
+        PVE::QemuConfig->foreach_volume_full($conf->{pending}, $include_opts, $test_volid, undef, 1);
+    }
+
     foreach my $snapname (keys %{$conf->{snapshots}}) {
         my $snap = $conf->{snapshots}->{$snapname};
         PVE::QemuConfig->foreach_volume_full($snap, $include_opts, $test_volid, $snapname);
@@ -6149,7 +6156,7 @@ sub get_vm_volumes {
     my ($conf) = @_;

     my $vollist = [];
-    foreach_volid($conf, sub {
+    foreach_volid($conf, 1, sub {
         my ($volid, $attr) = @_;

         return if $volid =~ m|^/|;
-- 
2.39.2
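
For illustration only, here is a minimal usage sketch of the changed
foreach_volid() interface; it is not part of the patch. The VMID, the
variable names and the pending-only check are made up for the example, and
it assumes a PVE node where the VM config can be loaded via
PVE::QemuConfig->load_config().

#!/usr/bin/perl

use strict;
use warnings;

use PVE::QemuConfig;
use PVE::QemuServer;

my $vmid = 100;    # example VMID, assumed to exist on this node
my $conf = PVE::QemuConfig->load_config($vmid);

# New second argument: a true $include_pending also visits volumes that
# are only referenced in the VM's pending section.
my $seen = {};
PVE::QemuServer::foreach_volid($conf, 1, sub {
    my ($volid, $attr) = @_;

    # the patch adds 'referenced_in_pending' next to the existing
    # 'referenced_in_config' and 'referenced_in_snapshot' attributes
    print "$volid is referenced in the pending section but not in the current config\n"
        if $attr->{referenced_in_pending} && !$attr->{referenced_in_config};

    $seen->{$volid} = 1;
});

# Passing undef (or 0) keeps the previous behavior and skips the pending
# section, which is what get_replicatable_volumes() does after this patch.
PVE::QemuServer::foreach_volid($conf, undef, sub {
    my ($volid, $attr) = @_;
    # only volumes referenced in the config or in snapshots end up here
});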