public inbox for pve-devel@lists.proxmox.com
From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v5 qemu-server 4/10] qemuserver: migration: test_volid: change attr name and ref handling
Date: Mon, 19 Jun 2023 11:29:31 +0200	[thread overview]
Message-ID: <20230619092937.604628-5-a.lauterer@proxmox.com> (raw)
In-Reply-To: <20230619092937.604628-1-a.lauterer@proxmox.com>

Since we no longer scan all storages for matching disk images during a
migration, there are no more images that are found via a storage alone;
every migrated image is referenced somewhere in the config.

Therefore, there is no need for the 'storage' ref anymore. The
'referenced_in_config' attribute is also not really needed, as it can
apply to both attached and unused disk images.

QemuServer::foreach_volid() therefore replaces the
'referenced_in_config' attribute with an 'is_attached' one that only
applies to disk images which are in the _main_ part of the config and
are not unused.
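
For illustration, a single entry of the hash built by foreach_volid()
then roughly carries the following attributes (a simplified sketch, only
the attributes relevant to this patch; values are examples):

    # Simplified, illustrative shape of one foreach_volid() volhash entry
    # after this patch (only the attributes relevant here).
    my $volhash_entry = {
        is_attached            => 1,    # in the main config and not an unusedX entry
        is_unused              => 0,    # 1 for unusedX entries
        referenced_in_pending  => 0,    # 1 if referenced in the pending section
        referenced_in_snapshot => {},   # snapshot names referencing the volume
        is_tpmstate            => 0,    # 1 for tpmstate0
    };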

In QemuMigrate::scan_local_volumes() we can then quite easily map the
attributes to the corresponding refs: attached, unused, pending and
snapshot (via referenced_in_{pending,snapshot}).
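
Because these assignments run in order, the last matching one wins; for
example, a volume that is referenced by a snapshot and also attached in
the main config ends up with the 'attached' ref. A minimal sketch of
that precedence (hypothetical $attr example, mirroring the assignment
order in the hunk below):

    # Hypothetical attribute set: the volume is referenced by a snapshot
    # and also attached in the main config.
    my $attr = { referenced_in_snapshot => { snap1 => 1 }, is_attached => 1 };

    my $ref;
    $ref = 'snapshot' if $attr->{referenced_in_snapshot};
    $ref = 'attached' if $attr->{is_attached};
    # $ref is now 'attached' - the later assignment takes precedence.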

The refs are mostly informational and are used to log why a disk image
is part of the migration; the 'attached' case is the exception.
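
For the 'attached' case, the hunks below both gate live migration on the
with-local-disks option and select the online migration mode. Roughly,
combined into one sketch ($running and $with_local_disks stand in for
$self->{running} and $self->{opts}->{"with-local-disks"}):

    # Combined and simplified from the two 'attached' hunks below:
    # live migration of attached local disks requires with-local-disks,
    # and attached disks of a running VM are migrated online.
    if ($running && $ref eq 'attached') {
        die "can't live migrate attached local disks without with-local-disks option\n"
            if !$with_local_disks;
        $local_volumes->{$volid}->{migration_mode} = 'online';
    }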

In the future, the extra ref-mapping step in QemuMigrate could probably
be streamlined even further.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v4:
- drop 'referenced_in_config' in favor of 'is_attached'; this needed a
  reordering in test_volid() so that the 'is_unused' attribute is
  available
- simplified the setting of the 'ref' property; it can be streamlined
  further in the future, as we can probably set it, or a substitute
  property, directly in test_volid()

 PVE/QemuMigrate.pm | 20 ++++++++++++--------
 PVE/QemuServer.pm  | 11 ++++++-----
 2 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 4f6ab64..f51904d 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -337,7 +337,7 @@ sub scan_local_volumes {
 	    if ($attr->{cdrom}) {
 		if ($volid eq 'cdrom') {
 		    my $msg = "can't migrate local cdrom drive";
-		    if (defined($snaprefs) && !$attr->{referenced_in_config}) {
+		    if (defined($snaprefs) && !$attr->{is_attached}) {
 			my $snapnames = join(', ', sort keys %$snaprefs);
 			$msg .= " (referenced in snapshot - $snapnames)";
 		    }
@@ -361,8 +361,10 @@ sub scan_local_volumes {
 	    $self->target_storage_check_available($storecfg, $targetsid, $volid);
 	    return if $scfg->{shared} && !$self->{opts}->{remote};
 
-	    $local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
-	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
+	    $local_volumes->{$volid}->{ref} = 'pending' if $attr->{referenced_in_pending};
+	    $local_volumes->{$volid}->{ref} = 'snapshot' if $attr->{referenced_in_snapshot};
+	    $local_volumes->{$volid}->{ref} = 'unused' if $attr->{is_unused};
+	    $local_volumes->{$volid}->{ref} = 'attached' if $attr->{is_attached};
 	    $local_volumes->{$volid}->{ref} = 'generated' if $attr->{is_tpmstate};
 
 	    $local_volumes->{$volid}->{bwlimit} = $self->get_bwlimit($sid, $targetsid);
@@ -428,14 +430,16 @@ sub scan_local_volumes {
 	foreach my $vol (sort keys %$local_volumes) {
 	    my $type = $replicatable_volumes->{$vol} ? 'local, replicated' : 'local';
 	    my $ref = $local_volumes->{$vol}->{ref};
-	    if ($ref eq 'storage') {
-		$self->log('info', "found $type disk '$vol' (via storage)\n");
-	    } elsif ($ref eq 'config') {
+	    if ($ref eq 'attached') {
 		&$log_error("can't live migrate attached local disks without with-local-disks option\n", $vol)
 		    if $self->{running} && !$self->{opts}->{"with-local-disks"};
-		$self->log('info', "found $type disk '$vol' (in current VM config)\n");
+		$self->log('info', "found $type disk '$vol' (attached)\n");
+	    } elsif ($ref eq 'unused') {
+		$self->log('info', "found $type disk '$vol' (unused)\n");
 	    } elsif ($ref eq 'snapshot') {
 		$self->log('info', "found $type disk '$vol' (referenced by snapshot(s))\n");
+	    } elsif ($ref eq 'pending') {
+		$self->log('info', "found $type disk '$vol' (pending change)\n");
 	    } elsif ($ref eq 'generated') {
 		$self->log('info', "found generated disk '$vol' (in current VM config)\n");
 	    } else {
@@ -475,7 +479,7 @@ sub scan_local_volumes {
 
 	foreach my $volid (sort keys %$local_volumes) {
 	    my $ref = $local_volumes->{$volid}->{ref};
-	    if ($self->{running} && $ref eq 'config') {
+	    if ($self->{running} && $ref eq 'attached') {
 		$local_volumes->{$volid}->{migration_mode} = 'online';
 	    } elsif ($self->{running} && $ref eq 'generated') {
 		# offline migrate the cloud-init ISO and don't regenerate on VM start
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7666dcb..a49aeea 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4875,8 +4875,12 @@ sub foreach_volid {
 	$volhash->{$volid}->{shared} //= 0;
 	$volhash->{$volid}->{shared} = 1 if $drive->{shared};
 
-	$volhash->{$volid}->{referenced_in_config} //= 0;
-	$volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname) && !defined($pending);
+	$volhash->{$volid}->{is_unused} //= 0;
+	$volhash->{$volid}->{is_unused} = 1 if $key =~ /^unused\d+$/;
+
+	$volhash->{$volid}->{is_attached} //= 0;
+	$volhash->{$volid}->{is_attached} = 1
+	    if !$volhash->{$volid}->{is_unused} && !defined($snapname) && !defined($pending);
 
 	$volhash->{$volid}->{referenced_in_snapshot}->{$snapname} = 1
 	    if defined($snapname);
@@ -4892,9 +4896,6 @@ sub foreach_volid {
 	$volhash->{$volid}->{is_tpmstate} //= 0;
 	$volhash->{$volid}->{is_tpmstate} = 1 if $key eq 'tpmstate0';
 
-	$volhash->{$volid}->{is_unused} //= 0;
-	$volhash->{$volid}->{is_unused} = 1 if $key =~ /^unused\d+$/;
-
 	$volhash->{$volid}->{drivename} = $key if is_valid_drivename($key);
     };
 
-- 
2.39.2


Thread overview: 16+ messages
2023-06-19  9:29 [pve-devel] [PATCH v5 qemu-server 0/7] migration: don't scan all storages, fail on aliases Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 1/10] qemuserver: foreach_volid: include pending volumes Aaron Lauterer
2023-06-19 12:20   ` Fiona Ebner
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 2/10] qemuserver: foreach_volid: always include pending disks Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 3/10] migration: only migrate disks used by the guest Aaron Lauterer
2023-06-19  9:29 ` Aaron Lauterer [this message]
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 5/10] tests: add migration test for pending disk Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 6/10] migration: fail when aliased volume is detected Aaron Lauterer
2023-06-19 12:21   ` Fiona Ebner
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 7/10] tests: add migration alias check Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 container 8/10] migration: only migrate volumes used by the guest Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 container 9/10] migration: fail when aliased volume is detected Aaron Lauterer
2023-06-19 12:21   ` Fiona Ebner
2023-06-19  9:29 ` [pve-devel] [PATCH v5 docs 10/10] storage: add hint to avoid storage aliasing Aaron Lauterer
2023-06-19 12:21 ` [pve-devel] [PATCH v5 qemu-server 0/7] migration: don't scan all storages, fail on aliases Fiona Ebner
2023-06-21 10:53 ` [pve-devel] applied-series: " Fiona Ebner
