From: Fabian Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v2 qemu-server 08/13] migration: simplify removal of local volumes and get rid of self->{volumes}
Date: Fri, 29 Jan 2021 16:11:38 +0100
Message-ID: <20210129151143.10014-9-f.ebner@proxmox.com>
In-Reply-To: <20210129151143.10014-1-f.ebner@proxmox.com>

This also changes the behavior so that the local copies of offline-migrated
volumes are only removed after the migration has finished successfully (this
is relevant for mixed settings, e.g. online migration with unused/vmstate
disks).

local_volumes contains both the volumes previously in $self->{volumes}
and those in $self->{online_local_volumes}, and hence is the place to
look up which volumes need to be removed. Of course, replicated volumes
still need to be skipped.
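
For illustration, the new cleanup boils down to a loop like the following
minimal, self-contained sketch (fake_vdisk_free, the example volume IDs and
the 'dummy-cfg' value are made-up stand-ins rather than the real module
code; see the phase3_cleanup hunk below for the actual change):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # stand-in for PVE::Storage::vdisk_free(), so the sketch runs standalone
    sub fake_vdisk_free {
        my ($storecfg, $volid) = @_;
        print "freed '$volid'\n";
    }

    my $self = {
        storecfg => 'dummy-cfg',
        local_volumes => {
            'local:vm-100-disk-0' => { migration_mode => 'offline' },
            'local:vm-100-disk-1' => { migration_mode => 'online' },
        },
        replicated_volumes => { 'local:vm-100-disk-1' => 1 },
        errors => 0,
    };

    # same shape as the phase3_cleanup hunk below: every entry of
    # local_volumes is a removal candidate, replicated volumes are kept
    foreach my $volid (keys %{$self->{local_volumes}}) {
        next if $self->{replicated_volumes}->{$volid};

        eval { fake_vdisk_free($self->{storecfg}, $volid); };
        if (my $err = $@) {
            warn "removing local copy of '$volid' failed - $err";
            $self->{errors} = 1;
            last if $err =~ /^interrupted by signal$/;
        }
    }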

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Changes from v1:
    * changed the order of this and the next patch to avoid temporary
      breakage in an edge case if only the old #8 were applied: if we died in
      phase3_cleanup on the qemu_drive_mirror_monitor call, unused disks
      could get lost, because they had already been cleaned up in phase3. Now
      this patch (old #9) comes first, making sure that the cleanup of local
      disks happens in phase3_cleanup after a successful migration. See also
      the test in patch #13.


 PVE/QemuMigrate.pm | 51 +++++++++++++++-------------------------------
 1 file changed, 16 insertions(+), 35 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b10638a..5d92028 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -365,7 +365,6 @@ sub scan_local_volumes {
 
     # local volumes which have been copied
     # and their old_id => new_id pairs
-    $self->{volumes} = [];
     $self->{volume_map} = {};
     $self->{local_volumes} = {};
 
@@ -599,14 +598,10 @@ sub scan_local_volumes {
 	    my $ref = $local_volumes->{$volid}->{ref};
 	    if ($self->{running} && $ref eq 'config') {
 		push @{$self->{online_local_volumes}}, $volid;
-	    } elsif ($ref eq 'generated') {
-		die "can't live migrate VM with local cloudinit disk. use a shared storage instead\n" if $self->{running};
-		# skip all generated volumes but queue them for deletion in phase3_cleanup
-		push @{$self->{volumes}}, $volid;
-		next;
+	    } elsif ($self->{running} && $ref eq 'generated') {
+		die "can't live migrate VM with local cloudinit disk. use a shared storage instead\n";
 	    } else {
 		next if $self->{replicated_volumes}->{$volid};
-		push @{$self->{volumes}}, $volid;
 		$local_volumes->{$volid}->{migration_mode} = 'offline';
 	    }
 	}
@@ -764,8 +759,10 @@ sub phase1_cleanup {
 	$self->log('err', $err);
     }
 
-    if ($self->{volumes}) {
-	foreach my $volid (@{$self->{volumes}}) {
+    my @volids = $self->filter_local_volumes('offline');
+    if (scalar(@volids)) {
+	foreach my $volid (@volids) {
+	    next if defined($self->{replicated_volumes}->{$volid});
 	    $self->log('err', "found stale volume copy '$volid' on node '$self->{node}'");
 	    # fixme: try to remove ?
 	}
@@ -1198,18 +1195,7 @@ sub phase2_cleanup {
 sub phase3 {
     my ($self, $vmid) = @_;
 
-    my $volids = $self->{volumes};
-    return if $self->{phase2errors};
-
-    # destroy local copies
-    foreach my $volid (@$volids) {
-	eval { PVE::Storage::vdisk_free($self->{storecfg}, $volid); };
-	if (my $err = $@) {
-	    $self->log('err', "removing local copy of '$volid' failed - $err");
-	    $self->{errors} = 1;
-	    last if $err =~ /^interrupted by signal$/;
-	}
-    }
+    return;
 }
 
 sub phase3_cleanup {
@@ -1334,22 +1320,17 @@ sub phase3_cleanup {
 	$self->{errors} = 1;
     }
 
-    if($self->{storage_migration}) {
-	# destroy local copies
-	my $volids = $self->{online_local_volumes};
-
-	foreach my $volid (@$volids) {
-	    # keep replicated volumes!
-	    next if $self->{replicated_volumes}->{$volid};
+    # destroy local copies
+    foreach my $volid (keys %{$self->{local_volumes}}) {
+	# keep replicated volumes!
+	next if $self->{replicated_volumes}->{$volid};
 
-	    eval { PVE::Storage::vdisk_free($self->{storecfg}, $volid); };
-	    if (my $err = $@) {
-		$self->log('err', "removing local copy of '$volid' failed - $err");
-		$self->{errors} = 1;
-		last if $err =~ /^interrupted by signal$/;
-	    }
+	eval { PVE::Storage::vdisk_free($self->{storecfg}, $volid); };
+	if (my $err = $@) {
+	    $self->log('err', "removing local copy of '$volid' failed - $err");
+	    $self->{errors} = 1;
+	    last if $err =~ /^interrupted by signal$/;
 	}
-
     }
 
     # clear migrate lock
-- 
2.20.1





