From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v3 container 5/7] migration: only migrate volumes used by the guest
Date: Thu,  1 Jun 2023 15:53:40 +0200
Message-ID: <20230601135342.2903359-6-a.lauterer@proxmox.com>
In-Reply-To: <20230601135342.2903359-1-a.lauterer@proxmox.com>

When scanning all configured storages for volumes belonging to the
container, the migration could easily fail if a storage is enabled,
but not available. Such a storage might not even be used by the
container at all.

By only looking at the volumes referenced in the config instead, we
can avoid such failures.
This requires additional checks for pending volumes to see whether
they actually exist yet: changing an existing mount point to a new
volume only creates that volume on the next start of the container.

The big behavioral change is that volumes not referenced in the
container config are now ignored. They are orphans that used to be
migrated as well, but are now left where they are.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v2:
- use PVE::LXC::NEW_DISK_RE to check for a not-yet-created pending
volume
- style fixes
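
As a reviewer aid, here is a minimal standalone sketch of the new
pending-volume check. It is illustrative only: the spelled-out regex
and the example volume IDs are assumptions, the real pattern lives in
PVE::LXC::NEW_DISK_RE.

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative stand-in for PVE::LXC::NEW_DISK_RE: a pending, not yet
# created volume is configured as "<storage>:<size>" instead of a real
# volume ID like "<storage>:vm-<vmid>-disk-<n>".
my $new_disk_re = qr/^([^:\s]+):(\d+(?:\.\d+)?)$/;

for my $volid ('local-lvm:vm-100-disk-0', 'local-lvm:4') {
    if ($volid =~ $new_disk_re) {
	# nothing to migrate yet, the volume only gets allocated on
	# the next start of the container
	print "volume '$volid' does not exist (pending change?)\n";
	next;
    }
    print "would migrate volume '$volid'\n";
}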

 src/PVE/LXC/Migrate.pm | 42 ++++++++++++------------------------------
 1 file changed, 12 insertions(+), 30 deletions(-)
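
For context, the volume collection order in phase1 after this patch
can be sketched as a toy example (made-up config values and a
simplified collect helper instead of foreach_volume/$test_mp; not the
actual Migrate.pm code):

#!/usr/bin/perl
use strict;
use warnings;

# stand-in for PVE::LXC::NEW_DISK_RE, see the sketch above
my $new_disk_re = qr/^([^:\s]+):(\d+(?:\.\d+)?)$/;

# toy container config (hypothetical values)
my $conf = {
    rootfs    => 'local-lvm:vm-100-disk-0',
    mp0       => 'local-lvm:vm-100-disk-1',
    pending   => { mp1 => 'local-lvm:8' }, # not allocated yet
    snapshots => { snap1 => { rootfs => 'local-lvm:vm-100-disk-0' } },
};

my %volhash;
my $collect = sub {
    my ($section) = @_;
    for my $key (grep { /^(?:rootfs|mp\d+|unused\d+)$/ } keys %$section) {
	my $volid = $section->{$key};
	next if $volid =~ $new_disk_re; # pending, not yet created
	$volhash{$volid} = 1;
    }
};

# order after this patch: snapshot volumes first, then pending, then current
$collect->($conf->{snapshots}->{$_}) for keys %{$conf->{snapshots}};
$collect->($conf->{pending}) if $conf->{pending} && %{$conf->{pending}};
$collect->($conf);

print "$_\n" for sort keys %volhash;

Unlike the old vdisk_list() scan over all enabled storages, nothing
outside $conf is ever touched, which is exactly the behavioral change
described above.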

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index ccf5157..c5bca7a 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -195,6 +195,12 @@ sub phase1 {
 
 	return if !$volid;
 
+	# check if volume exists, might be a pending new one
+	if ($volid =~ $PVE::LXC::NEW_DISK_RE) {
+	    $self->log('info', "volume '$volid' does not exist (pending change?)");
+	    return;
+	}
+
 	my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
 
 	# check if storage is available on source node
@@ -256,42 +262,18 @@ sub phase1 {
 	&$log_error($@, $volid) if $@;
     };
 
-    # first unused / lost volumes owned by this container
-    my @sids = PVE::Storage::storage_ids($self->{storecfg});
-    foreach my $storeid (@sids) {
-	my $scfg = PVE::Storage::storage_config($self->{storecfg}, $storeid);
-	next if $scfg->{shared} && !$remote;
-	next if !PVE::Storage::storage_check_enabled($self->{storecfg}, $storeid, undef, 1);
-
-	# get list from PVE::Storage (for unreferenced volumes)
-	my $dl = PVE::Storage::vdisk_list($self->{storecfg}, $storeid, $vmid, undef, 'rootdir');
-
-	next if @{$dl->{$storeid}} == 0;
-
-	# check if storage is available on target node
-	my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
-	if (!$remote) {
-	    my $target_scfg = PVE::Storage::storage_check_enabled($self->{storecfg}, $targetsid, $self->{node});
-
-	    die "content type 'rootdir' is not available on storage '$targetsid'\n"
-		if !$target_scfg->{content}->{rootdir};
-	}
-
-	PVE::Storage::foreach_volid($dl, sub {
-	    my ($volid, $sid, $volname) = @_;
-
-	    $volhash->{$volid}->{ref} = 'storage';
-	    $volhash->{$volid}->{targetsid} = $targetsid;
-	});
-    }
-
-    # then all volumes referenced in snapshots
+    # first all volumes referenced in snapshots
     foreach my $snapname (keys %{$conf->{snapshots}}) {
 	&$test_volid($conf->{snapshots}->{$snapname}->{'vmstate'}, 0, undef)
 	    if defined($conf->{snapshots}->{$snapname}->{'vmstate'});
 	PVE::LXC::Config->foreach_volume($conf->{snapshots}->{$snapname}, $test_mp, $snapname);
     }
 
+    # then all pending volumes
+    if (defined $conf->{pending} && %{$conf->{pending}}) {
+	PVE::LXC::Config->foreach_volume($conf->{pending}, $test_mp);
+    }
+
     # finally all current volumes
     PVE::LXC::Config->foreach_volume_full($conf, { include_unused => 1 }, $test_mp);
 
-- 
2.30.2


Thread overview: 14+ messages
2023-06-01 13:53 [pve-devel] [PATCH v3 qemu-server, container, docs 0/7] migration: don't scan all storages, fail on aliases Aaron Lauterer
2023-06-01 13:53 ` [pve-devel] [PATCH v3 qemu-server 1/7] migration: only migrate disks used by the guest Aaron Lauterer
2023-06-02  9:45   ` Fiona Ebner
2023-06-02  9:50     ` Fiona Ebner
2023-06-05 12:43       ` Aaron Lauterer
2023-06-01 13:53 ` [pve-devel] [PATCH v3 qemu-server 2/7] tests: add migration test for pending disk Aaron Lauterer
2023-06-01 13:53 ` [pve-devel] [PATCH v3 qemu-server 3/7] migration: fail when aliased volume is detected Aaron Lauterer
2023-06-01 13:53 ` [pve-devel] [PATCH v3 qemu-server 4/7] tests: add migration alias check Aaron Lauterer
2023-06-02 11:29   ` Fiona Ebner
2023-06-01 13:53 ` Aaron Lauterer [this message]
2023-06-02 11:42   ` [pve-devel] [PATCH v3 container 5/7] migration: only migrate volumes used by the guest Fiona Ebner
2023-06-01 13:53 ` [pve-devel] [PATCH v3 container 6/7] migration: fail when aliased volume is detected Aaron Lauterer
2023-06-01 13:53 ` [pve-devel] [PATCH v3 docs 7/7] storage: add hint to avoid storage aliasing Aaron Lauterer
2023-06-02 11:51   ` Fiona Ebner
