From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH container 3/3] fix #3229: migrate: consider node restriction for 'shared' storage
Date: Wed, 21 Jan 2026 11:51:16 +0100 [thread overview]
Message-ID: <20260121105145.47010-4-f.ebner@proxmox.com> (raw)
In-Reply-To: <20260121105145.47010-1-f.ebner@proxmox.com>
Previously, volumes on a storage with the 'shared' marker were assumed
not to require storage migration, and migration would fail when checking
for storage availability on the target node. But if a storage with the
'shared' marker has node restrictions, this assumption is wrong. Fix the
issue by checking whether a storage with the 'shared' marker is actually
available on the target node and, if it is not, properly treating the
volume as a local volume.
The new map_storage() helper also applies the storage mapping to shared
storages if they are not configured on the target node.
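For illustration only, a minimal sketch of what such a helper could look like, inferred from the call sites in the diff below; the actual implementation is in the guest-common patch 1/3, and the exact signature and availability check here are assumptions:

```perl
# Hypothetical sketch (not the real patch 1/3 code): keep a 'shared'
# storage's ID only if it is actually configured and enabled on the
# target node; otherwise fall back to the user-supplied storage map.
sub map_storage {
    my ($self, $scfg, $sid) = @_;

    if ($scfg->{shared} && !$self->{opts}->{remote}) {
        my $available_on_target = eval {
            PVE::Storage::storage_check_enabled(
                $self->{storecfg}, $sid, $self->{node});
            1;
        };
        return $sid if $available_on_target;
    }

    # map_id() returns $sid unchanged when no mapping entry exists.
    return PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
}
```

This matches the call sites below: for a node-restricted 'shared' storage the helper returns a mapped ID, so the `$targetsid ne $sid` checks treat the volume as local.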
To fix bug #3229 for offline migration, it's enough to change the
behavior of the source node: prepare() and phase1() need to start using
the new map_storage() helper, as well as checking in two places whether
the shared storage was mapped or not.
The list of storages supporting migration now becomes the same as for
remote migration (and should really be better handled by an early
format negotiation rather than a hard-coded list, but that is for
another patch series).
Note that the connection check for a shared storage still makes sense
even if the storage is being mapped. It's still unexpected if there
is no connection.
Online migration for containers is not (yet) supported, so this is
enough.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Depends on new libpve-guest-common-perl!
src/PVE/LXC/Migrate.pm | 23 ++++++++---------------
1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index fe588ba..10425ff 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -76,8 +76,6 @@ sub prepare {
# check if storage is available on both nodes
my $scfg = PVE::Storage::storage_check_enabled($self->{storecfg}, $storage);
- my $targetsid = $storage;
-
die "content type 'rootdir' is not available on storage '$storage'\n"
if !$scfg->{content}->{rootdir};
@@ -86,12 +84,14 @@ sub prepare {
my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
warn "Used shared storage '$storage' is not online on source node!\n"
if !$plugin->check_connection($storage, $scfg);
- } else {
+ }
+
+ my $targetsid = $self->map_storage($scfg, $storage);
+
+ if (!$scfg->{shared} || $targetsid ne $storage || $remote) {
# unless in restart mode because we shut the container down
die "unable to migrate local mount point '$volid' while CT is running\n"
if $running && !$restart;
-
- $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storage);
}
if (!$remote) {
@@ -218,14 +218,12 @@ sub phase1 {
# check if storage is available on source node
my $scfg = PVE::Storage::storage_check_enabled($self->{storecfg}, $sid);
- my $targetsid = $sid;
+ my $targetsid = $self->map_storage($scfg, $sid);
- if ($scfg->{shared} && !$remote) {
+ if ($scfg->{shared} && $targetsid eq $sid && !$remote) {
$self->log('info', "volume '$volid' is on shared storage '$sid'")
if !$snapname;
return;
- } else {
- $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
}
PVE::Storage::storage_check_enabled($self->{storecfg}, $targetsid, $self->{node})
@@ -307,13 +305,8 @@ sub phase1 {
# TODO move to storage plugin layer?
my $migratable_storages = [
- 'dir', 'zfspool', 'lvmthin', 'lvm', 'btrfs',
+ 'dir', 'zfspool', 'lvmthin', 'lvm', 'btrfs', 'cifs', 'nfs', 'rbd',
];
- if ($remote) {
- push @$migratable_storages, 'cifs';
- push @$migratable_storages, 'nfs';
- push @$migratable_storages, 'rbd';
- }
die "storage type '$scfg->{type}' not supported\n"
if !grep { $_ eq $scfg->{type} } @$migratable_storages;
--
2.47.3