* [pve-devel] [PATCH-SERIES guest-common/qemu-server/container 0/3] fix #3229: migrate: consider node restriction for 'shared' storage
From: Fiona Ebner @ 2026-01-21 10:51 UTC
To: pve-devel
Previously, volumes on a storage with the 'shared' marker were assumed
to not need storage migration, and migration would fail when checking
for storage availability on the target. But if a storage with a
'shared' marker has node restrictions, this assumption is wrong. Fix
the issue by checking whether a storage with the 'shared' marker is
actually available on the target node, and otherwise properly
considering the volume a local volume.
A new map_storage() helper also applies the mapping for shared
storages if they are not configured for the target node.
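For illustration, a hypothetical configuration triggering the bug: a
directory storage on a filesystem that is only mounted on some nodes,
marked 'shared' and restricted via 'nodes' in /etc/pve/storage.cfg
(storage and node names are made up):
    dir: shared-dir
        path /mnt/clusterfs
        content images,rootdir
        shared 1
        nodes node1,node2
Migrating a guest with a volume on 'shared-dir' to node3 previously
failed the availability check; with this series, such a volume is
treated as a local volume and can be mapped to a storage that is
available on node3.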
Package dependencies:
qemu-server depends and build-depends on new libpve-guest-common-perl
pve-container depends on new libpve-guest-common-perl
guest-common:
Fiona Ebner (1):
abstract migrate: add map_storage() helper
src/PVE/AbstractMigrate.pm | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
qemu-server:
Fiona Ebner (1):
fix #3229: migrate: consider node restriction for 'shared' storage
src/PVE/QemuMigrate.pm | 14 +++-----------
src/PVE/QemuServer.pm | 6 +++++-
2 files changed, 8 insertions(+), 12 deletions(-)
container:
Fiona Ebner (1):
fix #3229: migrate: consider node restriction for 'shared' storage
src/PVE/LXC/Migrate.pm | 23 ++++++++---------------
1 file changed, 8 insertions(+), 15 deletions(-)
Summary over all repositories:
4 files changed, 32 insertions(+), 27 deletions(-)
--
Generated by git-murpp 0.5.0
* [pve-devel] [PATCH guest-common 1/3] abstract migrate: add map_storage() helper
From: Fiona Ebner @ 2026-01-21 10:51 UTC
To: pve-devel
For remote migration, and for non-shared storages, always map. For
intra-cluster migration and a shared storage, map only when the
storage is not configured for the target node. The latter is in
preparation for fixing bug #3229.
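A minimal usage sketch (hypothetical storage IDs, assuming the caller
passed a storage map 'sharedfs:otherfs' and 'sharedfs' carries the
'shared' marker but is not configured for the target node):
    my $scfg = PVE::Storage::storage_check_enabled($storecfg, 'sharedfs');
    my $targetsid = $self->map_storage($scfg, 'sharedfs');
    # yields 'otherfs': the storage is not available on the target,
    # so the mapping applies despite the 'shared' marker
Without the node restriction, the same intra-cluster call would return
'sharedfs' unchanged.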
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/AbstractMigrate.pm | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/src/PVE/AbstractMigrate.pm b/src/PVE/AbstractMigrate.pm
index 6b4d34c..919f12d 100644
--- a/src/PVE/AbstractMigrate.pm
+++ b/src/PVE/AbstractMigrate.pm
@@ -384,4 +384,20 @@ sub get_bwlimit {
return $bwlimit;
}
+sub map_storage {
+ my ($self, $scfg, $storeid) = @_;
+
+ # NOTE: For remote migration, always map shared storages. For local migration, shared storages
+ # were never mapped in the past, but to fix bug #3229, storages that are not configured for the
+ # target are mapped too.
+ my $do_map;
+ if ($self->{opts}->{remote} || !$scfg->{shared}) {
+ $do_map = 1;
+ } else { # intra-cluster migration, shared storage
+ $do_map = !PVE::Storage::storage_check_node($self->{storecfg}, $storeid, $self->{node}, 1);
+ }
+
+ return $do_map ? PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid) : $storeid;
+}
+
1;
--
2.47.3
* [pve-devel] [PATCH qemu-server 2/3] fix #3229: migrate: consider node restriction for 'shared' storage
From: Fiona Ebner @ 2026-01-21 10:51 UTC
To: pve-devel
Previously, volumes on a storage with the 'shared' marker were assumed
to not need storage migration, and migration would fail when checking
for storage availability on the target. But if a storage with a
'shared' marker has node restrictions, this assumption is wrong. Fix
the issue by checking whether a storage with the 'shared' marker is
actually available on the target node, and otherwise properly
considering the volume a local volume.
The new map_storage() helper also applies the mapping for shared
storages if they are not configured for the target node.
To fix bug #3229 for offline migration, it's enough to change the
behavior of the source node: prepare() and scan_local_volumes() need
to start using the new map_storage() helper, and to check whether the
shared storage was actually mapped, to avoid a premature return.
To fix bug #3229 for online migration, the behavior of the target node
additionally needs to change: vm_migrate_get_nbd_disks() can check
whether the shared storage is configured for the node, and otherwise
assume that volumes on the storage will be migrated via NBD too.
The following scenarios need to be considered:
1. Old source node:
   Migration fails during the early checks.
2. New source node, old target node:
   Migration fails a bit later, when the target notices that the
   shared storage is not available upon VM start. This is not
   fundamentally worse than the previous behavior.
3. New source node, new target node:
   Migration works, bug #3229 is fixed.
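With new nodes on both ends, a migration off the restricted storage
could then be requested like this (hypothetical VM ID and storage
names, assuming node3 is excluded by the storage's node restriction):
    qm migrate 100 node3 --online --targetstorage shared-dir:local-lvm
The volume on 'shared-dir' is then treated as a local volume, mapped
to 'local-lvm' and migrated via NBD.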
Note that the connection check for a shared storage still makes sense
even if the storage is being mapped. It's still unexpected if there
is no connection.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Depends and build-depends on new libpve-guest-common-perl!
src/PVE/QemuMigrate.pm | 14 +++-----------
src/PVE/QemuServer.pm | 6 +++++-
2 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 829288ff..8b5e0ca4 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -317,11 +317,7 @@ sub prepare {
# check if storage is available on source node
my $scfg = PVE::Storage::storage_check_enabled($storecfg, $sid);
- my $targetsid = $sid;
- # NOTE: local ignores shared mappings, remote maps them
- if (!$scfg->{shared} || $self->{opts}->{remote}) {
- $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
- }
+ my $targetsid = $self->map_storage($scfg, $sid);
$storages->{$targetsid} = 1;
@@ -427,14 +423,10 @@ sub scan_local_volumes {
# check if storage is available on both nodes
my $scfg = PVE::Storage::storage_check_enabled($storecfg, $sid);
- my $targetsid = $sid;
- # NOTE: local ignores shared mappings, remote maps them
- if (!$scfg->{shared} || $self->{opts}->{remote}) {
- $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
- }
+ my $targetsid = $self->map_storage($scfg, $sid);
$self->target_storage_check_available($storecfg, $targetsid, $volid);
- return if $scfg->{shared} && !$self->{opts}->{remote};
+ return if $scfg->{shared} && $targetsid eq $sid && !$self->{opts}->{remote};
$local_volumes->{$volid}->{ref} = 'pending' if $attr->{referenced_in_pending};
$local_volumes->{$volid}->{ref} = 'snapshot' if $attr->{referenced_in_snapshot};
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index bad3527c..22af10c9 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -5300,6 +5300,8 @@ sub vmconfig_update_cloudinit_drive {
sub vm_migrate_get_nbd_disks {
my ($storecfg, $conf, $replicated_volumes) = @_;
+ my $node_name = PVE::INotify::nodename();
+
my $local_volumes = {};
PVE::QemuConfig->foreach_volume(
$conf,
@@ -5316,7 +5318,9 @@ sub vm_migrate_get_nbd_disks {
my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
- return if $scfg->{shared};
+ return
+ if $scfg->{shared}
+ && PVE::Storage::storage_check_node($storecfg, $storeid, $node_name, 1);
my $format = checked_volume_format($storecfg, $volid);
--
2.47.3
* [pve-devel] [PATCH container 3/3] fix #3229: migrate: consider node restriction for 'shared' storage
From: Fiona Ebner @ 2026-01-21 10:51 UTC
To: pve-devel
Previously, volumes on a storage with the 'shared' marker were assumed
to not need storage migration, and migration would fail when checking
for storage availability on the target. But if a storage with a
'shared' marker has node restrictions, this assumption is wrong. Fix
the issue by checking whether a storage with the 'shared' marker is
actually available on the target node, and otherwise properly
considering the volume a local volume.
The new map_storage() helper also applies the mapping for shared
storages if they are not configured for the target node.
To fix bug #3229 for offline migration, it's enough to change the
behavior of the source node: prepare() and phase1() need to start
using the new map_storage() helper, and in two places, to check
whether the shared storage was actually mapped.
The list of storage types supporting migration is now the same as for
remote migration (this should really be handled by an early format
negotiation rather than a hard-coded list, but that is for another
patch series).
Note that the connection check for a shared storage still makes sense
even if the storage is being mapped. It's still unexpected if there
is no connection.
Online migration for containers is not (yet) supported, so this is
enough.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Depends on new libpve-guest-common-perl!
src/PVE/LXC/Migrate.pm | 23 ++++++++---------------
1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index fe588ba..10425ff 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -76,8 +76,6 @@ sub prepare {
# check if storage is available on both nodes
my $scfg = PVE::Storage::storage_check_enabled($self->{storecfg}, $storage);
- my $targetsid = $storage;
-
die "content type 'rootdir' is not available on storage '$storage'\n"
if !$scfg->{content}->{rootdir};
@@ -86,12 +84,14 @@ sub prepare {
my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
warn "Used shared storage '$storage' is not online on source node!\n"
if !$plugin->check_connection($storage, $scfg);
- } else {
+ }
+
+ my $targetsid = $self->map_storage($scfg, $storage);
+
+ if (!$scfg->{shared} || $targetsid ne $storage || $remote) {
# unless in restart mode because we shut the container down
die "unable to migrate local mount point '$volid' while CT is running\n"
if $running && !$restart;
-
- $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storage);
}
if (!$remote) {
@@ -218,14 +218,12 @@ sub phase1 {
# check if storage is available on source node
my $scfg = PVE::Storage::storage_check_enabled($self->{storecfg}, $sid);
- my $targetsid = $sid;
+ my $targetsid = $self->map_storage($scfg, $sid);
- if ($scfg->{shared} && !$remote) {
+ if ($scfg->{shared} && $targetsid eq $sid && !$remote) {
$self->log('info', "volume '$volid' is on shared storage '$sid'")
if !$snapname;
return;
- } else {
- $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
}
PVE::Storage::storage_check_enabled($self->{storecfg}, $targetsid, $self->{node})
@@ -307,13 +305,8 @@ sub phase1 {
# TODO move to storage plugin layer?
my $migratable_storages = [
- 'dir', 'zfspool', 'lvmthin', 'lvm', 'btrfs',
+ 'dir', 'zfspool', 'lvmthin', 'lvm', 'btrfs', 'cifs', 'nfs', 'rbd',
];
- if ($remote) {
- push @$migratable_storages, 'cifs';
- push @$migratable_storages, 'nfs';
- push @$migratable_storages, 'rbd';
- }
die "storage type '$scfg->{type}' not supported\n"
if !grep { $_ eq $scfg->{type} } @$migratable_storages;
--
2.47.3