From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH qemu-server 2/3] fix #3229: migrate: consider node restriction for 'shared' storage
Date: Wed, 21 Jan 2026 11:51:15 +0100	[thread overview]
Message-ID: <20260121105145.47010-3-f.ebner@proxmox.com> (raw)
In-Reply-To: <20260121105145.47010-1-f.ebner@proxmox.com>

Previously, volumes on a storage with the 'shared' marker were assumed
not to need storage migration, and migration would then fail when
checking for storage availability on the target. But if a storage with
the 'shared' marker has node restrictions, this assumption is wrong.
Fix the issue by checking whether a storage with the 'shared' marker is
actually available on the target node and, if not, properly
considering the volume a local volume.
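
For illustration, a hypothetical storage.cfg entry that can trigger
the issue, a directory storage carrying the 'shared' marker but
restricted to a subset of nodes (storage and node names are made up):

    dir: clusterfs
        path /mnt/clusterfs
        content images,rootdir
        shared 1
        nodes nodeA,nodeB

Migrating a guest with a volume on such a storage to a node outside
the 'nodes' list previously failed the availability check on the
target.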

The new map_storage() helper applies the mapping also to shared
storages if they are not configured for the target node.
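
For reference, a rough sketch of what the helper could look like,
reconstructed from the call sites and the logic removed below. The
actual implementation lives in the guest-common patch (presumably in
PVE::AbstractMigrate); in particular, $self->{storecfg} and
$self->{node} are assumptions about how the migration object is
populated:

    sub map_storage {
        my ($self, $scfg, $sid) = @_;

        # Assumed: $self->{storecfg} is the storage configuration and
        # $self->{node} is the target node.
        my $storecfg = $self->{storecfg};
        my $target_node = $self->{node};

        # Map local storages and, for remote migration, shared ones too.
        # A storage with the 'shared' marker is only kept as-is if it is
        # actually configured for the target node; otherwise it is mapped
        # as well.
        if (
            !$scfg->{shared}
            || $self->{opts}->{remote}
            || !PVE::Storage::storage_check_node($storecfg, $sid, $target_node, 1)
        ) {
            return PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
        }

        return $sid;
    }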

To fix bug #3229 for offline migration, it's enough to change the
behavior of the source node: prepare() and scan_local_volumes() need
to start using the new map_storage() helper and to check whether the
shared storage was mapped, to avoid returning prematurely.

To fix bug #3229 for online migration, the behavior of the target
node additionally needs to change: vm_migrate_get_nbd_disks() can
check whether the shared storage is configured for the node and
otherwise assume that volumes on the storage will be migrated via NBD
too.

The following scenarios need to be considered:

1. Old source node:
Migration failure during early checks.

2. New source node, old target node:
Migration failure a bit later, when the target notices that the shared
storage is not available upon VM start. It's not fundamentally worse
than the previous behavior.

3. New source node, new target node:
Migration works, bug #3229 is fixed.

Note that the connection check for a shared storage still makes sense
even if the storage is being mapped. It's still unexpected if there
is no connection.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Depends and build-depends on new libpve-guest-common-perl!

 src/PVE/QemuMigrate.pm | 14 +++-----------
 src/PVE/QemuServer.pm  |  6 +++++-
 2 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 829288ff..8b5e0ca4 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -317,11 +317,7 @@ sub prepare {
         # check if storage is available on source node
         my $scfg = PVE::Storage::storage_check_enabled($storecfg, $sid);
 
-        my $targetsid = $sid;
-        # NOTE: local ignores shared mappings, remote maps them
-        if (!$scfg->{shared} || $self->{opts}->{remote}) {
-            $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
-        }
+        my $targetsid = $self->map_storage($scfg, $sid);
 
         $storages->{$targetsid} = 1;
 
@@ -427,14 +423,10 @@ sub scan_local_volumes {
             # check if storage is available on both nodes
             my $scfg = PVE::Storage::storage_check_enabled($storecfg, $sid);
 
-            my $targetsid = $sid;
-            # NOTE: local ignores shared mappings, remote maps them
-            if (!$scfg->{shared} || $self->{opts}->{remote}) {
-                $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
-            }
+            my $targetsid = $self->map_storage($scfg, $sid);
 
             $self->target_storage_check_available($storecfg, $targetsid, $volid);
-            return if $scfg->{shared} && !$self->{opts}->{remote};
+            return if $scfg->{shared} && $targetsid eq $sid && !$self->{opts}->{remote};
 
             $local_volumes->{$volid}->{ref} = 'pending' if $attr->{referenced_in_pending};
             $local_volumes->{$volid}->{ref} = 'snapshot' if $attr->{referenced_in_snapshot};
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index bad3527c..22af10c9 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -5300,6 +5300,8 @@ sub vmconfig_update_cloudinit_drive {
 sub vm_migrate_get_nbd_disks {
     my ($storecfg, $conf, $replicated_volumes) = @_;
 
+    my $node_name = PVE::INotify::nodename();
+
     my $local_volumes = {};
     PVE::QemuConfig->foreach_volume(
         $conf,
@@ -5316,7 +5318,9 @@ sub vm_migrate_get_nbd_disks {
             my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
 
             my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
-            return if $scfg->{shared};
+            return
+                if $scfg->{shared}
+                && PVE::Storage::storage_check_node($storecfg, $storeid, $node_name, 1);
 
             my $format = checked_volume_format($storecfg, $volid);
 
-- 
2.47.3


