From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Wed, 21 Jan 2026 11:51:15 +0100
Message-ID: <20260121105145.47010-3-f.ebner@proxmox.com>
In-Reply-To: <20260121105145.47010-1-f.ebner@proxmox.com>
References: <20260121105145.47010-1-f.ebner@proxmox.com>
Subject: [pve-devel] [PATCH qemu-server 2/3] fix #3229: migrate: consider node restriction for 'shared' storage

Previously, volumes on a storage with the 'shared' marker were assumed not to
need storage migration, and migration would then fail during the storage
availability check on the target node. But if a storage with a 'shared' marker
has node restrictions, this assumption is wrong. Fix the issue by checking
whether a storage with the 'shared' marker is actually available on the target
node and, if not, properly treating the volume as a local volume.

The new map_storage() helper also applies the mapping to shared storages if
they are not configured for the target node.

To fix bug #3229 for offline migration, it is enough to change the behavior of
the source node: prepare() and scan_local_volumes() need to use the new
map_storage() helper and check whether the shared storage was actually mapped,
to avoid returning early.

To fix bug #3229 for online migration, the behavior of the target node needs
to change as well: vm_migrate_get_nbd_disks() can check whether the shared
storage is configured for the node and otherwise assume that volumes on the
storage will be migrated via NBD too.
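The map_storage() helper itself ships with the libpve-guest-common-perl
dependency and is not part of this patch. A rough, hypothetical sketch of the
intended semantics follows; the field names $self->{storecfg} and $self->{node}
and the exact call layout are assumptions based on the calls visible in this
diff, not the actual implementation:

    # Hypothetical sketch only -- the real helper is provided by
    # libpve-guest-common-perl. Assumes $self->{storecfg} holds the storage
    # configuration and $self->{node} the target node name.
    sub map_storage {
        my ($self, $scfg, $sid) = @_;

        my $targetsid = $sid;

        # Apply the storage map for local storages, for remote migration and,
        # newly, for 'shared' storages not configured on the target node.
        # The remote check must come before storage_check_node(), since the
        # target of a remote migration is not a node of the local cluster.
        if (
            !$scfg->{shared}
            || $self->{opts}->{remote}
            || !PVE::Storage::storage_check_node($self->{storecfg}, $sid, $self->{node}, 1)
        ) {
            $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
        }

        return $targetsid;
    }

With such a helper, scan_local_volumes() below can compare $targetsid against
$sid to detect that a 'shared' storage was remapped and has to be treated like
a local volume.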
The following scenarios need to be considered:

1. Old source node: migration failure during early checks.
2. New source node, old target node: migration failure a bit later, when the
   target notices that the shared storage is not available upon VM start.
   It's not fundamentally worse than the previous behavior.
3. New source node, new target node: migration works, bug #3229 is fixed.

Note that the connection check for a shared storage still makes sense even if
the storage is being mapped. It's still unexpected if there is no connection.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Depends and build-depends on new libpve-guest-common-perl!

 src/PVE/QemuMigrate.pm | 14 +++-----------
 src/PVE/QemuServer.pm  |  6 +++++-
 2 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 829288ff..8b5e0ca4 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -317,11 +317,7 @@ sub prepare {
         # check if storage is available on source node
         my $scfg = PVE::Storage::storage_check_enabled($storecfg, $sid);
 
-        my $targetsid = $sid;
-        # NOTE: local ignores shared mappings, remote maps them
-        if (!$scfg->{shared} || $self->{opts}->{remote}) {
-            $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
-        }
+        my $targetsid = $self->map_storage($scfg, $sid);
 
         $storages->{$targetsid} = 1;
 
@@ -427,14 +423,10 @@ sub scan_local_volumes {
             # check if storage is available on both nodes
             my $scfg = PVE::Storage::storage_check_enabled($storecfg, $sid);
 
-            my $targetsid = $sid;
-            # NOTE: local ignores shared mappings, remote maps them
-            if (!$scfg->{shared} || $self->{opts}->{remote}) {
-                $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
-            }
+            my $targetsid = $self->map_storage($scfg, $sid);
 
             $self->target_storage_check_available($storecfg, $targetsid, $volid);
-            return if $scfg->{shared} && !$self->{opts}->{remote};
+            return if $scfg->{shared} && $targetsid eq $sid && !$self->{opts}->{remote};
 
             $local_volumes->{$volid}->{ref} = 'pending' if $attr->{referenced_in_pending};
             $local_volumes->{$volid}->{ref} = 'snapshot' if $attr->{referenced_in_snapshot};
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index bad3527c..22af10c9 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -5300,6 +5300,8 @@ sub vmconfig_update_cloudinit_drive {
 sub vm_migrate_get_nbd_disks {
     my ($storecfg, $conf, $replicated_volumes) = @_;
 
+    my $node_name = PVE::INotify::nodename();
+
     my $local_volumes = {};
     PVE::QemuConfig->foreach_volume(
         $conf,
@@ -5316,7 +5318,9 @@ sub vm_migrate_get_nbd_disks {
             my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
             my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
 
-            return if $scfg->{shared};
+            return
+                if $scfg->{shared}
+                && PVE::Storage::storage_check_node($storecfg, $storeid, $node_name, 1);
 
             my $format = checked_volume_format($storecfg, $volid);
 
--
2.47.3