From mboxrd@z Thu Jan 1 00:00:00 1970
From: Fiona Ebner
To: pve-devel@lists.proxmox.com
Date: Wed, 21 Jan 2026 11:51:16 +0100
Message-ID: <20260121105145.47010-4-f.ebner@proxmox.com>
In-Reply-To: <20260121105145.47010-1-f.ebner@proxmox.com>
References: <20260121105145.47010-1-f.ebner@proxmox.com>
MIME-Version: 1.0
Subject: [pve-devel] [PATCH container 3/3] fix #3229: migrate: consider node restriction for 'shared' storage

Previously, volumes on a storage with the 'shared' marker were assumed
not to need storage migration, and migration would fail when checking
for storage availability on the target. But if a storage with a 'shared'
marker has node restrictions, this assumption is wrong. Fix the issue by
checking whether a storage with the 'shared' marker is actually
available on the target node, and otherwise properly treating the volume
as a local volume.

The new map_storage() helper applies the mapping also for shared
storages if they are not configured for the target node.

To fix bug #3229 for offline migration, it is enough to change the
behavior of the source node: prepare() and phase1() need to start using
the new map_storage() helper, and in two places check whether the shared
storage was mapped or not. The list of storages supporting migration now
becomes the same as for remote migration (and should really be handled
by an early format negotiation rather than a hard-coded list, but that
is for another patch series).

Note that the connection check for a shared storage still makes sense
even if the storage is being mapped: it is still unexpected if there is
no connection.

Online migration for containers is not (yet) supported, so this is
enough.

Signed-off-by: Fiona Ebner
---

Depends on new libpve-guest-common-perl!
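The new "treat as local" decision described above can be sketched as follows. This is an illustrative Python rendering of the patched condition (`!$scfg->{shared} || $targetsid ne $storage || $remote`), not the actual Perl code; the function name and parameters are hypothetical:

```python
def is_local_volume(shared, storage_id, targetsid, remote=False):
    """Hypothetical sketch: decide whether a volume must be migrated
    like a local volume under the patched logic.

    A volume is local if its storage lacks the 'shared' marker, OR the
    storage mapping resolved to a different target storage (e.g. because
    the 'shared' storage is node-restricted and unavailable on the
    target), OR this is a remote migration.
    """
    return (not shared) or (targetsid != storage_id) or remote


# A 'shared' storage that had to be mapped to another storage on the
# target node is now handled as a local volume:
print(is_local_volume(shared=True, storage_id='ceph', targetsid='ceph-b'))  # True
# A 'shared' storage that resolves to itself keeps the old behavior:
print(is_local_volume(shared=True, storage_id='ceph', targetsid='ceph'))    # False
```

Before the patch, only the `shared` flag was consulted, which is what made migration fail for node-restricted shared storages.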
 src/PVE/LXC/Migrate.pm | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index fe588ba..10425ff 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -76,8 +76,6 @@ sub prepare {
         # check if storage is available on both nodes
         my $scfg = PVE::Storage::storage_check_enabled($self->{storecfg}, $storage);
 
-        my $targetsid = $storage;
-
         die "content type 'rootdir' is not available on storage '$storage'\n"
             if !$scfg->{content}->{rootdir};
 
@@ -86,12 +84,14 @@ sub prepare {
             my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
             warn "Used shared storage '$storage' is not online on source node!\n"
                 if !$plugin->check_connection($storage, $scfg);
-        } else {
+        }
+
+        my $targetsid = $self->map_storage($scfg, $storage);
+
+        if (!$scfg->{shared} || $targetsid ne $storage || $remote) {
             # unless in restart mode because we shut the container down
             die "unable to migrate local mount point '$volid' while CT is running\n"
                 if $running && !$restart;
-
-            $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storage);
         }
 
         if (!$remote) {
@@ -218,14 +218,12 @@ sub phase1 {
         # check if storage is available on source node
         my $scfg = PVE::Storage::storage_check_enabled($self->{storecfg}, $sid);
 
-        my $targetsid = $sid;
+        my $targetsid = $self->map_storage($scfg, $sid);
 
-        if ($scfg->{shared} && !$remote) {
+        if ($scfg->{shared} && $targetsid eq $sid && !$remote) {
             $self->log('info', "volume '$volid' is on shared storage '$sid'")
                 if !$snapname;
             return;
-        } else {
-            $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
         }
 
         PVE::Storage::storage_check_enabled($self->{storecfg}, $targetsid, $self->{node})
@@ -307,13 +305,8 @@ sub phase1 {
 
         # TODO move to storage plugin layer?
         my $migratable_storages = [
-            'dir', 'zfspool', 'lvmthin', 'lvm', 'btrfs',
+            'dir', 'zfspool', 'lvmthin', 'lvm', 'btrfs', 'cifs', 'nfs', 'rbd',
         ];
-        if ($remote) {
-            push @$migratable_storages, 'cifs';
-            push @$migratable_storages, 'nfs';
-            push @$migratable_storages, 'rbd';
-        }
         die "storage type '$scfg->{type}' not supported\n"
             if !grep { $_ eq $scfg->{type} } @$migratable_storages;
 
-- 
2.47.3

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel