From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <8296c2a5-75e0-4426-8847-b4570351b3a8@proxmox.com>
Date: Mon, 11 May 2026 19:05:21 +0200
MIME-Version: 1.0
Subject: Re: [PATCH v2 cluster/storage/manager 00/15] storage mapping
To: Mira Limbeck , pve-devel@lists.proxmox.com
References: <20260430173220.441001-1-m.limbeck@proxmox.com>
From: Samuel Rufinatscha
In-Reply-To: <20260430173220.441001-1-m.limbeck@proxmox.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
List-Id: Proxmox VE development discussion

Tested the series on an existing 3-node cluster, with two nodes patched and
the third (unpatched) node serving a targetcli-fb iSCSI target.

The iSCSI mappings were replicated via pmxcfs and the new API endpoint
returned the mappings. pvestatd activated the mapped iSCSI storage without
errors on both nodes. Changing one node's mapping to another target removed
the stale session entry and added the new one. A failed login to a
non-existent target was logged but did not crash pvestatd. Non-persistent
discovery left the discovery DB unchanged.

For the ZFSPool POC, I tested file-backed pools with different names on
both nodes. A VM was created on the local mapped pool, and replication to
the other node completed successfully, with the dataset appearing in the
other mapped pool.

Tested-by: Samuel Rufinatscha

On 4/30/26 7:32 PM, Mira Limbeck wrote:
> This patch series is the second iteration of storage mapping support.
> The first iteration can be found under [1].
> 
> What is included:
> * new mapping base plugin
> * new iscsi mapping plugin
> * reworked iscsi plugin to support mappings
> * api support for creating and updating mappings
> * optional non-persistent discovery
> * (optional) cleanup for leftover node entries that are no longer discovered
> * POC zfspool mapping plugin
> 
> What is missing:
> * fix for pvesh to support oneOf schemas
> * GUI for handling mappings
> * additional mapping plugins
> 
> 
> Some patches need to be applied in a specific order:
> pve-cluster
> pve-storage
> pve-manager
> 
> The pve-cluster patch adds support for mapping/storage.cfg in /etc/pve.
> This is required by the pve-storage changes; they won't compile
> otherwise.
> The pve-manager patch has to be applied after the API additions of
> pve-storage, otherwise it won't compile, since it adds an API endpoint
> that forwards to the newly introduced pve-storage API.
> 
> 
> The idea behind mapping support:
> This stems mainly from iSCSI plugin limitations we've seen in support.
> The current iSCSI storage plugin assumes that the central part of its
> config is the `target`: there is only one target, reachable via one or
> more portals.
> But we've seen (mostly proprietary) SANs expose LUNs in a slightly
> different way, with each LUN exposed via its own target and its own
> portal.
> This is something the current iSCSI plugin does not handle nicely.
> Sometimes those targets and portals even differ between Proxmox VE
> nodes, since not all portals are reachable from every node.
> 
> To manage such setups, the idea of cluster-wide storage mappings was
> born. Rather than having one storage with one or multiple targets and
> all the portals, we can now specify exactly which portals and targets
> each node uses. These are combined into a `logical mapping target`,
> which can be used cluster-wide and is resolved on each node separately.
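To illustrate the per-node resolution described above: every node sees the same cluster-wide mapping id, but only acts on its own `map` entries. A rough sketch of that resolution step (in Python, purely illustrative; the series implements this in Perl inside pve-storage, and the helper name `resolve_mapping` is made up here — the entry values are taken from the example mapping config later in the cover letter):

```python
# Per-node resolution of a cluster-wide "logical mapping target".
# Illustration only; not the actual pve-storage implementation.

EXAMPLE_MAPPING = {
    # mapping id -> per-node `map` entries, as in /etc/pve/mapping/storage.cfg
    "logicaltarget": [
        {"node": "iscsi-test",
         "portals": ["10.67.1.144", "10.67.2.144", "10.67.3.144:3260"],
         "target": "iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375"},
        {"node": "iscsi-test",
         "portals": ["10.67.4.144", "10.67.5.144", "10.67.8.144"],
         "target": "iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375"},
        {"node": "iscsi-test2",
         "portals": ["10.67.1.144", "10.67.2.144", "10.67.3.144:3260"],
         "target": "iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375"},
    ],
}

def resolve_mapping(mapping_id, nodename, mappings=EXAMPLE_MAPPING):
    """Return only the (portals, target) pairs configured for `nodename`.

    Each node filters the shared mapping down to its own `map` entries
    and logs in only to those portals/targets.
    """
    return [(e["portals"], e["target"])
            for e in mappings.get(mapping_id, [])
            if e["node"] == nodename]
```

With this sketch, `resolve_mapping("logicaltarget", "iscsi-test")` yields that node's two entries, while `iscsi-test2` resolves the same id to its single entry.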
> 
> Even though the idea came from limitations of the iSCSI plugin, it can
> be used for other plugins as well; see the POC for the zfspool plugin.
> The goal is to create a base that works for all storages where mapping
> makes sense.
> 
> How to test:
> 
> *iSCSI*:
> 
> Setup:
> * at least 2 Proxmox VE hosts
> * iSCSI target on one of the Proxmox VE hosts, or separate
> 
> targetcli [2] makes setting up an iSCSI target easy [3].
> 
> Create a mapping:
> * use the API via curl or something else
> * use pvesh (can't specify --map since the oneOf schema is not supported)
> 
> An example mapping config might look like this:
> # cat /etc/pve/mapping/storage.cfg
> iscsi: logicaltarget
>         discovery-portals 10.67.0.144,10.67.1.144
>         map node=iscsi-test,portals=10.67.1.144;10.67.2.144;10.67.3.144:3260,target=iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375
>         map node=iscsi-test,portals=10.67.4.144;10.67.5.144;10.67.8.144,target=iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375
>         map node=iscsi-test2,portals=10.67.1.144;10.67.2.144;10.67.3.144:3260,target=iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375
> 
> And the corresponding /etc/pve/storage.cfg would look like this:
> iscsi: iscsi
>         mapping logicaltarget
> 
> On the next iteration of pvestatd it should pick the mapping up and log
> in to all configured portals where possible.
> This can be checked with:
> `iscsiadm -m session`
> 
> `iscsiadm -m node` will contain all configured entries, and those will
> be updated on the next pvestatd iteration after a change to the mapping
> config.
> 
> For the regular iSCSI storage config, everything should stay the same,
> although a discovery is now done on every iteration.
> 
> With the optional patch 09/15 it will clean up stale node entries that
> are no longer announced on discovery. This can be tested by removing
> mapped LUNs in targetcli under `acls`.
> 
> 
> *ZFS*:
> 
> Setup:
> * at least 2 Proxmox VE hosts
> * both hosts with different zpool names
> 
> Create a mapping:
> * same as for iSCSI, but with the `zfspool` storage type
> * map only accepts `pool` and `node` as options
> 
> To test:
> * set up replication and see if it works with different zpool names on
>   each host
> 
> 
> [1] https://lore.proxmox.com/all/20251110170124.3460419-1-m.limbeck@proxmox.com/
> [2] https://packages.debian.org/trixie/targetcli-fb
> [3] https://wiki.archlinux.org/title/ISCSI/LIO
> 
> 
> v2:
> - split up the patch series
> - fixed discovery in the mapping case; entries are now added manually
>   based on the config
> - added discovery-portals for future GUI usability
> - added optional cleanup of stale node entries for regular iSCSI storages
> - added a POC ZFSPool mapping plugin with support for replication
>   between zpools with different names
> 
> 
> pve-cluster:
> 
> Mira Limbeck (1):
>   mapping: add storage.cfg
> 
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> pve-storage:
> 
> Mira Limbeck (13):
>   mapping: add base plugin
>   mapping: add iSCSI plugin
>   iscsi: introduce mapping support
>   iscsi: add helper to get local config
>   iscsi: change functions to handle mappings
>   iscsi: introduce helper to update discovery db
>   iscsi: rework to update discovery db and simplify login
>   iscsi: remove stale sessions in non-mapping case
>   api: add mapping support
>   mapping: iscsi: add discovery-portal config option
>   iscsi: add support for non-persistent discovery
>   api: add non-persistent iscsi discovery option
>   mapping: add zfspool poc
> 
>  src/PVE/API2/Storage/Makefile      |   2 +-
>  src/PVE/API2/Storage/Mapping.pm    | 213 +++++++++++++++
>  src/PVE/API2/Storage/Scan.pm       |  32 ++-
>  src/PVE/CLI/pvesm.pm               |   3 +-
>  src/PVE/Storage.pm                 |   5 +-
>  src/PVE/Storage/ISCSIPlugin.pm     | 399 +++++++++++++++++++++++++----
>  src/PVE/Storage/Makefile           |   4 +-
>  src/PVE/Storage/Mapping.pm         |  46 ++++
>  src/PVE/Storage/Mapping/ISCSI.pm   |  59 +++++
>  src/PVE/Storage/Mapping/Makefile   |   8 +
>  src/PVE/Storage/Mapping/Plugin.pm  |  90 +++++++
>  src/PVE/Storage/Mapping/ZFSPool.pm |  48 ++++
>  src/PVE/Storage/Plugin.pm          |   6 +
>  src/PVE/Storage/ZFSPoolPlugin.pm   | 133 +++++---
>  14 files changed, 957 insertions(+), 91 deletions(-)
>  create mode 100644 src/PVE/API2/Storage/Mapping.pm
>  create mode 100644 src/PVE/Storage/Mapping.pm
>  create mode 100644 src/PVE/Storage/Mapping/ISCSI.pm
>  create mode 100644 src/PVE/Storage/Mapping/Makefile
>  create mode 100644 src/PVE/Storage/Mapping/Plugin.pm
>  create mode 100644 src/PVE/Storage/Mapping/ZFSPool.pm
> 
> pve-manager:
> 
> Mira Limbeck (1):
>   api: mapping: add storage mapping path
> 
>  PVE/API2/Cluster/Mapping.pm | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
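For the ZFSPool POC described in the "How to test" section above, the cover letter gives no concrete config example. By analogy with the iSCSI example, a zfspool mapping might look like the sketch below. Note this is an assumption: the pool names, node names, and storage ids here are hypothetical, and only the fact that `map` accepts `pool` and `node` comes from the cover letter.

```
# cat /etc/pve/mapping/storage.cfg
zfspool: logicalpool
        map node=node1,pool=tank-a
        map node=node2,pool=tank-b

# corresponding /etc/pve/storage.cfg entry (assumed, by analogy with iSCSI)
zfspool: mappedzfs
        mapping logicalpool
```

With such a config, replication of a guest between node1 and node2 would place the dataset in each node's locally mapped pool, despite the different zpool names.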