public inbox for pve-devel@lists.proxmox.com
From: Samuel Rufinatscha <s.rufinatscha@proxmox.com>
To: Mira Limbeck <m.limbeck@proxmox.com>, pve-devel@lists.proxmox.com
Subject: Re: [PATCH v2 cluster/storage/manager 00/15] storage mapping
Date: Mon, 11 May 2026 19:05:21 +0200
Message-ID: <8296c2a5-75e0-4426-8847-b4570351b3a8@proxmox.com>
In-Reply-To: <20260430173220.441001-1-m.limbeck@proxmox.com>

Tested the series on an existing 3-node cluster, with two nodes
patched and the third (unpatched) node serving a targetcli-fb
iSCSI target.

The iSCSI mappings were replicated via pmxcfs and the new API endpoint
returned the mappings. pvestatd activated mapped iSCSI storage
without errors on both nodes. Changing one node's mapping to another
target removed the stale session entry and added the new one. A failed
login to a non-existing target was logged but did not crash pvestatd.
Non-persistent discovery left the discovery DB unchanged.

For the ZFSPool POC, I tested file-backed pools with different names on
both nodes. A VM was created on the local mapped pool and replication
to the other node completed successfully with the dataset appearing in
the other mapped pool.

Tested-by: Samuel Rufinatscha <s.rufinatscha@proxmox.com>


On 4/30/26 7:32 PM, Mira Limbeck wrote:
> This patch series is the second iteration of storage mapping support.
> The first iteration can be found under [1].
> 
> What is included:
> * new mapping base plugin
> * new iscsi mapping plugin
> * reworked iscsi plugin to support mappings
> * api support for creating and updating mappings
> * optional non-persistent discovery
> * (optional) cleanup for leftover node entries that are no longer discovered
> * POC zfspool mapping plugin
> 
> What is missing:
> * fix for pvesh to support oneOf schemas
> * GUI for handling mappings
> * additional mapping plugins
> 
> 
> Some patches need to be applied in a specific order:
> pve-cluster > pve-storage > pve-manager
> 
> The pve-cluster patch adds support for mapping/storage.cfg in /etc/pve.
> This is required by the pve-storage changes; they won't compile
> otherwise.
> The pve-manager patch has to be applied after the API additions of
> pve-storage, otherwise it won't compile since it adds an API endpoint
> that forwards to the newly introduced pve-storage API.
> 
> 
> The idea behind mapping support:
> This stems mainly from iSCSI plugin limitations we've seen in support.
> The current iSCSI storage plugin assumes that the central part of its
> config is the `target`, and that there is only one target with one or
> more portals.
> But we've seen (mostly proprietary) SANs expose their LUNs in a
> slightly different way: each LUN is exposed via its own target on its
> own portal.
> This is something the current iSCSI plugin does not handle nicely.
> Sometimes those targets and portals are even different on different
> Proxmox VE nodes, since not all portals will be reachable from every
> node.
> 
> To manage such setups, the idea of cluster-wide storage mappings was
> born. With this, rather than having one storage with one or more
> targets and all the portals, we can now specify exactly which node has
> which portals and targets. Those are all combined into a `logical
> mapping target`, which can be used cluster-wide and is resolved on
> each node separately.
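
The per-node resolution described here can be sketched roughly as
follows. This is a simplified model, not the actual pve-storage code;
the entry layout simply mirrors the example config shown later in this
mail:

```python
# Conceptual sketch of per-node resolution of a logical mapping target.
# NOT the actual pve-storage implementation; the entry layout is modeled
# after the /etc/pve/mapping/storage.cfg example further below.

def resolve_mapping(map_entries, local_node):
    """Return the (portal, target) pairs that apply to local_node only."""
    resolved = []
    for entry in map_entries:
        if entry["node"] != local_node:
            continue  # map entries for other nodes are ignored locally
        for portal in entry["portals"]:
            resolved.append((portal, entry["target"]))
    return resolved

# One logical target, different portals per node:
entries = [
    {"node": "iscsi-test", "portals": ["10.67.1.144", "10.67.2.144"],
     "target": "iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375"},
    {"node": "iscsi-test2", "portals": ["10.67.4.144"],
     "target": "iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375"},
]

print(resolve_mapping(entries, "iscsi-test2"))
```

Each node feeds only its own resolved pairs to the iSCSI layer, which
is why the same storage definition works cluster-wide.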
> 
> Even though the idea arose from limitations of the iSCSI plugin, it
> can be used for other plugins as well; see the POC for the zfspool
> plugin.
> The idea is to create a base that would work for all storages where
> mapping makes sense.
> 
> How to test:
> 
> *iSCSI*:
> 
> Setup:
> * at least 2 Proxmox VE hosts
> * iSCSI target on one of the Proxmox VE hosts, or separate
> 
> targetcli [2] makes setting up iSCSI easy [3]
> 
> Create a mapping:
> * use the API via curl or a similar tool
> * use pvesh (you can't specify --map since the oneOf schema is not
>   supported yet)
> 
> An example mapping config might look like this:
> # cat /etc/pve/mapping/storage.cfg
> iscsi: logicaltarget
> 	discovery-portals 10.67.0.144,10.67.1.144
> 	map node=iscsi-test,portals=10.67.1.144;10.67.2.144;10.67.3.144:3260,target=iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375
> 	map node=iscsi-test,portals=10.67.4.144;10.67.5.144;10.67.8.144,target=iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375
> 	map node=iscsi-test2,portals=10.67.1.144;10.67.2.144;10.67.3.144:3260,target=iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375
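
To illustrate how such a `map` line decomposes, here is a minimal
sketch. This is not PVE's actual property-string parser (which is
schema-driven); it just assumes the values themselves contain no
commas:

```python
# Minimal illustration of the map-line structure shown above; NOT the
# real PVE property-string parser. Assumes values contain no commas.

def parse_map_value(value):
    """Parse 'node=...,portals=a;b;c,target=...' into a dict."""
    props = dict(part.split("=", 1) for part in value.split(","))
    # portals are a ';'-separated list, optionally with a :port suffix
    props["portals"] = props["portals"].split(";")
    return props

line = ("node=iscsi-test,"
        "portals=10.67.1.144;10.67.2.144;10.67.3.144:3260,"
        "target=iqn.2003-01.org.linux-iscsi.iscsi.x8664:sn.81bb080df375")
parsed = parse_map_value(line)
print(parsed["node"], parsed["portals"])
```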
> 
> And the corresponding /etc/pve/storage.cfg would look like this:
> iscsi: iscsi
> 	mapping logicaltarget
> 
> On its next iteration, pvestatd should pick up the mapping and log in
> to all configured portals where possible.
> This can be checked with:
> `iscsiadm -m session`
> 
> `iscsiadm -m node` will contain all configured entries, and those will
> be updated on the next pvestatd iteration after a change to the mapping
> config.
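
The per-iteration update described here boils down to a set
difference. A conceptual sketch (not the actual pvestatd code), with
made-up portal/target pairs:

```python
# Conceptual model of the per-iteration session update: compare what the
# mapping configures against what is currently logged in. NOT the actual
# pvestatd code, just the set logic behind it.

def plan_session_updates(configured, active):
    """Both arguments are sets of (portal, target) pairs."""
    to_login = configured - active    # newly configured entries
    to_logout = active - configured   # stale sessions after a config change
    return to_login, to_logout

# Example: the mapping was changed from iqn.old to iqn.new on one portal.
old_pair = ("10.67.1.144:3260", "iqn.old")
new_pair = ("10.67.1.144:3260", "iqn.new")
login, logout = plan_session_updates({new_pair}, {old_pair})
```

This matches the observed behavior in testing: changing a node's
mapping removes the stale session entry and adds the new one.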
> 
> For the regular iSCSI storage config, everything should stay the same,
> except that a discovery is now performed on every iteration.
> 
> With the optional patch 09/15, stale node entries that are no longer
> announced on discovery are cleaned up. This can be tested by removing
> mapped LUNs in targetcli under `acls`.
> 
> 
> *ZFS*:
> 
> Setup:
> * at least 2 Proxmox VE hosts
> * both hosts with different zpool names
> 
> Create a mapping:
> * same as iSCSI, but with `zfspool` storage type
> * map only accepts `pool` and `node` as options
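
For reference, a zfspool mapping config might look roughly like this;
the node and pool names are made up, and the exact syntax is my
extrapolation from the iSCSI example above:

```
# cat /etc/pve/mapping/storage.cfg
zfspool: logicalpool
	map node=nodeA,pool=tank-a
	map node=nodeB,pool=tank-b
```

with a corresponding /etc/pve/storage.cfg entry referencing it via
`mapping logicalpool`, analogous to the iSCSI case.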
> 
> To test:
> * set up replication and see if it works with different zpool names on
>    each host
> 
> 
> 
> [1] https://lore.proxmox.com/all/20251110170124.3460419-1-m.limbeck@proxmox.com/
> [2] https://packages.debian.org/trixie/targetcli-fb
> [3] https://wiki.archlinux.org/title/ISCSI/LIO
> 
> 
> v2:
>   - split up patch series
>   - fixed discovery in mapping case, entries are now added manually based
>     on the config
>   - added discovery-portals for future GUI usability
>   - added optional cleanup for stale node entries for regular iSCSI storages
>   - added POC ZFSPool mapping plugin with support for replication between
>     zpools with different names
> 
> 
> pve-cluster:
> 
> Mira Limbeck (1):
>    mapping: add storage.cfg
> 
>   src/PVE/Cluster.pm  | 1 +
>   src/pmxcfs/status.c | 1 +
>   2 files changed, 2 insertions(+)
> 
> pve-storage:
> 
> Mira Limbeck (13):
>    mapping: add base plugin
>    mapping: add iSCSI plugin
>    iscsi: introduce mapping support
>    iscsi: add helper to get local config
>    iscsi: change functions to handle mappings
>    iscsi: introduce helper to update discovery db
>    iscsi: rework to update discovery db and simplify login
>    iscsi: remove stale sessions in non-mapping case
>    api: add mapping support
>    mapping: iscsi: add discovery-portal config option
>    iscsi: add support for non-persistent discovery
>    api: add non-persistent iscsi discovery option
>    mapping: add zfspool poc
> 
>   src/PVE/API2/Storage/Makefile      |   2 +-
>   src/PVE/API2/Storage/Mapping.pm    | 213 +++++++++++++++
>   src/PVE/API2/Storage/Scan.pm       |  32 ++-
>   src/PVE/CLI/pvesm.pm               |   3 +-
>   src/PVE/Storage.pm                 |   5 +-
>   src/PVE/Storage/ISCSIPlugin.pm     | 399 +++++++++++++++++++++++++----
>   src/PVE/Storage/Makefile           |   4 +-
>   src/PVE/Storage/Mapping.pm         |  46 ++++
>   src/PVE/Storage/Mapping/ISCSI.pm   |  59 +++++
>   src/PVE/Storage/Mapping/Makefile   |   8 +
>   src/PVE/Storage/Mapping/Plugin.pm  |  90 +++++++
>   src/PVE/Storage/Mapping/ZFSPool.pm |  48 ++++
>   src/PVE/Storage/Plugin.pm          |   6 +
>   src/PVE/Storage/ZFSPoolPlugin.pm   | 133 +++++++---
>   14 files changed, 957 insertions(+), 91 deletions(-)
>   create mode 100644 src/PVE/API2/Storage/Mapping.pm
>   create mode 100644 src/PVE/Storage/Mapping.pm
>   create mode 100644 src/PVE/Storage/Mapping/ISCSI.pm
>   create mode 100644 src/PVE/Storage/Mapping/Makefile
>   create mode 100644 src/PVE/Storage/Mapping/Plugin.pm
>   create mode 100644 src/PVE/Storage/Mapping/ZFSPool.pm
> 
> pve-manager:
> 
> Mira Limbeck (1):
>    api: mapping: add storage mapping path
> 
>   PVE/API2/Cluster/Mapping.pm | 8 +++++++-
>   1 file changed, 7 insertions(+), 1 deletion(-)
> 
> 




