From: Timo Veith via pve-devel <pve-devel@lists.proxmox.com>
To: Mira Limbeck <m.limbeck@proxmox.com>
Cc: Timo Veith <timo.veith@uni-tuebingen.de>,
Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] iscsi and multipathing
Date: Tue, 15 Apr 2025 16:10:58 +0200
Message-ID: <mailman.1019.1744726267.359.pve-devel@lists.proxmox.com>
In-Reply-To: <b19fc3b6-563c-4d0e-a766-78dd3bb28804@proxmox.com>
Hello Mira,
thank you very much for your reply.
> Am 15.04.2025 um 11:09 schrieb Mira Limbeck <m.limbeck@proxmox.com>:
>
> Hi Timo,
>
> At the moment I'm working on storage mapping support for iSCSI.
> This would allow one to configure different portals on each of the
> hosts that all belong to what is logically the same storage.
>
> If you tried setting up a storage via iSCSI where each host can only
> access a part of the portals which are announced, you probably noticed
> some higher pvestatd update times.
> The storage mapping implementation will alleviate those issues.
>
> Other than that I'm not aware of anyone working on iSCSI improvements at
> the moment.
> We do have some open enhancement requests in our bug tracker [0]. One of
> which is yours [1].
From the list [0] you mentioned, iSCSI CHAP credentials in the GUI are something we are interested in too.
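For context, this is roughly what we do manually on each node today to set CHAP credentials on a target (target name, portal and credentials are placeholders):

    iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.0.2.10 \
        --op update -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.0.2.10 \
        --op update -n node.session.auth.username -v chapuser
    iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.0.2.10 \
        --op update -n node.session.auth.password -v chapsecret

Having that in the web UI would save us these steps on every node.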
>
> Regarding multipath handling via the GUI there hasn't been much of a
> discussion on how we could tackle that yet. It is quite easy to set up
> [2] the usual way.
I know that it is easy, because otherwise I wouldn’t have been able to configure it ;)
>
>
> Sorry, I might have missed your bug report previously, so I'll go into a
> bit more detail here. (I'll add that information to the enhancement
> request as well)
>
>> When adding iSCSI storage to the data center, there could be the
>> possibility to do an iSCSI discovery multiple times against different
>> portal IPs and thus get multiple paths to an iSCSI SAN.
>
> That's already the default. For each target we run the discovery on at
> least one portal since it should announce all other portals. We haven't
> encountered a setup where that is not the case.
I am dealing only with setups that do not announce their portals, so I have to do an iSCSI discovery for every portal IP address. Those are mostly Infortrend iSCSI SAN systems, but also some from Huawei. But I think I know what you mean: some storage devices return all portals when you do a discovery against one of their IP addresses.
However, it would be great to have the possibility to enter multiple portal IP addresses in the web UI, together with CHAP credentials.
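Right now, discovery and login look roughly like this, once per portal (addresses and target name are placeholders):

    iscsiadm -m discovery -t sendtargets -p 192.0.2.11
    iscsiadm -m discovery -t sendtargets -p 192.0.2.12
    iscsiadm -m node -T iqn.2001-05.com.example:target0 --login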
>
>> multipathd should be updated with the paths to the LUNs. The user
>> would/could only need to add vendor-specific device configs like ALUA
>> or multibus settings.
>
> For now that has to be done manually. There exists a multipath.conf
> setting that automatically creates a multipath mapping for devices that
> have at least 2 paths available: `find_multipaths yes` [3].
I will test `find_multipaths yes`. If I understand you correctly, then the command `multipath -a <wwid>`, as described in the multipath wiki article [2], will no longer be necessary.
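If that works, my understanding is that a minimal defaults section along these lines would be enough (vendor-specific device sections still added as needed):

    defaults {
        find_multipaths yes
        user_friendly_names yes
    }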
>
>> Then, when adding a certain disk to a VM, it would be good if its
>> WWN were displayed instead of e.g. "CH 00 ID0 LUN0", so it would be
>> easier to identify the right one.
>
> That would be a nice addition. And shouldn't be too hard to extract that
> information in the ISCSIPlugin and provide it as additional information
> via the API.
> That information could also be listed in the `VM Disks` page of iSCSI
> storages.
> Would you like to tackle that?
Are you asking me to provide the code for that?
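In case it helps, the WWN is already easy to get from userspace, e.g. (device name is a placeholder):

    /lib/udev/scsi_id --whitelisted --device=/dev/sdb

or simply by listing /dev/disk/by-id/ and looking at the wwn-* symlinks.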
>
>> Also, when a LUN has been grown on the storage side, it would be
>> handy to have a button in the PVE web GUI to "refresh" the disk in
>> the VM. The new size should be reflected in the hardware details of
>> the VM, and the QEMU process should be informed of the new disk size
>> so the VM would not have to be shut down and restarted.
>
> Based on experience, I doubt it would be that easy. Refreshing of the
> LUN sizes involves the SAN, the client, multipath and QEMU. There's
> always at least one place where it doesn't update even with
> `rescan-scsi-bus.sh`, `multipath -r`, etc.
> If you have a reliable way to make all sides agree on the new size,
> please let us know.
Don't get me wrong, I didn't mean that it should be possible to resize an iSCSI disk right from the PVE web GUI. I meant that once a LUN has been resized on the SAN side with whatever steps are necessary there (e.g. with Infortrend you need to log in to the management software, find the LUN and resize it), refreshing that new size could be triggered by a button in the PVE web GUI. Pressing the button would trigger an iSCSI rescan of the corresponding session, then a multipath map rescan like you wrote, and finally a QEMU block device refresh (and/or the equivalent for an LXC container).
Even if I do all of that manually, the size of the LUN in the hardware details of the VM is not being updated.
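For reference, the manual sequence I have been trying looks roughly like this (map name, VM ID, drive name and size are placeholders):

    # rescan all iSCSI sessions for changed LUN sizes
    iscsiadm -m session --rescan
    # let multipathd pick up the new size of the map
    multipathd resize map mpatha
    # tell the running QEMU instance about the new size
    qm monitor 100
    qm> block_resize drive-scsi0 200G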
I personally do not know how, but I do know that it is possible in oVirt/RHV.
Regards,
Timo
>
>
>
> [0]
> https://bugzilla.proxmox.com/buglist.cgi?bug_severity=enhancement&list_id=50969&resolution=---&short_desc=iscsi&short_desc_type=allwordssubstr
> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6133
> [2] https://pve.proxmox.com/wiki/Multipath
> [3]
> https://manpages.debian.org/bookworm/multipath-tools/multipath.conf.5.en.html
>