* [pve-devel] iscsi and multipathing
From: Timo Veith via pve-devel @ 2025-04-09 17:21 UTC (permalink / raw)
To: PVE development discussion; +Cc: Timo Veith
Hi all,
are there any plans for the further development of iSCSI and multipathing?
If there are any, what are they, what is their status and can they be supplemented or contributed to?
Regards,
Timo
* Re: [pve-devel] iscsi and multipathing
From: Mira Limbeck @ 2025-04-15 9:09 UTC (permalink / raw)
To: Proxmox VE development discussion
Hi Timo,
At the moment I'm working on storage mapping support for iSCSI.
This would allow one to configure different portals on each of the hosts
that all point to what is logically the same storage.
If you tried setting up a storage via iSCSI where each host can only
access a part of the portals which are announced, you probably noticed
some higher pvestatd update times.
The storage mapping implementation will alleviate those issues.
Other than that I'm not aware of anyone working on iSCSI improvements at
the moment.
We do have some open enhancement requests in our bug tracker [0]. One of
which is yours [1].
Regarding multipath handling via the GUI there hasn't been much of a
discussion on how we could tackle that yet. It is quite easy to set up
[2] the usual way.
Sorry, I might have missed your bug report previously, so I'll go into a
bit more detail here. (I'll add that information to the enhancement
request as well)
> When adding iSCSI storage to the data center, there could be the
> possibility to run an iSCSI discovery multiple times against different
> portal IPs and thus get multiple paths to an iSCSI SAN.
That's already the default. For each target we run the discovery on at
least one portal since it should announce all other portals. We haven't
encountered a setup where that is not the case.
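For reference, the SendTargets discovery that runs under the hood looks
roughly like this on the command line (the portal address and the
returned target IQN are placeholders):

    # ask a single portal for all targets/portals it announces
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
    # typical output, one line per announced portal/target pair:
    #   192.0.2.10:3260,1 iqn.2001-05.com.example:storage
    #   192.0.2.11:3260,2 iqn.2001-05.com.example:storage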
> multipathd should be updated with the paths to the LUNs. The user
> would then only need to add vendor-specific device configs like ALUA
> or multibus settings.
For now that has to be done manually. There exists a multipath.conf
setting that automatically creates a multipath mapping for devices that
have at least 2 paths available: `find_multipaths yes` [3].
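A minimal sketch of that setting in /etc/multipath.conf (only the
relevant section shown):

    defaults {
        # automatically create a multipath map for any device
        # that is (or was) seen with at least two paths
        find_multipaths yes
    }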
> Then, when adding a certain disk to a VM, it would be good if its WWN
> were displayed instead of e.g. "CH 00 ID0 LUN0", so it would be easier
> to identify the right one.
That would be a nice addition. And shouldn't be too hard to extract that
information in the ISCSIPlugin and provide it as additional information
via the API.
That information could also be listed in the `VM Disks` page of iSCSI
storages.
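For illustration, the WWN/WWID is already exposed through udev and
sysfs, so the plugin could read it from there; a sketch with sdX as a
placeholder (a possible data source, not what the plugin does today):

    # query the WWID of a SCSI path device via udev
    /lib/udev/scsi_id --whitelisted --device=/dev/sdX
    # or read it directly from sysfs
    cat /sys/block/sdX/device/wwid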
Would you like to tackle that?
> Also, when a LUN has been grown on the storage side, it would be handy
> to have a button in the PVE web GUI to "refresh" the disk in the VM.
> The new size should be reflected in the hardware details of the VM, and
> the QEMU process should be informed of the new disk size so the VM
> would not have to be shut down and restarted.
Based on experience, I doubt it would be that easy. Refreshing of the
LUN sizes involves the SAN, the client, multipath and QEMU. There's
always at least one place where it doesn't update even with
`rescan-scsi-bus.sh`, `multipath -r`, etc.
If you have a reliable way to make all sides agree on the new size,
please let us know.
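For completeness, the chain one usually tries looks roughly like this;
as said, in practice at least one of the steps tends not to take effect
(sdX, the map name, VM ID and drive name are placeholders):

    # 1. rescan every SCSI path device that backs the LUN
    echo 1 > /sys/block/sdX/device/rescan
    # 2. have multipathd re-read the path sizes and resize the map
    multipathd resize map mpatha
    # 3. inform the running QEMU instance (HMP monitor via qm)
    qm monitor 100
    qm> block_resize drive-scsi0 200G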
[0]
https://bugzilla.proxmox.com/buglist.cgi?bug_severity=enhancement&list_id=50969&resolution=---&short_desc=iscsi&short_desc_type=allwordssubstr
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=6133
[2] https://pve.proxmox.com/wiki/Multipath
[3]
https://manpages.debian.org/bookworm/multipath-tools/multipath.conf.5.en.html
* Re: [pve-devel] iscsi and multipathing
From: Timo Veith via pve-devel @ 2025-04-15 14:10 UTC (permalink / raw)
To: Mira Limbeck; +Cc: Timo Veith, Proxmox VE development discussion
Hello Mira,
thank you very much for your reply.
> On 15.04.2025 at 11:09, Mira Limbeck <m.limbeck@proxmox.com> wrote:
>
> Hi Timo,
>
> At the moment I'm working on storage mapping support for iSCSI.
> This would allow one to configure different portals on each of the hosts
> that all point to what is logically the same storage.
>
> If you tried setting up a storage via iSCSI where each host can only
> access a part of the portals which are announced, you probably noticed
> some higher pvestatd update times.
> The storage mapping implementation will alleviate those issues.
>
> Other than that I'm not aware of anyone working on iSCSI improvements at
> the moment.
> We do have some open enhancement requests in our bug tracker [0]. One of
> which is yours [1].
From the list [0] you mentioned, iSCSI CHAP credentials in the GUI are something we are interested in too.
>
> Regarding multipath handling via the GUI there hasn't been much of a
> discussion on how we could tackle that yet. It is quite easy to set up
> [2] the usual way.
I know that it is easy, because otherwise I wouldn’t have been able to configure it ;)
>
>
> Sorry, I might have missed your bug report previously, so I'll go into a
> bit more detail here. (I'll add that information to the enhancement
> request as well)
>
>> When adding iSCSI storage to the data center, there could be the
>> possibility to run an iSCSI discovery multiple times against different
>> portal IPs and thus get multiple paths to an iSCSI SAN.
>
> That's already the default. For each target we run the discovery on at
> least one portal since it should announce all other portals. We haven't
> encountered a setup where that is not the case.
I am dealing only with setups that do not announce their portals; I have to run an iSCSI discovery for every portal IP address. These are mostly Infortrend iSCSI SAN systems, but also some from Huawei. But I think I know what you mean: some storage devices give you all portals when you run a discovery against one of their IP addresses.
However, it would be great to have the possibility to enter multiple portal IP addresses in the web UI, together with CHAP credentials.
>
>> multipathd should be updated with the paths to the LUNs. The user
>> would then only need to add vendor-specific device configs like ALUA
>> or multibus settings.
>
> For now that has to be done manually. There exists a multipath.conf
> setting that automatically creates a multipath mapping for devices that
> have at least 2 paths available: `find_multipaths yes` [3].
I will test `find_multipaths yes`. If I understand you correctly, the command `multipath -a <wwid>` (as described in the multipath wiki article [2]) will then no longer be necessary.
>
>> Then, when adding a certain disk to a VM, it would be good if its WWN
>> were displayed instead of e.g. "CH 00 ID0 LUN0", so it would be easier
>> to identify the right one.
>
> That would be a nice addition. And shouldn't be too hard to extract that
> information in the ISCSIPlugin and provide it as additional information
> via the API.
> That information could also be listed in the `VM Disks` page of iSCSI
> storages.
> Would you like to tackle that?
Are you asking me to provide the code for that?
>
>> Also, when a LUN has been grown on the storage side, it would be handy
>> to have a button in the PVE web GUI to "refresh" the disk in the VM.
>> The new size should be reflected in the hardware details of the VM, and
>> the QEMU process should be informed of the new disk size so the VM
>> would not have to be shut down and restarted.
>
> Based on experience, I doubt it would be that easy. Refreshing of the
> LUN sizes involves the SAN, the client, multipath and QEMU. There's
> always at least one place where it doesn't update even with
> `rescan-scsi-bus.sh`, `multipath -r`, etc.
> If you have a reliable way to make all sides agree on the new size,
> please let us know.
Don’t get me wrong, I didn’t mean that it should be possible to resize an iSCSI disk right from the PVE web GUI. I meant that once the size of a LUN has been changed on the SAN side with whatever steps are necessary there (e.g. with Infortrend you need to log in to the management software, find the LUN and resize it), refreshing that new size could be triggered by a button in the PVE web GUI. When pressing the button, an iSCSI rescan of the corresponding iSCSI session would have to be done, then a multipath map rescan like you wrote, and eventually a QEMU block device refresh (and/or the equivalent for an LXC container).
Even if I do all of that manually, the size of the LUN in the hardware details of the VM is not being updated.
I personally do not know how, but at least I know that it is possible in oVirt/RHV.
Regards,
Timo
>
>
>
> [0]
> https://bugzilla.proxmox.com/buglist.cgi?bug_severity=enhancement&list_id=50969&resolution=---&short_desc=iscsi&short_desc_type=allwordssubstr
> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6133
> [2] https://pve.proxmox.com/wiki/Multipath
> [3]
> https://manpages.debian.org/bookworm/multipath-tools/multipath.conf.5.en.html
>
* Re: [pve-devel] iscsi and multipathing
From: DERUMIER, Alexandre via pve-devel @ 2025-04-18 6:24 UTC (permalink / raw)
To: pve-devel; +Cc: DERUMIER, Alexandre
Hi,
> Also, when a LUN has been grown on the storage side, it would be handy
> to have a button in the PVE web GUI to "refresh" the disk in the VM.
> The new size should be reflected in the hardware details of the VM, and
> the QEMU process should be informed of the new disk size so the VM
> would not have to be shut down and restarted.
>> Based on experience, I doubt it would be that easy. Refreshing of the
>> LUN sizes involves the SAN, the client, multipath and QEMU. There's
>> always at least one place where it doesn't update even with
>> `rescan-scsi-bus.sh`, `multipath -r`, etc.
>> If you have a reliable way to make all sides agree on the new size,
>> please let us know.
From what I remember from trying to do this 10 years ago, another
complexity is refreshing the volume size on all nodes for VM migration
(or it needs a refresh at each VM start).
That's why ZFS over iSCSI uses the QEMU iSCSI driver; it's really much
simpler. (but no multipath :/ )
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [pve-devel] iscsi and multipathing
From: Mira Limbeck @ 2025-04-18 8:45 UTC (permalink / raw)
To: Timo Veith; +Cc: Proxmox VE development discussion
On 4/15/25 16:10, Timo Veith wrote:
> Hello Mira,
>
> thank you very much for your reply.
>
>> Am 15.04.2025 um 11:09 schrieb Mira Limbeck <m.limbeck@proxmox.com>:
>>
>> Hi Timo,
>>
>> At the moment I'm working on storage mapping support for iSCSI.
>> This would allow one to configure different portals on each of the hosts
>> that all point to what is logically the same storage.
>>
>> If you tried setting up a storage via iSCSI where each host can only
>> access a part of the portals which are announced, you probably noticed
>> some higher pvestatd update times.
>> The storage mapping implementation will alleviate those issues.
>>
>> Other than that I'm not aware of anyone working on iSCSI improvements at
>> the moment.
>> We do have some open enhancement requests in our bug tracker [0]. One of
>> which is yours [1].
>
> From the list [0] you mentioned, iSCSI CHAP credentials in the GUI are something we are interested in too.
This is probably a bit more work to implement with the current way the
plugin works.
Since the discoverydb is recreated constantly, you would have to set the
credentials before each login, or pass them to iscsiadm as options,
which requires making sure that no sensitive information is logged on error.
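For illustration, setting the credentials in the node database before
the login would look something like this (target, portal and
credentials are placeholders):

    T=iqn.2001-05.com.example:storage; P=192.0.2.10:3260
    iscsiadm -m node -T "$T" -p "$P" --op update \
        -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T "$T" -p "$P" --op update \
        -n node.session.auth.username -v myuser
    iscsiadm -m node -T "$T" -p "$P" --op update \
        -n node.session.auth.password -v secret
    iscsiadm -m node -T "$T" -p "$P" --login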
>
>>
>> Regarding multipath handling via the GUI there hasn't been much of a
>> discussion on how we could tackle that yet. It is quite easy to set up
>> [2] the usual way.
>
> I know that it is easy, because otherwise I wouldn’t have been able to configure it ;)
>
>
>>
>>
>> Sorry, I might have missed your bug report previously, so I'll go into a
>> bit more detail here. (I'll add that information to the enhancement
>> request as well)
>>
>>> When adding iSCSI storage to the data center, there could be the
>>> possibility to run an iSCSI discovery multiple times against different
>>> portal IPs and thus get multiple paths to an iSCSI SAN.
>>
>> That's already the default. For each target we run the discovery on at
>> least one portal since it should announce all other portals. We haven't
>> encountered a setup where that is not the case.
>
> I am dealing only with setups that do not announce their portals; I have to run an iSCSI discovery for every portal IP address. These are mostly Infortrend iSCSI SAN systems, but also some from Huawei. But I think I know what you mean: some storage devices give you all portals when you run a discovery against one of their IP addresses.
> However, it would be great to have the possibility to enter multiple portal IP addresses in the web UI, together with CHAP credentials.
I tried just allowing multiple portals, and it didn't scale well.
For setups where each host has access to the same portals and targets,
it already works nicely the way it currently is.
But for asymmetric setups where each host can only connect to different
portals, and maybe different targets altogether, it doesn't bring any
benefit.
That's the reason I'm currently working on a `storage mapping` solution
where you can specify host-specific portals and targets, that all map to
the same `logical` storage.
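For context, a plain single-portal iSCSI entry in /etc/pve/storage.cfg
currently looks like this; the mapping would sit on top of such
per-host definitions (storage ID and IQN are made up):

    iscsi: san1
            portal 192.0.2.10
            target iqn.2001-05.com.example:storage
            content images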
Do your SANs provide the same target on all portals, or is it always a
different target for each portal?
>
>>
>>> multipathd should be updated with the paths to the LUNs. The user
>>> would then only need to add vendor-specific device configs like ALUA
>>> or multibus settings.
>>
>> For now that has to be done manually. There exists a multipath.conf
>> setting that automatically creates a multipath mapping for devices that
>> have at least 2 paths available: `find_multipaths yes` [3].
>
> I will test `find_multipaths yes`. If I understand you correctly, the command `multipath -a <wwid>` (as described in the multipath wiki article [2]) will then no longer be necessary.
>
>>
>>> Then, when adding a certain disk to a VM, it would be good if its WWN
>>> were displayed instead of e.g. "CH 00 ID0 LUN0", so it would be easier
>>> to identify the right one.
>>
>> That would be a nice addition. And shouldn't be too hard to extract that
>> information in the ISCSIPlugin and provide it as additional information
>> via the API.
>> That information could also be listed in the `VM Disks` page of iSCSI
>> storages.
>> Would you like to tackle that?
>
> Are you asking me to provide the code for that?
Since you mentioned `If there are any, what are they, what is their
status and can they be supplemented or contributed to?` I assumed you
were willing to contribute code as well. That's why I asked if you
wanted to tackle that improvement.
>
>>
>>> Also, when a LUN has been grown on the storage side, it would be handy
>>> to have a button in the PVE web GUI to "refresh" the disk in the VM.
>>> The new size should be reflected in the hardware details of the VM, and
>>> the QEMU process should be informed of the new disk size so the VM
>>> would not have to be shut down and restarted.
>>
>> Based on experience, I doubt it would be that easy. Refreshing of the
>> LUN sizes involves the SAN, the client, multipath and QEMU. There's
>> always at least one place where it doesn't update even with
>> `rescan-scsi-bus.sh`, `multipath -r`, etc.
>> If you have a reliable way to make all sides agree on the new size,
>> please let us know.
>
> Don’t get me wrong, I didn’t mean that it should be possible to resize an iSCSI disk right from the PVE web GUI. I meant that once the size of a LUN has been changed on the SAN side with whatever steps are necessary there (e.g. with Infortrend you need to log in to the management software, find the LUN and resize it), refreshing that new size could be triggered by a button in the PVE web GUI. When pressing the button, an iSCSI rescan of the corresponding iSCSI session would have to be done, then a multipath map rescan like you wrote, and eventually a QEMU block device refresh (and/or the equivalent for an LXC container).
>
> Even if I do all of that manually, the size of the LUN in the hardware details of the VM is not being updated.
>
> I personally do not know how, but at least I know that it is possible in oVirt/RHV.
We've seen some setups in our enterprise support where none of the
above-mentioned commands helped after a resize. The host still saw the
old size; only a reboot helped.
So that's going to be difficult to do for all combinations of hardware
and software.
Do you have a reliable set of commands that works in all your resize
cases, so that the host sees the correct size and multipath resizes
reliably?
>
> Regards,
> Timo
>
>>
>>
>>
>> [0]
>> https://bugzilla.proxmox.com/buglist.cgi?bug_severity=enhancement&list_id=50969&resolution=---&short_desc=iscsi&short_desc_type=allwordssubstr
>> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6133
>> [2] https://pve.proxmox.com/wiki/Multipath
>> [3]
>> https://manpages.debian.org/bookworm/multipath-tools/multipath.conf.5.en.html
>>
>