* [pve-devel] iscsi and multipathing
From: Timo Veith via pve-devel @ 2025-04-09 17:21 UTC
To: PVE development discussion; +Cc: Timo Veith
Hi all,
are there any plans for further development of iSCSI and multipathing?
If there are any, what are they, what is their status, and can they be supplemented or contributed to?
Regards,
Timo
* Re: [pve-devel] iscsi and multipathing
From: Mira Limbeck @ 2025-04-15 9:09 UTC
To: Proxmox VE development discussion
Hi Timo,
At the moment I'm working on storage mapping support for iSCSI.
This would allow one to configure different portals on each of the hosts
that are logically the same storage.
If you tried setting up a storage via iSCSI where each host can only
access a part of the portals which are announced, you probably noticed
some higher pvestatd update times.
The storage mapping implementation will alleviate those issues.
Other than that I'm not aware of anyone working on iSCSI improvements at
the moment.
We do have some open enhancement requests in our bug tracker [0]. One of
which is yours [1].
Regarding multipath handling via the GUI there hasn't been much of a
discussion on how we could tackle that yet. It is quite easy to set up
[2] the usual way.
Sorry, I might have missed your bug report previously, so I'll go into a
bit more detail here. (I'll add that information to the enhancement
request as well)
> When adding iSCSI storage to the data center, there could be the possibility
> to do an iSCSI discovery multiple times against different portal IPs and
> thus get multiple paths to an iSCSI SAN.
That's already the default. For each target we run the discovery on at
least one portal since it should announce all other portals. We haven't
encountered a setup where that is not the case.
> multipathd should be updated with the paths to the LUNs. The user
> would only need to add vendor-specific device configs
> like ALUA or multibus settings.
For now that has to be done manually. There exists a multipath.conf
setting that automatically creates a multipath mapping for devices that
have at least 2 paths available: `find_multipaths yes` [3].
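A minimal sketch of how that could look in /etc/multipath.conf (assuming the
option goes into the `defaults` section; merge it with your existing config
and apply it with `multipathd reconfigure`):

    defaults {
        find_multipaths yes
    }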
> Then when adding a certain disk to a VM, it would be good if its WWN
> were displayed instead of e.g. "CH 00 ID0 LUN0", so it would be
> easier to identify the right one.
That would be a nice addition. And shouldn't be too hard to extract that
information in the ISCSIPlugin and provide it as additional information
via the API.
That information could also be listed in the `VM Disks` page of iSCSI
storages.
Would you like to tackle that?
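For illustration, the host-side lookup itself is simple; something along these
lines (the device path is just a placeholder, the plugin would do the
equivalent when listing LUNs):

    /lib/udev/scsi_id -g -u -d /dev/sdX     # WWID/WWN of a single LUN path
    lsblk -d -o NAME,WWN,SIZE,VENDOR,MODEL  # or list the WWN of all disks at once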
> Also, when a LUN has been grown on the storage side,
> it would be handy to have a button in the PVE web GUI to "refresh" the
> disk in the VM. The new size should be reflected in the hardware
> details of the VM, and the QEMU process should be informed of the new
> disk size so the VM would not have to be shut down and restarted.
Based on experience, I doubt it would be that easy. Refreshing of the
LUN sizes involves the SAN, the client, multipath and QEMU. There's
always at least one place where it doesn't update even with
`rescan-scsi-bus.sh`, `multipath -r`, etc.
If you have a reliable way to make all sides agree on the new size,
please let us know.
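For reference, the sequence that usually gets attempted looks roughly like
this (sdX stands for one path device of the resized LUN; on some setups at
least one of these steps simply has no effect):

    iscsiadm -m session -R                 # rescan all logged-in iSCSI sessions
    rescan-scsi-bus.sh -s                  # from sg3-utils, -s looks for resized LUNs
    echo 1 > /sys/block/sdX/device/rescan  # per-path rescan as a fallback
    multipath -r                           # reload the multipath maps with the new size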
[0]
https://bugzilla.proxmox.com/buglist.cgi?bug_severity=enhancement&list_id=50969&resolution=---&short_desc=iscsi&short_desc_type=allwordssubstr
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=6133
[2] https://pve.proxmox.com/wiki/Multipath
[3]
https://manpages.debian.org/bookworm/multipath-tools/multipath.conf.5.en.html
* Re: [pve-devel] iscsi and multipathing
From: Timo Veith via pve-devel @ 2025-04-15 14:10 UTC
To: Mira Limbeck; +Cc: Timo Veith, Proxmox VE development discussion
Hello Mira,
thank you very much for your reply.
> On 15.04.2025 at 11:09, Mira Limbeck <m.limbeck@proxmox.com> wrote:
>
> Hi Timo,
>
> At the moment I'm working on storage mapping support for iSCSI.
> This would allow one to configure different portals on each of the hosts
> that are logically the same storage.
>
> If you tried setting up a storage via iSCSI where each host can only
> access a part of the portals which are announced, you probably noticed
> some higher pvestatd update times.
> The storage mapping implementation will alleviate those issues.
>
> Other than that I'm not aware of anyone working on iSCSI improvements at
> the moment.
> We do have some open enhancement requests in our bug tracker [0]. One of
> which is yours [1].
From the list [0] you mentioned, iSCSI CHAP credentials in the GUI are something we are interested in too.
>
> Regarding multipath handling via the GUI there hasn't been much of a
> discussion on how we could tackle that yet. It is quite easy to set up
> [2] the usual way.
I know that it is easy, because otherwise I wouldn’t have been able to configure it ;)
>
>
> Sorry, I might have missed your bug report previously, so I'll go into a
> bit more detail here. (I'll add that information to the enhancement
> request as well)
>
>> When adding iSCSI storage to the data center, there could be the possibility
>> to do an iSCSI discovery multiple times against different portal IPs and
>> thus get multiple paths to an iSCSI SAN.
>
> That's already the default. For each target we run the discovery on at
> least one portal since it should announce all other portals. We haven't
> encountered a setup where that is not the case.
I am dealing only with setups that do not announce their portals; I have to do an iSCSI discovery for every portal IP address. These are mostly Infortrend iSCSI SAN systems, but also some from Huawei. But I think I know what you mean: some storage devices give you all portals when you do a discovery against one of their IP addresses.
However, it would be great to have the possibility to enter multiple portal IP addresses in the web UI, together with CHAP credentials.
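For reference, this is roughly what I have to do by hand for each portal today
(the IPs and the IQN are just placeholders):

    iscsiadm -m discovery -t sendtargets -p 192.0.2.11
    iscsiadm -m discovery -t sendtargets -p 192.0.2.12
    iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.example -p 192.0.2.11 --login
    iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.example -p 192.0.2.12 --login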
>
>> multipathd should be updated with the paths to the LUNs. The user
>> would only need to add vendor-specific device configs
>> like ALUA or multibus settings.
>
> For now that has to be done manually. There exists a multipath.conf
> setting that automatically creates a multipath mapping for devices that
> have at least 2 paths available: `find_multipaths yes` [3].
I will test `find_multipaths yes`. If I understand you correctly, then the command `multipath -a <wwid>`, as described in the multipath wiki article [2], will not be necessary.
>
>> Then when adding a certain disk to a VM, it would be good if its WWN
>> were displayed instead of e.g. "CH 00 ID0 LUN0", so it would be
>> easier to identify the right one.
>
> That would be a nice addition. And shouldn't be too hard to extract that
> information in the ISCSIPlugin and provide it as additional information
> via the API.
> That information could also be listed in the `VM Disks` page of iSCSI
> storages.
> Would you like to tackle that?
Are you asking me to provide the code for that?
>
>> Also, when a LUN has been grown on the storage side,
>> it would be handy to have a button in the PVE web GUI to "refresh" the
>> disk in the VM. The new size should be reflected in the hardware
>> details of the VM, and the QEMU process should be informed of the new
>> disk size so the VM would not have to be shut down and restarted.
>
> Based on experience, I doubt it would be that easy. Refreshing of the
> LUN sizes involves the SAN, the client, multipath and QEMU. There's
> always at least one place where it doesn't update even with
> `rescan-scsi-bus.sh`, `multipath -r`, etc.
> If you have a reliable way to make all sides agree on the new size,
> please let us know.
Don't get me wrong, I didn't mean that it should be possible to resize an iSCSI disk right from the PVE web GUI. I meant that if one has changed the size of a LUN on the SAN side, with whatever steps are necessary there (e.g. with Infortrend you need to log in to the management software, find the LUN and then resize it), then refreshing that new size could be triggered by a button in the PVE web GUI. When pressing the button, an iSCSI rescan of the corresponding iSCSI session would have to be done, then a multipath map rescan like you wrote, and eventually a QEMU block device refresh (and/or the equivalent for the LXC container).
Even if I do all that manually, the size of the LUN in the hardware details of the VM is not being updated.
I personally do not know how but at least I know that it is possible in ovirt/RHV.
Regards,
Timo
>
>
>
> [0]
> https://bugzilla.proxmox.com/buglist.cgi?bug_severity=enhancement&list_id=50969&resolution=---&short_desc=iscsi&short_desc_type=allwordssubstr
> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6133
> [2] https://pve.proxmox.com/wiki/Multipath
> [3]
> https://manpages.debian.org/bookworm/multipath-tools/multipath.conf.5.en.html
>
* Re: [pve-devel] iscsi and multipathing
From: DERUMIER, Alexandre via pve-devel @ 2025-04-18 6:24 UTC
To: pve-devel; +Cc: DERUMIER, Alexandre
Hi,
> Also, when a LUN has been grown on the storage side,
> it would be handy to have a button in the PVE web GUI to "refresh" the
> disk in the VM. The new size should be reflected in the hardware
> details of the VM, and the QEMU process should be informed of the new
> disk size so the VM would not have to be shut down and restarted.
>>Based on experience, I doubt it would be that easy. Refreshing of the
>>LUN sizes involves the SAN, the client, multipath and QEMU. There's
>>always at least one place where it doesn't update even with
>>`rescan-scsi-bus.sh`, `multipath -r`, etc.
>>If you have a reliable way to make all sides agree on the new size,
>>please let us know.
From what I remember from trying to do this about 10 years ago, another
complexity is refreshing the volume size on all nodes for VM migration
(or it needs a refresh at each VM start).
That's why ZFS over iSCSI uses the QEMU iSCSI driver; it's really much
simpler (but no multipath :/ ).
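For context, a rough sketch of how the qemu iscsi driver addresses a LUN
directly (portal, IQN and LUN number are placeholders):

    -drive file=iscsi://192.0.2.1:3260/iqn.2001-05.com.example:storage/1,if=none,id=drive-scsi0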
* Re: [pve-devel] iscsi and multipathing
From: Mira Limbeck @ 2025-04-18 8:45 UTC
To: Timo Veith; +Cc: Proxmox VE development discussion
On 4/15/25 16:10, Timo Veith wrote:
> Hello Mira,
>
> thank you very much for your reply.
>
>> On 15.04.2025 at 11:09, Mira Limbeck <m.limbeck@proxmox.com> wrote:
>>
>> Hi Timo,
>>
>> At the moment I'm working on storage mapping support for iSCSI.
>> This would allow one to configure different portals on each of the hosts
>> that are logically the same storage.
>>
>> If you tried setting up a storage via iSCSI where each host can only
>> access a part of the portals which are announced, you probably noticed
>> some higher pvestatd update times.
>> The storage mapping implementation will alleviate those issues.
>>
>> Other than that I'm not aware of anyone working on iSCSI improvements at
>> the moment.
>> We do have some open enhancement requests in our bug tracker [0]. One of
>> which is yours [1].
>
> From the list [0] you mentioned, iSCSI CHAP credentials in the GUI are something we are interested in too.
This is probably a bit more work to implement with the current way the
plugin works.
Since the discoverydb is recreated constantly, you would have to set the
credentials before each login. Or pass them to iscsiadm as options,
which needs to make sure that no sensitive information is logged on error.
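A rough sketch of the node options I mean (target, portal and credentials are
placeholders); the plugin would have to run something like this before each
login, and make sure the values never end up in a task log:

    iscsiadm -m node -T iqn.2001-05.com.example:storage -p 192.0.2.11 \
        --op=update -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T iqn.2001-05.com.example:storage -p 192.0.2.11 \
        --op=update -n node.session.auth.username -v pveuser
    iscsiadm -m node -T iqn.2001-05.com.example:storage -p 192.0.2.11 \
        --op=update -n node.session.auth.password -v secret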
>
>>
>> Regarding multipath handling via the GUI there hasn't been much of a
>> discussion on how we could tackle that yet. It is quite easy to set up
>> [2] the usual way.
>
> I know that it is easy, because otherwise I wouldn’t have been able to configure it ;)
>
>
>>
>>
>> Sorry, I might have missed your bug report previously, so I'll go into a
>> bit more detail here. (I'll add that information to the enhancement
>> request as well)
>>
>>> When adding iSCSI storage to the data center, there could be the possibility
>>> to do an iSCSI discovery multiple times against different portal IPs and
>>> thus get multiple paths to an iSCSI SAN.
>>
>> That's already the default. For each target we run the discovery on at
>> least one portal since it should announce all other portals. We haven't
>> encountered a setup where that is not the case.
>
> I am dealing only with setups that do not announce their portals; I have to do an iSCSI discovery for every portal IP address. These are mostly Infortrend iSCSI SAN systems, but also some from Huawei. But I think I know what you mean: some storage devices give you all portals when you do a discovery against one of their IP addresses.
> However, it would be great to have the possibility to enter multiple portal IP addresses in the web UI, together with CHAP credentials.
I tried just allowing multiple portals, and it didn't scale well.
For setups where each host has access to the same portals and targets,
it already works nicely the way it currently is.
But for asymmetric setups where each host can only connect to different
portals, and maybe different targets altogether, it doesn't bring any
benefit.
That's the reason I'm currently working on a `storage mapping` solution
where you can specify host-specific portals and targets, that all map to
the same `logical` storage.
Do your SANs provide the same target on all portals, or is it always a
different target for each portal?
>
>>
>>> multipathd should be updated with the paths to the LUNs. The user
>>> would only need to add vendor-specific device configs
>>> like ALUA or multibus settings.
>>
>> For now that has to be done manually. There exists a multipath.conf
>> setting that automatically creates a multipath mapping for devices that
>> have at least 2 paths available: `find_multipaths yes` [3].
>
> I will test `find_multipaths yes`. If I understand you correctly, then the command `multipath -a <wwid>`, as described in the multipath wiki article [2], will not be necessary.
>
>>
>>> Then when adding a certain disk to a VM, it would be good if its WWN
>>> were displayed instead of e.g. "CH 00 ID0 LUN0", so it would be
>>> easier to identify the right one.
>>
>> That would be a nice addition. And shouldn't be too hard to extract that
>> information in the ISCSIPlugin and provide it as additional information
>> via the API.
>> That information could also be listed in the `VM Disks` page of iSCSI
>> storages.
>> Would you like to tackle that?
>
> Are you asking me to provide the code for that?
Since you mentioned `If there are any, what are they, what is their
status and can they be supplemented or contributed to?` I assumed you
were willing to contribute code as well. That's why I asked if you
wanted to tackle that improvement.
>
>>
>>> Also, when a LUN has been grown on the storage side,
>>> it would be handy to have a button in the PVE web GUI to "refresh" the
>>> disk in the VM. The new size should be reflected in the hardware
>>> details of the VM, and the QEMU process should be informed of the new
>>> disk size so the VM would not have to be shut down and restarted.
>>
>> Based on experience, I doubt it would be that easy. Refreshing of the
>> LUN sizes involves the SAN, the client, multipath and QEMU. There's
>> always at least one place where it doesn't update even with
>> `rescan-scsi-bus.sh`, `multipath -r`, etc.
>> If you have a reliable way to make all sides agree on the new size,
>> please let us know.
>
> Don't get me wrong, I didn't mean that it should be possible to resize an iSCSI disk right from the PVE web GUI. I meant that if one has changed the size of a LUN on the SAN side, with whatever steps are necessary there (e.g. with Infortrend you need to log in to the management software, find the LUN and then resize it), then refreshing that new size could be triggered by a button in the PVE web GUI. When pressing the button, an iSCSI rescan of the corresponding iSCSI session would have to be done, then a multipath map rescan like you wrote, and eventually a QEMU block device refresh (and/or the equivalent for the LXC container).
>
> Even if I do all that manually, the size of the LUN in the hardware details of the VM is not being updated.
>
> I personally do not know how but at least I know that it is possible in ovirt/RHV.
We've seen some setups in our enterprise support where none of the above
mentioned commands helped after a resize. The host still saw the old
size. Only a reboot helped.
So that's going to be difficult to do for all combinations of hardware
and software.
Do you have a reliable set of commands that work in all your cases of a
resize, so that the host sees the correct size, and multipath resizes
reliably?
>
> Regards,
> Timo
>
>>
>>
>>
>> [0]
>> https://bugzilla.proxmox.com/buglist.cgi?bug_severity=enhancement&list_id=50969&resolution=---&short_desc=iscsi&short_desc_type=allwordssubstr
>> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6133
>> [2] https://pve.proxmox.com/wiki/Multipath
>> [3]
>> https://manpages.debian.org/bookworm/multipath-tools/multipath.conf.5.en.html
>>
>
* Re: [pve-devel] iscsi and multipathing
From: Timo Veith via pve-devel @ 2025-04-24 20:27 UTC
To: Mira Limbeck; +Cc: Timo Veith, Proxmox VE development discussion
> On 18.04.2025 at 10:45, Mira Limbeck <m.limbeck@proxmox.com> wrote:
>
> On 4/15/25 16:10, Timo Veith wrote:
>> Hello Mira,
>>
>> thank you very much for your reply.
>>
>>> On 15.04.2025 at 11:09, Mira Limbeck <m.limbeck@proxmox.com> wrote:
>>>
>>> Hi Timo,
>>>
>>> At the moment I'm working on storage mapping support for iSCSI.
>>> This would allow one to configure different portals on each of the hosts
>>> that are logically the same storage.
>>>
>>> If you tried setting up a storage via iSCSI where each host can only
>>> access a part of the portals which are announced, you probably noticed
>>> some higher pvestatd update times.
>>> The storage mapping implementation will alleviate those issues.
>>>
>>> Other than that I'm not aware of anyone working on iSCSI improvements at
>>> the moment.
>>> We do have some open enhancement requests in our bug tracker [0]. One of
>>> which is yours [1].
>>
>> From the list [0] you mentioned, iSCSI CHAP credentials in the GUI are something we are interested in too.
> This is probably a bit more work to implement with the current way the
> plugin works.
> Since the discoverydb is recreated constantly, you would have to set the
> credentials before each login. Or pass them to iscsiadm as options,
> which needs to make sure that no sensitive information is logged on error.
Since you write about the discoverydb, I must admit that I was never in the need of using ``iscsiadm -m discoverydb`` to configure a storage connection; I always used only ``iscsiadm -m discovery``. oVirt/RHV does this with the help of a PostgreSQL DB: all storage connections and their credentials are saved there on the management server. As PVE doesn't have such a server, but does have the distributed ``/etc/pve`` directory, the idea comes to mind to save CHAP credentials there too. And there is already the ``/etc/pve/priv`` directory which holds sensitive data, so maybe that would be a good place for CHAP credentials as well, available for iSCSI logins on all nodes, just like ``/etc/pve/storage.cfg``.
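Just to illustrate the idea (the credentials file name and its format below
are invented by me, this is not an existing PVE feature): the storage
definition could stay in /etc/pve/storage.cfg as it is today, and the secret
could live next to the other sensitive data under /etc/pve/priv, e.g.:

    # /etc/pve/storage.cfg (existing format)
    iscsi: san1
        portal 192.0.2.11
        target iqn.2002-10.com.infortrend:raid.example
        content images

    # /etc/pve/priv/storage/san1.chap (hypothetical file, made up for illustration)
    username pveuser
    password secret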
>
>>
>>>
>>> Regarding multipath handling via the GUI there hasn't been much of a
>>> discussion on how we could tackle that yet. It is quite easy to set up
>>> [2] the usual way.
>>
>> I know that it is easy, because otherwise I wouldn’t have been able to configure it ;)
>>
>>
>>>
>>>
>>> Sorry, I might have missed your bug report previously, so I'll go into a
>>> bit more detail here. (I'll add that information to the enhancement
>>> request as well)
>>>
>>>> When adding iSCSI storage to the data center, there could be the possibility
>>>> to do an iSCSI discovery multiple times against different portal IPs and
>>>> thus get multiple paths to an iSCSI SAN.
>>>
>>> That's already the default. For each target we run the discovery on at
>>> least one portal since it should announce all other portals. We haven't
>>> encountered a setup where that is not the case.
>>
>> I am dealing only with setups that do not announce their portals; I have to do an iSCSI discovery for every portal IP address. These are mostly Infortrend iSCSI SAN systems, but also some from Huawei. But I think I know what you mean: some storage devices give you all portals when you do a discovery against one of their IP addresses.
>> However, it would be great to have the possibility to enter multiple portal IP addresses in the web UI, together with CHAP credentials.
> I tried just allowing multiple portals, and it didn't scale well.
> For setups where each host has access to the same portals and targets,
> it already works nicely the way it currently is.
> But for asymmetric setups where each host can only connect to different
> portals, and maybe different targets altogether, it doesn't bring any
> benefit.
>
> That's the reason I'm currently working on a `storage mapping` solution
> where you can specify host-specific portals and targets, that all map to
> the same `logical` storage.
>
> Do your SANs provide the same target on all portals, or is it always a
> different target for each portal?
What exactly do you mean by "it didn't scale well"?
It may work nicely, but only if you don't need to do a discovery more than once to get all portal/target records. I have put this on my todo list, and I will ask our storage vendor support if it is possible to configure the SANs so they announce all their portals with one discovery. But what if they say that it is not possible at all and you have to do it once for each portal IP?
Asymmetric setups? That sounds a bit weird, if you allow me to say that. Why would one need that? If you have a virtualization cluster, then you very probably want to have VM live migration between all of your cluster nodes. Asymmetric would only allow it between those nodes that are part of the same subgroup, is that correct? Anyway, the same as above applies here too: it would bring the benefit that you can configure multiple paths by doing more than one discovery.
Maybe your `storage mapping` solution would solve that problem too.
I am also thinking about providing you screenshots of the `add storage` dialog of ovirt/RHV and the output of iscsiadm commands against our SAN to show you what I mean. If you want to see those, I could put them somewhere on a public share or a web page.
>
>>
>>>
>>>> multipathd should be updated with the paths to the LUNs. The user
>>>> would only need to add vendor-specific device configs
>>>> like ALUA or multibus settings.
>>>
>>> For now that has to be done manually. There exists a multipath.conf
>>> setting that automatically creates a multipath mapping for devices that
>>> have at least 2 paths available: `find_multipaths yes` [3].
>>
>> I will test `find_multipaths yes`. If I understand you correctly, then the command `multipath -a <wwid>`, as described in the multipath wiki article [2], will not be necessary.
I have tested that, and it works as expected. Thank you for pointing that out!
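What I did to verify, roughly:

    multipathd reconfigure   # after adding find_multipaths yes to /etc/multipath.conf
    multipath -ll            # the new LUN showed up as a map, without a prior `multipath -a <wwid>`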
>>
>>>
>>>> Then when adding a certain disk to a VM, it would be good if its WWN
>>>> were displayed instead of e.g. "CH 00 ID0 LUN0", so it would be
>>>> easier to identify the right one.
>>>
>>> That would be a nice addition. And shouldn't be too hard to extract that
>>> information in the ISCSIPlugin and provide it as additional information
>>> via the API.
>>> That information could also be listed in the `VM Disks` page of iSCSI
>>> storages.
>>> Would you like to tackle that?
>>
>> Are you asking me to provide the code for that?
> Since you mentioned `If there are any, what are they, what is their
> status and can they be supplemented or contributed to?` I assumed you
> were willing to contribute code as well. That's why I asked if you
> wanted to tackle that improvement.
We would like to contribute code, but we do not yet have a colleague with Proxmox VE development skills in our team. We are currently looking for reinforcements who could contribute code. But that seems to take more time. Maybe it is even faster to switch to a different storage protocol in the meantime. So far, we ourselves can only provide ideas and tests. Those ideas come from the many years of use of ovirt/RHV and the trials of switching over to Proxmox VE.
>
>>
>>>
>>>> Also, when a LUN has been grown on the storage side,
>>>> it would be handy to have a button in the PVE web GUI to "refresh" the
>>>> disk in the VM. The new size should be reflected in the hardware
>>>> details of the VM, and the QEMU process should be informed of the new
>>>> disk size so the VM would not have to be shut down and restarted.
>>>
>>> Based on experience, I doubt it would be that easy. Refreshing of the
>>> LUN sizes involves the SAN, the client, multipath and QEMU. There's
>>> always at least one place where it doesn't update even with
>>> `rescan-scsi-bus.sh`, `multipath -r`, etc.
>>> If you have a reliable way to make all sides agree on the new size,
>>> please let us know.
>>
>> Don't get me wrong, I didn't mean that it should be possible to resize an iSCSI disk right from the PVE web GUI. I meant that if one has changed the size of a LUN on the SAN side, with whatever steps are necessary there (e.g. with Infortrend you need to log in to the management software, find the LUN and then resize it), then refreshing that new size could be triggered by a button in the PVE web GUI. When pressing the button, an iSCSI rescan of the corresponding iSCSI session would have to be done, then a multipath map rescan like you wrote, and eventually a QEMU block device refresh (and/or the equivalent for the LXC container).
>>
>> Even if I do all that manually, the size of the LUN in the hardware details of the VM is not being updated.
>>
>> I personally do not know how but at least I know that it is possible in ovirt/RHV.
> We've seen some setups in our enterprise support where none of the above
> mentioned commands helped after a resize. The host still saw the old
> size. Only a reboot helped.
> So that's going to be difficult to do for all combinations of hardware
> and software.
>
> Do you have a reliable set of commands that work in all your cases of a
> resize, so that the host sees the correct size, and multipath resizes
> reliably?
I must admit that I have only tried a LUN resize a single time with Proxmox VE. I resized the LUN on the Infortrend SAN, then I logged into the PVE node and issued ``iscsiadm -m session -R``, then ``multipath -r``. And then, as I couldn't remember the block refresh command for QEMU, I just stopped the test VM and started it again. So I can only say this for this combination of hardware and software with PVE. Now that I write this, I think I am mixing this up with the virsh command ``virsh blockresize <domain> <fully-qualified path of block device> --size <new volume size>``. That is not available on PVE, but there must be a QEMU equivalent to this.
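I suppose the counterpart I was looking for is QEMU's `block_resize` monitor
command; roughly something like this (VM ID, drive name and size are
placeholders, I have not verified this on PVE):

    qm monitor 100
    # at the qm> prompt:
    block_resize drive-scsi0 6144G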
At least the new size should be updated in the PVE web GUI when a LUN was resized. It is just wrong when the LUN size changed from e.g. 5 to 6 TB but the GUI still shows 5 TB, right?
On the other hand, as I already said, I can prove that ovirt/RHV can do it. We have used ovirt/RHV together with Nimble, Huawei, NetApp, Infortrend DS/GS systems and one TrueNAS Core storage system.
We have looked for the code that implements this in ovirt/RHV and found this repo [4]. The folder ``lib/vdsm/storage/`` holds iscsi.py, hsm.py, and multipath.py.
But I am too inexperienced at reading code that is split into modules and libraries. As already said, I can provide screenshots and command outputs that prove that it is working. We could also do a video call with a live session on this too.
>>
>>>
>>>
>>>
>>> [0]
>>> https://bugzilla.proxmox.com/buglist.cgi?bug_severity=enhancement&list_id=50969&resolution=---&short_desc=iscsi&short_desc_type=allwordssubstr
>>> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6133
>>> [2] https://pve.proxmox.com/wiki/Multipath
>>> [3]
>>> https://manpages.debian.org/bookworm/multipath-tools/multipath.conf.5.en.html
[4] https://github.com/oVirt/vdsm