public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH pve-manager] ui: ha: consider status/presence of qdevice for warning
@ 2025-09-12 10:16 Hannes Laimer
  2025-09-12 16:36 ` Thomas Lamprecht
  2025-09-15  6:51 ` [pve-devel] [PATCH pve-manager] fixup! " Hannes Laimer
  0 siblings, 2 replies; 6+ messages in thread
From: Hannes Laimer @ 2025-09-12 10:16 UTC (permalink / raw)
  To: pve-devel

We showed this warning for setups with two nodes and a qdevice, but for
setups like this, the warning didn't make sense. This checks whether a
qdevice is connected to the cluster before showing the 'not enough votes
for reliable HA' warning.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
This came up in support[1].

[1] https://my.proxmox.com/en/dbsfk/ticket/view/20332

 www/manager6/ha/ResourceEdit.js | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/www/manager6/ha/ResourceEdit.js b/www/manager6/ha/ResourceEdit.js
index 428672a8..04795da7 100644
--- a/www/manager6/ha/ResourceEdit.js
+++ b/www/manager6/ha/ResourceEdit.js
@@ -54,7 +54,18 @@ Ext.define('PVE.ha.VMResourceInputPanel', {
                 });
 
                 if (votes < MIN_QUORUM_VOTES) {
-                    fewVotesHint.setVisible(true);
+                    Proxmox.Utils.API2Request({
+                        url: '/cluster/config/qdevice',
+                        method: 'GET',
+                        failure: function (response) {
+                            fewVotesHint.setVisible(true);
+                        },
+                        success: function (response) {
+                            let qdeviceStatus = response.result.data;
+                            let qdeviceConnected = qdeviceStatus.State === 'Connected';
+                            fewVotesHint.setVisible(!qdeviceConnected);
+                        },
+                    });
                 }
             },
         });
-- 
2.47.3




* Re: [pve-devel] [PATCH pve-manager] ui: ha: consider status/presence of qdevice for warning
  2025-09-12 10:16 [pve-devel] [PATCH pve-manager] ui: ha: consider status/presence of qdevice for warning Hannes Laimer
@ 2025-09-12 16:36 ` Thomas Lamprecht
  2025-09-15  6:52   ` Hannes Laimer
  2025-09-15  6:51 ` [pve-devel] [PATCH pve-manager] fixup! " Hannes Laimer
  1 sibling, 1 reply; 6+ messages in thread
From: Thomas Lamprecht @ 2025-09-12 16:36 UTC (permalink / raw)
  To: Proxmox VE development discussion, Hannes Laimer

On 12.09.25 at 12:17, Hannes Laimer wrote:
> We showed this warning for setups with two nodes and a qdevice, but for
> setups like this, the warning didn't make sense. This checks whether a
> qdevice is connected to the cluster before showing the 'not enough votes
> for reliable HA' warning.

Maybe it would be a bit more robust if we pulled that info in on the
backend side already – but disclaimer: I did not check just now, so you
really need to evaluate whether it indeed makes sense or whether
applying this now as is might be the better route forward.

> 
> Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
> ---
> This came up in support[1].
> 
> [1] https://my.proxmox.com/en/dbsfk/ticket/view/20332
> 
>  www/manager6/ha/ResourceEdit.js | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/www/manager6/ha/ResourceEdit.js b/www/manager6/ha/ResourceEdit.js
> index 428672a8..04795da7 100644
> --- a/www/manager6/ha/ResourceEdit.js
> +++ b/www/manager6/ha/ResourceEdit.js
> @@ -54,7 +54,18 @@ Ext.define('PVE.ha.VMResourceInputPanel', {
>                  });
>  
>                  if (votes < MIN_QUORUM_VOTES) {
> -                    fewVotesHint.setVisible(true);
> +                    Proxmox.Utils.API2Request({
> +                        url: '/cluster/config/qdevice',
> +                        method: 'GET',
> +                        failure: function (response) {
> +                            fewVotesHint.setVisible(true);
> +                        },
> +                        success: function (response) {
> +                            let qdeviceStatus = response.result.data;
> +                            let qdeviceConnected = qdeviceStatus.State === 'Connected';
> +                            fewVotesHint.setVisible(!qdeviceConnected);
> +                        },
> +                    });

nit: I would slightly prefer such inline requests to use the async
Proxmox.Async.api2 method, as that makes the code flow a bit more
linear and easier to grasp.
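
Roughly something like this (just an untested sketch to illustrate the
idea – it assumes the enclosing success handler gets marked async, and
that Proxmox.Async.api2 rejects on request errors, so a missing qdevice
ends up in the catch block):

```
if (votes < MIN_QUORUM_VOTES) {
    try {
        // resolves with the usual { result } response object
        let { result } = await Proxmox.Async.api2({
            url: '/cluster/config/qdevice',
            method: 'GET',
        });
        // only hide the hint if the qdevice is actually connected
        fewVotesHint.setVisible(result.data.State !== 'Connected');
    } catch (_err) {
        // no qdevice configured or request failed -> keep showing the hint
        fewVotesHint.setVisible(true);
    }
}
```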

>                  }
>              },
>          });




* [pve-devel] [PATCH pve-manager] fixup! ui: ha: consider status/presence of qdevice for warning
  2025-09-12 10:16 [pve-devel] [PATCH pve-manager] ui: ha: consider status/presence of qdevice for warning Hannes Laimer
  2025-09-12 16:36 ` Thomas Lamprecht
@ 2025-09-15  6:51 ` Hannes Laimer
  1 sibling, 0 replies; 6+ messages in thread
From: Hannes Laimer @ 2025-09-15  6:51 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
 www/manager6/ha/ResourceEdit.js | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/www/manager6/ha/ResourceEdit.js b/www/manager6/ha/ResourceEdit.js
index 04795da7..ae94f002 100644
--- a/www/manager6/ha/ResourceEdit.js
+++ b/www/manager6/ha/ResourceEdit.js
@@ -45,7 +45,7 @@ Ext.define('PVE.ha.VMResourceInputPanel', {
             failure: function (response) {
                 Ext.Msg.alert(gettext('Error'), response.htmlStatus);
             },
-            success: function (response) {
+            success: async function (response) {
                 var nodes = response.result.data;
                 var votes = 0;
                 Ext.Array.forEach(nodes, function (node) {
@@ -54,18 +54,16 @@ Ext.define('PVE.ha.VMResourceInputPanel', {
                 });
 
                 if (votes < MIN_QUORUM_VOTES) {
-                    Proxmox.Utils.API2Request({
-                        url: '/cluster/config/qdevice',
-                        method: 'GET',
-                        failure: function (response) {
-                            fewVotesHint.setVisible(true);
-                        },
-                        success: function (response) {
-                            let qdeviceStatus = response.result.data;
-                            let qdeviceConnected = qdeviceStatus.State === 'Connected';
-                            fewVotesHint.setVisible(!qdeviceConnected);
-                        },
-                    });
+                    try {
+                        let { result: qres } = await Proxmox.Async.api2({
+                            url: '/cluster/config/qdevice',
+                            method: 'GET',
+                        });
+                        let qdeviceConnected = qres.data.State === 'Connected';
+                        fewVotesHint.setVisible(!qdeviceConnected);
+                    } catch (_) {
+                        fewVotesHint.setVisible(true);
+                    }
                 }
             },
         });
-- 
2.47.3




* Re: [pve-devel] [PATCH pve-manager] ui: ha: consider status/presence of qdevice for warning
  2025-09-12 16:36 ` Thomas Lamprecht
@ 2025-09-15  6:52   ` Hannes Laimer
  2025-09-15 10:25     ` Thomas Lamprecht
  0 siblings, 1 reply; 6+ messages in thread
From: Hannes Laimer @ 2025-09-15  6:52 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 12.09.25 18:36, Thomas Lamprecht wrote:
> On 12.09.25 at 12:17, Hannes Laimer wrote:
>> We showed this warning for setups with two nodes and a qdevice, but for
>> setups like this, the warning didn't make sense. This checks whether a
>> qdevice is connected to the cluster before showing the 'not enough votes
>> for reliable HA' warning.
> 
> Maybe it would be a bit more robust if we pulled that info in on the
> backend side already – but disclaimer: I did not check just now, so you
> really need to evaluate whether it indeed makes sense or whether
> applying this now as is might be the better route forward.
> 

I did consider that, but the `/cluster/config/nodes` endpoint really
seems to be intended only for actual corosync nodes, and it is also
used that way. We could maybe add an `include-qdevice` flag, but I
think that would seem rather out of place:
```
   {
     "name": "qClusterN1",
     "node": "qClusterN1",
     "nodeid": "1",
     "quorum_votes": "1",
     "ring0_addr": "192.168.55.44"
   },
```
is what it currently returns; none of those fields except name(?)
would really make sense for a qdevice.

>>
>> Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
>> ---
>> This came up in support[1].
>>
>> [1] https://my.proxmox.com/en/dbsfk/ticket/view/20332
>>
>>   www/manager6/ha/ResourceEdit.js | 13 ++++++++++++-
>>   1 file changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/www/manager6/ha/ResourceEdit.js b/www/manager6/ha/ResourceEdit.js
>> index 428672a8..04795da7 100644
>> --- a/www/manager6/ha/ResourceEdit.js
>> +++ b/www/manager6/ha/ResourceEdit.js
>> @@ -54,7 +54,18 @@ Ext.define('PVE.ha.VMResourceInputPanel', {
>>                   });
>>   
>>                   if (votes < MIN_QUORUM_VOTES) {
>> -                    fewVotesHint.setVisible(true);
>> +                    Proxmox.Utils.API2Request({
>> +                        url: '/cluster/config/qdevice',
>> +                        method: 'GET',
>> +                        failure: function (response) {
>> +                            fewVotesHint.setVisible(true);
>> +                        },
>> +                        success: function (response) {
>> +                            let qdeviceStatus = response.result.data;
>> +                            let qdeviceConnected = qdeviceStatus.State === 'Connected';
>> +                            fewVotesHint.setVisible(!qdeviceConnected);
>> +                        },
>> +                    });
> 
> nit: I would slightly prefer such inline requests to use the async
> Proxmox.Async.api2 method, as that makes the code flow a bit more
> linear and easier to grasp.
> 

sure, I've sent a fixup, you can just squash that in

>>                   }
>>               },
>>           });
> 




* Re: [pve-devel] [PATCH pve-manager] ui: ha: consider status/presence of qdevice for warning
  2025-09-15  6:52   ` Hannes Laimer
@ 2025-09-15 10:25     ` Thomas Lamprecht
  2025-09-15 10:29       ` Hannes Laimer
  0 siblings, 1 reply; 6+ messages in thread
From: Thomas Lamprecht @ 2025-09-15 10:25 UTC (permalink / raw)
  To: Hannes Laimer, Proxmox VE development discussion

On 15.09.25 at 08:51, Hannes Laimer wrote:
> On 12.09.25 18:36, Thomas Lamprecht wrote:
>> On 12.09.25 at 12:17, Hannes Laimer wrote:
>>> We showed this warning for setups with two nodes and a qdevice, but for
>>> setups like this, the warning didn't make sense. This checks whether a
>>> qdevice is connected to the cluster before showing the 'not enough votes
>>> for reliable HA' warning.
>>
>> Maybe it would be a bit more robust if we pulled that info in on the
>> backend side already – but disclaimer: I did not check just now, so you
>> really need to evaluate whether it indeed makes sense or whether
>> applying this now as is might be the better route forward.
>>
> 
> I did consider that, but the `/cluster/config/nodes` endpoint really
> seems to be intended only for actual corosync nodes, and it is also
> used that way. We could maybe add an `include-qdevice` flag, but I
> think that would seem rather out of place:
> ```
>   {
>     "name": "qClusterN1",
>     "node": "qClusterN1",
>     "nodeid": "1",
>     "quorum_votes": "1",
>     "ring0_addr": "192.168.55.44"
>   },
> ```
> is what it currently returns; none of those fields except name(?)
> would really make sense for a qdevice.

True; if anything, that would better fit into a new status endpoint
there, but I see no hard need to create such an endpoint just for
this check now.



* Re: [pve-devel] [PATCH pve-manager] ui: ha: consider status/presence of qdevice for warning
  2025-09-15 10:25     ` Thomas Lamprecht
@ 2025-09-15 10:29       ` Hannes Laimer
  0 siblings, 0 replies; 6+ messages in thread
From: Hannes Laimer @ 2025-09-15 10:29 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion



On 15.09.25 12:25, Thomas Lamprecht wrote:
> On 15.09.25 at 08:51, Hannes Laimer wrote:
>> On 12.09.25 18:36, Thomas Lamprecht wrote:
>>> On 12.09.25 at 12:17, Hannes Laimer wrote:
>>>> We showed this warning for setups with two nodes and a qdevice, but for
>>>> setups like this, the warning didn't make sense. This checks whether a
>>>> qdevice is connected to the cluster before showing the 'not enough votes
>>>> for reliable HA' warning.
>>>
>>> Maybe it would be a bit more robust if we pulled that info in on the
>>> backend side already – but disclaimer: I did not check just now, so you
>>> really need to evaluate whether it indeed makes sense or whether
>>> applying this now as is might be the better route forward.
>>>
>>
>> I did consider that, but the `/cluster/config/nodes` endpoint really
>> seems to be intended only for actual corosync nodes, and it is also
>> used that way. We could maybe add an `include-qdevice` flag, but I
>> think that would seem rather out of place:
>> ```
>>    {
>>      "name": "qClusterN1",
>>      "node": "qClusterN1",
>>      "nodeid": "1",
>>      "quorum_votes": "1",
>>      "ring0_addr": "192.168.55.44"
>>    },
>> ```
>> is what it currently returns; none of those fields except name(?)
>> would really make sense for a qdevice.
> 
> True; if anything, that would better fit into a new status endpoint
> there, but I see no hard need to create such an endpoint just for
> this check now.

That's what I meant :) so just doing a second GET to
`/cluster/config/qdevice` made sense.


