* [PATCH-SERIES ha-manager/manager 0/3] fix #7557: introduce 'auto-rebalance' property
@ 2026-05-11 15:57 Dominik Rusovac
  2026-05-11 15:57 ` [PATCH pve-manager 1/3] ui: ha: add auto-rebalance flag Dominik Rusovac
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Dominik Rusovac @ 2026-05-11 15:57 UTC (permalink / raw)
  To: pve-devel

# TL;DR 
Add 'auto-rebalance' property to HA resources config, which gives users
control over which HA resources may be moved by dynamic CRS during
automatic rebalancing.

# Details
The 'auto-rebalance' flag is set to true by default. As requested by [0],
disabling 'auto-rebalance' for some HA resource, say vm:100, means that vm:100
will be disregarded as a migration candidate during auto-rebalancing. Any HA
resource with a positive affinity for vm:100 will be disregarded too.
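
The bundle semantics above can be sketched as follows. This is a minimal, illustrative Python model of the intended behavior, not the actual Perl implementation in pve-ha-manager; the function and parameter names are made up for illustration:

```python
# Illustrative sketch of the 'auto-rebalance' semantics, not the actual
# Perl implementation; all names here are hypothetical.
def rebalance_candidates(services, config, positive_affinity):
    """Return started resources whose whole positive-affinity bundle
    may be moved, i.e. no member has 'auto-rebalance' disabled."""
    candidates = []
    for sid, svc in sorted(services.items()):
        if svc["state"] != "started":
            continue
        # the bundle is the resource plus everything positively bound to it
        bundle = {sid} | set(positive_affinity.get(sid, ()))
        # disabling 'auto-rebalance' on any member disregards the whole bundle
        if all(config.get(m, {}).get("auto-rebalance", 1) for m in bundle):
            candidates.append(sid)
    return candidates
```

For example, with vm:100 opted out and vm:101 positively bound to it, both drop out of rebalancing, while unrelated started resources remain candidates.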

# Summary of Changes
This series:
- introduces a property to control whether an HA resource may be migrated
  during automatic rebalancing;
- adds a corresponding flag to the PVE UI.

# Refs
[0] https://bugzilla.proxmox.com/show_bug.cgi?id=7557


pve-manager:

Dominik Rusovac (1):
  ui: ha: add auto-rebalance flag

 www/manager6/ha/ResourceEdit.js | 14 ++++++++++++++
 www/manager6/ha/Resources.js    |  6 ++++++
 www/manager6/ha/StatusView.js   |  4 ++++
 3 files changed, 24 insertions(+)


pve-ha-manager:

Dominik Rusovac (2):
  manager: set service config value in self
  fix #7557: introduce 'auto-rebalance' property

 src/PVE/API2/HA/Resources.pm      |   7 ++
 src/PVE/API2/HA/Status.pm         |   9 +-
 src/PVE/HA/Config.pm              |   2 +
 src/PVE/HA/Manager.pm             |  23 +++--
 src/PVE/HA/Resources.pm           |   6 ++
 src/PVE/HA/Resources/PVECT.pm     |   1 +
 src/PVE/HA/Resources/PVEVM.pm     |   1 +
 src/PVE/HA/Sim/Hardware.pm        |   1 +
 src/test/test_resource_bundles.pl | 134 +++++++++++++++++++++++++++++-
 9 files changed, 172 insertions(+), 12 deletions(-)


Summary over all repositories:
  12 files changed, 196 insertions(+), 12 deletions(-)

-- 
Generated by murpp 0.11.0




^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH pve-manager 1/3] ui: ha: add auto-rebalance flag
  2026-05-11 15:57 [PATCH-SERIES ha-manager/manager 0/3] fix #7557: introduce 'auto-rebalance' property Dominik Rusovac
@ 2026-05-11 15:57 ` Dominik Rusovac
  2026-05-12  9:05   ` Daniel Kral
  2026-05-11 15:57 ` [PATCH pve-ha-manager 2/3] manager: set service config value in self Dominik Rusovac
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Dominik Rusovac @ 2026-05-11 15:57 UTC (permalink / raw)
  To: pve-devel

Adapt the implementation of the 'failback' flag for the 'auto-rebalance'
flag, which controls whether an HA resource is allowed to migrate during
automatic rebalancing.

Signed-off-by: Dominik Rusovac <d.rusovac@proxmox.com>
---
 www/manager6/ha/ResourceEdit.js | 14 ++++++++++++++
 www/manager6/ha/Resources.js    |  6 ++++++
 www/manager6/ha/StatusView.js   |  4 ++++
 3 files changed, 24 insertions(+)

diff --git a/www/manager6/ha/ResourceEdit.js b/www/manager6/ha/ResourceEdit.js
index a4f53dad..487d9cf3 100644
--- a/www/manager6/ha/ResourceEdit.js
+++ b/www/manager6/ha/ResourceEdit.js
@@ -12,6 +12,7 @@ Ext.define('PVE.ha.VMResourceInputPanel', {
         delete values.vmid;
 
         PVE.Utils.delete_if_default(values, 'failback', '1', me.isCreate);
+        PVE.Utils.delete_if_default(values, 'auto-rebalance', '1', me.isCreate);
         PVE.Utils.delete_if_default(values, 'max_restart', '1', me.isCreate);
         PVE.Utils.delete_if_default(values, 'max_relocate', '1', me.isCreate);
 
@@ -123,6 +124,19 @@ Ext.define('PVE.ha.VMResourceInputPanel', {
                 uncheckedValue: 0,
                 value: 1,
             },
+            {
+                xtype: 'proxmoxcheckbox',
+                name: 'auto-rebalance',
+                fieldLabel: gettext('Auto-Rebalance'),
+                autoEl: {
+                    tag: 'div',
+                    'data-qtip': gettext(
+                        'Enable if HA resource may be migrated during automatic rebalancing.',
+                    ),
+                },
+                uncheckedValue: 0,
+                value: 1,
+            },
             {
                 xtype: 'proxmoxKVComboBox',
                 name: 'state',
diff --git a/www/manager6/ha/Resources.js b/www/manager6/ha/Resources.js
index 621ed336..2fda3b24 100644
--- a/www/manager6/ha/Resources.js
+++ b/www/manager6/ha/Resources.js
@@ -150,6 +150,12 @@ Ext.define('PVE.ha.ResourcesView', {
                     sortable: true,
                     dataIndex: 'failback',
                 },
+                {
+                    header: gettext('Auto-Rebalance'),
+                    width: 100,
+                    sortable: true,
+                    dataIndex: 'auto-rebalance',
+                },
                 {
                     header: gettext('Description'),
                     flex: 1,
diff --git a/www/manager6/ha/StatusView.js b/www/manager6/ha/StatusView.js
index bc2da71f..59fcc6f3 100644
--- a/www/manager6/ha/StatusView.js
+++ b/www/manager6/ha/StatusView.js
@@ -84,6 +84,10 @@ Ext.define(
                     name: 'failback',
                     type: 'boolean',
                 },
+                {
+                    name: 'auto-rebalance',
+                    type: 'boolean',
+                },
                 'max_restart',
                 'max_relocate',
                 'type',
-- 
2.47.3






* [PATCH pve-ha-manager 2/3] manager: set service config value in self
  2026-05-11 15:57 [PATCH-SERIES ha-manager/manager 0/3] fix #7557: introduce 'auto-rebalance' property Dominik Rusovac
  2026-05-11 15:57 ` [PATCH pve-manager 1/3] ui: ha: add auto-rebalance flag Dominik Rusovac
@ 2026-05-11 15:57 ` Dominik Rusovac
  2026-05-12  9:06   ` Daniel Kral
  2026-05-11 15:57 ` [PATCH pve-ha-manager 3/3] fix #7557: introduce 'auto-rebalance' property Dominik Rusovac
  2026-05-12  9:21 ` [PATCH-SERIES ha-manager/manager 0/3] " Daniel Kral
  3 siblings, 1 reply; 11+ messages in thread
From: Dominik Rusovac @ 2026-05-11 15:57 UTC (permalink / raw)
  To: pve-devel

This is in preparation for the follow-up patch.

Reading the value of the 'auto-rebalance' flag in the service config of an
HA resource is required to perform proper resource bundling.

Signed-off-by: Dominik Rusovac <d.rusovac@proxmox.com>
---
 src/PVE/HA/Manager.pm | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index b69a6bb..2a4b31e 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -1003,6 +1003,7 @@ sub manage {
     $self->try_persistent_group_migration();
 
     my ($sc, $services_digest) = $haenv->read_service_config();
+    $self->{sc} = $sc;
 
     $self->{groups} = $haenv->read_group_config(); # update
 
@@ -1011,9 +1012,9 @@ sub manage {
     # skip service add/remove when disarmed - handle_disarm manages service status
     if (!$ms->{disarm}) {
         # add new service
-        foreach my $sid (sort keys %$sc) {
+        foreach my $sid (sort keys $self->{sc}->%*) {
             next if $ss->{$sid}; # already there
-            my $cd = $sc->{$sid};
+            my $cd = $self->{sc}->{$sid};
             next if $cd->{state} eq 'ignored';
 
             $haenv->log('info', "adding new service '$sid' on node '$cd->{node}'");
@@ -1028,9 +1029,9 @@ sub manage {
 
         # remove stale or ignored services from manager state
         foreach my $sid (keys %$ss) {
-            next if $sc->{$sid} && $sc->{$sid}->{state} ne 'ignored';
+            next if $self->{sc}->{$sid} && $self->{sc}->{$sid}->{state} ne 'ignored';
 
-            my $reason = defined($sc->{$sid}) ? 'ignored state requested' : 'no config';
+            my $reason = defined($self->{sc}->{$sid}) ? 'ignored state requested' : 'no config';
             $haenv->log('info', "removing stale service '$sid' ($reason)");
 
             # remove all service related state information
@@ -1088,7 +1089,7 @@ sub manage {
         foreach my $sid (sort keys %$ss) {
             next if $deferred_sids && !$deferred_sids->{$sid};
             my $sd = $ss->{$sid};
-            my $cd = $sc->{$sid} || { state => 'disabled' };
+            my $cd = $self->{sc}->{$sid} || { state => 'disabled' };
 
             my $lrm_res = $sd->{uid} ? $lrm_results->{ $sd->{uid} } : undef;
 
-- 
2.47.3






* [PATCH pve-ha-manager 3/3] fix #7557: introduce 'auto-rebalance' property
  2026-05-11 15:57 [PATCH-SERIES ha-manager/manager 0/3] fix #7557: introduce 'auto-rebalance' property Dominik Rusovac
  2026-05-11 15:57 ` [PATCH pve-manager 1/3] ui: ha: add auto-rebalance flag Dominik Rusovac
  2026-05-11 15:57 ` [PATCH pve-ha-manager 2/3] manager: set service config value in self Dominik Rusovac
@ 2026-05-11 15:57 ` Dominik Rusovac
  2026-05-12  9:07   ` Daniel Kral
  2026-05-12  9:21 ` [PATCH-SERIES ha-manager/manager 0/3] " Daniel Kral
  3 siblings, 1 reply; 11+ messages in thread
From: Dominik Rusovac @ 2026-05-11 15:57 UTC (permalink / raw)
  To: pve-devel

Add 'auto-rebalance' property to HA resources config, which gives users
control over which HA resources may be moved by dynamic CRS during
automatic rebalancing.

The 'auto-rebalance' flag is set to true by default. Disabling
'auto-rebalance' for some HA resource, say vm:100, means that vm:100
will be disregarded as a migration candidate during auto-rebalancing.
Any HA resource with a positive affinity for vm:100 will be disregarded
too.

Tests validate that an entire resource bundle will be disregarded if any
resource belonging to the bundle has 'auto-rebalance' disabled.

Signed-off-by: Dominik Rusovac <d.rusovac@proxmox.com>
---
 src/PVE/API2/HA/Resources.pm      |   7 ++
 src/PVE/API2/HA/Status.pm         |   9 +-
 src/PVE/HA/Config.pm              |   2 +
 src/PVE/HA/Manager.pm             |  12 ++-
 src/PVE/HA/Resources.pm           |   6 ++
 src/PVE/HA/Resources/PVECT.pm     |   1 +
 src/PVE/HA/Resources/PVEVM.pm     |   1 +
 src/PVE/HA/Sim/Hardware.pm        |   1 +
 src/test/test_resource_bundles.pl | 134 +++++++++++++++++++++++++++++-
 9 files changed, 166 insertions(+), 7 deletions(-)

diff --git a/src/PVE/API2/HA/Resources.pm b/src/PVE/API2/HA/Resources.pm
index e0690d5..181cdbb 100644
--- a/src/PVE/API2/HA/Resources.pm
+++ b/src/PVE/API2/HA/Resources.pm
@@ -142,6 +142,13 @@ __PACKAGE__->register_method({
                 optional => 1,
                 default => 1,
             },
+            'auto-rebalance' => {
+                description => "HA resource may be migrated during"
+                    . " automatic rebalancing.",
+                type => 'boolean',
+                optional => 1,
+                default => 1,
+            },
             group => get_standard_option('pve-ha-group-id', { optional => 1 }),
             max_restart => {
                 description => "Maximal number of tries to restart the service on"
diff --git a/src/PVE/API2/HA/Status.pm b/src/PVE/API2/HA/Status.pm
index 4894f3b..c352aa1 100644
--- a/src/PVE/API2/HA/Status.pm
+++ b/src/PVE/API2/HA/Status.pm
@@ -121,6 +121,13 @@ __PACKAGE__->register_method({
                     optional => 1,
                     default => 1,
                 },
+                'auto-rebalance' => {
+                    description => "HA resource may be migrated during"
+                        . " automatic rebalancing.",
+                    type => 'boolean',
+                    optional => 1,
+                    default => 1,
+                },
                 max_relocate => {
                     description => "For type 'service'.",
                     type => "integer",
@@ -333,7 +340,7 @@ __PACKAGE__->register_method({
             # also return common resource attributes
             if (defined($sc)) {
                 $data->{request_state} = $sc->{state};
-                foreach my $key (qw(group max_restart max_relocate failback comment)) {
+                foreach my $key (qw(group max_restart max_relocate failback comment auto-rebalance)) {
                     $data->{$key} = $sc->{$key} if defined($sc->{$key});
                 }
             }
diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
index a34a302..54a6503 100644
--- a/src/PVE/HA/Config.pm
+++ b/src/PVE/HA/Config.pm
@@ -118,8 +118,10 @@ sub read_and_check_resources_config {
         $d->{state} = 'started' if !defined($d->{state});
         $d->{state} = 'started' if $d->{state} eq 'enabled'; # backward compatibility
         $d->{failback} = 1 if !defined($d->{failback});
+        $d->{'auto-rebalance'} = 1 if !defined($d->{'auto-rebalance'});
         $d->{max_restart} = 1 if !defined($d->{max_restart});
         $d->{max_relocate} = 1 if !defined($d->{max_relocate});
+
         if (PVE::HA::Resources->lookup($d->{type})) {
             if (my $vmd = $vmlist->{ids}->{$name}) {
                 $d->{node} = $vmd->{node};
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 2a4b31e..68e3cd6 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -136,12 +136,14 @@ sub update_crs_scheduler_mode {
 # HA resource in the resource bundle and also the key of each resource bundle
 # in the returned hash.
 sub get_active_stationary_resource_bundles {
-    my ($ss, $resource_affinity) = @_;
+    my ($ss, $sc, $resource_affinity) = @_;
 
     my $resource_bundles = {};
 OUTER: for my $sid (sort keys %$ss) {
         # do not consider non-started resource as 'active' leading resource
         next if $ss->{$sid}->{state} ne 'started';
+        # do not consider resource if it may not be moved
+        next if !$sc->{$sid}->{'auto-rebalance'};
 
         my @resources = ($sid);
         my $nodes = { $ss->{$sid}->{node} => 1 };
@@ -156,6 +158,8 @@ OUTER: for my $sid (sort keys %$ss) {
                 next OUTER if $state eq 'migrate' || $state eq 'relocate';
                 # do not add non-started resource to active bundle
                 next if $state ne 'started';
+                # do not consider stationary bundle if a dependent resource may not be moved
+                next OUTER if !$sc->{$csid}->{'auto-rebalance'};
 
                 $nodes->{$node} = 1;
 
@@ -182,12 +186,12 @@ OUTER: for my $sid (sort keys %$ss) {
 sub get_resource_migration_candidates {
     my ($self) = @_;
 
-    my ($ss, $compiled_rules, $online_node_usage) =
-        $self->@{qw(ss compiled_rules online_node_usage)};
+    my ($ss, $sc, $compiled_rules, $online_node_usage) =
+        $self->@{qw(ss sc compiled_rules online_node_usage)};
     my ($node_affinity, $resource_affinity) =
         $compiled_rules->@{qw(node-affinity resource-affinity)};
 
-    my $resource_bundles = get_active_stationary_resource_bundles($ss, $resource_affinity);
+    my $resource_bundles = get_active_stationary_resource_bundles($ss, $sc, $resource_affinity);
 
     my @compact_migration_candidates = ();
     for my $leader_sid (sort keys %$resource_bundles) {
diff --git a/src/PVE/HA/Resources.pm b/src/PVE/HA/Resources.pm
index 4238d9b..df7c1ff 100644
--- a/src/PVE/HA/Resources.pm
+++ b/src/PVE/HA/Resources.pm
@@ -71,6 +71,12 @@ EODESC
             optional => 1,
             default => 1,
         },
+        'auto-rebalance' => {
+            description => "HA resource may be migrated during automatic rebalancing",
+            type => 'boolean',
+            optional => 1,
+            default => 1,
+        },
         max_restart => {
             description => "Maximal number of tries to restart the resource on"
                 . " a node after its start failed. When reached, the HA manager will try to"
diff --git a/src/PVE/HA/Resources/PVECT.pm b/src/PVE/HA/Resources/PVECT.pm
index 0943d5e..177b907 100644
--- a/src/PVE/HA/Resources/PVECT.pm
+++ b/src/PVE/HA/Resources/PVECT.pm
@@ -38,6 +38,7 @@ sub options {
         group => { optional => 1 },
         comment => { optional => 1 },
         failback => { optional => 1 },
+        'auto-rebalance' => { optional => 1 },
         max_restart => { optional => 1 },
         max_relocate => { optional => 1 },
     };
diff --git a/src/PVE/HA/Resources/PVEVM.pm b/src/PVE/HA/Resources/PVEVM.pm
index 1621c15..98ea7d6 100644
--- a/src/PVE/HA/Resources/PVEVM.pm
+++ b/src/PVE/HA/Resources/PVEVM.pm
@@ -38,6 +38,7 @@ sub options {
         group => { optional => 1 },
         comment => { optional => 1 },
         failback => { optional => 1 },
+        'auto-rebalance' => { optional => 1 },
         max_restart => { optional => 1 },
         max_relocate => { optional => 1 },
     };
diff --git a/src/PVE/HA/Sim/Hardware.pm b/src/PVE/HA/Sim/Hardware.pm
index 82f85c9..e247bfa 100644
--- a/src/PVE/HA/Sim/Hardware.pm
+++ b/src/PVE/HA/Sim/Hardware.pm
@@ -115,6 +115,7 @@ sub read_service_config {
         $d->{state} = 'disabled' if !$d->{state};
         $d->{state} = 'started' if $d->{state} eq 'enabled'; # backward compatibility
         $d->{failback} = 1 if !defined($d->{failback});
+        $d->{'auto-rebalance'} = 1 if !defined($d->{'auto-rebalance'});
         $d->{max_restart} = 1 if !defined($d->{max_restart});
         $d->{max_relocate} = 1 if !defined($d->{max_relocate});
     }
diff --git a/src/test/test_resource_bundles.pl b/src/test/test_resource_bundles.pl
index d38dc51..7df96a9 100755
--- a/src/test/test_resource_bundles.pl
+++ b/src/test/test_resource_bundles.pl
@@ -21,6 +21,10 @@ my $get_active_stationary_resource_bundle_tests = [
                 node => 'node1',
             },
         },
+        service_config => {
+            'vm:101' => { 'auto-rebalance' => 1 },
+            'vm:102' => { 'auto-rebalance' => 1 },
+        },
         resource_affinity => {
             positive => {},
             negative => {},
@@ -46,6 +50,10 @@ my $get_active_stationary_resource_bundle_tests = [
                 node => 'node1',
             },
         },
+        service_config => {
+            'vm:101' => { 'auto-rebalance' => 1 },
+            'vm:102' => { 'auto-rebalance' => 1 },
+        },
         resource_affinity => {
             positive => {
                 'vm:101' => {
@@ -79,6 +87,11 @@ my $get_active_stationary_resource_bundle_tests = [
                 node => 'node1',
             },
         },
+        service_config => {
+            'vm:101' => { 'auto-rebalance' => 1 },
+            'vm:102' => { 'auto-rebalance' => 1 },
+            'vm:103' => { 'auto-rebalance' => 1 },
+        },
         resource_affinity => {
             positive => {
                 'vm:101' => {
@@ -118,6 +131,11 @@ my $get_active_stationary_resource_bundle_tests = [
                 node => 'node1',
             },
         },
+        service_config => {
+            'vm:101' => { 'auto-rebalance' => 1 },
+            'vm:102' => { 'auto-rebalance' => 1 },
+            'vm:103' => { 'auto-rebalance' => 1 },
+        },
         resource_affinity => {
             positive => {
                 'vm:101' => {
@@ -159,6 +177,11 @@ my $get_active_stationary_resource_bundle_tests = [
                 target => 'node1',
             },
         },
+        service_config => {
+            'vm:101' => { 'auto-rebalance' => 1 },
+            'vm:102' => { 'auto-rebalance' => 1 },
+            'vm:103' => { 'auto-rebalance' => 1 },
+        },
         resource_affinity => {
             positive => {
                 'vm:101' => {
@@ -196,6 +219,11 @@ my $get_active_stationary_resource_bundle_tests = [
                 node => 'node3',
             },
         },
+        service_config => {
+            'vm:101' => { 'auto-rebalance' => 1 },
+            'vm:102' => { 'auto-rebalance' => 1 },
+            'vm:103' => { 'auto-rebalance' => 1 },
+        },
         resource_affinity => {
             positive => {
                 'vm:101' => {
@@ -215,6 +243,108 @@ my $get_active_stationary_resource_bundle_tests = [
         },
         resource_bundles => {},
     },
+    {
+        description => "singleton resource bundle with disabled auto-rebalance",
+        services => {
+            'vm:101' => {
+                state => 'started',
+                node => 'node1',
+            },
+            'vm:102' => {
+                state => 'started',
+                node => 'node1',
+            },
+        },
+        service_config => {
+            'vm:101' => { 'auto-rebalance' => 0 },
+            'vm:102' => { 'auto-rebalance' => 1 },
+        },
+        resource_affinity => {
+            positive => {},
+            negative => {},
+        },
+        resource_bundles => {
+            'vm:102' => [
+                'vm:102',
+            ],
+        },
+    },
+    {
+        description => "resource bundle leader with disabled auto-rebalance",
+        services => {
+            'vm:101' => {
+                state => 'started',
+                node => 'node1',
+            },
+            'vm:102' => {
+                state => 'started',
+                node => 'node1',
+            },
+            'ct:103' => {
+                state => 'started',
+                node => 'node2',
+            },
+        },
+        service_config => {
+            'vm:101' => { 'auto-rebalance' => 0 },
+            'vm:102' => { 'auto-rebalance' => 1 },
+            'ct:103' => { 'auto-rebalance' => 1 },
+        },
+        resource_affinity => {
+            positive => {
+                'vm:101' => {
+                    'vm:102' => 1,
+                },
+                'vm:102' => {
+                    'vm:101' => 1,
+                },
+            },
+            negative => {},
+        },
+        resource_bundles => {
+            'ct:103' => [
+                'ct:103',
+            ],
+        },
+    },
+    {
+        description => "some member of resource bundle with disabled auto-rebalance",
+        services => {
+            'vm:101' => {
+                state => 'started',
+                node => 'node1',
+            },
+            'vm:102' => {
+                state => 'started',
+                node => 'node1',
+            },
+            'ct:103' => {
+                state => 'started',
+                node => 'node2',
+            },
+        },
+        service_config => {
+            'vm:101' => { 'auto-rebalance' => 1 },
+            'vm:102' => { 'auto-rebalance' => 0 },
+            'ct:103' => { 'auto-rebalance' => 1 },
+        },
+        resource_affinity => {
+            positive => {
+                'vm:101' => {
+                    'vm:102' => 1,
+                },
+                'vm:102' => {
+                    'vm:101' => 1,
+                },
+            },
+            negative => {},
+        },
+        resource_bundles => {
+            'ct:103' => [
+                'ct:103',
+            ],
+        },
+    },
 ];
 
 my $tests = [
@@ -224,9 +354,9 @@ my $tests = [
 plan(tests => scalar($tests->@*));
 
 for my $case ($get_active_stationary_resource_bundle_tests->@*) {
-    my ($ss, $resource_affinity) = $case->@{qw(services resource_affinity)};
+    my ($ss, $sc, $resource_affinity) = $case->@{qw(services service_config resource_affinity)};
 
-    my $result = PVE::HA::Manager::get_active_stationary_resource_bundles($ss, $resource_affinity);
+    my $result = PVE::HA::Manager::get_active_stationary_resource_bundles($ss, $sc, $resource_affinity);
 
     is_deeply($result, $case->{resource_bundles}, $case->{description});
 }
-- 
2.47.3






* Re: [PATCH pve-manager 1/3] ui: ha: add auto-rebalance flag
  2026-05-11 15:57 ` [PATCH pve-manager 1/3] ui: ha: add auto-rebalance flag Dominik Rusovac
@ 2026-05-12  9:05   ` Daniel Kral
  0 siblings, 0 replies; 11+ messages in thread
From: Daniel Kral @ 2026-05-12  9:05 UTC (permalink / raw)
  To: Dominik Rusovac, pve-devel

Looks good to me, a small unrelated (pre-existing) nit inline; in any
case, consider this as:

Reviewed-by: Daniel Kral <d.kral@proxmox.com>

On Mon May 11, 2026 at 5:57 PM CEST, Dominik Rusovac wrote:
> Adapt the implementation of the 'failback' flag for the 'auto-rebalance'
> flag, which controls whether an HA resource is allowed to migrate during
> automatic rebalancing.
>
> Signed-off-by: Dominik Rusovac <d.rusovac@proxmox.com>
> ---
>  www/manager6/ha/ResourceEdit.js | 14 ++++++++++++++
>  www/manager6/ha/Resources.js    |  6 ++++++
>  www/manager6/ha/StatusView.js   |  4 ++++
>  3 files changed, 24 insertions(+)
>
> diff --git a/www/manager6/ha/ResourceEdit.js b/www/manager6/ha/ResourceEdit.js
> index a4f53dad..487d9cf3 100644
> --- a/www/manager6/ha/ResourceEdit.js
> +++ b/www/manager6/ha/ResourceEdit.js
> @@ -12,6 +12,7 @@ Ext.define('PVE.ha.VMResourceInputPanel', {
>          delete values.vmid;
>  
>          PVE.Utils.delete_if_default(values, 'failback', '1', me.isCreate);
> +        PVE.Utils.delete_if_default(values, 'auto-rebalance', '1', me.isCreate);
>          PVE.Utils.delete_if_default(values, 'max_restart', '1', me.isCreate);
>          PVE.Utils.delete_if_default(values, 'max_relocate', '1', me.isCreate);
>  
> @@ -123,6 +124,19 @@ Ext.define('PVE.ha.VMResourceInputPanel', {
>                  uncheckedValue: 0,
>                  value: 1,
>              },
> +            {
> +                xtype: 'proxmoxcheckbox',
> +                name: 'auto-rebalance',
> +                fieldLabel: gettext('Auto-Rebalance'),
> +                autoEl: {
> +                    tag: 'div',
> +                    'data-qtip': gettext(
> +                        'Enable if HA resource may be migrated during automatic rebalancing.',
> +                    ),
> +                },
> +                uncheckedValue: 0,
> +                value: 1,
> +            },

nit: pre-existing, but I think it might be nice to move the requested
state to the top row (in a patch before this one), as it seems more
central to the HA resources than the 'failback' and new 'auto-rebalance'
options. No hard feelings on this though, and it definitely shouldn't
block this series.

>              {
>                  xtype: 'proxmoxKVComboBox',
>                  name: 'state',
> diff --git a/www/manager6/ha/Resources.js b/www/manager6/ha/Resources.js
> index 621ed336..2fda3b24 100644
> --- a/www/manager6/ha/Resources.js
> +++ b/www/manager6/ha/Resources.js
> @@ -150,6 +150,12 @@ Ext.define('PVE.ha.ResourcesView', {
>                      sortable: true,
>                      dataIndex: 'failback',
>                  },
> +                {
> +                    header: gettext('Auto-Rebalance'),
> +                    width: 100,
> +                    sortable: true,
> +                    dataIndex: 'auto-rebalance',
> +                },
>                  {
>                      header: gettext('Description'),
>                      flex: 1,
> diff --git a/www/manager6/ha/StatusView.js b/www/manager6/ha/StatusView.js
> index bc2da71f..59fcc6f3 100644
> --- a/www/manager6/ha/StatusView.js
> +++ b/www/manager6/ha/StatusView.js
> @@ -84,6 +84,10 @@ Ext.define(
>                      name: 'failback',
>                      type: 'boolean',
>                  },
> +                {
> +                    name: 'auto-rebalance',
> +                    type: 'boolean',
> +                },
>                  'max_restart',
>                  'max_relocate',
>                  'type',






* Re: [PATCH pve-ha-manager 2/3] manager: set service config value in self
  2026-05-11 15:57 ` [PATCH pve-ha-manager 2/3] manager: set service config value in self Dominik Rusovac
@ 2026-05-12  9:06   ` Daniel Kral
  2026-05-12 11:55     ` Dominik Rusovac
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Kral @ 2026-05-12  9:06 UTC (permalink / raw)
  To: Dominik Rusovac, pve-devel

Looks good to me, left a few nits inline, with those resolved consider
this patch as:

Reviewed-by: Daniel Kral <d.kral@proxmox.com>

On Mon May 11, 2026 at 5:57 PM CEST, Dominik Rusovac wrote:
> This is in preparation for the follow-up patch.
>
> Reading the value of the 'auto-rebalance' flag in the service config of an
> HA resource is required to perform proper resource bundling.
>
> Signed-off-by: Dominik Rusovac <d.rusovac@proxmox.com>
> ---
>  src/PVE/HA/Manager.pm | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
> index b69a6bb..2a4b31e 100644
> --- a/src/PVE/HA/Manager.pm
> +++ b/src/PVE/HA/Manager.pm
> @@ -1003,6 +1003,7 @@ sub manage {
>      $self->try_persistent_group_migration();
>  
>      my ($sc, $services_digest) = $haenv->read_service_config();
> +    $self->{sc} = $sc;

nit: could be

    ($self->{sc}, my $services_digest) = $haenv->read_service_config();

would need a few more s/$sc/$self->{sc}/

>  
>      $self->{groups} = $haenv->read_group_config(); # update
>  
> @@ -1011,9 +1012,9 @@ sub manage {
>      # skip service add/remove when disarmed - handle_disarm manages service status
>      if (!$ms->{disarm}) {
>          # add new service
> -        foreach my $sid (sort keys %$sc) {
> +        foreach my $sid (sort keys $self->{sc}->%*) {

nit: pre-existing, but foreach is a synonym for for and we prefer the
latter according to our Perl Style Guide.

We don't change them all at once, to avoid unnecessary merge conflicts
with already existing patch series, but usually change these when we
touch the relevant line/context, so it can be changed here and for the
other foreach's you touch (at least I do so ^^).

>              next if $ss->{$sid}; # already there
> -            my $cd = $sc->{$sid};
> +            my $cd = $self->{sc}->{$sid};
>              next if $cd->{state} eq 'ignored';
>  
>              $haenv->log('info', "adding new service '$sid' on node '$cd->{node}'");
> @@ -1028,9 +1029,9 @@ sub manage {
>  
>          # remove stale or ignored services from manager state
>          foreach my $sid (keys %$ss) {

nit: could also have a

    my $cd = $self->{sc}->{$sid};

so we don't need to repeat the $self->{sc}->{$sid}

> -            next if $sc->{$sid} && $sc->{$sid}->{state} ne 'ignored';
> +            next if $self->{sc}->{$sid} && $self->{sc}->{$sid}->{state} ne 'ignored';
>  
> -            my $reason = defined($sc->{$sid}) ? 'ignored state requested' : 'no config';
> +            my $reason = defined($self->{sc}->{$sid}) ? 'ignored state requested' : 'no config';
>              $haenv->log('info', "removing stale service '$sid' ($reason)");
>  
>              # remove all service related state information
> @@ -1088,7 +1089,7 @@ sub manage {
>          foreach my $sid (sort keys %$ss) {
>              next if $deferred_sids && !$deferred_sids->{$sid};
>              my $sd = $ss->{$sid};
> -            my $cd = $sc->{$sid} || { state => 'disabled' };
> +            my $cd = $self->{sc}->{$sid} || { state => 'disabled' };
>  
>              my $lrm_res = $sd->{uid} ? $lrm_results->{ $sd->{uid} } : undef;
>  

The migrate_groups_to_{resources,rules}() calls later in manage() should
use $self->{sc} too.

nit: Also to be safe, add

    $self->{sc} = {};

to PVE::HA::Manager::new(), or maybe even initialize it with
$haenv->read_service_config(), though we don't have any need to read it
before the assignment in PVE::HA::Manager::manage().





* Re: [PATCH pve-ha-manager 3/3] fix #7557: introduce 'auto-rebalance' property
  2026-05-11 15:57 ` [PATCH pve-ha-manager 3/3] fix #7557: introduce 'auto-rebalance' property Dominik Rusovac
@ 2026-05-12  9:07   ` Daniel Kral
  2026-05-12 11:51     ` Dominik Rusovac
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Kral @ 2026-05-12  9:07 UTC (permalink / raw)
  To: Dominik Rusovac, pve-devel

Very nice test case additions for the new functionality, thanks for the
series!

Only a small nit about a method name and some make tidy notes, but
otherwise it looks good to me, so with those resolved consider this as:

Reviewed-by: Daniel Kral <d.kral@proxmox.com>

On Mon May 11, 2026 at 5:57 PM CEST, Dominik Rusovac wrote:
> Add 'auto-rebalance' property to HA resources config, which gives users
> control over which HA resources may be moved by dynamic CRS during
> automatic rebalancing.
>
> The 'auto-rebalance' flag is set to true by default. Disabling
> 'auto-rebalance' for some HA resource, say vm:100, means that vm:100
> will be disregarded as a migration candidate during auto-rebalancing.
> Any HA resource with a positive affinity for vm:100 will be disregarded
> too.
>
> Tests validate that an entire resource bundle will be disregarded if any
> resource belonging to the bundle has 'auto-rebalance' disabled.
>
> Signed-off-by: Dominik Rusovac <d.rusovac@proxmox.com>
> ---
>  src/PVE/API2/HA/Resources.pm      |   7 ++
>  src/PVE/API2/HA/Status.pm         |   9 +-
>  src/PVE/HA/Config.pm              |   2 +
>  src/PVE/HA/Manager.pm             |  12 ++-
>  src/PVE/HA/Resources.pm           |   6 ++
>  src/PVE/HA/Resources/PVECT.pm     |   1 +
>  src/PVE/HA/Resources/PVEVM.pm     |   1 +
>  src/PVE/HA/Sim/Hardware.pm        |   1 +
>  src/test/test_resource_bundles.pl | 134 +++++++++++++++++++++++++++++-
>  9 files changed, 166 insertions(+), 7 deletions(-)
>
> diff --git a/src/PVE/API2/HA/Resources.pm b/src/PVE/API2/HA/Resources.pm
> index e0690d5..181cdbb 100644
> --- a/src/PVE/API2/HA/Resources.pm
> +++ b/src/PVE/API2/HA/Resources.pm
> @@ -142,6 +142,13 @@ __PACKAGE__->register_method({
>                  optional => 1,
>                  default => 1,
>              },
> +            'auto-rebalance' => {
> +                description => "HA resource may be migrated during"
> +                    . " automatic rebalancing.",

make tidy wants this to be in a single line, but

    description => "HA resource may be migrated during automatic rebalancing.",

seems to work fine.

> +                type => 'boolean',
> +                optional => 1,
> +                default => 1,
> +            },
>              group => get_standard_option('pve-ha-group-id', { optional => 1 }),
>              max_restart => {
>                  description => "Maximal number of tries to restart the service on"
> diff --git a/src/PVE/API2/HA/Status.pm b/src/PVE/API2/HA/Status.pm
> index 4894f3b..c352aa1 100644
> --- a/src/PVE/API2/HA/Status.pm
> +++ b/src/PVE/API2/HA/Status.pm
> @@ -121,6 +121,13 @@ __PACKAGE__->register_method({
>                      optional => 1,
>                      default => 1,
>                  },
> +                'auto-rebalance' => {
> +                    description => "HA resource may be migrated during"
> +                        . " automatic rebalancing.",

same here

> +                    type => 'boolean',
> +                    optional => 1,
> +                    default => 1,
> +                },
>                  max_relocate => {
>                      description => "For type 'service'.",
>                      type => "integer",
> @@ -333,7 +340,7 @@ __PACKAGE__->register_method({
>              # also return common resource attributes
>              if (defined($sc)) {
>                  $data->{request_state} = $sc->{state};
> -                foreach my $key (qw(group max_restart max_relocate failback comment)) {
> +                foreach my $key (qw(group max_restart max_relocate failback comment auto-rebalance)) {

make tidy wants to move the quoted whitespace array to a new line...
Might look just a little bit nicer with:

    my @exported_service_properties =
        qw(group max_restart max_relocate failback comment auto-rebalance);
    for my $key (@exported_service_properties) {
        $data->{$key} = $sc->{$key} if defined($sc->{$key});
    }
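[editor's note] The suggested refactor can be exercised standalone; the sample config values below are made up purely for illustration:

```perl
use strict;
use warnings;

# Illustration only: copy the exported properties from a (made-up) service
# config into a status payload, skipping keys that are not set.
my $sc = {
    group            => 'prod',
    failback         => 1,
    'auto-rebalance' => 0,
    comment          => undef, # unset: must not appear in the payload
};

my $data = {};
my @exported_service_properties =
    qw(group max_restart max_relocate failback comment auto-rebalance);
for my $key (@exported_service_properties) {
    $data->{$key} = $sc->{$key} if defined($sc->{$key});
}

print join(',', sort keys %$data), "\n"; # auto-rebalance,failback,group
```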

>                      $data->{$key} = $sc->{$key} if defined($sc->{$key});
>                  }
>              }
> diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
> index a34a302..54a6503 100644
> --- a/src/PVE/HA/Config.pm
> +++ b/src/PVE/HA/Config.pm
> @@ -118,8 +118,10 @@ sub read_and_check_resources_config {
>          $d->{state} = 'started' if !defined($d->{state});
>          $d->{state} = 'started' if $d->{state} eq 'enabled'; # backward compatibility
>          $d->{failback} = 1 if !defined($d->{failback});
> +        $d->{'auto-rebalance'} = 1 if !defined($d->{'auto-rebalance'});
>          $d->{max_restart} = 1 if !defined($d->{max_restart});
>          $d->{max_relocate} = 1 if !defined($d->{max_relocate});
> +
>          if (PVE::HA::Resources->lookup($d->{type})) {
>              if (my $vmd = $vmlist->{ids}->{$name}) {
>                  $d->{node} = $vmd->{node};
> diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
> index 2a4b31e..68e3cd6 100644
> --- a/src/PVE/HA/Manager.pm
> +++ b/src/PVE/HA/Manager.pm
> @@ -136,12 +136,14 @@ sub update_crs_scheduler_mode {
>  # HA resource in the resource bundle and also the key of each resource bundle
>  # in the returned hash.
>  sub get_active_stationary_resource_bundles {

nit: Hm, might need a better name now that we have an additional
condition on which resource bundles are gathered, e.g.

    get_active_stationary_movable_resource_bundles

though that's rather long. No hard feelings though, but updating the
description would be nice.

> -    my ($ss, $resource_affinity) = @_;
> +    my ($ss, $sc, $resource_affinity) = @_;
>  
>      my $resource_bundles = {};
>  OUTER: for my $sid (sort keys %$ss) {
>          # do not consider non-started resource as 'active' leading resource
>          next if $ss->{$sid}->{state} ne 'started';
> +        # do not consider resource if it may not be moved
> +        next if !$sc->{$sid}->{'auto-rebalance'};
>  
>          my @resources = ($sid);
>          my $nodes = { $ss->{$sid}->{node} => 1 };
> @@ -156,6 +158,8 @@ OUTER: for my $sid (sort keys %$ss) {
>                  next OUTER if $state eq 'migrate' || $state eq 'relocate';
>                  # do not add non-started resource to active bundle
>                  next if $state ne 'started';
> +                # do not consider stationary bundle if a dependent resource may not be moved
> +                next OUTER if !$sc->{$csid}->{'auto-rebalance'};
>  
>                  $nodes->{$node} = 1;
>  

[ ... ]
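[editor's note] A hedged, self-contained sketch of the bundling rule under discussion (data and helper name are made up; the real implementation lives in PVE::HA::Manager): an entire positive-affinity bundle is dropped as soon as any started member has 'auto-rebalance' disabled.

```perl
use strict;
use warnings;

# Illustration only: mirrors the quoted OUTER-loop checks — skip a leading
# resource that may not move, and drop the whole bundle if any started
# dependent resource may not move.
sub movable_bundles {
    my ($ss, $sc, $resource_affinity) = @_;

    my $bundles = {};
  OUTER: for my $sid (sort keys %$ss) {
        next if $ss->{$sid}->{state} ne 'started';
        next if !$sc->{$sid}->{'auto-rebalance'};

        my @resources = ($sid);
        for my $csid (sort keys %{ $resource_affinity->{$sid} // {} }) {
            next if $ss->{$csid}->{state} ne 'started';
            next OUTER if !$sc->{$csid}->{'auto-rebalance'};
            push @resources, $csid;
        }
        $bundles->{$sid} = \@resources;
    }
    return $bundles;
}

my $ss = { map { $_ => { state => 'started' } } qw(vm:100 vm:101 vm:102) };
my $sc = {
    'vm:100' => { 'auto-rebalance' => 1 },
    'vm:101' => { 'auto-rebalance' => 0 }, # opted out of rebalancing
    'vm:102' => { 'auto-rebalance' => 1 },
};
my $resource_affinity = { 'vm:100' => { 'vm:101' => 1 } }; # positive affinity

my $bundles = movable_bundles($ss, $sc, $resource_affinity);
print join(',', sort keys %$bundles), "\n"; # vm:102 — vm:100's bundle is dropped
```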

> diff --git a/src/test/test_resource_bundles.pl b/src/test/test_resource_bundles.pl
> index d38dc51..7df96a9 100755
> --- a/src/test/test_resource_bundles.pl
> +++ b/src/test/test_resource_bundles.pl
> @@ -224,9 +354,9 @@ my $tests = [
>  plan(tests => scalar($tests->@*));
>  
>  for my $case ($get_active_stationary_resource_bundle_tests->@*) {
> -    my ($ss, $resource_affinity) = $case->@{qw(services resource_affinity)};
> +    my ($ss, $sc, $resource_affinity) = $case->@{qw(services service_config resource_affinity)};
>  
> -    my $result = PVE::HA::Manager::get_active_stationary_resource_bundles($ss, $resource_affinity);
> +    my $result = PVE::HA::Manager::get_active_stationary_resource_bundles($ss, $sc, $resource_affinity);

make tidy wants this to be

    my $result =
        PVE::HA::Manager::get_active_stationary_resource_bundles($ss, $sc, $resource_affinity);


>  
>      is_deeply($result, $case->{resource_bundles}, $case->{description});
>  }



* Re: [PATCH-SERIES ha-manager/manager 0/3] fix #7557: introduce 'auto-rebalance' property
  2026-05-11 15:57 [PATCH-SERIES ha-manager/manager 0/3] fix #7557: introduce 'auto-rebalance' property Dominik Rusovac
                   ` (2 preceding siblings ...)
  2026-05-11 15:57 ` [PATCH pve-ha-manager 3/3] fix #7557: introduce 'auto-rebalance' property Dominik Rusovac
@ 2026-05-12  9:21 ` Daniel Kral
  2026-05-12 11:53   ` Dominik Rusovac
  3 siblings, 1 reply; 11+ messages in thread
From: Daniel Kral @ 2026-05-12  9:21 UTC (permalink / raw)
  To: Dominik Rusovac, pve-devel

On Mon May 11, 2026 at 5:57 PM CEST, Dominik Rusovac wrote:
> # TL;DR 
> Add 'auto-rebalance' property to HA resources config, which gives users
> control over which HA resources may be moved by dynamic CRS during
> automatic rebalancing.
>
> # Details
> The 'auto-rebalance' flag is set to true by default. As requested by [0],
> disabling 'auto-rebalance' for some HA resource, say vm:100, means that vm:100
> will be disregarded as a migration candidate during auto-rebalancing. Any HA
> resource with a positive affinity for vm:100 will be disregarded too.
>
> # Summary of Changes
> This series:
> - introduces property to control whether an HA resource may be migrated during
>   automatic rebalancing;
> - adds corresponding flag to PVE UI.
>
> # Refs
> [0] https://bugzilla.proxmox.com/show_bug.cgi?id=7557

Tested the series and it works just as expected:

- only a single HA resource with auto-rebalance cleared; set
  imbalance-threshold to 0.1 and imbalance-margin to 0.05, put no load
  on node3, some load on node2, and a lot of load via a non-HA VM and
  the HA resource from above; the HA resource was never moved to the
  other nodes, but was moved to node3 as soon as I set auto-rebalance
  again

- tested the same with a positive resource affinity rule with that HA
  resource and another new HA resource and they never moved away unless
  I set the auto-rebalance option for all again

Nice work, thanks for the quick patch series!

It would be great to cover this in the documentation as well, so users
are aware that such HA resources make the whole resource bundle
immovable when they are part of a resource affinity rule. But this can
also be done as a follow-up.

Consider this as:

Tested-by: Daniel Kral <d.kral@proxmox.com>


* Re: [PATCH pve-ha-manager 3/3] fix #7557: introduce 'auto-rebalance' property
  2026-05-12  9:07   ` Daniel Kral
@ 2026-05-12 11:51     ` Dominik Rusovac
  0 siblings, 0 replies; 11+ messages in thread
From: Dominik Rusovac @ 2026-05-12 11:51 UTC (permalink / raw)
  To: Daniel Kral, pve-devel

thx for taking the time 

will send a v2 

On Tue May 12, 2026 at 11:07 AM CEST, Daniel Kral wrote:
> Very nice test case additions for the new functionality, thanks for the
> series!
>
> Only a small nit about a method name and some make tidy notes, but
> otherwise it looks good to me, so with those resolved consider this as:
>
> Reviewed-by: Daniel Kral <d.kral@proxmox.com>
>
> On Mon May 11, 2026 at 5:57 PM CEST, Dominik Rusovac wrote:
>> Add 'auto-rebalance' property to HA resources config, which gives users
>> control over which HA resources may be moved by dynamic CRS during
>> automatic rebalancing.
>>
>> The 'auto-rebalance' flag is set to true by default. Disabling
>> 'auto-rebalance' for some HA resource, say vm:100, means that vm:100
>> will be disregarded as a migration candidate during auto-rebalancing.
>> Any HA resource with a positive affinity for vm:100 will be disregarded
>> too.
>>
>> Tests validate that an entire resource bundle will be disregarded if any
>> resource belonging to the bundle has 'auto-rebalance' disabled.
>>
>> Signed-off-by: Dominik Rusovac <d.rusovac@proxmox.com>
>> ---
>>  src/PVE/API2/HA/Resources.pm      |   7 ++
>>  src/PVE/API2/HA/Status.pm         |   9 +-
>>  src/PVE/HA/Config.pm              |   2 +
>>  src/PVE/HA/Manager.pm             |  12 ++-
>>  src/PVE/HA/Resources.pm           |   6 ++
>>  src/PVE/HA/Resources/PVECT.pm     |   1 +
>>  src/PVE/HA/Resources/PVEVM.pm     |   1 +
>>  src/PVE/HA/Sim/Hardware.pm        |   1 +
>>  src/test/test_resource_bundles.pl | 134 +++++++++++++++++++++++++++++-
>>  9 files changed, 166 insertions(+), 7 deletions(-)
>>
>> diff --git a/src/PVE/API2/HA/Resources.pm b/src/PVE/API2/HA/Resources.pm
>> index e0690d5..181cdbb 100644
>> --- a/src/PVE/API2/HA/Resources.pm
>> +++ b/src/PVE/API2/HA/Resources.pm
>> @@ -142,6 +142,13 @@ __PACKAGE__->register_method({
>>                  optional => 1,
>>                  default => 1,
>>              },
>> +            'auto-rebalance' => {
>> +                description => "HA resource may be migrated during"
>> +                    . " automatic rebalancing.",
>
> make tidy wants this to be in a single line, but
>
>     description => "HA resource may be migrated during automatic rebalancing.",
>
> seems to work fine.
>

ACK

>> +                type => 'boolean',
>> +                optional => 1,
>> +                default => 1,
>> +            },
>>              group => get_standard_option('pve-ha-group-id', { optional => 1 }),
>>              max_restart => {
>>                  description => "Maximal number of tries to restart the service on"
>> diff --git a/src/PVE/API2/HA/Status.pm b/src/PVE/API2/HA/Status.pm
>> index 4894f3b..c352aa1 100644
>> --- a/src/PVE/API2/HA/Status.pm
>> +++ b/src/PVE/API2/HA/Status.pm
>> @@ -121,6 +121,13 @@ __PACKAGE__->register_method({
>>                      optional => 1,
>>                      default => 1,
>>                  },
>> +                'auto-rebalance' => {
>> +                    description => "HA resource may be migrated during"
>> +                        . " automatic rebalancing.",
>
> same here
>

ACK

>> +                    type => 'boolean',
>> +                    optional => 1,
>> +                    default => 1,
>> +                },
>>                  max_relocate => {
>>                      description => "For type 'service'.",
>>                      type => "integer",
>> @@ -333,7 +340,7 @@ __PACKAGE__->register_method({
>>              # also return common resource attributes
>>              if (defined($sc)) {
>>                  $data->{request_state} = $sc->{state};
>> -                foreach my $key (qw(group max_restart max_relocate failback comment)) {
>> +                foreach my $key (qw(group max_restart max_relocate failback comment auto-rebalance)) {
>
> make tidy wants to move the quoted whitespace array to a new line...
> Might look just a little bit nicer with:
>
>     my @exported_service_properties =
>         qw(group max_restart max_relocate failback comment auto-rebalance);
>     for my $key (@exported_service_properties) {
>         $data->{$key} = $sc->{$key} if defined($sc->{$key});
>     }
>

ACK

>>                      $data->{$key} = $sc->{$key} if defined($sc->{$key});
>>                  }
>>              }
>> diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
>> index a34a302..54a6503 100644
>> --- a/src/PVE/HA/Config.pm
>> +++ b/src/PVE/HA/Config.pm
>> @@ -118,8 +118,10 @@ sub read_and_check_resources_config {
>>          $d->{state} = 'started' if !defined($d->{state});
>>          $d->{state} = 'started' if $d->{state} eq 'enabled'; # backward compatibility
>>          $d->{failback} = 1 if !defined($d->{failback});
>> +        $d->{'auto-rebalance'} = 1 if !defined($d->{'auto-rebalance'});
>>          $d->{max_restart} = 1 if !defined($d->{max_restart});
>>          $d->{max_relocate} = 1 if !defined($d->{max_relocate});
>> +
>>          if (PVE::HA::Resources->lookup($d->{type})) {
>>              if (my $vmd = $vmlist->{ids}->{$name}) {
>>                  $d->{node} = $vmd->{node};
>> diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
>> index 2a4b31e..68e3cd6 100644
>> --- a/src/PVE/HA/Manager.pm
>> +++ b/src/PVE/HA/Manager.pm
>> @@ -136,12 +136,14 @@ sub update_crs_scheduler_mode {
>>  # HA resource in the resource bundle and also the key of each resource bundle
>>  # in the returned hash.
>>  sub get_active_stationary_resource_bundles {
>
> nit: Hm, might need a better name now that we have an additional
> condition on which resource bundles are gathered, e.g.
>
>     get_active_stationary_movable_resource_bundles
>
> though that's rather long. No hard feelings though, but updating the
> description would be nice.
>

will update the description

>> -    my ($ss, $resource_affinity) = @_;
>> +    my ($ss, $sc, $resource_affinity) = @_;
>>  
>>      my $resource_bundles = {};
>>  OUTER: for my $sid (sort keys %$ss) {
>>          # do not consider non-started resource as 'active' leading resource
>>          next if $ss->{$sid}->{state} ne 'started';
>> +        # do not consider resource if it may not be moved
>> +        next if !$sc->{$sid}->{'auto-rebalance'};
>>  
>>          my @resources = ($sid);
>>          my $nodes = { $ss->{$sid}->{node} => 1 };
>> @@ -156,6 +158,8 @@ OUTER: for my $sid (sort keys %$ss) {
>>                  next OUTER if $state eq 'migrate' || $state eq 'relocate';
>>                  # do not add non-started resource to active bundle
>>                  next if $state ne 'started';
>> +                # do not consider stationary bundle if a dependent resource may not be moved
>> +                next OUTER if !$sc->{$csid}->{'auto-rebalance'};
>>  
>>                  $nodes->{$node} = 1;
>>  
>
> [ ... ]
>
>> diff --git a/src/test/test_resource_bundles.pl b/src/test/test_resource_bundles.pl
>> index d38dc51..7df96a9 100755
>> --- a/src/test/test_resource_bundles.pl
>> +++ b/src/test/test_resource_bundles.pl
>> @@ -224,9 +354,9 @@ my $tests = [
>>  plan(tests => scalar($tests->@*));
>>  
>>  for my $case ($get_active_stationary_resource_bundle_tests->@*) {
>> -    my ($ss, $resource_affinity) = $case->@{qw(services resource_affinity)};
>> +    my ($ss, $sc, $resource_affinity) = $case->@{qw(services service_config resource_affinity)};
>>  
>> -    my $result = PVE::HA::Manager::get_active_stationary_resource_bundles($ss, $resource_affinity);
>> +    my $result = PVE::HA::Manager::get_active_stationary_resource_bundles($ss, $sc, $resource_affinity);
>
> make tidy wants this to be
>
>     my $result =
>         PVE::HA::Manager::get_active_stationary_resource_bundles($ss, $sc, $resource_affinity);
>
>

ACK

>>  
>>      is_deeply($result, $case->{resource_bundles}, $case->{description});
>>  }


* Re: [PATCH-SERIES ha-manager/manager 0/3] fix #7557: introduce 'auto-rebalance' property
  2026-05-12  9:21 ` [PATCH-SERIES ha-manager/manager 0/3] " Daniel Kral
@ 2026-05-12 11:53   ` Dominik Rusovac
  0 siblings, 0 replies; 11+ messages in thread
From: Dominik Rusovac @ 2026-05-12 11:53 UTC (permalink / raw)
  To: Daniel Kral, pve-devel

thx for testing the series

On Tue May 12, 2026 at 11:21 AM CEST, Daniel Kral wrote:
> On Mon May 11, 2026 at 5:57 PM CEST, Dominik Rusovac wrote:
>> # TL;DR 
>> Add 'auto-rebalance' property to HA resources config, which gives users
>> control over which HA resources may be moved by dynamic CRS during
>> automatic rebalancing.
>>
>> # Details
>> The 'auto-rebalance' flag is set to true by default. As requested by [0],
>> disabling 'auto-rebalance' for some HA resource, say vm:100, means that vm:100
>> will be disregarded as a migration candidate during auto-rebalancing. Any HA
>> resource with a positive affinity for vm:100 will be disregarded too.
>>
>> # Summary of Changes
>> This series:
>> - introduces property to control whether an HA resource may be migrated during
>>   automatic rebalancing;
>> - adds corresponding flag to PVE UI.
>>
>> # Refs
>> [0] https://bugzilla.proxmox.com/show_bug.cgi?id=7557
>
> Tested the series and it works just as expected:
>
> - only a single HA resource with auto-rebalance cleared; set
>   imbalance-threshold to 0.1 and imbalance-margin to 0.05, put no load
>   on node3, some load on node2, and a lot of load via a non-HA VM and
>   the HA resource from above; the HA resource was never moved to the
>   other nodes, but was moved to node3 as soon as I set auto-rebalance
>   again
>
> - tested the same with a positive resource affinity rule with that HA
>   resource and another new HA resource and they never moved away unless
>   I set the auto-rebalance option for all again
>
> Nice work, thanks for the quick patch series!
>
> It would be great to cover this in the documentation as well, so users
> are aware that such HA resources make the whole resource bundle
> immovable when they are part of a resource affinity rule. But this can
> also be done as a follow-up.

I will take care of the docs in a follow-up

>
> Consider this as:
>
> Tested-by: Daniel Kral <d.kral@proxmox.com>


* Re: [PATCH pve-ha-manager 2/3] manager: set service config value in self
  2026-05-12  9:06   ` Daniel Kral
@ 2026-05-12 11:55     ` Dominik Rusovac
  0 siblings, 0 replies; 11+ messages in thread
From: Dominik Rusovac @ 2026-05-12 11:55 UTC (permalink / raw)
  To: Daniel Kral, pve-devel

thx for taking the time, will resolve the nits in v2

On Tue May 12, 2026 at 11:06 AM CEST, Daniel Kral wrote:
> Looks good to me, left a few nits inline, with those resolved consider
> this patch as:
>
> Reviewed-by: Daniel Kral <d.kral@proxmox.com>
>
> On Mon May 11, 2026 at 5:57 PM CEST, Dominik Rusovac wrote:
>> This is in preparation for the follow-up patch.
>>
>> Reading the value of 'auto-rebalance'-flag in the service config of an
>> HA resource is required to perform proper resource bundling.
>>
>> Signed-off-by: Dominik Rusovac <d.rusovac@proxmox.com>
>> ---
>>  src/PVE/HA/Manager.pm | 11 ++++++-----
>>  1 file changed, 6 insertions(+), 5 deletions(-)

[snip]


end of thread, other threads:[~2026-05-12 11:56 UTC | newest]

Thread overview: 11+ messages
2026-05-11 15:57 [PATCH-SERIES ha-manager/manager 0/3] fix #7557: introduce 'auto-rebalance' property Dominik Rusovac
2026-05-11 15:57 ` [PATCH pve-manager 1/3] ui: ha: add auto-rebalance flag Dominik Rusovac
2026-05-12  9:05   ` Daniel Kral
2026-05-11 15:57 ` [PATCH pve-ha-manager 2/3] manager: set service config value in self Dominik Rusovac
2026-05-12  9:06   ` Daniel Kral
2026-05-12 11:55     ` Dominik Rusovac
2026-05-11 15:57 ` [PATCH pve-ha-manager 3/3] fix #7557: introduce 'auto-rebalance' property Dominik Rusovac
2026-05-12  9:07   ` Daniel Kral
2026-05-12 11:51     ` Dominik Rusovac
2026-05-12  9:21 ` [PATCH-SERIES ha-manager/manager 0/3] " Daniel Kral
2026-05-12 11:53   ` Dominik Rusovac
