From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH ha-manager v2 40/40] test: add automatic rebalancing system test cases with affinity rules
Date: Tue, 24 Mar 2026 19:30:24 +0100 [thread overview]
Message-ID: <20260324183029.1274972-41-d.kral@proxmox.com> (raw)
In-Reply-To: <20260324183029.1274972-1-d.kral@proxmox.com>
These test cases document and verify some behaviors of the automatic
rebalancing system in combination with HA affinity rules.
All of these test cases use only the dynamic usage information and the
brute-force method, as waiting on ongoing migrations and candidate
generation are invariant to these parameters.
As an overview:
- Case 1: rebalancing system acknowledges node affinity rules
- Case 2: rebalancing system considers HA resources in strict positive
resource affinity rules as a single unit (a resource bundle)
and will not split them apart
- Case 3: rebalancing system will wait on the migration of a not-yet-enforced
strict positive resource affinity rule, i.e., the
HA resources still need to migrate to their common node
- Case 4: rebalancing system will acknowledge strict negative resource
affinity rules, but will still try to minimize the node
imbalance as much as possible
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v1 -> v2:
- new!
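
For reference, all four test cases enable the dynamic scheduler mode and
automatic rebalancing through the CRS options in their datacenter.cfg; this
is the common fragment (test case 3 additionally sets
'ha-auto-rebalance-hold-duration' to 0 to force immediate rebalancing):

```json
{
  "crs": {
    "ha": "dynamic",
    "ha-auto-rebalance": 1
  }
}
```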
.../README | 7 +++
.../cmdlist | 8 +++
.../datacenter.cfg | 7 +++
.../dynamic_service_stats | 5 ++
.../hardware_status | 5 ++
.../log.expect | 49 +++++++++++++++
.../manager_status | 1 +
.../rules_config | 4 ++
.../service_config | 5 ++
.../static_service_stats | 5 ++
.../README | 12 ++++
.../cmdlist | 8 +++
.../datacenter.cfg | 7 +++
.../dynamic_service_stats | 4 ++
.../hardware_status | 5 ++
.../log.expect | 53 +++++++++++++++++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 4 ++
.../static_service_stats | 4 ++
.../README | 14 +++++
.../cmdlist | 3 +
.../datacenter.cfg | 8 +++
.../dynamic_service_stats | 6 ++
.../hardware_status | 5 ++
.../log.expect | 59 +++++++++++++++++++
.../manager_status | 31 ++++++++++
.../rules_config | 3 +
.../service_config | 6 ++
.../static_service_stats | 6 ++
.../README | 14 +++++
.../cmdlist | 3 +
.../datacenter.cfg | 7 +++
.../dynamic_service_stats | 6 ++
.../hardware_status | 5 ++
.../log.expect | 59 +++++++++++++++++++
.../manager_status | 1 +
.../rules_config | 7 +++
.../service_config | 6 ++
.../static_service_stats | 6 ++
40 files changed, 452 insertions(+)
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/README
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/cmdlist
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/datacenter.cfg
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/dynamic_service_stats
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/hardware_status
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/log.expect
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/manager_status
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/rules_config
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/service_config
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance1/static_service_stats
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/README
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/cmdlist
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/datacenter.cfg
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/dynamic_service_stats
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/hardware_status
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/log.expect
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/manager_status
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/rules_config
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/service_config
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance2/static_service_stats
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/README
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/cmdlist
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/datacenter.cfg
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/dynamic_service_stats
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/hardware_status
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/log.expect
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/manager_status
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/rules_config
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/service_config
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance3/static_service_stats
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/README
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/cmdlist
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/datacenter.cfg
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/dynamic_service_stats
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/hardware_status
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/log.expect
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/manager_status
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/rules_config
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/service_config
create mode 100644 src/test/test-crs-dynamic-constrained-auto-rebalance4/static_service_stats
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/README b/src/test/test-crs-dynamic-constrained-auto-rebalance1/README
new file mode 100644
index 00000000..8504755f
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/README
@@ -0,0 +1,7 @@
+Test that the auto rebalance system with dynamic usage information will not
+rebalance running HA resources that cause a node imbalance exceeding the
+threshold, because their HA node affinity rules strictly require them to be
+kept on specific nodes.
+
+As a sanity check, the added HA resource, which is not part of the node
+affinity rule, is rebalanced to another node to lower the imbalance.
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/cmdlist b/src/test/test-crs-dynamic-constrained-auto-rebalance1/cmdlist
new file mode 100644
index 00000000..6ee04948
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/cmdlist
@@ -0,0 +1,8 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on" ],
+ [
+ "service vm:104 add node1 started 1",
+ "service vm:104 set-static-stats maxcpu 8.0 maxmem 8192",
+ "service vm:104 set-dynamic-stats cpu 4.0 mem 4096"
+ ]
+]
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/datacenter.cfg b/src/test/test-crs-dynamic-constrained-auto-rebalance1/datacenter.cfg
new file mode 100644
index 00000000..147bd61a
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/datacenter.cfg
@@ -0,0 +1,7 @@
+{
+ "crs": {
+ "ha": "dynamic",
+ "ha-auto-rebalance": 1
+ }
+}
+
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/dynamic_service_stats b/src/test/test-crs-dynamic-constrained-auto-rebalance1/dynamic_service_stats
new file mode 100644
index 00000000..02133ab0
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/dynamic_service_stats
@@ -0,0 +1,5 @@
+{
+ "vm:101": { "cpu": 0.9, "mem": 2621440000 },
+ "vm:102": { "cpu": 7.9, "mem": 8589934592 },
+ "vm:103": { "cpu": 4.7, "mem": 5242880000 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/hardware_status b/src/test/test-crs-dynamic-constrained-auto-rebalance1/hardware_status
new file mode 100644
index 00000000..8f1e695c
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/log.expect b/src/test/test-crs-dynamic-constrained-auto-rebalance1/log.expect
new file mode 100644
index 00000000..d0b2aee2
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/log.expect
@@ -0,0 +1,49 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: using scheduler mode 'dynamic'
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node1'
+info 20 node1/crm: adding new service 'vm:102' on node 'node1'
+info 20 node1/crm: adding new service 'vm:103' on node 'node1'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node1)
+info 20 node1/crm: service 'vm:102': state changed from 'request_start' to 'started' (node = node1)
+info 20 node1/crm: service 'vm:103': state changed from 'request_start' to 'started' (node = node1)
+info 21 node1/lrm: got lock 'ha_agent_node1_lock'
+info 21 node1/lrm: status change wait_for_agent_lock => active
+info 21 node1/lrm: starting service vm:101
+info 21 node1/lrm: service status vm:101 started
+info 21 node1/lrm: starting service vm:102
+info 21 node1/lrm: service status vm:102 started
+info 21 node1/lrm: starting service vm:103
+info 21 node1/lrm: service status vm:103 started
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 120 cmdlist: execute service vm:104 add node1 started 1
+info 120 cmdlist: execute service vm:104 set-static-stats maxcpu 8.0 maxmem 8192
+info 120 cmdlist: execute service vm:104 set-dynamic-stats cpu 4.0 mem 4096
+info 120 node1/crm: adding new service 'vm:104' on node 'node1'
+info 120 node1/crm: service 'vm:104': state changed from 'request_start' to 'started' (node = node1)
+info 140 node1/crm: auto rebalance - migrate vm:104 to node2 (expected target imbalance: 0.98)
+info 140 node1/crm: got crm command: migrate vm:104 node2
+info 140 node1/crm: migrate service 'vm:104' to node 'node2'
+info 140 node1/crm: service 'vm:104': state changed from 'started' to 'migrate' (node = node1, target = node2)
+info 141 node1/lrm: service vm:104 - start migrate to node 'node2'
+info 141 node1/lrm: service vm:104 - end migrate to node 'node2'
+info 143 node2/lrm: got lock 'ha_agent_node2_lock'
+info 143 node2/lrm: status change wait_for_agent_lock => active
+info 160 node1/crm: service 'vm:104': state changed from 'migrate' to 'started' (node = node2)
+info 163 node2/lrm: starting service vm:104
+info 163 node2/lrm: service status vm:104 started
+info 720 hardware: exit simulation - done
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/manager_status b/src/test/test-crs-dynamic-constrained-auto-rebalance1/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/rules_config b/src/test/test-crs-dynamic-constrained-auto-rebalance1/rules_config
new file mode 100644
index 00000000..00f615e9
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/rules_config
@@ -0,0 +1,4 @@
+node-affinity: vm101-stays-on-node1
+ nodes node1
+ resources vm:101,vm:102,vm:103
+ strict 1
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/service_config b/src/test/test-crs-dynamic-constrained-auto-rebalance1/service_config
new file mode 100644
index 00000000..57e3579d
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/service_config
@@ -0,0 +1,5 @@
+{
+ "vm:101": { "node": "node1", "state": "started" },
+ "vm:102": { "node": "node1", "state": "started" },
+ "vm:103": { "node": "node1", "state": "started" }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance1/static_service_stats b/src/test/test-crs-dynamic-constrained-auto-rebalance1/static_service_stats
new file mode 100644
index 00000000..b11cc5eb
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance1/static_service_stats
@@ -0,0 +1,5 @@
+{
+ "vm:101": { "maxcpu": 8.0, "maxmem": 8589934592 },
+ "vm:102": { "maxcpu": 8.0, "maxmem": 8589934592 },
+ "vm:103": { "maxcpu": 8.0, "maxmem": 8589934592 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/README b/src/test/test-crs-dynamic-constrained-auto-rebalance2/README
new file mode 100644
index 00000000..be072f6d
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/README
@@ -0,0 +1,12 @@
+Test that the auto rebalance system with dynamic usage information considers
+running HA resources in strict positive resource affinity rules as bundles,
+which can only be moved to other nodes as a whole.
+
+Therefore, even though the two initial HA resources would otherwise be split
+apart, as they cause a node imbalance in the cluster, the auto rebalance
+system does not issue a rebalancing migration, because they must stay together.
+
+As a sanity check, adding another HA resource, which is not part of the strict
+positive resource affinity rule, causes a rebalancing migration: in this case
+the resource bundle itself is moved, because its leading resource 'vm:101' is
+alphabetically first.
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/cmdlist b/src/test/test-crs-dynamic-constrained-auto-rebalance2/cmdlist
new file mode 100644
index 00000000..61373367
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/cmdlist
@@ -0,0 +1,8 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on" ],
+ [
+ "service vm:103 add node1 started 1",
+ "service vm:103 set-static-stats maxcpu 8.0 maxmem 8192",
+ "service vm:103 set-dynamic-stats cpu 4.0 mem 4096"
+ ]
+]
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/datacenter.cfg b/src/test/test-crs-dynamic-constrained-auto-rebalance2/datacenter.cfg
new file mode 100644
index 00000000..147bd61a
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/datacenter.cfg
@@ -0,0 +1,7 @@
+{
+ "crs": {
+ "ha": "dynamic",
+ "ha-auto-rebalance": 1
+ }
+}
+
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/dynamic_service_stats b/src/test/test-crs-dynamic-constrained-auto-rebalance2/dynamic_service_stats
new file mode 100644
index 00000000..4f81dfe2
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/dynamic_service_stats
@@ -0,0 +1,4 @@
+{
+ "vm:101": { "cpu": 0.9, "mem": 2621440000 },
+ "vm:102": { "cpu": 7.9, "mem": 8589934592 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/hardware_status b/src/test/test-crs-dynamic-constrained-auto-rebalance2/hardware_status
new file mode 100644
index 00000000..8f1e695c
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/log.expect b/src/test/test-crs-dynamic-constrained-auto-rebalance2/log.expect
new file mode 100644
index 00000000..48501321
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/log.expect
@@ -0,0 +1,53 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: using scheduler mode 'dynamic'
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node1'
+info 20 node1/crm: adding new service 'vm:102' on node 'node1'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node1)
+info 20 node1/crm: service 'vm:102': state changed from 'request_start' to 'started' (node = node1)
+info 21 node1/lrm: got lock 'ha_agent_node1_lock'
+info 21 node1/lrm: status change wait_for_agent_lock => active
+info 21 node1/lrm: starting service vm:101
+info 21 node1/lrm: service status vm:101 started
+info 21 node1/lrm: starting service vm:102
+info 21 node1/lrm: service status vm:102 started
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 120 cmdlist: execute service vm:103 add node1 started 1
+info 120 cmdlist: execute service vm:103 set-static-stats maxcpu 8.0 maxmem 8192
+info 120 cmdlist: execute service vm:103 set-dynamic-stats cpu 4.0 mem 4096
+info 120 node1/crm: adding new service 'vm:103' on node 'node1'
+info 120 node1/crm: service 'vm:103': state changed from 'request_start' to 'started' (node = node1)
+info 140 node1/crm: auto rebalance - migrate vm:101 to node2 (expected target imbalance: 0.86)
+info 140 node1/crm: got crm command: migrate vm:101 node2
+info 140 node1/crm: crm command 'migrate vm:101 node2' - migrate service 'vm:102' to node 'node2' (service 'vm:102' in positive affinity with service 'vm:101')
+info 140 node1/crm: migrate service 'vm:101' to node 'node2'
+info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node2)
+info 140 node1/crm: migrate service 'vm:102' to node 'node2'
+info 140 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node1, target = node2)
+info 141 node1/lrm: service vm:101 - start migrate to node 'node2'
+info 141 node1/lrm: service vm:101 - end migrate to node 'node2'
+info 141 node1/lrm: service vm:102 - start migrate to node 'node2'
+info 141 node1/lrm: service vm:102 - end migrate to node 'node2'
+info 143 node2/lrm: got lock 'ha_agent_node2_lock'
+info 143 node2/lrm: status change wait_for_agent_lock => active
+info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
+info 160 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node2)
+info 163 node2/lrm: starting service vm:101
+info 163 node2/lrm: service status vm:101 started
+info 163 node2/lrm: starting service vm:102
+info 163 node2/lrm: service status vm:102 started
+info 720 hardware: exit simulation - done
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/manager_status b/src/test/test-crs-dynamic-constrained-auto-rebalance2/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/rules_config b/src/test/test-crs-dynamic-constrained-auto-rebalance2/rules_config
new file mode 100644
index 00000000..e1948a00
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/rules_config
@@ -0,0 +1,3 @@
+resource-affinity: vms-stay-together
+ resources vm:101,vm:102
+ affinity positive
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/service_config b/src/test/test-crs-dynamic-constrained-auto-rebalance2/service_config
new file mode 100644
index 00000000..880e0a59
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/service_config
@@ -0,0 +1,4 @@
+{
+ "vm:101": { "node": "node1", "state": "started" },
+ "vm:102": { "node": "node1", "state": "started" }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance2/static_service_stats b/src/test/test-crs-dynamic-constrained-auto-rebalance2/static_service_stats
new file mode 100644
index 00000000..455ae043
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance2/static_service_stats
@@ -0,0 +1,4 @@
+{
+ "vm:101": { "maxcpu": 8.0, "maxmem": 8589934592 },
+ "vm:102": { "maxcpu": 8.0, "maxmem": 8589934592 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/README b/src/test/test-crs-dynamic-constrained-auto-rebalance3/README
new file mode 100644
index 00000000..4b4d4855
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/README
@@ -0,0 +1,14 @@
+Test that the auto rebalance system with dynamic usage information will wait
+for an ongoing resource migration to finish, because a strict positive
+resource affinity rule is not correctly enforced yet.
+
+This test case manipulates the manager status so that the HA Manager assumes
+that the not-yet-migrated HA resource in the strict positive resource affinity
+rule is still migrating, as the integration tests currently do not support
+prolonged migrations.
+
+Furthermore, auto rebalancing migrations are forced to be issued as soon as
+possible by setting the hold duration to 0. This ensures that if the auto
+rebalance system did not wait on the ongoing migration, the rebalancing
+migration would be issued right away, in the same round in which the HA
+resources are acknowledged as running.
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/cmdlist b/src/test/test-crs-dynamic-constrained-auto-rebalance3/cmdlist
new file mode 100644
index 00000000..13f90cd7
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/cmdlist
@@ -0,0 +1,3 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on" ]
+]
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/datacenter.cfg b/src/test/test-crs-dynamic-constrained-auto-rebalance3/datacenter.cfg
new file mode 100644
index 00000000..181ea848
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/datacenter.cfg
@@ -0,0 +1,8 @@
+{
+ "crs": {
+ "ha": "dynamic",
+ "ha-auto-rebalance": 1,
+ "ha-auto-rebalance-hold-duration": 0
+ }
+}
+
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/dynamic_service_stats b/src/test/test-crs-dynamic-constrained-auto-rebalance3/dynamic_service_stats
new file mode 100644
index 00000000..d35a2c8f
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/dynamic_service_stats
@@ -0,0 +1,6 @@
+{
+ "vm:101": { "cpu": 0.9, "mem": 2621440000 },
+ "vm:102": { "cpu": 7.9, "mem": 8589934592 },
+ "vm:103": { "cpu": 4.7, "mem": 5242880000 },
+ "vm:104": { "cpu": 4.0, "mem": 4294967296 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/hardware_status b/src/test/test-crs-dynamic-constrained-auto-rebalance3/hardware_status
new file mode 100644
index 00000000..8f1e695c
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/log.expect b/src/test/test-crs-dynamic-constrained-auto-rebalance3/log.expect
new file mode 100644
index 00000000..1242f827
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/log.expect
@@ -0,0 +1,59 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: using scheduler mode 'dynamic'
+info 21 node1/lrm: got lock 'ha_agent_node1_lock'
+info 21 node1/lrm: status change wait_for_agent_lock => active
+info 21 node1/lrm: starting service vm:102
+info 21 node1/lrm: service status vm:102 started
+info 21 node1/lrm: starting service vm:103
+info 21 node1/lrm: service status vm:103 started
+info 21 node1/lrm: starting service vm:104
+info 21 node1/lrm: service status vm:104 started
+info 22 node2/crm: status change wait_for_quorum => slave
+info 23 node2/lrm: got lock 'ha_agent_node2_lock'
+info 23 node2/lrm: status change wait_for_agent_lock => active
+info 23 node2/lrm: service vm:101 - start migrate to node 'node1'
+info 23 node2/lrm: service vm:101 - end migrate to node 'node1'
+info 24 node3/crm: status change wait_for_quorum => slave
+info 40 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
+info 41 node1/lrm: starting service vm:101
+info 41 node1/lrm: service status vm:101 started
+info 60 node1/crm: auto rebalance - migrate vm:102 to node2 (expected target imbalance: 0.72)
+info 60 node1/crm: got crm command: migrate vm:102 node2
+info 60 node1/crm: migrate service 'vm:102' to node 'node2'
+info 60 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node1, target = node2)
+info 61 node1/lrm: service vm:102 - start migrate to node 'node2'
+info 61 node1/lrm: service vm:102 - end migrate to node 'node2'
+info 80 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node2)
+info 83 node2/lrm: starting service vm:102
+info 83 node2/lrm: service status vm:102 started
+info 100 node1/crm: auto rebalance - migrate vm:101 to node3 (expected target imbalance: 0.27)
+info 100 node1/crm: got crm command: migrate vm:101 node3
+info 100 node1/crm: crm command 'migrate vm:101 node3' - migrate service 'vm:103' to node 'node3' (service 'vm:103' in positive affinity with service 'vm:101')
+info 100 node1/crm: migrate service 'vm:101' to node 'node3'
+info 100 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
+info 100 node1/crm: migrate service 'vm:103' to node 'node3'
+info 100 node1/crm: service 'vm:103': state changed from 'started' to 'migrate' (node = node1, target = node3)
+info 101 node1/lrm: service vm:101 - start migrate to node 'node3'
+info 101 node1/lrm: service vm:101 - end migrate to node 'node3'
+info 101 node1/lrm: service vm:103 - start migrate to node 'node3'
+info 101 node1/lrm: service vm:103 - end migrate to node 'node3'
+info 105 node3/lrm: got lock 'ha_agent_node3_lock'
+info 105 node3/lrm: status change wait_for_agent_lock => active
+info 120 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 120 node1/crm: service 'vm:103': state changed from 'migrate' to 'started' (node = node3)
+info 125 node3/lrm: starting service vm:101
+info 125 node3/lrm: service status vm:101 started
+info 125 node3/lrm: starting service vm:103
+info 125 node3/lrm: service status vm:103 started
+info 620 hardware: exit simulation - done
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/manager_status b/src/test/test-crs-dynamic-constrained-auto-rebalance3/manager_status
new file mode 100644
index 00000000..cf90037c
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/manager_status
@@ -0,0 +1,31 @@
+{
+ "master_node": "node1",
+ "node_status": {
+ "node1":"online",
+ "node2":"online",
+ "node3":"online"
+ },
+ "service_status": {
+ "vm:101": {
+ "node": "node2",
+ "state": "migrate",
+ "target": "node1",
+ "uid": "RoPGTlvNYq/oZFokv9fgWw"
+ },
+ "vm:102": {
+ "node": "node1",
+ "state": "started",
+ "uid": "fR3i18EHk6DhF8Zd2jddNX"
+ },
+ "vm:103": {
+ "node": "node1",
+ "state": "started",
+ "uid": "JVDARwmsXoVTF8Zd0BY2Mg"
+ },
+ "vm:104": {
+ "node": "node1",
+ "state": "started",
+ "uid": "23hk23EHk6DhF8Zd0218DD"
+ }
+ }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/rules_config b/src/test/test-crs-dynamic-constrained-auto-rebalance3/rules_config
new file mode 100644
index 00000000..2c3f3171
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/rules_config
@@ -0,0 +1,3 @@
+resource-affinity: vms-stay-together
+ resources vm:101,vm:103
+ affinity positive
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/service_config b/src/test/test-crs-dynamic-constrained-auto-rebalance3/service_config
new file mode 100644
index 00000000..3dadaabc
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/service_config
@@ -0,0 +1,6 @@
+{
+ "vm:101": { "node": "node2", "state": "started" },
+ "vm:102": { "node": "node1", "state": "started" },
+ "vm:103": { "node": "node1", "state": "started" },
+ "vm:104": { "node": "node1", "state": "started" }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance3/static_service_stats b/src/test/test-crs-dynamic-constrained-auto-rebalance3/static_service_stats
new file mode 100644
index 00000000..ff1e50f8
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance3/static_service_stats
@@ -0,0 +1,6 @@
+{
+ "vm:101": { "maxcpu": 8.0, "maxmem": 8589934592 },
+ "vm:102": { "maxcpu": 8.0, "maxmem": 8589934592 },
+ "vm:103": { "maxcpu": 8.0, "maxmem": 8589934592 },
+ "vm:104": { "maxcpu": 8.0, "maxmem": 8589934592 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/README b/src/test/test-crs-dynamic-constrained-auto-rebalance4/README
new file mode 100644
index 00000000..e304cc22
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/README
@@ -0,0 +1,14 @@
+Test that the auto rebalance system with dynamic usage information will not
+rebalance an HA resource onto the same node as another HA resource that is in
+a strict negative resource affinity rule with it.
+
+There is a high node imbalance, since vm:101 and vm:102 on node1 cause a
+higher usage than node2 and node3 have. Even though it would be ideal to move
+one of these to node2 because of its very low usage, neither can be moved
+there, as vm:101 and vm:102 are each in a strict negative resource affinity
+rule with an HA resource on node2.
+
+To minimize the imbalance in the cluster, one of the HA resources from node1 is
+migrated to node3 first, and afterwards the HA resource on node3, which is not
+in a strict negative resource affinity rule with a HA resource on node2, will
+be migrated to node2.
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/cmdlist b/src/test/test-crs-dynamic-constrained-auto-rebalance4/cmdlist
new file mode 100644
index 00000000..13f90cd7
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/cmdlist
@@ -0,0 +1,3 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on" ]
+]
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/datacenter.cfg b/src/test/test-crs-dynamic-constrained-auto-rebalance4/datacenter.cfg
new file mode 100644
index 00000000..147bd61a
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/datacenter.cfg
@@ -0,0 +1,7 @@
+{
+ "crs": {
+ "ha": "dynamic",
+ "ha-auto-rebalance": 1
+ }
+}
+
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/dynamic_service_stats b/src/test/test-crs-dynamic-constrained-auto-rebalance4/dynamic_service_stats
new file mode 100644
index 00000000..083f338b
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/dynamic_service_stats
@@ -0,0 +1,6 @@
+{
+ "vm:101": { "cpu": 0.9, "mem": 4294967296 },
+ "vm:102": { "cpu": 2.4, "mem": 2621440000 },
+ "vm:103": { "cpu": 0.0, "mem": 0 },
+ "vm:104": { "cpu": 1.0, "mem": 1073741824 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/hardware_status b/src/test/test-crs-dynamic-constrained-auto-rebalance4/hardware_status
new file mode 100644
index 00000000..8f1e695c
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 24, "maxmem": 51539607552 }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/log.expect b/src/test/test-crs-dynamic-constrained-auto-rebalance4/log.expect
new file mode 100644
index 00000000..58f1b481
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/log.expect
@@ -0,0 +1,59 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: using scheduler mode 'dynamic'
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node1'
+info 20 node1/crm: adding new service 'vm:102' on node 'node1'
+info 20 node1/crm: adding new service 'vm:103' on node 'node2'
+info 20 node1/crm: adding new service 'vm:104' on node 'node3'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node1)
+info 20 node1/crm: service 'vm:102': state changed from 'request_start' to 'started' (node = node1)
+info 20 node1/crm: service 'vm:103': state changed from 'request_start' to 'started' (node = node2)
+info 20 node1/crm: service 'vm:104': state changed from 'request_start' to 'started' (node = node3)
+info 21 node1/lrm: got lock 'ha_agent_node1_lock'
+info 21 node1/lrm: status change wait_for_agent_lock => active
+info 21 node1/lrm: starting service vm:101
+info 21 node1/lrm: service status vm:101 started
+info 21 node1/lrm: starting service vm:102
+info 21 node1/lrm: service status vm:102 started
+info 22 node2/crm: status change wait_for_quorum => slave
+info 23 node2/lrm: got lock 'ha_agent_node2_lock'
+info 23 node2/lrm: status change wait_for_agent_lock => active
+info 23 node2/lrm: starting service vm:103
+info 23 node2/lrm: service status vm:103 started
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:104
+info 25 node3/lrm: service status vm:104 started
+info 80 node1/crm: auto rebalance - migrate vm:101 to node3 (expected target imbalance: 0.72)
+info 80 node1/crm: got crm command: migrate vm:101 node3
+info 80 node1/crm: migrate service 'vm:101' to node 'node3'
+info 80 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
+info 81 node1/lrm: service vm:101 - start migrate to node 'node3'
+info 81 node1/lrm: service vm:101 - end migrate to node 'node3'
+info 100 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 105 node3/lrm: starting service vm:101
+info 105 node3/lrm: service status vm:101 started
+info 160 node1/crm: auto rebalance - migrate vm:104 to node2 (expected target imbalance: 0.33)
+info 160 node1/crm: got crm command: migrate vm:104 node2
+info 160 node1/crm: migrate service 'vm:104' to node 'node2'
+info 160 node1/crm: service 'vm:104': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 165 node3/lrm: service vm:104 - start migrate to node 'node2'
+info 165 node3/lrm: service vm:104 - end migrate to node 'node2'
+info 180 node1/crm: service 'vm:104': state changed from 'migrate' to 'started' (node = node2)
+info 183 node2/lrm: starting service vm:104
+info 183 node2/lrm: service status vm:104 started
+info 620 hardware: exit simulation - done
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/manager_status b/src/test/test-crs-dynamic-constrained-auto-rebalance4/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/rules_config b/src/test/test-crs-dynamic-constrained-auto-rebalance4/rules_config
new file mode 100644
index 00000000..eef5460f
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/rules_config
@@ -0,0 +1,7 @@
+resource-affinity: vms-stay-apart1
+ resources vm:101,vm:103
+ affinity negative
+
+resource-affinity: vms-stay-apart2
+ resources vm:102,vm:103
+ affinity negative
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/service_config b/src/test/test-crs-dynamic-constrained-auto-rebalance4/service_config
new file mode 100644
index 00000000..16bffacf
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/service_config
@@ -0,0 +1,6 @@
+{
+ "vm:101": { "node": "node1", "state": "started" },
+ "vm:102": { "node": "node1", "state": "started" },
+ "vm:103": { "node": "node2", "state": "started" },
+ "vm:104": { "node": "node3", "state": "started" }
+}
diff --git a/src/test/test-crs-dynamic-constrained-auto-rebalance4/static_service_stats b/src/test/test-crs-dynamic-constrained-auto-rebalance4/static_service_stats
new file mode 100644
index 00000000..ff1e50f8
--- /dev/null
+++ b/src/test/test-crs-dynamic-constrained-auto-rebalance4/static_service_stats
@@ -0,0 +1,6 @@
+{
+ "vm:101": { "maxcpu": 8.0, "maxmem": 8589934592 },
+ "vm:102": { "maxcpu": 8.0, "maxmem": 8589934592 },
+ "vm:103": { "maxcpu": 8.0, "maxmem": 8589934592 },
+ "vm:104": { "maxcpu": 8.0, "maxmem": 8589934592 }
+}
--
2.47.3
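As an illustration of the behavior exercised by case 4 above, the following is
a hypothetical, simplified sketch of a brute-force one-step rebalancer that
minimizes node imbalance while respecting strict negative resource affinity
rules. The imbalance metric (max minus min per-node CPU load) and all function
names here are illustrative assumptions, not the actual ha-manager
implementation; the loads and placements are taken from the case 4 fixtures.

```python
# Simplified sketch (NOT the ha-manager algorithm): pick the single legal
# migration that most reduces a toy imbalance metric, skipping any move
# that would violate a strict negative resource affinity rule.
from itertools import product

# Per-resource CPU load (from dynamic_service_stats) and initial placement.
load = {"vm:101": 0.9, "vm:102": 2.4, "vm:103": 0.0, "vm:104": 1.0}
placement = {"vm:101": "node1", "vm:102": "node1",
             "vm:103": "node2", "vm:104": "node3"}
nodes = ["node1", "node2", "node3"]
# Strict negative affinity pairs: these resources may never share a node.
negative = [("vm:101", "vm:103"), ("vm:102", "vm:103")]

def imbalance(pl):
    # Toy metric: spread between the most and least loaded node.
    usage = {n: sum(load[r] for r, on in pl.items() if on == n) for n in nodes}
    return max(usage.values()) - min(usage.values())

def violates(pl):
    return any(pl[a] == pl[b] for a, b in negative)

def best_migration(pl):
    # Brute force: try every single migration and keep the legal one
    # with the lowest resulting imbalance.
    best, best_score = None, imbalance(pl)
    for res, target in product(pl, nodes):
        if target == pl[res]:
            continue
        candidate = {**pl, res: target}
        if violates(candidate):
            continue
        score = imbalance(candidate)
        if score < best_score:
            best, best_score = (res, target), score
    return best

# vm:101 and vm:102 cannot move to node2 (negative affinity with vm:103),
# so the first step moves one of them to node3 instead.
print(best_migration(placement))  # -> ('vm:101', 'node3')
```

This matches the first migration in log.expect (vm:101 to node3): even though
node2 is the least loaded node, both candidates on node1 are barred from it by
the negative affinity rules, so the rebalancer settles for node3.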