From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH ha-manager v4 16/28] test: add dynamic usage scheduler test cases
Date: Thu, 2 Apr 2026 14:44:10 +0200 [thread overview]
Message-ID: <20260402124817.416232-17-d.kral@proxmox.com> (raw)
In-Reply-To: <20260402124817.416232-1-d.kral@proxmox.com>
These test cases document the basic behavior of the scheduler when using
the dynamic usage information of the HA resources, with
rebalance-on-start cleared and set, respectively.

As the mechanisms of the scheduler are mostly the same for static and
dynamic usage information, these test cases verify only the essential
parts, which are that:

- dynamic usage information is used correctly (in both test cases), and
- repeatedly scheduling resources with score_nodes_to_start_service(...)
  correctly simulates that the previously scheduled HA resources are
  already started
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Dominik Rusovac <d.rusovac@proxmox.com>
---
changes v3 -> v4:
- none
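
To illustrate the second bullet point for reviewers, here is a
hypothetical, simplified sketch (in Python, not the Perl implementation)
of a greedy rebalance-on-start pass over the fixture data from
test-crs-dynamic-rebalance1. The scoring metric below (sum of relative
CPU and memory utilization) is an assumption for illustration only; the
actual scheduler scores nodes differently, so its placements (see
log.expect) need not match this sketch. The point is only the
accumulation step: each already-placed resource's dynamic usage is
counted as if it were started before the next one is scheduled.

```python
# Node capacities and per-resource dynamic usage, copied from the
# hardware_status and dynamic_service_stats fixtures.
NODES = {n: {"maxcpu": 32, "maxmem": 256_000_000_000}
         for n in ("node1", "node2", "node3")}
STATS = {
    "vm:101": {"cpu": 1.3, "mem": 1_073_741_824},
    "vm:102": {"cpu": 5.6, "mem": 3_221_225_472},
    "vm:103": {"cpu": 0.5, "mem": 4_000_000_000},
    "vm:104": {"cpu": 7.9, "mem": 2_147_483_648},
    "vm:105": {"cpu": 3.2, "mem": 2_684_354_560},
}

def rebalance(services):
    usage = {n: {"cpu": 0.0, "mem": 0} for n in NODES}
    placement = {}
    for sid in services:
        def score(node):
            # Load after hypothetically starting sid on node: sum of
            # relative CPU and memory utilization (a toy metric).
            cpu = (usage[node]["cpu"] + STATS[sid]["cpu"]) / NODES[node]["maxcpu"]
            mem = (usage[node]["mem"] + STATS[sid]["mem"]) / NODES[node]["maxmem"]
            return cpu + mem
        best = min(NODES, key=score)
        placement[sid] = best
        # Account for the scheduled resource as if it were already
        # started, so later decisions see its usage.
        usage[best]["cpu"] += STATS[sid]["cpu"]
        usage[best]["mem"] += STATS[sid]["mem"]
    return placement

print(rebalance(sorted(STATS)))
```

Even with this toy metric the five resources end up spread over all
three nodes, because each placement decision sees the usage of the
resources scheduled before it.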
src/test/test-crs-dynamic-rebalance1/README | 3 +
src/test/test-crs-dynamic-rebalance1/cmdlist | 4 +
.../datacenter.cfg | 7 ++
.../dynamic_service_stats | 7 ++
.../hardware_status | 5 ++
.../test-crs-dynamic-rebalance1/log.expect | 82 +++++++++++++++++++
.../manager_status | 1 +
.../service_config | 7 ++
.../static_service_stats | 7 ++
src/test/test-crs-dynamic1/README | 4 +
src/test/test-crs-dynamic1/cmdlist | 4 +
src/test/test-crs-dynamic1/datacenter.cfg | 6 ++
.../test-crs-dynamic1/dynamic_service_stats | 3 +
src/test/test-crs-dynamic1/hardware_status | 5 ++
src/test/test-crs-dynamic1/log.expect | 51 ++++++++++++
src/test/test-crs-dynamic1/manager_status | 1 +
src/test/test-crs-dynamic1/service_config | 3 +
.../test-crs-dynamic1/static_service_stats | 3 +
18 files changed, 203 insertions(+)
create mode 100644 src/test/test-crs-dynamic-rebalance1/README
create mode 100644 src/test/test-crs-dynamic-rebalance1/cmdlist
create mode 100644 src/test/test-crs-dynamic-rebalance1/datacenter.cfg
create mode 100644 src/test/test-crs-dynamic-rebalance1/dynamic_service_stats
create mode 100644 src/test/test-crs-dynamic-rebalance1/hardware_status
create mode 100644 src/test/test-crs-dynamic-rebalance1/log.expect
create mode 100644 src/test/test-crs-dynamic-rebalance1/manager_status
create mode 100644 src/test/test-crs-dynamic-rebalance1/service_config
create mode 100644 src/test/test-crs-dynamic-rebalance1/static_service_stats
create mode 100644 src/test/test-crs-dynamic1/README
create mode 100644 src/test/test-crs-dynamic1/cmdlist
create mode 100644 src/test/test-crs-dynamic1/datacenter.cfg
create mode 100644 src/test/test-crs-dynamic1/dynamic_service_stats
create mode 100644 src/test/test-crs-dynamic1/hardware_status
create mode 100644 src/test/test-crs-dynamic1/log.expect
create mode 100644 src/test/test-crs-dynamic1/manager_status
create mode 100644 src/test/test-crs-dynamic1/service_config
create mode 100644 src/test/test-crs-dynamic1/static_service_stats
diff --git a/src/test/test-crs-dynamic-rebalance1/README b/src/test/test-crs-dynamic-rebalance1/README
new file mode 100644
index 00000000..df0ba0a8
--- /dev/null
+++ b/src/test/test-crs-dynamic-rebalance1/README
@@ -0,0 +1,3 @@
+Test rebalancing on start and how after a failed node the recovery gets
+balanced out for a small batch of HA resources with the dynamic usage
+information.
diff --git a/src/test/test-crs-dynamic-rebalance1/cmdlist b/src/test/test-crs-dynamic-rebalance1/cmdlist
new file mode 100644
index 00000000..eee0e40e
--- /dev/null
+++ b/src/test/test-crs-dynamic-rebalance1/cmdlist
@@ -0,0 +1,4 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on"],
+ [ "network node3 off" ]
+]
diff --git a/src/test/test-crs-dynamic-rebalance1/datacenter.cfg b/src/test/test-crs-dynamic-rebalance1/datacenter.cfg
new file mode 100644
index 00000000..0f76d24e
--- /dev/null
+++ b/src/test/test-crs-dynamic-rebalance1/datacenter.cfg
@@ -0,0 +1,7 @@
+{
+ "crs": {
+ "ha": "dynamic",
+ "ha-rebalance-on-start": 1
+ }
+}
+
diff --git a/src/test/test-crs-dynamic-rebalance1/dynamic_service_stats b/src/test/test-crs-dynamic-rebalance1/dynamic_service_stats
new file mode 100644
index 00000000..5ef75ae0
--- /dev/null
+++ b/src/test/test-crs-dynamic-rebalance1/dynamic_service_stats
@@ -0,0 +1,7 @@
+{
+ "vm:101": { "cpu": 1.3, "mem": 1073741824 },
+ "vm:102": { "cpu": 5.6, "mem": 3221225472 },
+ "vm:103": { "cpu": 0.5, "mem": 4000000000 },
+ "vm:104": { "cpu": 7.9, "mem": 2147483648 },
+ "vm:105": { "cpu": 3.2, "mem": 2684354560 }
+}
diff --git a/src/test/test-crs-dynamic-rebalance1/hardware_status b/src/test/test-crs-dynamic-rebalance1/hardware_status
new file mode 100644
index 00000000..bfdbbf7b
--- /dev/null
+++ b/src/test/test-crs-dynamic-rebalance1/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 256000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 256000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 256000000000 }
+}
diff --git a/src/test/test-crs-dynamic-rebalance1/log.expect b/src/test/test-crs-dynamic-rebalance1/log.expect
new file mode 100644
index 00000000..5c8b050c
--- /dev/null
+++ b/src/test/test-crs-dynamic-rebalance1/log.expect
@@ -0,0 +1,82 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: using scheduler mode 'dynamic'
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: adding new service 'vm:102' on node 'node3'
+info 20 node1/crm: adding new service 'vm:103' on node 'node3'
+info 20 node1/crm: adding new service 'vm:104' on node 'node3'
+info 20 node1/crm: adding new service 'vm:105' on node 'node3'
+info 20 node1/crm: service vm:101: re-balance selected new node node1 for startup
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'request_start_balance' (node = node3, target = node1)
+info 20 node1/crm: service vm:102: re-balance selected new node node2 for startup
+info 20 node1/crm: service 'vm:102': state changed from 'request_start' to 'request_start_balance' (node = node3, target = node2)
+info 20 node1/crm: service vm:103: re-balance selected current node node3 for startup
+info 20 node1/crm: service 'vm:103': state changed from 'request_start' to 'started' (node = node3)
+info 20 node1/crm: service vm:104: re-balance selected new node node1 for startup
+info 20 node1/crm: service 'vm:104': state changed from 'request_start' to 'request_start_balance' (node = node3, target = node1)
+info 20 node1/crm: service vm:105: re-balance selected new node node2 for startup
+info 20 node1/crm: service 'vm:105': state changed from 'request_start' to 'request_start_balance' (node = node3, target = node2)
+info 21 node1/lrm: got lock 'ha_agent_node1_lock'
+info 21 node1/lrm: status change wait_for_agent_lock => active
+info 22 node2/crm: status change wait_for_quorum => slave
+info 23 node2/lrm: got lock 'ha_agent_node2_lock'
+info 23 node2/lrm: status change wait_for_agent_lock => active
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: service vm:101 - start relocate to node 'node1'
+info 25 node3/lrm: service vm:101 - end relocate to node 'node1'
+info 25 node3/lrm: service vm:102 - start relocate to node 'node2'
+info 25 node3/lrm: service vm:102 - end relocate to node 'node2'
+info 25 node3/lrm: starting service vm:103
+info 25 node3/lrm: service status vm:103 started
+info 25 node3/lrm: service vm:104 - start relocate to node 'node1'
+info 25 node3/lrm: service vm:104 - end relocate to node 'node1'
+info 25 node3/lrm: service vm:105 - start relocate to node 'node2'
+info 25 node3/lrm: service vm:105 - end relocate to node 'node2'
+info 40 node1/crm: service 'vm:101': state changed from 'request_start_balance' to 'started' (node = node1)
+info 40 node1/crm: service 'vm:102': state changed from 'request_start_balance' to 'started' (node = node2)
+info 40 node1/crm: service 'vm:104': state changed from 'request_start_balance' to 'started' (node = node1)
+info 40 node1/crm: service 'vm:105': state changed from 'request_start_balance' to 'started' (node = node2)
+info 41 node1/lrm: starting service vm:101
+info 41 node1/lrm: service status vm:101 started
+info 41 node1/lrm: starting service vm:104
+info 41 node1/lrm: service status vm:104 started
+info 43 node2/lrm: starting service vm:102
+info 43 node2/lrm: service status vm:102 started
+info 43 node2/lrm: starting service vm:105
+info 43 node2/lrm: service status vm:105 started
+info 120 cmdlist: execute network node3 off
+info 120 node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info 124 node3/crm: status change slave => wait_for_quorum
+info 125 node3/lrm: status change active => lost_agent_lock
+info 160 node1/crm: service 'vm:103': state changed from 'started' to 'fence'
+info 160 node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai 160 node1/crm: FENCE: Try to fence node 'node3'
+info 166 watchdog: execute power node3 off
+info 165 node3/crm: killed by poweroff
+info 166 node3/lrm: killed by poweroff
+info 166 hardware: server 'node3' stopped by poweroff (watchdog)
+info 240 node1/crm: got lock 'ha_agent_node3_lock'
+info 240 node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info 240 node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai 240 node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info 240 node1/crm: service 'vm:103': state changed from 'fence' to 'recovery'
+info 240 node1/crm: recover service 'vm:103' from fenced node 'node3' to node 'node1'
+info 240 node1/crm: service 'vm:103': state changed from 'recovery' to 'started' (node = node1)
+info 241 node1/lrm: starting service vm:103
+info 241 node1/lrm: service status vm:103 started
+info 720 hardware: exit simulation - done
diff --git a/src/test/test-crs-dynamic-rebalance1/manager_status b/src/test/test-crs-dynamic-rebalance1/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-crs-dynamic-rebalance1/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-crs-dynamic-rebalance1/service_config b/src/test/test-crs-dynamic-rebalance1/service_config
new file mode 100644
index 00000000..3071f480
--- /dev/null
+++ b/src/test/test-crs-dynamic-rebalance1/service_config
@@ -0,0 +1,7 @@
+{
+ "vm:101": { "node": "node3", "state": "started" },
+ "vm:102": { "node": "node3", "state": "started" },
+ "vm:103": { "node": "node3", "state": "started" },
+ "vm:104": { "node": "node3", "state": "started" },
+ "vm:105": { "node": "node3", "state": "started" }
+}
diff --git a/src/test/test-crs-dynamic-rebalance1/static_service_stats b/src/test/test-crs-dynamic-rebalance1/static_service_stats
new file mode 100644
index 00000000..a9e810d7
--- /dev/null
+++ b/src/test/test-crs-dynamic-rebalance1/static_service_stats
@@ -0,0 +1,7 @@
+{
+ "vm:101": { "maxcpu": 8, "maxmem": 4294967296 },
+ "vm:102": { "maxcpu": 8, "maxmem": 4294967296 },
+ "vm:103": { "maxcpu": 8, "maxmem": 4294967296 },
+ "vm:104": { "maxcpu": 8, "maxmem": 4294967296 },
+ "vm:105": { "maxcpu": 8, "maxmem": 4294967296 }
+}
diff --git a/src/test/test-crs-dynamic1/README b/src/test/test-crs-dynamic1/README
new file mode 100644
index 00000000..e6382130
--- /dev/null
+++ b/src/test/test-crs-dynamic1/README
@@ -0,0 +1,4 @@
+Test how service recovery works with dynamic usage information.
+
+Expect that the single service gets recovered to the node with the most
+available resources.
diff --git a/src/test/test-crs-dynamic1/cmdlist b/src/test/test-crs-dynamic1/cmdlist
new file mode 100644
index 00000000..8684073c
--- /dev/null
+++ b/src/test/test-crs-dynamic1/cmdlist
@@ -0,0 +1,4 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on"],
+ [ "network node1 off" ]
+]
diff --git a/src/test/test-crs-dynamic1/datacenter.cfg b/src/test/test-crs-dynamic1/datacenter.cfg
new file mode 100644
index 00000000..6a7fbc48
--- /dev/null
+++ b/src/test/test-crs-dynamic1/datacenter.cfg
@@ -0,0 +1,6 @@
+{
+ "crs": {
+ "ha": "dynamic"
+ }
+}
+
diff --git a/src/test/test-crs-dynamic1/dynamic_service_stats b/src/test/test-crs-dynamic1/dynamic_service_stats
new file mode 100644
index 00000000..922ae9a6
--- /dev/null
+++ b/src/test/test-crs-dynamic1/dynamic_service_stats
@@ -0,0 +1,3 @@
+{
+ "vm:102": { "cpu": 5.9, "mem": 2744123392 }
+}
diff --git a/src/test/test-crs-dynamic1/hardware_status b/src/test/test-crs-dynamic1/hardware_status
new file mode 100644
index 00000000..bbe44a96
--- /dev/null
+++ b/src/test/test-crs-dynamic1/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 200000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 300000000000 }
+}
diff --git a/src/test/test-crs-dynamic1/log.expect b/src/test/test-crs-dynamic1/log.expect
new file mode 100644
index 00000000..b7e298e1
--- /dev/null
+++ b/src/test/test-crs-dynamic1/log.expect
@@ -0,0 +1,51 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: using scheduler mode 'dynamic'
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:102' on node 'node1'
+info 20 node1/crm: service 'vm:102': state changed from 'request_start' to 'started' (node = node1)
+info 21 node1/lrm: got lock 'ha_agent_node1_lock'
+info 21 node1/lrm: status change wait_for_agent_lock => active
+info 21 node1/lrm: starting service vm:102
+info 21 node1/lrm: service status vm:102 started
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 120 cmdlist: execute network node1 off
+info 120 node1/crm: status change master => lost_manager_lock
+info 120 node1/crm: status change lost_manager_lock => wait_for_quorum
+info 121 node1/lrm: status change active => lost_agent_lock
+info 162 watchdog: execute power node1 off
+info 161 node1/crm: killed by poweroff
+info 162 node1/lrm: killed by poweroff
+info 162 hardware: server 'node1' stopped by poweroff (watchdog)
+info 222 node3/crm: got lock 'ha_manager_lock'
+info 222 node3/crm: status change slave => master
+info 222 node3/crm: using scheduler mode 'dynamic'
+info 222 node3/crm: node 'node1': state changed from 'online' => 'unknown'
+info 282 node3/crm: service 'vm:102': state changed from 'started' to 'fence'
+info 282 node3/crm: node 'node1': state changed from 'unknown' => 'fence'
+emai 282 node3/crm: FENCE: Try to fence node 'node1'
+info 282 node3/crm: got lock 'ha_agent_node1_lock'
+info 282 node3/crm: fencing: acknowledged - got agent lock for node 'node1'
+info 282 node3/crm: node 'node1': state changed from 'fence' => 'unknown'
+emai 282 node3/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node1'
+info 282 node3/crm: service 'vm:102': state changed from 'fence' to 'recovery'
+info 282 node3/crm: recover service 'vm:102' from fenced node 'node1' to node 'node3'
+info 282 node3/crm: service 'vm:102': state changed from 'recovery' to 'started' (node = node3)
+info 283 node3/lrm: got lock 'ha_agent_node3_lock'
+info 283 node3/lrm: status change wait_for_agent_lock => active
+info 283 node3/lrm: starting service vm:102
+info 283 node3/lrm: service status vm:102 started
+info 720 hardware: exit simulation - done
diff --git a/src/test/test-crs-dynamic1/manager_status b/src/test/test-crs-dynamic1/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-crs-dynamic1/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-crs-dynamic1/service_config b/src/test/test-crs-dynamic1/service_config
new file mode 100644
index 00000000..9c124471
--- /dev/null
+++ b/src/test/test-crs-dynamic1/service_config
@@ -0,0 +1,3 @@
+{
+ "vm:102": { "node": "node1", "state": "enabled" }
+}
diff --git a/src/test/test-crs-dynamic1/static_service_stats b/src/test/test-crs-dynamic1/static_service_stats
new file mode 100644
index 00000000..1819d24c
--- /dev/null
+++ b/src/test/test-crs-dynamic1/static_service_stats
@@ -0,0 +1,3 @@
+{
+ "vm:102": { "maxcpu": 8, "maxmem": 4294967296 }
+}
--
2.47.3