From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH ha-manager v2 19/26] test: ha tester: add test cases in more complex scenarios
Date: Fri, 20 Jun 2025 16:31:31 +0200	[thread overview]
Message-ID: <20250620143148.218469-24-d.kral@proxmox.com> (raw)
In-Reply-To: <20250620143148.218469-1-d.kral@proxmox.com>

Add test cases where colocation rules are used with the static
utilization scheduler and the rebalance-on-start option enabled. These
verify the behavior in the following scenarios:

- 7 services with intertwined colocation rules in a 3 node cluster;
  1 node failing
- 3 neg. colocated services in a 3 node cluster, where the rules are
  stated in pairwise form; 1 node failing
- 5 neg. colocated services in a 5 node cluster; nodes failing
  consecutively, one after another

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes since v1:
    - changed intransitive to pairwise
    - added dummy services in the second test case to check whether
      colocation rules are applied during rebalancing
    - changed the third test case to check consecutive node failures and
      that the colocation rules are applied correctly after each node
      failure

 .../test-crs-static-rebalance-coloc1/README   |  26 ++
 .../test-crs-static-rebalance-coloc1/cmdlist  |   4 +
 .../datacenter.cfg                            |   6 +
 .../hardware_status                           |   5 +
 .../log.expect                                | 120 ++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |  19 ++
 .../service_config                            |  10 +
 .../static_service_stats                      |  10 +
 .../test-crs-static-rebalance-coloc2/README   |  20 ++
 .../test-crs-static-rebalance-coloc2/cmdlist  |   4 +
 .../datacenter.cfg                            |   6 +
 .../hardware_status                           |   5 +
 .../log.expect                                | 174 +++++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |  11 +
 .../service_config                            |  14 +
 .../static_service_stats                      |  14 +
 .../test-crs-static-rebalance-coloc3/README   |  22 ++
 .../test-crs-static-rebalance-coloc3/cmdlist  |  22 ++
 .../datacenter.cfg                            |   6 +
 .../hardware_status                           |   7 +
 .../log.expect                                | 272 ++++++++++++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   9 +
 .../static_service_stats                      |   9 +
 27 files changed, 801 insertions(+)
 create mode 100644 src/test/test-crs-static-rebalance-coloc1/README
 create mode 100644 src/test/test-crs-static-rebalance-coloc1/cmdlist
 create mode 100644 src/test/test-crs-static-rebalance-coloc1/datacenter.cfg
 create mode 100644 src/test/test-crs-static-rebalance-coloc1/hardware_status
 create mode 100644 src/test/test-crs-static-rebalance-coloc1/log.expect
 create mode 100644 src/test/test-crs-static-rebalance-coloc1/manager_status
 create mode 100644 src/test/test-crs-static-rebalance-coloc1/rules_config
 create mode 100644 src/test/test-crs-static-rebalance-coloc1/service_config
 create mode 100644 src/test/test-crs-static-rebalance-coloc1/static_service_stats
 create mode 100644 src/test/test-crs-static-rebalance-coloc2/README
 create mode 100644 src/test/test-crs-static-rebalance-coloc2/cmdlist
 create mode 100644 src/test/test-crs-static-rebalance-coloc2/datacenter.cfg
 create mode 100644 src/test/test-crs-static-rebalance-coloc2/hardware_status
 create mode 100644 src/test/test-crs-static-rebalance-coloc2/log.expect
 create mode 100644 src/test/test-crs-static-rebalance-coloc2/manager_status
 create mode 100644 src/test/test-crs-static-rebalance-coloc2/rules_config
 create mode 100644 src/test/test-crs-static-rebalance-coloc2/service_config
 create mode 100644 src/test/test-crs-static-rebalance-coloc2/static_service_stats
 create mode 100644 src/test/test-crs-static-rebalance-coloc3/README
 create mode 100644 src/test/test-crs-static-rebalance-coloc3/cmdlist
 create mode 100644 src/test/test-crs-static-rebalance-coloc3/datacenter.cfg
 create mode 100644 src/test/test-crs-static-rebalance-coloc3/hardware_status
 create mode 100644 src/test/test-crs-static-rebalance-coloc3/log.expect
 create mode 100644 src/test/test-crs-static-rebalance-coloc3/manager_status
 create mode 100644 src/test/test-crs-static-rebalance-coloc3/rules_config
 create mode 100644 src/test/test-crs-static-rebalance-coloc3/service_config
 create mode 100644 src/test/test-crs-static-rebalance-coloc3/static_service_stats

diff --git a/src/test/test-crs-static-rebalance-coloc1/README b/src/test/test-crs-static-rebalance-coloc1/README
new file mode 100644
index 0000000..0685189
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc1/README
@@ -0,0 +1,26 @@
+Test whether a mixed set of strict colocation rules, in conjunction with the
+static load scheduler with auto-rebalancing enabled, is applied correctly on
+service start and in case of a subsequent failover.
+
+The test scenario is:
+- vm:101 and vm:102 are not bound to each other by any colocation rule
+- Services that must be kept together:
+    - vm:102 and vm:107
+    - vm:104, vm:106, and vm:108
+- Services that must be kept separate:
+    - vm:103, vm:104, and vm:105
+    - vm:103, vm:106, and vm:107
+    - vm:107 and vm:108
+- Therefore, the positive and negative colocation rules' service members have
+  consistent, non-contradictory interdependencies
+- vm:101 and vm:102 are currently assigned to node1 and node2 respectively
+- vm:103 through vm:108 are currently assigned to node3
+
+The expected outcome is:
+- vm:101, vm:102, and vm:103 should be started on node1, node2, and node3
+  respectively, as nothing is running on those nodes yet
+- vm:104, vm:106, and vm:108 should all be assigned to the same node, which
+  will be node1, since it has the most resources left for vm:104
+- vm:105 and vm:107 should both be assigned to the same node, which will be
+  node2, since neither can be assigned to the other nodes because of the
+  colocation constraints
diff --git a/src/test/test-crs-static-rebalance-coloc1/cmdlist b/src/test/test-crs-static-rebalance-coloc1/cmdlist
new file mode 100644
index 0000000..eee0e40
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc1/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-crs-static-rebalance-coloc1/datacenter.cfg b/src/test/test-crs-static-rebalance-coloc1/datacenter.cfg
new file mode 100644
index 0000000..f2671a5
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc1/datacenter.cfg
@@ -0,0 +1,6 @@
+{
+    "crs": {
+        "ha": "static",
+        "ha-rebalance-on-start": 1
+    }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc1/hardware_status b/src/test/test-crs-static-rebalance-coloc1/hardware_status
new file mode 100644
index 0000000..84484af
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 },
+  "node2": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 },
+  "node3": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc1/log.expect b/src/test/test-crs-static-rebalance-coloc1/log.expect
new file mode 100644
index 0000000..cdd2497
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc1/log.expect
@@ -0,0 +1,120 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: using scheduler mode 'static'
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node3'
+info     20    node1/crm: adding new service 'vm:104' on node 'node3'
+info     20    node1/crm: adding new service 'vm:105' on node 'node3'
+info     20    node1/crm: adding new service 'vm:106' on node 'node3'
+info     20    node1/crm: adding new service 'vm:107' on node 'node3'
+info     20    node1/crm: adding new service 'vm:108' on node 'node3'
+info     20    node1/crm: service vm:101: re-balance selected current node node1 for startup
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service vm:102: re-balance selected current node node2 for startup
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service vm:103: re-balance selected current node node3 for startup
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service vm:104: re-balance selected new node node1 for startup
+info     20    node1/crm: service 'vm:104': state changed from 'request_start' to 'request_start_balance'  (node = node3, target = node1)
+info     20    node1/crm: service vm:105: re-balance selected new node node2 for startup
+info     20    node1/crm: service 'vm:105': state changed from 'request_start' to 'request_start_balance'  (node = node3, target = node2)
+info     20    node1/crm: service vm:106: re-balance selected new node node1 for startup
+info     20    node1/crm: service 'vm:106': state changed from 'request_start' to 'request_start_balance'  (node = node3, target = node1)
+info     20    node1/crm: service vm:107: re-balance selected new node node2 for startup
+info     20    node1/crm: service 'vm:107': state changed from 'request_start' to 'request_start_balance'  (node = node3, target = node2)
+info     20    node1/crm: service vm:108: re-balance selected new node node1 for startup
+info     20    node1/crm: service 'vm:108': state changed from 'request_start' to 'request_start_balance'  (node = node3, target = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:101
+info     21    node1/lrm: service status vm:101 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:103
+info     25    node3/lrm: service status vm:103 started
+info     25    node3/lrm: service vm:104 - start relocate to node 'node1'
+info     25    node3/lrm: service vm:104 - end relocate to node 'node1'
+info     25    node3/lrm: service vm:105 - start relocate to node 'node2'
+info     25    node3/lrm: service vm:105 - end relocate to node 'node2'
+info     25    node3/lrm: service vm:106 - start relocate to node 'node1'
+info     25    node3/lrm: service vm:106 - end relocate to node 'node1'
+info     25    node3/lrm: service vm:107 - start relocate to node 'node2'
+info     25    node3/lrm: service vm:107 - end relocate to node 'node2'
+info     25    node3/lrm: service vm:108 - start relocate to node 'node1'
+info     25    node3/lrm: service vm:108 - end relocate to node 'node1'
+info     40    node1/crm: service 'vm:104': state changed from 'request_start_balance' to 'started'  (node = node1)
+info     40    node1/crm: service 'vm:105': state changed from 'request_start_balance' to 'started'  (node = node2)
+info     40    node1/crm: service 'vm:106': state changed from 'request_start_balance' to 'started'  (node = node1)
+info     40    node1/crm: service 'vm:107': state changed from 'request_start_balance' to 'started'  (node = node2)
+info     40    node1/crm: service 'vm:108': state changed from 'request_start_balance' to 'started'  (node = node1)
+info     41    node1/lrm: starting service vm:104
+info     41    node1/lrm: service status vm:104 started
+info     41    node1/lrm: starting service vm:106
+info     41    node1/lrm: service status vm:106 started
+info     41    node1/lrm: starting service vm:108
+info     41    node1/lrm: service status vm:108 started
+info     43    node2/lrm: starting service vm:105
+info     43    node2/lrm: service status vm:105 started
+info     43    node2/lrm: starting service vm:107
+info     43    node2/lrm: service status vm:107 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:103': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:103': state changed from 'fence' to 'recovery'
+err     240    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     260    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     280    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     300    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     320    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     340    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     360    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     380    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     400    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     420    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     440    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     460    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     480    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     500    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     520    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     540    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     560    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     580    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     600    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     620    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     640    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     660    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     680    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+err     700    node1/crm: recovering service 'vm:103' from fenced node 'node3' failed, no recovery node found
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-crs-static-rebalance-coloc1/manager_status b/src/test/test-crs-static-rebalance-coloc1/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc1/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-crs-static-rebalance-coloc1/rules_config b/src/test/test-crs-static-rebalance-coloc1/rules_config
new file mode 100644
index 0000000..3e6ebf2
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc1/rules_config
@@ -0,0 +1,19 @@
+colocation: vms-must-stick-together1
+	services vm:102,vm:107
+	affinity together
+
+colocation: vms-must-stick-together2
+	services vm:104,vm:106,vm:108
+	affinity together
+
+colocation: vms-must-stay-apart1
+	services vm:103,vm:104,vm:105
+	affinity separate
+
+colocation: vms-must-stay-apart2
+	services vm:103,vm:106,vm:107
+	affinity separate
+
+colocation: vms-must-stay-apart3
+	services vm:107,vm:108
+	affinity separate
diff --git a/src/test/test-crs-static-rebalance-coloc1/service_config b/src/test/test-crs-static-rebalance-coloc1/service_config
new file mode 100644
index 0000000..02e4a07
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc1/service_config
@@ -0,0 +1,10 @@
+{
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node3", "state": "started" },
+    "vm:104": { "node": "node3", "state": "started" },
+    "vm:105": { "node": "node3", "state": "started" },
+    "vm:106": { "node": "node3", "state": "started" },
+    "vm:107": { "node": "node3", "state": "started" },
+    "vm:108": { "node": "node3", "state": "started" }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc1/static_service_stats b/src/test/test-crs-static-rebalance-coloc1/static_service_stats
new file mode 100644
index 0000000..c6472ca
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc1/static_service_stats
@@ -0,0 +1,10 @@
+{
+    "vm:101": { "maxcpu": 8, "maxmem": 16000000000 },
+    "vm:102": { "maxcpu": 4, "maxmem": 24000000000 },
+    "vm:103": { "maxcpu": 2, "maxmem": 32000000000 },
+    "vm:104": { "maxcpu": 4, "maxmem": 48000000000 },
+    "vm:105": { "maxcpu": 8, "maxmem": 16000000000 },
+    "vm:106": { "maxcpu": 4, "maxmem": 32000000000 },
+    "vm:107": { "maxcpu": 2, "maxmem": 64000000000 },
+    "vm:108": { "maxcpu": 8, "maxmem": 48000000000 }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc2/README b/src/test/test-crs-static-rebalance-coloc2/README
new file mode 100644
index 0000000..c335752
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc2/README
@@ -0,0 +1,20 @@
+Test whether pairwise strict negative colocation rules, i.e. the negative
+colocation relations a<->b, b<->c, and a<->c, in conjunction with the static
+load scheduler with auto-rebalancing, are applied correctly on service start
+and in case of a subsequent failover.
+
+The test scenario is:
+- vm:100 and vm:200 must be kept separate
+- vm:200 and vm:300 must be kept separate
+- vm:100 and vm:300 must be kept separate
+- Therefore, vm:100, vm:200, and vm:300 must be kept separate
+- The services' static usage stats are chosen so that during rebalancing vm:300
+  will need to select a less-than-ideal node according to the static usage
+  scheduler (node1 would be its ideal node), to test whether the colocation
+  rules still apply correctly
+
+The expected outcome is:
+- vm:100, vm:200, and vm:300 should be started on node1, node2, and node3
+  respectively, just as if the three negative colocation rules had been
+  stated as a single negative colocation rule
+- When node3 fails, vm:300 cannot be recovered
diff --git a/src/test/test-crs-static-rebalance-coloc2/cmdlist b/src/test/test-crs-static-rebalance-coloc2/cmdlist
new file mode 100644
index 0000000..eee0e40
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc2/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-crs-static-rebalance-coloc2/datacenter.cfg b/src/test/test-crs-static-rebalance-coloc2/datacenter.cfg
new file mode 100644
index 0000000..f2671a5
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc2/datacenter.cfg
@@ -0,0 +1,6 @@
+{
+    "crs": {
+        "ha": "static",
+        "ha-rebalance-on-start": 1
+    }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc2/hardware_status b/src/test/test-crs-static-rebalance-coloc2/hardware_status
new file mode 100644
index 0000000..84484af
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc2/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 },
+  "node2": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 },
+  "node3": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc2/log.expect b/src/test/test-crs-static-rebalance-coloc2/log.expect
new file mode 100644
index 0000000..a7e5c8e
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc2/log.expect
@@ -0,0 +1,174 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: using scheduler mode 'static'
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:100' on node 'node1'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:102' on node 'node1'
+info     20    node1/crm: adding new service 'vm:103' on node 'node1'
+info     20    node1/crm: adding new service 'vm:200' on node 'node1'
+info     20    node1/crm: adding new service 'vm:201' on node 'node1'
+info     20    node1/crm: adding new service 'vm:202' on node 'node1'
+info     20    node1/crm: adding new service 'vm:203' on node 'node1'
+info     20    node1/crm: adding new service 'vm:300' on node 'node1'
+info     20    node1/crm: adding new service 'vm:301' on node 'node1'
+info     20    node1/crm: adding new service 'vm:302' on node 'node1'
+info     20    node1/crm: adding new service 'vm:303' on node 'node1'
+info     20    node1/crm: service vm:100: re-balance selected current node node1 for startup
+info     20    node1/crm: service 'vm:100': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service vm:101: re-balance selected new node node2 for startup
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node2)
+info     20    node1/crm: service vm:102: re-balance selected new node node3 for startup
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node3)
+info     20    node1/crm: service vm:103: re-balance selected new node node3 for startup
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node3)
+info     20    node1/crm: service vm:200: re-balance selected new node node2 for startup
+info     20    node1/crm: service 'vm:200': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node2)
+info     20    node1/crm: service vm:201: re-balance selected new node node3 for startup
+info     20    node1/crm: service 'vm:201': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node3)
+info     20    node1/crm: service vm:202: re-balance selected new node node3 for startup
+info     20    node1/crm: service 'vm:202': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node3)
+info     20    node1/crm: service vm:203: re-balance selected current node node1 for startup
+info     20    node1/crm: service 'vm:203': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service vm:300: re-balance selected new node node3 for startup
+info     20    node1/crm: service 'vm:300': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node3)
+info     20    node1/crm: service vm:301: re-balance selected current node node1 for startup
+info     20    node1/crm: service 'vm:301': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service vm:302: re-balance selected new node node2 for startup
+info     20    node1/crm: service 'vm:302': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node2)
+info     20    node1/crm: service vm:303: re-balance selected current node node1 for startup
+info     20    node1/crm: service 'vm:303': state changed from 'request_start' to 'started'  (node = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:100
+info     21    node1/lrm: service status vm:100 started
+info     21    node1/lrm: service vm:101 - start relocate to node 'node2'
+info     21    node1/lrm: service vm:101 - end relocate to node 'node2'
+info     21    node1/lrm: service vm:102 - start relocate to node 'node3'
+info     21    node1/lrm: service vm:102 - end relocate to node 'node3'
+info     21    node1/lrm: service vm:103 - start relocate to node 'node3'
+info     21    node1/lrm: service vm:103 - end relocate to node 'node3'
+info     21    node1/lrm: service vm:200 - start relocate to node 'node2'
+info     21    node1/lrm: service vm:200 - end relocate to node 'node2'
+info     21    node1/lrm: service vm:201 - start relocate to node 'node3'
+info     21    node1/lrm: service vm:201 - end relocate to node 'node3'
+info     21    node1/lrm: service vm:202 - start relocate to node 'node3'
+info     21    node1/lrm: service vm:202 - end relocate to node 'node3'
+info     21    node1/lrm: starting service vm:203
+info     21    node1/lrm: service status vm:203 started
+info     21    node1/lrm: service vm:300 - start relocate to node 'node3'
+info     21    node1/lrm: service vm:300 - end relocate to node 'node3'
+info     21    node1/lrm: starting service vm:301
+info     21    node1/lrm: service status vm:301 started
+info     21    node1/lrm: service vm:302 - start relocate to node 'node2'
+info     21    node1/lrm: service vm:302 - end relocate to node 'node2'
+info     21    node1/lrm: starting service vm:303
+info     21    node1/lrm: service status vm:303 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     40    node1/crm: service 'vm:101': state changed from 'request_start_balance' to 'started'  (node = node2)
+info     40    node1/crm: service 'vm:102': state changed from 'request_start_balance' to 'started'  (node = node3)
+info     40    node1/crm: service 'vm:103': state changed from 'request_start_balance' to 'started'  (node = node3)
+info     40    node1/crm: service 'vm:200': state changed from 'request_start_balance' to 'started'  (node = node2)
+info     40    node1/crm: service 'vm:201': state changed from 'request_start_balance' to 'started'  (node = node3)
+info     40    node1/crm: service 'vm:202': state changed from 'request_start_balance' to 'started'  (node = node3)
+info     40    node1/crm: service 'vm:300': state changed from 'request_start_balance' to 'started'  (node = node3)
+info     40    node1/crm: service 'vm:302': state changed from 'request_start_balance' to 'started'  (node = node2)
+info     43    node2/lrm: got lock 'ha_agent_node2_lock'
+info     43    node2/lrm: status change wait_for_agent_lock => active
+info     43    node2/lrm: starting service vm:101
+info     43    node2/lrm: service status vm:101 started
+info     43    node2/lrm: starting service vm:200
+info     43    node2/lrm: service status vm:200 started
+info     43    node2/lrm: starting service vm:302
+info     43    node2/lrm: service status vm:302 started
+info     45    node3/lrm: got lock 'ha_agent_node3_lock'
+info     45    node3/lrm: status change wait_for_agent_lock => active
+info     45    node3/lrm: starting service vm:102
+info     45    node3/lrm: service status vm:102 started
+info     45    node3/lrm: starting service vm:103
+info     45    node3/lrm: service status vm:103 started
+info     45    node3/lrm: starting service vm:201
+info     45    node3/lrm: service status vm:201 started
+info     45    node3/lrm: starting service vm:202
+info     45    node3/lrm: service status vm:202 started
+info     45    node3/lrm: starting service vm:300
+info     45    node3/lrm: service status vm:300 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:102': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:103': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:201': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:202': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:300': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:102': state changed from 'fence' to 'recovery'
+info    240    node1/crm: service 'vm:103': state changed from 'fence' to 'recovery'
+info    240    node1/crm: service 'vm:201': state changed from 'fence' to 'recovery'
+info    240    node1/crm: service 'vm:202': state changed from 'fence' to 'recovery'
+info    240    node1/crm: service 'vm:300': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:102' from fenced node 'node3' to node 'node1'
+info    240    node1/crm: service 'vm:102': state changed from 'recovery' to 'started'  (node = node1)
+info    240    node1/crm: recover service 'vm:103' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:103': state changed from 'recovery' to 'started'  (node = node2)
+info    240    node1/crm: recover service 'vm:201' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:201': state changed from 'recovery' to 'started'  (node = node2)
+info    240    node1/crm: recover service 'vm:202' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:202': state changed from 'recovery' to 'started'  (node = node2)
+err     240    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     240    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+info    241    node1/lrm: starting service vm:102
+info    241    node1/lrm: service status vm:102 started
+info    243    node2/lrm: starting service vm:103
+info    243    node2/lrm: service status vm:103 started
+info    243    node2/lrm: starting service vm:201
+info    243    node2/lrm: service status vm:201 started
+info    243    node2/lrm: starting service vm:202
+info    243    node2/lrm: service status vm:202 started
+err     260    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     280    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     300    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     320    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     340    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     360    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     380    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     400    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     420    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     440    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     460    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     480    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     500    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     520    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     540    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     560    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     580    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     600    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     620    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     640    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     660    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     680    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+err     700    node1/crm: recovering service 'vm:300' from fenced node 'node3' failed, no recovery node found
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-crs-static-rebalance-coloc2/manager_status b/src/test/test-crs-static-rebalance-coloc2/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc2/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-crs-static-rebalance-coloc2/rules_config b/src/test/test-crs-static-rebalance-coloc2/rules_config
new file mode 100644
index 0000000..ea1ec10
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc2/rules_config
@@ -0,0 +1,11 @@
+colocation: very-lonely-services1
+	services vm:100,vm:200
+	affinity separate
+
+colocation: very-lonely-services2
+	services vm:200,vm:300
+	affinity separate
+
+colocation: very-lonely-services3
+	services vm:100,vm:300
+	affinity separate
diff --git a/src/test/test-crs-static-rebalance-coloc2/service_config b/src/test/test-crs-static-rebalance-coloc2/service_config
new file mode 100644
index 0000000..0de367e
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc2/service_config
@@ -0,0 +1,14 @@
+{
+    "vm:100": { "node": "node1", "state": "started" },
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:102": { "node": "node1", "state": "started" },
+    "vm:103": { "node": "node1", "state": "started" },
+    "vm:200": { "node": "node1", "state": "started" },
+    "vm:201": { "node": "node1", "state": "started" },
+    "vm:202": { "node": "node1", "state": "started" },
+    "vm:203": { "node": "node1", "state": "started" },
+    "vm:300": { "node": "node1", "state": "started" },
+    "vm:301": { "node": "node1", "state": "started" },
+    "vm:302": { "node": "node1", "state": "started" },
+    "vm:303": { "node": "node1", "state": "started" }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc2/static_service_stats b/src/test/test-crs-static-rebalance-coloc2/static_service_stats
new file mode 100644
index 0000000..3c7502e
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc2/static_service_stats
@@ -0,0 +1,14 @@
+{
+    "vm:100": { "maxcpu": 8, "maxmem": 16000000000 },
+    "vm:101": { "maxcpu": 4, "maxmem": 8000000000 },
+    "vm:102": { "maxcpu": 2, "maxmem": 8000000000 },
+    "vm:103": { "maxcpu": 2, "maxmem": 4000000000 },
+    "vm:200": { "maxcpu": 4, "maxmem": 24000000000 },
+    "vm:201": { "maxcpu": 2, "maxmem": 8000000000 },
+    "vm:202": { "maxcpu": 4, "maxmem": 4000000000 },
+    "vm:203": { "maxcpu": 2, "maxmem": 8000000000 },
+    "vm:300": { "maxcpu": 6, "maxmem": 32000000000 },
+    "vm:301": { "maxcpu": 2, "maxmem": 4000000000 },
+    "vm:302": { "maxcpu": 2, "maxmem": 8000000000 },
+    "vm:303": { "maxcpu": 4, "maxmem": 8000000000 }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc3/README b/src/test/test-crs-static-rebalance-coloc3/README
new file mode 100644
index 0000000..4e3a1ae
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc3/README
@@ -0,0 +1,22 @@
+Test whether a more complex set of pairwise strict negative colocation rules,
+i.e. the negative colocation relations a<->b, b<->c, and a<->c, among 5
+services, in conjunction with the static load scheduler with auto-rebalancing,
+is applied correctly on service start and in case of consecutive failovers of
+all nodes, one after another.
+
+The test scenario is:
+- vm:100, vm:200, vm:300, vm:400, and vm:500 must be kept separate
+- The services' static usage stats are chosen so that during rebalancing vm:300
+  and vm:500 will need to select less-than-ideal nodes according to the static
+  usage scheduler (node2 and node3 would be their ideal nodes), to test whether
+  the colocation rule still applies correctly
+
+The expected outcome is:
+- vm:100, vm:200, vm:300, vm:400, and vm:500 should be started on node2, node1,
+  node4, node3, and node5 respectively
+- vm:400 and vm:500 are started on node3 and node5 instead of node2 and node3,
+  as they would have been without the colocation rules
+- As node1, node2, node3, node4, and node5 fail one after another (with each
+  node coming back online afterwards), vm:200, vm:100, vm:400, vm:300, and
+  vm:500 respectively will be put in recovery during the failover, as there is
+  no other node left to accommodate them without violating the colocation rule.
diff --git a/src/test/test-crs-static-rebalance-coloc3/cmdlist b/src/test/test-crs-static-rebalance-coloc3/cmdlist
new file mode 100644
index 0000000..6665419
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc3/cmdlist
@@ -0,0 +1,22 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on", "power node4 on", "power node5 on" ],
+    [ "power node1 off" ],
+    [ "delay 100" ],
+    [ "power node1 on" ],
+    [ "delay 100" ],
+    [ "power node2 off" ],
+    [ "delay 100" ],
+    [ "power node2 on" ],
+    [ "delay 100" ],
+    [ "power node3 off" ],
+    [ "delay 100" ],
+    [ "power node3 on" ],
+    [ "delay 100" ],
+    [ "power node4 off" ],
+    [ "delay 100" ],
+    [ "power node4 on" ],
+    [ "delay 100" ],
+    [ "power node5 off" ],
+    [ "delay 100" ],
+    [ "power node5 on" ]
+]
diff --git a/src/test/test-crs-static-rebalance-coloc3/datacenter.cfg b/src/test/test-crs-static-rebalance-coloc3/datacenter.cfg
new file mode 100644
index 0000000..f2671a5
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc3/datacenter.cfg
@@ -0,0 +1,6 @@
+{
+    "crs": {
+        "ha": "static",
+        "ha-rebalance-on-start": 1
+    }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc3/hardware_status b/src/test/test-crs-static-rebalance-coloc3/hardware_status
new file mode 100644
index 0000000..b6dcb1a
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc3/hardware_status
@@ -0,0 +1,7 @@
+{
+  "node1": { "power": "off", "network": "off", "cpus": 8, "memory": 48000000000 },
+  "node2": { "power": "off", "network": "off", "cpus": 32, "memory": 36000000000 },
+  "node3": { "power": "off", "network": "off", "cpus": 16, "memory": 24000000000 },
+  "node4": { "power": "off", "network": "off", "cpus": 32, "memory": 36000000000 },
+  "node5": { "power": "off", "network": "off", "cpus": 8, "memory": 48000000000 }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc3/log.expect b/src/test/test-crs-static-rebalance-coloc3/log.expect
new file mode 100644
index 0000000..4e87f03
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc3/log.expect
@@ -0,0 +1,272 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node4 on
+info     20    node4/crm: status change startup => wait_for_quorum
+info     20    node4/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node5 on
+info     20    node5/crm: status change startup => wait_for_quorum
+info     20    node5/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: using scheduler mode 'static'
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node4': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node5': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:100' on node 'node1'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:200' on node 'node1'
+info     20    node1/crm: adding new service 'vm:201' on node 'node1'
+info     20    node1/crm: adding new service 'vm:300' on node 'node1'
+info     20    node1/crm: adding new service 'vm:400' on node 'node1'
+info     20    node1/crm: adding new service 'vm:500' on node 'node1'
+info     20    node1/crm: service vm:100: re-balance selected new node node2 for startup
+info     20    node1/crm: service 'vm:100': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node2)
+info     20    node1/crm: service vm:101: re-balance selected new node node4 for startup
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node4)
+info     20    node1/crm: service vm:200: re-balance selected current node node1 for startup
+info     20    node1/crm: service 'vm:200': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service vm:201: re-balance selected new node node5 for startup
+info     20    node1/crm: service 'vm:201': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node5)
+info     20    node1/crm: service vm:300: re-balance selected new node node4 for startup
+info     20    node1/crm: service 'vm:300': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node4)
+info     20    node1/crm: service vm:400: re-balance selected new node node3 for startup
+info     20    node1/crm: service 'vm:400': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node3)
+info     20    node1/crm: service vm:500: re-balance selected new node node5 for startup
+info     20    node1/crm: service 'vm:500': state changed from 'request_start' to 'request_start_balance'  (node = node1, target = node5)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: service vm:100 - start relocate to node 'node2'
+info     21    node1/lrm: service vm:100 - end relocate to node 'node2'
+info     21    node1/lrm: service vm:101 - start relocate to node 'node4'
+info     21    node1/lrm: service vm:101 - end relocate to node 'node4'
+info     21    node1/lrm: starting service vm:200
+info     21    node1/lrm: service status vm:200 started
+info     21    node1/lrm: service vm:201 - start relocate to node 'node5'
+info     21    node1/lrm: service vm:201 - end relocate to node 'node5'
+info     21    node1/lrm: service vm:300 - start relocate to node 'node4'
+info     21    node1/lrm: service vm:300 - end relocate to node 'node4'
+info     21    node1/lrm: service vm:400 - start relocate to node 'node3'
+info     21    node1/lrm: service vm:400 - end relocate to node 'node3'
+info     21    node1/lrm: service vm:500 - start relocate to node 'node5'
+info     21    node1/lrm: service vm:500 - end relocate to node 'node5'
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     26    node4/crm: status change wait_for_quorum => slave
+info     28    node5/crm: status change wait_for_quorum => slave
+info     40    node1/crm: service 'vm:100': state changed from 'request_start_balance' to 'started'  (node = node2)
+info     40    node1/crm: service 'vm:101': state changed from 'request_start_balance' to 'started'  (node = node4)
+info     40    node1/crm: service 'vm:201': state changed from 'request_start_balance' to 'started'  (node = node5)
+info     40    node1/crm: service 'vm:300': state changed from 'request_start_balance' to 'started'  (node = node4)
+info     40    node1/crm: service 'vm:400': state changed from 'request_start_balance' to 'started'  (node = node3)
+info     40    node1/crm: service 'vm:500': state changed from 'request_start_balance' to 'started'  (node = node5)
+info     43    node2/lrm: got lock 'ha_agent_node2_lock'
+info     43    node2/lrm: status change wait_for_agent_lock => active
+info     43    node2/lrm: starting service vm:100
+info     43    node2/lrm: service status vm:100 started
+info     45    node3/lrm: got lock 'ha_agent_node3_lock'
+info     45    node3/lrm: status change wait_for_agent_lock => active
+info     45    node3/lrm: starting service vm:400
+info     45    node3/lrm: service status vm:400 started
+info     47    node4/lrm: got lock 'ha_agent_node4_lock'
+info     47    node4/lrm: status change wait_for_agent_lock => active
+info     47    node4/lrm: starting service vm:101
+info     47    node4/lrm: service status vm:101 started
+info     47    node4/lrm: starting service vm:300
+info     47    node4/lrm: service status vm:300 started
+info     49    node5/lrm: got lock 'ha_agent_node5_lock'
+info     49    node5/lrm: status change wait_for_agent_lock => active
+info     49    node5/lrm: starting service vm:201
+info     49    node5/lrm: service status vm:201 started
+info     49    node5/lrm: starting service vm:500
+info     49    node5/lrm: service status vm:500 started
+info    120      cmdlist: execute power node1 off
+info    120    node1/crm: killed by poweroff
+info    120    node1/lrm: killed by poweroff
+info    220      cmdlist: execute delay 100
+info    222    node3/crm: got lock 'ha_manager_lock'
+info    222    node3/crm: status change slave => master
+info    222    node3/crm: using scheduler mode 'static'
+info    222    node3/crm: node 'node1': state changed from 'online' => 'unknown'
+info    282    node3/crm: service 'vm:200': state changed from 'started' to 'fence'
+info    282    node3/crm: node 'node1': state changed from 'unknown' => 'fence'
+emai    282    node3/crm: FENCE: Try to fence node 'node1'
+info    282    node3/crm: got lock 'ha_agent_node1_lock'
+info    282    node3/crm: fencing: acknowledged - got agent lock for node 'node1'
+info    282    node3/crm: node 'node1': state changed from 'fence' => 'unknown'
+emai    282    node3/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node1'
+info    282    node3/crm: service 'vm:200': state changed from 'fence' to 'recovery'
+err     282    node3/crm: recovering service 'vm:200' from fenced node 'node1' failed, no recovery node found
+err     302    node3/crm: recovering service 'vm:200' from fenced node 'node1' failed, no recovery node found
+err     322    node3/crm: recovering service 'vm:200' from fenced node 'node1' failed, no recovery node found
+err     342    node3/crm: recovering service 'vm:200' from fenced node 'node1' failed, no recovery node found
+err     362    node3/crm: recovering service 'vm:200' from fenced node 'node1' failed, no recovery node found
+err     382    node3/crm: recovering service 'vm:200' from fenced node 'node1' failed, no recovery node found
+info    400      cmdlist: execute power node1 on
+info    400    node1/crm: status change startup => wait_for_quorum
+info    400    node1/lrm: status change startup => wait_for_agent_lock
+info    400    node1/crm: status change wait_for_quorum => slave
+info    404    node3/crm: node 'node1': state changed from 'unknown' => 'online'
+info    404    node3/crm: recover service 'vm:200' to previous failed and fenced node 'node1' again
+info    404    node3/crm: service 'vm:200': state changed from 'recovery' to 'started'  (node = node1)
+info    421    node1/lrm: got lock 'ha_agent_node1_lock'
+info    421    node1/lrm: status change wait_for_agent_lock => active
+info    421    node1/lrm: starting service vm:200
+info    421    node1/lrm: service status vm:200 started
+info    500      cmdlist: execute delay 100
+info    680      cmdlist: execute power node2 off
+info    680    node2/crm: killed by poweroff
+info    680    node2/lrm: killed by poweroff
+info    682    node3/crm: node 'node2': state changed from 'online' => 'unknown'
+info    742    node3/crm: service 'vm:100': state changed from 'started' to 'fence'
+info    742    node3/crm: node 'node2': state changed from 'unknown' => 'fence'
+emai    742    node3/crm: FENCE: Try to fence node 'node2'
+info    780      cmdlist: execute delay 100
+info    802    node3/crm: got lock 'ha_agent_node2_lock'
+info    802    node3/crm: fencing: acknowledged - got agent lock for node 'node2'
+info    802    node3/crm: node 'node2': state changed from 'fence' => 'unknown'
+emai    802    node3/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node2'
+info    802    node3/crm: service 'vm:100': state changed from 'fence' to 'recovery'
+err     802    node3/crm: recovering service 'vm:100' from fenced node 'node2' failed, no recovery node found
+err     822    node3/crm: recovering service 'vm:100' from fenced node 'node2' failed, no recovery node found
+err     842    node3/crm: recovering service 'vm:100' from fenced node 'node2' failed, no recovery node found
+err     862    node3/crm: recovering service 'vm:100' from fenced node 'node2' failed, no recovery node found
+err     882    node3/crm: recovering service 'vm:100' from fenced node 'node2' failed, no recovery node found
+err     902    node3/crm: recovering service 'vm:100' from fenced node 'node2' failed, no recovery node found
+err     922    node3/crm: recovering service 'vm:100' from fenced node 'node2' failed, no recovery node found
+err     942    node3/crm: recovering service 'vm:100' from fenced node 'node2' failed, no recovery node found
+info    960      cmdlist: execute power node2 on
+info    960    node2/crm: status change startup => wait_for_quorum
+info    960    node2/lrm: status change startup => wait_for_agent_lock
+info    962    node2/crm: status change wait_for_quorum => slave
+info    963    node2/lrm: got lock 'ha_agent_node2_lock'
+info    963    node2/lrm: status change wait_for_agent_lock => active
+info    964    node3/crm: node 'node2': state changed from 'unknown' => 'online'
+info    964    node3/crm: recover service 'vm:100' to previous failed and fenced node 'node2' again
+info    964    node3/crm: service 'vm:100': state changed from 'recovery' to 'started'  (node = node2)
+info    983    node2/lrm: starting service vm:100
+info    983    node2/lrm: service status vm:100 started
+info   1060      cmdlist: execute delay 100
+info   1240      cmdlist: execute power node3 off
+info   1240    node3/crm: killed by poweroff
+info   1240    node3/lrm: killed by poweroff
+info   1340      cmdlist: execute delay 100
+info   1346    node5/crm: got lock 'ha_manager_lock'
+info   1346    node5/crm: status change slave => master
+info   1346    node5/crm: using scheduler mode 'static'
+info   1346    node5/crm: node 'node3': state changed from 'online' => 'unknown'
+info   1406    node5/crm: service 'vm:400': state changed from 'started' to 'fence'
+info   1406    node5/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai   1406    node5/crm: FENCE: Try to fence node 'node3'
+info   1406    node5/crm: got lock 'ha_agent_node3_lock'
+info   1406    node5/crm: fencing: acknowledged - got agent lock for node 'node3'
+info   1406    node5/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai   1406    node5/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info   1406    node5/crm: service 'vm:400': state changed from 'fence' to 'recovery'
+err    1406    node5/crm: recovering service 'vm:400' from fenced node 'node3' failed, no recovery node found
+err    1426    node5/crm: recovering service 'vm:400' from fenced node 'node3' failed, no recovery node found
+err    1446    node5/crm: recovering service 'vm:400' from fenced node 'node3' failed, no recovery node found
+err    1466    node5/crm: recovering service 'vm:400' from fenced node 'node3' failed, no recovery node found
+err    1486    node5/crm: recovering service 'vm:400' from fenced node 'node3' failed, no recovery node found
+err    1506    node5/crm: recovering service 'vm:400' from fenced node 'node3' failed, no recovery node found
+info   1520      cmdlist: execute power node3 on
+info   1520    node3/crm: status change startup => wait_for_quorum
+info   1520    node3/lrm: status change startup => wait_for_agent_lock
+info   1524    node3/crm: status change wait_for_quorum => slave
+info   1528    node5/crm: node 'node3': state changed from 'unknown' => 'online'
+info   1528    node5/crm: recover service 'vm:400' to previous failed and fenced node 'node3' again
+info   1528    node5/crm: service 'vm:400': state changed from 'recovery' to 'started'  (node = node3)
+info   1545    node3/lrm: got lock 'ha_agent_node3_lock'
+info   1545    node3/lrm: status change wait_for_agent_lock => active
+info   1545    node3/lrm: starting service vm:400
+info   1545    node3/lrm: service status vm:400 started
+info   1620      cmdlist: execute delay 100
+info   1800      cmdlist: execute power node4 off
+info   1800    node4/crm: killed by poweroff
+info   1800    node4/lrm: killed by poweroff
+info   1806    node5/crm: node 'node4': state changed from 'online' => 'unknown'
+info   1866    node5/crm: service 'vm:101': state changed from 'started' to 'fence'
+info   1866    node5/crm: service 'vm:300': state changed from 'started' to 'fence'
+info   1866    node5/crm: node 'node4': state changed from 'unknown' => 'fence'
+emai   1866    node5/crm: FENCE: Try to fence node 'node4'
+info   1900      cmdlist: execute delay 100
+info   1926    node5/crm: got lock 'ha_agent_node4_lock'
+info   1926    node5/crm: fencing: acknowledged - got agent lock for node 'node4'
+info   1926    node5/crm: node 'node4': state changed from 'fence' => 'unknown'
+emai   1926    node5/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node4'
+info   1926    node5/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info   1926    node5/crm: service 'vm:300': state changed from 'fence' to 'recovery'
+info   1926    node5/crm: recover service 'vm:101' from fenced node 'node4' to node 'node2'
+info   1926    node5/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node2)
+err    1926    node5/crm: recovering service 'vm:300' from fenced node 'node4' failed, no recovery node found
+err    1926    node5/crm: recovering service 'vm:300' from fenced node 'node4' failed, no recovery node found
+info   1943    node2/lrm: starting service vm:101
+info   1943    node2/lrm: service status vm:101 started
+err    1946    node5/crm: recovering service 'vm:300' from fenced node 'node4' failed, no recovery node found
+err    1966    node5/crm: recovering service 'vm:300' from fenced node 'node4' failed, no recovery node found
+err    1986    node5/crm: recovering service 'vm:300' from fenced node 'node4' failed, no recovery node found
+err    2006    node5/crm: recovering service 'vm:300' from fenced node 'node4' failed, no recovery node found
+err    2026    node5/crm: recovering service 'vm:300' from fenced node 'node4' failed, no recovery node found
+err    2046    node5/crm: recovering service 'vm:300' from fenced node 'node4' failed, no recovery node found
+err    2066    node5/crm: recovering service 'vm:300' from fenced node 'node4' failed, no recovery node found
+info   2080      cmdlist: execute power node4 on
+info   2080    node4/crm: status change startup => wait_for_quorum
+info   2080    node4/lrm: status change startup => wait_for_agent_lock
+info   2086    node4/crm: status change wait_for_quorum => slave
+info   2087    node4/lrm: got lock 'ha_agent_node4_lock'
+info   2087    node4/lrm: status change wait_for_agent_lock => active
+info   2088    node5/crm: node 'node4': state changed from 'unknown' => 'online'
+info   2088    node5/crm: recover service 'vm:300' to previous failed and fenced node 'node4' again
+info   2088    node5/crm: service 'vm:300': state changed from 'recovery' to 'started'  (node = node4)
+info   2107    node4/lrm: starting service vm:300
+info   2107    node4/lrm: service status vm:300 started
+info   2180      cmdlist: execute delay 100
+info   2360      cmdlist: execute power node5 off
+info   2360    node5/crm: killed by poweroff
+info   2360    node5/lrm: killed by poweroff
+info   2460      cmdlist: execute delay 100
+info   2480    node1/crm: got lock 'ha_manager_lock'
+info   2480    node1/crm: status change slave => master
+info   2480    node1/crm: using scheduler mode 'static'
+info   2480    node1/crm: node 'node5': state changed from 'online' => 'unknown'
+info   2540    node1/crm: service 'vm:201': state changed from 'started' to 'fence'
+info   2540    node1/crm: service 'vm:500': state changed from 'started' to 'fence'
+info   2540    node1/crm: node 'node5': state changed from 'unknown' => 'fence'
+emai   2540    node1/crm: FENCE: Try to fence node 'node5'
+info   2540    node1/crm: got lock 'ha_agent_node5_lock'
+info   2540    node1/crm: fencing: acknowledged - got agent lock for node 'node5'
+info   2540    node1/crm: node 'node5': state changed from 'fence' => 'unknown'
+emai   2540    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node5'
+info   2540    node1/crm: service 'vm:201': state changed from 'fence' to 'recovery'
+info   2540    node1/crm: service 'vm:500': state changed from 'fence' to 'recovery'
+info   2540    node1/crm: recover service 'vm:201' from fenced node 'node5' to node 'node2'
+info   2540    node1/crm: service 'vm:201': state changed from 'recovery' to 'started'  (node = node2)
+err    2540    node1/crm: recovering service 'vm:500' from fenced node 'node5' failed, no recovery node found
+err    2540    node1/crm: recovering service 'vm:500' from fenced node 'node5' failed, no recovery node found
+info   2543    node2/lrm: starting service vm:201
+info   2543    node2/lrm: service status vm:201 started
+err    2560    node1/crm: recovering service 'vm:500' from fenced node 'node5' failed, no recovery node found
+err    2580    node1/crm: recovering service 'vm:500' from fenced node 'node5' failed, no recovery node found
+err    2600    node1/crm: recovering service 'vm:500' from fenced node 'node5' failed, no recovery node found
+err    2620    node1/crm: recovering service 'vm:500' from fenced node 'node5' failed, no recovery node found
+info   2640      cmdlist: execute power node5 on
+info   2640    node5/crm: status change startup => wait_for_quorum
+info   2640    node5/lrm: status change startup => wait_for_agent_lock
+info   2640    node1/crm: node 'node5': state changed from 'unknown' => 'online'
+info   2640    node1/crm: recover service 'vm:500' to previous failed and fenced node 'node5' again
+info   2640    node1/crm: service 'vm:500': state changed from 'recovery' to 'started'  (node = node5)
+info   2648    node5/crm: status change wait_for_quorum => slave
+info   2669    node5/lrm: got lock 'ha_agent_node5_lock'
+info   2669    node5/lrm: status change wait_for_agent_lock => active
+info   2669    node5/lrm: starting service vm:500
+info   2669    node5/lrm: service status vm:500 started
+info   3240     hardware: exit simulation - done
diff --git a/src/test/test-crs-static-rebalance-coloc3/manager_status b/src/test/test-crs-static-rebalance-coloc3/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc3/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-crs-static-rebalance-coloc3/rules_config b/src/test/test-crs-static-rebalance-coloc3/rules_config
new file mode 100644
index 0000000..f2646fc
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc3/rules_config
@@ -0,0 +1,3 @@
+colocation: keep-them-apart
+	services vm:100,vm:200,vm:300,vm:400,vm:500
+	affinity separate
diff --git a/src/test/test-crs-static-rebalance-coloc3/service_config b/src/test/test-crs-static-rebalance-coloc3/service_config
new file mode 100644
index 0000000..86dc27d
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc3/service_config
@@ -0,0 +1,9 @@
+{
+    "vm:100": { "node": "node1", "state": "started" },
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:200": { "node": "node1", "state": "started" },
+    "vm:201": { "node": "node1", "state": "started" },
+    "vm:300": { "node": "node1", "state": "started" },
+    "vm:400": { "node": "node1", "state": "started" },
+    "vm:500": { "node": "node1", "state": "started" }
+}
diff --git a/src/test/test-crs-static-rebalance-coloc3/static_service_stats b/src/test/test-crs-static-rebalance-coloc3/static_service_stats
new file mode 100644
index 0000000..755282b
--- /dev/null
+++ b/src/test/test-crs-static-rebalance-coloc3/static_service_stats
@@ -0,0 +1,9 @@
+{
+    "vm:100": { "maxcpu": 16, "maxmem": 16000000000 },
+    "vm:101": { "maxcpu": 4, "maxmem": 8000000000 },
+    "vm:200": { "maxcpu": 2, "maxmem": 48000000000 },
+    "vm:201": { "maxcpu": 4, "maxmem": 8000000000 },
+    "vm:300": { "maxcpu": 8, "maxmem": 32000000000 },
+    "vm:400": { "maxcpu": 32, "maxmem": 32000000000 },
+    "vm:500": { "maxcpu": 16, "maxmem": 8000000000 }
+}
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

