From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH ha-manager v2 17/26] test: ha tester: add test cases for strict negative colocation rules
Date: Fri, 20 Jun 2025 16:31:29 +0200
Message-ID: <20250620143148.218469-22-d.kral@proxmox.com>
In-Reply-To: <20250620143148.218469-1-d.kral@proxmox.com>

Add test cases for strict negative colocation rules, i.e. rules where
services must be kept on separate nodes. These verify the behavior of
services in strict negative colocation rules when the node of one or
more of these services fails, in the following scenarios:

1. 2 neg. colocated services in a 3 node cluster; 1 node failing
2. 3 neg. colocated services in a 5 node cluster; 1 node failing
3. 3 neg. colocated services in a 5 node cluster; 2 nodes failing
4. 2 neg. colocated services in a 3 node cluster; 1 node failing, but
   the recovery node cannot start the service
5. Pair of 2 neg. colocated services (with one common service in both)
   in a 3 node cluster; 1 node failing
6. 2 neg. colocated services in a 3 node cluster; 1 node failing, but
   both services cannot start on the recovery node
7. 2 neg. colocated services in a 3 node cluster; 1 service is manually
   migrated to another free node; the other neg. colocated service cannot
   be migrated to the migrated service's source node while the migration
   is still in progress
8. 3 neg. colocated services in a 3 node cluster; manually migrating 1
   service to another neg. colocated service's node fails
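
For reference, each test case pins its services with a colocation rule of
the following form (copied verbatim from the rules_config of
test-colocation-strict-separate1 below; the rule ID varies per test):

    colocation: lonely-must-vms-be
        services vm:101,vm:102
        affinity separate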

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes since v1:
    - added test cases 6, 7, and 8
    - corrected README in test case #2
    - removed strict from rules_config, but kept it in the test case
      names and READMEs for when loose colocation rules are added later
    - other slight corrections or adaptations in existing READMEs

 .../test-colocation-strict-separate1/README   |  13 +++
 .../test-colocation-strict-separate1/cmdlist  |   4 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  60 ++++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   6 +
 .../test-colocation-strict-separate2/README   |  15 +++
 .../test-colocation-strict-separate2/cmdlist  |   4 +
 .../hardware_status                           |   7 ++
 .../log.expect                                |  90 ++++++++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |  10 ++
 .../test-colocation-strict-separate3/README   |  16 +++
 .../test-colocation-strict-separate3/cmdlist  |   4 +
 .../hardware_status                           |   7 ++
 .../log.expect                                | 110 ++++++++++++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |  10 ++
 .../test-colocation-strict-separate4/README   |  18 +++
 .../test-colocation-strict-separate4/cmdlist  |   4 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  69 +++++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   6 +
 .../test-colocation-strict-separate5/README   |  11 ++
 .../test-colocation-strict-separate5/cmdlist  |   4 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  56 +++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   7 ++
 .../service_config                            |   5 +
 .../test-colocation-strict-separate6/README   |  18 +++
 .../test-colocation-strict-separate6/cmdlist  |   4 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  69 +++++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   6 +
 .../test-colocation-strict-separate7/README   |  15 +++
 .../test-colocation-strict-separate7/cmdlist  |   5 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  52 +++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   4 +
 .../test-colocation-strict-separate8/README   |  11 ++
 .../test-colocation-strict-separate8/cmdlist  |   4 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  38 ++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   5 +
 56 files changed, 826 insertions(+)
 create mode 100644 src/test/test-colocation-strict-separate1/README
 create mode 100644 src/test/test-colocation-strict-separate1/cmdlist
 create mode 100644 src/test/test-colocation-strict-separate1/hardware_status
 create mode 100644 src/test/test-colocation-strict-separate1/log.expect
 create mode 100644 src/test/test-colocation-strict-separate1/manager_status
 create mode 100644 src/test/test-colocation-strict-separate1/rules_config
 create mode 100644 src/test/test-colocation-strict-separate1/service_config
 create mode 100644 src/test/test-colocation-strict-separate2/README
 create mode 100644 src/test/test-colocation-strict-separate2/cmdlist
 create mode 100644 src/test/test-colocation-strict-separate2/hardware_status
 create mode 100644 src/test/test-colocation-strict-separate2/log.expect
 create mode 100644 src/test/test-colocation-strict-separate2/manager_status
 create mode 100644 src/test/test-colocation-strict-separate2/rules_config
 create mode 100644 src/test/test-colocation-strict-separate2/service_config
 create mode 100644 src/test/test-colocation-strict-separate3/README
 create mode 100644 src/test/test-colocation-strict-separate3/cmdlist
 create mode 100644 src/test/test-colocation-strict-separate3/hardware_status
 create mode 100644 src/test/test-colocation-strict-separate3/log.expect
 create mode 100644 src/test/test-colocation-strict-separate3/manager_status
 create mode 100644 src/test/test-colocation-strict-separate3/rules_config
 create mode 100644 src/test/test-colocation-strict-separate3/service_config
 create mode 100644 src/test/test-colocation-strict-separate4/README
 create mode 100644 src/test/test-colocation-strict-separate4/cmdlist
 create mode 100644 src/test/test-colocation-strict-separate4/hardware_status
 create mode 100644 src/test/test-colocation-strict-separate4/log.expect
 create mode 100644 src/test/test-colocation-strict-separate4/manager_status
 create mode 100644 src/test/test-colocation-strict-separate4/rules_config
 create mode 100644 src/test/test-colocation-strict-separate4/service_config
 create mode 100644 src/test/test-colocation-strict-separate5/README
 create mode 100644 src/test/test-colocation-strict-separate5/cmdlist
 create mode 100644 src/test/test-colocation-strict-separate5/hardware_status
 create mode 100644 src/test/test-colocation-strict-separate5/log.expect
 create mode 100644 src/test/test-colocation-strict-separate5/manager_status
 create mode 100644 src/test/test-colocation-strict-separate5/rules_config
 create mode 100644 src/test/test-colocation-strict-separate5/service_config
 create mode 100644 src/test/test-colocation-strict-separate6/README
 create mode 100644 src/test/test-colocation-strict-separate6/cmdlist
 create mode 100644 src/test/test-colocation-strict-separate6/hardware_status
 create mode 100644 src/test/test-colocation-strict-separate6/log.expect
 create mode 100644 src/test/test-colocation-strict-separate6/manager_status
 create mode 100644 src/test/test-colocation-strict-separate6/rules_config
 create mode 100644 src/test/test-colocation-strict-separate6/service_config
 create mode 100644 src/test/test-colocation-strict-separate7/README
 create mode 100644 src/test/test-colocation-strict-separate7/cmdlist
 create mode 100644 src/test/test-colocation-strict-separate7/hardware_status
 create mode 100644 src/test/test-colocation-strict-separate7/log.expect
 create mode 100644 src/test/test-colocation-strict-separate7/manager_status
 create mode 100644 src/test/test-colocation-strict-separate7/rules_config
 create mode 100644 src/test/test-colocation-strict-separate7/service_config
 create mode 100644 src/test/test-colocation-strict-separate8/README
 create mode 100644 src/test/test-colocation-strict-separate8/cmdlist
 create mode 100644 src/test/test-colocation-strict-separate8/hardware_status
 create mode 100644 src/test/test-colocation-strict-separate8/log.expect
 create mode 100644 src/test/test-colocation-strict-separate8/manager_status
 create mode 100644 src/test/test-colocation-strict-separate8/rules_config
 create mode 100644 src/test/test-colocation-strict-separate8/service_config

diff --git a/src/test/test-colocation-strict-separate1/README b/src/test/test-colocation-strict-separate1/README
new file mode 100644
index 0000000..ae6c12f
--- /dev/null
+++ b/src/test/test-colocation-strict-separate1/README
@@ -0,0 +1,13 @@
+Test whether a strict negative colocation rule among two services makes one of
+the services migrate to a different recovery node than the other service in
+case of a failover of its previously assigned node.
+
+The test scenario is:
+- vm:101 and vm:102 must be kept separate
+- vm:101 and vm:102 are currently running on node2 and node3 respectively
+- node1 has a higher service count than node2 to test the colocation rule is
+  applied even though the scheduler would prefer the less utilized node
+
+The expected outcome is:
+- As node3 fails, vm:102 is migrated to node1; even though the utilization of
+  node1 is high already, the services must be kept separate
diff --git a/src/test/test-colocation-strict-separate1/cmdlist b/src/test/test-colocation-strict-separate1/cmdlist
new file mode 100644
index 0000000..c0a4daa
--- /dev/null
+++ b/src/test/test-colocation-strict-separate1/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-colocation-strict-separate1/hardware_status b/src/test/test-colocation-strict-separate1/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-colocation-strict-separate1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-colocation-strict-separate1/log.expect b/src/test/test-colocation-strict-separate1/log.expect
new file mode 100644
index 0000000..475db39
--- /dev/null
+++ b/src/test/test-colocation-strict-separate1/log.expect
@@ -0,0 +1,60 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node2'
+info     20    node1/crm: adding new service 'vm:102' on node 'node3'
+info     20    node1/crm: adding new service 'vm:103' on node 'node1'
+info     20    node1/crm: adding new service 'vm:104' on node 'node1'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:104': state changed from 'request_start' to 'started'  (node = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:103
+info     21    node1/lrm: service status vm:103 started
+info     21    node1/lrm: starting service vm:104
+info     21    node1/lrm: service status vm:104 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:101
+info     23    node2/lrm: service status vm:101 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:102
+info     25    node3/lrm: service status vm:102 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:102': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:102': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:102' from fenced node 'node3' to node 'node1'
+info    240    node1/crm: service 'vm:102': state changed from 'recovery' to 'started'  (node = node1)
+info    241    node1/lrm: starting service vm:102
+info    241    node1/lrm: service status vm:102 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-colocation-strict-separate1/manager_status b/src/test/test-colocation-strict-separate1/manager_status
new file mode 100644
index 0000000..0967ef4
--- /dev/null
+++ b/src/test/test-colocation-strict-separate1/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-colocation-strict-separate1/rules_config b/src/test/test-colocation-strict-separate1/rules_config
new file mode 100644
index 0000000..87d309e
--- /dev/null
+++ b/src/test/test-colocation-strict-separate1/rules_config
@@ -0,0 +1,3 @@
+colocation: lonely-must-vms-be
+	services vm:101,vm:102
+	affinity separate
diff --git a/src/test/test-colocation-strict-separate1/service_config b/src/test/test-colocation-strict-separate1/service_config
new file mode 100644
index 0000000..6582e8c
--- /dev/null
+++ b/src/test/test-colocation-strict-separate1/service_config
@@ -0,0 +1,6 @@
+{
+    "vm:101": { "node": "node2", "state": "started" },
+    "vm:102": { "node": "node3", "state": "started" },
+    "vm:103": { "node": "node1", "state": "started" },
+    "vm:104": { "node": "node1", "state": "started" }
+}
diff --git a/src/test/test-colocation-strict-separate2/README b/src/test/test-colocation-strict-separate2/README
new file mode 100644
index 0000000..37245a5
--- /dev/null
+++ b/src/test/test-colocation-strict-separate2/README
@@ -0,0 +1,15 @@
+Test whether a strict negative colocation rule among three services makes one
+of the services migrate to a different node than the other services in case of
+a failover of the service's previously assigned node.
+
+The test scenario is:
+- vm:101, vm:102, and vm:103 must be kept separate
+- vm:101, vm:102, and vm:103 are on node3, node4, and node5 respectively
+- node1 and node2 each have a higher service count than node3, node4, and
+  node5 to test that the rule is applied even though the scheduler would
+  prefer the less utilized nodes node3 and node4
+
+The expected outcome is:
+- As node5 fails, vm:103 is migrated to node2; even though the utilization of
+  node2 is high already, the services must be kept separate; node2 is chosen
+  since node1 has one more service running on it
diff --git a/src/test/test-colocation-strict-separate2/cmdlist b/src/test/test-colocation-strict-separate2/cmdlist
new file mode 100644
index 0000000..89d09c9
--- /dev/null
+++ b/src/test/test-colocation-strict-separate2/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on", "power node4 on", "power node5 on" ],
+    [ "network node5 off" ]
+]
diff --git a/src/test/test-colocation-strict-separate2/hardware_status b/src/test/test-colocation-strict-separate2/hardware_status
new file mode 100644
index 0000000..7b8e961
--- /dev/null
+++ b/src/test/test-colocation-strict-separate2/hardware_status
@@ -0,0 +1,7 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" },
+  "node4": { "power": "off", "network": "off" },
+  "node5": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-colocation-strict-separate2/log.expect b/src/test/test-colocation-strict-separate2/log.expect
new file mode 100644
index 0000000..858d3c9
--- /dev/null
+++ b/src/test/test-colocation-strict-separate2/log.expect
@@ -0,0 +1,90 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node4 on
+info     20    node4/crm: status change startup => wait_for_quorum
+info     20    node4/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node5 on
+info     20    node5/crm: status change startup => wait_for_quorum
+info     20    node5/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node4': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node5': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node4'
+info     20    node1/crm: adding new service 'vm:103' on node 'node5'
+info     20    node1/crm: adding new service 'vm:104' on node 'node1'
+info     20    node1/crm: adding new service 'vm:105' on node 'node1'
+info     20    node1/crm: adding new service 'vm:106' on node 'node1'
+info     20    node1/crm: adding new service 'vm:107' on node 'node2'
+info     20    node1/crm: adding new service 'vm:108' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node4)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node5)
+info     20    node1/crm: service 'vm:104': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:105': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:106': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:107': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:108': state changed from 'request_start' to 'started'  (node = node2)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:104
+info     21    node1/lrm: service status vm:104 started
+info     21    node1/lrm: starting service vm:105
+info     21    node1/lrm: service status vm:105 started
+info     21    node1/lrm: starting service vm:106
+info     21    node1/lrm: service status vm:106 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:107
+info     23    node2/lrm: service status vm:107 started
+info     23    node2/lrm: starting service vm:108
+info     23    node2/lrm: service status vm:108 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info     26    node4/crm: status change wait_for_quorum => slave
+info     27    node4/lrm: got lock 'ha_agent_node4_lock'
+info     27    node4/lrm: status change wait_for_agent_lock => active
+info     27    node4/lrm: starting service vm:102
+info     27    node4/lrm: service status vm:102 started
+info     28    node5/crm: status change wait_for_quorum => slave
+info     29    node5/lrm: got lock 'ha_agent_node5_lock'
+info     29    node5/lrm: status change wait_for_agent_lock => active
+info     29    node5/lrm: starting service vm:103
+info     29    node5/lrm: service status vm:103 started
+info    120      cmdlist: execute network node5 off
+info    120    node1/crm: node 'node5': state changed from 'online' => 'unknown'
+info    128    node5/crm: status change slave => wait_for_quorum
+info    129    node5/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:103': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node5': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node5'
+info    170     watchdog: execute power node5 off
+info    169    node5/crm: killed by poweroff
+info    170    node5/lrm: killed by poweroff
+info    170     hardware: server 'node5' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node5_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node5'
+info    240    node1/crm: node 'node5': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node5'
+info    240    node1/crm: service 'vm:103': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:103' from fenced node 'node5' to node 'node2'
+info    240    node1/crm: service 'vm:103': state changed from 'recovery' to 'started'  (node = node2)
+info    243    node2/lrm: starting service vm:103
+info    243    node2/lrm: service status vm:103 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-colocation-strict-separate2/manager_status b/src/test/test-colocation-strict-separate2/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-colocation-strict-separate2/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-colocation-strict-separate2/rules_config b/src/test/test-colocation-strict-separate2/rules_config
new file mode 100644
index 0000000..64c7bfb
--- /dev/null
+++ b/src/test/test-colocation-strict-separate2/rules_config
@@ -0,0 +1,3 @@
+colocation: lonely-must-vms-be
+	services vm:101,vm:102,vm:103
+	affinity separate
diff --git a/src/test/test-colocation-strict-separate2/service_config b/src/test/test-colocation-strict-separate2/service_config
new file mode 100644
index 0000000..2c27816
--- /dev/null
+++ b/src/test/test-colocation-strict-separate2/service_config
@@ -0,0 +1,10 @@
+{
+    "vm:101": { "node": "node3", "state": "started" },
+    "vm:102": { "node": "node4", "state": "started" },
+    "vm:103": { "node": "node5", "state": "started" },
+    "vm:104": { "node": "node1", "state": "started" },
+    "vm:105": { "node": "node1", "state": "started" },
+    "vm:106": { "node": "node1", "state": "started" },
+    "vm:107": { "node": "node2", "state": "started" },
+    "vm:108": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-colocation-strict-separate3/README b/src/test/test-colocation-strict-separate3/README
new file mode 100644
index 0000000..0397fdf
--- /dev/null
+++ b/src/test/test-colocation-strict-separate3/README
@@ -0,0 +1,16 @@
+Test whether a strict negative colocation rule among three services makes two
+of the services migrate to two different recovery nodes, distinct from the
+node of the third service, in case of a failover of their two assigned nodes.
+
+The test scenario is:
+- vm:101, vm:102, and vm:103 must be kept separate
+- vm:101, vm:102, and vm:103 are respectively on node3, node4, and node5
+- node1 and node2 both have higher service counts than node3, node4, and node5
+  to test that the colocation rule is enforced even though the scheduler would
+  prefer the less utilized node3
+
+The expected outcome is:
+- As node4 and node5 fail, vm:102 and vm:103 are migrated to node2 and node1
+  respectively; even though the utilization of node1 and node2 are high
+  already, the services must be kept separate; node2 is chosen first since
+  node1 has one more service running on it
diff --git a/src/test/test-colocation-strict-separate3/cmdlist b/src/test/test-colocation-strict-separate3/cmdlist
new file mode 100644
index 0000000..1934596
--- /dev/null
+++ b/src/test/test-colocation-strict-separate3/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on", "power node4 on", "power node5 on" ],
+    [ "network node4 off", "network node5 off" ]
+]
diff --git a/src/test/test-colocation-strict-separate3/hardware_status b/src/test/test-colocation-strict-separate3/hardware_status
new file mode 100644
index 0000000..7b8e961
--- /dev/null
+++ b/src/test/test-colocation-strict-separate3/hardware_status
@@ -0,0 +1,7 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" },
+  "node4": { "power": "off", "network": "off" },
+  "node5": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-colocation-strict-separate3/log.expect b/src/test/test-colocation-strict-separate3/log.expect
new file mode 100644
index 0000000..4acdcec
--- /dev/null
+++ b/src/test/test-colocation-strict-separate3/log.expect
@@ -0,0 +1,110 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node4 on
+info     20    node4/crm: status change startup => wait_for_quorum
+info     20    node4/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node5 on
+info     20    node5/crm: status change startup => wait_for_quorum
+info     20    node5/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node4': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node5': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node4'
+info     20    node1/crm: adding new service 'vm:103' on node 'node5'
+info     20    node1/crm: adding new service 'vm:104' on node 'node1'
+info     20    node1/crm: adding new service 'vm:105' on node 'node1'
+info     20    node1/crm: adding new service 'vm:106' on node 'node1'
+info     20    node1/crm: adding new service 'vm:107' on node 'node2'
+info     20    node1/crm: adding new service 'vm:108' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node4)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node5)
+info     20    node1/crm: service 'vm:104': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:105': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:106': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:107': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:108': state changed from 'request_start' to 'started'  (node = node2)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:104
+info     21    node1/lrm: service status vm:104 started
+info     21    node1/lrm: starting service vm:105
+info     21    node1/lrm: service status vm:105 started
+info     21    node1/lrm: starting service vm:106
+info     21    node1/lrm: service status vm:106 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:107
+info     23    node2/lrm: service status vm:107 started
+info     23    node2/lrm: starting service vm:108
+info     23    node2/lrm: service status vm:108 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info     26    node4/crm: status change wait_for_quorum => slave
+info     27    node4/lrm: got lock 'ha_agent_node4_lock'
+info     27    node4/lrm: status change wait_for_agent_lock => active
+info     27    node4/lrm: starting service vm:102
+info     27    node4/lrm: service status vm:102 started
+info     28    node5/crm: status change wait_for_quorum => slave
+info     29    node5/lrm: got lock 'ha_agent_node5_lock'
+info     29    node5/lrm: status change wait_for_agent_lock => active
+info     29    node5/lrm: starting service vm:103
+info     29    node5/lrm: service status vm:103 started
+info    120      cmdlist: execute network node4 off
+info    120      cmdlist: execute network node5 off
+info    120    node1/crm: node 'node4': state changed from 'online' => 'unknown'
+info    120    node1/crm: node 'node5': state changed from 'online' => 'unknown'
+info    126    node4/crm: status change slave => wait_for_quorum
+info    127    node4/lrm: status change active => lost_agent_lock
+info    128    node5/crm: status change slave => wait_for_quorum
+info    129    node5/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:102': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:103': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node4': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node4'
+info    160    node1/crm: node 'node5': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node5'
+info    168     watchdog: execute power node4 off
+info    167    node4/crm: killed by poweroff
+info    168    node4/lrm: killed by poweroff
+info    168     hardware: server 'node4' stopped by poweroff (watchdog)
+info    170     watchdog: execute power node5 off
+info    169    node5/crm: killed by poweroff
+info    170    node5/lrm: killed by poweroff
+info    170     hardware: server 'node5' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node4_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node4'
+info    240    node1/crm: node 'node4': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node4'
+info    240    node1/crm: service 'vm:102': state changed from 'fence' to 'recovery'
+info    240    node1/crm: got lock 'ha_agent_node5_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node5'
+info    240    node1/crm: node 'node5': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node5'
+info    240    node1/crm: service 'vm:103': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:102' from fenced node 'node4' to node 'node2'
+info    240    node1/crm: service 'vm:102': state changed from 'recovery' to 'started'  (node = node2)
+info    240    node1/crm: recover service 'vm:103' from fenced node 'node5' to node 'node1'
+info    240    node1/crm: service 'vm:103': state changed from 'recovery' to 'started'  (node = node1)
+info    241    node1/lrm: starting service vm:103
+info    241    node1/lrm: service status vm:103 started
+info    243    node2/lrm: starting service vm:102
+info    243    node2/lrm: service status vm:102 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-colocation-strict-separate3/manager_status b/src/test/test-colocation-strict-separate3/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-colocation-strict-separate3/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-colocation-strict-separate3/rules_config b/src/test/test-colocation-strict-separate3/rules_config
new file mode 100644
index 0000000..64c7bfb
--- /dev/null
+++ b/src/test/test-colocation-strict-separate3/rules_config
@@ -0,0 +1,3 @@
+colocation: lonely-must-vms-be
+	services vm:101,vm:102,vm:103
+	affinity separate
diff --git a/src/test/test-colocation-strict-separate3/service_config b/src/test/test-colocation-strict-separate3/service_config
new file mode 100644
index 0000000..2c27816
--- /dev/null
+++ b/src/test/test-colocation-strict-separate3/service_config
@@ -0,0 +1,10 @@
+{
+    "vm:101": { "node": "node3", "state": "started" },
+    "vm:102": { "node": "node4", "state": "started" },
+    "vm:103": { "node": "node5", "state": "started" },
+    "vm:104": { "node": "node1", "state": "started" },
+    "vm:105": { "node": "node1", "state": "started" },
+    "vm:106": { "node": "node1", "state": "started" },
+    "vm:107": { "node": "node2", "state": "started" },
+    "vm:108": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-colocation-strict-separate4/README b/src/test/test-colocation-strict-separate4/README
new file mode 100644
index 0000000..824274c
--- /dev/null
+++ b/src/test/test-colocation-strict-separate4/README
@@ -0,0 +1,18 @@
+Test whether a strict negative colocation rule among two services makes one of
+the services migrate to a different recovery node than the other service in
+case of a failover of the service's previously assigned node. As the service
+fails to start on the recovery node (e.g. insufficient resources), the failing
+service is kept on the recovery node.
+
+The test scenario is:
+- vm:101 and fa:120001 must be kept separate
+- vm:101 and fa:120001 are on node2 and node3 respectively
+- fa:120001 will fail to start on node1
+- node1 has a higher service count than node2 to test the colocation rule is
+  applied even though the scheduler would prefer the less utilized node
+
+The expected outcome is:
+- As node3 fails, fa:120001 is migrated to node1
+- fa:120001 will stay on the node (potentially in recovery), since it cannot be
+  started on node1, but cannot be relocated to another one either due to the
+  strict colocation rule
diff --git a/src/test/test-colocation-strict-separate4/cmdlist b/src/test/test-colocation-strict-separate4/cmdlist
new file mode 100644
index 0000000..c0a4daa
--- /dev/null
+++ b/src/test/test-colocation-strict-separate4/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-colocation-strict-separate4/hardware_status b/src/test/test-colocation-strict-separate4/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-colocation-strict-separate4/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-colocation-strict-separate4/log.expect b/src/test/test-colocation-strict-separate4/log.expect
new file mode 100644
index 0000000..f772ea8
--- /dev/null
+++ b/src/test/test-colocation-strict-separate4/log.expect
@@ -0,0 +1,69 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'fa:120001' on node 'node3'
+info     20    node1/crm: adding new service 'vm:101' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node1'
+info     20    node1/crm: adding new service 'vm:104' on node 'node1'
+info     20    node1/crm: service 'fa:120001': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:104': state changed from 'request_start' to 'started'  (node = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:103
+info     21    node1/lrm: service status vm:103 started
+info     21    node1/lrm: starting service vm:104
+info     21    node1/lrm: service status vm:104 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:101
+info     23    node2/lrm: service status vm:101 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service fa:120001
+info     25    node3/lrm: service status fa:120001 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'fa:120001': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'fa:120001': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'fa:120001' from fenced node 'node3' to node 'node1'
+info    240    node1/crm: service 'fa:120001': state changed from 'recovery' to 'started'  (node = node1)
+info    241    node1/lrm: starting service fa:120001
+warn    241    node1/lrm: unable to start service fa:120001
+warn    241    node1/lrm: restart policy: retry number 1 for service 'fa:120001'
+info    261    node1/lrm: starting service fa:120001
+warn    261    node1/lrm: unable to start service fa:120001
+err     261    node1/lrm: unable to start service fa:120001 on local node after 1 retries
+warn    280    node1/crm: starting service fa:120001 on node 'node1' failed, relocating service.
+warn    280    node1/crm: Start Error Recovery: Tried all available nodes for service 'fa:120001', retry start on current node. Tried nodes: node1
+info    281    node1/lrm: starting service fa:120001
+info    281    node1/lrm: service status fa:120001 started
+info    300    node1/crm: relocation policy successful for 'fa:120001' on node 'node1', failed nodes: node1
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-colocation-strict-separate4/manager_status b/src/test/test-colocation-strict-separate4/manager_status
new file mode 100644
index 0000000..0967ef4
--- /dev/null
+++ b/src/test/test-colocation-strict-separate4/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-colocation-strict-separate4/rules_config b/src/test/test-colocation-strict-separate4/rules_config
new file mode 100644
index 0000000..90226b7
--- /dev/null
+++ b/src/test/test-colocation-strict-separate4/rules_config
@@ -0,0 +1,3 @@
+colocation: lonely-must-vms-be
+	services vm:101,fa:120001
+	affinity separate
diff --git a/src/test/test-colocation-strict-separate4/service_config b/src/test/test-colocation-strict-separate4/service_config
new file mode 100644
index 0000000..f53c2bc
--- /dev/null
+++ b/src/test/test-colocation-strict-separate4/service_config
@@ -0,0 +1,6 @@
+{
+    "vm:101": { "node": "node2", "state": "started" },
+    "fa:120001": { "node": "node3", "state": "started" },
+    "vm:103": { "node": "node1", "state": "started" },
+    "vm:104": { "node": "node1", "state": "started" }
+}
diff --git a/src/test/test-colocation-strict-separate5/README b/src/test/test-colocation-strict-separate5/README
new file mode 100644
index 0000000..7795e3d
--- /dev/null
+++ b/src/test/test-colocation-strict-separate5/README
@@ -0,0 +1,11 @@
+Test whether two pair-wise strict negative colocation rules, i.e. where one
+service is in two separate non-colocation relationships with two other
+services, make one of the outer services migrate to the same node as the other
+outer service in case of a failover of its previously assigned node.
+
+The test scenario is:
+- vm:101 and vm:102, and vm:101 and vm:103 must each be kept separate
+- vm:101, vm:102, and vm:103 are respectively on node1, node2, and node3
+
+The expected outcome is:
+- As node3 fails, vm:103 is migrated to node2 - the same as vm:102
diff --git a/src/test/test-colocation-strict-separate5/cmdlist b/src/test/test-colocation-strict-separate5/cmdlist
new file mode 100644
index 0000000..c0a4daa
--- /dev/null
+++ b/src/test/test-colocation-strict-separate5/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-colocation-strict-separate5/hardware_status b/src/test/test-colocation-strict-separate5/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-colocation-strict-separate5/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-colocation-strict-separate5/log.expect b/src/test/test-colocation-strict-separate5/log.expect
new file mode 100644
index 0000000..16156ad
--- /dev/null
+++ b/src/test/test-colocation-strict-separate5/log.expect
@@ -0,0 +1,56 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node3)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:101
+info     21    node1/lrm: service status vm:101 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:103
+info     25    node3/lrm: service status vm:103 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:103': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:103': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:103' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:103': state changed from 'recovery' to 'started'  (node = node2)
+info    243    node2/lrm: starting service vm:103
+info    243    node2/lrm: service status vm:103 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-colocation-strict-separate5/manager_status b/src/test/test-colocation-strict-separate5/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-colocation-strict-separate5/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-colocation-strict-separate5/rules_config b/src/test/test-colocation-strict-separate5/rules_config
new file mode 100644
index 0000000..b198427
--- /dev/null
+++ b/src/test/test-colocation-strict-separate5/rules_config
@@ -0,0 +1,7 @@
+colocation: lonely-must-some-vms-be1
+	services vm:101,vm:102
+	affinity separate
+
+colocation: lonely-must-some-vms-be2
+	services vm:101,vm:103
+	affinity separate
diff --git a/src/test/test-colocation-strict-separate5/service_config b/src/test/test-colocation-strict-separate5/service_config
new file mode 100644
index 0000000..4b26f6b
--- /dev/null
+++ b/src/test/test-colocation-strict-separate5/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node3", "state": "started" }
+}
diff --git a/src/test/test-colocation-strict-separate6/README b/src/test/test-colocation-strict-separate6/README
new file mode 100644
index 0000000..ff10171
--- /dev/null
+++ b/src/test/test-colocation-strict-separate6/README
@@ -0,0 +1,18 @@
+Test whether a strict negative colocation rule among two services makes one of
+the services migrate to a different recovery node than the other service in
+case of a failover of the service's previously assigned node. As the other
+service fails to start on the recovery node (e.g. insufficient resources),
+the failing service is kept on the recovery node.
+
+The test scenario is:
+- fa:120001 and fa:220001 must be kept separate
+- fa:120001 and fa:220001 are on node2 and node3 respectively
+- fa:120001 and fa:220001 will fail to start on node1
+- node1 has a higher service count than node2 to test the colocation rule is
+  applied even though the scheduler would prefer the less utilized node
+
+The expected outcome is:
+- As node3 fails, fa:220001 is migrated to node1
+- fa:220001 will stay on the node (potentially in recovery), since it cannot be
+  started on node1, but cannot be relocated to another one either due to the
+  strict colocation rule
diff --git a/src/test/test-colocation-strict-separate6/cmdlist b/src/test/test-colocation-strict-separate6/cmdlist
new file mode 100644
index 0000000..c0a4daa
--- /dev/null
+++ b/src/test/test-colocation-strict-separate6/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-colocation-strict-separate6/hardware_status b/src/test/test-colocation-strict-separate6/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-colocation-strict-separate6/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-colocation-strict-separate6/log.expect b/src/test/test-colocation-strict-separate6/log.expect
new file mode 100644
index 0000000..0d9854a
--- /dev/null
+++ b/src/test/test-colocation-strict-separate6/log.expect
@@ -0,0 +1,69 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'fa:120001' on node 'node2'
+info     20    node1/crm: adding new service 'fa:220001' on node 'node3'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:102' on node 'node1'
+info     20    node1/crm: service 'fa:120001': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'fa:220001': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:101
+info     21    node1/lrm: service status vm:101 started
+info     21    node1/lrm: starting service vm:102
+info     21    node1/lrm: service status vm:102 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service fa:120001
+info     23    node2/lrm: service status fa:120001 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service fa:220001
+info     25    node3/lrm: service status fa:220001 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'fa:220001': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'fa:220001': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'fa:220001' from fenced node 'node3' to node 'node1'
+info    240    node1/crm: service 'fa:220001': state changed from 'recovery' to 'started'  (node = node1)
+info    241    node1/lrm: starting service fa:220001
+warn    241    node1/lrm: unable to start service fa:220001
+warn    241    node1/lrm: restart policy: retry number 1 for service 'fa:220001'
+info    261    node1/lrm: starting service fa:220001
+warn    261    node1/lrm: unable to start service fa:220001
+err     261    node1/lrm: unable to start service fa:220001 on local node after 1 retries
+warn    280    node1/crm: starting service fa:220001 on node 'node1' failed, relocating service.
+warn    280    node1/crm: Start Error Recovery: Tried all available nodes for service 'fa:220001', retry start on current node. Tried nodes: node1
+info    281    node1/lrm: starting service fa:220001
+info    281    node1/lrm: service status fa:220001 started
+info    300    node1/crm: relocation policy successful for 'fa:220001' on node 'node1', failed nodes: node1
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-colocation-strict-separate6/manager_status b/src/test/test-colocation-strict-separate6/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-colocation-strict-separate6/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-colocation-strict-separate6/rules_config b/src/test/test-colocation-strict-separate6/rules_config
new file mode 100644
index 0000000..82482d0
--- /dev/null
+++ b/src/test/test-colocation-strict-separate6/rules_config
@@ -0,0 +1,3 @@
+colocation: lonely-must-vms-be
+	services fa:120001,fa:220001
+	affinity separate
diff --git a/src/test/test-colocation-strict-separate6/service_config b/src/test/test-colocation-strict-separate6/service_config
new file mode 100644
index 0000000..1f9480c
--- /dev/null
+++ b/src/test/test-colocation-strict-separate6/service_config
@@ -0,0 +1,6 @@
+{
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:102": { "node": "node1", "state": "started" },
+    "fa:120001": { "node": "node2", "state": "started" },
+    "fa:220001": { "node": "node3", "state": "started" }
+}
diff --git a/src/test/test-colocation-strict-separate7/README b/src/test/test-colocation-strict-separate7/README
new file mode 100644
index 0000000..b783a47
--- /dev/null
+++ b/src/test/test-colocation-strict-separate7/README
@@ -0,0 +1,15 @@
+Test whether a strict negative colocation rule among two services allows one
+of the services to be manually migrated to another free node, while preventing
+the other negatively colocated service from being migrated to the migrated
+service's source node while that migration is still in progress.
+
+The test scenario is:
+- vm:101 and vm:102 must be kept separate
+- vm:101 and vm:102 are running on node1 and node2 respectively
+
+The expected outcome is:
+- vm:101 is migrated to node3
+- While vm:101 is being migrated, vm:102 cannot be migrated to node1, as
+  vm:101 still puts load on node1 as its source node
+- After vm:101 is successfully migrated to node3, vm:102 can be migrated to
+  node1
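
For reference, the scenario above is driven by this test's cmdlist (added below
in this patch): both migrations are requested in the same command round, and the
rejected vm:102 migration is repeated once vm:101 has finished migrating:

    [
        [ "power node1 on", "power node2 on", "power node3 on"],
        [ "service vm:101 migrate node3", "service vm:102 migrate node1" ],
        [ "service vm:102 migrate node1" ]
    ]
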
diff --git a/src/test/test-colocation-strict-separate7/cmdlist b/src/test/test-colocation-strict-separate7/cmdlist
new file mode 100644
index 0000000..468ba56
--- /dev/null
+++ b/src/test/test-colocation-strict-separate7/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node3", "service vm:102 migrate node1" ],
+    [ "service vm:102 migrate node1" ]
+]
diff --git a/src/test/test-colocation-strict-separate7/hardware_status b/src/test/test-colocation-strict-separate7/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-colocation-strict-separate7/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-colocation-strict-separate7/log.expect b/src/test/test-colocation-strict-separate7/log.expect
new file mode 100644
index 0000000..07213b2
--- /dev/null
+++ b/src/test/test-colocation-strict-separate7/log.expect
@@ -0,0 +1,52 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:101
+info     21    node1/lrm: service status vm:101 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info    120      cmdlist: execute service vm:101 migrate node3
+info    120      cmdlist: execute service vm:102 migrate node1
+info    120    node1/crm: got crm command: migrate vm:101 node3
+err     120    node1/crm: crm command 'migrate vm:102 node1' error - negatively colocated service 'vm:101' on 'node1'
+info    120    node1/crm: migrate service 'vm:101' to node 'node3'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node1, target = node3)
+info    121    node1/lrm: service vm:101 - start migrate to node 'node3'
+info    121    node1/lrm: service vm:101 - end migrate to node 'node3'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
+info    145    node3/lrm: got lock 'ha_agent_node3_lock'
+info    145    node3/lrm: status change wait_for_agent_lock => active
+info    145    node3/lrm: starting service vm:101
+info    145    node3/lrm: service status vm:101 started
+info    220      cmdlist: execute service vm:102 migrate node1
+info    220    node1/crm: got crm command: migrate vm:102 node1
+info    220    node1/crm: migrate service 'vm:102' to node 'node1'
+info    220    node1/crm: service 'vm:102': state changed from 'started' to 'migrate'  (node = node2, target = node1)
+info    223    node2/lrm: service vm:102 - start migrate to node 'node1'
+info    223    node2/lrm: service vm:102 - end migrate to node 'node1'
+info    240    node1/crm: service 'vm:102': state changed from 'migrate' to 'started'  (node = node1)
+info    241    node1/lrm: starting service vm:102
+info    241    node1/lrm: service status vm:102 started
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-colocation-strict-separate7/manager_status b/src/test/test-colocation-strict-separate7/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-colocation-strict-separate7/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-colocation-strict-separate7/rules_config b/src/test/test-colocation-strict-separate7/rules_config
new file mode 100644
index 0000000..87d309e
--- /dev/null
+++ b/src/test/test-colocation-strict-separate7/rules_config
@@ -0,0 +1,3 @@
+colocation: lonely-must-vms-be
+	services vm:101,vm:102
+	affinity separate
diff --git a/src/test/test-colocation-strict-separate7/service_config b/src/test/test-colocation-strict-separate7/service_config
new file mode 100644
index 0000000..0336d09
--- /dev/null
+++ b/src/test/test-colocation-strict-separate7/service_config
@@ -0,0 +1,4 @@
+{
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:102": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-colocation-strict-separate8/README b/src/test/test-colocation-strict-separate8/README
new file mode 100644
index 0000000..78035a8
--- /dev/null
+++ b/src/test/test-colocation-strict-separate8/README
@@ -0,0 +1,11 @@
+Test whether a strict negative colocation rule among three services prevents
+one of the services from being manually migrated to another negatively
+colocated service's node, so that the service stays on its current node.
+
+The test scenario is:
+- vm:101, vm:102, and vm:103 must be kept separate
+- vm:101, vm:102, and vm:103 run on node1, node2, and node3 respectively
+
+The expected outcome is:
+- vm:101 cannot be migrated to node3, as this would violate the negative
+  colocation rule between vm:101, vm:102, and vm:103.
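
In the expected log output (log.expect below), the rejected migration shows up
as a CRM command error rather than a service state change:

    err     120    node1/crm: crm command 'migrate vm:101 node3' error - negatively colocated service 'vm:103' on 'node3'
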
diff --git a/src/test/test-colocation-strict-separate8/cmdlist b/src/test/test-colocation-strict-separate8/cmdlist
new file mode 100644
index 0000000..13cab7b
--- /dev/null
+++ b/src/test/test-colocation-strict-separate8/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node3" ]
+]
diff --git a/src/test/test-colocation-strict-separate8/hardware_status b/src/test/test-colocation-strict-separate8/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-colocation-strict-separate8/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-colocation-strict-separate8/log.expect b/src/test/test-colocation-strict-separate8/log.expect
new file mode 100644
index 0000000..d1048ed
--- /dev/null
+++ b/src/test/test-colocation-strict-separate8/log.expect
@@ -0,0 +1,38 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node3)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:101
+info     21    node1/lrm: service status vm:101 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:103
+info     25    node3/lrm: service status vm:103 started
+info    120      cmdlist: execute service vm:101 migrate node3
+err     120    node1/crm: crm command 'migrate vm:101 node3' error - negatively colocated service 'vm:103' on 'node3'
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-colocation-strict-separate8/manager_status b/src/test/test-colocation-strict-separate8/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-colocation-strict-separate8/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-colocation-strict-separate8/rules_config b/src/test/test-colocation-strict-separate8/rules_config
new file mode 100644
index 0000000..64c7bfb
--- /dev/null
+++ b/src/test/test-colocation-strict-separate8/rules_config
@@ -0,0 +1,3 @@
+colocation: lonely-must-vms-be
+	services vm:101,vm:102,vm:103
+	affinity separate
diff --git a/src/test/test-colocation-strict-separate8/service_config b/src/test/test-colocation-strict-separate8/service_config
new file mode 100644
index 0000000..4b26f6b
--- /dev/null
+++ b/src/test/test-colocation-strict-separate8/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node3", "state": "started" }
+}
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Thread overview: 70+ messages
2025-06-20 14:31 [pve-devel] [RFC common/cluster/ha-manager/docs/manager v2 00/40] HA " Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH common v2 1/1] introduce HashTools module Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH cluster v2 1/3] cfs: add 'ha/rules.cfg' to observed files Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH cluster v2 2/3] datacenter config: make pve-ha-shutdown-policy optional Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH cluster v2 3/3] datacenter config: introduce feature flag for location rules Daniel Kral
2025-06-23 15:58   ` Thomas Lamprecht
2025-06-24  7:29     ` Daniel Kral
2025-06-24  7:51       ` Thomas Lamprecht
2025-06-24  8:19         ` Daniel Kral
2025-06-24  8:25           ` Thomas Lamprecht
2025-06-24  8:52             ` Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 01/26] tree-wide: make arguments for select_service_node explicit Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 02/26] manager: improve signature of select_service_node Daniel Kral
2025-06-23 16:21   ` Thomas Lamprecht
2025-06-24  8:06     ` Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 03/26] introduce rules base plugin Daniel Kral
2025-07-04 14:18   ` Michael Köppl
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 04/26] rules: introduce location rule plugin Daniel Kral
2025-06-20 16:17   ` Jillian Morgan
2025-06-20 16:30     ` Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 05/26] rules: introduce colocation " Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 06/26] rules: add global checks between location and colocation rules Daniel Kral
2025-07-01 11:02   ` Daniel Kral
2025-07-04 14:43   ` Michael Köppl
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 07/26] config, env, hw: add rules read and parse methods Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 08/26] manager: read and update rules config Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 09/26] test: ha tester: add test cases for future location rules Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 10/26] resources: introduce failback property in service config Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 11/26] manager: migrate ha groups to location rules in-memory Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 12/26] manager: apply location rules when selecting service nodes Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 13/26] usage: add information about a service's assigned nodes Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 14/26] manager: apply colocation rules when selecting service nodes Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 15/26] manager: handle migrations for colocated services Daniel Kral
2025-06-27  9:10   ` Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 16/26] sim: resources: add option to limit start and migrate tries to node Daniel Kral
2025-06-20 14:31 ` Daniel Kral [this message]
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 18/26] test: ha tester: add test cases for strict positive colocation rules Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 19/26] test: ha tester: add test cases in more complex scenarios Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 20/26] test: add test cases for rules config Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 21/26] manager: handle negative colocations with too many services Daniel Kral
2025-07-01 12:11   ` Michael Köppl
2025-07-01 12:23     ` Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 22/26] config: prune services from rules if services are deleted from config Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 23/26] api: introduce ha rules api endpoints Daniel Kral
2025-07-04 14:16   ` Michael Köppl
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 24/26] cli: expose ha rules api endpoints to ha-manager cli Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 25/26] api: groups, services: assert use-location-rules feature flag Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH ha-manager v2 26/26] api: services: check for colocations for service motions Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH docs v2 1/5] ha: config: add section about ha rules Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH docs v2 2/5] update static files to include ha rules api endpoints Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH docs v2 3/5] update static files to include use-location-rules feature flag Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH docs v2 4/5] update static files to include ha resources failback flag Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH docs v2 5/5] update static files to include ha service motion return value schema Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH manager v2 1/5] api: ha: add ha rules api endpoints Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH manager v2 2/5] ui: add use-location-rules feature flag Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH manager v2 3/5] ui: ha: hide ha groups if use-location-rules is enabled Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH manager v2 4/5] ui: ha: adapt resources components " Daniel Kral
2025-06-20 14:31 ` [pve-devel] [PATCH manager v2 5/5] ui: ha: add ha rules components and menu entry Daniel Kral
2025-06-30 15:09   ` Michael Köppl
2025-07-01 14:38   ` Michael Köppl
2025-06-20 15:43 ` [pve-devel] [RFC common/cluster/ha-manager/docs/manager v2 00/40] HA colocation rules Daniel Kral
2025-06-20 17:11   ` Jillian Morgan
2025-06-20 17:45     ` DERUMIER, Alexandre via pve-devel
     [not found]     ` <476c41123dced9d560dfbf27640ef8705fd90f11.camel@groupe-cyllene.com>
2025-06-23 15:36       ` Thomas Lamprecht
2025-06-24  8:48         ` Daniel Kral
2025-06-27 12:23           ` Friedrich Weber
2025-06-27 12:41             ` Daniel Kral
2025-06-23  8:11 ` DERUMIER, Alexandre via pve-devel
     [not found] ` <bf973ec4e8c52a10535ed35ad64bf0ec8d1ad37d.camel@groupe-cyllene.com>
2025-06-23 15:28   ` Thomas Lamprecht
2025-06-23 23:21     ` DERUMIER, Alexandre via pve-devel
