From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH ha-manager v3 10/13] test: ha tester: add test cases for positive resource affinity rules
Date: Fri,  4 Jul 2025 20:20:53 +0200
Message-ID: <20250704182102.467624-11-d.kral@proxmox.com>
In-Reply-To: <20250704182102.467624-1-d.kral@proxmox.com>

Add test cases for strict positive resource affinity rules, i.e. where
resources must be kept together on the same node. These verify the
behavior of resources in strict positive resource affinity rules in
case of a failover of their assigned nodes in the following scenarios:

1. 2 resources in pos. affinity and a 3 node cluster; 1 node failing
2. 3 resources in pos. affinity and a 3 node cluster; 1 node failing
3. 3 resources in pos. affinity and a 3 node cluster; 1 node failing,
   but the recovery node cannot start one of the resources
4. 3 resources in pos. affinity and a 3 node cluster; 1 resource
   manually migrated to another node will migrate the other resources in
   pos. affinity with the migrated resource to the same node as well
5. 9 resources in pos. affinity and a 3 node cluster; 1 resource
   manually migrated to another node will migrate the other resources in
   pos. affinity with the migrated resource to the same node as well

The word "strict" describes the current policy of resource affinity
rules and is added in anticipation of a "non-strict" variant in the
future.
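
For reference, the rules exercised here are declared in each test case's
rules_config (see the diffs below); e.g. for the first test case:

    resource-affinity: vms-must-stick-together
            resources vm:101,vm:102
            affinity positive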

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
 .../README                                    |  12 +
 .../cmdlist                                   |   4 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  66 ++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   6 +
 .../README                                    |  11 +
 .../cmdlist                                   |   4 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  80 +++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   8 +
 .../README                                    |  17 ++
 .../cmdlist                                   |   4 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  89 ++++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   8 +
 .../README                                    |  11 +
 .../cmdlist                                   |   4 +
 .../hardware_status                           |   5 +
 .../log.expect                                |  59 ++++
 .../manager_status                            |   1 +
 .../rules_config                              |   3 +
 .../service_config                            |   5 +
 .../README                                    |  19 ++
 .../cmdlist                                   |   8 +
 .../hardware_status                           |   5 +
 .../log.expect                                | 281 ++++++++++++++++++
 .../manager_status                            |   1 +
 .../rules_config                              |  15 +
 .../service_config                            |  11 +
 35 files changed, 764 insertions(+)
 create mode 100644 src/test/test-resource-affinity-strict-positive1/README
 create mode 100644 src/test/test-resource-affinity-strict-positive1/cmdlist
 create mode 100644 src/test/test-resource-affinity-strict-positive1/hardware_status
 create mode 100644 src/test/test-resource-affinity-strict-positive1/log.expect
 create mode 100644 src/test/test-resource-affinity-strict-positive1/manager_status
 create mode 100644 src/test/test-resource-affinity-strict-positive1/rules_config
 create mode 100644 src/test/test-resource-affinity-strict-positive1/service_config
 create mode 100644 src/test/test-resource-affinity-strict-positive2/README
 create mode 100644 src/test/test-resource-affinity-strict-positive2/cmdlist
 create mode 100644 src/test/test-resource-affinity-strict-positive2/hardware_status
 create mode 100644 src/test/test-resource-affinity-strict-positive2/log.expect
 create mode 100644 src/test/test-resource-affinity-strict-positive2/manager_status
 create mode 100644 src/test/test-resource-affinity-strict-positive2/rules_config
 create mode 100644 src/test/test-resource-affinity-strict-positive2/service_config
 create mode 100644 src/test/test-resource-affinity-strict-positive3/README
 create mode 100644 src/test/test-resource-affinity-strict-positive3/cmdlist
 create mode 100644 src/test/test-resource-affinity-strict-positive3/hardware_status
 create mode 100644 src/test/test-resource-affinity-strict-positive3/log.expect
 create mode 100644 src/test/test-resource-affinity-strict-positive3/manager_status
 create mode 100644 src/test/test-resource-affinity-strict-positive3/rules_config
 create mode 100644 src/test/test-resource-affinity-strict-positive3/service_config
 create mode 100644 src/test/test-resource-affinity-strict-positive4/README
 create mode 100644 src/test/test-resource-affinity-strict-positive4/cmdlist
 create mode 100644 src/test/test-resource-affinity-strict-positive4/hardware_status
 create mode 100644 src/test/test-resource-affinity-strict-positive4/log.expect
 create mode 100644 src/test/test-resource-affinity-strict-positive4/manager_status
 create mode 100644 src/test/test-resource-affinity-strict-positive4/rules_config
 create mode 100644 src/test/test-resource-affinity-strict-positive4/service_config
 create mode 100644 src/test/test-resource-affinity-strict-positive5/README
 create mode 100644 src/test/test-resource-affinity-strict-positive5/cmdlist
 create mode 100644 src/test/test-resource-affinity-strict-positive5/hardware_status
 create mode 100644 src/test/test-resource-affinity-strict-positive5/log.expect
 create mode 100644 src/test/test-resource-affinity-strict-positive5/manager_status
 create mode 100644 src/test/test-resource-affinity-strict-positive5/rules_config
 create mode 100644 src/test/test-resource-affinity-strict-positive5/service_config
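
To replay a single scenario, the test directories above can be run with the
ha-tester harness (assuming the usual in-tree invocation):

    cd src/test && ./ha-tester.pl test-resource-affinity-strict-positive1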

diff --git a/src/test/test-resource-affinity-strict-positive1/README b/src/test/test-resource-affinity-strict-positive1/README
new file mode 100644
index 0000000..3b20474
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive1/README
@@ -0,0 +1,12 @@
+Test whether a strict positive resource affinity rule makes two resources migrate
+to the same recovery node in case of a failover of their previously assigned
+node.
+
+The test scenario is:
+- vm:101 and vm:102 must be kept together
+- vm:101 and vm:102 are both currently running on node3
+- node1 and node2 have the same resource count, to test that the rule is
+  applied even though the load would usually be balanced between both nodes
+
+The expected outcome is:
+- As node3 fails, both resources are migrated to node1
diff --git a/src/test/test-resource-affinity-strict-positive1/cmdlist b/src/test/test-resource-affinity-strict-positive1/cmdlist
new file mode 100644
index 0000000..c0a4daa
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive1/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-resource-affinity-strict-positive1/hardware_status b/src/test/test-resource-affinity-strict-positive1/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive1/log.expect b/src/test/test-resource-affinity-strict-positive1/log.expect
new file mode 100644
index 0000000..7d43314
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive1/log.expect
@@ -0,0 +1,66 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node3'
+info     20    node1/crm: adding new service 'vm:103' on node 'node1'
+info     20    node1/crm: adding new service 'vm:104' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:104': state changed from 'request_start' to 'started'  (node = node2)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:103
+info     21    node1/lrm: service status vm:103 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:104
+info     23    node2/lrm: service status vm:104 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info     25    node3/lrm: starting service vm:102
+info     25    node3/lrm: service status vm:102 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:102': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: service 'vm:102': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node3' to node 'node1'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node1)
+info    240    node1/crm: recover service 'vm:102' from fenced node 'node3' to node 'node1'
+info    240    node1/crm: service 'vm:102': state changed from 'recovery' to 'started'  (node = node1)
+info    241    node1/lrm: starting service vm:101
+info    241    node1/lrm: service status vm:101 started
+info    241    node1/lrm: starting service vm:102
+info    241    node1/lrm: service status vm:102 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-resource-affinity-strict-positive1/manager_status b/src/test/test-resource-affinity-strict-positive1/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive1/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-resource-affinity-strict-positive1/rules_config b/src/test/test-resource-affinity-strict-positive1/rules_config
new file mode 100644
index 0000000..9789d7c
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive1/rules_config
@@ -0,0 +1,3 @@
+resource-affinity: vms-must-stick-together
+	resources vm:101,vm:102
+	affinity positive
diff --git a/src/test/test-resource-affinity-strict-positive1/service_config b/src/test/test-resource-affinity-strict-positive1/service_config
new file mode 100644
index 0000000..9fb091d
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive1/service_config
@@ -0,0 +1,6 @@
+{
+    "vm:101": { "node": "node3", "state": "started" },
+    "vm:102": { "node": "node3", "state": "started" },
+    "vm:103": { "node": "node1", "state": "started" },
+    "vm:104": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive2/README b/src/test/test-resource-affinity-strict-positive2/README
new file mode 100644
index 0000000..533625c
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive2/README
@@ -0,0 +1,11 @@
+Test whether a strict positive resource affinity rule makes three resources
+migrate to the same recovery node in case of a failover of their previously
+assigned node.
+
+The test scenario is:
+- vm:101, vm:102, and vm:103 must be kept together
+- vm:101, vm:102, and vm:103 are all currently running on node3
+
+The expected outcome is:
+- As node3 fails, all resources are migrated to node2, as node2 is less
+  utilized than the other available node, node1
diff --git a/src/test/test-resource-affinity-strict-positive2/cmdlist b/src/test/test-resource-affinity-strict-positive2/cmdlist
new file mode 100644
index 0000000..c0a4daa
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive2/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-resource-affinity-strict-positive2/hardware_status b/src/test/test-resource-affinity-strict-positive2/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive2/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive2/log.expect b/src/test/test-resource-affinity-strict-positive2/log.expect
new file mode 100644
index 0000000..78f4d66
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive2/log.expect
@@ -0,0 +1,80 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node3'
+info     20    node1/crm: adding new service 'vm:103' on node 'node3'
+info     20    node1/crm: adding new service 'vm:104' on node 'node1'
+info     20    node1/crm: adding new service 'vm:105' on node 'node1'
+info     20    node1/crm: adding new service 'vm:106' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:104': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:105': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:106': state changed from 'request_start' to 'started'  (node = node2)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:104
+info     21    node1/lrm: service status vm:104 started
+info     21    node1/lrm: starting service vm:105
+info     21    node1/lrm: service status vm:105 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:106
+info     23    node2/lrm: service status vm:106 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info     25    node3/lrm: starting service vm:102
+info     25    node3/lrm: service status vm:102 started
+info     25    node3/lrm: starting service vm:103
+info     25    node3/lrm: service status vm:103 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:102': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:103': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: service 'vm:102': state changed from 'fence' to 'recovery'
+info    240    node1/crm: service 'vm:103': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node2)
+info    240    node1/crm: recover service 'vm:102' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:102': state changed from 'recovery' to 'started'  (node = node2)
+info    240    node1/crm: recover service 'vm:103' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:103': state changed from 'recovery' to 'started'  (node = node2)
+info    243    node2/lrm: starting service vm:101
+info    243    node2/lrm: service status vm:101 started
+info    243    node2/lrm: starting service vm:102
+info    243    node2/lrm: service status vm:102 started
+info    243    node2/lrm: starting service vm:103
+info    243    node2/lrm: service status vm:103 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-resource-affinity-strict-positive2/manager_status b/src/test/test-resource-affinity-strict-positive2/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive2/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-resource-affinity-strict-positive2/rules_config b/src/test/test-resource-affinity-strict-positive2/rules_config
new file mode 100644
index 0000000..12da6e6
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive2/rules_config
@@ -0,0 +1,3 @@
+resource-affinity: vms-must-stick-together
+	resources vm:101,vm:102,vm:103
+	affinity positive
diff --git a/src/test/test-resource-affinity-strict-positive2/service_config b/src/test/test-resource-affinity-strict-positive2/service_config
new file mode 100644
index 0000000..fd4a87e
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive2/service_config
@@ -0,0 +1,8 @@
+{
+    "vm:101": { "node": "node3", "state": "started" },
+    "vm:102": { "node": "node3", "state": "started" },
+    "vm:103": { "node": "node3", "state": "started" },
+    "vm:104": { "node": "node1", "state": "started" },
+    "vm:105": { "node": "node1", "state": "started" },
+    "vm:106": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive3/README b/src/test/test-resource-affinity-strict-positive3/README
new file mode 100644
index 0000000..a270277
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive3/README
@@ -0,0 +1,17 @@
+Test whether a strict positive resource affinity rule makes three resources
+migrate to the same recovery node in case of a failover of their previously
+assigned node. If one of them fails to start on the recovery node (e.g. due to
+insufficient resources), the failing resource is kept on the recovery node.
+
+The test scenario is:
+- vm:101, vm:102, and fa:120002 must be kept together
+- vm:101, vm:102, and fa:120002 are all currently running on node3
+- fa:120002 will fail to start on node2
+- node1 has a higher resource count than node2, so node2 is selected as the
+  recovery node, where fa:120002 is guaranteed to fail
+
+The expected outcome is:
+- As node3 fails, all resources are migrated to node2
+- Two of those resources will start successfully, but fa:120002 will fail to
+  start there; as it cannot be relocated to another node due to the strict
+  resource affinity rule, it is retried on node2 until it eventually starts
diff --git a/src/test/test-resource-affinity-strict-positive3/cmdlist b/src/test/test-resource-affinity-strict-positive3/cmdlist
new file mode 100644
index 0000000..c0a4daa
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive3/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-resource-affinity-strict-positive3/hardware_status b/src/test/test-resource-affinity-strict-positive3/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive3/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive3/log.expect b/src/test/test-resource-affinity-strict-positive3/log.expect
new file mode 100644
index 0000000..4a54cb3
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive3/log.expect
@@ -0,0 +1,89 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'fa:120002' on node 'node3'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node3'
+info     20    node1/crm: adding new service 'vm:104' on node 'node1'
+info     20    node1/crm: adding new service 'vm:105' on node 'node1'
+info     20    node1/crm: adding new service 'vm:106' on node 'node2'
+info     20    node1/crm: service 'fa:120002': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:104': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:105': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:106': state changed from 'request_start' to 'started'  (node = node2)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:104
+info     21    node1/lrm: service status vm:104 started
+info     21    node1/lrm: starting service vm:105
+info     21    node1/lrm: service status vm:105 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:106
+info     23    node2/lrm: service status vm:106 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service fa:120002
+info     25    node3/lrm: service status fa:120002 started
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info     25    node3/lrm: starting service vm:102
+info     25    node3/lrm: service status vm:102 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'fa:120002': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: service 'vm:102': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'fa:120002': state changed from 'fence' to 'recovery'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: service 'vm:102': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'fa:120002' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'fa:120002': state changed from 'recovery' to 'started'  (node = node2)
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node2)
+info    240    node1/crm: recover service 'vm:102' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:102': state changed from 'recovery' to 'started'  (node = node2)
+info    243    node2/lrm: starting service fa:120002
+warn    243    node2/lrm: unable to start service fa:120002
+warn    243    node2/lrm: restart policy: retry number 1 for service 'fa:120002'
+info    243    node2/lrm: starting service vm:101
+info    243    node2/lrm: service status vm:101 started
+info    243    node2/lrm: starting service vm:102
+info    243    node2/lrm: service status vm:102 started
+info    263    node2/lrm: starting service fa:120002
+warn    263    node2/lrm: unable to start service fa:120002
+err     263    node2/lrm: unable to start service fa:120002 on local node after 1 retries
+warn    280    node1/crm: starting service fa:120002 on node 'node2' failed, relocating service.
+warn    280    node1/crm: Start Error Recovery: Tried all available nodes for service 'fa:120002', retry start on current node. Tried nodes: node2
+info    283    node2/lrm: starting service fa:120002
+info    283    node2/lrm: service status fa:120002 started
+info    300    node1/crm: relocation policy successful for 'fa:120002' on node 'node2', failed nodes: node2
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-resource-affinity-strict-positive3/manager_status b/src/test/test-resource-affinity-strict-positive3/manager_status
new file mode 100644
index 0000000..0967ef4
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive3/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-resource-affinity-strict-positive3/rules_config b/src/test/test-resource-affinity-strict-positive3/rules_config
new file mode 100644
index 0000000..077fccd
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive3/rules_config
@@ -0,0 +1,3 @@
+resource-affinity: vms-must-stick-together
+	resources vm:101,vm:102,fa:120002
+	affinity positive
diff --git a/src/test/test-resource-affinity-strict-positive3/service_config b/src/test/test-resource-affinity-strict-positive3/service_config
new file mode 100644
index 0000000..3ce5f27
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive3/service_config
@@ -0,0 +1,8 @@
+{
+    "vm:101": { "node": "node3", "state": "started" },
+    "vm:102": { "node": "node3", "state": "started" },
+    "fa:120002": { "node": "node3", "state": "started" },
+    "vm:104": { "node": "node1", "state": "started" },
+    "vm:105": { "node": "node1", "state": "started" },
+    "vm:106": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive4/README b/src/test/test-resource-affinity-strict-positive4/README
new file mode 100644
index 0000000..6e16b30
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive4/README
@@ -0,0 +1,11 @@
+Test whether a strict positive resource affinity rule among three resources
+makes the resources stay together when one of them is manually migrated to
+another node, i.e., whether the others are migrated to the same node as well.
+
+The test scenario is:
+- vm:101, vm:102, and vm:103 must be kept together
+- vm:101, vm:102, and vm:103 are all currently running on node1
+
+The expected outcome is:
+- As vm:101 is migrated to node2, vm:102 and vm:103 are migrated to node2 as
+  well, as a side effect of following the positive resource affinity rule.
diff --git a/src/test/test-resource-affinity-strict-positive4/cmdlist b/src/test/test-resource-affinity-strict-positive4/cmdlist
new file mode 100644
index 0000000..2e420cc
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive4/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ],
+    [ "service vm:101 migrate node2" ]
+]
diff --git a/src/test/test-resource-affinity-strict-positive4/hardware_status b/src/test/test-resource-affinity-strict-positive4/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive4/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive4/log.expect b/src/test/test-resource-affinity-strict-positive4/log.expect
new file mode 100644
index 0000000..0d9854d
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive4/log.expect
@@ -0,0 +1,59 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:102' on node 'node1'
+info     20    node1/crm: adding new service 'vm:103' on node 'node1'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:101
+info     21    node1/lrm: service status vm:101 started
+info     21    node1/lrm: starting service vm:102
+info     21    node1/lrm: service status vm:102 started
+info     21    node1/lrm: starting service vm:103
+info     21    node1/lrm: service status vm:103 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info    120      cmdlist: execute service vm:101 migrate node2
+info    120    node1/crm: got crm command: migrate vm:101 node2
+info    120    node1/crm: crm command 'migrate vm:101 node2' - migrate service 'vm:102' to node 'node2' (service 'vm:102' in positive affinity with service 'vm:101')
+info    120    node1/crm: crm command 'migrate vm:101 node2' - migrate service 'vm:103' to node 'node2' (service 'vm:103' in positive affinity with service 'vm:101')
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:102' to node 'node2'
+info    120    node1/crm: service 'vm:102': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:103' to node 'node2'
+info    120    node1/crm: service 'vm:103': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    121    node1/lrm: service vm:101 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:101 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:102 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:102 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:103 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:103 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:102': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:103': state changed from 'migrate' to 'started'  (node = node2)
+info    143    node2/lrm: got lock 'ha_agent_node2_lock'
+info    143    node2/lrm: status change wait_for_agent_lock => active
+info    143    node2/lrm: starting service vm:101
+info    143    node2/lrm: service status vm:101 started
+info    143    node2/lrm: starting service vm:102
+info    143    node2/lrm: service status vm:102 started
+info    143    node2/lrm: starting service vm:103
+info    143    node2/lrm: service status vm:103 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-resource-affinity-strict-positive4/manager_status b/src/test/test-resource-affinity-strict-positive4/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive4/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-resource-affinity-strict-positive4/rules_config b/src/test/test-resource-affinity-strict-positive4/rules_config
new file mode 100644
index 0000000..12da6e6
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive4/rules_config
@@ -0,0 +1,3 @@
+resource-affinity: vms-must-stick-together
+	resources vm:101,vm:102,vm:103
+	affinity positive
diff --git a/src/test/test-resource-affinity-strict-positive4/service_config b/src/test/test-resource-affinity-strict-positive4/service_config
new file mode 100644
index 0000000..57e3579
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive4/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:102": { "node": "node1", "state": "started" },
+    "vm:103": { "node": "node1", "state": "started" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive5/README b/src/test/test-resource-affinity-strict-positive5/README
new file mode 100644
index 0000000..3a9909e
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive5/README
@@ -0,0 +1,19 @@
+Test whether multiple connected positive resource affinity rules make the
+resources stay together when one of the resources is manually migrated to
+another node, i.e., whether all of them are migrated to the same node.
+
+The test scenario is:
+- vm:101, vm:102, and vm:103 must be kept together
+- vm:103, vm:104, and vm:105 must be kept together
+- vm:105, vm:106, and vm:107 must be kept together
+- vm:105, vm:108, and vm:109 must be kept together
+- So essentially, vm:101 through vm:109 must be kept together
+- vm:101 through vm:109 are all on node1
+
+The expected outcome is:
+- As vm:103 is migrated to node2, all of vm:101 through vm:109 are migrated to
+  node2 as well, as these all must be kept together
+- As vm:101 is migrated to node3, all of vm:101 through vm:109 are migrated to
+  node3 as well, as these all must be kept together
+- As vm:109 is migrated to node1, all of vm:101 through vm:109 are migrated to
+  node1 as well, as these all must be kept together
diff --git a/src/test/test-resource-affinity-strict-positive5/cmdlist b/src/test/test-resource-affinity-strict-positive5/cmdlist
new file mode 100644
index 0000000..85c33d0
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive5/cmdlist
@@ -0,0 +1,8 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ],
+    [ "service vm:103 migrate node2" ],
+    [ "delay 100" ],
+    [ "service vm:101 migrate node3" ],
+    [ "delay 100" ],
+    [ "service vm:109 migrate node1" ]
+]
diff --git a/src/test/test-resource-affinity-strict-positive5/hardware_status b/src/test/test-resource-affinity-strict-positive5/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive5/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive5/log.expect b/src/test/test-resource-affinity-strict-positive5/log.expect
new file mode 100644
index 0000000..4e91890
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive5/log.expect
@@ -0,0 +1,281 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:102' on node 'node1'
+info     20    node1/crm: adding new service 'vm:103' on node 'node1'
+info     20    node1/crm: adding new service 'vm:104' on node 'node1'
+info     20    node1/crm: adding new service 'vm:105' on node 'node1'
+info     20    node1/crm: adding new service 'vm:106' on node 'node1'
+info     20    node1/crm: adding new service 'vm:107' on node 'node1'
+info     20    node1/crm: adding new service 'vm:108' on node 'node1'
+info     20    node1/crm: adding new service 'vm:109' on node 'node1'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:104': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:105': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:106': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:107': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:108': state changed from 'request_start' to 'started'  (node = node1)
+info     20    node1/crm: service 'vm:109': state changed from 'request_start' to 'started'  (node = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:101
+info     21    node1/lrm: service status vm:101 started
+info     21    node1/lrm: starting service vm:102
+info     21    node1/lrm: service status vm:102 started
+info     21    node1/lrm: starting service vm:103
+info     21    node1/lrm: service status vm:103 started
+info     21    node1/lrm: starting service vm:104
+info     21    node1/lrm: service status vm:104 started
+info     21    node1/lrm: starting service vm:105
+info     21    node1/lrm: service status vm:105 started
+info     21    node1/lrm: starting service vm:106
+info     21    node1/lrm: service status vm:106 started
+info     21    node1/lrm: starting service vm:107
+info     21    node1/lrm: service status vm:107 started
+info     21    node1/lrm: starting service vm:108
+info     21    node1/lrm: service status vm:108 started
+info     21    node1/lrm: starting service vm:109
+info     21    node1/lrm: service status vm:109 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info    120      cmdlist: execute service vm:103 migrate node2
+info    120    node1/crm: got crm command: migrate vm:103 node2
+info    120    node1/crm: crm command 'migrate vm:103 node2' - migrate service 'vm:101' to node 'node2' (service 'vm:101' in positive affinity with service 'vm:103')
+info    120    node1/crm: crm command 'migrate vm:103 node2' - migrate service 'vm:102' to node 'node2' (service 'vm:102' in positive affinity with service 'vm:103')
+info    120    node1/crm: crm command 'migrate vm:103 node2' - migrate service 'vm:104' to node 'node2' (service 'vm:104' in positive affinity with service 'vm:103')
+info    120    node1/crm: crm command 'migrate vm:103 node2' - migrate service 'vm:105' to node 'node2' (service 'vm:105' in positive affinity with service 'vm:103')
+info    120    node1/crm: crm command 'migrate vm:103 node2' - migrate service 'vm:106' to node 'node2' (service 'vm:106' in positive affinity with service 'vm:103')
+info    120    node1/crm: crm command 'migrate vm:103 node2' - migrate service 'vm:107' to node 'node2' (service 'vm:107' in positive affinity with service 'vm:103')
+info    120    node1/crm: crm command 'migrate vm:103 node2' - migrate service 'vm:108' to node 'node2' (service 'vm:108' in positive affinity with service 'vm:103')
+info    120    node1/crm: crm command 'migrate vm:103 node2' - migrate service 'vm:109' to node 'node2' (service 'vm:109' in positive affinity with service 'vm:103')
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:102' to node 'node2'
+info    120    node1/crm: service 'vm:102': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:103' to node 'node2'
+info    120    node1/crm: service 'vm:103': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:104' to node 'node2'
+info    120    node1/crm: service 'vm:104': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:105' to node 'node2'
+info    120    node1/crm: service 'vm:105': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:106' to node 'node2'
+info    120    node1/crm: service 'vm:106': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:107' to node 'node2'
+info    120    node1/crm: service 'vm:107': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:108' to node 'node2'
+info    120    node1/crm: service 'vm:108': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    120    node1/crm: migrate service 'vm:109' to node 'node2'
+info    120    node1/crm: service 'vm:109': state changed from 'started' to 'migrate'  (node = node1, target = node2)
+info    121    node1/lrm: service vm:101 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:101 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:102 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:102 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:103 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:103 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:104 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:104 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:105 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:105 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:106 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:106 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:107 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:107 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:108 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:108 - end migrate to node 'node2'
+info    121    node1/lrm: service vm:109 - start migrate to node 'node2'
+info    121    node1/lrm: service vm:109 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:102': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:103': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:104': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:105': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:106': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:107': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:108': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: service 'vm:109': state changed from 'migrate' to 'started'  (node = node2)
+info    143    node2/lrm: got lock 'ha_agent_node2_lock'
+info    143    node2/lrm: status change wait_for_agent_lock => active
+info    143    node2/lrm: starting service vm:101
+info    143    node2/lrm: service status vm:101 started
+info    143    node2/lrm: starting service vm:102
+info    143    node2/lrm: service status vm:102 started
+info    143    node2/lrm: starting service vm:103
+info    143    node2/lrm: service status vm:103 started
+info    143    node2/lrm: starting service vm:104
+info    143    node2/lrm: service status vm:104 started
+info    143    node2/lrm: starting service vm:105
+info    143    node2/lrm: service status vm:105 started
+info    143    node2/lrm: starting service vm:106
+info    143    node2/lrm: service status vm:106 started
+info    143    node2/lrm: starting service vm:107
+info    143    node2/lrm: service status vm:107 started
+info    143    node2/lrm: starting service vm:108
+info    143    node2/lrm: service status vm:108 started
+info    143    node2/lrm: starting service vm:109
+info    143    node2/lrm: service status vm:109 started
+info    220      cmdlist: execute delay 100
+info    400      cmdlist: execute service vm:101 migrate node3
+info    400    node1/crm: got crm command: migrate vm:101 node3
+info    400    node1/crm: crm command 'migrate vm:101 node3' - migrate service 'vm:102' to node 'node3' (service 'vm:102' in positive affinity with service 'vm:101')
+info    400    node1/crm: crm command 'migrate vm:101 node3' - migrate service 'vm:103' to node 'node3' (service 'vm:103' in positive affinity with service 'vm:101')
+info    400    node1/crm: crm command 'migrate vm:101 node3' - migrate service 'vm:104' to node 'node3' (service 'vm:104' in positive affinity with service 'vm:101')
+info    400    node1/crm: crm command 'migrate vm:101 node3' - migrate service 'vm:105' to node 'node3' (service 'vm:105' in positive affinity with service 'vm:101')
+info    400    node1/crm: crm command 'migrate vm:101 node3' - migrate service 'vm:106' to node 'node3' (service 'vm:106' in positive affinity with service 'vm:101')
+info    400    node1/crm: crm command 'migrate vm:101 node3' - migrate service 'vm:107' to node 'node3' (service 'vm:107' in positive affinity with service 'vm:101')
+info    400    node1/crm: crm command 'migrate vm:101 node3' - migrate service 'vm:108' to node 'node3' (service 'vm:108' in positive affinity with service 'vm:101')
+info    400    node1/crm: crm command 'migrate vm:101 node3' - migrate service 'vm:109' to node 'node3' (service 'vm:109' in positive affinity with service 'vm:101')
+info    400    node1/crm: migrate service 'vm:101' to node 'node3'
+info    400    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    400    node1/crm: migrate service 'vm:102' to node 'node3'
+info    400    node1/crm: service 'vm:102': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    400    node1/crm: migrate service 'vm:103' to node 'node3'
+info    400    node1/crm: service 'vm:103': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    400    node1/crm: migrate service 'vm:104' to node 'node3'
+info    400    node1/crm: service 'vm:104': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    400    node1/crm: migrate service 'vm:105' to node 'node3'
+info    400    node1/crm: service 'vm:105': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    400    node1/crm: migrate service 'vm:106' to node 'node3'
+info    400    node1/crm: service 'vm:106': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    400    node1/crm: migrate service 'vm:107' to node 'node3'
+info    400    node1/crm: service 'vm:107': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    400    node1/crm: migrate service 'vm:108' to node 'node3'
+info    400    node1/crm: service 'vm:108': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    400    node1/crm: migrate service 'vm:109' to node 'node3'
+info    400    node1/crm: service 'vm:109': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    403    node2/lrm: service vm:101 - start migrate to node 'node3'
+info    403    node2/lrm: service vm:101 - end migrate to node 'node3'
+info    403    node2/lrm: service vm:102 - start migrate to node 'node3'
+info    403    node2/lrm: service vm:102 - end migrate to node 'node3'
+info    403    node2/lrm: service vm:103 - start migrate to node 'node3'
+info    403    node2/lrm: service vm:103 - end migrate to node 'node3'
+info    403    node2/lrm: service vm:104 - start migrate to node 'node3'
+info    403    node2/lrm: service vm:104 - end migrate to node 'node3'
+info    403    node2/lrm: service vm:105 - start migrate to node 'node3'
+info    403    node2/lrm: service vm:105 - end migrate to node 'node3'
+info    403    node2/lrm: service vm:106 - start migrate to node 'node3'
+info    403    node2/lrm: service vm:106 - end migrate to node 'node3'
+info    403    node2/lrm: service vm:107 - start migrate to node 'node3'
+info    403    node2/lrm: service vm:107 - end migrate to node 'node3'
+info    403    node2/lrm: service vm:108 - start migrate to node 'node3'
+info    403    node2/lrm: service vm:108 - end migrate to node 'node3'
+info    403    node2/lrm: service vm:109 - start migrate to node 'node3'
+info    403    node2/lrm: service vm:109 - end migrate to node 'node3'
+info    420    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
+info    420    node1/crm: service 'vm:102': state changed from 'migrate' to 'started'  (node = node3)
+info    420    node1/crm: service 'vm:103': state changed from 'migrate' to 'started'  (node = node3)
+info    420    node1/crm: service 'vm:104': state changed from 'migrate' to 'started'  (node = node3)
+info    420    node1/crm: service 'vm:105': state changed from 'migrate' to 'started'  (node = node3)
+info    420    node1/crm: service 'vm:106': state changed from 'migrate' to 'started'  (node = node3)
+info    420    node1/crm: service 'vm:107': state changed from 'migrate' to 'started'  (node = node3)
+info    420    node1/crm: service 'vm:108': state changed from 'migrate' to 'started'  (node = node3)
+info    420    node1/crm: service 'vm:109': state changed from 'migrate' to 'started'  (node = node3)
+info    425    node3/lrm: got lock 'ha_agent_node3_lock'
+info    425    node3/lrm: status change wait_for_agent_lock => active
+info    425    node3/lrm: starting service vm:101
+info    425    node3/lrm: service status vm:101 started
+info    425    node3/lrm: starting service vm:102
+info    425    node3/lrm: service status vm:102 started
+info    425    node3/lrm: starting service vm:103
+info    425    node3/lrm: service status vm:103 started
+info    425    node3/lrm: starting service vm:104
+info    425    node3/lrm: service status vm:104 started
+info    425    node3/lrm: starting service vm:105
+info    425    node3/lrm: service status vm:105 started
+info    425    node3/lrm: starting service vm:106
+info    425    node3/lrm: service status vm:106 started
+info    425    node3/lrm: starting service vm:107
+info    425    node3/lrm: service status vm:107 started
+info    425    node3/lrm: starting service vm:108
+info    425    node3/lrm: service status vm:108 started
+info    425    node3/lrm: starting service vm:109
+info    425    node3/lrm: service status vm:109 started
+info    500      cmdlist: execute delay 100
+info    680      cmdlist: execute service vm:109 migrate node1
+info    680    node1/crm: got crm command: migrate vm:109 node1
+info    680    node1/crm: crm command 'migrate vm:109 node1' - migrate service 'vm:101' to node 'node1' (service 'vm:101' in positive affinity with service 'vm:109')
+info    680    node1/crm: crm command 'migrate vm:109 node1' - migrate service 'vm:102' to node 'node1' (service 'vm:102' in positive affinity with service 'vm:109')
+info    680    node1/crm: crm command 'migrate vm:109 node1' - migrate service 'vm:103' to node 'node1' (service 'vm:103' in positive affinity with service 'vm:109')
+info    680    node1/crm: crm command 'migrate vm:109 node1' - migrate service 'vm:104' to node 'node1' (service 'vm:104' in positive affinity with service 'vm:109')
+info    680    node1/crm: crm command 'migrate vm:109 node1' - migrate service 'vm:105' to node 'node1' (service 'vm:105' in positive affinity with service 'vm:109')
+info    680    node1/crm: crm command 'migrate vm:109 node1' - migrate service 'vm:106' to node 'node1' (service 'vm:106' in positive affinity with service 'vm:109')
+info    680    node1/crm: crm command 'migrate vm:109 node1' - migrate service 'vm:107' to node 'node1' (service 'vm:107' in positive affinity with service 'vm:109')
+info    680    node1/crm: crm command 'migrate vm:109 node1' - migrate service 'vm:108' to node 'node1' (service 'vm:108' in positive affinity with service 'vm:109')
+info    680    node1/crm: migrate service 'vm:101' to node 'node1'
+info    680    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info    680    node1/crm: migrate service 'vm:102' to node 'node1'
+info    680    node1/crm: service 'vm:102': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info    680    node1/crm: migrate service 'vm:103' to node 'node1'
+info    680    node1/crm: service 'vm:103': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info    680    node1/crm: migrate service 'vm:104' to node 'node1'
+info    680    node1/crm: service 'vm:104': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info    680    node1/crm: migrate service 'vm:105' to node 'node1'
+info    680    node1/crm: service 'vm:105': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info    680    node1/crm: migrate service 'vm:106' to node 'node1'
+info    680    node1/crm: service 'vm:106': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info    680    node1/crm: migrate service 'vm:107' to node 'node1'
+info    680    node1/crm: service 'vm:107': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info    680    node1/crm: migrate service 'vm:108' to node 'node1'
+info    680    node1/crm: service 'vm:108': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info    680    node1/crm: migrate service 'vm:109' to node 'node1'
+info    680    node1/crm: service 'vm:109': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info    685    node3/lrm: service vm:101 - start migrate to node 'node1'
+info    685    node3/lrm: service vm:101 - end migrate to node 'node1'
+info    685    node3/lrm: service vm:102 - start migrate to node 'node1'
+info    685    node3/lrm: service vm:102 - end migrate to node 'node1'
+info    685    node3/lrm: service vm:103 - start migrate to node 'node1'
+info    685    node3/lrm: service vm:103 - end migrate to node 'node1'
+info    685    node3/lrm: service vm:104 - start migrate to node 'node1'
+info    685    node3/lrm: service vm:104 - end migrate to node 'node1'
+info    685    node3/lrm: service vm:105 - start migrate to node 'node1'
+info    685    node3/lrm: service vm:105 - end migrate to node 'node1'
+info    685    node3/lrm: service vm:106 - start migrate to node 'node1'
+info    685    node3/lrm: service vm:106 - end migrate to node 'node1'
+info    685    node3/lrm: service vm:107 - start migrate to node 'node1'
+info    685    node3/lrm: service vm:107 - end migrate to node 'node1'
+info    685    node3/lrm: service vm:108 - start migrate to node 'node1'
+info    685    node3/lrm: service vm:108 - end migrate to node 'node1'
+info    685    node3/lrm: service vm:109 - start migrate to node 'node1'
+info    685    node3/lrm: service vm:109 - end migrate to node 'node1'
+info    700    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node1)
+info    700    node1/crm: service 'vm:102': state changed from 'migrate' to 'started'  (node = node1)
+info    700    node1/crm: service 'vm:103': state changed from 'migrate' to 'started'  (node = node1)
+info    700    node1/crm: service 'vm:104': state changed from 'migrate' to 'started'  (node = node1)
+info    700    node1/crm: service 'vm:105': state changed from 'migrate' to 'started'  (node = node1)
+info    700    node1/crm: service 'vm:106': state changed from 'migrate' to 'started'  (node = node1)
+info    700    node1/crm: service 'vm:107': state changed from 'migrate' to 'started'  (node = node1)
+info    700    node1/crm: service 'vm:108': state changed from 'migrate' to 'started'  (node = node1)
+info    700    node1/crm: service 'vm:109': state changed from 'migrate' to 'started'  (node = node1)
+info    701    node1/lrm: starting service vm:101
+info    701    node1/lrm: service status vm:101 started
+info    701    node1/lrm: starting service vm:102
+info    701    node1/lrm: service status vm:102 started
+info    701    node1/lrm: starting service vm:103
+info    701    node1/lrm: service status vm:103 started
+info    701    node1/lrm: starting service vm:104
+info    701    node1/lrm: service status vm:104 started
+info    701    node1/lrm: starting service vm:105
+info    701    node1/lrm: service status vm:105 started
+info    701    node1/lrm: starting service vm:106
+info    701    node1/lrm: service status vm:106 started
+info    701    node1/lrm: starting service vm:107
+info    701    node1/lrm: service status vm:107 started
+info    701    node1/lrm: starting service vm:108
+info    701    node1/lrm: service status vm:108 started
+info    701    node1/lrm: starting service vm:109
+info    701    node1/lrm: service status vm:109 started
+info   1280     hardware: exit simulation - done
diff --git a/src/test/test-resource-affinity-strict-positive5/manager_status b/src/test/test-resource-affinity-strict-positive5/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive5/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-resource-affinity-strict-positive5/rules_config b/src/test/test-resource-affinity-strict-positive5/rules_config
new file mode 100644
index 0000000..b070af3
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive5/rules_config
@@ -0,0 +1,15 @@
+resource-affinity: vms-must-stick-together1
+	resources vm:101,vm:102,vm:103
+	affinity positive
+
+resource-affinity: vms-must-stick-together2
+	resources vm:103,vm:104,vm:105
+	affinity positive
+
+resource-affinity: vms-must-stick-together3
+	resources vm:105,vm:106,vm:107
+	affinity positive
+
+resource-affinity: vms-must-stick-together4
+	resources vm:105,vm:108,vm:109
+	affinity positive
diff --git a/src/test/test-resource-affinity-strict-positive5/service_config b/src/test/test-resource-affinity-strict-positive5/service_config
new file mode 100644
index 0000000..48db7b1
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive5/service_config
@@ -0,0 +1,11 @@
+{
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:102": { "node": "node1", "state": "started" },
+    "vm:103": { "node": "node1", "state": "started" },
+    "vm:104": { "node": "node1", "state": "started" },
+    "vm:105": { "node": "node1", "state": "started" },
+    "vm:106": { "node": "node1", "state": "started" },
+    "vm:107": { "node": "node1", "state": "started" },
+    "vm:108": { "node": "node1", "state": "started" },
+    "vm:109": { "node": "node1", "state": "started" }
+}
-- 
2.39.5


