From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH ha-manager v4 08/19] test: ha tester: add test cases for future node affinity rules
Date: Tue, 29 Jul 2025 20:00:48 +0200	[thread overview]
Message-ID: <20250729180107.428855-9-d.kral@proxmox.com>
In-Reply-To: <20250729180107.428855-1-d.kral@proxmox.com>

Add test cases to verify that the node affinity rules, which will be
added in a following patch, are functionally equivalent to the
existing HA groups.

These test cases verify the following scenarios for (a) unrestricted and
(b) restricted groups (i.e. non-strict and strict node affinity rules):

1. If a service is manually migrated to a non-member node and failback
   is enabled, then (a)(b) migrate the service back to a member node.
2. If a service is manually migrated to a non-member node and failback
   is disabled, then (a) do nothing, or (b) migrate the service back to
   a member node.
3. If a service's node fails, where the failed node is the only
   available group member left, (a) migrate the service to a non-member
   node, or (b) stay in recovery.
4. If a service's node fails, but there is another available group
   member left, (a)(b) migrate the service to the other member node.
5. If a service's group has failback enabled and the service's node,
   which is the node with the highest priority in the group, fails and
   comes back later, (a)(b) migrate it to the second-highest prioritized
   node and automatically migrate it back to the highest priority node
   as soon as it is available again.
6. If a service's group has failback disabled and the service's node,
   which is the node with the highest priority in the group, fails and
   comes back later, (a)(b) migrate it to the second-highest prioritized
   node, but do not migrate it back to the highest priority node if it
   becomes available again.
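
As an illustration, the (a) cases above use unrestricted groups and the
(b) cases restricted groups like the following two, excerpted from the
groups fixtures added by this patch (the non-strict/strict node affinity
rules that are meant to replace them are only introduced in later patches
of this series, so their syntax is not shown here):

  group: should_stay_here
      nodes node2:2,node3:1
      nofailback 1

  group: must_stay_here
      nodes node3
      restricted 1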

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
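
Note for reviewers: each test-node-affinity-* directory follows the layout
of the existing HA simulator test cases and should be picked up by the test
harness in src/test like any other test-* directory (no harness changes are
part of this patch). Roughly:

  test-node-affinity-<nonstrict|strict><1-6>/
    README           -- scenario description and expected outcome
    cmdlist          -- rounds of simulator commands executed in order
    groups           -- HA group definition(s) under test
    hardware_status  -- initial power/network state of the three nodes
    manager_status   -- initial manager state (empty)
    service_config   -- initial service placement, state and group membership
    log.expect       -- expected simulator log for the whole run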
 src/test/test-node-affinity-nonstrict1/README | 10 +++
 .../test-node-affinity-nonstrict1/cmdlist     |  4 +
 src/test/test-node-affinity-nonstrict1/groups |  2 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-nonstrict1/log.expect  | 40 ++++++++++
 .../manager_status                            |  1 +
 .../service_config                            |  3 +
 src/test/test-node-affinity-nonstrict2/README | 12 +++
 .../test-node-affinity-nonstrict2/cmdlist     |  4 +
 src/test/test-node-affinity-nonstrict2/groups |  3 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-nonstrict2/log.expect  | 35 +++++++++
 .../manager_status                            |  1 +
 .../service_config                            |  3 +
 src/test/test-node-affinity-nonstrict3/README | 10 +++
 .../test-node-affinity-nonstrict3/cmdlist     |  4 +
 src/test/test-node-affinity-nonstrict3/groups |  2 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-nonstrict3/log.expect  | 56 ++++++++++++++
 .../manager_status                            |  1 +
 .../service_config                            |  5 ++
 src/test/test-node-affinity-nonstrict4/README | 14 ++++
 .../test-node-affinity-nonstrict4/cmdlist     |  4 +
 src/test/test-node-affinity-nonstrict4/groups |  2 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-nonstrict4/log.expect  | 54 ++++++++++++++
 .../manager_status                            |  1 +
 .../service_config                            |  5 ++
 src/test/test-node-affinity-nonstrict5/README | 16 ++++
 .../test-node-affinity-nonstrict5/cmdlist     |  5 ++
 src/test/test-node-affinity-nonstrict5/groups |  2 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-nonstrict5/log.expect  | 66 +++++++++++++++++
 .../manager_status                            |  1 +
 .../service_config                            |  3 +
 src/test/test-node-affinity-nonstrict6/README | 14 ++++
 .../test-node-affinity-nonstrict6/cmdlist     |  5 ++
 src/test/test-node-affinity-nonstrict6/groups |  3 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-nonstrict6/log.expect  | 52 +++++++++++++
 .../manager_status                            |  1 +
 .../service_config                            |  3 +
 src/test/test-node-affinity-strict1/README    | 10 +++
 src/test/test-node-affinity-strict1/cmdlist   |  4 +
 src/test/test-node-affinity-strict1/groups    |  3 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-strict1/log.expect     | 40 ++++++++++
 .../test-node-affinity-strict1/manager_status |  1 +
 .../test-node-affinity-strict1/service_config |  3 +
 src/test/test-node-affinity-strict2/README    | 11 +++
 src/test/test-node-affinity-strict2/cmdlist   |  4 +
 src/test/test-node-affinity-strict2/groups    |  4 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-strict2/log.expect     | 40 ++++++++++
 .../test-node-affinity-strict2/manager_status |  1 +
 .../test-node-affinity-strict2/service_config |  3 +
 src/test/test-node-affinity-strict3/README    | 10 +++
 src/test/test-node-affinity-strict3/cmdlist   |  4 +
 src/test/test-node-affinity-strict3/groups    |  3 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-strict3/log.expect     | 74 +++++++++++++++++++
 .../test-node-affinity-strict3/manager_status |  1 +
 .../test-node-affinity-strict3/service_config |  5 ++
 src/test/test-node-affinity-strict4/README    | 14 ++++
 src/test/test-node-affinity-strict4/cmdlist   |  4 +
 src/test/test-node-affinity-strict4/groups    |  3 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-strict4/log.expect     | 54 ++++++++++++++
 .../test-node-affinity-strict4/manager_status |  1 +
 .../test-node-affinity-strict4/service_config |  5 ++
 src/test/test-node-affinity-strict5/README    | 16 ++++
 src/test/test-node-affinity-strict5/cmdlist   |  5 ++
 src/test/test-node-affinity-strict5/groups    |  3 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-strict5/log.expect     | 66 +++++++++++++++++
 .../test-node-affinity-strict5/manager_status |  1 +
 .../test-node-affinity-strict5/service_config |  3 +
 src/test/test-node-affinity-strict6/README    | 14 ++++
 src/test/test-node-affinity-strict6/cmdlist   |  5 ++
 src/test/test-node-affinity-strict6/groups    |  4 +
 .../hardware_status                           |  5 ++
 .../test-node-affinity-strict6/log.expect     | 52 +++++++++++++
 .../test-node-affinity-strict6/manager_status |  1 +
 .../test-node-affinity-strict6/service_config |  3 +
 84 files changed, 982 insertions(+)
 create mode 100644 src/test/test-node-affinity-nonstrict1/README
 create mode 100644 src/test/test-node-affinity-nonstrict1/cmdlist
 create mode 100644 src/test/test-node-affinity-nonstrict1/groups
 create mode 100644 src/test/test-node-affinity-nonstrict1/hardware_status
 create mode 100644 src/test/test-node-affinity-nonstrict1/log.expect
 create mode 100644 src/test/test-node-affinity-nonstrict1/manager_status
 create mode 100644 src/test/test-node-affinity-nonstrict1/service_config
 create mode 100644 src/test/test-node-affinity-nonstrict2/README
 create mode 100644 src/test/test-node-affinity-nonstrict2/cmdlist
 create mode 100644 src/test/test-node-affinity-nonstrict2/groups
 create mode 100644 src/test/test-node-affinity-nonstrict2/hardware_status
 create mode 100644 src/test/test-node-affinity-nonstrict2/log.expect
 create mode 100644 src/test/test-node-affinity-nonstrict2/manager_status
 create mode 100644 src/test/test-node-affinity-nonstrict2/service_config
 create mode 100644 src/test/test-node-affinity-nonstrict3/README
 create mode 100644 src/test/test-node-affinity-nonstrict3/cmdlist
 create mode 100644 src/test/test-node-affinity-nonstrict3/groups
 create mode 100644 src/test/test-node-affinity-nonstrict3/hardware_status
 create mode 100644 src/test/test-node-affinity-nonstrict3/log.expect
 create mode 100644 src/test/test-node-affinity-nonstrict3/manager_status
 create mode 100644 src/test/test-node-affinity-nonstrict3/service_config
 create mode 100644 src/test/test-node-affinity-nonstrict4/README
 create mode 100644 src/test/test-node-affinity-nonstrict4/cmdlist
 create mode 100644 src/test/test-node-affinity-nonstrict4/groups
 create mode 100644 src/test/test-node-affinity-nonstrict4/hardware_status
 create mode 100644 src/test/test-node-affinity-nonstrict4/log.expect
 create mode 100644 src/test/test-node-affinity-nonstrict4/manager_status
 create mode 100644 src/test/test-node-affinity-nonstrict4/service_config
 create mode 100644 src/test/test-node-affinity-nonstrict5/README
 create mode 100644 src/test/test-node-affinity-nonstrict5/cmdlist
 create mode 100644 src/test/test-node-affinity-nonstrict5/groups
 create mode 100644 src/test/test-node-affinity-nonstrict5/hardware_status
 create mode 100644 src/test/test-node-affinity-nonstrict5/log.expect
 create mode 100644 src/test/test-node-affinity-nonstrict5/manager_status
 create mode 100644 src/test/test-node-affinity-nonstrict5/service_config
 create mode 100644 src/test/test-node-affinity-nonstrict6/README
 create mode 100644 src/test/test-node-affinity-nonstrict6/cmdlist
 create mode 100644 src/test/test-node-affinity-nonstrict6/groups
 create mode 100644 src/test/test-node-affinity-nonstrict6/hardware_status
 create mode 100644 src/test/test-node-affinity-nonstrict6/log.expect
 create mode 100644 src/test/test-node-affinity-nonstrict6/manager_status
 create mode 100644 src/test/test-node-affinity-nonstrict6/service_config
 create mode 100644 src/test/test-node-affinity-strict1/README
 create mode 100644 src/test/test-node-affinity-strict1/cmdlist
 create mode 100644 src/test/test-node-affinity-strict1/groups
 create mode 100644 src/test/test-node-affinity-strict1/hardware_status
 create mode 100644 src/test/test-node-affinity-strict1/log.expect
 create mode 100644 src/test/test-node-affinity-strict1/manager_status
 create mode 100644 src/test/test-node-affinity-strict1/service_config
 create mode 100644 src/test/test-node-affinity-strict2/README
 create mode 100644 src/test/test-node-affinity-strict2/cmdlist
 create mode 100644 src/test/test-node-affinity-strict2/groups
 create mode 100644 src/test/test-node-affinity-strict2/hardware_status
 create mode 100644 src/test/test-node-affinity-strict2/log.expect
 create mode 100644 src/test/test-node-affinity-strict2/manager_status
 create mode 100644 src/test/test-node-affinity-strict2/service_config
 create mode 100644 src/test/test-node-affinity-strict3/README
 create mode 100644 src/test/test-node-affinity-strict3/cmdlist
 create mode 100644 src/test/test-node-affinity-strict3/groups
 create mode 100644 src/test/test-node-affinity-strict3/hardware_status
 create mode 100644 src/test/test-node-affinity-strict3/log.expect
 create mode 100644 src/test/test-node-affinity-strict3/manager_status
 create mode 100644 src/test/test-node-affinity-strict3/service_config
 create mode 100644 src/test/test-node-affinity-strict4/README
 create mode 100644 src/test/test-node-affinity-strict4/cmdlist
 create mode 100644 src/test/test-node-affinity-strict4/groups
 create mode 100644 src/test/test-node-affinity-strict4/hardware_status
 create mode 100644 src/test/test-node-affinity-strict4/log.expect
 create mode 100644 src/test/test-node-affinity-strict4/manager_status
 create mode 100644 src/test/test-node-affinity-strict4/service_config
 create mode 100644 src/test/test-node-affinity-strict5/README
 create mode 100644 src/test/test-node-affinity-strict5/cmdlist
 create mode 100644 src/test/test-node-affinity-strict5/groups
 create mode 100644 src/test/test-node-affinity-strict5/hardware_status
 create mode 100644 src/test/test-node-affinity-strict5/log.expect
 create mode 100644 src/test/test-node-affinity-strict5/manager_status
 create mode 100644 src/test/test-node-affinity-strict5/service_config
 create mode 100644 src/test/test-node-affinity-strict6/README
 create mode 100644 src/test/test-node-affinity-strict6/cmdlist
 create mode 100644 src/test/test-node-affinity-strict6/groups
 create mode 100644 src/test/test-node-affinity-strict6/hardware_status
 create mode 100644 src/test/test-node-affinity-strict6/log.expect
 create mode 100644 src/test/test-node-affinity-strict6/manager_status
 create mode 100644 src/test/test-node-affinity-strict6/service_config

diff --git a/src/test/test-node-affinity-nonstrict1/README b/src/test/test-node-affinity-nonstrict1/README
new file mode 100644
index 00000000..8775b6ca
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict1/README
@@ -0,0 +1,10 @@
+Test whether a service in an unrestricted group will automatically migrate back
+to a node member in case of a manual migration to a non-member node.
+
+The test scenario is:
+- vm:101 should be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As vm:101 is manually migrated to node2, it is migrated back to node3, as
+  node3 is a group member and has higher priority than the other nodes
diff --git a/src/test/test-node-affinity-nonstrict1/cmdlist b/src/test/test-node-affinity-nonstrict1/cmdlist
new file mode 100644
index 00000000..a63e4fdf
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict1/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node2" ]
+]
diff --git a/src/test/test-node-affinity-nonstrict1/groups b/src/test/test-node-affinity-nonstrict1/groups
new file mode 100644
index 00000000..50c9a2d7
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict1/groups
@@ -0,0 +1,2 @@
+group: should_stay_here
+	nodes node3
diff --git a/src/test/test-node-affinity-nonstrict1/hardware_status b/src/test/test-node-affinity-nonstrict1/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-nonstrict1/log.expect b/src/test/test-node-affinity-nonstrict1/log.expect
new file mode 100644
index 00000000..d86c69de
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict1/log.expect
@@ -0,0 +1,40 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute service vm:101 migrate node2
+info    120    node1/crm: got crm command: migrate vm:101 node2
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    123    node2/lrm: got lock 'ha_agent_node2_lock'
+info    123    node2/lrm: status change wait_for_agent_lock => active
+info    125    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    125    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info    140    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    143    node2/lrm: service vm:101 - start migrate to node 'node3'
+info    143    node2/lrm: service vm:101 - end migrate to node 'node3'
+info    160    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
+info    165    node3/lrm: starting service vm:101
+info    165    node3/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict1/manager_status b/src/test/test-node-affinity-nonstrict1/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict1/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-nonstrict1/service_config b/src/test/test-node-affinity-nonstrict1/service_config
new file mode 100644
index 00000000..5f558431
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict1/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" }
+}
diff --git a/src/test/test-node-affinity-nonstrict2/README b/src/test/test-node-affinity-nonstrict2/README
new file mode 100644
index 00000000..f27414b1
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict2/README
@@ -0,0 +1,12 @@
+Test whether a service in an unrestricted group with nofailback enabled will
+stay on the manual migration target node, even though the target node is not a
+member of the unrestricted group.
+
+The test scenario is:
+- vm:101 should be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As vm:101 is manually migrated to node2, vm:101 stays on node2; even though
+  node2 is not a group member, the nofailback flag prevents vm:101 from being
+  migrated back to a group member
diff --git a/src/test/test-node-affinity-nonstrict2/cmdlist b/src/test/test-node-affinity-nonstrict2/cmdlist
new file mode 100644
index 00000000..a63e4fdf
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict2/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node2" ]
+]
diff --git a/src/test/test-node-affinity-nonstrict2/groups b/src/test/test-node-affinity-nonstrict2/groups
new file mode 100644
index 00000000..59192fad
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict2/groups
@@ -0,0 +1,3 @@
+group: should_stay_here
+	nodes node3
+	nofailback 1
diff --git a/src/test/test-node-affinity-nonstrict2/hardware_status b/src/test/test-node-affinity-nonstrict2/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict2/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-nonstrict2/log.expect b/src/test/test-node-affinity-nonstrict2/log.expect
new file mode 100644
index 00000000..c574097d
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict2/log.expect
@@ -0,0 +1,35 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute service vm:101 migrate node2
+info    120    node1/crm: got crm command: migrate vm:101 node2
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    123    node2/lrm: got lock 'ha_agent_node2_lock'
+info    123    node2/lrm: status change wait_for_agent_lock => active
+info    125    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    125    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    143    node2/lrm: starting service vm:101
+info    143    node2/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict2/manager_status b/src/test/test-node-affinity-nonstrict2/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict2/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-nonstrict2/service_config b/src/test/test-node-affinity-nonstrict2/service_config
new file mode 100644
index 00000000..5f558431
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict2/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" }
+}
diff --git a/src/test/test-node-affinity-nonstrict3/README b/src/test/test-node-affinity-nonstrict3/README
new file mode 100644
index 00000000..c4ddfab8
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict3/README
@@ -0,0 +1,10 @@
+Test whether a service in an unrestricted group with only one node member will
+be migrated to a non-member node in case of a failover of its previously
+assigned node.
+
+The test scenario is:
+- vm:101 should be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As node3 fails, vm:101 is migrated to node1
diff --git a/src/test/test-node-affinity-nonstrict3/cmdlist b/src/test/test-node-affinity-nonstrict3/cmdlist
new file mode 100644
index 00000000..eee0e40e
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict3/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-node-affinity-nonstrict3/groups b/src/test/test-node-affinity-nonstrict3/groups
new file mode 100644
index 00000000..50c9a2d7
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict3/groups
@@ -0,0 +1,2 @@
+group: should_stay_here
+	nodes node3
diff --git a/src/test/test-node-affinity-nonstrict3/hardware_status b/src/test/test-node-affinity-nonstrict3/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict3/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-nonstrict3/log.expect b/src/test/test-node-affinity-nonstrict3/log.expect
new file mode 100644
index 00000000..752300bc
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict3/log.expect
@@ -0,0 +1,56 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     23    node2/lrm: starting service vm:103
+info     23    node2/lrm: service status vm:103 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node3' to node 'node1'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node1)
+info    241    node1/lrm: got lock 'ha_agent_node1_lock'
+info    241    node1/lrm: status change wait_for_agent_lock => active
+info    241    node1/lrm: starting service vm:101
+info    241    node1/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict3/manager_status b/src/test/test-node-affinity-nonstrict3/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict3/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-node-affinity-nonstrict3/service_config b/src/test/test-node-affinity-nonstrict3/service_config
new file mode 100644
index 00000000..777b2a7e
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict3/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-node-affinity-nonstrict4/README b/src/test/test-node-affinity-nonstrict4/README
new file mode 100644
index 00000000..a08f0e1d
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict4/README
@@ -0,0 +1,14 @@
+Test whether a service in an unrestricted group with two node members will stay
+assigned to one of the node members in case of a failover of its previously
+assigned node.
+
+The test scenario is:
+- vm:101 should be kept on node2 or node3
+- vm:101 is currently running on node3
+- node2 has a higher service count than node1 to test whether the restriction
+  to node2 and node3 is applied even though the scheduler would prefer the less
+  utilized node1
+
+The expected outcome is:
+- As node3 fails, vm:101 is migrated to node2, as it's the only available node
+  left in the unrestricted group
diff --git a/src/test/test-node-affinity-nonstrict4/cmdlist b/src/test/test-node-affinity-nonstrict4/cmdlist
new file mode 100644
index 00000000..eee0e40e
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict4/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-node-affinity-nonstrict4/groups b/src/test/test-node-affinity-nonstrict4/groups
new file mode 100644
index 00000000..b1584b55
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict4/groups
@@ -0,0 +1,2 @@
+group: should_stay_here
+	nodes node2,node3
diff --git a/src/test/test-node-affinity-nonstrict4/hardware_status b/src/test/test-node-affinity-nonstrict4/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict4/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-nonstrict4/log.expect b/src/test/test-node-affinity-nonstrict4/log.expect
new file mode 100644
index 00000000..847e157c
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict4/log.expect
@@ -0,0 +1,54 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     23    node2/lrm: starting service vm:103
+info     23    node2/lrm: service status vm:103 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node2)
+info    243    node2/lrm: starting service vm:101
+info    243    node2/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict4/manager_status b/src/test/test-node-affinity-nonstrict4/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict4/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-nonstrict4/service_config b/src/test/test-node-affinity-nonstrict4/service_config
new file mode 100644
index 00000000..777b2a7e
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict4/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-node-affinity-nonstrict5/README b/src/test/test-node-affinity-nonstrict5/README
new file mode 100644
index 00000000..0c370446
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict5/README
@@ -0,0 +1,16 @@
+Test whether a service in an unrestricted group with two differently prioritized
+node members will stay on the node with the highest priority in case of a
+failover or when the service is on a lower-priority node.
+
+The test scenario is:
+- vm:101 should be kept on node2 or node3
+- vm:101 is currently running on node3
+- node2 has a higher priority than node3
+
+The expected outcome is:
+- As vm:101 runs on node3, it is automatically migrated to node2, as node2 has
+  a higher priority than node3
+- As node2 fails, vm:101 is migrated to node3 as node3 is the next and only
+  available node member left in the unrestricted group
+- As node2 comes back online, vm:101 is migrated back to node2, as node2 has a
+  higher priority than node3
diff --git a/src/test/test-node-affinity-nonstrict5/cmdlist b/src/test/test-node-affinity-nonstrict5/cmdlist
new file mode 100644
index 00000000..6932aa78
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict5/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node2 off" ],
+    [ "power node2 on", "network node2 on" ]
+]
diff --git a/src/test/test-node-affinity-nonstrict5/groups b/src/test/test-node-affinity-nonstrict5/groups
new file mode 100644
index 00000000..03a0ee9b
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict5/groups
@@ -0,0 +1,2 @@
+group: should_stay_here
+	nodes node2:2,node3:1
diff --git a/src/test/test-node-affinity-nonstrict5/hardware_status b/src/test/test-node-affinity-nonstrict5/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict5/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-nonstrict5/log.expect b/src/test/test-node-affinity-nonstrict5/log.expect
new file mode 100644
index 00000000..ca6e4e4f
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict5/log.expect
@@ -0,0 +1,66 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info     20    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: service vm:101 - start migrate to node 'node2'
+info     25    node3/lrm: service vm:101 - end migrate to node 'node2'
+info     40    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info     43    node2/lrm: starting service vm:101
+info     43    node2/lrm: service status vm:101 started
+info    120      cmdlist: execute network node2 off
+info    120    node1/crm: node 'node2': state changed from 'online' => 'unknown'
+info    122    node2/crm: status change slave => wait_for_quorum
+info    123    node2/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node2': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node2'
+info    164     watchdog: execute power node2 off
+info    163    node2/crm: killed by poweroff
+info    164    node2/lrm: killed by poweroff
+info    164     hardware: server 'node2' stopped by poweroff (watchdog)
+info    220      cmdlist: execute power node2 on
+info    220    node2/crm: status change startup => wait_for_quorum
+info    220    node2/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute network node2 on
+info    240    node1/crm: got lock 'ha_agent_node2_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: node 'node2': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node2' to node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node3)
+info    242    node2/crm: status change wait_for_quorum => slave
+info    245    node3/lrm: starting service vm:101
+info    245    node3/lrm: service status vm:101 started
+info    260    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info    260    node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info    260    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    265    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    265    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    280    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    363    node2/lrm: got lock 'ha_agent_node2_lock'
+info    363    node2/lrm: status change wait_for_agent_lock => active
+info    363    node2/lrm: starting service vm:101
+info    363    node2/lrm: service status vm:101 started
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict5/manager_status b/src/test/test-node-affinity-nonstrict5/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict5/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-nonstrict5/service_config b/src/test/test-node-affinity-nonstrict5/service_config
new file mode 100644
index 00000000..5f558431
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict5/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" }
+}
diff --git a/src/test/test-node-affinity-nonstrict6/README b/src/test/test-node-affinity-nonstrict6/README
new file mode 100644
index 00000000..4ab12756
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict6/README
@@ -0,0 +1,14 @@
+Test whether a service in an unrestricted group with nofailback enabled and two
+differently prioritized node members will stay on the current node without
+migrating back to the highest priority node.
+
+The test scenario is:
+- vm:101 should be kept on node2 or node3
+- vm:101 is currently running on node2
+- node2 has a higher priority than node3
+
+The expected outcome is:
+- As node2 fails, vm:101 is migrated to node3 as it is the only available node
+  member left in the unrestricted group
+- As node2 comes back online, vm:101 stays on node3; even though node2 has a
+  higher priority, the nofailback flag prevents vm:101 from migrating back to node2
diff --git a/src/test/test-node-affinity-nonstrict6/cmdlist b/src/test/test-node-affinity-nonstrict6/cmdlist
new file mode 100644
index 00000000..4dd33cc4
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict6/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node2 off"],
+    [ "power node2 on", "network node2 on" ]
+]
diff --git a/src/test/test-node-affinity-nonstrict6/groups b/src/test/test-node-affinity-nonstrict6/groups
new file mode 100644
index 00000000..a7aed178
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict6/groups
@@ -0,0 +1,3 @@
+group: should_stay_here
+	nodes node2:2,node3:1
+	nofailback 1
diff --git a/src/test/test-node-affinity-nonstrict6/hardware_status b/src/test/test-node-affinity-nonstrict6/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict6/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-nonstrict6/log.expect b/src/test/test-node-affinity-nonstrict6/log.expect
new file mode 100644
index 00000000..bcb472ba
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict6/log.expect
@@ -0,0 +1,52 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:101
+info     23    node2/lrm: service status vm:101 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info    120      cmdlist: execute network node2 off
+info    120    node1/crm: node 'node2': state changed from 'online' => 'unknown'
+info    122    node2/crm: status change slave => wait_for_quorum
+info    123    node2/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node2': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node2'
+info    164     watchdog: execute power node2 off
+info    163    node2/crm: killed by poweroff
+info    164    node2/lrm: killed by poweroff
+info    164     hardware: server 'node2' stopped by poweroff (watchdog)
+info    220      cmdlist: execute power node2 on
+info    220    node2/crm: status change startup => wait_for_quorum
+info    220    node2/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute network node2 on
+info    240    node1/crm: got lock 'ha_agent_node2_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: node 'node2': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node2' to node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node3)
+info    242    node2/crm: status change wait_for_quorum => slave
+info    245    node3/lrm: got lock 'ha_agent_node3_lock'
+info    245    node3/lrm: status change wait_for_agent_lock => active
+info    245    node3/lrm: starting service vm:101
+info    245    node3/lrm: service status vm:101 started
+info    260    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict6/manager_status b/src/test/test-node-affinity-nonstrict6/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict6/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-nonstrict6/service_config b/src/test/test-node-affinity-nonstrict6/service_config
new file mode 100644
index 00000000..c4ece62c
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict6/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node2", "state": "started", "group": "should_stay_here" }
+}
diff --git a/src/test/test-node-affinity-strict1/README b/src/test/test-node-affinity-strict1/README
new file mode 100644
index 00000000..c717d589
--- /dev/null
+++ b/src/test/test-node-affinity-strict1/README
@@ -0,0 +1,10 @@
+Test whether a service in a restricted group will automatically migrate back to
+a restricted node member in case of a manual migration to a non-member node.
+
+The test scenario is:
+- vm:101 must be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As vm:101 is manually migrated to node2, it is migrated back to node3, as
+  node3 is the only available node member left in the restricted group
diff --git a/src/test/test-node-affinity-strict1/cmdlist b/src/test/test-node-affinity-strict1/cmdlist
new file mode 100644
index 00000000..a63e4fdf
--- /dev/null
+++ b/src/test/test-node-affinity-strict1/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node2" ]
+]
diff --git a/src/test/test-node-affinity-strict1/groups b/src/test/test-node-affinity-strict1/groups
new file mode 100644
index 00000000..370865f6
--- /dev/null
+++ b/src/test/test-node-affinity-strict1/groups
@@ -0,0 +1,3 @@
+group: must_stay_here
+	nodes node3
+	restricted 1
diff --git a/src/test/test-node-affinity-strict1/hardware_status b/src/test/test-node-affinity-strict1/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-strict1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-strict1/log.expect b/src/test/test-node-affinity-strict1/log.expect
new file mode 100644
index 00000000..d86c69de
--- /dev/null
+++ b/src/test/test-node-affinity-strict1/log.expect
@@ -0,0 +1,40 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute service vm:101 migrate node2
+info    120    node1/crm: got crm command: migrate vm:101 node2
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    123    node2/lrm: got lock 'ha_agent_node2_lock'
+info    123    node2/lrm: status change wait_for_agent_lock => active
+info    125    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    125    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info    140    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    143    node2/lrm: service vm:101 - start migrate to node 'node3'
+info    143    node2/lrm: service vm:101 - end migrate to node 'node3'
+info    160    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
+info    165    node3/lrm: starting service vm:101
+info    165    node3/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict1/manager_status b/src/test/test-node-affinity-strict1/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-strict1/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-strict1/service_config b/src/test/test-node-affinity-strict1/service_config
new file mode 100644
index 00000000..36ea15b1
--- /dev/null
+++ b/src/test/test-node-affinity-strict1/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" }
+}
diff --git a/src/test/test-node-affinity-strict2/README b/src/test/test-node-affinity-strict2/README
new file mode 100644
index 00000000..f4d06a14
--- /dev/null
+++ b/src/test/test-node-affinity-strict2/README
@@ -0,0 +1,11 @@
+Test whether a service in a restricted group with nofailback enabled will
+automatically migrate back to a restricted node member in case of a manual
+migration to a non-member node.
+
+The test scenario is:
+- vm:101 must be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As vm:101 is manually migrated to node2, it is migrated back to node3, as
+  node3 is the only available node member left in the restricted group
diff --git a/src/test/test-node-affinity-strict2/cmdlist b/src/test/test-node-affinity-strict2/cmdlist
new file mode 100644
index 00000000..a63e4fdf
--- /dev/null
+++ b/src/test/test-node-affinity-strict2/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node2" ]
+]
diff --git a/src/test/test-node-affinity-strict2/groups b/src/test/test-node-affinity-strict2/groups
new file mode 100644
index 00000000..e43eafc5
--- /dev/null
+++ b/src/test/test-node-affinity-strict2/groups
@@ -0,0 +1,4 @@
+group: must_stay_here
+	nodes node3
+	restricted 1
+	nofailback 1
diff --git a/src/test/test-node-affinity-strict2/hardware_status b/src/test/test-node-affinity-strict2/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-strict2/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-strict2/log.expect b/src/test/test-node-affinity-strict2/log.expect
new file mode 100644
index 00000000..d86c69de
--- /dev/null
+++ b/src/test/test-node-affinity-strict2/log.expect
@@ -0,0 +1,40 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute service vm:101 migrate node2
+info    120    node1/crm: got crm command: migrate vm:101 node2
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    123    node2/lrm: got lock 'ha_agent_node2_lock'
+info    123    node2/lrm: status change wait_for_agent_lock => active
+info    125    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    125    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info    140    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    143    node2/lrm: service vm:101 - start migrate to node 'node3'
+info    143    node2/lrm: service vm:101 - end migrate to node 'node3'
+info    160    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
+info    165    node3/lrm: starting service vm:101
+info    165    node3/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict2/manager_status b/src/test/test-node-affinity-strict2/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-strict2/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-strict2/service_config b/src/test/test-node-affinity-strict2/service_config
new file mode 100644
index 00000000..36ea15b1
--- /dev/null
+++ b/src/test/test-node-affinity-strict2/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" }
+}
diff --git a/src/test/test-node-affinity-strict3/README b/src/test/test-node-affinity-strict3/README
new file mode 100644
index 00000000..5aced390
--- /dev/null
+++ b/src/test/test-node-affinity-strict3/README
@@ -0,0 +1,10 @@
+Test whether a service in a restricted group with only one node member will
+stay in recovery in case of a failover of its previously assigned node.
+
+The test scenario is:
+- vm:101 must be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As node3 fails, vm:101 stays in recovery since there's no available node
+  member left in the restricted group
diff --git a/src/test/test-node-affinity-strict3/cmdlist b/src/test/test-node-affinity-strict3/cmdlist
new file mode 100644
index 00000000..eee0e40e
--- /dev/null
+++ b/src/test/test-node-affinity-strict3/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-node-affinity-strict3/groups b/src/test/test-node-affinity-strict3/groups
new file mode 100644
index 00000000..370865f6
--- /dev/null
+++ b/src/test/test-node-affinity-strict3/groups
@@ -0,0 +1,3 @@
+group: must_stay_here
+	nodes node3
+	restricted 1
diff --git a/src/test/test-node-affinity-strict3/hardware_status b/src/test/test-node-affinity-strict3/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-strict3/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-strict3/log.expect b/src/test/test-node-affinity-strict3/log.expect
new file mode 100644
index 00000000..47f97767
--- /dev/null
+++ b/src/test/test-node-affinity-strict3/log.expect
@@ -0,0 +1,74 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     23    node2/lrm: starting service vm:103
+info     23    node2/lrm: service status vm:103 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+err     240    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     260    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     280    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     300    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     320    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     340    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     360    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     380    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     400    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     420    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     440    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     460    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     480    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     500    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     520    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     540    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     560    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     580    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     600    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     620    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     640    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     660    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     680    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     700    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict3/manager_status b/src/test/test-node-affinity-strict3/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-node-affinity-strict3/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-node-affinity-strict3/service_config b/src/test/test-node-affinity-strict3/service_config
new file mode 100644
index 00000000..9adf02c8
--- /dev/null
+++ b/src/test/test-node-affinity-strict3/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-node-affinity-strict4/README b/src/test/test-node-affinity-strict4/README
new file mode 100644
index 00000000..25ded53e
--- /dev/null
+++ b/src/test/test-node-affinity-strict4/README
@@ -0,0 +1,14 @@
+Test whether a service in a restricted group with two node members will stay
+assigned to one of the node members in case of a failover of its previously
+assigned node.
+
+The test scenario is:
+- vm:101 must be kept on node2 or node3
+- vm:101 is currently running on node3
+- node2 has a higher service count than node1 to test whether the restriction
+  to node2 and node3 is applied even though the scheduler would prefer the less
+  utilized node1
+
+The expected outcome is:
+- As node3 fails, vm:101 is migrated to node2, as it's the only available node
+  left in the restricted group
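Note: a rough sketch of the selection behaviour this scenario exercises, not
the actual select_service_node implementation: group membership is filtered
before any usage-based choice, so the less utilized node1 never becomes a
recovery candidate. The per-node counts below are taken from this test's
service_config; everything else is assumed for illustration.

	#!/usr/bin/perl
	use strict;
	use warnings;

	# assumed state after node3 is fenced: node1 runs no HA services,
	# node2 runs vm:102 and vm:103
	my %service_count = ( node1 => 0, node2 => 2 );
	# only node2 and node3 are members of the restricted group
	my %group_members = ( node2 => 1, node3 => 1 );
	my %online        = ( node1 => 1, node2 => 1 );   # node3 is fenced

	# filter by membership first, then prefer the least loaded candidate
	my @candidates = grep { $group_members{$_} && $online{$_} } sort keys %service_count;
	my ($recovery_node) = sort { $service_count{$a} <=> $service_count{$b} } @candidates;
	print "recover vm:101 to $recovery_node\n";   # prints: recover vm:101 to node2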
diff --git a/src/test/test-node-affinity-strict4/cmdlist b/src/test/test-node-affinity-strict4/cmdlist
new file mode 100644
index 00000000..eee0e40e
--- /dev/null
+++ b/src/test/test-node-affinity-strict4/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-node-affinity-strict4/groups b/src/test/test-node-affinity-strict4/groups
new file mode 100644
index 00000000..0ad2abc6
--- /dev/null
+++ b/src/test/test-node-affinity-strict4/groups
@@ -0,0 +1,3 @@
+group: must_stay_here
+	nodes node2,node3
+	restricted 1
diff --git a/src/test/test-node-affinity-strict4/hardware_status b/src/test/test-node-affinity-strict4/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-strict4/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-strict4/log.expect b/src/test/test-node-affinity-strict4/log.expect
new file mode 100644
index 00000000..847e157c
--- /dev/null
+++ b/src/test/test-node-affinity-strict4/log.expect
@@ -0,0 +1,54 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     23    node2/lrm: starting service vm:103
+info     23    node2/lrm: service status vm:103 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node2)
+info    243    node2/lrm: starting service vm:101
+info    243    node2/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict4/manager_status b/src/test/test-node-affinity-strict4/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-strict4/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-strict4/service_config b/src/test/test-node-affinity-strict4/service_config
new file mode 100644
index 00000000..9adf02c8
--- /dev/null
+++ b/src/test/test-node-affinity-strict4/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-node-affinity-strict5/README b/src/test/test-node-affinity-strict5/README
new file mode 100644
index 00000000..a4e67f42
--- /dev/null
+++ b/src/test/test-node-affinity-strict5/README
@@ -0,0 +1,16 @@
+Test whether a service in a restricted group with two differently prioritized
+node members will be moved to and kept on the highest-priority node, both when
+the service currently runs on a lower-priority node and after a failover.
+
+The test scenario is:
+- vm:101 must be kept on node2 or node3
+- vm:101 is currently running on node3
+- node2 has a higher priority than node3
+
+The expected outcome is:
+- As vm:101 runs on node3, it is automatically migrated to node2, as node2 has
+  a higher priority than node3
+- As node2 fails, vm:101 is migrated to node3, as node3 is the only available
+  node member left in the restricted group
+- As node2 comes back online, vm:101 is migrated back to node2, as node2 has a
+  higher priority than node3
diff --git a/src/test/test-node-affinity-strict5/cmdlist b/src/test/test-node-affinity-strict5/cmdlist
new file mode 100644
index 00000000..6932aa78
--- /dev/null
+++ b/src/test/test-node-affinity-strict5/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node2 off" ],
+    [ "power node2 on", "network node2 on" ]
+]
diff --git a/src/test/test-node-affinity-strict5/groups b/src/test/test-node-affinity-strict5/groups
new file mode 100644
index 00000000..ec3cd799
--- /dev/null
+++ b/src/test/test-node-affinity-strict5/groups
@@ -0,0 +1,3 @@
+group: must_stay_here
+	nodes node2:2,node3:1
+	restricted 1
diff --git a/src/test/test-node-affinity-strict5/hardware_status b/src/test/test-node-affinity-strict5/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-strict5/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-strict5/log.expect b/src/test/test-node-affinity-strict5/log.expect
new file mode 100644
index 00000000..ca6e4e4f
--- /dev/null
+++ b/src/test/test-node-affinity-strict5/log.expect
@@ -0,0 +1,66 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info     20    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: service vm:101 - start migrate to node 'node2'
+info     25    node3/lrm: service vm:101 - end migrate to node 'node2'
+info     40    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info     43    node2/lrm: starting service vm:101
+info     43    node2/lrm: service status vm:101 started
+info    120      cmdlist: execute network node2 off
+info    120    node1/crm: node 'node2': state changed from 'online' => 'unknown'
+info    122    node2/crm: status change slave => wait_for_quorum
+info    123    node2/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node2': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node2'
+info    164     watchdog: execute power node2 off
+info    163    node2/crm: killed by poweroff
+info    164    node2/lrm: killed by poweroff
+info    164     hardware: server 'node2' stopped by poweroff (watchdog)
+info    220      cmdlist: execute power node2 on
+info    220    node2/crm: status change startup => wait_for_quorum
+info    220    node2/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute network node2 on
+info    240    node1/crm: got lock 'ha_agent_node2_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: node 'node2': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node2' to node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node3)
+info    242    node2/crm: status change wait_for_quorum => slave
+info    245    node3/lrm: starting service vm:101
+info    245    node3/lrm: service status vm:101 started
+info    260    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info    260    node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info    260    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    265    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    265    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    280    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    363    node2/lrm: got lock 'ha_agent_node2_lock'
+info    363    node2/lrm: status change wait_for_agent_lock => active
+info    363    node2/lrm: starting service vm:101
+info    363    node2/lrm: service status vm:101 started
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict5/manager_status b/src/test/test-node-affinity-strict5/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-strict5/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-strict5/service_config b/src/test/test-node-affinity-strict5/service_config
new file mode 100644
index 00000000..36ea15b1
--- /dev/null
+++ b/src/test/test-node-affinity-strict5/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" }
+}
diff --git a/src/test/test-node-affinity-strict6/README b/src/test/test-node-affinity-strict6/README
new file mode 100644
index 00000000..c558afd1
--- /dev/null
+++ b/src/test/test-node-affinity-strict6/README
@@ -0,0 +1,14 @@
+Test whether a service in a restricted group with nofailback enabled and two
+differently prioritized node members will stay on its current node after a
+failover instead of migrating back to the highest-priority node.
+
+The test scenario is:
+- vm:101 must be kept on node2 or node3
+- vm:101 is currently running on node2
+- node2 has a higher priority than node3
+
+The expected outcome is:
+- As node2 fails, vm:101 is migrated to node3 as it is the only available node
+  member left in the restricted group
+- As node2 comes back online, vm:101 stays on node3; even though node2 has a
+  higher priority, the nofailback flag prevents vm:101 from migrating back to node2
diff --git a/src/test/test-node-affinity-strict6/cmdlist b/src/test/test-node-affinity-strict6/cmdlist
new file mode 100644
index 00000000..4dd33cc4
--- /dev/null
+++ b/src/test/test-node-affinity-strict6/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node2 off"],
+    [ "power node2 on", "network node2 on" ]
+]
diff --git a/src/test/test-node-affinity-strict6/groups b/src/test/test-node-affinity-strict6/groups
new file mode 100644
index 00000000..cdd0e502
--- /dev/null
+++ b/src/test/test-node-affinity-strict6/groups
@@ -0,0 +1,4 @@
+group: must_stay_here
+	nodes node2:2,node3:1
+	restricted 1
+	nofailback 1
diff --git a/src/test/test-node-affinity-strict6/hardware_status b/src/test/test-node-affinity-strict6/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-strict6/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-strict6/log.expect b/src/test/test-node-affinity-strict6/log.expect
new file mode 100644
index 00000000..bcb472ba
--- /dev/null
+++ b/src/test/test-node-affinity-strict6/log.expect
@@ -0,0 +1,52 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:101
+info     23    node2/lrm: service status vm:101 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info    120      cmdlist: execute network node2 off
+info    120    node1/crm: node 'node2': state changed from 'online' => 'unknown'
+info    122    node2/crm: status change slave => wait_for_quorum
+info    123    node2/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node2': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node2'
+info    164     watchdog: execute power node2 off
+info    163    node2/crm: killed by poweroff
+info    164    node2/lrm: killed by poweroff
+info    164     hardware: server 'node2' stopped by poweroff (watchdog)
+info    220      cmdlist: execute power node2 on
+info    220    node2/crm: status change startup => wait_for_quorum
+info    220    node2/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute network node2 on
+info    240    node1/crm: got lock 'ha_agent_node2_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: node 'node2': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node2' to node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node3)
+info    242    node2/crm: status change wait_for_quorum => slave
+info    245    node3/lrm: got lock 'ha_agent_node3_lock'
+info    245    node3/lrm: status change wait_for_agent_lock => active
+info    245    node3/lrm: starting service vm:101
+info    245    node3/lrm: service status vm:101 started
+info    260    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict6/manager_status b/src/test/test-node-affinity-strict6/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-strict6/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-strict6/service_config b/src/test/test-node-affinity-strict6/service_config
new file mode 100644
index 00000000..1d371e1e
--- /dev/null
+++ b/src/test/test-node-affinity-strict6/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node2", "state": "started", "group": "must_stay_here" }
+}
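
Note: the expected logs above can be replayed with the HA test harness in
src/test. Assuming its existing entry points, an invocation along these lines
should work; adjust paths or targets if the harness differs:

	# run a single scenario
	./ha-tester.pl test-node-affinity-strict2

	# run the whole suite (from src/test)
	make test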
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Thread overview: 28+ messages
2025-07-29 18:00 [pve-devel] [PATCH docs/ha-manager/manager v4 00/25] HA Rules Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 01/19] tree-wide: make arguments for select_service_node explicit Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 02/19] manager: improve signature of select_service_node Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 03/19] introduce rules base plugin Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 04/19] rules: introduce node affinity rule plugin Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 05/19] config, env, hw: add rules read and parse methods Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 06/19] config: delete services from rules if services are deleted from config Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 07/19] manager: read and update rules config Daniel Kral
2025-07-29 18:00 ` Daniel Kral [this message]
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 09/19] resources: introduce failback property in ha resource config Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 10/19] manager: migrate ha groups to node affinity rules in-memory Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 11/19] manager: apply node affinity rules when selecting service nodes Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 12/19] test: add test cases for rules config Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 13/19] api: introduce ha rules api endpoints Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 14/19] cli: expose ha rules api endpoints to ha-manager cli Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 15/19] sim: do not create default groups for test cases Daniel Kral
2025-07-30 10:01   ` Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 16/19] test: ha tester: migrate groups to service and rules config Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 17/19] test: ha tester: replace any reference to groups with node affinity rules Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 18/19] env: add property delete for update_service_config Daniel Kral
2025-07-29 18:00 ` [pve-devel] [PATCH ha-manager v4 19/19] manager: persistently migrate ha groups to ha rules Daniel Kral
2025-07-29 18:01 ` [pve-devel] [PATCH docs v4 1/2] ha: add documentation about ha rules and ha node affinity rules Daniel Kral
2025-07-29 18:01 ` [pve-devel] [PATCH docs v4 2/2] ha: crs: add effects of ha node affinity rule on the crs scheduler Daniel Kral
2025-07-29 18:01 ` [pve-devel] [PATCH manager v4 1/4] api: ha: add ha rules api endpoints Daniel Kral
2025-07-29 18:01 ` [pve-devel] [PATCH manager v4 2/4] ui: ha: remove ha groups from ha resource components Daniel Kral
2025-07-29 18:01 ` [pve-devel] [PATCH manager v4 3/4] ui: ha: show failback flag in resources status view Daniel Kral
2025-07-29 18:01 ` [pve-devel] [PATCH manager v4 4/4] ui: ha: replace ha groups with ha node affinity rules Daniel Kral
2025-07-30 17:29 ` [pve-devel] [PATCH docs/ha-manager/manager v4 00/25] HA Rules Michael Köppl
