From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH ha-manager 1/2] test: add delayed positive resource affinity migration test case
Date: Mon,  3 Nov 2025 16:17:11 +0100
Message-ID: <20251103151823.387984-2-d.kral@proxmox.com>
In-Reply-To: <20251103151823.387984-1-d.kral@proxmox.com>

Add a test case for two HA resources in a positive resource affinity
rule, where one of the HA resources is already on the common target node
while the other is still stuck in migration to it.

The current behavior is not correct, as the already migrated HA resource
will be migrated back to the source node instead of staying on the common
target node. This behavior will be fixed with the next patch.

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
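For reviewers, a short summary of the scenario encoded by the test files
below (all values taken from rules_config, service_config and
manager_status in this patch):

    rules_config:    vm:101 and vm:102 must stick together (positive affinity)
    service_config:  vm:101 started on node3, vm:102 started on node1
    manager_status:  vm:101 already 'started' on node3,
                     vm:102 still in state 'migrate' from node1 to node3

As log.expect shows, the manager currently reacts to vm:102's in-flight
migration by pulling vm:101 back to node1; after some back and forth both
resources end up on node1 instead of on the intended common target node3.
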
 .../README                                    |  5 ++
 .../cmdlist                                   |  3 ++
 .../hardware_status                           |  5 ++
 .../log.expect                                | 46 +++++++++++++++++++
 .../manager_status                            | 21 +++++++++
 .../rules_config                              |  3 ++
 .../service_config                            |  4 ++
 7 files changed, 87 insertions(+)
 create mode 100644 src/test/test-resource-affinity-strict-positive6/README
 create mode 100644 src/test/test-resource-affinity-strict-positive6/cmdlist
 create mode 100644 src/test/test-resource-affinity-strict-positive6/hardware_status
 create mode 100644 src/test/test-resource-affinity-strict-positive6/log.expect
 create mode 100644 src/test/test-resource-affinity-strict-positive6/manager_status
 create mode 100644 src/test/test-resource-affinity-strict-positive6/rules_config
 create mode 100644 src/test/test-resource-affinity-strict-positive6/service_config

diff --git a/src/test/test-resource-affinity-strict-positive6/README b/src/test/test-resource-affinity-strict-positive6/README
new file mode 100644
index 00000000..a6affda3
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/README
@@ -0,0 +1,5 @@
+Test whether two HA resources in a positive resource affinity rule will migrate
+to the same target node when one finishes its migration earlier than the other.
+
+The current behavior is not correct, because the already migrated HA resource
+will be migrated back to the source node.
diff --git a/src/test/test-resource-affinity-strict-positive6/cmdlist b/src/test/test-resource-affinity-strict-positive6/cmdlist
new file mode 100644
index 00000000..13f90cd7
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/cmdlist
@@ -0,0 +1,3 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ]
+]
diff --git a/src/test/test-resource-affinity-strict-positive6/hardware_status b/src/test/test-resource-affinity-strict-positive6/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive6/log.expect b/src/test/test-resource-affinity-strict-positive6/log.expect
new file mode 100644
index 00000000..69f8d867
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/log.expect
@@ -0,0 +1,46 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: migrate service 'vm:101' to node 'node1' (running)
+info     20    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: service vm:102 - start migrate to node 'node3'
+info     21    node1/lrm: service vm:102 - end migrate to node 'node3'
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: service vm:101 - start migrate to node 'node1'
+info     25    node3/lrm: service vm:101 - end migrate to node 'node1'
+info     40    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node1)
+info     40    node1/crm: service 'vm:102': state changed from 'migrate' to 'started'  (node = node3)
+info     40    node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info     40    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node1, target = node3)
+info     40    node1/crm: migrate service 'vm:102' to node 'node1' (running)
+info     40    node1/crm: service 'vm:102': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info     41    node1/lrm: service vm:101 - start migrate to node 'node3'
+info     41    node1/lrm: service vm:101 - end migrate to node 'node3'
+info     45    node3/lrm: service vm:102 - start migrate to node 'node1'
+info     45    node3/lrm: service vm:102 - end migrate to node 'node1'
+info     60    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
+info     60    node1/crm: service 'vm:102': state changed from 'migrate' to 'started'  (node = node1)
+info     60    node1/crm: migrate service 'vm:101' to node 'node1' (running)
+info     60    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node1)
+info     61    node1/lrm: starting service vm:102
+info     61    node1/lrm: service status vm:102 started
+info     65    node3/lrm: service vm:101 - start migrate to node 'node1'
+info     65    node3/lrm: service vm:101 - end migrate to node 'node1'
+info     80    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node1)
+info     81    node1/lrm: starting service vm:101
+info     81    node1/lrm: service status vm:101 started
+info    620     hardware: exit simulation - done
diff --git a/src/test/test-resource-affinity-strict-positive6/manager_status b/src/test/test-resource-affinity-strict-positive6/manager_status
new file mode 100644
index 00000000..9e7cdf21
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/manager_status
@@ -0,0 +1,21 @@
+{
+    "master_node": "node1",
+    "node_status": {
+	"node1":"online",
+	"node2":"online",
+	"node3":"online"
+    },
+    "service_status": {
+	"vm:101": {
+	    "node": "node3",
+	    "state": "started",
+	    "uid": "RoPGTlvNYq/oZFokv9fgWw"
+	},
+	"vm:102": {
+	    "node": "node1",
+	    "state": "migrate",
+	    "target": "node3",
+	    "uid": "JVDARwmsXoVTF8Zd0BY2Mg"
+	}
+    }
+}
diff --git a/src/test/test-resource-affinity-strict-positive6/rules_config b/src/test/test-resource-affinity-strict-positive6/rules_config
new file mode 100644
index 00000000..9789d7cc
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/rules_config
@@ -0,0 +1,3 @@
+resource-affinity: vms-must-stick-together
+	resources vm:101,vm:102
+	affinity positive
diff --git a/src/test/test-resource-affinity-strict-positive6/service_config b/src/test/test-resource-affinity-strict-positive6/service_config
new file mode 100644
index 00000000..e71594d9
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/service_config
@@ -0,0 +1,4 @@
+{
+    "vm:101": { "node": "node3", "state": "started" },
+    "vm:102": { "node": "node1", "state": "started" }
+}
-- 
2.47.3


