From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
	by lore.proxmox.com (Postfix) with ESMTPS id 17A7B1FF187
	for ; Mon, 3 Nov 2025 16:18:25 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
	by firstgate.proxmox.com (Proxmox) with ESMTP id CCAD921233;
	Mon, 3 Nov 2025 16:19:00 +0100 (CET)
From: Daniel Kral
To: pve-devel@lists.proxmox.com
Date: Mon, 3 Nov 2025 16:17:11 +0100
Message-ID: <20251103151823.387984-2-d.kral@proxmox.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20251103151823.387984-1-d.kral@proxmox.com>
References: <20251103151823.387984-1-d.kral@proxmox.com>
MIME-Version: 1.0
X-Bm-Milter-Handled: 55990f41-d878-4baa-be0a-ee34c49e34d2
X-Bm-Transport-Timestamp: 1762183089325
X-SPAM-LEVEL: Spam detection results: 0
	AWL -0.035 Adjusted score from AWL reputation of From: address
	BAYES_00 -1.9 Bayes spam probability is 0 to 1%
	DMARC_MISSING 0.1 Missing DMARC policy
	KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
	PROLO_LEO1 0.1 Meta Catches all Leo drug variations so far
	SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record
	SPF_PASS -0.001 SPF: sender matches SPF record
Subject: [pve-devel] [PATCH ha-manager 1/2] test: add delayed positive
 resource affinity migration test case
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Reply-To: Proxmox VE development discussion
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: pve-devel-bounces@lists.proxmox.com
Sender: "pve-devel"

Add a test case for two HA resources in a positive resource affinity
rule, where one of the HA resources has already been migrated to the
target node, while the other is still stuck in migration.
The current behavior is incorrect, as the already migrated HA resource
is migrated back to the source node instead of staying on the common
target node. This behavior will be fixed with the next patch.

Signed-off-by: Daniel Kral
---
 .../README          |  5 ++
 .../cmdlist         |  3 ++
 .../hardware_status |  5 ++
 .../log.expect      | 46 ++++++++++++++++++++
 .../manager_status  | 21 ++++++++++
 .../rules_config    |  3 ++
 .../service_config  |  4 ++
 7 files changed, 87 insertions(+)
 create mode 100644 src/test/test-resource-affinity-strict-positive6/README
 create mode 100644 src/test/test-resource-affinity-strict-positive6/cmdlist
 create mode 100644 src/test/test-resource-affinity-strict-positive6/hardware_status
 create mode 100644 src/test/test-resource-affinity-strict-positive6/log.expect
 create mode 100644 src/test/test-resource-affinity-strict-positive6/manager_status
 create mode 100644 src/test/test-resource-affinity-strict-positive6/rules_config
 create mode 100644 src/test/test-resource-affinity-strict-positive6/service_config

diff --git a/src/test/test-resource-affinity-strict-positive6/README b/src/test/test-resource-affinity-strict-positive6/README
new file mode 100644
index 00000000..a6affda3
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/README
@@ -0,0 +1,5 @@
+Test whether two HA resources in positive resource affinity will migrate to the
+same target node when one of them finishes earlier than the other.
+
+The current behavior is not correct, because the already migrated HA resource
+will be migrated back to the source node.
diff --git a/src/test/test-resource-affinity-strict-positive6/cmdlist b/src/test/test-resource-affinity-strict-positive6/cmdlist
new file mode 100644
index 00000000..13f90cd7
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/cmdlist
@@ -0,0 +1,3 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ]
+]
diff --git a/src/test/test-resource-affinity-strict-positive6/hardware_status b/src/test/test-resource-affinity-strict-positive6/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-resource-affinity-strict-positive6/log.expect b/src/test/test-resource-affinity-strict-positive6/log.expect
new file mode 100644
index 00000000..69f8d867
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/log.expect
@@ -0,0 +1,46 @@
+info      0     hardware: starting simulation
+info     20    cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20    cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20    cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: migrate service 'vm:101' to node 'node1' (running)
+info     20    node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: service vm:102 - start migrate to node 'node3'
+info     21    node1/lrm: service vm:102 - end migrate to node 'node3'
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: service vm:101 - start migrate to node 'node1'
+info     25    node3/lrm: service vm:101 - end migrate to node 'node1'
+info     40    node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
+info     40    node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node3)
+info     40    node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info     40    node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
+info     40    node1/crm: migrate service 'vm:102' to node 'node1' (running)
+info     40    node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info     41    node1/lrm: service vm:101 - start migrate to node 'node3'
+info     41    node1/lrm: service vm:101 - end migrate to node 'node3'
+info     45    node3/lrm: service vm:102 - start migrate to node 'node1'
+info     45    node3/lrm: service vm:102 - end migrate to node 'node1'
+info     60    node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info     60    node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node1)
+info     60    node1/crm: migrate service 'vm:101' to node 'node1' (running)
+info     60    node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info     61    node1/lrm: starting service vm:102
+info     61    node1/lrm: service status vm:102 started
+info     65    node3/lrm: service vm:101 - start migrate to node 'node1'
+info     65    node3/lrm: service vm:101 - end migrate to node 'node1'
+info     80    node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
+info     81    node1/lrm: starting service vm:101
+info     81    node1/lrm: service status vm:101 started
+info    620    hardware: exit simulation - done
diff --git a/src/test/test-resource-affinity-strict-positive6/manager_status b/src/test/test-resource-affinity-strict-positive6/manager_status
new file mode 100644
index 00000000..9e7cdf21
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/manager_status
@@ -0,0 +1,21 @@
+{
+    "master_node": "node1",
+    "node_status": {
+        "node1":"online",
+        "node2":"online",
+        "node3":"online"
+    },
+    "service_status": {
+        "vm:101": {
+            "node": "node3",
+            "state": "started",
+            "uid": "RoPGTlvNYq/oZFokv9fgWw"
+        },
+        "vm:102": {
+            "node": "node1",
+            "state": "migrate",
+            "target": "node3",
+            "uid": "JVDARwmsXoVTF8Zd0BY2Mg"
+        }
+    }
+}
diff --git a/src/test/test-resource-affinity-strict-positive6/rules_config b/src/test/test-resource-affinity-strict-positive6/rules_config
new file mode 100644
index 00000000..9789d7cc
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/rules_config
@@ -0,0 +1,3 @@
+resource-affinity: vms-must-stick-together
+	resources vm:101,vm:102
+	affinity positive
diff --git a/src/test/test-resource-affinity-strict-positive6/service_config b/src/test/test-resource-affinity-strict-positive6/service_config
new file mode 100644
index 00000000..e71594d9
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-positive6/service_config
@@ -0,0 +1,4 @@
+{
+    "vm:101": { "node": "node3", "state": "started" },
+    "vm:102": { "node": "node1", "state": "started" }
+}
-- 
2.47.3


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel