From mboxrd@z Thu Jan 1 00:00:00 1970
From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Fri, 19 Sep 2025 16:08:09 +0200
Message-ID: <20250919140856.1361124-2-d.kral@proxmox.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20250919140856.1361124-1-d.kral@proxmox.com>
References: <20250919140856.1361124-1-d.kral@proxmox.com>
MIME-Version: 1.0
Subject: [pve-devel] [PATCH ha-manager 1/3] tests: add regression tests for mixed resource affinity rules
List-Id: Proxmox VE development discussion
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

These test cases show the current behavior of mixed resource affinity
rules, which in the case of test-resource-affinity-strict-mixed2 is
wrong.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
 .../README          | 16 ++++++
 .../cmdlist         |  3 ++
 .../hardware_status |  5 ++
 .../log.expect      | 50 +++++++++++++++++++
 .../manager_status  |  1 +
 .../rules_config    |  7 +++
 .../service_config  |  6 +++
 .../README          | 10 ++++
 .../cmdlist         |  3 ++
 .../hardware_status |  5 ++
 .../log.expect      | 48 ++++++++++++++++++
 .../manager_status  |  1 +
 .../rules_config    | 11 ++++
 .../service_config  |  8 +++
 14 files changed, 174 insertions(+)
 create mode 100644 src/test/test-resource-affinity-strict-mixed1/README
 create mode 100644 src/test/test-resource-affinity-strict-mixed1/cmdlist
 create mode 100644 src/test/test-resource-affinity-strict-mixed1/hardware_status
 create mode 100644 src/test/test-resource-affinity-strict-mixed1/log.expect
 create mode 100644 src/test/test-resource-affinity-strict-mixed1/manager_status
 create mode 100644 src/test/test-resource-affinity-strict-mixed1/rules_config
 create mode 100644 src/test/test-resource-affinity-strict-mixed1/service_config
 create mode 100644 src/test/test-resource-affinity-strict-mixed2/README
 create mode 100644 src/test/test-resource-affinity-strict-mixed2/cmdlist
 create mode 100644 src/test/test-resource-affinity-strict-mixed2/hardware_status
 create mode 100644 src/test/test-resource-affinity-strict-mixed2/log.expect
 create mode 100644 src/test/test-resource-affinity-strict-mixed2/manager_status
 create mode 100644 src/test/test-resource-affinity-strict-mixed2/rules_config
 create mode 100644 src/test/test-resource-affinity-strict-mixed2/service_config

diff --git a/src/test/test-resource-affinity-strict-mixed1/README b/src/test/test-resource-affinity-strict-mixed1/README
new file mode 100644
index 00000000..b7003360
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed1/README
@@ -0,0 +1,16 @@
+The test scenario is:
+- vm:201, vm:202, and vm:203 must be kept together
+- vm:101 and vm:201 must be kept separate
+- Therefore, vm:201, vm:202, vm:203 must all be kept separate from vm:101
+- vm:101 and vm:202 are currently running on node3
+- vm:201 and vm:203 are currently running on node1
+
+The expected outcome is:
+- The resource-node placements do not adhere to the defined resource affinity
+  rules, therefore the HA resources must be moved accordingly: As vm:101 and
+  vm:202 must be on separate nodes, these must be migrated to separate nodes:
+  - As the negative resource affinity rule is strict, resources should
+    neither share the current nor the migration target node, so vm:101 is
+    moved to node2, where neither vm:201, vm:202, nor vm:203 is assigned
+  - Afterwards, vm:202 is migrated to node1, where vm:201 and vm:203 are
+    already running
diff --git a/src/test/test-resource-affinity-strict-mixed1/cmdlist b/src/test/test-resource-affinity-strict-mixed1/cmdlist
new file mode 100644
index 00000000..13f90cd7
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed1/cmdlist
@@ -0,0 +1,3 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ]
+]
diff --git a/src/test/test-resource-affinity-strict-mixed1/hardware_status b/src/test/test-resource-affinity-strict-mixed1/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-resource-affinity-strict-mixed1/log.expect b/src/test/test-resource-affinity-strict-mixed1/log.expect
new file mode 100644
index 00000000..86e9439f
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed1/log.expect
@@ -0,0 +1,50 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:201' on node 'node1'
+info     20    node1/crm: adding new service 'vm:202' on node 'node3'
+info     20    node1/crm: adding new service 'vm:203' on node 'node1'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info     20    node1/crm: service 'vm:201': state changed from 'request_start' to 'started' (node = node1)
+info     20    node1/crm: service 'vm:202': state changed from 'request_start' to 'started' (node = node3)
+info     20    node1/crm: service 'vm:203': state changed from 'request_start' to 'started' (node = node1)
+info     20    node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info     20    node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info     20    node1/crm: migrate service 'vm:202' to node 'node1' (running)
+info     20    node1/crm: service 'vm:202': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:201
+info     21    node1/lrm: service status vm:201 started
+info     21    node1/lrm: starting service vm:203
+info     21    node1/lrm: service status vm:203 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: service vm:101 - start migrate to node 'node2'
+info     25    node3/lrm: service vm:101 - end migrate to node 'node2'
+info     25    node3/lrm: service vm:202 - start migrate to node 'node1'
+info     25    node3/lrm: service vm:202 - end migrate to node 'node1'
+info     40    node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
+info     40    node1/crm: service 'vm:202': state changed from 'migrate' to 'started' (node = node1)
+info     41    node1/lrm: starting service vm:202
+info     41    node1/lrm: service status vm:202 started
+info     43    node2/lrm: starting service vm:101
+info     43    node2/lrm: service status vm:101 started
+info    620     hardware: exit simulation - done
diff --git a/src/test/test-resource-affinity-strict-mixed1/manager_status b/src/test/test-resource-affinity-strict-mixed1/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed1/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-resource-affinity-strict-mixed1/rules_config b/src/test/test-resource-affinity-strict-mixed1/rules_config
new file mode 100644
index 00000000..2cd9fe21
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed1/rules_config
@@ -0,0 +1,7 @@
+resource-affinity: vms-100s-must-stick-together
+	resources vm:201,vm:202,vm:203
+	affinity positive
+
+resource-affinity: vms-100s-and-vm201-must-be-separate
+	resources vm:201,vm:101
+	affinity negative
diff --git a/src/test/test-resource-affinity-strict-mixed1/service_config b/src/test/test-resource-affinity-strict-mixed1/service_config
new file mode 100644
index 00000000..83e2157d
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed1/service_config
@@ -0,0 +1,6 @@
+{
+    "vm:101": { "node": "node3", "state": "started" },
+    "vm:201": { "node": "node1", "state": "started" },
+    "vm:202": { "node": "node3", "state": "started" },
+    "vm:203": { "node": "node1", "state": "started" }
+}
diff --git a/src/test/test-resource-affinity-strict-mixed2/README b/src/test/test-resource-affinity-strict-mixed2/README
new file mode 100644
index 00000000..c56d1a2d
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed2/README
@@ -0,0 +1,10 @@
+The test scenario is:
+- vm:101, vm:102, and vm:103 must be kept together
+- vm:201, vm:202, and vm:203 must be kept together
+- vm:101 and vm:201 must be kept separate
+- Therefore, vm:101, vm:102, and vm:103 must all be kept separate from vm:201,
+  vm:202, and vm:203 and vice versa
+- vm:101, vm:103, vm:201, and vm:203 are currently running on node1
+- vm:102 and vm:202 are running on node3 and node2 respectively
+
+The current outcome is incorrect: no migrations are scheduled, so vm:101 and vm:201 keep running together on node1 in violation of the negative affinity rule.
diff --git a/src/test/test-resource-affinity-strict-mixed2/cmdlist b/src/test/test-resource-affinity-strict-mixed2/cmdlist
new file mode 100644
index 00000000..13f90cd7
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed2/cmdlist
@@ -0,0 +1,3 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on" ]
+]
diff --git a/src/test/test-resource-affinity-strict-mixed2/hardware_status b/src/test/test-resource-affinity-strict-mixed2/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed2/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-resource-affinity-strict-mixed2/log.expect b/src/test/test-resource-affinity-strict-mixed2/log.expect
new file mode 100644
index 00000000..9cdc8b14
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed2/log.expect
@@ -0,0 +1,48 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node1'
+info     20    node1/crm: adding new service 'vm:102' on node 'node3'
+info     20    node1/crm: adding new service 'vm:103' on node 'node1'
+info     20    node1/crm: adding new service 'vm:201' on node 'node1'
+info     20    node1/crm: adding new service 'vm:202' on node 'node2'
+info     20    node1/crm: adding new service 'vm:203' on node 'node1'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node1)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started' (node = node3)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started' (node = node1)
+info     20    node1/crm: service 'vm:201': state changed from 'request_start' to 'started' (node = node1)
+info     20    node1/crm: service 'vm:202': state changed from 'request_start' to 'started' (node = node2)
+info     20    node1/crm: service 'vm:203': state changed from 'request_start' to 'started' (node = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:101
+info     21    node1/lrm: service status vm:101 started
+info     21    node1/lrm: starting service vm:103
+info     21    node1/lrm: service status vm:103 started
+info     21    node1/lrm: starting service vm:201
+info     21    node1/lrm: service status vm:201 started
+info     21    node1/lrm: starting service vm:203
+info     21    node1/lrm: service status vm:203 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:202
+info     23    node2/lrm: service status vm:202 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:102
+info     25    node3/lrm: service status vm:102 started
+info    620     hardware: exit simulation - done
diff --git a/src/test/test-resource-affinity-strict-mixed2/manager_status b/src/test/test-resource-affinity-strict-mixed2/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed2/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-resource-affinity-strict-mixed2/rules_config b/src/test/test-resource-affinity-strict-mixed2/rules_config
new file mode 100644
index 00000000..851ed590
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed2/rules_config
@@ -0,0 +1,11 @@
+resource-affinity: together-100s
+	resources vm:101,vm:102,vm:103
+	affinity positive
+
+resource-affinity: together-200s
+	resources vm:201,vm:202,vm:203
+	affinity positive
+
+resource-affinity: lonely-must-vms-be
+	resources vm:101,vm:201
+	affinity negative
diff --git a/src/test/test-resource-affinity-strict-mixed2/service_config b/src/test/test-resource-affinity-strict-mixed2/service_config
new file mode 100644
index 00000000..fe6b2438
--- /dev/null
+++ b/src/test/test-resource-affinity-strict-mixed2/service_config
@@ -0,0 +1,8 @@
+{
+    "vm:101": { "node": "node1", "state": "started" },
+    "vm:102": { "node": "node3", "state": "started" },
+    "vm:103": { "node": "node1", "state": "started" },
+    "vm:201": { "node": "node1", "state": "started" },
+    "vm:202": { "node": "node2", "state": "started" },
+    "vm:203": { "node": "node1", "state": "started" }
+}
--
2.47.3


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel