From mboxrd@z Thu Jan  1 00:00:00 1970
From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH ha-manager 2/7] test: add test cases for node affinity rules with maintenance mode
Date: Wed, 22 Apr 2026 12:00:20 +0200
Message-ID: <20260422100035.232716-3-d.kral@proxmox.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260422100035.232716-1-d.kral@proxmox.com>
References: <20260422100035.232716-1-d.kral@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: Proxmox VE development discussion

These test cases document how the HA Manager currently behaves for node
affinity rules with HA resources that have failback enabled or disabled,
whose current nodes are put into maintenance mode and become available
again afterwards.

The non-strict node affinity rules only need single node member test cases,
since these are already multi-priority node affinity rules: the non-member
nodes are added with priority -1.

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
 .../README          |  4 ++
 .../cmdlist         |  5 ++
 .../hardware_status |  5 ++
 .../log.expect      | 48 +++++++++++++++++++
 .../manager_status  |  1 +
 .../rules_config    |  3 ++
 .../service_config  |  3 ++
 .../README          |  3 ++
 .../cmdlist         |  5 ++
 .../hardware_status |  5 ++
 .../log.expect      | 40 ++++++++++++++++
 .../manager_status  |  1 +
 .../rules_config    |  3 ++
 .../service_config  |  3 ++
 .../README          |  3 ++
 .../cmdlist         |  5 ++
 .../hardware_status |  5 ++
 .../log.expect      | 35 ++++++++++++++
 .../manager_status  |  1 +
 .../rules_config    |  4 ++
 .../service_config  |  3 ++
 .../README          |  3 ++
 .../cmdlist         |  5 ++
 .../hardware_status |  5 ++
 .../log.expect      | 35 ++++++++++++++
 .../manager_status  |  1 +
 .../rules_config    |  4 ++
 .../service_config  |  3 ++
 .../README          |  4 ++
 .../cmdlist         |  5 ++
 .../hardware_status |  5 ++
 .../log.expect      | 48 +++++++++++++++++++
 .../manager_status  |  1 +
 .../rules_config    |  4 ++
 .../service_config  |  3 ++
 .../README          |  3 ++
 .../cmdlist         |  5 ++
 .../hardware_status |  5 ++
 .../log.expect      | 40 ++++++++++++++++
 .../manager_status  |  1 +
 .../rules_config    |  4 ++
 .../service_config  |  3 ++
 42 files changed, 372 insertions(+)
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict1/README
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict1/cmdlist
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict1/hardware_status
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict1/log.expect
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict1/manager_status
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict1/rules_config
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict1/service_config
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict2/README
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict2/cmdlist
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict2/hardware_status
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict2/log.expect
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict2/manager_status
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict2/rules_config
 create mode 100644 src/test/test-node-affinity-maintenance-nonstrict2/service_config
 create mode 100644 src/test/test-node-affinity-maintenance-strict1/README
 create mode 100644 src/test/test-node-affinity-maintenance-strict1/cmdlist
 create mode 100644 src/test/test-node-affinity-maintenance-strict1/hardware_status
 create mode 100644 src/test/test-node-affinity-maintenance-strict1/log.expect
 create mode 100644 src/test/test-node-affinity-maintenance-strict1/manager_status
 create mode 100644 src/test/test-node-affinity-maintenance-strict1/rules_config
 create mode 100644 src/test/test-node-affinity-maintenance-strict1/service_config
 create mode 100644 src/test/test-node-affinity-maintenance-strict2/README
 create mode 100644 src/test/test-node-affinity-maintenance-strict2/cmdlist
 create mode 100644 src/test/test-node-affinity-maintenance-strict2/hardware_status
 create mode 100644 src/test/test-node-affinity-maintenance-strict2/log.expect
 create mode 100644 src/test/test-node-affinity-maintenance-strict2/manager_status
 create mode 100644 src/test/test-node-affinity-maintenance-strict2/rules_config
 create mode 100644 src/test/test-node-affinity-maintenance-strict2/service_config
 create mode 100644 src/test/test-node-affinity-maintenance-strict3/README
 create mode 100644 src/test/test-node-affinity-maintenance-strict3/cmdlist
 create mode 100644 src/test/test-node-affinity-maintenance-strict3/hardware_status
 create mode 100644 src/test/test-node-affinity-maintenance-strict3/log.expect
 create mode 100644 src/test/test-node-affinity-maintenance-strict3/manager_status
 create mode 100644 src/test/test-node-affinity-maintenance-strict3/rules_config
 create mode 100644 src/test/test-node-affinity-maintenance-strict3/service_config
 create mode 100644 src/test/test-node-affinity-maintenance-strict4/README
 create mode 100644 src/test/test-node-affinity-maintenance-strict4/cmdlist
 create mode 100644 src/test/test-node-affinity-maintenance-strict4/hardware_status
 create mode 100644 src/test/test-node-affinity-maintenance-strict4/log.expect
 create mode 100644 src/test/test-node-affinity-maintenance-strict4/manager_status
 create mode 100644 src/test/test-node-affinity-maintenance-strict4/rules_config
 create mode 100644 src/test/test-node-affinity-maintenance-strict4/service_config

diff --git a/src/test/test-node-affinity-maintenance-nonstrict1/README b/src/test/test-node-affinity-maintenance-nonstrict1/README
new file mode 100644
index 00000000..715e8876
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict1/README
@@ -0,0 +1,4 @@
+Test whether an HA resource with failback enabled in a non-strict node affinity
+rule with a single node member will move to a replacement node if its current
+node is in maintenance mode and moves back to the previous maintenance node as
+soon as it's available again.
diff --git a/src/test/test-node-affinity-maintenance-nonstrict1/cmdlist b/src/test/test-node-affinity-maintenance-nonstrict1/cmdlist
new file mode 100644
index 00000000..7e577b68
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict1/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "crm node3 enable-node-maintenance" ],
+    [ "crm node3 disable-node-maintenance" ]
+]
diff --git a/src/test/test-node-affinity-maintenance-nonstrict1/hardware_status b/src/test/test-node-affinity-maintenance-nonstrict1/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-maintenance-nonstrict1/log.expect b/src/test/test-node-affinity-maintenance-nonstrict1/log.expect
new file mode 100644
index 00000000..339ce3ab
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict1/log.expect
@@ -0,0 +1,48 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute crm node3 enable-node-maintenance
+info 125 node3/lrm: status change active => maintenance
+info 140 node1/crm: node 'node3': state changed from 'online' => 'maintenance'
+info 140 node1/crm: migrate service 'vm:101' to node 'node1' (running)
+info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info 141 node1/lrm: got lock 'ha_agent_node1_lock'
+info 141 node1/lrm: status change wait_for_agent_lock => active
+info 145 node3/lrm: service vm:101 - start migrate to node 'node1'
+info 145 node3/lrm: service vm:101 - end migrate to node 'node1'
+info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
+info 161 node1/lrm: starting service vm:101
+info 161 node1/lrm: service status vm:101 started
+info 220 cmdlist: execute crm node3 disable-node-maintenance
+info 225 node3/lrm: got lock 'ha_agent_node3_lock'
+info 225 node3/lrm: status change maintenance => active
+info 240 node1/crm: node 'node3': state changed from 'maintenance' => 'online'
+info 240 node1/crm: moving service 'vm:101' back to 'node3', node came back from maintenance.
+info 240 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 240 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
+info 241 node1/lrm: service vm:101 - start migrate to node 'node3'
+info 241 node1/lrm: service vm:101 - end migrate to node 'node3'
+info 260 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 265 node3/lrm: starting service vm:101
+info 265 node3/lrm: service status vm:101 started
+info 820 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-maintenance-nonstrict1/manager_status b/src/test/test-node-affinity-maintenance-nonstrict1/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict1/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-node-affinity-maintenance-nonstrict1/rules_config b/src/test/test-node-affinity-maintenance-nonstrict1/rules_config
new file mode 100644
index 00000000..f758b512
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict1/rules_config
@@ -0,0 +1,3 @@
+node-affinity: vm101-should-be-on-node3
+    nodes node3
+    resources vm:101
diff --git a/src/test/test-node-affinity-maintenance-nonstrict1/service_config b/src/test/test-node-affinity-maintenance-nonstrict1/service_config
new file mode 100644
index 00000000..7f0b1bf9
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict1/service_config
@@ -0,0 +1,3 @@
+{
+  "vm:101": { "node": "node3", "state": "started" }
+}
diff --git a/src/test/test-node-affinity-maintenance-nonstrict2/README b/src/test/test-node-affinity-maintenance-nonstrict2/README
new file mode 100644
index 00000000..9af43c11
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict2/README
@@ -0,0 +1,3 @@
+Test whether an HA resource with failback disabled in a non-strict node
+affinity rule with a single node member will move to a replacement node if its
+current node is in maintenance mode.
diff --git a/src/test/test-node-affinity-maintenance-nonstrict2/cmdlist b/src/test/test-node-affinity-maintenance-nonstrict2/cmdlist
new file mode 100644
index 00000000..7e577b68
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict2/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "crm node3 enable-node-maintenance" ],
+    [ "crm node3 disable-node-maintenance" ]
+]
diff --git a/src/test/test-node-affinity-maintenance-nonstrict2/hardware_status b/src/test/test-node-affinity-maintenance-nonstrict2/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict2/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-maintenance-nonstrict2/log.expect b/src/test/test-node-affinity-maintenance-nonstrict2/log.expect
new file mode 100644
index 00000000..05a77a24
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict2/log.expect
@@ -0,0 +1,40 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute crm node3 enable-node-maintenance
+info 125 node3/lrm: status change active => maintenance
+info 140 node1/crm: node 'node3': state changed from 'online' => 'maintenance'
+info 140 node1/crm: migrate service 'vm:101' to node 'node1' (running)
+info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info 141 node1/lrm: got lock 'ha_agent_node1_lock'
+info 141 node1/lrm: status change wait_for_agent_lock => active
+info 145 node3/lrm: service vm:101 - start migrate to node 'node1'
+info 145 node3/lrm: service vm:101 - end migrate to node 'node1'
+info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
+info 161 node1/lrm: starting service vm:101
+info 161 node1/lrm: service status vm:101 started
+info 220 cmdlist: execute crm node3 disable-node-maintenance
+info 225 node3/lrm: got lock 'ha_agent_node3_lock'
+info 225 node3/lrm: status change maintenance => active
+info 240 node1/crm: node 'node3': state changed from 'maintenance' => 'online'
+info 820 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-maintenance-nonstrict2/manager_status b/src/test/test-node-affinity-maintenance-nonstrict2/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict2/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-node-affinity-maintenance-nonstrict2/rules_config b/src/test/test-node-affinity-maintenance-nonstrict2/rules_config
new file mode 100644
index 00000000..f758b512
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict2/rules_config
@@ -0,0 +1,3 @@
+node-affinity: vm101-should-be-on-node3
+    nodes node3
+    resources vm:101
diff --git a/src/test/test-node-affinity-maintenance-nonstrict2/service_config b/src/test/test-node-affinity-maintenance-nonstrict2/service_config
new file mode 100644
index 00000000..c7266eec
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-nonstrict2/service_config
@@ -0,0 +1,3 @@
+{
+  "vm:101": { "node": "node3", "state": "started", "failback": 0 }
+}
diff --git a/src/test/test-node-affinity-maintenance-strict1/README b/src/test/test-node-affinity-maintenance-strict1/README
new file mode 100644
index 00000000..a31be5db
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict1/README
@@ -0,0 +1,3 @@
+Test whether an HA resource with failback enabled in a strict node affinity
+rule with a single node member will stay on the current node even though it is
+in maintenance mode, because it cannot find any replacement node.
diff --git a/src/test/test-node-affinity-maintenance-strict1/cmdlist b/src/test/test-node-affinity-maintenance-strict1/cmdlist
new file mode 100644
index 00000000..7e577b68
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict1/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "crm node3 enable-node-maintenance" ],
+    [ "crm node3 disable-node-maintenance" ]
+]
diff --git a/src/test/test-node-affinity-maintenance-strict1/hardware_status b/src/test/test-node-affinity-maintenance-strict1/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-maintenance-strict1/log.expect b/src/test/test-node-affinity-maintenance-strict1/log.expect
new file mode 100644
index 00000000..4bdc9122
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict1/log.expect
@@ -0,0 +1,35 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute crm node3 enable-node-maintenance
+info 125 node3/lrm: status change active => maintenance
+info 140 node1/crm: node 'node3': state changed from 'online' => 'maintenance'
+warn 140 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+warn 160 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+warn 180 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+warn 200 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+info 220 cmdlist: execute crm node3 disable-node-maintenance
+warn 220 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+info 240 node1/crm: node 'node3': state changed from 'maintenance' => 'online'
+info 240 node1/crm: service 'vm:101': clearing stale maintenance node 'node3' setting (is current node)
+info 820 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-maintenance-strict1/manager_status b/src/test/test-node-affinity-maintenance-strict1/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict1/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-node-affinity-maintenance-strict1/rules_config b/src/test/test-node-affinity-maintenance-strict1/rules_config
new file mode 100644
index 00000000..25aa655f
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict1/rules_config
@@ -0,0 +1,4 @@
+node-affinity: vm101-must-be-on-node3
+    nodes node3
+    resources vm:101
+    strict 1
diff --git a/src/test/test-node-affinity-maintenance-strict1/service_config b/src/test/test-node-affinity-maintenance-strict1/service_config
new file mode 100644
index 00000000..7f0b1bf9
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict1/service_config
@@ -0,0 +1,3 @@
+{
+  "vm:101": { "node": "node3", "state": "started" }
+}
diff --git a/src/test/test-node-affinity-maintenance-strict2/README b/src/test/test-node-affinity-maintenance-strict2/README
new file mode 100644
index 00000000..8a7f768d
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict2/README
@@ -0,0 +1,3 @@
+Test whether an HA resource with failback disabled in a strict node affinity
+rule with a single node member will stay on the current node even though it is
+in maintenance mode, because it cannot find any replacement node.
diff --git a/src/test/test-node-affinity-maintenance-strict2/cmdlist b/src/test/test-node-affinity-maintenance-strict2/cmdlist
new file mode 100644
index 00000000..7e577b68
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict2/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "crm node3 enable-node-maintenance" ],
+    [ "crm node3 disable-node-maintenance" ]
+]
diff --git a/src/test/test-node-affinity-maintenance-strict2/hardware_status b/src/test/test-node-affinity-maintenance-strict2/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict2/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-maintenance-strict2/log.expect b/src/test/test-node-affinity-maintenance-strict2/log.expect
new file mode 100644
index 00000000..4bdc9122
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict2/log.expect
@@ -0,0 +1,35 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute crm node3 enable-node-maintenance
+info 125 node3/lrm: status change active => maintenance
+info 140 node1/crm: node 'node3': state changed from 'online' => 'maintenance'
+warn 140 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+warn 160 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+warn 180 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+warn 200 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+info 220 cmdlist: execute crm node3 disable-node-maintenance
+warn 220 node1/crm: service 'vm:101': cannot find a replacement node while its current node is in maintenance
+info 240 node1/crm: node 'node3': state changed from 'maintenance' => 'online'
+info 240 node1/crm: service 'vm:101': clearing stale maintenance node 'node3' setting (is current node)
+info 820 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-maintenance-strict2/manager_status b/src/test/test-node-affinity-maintenance-strict2/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict2/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-node-affinity-maintenance-strict2/rules_config b/src/test/test-node-affinity-maintenance-strict2/rules_config
new file mode 100644
index 00000000..25aa655f
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict2/rules_config
@@ -0,0 +1,4 @@
+node-affinity: vm101-must-be-on-node3
+    nodes node3
+    resources vm:101
+    strict 1
diff --git a/src/test/test-node-affinity-maintenance-strict2/service_config b/src/test/test-node-affinity-maintenance-strict2/service_config
new file mode 100644
index 00000000..c7266eec
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict2/service_config
@@ -0,0 +1,3 @@
+{
+  "vm:101": { "node": "node3", "state": "started", "failback": 0 }
+}
diff --git a/src/test/test-node-affinity-maintenance-strict3/README b/src/test/test-node-affinity-maintenance-strict3/README
new file mode 100644
index 00000000..b5f5dfbb
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict3/README
@@ -0,0 +1,4 @@
+Test whether an HA resource with failback enabled in a strict node affinity
+rule with two differently prioritized node members will move to the
+lower-priority node if its current node is in maintenance mode and moves back
+to the previous maintenance node as soon as it's available again.
diff --git a/src/test/test-node-affinity-maintenance-strict3/cmdlist b/src/test/test-node-affinity-maintenance-strict3/cmdlist
new file mode 100644
index 00000000..7e577b68
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict3/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "crm node3 enable-node-maintenance" ],
+    [ "crm node3 disable-node-maintenance" ]
+]
diff --git a/src/test/test-node-affinity-maintenance-strict3/hardware_status b/src/test/test-node-affinity-maintenance-strict3/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict3/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-maintenance-strict3/log.expect b/src/test/test-node-affinity-maintenance-strict3/log.expect
new file mode 100644
index 00000000..0bdf4fa0
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict3/log.expect
@@ -0,0 +1,48 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute crm node3 enable-node-maintenance
+info 125 node3/lrm: status change active => maintenance
+info 140 node1/crm: node 'node3': state changed from 'online' => 'maintenance'
+info 140 node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 143 node2/lrm: got lock 'ha_agent_node2_lock'
+info 143 node2/lrm: status change wait_for_agent_lock => active
+info 145 node3/lrm: service vm:101 - start migrate to node 'node2'
+info 145 node3/lrm: service vm:101 - end migrate to node 'node2'
+info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
+info 163 node2/lrm: starting service vm:101
+info 163 node2/lrm: service status vm:101 started
+info 220 cmdlist: execute crm node3 disable-node-maintenance
+info 225 node3/lrm: got lock 'ha_agent_node3_lock'
+info 225 node3/lrm: status change maintenance => active
+info 240 node1/crm: node 'node3': state changed from 'maintenance' => 'online'
+info 240 node1/crm: moving service 'vm:101' back to 'node3', node came back from maintenance.
+info 240 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 240 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
+info 243 node2/lrm: service vm:101 - start migrate to node 'node3'
+info 243 node2/lrm: service vm:101 - end migrate to node 'node3'
+info 260 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 265 node3/lrm: starting service vm:101
+info 265 node3/lrm: service status vm:101 started
+info 820 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-maintenance-strict3/manager_status b/src/test/test-node-affinity-maintenance-strict3/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict3/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-node-affinity-maintenance-strict3/rules_config b/src/test/test-node-affinity-maintenance-strict3/rules_config
new file mode 100644
index 00000000..12539b76
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict3/rules_config
@@ -0,0 +1,4 @@
+node-affinity: vm101-must-be-on-node3-or-node2
+    nodes node2:1,node3:2
+    resources vm:101
+    strict 1
diff --git a/src/test/test-node-affinity-maintenance-strict3/service_config b/src/test/test-node-affinity-maintenance-strict3/service_config
new file mode 100644
index 00000000..7f0b1bf9
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict3/service_config
@@ -0,0 +1,3 @@
+{
+  "vm:101": { "node": "node3", "state": "started" }
+}
diff --git a/src/test/test-node-affinity-maintenance-strict4/README b/src/test/test-node-affinity-maintenance-strict4/README
new file mode 100644
index 00000000..43c68463
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict4/README
@@ -0,0 +1,3 @@
+Test whether an HA resource with failback disabled in a strict node affinity
+rule with two differently prioritized node members will move to the
+lower-priority node if its current node is in maintenance mode.
diff --git a/src/test/test-node-affinity-maintenance-strict4/cmdlist b/src/test/test-node-affinity-maintenance-strict4/cmdlist
new file mode 100644
index 00000000..7e577b68
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict4/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "crm node3 enable-node-maintenance" ],
+    [ "crm node3 disable-node-maintenance" ]
+]
diff --git a/src/test/test-node-affinity-maintenance-strict4/hardware_status b/src/test/test-node-affinity-maintenance-strict4/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict4/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-maintenance-strict4/log.expect b/src/test/test-node-affinity-maintenance-strict4/log.expect
new file mode 100644
index 00000000..6f19258c
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict4/log.expect
@@ -0,0 +1,40 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute crm node3 enable-node-maintenance
+info 125 node3/lrm: status change active => maintenance
+info 140 node1/crm: node 'node3': state changed from 'online' => 'maintenance'
+info 140 node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 143 node2/lrm: got lock 'ha_agent_node2_lock'
+info 143 node2/lrm: status change wait_for_agent_lock => active
+info 145 node3/lrm: service vm:101 - start migrate to node 'node2'
+info 145 node3/lrm: service vm:101 - end migrate to node 'node2'
+info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
+info 163 node2/lrm: starting service vm:101
+info 163 node2/lrm: service status vm:101 started
+info 220 cmdlist: execute crm node3 disable-node-maintenance
+info 225 node3/lrm: got lock 'ha_agent_node3_lock'
+info 225 node3/lrm: status change maintenance => active
+info 240 node1/crm: node 'node3': state changed from 'maintenance' => 'online'
+info 820 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-maintenance-strict4/manager_status b/src/test/test-node-affinity-maintenance-strict4/manager_status
new file mode 100644
index 00000000..0967ef42
--- /dev/null
+++ b/src/test/test-node-affinity-maintenance-strict4/manager_status
@@ -0,0 +1 @@
+{}
diff --git
a/src/test/test-node-affinity-maintenance-strict4/rules_config b/src/test/test-node-affinity-maintenance-strict4/rules_config new file mode 100644 index 00000000..12539b76 --- /dev/null +++ b/src/test/test-node-affinity-maintenance-strict4/rules_config @@ -0,0 +1,4 @@ +node-affinity: vm101-must-be-on-node3-or-node2 + nodes node2:1,node3:2 + resources vm:101 + strict 1 diff --git a/src/test/test-node-affinity-maintenance-strict4/service_config b/src/test/test-node-affinity-maintenance-strict4/service_config new file mode 100644 index 00000000..c7266eec --- /dev/null +++ b/src/test/test-node-affinity-maintenance-strict4/service_config @@ -0,0 +1,3 @@ +{ + "vm:101": { "node": "node3", "state": "started", "failback": 0 } +} -- 2.47.3
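For reviewers skimming the fixtures: the behaviour the strict3/strict4 log.expect files assert can be summarized in a short sketch. This is a simplified, hypothetical model in Python — the real logic lives in the HA Manager's Perl scheduler — showing how a rule with `nodes node2:1,node3:2` picks a target node, and how `failback 0` keeps the resource on the lower-priority node after maintenance ends:

```python
def select_node(rule_nodes, usable, current, failback):
    """Pick a target node for a resource under a strict node affinity rule.

    rule_nodes: dict mapping member node -> priority (higher wins)
    usable:     set of nodes that are online and not in maintenance
    current:    node the resource currently runs on
    failback:   whether the resource should return to higher-priority nodes
    """
    candidates = {n: p for n, p in rule_nodes.items() if n in usable}
    if not candidates:
        return None  # strict rule: no member node available
    if not failback and current in candidates:
        return current  # failback disabled: stay put while current node is allowed
    # highest priority wins; ties broken by node name for determinism
    return max(candidates, key=lambda n: (candidates[n], n))

rule = {"node2": 1, "node3": 2}  # from rules_config: nodes node2:1,node3:2

# node3 enters maintenance: both variants migrate vm:101 to node2
assert select_node(rule, {"node1", "node2"}, "node3", failback=False) == "node2"

# node3 returns: failback=0 keeps vm:101 on node2 (strict4),
# failback=1 migrates it back to node3 (strict3)
assert select_node(rule, {"node1", "node2", "node3"}, "node2", failback=False) == "node2"
assert select_node(rule, {"node1", "node2", "node3"}, "node2", failback=True) == "node3"
```

The names `select_node`, `usable`, and `candidates` are invented for this sketch; it only models the migration decisions visible in the expected logs above, not the actual scheduler implementation.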