From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from firstgate.proxmox.com (firstgate.proxmox.com [IPv6:2a01:7e0:0:424::9]) by lore.proxmox.com (Postfix) with ESMTPS id BAE721FF13B for ; Wed, 22 Apr 2026 12:01:22 +0200 (CEST) Received: from firstgate.proxmox.com (localhost [127.0.0.1]) by firstgate.proxmox.com (Proxmox) with ESMTP id 85CB413660; Wed, 22 Apr 2026 12:00:49 +0200 (CEST) From: Daniel Kral To: pve-devel@lists.proxmox.com Subject: [PATCH ha-manager 5/7] manager: make HA resource bundles move back to maintenance node Date: Wed, 22 Apr 2026 12:00:23 +0200 Message-ID: <20260422100035.232716-6-d.kral@proxmox.com> X-Mailer: git-send-email 2.47.3 In-Reply-To: <20260422100035.232716-1-d.kral@proxmox.com> References: <20260422100035.232716-1-d.kral@proxmox.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Bm-Milter-Handled: 55990f41-d878-4baa-be0a-ee34c49e34d2 X-Bm-Transport-Timestamp: 1776851951872 X-SPAM-LEVEL: Spam detection results: 0 AWL 0.079 Adjusted score from AWL reputation of From: address BAYES_00 -1.9 Bayes spam probability is 0 to 1% DMARC_MISSING 0.1 Missing DMARC policy KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record SPF_PASS -0.001 SPF: sender matches SPF record Message-ID-Hash: 5BEDVBPPV4LGEOHNOTMRQXCSG5P3BCBZ X-Message-ID-Hash: 5BEDVBPPV4LGEOHNOTMRQXCSG5P3BCBZ X-MailFrom: d.kral@proxmox.com X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; loop; banned-address; emergency; member-moderation; nonmember-moderation; administrivia; implicit-dest; max-recipients; max-size; news-moderation; no-subject; digests; suspicious-header X-Mailman-Version: 3.3.10 Precedence: list List-Id: Proxmox VE development discussion List-Help: List-Owner: List-Post: List-Subscribe: List-Unsubscribe: HA resources in positive resource affinity rules (HA resource bundles) always prefer their current, common node as soon as at least one of their HA 
resources is actively assigned to a node already. This logic is implemented in apply_positive_resource_affinity(), which will reduce the node set to only their current, common node. As the maintenance node is different from the HA resources' current node (unless no replacement node could be found for some reason), select_service_node() should move the HA resources back to the maintenance node before calling apply_positive_resource_affinity(). Signed-off-by: Daniel Kral --- src/PVE/HA/Manager.pm | 11 ++++++++- .../README | 3 ++- .../log.expect | 16 +++++++++++++ .../README | 3 ++- .../log.expect | 23 +++++++++++++++++++ 5 files changed, 53 insertions(+), 3 deletions(-) diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm index 795b98c1..ce5d69a4 100644 --- a/src/PVE/HA/Manager.pm +++ b/src/PVE/HA/Manager.pm @@ -352,11 +352,20 @@ sub select_service_node { } apply_negative_resource_affinity($separate, $pri_nodes); - apply_positive_resource_affinity($together, $pri_nodes); + # fall back to the previous maintenance node if it is available again. + # + # if the HA resource is in a resource bundle with at least one member already running, + # then apply_positive_resource_affinity() will reduce the node set to only + # their current, common node. + # therefore fall back here already, as $pri_nodes already has all other + # affinity rules applied and the HA resources in the resource bundle share + # the same maintenance node. 
return $maintenance_fallback if defined($maintenance_fallback) && $pri_nodes->{$maintenance_fallback}; + apply_positive_resource_affinity($together, $pri_nodes); + return $current_node if $node_preference eq 'none' && $pri_nodes->{$current_node}; my $scores = $online_node_usage->score_nodes_to_start_service($sid, $current_node); diff --git a/src/test/test-resource-affinity-maintenance-strict-positive1/README b/src/test/test-resource-affinity-maintenance-strict-positive1/README index 4b62e578..ab293cc5 100644 --- a/src/test/test-resource-affinity-maintenance-strict-positive1/README +++ b/src/test/test-resource-affinity-maintenance-strict-positive1/README @@ -1,3 +1,4 @@ Tests whether a strict positive resource affinity rule among two HA resources makes both HA resources move to the same replacement node in case their -current, common node is put in maintenance mode. +current, common node is put in maintenance mode and moves them back as soon as +the previous maintenance node is available again. diff --git a/src/test/test-resource-affinity-maintenance-strict-positive1/log.expect b/src/test/test-resource-affinity-maintenance-strict-positive1/log.expect index 5f91b877..91637279 100644 --- a/src/test/test-resource-affinity-maintenance-strict-positive1/log.expect +++ b/src/test/test-resource-affinity-maintenance-strict-positive1/log.expect @@ -48,4 +48,20 @@ info 220 cmdlist: execute crm node3 disable-node-maintenance info 225 node3/lrm: got lock 'ha_agent_node3_lock' info 225 node3/lrm: status change maintenance => active info 240 node1/crm: node 'node3': state changed from 'maintenance' => 'online' +info 240 node1/crm: moving service 'vm:101' back to 'node3', node came back from maintenance. +info 240 node1/crm: migrate service 'vm:101' to node 'node3' (running) +info 240 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3) +info 240 node1/crm: moving service 'vm:102' back to 'node3', node came back from maintenance. 
+info 240 node1/crm: migrate service 'vm:102' to node 'node3' (running) +info 240 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node1, target = node3) +info 241 node1/lrm: service vm:101 - start migrate to node 'node3' +info 241 node1/lrm: service vm:101 - end migrate to node 'node3' +info 241 node1/lrm: service vm:102 - start migrate to node 'node3' +info 241 node1/lrm: service vm:102 - end migrate to node 'node3' +info 260 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3) +info 260 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node3) +info 265 node3/lrm: starting service vm:101 +info 265 node3/lrm: service status vm:101 started +info 265 node3/lrm: starting service vm:102 +info 265 node3/lrm: service status vm:102 started info 820 hardware: exit simulation - done diff --git a/src/test/test-resource-affinity-maintenance-strict-positive2/README b/src/test/test-resource-affinity-maintenance-strict-positive2/README index 32f0942b..dcc4c81d 100644 --- a/src/test/test-resource-affinity-maintenance-strict-positive2/README +++ b/src/test/test-resource-affinity-maintenance-strict-positive2/README @@ -2,7 +2,8 @@ Tests whether a strict positive resource affinity rule among three HA resources, where two of them are already on a common node but the other HA resource is still on another node, makes the former two HA resources move to the node of the other HA resource as their current common node is put in -maintenance mode. +maintenance mode and moves them back as soon as the previous maintenance node +is available again. 
The "skip-round crm 1" command ensures that the HA Manager will not move the dislocated, third HA resource to the common node, but make the LRM acknowledge diff --git a/src/test/test-resource-affinity-maintenance-strict-positive2/log.expect b/src/test/test-resource-affinity-maintenance-strict-positive2/log.expect index ef63c8ca..9da6d968 100644 --- a/src/test/test-resource-affinity-maintenance-strict-positive2/log.expect +++ b/src/test/test-resource-affinity-maintenance-strict-positive2/log.expect @@ -43,4 +43,27 @@ info 120 cmdlist: execute crm node1 disable-node-maintenance info 121 node1/lrm: got lock 'ha_agent_node1_lock' info 121 node1/lrm: status change maintenance => active info 140 node1/crm: node 'node1': state changed from 'maintenance' => 'online' +info 140 node1/crm: moving service 'vm:101' back to 'node1', node came back from maintenance. +info 140 node1/crm: migrate service 'vm:101' to node 'node1' (running) +info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1) +info 140 node1/crm: moving service 'vm:102' back to 'node1', node came back from maintenance. 
+info 140 node1/crm: migrate service 'vm:102' to node 'node1' (running) +info 140 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node3, target = node1) +info 140 node1/crm: migrate service 'vm:103' to node 'node1' (running) +info 140 node1/crm: service 'vm:103': state changed from 'started' to 'migrate' (node = node3, target = node1) +info 145 node3/lrm: service vm:101 - start migrate to node 'node1' +info 145 node3/lrm: service vm:101 - end migrate to node 'node1' +info 145 node3/lrm: service vm:102 - start migrate to node 'node1' +info 145 node3/lrm: service vm:102 - end migrate to node 'node1' +info 145 node3/lrm: service vm:103 - start migrate to node 'node1' +info 145 node3/lrm: service vm:103 - end migrate to node 'node1' +info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1) +info 160 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node1) +info 160 node1/crm: service 'vm:103': state changed from 'migrate' to 'started' (node = node1) +info 161 node1/lrm: starting service vm:101 +info 161 node1/lrm: service status vm:101 started +info 161 node1/lrm: starting service vm:102 +info 161 node1/lrm: service status vm:102 started +info 161 node1/lrm: starting service vm:103 +info 161 node1/lrm: service status vm:103 started info 720 hardware: exit simulation - done -- 2.47.3
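
P.S.: the ordering argument in the commit message can be illustrated with a small, self-contained sketch. This is a hypothetical, heavily simplified model of the node selection step — pick_node(), apply_positive_affinity(), and the hash shapes are illustrative stand-ins, not the real select_service_node() internals from src/PVE/HA/Manager.pm:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simplified stand-in for apply_positive_resource_affinity(): keep only
# nodes that already run a member of the resource bundle.
sub apply_positive_affinity {
    my ($together, $pri_nodes) = @_;
    for my $node (keys %$pri_nodes) {
        delete $pri_nodes->{$node} if !$together->{$node};
    }
}

# Hypothetical node picker demonstrating the ordering this patch fixes:
# the maintenance-node fallback must be checked while the node is still
# in the candidate set, i.e. *before* positive affinity shrinks it.
sub pick_node {
    my ($pri_nodes, $together, $maintenance_fallback) = @_;

    # fall back to the previous maintenance node first ...
    return $maintenance_fallback
        if defined($maintenance_fallback) && $pri_nodes->{$maintenance_fallback};

    # ... only then restrict to the bundle's current, common node
    apply_positive_affinity($together, $pri_nodes);

    my ($best) = sort keys %$pri_nodes;
    return $best;
}

# node3 came back from maintenance; both bundle members currently run on
# node1 (the replacement node), so positive affinity alone would pin them
# there. With the fallback checked first, node3 wins again.
my %pri_nodes = (node1 => 1, node2 => 1, node3 => 1);
my %together  = (node1 => 2);

print pick_node(\%pri_nodes, \%together, 'node3'), "\n";    # node3
```

If the fallback check ran after apply_positive_affinity(), the candidate set would already be reduced to node1 and the bundle could never leave its replacement node — which is exactly the behavior the hunk above corrects by moving the `return $maintenance_fallback` before the apply_positive_resource_affinity() call.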