public inbox for pve-devel@lists.proxmox.com
From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH ha-manager v3 13/17] fix #1497: handle node affinity rules with failback in manual migrations
Date: Mon, 11 May 2026 11:46:34 +0200	[thread overview]
Message-ID: <20260511094707.142930-14-d.kral@proxmox.com> (raw)
In-Reply-To: <20260511094707.142930-1-d.kral@proxmox.com>

Do not execute a manual user migration of an HA resource to a target
node that is not one of the highest-priority nodes if the HA resource
has failback set.

This prevents users from moving an HA resource that would immediately
be failed back to a higher-priority node of its strict or non-strict
node affinity rule, which only wastes time and resources.

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- add 'fix #1497' prefix
- use the new global service config hash ref to avoid rather confusingly
  passing the service config hash through various method calls
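
For reference, the per-node check this patch adds to Helpers.pm's
get_resource_motion_info can be sketched as follows. This is a Python
illustration of the intended semantics, not part of the patch; the
function name `blocked_nodes` is hypothetical, and `pri_nodes` is assumed
to be the highest-priority node set that get_node_affinity now returns as
its second value:

```python
def blocked_nodes(online_nodes, allowed_nodes, pri_nodes, cd):
    """Return the set of online nodes a manual migration may not target.

    A node blocks the migration if the node affinity rule disallows it
    outright, or if the resource has failback enabled and the node is
    not one of the highest-priority nodes (the resource would be failed
    back immediately after the migration completed).
    """
    blocked = set()
    for node in online_nodes:
        if node not in allowed_nodes or (cd.get("failback") and node not in pri_nodes):
            blocked.add(node)
    return blocked
```

For example, with failback enabled and node3 as the single highest-priority
node of a non-strict rule, migrations to node1 and node2 are now rejected
even though both are allowed nodes; with failback disabled, none of the
three nodes blocks the migration.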

 src/PVE/HA/Config.pm                          | 11 +++++--
 src/PVE/HA/Helpers.pm                         |  6 ++--
 src/PVE/HA/Manager.pm                         |  7 ++--
 .../test-node-affinity-nonstrict1/log.expect  | 16 +---------
 .../test-node-affinity-nonstrict7/log.expect  | 32 +++----------------
 .../test-node-affinity-strict7/log.expect     | 18 ++---------
 6 files changed, 24 insertions(+), 66 deletions(-)

diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
index d78f7179..e9551dfb 100644
--- a/src/PVE/HA/Config.pm
+++ b/src/PVE/HA/Config.pm
@@ -398,22 +398,27 @@ sub service_is_configured {
 sub get_resource_motion_info {
     my ($sid) = @_;
 
-    my $resources = read_resources_config();
+    my $cfg = read_resources_config();
 
     my $dependent_resources = [];
     my $blocking_resources_by_node = {};
 
-    if (&$service_check_ha_state($resources, $sid)) {
+    if (&$service_check_ha_state($cfg, $sid)) {
         my $manager_status = read_manager_status();
         my $ss = $manager_status->{service_status};
         my $ns = $manager_status->{node_status};
         # get_resource_motion_info expects a hashset of all nodes with status 'online'
         my $online_nodes = { map { $ns->{$_} eq 'online' ? ($_ => 1) : () } keys %$ns };
+        # get_resource_motion_info expects a resource config with defaults set
+        my $resources = checked_resources_config($cfg);
 
         my $compiled_rules = read_and_compile_rules_config();
 
+        my $cd = $resources->{$sid} // {};
         ($dependent_resources, $blocking_resources_by_node) =
-            PVE::HA::Helpers::get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules);
+            PVE::HA::Helpers::get_resource_motion_info(
+                $ss, $sid, $cd, $online_nodes, $compiled_rules,
+            );
     }
 
     return ($dependent_resources, $blocking_resources_by_node);
diff --git a/src/PVE/HA/Helpers.pm b/src/PVE/HA/Helpers.pm
index b160c541..a58b1e12 100644
--- a/src/PVE/HA/Helpers.pm
+++ b/src/PVE/HA/Helpers.pm
@@ -18,13 +18,13 @@ causes that make the node unavailable to C<$sid>.
 
 =cut
 
-sub get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules) {
+sub get_resource_motion_info($ss, $sid, $cd, $online_nodes, $compiled_rules) {
     my $dependent_resources = [];
     my $blocking_resources_by_node = {};
 
     my ($node_affinity, $resource_affinity) =
         $compiled_rules->@{qw(node-affinity resource-affinity)};
-    my ($allowed_nodes) = get_node_affinity($node_affinity, $sid, $online_nodes);
+    my ($allowed_nodes, $pri_nodes) = get_node_affinity($node_affinity, $sid, $online_nodes);
     my ($together, $separate) = get_affinitive_resources($resource_affinity, $sid);
 
     for my $csid (sort keys %$together) {
@@ -35,7 +35,7 @@ sub get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules) {
     }
 
     for my $node (keys %$online_nodes) {
-        if (!$allowed_nodes->{$node}) {
+        if (!$allowed_nodes->{$node} || ($cd->{failback} && !$pri_nodes->{$node})) {
             push $blocking_resources_by_node->{$node}->@*,
                 {
                     sid => $sid,
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 9e8095e1..94be6472 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -583,11 +583,14 @@ sub read_lrm_status {
 sub queue_resource_motion {
     my ($self, $cmd, $task, $sid, $target) = @_;
 
-    my ($haenv, $ss, $ns, $compiled_rules) = $self->@{qw(haenv ss ns compiled_rules)};
+    my ($haenv, $sc, $ss, $ns, $compiled_rules) = $self->@{qw(haenv sc ss ns compiled_rules)};
     my $online_nodes = { map { $_ => 1 } $self->{ns}->list_online_nodes()->@* };
+    my $cd = $sc->{$sid};
 
     my ($dependent_resources, $blocking_resources_by_node) =
-        PVE::HA::Helpers::get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules);
+        PVE::HA::Helpers::get_resource_motion_info(
+            $ss, $sid, $cd, $online_nodes, $compiled_rules,
+        );
 
     if (my $blocking_resources = $blocking_resources_by_node->{$target}) {
         for my $blocking_resource (@$blocking_resources) {
diff --git a/src/test/test-node-affinity-nonstrict1/log.expect b/src/test/test-node-affinity-nonstrict1/log.expect
index d86c69de..ca2c40b3 100644
--- a/src/test/test-node-affinity-nonstrict1/log.expect
+++ b/src/test/test-node-affinity-nonstrict1/log.expect
@@ -22,19 +22,5 @@ info     25    node3/lrm: status change wait_for_agent_lock => active
 info     25    node3/lrm: starting service vm:101
 info     25    node3/lrm: service status vm:101 started
 info    120      cmdlist: execute service vm:101 migrate node2
-info    120    node1/crm: got crm command: migrate vm:101 node2
-info    120    node1/crm: migrate service 'vm:101' to node 'node2'
-info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
-info    123    node2/lrm: got lock 'ha_agent_node2_lock'
-info    123    node2/lrm: status change wait_for_agent_lock => active
-info    125    node3/lrm: service vm:101 - start migrate to node 'node2'
-info    125    node3/lrm: service vm:101 - end migrate to node 'node2'
-info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
-info    140    node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info    140    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node2, target = node3)
-info    143    node2/lrm: service vm:101 - start migrate to node 'node3'
-info    143    node2/lrm: service vm:101 - end migrate to node 'node3'
-info    160    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
-info    165    node3/lrm: starting service vm:101
-info    165    node3/lrm: service status vm:101 started
+err     120    node1/crm: crm command 'migrate vm:101 node2' error - service 'vm:101' is not allowed on node 'node2'
 info    720     hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict7/log.expect b/src/test/test-node-affinity-nonstrict7/log.expect
index 31daa618..54e824ea 100644
--- a/src/test/test-node-affinity-nonstrict7/log.expect
+++ b/src/test/test-node-affinity-nonstrict7/log.expect
@@ -28,35 +28,9 @@ info     25    node3/lrm: status change wait_for_agent_lock => active
 info     25    node3/lrm: starting service vm:101
 info     25    node3/lrm: service status vm:101 started
 info    120      cmdlist: execute service vm:101 migrate node1
-info    120    node1/crm: got crm command: migrate vm:101 node1
-info    120    node1/crm: migrate service 'vm:101' to node 'node1'
-info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node1)
-info    121    node1/lrm: got lock 'ha_agent_node1_lock'
-info    121    node1/lrm: status change wait_for_agent_lock => active
-info    125    node3/lrm: service vm:101 - start migrate to node 'node1'
-info    125    node3/lrm: service vm:101 - end migrate to node 'node1'
-info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node1)
-info    140    node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info    140    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node1, target = node3)
-info    141    node1/lrm: service vm:101 - start migrate to node 'node3'
-info    141    node1/lrm: service vm:101 - end migrate to node 'node3'
-info    160    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
-info    165    node3/lrm: starting service vm:101
-info    165    node3/lrm: service status vm:101 started
+err     120    node1/crm: crm command 'migrate vm:101 node1' error - service 'vm:101' is not allowed on node 'node1'
 info    220      cmdlist: execute service vm:101 migrate node2
-info    220    node1/crm: got crm command: migrate vm:101 node2
-info    220    node1/crm: migrate service 'vm:101' to node 'node2'
-info    220    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
-info    225    node3/lrm: service vm:101 - start migrate to node 'node2'
-info    225    node3/lrm: service vm:101 - end migrate to node 'node2'
-info    240    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
-info    240    node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info    240    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node2, target = node3)
-info    243    node2/lrm: service vm:101 - start migrate to node 'node3'
-info    243    node2/lrm: service vm:101 - end migrate to node 'node3'
-info    260    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
-info    265    node3/lrm: starting service vm:101
-info    265    node3/lrm: service status vm:101 started
+err     220    node1/crm: crm command 'migrate vm:101 node2' error - service 'vm:101' is not allowed on node 'node2'
 info    320      cmdlist: execute service vm:101 migrate node3
 info    320    node1/crm: ignore crm command - service already on target node: migrate vm:101 node3
 info    420      cmdlist: execute service vm:102 migrate node3
@@ -81,6 +55,8 @@ info    620      cmdlist: execute service vm:102 migrate node1
 info    620    node1/crm: got crm command: migrate vm:102 node1
 info    620    node1/crm: migrate service 'vm:102' to node 'node1'
 info    620    node1/crm: service 'vm:102': state changed from 'started' to 'migrate'  (node = node2, target = node1)
+info    621    node1/lrm: got lock 'ha_agent_node1_lock'
+info    621    node1/lrm: status change wait_for_agent_lock => active
 info    623    node2/lrm: service vm:102 - start migrate to node 'node1'
 info    623    node2/lrm: service vm:102 - end migrate to node 'node1'
 info    640    node1/crm: service 'vm:102': state changed from 'migrate' to 'started'  (node = node1)
diff --git a/src/test/test-node-affinity-strict7/log.expect b/src/test/test-node-affinity-strict7/log.expect
index 9c4e9f0b..ae8e43fb 100644
--- a/src/test/test-node-affinity-strict7/log.expect
+++ b/src/test/test-node-affinity-strict7/log.expect
@@ -28,21 +28,7 @@ info     25    node3/lrm: status change wait_for_agent_lock => active
 info     25    node3/lrm: starting service vm:101
 info     25    node3/lrm: service status vm:101 started
 info    120      cmdlist: execute service vm:101 migrate node1
-info    120    node1/crm: got crm command: migrate vm:101 node1
-info    120    node1/crm: migrate service 'vm:101' to node 'node1'
-info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node1)
-info    121    node1/lrm: got lock 'ha_agent_node1_lock'
-info    121    node1/lrm: status change wait_for_agent_lock => active
-info    125    node3/lrm: service vm:101 - start migrate to node 'node1'
-info    125    node3/lrm: service vm:101 - end migrate to node 'node1'
-info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node1)
-info    140    node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info    140    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node1, target = node3)
-info    141    node1/lrm: service vm:101 - start migrate to node 'node3'
-info    141    node1/lrm: service vm:101 - end migrate to node 'node3'
-info    160    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
-info    165    node3/lrm: starting service vm:101
-info    165    node3/lrm: service status vm:101 started
+err     120    node1/crm: crm command 'migrate vm:101 node1' error - service 'vm:101' is not allowed on node 'node1'
 info    220      cmdlist: execute service vm:101 migrate node2
 err     220    node1/crm: crm command 'migrate vm:101 node2' error - service 'vm:101' is not allowed on node 'node2'
 info    320      cmdlist: execute service vm:101 migrate node3
@@ -55,6 +41,8 @@ info    620      cmdlist: execute service vm:102 migrate node1
 info    620    node1/crm: got crm command: migrate vm:102 node1
 info    620    node1/crm: migrate service 'vm:102' to node 'node1'
 info    620    node1/crm: service 'vm:102': state changed from 'started' to 'migrate'  (node = node2, target = node1)
+info    621    node1/lrm: got lock 'ha_agent_node1_lock'
+info    621    node1/lrm: status change wait_for_agent_lock => active
 info    623    node2/lrm: service vm:102 - start migrate to node 'node1'
 info    623    node2/lrm: service vm:102 - end migrate to node 'node1'
 info    640    node1/crm: service 'vm:102': state changed from 'migrate' to 'started'  (node = node1)
-- 
2.47.3

Thread overview: 20+ messages
2026-05-11  9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 01/17] ha: put source files on individual new lines Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 02/17] d/pve-ha-manager.install: remove duplicate Config.pm Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 03/17] config: group and sort use statements Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 04/17] manager: " Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 05/17] manager: report all reasons when resources are blocked from migration Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 06/17] config, manager: factor out resource motion info logic Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 07/17] tests: add test cases for migrating resources with node affinity rules Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 08/17] fix #1497: handle strict node affinity rules in manual migrations Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 09/17] config: improve variable names in read_and_check_resources_config Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 10/17] config: factor out checked_resources_config helper Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 11/17] manager: store global reference to service config hash Daniel Kral
2026-05-11  9:46 ` [PATCH ha-manager v3 12/17] manager: remove duplicate service config read in update_crm_commands Daniel Kral
2026-05-11  9:46 ` Daniel Kral [this message]
2026-05-11  9:46 ` [PATCH ha-manager v3 14/17] config: remove duplicate config reads in get_resource_motion_info Daniel Kral
2026-05-11  9:46 ` [PATCH qemu-server v3 15/17] api: migration preconditions: add node affinity as blocking cause Daniel Kral
2026-05-11  9:46 ` [PATCH container v3 16/17] " Daniel Kral
2026-05-11  9:46 ` [PATCH manager v3 17/17] ui: migrate: display precondition messages for ha node affinity Daniel Kral
2026-05-15  4:51 ` applied: [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Thomas Lamprecht
2026-05-15  5:04 ` Thomas Lamprecht
