* [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497)
@ 2026-05-11 9:46 Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 01/17] ha: put source files on individual new lines Daniel Kral
` (18 more replies)
0 siblings, 19 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
v1: https://lore.proxmox.com/pve-devel/20251215155334.476984-1-d.kral@proxmox.com/
v2: https://lore.proxmox.com/pve-devel/20260120152755.499037-1-d.kral@proxmox.com/
Changes v2 -> v3:
- rebase on master (relevant: {dis,}arm-ha and load balancer)
- add 'fix #1497' prefix to relevant patches
- add patch to make service config hash global and remove a duplicate
service config read from the Manager class
Changes v1 -> v2 (Thanks @Fiona!):
- rebase on master
- fix initial node assignments in test case descriptions
- make get_resource_motion_info(...) only read files once (introducing 3
additional patches)
Tested the changes with strict & non-strict HA node affinity rules and
manual migrations (in the CLI & web interface), with failback both set
and cleared, and ran `git rebase master --exec 'make clean && make deb'`
and `make tidy` on all repositories.
If pve-ha-manager is applied but not the other packages, it might
wrongly show that the HA resource cannot be moved because of resource
affinity rules. So it might be nice to have a version bump here.
This patch series implements node affinity rule migration blockers
similar to the blockers introduced with resource affinity rules.
The node affinity rule migration blockers prevent users from migrating
HA resources to nodes from which they would be migrated away again
immediately afterwards. This includes:
- online nodes that are not part of the strict node affinity rule's
  allowed node set at all, or
- if the HA resource has failback set, online nodes that are not in the
  currently highest-priority group of the strict or non-strict node
  affinity rule.
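For illustration only, here is a minimal sketch of the two blocking
conditions above (the subroutine name and parameters are assumptions for
this sketch, not identifiers from the actual patches):

```perl
#!/usr/bin/perl
use v5.36;  # strict, warnings, say, and sub signatures

# Hypothetical helper: decide whether a manual migration target node is
# blocked by a node affinity rule, per the two cases from the cover letter.
sub is_target_blocked ($target, $allowed_nodes, $highest_prio_nodes, $failback, $strict) {
    # strict rule: targets outside the allowed node set are always blocked
    return 1 if $strict && !$allowed_nodes->{$target};

    # with failback set, a target outside the currently highest-priority
    # group would be left again immediately, so block it too
    return 1 if $failback && !$highest_prio_nodes->{$target};

    return 0;
}

# example rule: vm may run on node1 (prio 1) or node3 (prio 2)
my $allowed = { node1 => 1, node3 => 1 };
my $highest = { node3 => 1 };

say is_target_blocked('node2', $allowed, $highest, 1, 1); # 1: not in allowed set
say is_target_blocked('node1', $allowed, $highest, 1, 0); # 1: failback prefers node3
say is_target_blocked('node1', $allowed, $highest, 0, 0); # 0: no failback, non-strict
```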
The first few patches are some overall cleanup for things the series
touches, deduplicate the resource_motion_info logic by sharing it
between the Manager and the public
PVE::HA::Config::get_resource_motion_info(...), and expose this
information in the relevant VM/LXC API handlers and the web interface.
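As a hedged sketch of the data shape involved: per the POD added for
PVE::HA::Helpers further down in the series, get_resource_motion_info(...)
returns a list of dependent resource ids and a per-node hash of blockers.
The concrete values and the 'node-affinity' cause string below are made up
for illustration, not taken from the patches:

```perl
use v5.36;

# Illustrative (assumed) result shape of get_resource_motion_info(...):
# first element: resources that must move along with the queried resource,
# second element: node => list of { sid, cause } entries blocking that node.
my ($dependent_resources, $blocking_resources_by_node) = (
    ['vm:102'],
    {
        node2 => [{ sid => 'vm:103', cause => 'resource-affinity' }],
        node3 => [{ sid => 'ct:104', cause => 'node-affinity' }],
    },
);

# an API handler or the web UI can render one message per node and blocker
for my $node (sort keys %$blocking_resources_by_node) {
    for my $blocker ($blocking_resources_by_node->{$node}->@*) {
        say "node '$node' blocked by '$blocker->{sid}' ($blocker->{cause})";
    }
}
```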
ha-manager:
Daniel Kral (14):
ha: put source files on individual new lines
d/pve-ha-manager.install: remove duplicate Config.pm
config: group and sort use statements
manager: group and sort use statements
manager: report all reasons when resources are blocked from migration
config, manager: factor out resource motion info logic
tests: add test cases for migrating resources with node affinity rules
fix #1497: handle strict node affinity rules in manual migrations
config: improve variable names in read_and_check_resources_config
config: factor out checked_resources_config helper
manager: store global reference to service config hash
manager: remove duplicate service config read in update_crm_commands
fix #1497: handle node affinity rules with failback in manual
migrations
config: remove duplicate config reads in get_resource_motion_info
debian/pve-ha-manager.install | 2 +-
src/PVE/API2/HA/Resources.pm | 4 +-
src/PVE/CLI/ha_manager.pm | 14 +--
src/PVE/HA/Config.pm | 88 ++++++++-----------
src/PVE/HA/Helpers.pm | 63 +++++++++++++
src/PVE/HA/Makefile | 16 +++-
src/PVE/HA/Manager.pm | 72 ++++++++-------
.../test-node-affinity-nonstrict1/log.expect | 16 +---
src/test/test-node-affinity-nonstrict7/README | 9 ++
.../test-node-affinity-nonstrict7/cmdlist | 9 ++
.../hardware_status | 5 ++
.../test-node-affinity-nonstrict7/log.expect | 65 ++++++++++++++
.../manager_status | 1 +
.../rules_config | 7 ++
.../service_config | 4 +
.../test-node-affinity-strict1/log.expect | 16 +---
.../test-node-affinity-strict2/log.expect | 16 +---
src/test/test-node-affinity-strict7/README | 9 ++
src/test/test-node-affinity-strict7/cmdlist | 9 ++
.../hardware_status | 5 ++
.../test-node-affinity-strict7/log.expect | 51 +++++++++++
.../test-node-affinity-strict7/manager_status | 1 +
.../test-node-affinity-strict7/rules_config | 9 ++
.../test-node-affinity-strict7/service_config | 4 +
src/test/test-recovery4/log.expect | 2 +-
25 files changed, 355 insertions(+), 142 deletions(-)
create mode 100644 src/PVE/HA/Helpers.pm
create mode 100644 src/test/test-node-affinity-nonstrict7/README
create mode 100644 src/test/test-node-affinity-nonstrict7/cmdlist
create mode 100644 src/test/test-node-affinity-nonstrict7/hardware_status
create mode 100644 src/test/test-node-affinity-nonstrict7/log.expect
create mode 100644 src/test/test-node-affinity-nonstrict7/manager_status
create mode 100644 src/test/test-node-affinity-nonstrict7/rules_config
create mode 100644 src/test/test-node-affinity-nonstrict7/service_config
create mode 100644 src/test/test-node-affinity-strict7/README
create mode 100644 src/test/test-node-affinity-strict7/cmdlist
create mode 100644 src/test/test-node-affinity-strict7/hardware_status
create mode 100644 src/test/test-node-affinity-strict7/log.expect
create mode 100644 src/test/test-node-affinity-strict7/manager_status
create mode 100644 src/test/test-node-affinity-strict7/rules_config
create mode 100644 src/test/test-node-affinity-strict7/service_config
qemu-server:
Daniel Kral (1):
api: migration preconditions: add node affinity as blocking cause
src/PVE/API2/Qemu.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
container:
Daniel Kral (1):
api: migration preconditions: add node affinity as blocking cause
src/PVE/API2/LXC.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
manager:
Daniel Kral (1):
ui: migrate: display precondition messages for ha node affinity
www/manager6/window/Migrate.js | 10 ++++++++++
1 file changed, 10 insertions(+)
Summary over all repositories:
28 files changed, 367 insertions(+), 144 deletions(-)
--
Generated by murpp 0.11.0
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH ha-manager v3 01/17] ha: put source files on individual new lines
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 02/17] d/pve-ha-manager.install: remove duplicate Config.pm Daniel Kral
` (17 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
There are quite a lot of source files in the list already. To reduce
noise in diffs when this list changes, put each file on its own line.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes v2 -> v3:
- none
src/PVE/HA/Makefile | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/src/PVE/HA/Makefile b/src/PVE/HA/Makefile
index 0b240e1e..1aeb976b 100644
--- a/src/PVE/HA/Makefile
+++ b/src/PVE/HA/Makefile
@@ -1,5 +1,16 @@
-SIM_SOURCES=CRM.pm Env.pm Groups.pm HashTools.pm Rules.pm Resources.pm LRM.pm \
- Manager.pm NodeStatus.pm Tools.pm FenceConfig.pm Fence.pm Usage.pm
+SIM_SOURCES=CRM.pm \
+ Env.pm \
+ Groups.pm \
+ HashTools.pm \
+ Rules.pm \
+ Resources.pm \
+ LRM.pm \
+ Manager.pm \
+ NodeStatus.pm \
+ Tools.pm \
+ FenceConfig.pm \
+ Fence.pm \
+ Usage.pm
SOURCES=${SIM_SOURCES} Config.pm
--
2.47.3
* [PATCH ha-manager v3 02/17] d/pve-ha-manager.install: remove duplicate Config.pm
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 01/17] ha: put source files on individual new lines Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 03/17] config: group and sort use statements Daniel Kral
` (16 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes v2 -> v3:
- none
debian/pve-ha-manager.install | 1 -
1 file changed, 1 deletion(-)
diff --git a/debian/pve-ha-manager.install b/debian/pve-ha-manager.install
index 75220a0b..23e063af 100644
--- a/debian/pve-ha-manager.install
+++ b/debian/pve-ha-manager.install
@@ -21,7 +21,6 @@
/usr/share/perl5/PVE/CLI/ha_manager.pm
/usr/share/perl5/PVE/HA/CRM.pm
/usr/share/perl5/PVE/HA/Config.pm
-/usr/share/perl5/PVE/HA/Config.pm
/usr/share/perl5/PVE/HA/Env.pm
/usr/share/perl5/PVE/HA/Env/PVE2.pm
/usr/share/perl5/PVE/HA/Fence.pm
--
2.47.3
* [PATCH ha-manager v3 03/17] config: group and sort use statements
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 01/17] ha: put source files on individual new lines Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 02/17] d/pve-ha-manager.install: remove duplicate Config.pm Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 04/17] manager: " Daniel Kral
` (15 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Group and sort use statements according to our Perl Style guide [0].
[0] https://pve.proxmox.com/wiki/Perl_Style_Guide#Module_Dependencies
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes v2 -> v3:
- none
src/PVE/HA/Config.pm | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
index a34a3022..8e24ece3 100644
--- a/src/PVE/HA/Config.pm
+++ b/src/PVE/HA/Config.pm
@@ -5,12 +5,13 @@ use warnings;
use JSON;
-use PVE::HA::Tools;
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file cfs_lock_file);
+
use PVE::HA::Groups;
+use PVE::HA::Resources;
use PVE::HA::Rules;
use PVE::HA::Rules::ResourceAffinity qw(get_affinitive_resources);
-use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file cfs_lock_file);
-use PVE::HA::Resources;
+use PVE::HA::Tools;
my $manager_status_filename = "ha/manager_status";
my $ha_groups_config = "ha/groups.cfg";
--
2.47.3
* [PATCH ha-manager v3 04/17] manager: group and sort use statements
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
` (2 preceding siblings ...)
2026-05-11 9:46 ` [PATCH ha-manager v3 03/17] config: group and sort use statements Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 05/17] manager: report all reasons when resources are blocked from migration Daniel Kral
` (14 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Group and sort use statements according to our Perl Style guide [0].
[0] https://pve.proxmox.com/wiki/Perl_Style_Guide#Module_Dependencies
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes v2 -> v3:
- none
src/PVE/HA/Manager.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index b69a6bba..559fc4fa 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -6,13 +6,14 @@ use warnings;
use Digest::MD5 qw(md5_base64);
use PVE::Tools;
+
use PVE::HA::Groups;
-use PVE::HA::Tools ':exit_codes';
use PVE::HA::NodeStatus;
use PVE::HA::Rules;
use PVE::HA::Rules::NodeAffinity qw(get_node_affinity);
use PVE::HA::Rules::ResourceAffinity
qw(get_affinitive_resources get_resource_affinity apply_positive_resource_affinity apply_negative_resource_affinity);
+use PVE::HA::Tools ':exit_codes';
use PVE::HA::Usage::Basic;
my $have_static_scheduling;
--
2.47.3
* [PATCH ha-manager v3 05/17] manager: report all reasons when resources are blocked from migration
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
` (3 preceding siblings ...)
2026-05-11 9:46 ` [PATCH ha-manager v3 04/17] manager: " Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 06/17] config, manager: factor out resource motion info logic Daniel Kral
` (13 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
PVE::HA::Config::get_resource_motion_info(...) already reports all
reasons to callers, so log that information in the HA Manager as well.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes v2 -> v3:
- none
src/PVE/HA/Manager.pm | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 559fc4fa..af14ea74 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -586,6 +586,7 @@ sub queue_resource_motion {
my $resource_affinity = $self->{compiled_rules}->{'resource-affinity'};
my ($together, $separate) = get_affinitive_resources($resource_affinity, $sid);
+ my $blocked_from_migration;
for my $csid (sort keys %$separate) {
next if !defined($ss->{$csid});
next if $ss->{$csid}->{state} eq 'ignored';
@@ -598,9 +599,11 @@ sub queue_resource_motion {
. " negative affinity with service '$sid'",
);
- return; # one negative resource affinity is enough to not execute migration
+ $blocked_from_migration = 1;
}
+ return if $blocked_from_migration;
+
$haenv->log('info', "got crm command: $cmd");
$ss->{$sid}->{cmd} = [$task, $target];
--
2.47.3
* [PATCH ha-manager v3 06/17] config, manager: factor out resource motion info logic
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
` (4 preceding siblings ...)
2026-05-11 9:46 ` [PATCH ha-manager v3 05/17] manager: report all reasons when resources are blocked from migration Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 07/17] tests: add test cases for migrating resources with node affinity rules Daniel Kral
` (12 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
The logic to gather dependent and blocking HA resources in
execute_migration(...) and get_resource_motion_info(...) is equivalent,
so factor it out into a separate helper.
Introduce PVE::HA::Helpers as a module to share code between modules
that cannot depend on each other but use the same underlying data
structures (e.g. Manager and Config, LRM and CRM), and where
PVE::HA::Tools is not the right place.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- none
debian/pve-ha-manager.install | 1 +
src/PVE/HA/Config.pm | 31 ++++-----------------
src/PVE/HA/Helpers.pm | 52 +++++++++++++++++++++++++++++++++++
src/PVE/HA/Makefile | 1 +
src/PVE/HA/Manager.pm | 43 ++++++++++++-----------------
5 files changed, 77 insertions(+), 51 deletions(-)
create mode 100644 src/PVE/HA/Helpers.pm
diff --git a/debian/pve-ha-manager.install b/debian/pve-ha-manager.install
index 23e063af..301e9fc4 100644
--- a/debian/pve-ha-manager.install
+++ b/debian/pve-ha-manager.install
@@ -27,6 +27,7 @@
/usr/share/perl5/PVE/HA/FenceConfig.pm
/usr/share/perl5/PVE/HA/Groups.pm
/usr/share/perl5/PVE/HA/HashTools.pm
+/usr/share/perl5/PVE/HA/Helpers.pm
/usr/share/perl5/PVE/HA/LRM.pm
/usr/share/perl5/PVE/HA/Manager.pm
/usr/share/perl5/PVE/HA/NodeStatus.pm
diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
index 8e24ece3..f74cb58b 100644
--- a/src/PVE/HA/Config.pm
+++ b/src/PVE/HA/Config.pm
@@ -8,9 +8,9 @@ use JSON;
use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file cfs_lock_file);
use PVE::HA::Groups;
+use PVE::HA::Helpers;
use PVE::HA::Resources;
use PVE::HA::Rules;
-use PVE::HA::Rules::ResourceAffinity qw(get_affinitive_resources);
use PVE::HA::Tools;
my $manager_status_filename = "ha/manager_status";
@@ -401,34 +401,13 @@ sub get_resource_motion_info {
my $manager_status = read_manager_status();
my $ss = $manager_status->{service_status};
my $ns = $manager_status->{node_status};
+ # get_resource_motion_info expects a hashset of all nodes with status 'online'
+ my $online_nodes = { map { $ns->{$_} eq 'online' ? ($_ => 1) : () } keys %$ns };
my $compiled_rules = read_and_compile_rules_config();
- my $resource_affinity = $compiled_rules->{'resource-affinity'};
- my ($together, $separate) = get_affinitive_resources($resource_affinity, $sid);
- for my $csid (sort keys %$together) {
- next if !defined($ss->{$csid});
- next if $ss->{$csid}->{state} eq 'ignored';
-
- push @$dependent_resources, $csid;
- }
-
- for my $node (keys %$ns) {
- next if $ns->{$node} ne 'online';
-
- for my $csid (sort keys %$separate) {
- next if !defined($ss->{$csid});
- next if $ss->{$csid}->{state} eq 'ignored';
- next if $ss->{$csid}->{node} && $ss->{$csid}->{node} ne $node;
- next if $ss->{$csid}->{target} && $ss->{$csid}->{target} ne $node;
-
- push $blocking_resources_by_node->{$node}->@*,
- {
- sid => $csid,
- cause => 'resource-affinity',
- };
- }
- }
+ ($dependent_resources, $blocking_resources_by_node) =
+ PVE::HA::Helpers::get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules);
}
return ($dependent_resources, $blocking_resources_by_node);
diff --git a/src/PVE/HA/Helpers.pm b/src/PVE/HA/Helpers.pm
new file mode 100644
index 00000000..09300cd4
--- /dev/null
+++ b/src/PVE/HA/Helpers.pm
@@ -0,0 +1,52 @@
+package PVE::HA::Helpers;
+
+use v5.36;
+
+use PVE::HA::Rules::ResourceAffinity qw(get_affinitive_resources);
+
+=head3 get_resource_motion_info
+
+Gathers which other HA resources in C<$ss> put a node placement dependency or
+node placement restriction on C<$sid> according to the compiled rules in
+C<$compiled_rules> and the online nodes in C<$online_nodes>.
+
+Returns a list of two elements, where the first element is a list of HA resource
+ids which are dependent on the node placement of C<$sid>, and the second element
+is a hash of nodes blocked for C<$sid>, where each entry value is a list of the
+causes that make the node unavailable to C<$sid>.
+
+=cut
+
+sub get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules) {
+ my $dependent_resources = [];
+ my $blocking_resources_by_node = {};
+
+ my $resource_affinity = $compiled_rules->{'resource-affinity'};
+ my ($together, $separate) = get_affinitive_resources($resource_affinity, $sid);
+
+ for my $csid (sort keys %$together) {
+ next if !defined($ss->{$csid});
+ next if $ss->{$csid}->{state} eq 'ignored';
+
+ push @$dependent_resources, $csid;
+ }
+
+ for my $node (keys %$online_nodes) {
+ for my $csid (sort keys %$separate) {
+ next if !defined($ss->{$csid});
+ next if $ss->{$csid}->{state} eq 'ignored';
+ next if $ss->{$csid}->{node} && $ss->{$csid}->{node} ne $node;
+ next if $ss->{$csid}->{target} && $ss->{$csid}->{target} ne $node;
+
+ push $blocking_resources_by_node->{$node}->@*,
+ {
+ sid => $csid,
+ cause => 'resource-affinity',
+ };
+ }
+ }
+
+ return ($dependent_resources, $blocking_resources_by_node);
+}
+
+1;
diff --git a/src/PVE/HA/Makefile b/src/PVE/HA/Makefile
index 1aeb976b..57871b29 100644
--- a/src/PVE/HA/Makefile
+++ b/src/PVE/HA/Makefile
@@ -2,6 +2,7 @@ SIM_SOURCES=CRM.pm \
Env.pm \
Groups.pm \
HashTools.pm \
+ Helpers.pm \
Rules.pm \
Resources.pm \
LRM.pm \
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index af14ea74..8419cb9a 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -8,6 +8,7 @@ use Digest::MD5 qw(md5_base64);
use PVE::Tools;
use PVE::HA::Groups;
+use PVE::HA::Helpers;
use PVE::HA::NodeStatus;
use PVE::HA::Rules;
use PVE::HA::Rules::NodeAffinity qw(get_node_affinity);
@@ -581,43 +582,35 @@ sub read_lrm_status {
sub queue_resource_motion {
my ($self, $cmd, $task, $sid, $target) = @_;
- my ($haenv, $ss) = $self->@{qw(haenv ss)};
+ my ($haenv, $ss, $ns, $compiled_rules) = $self->@{qw(haenv ss ns compiled_rules)};
+ my $online_nodes = { map { $_ => 1 } $self->{ns}->list_online_nodes()->@* };
- my $resource_affinity = $self->{compiled_rules}->{'resource-affinity'};
- my ($together, $separate) = get_affinitive_resources($resource_affinity, $sid);
+ my ($dependent_resources, $blocking_resources_by_node) =
+ PVE::HA::Helpers::get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules);
- my $blocked_from_migration;
- for my $csid (sort keys %$separate) {
- next if !defined($ss->{$csid});
- next if $ss->{$csid}->{state} eq 'ignored';
- next if $ss->{$csid}->{node} && $ss->{$csid}->{node} ne $target;
- next if $ss->{$csid}->{target} && $ss->{$csid}->{target} ne $target;
+ if (my $blocking_resources = $blocking_resources_by_node->{$target}) {
+ for my $blocking_resource (@$blocking_resources) {
+ my $err_msg = "unknown migration blocker reason";
+ my ($csid, $cause) = $blocking_resource->@{qw(sid cause)};
- $haenv->log(
- 'err',
- "crm command '$cmd' error - service '$csid' on node '$target' in"
- . " negative affinity with service '$sid'",
- );
+ if ($cause eq 'resource-affinity') {
+ $err_msg = "service '$csid' on node '$target' in negative"
+ . " affinity with service '$sid'";
+ }
- $blocked_from_migration = 1;
+ $haenv->log('err', "crm command '$cmd' error - $err_msg");
+ }
+
+ return; # do not queue migration if there are blockers
}
- return if $blocked_from_migration;
-
$haenv->log('info', "got crm command: $cmd");
$ss->{$sid}->{cmd} = [$task, $target];
- my $resources_to_migrate = [];
- for my $csid (sort keys %$together) {
- next if !defined($ss->{$csid});
- next if $ss->{$csid}->{state} eq 'ignored';
+ for my $csid (@$dependent_resources) {
next if $ss->{$csid}->{node} && $ss->{$csid}->{node} eq $target;
next if $ss->{$csid}->{target} && $ss->{$csid}->{target} eq $target;
- push @$resources_to_migrate, $csid;
- }
-
- for my $csid (@$resources_to_migrate) {
$haenv->log(
'info',
"crm command '$cmd' - $task service '$csid' to node '$target'"
--
2.47.3
* [PATCH ha-manager v3 07/17] tests: add test cases for migrating resources with node affinity rules
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
` (5 preceding siblings ...)
2026-05-11 9:46 ` [PATCH ha-manager v3 06/17] config, manager: factor out resource motion info logic Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 08/17] fix #1497: handle strict node affinity rules in manual migrations Daniel Kral
` (11 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
These test cases show the current behavior of manual HA resource
migrations where the HA resource is part of a strict or non-strict node
affinity rule.
These are added in preparation for preventing manual HA resource
migrations/relocations to nodes that are not in the allowed node set
according to the HA resource's node affinity rules.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- none
changes v1 -> v2:
- fix initial node placement in READMEs
src/test/test-node-affinity-nonstrict7/README | 9 ++
.../test-node-affinity-nonstrict7/cmdlist | 9 ++
.../hardware_status | 5 ++
.../test-node-affinity-nonstrict7/log.expect | 89 +++++++++++++++++++
.../manager_status | 1 +
.../rules_config | 7 ++
.../service_config | 4 +
src/test/test-node-affinity-strict7/README | 9 ++
src/test/test-node-affinity-strict7/cmdlist | 9 ++
.../hardware_status | 5 ++
.../test-node-affinity-strict7/log.expect | 87 ++++++++++++++++++
.../test-node-affinity-strict7/manager_status | 1 +
.../test-node-affinity-strict7/rules_config | 9 ++
.../test-node-affinity-strict7/service_config | 4 +
14 files changed, 248 insertions(+)
create mode 100644 src/test/test-node-affinity-nonstrict7/README
create mode 100644 src/test/test-node-affinity-nonstrict7/cmdlist
create mode 100644 src/test/test-node-affinity-nonstrict7/hardware_status
create mode 100644 src/test/test-node-affinity-nonstrict7/log.expect
create mode 100644 src/test/test-node-affinity-nonstrict7/manager_status
create mode 100644 src/test/test-node-affinity-nonstrict7/rules_config
create mode 100644 src/test/test-node-affinity-nonstrict7/service_config
create mode 100644 src/test/test-node-affinity-strict7/README
create mode 100644 src/test/test-node-affinity-strict7/cmdlist
create mode 100644 src/test/test-node-affinity-strict7/hardware_status
create mode 100644 src/test/test-node-affinity-strict7/log.expect
create mode 100644 src/test/test-node-affinity-strict7/manager_status
create mode 100644 src/test/test-node-affinity-strict7/rules_config
create mode 100644 src/test/test-node-affinity-strict7/service_config
diff --git a/src/test/test-node-affinity-nonstrict7/README b/src/test/test-node-affinity-nonstrict7/README
new file mode 100644
index 00000000..24b32a39
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/README
@@ -0,0 +1,9 @@
+Test whether services in a non-strict node affinity rule handle manual
+migrations to nodes as expected with respect to whether these are part of the
+node affinity rule or not.
+
+The test scenario is:
+- vm:101 should be kept on node1 or node3 (preferred)
+- vm:102 should be kept on node1 or node2 (preferred)
+- vm:101 is running on node3 with failback enabled
+- vm:102 is running on node2 with failback disabled
diff --git a/src/test/test-node-affinity-nonstrict7/cmdlist b/src/test/test-node-affinity-nonstrict7/cmdlist
new file mode 100644
index 00000000..d992c805
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/cmdlist
@@ -0,0 +1,9 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on"],
+ [ "service vm:101 migrate node1" ],
+ [ "service vm:101 migrate node2" ],
+ [ "service vm:101 migrate node3" ],
+ [ "service vm:102 migrate node3" ],
+ [ "service vm:102 migrate node2" ],
+ [ "service vm:102 migrate node1" ]
+]
diff --git a/src/test/test-node-affinity-nonstrict7/hardware_status b/src/test/test-node-affinity-nonstrict7/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off" },
+ "node2": { "power": "off", "network": "off" },
+ "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-nonstrict7/log.expect b/src/test/test-node-affinity-nonstrict7/log.expect
new file mode 100644
index 00000000..31daa618
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/log.expect
@@ -0,0 +1,89 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: adding new service 'vm:102' on node 'node2'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 20 node1/crm: service 'vm:102': state changed from 'request_start' to 'started' (node = node2)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 23 node2/lrm: got lock 'ha_agent_node2_lock'
+info 23 node2/lrm: status change wait_for_agent_lock => active
+info 23 node2/lrm: starting service vm:102
+info 23 node2/lrm: service status vm:102 started
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute service vm:101 migrate node1
+info 120 node1/crm: got crm command: migrate vm:101 node1
+info 120 node1/crm: migrate service 'vm:101' to node 'node1'
+info 120 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info 121 node1/lrm: got lock 'ha_agent_node1_lock'
+info 121 node1/lrm: status change wait_for_agent_lock => active
+info 125 node3/lrm: service vm:101 - start migrate to node 'node1'
+info 125 node3/lrm: service vm:101 - end migrate to node 'node1'
+info 140 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
+info 140 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
+info 141 node1/lrm: service vm:101 - start migrate to node 'node3'
+info 141 node1/lrm: service vm:101 - end migrate to node 'node3'
+info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 165 node3/lrm: starting service vm:101
+info 165 node3/lrm: service status vm:101 started
+info 220 cmdlist: execute service vm:101 migrate node2
+info 220 node1/crm: got crm command: migrate vm:101 node2
+info 220 node1/crm: migrate service 'vm:101' to node 'node2'
+info 220 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 225 node3/lrm: service vm:101 - start migrate to node 'node2'
+info 225 node3/lrm: service vm:101 - end migrate to node 'node2'
+info 240 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
+info 240 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 240 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
+info 243 node2/lrm: service vm:101 - start migrate to node 'node3'
+info 243 node2/lrm: service vm:101 - end migrate to node 'node3'
+info 260 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 265 node3/lrm: starting service vm:101
+info 265 node3/lrm: service status vm:101 started
+info 320 cmdlist: execute service vm:101 migrate node3
+info 320 node1/crm: ignore crm command - service already on target node: migrate vm:101 node3
+info 420 cmdlist: execute service vm:102 migrate node3
+info 420 node1/crm: got crm command: migrate vm:102 node3
+info 420 node1/crm: migrate service 'vm:102' to node 'node3'
+info 420 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node3)
+info 423 node2/lrm: service vm:102 - start migrate to node 'node3'
+info 423 node2/lrm: service vm:102 - end migrate to node 'node3'
+info 440 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node3)
+info 445 node3/lrm: starting service vm:102
+info 445 node3/lrm: service status vm:102 started
+info 520 cmdlist: execute service vm:102 migrate node2
+info 520 node1/crm: got crm command: migrate vm:102 node2
+info 520 node1/crm: migrate service 'vm:102' to node 'node2'
+info 520 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 525 node3/lrm: service vm:102 - start migrate to node 'node2'
+info 525 node3/lrm: service vm:102 - end migrate to node 'node2'
+info 540 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node2)
+info 543 node2/lrm: starting service vm:102
+info 543 node2/lrm: service status vm:102 started
+info 620 cmdlist: execute service vm:102 migrate node1
+info 620 node1/crm: got crm command: migrate vm:102 node1
+info 620 node1/crm: migrate service 'vm:102' to node 'node1'
+info 620 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node1)
+info 623 node2/lrm: service vm:102 - start migrate to node 'node1'
+info 623 node2/lrm: service vm:102 - end migrate to node 'node1'
+info 640 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node1)
+info 641 node1/lrm: starting service vm:102
+info 641 node1/lrm: service status vm:102 started
+info 1220 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict7/manager_status b/src/test/test-node-affinity-nonstrict7/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-nonstrict7/rules_config b/src/test/test-node-affinity-nonstrict7/rules_config
new file mode 100644
index 00000000..8aa2c589
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/rules_config
@@ -0,0 +1,7 @@
+node-affinity: vm101-should-be-on-node1-node3
+ nodes node1:1,node3:2
+ resources vm:101
+
+node-affinity: vm102-should-be-on-node1-node2
+ nodes node1:1,node2:2
+ resources vm:102
diff --git a/src/test/test-node-affinity-nonstrict7/service_config b/src/test/test-node-affinity-nonstrict7/service_config
new file mode 100644
index 00000000..3a916390
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/service_config
@@ -0,0 +1,4 @@
+{
+ "vm:101": { "node": "node3", "state": "started", "failback": 1 },
+ "vm:102": { "node": "node2", "state": "started", "failback": 0 }
+}
diff --git a/src/test/test-node-affinity-strict7/README b/src/test/test-node-affinity-strict7/README
new file mode 100644
index 00000000..253d4f02
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/README
@@ -0,0 +1,9 @@
+Test whether services in a strict node affinity rule handle manual migrations
+to nodes as expected with respect to whether these are part of the node
+affinity rule or not.
+
+The test scenario is:
+- vm:101 must be kept on node1 or node3 (preferred)
+- vm:102 must be kept on node1 or node2 (preferred)
+- vm:101 is running on node3 with failback enabled
+- vm:102 is running on node2 with failback disabled
diff --git a/src/test/test-node-affinity-strict7/cmdlist b/src/test/test-node-affinity-strict7/cmdlist
new file mode 100644
index 00000000..d992c805
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/cmdlist
@@ -0,0 +1,9 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on"],
+ [ "service vm:101 migrate node1" ],
+ [ "service vm:101 migrate node2" ],
+ [ "service vm:101 migrate node3" ],
+ [ "service vm:102 migrate node3" ],
+ [ "service vm:102 migrate node2" ],
+ [ "service vm:102 migrate node1" ]
+]
diff --git a/src/test/test-node-affinity-strict7/hardware_status b/src/test/test-node-affinity-strict7/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off" },
+ "node2": { "power": "off", "network": "off" },
+ "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-strict7/log.expect b/src/test/test-node-affinity-strict7/log.expect
new file mode 100644
index 00000000..cbe9f323
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/log.expect
@@ -0,0 +1,87 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: adding new service 'vm:102' on node 'node2'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 20 node1/crm: service 'vm:102': state changed from 'request_start' to 'started' (node = node2)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 23 node2/lrm: got lock 'ha_agent_node2_lock'
+info 23 node2/lrm: status change wait_for_agent_lock => active
+info 23 node2/lrm: starting service vm:102
+info 23 node2/lrm: service status vm:102 started
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute service vm:101 migrate node1
+info 120 node1/crm: got crm command: migrate vm:101 node1
+info 120 node1/crm: migrate service 'vm:101' to node 'node1'
+info 120 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info 121 node1/lrm: got lock 'ha_agent_node1_lock'
+info 121 node1/lrm: status change wait_for_agent_lock => active
+info 125 node3/lrm: service vm:101 - start migrate to node 'node1'
+info 125 node3/lrm: service vm:101 - end migrate to node 'node1'
+info 140 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
+info 140 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
+info 141 node1/lrm: service vm:101 - start migrate to node 'node3'
+info 141 node1/lrm: service vm:101 - end migrate to node 'node3'
+info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 165 node3/lrm: starting service vm:101
+info 165 node3/lrm: service status vm:101 started
+info 220 cmdlist: execute service vm:101 migrate node2
+info 220 node1/crm: got crm command: migrate vm:101 node2
+info 220 node1/crm: migrate service 'vm:101' to node 'node2'
+info 220 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 225 node3/lrm: service vm:101 - start migrate to node 'node2'
+info 225 node3/lrm: service vm:101 - end migrate to node 'node2'
+info 240 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
+info 240 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 240 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
+info 243 node2/lrm: service vm:101 - start migrate to node 'node3'
+info 243 node2/lrm: service vm:101 - end migrate to node 'node3'
+info 260 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 265 node3/lrm: starting service vm:101
+info 265 node3/lrm: service status vm:101 started
+info 320 cmdlist: execute service vm:101 migrate node3
+info 320 node1/crm: ignore crm command - service already on target node: migrate vm:101 node3
+info 420 cmdlist: execute service vm:102 migrate node3
+info 420 node1/crm: got crm command: migrate vm:102 node3
+info 420 node1/crm: migrate service 'vm:102' to node 'node3'
+info 420 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node3)
+info 423 node2/lrm: service vm:102 - start migrate to node 'node3'
+info 423 node2/lrm: service vm:102 - end migrate to node 'node3'
+info 440 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node3)
+info 440 node1/crm: migrate service 'vm:102' to node 'node2' (running)
+info 440 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 445 node3/lrm: service vm:102 - start migrate to node 'node2'
+info 445 node3/lrm: service vm:102 - end migrate to node 'node2'
+info 460 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node2)
+info 463 node2/lrm: starting service vm:102
+info 463 node2/lrm: service status vm:102 started
+info 520 cmdlist: execute service vm:102 migrate node2
+info 520 node1/crm: ignore crm command - service already on target node: migrate vm:102 node2
+info 620 cmdlist: execute service vm:102 migrate node1
+info 620 node1/crm: got crm command: migrate vm:102 node1
+info 620 node1/crm: migrate service 'vm:102' to node 'node1'
+info 620 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node1)
+info 623 node2/lrm: service vm:102 - start migrate to node 'node1'
+info 623 node2/lrm: service vm:102 - end migrate to node 'node1'
+info 640 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node1)
+info 641 node1/lrm: starting service vm:102
+info 641 node1/lrm: service status vm:102 started
+info 1220 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict7/manager_status b/src/test/test-node-affinity-strict7/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-strict7/rules_config b/src/test/test-node-affinity-strict7/rules_config
new file mode 100644
index 00000000..622ba80b
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/rules_config
@@ -0,0 +1,9 @@
+node-affinity: vm101-must-be-on-node1-node3
+ nodes node1:1,node3:2
+ resources vm:101
+ strict 1
+
+node-affinity: vm102-must-be-on-node1-node2
+ nodes node1:1,node2:2
+ resources vm:102
+ strict 1
diff --git a/src/test/test-node-affinity-strict7/service_config b/src/test/test-node-affinity-strict7/service_config
new file mode 100644
index 00000000..3a916390
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/service_config
@@ -0,0 +1,4 @@
+{
+ "vm:101": { "node": "node3", "state": "started", "failback": 1 },
+ "vm:102": { "node": "node2", "state": "started", "failback": 0 }
+}
--
2.47.3
* [PATCH ha-manager v3 08/17] fix #1497: handle strict node affinity rules in manual migrations
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
` (6 preceding siblings ...)
2026-05-11 9:46 ` [PATCH ha-manager v3 07/17] tests: add test cases for migrating resources with node affinity rules Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 09/17] config: improve variable names in read_and_check_resources_config Daniel Kral
` (10 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Do not execute a manual user migration of an HA resource to a target
node where it is not allowed to run according to the strict node
affinity rule it is part of.
This prevents users from moving an HA resource only for it to be
migrated back to an allowed member node of the strict node affinity rule
immediately afterwards, which just wastes time and resources.
This new information is only surfaced on the ha_manager CLI's
stdout/stderr and in the HA Manager node's syslog respectively, so other
user-facing endpoints need to implement this logic as well to give
users adequate feedback on why migrations are not executed.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes v2 -> v3:
- add 'fix #1497' prefix
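
The per-node blocker structure this patch adds to get_resource_motion_info()
can be sketched as follows. This is an illustrative Python sketch only (the
actual implementation is Perl, in PVE/HA/Helpers.pm); function and parameter
names are assumptions that merely mirror the diff below:

```python
def collect_blockers(sid, online_nodes, allowed_nodes, separate, service_status):
    """Map each online node to the reasons 'sid' may not be migrated there."""
    blockers = {}
    for node in online_nodes:
        # Strict node affinity: the resource itself blocks any node
        # outside its allowed node set.
        if node not in allowed_nodes:
            blockers.setdefault(node, []).append(
                {"sid": sid, "cause": "node-affinity"})
        # Negative resource affinity: a resource that must be kept
        # separate and already runs on the target node blocks it.
        for csid in sorted(separate):
            state = service_status.get(csid)
            if state is None or state["state"] == "ignored":
                continue
            if state["node"] == node:
                blockers.setdefault(node, []).append(
                    {"sid": csid, "cause": "resource-affinity"})
    return blockers

# Example: vm:101 is restricted to node1/node3 and must stay separate
# from vm:102, which currently runs on node2.
blockers = collect_blockers(
    sid="vm:101",
    online_nodes=["node1", "node2", "node3"],
    allowed_nodes={"node1", "node3"},
    separate={"vm:102"},
    service_status={"vm:102": {"state": "started", "node": "node2"}},
)
```

Consumers such as queue_resource_motion() then only need to look up the
target node in this hash and render one error line per blocker entry.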
src/PVE/API2/HA/Resources.pm | 4 +--
src/PVE/CLI/ha_manager.pm | 14 +++++-----
src/PVE/HA/Helpers.pm | 13 ++++++++-
src/PVE/HA/Manager.pm | 7 +++--
.../test-node-affinity-strict1/log.expect | 16 +----------
.../test-node-affinity-strict2/log.expect | 16 +----------
.../test-node-affinity-strict7/log.expect | 28 ++-----------------
src/test/test-recovery4/log.expect | 2 +-
8 files changed, 31 insertions(+), 69 deletions(-)
diff --git a/src/PVE/API2/HA/Resources.pm b/src/PVE/API2/HA/Resources.pm
index e0690d5c..3f973c45 100644
--- a/src/PVE/API2/HA/Resources.pm
+++ b/src/PVE/API2/HA/Resources.pm
@@ -383,7 +383,7 @@ __PACKAGE__->register_method({
type => 'string',
description => "The reason why the HA resource is"
. " blocking the migration.",
- enum => ['resource-affinity'],
+ enum => ['node-affinity', 'resource-affinity'],
},
},
},
@@ -485,7 +485,7 @@ __PACKAGE__->register_method({
type => 'string',
description => "The reason why the HA resource is"
. " blocking the relocation.",
- enum => ['resource-affinity'],
+ enum => ['node-affinity', 'resource-affinity'],
},
},
},
diff --git a/src/PVE/CLI/ha_manager.pm b/src/PVE/CLI/ha_manager.pm
index f257c013..6625de68 100644
--- a/src/PVE/CLI/ha_manager.pm
+++ b/src/PVE/CLI/ha_manager.pm
@@ -160,15 +160,15 @@ my $print_resource_motion_output = sub {
my $err_msg = "cannot $cmd resource '$sid' to node '$req_node':\n\n";
for my $blocking_resource (@$blocking_resources) {
- my ($csid, $cause) = $blocking_resource->@{qw(sid cause)};
+ my $cause = $blocking_resource->{cause};
- $err_msg .= "- resource '$csid' on target node '$req_node'";
-
- if ($cause eq 'resource-affinity') {
- $err_msg .= " in negative affinity with resource '$sid'";
+ if ($cause eq 'node-affinity') {
+ $err_msg .= "- resource '$sid' not allowed on target node '$req_node'\n";
+ } elsif ($cause eq 'resource-affinity') {
+ my $csid = $blocking_resource->{sid};
+ $err_msg .= "- resource '$csid' on target node '$req_node'"
+ . " in negative affinity with resource '$sid'\n";
}
-
- $err_msg .= "\n";
}
die $err_msg;
diff --git a/src/PVE/HA/Helpers.pm b/src/PVE/HA/Helpers.pm
index 09300cd4..b160c541 100644
--- a/src/PVE/HA/Helpers.pm
+++ b/src/PVE/HA/Helpers.pm
@@ -2,6 +2,7 @@ package PVE::HA::Helpers;
use v5.36;
+use PVE::HA::Rules::NodeAffinity qw(get_node_affinity);
use PVE::HA::Rules::ResourceAffinity qw(get_affinitive_resources);
=head3 get_resource_motion_info
@@ -21,7 +22,9 @@ sub get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules) {
my $dependent_resources = [];
my $blocking_resources_by_node = {};
- my $resource_affinity = $compiled_rules->{'resource-affinity'};
+ my ($node_affinity, $resource_affinity) =
+ $compiled_rules->@{qw(node-affinity resource-affinity)};
+ my ($allowed_nodes) = get_node_affinity($node_affinity, $sid, $online_nodes);
my ($together, $separate) = get_affinitive_resources($resource_affinity, $sid);
for my $csid (sort keys %$together) {
@@ -32,6 +35,14 @@ sub get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules) {
}
for my $node (keys %$online_nodes) {
+ if (!$allowed_nodes->{$node}) {
+ push $blocking_resources_by_node->{$node}->@*,
+ {
+ sid => $sid,
+ cause => 'node-affinity',
+ };
+ }
+
for my $csid (sort keys %$separate) {
next if !defined($ss->{$csid});
next if $ss->{$csid}->{state} eq 'ignored';
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 8419cb9a..2d1c6d5d 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -591,9 +591,12 @@ sub queue_resource_motion {
if (my $blocking_resources = $blocking_resources_by_node->{$target}) {
for my $blocking_resource (@$blocking_resources) {
my $err_msg = "unknown migration blocker reason";
- my ($csid, $cause) = $blocking_resource->@{qw(sid cause)};
+ my $cause = $blocking_resource->{cause};
- if ($cause eq 'resource-affinity') {
+ if ($cause eq 'node-affinity') {
+ $err_msg = "service '$sid' is not allowed on node '$target'";
+ } elsif ($cause eq 'resource-affinity') {
+ my $csid = $blocking_resource->{sid};
$err_msg = "service '$csid' on node '$target' in negative"
. " affinity with service '$sid'";
}
diff --git a/src/test/test-node-affinity-strict1/log.expect b/src/test/test-node-affinity-strict1/log.expect
index d86c69de..ca2c40b3 100644
--- a/src/test/test-node-affinity-strict1/log.expect
+++ b/src/test/test-node-affinity-strict1/log.expect
@@ -22,19 +22,5 @@ info 25 node3/lrm: status change wait_for_agent_lock => active
info 25 node3/lrm: starting service vm:101
info 25 node3/lrm: service status vm:101 started
info 120 cmdlist: execute service vm:101 migrate node2
-info 120 node1/crm: got crm command: migrate vm:101 node2
-info 120 node1/crm: migrate service 'vm:101' to node 'node2'
-info 120 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
-info 123 node2/lrm: got lock 'ha_agent_node2_lock'
-info 123 node2/lrm: status change wait_for_agent_lock => active
-info 125 node3/lrm: service vm:101 - start migrate to node 'node2'
-info 125 node3/lrm: service vm:101 - end migrate to node 'node2'
-info 140 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
-info 140 node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
-info 143 node2/lrm: service vm:101 - start migrate to node 'node3'
-info 143 node2/lrm: service vm:101 - end migrate to node 'node3'
-info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
-info 165 node3/lrm: starting service vm:101
-info 165 node3/lrm: service status vm:101 started
+err 120 node1/crm: crm command 'migrate vm:101 node2' error - service 'vm:101' is not allowed on node 'node2'
info 720 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict2/log.expect b/src/test/test-node-affinity-strict2/log.expect
index d86c69de..ca2c40b3 100644
--- a/src/test/test-node-affinity-strict2/log.expect
+++ b/src/test/test-node-affinity-strict2/log.expect
@@ -22,19 +22,5 @@ info 25 node3/lrm: status change wait_for_agent_lock => active
info 25 node3/lrm: starting service vm:101
info 25 node3/lrm: service status vm:101 started
info 120 cmdlist: execute service vm:101 migrate node2
-info 120 node1/crm: got crm command: migrate vm:101 node2
-info 120 node1/crm: migrate service 'vm:101' to node 'node2'
-info 120 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
-info 123 node2/lrm: got lock 'ha_agent_node2_lock'
-info 123 node2/lrm: status change wait_for_agent_lock => active
-info 125 node3/lrm: service vm:101 - start migrate to node 'node2'
-info 125 node3/lrm: service vm:101 - end migrate to node 'node2'
-info 140 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
-info 140 node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
-info 143 node2/lrm: service vm:101 - start migrate to node 'node3'
-info 143 node2/lrm: service vm:101 - end migrate to node 'node3'
-info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
-info 165 node3/lrm: starting service vm:101
-info 165 node3/lrm: service status vm:101 started
+err 120 node1/crm: crm command 'migrate vm:101 node2' error - service 'vm:101' is not allowed on node 'node2'
info 720 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict7/log.expect b/src/test/test-node-affinity-strict7/log.expect
index cbe9f323..9c4e9f0b 100644
--- a/src/test/test-node-affinity-strict7/log.expect
+++ b/src/test/test-node-affinity-strict7/log.expect
@@ -44,35 +44,11 @@ info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'sta
info 165 node3/lrm: starting service vm:101
info 165 node3/lrm: service status vm:101 started
info 220 cmdlist: execute service vm:101 migrate node2
-info 220 node1/crm: got crm command: migrate vm:101 node2
-info 220 node1/crm: migrate service 'vm:101' to node 'node2'
-info 220 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
-info 225 node3/lrm: service vm:101 - start migrate to node 'node2'
-info 225 node3/lrm: service vm:101 - end migrate to node 'node2'
-info 240 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
-info 240 node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info 240 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
-info 243 node2/lrm: service vm:101 - start migrate to node 'node3'
-info 243 node2/lrm: service vm:101 - end migrate to node 'node3'
-info 260 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
-info 265 node3/lrm: starting service vm:101
-info 265 node3/lrm: service status vm:101 started
+err 220 node1/crm: crm command 'migrate vm:101 node2' error - service 'vm:101' is not allowed on node 'node2'
info 320 cmdlist: execute service vm:101 migrate node3
info 320 node1/crm: ignore crm command - service already on target node: migrate vm:101 node3
info 420 cmdlist: execute service vm:102 migrate node3
-info 420 node1/crm: got crm command: migrate vm:102 node3
-info 420 node1/crm: migrate service 'vm:102' to node 'node3'
-info 420 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node3)
-info 423 node2/lrm: service vm:102 - start migrate to node 'node3'
-info 423 node2/lrm: service vm:102 - end migrate to node 'node3'
-info 440 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node3)
-info 440 node1/crm: migrate service 'vm:102' to node 'node2' (running)
-info 440 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node3, target = node2)
-info 445 node3/lrm: service vm:102 - start migrate to node 'node2'
-info 445 node3/lrm: service vm:102 - end migrate to node 'node2'
-info 460 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node2)
-info 463 node2/lrm: starting service vm:102
-info 463 node2/lrm: service status vm:102 started
+err 420 node1/crm: crm command 'migrate vm:102 node3' error - service 'vm:102' is not allowed on node 'node3'
info 520 cmdlist: execute service vm:102 migrate node2
info 520 node1/crm: ignore crm command - service already on target node: migrate vm:102 node2
info 620 cmdlist: execute service vm:102 migrate node1
diff --git a/src/test/test-recovery4/log.expect b/src/test/test-recovery4/log.expect
index 12983b5f..684c796b 100644
--- a/src/test/test-recovery4/log.expect
+++ b/src/test/test-recovery4/log.expect
@@ -43,7 +43,7 @@ err 260 node1/crm: recovering service 'vm:102' from fenced node 'node2' f
err 280 node1/crm: recovering service 'vm:102' from fenced node 'node2' failed, no recovery node found
err 300 node1/crm: recovering service 'vm:102' from fenced node 'node2' failed, no recovery node found
info 320 cmdlist: execute service vm:102 migrate node3
-info 320 node1/crm: got crm command: migrate vm:102 node3
+err 320 node1/crm: crm command 'migrate vm:102 node3' error - service 'vm:102' is not allowed on node 'node3'
err 320 node1/crm: recovering service 'vm:102' from fenced node 'node2' failed, no recovery node found
err 340 node1/crm: recovering service 'vm:102' from fenced node 'node2' failed, no recovery node found
err 360 node1/crm: recovering service 'vm:102' from fenced node 'node2' failed, no recovery node found
--
2.47.3
* [PATCH ha-manager v3 09/17] config: improve variable names in read_and_check_resources_config
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
` (7 preceding siblings ...)
2026-05-11 9:46 ` [PATCH ha-manager v3 08/17] fix #1497: handle strict node affinity rules in manual migrations Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 10/17] config: factor out checked_resources_config helper Daniel Kral
` (9 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- none
src/PVE/HA/Config.pm | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
index f74cb58b..8b607cf2 100644
--- a/src/PVE/HA/Config.pm
+++ b/src/PVE/HA/Config.pm
@@ -108,13 +108,13 @@ sub read_resources_config {
# checks if resource exists and sets defaults for unset values
sub read_and_check_resources_config {
- my $res = cfs_read_file($ha_resources_config);
+ my $cfg = cfs_read_file($ha_resources_config);
my $vmlist = PVE::Cluster::get_vmlist();
- my $conf = {};
+ my $resources = {};
- foreach my $sid (keys %{ $res->{ids} }) {
- my $d = $res->{ids}->{$sid};
+ foreach my $sid (keys %{ $cfg->{ids} }) {
+ my $d = $cfg->{ids}->{$sid};
my (undef, undef, $name) = parse_sid($sid);
$d->{state} = 'started' if !defined($d->{state});
$d->{state} = 'started' if $d->{state} eq 'enabled'; # backward compatibility
@@ -124,17 +124,17 @@ sub read_and_check_resources_config {
if (PVE::HA::Resources->lookup($d->{type})) {
if (my $vmd = $vmlist->{ids}->{$name}) {
$d->{node} = $vmd->{node};
- $conf->{$sid} = $d;
+ $resources->{$sid} = $d;
} else {
# undef $d->{node} is handled in get_verbose_service_state and
# status API, don't spam logs or ignore it; allow to delete it!
- $conf->{$sid} = $d;
+ $resources->{$sid} = $d;
}
}
}
# TODO PVE 10: Remove digest when HA groups have been fully migrated to rules
- return wantarray ? ($conf, $res->{digest}) : $conf;
+ return wantarray ? ($resources, $cfg->{digest}) : $resources;
}
my sub update_single_resource_config_inplace {
--
2.47.3
* [PATCH ha-manager v3 10/17] config: factor out checked_resources_config helper
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
` (8 preceding siblings ...)
2026-05-11 9:46 ` [PATCH ha-manager v3 09/17] config: improve variable names in read_and_check_resources_config Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 11/17] manager: store global reference to service config hash Daniel Kral
` (8 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
This allows future callers that already have a parsed resource config to
retrieve the checked resource config without reading the file twice.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- none
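
The refactoring pattern can be sketched like this. A simplified Python
illustration only (the real code is Perl, keys the vmlist by the name parsed
from the SID, and also returns a digest in list context); names are
assumptions mirroring the diff below:

```python
def checked_resources_config(cfg, vmlist):
    """Apply defaults and node placement to an already-parsed config."""
    resources = {}
    for sid, d in cfg["ids"].items():
        d.setdefault("state", "started")
        if d["state"] == "enabled":  # backward compatibility
            d["state"] = "started"
        vm = vmlist["ids"].get(sid)
        if vm is not None:
            d["node"] = vm["node"]
        resources[sid] = d
    return resources

def read_and_check_resources_config(read_file, vmlist):
    # Thin wrapper: performs the single file read, then delegates to the
    # helper, so callers that already hold a parsed config can call the
    # helper directly instead of triggering a second read.
    return checked_resources_config(read_file(), vmlist)

cfg = {"ids": {"vm:101": {"state": "enabled"}}}
vmlist = {"ids": {"vm:101": {"node": "node3"}}}
resources = read_and_check_resources_config(lambda: cfg, vmlist)
```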
src/PVE/HA/Config.pm | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
index 8b607cf2..d78f7179 100644
--- a/src/PVE/HA/Config.pm
+++ b/src/PVE/HA/Config.pm
@@ -105,10 +105,9 @@ sub read_resources_config {
return cfs_read_file($ha_resources_config);
}
-# checks if resource exists and sets defaults for unset values
-sub read_and_check_resources_config {
-
- my $cfg = cfs_read_file($ha_resources_config);
+# returns resources config with defaults and node placement set
+my sub checked_resources_config {
+ my ($cfg) = @_;
my $vmlist = PVE::Cluster::get_vmlist();
my $resources = {};
@@ -137,6 +136,13 @@ sub read_and_check_resources_config {
return wantarray ? ($resources, $cfg->{digest}) : $resources;
}
+# checks if resource exists and sets defaults for unset values
+sub read_and_check_resources_config {
+ my $cfg = cfs_read_file($ha_resources_config);
+
+ return checked_resources_config($cfg);
+}
+
my sub update_single_resource_config_inplace {
my ($cfg, $sid, $param, $delete) = @_;
--
2.47.3
* [PATCH ha-manager v3 11/17] manager: store global reference to service config hash
2026-05-11 9:46 [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497) Daniel Kral
` (9 preceding siblings ...)
2026-05-11 9:46 ` [PATCH ha-manager v3 10/17] config: factor out checked_resources_config helper Daniel Kral
@ 2026-05-11 9:46 ` Daniel Kral
2026-05-11 9:46 ` [PATCH ha-manager v3 12/17] manager: remove duplicate service config read in update_crm_commands Daniel Kral
` (7 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
This is in preparation for an upcoming patch that makes
get_resource_motion_info() aware of the HA resources' failback flag in
conjunction with node affinity rules.
This change was chosen over passing the $sc variable down through
several method calls that otherwise wouldn't need a reference to the
service config hash.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- new
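
The design choice can be sketched as follows: refresh the service config once
per manage() round and store it on the manager object, so helpers read one
consistent snapshot instead of each receiving it as a parameter. This is a
hedged Python illustration only; class and attribute names merely mirror the
Perl code in the diff below:

```python
class Manager:
    def __init__(self):
        self.sc = {}  # service config, refreshed once per manage() round
        self.ss = {}  # service status, persisted across rounds

    def manage(self, read_service_config):
        # Single read per round; every helper called afterwards uses
        # self.sc, so the whole round works from the same snapshot.
        self.sc = read_service_config()
        for sid in sorted(self.sc):
            self.ss.setdefault(sid, {"state": "request_start"})

m = Manager()
m.manage(lambda: {"vm:101": {"state": "started", "node": "node3"}})
```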
src/PVE/HA/Manager.pm | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 2d1c6d5d..c4088dac 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -82,6 +82,7 @@ sub new {
$self->{ns} = PVE::HA::NodeStatus->new($haenv, $old_ms->{node_status} || {});
# fixme: use separate class PVE::HA::ServiceStatus
+ $self->{sc} = {};
$self->{ss} = $old_ms->{service_status} || {};
$self->{ms} = { master_node => $haenv->nodename() };
@@ -1002,7 +1003,7 @@ sub manage {
$self->try_persistent_group_migration();
- my ($sc, $services_digest) = $haenv->read_service_config();
+ ($self->{sc}, my $services_digest) = $haenv->read_service_config();
$self->{groups} = $haenv->read_group_config(); # update
@@ -1011,9 +1012,9 @@ sub manage {
# skip service add/remove when disarmed - handle_disarm manages service status
if (!$ms->{disarm}) {
# add new service
- foreach my $sid (sort keys %$sc) {
+ foreach my $sid (sort keys $self->{sc}->%*) {
next if $ss->{$sid}; # already there
- my $cd = $sc->{$sid};
+ my $cd = $self->{sc}->{$sid};
next if $cd->{state} eq 'ignored';
$haenv->log('info', "adding new service '$sid' on node '$cd->{node}'");
@@ -1028,9 +1029,10 @@ sub manage {
# remove stale or ignored services from manager state
foreach my $sid (keys %$ss) {
- next if $sc->{$sid} && $sc->{$sid}->{state} ne 'ignored';
+ my $cd = $self->{sc}->{$sid};
+ next if $cd && $cd->{state} ne 'ignored';
- my $reason = defined($sc->{$sid}) ? 'ignored state requested' : 'no config';
+ my $reason = defined($cd) ? 'ignored state requested' : 'no config';
$haenv->log('info', "removing stale service '$sid' ($reason)");
# remove all service related state information
@@ -1043,7 +1045,7 @@ sub manage {
my $new_rules = $haenv->read_rules_config();
# TODO PVE 10: Remove group migration when HA groups have been fully migrated to rules
- PVE::HA::Groups::migrate_groups_to_resources($self->{groups}, $sc);
+ PVE::HA::Groups::migrate_groups_to_resources($self->{groups}, $self->{sc});
if (
!$self->{compiled_rules}
@@ -1052,7 +1054,7 @@ sub manage {
|| $self->{groups}->{digest} ne $self->{last_groups_digest}
|| $services_digest && $services_digest ne $self->{last_services_digest}
) {
- PVE::HA::Groups::migrate_groups_to_rules($new_rules, $self->{groups}, $sc);
+ PVE::HA::Groups::migrate_groups_to_rules($new_rules, $self->{groups}, $self->{sc});
my $nodes = $self->{ns}->list_nodes();
my $messages = PVE::HA::Rules->transform($new_rules, $nodes);
@@ -1088,7 +1090,7 @@ sub manage {
foreach my $sid (sort keys %$ss) {
next if $deferred_sids && !$deferred_sids->{$sid};
my $sd = $ss->{$sid};
- my $cd = $sc->{$sid} || { state => 'disabled' };
+ my $cd = $self->{sc}->{$sid} || { state => 'disabled' };
my $lrm_res = $sd->{uid} ? $lrm_results->{ $sd->{uid} } : undef;
--
2.47.3
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [PATCH ha-manager v3 12/17] manager: remove duplicate service config read in update_crm_commands
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Now that a global service config hash exists in the Manager class,
remove the redundant read from the handling of the 'arm-ha' CRM command
in PVE::HA::Manager::update_crm_commands().
The HA Manager should work from the same service config throughout each
manager round to keep its state consistent.
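The diff below also replaces the individual field assignments with a hash slice taken through Perl's postfix dereference syntax. A standalone sketch of that idiom (the hash ref and dummy values here are placeholders, not the real Manager state):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';

# A plain hash ref standing in for the Manager's blessed $self; the
# keys mirror the ones used in the patch, the values are dummies.
my $self = {
    haenv => 'env',
    ms    => 'master-status',
    ns    => 'node-status',
    sc    => 'service-config',
    ss    => 'service-status',
};

# Postfix-dereference hash slice (stable since Perl 5.24): fetch
# several fields of a hash ref in one list assignment.
my ($haenv, $ms, $ns, $sc, $ss) = $self->@{qw(haenv ms ns sc ss)};

say $sc; # service-config
```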
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- new
src/PVE/HA/Manager.pm | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index c4088dac..9e8095e1 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -643,7 +643,7 @@ sub any_resource_motion_queued_or_running {
sub update_crm_commands {
my ($self) = @_;
- my ($haenv, $ms, $ns, $ss) = ($self->{haenv}, $self->{ms}, $self->{ns}, $self->{ss});
+ my ($haenv, $ms, $ns, $sc, $ss) = $self->@{qw(haenv ms ns sc ss)};
my $cmdlist = $haenv->read_crm_commands();
@@ -733,7 +733,6 @@ sub update_crm_commands {
# recheck node info after ignore mode, as services may have been manually
# migrated while HA tracking was suspended
if ($ms->{disarm}->{mode} eq 'ignore') {
- my $sc = $haenv->read_service_config();
for my $sid (sort keys %$ss) {
my $cd = $sc->{$sid};
next if !$cd;
--
2.47.3
* [PATCH ha-manager v3 13/17] fix #1497: handle node affinity rules with failback in manual migrations
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Do not execute a manual user migration of an HA resource to a target
node that is not one of the highest-priority nodes if the HA resource
has failback set.
This prevents users from moving an HA resource that would be failed
back to a higher-priority node of the strict or non-strict node
affinity rule immediately afterwards, which only wastes time and
resources.
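The core of the check added in this patch can be sketched in isolation (node_blocks_motion and the node sets below are illustrative stand-ins, not the actual helper): a target node is blocked if it is outside the allowed node set, or, when the resource has failback set, outside the current highest-priority node set.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative stand-in for the condition in get_resource_motion_info():
# a target node is blocked if it is outside the allowed node set, or -
# with failback set on the resource - outside the highest-priority set.
sub node_blocks_motion {
    my ($node, $cd, $allowed_nodes, $pri_nodes) = @_;
    return 1 if !$allowed_nodes->{$node};
    return 1 if $cd->{failback} && !$pri_nodes->{$node};
    return 0;
}

my $allowed_nodes = { node1 => 1, node2 => 1, node3 => 1 };
my $pri_nodes     = { node3 => 1 };

# without failback, any allowed node is a valid migration target
print node_blocks_motion('node2', {}, $allowed_nodes, $pri_nodes), "\n";
# with failback, only the highest-priority node3 remains a valid target
print node_blocks_motion('node2', { failback => 1 }, $allowed_nodes, $pri_nodes), "\n";
```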
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- add 'fix #1497' prefix
- use new global service config hash ref to avoid a rather confusing
passing of the service config hash through various method calls
src/PVE/HA/Config.pm | 11 +++++--
src/PVE/HA/Helpers.pm | 6 ++--
src/PVE/HA/Manager.pm | 7 ++--
.../test-node-affinity-nonstrict1/log.expect | 16 +---------
.../test-node-affinity-nonstrict7/log.expect | 32 +++----------------
.../test-node-affinity-strict7/log.expect | 18 ++---------
6 files changed, 24 insertions(+), 66 deletions(-)
diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
index d78f7179..e9551dfb 100644
--- a/src/PVE/HA/Config.pm
+++ b/src/PVE/HA/Config.pm
@@ -398,22 +398,27 @@ sub service_is_configured {
sub get_resource_motion_info {
my ($sid) = @_;
- my $resources = read_resources_config();
+ my $cfg = read_resources_config();
my $dependent_resources = [];
my $blocking_resources_by_node = {};
- if (&$service_check_ha_state($resources, $sid)) {
+ if (&$service_check_ha_state($cfg, $sid)) {
my $manager_status = read_manager_status();
my $ss = $manager_status->{service_status};
my $ns = $manager_status->{node_status};
# get_resource_motion_info expects a hashset of all nodes with status 'online'
my $online_nodes = { map { $ns->{$_} eq 'online' ? ($_ => 1) : () } keys %$ns };
+ # get_resource_motion_info expects a resource config with defaults set
+ my $resources = checked_resources_config($cfg);
my $compiled_rules = read_and_compile_rules_config();
+ my $cd = $resources->{$sid} // {};
($dependent_resources, $blocking_resources_by_node) =
- PVE::HA::Helpers::get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules);
+ PVE::HA::Helpers::get_resource_motion_info(
+ $ss, $sid, $cd, $online_nodes, $compiled_rules,
+ );
}
return ($dependent_resources, $blocking_resources_by_node);
diff --git a/src/PVE/HA/Helpers.pm b/src/PVE/HA/Helpers.pm
index b160c541..a58b1e12 100644
--- a/src/PVE/HA/Helpers.pm
+++ b/src/PVE/HA/Helpers.pm
@@ -18,13 +18,13 @@ causes that make the node unavailable to C<$sid>.
=cut
-sub get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules) {
+sub get_resource_motion_info($ss, $sid, $cd, $online_nodes, $compiled_rules) {
my $dependent_resources = [];
my $blocking_resources_by_node = {};
my ($node_affinity, $resource_affinity) =
$compiled_rules->@{qw(node-affinity resource-affinity)};
- my ($allowed_nodes) = get_node_affinity($node_affinity, $sid, $online_nodes);
+ my ($allowed_nodes, $pri_nodes) = get_node_affinity($node_affinity, $sid, $online_nodes);
my ($together, $separate) = get_affinitive_resources($resource_affinity, $sid);
for my $csid (sort keys %$together) {
@@ -35,7 +35,7 @@ sub get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules) {
}
for my $node (keys %$online_nodes) {
- if (!$allowed_nodes->{$node}) {
+ if (!$allowed_nodes->{$node} || ($cd->{failback} && !$pri_nodes->{$node})) {
push $blocking_resources_by_node->{$node}->@*,
{
sid => $sid,
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 9e8095e1..94be6472 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -583,11 +583,14 @@ sub read_lrm_status {
sub queue_resource_motion {
my ($self, $cmd, $task, $sid, $target) = @_;
- my ($haenv, $ss, $ns, $compiled_rules) = $self->@{qw(haenv ss ns compiled_rules)};
+ my ($haenv, $sc, $ss, $ns, $compiled_rules) = $self->@{qw(haenv sc ss ns compiled_rules)};
my $online_nodes = { map { $_ => 1 } $self->{ns}->list_online_nodes()->@* };
+ my $cd = $sc->{$sid};
my ($dependent_resources, $blocking_resources_by_node) =
- PVE::HA::Helpers::get_resource_motion_info($ss, $sid, $online_nodes, $compiled_rules);
+ PVE::HA::Helpers::get_resource_motion_info(
+ $ss, $sid, $cd, $online_nodes, $compiled_rules,
+ );
if (my $blocking_resources = $blocking_resources_by_node->{$target}) {
for my $blocking_resource (@$blocking_resources) {
diff --git a/src/test/test-node-affinity-nonstrict1/log.expect b/src/test/test-node-affinity-nonstrict1/log.expect
index d86c69de..ca2c40b3 100644
--- a/src/test/test-node-affinity-nonstrict1/log.expect
+++ b/src/test/test-node-affinity-nonstrict1/log.expect
@@ -22,19 +22,5 @@ info 25 node3/lrm: status change wait_for_agent_lock => active
info 25 node3/lrm: starting service vm:101
info 25 node3/lrm: service status vm:101 started
info 120 cmdlist: execute service vm:101 migrate node2
-info 120 node1/crm: got crm command: migrate vm:101 node2
-info 120 node1/crm: migrate service 'vm:101' to node 'node2'
-info 120 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
-info 123 node2/lrm: got lock 'ha_agent_node2_lock'
-info 123 node2/lrm: status change wait_for_agent_lock => active
-info 125 node3/lrm: service vm:101 - start migrate to node 'node2'
-info 125 node3/lrm: service vm:101 - end migrate to node 'node2'
-info 140 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
-info 140 node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
-info 143 node2/lrm: service vm:101 - start migrate to node 'node3'
-info 143 node2/lrm: service vm:101 - end migrate to node 'node3'
-info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
-info 165 node3/lrm: starting service vm:101
-info 165 node3/lrm: service status vm:101 started
+err 120 node1/crm: crm command 'migrate vm:101 node2' error - service 'vm:101' is not allowed on node 'node2'
info 720 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict7/log.expect b/src/test/test-node-affinity-nonstrict7/log.expect
index 31daa618..54e824ea 100644
--- a/src/test/test-node-affinity-nonstrict7/log.expect
+++ b/src/test/test-node-affinity-nonstrict7/log.expect
@@ -28,35 +28,9 @@ info 25 node3/lrm: status change wait_for_agent_lock => active
info 25 node3/lrm: starting service vm:101
info 25 node3/lrm: service status vm:101 started
info 120 cmdlist: execute service vm:101 migrate node1
-info 120 node1/crm: got crm command: migrate vm:101 node1
-info 120 node1/crm: migrate service 'vm:101' to node 'node1'
-info 120 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
-info 121 node1/lrm: got lock 'ha_agent_node1_lock'
-info 121 node1/lrm: status change wait_for_agent_lock => active
-info 125 node3/lrm: service vm:101 - start migrate to node 'node1'
-info 125 node3/lrm: service vm:101 - end migrate to node 'node1'
-info 140 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
-info 140 node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
-info 141 node1/lrm: service vm:101 - start migrate to node 'node3'
-info 141 node1/lrm: service vm:101 - end migrate to node 'node3'
-info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
-info 165 node3/lrm: starting service vm:101
-info 165 node3/lrm: service status vm:101 started
+err 120 node1/crm: crm command 'migrate vm:101 node1' error - service 'vm:101' is not allowed on node 'node1'
info 220 cmdlist: execute service vm:101 migrate node2
-info 220 node1/crm: got crm command: migrate vm:101 node2
-info 220 node1/crm: migrate service 'vm:101' to node 'node2'
-info 220 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
-info 225 node3/lrm: service vm:101 - start migrate to node 'node2'
-info 225 node3/lrm: service vm:101 - end migrate to node 'node2'
-info 240 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
-info 240 node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info 240 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
-info 243 node2/lrm: service vm:101 - start migrate to node 'node3'
-info 243 node2/lrm: service vm:101 - end migrate to node 'node3'
-info 260 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
-info 265 node3/lrm: starting service vm:101
-info 265 node3/lrm: service status vm:101 started
+err 220 node1/crm: crm command 'migrate vm:101 node2' error - service 'vm:101' is not allowed on node 'node2'
info 320 cmdlist: execute service vm:101 migrate node3
info 320 node1/crm: ignore crm command - service already on target node: migrate vm:101 node3
info 420 cmdlist: execute service vm:102 migrate node3
@@ -81,6 +55,8 @@ info 620 cmdlist: execute service vm:102 migrate node1
info 620 node1/crm: got crm command: migrate vm:102 node1
info 620 node1/crm: migrate service 'vm:102' to node 'node1'
info 620 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node1)
+info 621 node1/lrm: got lock 'ha_agent_node1_lock'
+info 621 node1/lrm: status change wait_for_agent_lock => active
info 623 node2/lrm: service vm:102 - start migrate to node 'node1'
info 623 node2/lrm: service vm:102 - end migrate to node 'node1'
info 640 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node1)
diff --git a/src/test/test-node-affinity-strict7/log.expect b/src/test/test-node-affinity-strict7/log.expect
index 9c4e9f0b..ae8e43fb 100644
--- a/src/test/test-node-affinity-strict7/log.expect
+++ b/src/test/test-node-affinity-strict7/log.expect
@@ -28,21 +28,7 @@ info 25 node3/lrm: status change wait_for_agent_lock => active
info 25 node3/lrm: starting service vm:101
info 25 node3/lrm: service status vm:101 started
info 120 cmdlist: execute service vm:101 migrate node1
-info 120 node1/crm: got crm command: migrate vm:101 node1
-info 120 node1/crm: migrate service 'vm:101' to node 'node1'
-info 120 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
-info 121 node1/lrm: got lock 'ha_agent_node1_lock'
-info 121 node1/lrm: status change wait_for_agent_lock => active
-info 125 node3/lrm: service vm:101 - start migrate to node 'node1'
-info 125 node3/lrm: service vm:101 - end migrate to node 'node1'
-info 140 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
-info 140 node1/crm: migrate service 'vm:101' to node 'node3' (running)
-info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
-info 141 node1/lrm: service vm:101 - start migrate to node 'node3'
-info 141 node1/lrm: service vm:101 - end migrate to node 'node3'
-info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
-info 165 node3/lrm: starting service vm:101
-info 165 node3/lrm: service status vm:101 started
+err 120 node1/crm: crm command 'migrate vm:101 node1' error - service 'vm:101' is not allowed on node 'node1'
info 220 cmdlist: execute service vm:101 migrate node2
err 220 node1/crm: crm command 'migrate vm:101 node2' error - service 'vm:101' is not allowed on node 'node2'
info 320 cmdlist: execute service vm:101 migrate node3
@@ -55,6 +41,8 @@ info 620 cmdlist: execute service vm:102 migrate node1
info 620 node1/crm: got crm command: migrate vm:102 node1
info 620 node1/crm: migrate service 'vm:102' to node 'node1'
info 620 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node1)
+info 621 node1/lrm: got lock 'ha_agent_node1_lock'
+info 621 node1/lrm: status change wait_for_agent_lock => active
info 623 node2/lrm: service vm:102 - start migrate to node 'node1'
info 623 node2/lrm: service vm:102 - end migrate to node 'node1'
info 640 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node1)
--
2.47.3
* [PATCH ha-manager v3 14/17] config: remove duplicate config reads in get_resource_motion_info
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
read_and_compile_rules_config(...) is only used here and performs
duplicate config reads for the manager status and the resource config.
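The refactoring follows a simple read-once-and-pass-down pattern: the caller performs each config read a single time and hands the results to the (now private) compile step. A generic sketch, where read_cfg and compile are hypothetical stand-ins for the real helpers and the read counter only exists to make duplicate reads visible:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in reader; counts invocations per config name so
# duplicate reads would show up in the tally.
my %reads;
sub read_cfg { my ($name) = @_; $reads{$name}++; return { name => $name }; }

# Stand-in for the private compile step: it only consumes what the
# caller already read, instead of re-reading configs itself.
sub compile {
    my ($rules, $groups, $resources, $status) = @_;
    return [$rules, $groups, $resources, $status];
}

my $rules     = read_cfg('rules');
my $groups    = read_cfg('groups');
my $resources = read_cfg('resources');
my $status    = read_cfg('manager_status');

# compiling twice still costs only one read per config
compile($rules, $groups, $resources, $status);
compile($rules, $groups, $resources, $status);

print join(',', map { "$_=$reads{$_}" } sort keys %reads), "\n";
```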
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- none
src/PVE/HA/Config.pm | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
index e9551dfb..eb1f9b7a 100644
--- a/src/PVE/HA/Config.pm
+++ b/src/PVE/HA/Config.pm
@@ -236,17 +236,11 @@ sub read_and_check_rules_config {
return $rules;
}
-sub read_and_compile_rules_config {
+my sub compile_rules_config {
+ my ($rules, $groups, $resources, $manager_status) = @_;
- my $rules = read_and_check_rules_config();
-
- my $manager_status = read_manager_status();
my $nodes = [keys $manager_status->{node_status}->%*];
- # TODO PVE 10: Remove group migration when HA groups have been fully migrated to location rules
- my $groups = read_group_config();
- my $resources = read_and_check_resources_config();
-
PVE::HA::Groups::migrate_groups_to_rules($rules, $groups, $resources);
PVE::HA::Rules->transform($rules, $nodes);
@@ -412,7 +406,10 @@ sub get_resource_motion_info {
# get_resource_motion_info expects a resource config with defaults set
my $resources = checked_resources_config($cfg);
- my $compiled_rules = read_and_compile_rules_config();
+ my $rules = read_and_check_rules_config();
+ # TODO PVE 10: Remove group migration when HA groups have been fully migrated to rules
+ my $groups = read_group_config();
+ my $compiled_rules = compile_rules_config($rules, $groups, $resources, $manager_status);
my $cd = $resources->{$sid} // {};
($dependent_resources, $blocking_resources_by_node) =
--
2.47.3
* [PATCH qemu-server v3 15/17] api: migration preconditions: add node affinity as blocking cause
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes v2 -> v3:
- none
src/PVE/API2/Qemu.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index d762401b..80a95708 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -5260,7 +5260,7 @@ __PACKAGE__->register_method({
type => 'string',
description => "The reason why the HA"
. " resource is blocking the migration.",
- enum => ['resource-affinity'],
+ enum => ['node-affinity', 'resource-affinity'],
},
},
},
--
2.47.3
* [PATCH container v3 16/17] api: migration preconditions: add node affinity as blocking cause
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes v2 -> v3:
- none
src/PVE/API2/LXC.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 838dd76..f337e5b 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1538,7 +1538,7 @@ __PACKAGE__->register_method({
type => 'string',
description => "The reason why the HA"
. " resource is blocking the migration.",
- enum => ['resource-affinity'],
+ enum => ['node-affinity', 'resource-affinity'],
},
},
},
--
2.47.3
* [PATCH manager v3 17/17] ui: migrate: display precondition messages for ha node affinity
From: Daniel Kral @ 2026-05-11 9:46 UTC (permalink / raw)
To: pve-devel
Extend the VM and container precondition check to show when a migration
of the VM/container cannot be completed because a node affinity rule
restricts the HA resource from being migrated to the selected node.
The migration is blocked by the HA Manager's CLI and state machine
anyway, so this is more of an informational heads-up.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes v2 -> v3:
- none
www/manager6/window/Migrate.js | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index ff80c70c..8cac54ea 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -432,6 +432,11 @@ Ext.define('PVE.window.Migrate', {
),
sid,
);
+ } else if (cause === 'node-affinity') {
+ reasonText = Ext.String.format(
+ gettext('HA resource {0} is not allowed on the selected target node'),
+ sid,
+ );
} else {
reasonText = Ext.String.format(
gettext('blocking HA resource {0} on selected target node'),
@@ -522,6 +527,11 @@ Ext.define('PVE.window.Migrate', {
),
sid,
);
+ } else if (cause === 'node-affinity') {
+ reasonText = Ext.String.format(
+ gettext('HA resource {0} is not allowed on the selected target node'),
+ sid,
+ );
} else {
reasonText = Ext.String.format(
gettext('blocking HA resource {0} on selected target node'),
--
2.47.3
* applied: [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497)
From: Thomas Lamprecht @ 2026-05-15 4:51 UTC (permalink / raw)
To: pve-devel, Daniel Kral
On Mon, 11 May 2026 11:46:21 +0200, Daniel Kral wrote:
> v1: https://lore.proxmox.com/pve-devel/20251215155334.476984-1-d.kral@proxmox.com/
> v2: https://lore.proxmox.com/pve-devel/20260120152755.499037-1-d.kral@proxmox.com/
>
> Changes v2 -> v3:
> - rebase on master (relevant: {dis,}arm-ha and load balancer)
> - add 'fix #1497' prefix to relevant patches
> - add patch to make service config hash global and remove a duplicate
> service config read from the Manager class
>
> [...]
Applied, thanks!
[ha-manager]:
[01/14] ha: put source files on individual new lines
commit: 9b21576d1ea03fb9c0292ef3b485930bbf6c29d9
[02/14] d/pve-ha-manager.install: remove duplicate Config.pm
commit: aa71309daff47a9a52719db2a34787aee5a847f7
[03/14] config: group and sort use statements
commit: c66253a4f30616e6b6433c54054e4783c3c932ff
[04/14] manager: group and sort use statements
commit: 20258bc9287e89b459ee200689f6cf45317ca8d4
[05/14] manager: report all reasons when resources are blocked from migration
commit: e1aa476ae3027e96d1d7e65a3426858e2a076537
[06/14] config, manager: factor out resource motion info logic
commit: b78bdce47452e693319ce6ad16a8cd9a07dd0ec5
[07/14] tests: add test cases for migrating resources with node affinity rules
commit: 122b95c748508f8b55b441fafda600bd6458ad1f
[08/14] fix #1497: handle strict node affinity rules in manual migrations
commit: dc200f10963fcb7ca81ea05246ce69bf03117629
[09/14] config: improve variable names in read_and_check_resources_config
commit: 4173f8e7912fa505b9382e9941cd8d4451d0c155
[10/14] config: factor out checked_resources_config helper
commit: 2a2ba3ddf35f78c6a324550d7737e46e2dbbcfb4
[11/14] manager: store global reference to service config hash
commit: d3a73dd28a4e25fd4a79418b43745ff3b78172d6
[12/14] manager: remove duplicate service config read in update_crm_commands
commit: d9004fe44e527f28ce7cf051a41dbd5aa3c36f10
[13/14] fix #1497: handle node affinity rules with failback in manual migrations
commit: b4a2197c4a49b209225e596102c28104e49eac1d
[14/14] config: remove duplicate config reads in get_resource_motion_info
commit: 2029d75f8b8dafd8e8bb9692daef58007c71bf8e
[qemu-server]:
[1/1] api: migration preconditions: add node affinity as blocking cause
commit: 4fbab7ee85033c8fb86a535f99b20e6dcdcca941
[pve-container]:
[1/1] api: migration preconditions: add node affinity as blocking cause
commit: 0d7f20bee4248bb43a1191632d8f7d94ac477f58
* applied: [PATCH-SERIES container/ha-manager/manager/qemu-server v3 00/17] HA node affinity blockers (#1497)
From: Thomas Lamprecht @ 2026-05-15 5:04 UTC (permalink / raw)
To: pve-devel, Daniel Kral
On Mon, 11 May 2026 11:46:21 +0200, Daniel Kral wrote:
> v1: https://lore.proxmox.com/pve-devel/20251215155334.476984-1-d.kral@proxmox.com/
> v2: https://lore.proxmox.com/pve-devel/20260120152755.499037-1-d.kral@proxmox.com/
>
> Changes v2 -> v3:
> - rebase on master (relevant: {dis,}arm-ha and load balancer)
> - add 'fix #1497' prefix to relevant patches
> - add patch to make service config hash global and remove a duplicate
> service config read from the Manager class
>
> [...]
Applied, thanks!
[1/1] ui: migrate: display precondition messages for ha node affinity
commit: eda234c5617dd609d33b52985c92cdc9ffe9b267