* [pve-devel] [PATCH qemu-server v2 1/1] config: only fetch necessary default values in get_derived_property helper
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-21 11:47 ` [pve-devel] applied: " Fiona Ebner
2025-10-20 16:45 ` [pve-devel] [PATCH proxmox v2 1/1] resource-scheduling: change score_nodes_to_start_service signature Daniel Kral
` (10 subsequent siblings)
11 siblings, 1 reply; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
get_derived_property(...) is called in the semi-hot path of the HA
Manager's static load scheduler to retrieve the static stats of each VM.
As the defaults are only needed in certain cases and only for a very
small subset of the VM config properties, fetch them separately when
needed.
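For illustration, a minimal sketch of the call pattern in the HA Manager's
static stats gathering (the config values below are made up):

    # a VM config that sets neither sockets, cores nor vcpus, so both
    # defaults are looked up via the new helper instead of load_defaults()
    my $conf = { memory => '4096' };

    my $maxcpu = PVE::QemuConfig->get_derived_property($conf, 'max-cpu');
    my $maxmem = PVE::QemuConfig->get_derived_property($conf, 'max-memory');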
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes since v1:
- none
src/PVE/QemuConfig.pm | 8 +++-----
src/PVE/QemuServer.pm | 6 ++++++
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
index bb469197..ad6ce6b0 100644
--- a/src/PVE/QemuConfig.pm
+++ b/src/PVE/QemuConfig.pm
@@ -582,12 +582,10 @@ sub load_current_config {
sub get_derived_property {
my ($class, $conf, $name) = @_;
- my $defaults = PVE::QemuServer::load_defaults();
-
if ($name eq 'max-cpu') {
- my $cpus =
- ($conf->{sockets} || $defaults->{sockets}) * ($conf->{cores} || $defaults->{cores});
- return $conf->{vcpus} || $cpus;
+ my $sockets = $conf->{sockets} || PVE::QemuServer::get_default_property_value('sockets');
+ my $cores = $conf->{cores} || PVE::QemuServer::get_default_property_value('cores');
+ return $conf->{vcpus} || ($sockets * $cores);
} elsif ($name eq 'max-memory') { # current usage maximum, not maximum hotpluggable
return get_current_memory($conf->{memory}) * 1024 * 1024;
} else {
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index df2476aa..8b8e8338 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2126,6 +2126,12 @@ sub write_vm_config {
return $raw;
}
+sub get_default_property_value {
+ my ($name) = @_;
+
+ return $confdesc->{$name}->{default};
+}
+
sub load_defaults {
my $res = {};
--
2.47.3
* [pve-devel] [PATCH proxmox v2 1/1] resource-scheduling: change score_nodes_to_start_service signature
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
2025-10-20 16:45 ` [pve-devel] [PATCH qemu-server v2 1/1] config: only fetch necessary default values in get_derived_property helper Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-21 12:14 ` Fiona Ebner
2025-10-20 16:45 ` [pve-devel] [PATCH perl-rs v2 1/2] pve-rs: resource_scheduling: allow granular usage changes Daniel Kral
` (9 subsequent siblings)
11 siblings, 1 reply; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
This is needed because a StaticNodeUsage is now created in each invocation
of PVE::RS::ResourceScheduling::Static::score_nodes_to_start_service(...).
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes since v1:
- none
proxmox-resource-scheduling/src/pve_static.rs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/proxmox-resource-scheduling/src/pve_static.rs b/proxmox-resource-scheduling/src/pve_static.rs
index d39614cd..fc40cb5c 100644
--- a/proxmox-resource-scheduling/src/pve_static.rs
+++ b/proxmox-resource-scheduling/src/pve_static.rs
@@ -70,7 +70,7 @@ criteria_struct! {
/// Returns a vector of (nodename, score) pairs. Scores are between 0.0 and 1.0 and a higher score
/// is better.
pub fn score_nodes_to_start_service(
- nodes: &[&StaticNodeUsage],
+ nodes: &[StaticNodeUsage],
service: &StaticServiceUsage,
) -> Result<Vec<(String, f64)>, Error> {
let len = nodes.len();
--
2.47.3
* [pve-devel] [PATCH perl-rs v2 1/2] pve-rs: resource_scheduling: allow granular usage changes
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
2025-10-20 16:45 ` [pve-devel] [PATCH qemu-server v2 1/1] config: only fetch necessary default values in get_derived_property helper Daniel Kral
2025-10-20 16:45 ` [pve-devel] [PATCH proxmox v2 1/1] resource-scheduling: change score_nodes_to_start_service signature Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-20 16:45 ` [pve-devel] [PATCH perl-rs v2 2/2] test: resource_scheduling: use score_nodes helper to imitate HA Manager Daniel Kral
` (8 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
Implements a simple bidirectional map to track which service usages
have been added to which nodes, so that they can later be removed
individually.
A `StaticNodeUsage` is now initialized freshly on every invocation of
score_nodes_to_start_service(...) instead of updating its values on
every call to `add_service_usage_to_node(...)`, to reduce the likelihood
of numerical instability from repeated floating-point operations on the
`cpu` field.
The `StaticServiceUsage` values are stored in the HashMap<> inside
`StaticNodeInfo` to reduce unnecessary indirection when summing them in
score_nodes_to_start_service(...).
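A minimal usage sketch of the updated Perl bindings (node names, sizes and
the VMID are made up; the calls mirror those in the test suite below):

    use PVE::RS::ResourceScheduling::Static;

    my $static = PVE::RS::ResourceScheduling::Static->new();
    $static->add_node("node1", 16, 64_000_000_000); # nodename, maxcpu, maxmem
    $static->add_node("node2", 16, 64_000_000_000);

    my $usage = { maxcpu => 4, maxmem => 8_000_000_000 };

    # usage is now keyed by an explicit service ID ...
    $static->add_service_usage_to_node("node1", "vm:100", $usage);

    # ... so it can later be dropped individually, e.g. on a state change
    $static->remove_service_usage("vm:100");

    # scoring rebuilds the per-node usage from the tracked services
    my $score_list = $static->score_nodes_to_start_service($usage);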
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
Needs a build dependency bump for
librust-proxmox-resource-scheduling-dev and a versioned breaks for
pve-ha-manager.
changes since v1:
- added numerical stability bit to patch message
- changed service_nodes type from HashMap<String, Vec<String>> to
HashMap<String, HashSet<String>> and changed relevant methods
accordingly
- added consistency check in add_service_usage_to_node(...) for the
other way around too (whether service_nodes contains the node already)
- improve error messages
- use vm:{seqnum} instead of single letters
.../bindings/resource_scheduling_static.rs | 108 +++++++++++++++---
pve-rs/test/resource_scheduling.pl | 82 +++++++++++--
2 files changed, 167 insertions(+), 23 deletions(-)
diff --git a/pve-rs/src/bindings/resource_scheduling_static.rs b/pve-rs/src/bindings/resource_scheduling_static.rs
index 7cf2f35..5b91d36 100644
--- a/pve-rs/src/bindings/resource_scheduling_static.rs
+++ b/pve-rs/src/bindings/resource_scheduling_static.rs
@@ -6,7 +6,7 @@ pub mod pve_rs_resource_scheduling_static {
//!
//! See [`proxmox_resource_scheduling`].
- use std::collections::HashMap;
+ use std::collections::{HashMap, HashSet};
use std::sync::Mutex;
use anyhow::{Error, bail};
@@ -16,8 +16,16 @@ pub mod pve_rs_resource_scheduling_static {
perlmod::declare_magic!(Box<Scheduler> : &Scheduler as "PVE::RS::ResourceScheduling::Static");
+ struct StaticNodeInfo {
+ name: String,
+ maxcpu: usize,
+ maxmem: usize,
+ services: HashMap<String, StaticServiceUsage>,
+ }
+
struct Usage {
- nodes: HashMap<String, StaticNodeUsage>,
+ nodes: HashMap<String, StaticNodeInfo>,
+ service_nodes: HashMap<String, HashSet<String>>,
}
/// A scheduler instance contains the resource usage by node.
@@ -30,6 +38,7 @@ pub mod pve_rs_resource_scheduling_static {
pub fn new(#[raw] class: Value) -> Result<Value, Error> {
let inner = Usage {
nodes: HashMap::new(),
+ service_nodes: HashMap::new(),
};
Ok(perlmod::instantiate_magic!(
@@ -39,7 +48,7 @@ pub mod pve_rs_resource_scheduling_static {
/// Method: Add a node with its basic CPU and memory info.
///
- /// This inserts a [`StaticNodeUsage`] entry for the node into the scheduler instance.
+ /// This inserts a [`StaticNodeInfo`] entry for the node into the scheduler instance.
#[export]
pub fn add_node(
#[try_from_ref] this: &Scheduler,
@@ -53,12 +62,11 @@ pub mod pve_rs_resource_scheduling_static {
bail!("node {} already added", nodename);
}
- let node = StaticNodeUsage {
+ let node = StaticNodeInfo {
name: nodename.clone(),
- cpu: 0.0,
maxcpu,
- mem: 0,
maxmem,
+ services: HashMap::new(),
};
usage.nodes.insert(nodename, node);
@@ -67,10 +75,25 @@ pub mod pve_rs_resource_scheduling_static {
/// Method: Remove a node from the scheduler.
#[export]
- pub fn remove_node(#[try_from_ref] this: &Scheduler, nodename: &str) {
+ pub fn remove_node(#[try_from_ref] this: &Scheduler, nodename: &str) -> Result<(), Error> {
let mut usage = this.inner.lock().unwrap();
- usage.nodes.remove(nodename);
+ if let Some(node) = usage.nodes.remove(nodename) {
+ for (sid, _) in node.services.iter() {
+ match usage.service_nodes.get_mut(sid) {
+ Some(service_nodes) => {
+ service_nodes.remove(nodename);
+ }
+ None => bail!(
+ "service '{}' not present in service_nodes hashmap while removing node '{}'",
+ sid,
+ nodename
+ ),
+ }
+ }
+ }
+
+ Ok(())
}
/// Method: Get a list of all the nodes in the scheduler.
@@ -93,22 +116,63 @@ pub mod pve_rs_resource_scheduling_static {
usage.nodes.contains_key(nodename)
}
- /// Method: Add usage of `service` to the node's usage.
+ /// Method: Add service `sid` and its `service_usage` to the node.
#[export]
pub fn add_service_usage_to_node(
#[try_from_ref] this: &Scheduler,
nodename: &str,
- service: StaticServiceUsage,
+ sid: &str,
+ service_usage: StaticServiceUsage,
) -> Result<(), Error> {
let mut usage = this.inner.lock().unwrap();
match usage.nodes.get_mut(nodename) {
Some(node) => {
- node.add_service_usage(&service);
- Ok(())
+ if node.services.contains_key(sid) {
+ bail!("service '{}' already added to node '{}'", sid, nodename);
+ }
+
+ node.services.insert(sid.to_string(), service_usage);
}
None => bail!("node '{}' not present in usage hashmap", nodename),
}
+
+ if let Some(service_nodes) = usage.service_nodes.get_mut(sid) {
+ if service_nodes.contains(nodename) {
+ bail!("node '{}' already added to service '{}'", nodename, sid);
+ }
+
+ service_nodes.insert(nodename.to_string());
+ } else {
+ let mut service_nodes = HashSet::new();
+ service_nodes.insert(nodename.to_string());
+ usage.service_nodes.insert(sid.to_string(), service_nodes);
+ }
+
+ Ok(())
+ }
+
+ /// Method: Remove service `sid` and its usage from all assigned nodes.
+ #[export]
+ fn remove_service_usage(#[try_from_ref] this: &Scheduler, sid: &str) -> Result<(), Error> {
+ let mut usage = this.inner.lock().unwrap();
+
+ if let Some(nodes) = usage.service_nodes.remove(sid) {
+ for nodename in &nodes {
+ match usage.nodes.get_mut(nodename) {
+ Some(node) => {
+ node.services.remove(sid);
+ }
+ None => bail!(
+ "service '{}' not present in usage hashmap on node '{}'",
+ sid,
+ nodename
+ ),
+ }
+ }
+ }
+
+ Ok(())
}
/// Scores all previously added nodes for starting a `service` on.
@@ -126,7 +190,25 @@ pub mod pve_rs_resource_scheduling_static {
service: StaticServiceUsage,
) -> Result<Vec<(String, f64)>, Error> {
let usage = this.inner.lock().unwrap();
- let nodes = usage.nodes.values().collect::<Vec<&StaticNodeUsage>>();
+ let nodes = usage
+ .nodes
+ .values()
+ .map(|node| {
+ let mut node_usage = StaticNodeUsage {
+ name: node.name.to_string(),
+ cpu: 0.0,
+ maxcpu: node.maxcpu,
+ mem: 0,
+ maxmem: node.maxmem,
+ };
+
+ for service in node.services.values() {
+ node_usage.add_service_usage(service);
+ }
+
+ node_usage
+ })
+ .collect::<Vec<StaticNodeUsage>>();
proxmox_resource_scheduling::pve_static::score_nodes_to_start_service(&nodes, &service)
}
diff --git a/pve-rs/test/resource_scheduling.pl b/pve-rs/test/resource_scheduling.pl
index e3b7d2e..42556bd 100755
--- a/pve-rs/test/resource_scheduling.pl
+++ b/pve-rs/test/resource_scheduling.pl
@@ -7,6 +7,20 @@ use Test::More;
use PVE::RS::ResourceScheduling::Static;
+my sub score_nodes {
+ my ($static, $service) = @_;
+
+ my $score_list = $static->score_nodes_to_start_service($service);
+
+ # imitate HA manager
+ my $scores = { map { $_->[0] => -$_->[1] } $score_list->@* };
+ my @nodes = sort {
+ $scores->{$a} <=> $scores->{$b} || $a cmp $b
+ } keys $scores->%*;
+
+ return @nodes;
+}
+
sub test_basic {
my $static = PVE::RS::ResourceScheduling::Static->new();
is(scalar($static->list_nodes()->@*), 0, 'node list empty');
@@ -50,7 +64,54 @@ sub test_balance {
is($nodes[1], "A", 'second should be A');
}
- $static->add_service_usage_to_node($nodes[0], $service);
+ $static->add_service_usage_to_node($nodes[0], "vm:" . (100 + $i), $service);
+ }
+}
+
+sub test_balance_removal {
+ my $static = PVE::RS::ResourceScheduling::Static->new();
+ $static->add_node("A", 10, 100_000_000_000);
+ $static->add_node("B", 20, 200_000_000_000);
+ $static->add_node("C", 30, 300_000_000_000);
+
+ my $service = {
+ maxcpu => 4,
+ maxmem => 20_000_000_000,
+ };
+
+ $static->add_service_usage_to_node("A", "a", $service);
+ $static->add_service_usage_to_node("A", "b", $service);
+ $static->add_service_usage_to_node("B", "c", $service);
+ $static->add_service_usage_to_node("B", "d", $service);
+ $static->add_service_usage_to_node("C", "c", $service);
+
+ {
+ my @nodes = score_nodes($static, $service);
+
+ is($nodes[0], "C");
+ is($nodes[1], "B");
+ is($nodes[2], "A");
+ }
+
+ $static->remove_service_usage("d");
+ $static->remove_service_usage("c");
+ $static->add_service_usage_to_node("C", "c", $service);
+
+ {
+ my @nodes = score_nodes($static, $service);
+
+ is($nodes[0], "B");
+ is($nodes[1], "C");
+ is($nodes[2], "A");
+ }
+
+ $static->remove_node("B");
+
+ {
+ my @nodes = score_nodes($static, $service);
+
+ is($nodes[0], "C");
+ is($nodes[1], "A");
}
}
@@ -66,11 +127,11 @@ sub test_overcommitted {
maxmem => 536_870_912,
};
- $static->add_service_usage_to_node("A", $service);
- $static->add_service_usage_to_node("A", $service);
- $static->add_service_usage_to_node("A", $service);
- $static->add_service_usage_to_node("B", $service);
- $static->add_service_usage_to_node("A", $service);
+ $static->add_service_usage_to_node("A", "a", $service);
+ $static->add_service_usage_to_node("A", "b", $service);
+ $static->add_service_usage_to_node("A", "c", $service);
+ $static->add_service_usage_to_node("B", "d", $service);
+ $static->add_service_usage_to_node("A", "e", $service);
my $score_list = $static->score_nodes_to_start_service($service);
@@ -96,9 +157,9 @@ sub test_balance_small_memory_difference {
$static->add_node("C", 4, 8_000_000_000);
if ($with_start_load) {
- $static->add_service_usage_to_node("A", { maxcpu => 4, maxmem => 1_000_000_000 });
- $static->add_service_usage_to_node("B", { maxcpu => 2, maxmem => 1_000_000_000 });
- $static->add_service_usage_to_node("C", { maxcpu => 2, maxmem => 1_000_000_000 });
+ $static->add_service_usage_to_node("A", "vm:100", { maxcpu => 4, maxmem => 1_000_000_000 });
+ $static->add_service_usage_to_node("B", "vm:101", { maxcpu => 2, maxmem => 1_000_000_000 });
+ $static->add_service_usage_to_node("C", "vm:102", { maxcpu => 2, maxmem => 1_000_000_000 });
}
my $service = {
@@ -131,12 +192,13 @@ sub test_balance_small_memory_difference {
die "internal error, got $i % 4 == " . ($i % 4) . "\n";
}
- $static->add_service_usage_to_node($nodes[0], $service);
+ $static->add_service_usage_to_node($nodes[0], "vm:" . (103 + $i), $service);
}
}
test_basic();
test_balance();
+test_balance_removal();
test_overcommitted();
test_balance_small_memory_difference(1);
test_balance_small_memory_difference(0);
--
2.47.3
* [pve-devel] [PATCH perl-rs v2 2/2] test: resource_scheduling: use score_nodes helper to imitate HA Manager
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
` (2 preceding siblings ...)
2025-10-20 16:45 ` [pve-devel] [PATCH perl-rs v2 1/2] pve-rs: resource_scheduling: allow granular usage changes Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-21 12:14 ` Fiona Ebner
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 1/8] manager: remove redundant recompute_online_node_usage from next_state_recovery Daniel Kral
` (7 subsequent siblings)
11 siblings, 1 reply; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes since v1:
- new!
pve-rs/test/resource_scheduling.pl | 24 +++---------------------
1 file changed, 3 insertions(+), 21 deletions(-)
diff --git a/pve-rs/test/resource_scheduling.pl b/pve-rs/test/resource_scheduling.pl
index 42556bd..a332269 100755
--- a/pve-rs/test/resource_scheduling.pl
+++ b/pve-rs/test/resource_scheduling.pl
@@ -48,13 +48,7 @@ sub test_balance {
};
for (my $i = 0; $i < 15; $i++) {
- my $score_list = $static->score_nodes_to_start_service($service);
-
- # imitate HA manager
- my $scores = { map { $_->[0] => -$_->[1] } $score_list->@* };
- my @nodes = sort {
- $scores->{$a} <=> $scores->{$b} || $a cmp $b
- } keys $scores->%*;
+ my @nodes = score_nodes($static, $service);
if ($i % 3 == 2) {
is($nodes[0], "A", 'first should be A');
@@ -133,13 +127,7 @@ sub test_overcommitted {
$static->add_service_usage_to_node("B", "d", $service);
$static->add_service_usage_to_node("A", "e", $service);
- my $score_list = $static->score_nodes_to_start_service($service);
-
- # imitate HA manager
- my $scores = { map { $_->[0] => -$_->[1] } $score_list->@* };
- my @nodes = sort {
- $scores->{$a} <=> $scores->{$b} || $a cmp $b
- } keys $scores->%*;
+ my @nodes = score_nodes($static, $service);
is($nodes[0], "C", 'first should be C');
is($nodes[1], "D", 'second should be D');
@@ -168,13 +156,7 @@ sub test_balance_small_memory_difference {
};
for (my $i = 0; $i < 20; $i++) {
- my $score_list = $static->score_nodes_to_start_service($service);
-
- # imitate HA manager
- my $scores = { map { $_->[0] => -$_->[1] } $score_list->@* };
- my @nodes = sort {
- $scores->{$a} <=> $scores->{$b} || $a cmp $b
- } keys $scores->%*;
+ my @nodes = score_nodes($static, $service);
if ($i % 4 <= 1) {
is($nodes[0], "A", 'first should be A');
--
2.47.3
* [pve-devel] [PATCH ha-manager v2 1/8] manager: remove redundant recompute_online_node_usage from next_state_recovery
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
` (3 preceding siblings ...)
2025-10-20 16:45 ` [pve-devel] [PATCH perl-rs v2 2/2] test: resource_scheduling: use score_nodes helper to imitate HA Manager Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 2/8] manager: remove redundant add_service_usage_to_node " Daniel Kral
` (6 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
recompute_online_node_usage(...) currently only depends on the
configured CRS scheduler mode, the list of online nodes, and each HA
resource's state, current node and optional migration target.
Proper recovery of HA resources was introduced in 9da84a0d ("fix
'change_service_location' misuse and recovery from fencing") as a
private helper to recover fenced HA resources, which needed the call to
recompute_online_node_usage(...) here.
As the recovery of fenced HA resources has been its own FSM state since
c259b1a8 ("manager: make recovery actual state in FSM"), the
change_service_state(...) to 'recovery' already calls
recompute_online_node_usage(...) beforehand, making this call redundant.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes since v1:
- added R-b
src/PVE/HA/Manager.pm | 2 --
1 file changed, 2 deletions(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index beaacf36..8a1e177d 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -1309,8 +1309,6 @@ sub next_state_recovery {
my $fenced_node = $sd->{node}; # for logging purpose
- $self->recompute_online_node_usage(); # we want the most current node state
-
my $recovery_node = select_service_node(
$self->{rules},
$self->{online_node_usage},
--
2.47.3
* [pve-devel] [PATCH ha-manager v2 2/8] manager: remove redundant add_service_usage_to_node from next_state_recovery
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
` (4 preceding siblings ...)
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 1/8] manager: remove redundant recompute_online_node_usage from next_state_recovery Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 3/8] manager: remove redundant add_service_usage_to_node from next_state_started Daniel Kral
` (5 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
Since 5c2eef4b ("account service to source and target during move"), a
moving HA resource's load is accounted for on both the source and target
nodes.
A HA resource is in the 'recovery' state after its node was successfully
fenced, and its primary goal becomes finding a recovery node. As soon as
a recovery node is found, the HA resource is moved there. As the previous
node was fenced, there is only load on the recovery node.
The add_service_usage_to_node(...) call is redundant at this point, as
the subsequent change_service_state(...) to either the 'started' or
'request_stop' state calls recompute_online_node_usage(...), which
immediately discards the changes made to $online_node_usage.
This reasoning still holds once recompute_online_node_usage(...) in
change_service_state(...) is replaced with a more granular change to
$online_node_usage in a later patch.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes since v1:
- added info that calculation will hold with later patches
- added R-b
src/PVE/HA/Manager.pm | 2 --
1 file changed, 2 deletions(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 8a1e177d..7e43cfdf 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -1329,8 +1329,6 @@ sub next_state_recovery {
$fence_recovery_cleanup->($self, $sid, $fenced_node);
$haenv->steal_service($sid, $sd->{node}, $recovery_node);
- $self->{online_node_usage}->add_service_usage_to_node($recovery_node, $sid, $recovery_node);
- $self->{online_node_usage}->add_service_node($sid, $recovery_node);
# NOTE: $sd *is normally read-only*, fencing is the exception
$cd->{node} = $sd->{node} = $recovery_node;
--
2.47.3
* [pve-devel] [PATCH ha-manager v2 3/8] manager: remove redundant add_service_usage_to_node from next_state_started
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
` (5 preceding siblings ...)
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 2/8] manager: remove redundant add_service_usage_to_node " Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 4/8] rules: resource affinity: decouple get_resource_affinity helper from Usage class Daniel Kral
` (4 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
Since 5c2eef4b ("account service to source and target during move"), a
moving HA resource's load is accounted for on both the source and target
nodes.
A HA resource in the 'started' state, which is not configured otherwise
and has no pending CRM commands left to process, actively checks whether
there is a "better" node placement by querying select_service_node(...).
When a better node is found, the HA resource is migrated or relocated to
it, depending on the resource's type.
The add_service_usage_to_node(...) call is redundant at this point, as
the subsequent change_service_state(...) to either the 'migrate' or
'relocate' state calls recompute_online_node_usage(...), which
immediately discards the changes made to $online_node_usage.
This reasoning still holds once recompute_online_node_usage(...) in
change_service_state(...) is replaced with a more granular change to
$online_node_usage in a later patch.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes since v1:
- added info that calculation will hold with later patches
- added R-b
src/PVE/HA/Manager.pm | 3 ---
1 file changed, 3 deletions(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 7e43cfdf..e1b510be 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -1202,9 +1202,6 @@ sub next_state_started {
);
if ($node && ($sd->{node} ne $node)) {
- $self->{online_node_usage}->add_service_usage_to_node($node, $sid, $sd->{node});
- $self->{online_node_usage}->add_service_node($sid, $node);
-
if (defined(my $fallback = $sd->{maintenance_node})) {
if ($node eq $fallback) {
$haenv->log(
--
2.47.3
* [pve-devel] [PATCH ha-manager v2 4/8] rules: resource affinity: decouple get_resource_affinity helper from Usage class
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
` (6 preceding siblings ...)
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 3/8] manager: remove redundant add_service_usage_to_node from next_state_started Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-21 13:02 ` Fiona Ebner
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 5/8] manager: make recompute_online_node_usage use add_service_usage helper Daniel Kral
` (3 subsequent siblings)
11 siblings, 1 reply; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
To be enacted correctly, the resource affinity rules need information
about the nodes used by the other HA resources, which was previously
proxied through $online_node_usage.
The get_used_service_nodes(...) helper mirrors the logic in
recompute_online_node_usage(...) to retrieve the nodes a HA resource
$sid currently puts load on.
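For illustration, a hedged example of the helper's contract (the helper
itself is added in the diff below; node names are made up):

    my $online_nodes = { node1 => 1, node2 => 1 };

    # a started resource only puts load on its current node
    my ($current, $target) =
        PVE::HA::Usage::get_used_service_nodes($online_nodes, 'started', 'node1', undef);
    # => ('node1', undef)

    # a migrating resource puts load on both the source and the target node
    ($current, $target) =
        PVE::HA::Usage::get_used_service_nodes($online_nodes, 'migrate', 'node1', 'node2');
    # => ('node1', 'node2')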
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes since v1:
- move get_used_service_nodes(...) helper to PVE::HA::Usage class
- change get_used_service_nodes(...) signature from
($sd, $online_nodes)
to
($online_nodes, $state, $node, $target)
- change return value of get_used_service_nodes(...) from hash ref to
two-element array
- added R-b
src/PVE/HA/Manager.pm | 24 +++++++-------
src/PVE/HA/Rules/ResourceAffinity.pm | 23 +++++++------
src/PVE/HA/Usage.pm | 48 +++++++++++++++++-----------
src/PVE/HA/Usage/Basic.pm | 19 -----------
src/PVE/HA/Usage/Static.pm | 19 -----------
src/test/test_failover1.pl | 17 ++++++----
6 files changed, 64 insertions(+), 86 deletions(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index e1b510be..a71de167 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -126,12 +126,12 @@ sub flush_master_status {
=head3 select_service_node(...)
-=head3 select_service_node($rules, $online_node_usage, $sid, $service_conf, $sd, $node_preference)
+=head3 select_service_node($rules, $online_node_usage, $sid, $service_conf, $ss, $node_preference)
Used to select the best fitting node for the service C<$sid>, with the
-configuration C<$service_conf> and state C<$sd>, according to the rules defined
-in C<$rules>, available node utilization in C<$online_node_usage>, and the
-given C<$node_preference>.
+configuration C<$service_conf>, according to the rules defined in C<$rules>,
+available node utilization in C<$online_node_usage>, the service states in
+C<$ss> and the given C<$node_preference>.
The C<$node_preference> can be set to:
@@ -148,11 +148,12 @@ The C<$node_preference> can be set to:
=cut
sub select_service_node {
- my ($rules, $online_node_usage, $sid, $service_conf, $sd, $node_preference) = @_;
+ my ($rules, $online_node_usage, $sid, $service_conf, $ss, $node_preference) = @_;
die "'$node_preference' is not a valid node_preference for select_service_node\n"
if $node_preference !~ m/(none|best-score|try-next)/;
+ my $sd = $ss->{$sid};
my ($current_node, $tried_nodes, $maintenance_fallback) =
$sd->@{qw(node failed_nodes maintenance_node)};
@@ -160,7 +161,8 @@ sub select_service_node {
return undef if !%$pri_nodes;
- my ($together, $separate) = get_resource_affinity($rules, $sid, $online_node_usage);
+ my $online_nodes = { map { $_ => 1 } $online_node_usage->list_nodes() };
+ my ($together, $separate) = get_resource_affinity($rules, $sid, $ss, $online_nodes);
# stay on current node if possible (avoids random migrations)
if (
@@ -289,7 +291,6 @@ sub recompute_online_node_usage {
|| $state eq 'recovery'
) {
$online_node_usage->add_service_usage_to_node($sd->{node}, $sid, $sd->{node});
- $online_node_usage->set_service_node($sid, $sd->{node});
} elsif (
$state eq 'migrate'
|| $state eq 'relocate'
@@ -299,11 +300,9 @@ sub recompute_online_node_usage {
# count it for both, source and target as load is put on both
if ($state ne 'request_start_balance') {
$online_node_usage->add_service_usage_to_node($source, $sid, $source, $target);
- $online_node_usage->add_service_node($sid, $source);
}
if ($online_node_usage->contains_node($target)) {
$online_node_usage->add_service_usage_to_node($target, $sid, $source, $target);
- $online_node_usage->add_service_node($sid, $target);
}
} elsif ($state eq 'stopped' || $state eq 'request_start') {
# do nothing
@@ -316,7 +315,6 @@ sub recompute_online_node_usage {
# case a node dies, as we cannot really know if the to-be-aborted incoming migration
# has already cleaned up all used resources
$online_node_usage->add_service_usage_to_node($target, $sid, $sd->{node}, $target);
- $online_node_usage->set_service_node($sid, $target);
}
}
}
@@ -1030,7 +1028,7 @@ sub next_state_request_start {
$self->{online_node_usage},
$sid,
$cd,
- $sd,
+ $self->{ss},
'best-score',
);
my $select_text = $selected_node ne $current_node ? 'new' : 'current';
@@ -1197,7 +1195,7 @@ sub next_state_started {
$self->{online_node_usage},
$sid,
$cd,
- $sd,
+ $self->{ss},
$select_node_preference,
);
@@ -1311,7 +1309,7 @@ sub next_state_recovery {
$self->{online_node_usage},
$sid,
$cd,
- $sd,
+ $self->{ss},
'best-score',
);
diff --git a/src/PVE/HA/Rules/ResourceAffinity.pm b/src/PVE/HA/Rules/ResourceAffinity.pm
index 9bc039ba..9a928196 100644
--- a/src/PVE/HA/Rules/ResourceAffinity.pm
+++ b/src/PVE/HA/Rules/ResourceAffinity.pm
@@ -5,6 +5,7 @@ use warnings;
use PVE::HA::HashTools qw(set_intersect sets_are_disjoint);
use PVE::HA::Rules;
+use PVE::HA::Usage;
use base qw(Exporter);
use base qw(PVE::HA::Rules);
@@ -496,12 +497,12 @@ sub get_affinitive_resources : prototype($$) {
return ($together, $separate);
}
-=head3 get_resource_affinity($rules, $sid, $online_node_usage)
+=head3 get_resource_affinity($rules, $sid, $ss, $online_nodes)
Returns a list of two hashes, where the first describes the positive resource
affinity and the second hash describes the negative resource affinity for
-resource C<$sid> according to the resource affinity rules in C<$rules> and the
-resource locations in C<$online_node_usage>.
+resource C<$sid> according to the resource affinity rules in C<$rules>, the
+service status C<$ss> and the C<$online_nodes> hash.
For the positive resource affinity of a resource C<$sid>, each element in the
hash represents an online node, where other resources, which C<$sid> is in
@@ -529,8 +530,8 @@ resource C<$sid> is in a negative affinity with, the returned value will be:
=cut
-sub get_resource_affinity : prototype($$$) {
- my ($rules, $sid, $online_node_usage) = @_;
+sub get_resource_affinity : prototype($$$$) {
+ my ($rules, $sid, $ss, $online_nodes) = @_;
my $together = {};
my $separate = {};
@@ -543,14 +544,16 @@ sub get_resource_affinity : prototype($$$) {
for my $csid (keys %{ $rule->{resources} }) {
next if $csid eq $sid;
- my $nodes = $online_node_usage->get_service_nodes($csid);
-
- next if !$nodes || !@$nodes; # skip unassigned nodes
+ my ($state, $node, $target) = $ss->{$csid}->@{qw(state node target)};
+ my ($current_node, $target_node) =
+ PVE::HA::Usage::get_used_service_nodes($online_nodes, $state, $node, $target);
if ($rule->{affinity} eq 'positive') {
- $together->{$_}++ for @$nodes;
+ $together->{$current_node}++ if defined($current_node);
+ $together->{$target_node}++ if defined($target_node);
} elsif ($rule->{affinity} eq 'negative') {
- $separate->{$_} = 1 for @$nodes;
+ $separate->{$current_node} = 1 if defined($current_node);
+ $separate->{$target_node} = 1 if defined($target_node);
} else {
die "unimplemented resource affinity type $rule->{affinity}\n";
}
diff --git a/src/PVE/HA/Usage.pm b/src/PVE/HA/Usage.pm
index 7f4d9ca3..edea2545 100644
--- a/src/PVE/HA/Usage.pm
+++ b/src/PVE/HA/Usage.pm
@@ -27,24 +27,6 @@ sub list_nodes {
die "implement in subclass";
}
-sub get_service_nodes {
- my ($self, $sid) = @_;
-
- die "implement in subclass";
-}
-
-sub set_service_node {
- my ($self, $sid, $nodename) = @_;
-
- die "implement in subclass";
-}
-
-sub add_service_node {
- my ($self, $sid, $nodename) = @_;
-
- die "implement in subclass";
-}
-
sub contains_node {
my ($self, $nodename) = @_;
@@ -65,4 +47,34 @@ sub score_nodes_to_start_service {
die "implement in subclass";
}
+# Returns the current and target node as a two-element array, that a service
+# puts load on according to the $online_nodes and the service's $state, $node
+# and $target.
+sub get_used_service_nodes {
+ my ($online_nodes, $state, $node, $target) = @_;
+
+ return (undef, undef) if $state eq 'stopped' || $state eq 'request_start';
+
+ my ($current_node, $target_node);
+
+ if (
+ $state eq 'started'
+ || $state eq 'request_stop'
+ || $state eq 'fence'
+ || $state eq 'freeze'
+ || $state eq 'error'
+ || $state eq 'recovery'
+ || $state eq 'migrate'
+ || $state eq 'relocate'
+ ) {
+ $current_node = $node if $online_nodes->{$node};
+ }
+
+ if ($state eq 'migrate' || $state eq 'relocate' || $state eq 'request_start_balance') {
+ $target_node = $target if defined($target) && $online_nodes->{$target};
+ }
+
+ return ($current_node, $target_node);
+}
+
1;
diff --git a/src/PVE/HA/Usage/Basic.pm b/src/PVE/HA/Usage/Basic.pm
index afe3733c..ead08c54 100644
--- a/src/PVE/HA/Usage/Basic.pm
+++ b/src/PVE/HA/Usage/Basic.pm
@@ -11,7 +11,6 @@ sub new {
return bless {
nodes => {},
haenv => $haenv,
- 'service-nodes' => {},
}, $class;
}
@@ -39,24 +38,6 @@ sub contains_node {
return defined($self->{nodes}->{$nodename});
}
-sub get_service_nodes {
- my ($self, $sid) = @_;
-
- return $self->{'service-nodes'}->{$sid};
-}
-
-sub set_service_node {
- my ($self, $sid, $nodename) = @_;
-
- $self->{'service-nodes'}->{$sid} = [$nodename];
-}
-
-sub add_service_node {
- my ($self, $sid, $nodename) = @_;
-
- push @{ $self->{'service-nodes'}->{$sid} }, $nodename;
-}
-
sub add_service_usage_to_node {
my ($self, $nodename, $sid, $service_node, $migration_target) = @_;
diff --git a/src/PVE/HA/Usage/Static.pm b/src/PVE/HA/Usage/Static.pm
index 6707a54c..061e74a2 100644
--- a/src/PVE/HA/Usage/Static.pm
+++ b/src/PVE/HA/Usage/Static.pm
@@ -22,7 +22,6 @@ sub new {
'service-stats' => {},
haenv => $haenv,
scheduler => $scheduler,
- 'service-nodes' => {},
'service-counts' => {}, # Service count on each node. Fallback if scoring calculation fails.
}, $class;
}
@@ -87,24 +86,6 @@ my sub get_service_usage {
return $service_stats;
}
-sub get_service_nodes {
- my ($self, $sid) = @_;
-
- return $self->{'service-nodes'}->{$sid};
-}
-
-sub set_service_node {
- my ($self, $sid, $nodename) = @_;
-
- $self->{'service-nodes'}->{$sid} = [$nodename];
-}
-
-sub add_service_node {
- my ($self, $sid, $nodename) = @_;
-
- push @{ $self->{'service-nodes'}->{$sid} }, $nodename;
-}
-
sub add_service_usage_to_node {
my ($self, $nodename, $sid, $service_node, $migration_target) = @_;
diff --git a/src/test/test_failover1.pl b/src/test/test_failover1.pl
index 78a001eb..495d4b4b 100755
--- a/src/test/test_failover1.pl
+++ b/src/test/test_failover1.pl
@@ -14,9 +14,10 @@ PVE::HA::Rules::NodeAffinity->register();
PVE::HA::Rules->init(property_isolation => 1);
+my $sid = 'vm:111';
my $rules = PVE::HA::Rules->parse_config("rules.tmp", <<EOD);
node-affinity: prefer_node1
- resources vm:111
+ resources $sid
nodes node1
EOD
@@ -31,10 +32,12 @@ my $service_conf = {
failback => 1,
};
-my $sd = {
- node => $service_conf->{node},
- failed_nodes => undef,
- maintenance_node => undef,
+my $ss = {
+ "$sid" => {
+ node => $service_conf->{node},
+ failed_nodes => undef,
+ maintenance_node => undef,
+ },
};
sub test {
@@ -43,14 +46,14 @@ sub test {
my $select_node_preference = $try_next ? 'try-next' : 'none';
my $node = PVE::HA::Manager::select_service_node(
- $rules, $online_node_usage, "vm:111", $service_conf, $sd, $select_node_preference,
+ $rules, $online_node_usage, "$sid", $service_conf, $ss, $select_node_preference,
);
my (undef, undef, $line) = caller();
die "unexpected result: $node != ${expected_node} at line $line\n"
if $node ne $expected_node;
- $sd->{node} = $node;
+ $ss->{$sid}->{node} = $node;
}
test('node1');
--
2.47.3
* Re: [pve-devel] [PATCH ha-manager v2 4/8] rules: resource affinity: decouple get_resource_affinity helper from Usage class
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 4/8] rules: resource affinity: decouple get_resource_affinity helper from Usage class Daniel Kral
@ 2025-10-21 13:02 ` Fiona Ebner
0 siblings, 0 replies; 20+ messages in thread
From: Fiona Ebner @ 2025-10-21 13:02 UTC (permalink / raw)
To: Proxmox VE development discussion, Daniel Kral
On 20.10.25 at 6:46 PM, Daniel Kral wrote:
> To be enacted correctly, the resource affinity rules need information
> about the nodes used by the other HA resources, which was previously
> proxied through $online_node_usage.
>
> The get_used_service_nodes(...) helper mirrors the logic in
> recompute_online_node_usage(...) to retrieve the nodes a HA resource
> $sid currently puts load on.
>
> Signed-off-by: Daniel Kral <d.kral@proxmox.com>
> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> changes since v1:
> - move get_used_service_nodes(...) helper to PVE::HA::Usage class
> - change get_used_service_nodes(...) signature from
> ($sd, $online_nodes)
> to
> ($online_nodes, $state, $node, $target)
> - change return value of get_used_service_nodes(...) from hash ref to
> two-element array
> - added R-b
If there's this much change, I would prefer the R-b not to be added. The
v2 is quite a different patch from what I saw when giving the R-b. I did
review the v2 now, so you can keep the tag :)
* [pve-devel] [PATCH ha-manager v2 5/8] manager: make recompute_online_node_usage use add_service_usage helper
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
` (7 preceding siblings ...)
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 4/8] rules: resource affinity: decouple get_resource_affinity helper from Usage class Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-21 13:06 ` Fiona Ebner
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 6/8] usage: allow granular changes to Usage implementations Daniel Kral
` (2 subsequent siblings)
11 siblings, 1 reply; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
As the previously introduced get_used_service_nodes(...) helper mirrors
the logic in recompute_online_node_usage(...), introduce a new helper
add_service_usage(...), which adds a service's usage to the nodes
returned by get_used_service_nodes(...).
This helper will also be used by a later patch, which introduces
granular changes to $online_node_usage in change_service_state(...).
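As a hedged illustration (the sid and node names are made up), for a
migrating resource the new helper boils down to:

    # state 'migrate' from node1 to node2, both online:
    $online_node_usage->add_service_usage('vm:100', 'migrate', 'node1', 'node2');

    # ... which has the same effect as the two explicit calls done so far:
    $online_node_usage->add_service_usage_to_node('node1', 'vm:100', 'node1', 'node2');
    $online_node_usage->add_service_usage_to_node('node2', 'vm:100', 'node1', 'node2');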
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes since v1:
- introduce add_service_usage(...) instead of using
get_used_service_nodes(...) and add_service_usage_to_node(...)
directly
- this makes $online_nodes an array again instead of creating a hash
- reverted back the original foreach my ...
- did not add R-b from Fiona as it wasn't discussed
src/PVE/HA/Manager.pm | 40 ++--------------------------------------
src/PVE/HA/Usage.pm | 15 +++++++++++++++
2 files changed, 17 insertions(+), 38 deletions(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index a71de167..bf6895ad 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -279,44 +279,8 @@ sub recompute_online_node_usage {
foreach my $sid (sort keys %{ $self->{ss} }) {
my $sd = $self->{ss}->{$sid};
- my $state = $sd->{state};
- my $target = $sd->{target}; # optional
- if ($online_node_usage->contains_node($sd->{node})) {
- if (
- $state eq 'started'
- || $state eq 'request_stop'
- || $state eq 'fence'
- || $state eq 'freeze'
- || $state eq 'error'
- || $state eq 'recovery'
- ) {
- $online_node_usage->add_service_usage_to_node($sd->{node}, $sid, $sd->{node});
- } elsif (
- $state eq 'migrate'
- || $state eq 'relocate'
- || $state eq 'request_start_balance'
- ) {
- my $source = $sd->{node};
- # count it for both, source and target as load is put on both
- if ($state ne 'request_start_balance') {
- $online_node_usage->add_service_usage_to_node($source, $sid, $source, $target);
- }
- if ($online_node_usage->contains_node($target)) {
- $online_node_usage->add_service_usage_to_node($target, $sid, $source, $target);
- }
- } elsif ($state eq 'stopped' || $state eq 'request_start') {
- # do nothing
- } else {
- die "should not be reached (sid = '$sid', state = '$state')";
- }
- } elsif (defined($target) && $online_node_usage->contains_node($target)) {
- if ($state eq 'migrate' || $state eq 'relocate') {
- # to correctly track maintenance modi and also consider the target as used for the
- # case a node dies, as we cannot really know if the to-be-aborted incoming migration
- # has already cleaned up all used resources
- $online_node_usage->add_service_usage_to_node($target, $sid, $sd->{node}, $target);
- }
- }
+
+ $online_node_usage->add_service_usage($sid, $sd->{state}, $sd->{node}, $sd->{target});
}
$self->{online_node_usage} = $online_node_usage;
diff --git a/src/PVE/HA/Usage.pm b/src/PVE/HA/Usage.pm
index edea2545..e3725c92 100644
--- a/src/PVE/HA/Usage.pm
+++ b/src/PVE/HA/Usage.pm
@@ -40,6 +40,21 @@ sub add_service_usage_to_node {
die "implement in subclass";
}
+# Adds service $sid's usage to the online nodes according to their $state,
+# $service_node and $migration_target.
+sub add_service_usage {
+ my ($self, $sid, $service_state, $service_node, $migration_target) = @_;
+
+ my $online_nodes = { map { $_ => 1 } $self->list_nodes() };
+ my ($current_node, $target_node) =
+ get_used_service_nodes($online_nodes, $service_state, $service_node, $migration_target);
+
+ $self->add_service_usage_to_node($current_node, $sid, $service_node, $migration_target)
+ if $current_node;
+ $self->add_service_usage_to_node($target_node, $sid, $service_node, $migration_target)
+ if $target_node;
+}
+
# Returns a hash with $nodename => $score pairs. A lower $score is better.
sub score_nodes_to_start_service {
my ($self, $sid, $service_node) = @_;
--
2.47.3
* [pve-devel] [PATCH ha-manager v2 6/8] usage: allow granular changes to Usage implementations
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
` (8 preceding siblings ...)
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 5/8] manager: make recompute_online_node_usage use add_service_usage helper Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 7/8] manager: make online node usage computation granular Daniel Kral
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 8/8] implement static service stats cache Daniel Kral
11 siblings, 0 replies; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
This makes use of the new signature of add_service_usage_to_node(...)
from the PVE::RS::ResourceScheduling::Static package, which allows
tracking which HA resources have been assigned to which nodes.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
Needs a build-dependency bump and dependency bump for libpve-rs-perl to
contain the pve-rs patch #1, which changes the signature for
add_service_usage_to_node(...).
changes since v1:
- introduce some temporary variables for more readability in both
score_nodes_to_start_service(...) implementations
- use scalar(...) instead of scalar ... and delete(...) if ... instead
of delete ... if ...
- add R-b
src/PVE/HA/Usage.pm | 6 ++++++
src/PVE/HA/Usage/Basic.pm | 16 +++++++++++++---
src/PVE/HA/Usage/Static.pm | 24 ++++++++++++++++++------
3 files changed, 37 insertions(+), 9 deletions(-)
diff --git a/src/PVE/HA/Usage.pm b/src/PVE/HA/Usage.pm
index e3725c92..92e575cb 100644
--- a/src/PVE/HA/Usage.pm
+++ b/src/PVE/HA/Usage.pm
@@ -55,6 +55,12 @@ sub add_service_usage {
if $target_node;
}
+sub remove_service_usage {
+ my ($self, $sid) = @_;
+
+ die "implement in subclass";
+}
+
# Returns a hash with $nodename => $score pairs. A lower $score is better.
sub score_nodes_to_start_service {
my ($self, $sid, $service_node) = @_;
diff --git a/src/PVE/HA/Usage/Basic.pm b/src/PVE/HA/Usage/Basic.pm
index ead08c54..43817bf6 100644
--- a/src/PVE/HA/Usage/Basic.pm
+++ b/src/PVE/HA/Usage/Basic.pm
@@ -17,7 +17,7 @@ sub new {
sub add_node {
my ($self, $nodename) = @_;
- $self->{nodes}->{$nodename} = 0;
+ $self->{nodes}->{$nodename} = {};
}
sub remove_node {
@@ -42,7 +42,7 @@ sub add_service_usage_to_node {
my ($self, $nodename, $sid, $service_node, $migration_target) = @_;
if ($self->contains_node($nodename)) {
- $self->{nodes}->{$nodename}++;
+ $self->{nodes}->{$nodename}->{$sid} = 1;
} else {
$self->{haenv}->log(
'warning',
@@ -51,10 +51,20 @@ sub add_service_usage_to_node {
}
}
+sub remove_service_usage {
+ my ($self, $sid) = @_;
+
+ for my $node ($self->list_nodes()) {
+ delete $self->{nodes}->{$node}->{$sid};
+ }
+}
+
sub score_nodes_to_start_service {
my ($self, $sid, $service_node) = @_;
- return $self->{nodes};
+ my $nodes = $self->{nodes};
+
+ return { map { $_ => scalar(keys $nodes->{$_}->%*) } keys $nodes->%* };
}
1;
diff --git a/src/PVE/HA/Usage/Static.pm b/src/PVE/HA/Usage/Static.pm
index 061e74a2..d586b603 100644
--- a/src/PVE/HA/Usage/Static.pm
+++ b/src/PVE/HA/Usage/Static.pm
@@ -22,14 +22,14 @@ sub new {
'service-stats' => {},
haenv => $haenv,
scheduler => $scheduler,
- 'service-counts' => {}, # Service count on each node. Fallback if scoring calculation fails.
+ 'node-services' => {}, # Services on each node. Fallback if scoring calculation fails.
}, $class;
}
sub add_node {
my ($self, $nodename) = @_;
- $self->{'service-counts'}->{$nodename} = 0;
+ $self->{'node-services'}->{$nodename} = {};
my $stats = $self->{'node-stats'}->{$nodename}
or die "did not get static node usage information for '$nodename'\n";
@@ -43,7 +43,7 @@ sub add_node {
sub remove_node {
my ($self, $nodename) = @_;
- delete $self->{'service-counts'}->{$nodename};
+ delete $self->{'node-services'}->{$nodename};
$self->{scheduler}->remove_node($nodename);
}
@@ -89,16 +89,27 @@ my sub get_service_usage {
sub add_service_usage_to_node {
my ($self, $nodename, $sid, $service_node, $migration_target) = @_;
- $self->{'service-counts'}->{$nodename}++;
+ $self->{'node-services'}->{$nodename}->{$sid} = 1;
eval {
my $service_usage = get_service_usage($self, $sid, $service_node, $migration_target);
- $self->{scheduler}->add_service_usage_to_node($nodename, $service_usage);
+ $self->{scheduler}->add_service_usage_to_node($nodename, $sid, $service_usage);
};
$self->{haenv}->log('warning', "unable to add service '$sid' usage to node '$nodename' - $@")
if $@;
}
+sub remove_service_usage {
+ my ($self, $sid) = @_;
+
+ delete($self->{'node-services'}->{$_}->{$sid}) for $self->list_nodes();
+
+ eval { $self->{scheduler}->remove_service_usage($sid) };
+ $self->{haenv}->log('warning', "unable to remove service '$sid' usage - $@") if $@;
+
+ delete $self->{'service-stats'}->{$sid}; # Invalidate old service stats
+}
+
sub score_nodes_to_start_service {
my ($self, $sid, $service_node) = @_;
@@ -111,7 +122,8 @@ sub score_nodes_to_start_service {
'err',
"unable to score nodes according to static usage for service '$sid' - $err",
);
- return $self->{'service-counts'};
+ my $node_services = $self->{'node-services'};
+ return { map { $_ => scalar(keys $node_services->{$_}->%*) } keys $node_services->%* };
}
# Take minus the value, so that a lower score is better, which our caller(s) expect(s).
--
2.47.3
* [pve-devel] [PATCH ha-manager v2 7/8] manager: make online node usage computation granular
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
` (9 preceding siblings ...)
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 6/8] usage: allow granular changes to Usage implementations Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-21 13:09 ` Fiona Ebner
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 8/8] implement static service stats cache Daniel Kral
11 siblings, 1 reply; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
The HA Manager builds $online_node_usage in every FSM iteration in
manage(...) and at every HA resource state change in
change_service_state(...). This becomes quite costly with a high HA
resource count and a lot of state changes happening at once, e.g.
starting up multiple nodes with rebalance_on_request_start set or a
failover of a node with many configured HA resources.
To improve this situation, make the changes to the $online_node_usage
more granular by building $online_node_usage only once per call to
manage(...) and changing the nodes a HA resource uses individually on
every HA resource state transition. This allows the HA Manager to handle
many more HA resources with the static load scheduler.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes since v1:
- remove FIXME
- remove argument about cache from patch message
- use add_service_usage(...) helper from $online_node_usage now
- did not add R-b from Fiona as add_service_usage(...) was moved
src/PVE/HA/Manager.pm | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index bf6895ad..3bd6e1a6 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -238,8 +238,6 @@ my $valid_service_states = {
error => 1,
};
-# FIXME with 'static' mode and thousands of services, the overhead can be noticable and the fact
-# that this function is called for each state change and upon recovery doesn't help.
sub recompute_online_node_usage {
my ($self) = @_;
@@ -317,7 +315,9 @@ my $change_service_state = sub {
$sd->{$k} = $v;
}
- $self->recompute_online_node_usage();
+ $self->{online_node_usage}->remove_service_usage($sid);
+ $self->{online_node_usage}
+ ->add_service_usage($sid, $sd->{state}, $sd->{node}, $sd->{target});
$sd->{uid} = compute_new_uuid($new_state);
@@ -709,6 +709,8 @@ sub manage {
delete $ss->{$sid};
}
+ $self->recompute_online_node_usage();
+
my $new_rules = $haenv->read_rules_config();
# TODO PVE 10: Remove group migration when HA groups have been fully migrated to rules
@@ -738,8 +740,6 @@ sub manage {
for (;;) {
my $repeat = 0;
- $self->recompute_online_node_usage();
-
foreach my $sid (sort keys %$ss) {
my $sd = $ss->{$sid};
my $cd = $sc->{$sid} || { state => 'disabled' };
--
2.47.3
* Re: [pve-devel] [PATCH ha-manager v2 7/8] manager: make online node usage computation granular
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 7/8] manager: make online node usage computation granular Daniel Kral
@ 2025-10-21 13:09 ` Fiona Ebner
0 siblings, 0 replies; 20+ messages in thread
From: Fiona Ebner @ 2025-10-21 13:09 UTC (permalink / raw)
To: Proxmox VE development discussion, Daniel Kral
On 20.10.25 at 6:46 PM, Daniel Kral wrote:
> The HA Manager builds $online_node_usage in every FSM iteration in
> manage(...) and at every HA resource state change in
> change_service_state(...). This becomes quite costly with a high HA
> resource count and a lot of state changes happening at once, e.g.
> starting up multiple nodes with rebalance_on_request_start set or a
> failover of a node with many configured HA resources.
>
> To improve this situation, make the changes to the $online_node_usage
> more granular by building $online_node_usage only once per call to
> manage(...) and changing the nodes a HA resource uses individually on
> every HA resource state transition. This allows the HA Manager to handle
> many more HA resources with the static load scheduler.
>
> Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
* [pve-devel] [PATCH ha-manager v2 8/8] implement static service stats cache
2025-10-20 16:45 [pve-devel] [PATCH ha-manager/perl-rs/proxmox/qemu-server v2 00/12] Granular online_node_usage accounting Daniel Kral
` (10 preceding siblings ...)
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 7/8] manager: make online node usage computation granular Daniel Kral
@ 2025-10-20 16:45 ` Daniel Kral
2025-10-21 13:23 ` Fiona Ebner
11 siblings, 1 reply; 20+ messages in thread
From: Daniel Kral @ 2025-10-20 16:45 UTC (permalink / raw)
To: pve-devel
When the HA Manager builds the static load scheduler, it queries the
services' static usage by reading and parsing the guest configs
individually, which takes significantly longer the more HA resources are
in an actively managed state.
PVE::Cluster exposes an efficient interface to gather a set of
properties from one or all guest configs [0]. This is used here to build
a rather short-lived cache on every (re)initialization of the static
load scheduler, avoiding having to parse each guest config individually.
[0] pve-cluster cf1b19d (add get_guest_config_property IPCC method)
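For context, a hedged sketch of the underlying call (the property list is
taken from the patch below; the example VMID and values are made up):

    my $props = PVE::Cluster::get_guest_config_properties(
        ['cores', 'cpulimit', 'memory', 'sockets', 'vcpus']);

    # e.g. $props->{100} might be { cores => 4, memory => '8192' } for a
    # guest that sets only those two properties; guests setting none of them
    # are missing entirely, which is why get_vmlist(...) is used to fill in
    # empty hashes for them.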
Suggested-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes since v1:
- populate static service cache with entries from
PVE::Cluster::get_vmlist(...) to make a better distinction between
"not cached" and "not specified in guest config"
- improve interface to cache (remove {} fallback return value)
Should we add another cfs_update(...) for the get_vmlist(...) to be sure
that vmlist contains the newest value?
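To illustrate the "not cached" vs. "not specified" distinction mentioned in
the notes above, the cache could end up looking roughly like this; the exact
shape returned by PVE::Cluster::get_guest_config_properties(...) and
get_vmlist(...) is assumed here for illustration only:
    use strict;
    use warnings;
    # hypothetical cache contents after update_static_service_stats(...)
    my $static_service_stats = {
        100 => { cores => 4, memory => 8192 }, # queried properties set in config
        101 => {},                             # listed in vmlist, but none of the
                                               # queried properties are set
    };
    # lookup semantics of get_static_service_stats(...):
    #   defined hashref -> cached, use it directly
    #   undef           -> not cached (or the last update failed), fall back to
    #                      loading the guest config from disk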
src/PVE/HA/Env.pm | 12 ++++++++++++
src/PVE/HA/Env/PVE2.pm | 35 +++++++++++++++++++++++++++++++++++
src/PVE/HA/Manager.pm | 1 +
src/PVE/HA/Resources/PVECT.pm | 3 ++-
src/PVE/HA/Resources/PVEVM.pm | 3 ++-
src/PVE/HA/Sim/Env.pm | 12 ++++++++++++
src/PVE/HA/Sim/Hardware.pm | 31 +++++++++++++++++++++----------
src/PVE/HA/Sim/Resources.pm | 3 +--
8 files changed, 86 insertions(+), 14 deletions(-)
diff --git a/src/PVE/HA/Env.pm b/src/PVE/HA/Env.pm
index e00272a0..4282d33f 100644
--- a/src/PVE/HA/Env.pm
+++ b/src/PVE/HA/Env.pm
@@ -300,6 +300,18 @@ sub get_datacenter_settings {
return $self->{plug}->get_datacenter_settings();
}
+sub get_static_service_stats {
+ my ($self, $id) = @_;
+
+ return $self->{plug}->get_static_service_stats($id);
+}
+
+sub update_static_service_stats {
+ my ($self) = @_;
+
+ return $self->{plug}->update_static_service_stats();
+}
+
sub get_static_node_stats {
my ($self) = @_;
diff --git a/src/PVE/HA/Env/PVE2.pm b/src/PVE/HA/Env/PVE2.pm
index 2cec6f25..83ab88ab 100644
--- a/src/PVE/HA/Env/PVE2.pm
+++ b/src/PVE/HA/Env/PVE2.pm
@@ -49,6 +49,8 @@ sub new {
$self->{nodename} = $nodename;
+ $self->{static_service_stats} = undef;
+
return $self;
}
@@ -502,6 +504,39 @@ sub get_datacenter_settings {
};
}
+sub get_static_service_stats {
+ my ($self, $id) = @_;
+
+ # undef if update_static_service_stats(...) failed before
+ return undef if !defined($self->{static_service_stats});
+
+ return $self->{static_service_stats}->{$id};
+}
+
+sub update_static_service_stats {
+ my ($self) = @_;
+
+ my $properties = ['cores', 'cpulimit', 'memory', 'sockets', 'vcpus'];
+ my $service_stats = eval {
+ my $stats = PVE::Cluster::get_guest_config_properties($properties);
+
+ # get_guest_config_properties(...) doesn't add guests which do not
+ # specify any of the given properties, but we need to make a distinction
+ # between "not cached" and "not specified" here
+ my $vmlist = PVE::Cluster::get_vmlist();
+ for my $id (keys %$vmlist) {
+ next if defined($stats->{$id});
+
+ $stats->{$id} = {};
+ }
+
+ return $stats;
+ };
+ $self->log('warning', "unable to update static service stats cache - $@") if $@;
+
+ $self->{static_service_stats} = $service_stats;
+}
+
sub get_static_node_stats {
my ($self) = @_;
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 3bd6e1a6..83167075 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -253,6 +253,7 @@ sub recompute_online_node_usage {
$online_node_usage = eval {
my $scheduler = PVE::HA::Usage::Static->new($haenv);
$scheduler->add_node($_) for $online_nodes->@*;
+ $haenv->update_static_service_stats();
return $scheduler;
};
} else {
diff --git a/src/PVE/HA/Resources/PVECT.pm b/src/PVE/HA/Resources/PVECT.pm
index 44644d92..091249d7 100644
--- a/src/PVE/HA/Resources/PVECT.pm
+++ b/src/PVE/HA/Resources/PVECT.pm
@@ -156,7 +156,8 @@ sub remove_locks {
sub get_static_stats {
my ($class, $haenv, $id, $service_node) = @_;
- my $conf = PVE::LXC::Config->load_config($id, $service_node);
+ my $conf = $haenv->get_static_service_stats($id);
+ $conf = PVE::LXC::Config->load_config($id, $service_node) if !defined($conf);
return {
maxcpu => PVE::LXC::Config->get_derived_property($conf, 'max-cpu'),
diff --git a/src/PVE/HA/Resources/PVEVM.pm b/src/PVE/HA/Resources/PVEVM.pm
index e634fe3c..d1bc3329 100644
--- a/src/PVE/HA/Resources/PVEVM.pm
+++ b/src/PVE/HA/Resources/PVEVM.pm
@@ -177,7 +177,8 @@ sub remove_locks {
sub get_static_stats {
my ($class, $haenv, $id, $service_node) = @_;
- my $conf = PVE::QemuConfig->load_config($id, $service_node);
+ my $conf = $haenv->get_static_service_stats($id);
+ $conf = PVE::QemuConfig->load_config($id, $service_node) if !defined($conf);
return {
maxcpu => PVE::QemuConfig->get_derived_property($conf, 'max-cpu'),
diff --git a/src/PVE/HA/Sim/Env.pm b/src/PVE/HA/Sim/Env.pm
index 684e92f8..1d70026e 100644
--- a/src/PVE/HA/Sim/Env.pm
+++ b/src/PVE/HA/Sim/Env.pm
@@ -488,6 +488,18 @@ sub get_datacenter_settings {
};
}
+sub get_static_service_stats {
+ my ($self, $id) = @_;
+
+ return $self->{hardware}->get_static_service_stats($id);
+}
+
+sub update_static_service_stats {
+ my ($self) = @_;
+
+ return $self->{hardware}->update_static_service_stats();
+}
+
sub get_static_node_stats {
my ($self) = @_;
diff --git a/src/PVE/HA/Sim/Hardware.pm b/src/PVE/HA/Sim/Hardware.pm
index 9e8c7995..fffc90e7 100644
--- a/src/PVE/HA/Sim/Hardware.pm
+++ b/src/PVE/HA/Sim/Hardware.pm
@@ -387,16 +387,6 @@ sub write_service_status {
return $res;
}
-sub read_static_service_stats {
- my ($self) = @_;
-
- my $filename = "$self->{statusdir}/static_service_stats";
- my $stats = eval { PVE::HA::Tools::read_json_from_file($filename) };
- $self->log('error', "loading static service stats failed - $@") if $@;
-
- return $stats;
-}
-
sub new {
my ($this, $testdir) = @_;
@@ -477,6 +467,8 @@ sub new {
$self->{service_config} = $self->read_service_config();
+ $self->{static_service_stats} = undef;
+
return $self;
}
@@ -943,6 +935,25 @@ sub watchdog_update {
return &$modify_watchog($self, $code);
}
+sub get_static_service_stats {
+ my ($self, $id) = @_;
+
+ # undef if update_static_service_stats(...) failed before
+ return undef if !defined($self->{static_service_stats});
+
+ return $self->{static_service_stats}->{$id};
+}
+
+sub update_static_service_stats {
+ my ($self) = @_;
+
+ my $filename = "$self->{statusdir}/static_service_stats";
+ my $stats = eval { PVE::HA::Tools::read_json_from_file($filename) };
+ $self->log('warning', "unable to update static service stats cache - $@") if $@;
+
+ $self->{static_service_stats} = $stats;
+}
+
sub get_static_node_stats {
my ($self) = @_;
diff --git a/src/PVE/HA/Sim/Resources.pm b/src/PVE/HA/Sim/Resources.pm
index 72623ee1..ed43373e 100644
--- a/src/PVE/HA/Sim/Resources.pm
+++ b/src/PVE/HA/Sim/Resources.pm
@@ -143,8 +143,7 @@ sub get_static_stats {
my $sid = $class->type() . ":$id";
my $hardware = $haenv->hardware();
- my $stats = $hardware->read_static_service_stats();
- if (my $service_stats = $stats->{$sid}) {
+ if (my $service_stats = $hardware->get_static_service_stats($sid)) {
return $service_stats;
} elsif ($id =~ /^(\d)(\d\d)/) {
# auto assign usage calculated from ID for convenience
--
2.47.3
* Re: [pve-devel] [PATCH ha-manager v2 8/8] implement static service stats cache
2025-10-20 16:45 ` [pve-devel] [PATCH ha-manager v2 8/8] implement static service stats cache Daniel Kral
@ 2025-10-21 13:23 ` Fiona Ebner
0 siblings, 0 replies; 20+ messages in thread
From: Fiona Ebner @ 2025-10-21 13:23 UTC (permalink / raw)
To: Proxmox VE development discussion, Daniel Kral
On 20.10.25 at 6:47 PM, Daniel Kral wrote:
> As the HA Manager builds the static load scheduler, it queries the
> services' static usage by reading and parsing the guest configs
> individually, which can take significantly longer the more HA resources
> are in an actively managed state.
>
> PVE::Cluster exposes an efficient interface to gather a set of
> properties from one or all guest configs [0]. This is used here to build
> a rather short-lived cache on every (re)initialization of the static
> load scheduler to avoid parsing guest configs individually.
>
> [0] pve-cluster cf1b19d (add get_guest_config_property IPCC method)
>
> Suggested-by: Fiona Ebner <f.ebner@proxmox.com>
> Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
And with that, the whole series is :)