From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH ha-manager 1/9] implement static service stats cache
Date: Tue, 30 Sep 2025 16:19:11 +0200
Message-ID: <20250930142021.366529-5-d.kral@proxmox.com>
In-Reply-To: <20250930142021.366529-1-d.kral@proxmox.com>
When the HA Manager builds the static load scheduler, it queries each
service's static usage by reading and parsing the guest configs
individually, which can take a significant amount of time depending on
how often recompute_online_node_usage(...) is called.
PVE::Cluster exposes an efficient interface to gather a set of
properties from one or all guest configs [0]. This is used here to
build a short-lived cache, which is rebuilt on every (re)initialization
of the static load scheduler.
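
Roughly, the cache this interface allows us to build looks as follows
(illustrative sketch only; property names as used further below, the
returned structure is keyed by VMID):

    my $properties = ['cores', 'cpulimit', 'memory', 'sockets', 'vcpus'];
    my $stats = PVE::Cluster::get_guest_config_properties($properties);
    # e.g. $stats = { '100' => { cores => 2, memory => 2048 }, '101' => { ... } }
    my $service_stats = $stats->{$vmid} // {};   # per-service lookup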
The downside of this approach is that if there are far more non-HA-managed
guests in the cluster than HA-managed guests, much more information is
loaded than necessary. It also does not cache the default values for the
environment-specific static service usage, which remains a noticeable
bottleneck as well.
[0] pve-cluster cf1b19d (add get_guest_config_property IPCC method)
Suggested-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
If the above-mentioned downside turns out to be too large for some setups,
we could also add a switch to enable/disable the static stats cache, or
automatically determine whether it brings any benefit. But I think this
will be good enough, especially with the later patches making far fewer
calls to get_service_usage(...).
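
For reference, the intended call flow is roughly the following
(simplified sketch, error handling omitted; see the hunks below for the
actual code):

    # on (re)initialization of the static load scheduler
    $haenv->update_static_service_stats();

    # per service, falling back to loading the config if the cache update failed
    my $conf = $haenv->get_static_service_stats($vmid);
    $conf = PVE::QemuConfig->load_config($vmid, $service_node) if !defined($conf);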
src/PVE/HA/Env.pm | 12 ++++++++++++
src/PVE/HA/Env/PVE2.pm | 21 +++++++++++++++++++++
src/PVE/HA/Manager.pm | 1 +
src/PVE/HA/Resources/PVECT.pm | 3 ++-
src/PVE/HA/Resources/PVEVM.pm | 3 ++-
src/PVE/HA/Sim/Env.pm | 12 ++++++++++++
src/PVE/HA/Sim/Hardware.pm | 31 +++++++++++++++++++++----------
src/PVE/HA/Sim/Resources.pm | 4 ++--
8 files changed, 73 insertions(+), 14 deletions(-)
diff --git a/src/PVE/HA/Env.pm b/src/PVE/HA/Env.pm
index e00272a0..4282d33f 100644
--- a/src/PVE/HA/Env.pm
+++ b/src/PVE/HA/Env.pm
@@ -300,6 +300,18 @@ sub get_datacenter_settings {
return $self->{plug}->get_datacenter_settings();
}
+sub get_static_service_stats {
+ my ($self, $id) = @_;
+
+ return $self->{plug}->get_static_service_stats($id);
+}
+
+sub update_static_service_stats {
+ my ($self) = @_;
+
+ return $self->{plug}->update_static_service_stats();
+}
+
sub get_static_node_stats {
my ($self) = @_;
diff --git a/src/PVE/HA/Env/PVE2.pm b/src/PVE/HA/Env/PVE2.pm
index e76e86b8..e1d752f7 100644
--- a/src/PVE/HA/Env/PVE2.pm
+++ b/src/PVE/HA/Env/PVE2.pm
@@ -49,6 +49,8 @@ sub new {
$self->{nodename} = $nodename;
+ $self->{static_service_stats} = undef;
+
return $self;
}
@@ -497,6 +499,25 @@ sub get_datacenter_settings {
};
}
+sub get_static_service_stats {
+ my ($self, $id) = @_;
+
+ # undef if update_static_service_stats(...) failed before
+ return undef if !defined($self->{static_service_stats});
+
+ return $self->{static_service_stats}->{$id} // {};
+}
+
+sub update_static_service_stats {
+ my ($self) = @_;
+
+ my $properties = ['cores', 'cpulimit', 'memory', 'sockets', 'vcpus'];
+ my $stats = eval { PVE::Cluster::get_guest_config_properties($properties) };
+ $self->log('warning', "unable to update static service stats cache - $@") if $@;
+
+ $self->{static_service_stats} = $stats;
+}
+
sub get_static_node_stats {
my ($self) = @_;
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index ba59f642..3f81f233 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -247,6 +247,7 @@ sub recompute_online_node_usage {
$online_node_usage = eval {
my $scheduler = PVE::HA::Usage::Static->new($haenv);
$scheduler->add_node($_) for $online_nodes->@*;
+ $haenv->update_static_service_stats();
return $scheduler;
};
$haenv->log(
diff --git a/src/PVE/HA/Resources/PVECT.pm b/src/PVE/HA/Resources/PVECT.pm
index 44644d92..091249d7 100644
--- a/src/PVE/HA/Resources/PVECT.pm
+++ b/src/PVE/HA/Resources/PVECT.pm
@@ -156,7 +156,8 @@ sub remove_locks {
sub get_static_stats {
my ($class, $haenv, $id, $service_node) = @_;
- my $conf = PVE::LXC::Config->load_config($id, $service_node);
+ my $conf = $haenv->get_static_service_stats($id);
+ $conf = PVE::LXC::Config->load_config($id, $service_node) if !defined($conf);
return {
maxcpu => PVE::LXC::Config->get_derived_property($conf, 'max-cpu'),
diff --git a/src/PVE/HA/Resources/PVEVM.pm b/src/PVE/HA/Resources/PVEVM.pm
index e634fe3c..d1bc3329 100644
--- a/src/PVE/HA/Resources/PVEVM.pm
+++ b/src/PVE/HA/Resources/PVEVM.pm
@@ -177,7 +177,8 @@ sub remove_locks {
sub get_static_stats {
my ($class, $haenv, $id, $service_node) = @_;
- my $conf = PVE::QemuConfig->load_config($id, $service_node);
+ my $conf = $haenv->get_static_service_stats($id);
+ $conf = PVE::QemuConfig->load_config($id, $service_node) if !defined($conf);
return {
maxcpu => PVE::QemuConfig->get_derived_property($conf, 'max-cpu'),
diff --git a/src/PVE/HA/Sim/Env.pm b/src/PVE/HA/Sim/Env.pm
index 684e92f8..1d70026e 100644
--- a/src/PVE/HA/Sim/Env.pm
+++ b/src/PVE/HA/Sim/Env.pm
@@ -488,6 +488,18 @@ sub get_datacenter_settings {
};
}
+sub get_static_service_stats {
+ my ($self, $id) = @_;
+
+ return $self->{hardware}->get_static_service_stats($id);
+}
+
+sub update_static_service_stats {
+ my ($self) = @_;
+
+ return $self->{hardware}->update_static_service_stats();
+}
+
sub get_static_node_stats {
my ($self) = @_;
diff --git a/src/PVE/HA/Sim/Hardware.pm b/src/PVE/HA/Sim/Hardware.pm
index 9e8c7995..fae27b2a 100644
--- a/src/PVE/HA/Sim/Hardware.pm
+++ b/src/PVE/HA/Sim/Hardware.pm
@@ -387,16 +387,6 @@ sub write_service_status {
return $res;
}
-sub read_static_service_stats {
- my ($self) = @_;
-
- my $filename = "$self->{statusdir}/static_service_stats";
- my $stats = eval { PVE::HA::Tools::read_json_from_file($filename) };
- $self->log('error', "loading static service stats failed - $@") if $@;
-
- return $stats;
-}
-
sub new {
my ($this, $testdir) = @_;
@@ -477,6 +467,8 @@ sub new {
$self->{service_config} = $self->read_service_config();
+ $self->{static_service_stats} = undef;
+
return $self;
}
@@ -943,6 +935,25 @@ sub watchdog_update {
return &$modify_watchog($self, $code);
}
+sub get_static_service_stats {
+ my ($self, $id) = @_;
+
+ # undef if update_static_service_stats(...) failed before
+ return undef if !defined($self->{static_service_stats});
+
+ return $self->{static_service_stats}->{$id} // {};
+}
+
+sub update_static_service_stats {
+ my ($self) = @_;
+
+ my $filename = "$self->{statusdir}/static_service_stats";
+ my $stats = eval { PVE::HA::Tools::read_json_from_file($filename) };
+ $self->log('warning', "unable to update static service stats cache - $@") if $@;
+
+ $self->{static_service_stats} = $stats;
+}
+
sub get_static_node_stats {
my ($self) = @_;
diff --git a/src/PVE/HA/Sim/Resources.pm b/src/PVE/HA/Sim/Resources.pm
index 72623ee1..7641b1a9 100644
--- a/src/PVE/HA/Sim/Resources.pm
+++ b/src/PVE/HA/Sim/Resources.pm
@@ -143,8 +143,8 @@ sub get_static_stats {
my $sid = $class->type() . ":$id";
my $hardware = $haenv->hardware();
- my $stats = $hardware->read_static_service_stats();
- if (my $service_stats = $stats->{$sid}) {
+ my $service_stats = $hardware->get_static_service_stats($sid);
+ if (%$service_stats) {
return $service_stats;
} elsif ($id =~ /^(\d)(\d\d)/) {
# auto assign usage calculated from ID for convenience
--
2.47.3