From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [RFC ha-manager 01/21] rename static node stats to be consistent with similar interfaces
Date: Tue, 17 Feb 2026 15:14:08 +0100
Message-ID: <20260217141437.584852-15-d.kral@proxmox.com>
In-Reply-To: <20260217141437.584852-1-d.kral@proxmox.com>
The names `maxcpu` and `maxmem` are already used in the static load
scheduler itself, and they convey more clearly that these properties
provide the maximum configured number of CPU cores and amount of memory.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
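As an illustration (not part of the patch), the renamed per-node stats hash has the following shape; the concrete values here are just the simulator defaults from Sim/Hardware.pm:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of one node's static stats entry after the rename.
# Previously the keys were `cpus` and `memory`; they are now `maxcpu`
# and `maxmem`, matching the naming used by the static load scheduler.
my $node_stats = {
    maxcpu => 24,       # maximum configured CPU cores
    maxmem => 131072,   # maximum configured memory
};

printf "maxcpu=%d maxmem=%d\n", $node_stats->{maxcpu}, $node_stats->{maxmem};
```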
src/PVE/HA/Env/PVE2.pm | 9 ++++++++-
src/PVE/HA/Sim/Hardware.pm | 8 ++++----
src/PVE/HA/Usage/Static.pm | 6 +++---
.../hardware_status | 6 +++---
.../hardware_status | 6 +++---
.../hardware_status | 10 +++++-----
src/test/test-crs-static-rebalance1/hardware_status | 6 +++---
src/test/test-crs-static-rebalance2/hardware_status | 6 +++---
src/test/test-crs-static1/hardware_status | 6 +++---
src/test/test-crs-static2/hardware_status | 10 +++++-----
src/test/test-crs-static3/hardware_status | 6 +++---
src/test/test-crs-static4/hardware_status | 6 +++---
src/test/test-crs-static5/hardware_status | 6 +++---
13 files changed, 49 insertions(+), 42 deletions(-)
diff --git a/src/PVE/HA/Env/PVE2.pm b/src/PVE/HA/Env/PVE2.pm
index 37720f72..ee4fa23d 100644
--- a/src/PVE/HA/Env/PVE2.pm
+++ b/src/PVE/HA/Env/PVE2.pm
@@ -543,7 +543,14 @@ sub get_static_node_stats {
my $stats = PVE::Cluster::get_node_kv('static-info');
for my $node (keys $stats->%*) {
- $stats->{$node} = eval { decode_json($stats->{$node}) };
+ $stats->{$node} = eval {
+ my $node_stats = decode_json($stats->{$node});
+
+ return {
+ maxcpu => $node_stats->{cpus},
+ maxmem => $node_stats->{memory},
+ };
+ };
$self->log('err', "unable to decode static node info for '$node' - $@") if $@;
}
diff --git a/src/PVE/HA/Sim/Hardware.pm b/src/PVE/HA/Sim/Hardware.pm
index 97ada580..702500c2 100644
--- a/src/PVE/HA/Sim/Hardware.pm
+++ b/src/PVE/HA/Sim/Hardware.pm
@@ -488,9 +488,9 @@ sub new {
|| die "Copy failed: $!\n";
} else {
my $cstatus = {
- node1 => { power => 'off', network => 'off', cpus => 24, memory => 131072 },
- node2 => { power => 'off', network => 'off', cpus => 24, memory => 131072 },
- node3 => { power => 'off', network => 'off', cpus => 24, memory => 131072 },
+ node1 => { power => 'off', network => 'off', maxcpu => 24, maxmem => 131072 },
+ node2 => { power => 'off', network => 'off', maxcpu => 24, maxmem => 131072 },
+ node3 => { power => 'off', network => 'off', maxcpu => 24, maxmem => 131072 },
};
$self->write_hardware_status_nolock($cstatus);
}
@@ -1088,7 +1088,7 @@ sub get_static_node_stats {
my $stats = {};
for my $node (keys $cstatus->%*) {
- $stats->{$node} = { $cstatus->{$node}->%{qw(cpus memory)} };
+ $stats->{$node} = { $cstatus->{$node}->%{qw(maxcpu maxmem)} };
}
return $stats;
diff --git a/src/PVE/HA/Usage/Static.pm b/src/PVE/HA/Usage/Static.pm
index d586b603..395be871 100644
--- a/src/PVE/HA/Usage/Static.pm
+++ b/src/PVE/HA/Usage/Static.pm
@@ -33,10 +33,10 @@ sub add_node {
my $stats = $self->{'node-stats'}->{$nodename}
or die "did not get static node usage information for '$nodename'\n";
- die "static node usage information for '$nodename' missing cpu count\n" if !$stats->{cpus};
- die "static node usage information for '$nodename' missing memory\n" if !$stats->{memory};
+ die "static node usage information for '$nodename' missing cpu count\n" if !$stats->{maxcpu};
+ die "static node usage information for '$nodename' missing memory\n" if !$stats->{maxmem};
- eval { $self->{scheduler}->add_node($nodename, int($stats->{cpus}), int($stats->{memory})); };
+ eval { $self->{scheduler}->add_node($nodename, int($stats->{maxcpu}), int($stats->{maxmem})); };
die "initializing static node usage for '$nodename' failed - $@" if $@;
}
diff --git a/src/test/test-crs-static-rebalance-resource-affinity1/hardware_status b/src/test/test-crs-static-rebalance-resource-affinity1/hardware_status
index 84484af1..3d4cf91f 100644
--- a/src/test/test-crs-static-rebalance-resource-affinity1/hardware_status
+++ b/src/test/test-crs-static-rebalance-resource-affinity1/hardware_status
@@ -1,5 +1,5 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 8, "maxmem": 112000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 8, "maxmem": 112000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 8, "maxmem": 112000000000 }
}
diff --git a/src/test/test-crs-static-rebalance-resource-affinity2/hardware_status b/src/test/test-crs-static-rebalance-resource-affinity2/hardware_status
index 84484af1..3d4cf91f 100644
--- a/src/test/test-crs-static-rebalance-resource-affinity2/hardware_status
+++ b/src/test/test-crs-static-rebalance-resource-affinity2/hardware_status
@@ -1,5 +1,5 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 8, "memory": 112000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 8, "maxmem": 112000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 8, "maxmem": 112000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 8, "maxmem": 112000000000 }
}
diff --git a/src/test/test-crs-static-rebalance-resource-affinity3/hardware_status b/src/test/test-crs-static-rebalance-resource-affinity3/hardware_status
index b6dcb1a5..7bc741f1 100644
--- a/src/test/test-crs-static-rebalance-resource-affinity3/hardware_status
+++ b/src/test/test-crs-static-rebalance-resource-affinity3/hardware_status
@@ -1,7 +1,7 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 8, "memory": 48000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 32, "memory": 36000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 16, "memory": 24000000000 },
- "node4": { "power": "off", "network": "off", "cpus": 32, "memory": 36000000000 },
- "node5": { "power": "off", "network": "off", "cpus": 8, "memory": 48000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 8, "maxmem": 48000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 36000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 16, "maxmem": 24000000000 },
+ "node4": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 36000000000 },
+ "node5": { "power": "off", "network": "off", "maxcpu": 8, "maxmem": 48000000000 }
}
diff --git a/src/test/test-crs-static-rebalance1/hardware_status b/src/test/test-crs-static-rebalance1/hardware_status
index 651ad792..bfdbbf7b 100644
--- a/src/test/test-crs-static-rebalance1/hardware_status
+++ b/src/test/test-crs-static-rebalance1/hardware_status
@@ -1,5 +1,5 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 32, "memory": 256000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 32, "memory": 256000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 32, "memory": 256000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 256000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 256000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 256000000000 }
}
diff --git a/src/test/test-crs-static-rebalance2/hardware_status b/src/test/test-crs-static-rebalance2/hardware_status
index 9be70a40..c5cbde3d 100644
--- a/src/test/test-crs-static-rebalance2/hardware_status
+++ b/src/test/test-crs-static-rebalance2/hardware_status
@@ -1,5 +1,5 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 40, "memory": 384000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 32, "memory": 256000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 32, "memory": 256000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 40, "maxmem": 384000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 256000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 256000000000 }
}
diff --git a/src/test/test-crs-static1/hardware_status b/src/test/test-crs-static1/hardware_status
index 0fa8c265..bbe44a96 100644
--- a/src/test/test-crs-static1/hardware_status
+++ b/src/test/test-crs-static1/hardware_status
@@ -1,5 +1,5 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 32, "memory": 200000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 32, "memory": 300000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 200000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 300000000000 }
}
diff --git a/src/test/test-crs-static2/hardware_status b/src/test/test-crs-static2/hardware_status
index d426023a..815436ef 100644
--- a/src/test/test-crs-static2/hardware_status
+++ b/src/test/test-crs-static2/hardware_status
@@ -1,7 +1,7 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 32, "memory": 200000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 32, "memory": 300000000000 },
- "node4": { "power": "off", "network": "off", "cpus": 64, "memory": 300000000000 },
- "node5": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 200000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 300000000000 },
+ "node4": { "power": "off", "network": "off", "maxcpu": 64, "maxmem": 300000000000 },
+ "node5": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 }
}
diff --git a/src/test/test-crs-static3/hardware_status b/src/test/test-crs-static3/hardware_status
index dfbf496e..ed84b8bd 100644
--- a/src/test/test-crs-static3/hardware_status
+++ b/src/test/test-crs-static3/hardware_status
@@ -1,5 +1,5 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 64, "memory": 200000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 64, "maxmem": 200000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 }
}
diff --git a/src/test/test-crs-static4/hardware_status b/src/test/test-crs-static4/hardware_status
index a83a2dcc..b08ba7f9 100644
--- a/src/test/test-crs-static4/hardware_status
+++ b/src/test/test-crs-static4/hardware_status
@@ -1,5 +1,5 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 }
}
diff --git a/src/test/test-crs-static5/hardware_status b/src/test/test-crs-static5/hardware_status
index 3eb9e735..edfd6db2 100644
--- a/src/test/test-crs-static5/hardware_status
+++ b/src/test/test-crs-static5/hardware_status
@@ -1,5 +1,5 @@
{
- "node1": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 },
- "node2": { "power": "off", "network": "off", "cpus": 32, "memory": 100000000000 },
- "node3": { "power": "off", "network": "off", "cpus": 128, "memory": 100000000000 }
+ "node1": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 },
+ "node2": { "power": "off", "network": "off", "maxcpu": 32, "maxmem": 100000000000 },
+ "node3": { "power": "off", "network": "off", "maxcpu": 128, "maxmem": 100000000000 }
}
--
2.47.3