From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH proxmox v3 05/40] resource-scheduling: implement generic cluster usage implementation
Date: Mon, 30 Mar 2026 16:30:14 +0200
Message-ID: <20260330144101.668747-6-d.kral@proxmox.com>
In-Reply-To: <20260330144101.668747-1-d.kral@proxmox.com>
This is a more generic version of the `Usage` implementation from the
pve_static bindings in the pve_rs repository.
As the upcoming load-balancing scheduler actions and the dynamic
resource scheduler will need more information about each resource, this
further improves the state tracking of each resource:
In this implementation, a resource is composed of its usage statistics
and its two essential states: the running state and the node placement.
The non_exhaustive attribute ensures that users must construct a
Resource instance through its API.
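To illustrate the construction pattern, here is a minimal,
self-contained sketch with simplified stand-in types (not the actual
crate API; the real Resource keeps its fields private and also takes a
state and placement):

```rust
// Simplified stand-in mirroring the pattern: a #[non_exhaustive]
// struct can only be constructed through its constructor from outside
// the defining crate, never with a struct literal.
mod resource {
    #[derive(Debug, Clone, Copy, Default)]
    pub struct ResourceStats {
        pub cpu: f64,
    }

    #[derive(Debug)]
    #[non_exhaustive]
    pub struct Resource {
        pub stats: ResourceStats,
    }

    impl Resource {
        pub fn new(stats: ResourceStats) -> Self {
            Self { stats }
        }
    }
}

fn main() {
    use resource::{Resource, ResourceStats};

    // Downstream crates cannot write
    //     Resource { stats: ResourceStats::default() }
    // ("cannot create non-exhaustive struct using struct expression");
    // they have to go through the constructor instead:
    let r = Resource::new(ResourceStats { cpu: 0.5 });
    println!("cpu usage: {}", r.stats.cpu);
}
```

Note that the struct-literal restriction only applies across crate
boundaries; within the defining crate the literal still compiles.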
Users can repeatedly use the current state of Usage to make scheduling
decisions with the to_scheduler() method. This method takes an
implementation of UsageAggregator, which dictates how the usage
information is represented to the Scheduler.
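A rough sketch of the intended flow (self-contained stand-in types and
a hypothetical CpuOnly aggregator purely for illustration; the real
Usage, NodeUsage and Scheduler carry more data than shown here):

```rust
use std::collections::HashMap;

// Stand-in types; the crate's actual structs hold richer statistics.
#[derive(Clone, Copy, Default)]
struct NodeStats {
    cpu: f64,
}

#[derive(Default)]
struct Usage {
    nodes: HashMap<String, NodeStats>,
}

struct NodeUsage {
    name: String,
    score: f64,
}

struct Scheduler {
    nodes: Vec<NodeUsage>,
}

impl Scheduler {
    fn from_nodes(nodes: Vec<NodeUsage>) -> Self {
        Self { nodes }
    }

    // Pick the node with the lowest usage score.
    fn best_node(&self) -> Option<&str> {
        self.nodes
            .iter()
            .min_by(|a, b| a.score.total_cmp(&b.score))
            .map(|n| n.name.as_str())
    }
}

/// Dictates how the cluster usage is represented to the scheduler.
trait UsageAggregator {
    fn aggregate(usage: &Usage) -> Vec<NodeUsage>;
}

/// Hypothetical aggregator that only considers CPU usage.
struct CpuOnly;

impl UsageAggregator for CpuOnly {
    fn aggregate(usage: &Usage) -> Vec<NodeUsage> {
        usage
            .nodes
            .iter()
            .map(|(name, stats)| NodeUsage {
                name: name.clone(),
                score: stats.cpu,
            })
            .collect()
    }
}

impl Usage {
    // The current usage state can be reused for repeated scheduling
    // decisions; the aggregator decides what the scheduler sees.
    fn to_scheduler<F: UsageAggregator>(&self) -> Scheduler {
        Scheduler::from_nodes(F::aggregate(self))
    }
}

fn main() {
    let mut usage = Usage::default();
    usage.nodes.insert("node1".into(), NodeStats { cpu: 0.8 });
    usage.nodes.insert("node2".into(), NodeStats { cpu: 0.2 });

    let scheduler = usage.to_scheduler::<CpuOnly>();
    assert_eq!(scheduler.best_node(), Some("node2"));
}
```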
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- inline bail! formatting variables
- s/to_string/to_owned/ where reasonable
- make Node::resources_iter(&self) return &str Iterator impl
- drop add_resource_to_nodes() and remove_resource_from_nodes()
- drop ResourcePlacement::nodenames() and Resource::nodenames()
- drop Resource::moving_to()
- fix behavior of add_resource_usage_to_node() for already added
  resources: if the next nodename did not exist, the resource would
  still be put into the Moving state, but would not be added to the
  nodes; this is now fixed by improving the handling
- inline the behavior of add_resource() to show more concisely how
  both placement strategies are handled
- no change in Resource::remove_node() documentation as I did not find a
better description in the meantime, but as it's internal it can be
improved later on as well
test changes v2 -> v3:
- use assertions whether nodes were added correctly in test cases
- use assertions whether resources were added correctly in test cases
- additionally assert that a resource cannot be added to a non-existing
  node with add_resource_usage_to_node() and that the state of the
  Resource is not altered in the meantime, as it was in v2
- use assert!() instead of bail!() in test cases as much as appropriate
proxmox-resource-scheduling/src/lib.rs | 1 +
proxmox-resource-scheduling/src/node.rs | 40 ++++
proxmox-resource-scheduling/src/resource.rs | 84 ++++++++
proxmox-resource-scheduling/src/usage.rs | 208 ++++++++++++++++++++
proxmox-resource-scheduling/tests/usage.rs | 181 +++++++++++++++++
5 files changed, 514 insertions(+)
create mode 100644 proxmox-resource-scheduling/src/usage.rs
create mode 100644 proxmox-resource-scheduling/tests/usage.rs
diff --git a/proxmox-resource-scheduling/src/lib.rs b/proxmox-resource-scheduling/src/lib.rs
index 12b743fe..99ca16d8 100644
--- a/proxmox-resource-scheduling/src/lib.rs
+++ b/proxmox-resource-scheduling/src/lib.rs
@@ -3,6 +3,7 @@ pub mod topsis;
pub mod node;
pub mod resource;
+pub mod usage;
pub mod scheduler;
diff --git a/proxmox-resource-scheduling/src/node.rs b/proxmox-resource-scheduling/src/node.rs
index e6227eda..304582ee 100644
--- a/proxmox-resource-scheduling/src/node.rs
+++ b/proxmox-resource-scheduling/src/node.rs
@@ -1,3 +1,5 @@
+use std::collections::HashSet;
+
use crate::resource::ResourceStats;
/// Usage statistics of a node.
@@ -37,3 +39,41 @@ impl NodeStats {
self.mem as f64 / self.maxmem as f64
}
}
+
+/// A node in the cluster context.
+#[derive(Clone, Debug)]
+pub struct Node {
+ /// Base stats of the node.
+ stats: NodeStats,
+ /// The identifiers of the resources assigned to the node.
+ resources: HashSet<String>,
+}
+
+impl Node {
+ pub fn new(stats: NodeStats) -> Self {
+ Self {
+ stats,
+ resources: HashSet::new(),
+ }
+ }
+
+ pub fn add_resource(&mut self, sid: String) -> bool {
+ self.resources.insert(sid)
+ }
+
+ pub fn remove_resource(&mut self, sid: &str) -> bool {
+ self.resources.remove(sid)
+ }
+
+ pub fn stats(&self) -> NodeStats {
+ self.stats
+ }
+
+ pub fn resources_iter(&self) -> impl Iterator<Item = &str> {
+ self.resources.iter().map(String::as_str)
+ }
+
+ pub fn contains_resource(&self, sid: &str) -> bool {
+ self.resources.contains(sid)
+ }
+}
diff --git a/proxmox-resource-scheduling/src/resource.rs b/proxmox-resource-scheduling/src/resource.rs
index 1eb9d15e..2dbe6fa4 100644
--- a/proxmox-resource-scheduling/src/resource.rs
+++ b/proxmox-resource-scheduling/src/resource.rs
@@ -31,3 +31,87 @@ impl Sum for ResourceStats {
iter.fold(Self::default(), |a, b| a + b)
}
}
+
+/// Execution state of a resource.
+#[derive(Copy, Clone, PartialEq, Eq, Debug)]
+#[non_exhaustive]
+pub enum ResourceState {
+ /// The resource is stopped.
+ Stopped,
+ /// The resource is scheduled to start.
+ Starting,
+ /// The resource is started and currently running.
+ Started,
+}
+
+/// Placement of a resource.
+#[derive(Clone, PartialEq, Eq, Debug)]
+#[non_exhaustive]
+pub enum ResourcePlacement {
+ /// The resource is on `current_node`.
+ Stationary { current_node: String },
+ /// The resource is being moved from `current_node` to `target_node`.
+ Moving {
+ current_node: String,
+ target_node: String,
+ },
+}
+
+/// A resource in the cluster context.
+#[derive(Clone, Debug)]
+#[non_exhaustive]
+pub struct Resource {
+ /// The usage statistics of the resource.
+ stats: ResourceStats,
+ /// The execution state of the resource.
+ state: ResourceState,
+ /// The placement of the resource.
+ placement: ResourcePlacement,
+}
+
+impl Resource {
+ pub fn new(stats: ResourceStats, state: ResourceState, placement: ResourcePlacement) -> Self {
+ Self {
+ stats,
+ state,
+ placement,
+ }
+ }
+
+ /// Handles the external removal of a node.
+ ///
+ /// Returns whether the resource does not have any node left.
+ pub fn remove_node(&mut self, nodename: &str) -> bool {
+ match &self.placement {
+ ResourcePlacement::Stationary { current_node } => current_node == nodename,
+ ResourcePlacement::Moving {
+ current_node,
+ target_node,
+ } => {
+ if current_node == nodename {
+ self.placement = ResourcePlacement::Stationary {
+ current_node: target_node.to_owned(),
+ };
+ } else if target_node == nodename {
+ self.placement = ResourcePlacement::Stationary {
+ current_node: current_node.to_owned(),
+ };
+ }
+
+ false
+ }
+ }
+ }
+
+ pub fn state(&self) -> ResourceState {
+ self.state
+ }
+
+ pub fn stats(&self) -> ResourceStats {
+ self.stats
+ }
+
+ pub fn placement(&self) -> &ResourcePlacement {
+ &self.placement
+ }
+}
diff --git a/proxmox-resource-scheduling/src/usage.rs b/proxmox-resource-scheduling/src/usage.rs
new file mode 100644
index 00000000..81b88452
--- /dev/null
+++ b/proxmox-resource-scheduling/src/usage.rs
@@ -0,0 +1,208 @@
+use anyhow::{bail, Error};
+
+use std::collections::HashMap;
+
+use crate::{
+ node::{Node, NodeStats},
+ resource::{Resource, ResourcePlacement, ResourceState, ResourceStats},
+ scheduler::{NodeUsage, Scheduler},
+};
+
+/// The state of the usage in the cluster.
+///
+/// The cluster usage represents the current state of the assignments between nodes and resources
+/// and their usage statistics. A resource can be placed on these nodes according to their
+/// placement state. See [`crate::resource::Resource`] for more information.
+///
+/// The cluster usage state can be used to build a current state for the [`Scheduler`].
+#[derive(Default)]
+pub struct Usage {
+ nodes: HashMap<String, Node>,
+ resources: HashMap<String, Resource>,
+}
+
+/// An aggregator for the [`Usage`] maps the cluster usage to node usage statistics that are
+/// relevant for the scheduler.
+pub trait UsageAggregator {
+ fn aggregate(usage: &Usage) -> Vec<NodeUsage>;
+}
+
+impl Usage {
+ /// Instantiate an empty cluster usage.
+ pub fn new() -> Self {
+ Self::default()
+ }
+
+ /// Add a node to the cluster usage.
+ ///
+ /// This method fails if a node with the same `nodename` already exists.
+ pub fn add_node(&mut self, nodename: String, stats: NodeStats) -> Result<(), Error> {
+ if self.nodes.contains_key(&nodename) {
+ bail!("node '{nodename}' already exists");
+ }
+
+ self.nodes.insert(nodename, Node::new(stats));
+
+ Ok(())
+ }
+
+ /// Remove a node from the cluster usage.
+ pub fn remove_node(&mut self, nodename: &str) {
+ if let Some(node) = self.nodes.remove(nodename) {
+ node.resources_iter().for_each(|sid| {
+ if let Some(resource) = self.resources.get_mut(sid)
+ && resource.remove_node(nodename)
+ {
+ self.resources.remove(sid);
+ }
+ });
+ }
+ }
+
+ /// Returns a reference to the [`Node`] with the identifier `nodename`.
+ pub fn get_node(&self, nodename: &str) -> Option<&Node> {
+ self.nodes.get(nodename)
+ }
+
+ /// Returns an iterator for the cluster usage's nodes.
+ pub fn nodes_iter(&self) -> impl Iterator<Item = (&String, &Node)> {
+ self.nodes.iter()
+ }
+
+    /// Returns an iterator for the cluster usage's node names.
+ pub fn nodenames_iter(&self) -> impl Iterator<Item = &String> {
+ self.nodes.keys()
+ }
+
+ /// Returns whether the node with the identifier `nodename` is present in the cluster usage.
+ pub fn contains_node(&self, nodename: &str) -> bool {
+ self.nodes.contains_key(nodename)
+ }
+
+ /// Add `resource` with identifier `sid` to cluster usage.
+ ///
+ /// This method fails if a resource with the same `sid` already exists or the resource's nodes
+ /// do not exist in the cluster usage.
+ pub fn add_resource(&mut self, sid: String, resource: Resource) -> Result<(), Error> {
+ if self.resources.contains_key(&sid) {
+ bail!("resource '{sid}' already exists");
+ }
+
+ match resource.placement() {
+ ResourcePlacement::Stationary { current_node } => {
+ match self.nodes.get_mut(current_node) {
+ Some(current_node) => {
+ current_node.add_resource(sid.to_owned());
+ }
+ _ => bail!("current node for resource '{sid}' does not exist"),
+ }
+ }
+ ResourcePlacement::Moving {
+ current_node,
+ target_node,
+ } => {
+ if current_node == target_node {
+ bail!("resource '{sid}' has the same current and target node");
+ }
+
+ match self.nodes.get_disjoint_mut([current_node, target_node]) {
+ [Some(current_node), Some(target_node)] => {
+ current_node.add_resource(sid.to_owned());
+ target_node.add_resource(sid.to_owned());
+ }
+ _ => bail!("nodes for resource '{sid}' do not exist"),
+ }
+ }
+ }
+
+ self.resources.insert(sid, resource);
+
+ Ok(())
+ }
+
+ /// Add `stats` from resource with identifier `sid` to node `nodename` in cluster usage.
+ ///
+ /// For the first call, the resource is assumed to be started and stationary on the given node.
+ /// If there was no intermediate call to remove the resource, the second call will assume that
+ /// the given node is the target node and the resource is being moved there. The second call
+ /// will ignore the value of `stats`.
+ #[deprecated = "only for backwards compatibility, use add_resource(...) instead"]
+ pub fn add_resource_usage_to_node(
+ &mut self,
+ nodename: &str,
+ sid: &str,
+ stats: ResourceStats,
+ ) -> Result<(), Error> {
+ if let Some(resource) = self.resources.remove(sid) {
+ match resource.placement() {
+ ResourcePlacement::Stationary { current_node } => {
+ let placement = ResourcePlacement::Moving {
+ current_node: current_node.to_owned(),
+ target_node: nodename.to_owned(),
+ };
+ let new_resource = Resource::new(resource.stats(), resource.state(), placement);
+
+ if let Err(err) = self.add_resource(sid.to_owned(), new_resource) {
+ self.add_resource(sid.to_owned(), resource)?;
+
+ bail!(err);
+ }
+
+ Ok(())
+ }
+ ResourcePlacement::Moving { target_node, .. } => {
+ bail!("resource '{sid}' is already moving to target node '{target_node}'")
+ }
+ }
+ } else {
+ let placement = ResourcePlacement::Stationary {
+ current_node: nodename.to_owned(),
+ };
+ let resource = Resource::new(stats, ResourceState::Started, placement);
+
+ self.add_resource(sid.to_owned(), resource)
+ }
+ }
+
+ /// Remove resource with identifier `sid` from cluster usage.
+ pub fn remove_resource(&mut self, sid: &str) {
+ if let Some(resource) = self.resources.remove(sid) {
+ match resource.placement() {
+ ResourcePlacement::Stationary { current_node } => {
+ if let Some(current_node) = self.nodes.get_mut(current_node) {
+ current_node.remove_resource(sid);
+ }
+ }
+ ResourcePlacement::Moving {
+ current_node,
+ target_node,
+ } => {
+ if let Some(current_node) = self.nodes.get_mut(current_node) {
+ current_node.remove_resource(sid);
+ }
+
+ if let Some(target_node) = self.nodes.get_mut(target_node) {
+ target_node.remove_resource(sid);
+ }
+ }
+ }
+ }
+ }
+
+ /// Returns a reference to the [`Resource`] with the identifier `sid`.
+ pub fn get_resource(&self, sid: &str) -> Option<&Resource> {
+ self.resources.get(sid)
+ }
+
+ /// Returns an iterator for the cluster usage's resources.
+ pub fn resources_iter(&self) -> impl Iterator<Item = (&String, &Resource)> {
+ self.resources.iter()
+ }
+
+ /// Use the current cluster usage as a base for a scheduling action.
+ pub fn to_scheduler<F: UsageAggregator>(&self) -> Scheduler {
+ let node_usages = F::aggregate(self);
+
+ Scheduler::from_nodes(node_usages)
+ }
+}
diff --git a/proxmox-resource-scheduling/tests/usage.rs b/proxmox-resource-scheduling/tests/usage.rs
new file mode 100644
index 00000000..b6cb5a6e
--- /dev/null
+++ b/proxmox-resource-scheduling/tests/usage.rs
@@ -0,0 +1,181 @@
+use proxmox_resource_scheduling::{
+ node::NodeStats,
+ resource::{Resource, ResourcePlacement, ResourceState, ResourceStats},
+ usage::Usage,
+};
+
+#[test]
+fn test_no_duplicate_nodes() {
+ let mut usage = Usage::new();
+
+ assert!(usage
+ .add_node("node1".to_owned(), NodeStats::default())
+ .is_ok());
+
+ assert!(
+ usage
+ .add_node("node1".to_owned(), NodeStats::default())
+ .is_err(),
+ "cluster usage does allow duplicate node entries"
+ );
+}
+
+#[test]
+fn test_no_duplicate_resources() {
+ let mut usage = Usage::new();
+
+ assert!(usage
+ .add_node("node1".to_owned(), NodeStats::default())
+ .is_ok());
+
+ let placement = ResourcePlacement::Stationary {
+ current_node: "node1".to_owned(),
+ };
+ let resource = Resource::new(ResourceStats::default(), ResourceState::Stopped, placement);
+
+ assert!(usage
+ .add_resource("vm:101".to_owned(), resource.clone())
+ .is_ok());
+
+ assert!(
+ usage.add_resource("vm:101".to_owned(), resource).is_err(),
+ "cluster usage does allow duplicate resource entries"
+ );
+}
+
+fn assert_add_node(usage: &mut Usage, nodename: &str) {
+ assert!(usage
+ .add_node(nodename.to_owned(), NodeStats::default())
+ .is_ok());
+
+ assert!(
+ usage.get_node(nodename).is_some(),
+ "node '{nodename}' was not added"
+ );
+}
+
+fn assert_add_resource(usage: &mut Usage, sid: &str, resource: Resource) {
+ assert!(usage.add_resource(sid.to_owned(), resource).is_ok());
+
+ assert!(
+ usage.get_resource(sid).is_some(),
+ "resource '{sid}' was not added"
+ );
+}
+
+#[test]
+#[allow(deprecated)]
+fn test_add_resource_usage_to_node() {
+ let mut usage = Usage::new();
+
+ assert_add_node(&mut usage, "node1");
+ assert_add_node(&mut usage, "node2");
+ assert_add_node(&mut usage, "node3");
+
+ assert!(usage
+ .add_resource_usage_to_node("node1", "vm:101", ResourceStats::default())
+ .is_ok());
+
+ assert!(
+ usage
+ .add_resource_usage_to_node("node4", "vm:101", ResourceStats::default())
+ .is_err(),
+ "add_resource_usage_to_node() allows adding non-existent nodes"
+ );
+
+ assert!(usage
+ .add_resource_usage_to_node("node2", "vm:101", ResourceStats::default())
+ .is_ok());
+
+ assert!(
+ usage
+ .add_resource_usage_to_node("node3", "vm:101", ResourceStats::default())
+ .is_err(),
+ "add_resource_usage_to_node() allows adding resources to more than two nodes"
+ );
+}
+
+#[test]
+fn test_add_remove_stationary_resource() {
+ let mut usage = Usage::new();
+
+ let (sid, nodename) = ("vm:101", "node1");
+
+ assert_add_node(&mut usage, nodename);
+
+ let placement = ResourcePlacement::Stationary {
+ current_node: nodename.to_owned(),
+ };
+ let resource = Resource::new(ResourceStats::default(), ResourceState::Stopped, placement);
+
+ assert_add_resource(&mut usage, sid, resource);
+
+ if let Some(node) = usage.get_node(nodename) {
+ assert!(
+ node.contains_resource(sid),
+            "resource '{sid}' was not added to node '{nodename}'"
+ );
+ }
+
+ usage.remove_resource(sid);
+
+ assert!(
+ usage.get_resource(sid).is_none(),
+ "resource '{sid}' was not removed"
+ );
+
+ if let Some(node) = usage.get_node(nodename) {
+ assert!(
+ !node.contains_resource(sid),
+ "resource '{sid}' was not removed from node '{nodename}'"
+ );
+ }
+}
+
+#[test]
+fn test_add_remove_moving_resource() {
+ let mut usage = Usage::new();
+
+ let (sid, current_nodename, target_nodename) = ("vm:101", "node1", "node2");
+
+ assert_add_node(&mut usage, current_nodename);
+ assert_add_node(&mut usage, target_nodename);
+
+ let placement = ResourcePlacement::Moving {
+ current_node: current_nodename.to_owned(),
+ target_node: target_nodename.to_owned(),
+ };
+ let resource = Resource::new(ResourceStats::default(), ResourceState::Stopped, placement);
+
+ assert_add_resource(&mut usage, sid, resource);
+
+ if let Some(current_node) = usage.get_node(current_nodename) {
+ assert!(
+ current_node.contains_resource(sid),
+ "resource '{sid}' was not added to current node '{current_nodename}'"
+ );
+ }
+
+ if let Some(target_node) = usage.get_node(target_nodename) {
+ assert!(
+ target_node.contains_resource(sid),
+ "resource '{sid}' was not added to target node '{target_nodename}'"
+ );
+ }
+
+ usage.remove_resource(sid);
+
+ if let Some(current_node) = usage.get_node(current_nodename) {
+ assert!(
+ !current_node.contains_resource(sid),
+ "resource '{sid}' was not removed from current node '{current_nodename}'"
+ );
+ }
+
+ if let Some(target_node) = usage.get_node(target_nodename) {
+ assert!(
+ !target_node.contains_resource(sid),
+ "resource '{sid}' was not removed from target node '{target_nodename}'"
+ );
+ }
+}
--
2.47.3