public inbox for pve-devel@lists.proxmox.com
From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH proxmox v3 09/40] resource-scheduling: implement rebalancing migration selection
Date: Mon, 30 Mar 2026 16:30:18 +0200
Message-ID: <20260330144101.668747-10-d.kral@proxmox.com>
In-Reply-To: <20260330144101.668747-1-d.kral@proxmox.com>

Assuming that a resource will hold the same dynamic resource usage on a
new node as on the previous node, score possible migrations, where:

- the cluster node imbalance is minimal (brute force), or
- the shifted root mean square and maximum resource usages of the cpu
  and memory are minimal across the cluster nodes (TOPSIS).
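
The "cluster node imbalance" above is measured by the patch's
calculate_node_imbalance() as the coefficient of variation of the node
loads. As a minimal standalone sketch (not part of the patch, the example
loads are made up):

```rust
/// Coefficient of variation of the node loads: standard deviation divided
/// by the arithmetic mean. Returns 0.0 for an empty slice or all-zero loads.
fn coefficient_of_variation(loads: &[f64]) -> f64 {
    let sum: f64 = loads.iter().sum();
    if sum == 0.0 {
        return 0.0;
    }
    let n = loads.len() as f64;
    let mean = sum / n;
    let variance = loads.iter().map(|l| (l - mean).powi(2)).sum::<f64>() / n;
    variance.sqrt() / mean
}

fn main() {
    // A perfectly balanced cluster has zero imbalance.
    assert_eq!(coefficient_of_variation(&[0.5, 0.5, 0.5]), 0.0);
    // Shifting load from a busy node towards an idle one lowers the score,
    // which is what the brute-force migration scoring exploits.
    let before = coefficient_of_variation(&[0.9, 0.1]);
    let after = coefficient_of_variation(&[0.6, 0.4]);
    assert!(after < before);
    println!("before={before:.3}, after={after:.3}");
}
```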

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
changes v2 -> v3:
- fix wording in ScoredMigration::new() documentation
- use f64::powi instead of f64::powf in ScoredMigration::new()
- adapt wording in MigrationCandidate `stats` member documentation
- only compare the order of the return values of
  score_best_balancing_migration_candidates{,_topsis}() in test cases
  instead of checking equality of the imbalance scores
- introduce rank_best_balancing_migration_candidates{,_topsis}() in the
  test cases to reduce code duplication
- use assert! instead of bail! wherever appropriate
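
The f64::powi change above concerns the digit truncation in
ScoredMigration::new(), which drops non-significant base-10 digits before
imbalance scores are compared. A standalone sketch of that trick (the
example values are illustrative, not taken from the patch):

```rust
/// Truncate an f64 to its significant base-10 digits (f64::DIGITS == 15),
/// mirroring ScoredMigration::new(): scores that differ only by
/// floating-point noise then compare as equal.
fn truncate_imbalance(value: f64) -> f64 {
    let factor = 10_f64.powi(f64::DIGITS as i32);
    f64::trunc(factor * value) / factor
}

fn main() {
    // Classic floating-point noise: 0.1 + 0.2 is not exactly 0.3 ...
    assert!(0.1 + 0.2 != 0.3);
    // ... but after truncation both sides collapse to the same value.
    assert_eq!(truncate_imbalance(0.1 + 0.2), truncate_imbalance(0.3));
    println!("{}", truncate_imbalance(0.1 + 0.2));
}
```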

 proxmox-resource-scheduling/src/node.rs       |  17 ++
 proxmox-resource-scheduling/src/scheduler.rs  | 282 ++++++++++++++++++
 .../tests/scheduler.rs                        | 181 ++++++++++-
 3 files changed, 479 insertions(+), 1 deletion(-)

diff --git a/proxmox-resource-scheduling/src/node.rs b/proxmox-resource-scheduling/src/node.rs
index 304582ee..e6d4ff5b 100644
--- a/proxmox-resource-scheduling/src/node.rs
+++ b/proxmox-resource-scheduling/src/node.rs
@@ -29,6 +29,18 @@ impl NodeStats {
         self.mem += resource_stats.maxmem;
     }
 
+    /// Adds the resource stats to the node stats as if the resource is running on the node.
+    pub fn add_running_resource(&mut self, resource_stats: &ResourceStats) {
+        self.cpu += resource_stats.cpu;
+        self.mem += resource_stats.mem;
+    }
+
+    /// Removes the resource stats from the node stats as if the resource is not running on the node.
+    pub fn remove_running_resource(&mut self, resource_stats: &ResourceStats) {
+        self.cpu -= resource_stats.cpu;
+        self.mem = self.mem.saturating_sub(resource_stats.mem);
+    }
+
     /// Returns the current cpu usage as a percentage.
     pub fn cpu_load(&self) -> f64 {
         self.cpu / self.maxcpu as f64
@@ -38,6 +50,11 @@ impl NodeStats {
     pub fn mem_load(&self) -> f64 {
         self.mem as f64 / self.maxmem as f64
     }
+
+    /// Returns a combined node usage as a percentage.
+    pub fn load(&self) -> f64 {
+        (self.cpu_load() + self.mem_load()) / 2.0
+    }
 }
 
 /// A node in the cluster context.
diff --git a/proxmox-resource-scheduling/src/scheduler.rs b/proxmox-resource-scheduling/src/scheduler.rs
index 5aca549d..49d16f9f 100644
--- a/proxmox-resource-scheduling/src/scheduler.rs
+++ b/proxmox-resource-scheduling/src/scheduler.rs
@@ -2,6 +2,12 @@ use anyhow::Error;
 
 use crate::{node::NodeStats, resource::ResourceStats, topsis};
 
+use serde::{Deserialize, Serialize};
+use std::{
+    cmp::{Ordering, Reverse},
+    collections::BinaryHeap,
+};
+
 /// The scheduler view of a node.
 #[derive(Clone, Debug)]
 pub struct NodeUsage {
@@ -11,6 +17,36 @@ pub struct NodeUsage {
     pub stats: NodeStats,
 }
 
+/// Returns the load imbalance among the nodes.
+///
+/// The load imbalance is measured as the statistical dispersion of the individual node loads.
+///
+/// The current implementation uses the dimensionless coefficient of variation, which expresses the
+/// standard deviation in relation to the arithmetic mean of the node loads.
+///
+/// The coefficient of variation is not robust to outliers, which is a desired property here,
+/// because outliers should be detected as reliably as possible.
+fn calculate_node_imbalance(nodes: &[NodeUsage], to_load: impl Fn(&NodeUsage) -> f64) -> f64 {
+    let node_count = nodes.len();
+    let node_loads = nodes.iter().map(to_load).collect::<Vec<_>>();
+
+    let load_sum = node_loads.iter().sum::<f64>();
+
+    // load_sum is guaranteed to be 0.0 for an empty `nodes` slice
+    if load_sum == 0.0 {
+        0.0
+    } else {
+        let load_mean = load_sum / node_count as f64;
+
+        let squared_diff_sum = node_loads
+            .iter()
+            .fold(0.0, |sum, node_load| sum + (node_load - load_mean).powi(2));
+        let load_sd = (squared_diff_sum / node_count as f64).sqrt();
+
+        load_sd / load_mean
+    }
+}
+
 criteria_struct! {
     /// A given alternative.
     struct PveTopsisAlternative {
@@ -33,6 +69,83 @@ pub struct Scheduler {
     nodes: Vec<NodeUsage>,
 }
 
+/// A possible migration.
+#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Debug, Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+pub struct Migration {
+    /// The identifier of a leading resource.
+    pub sid: String,
+    /// The current node of the leading resource.
+    pub source_node: String,
+    /// The possible migration target node for the resource.
+    pub target_node: String,
+}
+
+/// A possible migration with a score.
+#[derive(Clone, Debug, Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+pub struct ScoredMigration {
+    /// The possible migration.
+    pub migration: Migration,
+    /// The expected node imbalance after the migration.
+    pub imbalance: f64,
+}
+
+impl Ord for ScoredMigration {
+    fn cmp(&self, other: &Self) -> Ordering {
+        self.imbalance
+            .total_cmp(&other.imbalance)
+            .then(self.migration.cmp(&other.migration))
+    }
+}
+
+impl PartialOrd for ScoredMigration {
+    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
+        Some(self.cmp(other))
+    }
+}
+
+impl PartialEq for ScoredMigration {
+    fn eq(&self, other: &Self) -> bool {
+        self.cmp(other) == Ordering::Equal
+    }
+}
+
+impl Eq for ScoredMigration {}
+
+impl ScoredMigration {
+    pub fn new<T: Into<Migration>>(migration: T, imbalance: f64) -> Self {
+        // Depending on how the imbalance is calculated, it can contain minor approximation errors.
+        // As this struct implements the Ord trait, users of the struct's cmp() can run into
+        // cases where two imbalances are equal up to the significant base-10 digits, but are
+        // still treated as different values.
+        //
+        // Therefore, truncate any non-significant digits to prevent these cases.
+        let factor = 10_f64.powi(f64::DIGITS as i32);
+        let truncated_imbalance = f64::trunc(factor * imbalance) / factor;
+
+        Self {
+            migration: migration.into(),
+            imbalance: truncated_imbalance,
+        }
+    }
+}
+
+/// A possible migration candidate with the migrated usage stats.
+#[derive(Clone, Debug)]
+pub struct MigrationCandidate {
+    /// The possible migration.
+    pub migration: Migration,
+    /// Usage stats of the resource(s) to be migrated.
+    pub stats: ResourceStats,
+}
+
+impl From<MigrationCandidate> for Migration {
+    fn from(candidate: MigrationCandidate) -> Self {
+        candidate.migration
+    }
+}
+
 impl Scheduler {
     /// Instantiate scheduler instance from node usages.
     pub fn from_nodes<I>(nodes: I) -> Self
@@ -82,6 +195,123 @@ impl Scheduler {
         }
     }
 
+    /// Returns the load imbalance among the nodes.
+    ///
+    /// See [`calculate_node_imbalance`] for more information.
+    pub fn node_imbalance(&self) -> f64 {
+        calculate_node_imbalance(&self.nodes, |node| node.stats.load())
+    }
+
+    /// Returns the load imbalance among the nodes as if a specific resource was moved.
+    ///
+    /// See [`calculate_node_imbalance`] for more information.
+    fn node_imbalance_with_migration_candidate(&self, candidate: &MigrationCandidate) -> f64 {
+        calculate_node_imbalance(&self.nodes, |node| {
+            let mut new_stats = node.stats;
+
+            if node.name == candidate.migration.source_node {
+                new_stats.remove_running_resource(&candidate.stats);
+            } else if node.name == candidate.migration.target_node {
+                new_stats.add_running_resource(&candidate.stats);
+            }
+
+            new_stats.load()
+        })
+    }
+
+    /// Scores the given migration `candidates` by the best node imbalance improvement with
+    /// exhaustive search.
+    ///
+    /// The `candidates` are assumed to be consistent with the scheduler. No further validation
+    /// is done on whether the given node names actually exist in the scheduler.
+    ///
+    /// The scoring is done as if each resource migration has already been done. This assumes that
+    /// the already migrated resource consumes the same amount of each stat as on the previous node
+    /// according to its `stats`.
+    ///
+    /// Returns up to `limit` of the best scored migrations.
+    pub fn score_best_balancing_migration_candidates<I>(
+        &self,
+        candidates: I,
+        limit: usize,
+    ) -> Vec<ScoredMigration>
+    where
+        I: IntoIterator<Item = MigrationCandidate>,
+    {
+        let mut scored_migrations = candidates
+            .into_iter()
+            .map(|candidate| {
+                let imbalance = self.node_imbalance_with_migration_candidate(&candidate);
+
+                Reverse(ScoredMigration::new(candidate, imbalance))
+            })
+            .collect::<BinaryHeap<_>>();
+
+        let mut best_migrations = Vec::with_capacity(limit);
+
+        // BinaryHeap::into_iter_sorted() is still in nightly unfortunately
+        while best_migrations.len() < limit {
+            match scored_migrations.pop() {
+                Some(Reverse(alternative)) => best_migrations.push(alternative),
+                None => break,
+            }
+        }
+
+        best_migrations
+    }
+
+    /// Scores the given migration `candidates` by the best node imbalance improvement with the
+    /// TOPSIS method.
+    ///
+    /// The `candidates` are assumed to be consistent with the scheduler. No further validation
+    /// is done on whether the given node names actually exist in the scheduler.
+    ///
+    /// The scoring is done as if each resource migration has already been done. This assumes that
+    /// the already migrated resource consumes the same amount of each stat as on the previous node
+    /// according to its `stats`.
+    ///
+    /// Returns up to `limit` of the best scored migrations.
+    pub fn score_best_balancing_migration_candidates_topsis(
+        &self,
+        candidates: &[MigrationCandidate],
+        limit: usize,
+    ) -> Result<Vec<ScoredMigration>, Error> {
+        let matrix = candidates
+            .iter()
+            .map(|candidate| {
+                let resource_stats = &candidate.stats;
+                let source_node = &candidate.migration.source_node;
+                let target_node = &candidate.migration.target_node;
+
+                self.topsis_alternative_with(|node| {
+                    let mut new_stats = node.stats;
+
+                    if &node.name == source_node {
+                        new_stats.remove_running_resource(resource_stats);
+                    } else if &node.name == target_node {
+                        new_stats.add_running_resource(resource_stats);
+                    }
+
+                    new_stats
+                })
+                .into()
+            })
+            .collect::<Vec<_>>();
+
+        let best_alternatives =
+            topsis::rank_alternatives(&topsis::Matrix::new(matrix)?, &PVE_HA_TOPSIS_CRITERIA)?;
+
+        Ok(best_alternatives
+            .into_iter()
+            .take(limit)
+            .map(|i| {
+                let imbalance = self.node_imbalance_with_migration_candidate(&candidates[i]);
+
+                ScoredMigration::new(candidates[i].clone(), imbalance)
+            })
+            .collect())
+    }
+
     /// Scores nodes to start a resource with the usage statistics `resource_stats` on.
     ///
     /// The scoring is done as if the resource is already started on each node. This assumes that
@@ -123,3 +353,55 @@ impl Scheduler {
             .collect())
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_scored_migration_order() {
+        let migration1 = ScoredMigration::new(
+            Migration {
+                sid: String::from("vm:102"),
+                source_node: String::from("node1"),
+                target_node: String::from("node2"),
+            },
+            0.7231749488916931,
+        );
+        let migration2 = ScoredMigration::new(
+            Migration {
+                sid: String::from("vm:102"),
+                source_node: String::from("node1"),
+                target_node: String::from("node3"),
+            },
+            0.723174948891693,
+        );
+        let migration3 = ScoredMigration::new(
+            Migration {
+                sid: String::from("vm:101"),
+                source_node: String::from("node1"),
+                target_node: String::from("node2"),
+            },
+            0.723174948891693 + 1e-15,
+        );
+
+        let mut migrations = vec![migration2.clone(), migration3.clone(), migration1.clone()];
+
+        migrations.sort();
+
+        assert_eq!(
+            vec![migration1.clone(), migration2.clone(), migration3.clone()],
+            migrations
+        );
+
+        let mut heap = BinaryHeap::from(vec![
+            Reverse(migration2.clone()),
+            Reverse(migration3.clone()),
+            Reverse(migration1.clone()),
+        ]);
+
+        assert_eq!(heap.pop(), Some(Reverse(migration1)));
+        assert_eq!(heap.pop(), Some(Reverse(migration2)));
+        assert_eq!(heap.pop(), Some(Reverse(migration3)));
+    }
+}
diff --git a/proxmox-resource-scheduling/tests/scheduler.rs b/proxmox-resource-scheduling/tests/scheduler.rs
index 376a0a4f..be90e4f9 100644
--- a/proxmox-resource-scheduling/tests/scheduler.rs
+++ b/proxmox-resource-scheduling/tests/scheduler.rs
@@ -2,9 +2,13 @@ use anyhow::Error;
 use proxmox_resource_scheduling::{
     node::NodeStats,
     resource::ResourceStats,
-    scheduler::{NodeUsage, Scheduler},
+    scheduler::{Migration, MigrationCandidate, NodeUsage, Scheduler},
 };
 
+fn new_empty_cluster_scheduler() -> Scheduler {
+    Scheduler::from_nodes(Vec::<NodeUsage>::new())
+}
+
 fn new_homogeneous_cluster_scheduler() -> Scheduler {
     let (maxcpu, maxmem) = (16, 64 * (1 << 30));
 
@@ -75,6 +79,181 @@ fn new_heterogeneous_cluster_scheduler() -> Scheduler {
     Scheduler::from_nodes(vec![node1, node2, node3])
 }
 
+#[test]
+fn test_node_imbalance_with_empty_cluster() {
+    let scheduler = new_empty_cluster_scheduler();
+
+    assert_eq!(scheduler.node_imbalance(), 0.0);
+}
+
+#[test]
+fn test_node_imbalance_with_perfectly_balanced_cluster() {
+    let node = NodeUsage {
+        name: String::from("node1"),
+        stats: NodeStats {
+            cpu: 1.7,
+            maxcpu: 16,
+            mem: 224395264,
+            maxmem: 68719476736,
+        },
+    };
+
+    let scheduler = Scheduler::from_nodes(vec![node.clone()]);
+
+    assert_eq!(scheduler.node_imbalance(), 0.0);
+
+    let scheduler = Scheduler::from_nodes(vec![node.clone(), node.clone(), node]);
+
+    assert_eq!(scheduler.node_imbalance(), 0.0);
+}
+
+fn new_simple_migration_candidates() -> (Vec<MigrationCandidate>, Migration, Migration) {
+    let migration1 = Migration {
+        sid: String::from("vm:101"),
+        source_node: String::from("node1"),
+        target_node: String::from("node2"),
+    };
+    let migration2 = Migration {
+        sid: String::from("vm:101"),
+        source_node: String::from("node1"),
+        target_node: String::from("node3"),
+    };
+    let stats = ResourceStats {
+        cpu: 0.7,
+        maxcpu: 4.0,
+        mem: 8 << 30,
+        maxmem: 16 << 30,
+    };
+
+    let candidates = vec![
+        MigrationCandidate {
+            migration: migration1.clone(),
+            stats,
+        },
+        MigrationCandidate {
+            migration: migration2.clone(),
+            stats,
+        },
+    ];
+
+    (candidates, migration1, migration2)
+}
+
+fn assert_imbalance(imbalance: f64, expected_imbalance: f64) {
+    assert!(
+        (expected_imbalance - imbalance).abs() <= f64::EPSILON,
+        "imbalance is {imbalance}, but was expected to be {expected_imbalance}"
+    );
+}
+
+fn rank_best_balancing_migration_candidates(
+    scheduler: &Scheduler,
+    candidates: Vec<MigrationCandidate>,
+    limit: usize,
+) -> Vec<Migration> {
+    scheduler
+        .score_best_balancing_migration_candidates(candidates, limit)
+        .into_iter()
+        .map(|entry| entry.migration)
+        .collect()
+}
+
+#[test]
+fn test_score_best_balancing_migration_candidates_with_no_candidates() {
+    let scheduler = new_homogeneous_cluster_scheduler();
+
+    assert_eq!(
+        rank_best_balancing_migration_candidates(&scheduler, vec![], 2),
+        vec![]
+    );
+}
+
+#[test]
+fn test_score_best_balancing_migration_candidates_in_homogeneous_cluster() {
+    let scheduler = new_homogeneous_cluster_scheduler();
+
+    assert_imbalance(scheduler.node_imbalance(), 0.4893954724628247);
+
+    let (candidates, migration1, migration2) = new_simple_migration_candidates();
+
+    assert_eq!(
+        rank_best_balancing_migration_candidates(&scheduler, candidates, 2),
+        vec![migration2, migration1]
+    );
+}
+
+#[test]
+fn test_score_best_balancing_migration_candidates_in_heterogeneous_cluster() {
+    let scheduler = new_heterogeneous_cluster_scheduler();
+
+    assert_imbalance(scheduler.node_imbalance(), 0.33026013056867354);
+
+    let (candidates, migration1, migration2) = new_simple_migration_candidates();
+
+    assert_eq!(
+        rank_best_balancing_migration_candidates(&scheduler, candidates, 2),
+        vec![migration2, migration1]
+    );
+}
+
+fn rank_best_balancing_migration_candidates_topsis(
+    scheduler: &Scheduler,
+    candidates: &[MigrationCandidate],
+    limit: usize,
+) -> Result<Vec<Migration>, Error> {
+    Ok(scheduler
+        .score_best_balancing_migration_candidates_topsis(candidates, limit)?
+        .into_iter()
+        .map(|entry| entry.migration)
+        .collect())
+}
+
+#[test]
+fn test_score_best_balancing_migration_candidates_topsis_with_no_candidates() -> Result<(), Error> {
+    let scheduler = new_homogeneous_cluster_scheduler();
+
+    assert_eq!(
+        rank_best_balancing_migration_candidates_topsis(&scheduler, &[], 2)?,
+        vec![]
+    );
+
+    Ok(())
+}
+
+#[test]
+fn test_score_best_balancing_migration_candidates_topsis_in_homogeneous_cluster(
+) -> Result<(), Error> {
+    let scheduler = new_homogeneous_cluster_scheduler();
+
+    assert_imbalance(scheduler.node_imbalance(), 0.4893954724628247);
+
+    let (candidates, migration1, migration2) = new_simple_migration_candidates();
+
+    assert_eq!(
+        rank_best_balancing_migration_candidates_topsis(&scheduler, &candidates, 2)?,
+        vec![migration1, migration2]
+    );
+
+    Ok(())
+}
+
+#[test]
+fn test_score_best_balancing_migration_candidates_topsis_in_heterogeneous_cluster(
+) -> Result<(), Error> {
+    let scheduler = new_heterogeneous_cluster_scheduler();
+
+    assert_imbalance(scheduler.node_imbalance(), 0.33026013056867354);
+
+    let (candidates, migration1, migration2) = new_simple_migration_candidates();
+
+    assert_eq!(
+        rank_best_balancing_migration_candidates_topsis(&scheduler, &candidates, 2)?,
+        vec![migration1, migration2]
+    );
+
+    Ok(())
+}
+
 fn rank_nodes_to_start_resource(
     scheduler: &Scheduler,
     resource_stats: ResourceStats,
-- 
2.47.3
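
A closing note on the selection loop in
score_best_balancing_migration_candidates(): wrapping entries in Reverse
turns Rust's max-heap BinaryHeap into a min-heap, so popping yields the
lowest-imbalance migrations first, up to `limit`. A standalone sketch of
that top-k pattern (illustrative integer scores instead of migrations):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Return up to `limit` of the smallest scores in ascending order,
/// mirroring the Reverse/BinaryHeap loop used in the patch.
fn k_smallest(scores: &[u32], limit: usize) -> Vec<u32> {
    let mut heap: BinaryHeap<Reverse<u32>> =
        scores.iter().copied().map(Reverse).collect();

    let mut best = Vec::with_capacity(limit);
    // BinaryHeap::into_iter_sorted() is still nightly-only, hence the loop.
    while best.len() < limit {
        match heap.pop() {
            Some(Reverse(score)) => best.push(score),
            None => break, // fewer candidates than `limit`
        }
    }
    best
}

fn main() {
    assert_eq!(k_smallest(&[7, 3, 9, 1], 2), vec![1, 3]);
    assert_eq!(k_smallest(&[5], 3), vec![5]);
    println!("ok");
}
```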





