From: Kefu Chai <k.chai@proxmox.com>
To: pve-devel@lists.proxmox.com
Cc: Kefu Chai <tchaikov@gmail.com>
Subject: [PATCH pve-cluster v3 02/13] pmxcfs-rs: add pmxcfs-config crate
Date: Mon, 23 Mar 2026 19:32:17 +0800	[thread overview]
Message-ID: <20260323113239.942866-3-k.chai@proxmox.com> (raw)
In-Reply-To: <20260323113239.942866-1-k.chai@proxmox.com>

Add configuration management crate for pmxcfs:
- Config struct: Runtime configuration (node name, IP, flags)
- Thread-safe debug level mutation via AtomicU8
- Arc-wrapped for shared ownership across components
- Comprehensive unit tests including thread safety tests

This crate provides the foundational configuration structure used
by all pmxcfs components. The Config is designed to be shared via
Arc so that multiple components can access the same configuration
instance, with a mutable debug level that allows runtime adjustments.
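
The sharing pattern can be sketched with std types only (this is an
illustrative std-only sketch of the same Arc + AtomicU8 idiom, not the
actual pmxcfs-config API; the `Config` struct and signal-handler thread
below are hypothetical):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU8, Ordering};
use std::thread;

// Immutable core plus an AtomicU8 debug level; interior mutability
// lets the level change without &mut access or locking.
struct Config {
    nodename: String,
    debug_level: AtomicU8,
}

impl Config {
    fn new(nodename: &str, debug_level: u8) -> Arc<Self> {
        Arc::new(Self {
            nodename: nodename.into(),
            debug_level: AtomicU8::new(debug_level),
        })
    }
    fn debug_level(&self) -> u8 {
        self.debug_level.load(Ordering::Relaxed)
    }
    fn set_debug_level(&self, level: u8) {
        self.debug_level.store(level, Ordering::Relaxed);
    }
}

fn main() {
    let config = Config::new("node1", 0);
    // Each component clones the Arc, not the Config itself.
    let for_handler = Arc::clone(&config);
    let handle = thread::spawn(move || {
        // e.g. a signal handler raising the debug level at runtime
        for_handler.set_debug_level(1);
    });
    handle.join().unwrap();
    println!("{} debug={}", config.nodename, config.debug_level());
}
```

Relaxed ordering suffices here because the debug level is an isolated
flag: no other data is published through it, so no acquire/release
pairing is needed.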

Signed-off-by: Kefu Chai <k.chai@proxmox.com>
---
 src/pmxcfs-rs/Cargo.toml               |   5 +
 src/pmxcfs-rs/pmxcfs-config/Cargo.toml |  15 +
 src/pmxcfs-rs/pmxcfs-config/README.md  |  15 +
 src/pmxcfs-rs/pmxcfs-config/src/lib.rs | 365 +++++++++++++++++++++++++
 4 files changed, 400 insertions(+)
 create mode 100644 src/pmxcfs-rs/pmxcfs-config/Cargo.toml
 create mode 100644 src/pmxcfs-rs/pmxcfs-config/README.md
 create mode 100644 src/pmxcfs-rs/pmxcfs-config/src/lib.rs

diff --git a/src/pmxcfs-rs/Cargo.toml b/src/pmxcfs-rs/Cargo.toml
index da2b9440a..99bb79266 100644
--- a/src/pmxcfs-rs/Cargo.toml
+++ b/src/pmxcfs-rs/Cargo.toml
@@ -2,6 +2,7 @@
 [workspace]
 members = [
     "pmxcfs-api-types",  # Shared types and error definitions
+    "pmxcfs-config",     # Configuration management
 ]
 resolver = "2"
 
@@ -16,10 +17,14 @@ rust-version = "1.85"
 [workspace.dependencies]
 # Internal workspace dependencies
 pmxcfs-api-types = { path = "pmxcfs-api-types" }
+pmxcfs-config = { path = "pmxcfs-config" }
 
 # Error handling
 thiserror = "2.0"
 
+# Concurrency primitives
+parking_lot = "0.12"
+
 # System integration
 libc = "0.2"
 
diff --git a/src/pmxcfs-rs/pmxcfs-config/Cargo.toml b/src/pmxcfs-rs/pmxcfs-config/Cargo.toml
new file mode 100644
index 000000000..65e4fe600
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs-config/Cargo.toml
@@ -0,0 +1,15 @@
+[package]
+name = "pmxcfs-config"
+description = "Configuration management for pmxcfs"
+
+version.workspace = true
+edition.workspace = true
+authors.workspace = true
+license.workspace = true
+repository.workspace = true
+
+[lints]
+workspace = true
+
+[dependencies]
+tracing.workspace = true
diff --git a/src/pmxcfs-rs/pmxcfs-config/README.md b/src/pmxcfs-rs/pmxcfs-config/README.md
new file mode 100644
index 000000000..53aaf443a
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs-config/README.md
@@ -0,0 +1,15 @@
+# pmxcfs-config
+
+**Configuration Management** for pmxcfs.
+
+This crate provides configuration structures for the pmxcfs daemon.
+
+## Overview
+
+The `Config` struct holds daemon-wide configuration including:
+- Node hostname
+- IP address
+- www-data group ID
+- Debug flag
+- Local mode flag
+- Cluster name
diff --git a/src/pmxcfs-rs/pmxcfs-config/src/lib.rs b/src/pmxcfs-rs/pmxcfs-config/src/lib.rs
new file mode 100644
index 000000000..783ddb14d
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs-config/src/lib.rs
@@ -0,0 +1,365 @@
+use std::net::IpAddr;
+use std::sync::Arc;
+use std::sync::atomic::{AtomicU8, Ordering};
+
+/// Global configuration for pmxcfs
+pub struct Config {
+    /// Node name (hostname without domain)
+    nodename: String,
+
+    /// Node IP address
+    node_ip: IpAddr,
+
+    /// www-data group ID for file permissions
+    www_data_gid: u32,
+
+    /// Force local mode (no clustering)
+    local_mode: bool,
+
+    /// Cluster name (CPG group name)
+    cluster_name: String,
+
+    /// Debug level (0 = normal, 1+ = debug) - mutable at runtime
+    debug_level: AtomicU8,
+}
+
+impl std::fmt::Debug for Config {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("Config")
+            .field("nodename", &self.nodename)
+            .field("node_ip", &self.node_ip)
+            .field("www_data_gid", &self.www_data_gid)
+            .field("local_mode", &self.local_mode)
+            .field("cluster_name", &self.cluster_name)
+            .field("debug_level", &self.debug_level.load(Ordering::Relaxed))
+            .finish()
+    }
+}
+
+impl Config {
+    pub fn new(
+        nodename: impl Into<String>,
+        node_ip: IpAddr,
+        www_data_gid: u32,
+        debug_level: u8,
+        local_mode: bool,
+        cluster_name: impl Into<String>,
+    ) -> Self {
+        Self {
+            nodename: nodename.into(),
+            node_ip,
+            www_data_gid,
+            local_mode,
+            cluster_name: cluster_name.into(),
+            debug_level: AtomicU8::new(debug_level),
+        }
+    }
+
+    /// Wrap this config in an `Arc` for shared ownership.
+    pub fn into_shared(self) -> Arc<Self> {
+        Arc::new(self)
+    }
+
+    #[inline]
+    pub fn cluster_name(&self) -> &str {
+        &self.cluster_name
+    }
+
+    #[inline]
+    pub fn nodename(&self) -> &str {
+        &self.nodename
+    }
+
+    #[inline]
+    pub fn node_ip(&self) -> IpAddr {
+        self.node_ip
+    }
+
+    #[inline]
+    pub fn www_data_gid(&self) -> u32 {
+        self.www_data_gid
+    }
+
+    #[inline]
+    pub fn is_debug(&self) -> bool {
+        self.debug_level() > 0
+    }
+
+    #[inline]
+    pub fn is_local_mode(&self) -> bool {
+        self.local_mode
+    }
+
+    /// Get current debug level (0 = normal, 1+ = debug)
+    #[inline]
+    pub fn debug_level(&self) -> u8 {
+        self.debug_level.load(Ordering::Relaxed)
+    }
+
+    /// Set debug level (0 = normal, 1+ = debug)
+    #[inline]
+    pub fn set_debug_level(&self, level: u8) {
+        self.debug_level.store(level, Ordering::Relaxed);
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use std::thread;
+
+    #[test]
+    fn test_config_creation() {
+        let config = Config::new(
+            "node1",
+            "192.168.1.10".parse().unwrap(),
+            33,
+            0,
+            false,
+            "pmxcfs",
+        );
+
+        assert_eq!(config.nodename(), "node1");
+        assert_eq!(config.node_ip(), "192.168.1.10".parse::<IpAddr>().unwrap());
+        assert_eq!(config.www_data_gid(), 33);
+        assert!(!config.is_debug());
+        assert!(!config.is_local_mode());
+        assert_eq!(config.cluster_name(), "pmxcfs");
+        assert_eq!(config.debug_level(), 0);
+    }
+
+    #[test]
+    fn test_config_creation_with_debug() {
+        let config = Config::new(
+            "node2",
+            "10.0.0.5".parse().unwrap(),
+            1000,
+            1,
+            false,
+            "test-cluster",
+        );
+
+        assert!(config.is_debug());
+        assert_eq!(config.debug_level(), 1);
+    }
+
+    #[test]
+    fn test_config_creation_local_mode() {
+        let config = Config::new(
+            "localhost",
+            "127.0.0.1".parse().unwrap(),
+            33,
+            0,
+            true,
+            "local",
+        );
+
+        assert!(config.is_local_mode());
+        assert!(!config.is_debug());
+    }
+
+    #[test]
+    fn test_all_getters() {
+        let config = Config::new(
+            "testnode",
+            "172.16.0.1".parse().unwrap(),
+            999,
+            1,
+            true,
+            "my-cluster",
+        );
+
+        assert_eq!(config.nodename(), "testnode");
+        assert_eq!(config.node_ip(), "172.16.0.1".parse::<IpAddr>().unwrap());
+        assert_eq!(config.www_data_gid(), 999);
+        assert!(config.is_debug());
+        assert!(config.is_local_mode());
+        assert_eq!(config.cluster_name(), "my-cluster");
+        assert_eq!(config.debug_level(), 1);
+    }
+
+    #[test]
+    fn test_debug_level_mutation() {
+        let config = Config::new(
+            "node1",
+            "192.168.1.1".parse().unwrap(),
+            33,
+            0,
+            false,
+            "pmxcfs",
+        );
+
+        assert_eq!(config.debug_level(), 0);
+
+        config.set_debug_level(1);
+        assert_eq!(config.debug_level(), 1);
+
+        config.set_debug_level(5);
+        assert_eq!(config.debug_level(), 5);
+
+        config.set_debug_level(0);
+        assert_eq!(config.debug_level(), 0);
+    }
+
+    #[test]
+    fn test_debug_level_max_value() {
+        let config = Config::new(
+            "node1",
+            "192.168.1.1".parse().unwrap(),
+            33,
+            0,
+            false,
+            "pmxcfs",
+        );
+
+        config.set_debug_level(255);
+        assert_eq!(config.debug_level(), 255);
+
+        config.set_debug_level(0);
+        assert_eq!(config.debug_level(), 0);
+    }
+
+    #[test]
+    fn test_debug_level_thread_safety() {
+        let config = Config::new(
+            "node1",
+            "192.168.1.1".parse().unwrap(),
+            33,
+            0,
+            false,
+            "pmxcfs",
+        )
+        .into_shared();
+
+        let config_clone = Arc::clone(&config);
+
+        let handles: Vec<_> = (0..10)
+            .map(|i| {
+                let cfg = Arc::clone(&config);
+                thread::spawn(move || {
+                    for _ in 0..100 {
+                        cfg.set_debug_level(i);
+                        let _ = cfg.debug_level();
+                    }
+                })
+            })
+            .collect();
+
+        for handle in handles {
+            handle.join().unwrap();
+        }
+
+        let final_level = config_clone.debug_level();
+        assert!(
+            final_level < 10,
+            "Debug level should be < 10, got {final_level}"
+        );
+    }
+
+    #[test]
+    fn test_concurrent_reads() {
+        let config = Config::new(
+            "node1",
+            "192.168.1.1".parse().unwrap(),
+            33,
+            1,
+            false,
+            "pmxcfs",
+        )
+        .into_shared();
+
+        let handles: Vec<_> = (0..20)
+            .map(|_| {
+                let cfg = Arc::clone(&config);
+                thread::spawn(move || {
+                    for _ in 0..1000 {
+                        assert_eq!(cfg.nodename(), "node1");
+                        assert_eq!(cfg.node_ip(), "192.168.1.1".parse::<IpAddr>().unwrap());
+                        assert_eq!(cfg.www_data_gid(), 33);
+                        assert!(cfg.is_debug());
+                        assert!(!cfg.is_local_mode());
+                        assert_eq!(cfg.cluster_name(), "pmxcfs");
+                    }
+                })
+            })
+            .collect();
+
+        for handle in handles {
+            handle.join().unwrap();
+        }
+    }
+
+    #[test]
+    fn test_debug_format() {
+        let config = Config::new(
+            "node1",
+            "192.168.1.1".parse().unwrap(),
+            33,
+            1,
+            false,
+            "pmxcfs",
+        );
+
+        let debug_str = format!("{config:?}");
+
+        assert!(debug_str.contains("Config"));
+        assert!(debug_str.contains("nodename"));
+        assert!(debug_str.contains("node1"));
+        assert!(debug_str.contains("node_ip"));
+        assert!(debug_str.contains("192.168.1.1"));
+        assert!(debug_str.contains("www_data_gid"));
+        assert!(debug_str.contains("33"));
+        assert!(debug_str.contains("local_mode"));
+        assert!(debug_str.contains("false"));
+        assert!(debug_str.contains("cluster_name"));
+        assert!(debug_str.contains("pmxcfs"));
+        assert!(debug_str.contains("debug_level"));
+    }
+
+    #[test]
+    fn test_into_shared() {
+        let config = Config::new(
+            "node1",
+            "192.168.1.1".parse().unwrap(),
+            33,
+            0,
+            false,
+            "pmxcfs",
+        )
+        .into_shared();
+
+        // Arc::clone shares the same underlying config
+        let config2 = Arc::clone(&config);
+        config.set_debug_level(7);
+        assert_eq!(config2.debug_level(), 7);
+    }
+
+    #[test]
+    fn test_empty_strings() {
+        let config = Config::new("", "127.0.0.1".parse().unwrap(), 0, 0, false, "");
+
+        assert_eq!(config.nodename(), "");
+        assert_eq!(config.node_ip(), "127.0.0.1".parse::<IpAddr>().unwrap());
+        assert_eq!(config.cluster_name(), "");
+        assert_eq!(config.www_data_gid(), 0);
+    }
+
+    #[test]
+    fn test_long_strings() {
+        let long_name = "a".repeat(1000);
+        let long_cluster = "cluster-".to_string() + &"x".repeat(500);
+
+        let config = Config::new(
+            long_name.as_str(),
+            "192.168.1.1".parse().unwrap(),
+            u32::MAX,
+            1,
+            true,
+            long_cluster.as_str(),
+        );
+
+        assert_eq!(config.nodename(), long_name);
+        assert_eq!(config.cluster_name(), long_cluster);
+        assert_eq!(config.www_data_gid(), u32::MAX);
+    }
+}
-- 
2.47.3
