public inbox for pve-devel@lists.proxmox.com
From: Kefu Chai <k.chai@proxmox.com>
To: pve-devel@lists.proxmox.com
Cc: Kefu Chai <tchaikov@gmail.com>
Subject: [PATCH pve-cluster v3 11/13] pmxcfs-rs: add pmxcfs main daemon binary
Date: Mon, 23 Mar 2026 19:32:26 +0800	[thread overview]
Message-ID: <20260323113239.942866-12-k.chai@proxmox.com> (raw)
In-Reply-To: <20260323113239.942866-1-k.chai@proxmox.com>

Add the main pmxcfs daemon binary, integrating all crates with:

- FUSE filesystem with broadcast-first architecture
- 9 plugin files (clusterlog, debug, members, rrd, types,
  version, vmlist, registry, mod)
- Configuration initialization using Config::new(...).into_shared()
- Status initialization with status::init_with_config_and_rrd()
- Enhanced corosync config import logic
- 5 FUSE integration tests (basic, cluster, integration, locks,
  symlink)

Signed-off-by: Kefu Chai <k.chai@proxmox.com>
---
 src/pmxcfs-rs/Cargo.toml                      |   10 +-
 src/pmxcfs-rs/pmxcfs/Cargo.toml               |   82 +
 src/pmxcfs-rs/pmxcfs/README.md                |  174 ++
 .../pmxcfs/src/cluster_config_service.rs      |  376 ++++
 src/pmxcfs-rs/pmxcfs/src/daemon.rs            |  233 +++
 src/pmxcfs-rs/pmxcfs/src/file_lock.rs         |  105 ++
 src/pmxcfs-rs/pmxcfs/src/fuse/README.md       |  199 +++
 src/pmxcfs-rs/pmxcfs/src/fuse/filesystem.rs   | 1505 +++++++++++++++++
 src/pmxcfs-rs/pmxcfs/src/fuse/mod.rs          |    4 +
 src/pmxcfs-rs/pmxcfs/src/ipc/mod.rs           |   16 +
 src/pmxcfs-rs/pmxcfs/src/ipc/request.rs       |  281 +++
 src/pmxcfs-rs/pmxcfs/src/ipc/service.rs       |  920 ++++++++++
 src/pmxcfs-rs/pmxcfs/src/lib.rs               |   15 +
 src/pmxcfs-rs/pmxcfs/src/logging.rs           |   44 +
 src/pmxcfs-rs/pmxcfs/src/main.rs              |  761 +++++++++
 src/pmxcfs-rs/pmxcfs/src/memdb_callbacks.rs   | 1222 +++++++++++++
 src/pmxcfs-rs/pmxcfs/src/path.rs              |   45 +
 src/pmxcfs-rs/pmxcfs/src/plugins/README.md    |  203 +++
 .../pmxcfs/src/plugins/clusterlog.rs          |  275 +++
 src/pmxcfs-rs/pmxcfs/src/plugins/debug.rs     |  131 ++
 src/pmxcfs-rs/pmxcfs/src/plugins/members.rs   |  146 ++
 src/pmxcfs-rs/pmxcfs/src/plugins/mod.rs       |   30 +
 src/pmxcfs-rs/pmxcfs/src/plugins/registry.rs  |  328 ++++
 src/pmxcfs-rs/pmxcfs/src/plugins/rrd.rs       |   97 ++
 src/pmxcfs-rs/pmxcfs/src/plugins/types.rs     |  117 ++
 src/pmxcfs-rs/pmxcfs/src/plugins/version.rs   |  140 ++
 src/pmxcfs-rs/pmxcfs/src/plugins/vmlist.rs    |  120 ++
 src/pmxcfs-rs/pmxcfs/src/quorum_service.rs    |  203 +++
 src/pmxcfs-rs/pmxcfs/src/restart_flag.rs      |   60 +
 src/pmxcfs-rs/pmxcfs/src/status_callbacks.rs  |  420 +++++
 src/pmxcfs-rs/pmxcfs/src/status_messages.rs   |  211 +++
 src/pmxcfs-rs/pmxcfs/tests/fuse_basic_test.rs |  216 +++
 .../pmxcfs/tests/fuse_cluster_test.rs         |  217 +++
 .../pmxcfs/tests/fuse_integration_test.rs     |  405 +++++
 src/pmxcfs-rs/pmxcfs/tests/fuse_locks_test.rs |  375 ++++
 .../pmxcfs/tests/local_integration.rs         |  438 +++++
 src/pmxcfs-rs/pmxcfs/tests/quorum_behavior.rs |  302 ++++
 .../pmxcfs/tests/single_node_functional.rs    |  357 ++++
 .../pmxcfs/tests/symlink_quorum_test.rs       |  145 ++
 39 files changed, 10926 insertions(+), 2 deletions(-)
 create mode 100644 src/pmxcfs-rs/pmxcfs/Cargo.toml
 create mode 100644 src/pmxcfs-rs/pmxcfs/README.md
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/cluster_config_service.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/daemon.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/file_lock.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/fuse/README.md
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/fuse/filesystem.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/fuse/mod.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/ipc/mod.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/ipc/request.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/ipc/service.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/lib.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/logging.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/main.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/memdb_callbacks.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/path.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/README.md
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/clusterlog.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/debug.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/members.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/mod.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/registry.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/rrd.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/types.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/version.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/plugins/vmlist.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/quorum_service.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/restart_flag.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/status_callbacks.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/src/status_messages.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/tests/fuse_basic_test.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/tests/fuse_cluster_test.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/tests/fuse_integration_test.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/tests/fuse_locks_test.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/tests/local_integration.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/tests/quorum_behavior.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/tests/single_node_functional.rs
 create mode 100644 src/pmxcfs-rs/pmxcfs/tests/symlink_quorum_test.rs

diff --git a/src/pmxcfs-rs/Cargo.toml b/src/pmxcfs-rs/Cargo.toml
index 198007ae5..8b70fac8c 100644
--- a/src/pmxcfs-rs/Cargo.toml
+++ b/src/pmxcfs-rs/Cargo.toml
@@ -11,6 +11,7 @@ members = [
     "pmxcfs-services",   # Service framework for automatic retry and lifecycle management
     "pmxcfs-ipc",        # libqb-compatible IPC server
     "pmxcfs-dfsm",       # Distributed Finite State Machine
+    "pmxcfs",            # Main daemon binary
 ]
 resolver = "2"
 
@@ -41,6 +42,7 @@ rust-corosync = "0.1"
 # Core async runtime
 tokio = { version = "1.35", features = ["full"] }
 tokio-util = "0.7"
+futures = "0.3"
 
 # Error handling
 anyhow = "1.0"
@@ -48,13 +50,15 @@ thiserror = "2.0"
 
 # Logging and tracing
 tracing = "0.1"
-tracing-subscriber = "0.3"
+tracing-subscriber = { version = "0.3", features = ["env-filter"] }
+tracing-journald = "0.3"
 
 # Async trait support
 async-trait = "0.1"
 
 # Serialization
 serde = { version = "1.0", features = ["derive"] }
+serde_json = "1.0"
 bincode = "1.3"
 bytemuck = { version = "1.14", features = ["derive"] }
 
@@ -67,10 +71,11 @@ parking_lot = "0.12"
 
 # System integration
 libc = "0.2"
-nix = { version = "0.29", features = ["socket", "poll"] }
+nix = { version = "0.29", features = ["fs", "process", "signal", "user", "socket"] }
 
 # Utilities
 num_enum = "0.5"
+chrono = "0.4"
 
 # Development dependencies
 tempfile = "3.8"
@@ -93,3 +98,4 @@ debug = true
 # Fixed in corosync upstream (commit 71d6d93c) but not yet released
 # Remove this patch when rust-corosync > 0.1.0 is published
 rust-corosync = { path = "vendor/rust-corosync" }
+# proxmox-fuse = { path = "../proxmox-fuse" }
diff --git a/src/pmxcfs-rs/pmxcfs/Cargo.toml b/src/pmxcfs-rs/pmxcfs/Cargo.toml
new file mode 100644
index 000000000..6d37b4132
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/Cargo.toml
@@ -0,0 +1,82 @@
+[package]
+name = "pmxcfs"
+description = "Proxmox Cluster File System - Rust implementation"
+homepage = "https://www.proxmox.com"
+
+version.workspace = true
+edition.workspace = true
+authors.workspace = true
+license.workspace = true
+repository.workspace = true
+
+[lints]
+workspace = true
+
+[lib]
+name = "pmxcfs_rs"
+path = "src/lib.rs"
+
+[[bin]]
+name = "pmxcfs"
+path = "src/main.rs"
+
+[dependencies]
+# Workspace members
+pmxcfs-config.workspace = true
+pmxcfs-api-types.workspace = true
+pmxcfs-memdb.workspace = true
+pmxcfs-dfsm.workspace = true
+pmxcfs-rrd.workspace = true
+pmxcfs-status.workspace = true
+pmxcfs-ipc.workspace = true
+pmxcfs-services.workspace = true
+
+# Core async runtime
+tokio.workspace = true
+tokio-util.workspace = true
+async-trait.workspace = true
+
+# Error handling
+anyhow.workspace = true
+thiserror.workspace = true
+
+# Logging and tracing
+tracing.workspace = true
+tracing-subscriber.workspace = true
+tracing-journald.workspace = true
+
+# Serialization
+serde.workspace = true
+serde_json.workspace = true
+bincode.workspace = true
+
+# Command-line parsing
+clap = { version = "4.4", features = ["derive"] }
+
+# FUSE filesystem
+proxmox-fuse = "3.0.0"
+
+# Network and cluster
+bytes.workspace = true
+sha2.workspace = true
+bytemuck.workspace = true
+
+# System integration
+libc.workspace = true
+nix.workspace = true
+
+# Corosync/CPG bindings
+rust-corosync.workspace = true
+
+# Concurrency primitives
+parking_lot.workspace = true
+
+# Utilities
+chrono.workspace = true
+futures.workspace = true
+num_enum.workspace = true
+
+[dev-dependencies]
+tempfile.workspace = true
+pmxcfs-test-utils.workspace = true
+filetime = "0.2"
diff --git a/src/pmxcfs-rs/pmxcfs/README.md b/src/pmxcfs-rs/pmxcfs/README.md
new file mode 100644
index 000000000..eb457d3ba
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/README.md
@@ -0,0 +1,174 @@
+# pmxcfs Rust Implementation
+
+This directory contains the Rust reimplementation of pmxcfs (Proxmox Cluster File System).
+
+## Architecture Overview
+
+pmxcfs is a FUSE-based cluster filesystem that provides:
+- **Cluster-wide configuration storage** via replicated database (pmxcfs-memdb)
+- **State synchronization** across nodes via Corosync CPG (pmxcfs-dfsm)
+- **Virtual files** for runtime status (plugins: .version, .members, .vmlist, .rrd)
+- **Quorum enforcement** for write protection
+- **IPC server** for management tools (pvecm, pvenode)
+
+### Component Architecture
+
+### FUSE Plugin System
+
+Virtual files that appear in `/etc/pve` but don't exist in the database:
+
+| Plugin | File | Purpose | C Equivalent |
+|--------|------|---------|--------------|
+| `version.rs` | `.version` | Cluster version info | `cfs-plug-func.c` (cfs_plug_version_read) |
+| `members.rs` | `.members` | Cluster member list | `cfs-plug-func.c` (cfs_plug_members_read) |
+| `vmlist.rs` | `.vmlist` | VM/CT registry | `cfs-plug-func.c` (cfs_plug_vmlist_read) |
+| `rrd.rs` | `.rrd` | RRD dump (all metrics) | `cfs-plug-func.c` (cfs_plug_rrd_read) |
+| `clusterlog.rs` | `.clusterlog` | Cluster log viewer | `cfs-plug-func.c` (cfs_plug_clusterlog_read) |
+| `debug.rs` | `.debug` | Runtime debug control | `cfs-plug-func.c` (cfs_plug_debug) |
+
+#### Plugin Trait
+
+Plugins are registered in `plugins/registry.rs` and integrated into the FUSE filesystem.
+
+### C File Mapping
+
+| C Source | Rust Equivalent | Description |
+|----------|-----------------|-------------|
+| `pmxcfs.c` | `main.rs`, `daemon.rs` | Main entry point, daemon lifecycle |
+| `cfs-plug.c` | `fuse/filesystem.rs` | FUSE operations dispatcher |
+| `cfs-plug-memdb.c` | `fuse/filesystem.rs` | MemDb integration |
+| `cfs-plug-func.c` | `plugins/*.rs` | Virtual file plugins |
+| `server.c` | `ipc/service.rs` + pmxcfs-ipc | IPC server |
+| `loop.c` | pmxcfs-services | Service management |
+
+## Key Differences from C Implementation
+
+### Command-line Options
+
+Both implementations support the core options with identical behavior:
+- `-d` / `--debug` - Turn on debug messages
+- `-f` / `--foreground` - Do not daemonize server
+- `-l` / `--local` - Force local mode (ignore corosync.conf, force quorum)
+
+The Rust implementation adds the following options for flexibility and testing:
+- `--test-dir <PATH>` - Test directory (sets all paths to subdirectories for isolated testing)
+- `--mount <PATH>` - Custom mount point (default: /etc/pve)
+- `--db <PATH>` - Custom database path (default: /var/lib/pve-cluster/config.db)
+- `--rundir <PATH>` - Custom runtime directory (default: /run/pmxcfs)
+- `--cluster-name <NAME>` - Cluster name / CPG group name for Corosync isolation (default: "pmxcfs")
+
+The Rust version is fully backward-compatible with the C version's command-line usage. The additional options are for advanced use cases (testing, multi-instance deployments) and don't affect standard deployment scenarios.
+
+### Logging
+
+**C Implementation**: Uses libqb's qb_log with traditional syslog format
+
+**Rust Implementation**: Uses tracing + tracing-subscriber with structured output integrated with systemd journald
+
+Log messages may appear in a different format, but journald integration provides the same searchability as syslog. Log levels work equivalently (debug, info, warn, error).
+
+## Plugin System Details
+
+### Virtual File Plugins
+
+Each plugin provides a read-only (or read-write) virtual file accessible through the FUSE mount:
+
+#### `.version` - Version Information
+
+**Path:** `/etc/pve/.version`
+**Format:** `{start_time}:{vmlist_version}:{path_versions...}`
+**Purpose:** Allows tools to detect configuration changes
+**Implementation:** `plugins/version.rs`
+
+Example output:
+Each number is a version counter that increments on changes.
+
+#### `.members` - Cluster Members
+
+**Path:** `/etc/pve/.members`
+**Format:** INI-style with member info
+**Purpose:** Lists active cluster nodes
+**Implementation:** `plugins/members.rs`
+
+Example output:
+Format: `{nodeid}\t{name}\t{online}\t{ip}`
+
+#### `.vmlist` - VM/CT Registry
+
+**Path:** `/etc/pve/.vmlist`
+**Format:** INI-style with VM info
+**Purpose:** Cluster-wide VM/CT registry
+**Implementation:** `plugins/vmlist.rs`
+
+Example output:
+Format: `{vmid}\t{node}\t{version}`
+
+#### `.rrd` - RRD Metrics Dump
+
+**Path:** `/etc/pve/.rrd`
+**Format:** Custom RRD dump format
+**Purpose:** Exports all RRD metrics for graph generation
+**Implementation:** `plugins/rrd.rs`
+
+Example output:
+
+#### `.clusterlog` - Cluster Log
+
+**Path:** `/etc/pve/.clusterlog`
+**Format:** Plain text log entries
+**Purpose:** Aggregated cluster-wide log
+**Implementation:** `plugins/clusterlog.rs`
+
+Example output:
+
+#### `.debug` - Debug Control
+
+**Path:** `/etc/pve/.debug`
+**Format:** Text commands
+**Purpose:** Runtime debug level control
+**Implementation:** `plugins/debug.rs`
+
+Write "1" to enable debug logging, "0" to disable.
+
+### Plugin Registration
+
+Plugins are registered in `plugins/registry.rs`:
+
+### FUSE Integration
+
+The FUSE filesystem checks plugins before MemDb:
+
+## Crate Structure
+
+The Rust implementation is organized as a workspace with 9 crates:
+
+| Crate | Purpose | Lines | C Equivalent |
+|-------|---------|-------|--------------|
+| **pmxcfs** | Main daemon binary | ~3500 | pmxcfs.c + plugins |
+| **pmxcfs-api-types** | Shared types | ~400 | cfs-utils.h |
+| **pmxcfs-config** | Configuration | ~75 | (inline in C) |
+| **pmxcfs-memdb** | In-memory database | ~2500 | memdb.c + database.c |
+| **pmxcfs-dfsm** | State machine | ~3000 | dfsm.c + dcdb.c |
+| **pmxcfs-rrd** | RRD persistence | ~800 | status.c (embedded) |
+| **pmxcfs-status** | Status tracking | ~900 | status.c |
+| **pmxcfs-ipc** | IPC server | ~2000 | server.c |
+| **pmxcfs-services** | Service framework | ~500 | loop.c |
+
+Total: **~14,000 lines** vs **~15,000 lines** for the C implementation
+
+## Migration Notes
+
+The Rust implementation can coexist with C nodes in the same cluster:
+- **Wire protocol**: 100% compatible (DFSM, IPC, RRD)
+- **Database format**: SQLite schema identical
+- **Corosync integration**: Uses same CPG groups
+- **File format**: All config files compatible
+
+## References
+
+### Documentation
+- [Implementation Plan](../../pmxcfs-rust-rewrite-plan.rst)
+- Individual crate README.md files for detailed docs
+
+### C Implementation
+- `src/pmxcfs/` - Original C implementation
diff --git a/src/pmxcfs-rs/pmxcfs/src/cluster_config_service.rs b/src/pmxcfs-rs/pmxcfs/src/cluster_config_service.rs
new file mode 100644
index 000000000..f4366b659
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/cluster_config_service.rs
@@ -0,0 +1,376 @@
+//! Cluster Configuration Service
+//!
+//! This service monitors Corosync cluster configuration changes via the CMAP API.
+//! It tracks nodelist changes and configuration version updates, matching the C
+//! implementation's service_confdb functionality.
+
+use async_trait::async_trait;
+use pmxcfs_services::{Service, ServiceError};
+use rust_corosync::{self as corosync, CsError, cmap};
+use std::path::PathBuf;
+use std::sync::Arc;
+use tracing::{debug, error, info, warn};
+
+use pmxcfs_status::Status;
+
+/// Parse `cluster_name: <name>` from a corosync.conf-style file.
+///
+/// Used as a fallback when `totem.cluster_name` is not yet available in CMAP
+/// at startup (race between corosync initialization and the CMAP handle being ready).
+fn parse_cluster_name_from_conf(path: &std::path::Path) -> Option<String> {
+    let content = std::fs::read_to_string(path).ok()?;
+    for line in content.lines() {
+        let line = line.trim();
+        if let Some(rest) = line.strip_prefix("cluster_name:") {
+            let name = rest.trim().to_string();
+            if !name.is_empty() {
+                return Some(name);
+            }
+        }
+    }
+    None
+}
+
+/// Cluster configuration service (matching C's service_confdb)
+///
+/// Monitors Corosync CMAP for:
+/// - Nodelist changes (`nodelist.node.*`)
+/// - Configuration version changes (`totem.config_version`)
+///
+/// Updates cluster info when configuration changes are detected.
+pub struct ClusterConfigService {
+    /// CMAP handle (None when not initialized)
+    cmap_handle: parking_lot::RwLock<Option<cmap::Handle>>,
+    /// Nodelist track handle
+    nodelist_track_handle: parking_lot::RwLock<Option<cmap::TrackHandle>>,
+    /// Config version track handle
+    version_track_handle: parking_lot::RwLock<Option<cmap::TrackHandle>>,
+    /// Cluster name track handle (totem.cluster_name — may appear after initial read)
+    cluster_name_track_handle: parking_lot::RwLock<Option<cmap::TrackHandle>>,
+    /// Status instance for cluster info updates
+    status: Arc<Status>,
+    /// Corosync.conf path — parsed as fallback when CMAP totem.cluster_name is not yet available
+    corosync_conf_path: PathBuf,
+    /// Flag indicating configuration changes detected
+    changes_detected: parking_lot::RwLock<bool>,
+}
+
+impl ClusterConfigService {
+    /// Create a new cluster configuration service
+    pub fn new(status: Arc<Status>, corosync_conf_path: PathBuf) -> Self {
+        Self {
+            cmap_handle: parking_lot::RwLock::new(None),
+            nodelist_track_handle: parking_lot::RwLock::new(None),
+            version_track_handle: parking_lot::RwLock::new(None),
+            cluster_name_track_handle: parking_lot::RwLock::new(None),
+            status,
+            corosync_conf_path,
+            changes_detected: parking_lot::RwLock::new(false),
+        }
+    }
+
+    /// Read cluster configuration from CMAP
+    fn read_cluster_config(&self, handle: &cmap::Handle) -> Result<(), anyhow::Error> {
+        // Read config version
+        let config_version = match cmap::get(*handle, &"totem.config_version".to_string()) {
+            Ok(cmap::Data::UInt64(v)) => v,
+            Ok(cmap::Data::UInt32(v)) => v as u64,
+            Ok(cmap::Data::UInt16(v)) => v as u64,
+            Ok(cmap::Data::UInt8(v)) => v as u64,
+            Ok(_) => {
+                warn!("Unexpected data type for totem.config_version");
+                0
+            }
+            Err(e) => {
+                warn!("Failed to read totem.config_version: {:?}", e);
+                0
+            }
+        };
+
+        // Read cluster name — fall back to parsing corosync.conf if CMAP is not yet populated.
+        // At startup there is a brief race where corosync hasn't written totem.cluster_name to
+        // ICMAP yet (cmap_get returns CS_ERR_INVALID_PARAM).  We parse the conf file as a fallback
+        // so the initial read still succeeds with the correct name.
+        let cluster_name = match cmap::get(*handle, &"totem.cluster_name".to_string()) {
+            Ok(cmap::Data::String(s)) => s,
+            Ok(_) => {
+                warn!("totem.cluster_name has unexpected type, falling back to corosync.conf");
+                parse_cluster_name_from_conf(&self.corosync_conf_path).unwrap_or_default()
+            }
+            Err(e) => {
+                if let Some(fallback) = parse_cluster_name_from_conf(&self.corosync_conf_path) {
+                    debug!(
+                        "totem.cluster_name not in CMAP ({:?}), parsed from conf: '{}'",
+                        e, fallback
+                    );
+                    fallback
+                } else {
+                    error!("Failed to read totem.cluster_name: {:?}", e);
+                    return Err(anyhow::anyhow!("Failed to read cluster_name"));
+                }
+            }
+        };
+
+        info!(
+            "Cluster configuration: name='{}', version={}",
+            cluster_name, config_version
+        );
+
+        // Read cluster nodes
+        self.read_cluster_nodes(handle, &cluster_name, config_version)?;
+
+        Ok(())
+    }
+
+    /// Read cluster nodes from CMAP nodelist
+    fn read_cluster_nodes(
+        &self,
+        handle: &cmap::Handle,
+        cluster_name: &str,
+        config_version: u64,
+    ) -> Result<(), anyhow::Error> {
+        let mut nodes = Vec::new();
+
+        // Iterate through nodelist (nodelist.node.0, nodelist.node.1, etc.)
+        for node_idx in 0..256 {
+            let nodeid_key = format!("nodelist.node.{node_idx}.nodeid");
+            let name_key = format!("nodelist.node.{node_idx}.name");
+            let ring0_key = format!("nodelist.node.{node_idx}.ring0_addr");
+
+            // Try to read node ID - if it doesn't exist, we've reached the end
+            let nodeid = match cmap::get(*handle, &nodeid_key) {
+                Ok(cmap::Data::UInt32(id)) => id,
+                Ok(cmap::Data::UInt8(id)) => id as u32,
+                Ok(cmap::Data::UInt16(id)) => id as u32,
+                Err(CsError::CsErrNotExist) => break, // No more nodes
+                Err(e) => {
+                    debug!("Error reading {}: {:?}", nodeid_key, e);
+                    continue;
+                }
+                Ok(_) => {
+                    warn!("Unexpected type for {}", nodeid_key);
+                    continue;
+                }
+            };
+
+            let name = match cmap::get(*handle, &name_key) {
+                Ok(cmap::Data::String(s)) => s,
+                _ => {
+                    debug!("No name for node {}", nodeid);
+                    format!("node{nodeid}")
+                }
+            };
+
+            let ip = match cmap::get(*handle, &ring0_key) {
+                Ok(cmap::Data::String(s)) => s,
+                _ => String::new(),
+            };
+
+            debug!(
+                "Found cluster node: id={}, name={}, ip={}",
+                nodeid, name, ip
+            );
+            nodes.push((nodeid, name, ip));
+        }
+
+        info!("Found {} cluster nodes", nodes.len());
+
+        // Update cluster info in Status
+        self.status
+            .update_cluster_info(cluster_name.to_string(), config_version, nodes)?;
+
+        Ok(())
+    }
+}
+
+/// CMAP track callback (matches C's track_callback)
+///
+/// This function is called by Corosync whenever a tracked CMAP key changes.
+/// We use user_data to pass a pointer to the ClusterConfigService.
+fn track_callback(
+    _handle: &cmap::Handle,
+    _track_handle: &cmap::TrackHandle,
+    _event: cmap::TrackType,
+    key_name: &String, // Note: rust-corosync API uses &String not &str
+    _new_value: &cmap::Data,
+    _old_value: &cmap::Data,
+    user_data: u64,
+) {
+    debug!("CMAP track callback: key_name={}", key_name);
+
+    if user_data == 0 {
+        error!("BUG: CMAP track callback called with null user_data");
+        return;
+    }
+
+    // Safety: user_data contains a valid pointer to ClusterConfigService
+    // The pointer remains valid because ServiceManager holds the service
+    unsafe {
+        let service_ptr = user_data as *const ClusterConfigService;
+        let service = &*service_ptr;
+        *service.changes_detected.write() = true;
+    }
+}
+
+#[async_trait]
+impl Service for ClusterConfigService {
+    fn name(&self) -> &str {
+        "cluster-config"
+    }
+
+    async fn initialize(&mut self) -> pmxcfs_services::Result<std::os::unix::io::RawFd> {
+        info!("Initializing cluster configuration service");
+
+        // Initialize CMAP connection
+        let handle = cmap::initialize(cmap::Map::Icmap).map_err(|e| {
+            ServiceError::InitializationFailed(format!("cmap_initialize failed: {e:?}"))
+        })?;
+
+        // Store self pointer as user_data for callbacks
+        let self_ptr = self as *const Self as u64;
+
+        // Create callback struct
+        let callback = cmap::NotifyCallback {
+            notify_fn: Some(track_callback),
+        };
+
+        // Set up nodelist tracking (matches C's CMAP_TRACK_PREFIX | CMAP_TRACK_ADD | ...)
+        let nodelist_track = cmap::track_add(
+            handle,
+            &"nodelist.node.".to_string(),
+            cmap::TrackType::PREFIX
+                | cmap::TrackType::ADD
+                | cmap::TrackType::DELETE
+                | cmap::TrackType::MODIFY,
+            &callback,
+            self_ptr,
+        )
+        .map_err(|e| {
+            cmap::finalize(handle).ok();
+            ServiceError::InitializationFailed(format!("cmap_track_add (nodelist) failed: {e:?}"))
+        })?;
+
+        // Set up config version tracking
+        let version_track = cmap::track_add(
+            handle,
+            &"totem.config_version".to_string(),
+            cmap::TrackType::ADD | cmap::TrackType::DELETE | cmap::TrackType::MODIFY,
+            &callback,
+            self_ptr,
+        )
+        .map_err(|e| {
+            cmap::track_delete(handle, nodelist_track).ok();
+            cmap::finalize(handle).ok();
+            ServiceError::InitializationFailed(format!(
+                "cmap_track_add (config_version) failed: {e:?}"
+            ))
+        })?;
+
+        // Track totem.cluster_name — it may not be in CMAP at startup (race with corosync init).
+        // When corosync writes this key, the ADD event triggers a re-read with the correct name.
+        let cluster_name_track = cmap::track_add(
+            handle,
+            &"totem.cluster_name".to_string(),
+            cmap::TrackType::ADD | cmap::TrackType::MODIFY,
+            &callback,
+            self_ptr,
+        )
+        .map_err(|e| {
+            cmap::track_delete(handle, version_track).ok();
+            cmap::track_delete(handle, nodelist_track).ok();
+            cmap::finalize(handle).ok();
+            ServiceError::InitializationFailed(format!(
+                "cmap_track_add (cluster_name) failed: {e:?}"
+            ))
+        })?;
+
+        // Get file descriptor for event monitoring
+        let fd = cmap::fd_get(handle).map_err(|e| {
+            cmap::track_delete(handle, cluster_name_track).ok();
+            cmap::track_delete(handle, version_track).ok();
+            cmap::track_delete(handle, nodelist_track).ok();
+            cmap::finalize(handle).ok();
+            ServiceError::InitializationFailed(format!("cmap_fd_get failed: {e:?}"))
+        })?;
+
+        // Read initial configuration
+        if let Err(e) = self.read_cluster_config(&handle) {
+            warn!("Failed to read initial cluster configuration: {}", e);
+            // Don't fail initialization - we'll try again on next change
+        }
+
+        // Store handles
+        *self.cmap_handle.write() = Some(handle);
+        *self.nodelist_track_handle.write() = Some(nodelist_track);
+        *self.version_track_handle.write() = Some(version_track);
+        *self.cluster_name_track_handle.write() = Some(cluster_name_track);
+
+        info!(
+            "Cluster configuration service initialized successfully with fd {}",
+            fd
+        );
+        Ok(fd)
+    }
+
+    async fn dispatch(&mut self) -> pmxcfs_services::Result<bool> {
+        let handle = *self.cmap_handle.read().as_ref().ok_or_else(|| {
+            ServiceError::DispatchFailed("CMAP handle not initialized".to_string())
+        })?;
+
+        // Dispatch CMAP events (matches C's cmap_dispatch with CS_DISPATCH_ALL)
+        match cmap::dispatch(handle, corosync::DispatchFlags::All) {
+            Ok(_) => {
+                // Check if changes were detected (matches C implementation)
+                if *self.changes_detected.read() {
+                    *self.changes_detected.write() = false;
+
+                    // Re-read cluster configuration
+                    if let Err(e) = self.read_cluster_config(&handle) {
+                        warn!("Failed to update cluster configuration: {}", e);
+                    }
+                }
+                Ok(true)
+            }
+            Err(CsError::CsErrTryAgain) => {
+                // TRY_AGAIN is expected, continue normally
+                Ok(true)
+            }
+            Err(CsError::CsErrLibrary) | Err(CsError::CsErrBadHandle) => {
+                // Connection lost, need to reinitialize
+                warn!("CMAP connection lost, requesting reinitialization");
+                Ok(false)
+            }
+            Err(e) => {
+                error!("CMAP dispatch failed: {:?}", e);
+                Err(ServiceError::DispatchFailed(format!(
+                    "cmap_dispatch failed: {e:?}"
+                )))
+            }
+        }
+    }
+
+    async fn finalize(&mut self) -> pmxcfs_services::Result<()> {
+        info!("Finalizing cluster configuration service");
+
+        if let Some(handle) = self.cmap_handle.write().take() {
+            // Remove track handles
+            if let Some(cluster_name_track) = self.cluster_name_track_handle.write().take() {
+                cmap::track_delete(handle, cluster_name_track).ok();
+            }
+            if let Some(version_track) = self.version_track_handle.write().take() {
+                cmap::track_delete(handle, version_track).ok();
+            }
+            if let Some(nodelist_track) = self.nodelist_track_handle.write().take() {
+                cmap::track_delete(handle, nodelist_track).ok();
+            }
+
+            // Finalize CMAP connection
+            if let Err(e) = cmap::finalize(handle) {
+                warn!("Error finalizing CMAP: {:?}", e);
+            }
+        }
+
+        info!("Cluster configuration service finalized");
+        Ok(())
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/daemon.rs b/src/pmxcfs-rs/pmxcfs/src/daemon.rs
new file mode 100644
index 000000000..812229358
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/daemon.rs
@@ -0,0 +1,233 @@
+//! Daemon builder with integrated PID file management
+//!
+//! This module provides a builder-based API for daemonization that combines
+//! process forking, parent-child signaling, and PID file management into a
+//! cohesive, easy-to-use abstraction.
+//!
+//! Inspired by the daemonize crate but tailored for pmxcfs needs with async support.
+
+use anyhow::{Context, Result};
+use nix::unistd::{ForkResult, fork, pipe};
+use pmxcfs_api_types::PmxcfsError;
+use std::fs::{self, File};
+use std::os::unix::fs::PermissionsExt;
+use std::os::unix::io::{AsRawFd, IntoRawFd, RawFd};
+use std::path::{Path, PathBuf};
+
+/// RAII guard for PID file - automatically removes file on drop
+pub struct PidFileGuard {
+    path: PathBuf,
+}
+
+impl Drop for PidFileGuard {
+    fn drop(&mut self) {
+        if let Err(e) = fs::remove_file(&self.path) {
+            tracing::warn!(
+                "Failed to remove PID file at {}: {}",
+                self.path.display(),
+                e
+            );
+        } else {
+            tracing::debug!("Removed PID file at {}", self.path.display());
+        }
+    }
+}
+
+/// Represents the daemon process after daemonization
+pub enum DaemonProcess {
+    /// Parent process - should exit after receiving this
+    Parent,
+    /// Child process - contains RAII guard for PID file cleanup
+    Child(PidFileGuard),
+}
+
+/// Builder for daemon configuration with integrated PID file management
+///
+/// Provides a fluent API for configuring daemonization behavior including
+/// PID file location, group ownership, and parent-child signaling.
+pub struct Daemon {
+    pid_file: Option<PathBuf>,
+    group: Option<u32>,
+}
+
+impl Daemon {
+    /// Create a new daemon builder with default settings
+    pub fn new() -> Self {
+        Self {
+            pid_file: None,
+            group: None,
+        }
+    }
+
+    /// Set the PID file path
+    ///
+    /// The PID file will be created with 0o644 permissions and owned by root:group.
+    pub fn pid_file<P: Into<PathBuf>>(mut self, path: P) -> Self {
+        self.pid_file = Some(path.into());
+        self
+    }
+
+    /// Set the group ID for PID file ownership
+    pub fn group(mut self, gid: u32) -> Self {
+        self.group = Some(gid);
+        self
+    }
+
+    /// Start the daemonization process (foreground mode)
+    ///
+    /// Returns a guard that manages PID file lifecycle.
+    /// The PID file is written immediately and cleaned up when the guard is dropped.
+    pub fn start_foreground(self) -> Result<PidFileGuard> {
+        let pid_file_path = self
+            .pid_file
+            .ok_or_else(|| PmxcfsError::System("PID file path must be specified".into()))?;
+
+        let gid = self.group.unwrap_or(0);
+
+        // Write PID file with current process ID
+        write_pid_file(&pid_file_path, std::process::id(), gid)?;
+
+        tracing::info!("Running in foreground mode with PID {}", std::process::id());
+
+        Ok(PidFileGuard {
+            path: pid_file_path,
+        })
+    }
+
+    /// Start daemonization with deferred signaling
+    ///
+    /// Returns `(DaemonProcess, Option<SignalHandle>)`. In the child, the
+    /// `SignalHandle` must be used to signal the parent once initialization
+    /// is complete; the parent receives `None`.
+    pub fn start_daemon_with_signal(self) -> Result<(DaemonProcess, Option<SignalHandle>)> {
+        let pid_file_path = self
+            .pid_file
+            .clone()
+            .ok_or_else(|| PmxcfsError::System("PID file path must be specified".into()))?;
+
+        let gid = self.group.unwrap_or(0);
+
+        // Create pipe for parent-child signaling
+        let (read_fd, write_fd) = pipe().context("Failed to create pipe for daemonization")?;
+        let (read_fd, write_fd) = (read_fd.into_raw_fd(), write_fd.into_raw_fd());
+
+        match unsafe { fork() } {
+            Ok(ForkResult::Parent { child }) => {
+                // Parent: wait for child to signal readiness
+                unsafe { libc::close(write_fd) };
+
+                let mut buffer = [0u8; 1];
+                let bytes_read =
+                    unsafe { libc::read(read_fd, buffer.as_mut_ptr() as *mut libc::c_void, 1) };
+                let errno = std::io::Error::last_os_error();
+                unsafe { libc::close(read_fd) };
+
+                if bytes_read == -1 {
+                    return Err(
+                        PmxcfsError::System(format!("Failed to read from child: {errno}")).into(),
+                    );
+                } else if bytes_read != 1 || buffer[0] != b'1' {
+                    return Err(
+                        PmxcfsError::System("Child failed to send ready signal".into()).into(),
+                    );
+                }
+
+                // Child is ready - write PID file with child's PID
+                let child_pid = child.as_raw() as u32;
+                write_pid_file(&pid_file_path, child_pid, gid)?;
+
+                tracing::info!("Child process {} signaled ready, parent exiting", child_pid);
+
+                Ok((DaemonProcess::Parent, None))
+            }
+            Ok(ForkResult::Child) => {
+                // Child: become daemon and return signal handle
+                unsafe { libc::close(read_fd) };
+
+                // Create new session
+                unsafe {
+                    if libc::setsid() == -1 {
+                        return Err(
+                            PmxcfsError::System("Failed to create new session".into()).into()
+                        );
+                    }
+                }
+
+                // Change to root directory
+                std::env::set_current_dir("/")?;
+
+                // Redirect standard streams to /dev/null (opened read-write,
+                // so later writes to stdout/stderr don't fail with EBADF)
+                let devnull = File::options().read(true).write(true).open("/dev/null")?;
+                unsafe {
+                    libc::dup2(devnull.as_raw_fd(), 0);
+                    libc::dup2(devnull.as_raw_fd(), 1);
+                    libc::dup2(devnull.as_raw_fd(), 2);
+                }
+
+                let signal_handle = SignalHandle { write_fd };
+                let guard = PidFileGuard {
+                    path: pid_file_path,
+                };
+
+                Ok((DaemonProcess::Child(guard), Some(signal_handle)))
+            }
+            Err(e) => Err(PmxcfsError::System(format!("Failed to fork: {e}")).into()),
+        }
+    }
+}
+
+impl Default for Daemon {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+/// Handle for signaling parent process readiness
+///
+/// The child process must call `signal_ready()` to inform the parent that
+/// initialization is complete; the parent then writes the PID file and exits.
+pub struct SignalHandle {
+    write_fd: RawFd,
+}
+
+impl SignalHandle {
+    /// Signal parent that child is ready
+    ///
+    /// This must be called after all initialization is complete.
+    /// The parent will write the PID file and exit after receiving this signal.
+    pub fn signal_ready(self) -> Result<()> {
+        unsafe {
+            let result = libc::write(self.write_fd, b"1".as_ptr() as *const libc::c_void, 1);
+            libc::close(self.write_fd);
+
+            if result != 1 {
+                return Err(PmxcfsError::System("Failed to signal parent process".into()).into());
+            }
+        }
+        tracing::debug!("Signaled parent process - child ready");
+        Ok(())
+    }
+}
+
+/// Write PID file with specified process ID
+fn write_pid_file(path: &Path, pid: u32, gid: u32) -> Result<()> {
+    let content = format!("{pid}\n");
+
+    fs::write(path, content)
+        .with_context(|| format!("Failed to write PID file to {}", path.display()))?;
+
+    // Set permissions (0o644 = rw-r--r--)
+    let metadata = fs::metadata(path)?;
+    let mut perms = metadata.permissions();
+    perms.set_mode(0o644);
+    fs::set_permissions(path, perms)?;
+
+    // Set ownership (root:gid)
+    let path_cstr = std::ffi::CString::new(path.to_string_lossy().as_ref()).unwrap();
+    unsafe {
+        libc::chown(path_cstr.as_ptr(), 0, gid as libc::gid_t);
+    }
+
+    tracing::info!("Created PID file at {} with PID {}", path.display(), pid);
+
+    Ok(())
+}
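
The PID-file convention above (content `"<pid>\n"`, mode 0o644) can be sketched with std only; the `chown` to root:gid is omitted since it requires privileges, and `write_pid_file_sketch` is an illustrative name, not the patch's API.

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

/// Sketch of the PID-file write: "<pid>\n" content, 0o644 permissions.
/// (The real daemon additionally chowns the file to root:<gid>.)
fn write_pid_file_sketch(path: &Path, pid: u32) -> std::io::Result<()> {
    fs::write(path, format!("{pid}\n"))?;
    let mut perms = fs::metadata(path)?.permissions();
    perms.set_mode(0o644);
    fs::set_permissions(path, perms)
}
```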
diff --git a/src/pmxcfs-rs/pmxcfs/src/file_lock.rs b/src/pmxcfs-rs/pmxcfs/src/file_lock.rs
new file mode 100644
index 000000000..2180e67b7
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/file_lock.rs
@@ -0,0 +1,105 @@
+//! File locking utilities
+//!
+//! This module provides file-based locking to ensure only one pmxcfs instance
+//! runs at a time. It uses the flock(2) system call with exclusive locks.
+
+use anyhow::{Context, Result};
+use pmxcfs_api_types::PmxcfsError;
+use std::fs::File;
+use std::os::unix::fs::OpenOptionsExt;
+use std::os::unix::io::AsRawFd;
+use std::path::PathBuf;
+use tracing::{info, warn};
+
+/// RAII wrapper for a file lock
+///
+/// The lock is automatically released when the FileLock is dropped.
+pub struct FileLock(File);
+
+impl FileLock {
+    const MAX_RETRIES: u32 = 10;
+    const RETRY_DELAY: std::time::Duration = std::time::Duration::from_secs(1);
+
+    /// Acquire an exclusive file lock with retries (async)
+    ///
+    /// This function attempts to acquire an exclusive, non-blocking lock on the
+    /// specified file. It will retry up to 10 times with 1-second delays between
+    /// attempts, matching the C implementation's behavior.
+    ///
+    /// The blocking operations (file I/O and sleep) are executed on a blocking
+    /// thread pool to avoid blocking the async runtime.
+    ///
+    /// # Arguments
+    ///
+    /// * `lockfile_path` - Path to the lock file
+    ///
+    /// # Returns
+    ///
+    /// Returns a `FileLock` which automatically releases the lock when dropped.
+    ///
+    /// # Errors
+    ///
+    /// Returns an error if:
+    /// - The lock file cannot be created
+    /// - The lock cannot be acquired after 10 retry attempts
+    pub async fn acquire(lockfile_path: PathBuf) -> Result<Self> {
+        // Open/create the lock file on blocking thread pool
+        let file = tokio::task::spawn_blocking({
+            let lockfile_path = lockfile_path.clone();
+            move || {
+                File::options()
+                    .create(true)
+                    .read(true)
+                    .append(true)
+                    .mode(0o600)
+                    .open(&lockfile_path)
+                    .with_context(|| {
+                        format!("Unable to create lock file at {}", lockfile_path.display())
+                    })
+            }
+        })
+        .await
+        .context("Failed to spawn blocking task for file creation")??;
+
+        // Try to acquire lock with retries (matching C implementation)
+        for attempt in 0..=Self::MAX_RETRIES {
+            if Self::try_lock(&file).await? {
+                info!(path = %lockfile_path.display(), "Acquired pmxcfs lock");
+                return Ok(FileLock(file));
+            }
+
+            if attempt == Self::MAX_RETRIES {
+                return Err(PmxcfsError::System("Unable to acquire pmxcfs lock".into()).into());
+            }
+
+            if attempt == 0 {
+                warn!("Unable to acquire pmxcfs lock - retrying");
+            }
+
+            tokio::time::sleep(Self::RETRY_DELAY).await;
+        }
+
+        unreachable!("Loop should have returned or errored")
+    }
+
+    /// Attempt to acquire the lock (non-blocking)
+    async fn try_lock(file: &File) -> Result<bool> {
+        let result = tokio::task::spawn_blocking({
+            let fd = file.as_raw_fd();
+            move || unsafe { libc::flock(fd, libc::LOCK_EX | libc::LOCK_NB) }
+        })
+        .await
+        .context("Failed to spawn blocking task for flock")?;
+
+        Ok(result == 0)
+    }
+}
+
+impl Drop for FileLock {
+    fn drop(&mut self) {
+        // Safety: We own the file descriptor
+        unsafe {
+            libc::flock(self.0.as_raw_fd(), libc::LOCK_UN);
+        }
+    }
+}
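
The retry schedule used by `FileLock::acquire` (one initial attempt plus up to `MAX_RETRIES` retries, sleeping between attempts) can be sketched synchronously, independent of tokio; `acquire_with_retry` and its closure parameter are illustrative names, not the patch's API.

```rust
use std::time::Duration;

const MAX_RETRIES: u32 = 10;

/// Run `try_once` up to 1 + MAX_RETRIES times, sleeping `delay` between
/// attempts; mirrors the loop shape of FileLock::acquire.
fn acquire_with_retry<F>(mut try_once: F, delay: Duration) -> Result<u32, &'static str>
where
    F: FnMut() -> bool,
{
    for attempt in 0..=MAX_RETRIES {
        if try_once() {
            return Ok(attempt + 1); // number of attempts used
        }
        if attempt == MAX_RETRIES {
            return Err("unable to acquire lock");
        }
        std::thread::sleep(delay);
    }
    unreachable!("loop always returns")
}
```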
diff --git a/src/pmxcfs-rs/pmxcfs/src/fuse/README.md b/src/pmxcfs-rs/pmxcfs/src/fuse/README.md
new file mode 100644
index 000000000..832207964
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/fuse/README.md
@@ -0,0 +1,199 @@
+# PMXCFS FUSE Filesystem
+
+## Overview
+
+PMXCFS provides a FUSE-based cluster filesystem mounted at `/etc/pve`. This filesystem exposes cluster configuration, VM/container configurations, and dynamic status information.
+
+## Filesystem Structure
+
+```
+/etc/pve/
+├── local -> nodes/{nodename}/                    # Symlink plugin
+├── qemu-server -> nodes/{nodename}/qemu-server/  # Symlink plugin
+├── lxc -> nodes/{nodename}/lxc/                  # Symlink plugin
+├── openvz -> nodes/{nodename}/openvz/            # Symlink plugin (legacy)
+│
+├── .version                                       # Plugin file
+├── .members                                       # Plugin file
+├── .vmlist                                        # Plugin file
+├── .rrd                                           # Plugin file
+├── .clusterlog                                    # Plugin file
+├── .debug                                         # Plugin file
+│
+├── nodes/
+│   ├── {node1}/
+│   │   ├── qemu-server/          # VM configs
+│   │   │   └── {vmid}.conf
+│   │   ├── lxc/                  # CT configs
+│   │   │   └── {ctid}.conf
+│   │   ├── openvz/               # Legacy (OpenVZ)
+│   │   └── priv/                 # Node-specific private data
+│   └── {node2}/
+│       └── ...
+│
+├── corosync.conf                  # Cluster configuration
+├── corosync.conf.new              # Staging for new config
+├── storage.cfg                    # Storage configuration
+├── user.cfg                       # User database
+├── domains.cfg                    # Authentication domains
+├── datacenter.cfg                 # Datacenter settings
+├── vzdump.cron                    # Backup schedule
+├── vzdump.conf                    # Backup configuration
+├── jobs.cfg                       # Job definitions
+│
+├── ha/                            # High Availability
+│   ├── crm_commands
+│   ├── manager_status
+│   ├── resources.cfg
+│   ├── groups.cfg
+│   ├── rules.cfg
+│   └── fence.cfg
+│
+├── sdn/                           # Software Defined Networking
+│   ├── vnets.cfg
+│   ├── zones.cfg
+│   ├── controllers.cfg
+│   ├── subnets.cfg
+│   └── ipams.cfg
+│
+├── firewall/
+│   └── cluster.fw                # Cluster firewall rules
+│
+├── replication.cfg                # Replication configuration
+├── ceph.conf                      # Ceph configuration
+│
+├── notifications.cfg              # Notification settings
+│
+└── priv/                          # Cluster-wide private data
+    ├── shadow.cfg                 # Password hashes
+    ├── tfa.cfg                    # Two-factor auth
+    ├── token.cfg                  # API tokens
+    ├── notifications.cfg          # Private notification config
+    └── acme/
+        └── plugins.cfg            # ACME plugin configs
+```
+
+## File Categories
+
+### Plugin Files (Dynamic Content)
+
+Files beginning with `.` are plugin files that generate content dynamically:
+- `.version` - Cluster version and status
+- `.members` - Cluster membership
+- `.vmlist` - VM/container list
+- `.rrd` - RRD metrics dump
+- `.clusterlog` - Cluster log entries
+- `.debug` - Debug mode toggle
+
+See `../plugins/README.md` for detailed format specifications.
+
+### Symlink Plugins
+
+Convenience symlinks to node-specific directories:
+- `local/` - Points to current node's directory
+- `qemu-server/` - Points to current node's VM configs
+- `lxc/` - Points to current node's container configs
+- `openvz/` - Points to current node's OpenVZ configs (legacy)
+
+### Configuration Files (40 tracked files)
+
+The following files are tracked for version changes and synchronized across the cluster:
+
+**Core Configuration**:
+- `corosync.conf` - Corosync cluster configuration
+- `corosync.conf.new` - Staged configuration before activation
+- `storage.cfg` - Storage pool definitions
+- `user.cfg` - User accounts and permissions
+- `domains.cfg` - Authentication realm configuration
+- `datacenter.cfg` - Datacenter-wide settings
+
+**Backup Configuration**:
+- `vzdump.cron` - Backup schedule
+- `vzdump.conf` - Backup job settings
+- `jobs.cfg` - Recurring job definitions
+
+**High Availability** (6 files):
+- `ha/crm_commands` - HA command queue
+- `ha/manager_status` - HA manager status
+- `ha/resources.cfg` - HA resource definitions
+- `ha/groups.cfg` - HA service groups
+- `ha/rules.cfg` - HA placement rules
+- `ha/fence.cfg` - Fencing configuration
+
+**Software Defined Networking** (5 files):
+- `sdn/vnets.cfg` - Virtual networks
+- `sdn/zones.cfg` - Network zones
+- `sdn/controllers.cfg` - SDN controllers
+- `sdn/subnets.cfg` - Subnet definitions
+- `sdn/ipams.cfg` - IP address management
+
+**Notification** (2 files):
+- `notifications.cfg` - Public notification settings
+- `priv/notifications.cfg` - Private notification credentials
+
+**Security** (5 files):
+- `priv/shadow.cfg` - Password hashes
+- `priv/tfa.cfg` - Two-factor authentication
+- `priv/token.cfg` - API tokens
+- `priv/acme/plugins.cfg` - ACME DNS plugins
+- `firewall/cluster.fw` - Cluster-wide firewall rules
+
+**Other**:
+- `replication.cfg` - Storage replication jobs
+- `ceph.conf` - Ceph cluster configuration
+
+### Node-Specific Directories
+
+Each node has a directory under `nodes/{nodename}/` containing:
+- `qemu-server/*.conf` - QEMU/KVM VM configurations
+- `lxc/*.conf` - LXC container configurations
+- `openvz/*.conf` - OpenVZ container configurations (legacy)
+- `priv/` - Node-specific private data (not replicated)
+
+## FUSE Operations
+
+### Supported Operations
+
+The following standard FUSE operations are supported:
+
+**Metadata Operations**:
+- `getattr` - Get file/directory attributes
+- `readdir` - List directory contents
+- `statfs` - Get filesystem statistics
+
+**Read Operations**:
+- `read` - Read file contents
+- `readlink` - Read symlink target
+
+**Write Operations**:
+- `write` - Write file contents
+- `create` - Create new file
+- `unlink` - Delete file
+- `mkdir` - Create directory
+- `rmdir` - Delete directory
+- `rename` - Rename/move file
+- `truncate` - Truncate file to size
+- `utimens` - Update timestamps
+
+**Permission Operations**:
+- `chmod` - Change file mode
+- `chown` - Change file ownership
+
+### Permission Handling
+
+- **Regular paths**: Standard Unix permissions apply
+- **Private paths** (`priv/` directories): Restricted to root only
+- **Plugin files**: Read-only for most users, special handling for `.debug`
+
+### File Size Limits
+
+- Maximum file size: 1 MiB (1024 × 1024 bytes)
+- Maximum filesystem size: 128 MiB
+- Maximum inodes: 256,000
+
+## Implementation
+
+The FUSE filesystem is implemented in `filesystem.rs` and integrates with:
+- **MemDB**: Backend storage (SQLite + in-memory tree)
+- **Plugin System**: Dynamic file generation
+- **Cluster Sync**: Changes are propagated via DFSM protocol
diff --git a/src/pmxcfs-rs/pmxcfs/src/fuse/filesystem.rs b/src/pmxcfs-rs/pmxcfs/src/fuse/filesystem.rs
new file mode 100644
index 000000000..8fe85e3fe
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/fuse/filesystem.rs
@@ -0,0 +1,1505 @@
+use anyhow::{Error, bail};
+use futures::stream::TryStreamExt;
+use pmxcfs_api_types::errno;
+use proxmox_fuse::requests::{self, FuseRequest};
+use proxmox_fuse::{EntryParam, Fuse, ReplyBufState, Request};
+use std::ffi::{OsStr, OsString};
+use std::io;
+use std::mem;
+use std::path::Path;
+use std::sync::Arc;
+
+use crate::path::is_private_path;
+use crate::plugins::{Plugin, PluginRegistry};
+use pmxcfs_config::Config;
+use pmxcfs_dfsm::{Dfsm, DfsmBroadcast, FuseMessage};
+use pmxcfs_memdb::{MemDb, ROOT_INODE, TreeEntry};
+use pmxcfs_status::Status;
+
+const TTL: f64 = 1.0;
+
+/// FUSE filesystem context for pmxcfs
+pub struct PmxcfsFilesystem {
+    memdb: MemDb,
+    dfsm: Option<Arc<Dfsm<FuseMessage>>>,
+    plugins: Arc<PluginRegistry>,
+    status: Arc<Status>,
+    uid: u32,
+    gid: u32,
+}
+
+impl PmxcfsFilesystem {
+    const PLUGIN_INODE_OFFSET: u64 = 1000000;
+    const FUSE_GENERATION: u64 = 1;
+    const NLINK_FILE: u32 = 1;
+    const NLINK_DIR: u32 = 2;
+
+    pub fn new(
+        memdb: MemDb,
+        config: Arc<Config>,
+        dfsm: Option<Arc<Dfsm<FuseMessage>>>,
+        plugins: Arc<PluginRegistry>,
+        status: Arc<Status>,
+    ) -> Self {
+        Self {
+            memdb,
+            gid: config.www_data_gid(),
+            dfsm,
+            plugins,
+            status,
+            uid: 0, // root
+        }
+    }
+
+    /// Returns EACCES if the cluster is not quorate.
+    /// Matches C's dcdb_send_fuse_message() quorum check at dcdb.c:144.
+    fn require_quorum(&self) -> io::Result<()> {
+        if self.dfsm.is_some() && !self.status.is_quorate() {
+            return Err(errno::eacces());
+        }
+        Ok(())
+    }
+
+    /// Send a FuseMessage via DFSM and check its result code.
+    ///
+    /// This encapsulates the cluster broadcast-first pattern used by all mutating
+    /// FUSE operations: send the message, wait up to 10 seconds for delivery, then
+    /// map the i32 result code to an IO error if negative.
+    ///
+    /// Returns `Ok(())` on success, `Err(eio())` if the send itself fails, or
+    /// `Err(os_err(-result))` if the deliver callback reported an error.
+    async fn send_fuse_message(&self, msg: FuseMessage) -> io::Result<()> {
+        let dfsm = match self.dfsm.as_ref() {
+            Some(d) => d,
+            None => {
+                tracing::warn!("send_fuse_message called without DFSM (startup race?)");
+                return Err(errno::eacces());
+            }
+        };
+        let result = dfsm
+            .send_message_sync(msg, std::time::Duration::from_secs(10))
+            .await
+            .map_err(|e| {
+                tracing::error!("DFSM send_message_sync failed: {}", e);
+                errno::eio()
+            })?;
+        if result.result < 0 {
+            tracing::warn!("DFSM message failed with errno: {}", -result.result);
+            return Err(errno::os_err(-result.result));
+        }
+        Ok(())
+    }
+
+    /// Convert FUSE nodeid to internal inode
+    ///
+    /// FUSE protocol uses nodeid 1 for root, but internally we use ROOT_INODE (0).
+    /// Regular file inodes need to be offset by -1 to match internal numbering.
+    /// Plugin inodes are in a separate range (>= PLUGIN_INODE_OFFSET) and unchanged.
+    ///
+    /// Mapping:
+    /// - FUSE nodeid 1 → internal inode 0 (ROOT_INODE)
+    /// - FUSE nodeid N (where N > 1 and N < PLUGIN_INODE_OFFSET) → internal inode N-1
+    /// - Plugin inodes (>= PLUGIN_INODE_OFFSET) are unchanged
+    #[inline]
+    fn fuse_to_inode(&self, fuse_nodeid: u64) -> u64 {
+        if fuse_nodeid >= Self::PLUGIN_INODE_OFFSET {
+            // Plugin inodes are unchanged
+            fuse_nodeid
+        } else {
+            // Regular inodes: FUSE nodeid N → internal inode N-1
+            // This maps FUSE root (1) to internal ROOT_INODE (0)
+            fuse_nodeid - 1
+        }
+    }
+
+    /// Convert internal inode to FUSE nodeid
+    ///
+    /// Internally we use ROOT_INODE (0) for root, but FUSE protocol uses nodeid 1.
+    /// Regular file inodes need to be offset by +1 to match FUSE numbering.
+    /// Plugin inodes (>= PLUGIN_INODE_OFFSET) are unchanged.
+    ///
+    /// Mapping:
+    /// - Internal inode 0 (ROOT_INODE) → FUSE nodeid 1
+    /// - Internal inode N (where N > 0 and N < PLUGIN_INODE_OFFSET) → FUSE nodeid N+1
+    /// - Plugin inodes (>= PLUGIN_INODE_OFFSET) are unchanged
+    #[inline]
+    fn inode_to_fuse(&self, inode: u64) -> u64 {
+        if inode >= Self::PLUGIN_INODE_OFFSET {
+            // Plugin inodes are unchanged
+            inode
+        } else {
+            // Regular inodes: internal inode N → FUSE nodeid N+1
+            // This maps internal ROOT_INODE (0) to FUSE root (1)
+            inode + 1
+        }
+    }
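
The two mappings above are inverses of each other on both ranges. Pulled out as free functions (names illustrative, constants as in this patch), the round-trip property can be checked directly:

```rust
const PLUGIN_INODE_OFFSET: u64 = 1_000_000;
const ROOT_INODE: u64 = 0;

/// FUSE nodeid -> internal inode: plugin range unchanged, else N -> N-1.
fn fuse_to_inode(nodeid: u64) -> u64 {
    if nodeid >= PLUGIN_INODE_OFFSET { nodeid } else { nodeid - 1 }
}

/// Internal inode -> FUSE nodeid: plugin range unchanged, else N -> N+1.
fn inode_to_fuse(inode: u64) -> u64 {
    if inode >= PLUGIN_INODE_OFFSET { inode } else { inode + 1 }
}
```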
+
+    /// Get the full path for an inode by traversing up the tree
+    fn get_path_for_inode(&self, inode: u64) -> String {
+        if inode == ROOT_INODE {
+            return "/".to_string();
+        }
+
+        let mut path_components = Vec::new();
+        let mut current_inode = inode;
+
+        // Traverse up the tree
+        while current_inode != ROOT_INODE {
+            if let Some(entry) = self.memdb.get_entry_by_inode(current_inode) {
+                path_components.push(entry.name.clone());
+                current_inode = entry.parent;
+            } else {
+                // Entry not found, return root
+                return "/".to_string();
+            }
+        }
+
+        // Reverse to get correct order (we built from leaf to root)
+        path_components.reverse();
+
+        if path_components.is_empty() {
+            "/".to_string()
+        } else {
+            format!("/{}", path_components.join("/"))
+        }
+    }
+
+    fn join_path(&self, parent_path: &str, name: &str) -> String {
+        if parent_path == "/" {
+            format!("/{name}")
+        } else {
+            format!("{parent_path}/{name}")
+        }
+    }
+
+    /// Resolve a FUSE (parent, name) pair to a memdb path string.
+    fn resolve_path(&self, parent: u64, name: &OsStr) -> String {
+        let inode = self.fuse_to_inode(parent);
+        let dir_path = self.get_path_for_inode(inode);
+        self.join_path(&dir_path, &name.to_string_lossy())
+    }
+
+    /// Convert a TreeEntry to libc::stat using current quorum state
+    /// Applies permission adjustments based on whether the path is private
+    ///
+    /// Matches C implementation (cfs-plug-memdb.c:95-116, pmxcfs.c:130-138):
+    /// 1. Start with quorum-dependent base permissions (0777/0555 dirs, 0666/0444 files)
+    /// 2. Apply AND masking: private=0777700, dirs/symlinks=0777755, files=0777750
+    fn entry_to_stat(&self, entry: &TreeEntry, path: &str) -> libc::stat {
+        // Use current quorum state
+        let quorate = self.status.is_quorate();
+        self.entry_to_stat_with_quorum(entry, path, quorate)
+    }
+
+    /// Convert a TreeEntry to libc::stat with explicit quorum state
+    /// Applies permission adjustments based on whether the path is private
+    ///
+    /// Matches C implementation (cfs-plug-memdb.c:95-116, pmxcfs.c:130-138):
+    /// 1. Start with quorum-dependent base permissions (0777/0555 dirs, 0666/0444 files)
+    /// 2. Apply AND masking: private=0777700, dirs/symlinks=0777755, files=0777750
+    fn entry_to_stat_with_quorum(
+        &self,
+        entry: &TreeEntry,
+        path: &str,
+        quorate: bool,
+    ) -> libc::stat {
+        let mtime_secs = entry.mtime as i64;
+        let mut stat: libc::stat = unsafe { mem::zeroed() };
+
+        // Convert internal inode to FUSE nodeid for st_ino field
+        let fuse_nodeid = self.inode_to_fuse(entry.inode);
+
+        // Quorum-dependent permissions and file type
+        // C: dirs get 0777/0555, files get 0666/0444 based on quorum
+        let (type_bits, perms, nlink) = if entry.is_dir() {
+            (
+                libc::S_IFDIR,
+                if quorate { 0o777 } else { 0o555 },
+                Self::NLINK_DIR,
+            )
+        } else {
+            (
+                libc::S_IFREG,
+                if quorate { 0o666 } else { 0o444 },
+                Self::NLINK_FILE,
+            )
+        };
+
+        stat.st_ino = fuse_nodeid;
+        stat.st_mode = type_bits | perms;
+        stat.st_nlink = nlink as u64;
+        stat.st_uid = self.uid;
+        stat.st_gid = self.gid;
+        stat.st_size = entry.size as i64;
+        stat.st_blksize = 4096;
+        // NOTE: C pmxcfs computes st_blocks in units of MEMDB_BLOCKSIZE (4096 bytes),
+        // not the POSIX-standard 512-byte units. We match C for drop-in compatibility.
+        stat.st_blocks = entry.size.div_ceil(4096) as i64;
+        stat.st_atime = mtime_secs;
+        stat.st_mtime = mtime_secs;
+        stat.st_ctime = mtime_secs;
+
+        // Apply permission adjustments based on path privacy (matching C implementation)
+        // See pmxcfs.c cfs_fuse_getattr() lines 130-138
+        // Uses AND masking to restrict permissions while preserving file type bits
+        if is_private_path(path) {
+            // Private paths: mask to rwx------ (0o700)
+            // C: stbuf->st_mode &= 0777700
+            stat.st_mode &= 0o777700;
+        } else {
+            // Non-private paths: different masks for dirs vs files
+            if (stat.st_mode & libc::S_IFMT) == libc::S_IFDIR
+                || (stat.st_mode & libc::S_IFMT) == libc::S_IFLNK
+            {
+                // Directories and symlinks: mask to rwxr-xr-x (0o755)
+                // C: stbuf->st_mode &= 0777755
+                stat.st_mode &= 0o777755;
+            } else {
+                // Regular files: mask to rwxr-x--- (0o750)
+                // C: stbuf->st_mode &= 0777750
+                stat.st_mode &= 0o777750;
+            }
+        }
+
+        stat
+    }
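
The two-step mode computation (quorum-dependent base, then privacy/type AND mask) can be exercised in isolation; the `S_IF*` values below are the usual Linux constants, inlined to keep the sketch std-only, and `masked_mode` is an illustrative name.

```rust
// Linux file-type bits (as in libc), inlined for a std-only sketch.
const S_IFREG: u32 = 0o100000;
const S_IFDIR: u32 = 0o040000;
const S_IFMT: u32 = 0o170000;

/// Step 1: quorum-dependent base mode; step 2: privacy/type AND mask.
fn masked_mode(is_dir: bool, quorate: bool, private: bool) -> u32 {
    let base = if is_dir {
        S_IFDIR | (if quorate { 0o777 } else { 0o555 })
    } else {
        S_IFREG | (if quorate { 0o666 } else { 0o444 })
    };
    let mask = if private {
        0o777700 // permissions capped at rwx------
    } else if (base & S_IFMT) == S_IFDIR {
        0o777755 // permissions capped at rwxr-xr-x
    } else {
        0o777750 // permissions capped at rwxr-x---
    };
    base & mask
}
```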
+
+    /// Resolve a plugin from its FUSE inode.
+    ///
+    /// Converts the plugin-range inode to an index into the plugin list,
+    /// then looks up the plugin by index in a single lock acquisition.
+    fn resolve_plugin(&self, ino_fuse: u64) -> Option<Arc<dyn Plugin>> {
+        let plugin_idx = (ino_fuse - Self::PLUGIN_INODE_OFFSET) as usize;
+        self.plugins.get_by_index(plugin_idx)
+    }
+
+    /// Get stat for a plugin file
+    fn plugin_to_stat(&self, inode: u64, plugin: &Arc<dyn Plugin>) -> libc::stat {
+        let now = pmxcfs_api_types::unix_now_secs() as i64;
+        let data = plugin.read().unwrap_or_default();
+
+        let mut stat: libc::stat = unsafe { mem::zeroed() };
+        stat.st_ino = inode;
+
+        // Set file type and mode based on plugin type
+        if plugin.is_symlink() {
+            // Quorum-aware permissions for symlinks (matching C's cfs-plug-link.c:68-72)
+            // - When quorate: 0o777 (writable by all)
+            // - When not quorate: 0o555 (read-only for all)
+            let mode = if self.status.is_quorate() {
+                0o777
+            } else {
+                0o555
+            };
+            stat.st_mode = libc::S_IFLNK | mode;
+        } else {
+            // Regular file plugin
+            let mut mode = plugin.mode();
+
+            // Strip write bits if plugin doesn't support writing
+            // Matches C implementation (cfs-plug-func.c:216-218)
+            if !plugin.supports_write() {
+                mode &= !0o222; // Remove write bits (owner, group, other)
+            }
+
+            stat.st_mode = libc::S_IFREG | mode;
+        }
+
+        stat.st_nlink = Self::NLINK_FILE as u64;
+        stat.st_uid = self.uid;
+        stat.st_gid = self.gid;
+        stat.st_size = data.len() as i64;
+        stat.st_blksize = 4096;
+        // NOTE: Match C's MEMDB_BLOCKSIZE unit convention (4096 bytes), not POSIX 512-byte units.
+        stat.st_blocks = data.len().div_ceil(4096) as i64;
+        stat.st_atime = now;
+        stat.st_atime_nsec = 0;
+        stat.st_mtime = now;
+        stat.st_mtime_nsec = 0;
+        stat.st_ctime = now;
+        stat.st_ctime_nsec = 0;
+
+        stat
+    }
+
+    /// Handle lookup operation
+    async fn handle_lookup(&self, parent_fuse: u64, name: &OsStr) -> io::Result<EntryParam> {
+        tracing::debug!(
+            "lookup(parent={parent_fuse}, name={})",
+            name.to_string_lossy()
+        );
+
+        // Convert FUSE nodeid to internal inode
+        let parent = self.fuse_to_inode(parent_fuse);
+
+        let name_str = name.to_string_lossy();
+
+        // Check if this is a plugin file in the root directory
+        if parent == ROOT_INODE {
+            // Use list_all() to get both the index and the plugin in one lock acquisition
+            let all_plugins = self.plugins.list_all();
+            if let Some((plugin_idx, (_, plugin))) = all_plugins
+                .iter()
+                .enumerate()
+                .find(|(_, (name, _))| name == name_str.as_ref())
+            {
+                let plugin_inode = Self::PLUGIN_INODE_OFFSET + plugin_idx as u64;
+                let stat = self.plugin_to_stat(plugin_inode, plugin);
+
+                return Ok(EntryParam {
+                    inode: plugin_inode, // Plugin inodes already in FUSE space
+                    generation: Self::FUSE_GENERATION,
+                    attr: stat,
+                    attr_timeout: TTL,
+                    entry_timeout: TTL,
+                });
+            }
+        }
+
+        // Get parent entry
+        let parent_entry = self
+            .memdb
+            .get_entry_by_inode(parent)
+            .ok_or_else(errno::enoent)?;
+
+        // Construct the path
+        let parent_path = self.get_path_for_inode(parent_entry.inode);
+        let full_path = self.join_path(&parent_path, &name_str);
+
+        // Look up the entry
+        if let Some(entry) = self.memdb.lookup_path(&full_path) {
+            let stat = self.entry_to_stat(&entry, &full_path);
+            // Convert internal inode to FUSE nodeid
+            let fuse_nodeid = self.inode_to_fuse(entry.inode);
+            return Ok(EntryParam {
+                inode: fuse_nodeid,
+                generation: Self::FUSE_GENERATION,
+                attr: stat,
+                attr_timeout: TTL,
+                entry_timeout: TTL,
+            });
+        }
+
+        Err(errno::enoent())
+    }
+
+    /// Handle getattr operation
+    fn handle_getattr(&self, ino_fuse: u64) -> io::Result<libc::stat> {
+        tracing::debug!("getattr(ino={})", ino_fuse);
+
+        // Check if this is a plugin file (inode >= PLUGIN_INODE_OFFSET)
+        if ino_fuse >= Self::PLUGIN_INODE_OFFSET {
+            if let Some(plugin) = self.resolve_plugin(ino_fuse) {
+                return Ok(self.plugin_to_stat(ino_fuse, &plugin));
+            }
+        }
+
+        // Convert FUSE nodeid to internal inode
+        let ino = self.fuse_to_inode(ino_fuse);
+
+        if let Some(entry) = self.memdb.get_entry_by_inode(ino) {
+            let path = self.get_path_for_inode(ino);
+            Ok(self.entry_to_stat(&entry, &path))
+        } else {
+            Err(errno::enoent())
+        }
+    }
+
+    /// Handle readdir operation
+    fn handle_readdir(&self, request: &mut requests::Readdir) -> Result<(), Error> {
+        let ino_fuse = request.inode;
+        tracing::debug!("readdir(ino={}, offset={})", ino_fuse, request.offset);
+
+        // Convert FUSE nodeid to internal inode
+        let ino = self.fuse_to_inode(ino_fuse);
+        let offset = request.offset;
+
+        // Get the directory path
+        let path = self.get_path_for_inode(ino);
+
+        // Read directory entries from memdb
+        let entries = self.memdb.readdir(&path).map_err(|_| errno::enoent())?;
+
+        // Build complete list of entries
+        let mut all_entries: Vec<(u64, libc::stat, String)> = Vec::new();
+
+        // Add . and .. entries
+        // C implementation (cfs-plug-memdb.c:172) always passes quorate=0 for readdir stats
+        // This ensures directory listings show non-quorate permissions (read-only view)
+        if let Some(dir_entry) = self.memdb.get_entry_by_inode(ino) {
+            let dir_stat = self.entry_to_stat_with_quorum(&dir_entry, &path, false);
+            all_entries.push((ino_fuse, dir_stat, ".".to_string()));
+            all_entries.push((ino_fuse, dir_stat, "..".to_string()));
+        }
+
+        // For root directory, add plugin files
+        if ino == ROOT_INODE {
+            let plugins = self.plugins.list_all();
+            for (idx, (plugin_name, plugin)) in plugins.iter().enumerate() {
+                let plugin_inode = Self::PLUGIN_INODE_OFFSET + idx as u64;
+                let stat = self.plugin_to_stat(plugin_inode, plugin);
+                all_entries.push((plugin_inode, stat, plugin_name.clone()));
+            }
+        }
+
+        // Add actual entries from memdb
+        // C implementation (cfs-plug-memdb.c:172) always passes quorate=0 for readdir stats
+        for entry in &entries {
+            let entry_path = self.join_path(&path, &entry.name);
+            let stat = self.entry_to_stat_with_quorum(entry, &entry_path, false);
+            // Convert internal inode to FUSE nodeid for directory entry
+            let fuse_nodeid = self.inode_to_fuse(entry.inode);
+            all_entries.push((fuse_nodeid, stat, entry.name.clone()));
+        }
+
+        // Return entries starting from offset
+        let mut next = offset as isize;
+        for (_inode, stat, name) in all_entries.iter().skip(offset as usize) {
+            next += 1;
+            match request.add_entry(OsStr::new(name), stat, next)? {
+                ReplyBufState::Ok => (),
+                ReplyBufState::Full => return Ok(()),
+            }
+        }
+
+        Ok(())
+    }
+
+    /// Handle read operation
+    fn handle_read(&self, ino_fuse: u64, offset: u64, size: usize) -> io::Result<Vec<u8>> {
+        tracing::debug!("read(ino={}, offset={}, size={})", ino_fuse, offset, size);
+
+        // Check if this is a plugin file (inode >= PLUGIN_INODE_OFFSET)
+        if ino_fuse >= Self::PLUGIN_INODE_OFFSET {
+            if let Some(plugin) = self.resolve_plugin(ino_fuse) {
+                let data = plugin.read().map_err(|_| errno::eio())?;
+
+                let offset = offset as usize;
+                let start = offset.min(data.len());
+                let end = (offset + size).min(data.len());
+                return Ok(data[start..end].to_vec());
+            }
+        }
+
+        // Convert FUSE nodeid to internal inode
+        let ino = self.fuse_to_inode(ino_fuse);
+
+        let path = self.get_path_for_inode(ino);
+
+        // Check if this is a directory
+        if ino == ROOT_INODE {
+            // Root directory itself - can't read
+            return Err(errno::eisdir());
+        }
+
+        // Read from memdb
+        self.memdb
+            .read(&path, offset, size)
+            .map_err(|_| errno::enoent())
+    }
+
+    /// Handle write operation
+    async fn handle_write(&self, ino_fuse: u64, offset: u64, data: &[u8]) -> io::Result<usize> {
+        tracing::debug!(
+            "write(ino={}, offset={}, size={})",
+            ino_fuse,
+            offset,
+            data.len()
+        );
+
+        self.require_quorum()?;
+
+        // Check if this is a plugin file (inode >= PLUGIN_INODE_OFFSET)
+        if ino_fuse >= Self::PLUGIN_INODE_OFFSET {
+            if let Some(plugin) = self.resolve_plugin(ino_fuse) {
+                // Validate offset (C only allows offset 0)
+                if offset != 0 {
+                    tracing::warn!("Plugin write rejected: offset {} != 0", offset);
+                    return Err(errno::eio());
+                }
+
+                // Call plugin write
+                let plugin_name = plugin.name();
+                tracing::debug!("Writing {} bytes to plugin '{}'", data.len(), plugin_name);
+                plugin.write(data).map(|_| data.len()).map_err(|e| {
+                    tracing::error!("Plugin write failed: {}", e);
+                    errno::eio()
+                })?;
+
+                return Ok(data.len());
+            }
+
+            // Plugin not found or invalid index
+            return Err(errno::enoent());
+        }
+
+        // Regular memdb file write
+        // Convert FUSE nodeid to internal inode
+        let ino = self.fuse_to_inode(ino_fuse);
+
+        let path = self.get_path_for_inode(ino);
+
+        // C-style broadcast-first: send message and wait for result
+        // C implementation (cfs-plug-memdb.c:262-265) sends just the write chunk
+        // with original offset, not the full file contents
+        if self.dfsm.is_some() {
+            // Send write message with just the data chunk and original offset
+            // The DFSM delivery will apply the write to all nodes
+            self.send_fuse_message(FuseMessage::Write {
+                path: path.clone(),
+                offset,
+                truncate: false,
+                data: data.to_vec(),
+            })
+            .await?;
+
+            Ok(data.len())
+        } else {
+            // No cluster - write locally
+            let mtime = pmxcfs_api_types::unix_now_secs();
+
+            // FUSE write() should never truncate - truncation is handled separately
+            // via setattr (for explicit truncate) or open with O_TRUNC flag.
+            // Offset writes must preserve content beyond the write range (POSIX semantics).
+            self.memdb
+                .write(&path, offset, 0, mtime, data, false)
+                .map_err(|_| errno::eacces())
+        }
+    }
+
+    /// Build an EntryParam for a newly created (or already existing) memdb entry at `path`.
+    fn make_entry_param(&self, path: &str) -> io::Result<EntryParam> {
+        let entry = self
+            .memdb
+            .lookup_path(path)
+            .ok_or_else(pmxcfs_api_types::errno::enoent)?;
+        let fuse_nodeid = self.inode_to_fuse(entry.inode);
+        let attr = self.entry_to_stat(&entry, path);
+        Ok(EntryParam {
+            inode: fuse_nodeid,
+            generation: Self::FUSE_GENERATION,
+            attr,
+            attr_timeout: TTL,
+            entry_timeout: TTL,
+        })
+    }
+
+    /// Handle mkdir operation
+    async fn handle_mkdir(
+        &self,
+        parent_fuse: u64,
+        name: &OsStr,
+        mode: u32,
+    ) -> io::Result<EntryParam> {
+        tracing::debug!(
+            "mkdir(parent={}, name={})",
+            parent_fuse,
+            name.to_string_lossy()
+        );
+
+        self.require_quorum()?;
+
+        let full_path = self.resolve_path(parent_fuse, name);
+
+        // C-style broadcast-first: send message and wait for result
+        if self.dfsm.is_some() {
+            self.send_fuse_message(FuseMessage::Mkdir {
+                path: full_path.clone(),
+            })
+            .await?;
+        } else {
+            // No cluster - create locally
+            let mtime = pmxcfs_api_types::unix_now_secs();
+
+            self.memdb
+                .create(&full_path, mode | libc::S_IFDIR, 0, mtime)
+                .map_err(|_| errno::eacces())?;
+        }
+
+        // Look up the newly created entry (created via delivery callback)
+        self.make_entry_param(&full_path)
+    }
+
+    /// Handle rmdir operation
+    async fn handle_rmdir(&self, parent_fuse: u64, name: &OsStr) -> io::Result<()> {
+        tracing::debug!(
+            "rmdir(parent={}, name={})",
+            parent_fuse,
+            name.to_string_lossy()
+        );
+
+        self.require_quorum()?;
+
+        let full_path = self.resolve_path(parent_fuse, name);
+
+        // C-style broadcast-first: send message and wait for result
+        if self.dfsm.is_some() {
+            self.send_fuse_message(FuseMessage::Delete {
+                path: full_path.clone(),
+            })
+            .await?;
+        } else {
+            // No cluster - delete locally
+            self.memdb
+                .delete(&full_path, 0, pmxcfs_api_types::unix_now_secs())
+                .map_err(|_| errno::eacces())?;
+        }
+
+        Ok(())
+    }
+
+    /// Handle create operation
+    async fn handle_create(
+        &self,
+        parent_fuse: u64,
+        name: &OsStr,
+        mode: u32,
+    ) -> io::Result<EntryParam> {
+        tracing::debug!(
+            "create(parent={}, name={})",
+            parent_fuse,
+            name.to_string_lossy()
+        );
+
+        self.require_quorum()?;
+
+        let full_path = self.resolve_path(parent_fuse, name);
+
+        // C-style broadcast-first: send message and wait for result
+        if self.dfsm.is_some() {
+            self.send_fuse_message(FuseMessage::Create {
+                path: full_path.clone(),
+            })
+            .await?;
+        } else {
+            // No cluster - create locally
+            let mtime = pmxcfs_api_types::unix_now_secs();
+
+            self.memdb
+                .create(&full_path, mode | libc::S_IFREG, 0, mtime)
+                .map_err(|_| errno::eacces())?;
+        }
+
+        // Look up the newly created entry (created via delivery callback)
+        self.make_entry_param(&full_path)
+    }
+
+    /// Handle unlink operation
+    async fn handle_unlink(&self, parent_fuse: u64, name: &OsStr) -> io::Result<()> {
+        tracing::debug!(
+            "unlink(parent={}, name={})",
+            parent_fuse,
+            name.to_string_lossy()
+        );
+
+        self.require_quorum()?;
+
+        // Convert FUSE nodeid to internal inode
+        let parent = self.fuse_to_inode(parent_fuse);
+
+        // Don't allow unlinking plugin files (in root directory)
+        if parent == ROOT_INODE && self.plugins.is_plugin(&name.to_string_lossy()) {
+            return Err(errno::eacces());
+        }
+
+        let full_path = self.resolve_path(parent_fuse, name);
+
+        // Check if trying to unlink a directory (should use rmdir instead)
+        if let Some(entry) = self.memdb.lookup_path(&full_path)
+            && entry.is_dir()
+        {
+            return Err(errno::eisdir());
+        }
+
+        // C-style broadcast-first: send message and wait for result
+        if self.dfsm.is_some() {
+            self.send_fuse_message(FuseMessage::Delete {
+                path: full_path.clone(),
+            })
+            .await?;
+        } else {
+            // No cluster - delete locally
+            self.memdb
+                .delete(&full_path, 0, pmxcfs_api_types::unix_now_secs())
+                .map_err(|_| errno::eacces())?;
+        }
+
+        Ok(())
+    }
+
+    /// Handle rename operation
+    async fn handle_rename(
+        &self,
+        parent_fuse: u64,
+        name: &OsStr,
+        new_parent_fuse: u64,
+        new_name: &OsStr,
+    ) -> io::Result<()> {
+        tracing::debug!(
+            "rename(parent={}, name={}, new_parent={}, new_name={})",
+            parent_fuse,
+            name.to_string_lossy(),
+            new_parent_fuse,
+            new_name.to_string_lossy()
+        );
+
+        self.require_quorum()?;
+
+        let old_path = self.resolve_path(parent_fuse, name);
+        let new_path = self.resolve_path(new_parent_fuse, new_name);
+
+        // C-style broadcast-first: send message and wait for result
+        if self.dfsm.is_some() {
+            self.send_fuse_message(FuseMessage::Rename {
+                from: old_path.clone(),
+                to: new_path.clone(),
+            })
+            .await?;
+        } else {
+            // No cluster - rename locally
+            self.memdb
+                .rename(&old_path, &new_path, 0, pmxcfs_api_types::unix_now_secs())
+                .map_err(|_| errno::eacces())?;
+        }
+
+        Ok(())
+    }
+
+    /// Validate a chmod request (C: pmxcfs.c:180-197).
+    ///
+    /// chmod is validation-only — the filesystem does not actually change stored mode bits.
+    /// Only specific permission values are accepted:
+    /// - 0600 (rw-------) for private paths
+    /// - 0640 (rw-r-----) for all other paths
+    ///
+    /// Returns EPERM if the requested mode is not allowed.
+    fn validate_chmod(&self, path: &str, new_mode: u32) -> io::Result<()> {
+        let is_private = is_private_path(path);
+        let mode_bits = new_mode & 0o777; // Extract permission bits only
+
+        let allowed = if is_private {
+            mode_bits == 0o600
+        } else {
+            mode_bits == 0o640
+        };
+
+        if !allowed {
+            tracing::debug!(
+                "chmod rejected: mode={:o}, path={}, is_private={}",
+                mode_bits,
+                path,
+                is_private
+            );
+            return Err(errno::eperm());
+        }
+
+        tracing::debug!(
+            "chmod validated: mode={:o}, path={}, is_private={}",
+            mode_bits,
+            path,
+            is_private
+        );
+        Ok(())
+    }
+
+    /// Validate a chown request (C: pmxcfs.c:198-214).
+    ///
+    /// chown is validation-only — ownership is fixed and never changed.
+    /// Only these values are accepted:
+    /// - uid: 0 (root) or u32::MAX (-1, meaning "no change")
+    /// - gid: the www-data GID or u32::MAX (-1, meaning "no change")
+    ///
+    /// Returns EPERM if the requested ownership is not allowed.
+    fn validate_chown(&self, uid: Option<u32>, gid: Option<u32>) -> io::Result<()> {
+        let uid_allowed = match uid {
+            None => true,
+            Some(u) => u == 0 || u == u32::MAX, // -1 as u32 = u32::MAX
+        };
+
+        let gid_allowed = match gid {
+            None => true,
+            Some(g) => g == self.gid || g == u32::MAX, // -1 as u32 = u32::MAX
+        };
+
+        if !uid_allowed || !gid_allowed {
+            tracing::debug!(
+                "chown rejected: uid={:?}, gid={:?}, allowed_gid={}",
+                uid,
+                gid,
+                self.gid,
+            );
+            return Err(errno::eperm());
+        }
+
+        tracing::debug!(
+            "chown validated: uid={:?}, gid={:?}, allowed_gid={}",
+            uid,
+            gid,
+            self.gid,
+        );
+        Ok(())
+    }
+
+    /// Apply a truncate (size change) to `path`.
+    ///
+    /// Sends a DFSM Write message when a cluster is present (the `offset` field
+    /// carries the new size), otherwise updates the local memdb directly.
+    async fn apply_size_change(&self, path: &str, size: u64, mtime: u32) -> io::Result<()> {
+        if self.dfsm.is_some() {
+            self.send_fuse_message(FuseMessage::Write {
+                path: path.to_owned(),
+                offset: size,
+                truncate: true,
+                data: Vec::new(),
+            })
+            .await?;
+        } else {
+            self.memdb
+                .write(path, size, 0, mtime, b"", true)
+                .map_err(|_| errno::eacces())?;
+        }
+        Ok(())
+    }
+
+    /// Apply an mtime update to `path` / `ino`, handling lock-release detection.
+    ///
+    /// C implementation (cfs-plug-memdb.c:415-422) ALWAYS sends DCDB_MESSAGE_CFS_MTIME
+    /// via DFSM when mtime is updated, in addition to any unlock messages.
+    ///
+    /// Lock-release semantics: mtime=0 on a lock directory triggers an unlock request.
+    /// If DFSM is not synced the expired lock is deleted locally; otherwise an
+    /// UnlockRequest message is broadcast to the cluster.
+    async fn apply_mtime_change(&self, path: &str, ino: u64, new_mtime: u32) -> io::Result<()> {
+        // Check if this is a lock directory (C: cfs-plug-memdb.c:411-418)
+        if pmxcfs_memdb::is_lock_path(path)
+            && let Some(entry) = self.memdb.get_entry_by_inode(ino)
+            && entry.is_dir()
+        {
+            // mtime=0 on lock directory = unlock request
+            if new_mtime == 0 {
+                tracing::debug!("Unlock request for lock directory: {}", path);
+                let csum = entry.compute_checksum();
+
+                // If DFSM is available and synced, only send the message - don't delete locally.
+                // The leader will check if expired and send Unlock message if needed.
+                // If DFSM is not available or not synced, delete locally if expired
+                // (C: cfs-plug-memdb.c:425-427).
+                if self.dfsm.as_ref().is_none_or(|d| !d.is_synced()) {
+                    if self.memdb.lock_expired(path, &csum) {
+                        tracing::info!("DFSM not synced - deleting expired lock locally: {}", path);
+                        self.memdb
+                            .delete(path, 0, pmxcfs_api_types::unix_now_secs())
+                            .map_err(|_| errno::eacces())?;
+                    }
+                } else if let Some(dfsm) = &self.dfsm {
+                    // DFSM is present and synced here (checked above); borrow it out
+                    // of the Option to broadcast the unlock request to the cluster
+                    // (C: cfs-plug-memdb.c:417).
+                    tracing::debug!("DFSM synced - sending unlock request to cluster");
+                    dfsm.broadcast(FuseMessage::UnlockRequest {
+                        path: path.to_owned(),
+                        checksum: csum,
+                    });
+                }
+            }
+        }
+
+        // C implementation ALWAYS sends MTIME message (lines 420-422), regardless of
+        // whether it's an unlock request or not. This broadcasts the mtime update to
+        // all cluster nodes for synchronization.
+        if self.dfsm.is_some() {
+            tracing::debug!(
+                "Sending MTIME message via DFSM: path={}, mtime={}",
+                path,
+                new_mtime
+            );
+            self.send_fuse_message(FuseMessage::Mtime {
+                path: path.to_owned(),
+                mtime: new_mtime,
+            })
+            .await?;
+        } else {
+            // No cluster - update locally
+            self.memdb
+                .set_mtime(path, 0, new_mtime)
+                .map_err(|_| errno::eacces())?;
+        }
+        Ok(())
+    }
+
+    /// Handle setattr operation.
+    ///
+    /// Supports:
+    /// - Truncate (size parameter)
+    /// - Mtime updates (mtime parameter) - used for lock renewal/release
+    /// - Mode changes (mode parameter) - validation only, no actual changes
+    /// - Ownership changes (uid/gid parameters) - validation only, no actual changes
+    ///
+    /// C implementation (cfs-plug-memdb.c:393-436) ALWAYS sends DCDB_MESSAGE_CFS_MTIME
+    /// via DFSM when mtime is updated (line 420-422), in addition to unlock messages.
+    ///
+    /// chmod/chown (pmxcfs.c:180-214): these operations don't actually change anything;
+    /// they just validate that the requested changes are allowed (returning -EPERM if not).
+    async fn handle_setattr(
+        &self,
+        ino_fuse: u64,
+        size: Option<u64>,
+        mtime: Option<u32>,
+        mode: Option<u32>,
+        uid: Option<u32>,
+        gid: Option<u32>,
+    ) -> io::Result<libc::stat> {
+        tracing::debug!(
+            "setattr(ino={}, size={:?}, mtime={:?})",
+            ino_fuse,
+            size,
+            mtime
+        );
+
+        // Convert FUSE nodeid to internal inode
+        let ino = self.fuse_to_inode(ino_fuse);
+        let path = self.get_path_for_inode(ino);
+
+        // Handle chmod operation (validation only - chmod doesn't need quorum)
+        if let Some(new_mode) = mode {
+            self.validate_chmod(&path, new_mode)?;
+        }
+
+        // Handle chown operation (validation only - chown doesn't need quorum)
+        if uid.is_some() || gid.is_some() {
+            self.validate_chown(uid, gid)?;
+        }
+
+        // Quorum is required for actual mutations (truncate, mtime).
+        // chmod/chown are validation-only and do not need quorum.
+        if size.is_some() || mtime.is_some() {
+            self.require_quorum()?;
+        }
+
+        // Handle truncate operation
+        if let Some(new_size) = size {
+            let current_mtime = pmxcfs_api_types::unix_now_secs();
+
+            self.apply_size_change(&path, new_size, current_mtime)
+                .await?;
+        }
+
+        // Handle mtime update (lock renewal/release)
+        if let Some(new_mtime) = mtime {
+            self.apply_mtime_change(&path, ino, new_mtime).await?;
+        }
+
+        // Return current attributes
+        if let Some(entry) = self.memdb.get_entry_by_inode(ino) {
+            Ok(self.entry_to_stat(&entry, &path))
+        } else {
+            Err(errno::enoent())
+        }
+    }
+
+    /// Handle readlink operation - read symbolic link target
+    fn handle_readlink(&self, ino_fuse: u64) -> io::Result<OsString> {
+        tracing::debug!("readlink(ino={})", ino_fuse);
+
+        // Check if this is a plugin (only plugins can be symlinks in pmxcfs)
+        if ino_fuse >= Self::PLUGIN_INODE_OFFSET {
+            if let Some(plugin) = self.resolve_plugin(ino_fuse) {
+                // Read the link target from the plugin
+                let data = plugin.read().map_err(|_| errno::eio())?;
+
+                // Convert bytes to OsString
+                let target = std::str::from_utf8(&data).map_err(|_| errno::eio())?;
+
+                return Ok(OsString::from(target));
+            }
+        }
+
+        // Not a plugin or plugin not found
+        Err(errno::einval())
+    }
+
+    /// Handle statfs operation - return filesystem statistics
+    ///
+    /// Matches C implementation (memdb.c:1275-1307)
+    /// Returns fixed filesystem stats based on memdb state
+    fn handle_statfs(&self) -> io::Result<libc::statvfs> {
+        tracing::debug!("statfs()");
+
+        const BLOCKSIZE: u64 = 4096;
+
+        // Get statistics from memdb
+        let (blocks, bfree, bavail, files, ffree) = self.memdb.statfs();
+
+        let mut stbuf: libc::statvfs = unsafe { mem::zeroed() };
+
+        // Block size and counts
+        stbuf.f_bsize = BLOCKSIZE; // Filesystem block size
+        stbuf.f_frsize = BLOCKSIZE; // Fragment size (same as block size)
+        stbuf.f_blocks = blocks; // Total blocks in filesystem
+        stbuf.f_bfree = bfree; // Free blocks
+        stbuf.f_bavail = bavail; // Free blocks available to unprivileged user
+
+        // Inode counts
+        stbuf.f_files = files; // Total file nodes in filesystem
+        stbuf.f_ffree = ffree; // Free file nodes in filesystem
+        stbuf.f_favail = ffree; // Free file nodes available to unprivileged user
+
+        // Other fields
+        stbuf.f_fsid = 0; // Filesystem ID
+        stbuf.f_flag = 0; // Mount flags
+        stbuf.f_namemax = 255; // Maximum filename length
+
+        Ok(stbuf)
+    }
+}
+
+/// Create and mount FUSE filesystem
+pub async fn mount_fuse(
+    mount_path: &Path,
+    memdb: MemDb,
+    config: Arc<Config>,
+    dfsm: Option<Arc<Dfsm<FuseMessage>>>,
+    plugins: Arc<PluginRegistry>,
+    status: Arc<Status>,
+) -> Result<(), Error> {
+    let fs = Arc::new(PmxcfsFilesystem::new(memdb, config, dfsm, plugins, status));
+
+    let mut fuse = Fuse::builder("pmxcfs")?
+        .debug()
+        .options("default_permissions")? // Enable kernel permission checking
+        .options("allow_other")? // Allow non-root access
+        .enable_readdir()
+        .enable_readlink()
+        .enable_mkdir()
+        .enable_create()
+        .enable_write()
+        .enable_unlink()
+        .enable_rmdir()
+        .enable_rename()
+        .enable_setattr()
+        .enable_read()
+        .enable_statfs()
+        .build()?
+        .mount(mount_path)?;
+
+    tracing::info!("FUSE filesystem mounted at {}", mount_path.display());
+
+    // Process FUSE requests
+    while let Some(request) = fuse.try_next().await? {
+        let fs = Arc::clone(&fs);
+        match request {
+            Request::Lookup(request) => {
+                match fs.handle_lookup(request.parent, &request.file_name).await {
+                    Ok(entry) => request.reply(&entry)?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Getattr(request) => match fs.handle_getattr(request.inode) {
+                Ok(stat) => request.reply(&stat, TTL)?,
+                Err(err) => request.io_fail(err)?,
+            },
+            Request::Readlink(request) => match fs.handle_readlink(request.inode) {
+                Ok(target) => request.reply(&target)?,
+                Err(err) => request.io_fail(err)?,
+            },
+            Request::Readdir(mut request) => match fs.handle_readdir(&mut request) {
+                Ok(()) => request.reply()?,
+                Err(err) => {
+                    if let Some(io_err) = err.downcast_ref::<io::Error>() {
+                        let errno = io_err.raw_os_error().unwrap_or(libc::EIO);
+                        request.fail(errno)?;
+                    } else {
+                        request.io_fail(errno::eio())?;
+                    }
+                }
+            },
+            Request::Read(request) => {
+                match fs.handle_read(request.inode, request.offset, request.size) {
+                    Ok(data) => request.reply(&data)?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Write(request) => {
+                match fs
+                    .handle_write(request.inode, request.offset, request.data())
+                    .await
+                {
+                    Ok(written) => request.reply(written)?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Mkdir(request) => {
+                match fs
+                    .handle_mkdir(request.parent, &request.dir_name, request.mode)
+                    .await
+                {
+                    Ok(entry) => request.reply(&entry)?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Rmdir(request) => {
+                match fs.handle_rmdir(request.parent, &request.dir_name).await {
+                    Ok(()) => request.reply()?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Rename(request) => {
+                match fs
+                    .handle_rename(
+                        request.parent,
+                        &request.name,
+                        request.new_parent,
+                        &request.new_name,
+                    )
+                    .await
+                {
+                    Ok(()) => request.reply()?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Create(request) => {
+                match fs
+                    .handle_create(request.parent, &request.file_name, request.mode)
+                    .await
+                {
+                    Ok(entry) => request.reply(&entry, 0)?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Mknod(request) => {
+                // Treat mknod the same as create
+                match fs
+                    .handle_create(request.parent, &request.file_name, request.mode)
+                    .await
+                {
+                    Ok(entry) => request.reply(&entry)?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Unlink(request) => {
+                match fs.handle_unlink(request.parent, &request.file_name).await {
+                    Ok(()) => request.reply()?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Setattr(request) => {
+                // Extract mtime if being set
+                let mtime = request.mtime().map(|set_time| match set_time {
+                    proxmox_fuse::requests::SetTime::Time(duration) => duration.as_secs() as u32,
+                    proxmox_fuse::requests::SetTime::Now => pmxcfs_api_types::unix_now_secs(),
+                });
+
+                // Extract mode, uid, gid for chmod/chown validation
+                let mode = request.mode();
+                let uid = request.uid();
+                let gid = request.gid();
+
+                match fs
+                    .handle_setattr(request.inode, request.size(), mtime, mode, uid, gid)
+                    .await
+                {
+                    Ok(stat) => request.reply(&stat, TTL)?,
+                    Err(err) => request.io_fail(err)?,
+                }
+            }
+            Request::Open(request) => {
+                // Plugin files don't support truncation, but can be opened for write
+                if request.inode >= PmxcfsFilesystem::PLUGIN_INODE_OFFSET {
+                    // Check if plugin is being opened for writing
+                    let is_write = (request.flags & (libc::O_WRONLY | libc::O_RDWR)) != 0;
+
+                    if is_write {
+                        // Verify plugin is writable
+                        if let Some(plugin) = fs.resolve_plugin(request.inode) {
+                            // Check if plugin supports write (mode has write bit for owner)
+                            if (plugin.mode() & 0o200) == 0 {
+                                // Plugin is read-only
+                                request.io_fail(errno::eacces())?;
+                                continue;
+                            }
+                        }
+                    }
+
+                    // Verify plugin exists (getattr)
+                    match fs.handle_getattr(request.inode) {
+                        Ok(_) => request.reply(0)?,
+                        Err(err) => request.io_fail(err)?,
+                    }
+                } else {
+                    // Regular files: handle truncation
+                    if (request.flags & libc::O_TRUNC) != 0 {
+                        match fs
+                            .handle_setattr(request.inode, Some(0), None, None, None, None)
+                            .await
+                        {
+                            Ok(_) => request.reply(0)?,
+                            Err(err) => request.io_fail(err)?,
+                        }
+                    } else {
+                        match fs.handle_getattr(request.inode) {
+                            Ok(_) => request.reply(0)?,
+                            Err(err) => request.io_fail(err)?,
+                        }
+                    }
+                }
+            }
+            Request::Release(request) => {
+                request.reply()?;
+            }
+            Request::Forget(_request) => {
+                // Forget is a notification, no reply needed
+            }
+            Request::Statfs(request) => match fs.handle_statfs() {
+                Ok(stbuf) => request.reply(&stbuf)?,
+                Err(err) => request.io_fail(err)?,
+            },
+            other => {
+                tracing::warn!("Unsupported FUSE request: {:?}", other);
+                bail!("Unsupported FUSE request: {:?}", other);
+            }
+        }
+    }
+
+    Ok(())
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use tempfile::TempDir;
+
+    /// Helper to create a minimal PmxcfsFilesystem for testing
+    fn create_test_filesystem() -> (PmxcfsFilesystem, TempDir) {
+        let tmp_dir = TempDir::new().unwrap();
+        let db_path = tmp_dir.path().join("test.db");
+
+        let memdb = MemDb::open(&db_path, true).unwrap();
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let plugins = crate::plugins::init_plugins_for_test("testnode");
+        let status = Arc::new(Status::new(config.clone(), None));
+
+        let fs = PmxcfsFilesystem::new(memdb, config, None, plugins, status);
+        (fs, tmp_dir)
+    }
+
+    // ===== Inode Mapping Tests =====
+
+    #[test]
+    fn test_fuse_to_inode_mapping() {
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Root: FUSE nodeid 1 → internal inode 0
+        assert_eq!(fs.fuse_to_inode(1), 0);
+
+        // Regular inodes: N → N-1
+        assert_eq!(fs.fuse_to_inode(2), 1);
+        assert_eq!(fs.fuse_to_inode(10), 9);
+        assert_eq!(fs.fuse_to_inode(100), 99);
+
+        // Plugin inodes (>= PLUGIN_INODE_OFFSET) unchanged
+        assert_eq!(fs.fuse_to_inode(1000000), 1000000);
+        assert_eq!(fs.fuse_to_inode(1000001), 1000001);
+    }
+
+    #[test]
+    fn test_inode_to_fuse_mapping() {
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Root: internal inode 0 → FUSE nodeid 1
+        assert_eq!(fs.inode_to_fuse(0), 1);
+
+        // Regular inodes: N → N+1
+        assert_eq!(fs.inode_to_fuse(1), 2);
+        assert_eq!(fs.inode_to_fuse(9), 10);
+        assert_eq!(fs.inode_to_fuse(99), 100);
+
+        // Plugin inodes (>= PLUGIN_INODE_OFFSET) unchanged
+        assert_eq!(fs.inode_to_fuse(1000000), 1000000);
+        assert_eq!(fs.inode_to_fuse(1000001), 1000001);
+    }
+
+    #[test]
+    fn test_inode_mapping_roundtrip() {
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Test roundtrip for regular inodes
+        for inode in 0..1000 {
+            let fuse = fs.inode_to_fuse(inode);
+            let back = fs.fuse_to_inode(fuse);
+            assert_eq!(inode, back, "Roundtrip failed for inode {inode}");
+        }
+
+        // Test roundtrip for plugin inodes
+        for offset in 0..100 {
+            let inode = 1000000 + offset;
+            let fuse = fs.inode_to_fuse(inode);
+            let back = fs.fuse_to_inode(fuse);
+            assert_eq!(inode, back, "Roundtrip failed for plugin inode {inode}");
+        }
+    }
+
+    // ===== Path Privacy Tests =====
+
+    // Note: is_private_path() tests are in path::tests - no duplication needed here
+
+    // ===== Error Path Tests =====
+
+    #[tokio::test]
+    async fn test_lookup_nonexistent() {
+        use std::ffi::OsStr;
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Try to lookup a file that doesn't exist
+        let result = fs.handle_lookup(1, OsStr::new("nonexistent.txt")).await;
+
+        assert!(result.is_err(), "Lookup of nonexistent file should fail");
+        if let Err(e) = result {
+            assert_eq!(e.raw_os_error(), Some(libc::ENOENT));
+        }
+    }
+
+    #[test]
+    fn test_getattr_nonexistent_inode() {
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Try to get attributes for an inode that doesn't exist
+        let result = fs.handle_getattr(999999);
+
+        assert!(result.is_err(), "Getattr on nonexistent inode should fail");
+        if let Err(e) = result {
+            assert_eq!(e.raw_os_error(), Some(libc::ENOENT));
+        }
+    }
+
+    #[test]
+    fn test_read_directory_as_file() {
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Try to read the root directory as if it were a file
+        let result = fs.handle_read(1, 0, 100);
+
+        assert!(result.is_err(), "Reading directory as file should fail");
+        if let Err(e) = result {
+            assert_eq!(e.raw_os_error(), Some(libc::EISDIR));
+        }
+    }
+
+    #[tokio::test]
+    async fn test_write_to_nonexistent_file() {
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Try to write to a file that doesn't exist (the write path reports EACCES)
+        let result = fs.handle_write(999999, 0, b"data").await;
+
+        assert!(result.is_err(), "Writing to nonexistent file should fail");
+        if let Err(e) = result {
+            assert_eq!(e.raw_os_error(), Some(libc::EACCES));
+        }
+    }
+
+    #[tokio::test]
+    async fn test_unlink_directory_fails() {
+        use std::ffi::OsStr;
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Create a directory first by writing a file
+        let now = std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH)
+            .unwrap()
+            .as_secs() as u32;
+        fs.memdb
+            .write("/testdir/file.txt", 0, 0, now, b"test", false)
+            .expect("creating /testdir/file.txt should succeed");
+
+        // Look up testdir to verify it exists as a directory
+        let entry = fs
+            .memdb
+            .lookup_path("/testdir")
+            .expect("testdir should have been created");
+        assert!(entry.is_dir(), "testdir should be a directory");
+
+        // Try to unlink the directory (should fail)
+        let result = fs.handle_unlink(1, OsStr::new("testdir")).await;
+
+        assert!(result.is_err(), "Unlinking directory should fail");
+        // Note: may return EACCES if the internal lookup misses the directory,
+        // or EISDIR if it is found as a directory
+        let err_code = result.unwrap_err().raw_os_error();
+        assert!(
+            err_code == Some(libc::EISDIR) || err_code == Some(libc::EACCES),
+            "Expected EISDIR or EACCES, got {err_code:?}"
+        );
+    }
+
+    // ===== Plugin-related Tests =====
+
+    #[test]
+    fn test_plugin_inode_range() {
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Plugin inodes should be >= PLUGIN_INODE_OFFSET (1000000)
+        let plugin_inode = 1000000;
+
+        // Verify that plugin inodes don't overlap with regular inodes
+        assert!(plugin_inode >= 1000000);
+        assert_ne!(fs.fuse_to_inode(plugin_inode), plugin_inode - 1);
+        assert_eq!(fs.fuse_to_inode(plugin_inode), plugin_inode);
+    }
+
+    #[test]
+    fn test_plugin_list_is_deterministic() {
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        assert_eq!(
+            fs.plugins.list(),
+            vec![
+                ".clusterlog".to_string(),
+                ".debug".to_string(),
+                ".members".to_string(),
+                ".rrd".to_string(),
+                ".version".to_string(),
+                ".vmlist".to_string(),
+                "local".to_string(),
+                "lxc".to_string(),
+                "openvz".to_string(),
+                "qemu-server".to_string(),
+            ]
+        );
+    }
+
+    #[tokio::test]
+    async fn test_plugin_lookup_inode_assignment_is_stable() {
+        use std::ffi::OsStr;
+
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        for (idx, plugin_name) in fs.plugins.list().iter().enumerate() {
+            let entry = fs
+                .handle_lookup(1, OsStr::new(plugin_name))
+                .await
+                .unwrap_or_else(|err| panic!("lookup failed for {plugin_name}: {err}"));
+
+            assert_eq!(
+                entry.inode,
+                PmxcfsFilesystem::PLUGIN_INODE_OFFSET + idx as u64,
+                "plugin {plugin_name} should map to a stable inode"
+            );
+            assert_eq!(fs.handle_getattr(entry.inode).unwrap().st_ino, entry.inode);
+        }
+    }
+
+    #[test]
+    fn test_file_type_preservation_in_permissions() {
+        let (fs, _tmpdir) = create_test_filesystem();
+
+        // Create a file
+        let now = std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH)
+            .unwrap()
+            .as_secs() as u32;
+        fs.memdb
+            .write("/test.txt", 0, 0, now, b"test", false)
+            .expect("creating /test.txt should succeed");
+
+        let stat = fs
+            .handle_getattr(fs.inode_to_fuse(1))
+            .expect("getattr on the created file should succeed");
+        // Verify that file type bits are preserved (S_IFREG)
+        assert_eq!(stat.st_mode & libc::S_IFMT, libc::S_IFREG);
+    }
+}
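The FUSE-to-internal inode convention exercised by the tests above can be sketched standalone. This is a minimal illustration, not the patch's code; the offset value is assumed from the tests (the real code uses `PmxcfsFilesystem::PLUGIN_INODE_OFFSET`):

```rust
// Sketch of the inode mapping convention (assumed constant value).
// FUSE reserves nodeid 1 for the root while the memdb root is inode 0, so
// regular inodes are shifted by one; plugin inodes pass through unchanged.
const PLUGIN_INODE_OFFSET: u64 = 1_000_000;

fn fuse_to_inode(nodeid: u64) -> u64 {
    if nodeid >= PLUGIN_INODE_OFFSET { nodeid } else { nodeid - 1 }
}

fn inode_to_fuse(inode: u64) -> u64 {
    if inode >= PLUGIN_INODE_OFFSET { inode } else { inode + 1 }
}

fn main() {
    assert_eq!(fuse_to_inode(1), 0); // FUSE root -> internal root
    assert_eq!(inode_to_fuse(0), 1);
    assert_eq!(fuse_to_inode(inode_to_fuse(42)), 42); // roundtrip
    assert_eq!(fuse_to_inode(PLUGIN_INODE_OFFSET), PLUGIN_INODE_OFFSET);
    println!("inode mapping ok");
}
```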
diff --git a/src/pmxcfs-rs/pmxcfs/src/fuse/mod.rs b/src/pmxcfs-rs/pmxcfs/src/fuse/mod.rs
new file mode 100644
index 000000000..1157127cd
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/fuse/mod.rs
@@ -0,0 +1,4 @@
+mod filesystem;
+
+pub use filesystem::PmxcfsFilesystem;
+pub use filesystem::mount_fuse;
diff --git a/src/pmxcfs-rs/pmxcfs/src/ipc/mod.rs b/src/pmxcfs-rs/pmxcfs/src/ipc/mod.rs
new file mode 100644
index 000000000..2fe08e753
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/ipc/mod.rs
@@ -0,0 +1,16 @@
+//! IPC (Inter-Process Communication) subsystem
+//!
+//! This module handles libqb-compatible IPC communication between pmxcfs
+//! and client applications (e.g. pvestatd or pvesh).
+//!
+//! The IPC subsystem consists of:
+//! - Operation codes (CfsIpcOp) defining available IPC operations
+//! - Request types (IpcRequest) representing parsed client requests
+//! - Service handler (IpcHandler) implementing the request processing logic
+
+mod request;
+mod service;
+
+// Re-export public types
+pub use request::{CfsIpcOp, IpcRequest};
+pub use service::IpcHandler;
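Several of the request types parsed in the next file use fixed-width, NUL-padded C string fields (e.g. GET_STATUS carries a 256-byte name field followed by a 256-byte nodename field). A hedged, self-contained sketch of that layout, mirroring what the parser expects (illustrative names, not the patch's code):

```rust
// Parse a NUL-padded C string from a fixed-width field (sketch only).
fn parse_fixed_cstr(data: &[u8], field_len: usize) -> Option<String> {
    // The field must be fully present; take bytes up to the first NUL
    // and require valid UTF-8, like the real parser does.
    let field = data.get(..field_len)?;
    let end = field.iter().position(|&b| b == 0)?;
    std::str::from_utf8(&field[..end]).ok().map(str::to_string)
}

fn main() {
    // Build a 512-byte GET_STATUS-style payload: key "uptime", node "node1".
    let mut buf = vec![0u8; 512];
    buf[..6].copy_from_slice(b"uptime");
    buf[256..261].copy_from_slice(b"node1");

    assert_eq!(parse_fixed_cstr(&buf, 256).as_deref(), Some("uptime"));
    assert_eq!(parse_fixed_cstr(&buf[256..], 256).as_deref(), Some("node1"));
    println!("parsed GET_STATUS fields");
}
```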
diff --git a/src/pmxcfs-rs/pmxcfs/src/ipc/request.rs b/src/pmxcfs-rs/pmxcfs/src/ipc/request.rs
new file mode 100644
index 000000000..d9597b260
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/ipc/request.rs
@@ -0,0 +1,281 @@
+//! IPC request types and parsing
+//!
+//! This module defines the IPC operation codes and request message types
+//! used for communication between pmxcfs and client applications via libqb IPC.
+
+/// IPC operation codes (must match C version for compatibility)
+#[derive(Debug, Clone, Copy, PartialEq, Eq, num_enum::TryFromPrimitive)]
+#[repr(i32)]
+pub enum CfsIpcOp {
+    GetFsVersion = 1,
+    GetClusterInfo = 2,
+    GetGuestList = 3,
+    SetStatus = 4,
+    GetStatus = 5,
+    GetConfig = 6,
+    LogClusterMsg = 7,
+    GetClusterLog = 8,
+    GetRrdDump = 10,
+    GetGuestConfigProperty = 11,
+    VerifyToken = 12,
+    GetGuestConfigProperties = 13,
+}
+
+/// IPC request message
+///
+/// Represents deserialized IPC requests sent from clients via libqb IPC.
+/// Each variant corresponds to an IPC operation code and contains the
+/// deserialized request parameters.
+#[derive(Debug, Clone, PartialEq)]
+pub enum IpcRequest {
+    /// GET_FS_VERSION (op 1): Get filesystem version info
+    GetFsVersion,
+
+    /// GET_CLUSTER_INFO (op 2): Get cluster member list
+    GetClusterInfo,
+
+    /// GET_GUEST_LIST (op 3): Get VM/CT list
+    GetGuestList,
+
+    /// SET_STATUS (op 4): Update node status
+    SetStatus { name: String, data: Vec<u8> },
+
+    /// GET_STATUS (op 5): Get node status
+    /// C format: name (256 bytes) + nodename (256 bytes)
+    GetStatus { name: String, node_name: String },
+
+    /// GET_CONFIG (op 6): Read configuration file
+    GetConfig { path: String },
+
+    /// LOG_CLUSTER_MSG (op 7): Write to cluster log
+    LogClusterMsg {
+        priority: u8,
+        ident: String,
+        tag: String,
+        message: String,
+    },
+
+    /// GET_CLUSTER_LOG (op 8): Read cluster log
+    /// C struct has max_entries + 3 reserved u32s + user string
+    GetClusterLog { max_entries: usize, user: String },
+
+    /// GET_RRD_DUMP (op 10): Get RRD data dump
+    GetRrdDump,
+
+    /// GET_GUEST_CONFIG_PROPERTY (op 11): Get guest config property
+    GetGuestConfigProperty { vmid: u32, property: String },
+
+    /// VERIFY_TOKEN (op 12): Verify authentication token
+    VerifyToken { token: String },
+
+    /// GET_GUEST_CONFIG_PROPERTIES (op 13): Get multiple guest config properties
+    GetGuestConfigProperties { vmid: u32, properties: Vec<String> },
+}
+
+/// Parse a null-terminated C string from a byte slice.
+///
+/// Extracts up to the first NUL byte, validates UTF-8, and returns a `String`.
+/// `context` is used in error messages to identify the field.
+fn parse_cstr(data: &[u8], context: &str) -> anyhow::Result<String> {
+    std::ffi::CStr::from_bytes_until_nul(data)
+        .map_err(|_| anyhow::anyhow!("Invalid C string in {context}"))?
+        .to_str()
+        .map_err(|_| anyhow::anyhow!("Invalid UTF-8 in {context}"))
+        .map(str::to_string)
+}
+
+/// Parse a null-terminated C string from a fixed-size field within a byte slice.
+///
+/// Like `parse_cstr` but validates that `data` is at least `field_len` bytes and
+/// parses only the first `field_len` bytes.
+fn parse_cstr_field(data: &[u8], field_len: usize, context: &str) -> anyhow::Result<String> {
+    if data.len() < field_len {
+        anyhow::bail!("{context}: data too short ({} < {field_len})", data.len());
+    }
+    parse_cstr(&data[..field_len], context)
+}
+
+impl IpcRequest {
+    /// Deserialize an IPC request from message ID and data
+    pub fn deserialize(msg_id: i32, data: &[u8]) -> anyhow::Result<Self> {
+        let op = CfsIpcOp::try_from(msg_id)
+            .map_err(|_| anyhow::anyhow!("Unknown IPC operation code: {msg_id}"))?;
+
+        match op {
+            CfsIpcOp::GetFsVersion => Ok(IpcRequest::GetFsVersion),
+
+            CfsIpcOp::GetClusterInfo => Ok(IpcRequest::GetClusterInfo),
+
+            CfsIpcOp::GetGuestList => Ok(IpcRequest::GetGuestList),
+
+            CfsIpcOp::SetStatus => {
+                // SET_STATUS: name (256 bytes) + data (rest)
+                if data.len() < 256 {
+                    anyhow::bail!("SET_STATUS data too short");
+                }
+
+                let name = parse_cstr_field(data, 256, "SET_STATUS name")?;
+                let status_data = data[256..].to_vec();
+
+                Ok(IpcRequest::SetStatus {
+                    name,
+                    data: status_data,
+                })
+            }
+
+            CfsIpcOp::GetStatus => {
+                // GET_STATUS: name (256 bytes) + nodename (256 bytes)
+                // Matches C struct cfs_status_get_request_header_t (server.c:64-67)
+                if data.len() < 512 {
+                    anyhow::bail!("GET_STATUS data too short");
+                }
+
+                let name = parse_cstr_field(data, 256, "GET_STATUS name")?;
+                let node_name = parse_cstr_field(&data[256..], 256, "GET_STATUS node name")?;
+
+                Ok(IpcRequest::GetStatus { name, node_name })
+            }
+
+            CfsIpcOp::GetConfig => {
+                let path = parse_cstr(data, "GET_CONFIG path")?;
+                Ok(IpcRequest::GetConfig { path })
+            }
+
+            CfsIpcOp::LogClusterMsg => {
+                // LOG_CLUSTER_MSG: priority + ident_len + tag_len + strings
+                // C struct (server.c:69-75):
+                //   uint8_t priority;
+                //   uint8_t ident_len;  // Length INCLUDING null terminator
+                //   uint8_t tag_len;    // Length INCLUDING null terminator
+                //   char data[];        // ident\0 + tag\0 + message\0
+                if data.len() < 3 {
+                    anyhow::bail!("LOG_CLUSTER_MSG data too short");
+                }
+
+                let priority = data[0];
+                let ident_len = data[1] as usize;
+                let tag_len = data[2] as usize;
+
+                // Validate lengths (must include null terminator, so >= 1)
+                if ident_len < 1 || tag_len < 1 {
+                    anyhow::bail!("LOG_CLUSTER_MSG: ident_len or tag_len is 0");
+                }
+
+                // Calculate message length (C: datasize - ident_len - tag_len)
+                let msg_start = 3 + ident_len + tag_len;
+                if data.len() < msg_start + 1 {
+                    anyhow::bail!("LOG_CLUSTER_MSG data too short for message");
+                }
+
+                // Parse ident (null-terminated C string, length includes the NUL)
+                // C validates: msg[ident_len - 1] == 0
+                let ident_data = &data[3..3 + ident_len];
+                if ident_data[ident_len - 1] != 0 {
+                    anyhow::bail!("LOG_CLUSTER_MSG: ident not null-terminated");
+                }
+                let ident = parse_cstr(ident_data, "LOG_CLUSTER_MSG ident")?;
+
+                // Parse tag (null-terminated C string, length includes the NUL)
+                // C validates: msg[ident_len + tag_len - 1] == 0
+                let tag_data = &data[3 + ident_len..3 + ident_len + tag_len];
+                if tag_data[tag_len - 1] != 0 {
+                    anyhow::bail!("LOG_CLUSTER_MSG: tag not null-terminated");
+                }
+                let tag = parse_cstr(tag_data, "LOG_CLUSTER_MSG tag")?;
+
+                // Parse message (rest of data, null-terminated)
+                // C validates data[request_size] == 0, which reads one byte past the
+                // buffer (a bug); we are more lenient and read until the first NUL
+                // or the end of the data.
+                let message = parse_cstr(&data[msg_start..], "LOG_CLUSTER_MSG message")?;
+
+                Ok(IpcRequest::LogClusterMsg {
+                    priority,
+                    ident,
+                    tag,
+                    message,
+                })
+            }
+
+            CfsIpcOp::GetClusterLog => {
+                // GET_CLUSTER_LOG: C struct (server.c:77-83):
+                //   uint32_t max_entries;
+                //   uint32_t res1, res2, res3;  // reserved, unused
+                //   char user[];  // null-terminated user string for filtering
+                // Total header: 16 bytes, followed by user string
+                const HEADER_SIZE: usize = 16; // 4 u32 fields
+
+                if data.len() <= HEADER_SIZE {
+                    // C returns EINVAL if userlen <= 0
+                    anyhow::bail!("GET_CLUSTER_LOG: missing user string");
+                }
+
+                let max_entries = u32::from_le_bytes([data[0], data[1], data[2], data[3]]) as usize;
+                // Default to 50 if max_entries is 0 (matches C: rh->max_entries ? rh->max_entries : 50)
+                let max_entries = if max_entries == 0 { 50 } else { max_entries };
+
+                let user = parse_cstr(&data[HEADER_SIZE..], "GET_CLUSTER_LOG user")?;
+
+                Ok(IpcRequest::GetClusterLog { max_entries, user })
+            }
+
+            CfsIpcOp::GetRrdDump => Ok(IpcRequest::GetRrdDump),
+
+            CfsIpcOp::GetGuestConfigProperty => {
+                // GET_GUEST_CONFIG_PROPERTY: vmid (u32) + property (null-terminated)
+                if data.len() < 4 {
+                    anyhow::bail!("GET_GUEST_CONFIG_PROPERTY data too short");
+                }
+
+                let vmid = u32::from_le_bytes([data[0], data[1], data[2], data[3]]);
+                let property = parse_cstr(&data[4..], "GET_GUEST_CONFIG_PROPERTY property")?;
+
+                Ok(IpcRequest::GetGuestConfigProperty { vmid, property })
+            }
+
+            CfsIpcOp::VerifyToken => {
+                let token = parse_cstr(data, "VERIFY_TOKEN token")?;
+                Ok(IpcRequest::VerifyToken { token })
+            }
+
+            CfsIpcOp::GetGuestConfigProperties => {
+                // GET_GUEST_CONFIG_PROPERTIES: vmid (u32) + num_props (u8) + property list
+                if data.len() < 5 {
+                    anyhow::bail!("GET_GUEST_CONFIG_PROPERTIES data too short");
+                }
+
+                let vmid = u32::from_le_bytes([data[0], data[1], data[2], data[3]]);
+                let num_props = data[4] as usize;
+
+                if num_props == 0 {
+                    anyhow::bail!("GET_GUEST_CONFIG_PROPERTIES requires at least one property");
+                }
+
+                let mut properties = Vec::with_capacity(num_props);
+                let mut remaining = &data[5..];
+
+                for i in 0..num_props {
+                    if remaining.is_empty() {
+                        anyhow::bail!("Property {i} is missing");
+                    }
+
+                    let property = parse_cstr(remaining, &format!("property {i}"))?;
+
+                    // Validate property name starts with lowercase letter
+                    if !property.starts_with(|c: char| c.is_ascii_lowercase()) {
+                        anyhow::bail!("Property {i} does not start with [a-z]");
+                    }
+
+                    remaining = &remaining[property.len() + 1..]; // +1 for null terminator
+                    properties.push(property);
+                }
+
+                // Verify no leftover data
+                if !remaining.is_empty() {
+                    anyhow::bail!("Leftover data after parsing {num_props} properties");
+                }
+
+                Ok(IpcRequest::GetGuestConfigProperties { vmid, properties })
+            }
+        }
+    }
+}
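The LOG_CLUSTER_MSG wire format documented in the parser above (priority u8, ident_len u8 and tag_len u8 both including the NUL, then `ident\0 tag\0 message\0`) can be round-tripped in a small standalone sketch. This mirrors the documented layout; function names are illustrative, not part of the patch:

```rust
// Encode a LOG_CLUSTER_MSG payload per the documented C struct layout.
fn encode_log_msg(priority: u8, ident: &str, tag: &str, message: &str) -> Vec<u8> {
    let mut buf = vec![
        priority,
        (ident.len() + 1) as u8, // length includes the NUL terminator
        (tag.len() + 1) as u8,
    ];
    for s in [ident, tag, message] {
        buf.extend_from_slice(s.as_bytes());
        buf.push(0);
    }
    buf
}

// Decode it back, applying the same bounds and NUL checks as the parser.
fn decode_log_msg(data: &[u8]) -> Option<(u8, String, String, String)> {
    let priority = *data.first()?;
    let ident_len = *data.get(1)? as usize;
    let tag_len = *data.get(2)? as usize;
    let cstr = |bytes: &[u8]| -> Option<String> {
        let end = bytes.iter().position(|&b| b == 0)?;
        Some(std::str::from_utf8(&bytes[..end]).ok()?.to_string())
    };
    let ident = cstr(data.get(3..3 + ident_len)?)?;
    let tag = cstr(data.get(3 + ident_len..3 + ident_len + tag_len)?)?;
    let message = cstr(data.get(3 + ident_len + tag_len..)?)?;
    Some((priority, ident, tag, message))
}

fn main() {
    let buf = encode_log_msg(6, "root@pam", "cluster", "node joined");
    assert_eq!(
        decode_log_msg(&buf),
        Some((6, "root@pam".into(), "cluster".into(), "node joined".into()))
    );
    println!("LOG_CLUSTER_MSG round-trip ok");
}
```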
diff --git a/src/pmxcfs-rs/pmxcfs/src/ipc/service.rs b/src/pmxcfs-rs/pmxcfs/src/ipc/service.rs
new file mode 100644
index 000000000..97e709bbd
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/ipc/service.rs
@@ -0,0 +1,920 @@
+//! IPC Service implementation
+//!
+//! This module implements the IPC service handler that processes requests
+//! from client applications via libqb-compatible IPC.
+
+use super::IpcRequest;
+use async_trait::async_trait;
+use pmxcfs_api_types::errno;
+use pmxcfs_config::Config;
+use pmxcfs_dfsm::{Dfsm, KvStoreMessage};
+use pmxcfs_ipc::{Handler, Permissions, Request, Response};
+use pmxcfs_memdb::MemDb;
+use pmxcfs_status as status;
+use std::io::Error as IoError;
+use std::sync::Arc;
+
+use crate::path::is_private_path;
+use crate::status_messages::{build_cluster_info_value, build_version_value};
+
+/// IPC handler for pmxcfs protocol operations
+pub struct IpcHandler {
+    memdb: MemDb,
+    status: Arc<status::Status>,
+    config: Arc<Config>,
+    www_data_gid: u32,
+    /// Kvstore DFSM for broadcasting cluster log entries (None in local mode)
+    status_dfsm: Option<Arc<Dfsm<KvStoreMessage>>>,
+}
+
+impl IpcHandler {
+    /// Create a new IPC handler
+    pub fn new(
+        memdb: MemDb,
+        status: Arc<status::Status>,
+        config: Arc<Config>,
+        www_data_gid: u32,
+    ) -> Self {
+        Self {
+            memdb,
+            status,
+            config,
+            www_data_gid,
+            status_dfsm: None,
+        }
+    }
+
+    /// Attach the kvstore DFSM so cluster log entries are broadcast to all nodes
+    pub fn with_status_dfsm(mut self, status_dfsm: Arc<Dfsm<KvStoreMessage>>) -> Self {
+        self.status_dfsm = Some(status_dfsm);
+        self
+    }
+}
+
+impl IpcHandler {
+    /// Handle an IPC request and return (error_code, response_data)
+    async fn handle_request(
+        &self,
+        request: IpcRequest,
+        is_read_only: bool,
+        client_pid: u32,
+    ) -> (i32, Vec<u8>) {
+        let result = match request {
+            IpcRequest::GetFsVersion => self.handle_get_fs_version(),
+            IpcRequest::GetClusterInfo => self.handle_get_cluster_info(),
+            IpcRequest::GetGuestList => self.handle_get_guest_list(),
+            IpcRequest::GetConfig { path } => self.handle_get_config(&path, is_read_only),
+            IpcRequest::GetStatus { name, node_name } => self.handle_get_status(&name, &node_name),
+            IpcRequest::SetStatus { name, data } => {
+                if is_read_only {
+                    Err(errno::eperm())
+                } else {
+                    self.handle_set_status(&name, &data).await
+                }
+            }
+            IpcRequest::LogClusterMsg {
+                priority,
+                ident,
+                tag,
+                message,
+            } => {
+                if is_read_only {
+                    Err(errno::eperm())
+                } else {
+                    self.handle_log_cluster_msg(priority, &ident, &tag, &message, client_pid)
+                }
+            }
+            IpcRequest::GetClusterLog { max_entries, user } => {
+                self.handle_get_cluster_log(max_entries, &user)
+            }
+            IpcRequest::GetRrdDump => self.handle_get_rrd_dump(),
+            IpcRequest::GetGuestConfigProperty { vmid, property } => {
+                self.handle_get_guest_config_property(vmid, &property)
+            }
+            IpcRequest::VerifyToken { token } => self.handle_verify_token(&token),
+            IpcRequest::GetGuestConfigProperties { vmid, properties } => {
+                self.handle_get_guest_config_properties(vmid, &properties)
+            }
+        };
+
+        match result {
+            Ok(response_data) => (0, response_data),
+            Err(e) => {
+                let error_code = if let Some(os_error) = e.raw_os_error() {
+                    -os_error
+                } else {
+                    -libc::EIO
+                };
+                tracing::debug!("Request error: {}", e);
+                (error_code, Vec::new())
+            }
+        }
+    }
+
+    /// GET_FS_VERSION: Return filesystem version information
+    fn handle_get_fs_version(&self) -> Result<Vec<u8>, IoError> {
+        serde_json::to_vec(&build_version_value(&self.config, &self.status))
+            .map_err(|_| errno::eio())
+    }
+
+    /// GET_CLUSTER_INFO: Return cluster member list
+    fn handle_get_cluster_info(&self) -> Result<Vec<u8>, IoError> {
+        serde_json::to_vec(&build_cluster_info_value(&self.config, &self.status))
+            .map_err(|_| errno::eio())
+    }
+
+    /// GET_GUEST_LIST: Return VM/CT list
+    fn handle_get_guest_list(&self) -> Result<Vec<u8>, IoError> {
+        let vmlist_data = self.status.get_vmlist();
+
+        // Convert VM list to JSON format matching C implementation
+        let mut ids = serde_json::Map::new();
+        for (vmid, vm_entry) in vmlist_data {
+            ids.insert(
+                vmid.to_string(),
+                serde_json::json!({
+                    "node": vm_entry.node,
+                    "type": vm_entry.vm_type.to_string(),
+                    "version": vm_entry.version,
+                }),
+            );
+        }
+
+        let vmlist = serde_json::json!({
+            "version": self.status.get_vmlist_version(),
+            "ids": ids,
+        });
+
+        Ok(vmlist.to_string().into_bytes())
+    }
+
+    /// GET_CONFIG: Read configuration file
+    fn handle_get_config(&self, path: &str, is_read_only: bool) -> Result<Vec<u8>, IoError> {
+        // Check if read-only client is trying to access private path
+        if is_read_only && is_private_path(path) {
+            return Err(errno::eperm());
+        }
+
+        // Read from memdb
+        match self.memdb.read(path, 0, 1024 * 1024) {
+            Ok(data) => Ok(data),
+            Err(_) => Err(errno::enoent()),
+        }
+    }
+
+    /// GET_STATUS: Get node status
+    ///
+    /// Matches C implementation: cfs_create_status_msg(outbuf, nodename, name)
+    /// where nodename is the node to query and name is the specific status key.
+    ///
+    /// C implementation (server.c:233, status.c:1640-1668):
+    /// - If name is empty: return ENOENT
+    /// - Local node: look up bare `name` key in node_status (cfs_status.kvhash)
+    /// - Remote node: resolve nodename→nodeid, look up in kvstore (clnode->kvhash)
+    fn handle_get_status(&self, name: &str, nodename: &str) -> Result<Vec<u8>, IoError> {
+        if name.is_empty() {
+            return Err(errno::enoent());
+        }
+
+        let is_local = nodename.is_empty() || nodename == self.config.nodename();
+
+        if is_local {
+            // Local node: look up bare key in node_status (matches C: cfs_status.kvhash[key])
+            if let Some(ns) = self.status.get_node_status(name) {
+                return Ok(ns.data);
+            }
+        } else {
+            // Remote node: resolve nodename→nodeid, look up in kvstore
+            // (matches C: clnode->kvhash[key] via clinfo->nodes_byname)
+            if let Some(info) = self.status.get_cluster_info()
+                && let Some(&nodeid) = info.nodes_by_name.get(nodename)
+                && let Some(data) = self.status.get_node_kv(nodeid, name)
+            {
+                return Ok(data);
+            }
+        }
+
+        Err(errno::enoent())
+    }
+
+    /// SET_STATUS: Update node status
+    async fn handle_set_status(&self, name: &str, status_data: &[u8]) -> Result<Vec<u8>, IoError> {
+        if status_data.len() > pmxcfs_api_types::CFS_MAX_STATUS_SIZE {
+            return Err(errno::efbig());
+        }
+        self.status
+            .set_node_status(name.to_string(), status_data.to_vec())
+            .await
+            .map_err(|_| errno::eio())?;
+
+        Ok(Vec::new())
+    }
+
+    /// LOG_CLUSTER_MSG: Write to cluster log
+    ///
+    /// In cluster mode: broadcast via kvstore DFSM only — CPG guarantees self-delivery,
+    /// so the sender's own log is populated via the DFSM callback just like any other node.
+    /// This avoids the double-entry bug where a local insert (uid=A) and DFSM self-delivery
+    /// (uid=B, A≠B) both pass dedup because they have different UIDs.
+    ///
+    /// In single-node mode (no DFSM): insert directly into the local log.
+    fn handle_log_cluster_msg(
+        &self,
+        priority: u8,
+        ident: &str,
+        tag: &str,
+        message: &str,
+        client_pid: u32,
+    ) -> Result<Vec<u8>, IoError> {
+        let node = self.config.nodename().to_string();
+
+        let timestamp = std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH)
+            .map_err(|_| errno::eio())?
+            .as_secs();
+
+        if let Some(dfsm) = &self.status_dfsm {
+            // Cluster mode: broadcast via DFSM; CPG self-delivery inserts into local log.
+            let msg = KvStoreMessage::Log {
+                time: timestamp as u32,
+                priority,
+                node,
+                ident: ident.to_string(),
+                tag: tag.to_string(),
+                message: message.to_string(),
+            };
+            if let Err(e) = dfsm.send_message(msg) {
+                tracing::warn!("Failed to broadcast cluster log via DFSM: {e}");
+            }
+        } else {
+            // Single-node mode: no DFSM, insert directly.
+            let entry = status::ClusterLogEntry {
+                uid: 0,
+                timestamp,
+                priority,
+                tag: tag.to_string(),
+                pid: client_pid,
+                node,
+                ident: ident.to_string(),
+                message: message.to_string(),
+            };
+            self.status.add_log_entry(entry);
+        }
+
+        Ok(Vec::new())
+    }
+
+    /// GET_CLUSTER_LOG: Read cluster log
+    ///
+    /// The `user` parameter filters log entries by user, matching the C
+    /// implementation's cfs_cluster_log_dump(outbuf, user, max).
+    /// Returns JSON format: {"data": [{entry1}, {entry2}, ...]}
+    fn handle_get_cluster_log(&self, max_entries: usize, user: &str) -> Result<Vec<u8>, IoError> {
+        Ok(self
+            .status
+            .dump_cluster_log_json(max_entries, user)
+            .into_bytes())
+    }
+
+    /// GET_RRD_DUMP: Get RRD data dump in C-compatible text format
+    fn handle_get_rrd_dump(&self) -> Result<Vec<u8>, IoError> {
+        let rrd_dump = self.status.get_rrd_dump();
+        Ok(rrd_dump.into_bytes())
+    }
+
+    /// GET_GUEST_CONFIG_PROPERTY: Get guest config property
+    fn handle_get_guest_config_property(
+        &self,
+        vmid: u32,
+        property: &str,
+    ) -> Result<Vec<u8>, IoError> {
+        // Delegate to multi-property handler with single property
+        self.handle_get_guest_config_properties_impl(&[property], vmid)
+    }
+
+    /// VERIFY_TOKEN: Verify authentication token
+    ///
+    /// Matches C implementation (server.c:399-448):
+    /// - Empty token → EINVAL
+    /// - Token containing newline → EINVAL
+    /// - Exact line match (no trimming), splitting on '\n' only
+    fn handle_verify_token(&self, token: &str) -> Result<Vec<u8>, IoError> {
+        // Reject empty tokens
+        if token.is_empty() {
+            return Err(errno::einval());
+        }
+
+        // Reject tokens containing newlines (would break line-based matching)
+        if token.contains('\n') {
+            return Err(errno::einval());
+        }
+
+        // Read token.cfg from database
+        match self.memdb.read("priv/token.cfg", 0, 1024 * 1024) {
+            Ok(token_data) => {
+                // Check if token exists in file (one token per line)
+                // C splits on '\n' only (not '\r\n') and does exact match (no trim)
+                let token_str = String::from_utf8_lossy(&token_data);
+                for line in token_str.split('\n') {
+                    if line == token {
+                        return Ok(Vec::new()); // Success
+                    }
+                }
+                Err(errno::enoent())
+            }
+            Err(_) => Err(errno::enoent()),
+        }
+    }
+
+    /// GET_GUEST_CONFIG_PROPERTIES: Get multiple guest config properties
+    fn handle_get_guest_config_properties(
+        &self,
+        vmid: u32,
+        properties: &[String],
+    ) -> Result<Vec<u8>, IoError> {
+        let property_refs: Vec<&str> = properties.iter().map(String::as_str).collect();
+        self.handle_get_guest_config_properties_impl(&property_refs, vmid)
+    }
+
+    /// Core implementation for getting guest config properties
+    fn handle_get_guest_config_properties_impl(
+        &self,
+        properties: &[&str],
+        vmid: u32,
+    ) -> Result<Vec<u8>, IoError> {
+        // Validate vmid: 0 selects all guests; real guest IDs start at 100, so 1..=99 is invalid
+        if vmid > 0 && vmid < 100 {
+            tracing::debug!("vmid out of range: {}", vmid);
+            return Err(errno::einval());
+        }
+
+        // Build response as a map: vmid -> {property -> value}
+        let mut response_map: serde_json::Map<String, serde_json::Value> = serde_json::Map::new();
+
+        if vmid >= 100 {
+            // Get specific VM
+            let vmlist = self.status.get_vmlist();
+
+            let vm_entry = vmlist.get(&vmid).ok_or_else(errno::enoent)?;
+
+            // Get config path for this VM
+            let config_path = format!(
+                "nodes/{}/{}/{}.conf",
+                &vm_entry.node,
+                vm_entry.vm_type.config_dir(),
+                vmid
+            );
+
+            // Read config from memdb
+            match self.memdb.read(&config_path, 0, 1024 * 1024) {
+                Ok(config_data) => {
+                    let config_str = String::from_utf8_lossy(&config_data);
+                    let values = extract_properties(&config_str, properties);
+
+                    if !values.is_empty() {
+                        let json_val = serde_json::to_value(&values).map_err(|e| {
+                            tracing::error!("Failed to serialize VM {} properties: {}", vmid, e);
+                            errno::eio()
+                        })?;
+                        response_map.insert(vmid.to_string(), json_val);
+                    }
+                }
+                Err(e) => {
+                    tracing::debug!("Failed to read config for VM {}: {}", vmid, e);
+                    return Err(errno::eio());
+                }
+            }
+        } else {
+            // vmid == 0: Get properties from all VMs
+            let vmlist = self.status.get_vmlist();
+
+            for (vm_id, vm_entry) in vmlist.iter() {
+                let config_path = format!(
+                    "nodes/{}/{}/{}.conf",
+                    &vm_entry.node,
+                    vm_entry.vm_type.config_dir(),
+                    vm_id
+                );
+
+                // Read config from memdb
+                if let Ok(config_data) = self.memdb.read(&config_path, 0, 1024 * 1024) {
+                    let config_str = String::from_utf8_lossy(&config_data);
+                    let values = extract_properties(&config_str, properties);
+
+                    if !values.is_empty() {
+                        let json_val = serde_json::to_value(&values).map_err(|e| {
+                            tracing::error!("Failed to serialize VM {} properties: {}", vm_id, e);
+                            errno::eio()
+                        })?;
+                        response_map.insert(vm_id.to_string(), json_val);
+                    }
+                }
+            }
+        }
+
+        // Serialize to JSON with pretty printing (matches C output format)
+        let json_str = serde_json::to_string_pretty(&response_map).map_err(|e| {
+            tracing::error!("Failed to serialize JSON: {}", e);
+            errno::eio()
+        })?;
+
+        Ok(json_str.into_bytes())
+    }
+}
+
+/// Extract property values from a VM config file
+///
+/// Parses config file line-by-line looking for "property: value" patterns.
+/// Matches the C implementation's parsing behavior from status.c:767-796.
+///
+/// C format regex: `^([a-z][a-z_]*\d*):\s*(.+?)\s*$`
+/// - Property name must start with a lowercase letter (this port only checks
+///   the first character, so it accepts slightly more than the C regex)
+/// - Followed by a colon and optional whitespace
+/// - Value is trimmed of leading/trailing whitespace
+/// - Stops at snapshot sections (lines starting with '[')
+///
+/// Returns a map of property names to their values.
+fn extract_properties(
+    config: &str,
+    properties: &[&str],
+) -> std::collections::HashMap<String, String> {
+    let mut values = std::collections::HashMap::new();
+
+    // Parse config line by line
+    for line in config.lines() {
+        // Stop at snapshot or pending section markers (matches C implementation)
+        if line.starts_with('[') {
+            break;
+        }
+
+        // Skip empty lines
+        if line.is_empty() {
+            continue;
+        }
+
+        // Find colon separator (required in VM config format)
+        let Some(colon_pos) = line.find(':') else {
+            continue;
+        };
+
+        // Extract key (property name)
+        let key = &line[..colon_pos];
+
+        // Property must start with lowercase letter (matches C regex check)
+        if !key.starts_with(|c: char| c.is_ascii_lowercase()) {
+            continue;
+        }
+
+        // Extract value after colon
+        let value = &line[colon_pos + 1..];
+
+        // Trim leading and trailing whitespace from value (matches C implementation)
+        let value = value.trim();
+
+        // Skip if value is empty after trimming
+        if value.is_empty() {
+            continue;
+        }
+
+        // Check if this is one of the requested properties
+        if properties.contains(&key) {
+            values.insert(key.to_string(), value.to_string());
+        }
+    }
+
+    values
+}
+
+#[async_trait]
+impl Handler for IpcHandler {
+    fn authenticate(&self, uid: u32, gid: u32) -> Option<Permissions> {
+        // Root with gid 0 gets read-write access
+        // Matches C: (uid == 0 && gid == 0) branch in server.c:111
+        if uid == 0 && gid == 0 {
+            tracing::debug!(
+                "IPC authentication: uid={}, gid={} - granted ReadWrite (root)",
+                uid,
+                gid
+            );
+            return Some(Permissions::ReadWrite);
+        }
+
+        // www-data group gets read-only access (regardless of uid)
+        // Matches C: (gid == cfs.gid) branch in server.c:111
+        if gid == self.www_data_gid {
+            tracing::debug!(
+                "IPC authentication: uid={}, gid={} - granted ReadOnly (www-data group)",
+                uid,
+                gid
+            );
+            return Some(Permissions::ReadOnly);
+        }
+
+        // Reject all other connections with security logging
+        tracing::warn!(
+            "IPC authentication failed: uid={}, gid={} - access denied (not root or www-data group)",
+            uid,
+            gid
+        );
+        None
+    }
+
+    async fn handle(&self, request: Request) -> Response {
+        // Deserialize IPC request from message ID and data
+        let ipc_request = match IpcRequest::deserialize(request.msg_id, &request.data) {
+            Ok(req) => req,
+            Err(e) => {
+                tracing::warn!(
+                    "Failed to deserialize IPC request (msg_id={}): {}",
+                    request.msg_id,
+                    e
+                );
+                return Response::err(-libc::EINVAL);
+            }
+        };
+
+        let (error_code, data) = self
+            .handle_request(ipc_request, request.is_read_only, request.pid)
+            .await;
+
+        Response { error_code, data }
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use pmxcfs_api_types::MemberInfo;
+    use pmxcfs_config::Config;
+    use pmxcfs_test_utils::{create_minimal_test_db, create_test_config};
+    use std::time::{SystemTime, UNIX_EPOCH};
+    use tempfile::TempDir;
+
+    fn create_handler() -> (TempDir, IpcHandler) {
+        let (tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let config = create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config.clone());
+
+        (tmpdir, IpcHandler::new(memdb, status, config, 33))
+    }
+
+    #[test]
+    fn test_cluster_log_json_uses_stored_uid_and_pid() {
+        let (_tmpdir, handler) = create_handler();
+        handler.status.clear_cluster_log();
+
+        handler
+            .handle_log_cluster_msg(5, "pmxcfs", "cluster", "ipc test entry", 4242)
+            .unwrap();
+
+        let stored_entry = handler
+            .status
+            .get_log_entries(1)
+            .into_iter()
+            .next()
+            .expect("cluster log entry should be stored");
+        let response = handler.handle_get_cluster_log(50, "").unwrap();
+        let parsed: serde_json::Value = serde_json::from_slice(&response).unwrap();
+        let first_entry = parsed["data"][0].as_object().unwrap();
+
+        assert_eq!(
+            first_entry["uid"].as_u64().unwrap(),
+            stored_entry.uid as u64,
+            "GET_CLUSTER_LOG should use the stored uid"
+        );
+        assert_eq!(
+            first_entry["pid"].as_u64().unwrap(),
+            stored_entry.pid as u64,
+            "GET_CLUSTER_LOG should use the stored pid"
+        );
+        assert_eq!(stored_entry.pid, 4242);
+    }
+
+    #[tokio::test]
+    async fn test_get_fs_version_matches_c_shape() {
+        let (_tmpdir, handler) = create_handler();
+
+        handler
+            .status
+            .set_node_status("agent".to_string(), b"up".to_vec())
+            .await
+            .unwrap();
+        handler.status.increment_path_version("corosync.conf");
+
+        let response = handler.handle_get_fs_version().unwrap();
+        let parsed: serde_json::Value = serde_json::from_slice(&response).unwrap();
+
+        assert!(parsed["starttime"].is_number());
+        assert!(parsed["clinfo"].is_number());
+        assert!(parsed["vmlist"].is_number());
+        assert!(parsed["corosync.conf"].is_number());
+        assert!(parsed["kvstore"].is_object());
+        assert_eq!(parsed["kvstore"]["testnode"]["agent"], 1);
+        assert!(parsed.get("protocol").is_none());
+        assert!(parsed.get("cluster").is_none());
+    }
+
+    #[test]
+    fn test_get_config_rejects_nested_private_paths_for_read_only_clients() {
+        let (_tmpdir, handler) = create_handler();
+
+        let err = handler
+            .handle_get_config("nodes/testnode/priv/token.cfg", true)
+            .expect_err("read-only GET_CONFIG should reject node private paths");
+        assert_eq!(err.raw_os_error(), Some(libc::EPERM));
+    }
+
+    #[tokio::test]
+    async fn test_set_status_preserves_efbig() {
+        let (_tmpdir, handler) = create_handler();
+
+        let err = handler
+            .handle_set_status(
+                "oversized",
+                &vec![0u8; pmxcfs_api_types::CFS_MAX_STATUS_SIZE + 1],
+            )
+            .await
+            .expect_err("oversized SET_STATUS should fail");
+        assert_eq!(err.raw_os_error(), Some(libc::EFBIG));
+    }
+
+    #[test]
+    fn test_extract_properties() {
+        let config = r#"
+# VM Configuration
+memory: 2048
+cores: 4
+sockets: 1
+cpu: host
+boot: order=scsi0;net0
+name: test-vm
+onboot: 1
+"#;
+
+        let properties = vec!["memory", "cores", "name", "nonexistent"];
+        let result = extract_properties(config, &properties);
+
+        assert_eq!(result.get("memory"), Some(&"2048".to_string()));
+        assert_eq!(result.get("cores"), Some(&"4".to_string()));
+        assert_eq!(result.get("name"), Some(&"test-vm".to_string()));
+        assert_eq!(result.get("nonexistent"), None);
+    }
+
+    #[test]
+    fn test_extract_properties_empty_config() {
+        let config = "";
+        let properties = vec!["memory"];
+        let result = extract_properties(config, &properties);
+        assert!(result.is_empty());
+    }
+
+    #[test]
+    fn test_extract_properties_stops_at_snapshot() {
+        let config = r#"
+memory: 2048
+cores: 4
+[snapshot]
+memory: 4096
+name: snapshot-value
+"#;
+        let properties = vec!["memory", "cores", "name"];
+        let result = extract_properties(config, &properties);
+
+        // Should stop at [snapshot] marker
+        assert_eq!(result.get("memory"), Some(&"2048".to_string()));
+        assert_eq!(result.get("cores"), Some(&"4".to_string()));
+        assert_eq!(result.get("name"), None); // After [snapshot], should not be parsed
+    }
+
+    #[test]
+    fn test_extract_properties_with_special_chars() {
+        let config = r#"
+name: test"vm
+description: Line1\nLine2
+path: /path/to\file
+"#;
+
+        let properties = vec!["name", "description", "path"];
+        let result = extract_properties(config, &properties);
+
+        assert_eq!(result.get("name"), Some(&r#"test"vm"#.to_string()));
+        assert_eq!(
+            result.get("description"),
+            Some(&r#"Line1\nLine2"#.to_string())
+        );
+        assert_eq!(result.get("path"), Some(&r#"/path/to\file"#.to_string()));
+    }
+
+    #[test]
+    fn test_extract_properties_whitespace_handling() {
+        let config = r#"
+memory:  2048
+cores:4
+name:   test-vm
+"#;
+
+        let properties = vec!["memory", "cores", "name"];
+        let result = extract_properties(config, &properties);
+
+        // Values should be trimmed of leading/trailing whitespace
+        assert_eq!(result.get("memory"), Some(&"2048".to_string()));
+        assert_eq!(result.get("cores"), Some(&"4".to_string()));
+        assert_eq!(result.get("name"), Some(&"test-vm".to_string()));
+    }
+
+    #[test]
+    fn test_extract_properties_invalid_format() {
+        let config = r#"
+Memory: 2048
+CORES: 4
+_private: value
+123: value
+name value
+"#;
+
+        let properties = vec!["Memory", "CORES", "_private", "123", "name"];
+        let result = extract_properties(config, &properties);
+
+        // None should match because:
+        // - "Memory" starts with uppercase
+        // - "CORES" starts with uppercase
+        // - "_private" starts with underscore
+        // - "123" starts with digit
+        // - "name value" has no colon
+        assert!(result.is_empty());
+    }
+
+    #[test]
+    fn test_json_serialization_with_serde() {
+        // Verify that serde_json properly handles escaping
+        let mut values = std::collections::HashMap::new();
+        values.insert("name".to_string(), r#"test"vm"#.to_string());
+        values.insert("description".to_string(), "Line1\nLine2".to_string());
+
+        let json = serde_json::to_string(&values).unwrap();
+
+        // serde_json should properly escape quotes and newlines
+        assert!(json.contains(r#""test\"vm""#));
+        assert!(json.contains(r#"\n"#));
+    }
+
+    #[test]
+    fn test_json_pretty_format() {
+        // Verify pretty printing works
+        let mut response_map = serde_json::Map::new();
+        let mut vm_props = std::collections::HashMap::new();
+        vm_props.insert("memory".to_string(), "2048".to_string());
+        vm_props.insert("cores".to_string(), "4".to_string());
+
+        response_map.insert("100".to_string(), serde_json::to_value(&vm_props).unwrap());
+
+        let json_str = serde_json::to_string_pretty(&response_map).unwrap();
+
+        // Pretty format should have newlines
+        assert!(json_str.contains('\n'));
+        // Should contain the VM ID and properties
+        assert!(json_str.contains("100"));
+        assert!(json_str.contains("memory"));
+        assert!(json_str.contains("2048"));
+    }
+
+    #[test]
+    fn test_get_cluster_info_matches_c_shape_without_nodes() {
+        let (_tmpdir, handler) = create_handler();
+        handler.status.init_cluster("test-cluster".to_string());
+
+        let response = handler.handle_get_cluster_info().unwrap();
+        let parsed: serde_json::Value = serde_json::from_slice(&response).unwrap();
+
+        assert_eq!(parsed["nodename"], "testnode");
+        assert!(parsed["version"].is_number());
+        assert!(parsed.get("cluster").is_none());
+        assert!(parsed.get("nodelist").is_none());
+    }
+
+    #[tokio::test]
+    async fn test_get_cluster_info_uses_actual_cluster_data() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let config = Config::new(
+            "node1",
+            "192.168.1.5".parse().unwrap(),
+            33,
+            0,
+            false,
+            "test-cluster",
+        )
+        .into_shared();
+        let status = pmxcfs_status::init_with_config(config.clone());
+        let handler = IpcHandler::new(memdb, status.clone(), config, 33);
+
+        status
+            .update_cluster_info(
+                "test-cluster".to_string(),
+                17,
+                vec![
+                    (1, "node1".to_string(), "10.0.0.1".to_string()),
+                    (2, "node2".to_string(), "".to_string()),
+                ],
+            )
+            .unwrap();
+
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .unwrap()
+            .as_secs();
+        status.update_members(&[MemberInfo {
+            node_id: 1,
+            pid: 100,
+            joined_at: now,
+        }]);
+        status.set_quorate(true);
+        status
+            .set_node_status("nodeip".to_string(), b"10.10.10.1\0".to_vec())
+            .await
+            .unwrap();
+        status.set_node_kv(2, "nodeip".to_string(), b"10.10.10.2\0".to_vec());
+
+        let response = handler.handle_get_cluster_info().unwrap();
+        let parsed: serde_json::Value = serde_json::from_slice(&response).unwrap();
+
+        assert_eq!(parsed["nodename"], "node1");
+        assert!(parsed["version"].as_u64().unwrap() >= 1);
+        assert_eq!(parsed["cluster"]["name"], "test-cluster");
+        assert_eq!(parsed["cluster"]["version"], 17);
+        assert_eq!(parsed["cluster"]["nodes"], 2);
+        assert_eq!(parsed["cluster"]["quorate"], 1);
+        assert_eq!(parsed["nodelist"]["node1"]["id"], 1);
+        assert_eq!(parsed["nodelist"]["node1"]["online"], 1);
+        assert_eq!(parsed["nodelist"]["node1"]["ip"], "10.10.10.1");
+        assert_eq!(parsed["nodelist"]["node2"]["id"], 2);
+        assert_eq!(parsed["nodelist"]["node2"]["online"], 0);
+        assert_eq!(parsed["nodelist"]["node2"]["ip"], "10.10.10.2");
+    }
+
+    #[test]
+    fn test_get_cluster_info_omits_missing_ip_field() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let config = create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config.clone());
+        let handler = IpcHandler::new(memdb, status.clone(), config, 33);
+
+        status
+            .update_cluster_info(
+                "test-cluster".to_string(),
+                3,
+                vec![(2, "node2".to_string(), "".to_string())],
+            )
+            .unwrap();
+
+        let response = handler.handle_get_cluster_info().unwrap();
+        let parsed: serde_json::Value = serde_json::from_slice(&response).unwrap();
+
+        assert!(parsed["nodelist"]["node2"].get("ip").is_none());
+    }
+
+    #[tokio::test]
+    async fn test_get_rrd_dump_includes_remote_rrd_updates() {
+        let (_tmpdir, handler) = create_handler();
+        handler.status.init_cluster("test-cluster".to_string());
+        handler
+            .status
+            .register_node(1, "testnode".to_string(), "192.168.1.10".to_string());
+        handler
+            .status
+            .register_node(2, "node2".to_string(), "192.168.1.11".to_string());
+
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .unwrap()
+            .as_secs();
+
+        handler
+            .status
+            .set_node_status(
+                "rrd/pve2-node/testnode".to_string(),
+                format!("{now}:12345:1.0:8:0.4:0.1:16000:8000:4000:0:100:50:1000:2000\0")
+                    .into_bytes(),
+            )
+            .await
+            .unwrap();
+        handler.status.set_node_kv(
+            2,
+            "rrd/pve2-node/node2".to_string(),
+            format!("{now}:23456:2.0:8:0.5:0.2:16000:9000:4000:0:100:60:1100:2100\0").into_bytes(),
+        );
+
+        let dump = String::from_utf8(handler.handle_get_rrd_dump().unwrap()).unwrap();
+        assert!(
+            dump.ends_with('\0'),
+            "GET_RRD_DUMP should remain NUL-terminated"
+        );
+        assert!(
+            dump.contains("pve2-node/testnode:"),
+            "GET_RRD_DUMP should include local RRD entries"
+        );
+        assert!(
+            dump.contains("pve2-node/node2:"),
+            "GET_RRD_DUMP should include remote kvstore-backed RRD entries"
+        );
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/lib.rs b/src/pmxcfs-rs/pmxcfs/src/lib.rs
new file mode 100644
index 000000000..989670bd5
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/lib.rs
@@ -0,0 +1,15 @@
+// Library exports for testing and potential library usage
+
+pub mod cluster_config_service; // Cluster configuration monitoring via CMAP (matching C's confdb.c)
+pub mod daemon; // Unified daemon builder with integrated PID file management
+pub mod file_lock; // File locking utilities
+pub mod fuse;
+pub mod ipc; // IPC subsystem (request handling and service)
+pub mod logging; // Runtime-adjustable logging (for .debug plugin)
+pub mod memdb_callbacks; // DFSM callbacks for memdb (glue between dfsm and memdb)
+pub mod path; // Shared path helpers matching pmxcfs C semantics
+pub mod plugins;
+pub mod quorum_service; // Quorum tracking service (matching C's quorum.c)
+pub mod restart_flag; // Restart flag management
+pub mod status_callbacks; // DFSM callbacks for status kvstore (glue between dfsm and status)
+pub mod status_messages; // Shared JSON builders for IPC/plugin parity
diff --git a/src/pmxcfs-rs/pmxcfs/src/logging.rs b/src/pmxcfs-rs/pmxcfs/src/logging.rs
new file mode 100644
index 000000000..637aebb2b
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/logging.rs
@@ -0,0 +1,44 @@
+//! Runtime-adjustable logging infrastructure
+//!
+//! This module provides the ability to change tracing filter levels at runtime,
+//! matching the C implementation's behavior where the .debug plugin can
+//! dynamically enable/disable debug logging.
+use anyhow::Result;
+use parking_lot::Mutex;
+use std::sync::OnceLock;
+use tracing_subscriber::{EnvFilter, reload};
+
+/// Type alias for the reload handle
+type ReloadHandle = reload::Handle<EnvFilter, tracing_subscriber::Registry>;
+
+/// Global reload handle for runtime log level adjustment
+static LOG_RELOAD_HANDLE: OnceLock<Mutex<ReloadHandle>> = OnceLock::new();
+
+/// Initialize the reload handle (called once during logging setup)
+pub fn set_reload_handle(handle: ReloadHandle) -> Result<()> {
+    LOG_RELOAD_HANDLE
+        .set(Mutex::new(handle))
+        .map_err(|_| anyhow::anyhow!("Failed to set log reload handle - already initialized"))
+}
+
+/// Set debug level at runtime (called by .debug plugin)
+///
+/// This changes the tracing filter to either "debug" (level > 0) or "info" (level == 0),
+/// matching the C implementation where writing to .debug affects cfs_debug() output.
+pub fn set_debug_level(level: u8) -> Result<()> {
+    let filter = if level > 0 {
+        EnvFilter::new("debug")
+    } else {
+        EnvFilter::new("info")
+    };
+
+    if let Some(handle) = LOG_RELOAD_HANDLE.get() {
+        handle
+            .lock()
+            .reload(filter)
+            .map_err(|e| anyhow::anyhow!("Failed to reload log filter: {e}"))?;
+        Ok(())
+    } else {
+        Err(anyhow::anyhow!("Log reload handle not initialized"))
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/main.rs b/src/pmxcfs-rs/pmxcfs/src/main.rs
new file mode 100644
index 000000000..ea4478cc7
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/main.rs
@@ -0,0 +1,761 @@
+use anyhow::{Context, Result};
+use clap::Parser;
+use std::fs;
+use std::os::unix::fs::DirBuilderExt;
+use std::sync::Arc;
+use tracing::{debug, error, info};
+use tracing_subscriber::{EnvFilter, layer::SubscriberExt, reload, util::SubscriberInitExt};
+
+use pmxcfs_rs::{
+    cluster_config_service::ClusterConfigService,
+    daemon::{Daemon, DaemonProcess},
+    file_lock::FileLock,
+    fuse,
+    ipc::IpcHandler,
+    memdb_callbacks::MemDbCallbacks,
+    plugins,
+    quorum_service::QuorumService,
+    restart_flag::RestartFlag,
+    status_callbacks::StatusCallbacks,
+};
+
+use pmxcfs_api_types::PmxcfsError;
+use pmxcfs_config::Config;
+use pmxcfs_dfsm::{
+    Callbacks, ClusterDatabaseService, Dfsm, FuseMessage, KvStoreMessage, StatusSyncService,
+};
+use pmxcfs_memdb::MemDb;
+use pmxcfs_services::ServiceManager;
+use pmxcfs_status as status;
+
+// Default paths matching the C version
+const DEFAULT_MOUNT_DIR: &str = "/etc/pve";
+const DEFAULT_DB_PATH: &str = "/var/lib/pve-cluster/config.db";
+const DEFAULT_VARLIB_DIR: &str = "/var/lib/pve-cluster";
+const DEFAULT_RUN_DIR: &str = "/run/pmxcfs";
+
+/// Type alias for the cluster services tuple
+type ClusterServices = (
+    Arc<Dfsm<FuseMessage>>,
+    Arc<Dfsm<KvStoreMessage>>,
+    Arc<QuorumService>,
+);
+
+/// Proxmox Cluster File System - Rust implementation
+///
+/// This FUSE filesystem uses corosync and sqlite3 to provide a
+/// cluster-wide, consistent view of config and other files.
+#[derive(Parser, Debug)]
+#[command(author, version, about, long_about = None)]
+struct Args {
+    /// Turn on debug messages
+    #[arg(short = 'd', long = "debug")]
+    debug: bool,
+
+    /// Do not daemonize server
+    #[arg(short = 'f', long = "foreground")]
+    foreground: bool,
+
+    /// Force local mode (ignore corosync.conf, force quorum)
+    #[arg(short = 'l', long = "local")]
+    local: bool,
+
+    /// Test directory (sets all paths to subdirectories for isolated testing)
+    #[arg(long = "test-dir")]
+    test_dir: Option<std::path::PathBuf>,
+
+    /// Custom mount point
+    #[arg(long = "mount", default_value = DEFAULT_MOUNT_DIR)]
+    mount: std::path::PathBuf,
+
+    /// Custom database path
+    #[arg(long = "db", default_value = DEFAULT_DB_PATH)]
+    db: std::path::PathBuf,
+
+    /// Custom runtime directory
+    #[arg(long = "rundir", default_value = DEFAULT_RUN_DIR)]
+    rundir: std::path::PathBuf,
+
+    /// Cluster name (CPG group name for Corosync isolation)
+    /// Must match C implementation's DCDB_CPG_GROUP_NAME
+    #[arg(long = "cluster-name", default_value = "pve_dcdb_v1")]
+    cluster_name: String,
+}
+
+/// Configuration for all filesystem paths used by pmxcfs
+#[derive(Debug, Clone)]
+struct PathConfig {
+    dbfilename: std::path::PathBuf,
+    lockfile: std::path::PathBuf,
+    restart_flag_file: std::path::PathBuf,
+    pid_file: std::path::PathBuf,
+    mount_dir: std::path::PathBuf,
+    varlib_dir: std::path::PathBuf,
+    run_dir: std::path::PathBuf,
+    pve2_socket_path: std::path::PathBuf, // IPC server socket (libqb-compatible)
+    corosync_conf_path: std::path::PathBuf,
+    rrd_dir: std::path::PathBuf,
+}
+
+impl PathConfig {
+    /// Create PathConfig from command line arguments
+    fn from_args(args: &Args) -> Self {
+        if let Some(ref test_dir) = args.test_dir {
+            // Test mode: all paths under test directory
+            Self {
+                dbfilename: test_dir.join("db/config.db"),
+                lockfile: test_dir.join("db/.pmxcfs.lockfile"),
+                restart_flag_file: test_dir.join("run/cfs-restart-flag"),
+                pid_file: test_dir.join("run/pmxcfs.pid"),
+                mount_dir: test_dir.join("pve"),
+                varlib_dir: test_dir.join("db"),
+                run_dir: test_dir.join("run"),
+                pve2_socket_path: test_dir.join("run/pve2"),
+                corosync_conf_path: test_dir.join("etc/corosync/corosync.conf"),
+                rrd_dir: test_dir.join("rrd"),
+            }
+        } else {
+            // Production mode: use provided args (which have defaults from clap)
+            let varlib_dir = args
+                .db
+                .parent()
+                .map(|p| p.to_path_buf())
+                .unwrap_or_else(|| std::path::PathBuf::from(DEFAULT_VARLIB_DIR));
+
+            Self {
+                dbfilename: args.db.clone(),
+                lockfile: varlib_dir.join(".pmxcfs.lockfile"),
+                restart_flag_file: args.rundir.join("cfs-restart-flag"),
+                pid_file: args.rundir.join("pmxcfs.pid"),
+                mount_dir: args.mount.clone(),
+                varlib_dir,
+                run_dir: args.rundir.clone(),
+                pve2_socket_path: std::path::PathBuf::from(DEFAULT_PVE2_SOCKET),
+                corosync_conf_path: std::path::PathBuf::from(HOST_CLUSTER_CONF_FN),
+                rrd_dir: std::path::PathBuf::from(DEFAULT_RRD_DIR),
+            }
+        }
+    }
+}
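The test-mode branch roots every runtime path under a single directory, which is what keeps the FUSE integration tests hermetic, while production mode derives `varlib_dir` from the database path's parent. A minimal std-only sketch of both derivations (`TestPaths` and `varlib_from_db` are illustrative names, not the patch's `PathConfig` API):

```rust
use std::path::{Path, PathBuf};

/// Illustrative subset of the daemon's paths (not the patch's full
/// PathConfig; field names are for demonstration only).
struct TestPaths {
    dbfilename: PathBuf,
    mount_dir: PathBuf,
    run_dir: PathBuf,
}

impl TestPaths {
    /// Root every path under one test directory, as the test-mode
    /// branch of PathConfig::from_args() does.
    fn under(test_dir: &Path) -> Self {
        Self {
            dbfilename: test_dir.join("db/config.db"),
            mount_dir: test_dir.join("pve"),
            run_dir: test_dir.join("run"),
        }
    }
}

/// Production-mode derivation of varlib_dir: the parent directory of
/// the database path, with a fallback default.
fn varlib_from_db(db: &Path, default_dir: &Path) -> PathBuf {
    db.parent()
        .map(|p| p.to_path_buf())
        .unwrap_or_else(|| default_dir.to_path_buf())
}

fn main() {
    let paths = TestPaths::under(Path::new("/tmp/pmxcfs-test"));
    println!("{}", paths.dbfilename.display()); // /tmp/pmxcfs-test/db/config.db
    println!("{}", paths.run_dir.display()); // /tmp/pmxcfs-test/run
    let varlib = varlib_from_db(
        Path::new("/var/lib/pve-cluster/config.db"),
        Path::new("/var/lib/pve-cluster"),
    );
    println!("{}", varlib.display()); // /var/lib/pve-cluster
    let _ = paths.mount_dir; // silence dead-code warning in the sketch
}
```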
+
+const HOST_CLUSTER_CONF_FN: &str = "/etc/corosync/corosync.conf";
+
+const DEFAULT_RRD_DIR: &str = "/var/lib/rrdcached/db";
+const DEFAULT_PVE2_SOCKET: &str = "/var/run/pve2";
+
+#[tokio::main]
+async fn main() -> Result<()> {
+    // Parse command line arguments
+    let args = Args::parse();
+
+    // Initialize logging
+    init_logging(args.debug)?;
+
+    // Create path configuration
+    let paths = PathConfig::from_args(&args);
+
+    info!("Starting pmxcfs (Rust version)");
+    debug!("Debug mode: {}", args.debug);
+    debug!("Foreground mode: {}", args.foreground);
+    debug!("Local mode: {}", args.local);
+
+    // Log test mode if enabled
+    if args.test_dir.is_some() {
+        info!("TEST MODE: Using isolated test directory");
+        info!("  Mount: {}", paths.mount_dir.display());
+        info!("  Database: {}", paths.dbfilename.display());
+        info!("  QB-IPC Socket: {}", paths.pve2_socket_path.display());
+        info!("  Run dir: {}", paths.run_dir.display());
+        info!("  RRD dir: {}", paths.rrd_dir.display());
+    }
+
+    // Get node name (equivalent to uname in C version)
+    let nodename = get_nodename()?;
+    info!("Node name: {}", nodename);
+
+    // Resolve node IP
+    let node_ip = resolve_node_ip(&nodename)?;
+    info!("Resolved node '{}' to IP '{}'", nodename, node_ip);
+
+    // Get www-data group ID
+    let www_data_gid = get_www_data_gid()?;
+    debug!("www-data group ID: {}", www_data_gid);
+
+    // Create configuration
+    let config = Config::new(
+        nodename,
+        node_ip,
+        www_data_gid,
+        u8::from(args.debug),
+        args.local,
+        args.cluster_name.clone(),
+    )
+    .into_shared();
+
+    // Set umask (027 = rwxr-x---)
+    unsafe {
+        libc::umask(0o027);
+    }
+
+    // Create required directories
+    let is_test_mode = args.test_dir.is_some();
+    create_directories(www_data_gid, &paths, is_test_mode)?;
+
+    // Acquire lock
+    let _lock = FileLock::acquire(paths.lockfile.clone()).await?;
+
+    // Initialize status subsystem with config and RRD directory
+    // This allows get_local_nodename() to work properly by accessing config.nodename()
+    let status = status::init_with_config_and_rrd(config.clone(), &paths.rrd_dir).await;
+
+    // Check if database exists
+    let db_exists = paths.dbfilename.exists();
+
+    // Open or create database
+    let memdb = MemDb::open(&paths.dbfilename, !db_exists)?;
+
+    // Check for corosync.conf in database
+    let mut has_corosync_conf = memdb.exists("/corosync.conf")?;
+
+    // Match C startup semantics: only seed /corosync.conf from the host file when the
+    // database is created for the first time. Existing databases stay in local mode if
+    // the in-database copy is absent.
+    if should_import_startup_corosync_conf(db_exists, has_corosync_conf, args.local) {
+        // Try test-mode path first, then fall back to production path
+        // This matches C behavior and handles test environments where only some nodes
+        // have the test path set up (others use the shared /etc/corosync via volume)
+        let import_path = if paths.corosync_conf_path.exists() {
+            &paths.corosync_conf_path
+        } else {
+            std::path::Path::new(HOST_CLUSTER_CONF_FN)
+        };
+
+        if import_path.exists() {
+            import_corosync_conf(&memdb, import_path)?;
+            // Refresh the check after import
+            has_corosync_conf = memdb.exists("/corosync.conf")?;
+        }
+    }
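The fallback between the test-mode path and `HOST_CLUSTER_CONF_FN` is a first-existing-candidate choice. A sketch with the existence check injected as a closure so the logic can be exercised without touching the filesystem (the predicate parameter is an illustration, not the patch's API):

```rust
use std::path::Path;

/// Return the first candidate accepted by the `exists` predicate,
/// mirroring the test-path-then-host-path import fallback.
fn pick_import_path<'a>(
    candidates: &[&'a Path],
    exists: impl Fn(&Path) -> bool,
) -> Option<&'a Path> {
    candidates.iter().copied().find(|&p| exists(p))
}

fn main() {
    let test = Path::new("/tmp/test/etc/corosync/corosync.conf");
    let host = Path::new("/etc/corosync/corosync.conf");
    // Pretend only the host copy exists.
    let chosen = pick_import_path(&[test, host], |p| p == host);
    println!("{:?}", chosen); // Some("/etc/corosync/corosync.conf")
}
```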
+
+    // Initialize cluster services if needed (matching C's pmxcfs.c)
+    let (dfsm, status_dfsm, quorum_service) = if has_corosync_conf && !args.local {
+        info!("Initializing cluster services");
+        let (db_dfsm, st_dfsm, quorum) = setup_cluster_services(
+            &memdb,
+            config.clone(),
+            status.clone(),
+            &paths.corosync_conf_path,
+        )?;
+        (Some(db_dfsm), Some(st_dfsm), Some(quorum))
+    } else {
+        if args.local {
+            info!("Forcing local mode");
+        } else {
+            info!("Using local mode (corosync.conf does not exist)");
+        }
+        status.set_quorate(true);
+        (None, None, None)
+    };
+
+    // Initialize cluster info in status
+    status.init_cluster(config.cluster_name().to_string());
+
+    // Initialize plugin registry
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    // Note: Node registration from corosync is handled by ClusterConfigService during
+    // its initialization, matching C's service_confdb behavior (confdb.c:276)
+
+    // Daemonize if not in foreground mode (using builder pattern)
+    let (daemon_guard, signal_handle) = if !args.foreground {
+        let (process, handle) = Daemon::new()
+            .pid_file(paths.pid_file.clone())
+            .group(www_data_gid)
+            .start_daemon_with_signal()?;
+
+        match process {
+            DaemonProcess::Parent => {
+                // Parent exits here after child signals ready
+                std::process::exit(0);
+            }
+            DaemonProcess::Child(guard) => (Some(guard), handle),
+        }
+    } else {
+        (None, None)
+    };
+
+    // Mount FUSE filesystem
+    let fuse_task = setup_fuse(
+        &paths.mount_dir,
+        memdb.clone(),
+        config.clone(),
+        dfsm.clone(),
+        plugins,
+        status.clone(),
+    )?;
+
+    // Start cluster services using ServiceManager (matching C's pmxcfs.c service initialization)
+    // If this fails, abort the FUSE task to prevent orphaned mount
+    let service_manager_handle = match setup_services(
+        dfsm.as_ref(),
+        status_dfsm.as_ref(),
+        quorum_service,
+        has_corosync_conf,
+        args.local,
+        status.clone(),
+        paths.corosync_conf_path.clone(),
+    ) {
+        Ok(handle) => handle,
+        Err(e) => {
+            error!("Failed to setup services: {}", e);
+            fuse_task.abort();
+            return Err(e);
+        }
+    };
+
+    // Scan VM list after database is loaded (matching C's memdb_open behavior)
+    status.scan_vmlist(&memdb);
+
+    // Setup signal handlers BEFORE starting IPC server to ensure signals are caught
+    // during the startup sequence. This prevents a race where a signal arriving
+    // between IPC start and signal handler setup would be missed.
+    use tokio::signal::unix::{SignalKind, signal};
+    let mut sigterm = signal(SignalKind::terminate())
+        .map_err(|e| anyhow::anyhow!("Failed to setup SIGTERM handler: {e}"))?;
+    let mut sigint = signal(SignalKind::interrupt())
+        .map_err(|e| anyhow::anyhow!("Failed to setup SIGINT handler: {e}"))?;
+
+    // Initialize and start IPC server (libqb-compatible IPC for C clients)
+    // If this fails, abort FUSE task to prevent orphaned mount
+    info!("Initializing IPC server (libqb-compatible)");
+    let mut ipc_handler =
+        IpcHandler::new(memdb.clone(), status.clone(), config.clone(), www_data_gid);
+    if let Some(ref sd) = status_dfsm {
+        ipc_handler = ipc_handler.with_status_dfsm(sd.clone());
+    }
+    let mut ipc_server = pmxcfs_ipc::Server::new("pve2", ipc_handler);
+    if let Err(e) = ipc_server.start() {
+        error!("Failed to start IPC server: {}", e);
+        fuse_task.abort();
+        return Err(e);
+    }
+
+    info!("pmxcfs started successfully");
+
+    // Signal parent if daemonized, or write PID file in foreground mode
+    let _pid_guard = if let Some(handle) = signal_handle {
+        // Daemon mode: signal parent that we're ready (parent writes PID file and exits)
+        handle.signal_ready()?;
+        daemon_guard // Keep guard alive for cleanup on drop
+    } else {
+        // Foreground mode: write PID file now and retain guard for cleanup
+        Some(
+            Daemon::new()
+                .pid_file(paths.pid_file.clone())
+                .group(www_data_gid)
+                .start_foreground()?,
+        )
+    };
+
+    // Remove restart flag (matching C's timing - after all services started)
+    let _ = fs::remove_file(&paths.restart_flag_file);
+
+    // Wait for shutdown signal (using pre-registered handlers)
+    tokio::select! {
+        _ = sigterm.recv() => {
+            info!("Received SIGTERM");
+        }
+        _ = sigint.recv() => {
+            info!("Received SIGINT");
+        }
+    }
+
+    info!("Shutting down pmxcfs");
+
+    // Abort background tasks
+    fuse_task.abort();
+
+    // Create restart flag (signals restart, not permanent shutdown)
+    let _restart_flag = RestartFlag::create(paths.restart_flag_file.clone(), www_data_gid);
+
+    // Stop services
+    ipc_server.stop();
+
+    // Stop cluster services via ServiceManager
+    if let Some(service_manager) = service_manager_handle {
+        info!("Shutting down cluster services via ServiceManager");
+        let _ = service_manager
+            .shutdown(std::time::Duration::from_secs(5))
+            .await;
+    }
+
+    // Unmount filesystem (matching C's fuse_unmount, using lazy unmount like umount -l)
+    info!(
+        "Unmounting FUSE filesystem from {}",
+        paths.mount_dir.display()
+    );
+    let mount_path_cstr = std::ffi::CString::new(paths.mount_dir.to_string_lossy().as_ref())
+        .expect("mount path must not contain interior NUL bytes");
+    unsafe {
+        libc::umount2(mount_path_cstr.as_ptr(), libc::MNT_DETACH);
+    }
+
+    info!("pmxcfs shutdown complete");
+
+    Ok(())
+}
+
+fn should_import_startup_corosync_conf(
+    db_exists: bool,
+    has_corosync_conf: bool,
+    force_local: bool,
+) -> bool {
+    !db_exists && !has_corosync_conf && !force_local
+}
+
+fn init_logging(debug: bool) -> Result<()> {
+    let filter_level = if debug { "debug" } else { "info" };
+    let filter = EnvFilter::new(filter_level);
+
+    // Create reloadable filter layer
+    let (filter_layer, reload_handle) = reload::Layer::new(filter);
+
+    // Create formatter layer for console output
+    let fmt_layer = tracing_subscriber::fmt::layer()
+        .with_target(false)
+        .with_thread_ids(false)
+        .with_thread_names(false);
+
+    // Try to connect to journald (systemd journal / syslog integration)
+    // Matches C implementation's openlog() call (status.c:1360)
+    // Falls back to console-only logging if journald is unavailable
+    let subscriber = tracing_subscriber::registry()
+        .with(filter_layer)
+        .with(fmt_layer);
+
+    match tracing_journald::layer() {
+        Ok(journald_layer) => {
+            // Successfully connected to journald
+            subscriber.with(journald_layer).init();
+            debug!("Logging to journald (syslog) enabled");
+        }
+        Err(e) => {
+            // Journald not available (e.g., not running under systemd)
+            // Continue with console logging only
+            subscriber.init();
+            debug!("Journald unavailable ({}), using console logging only", e);
+        }
+    }
+
+    // Store reload handle for runtime adjustment (used by .debug plugin)
+    pmxcfs_rs::logging::set_reload_handle(reload_handle)?;
+
+    Ok(())
+}
+
+fn get_nodename() -> Result<String> {
+    let mut utsname = libc::utsname {
+        sysname: [0; 65],
+        nodename: [0; 65],
+        release: [0; 65],
+        version: [0; 65],
+        machine: [0; 65],
+        domainname: [0; 65],
+    };
+
+    unsafe {
+        if libc::uname(&mut utsname) != 0 {
+            return Err(PmxcfsError::System("Unable to get node name".into()).into());
+        }
+    }
+
+    let nodename_bytes = &utsname.nodename;
+    let nodename_cstr = unsafe { std::ffi::CStr::from_ptr(nodename_bytes.as_ptr()) };
+    let mut nodename = nodename_cstr.to_string_lossy().to_string();
+
+    // Remove domain part if present (like C version)
+    if let Some(dot_pos) = nodename.find('.') {
+        nodename.truncate(dot_pos);
+    }
+
+    Ok(nodename)
+}
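The domain-stripping step above can be isolated as a tiny pure function (a sketch; the daemon does this in place on the `uname` result):

```rust
/// Strip the domain part from a fully qualified hostname, mirroring
/// get_nodename()'s truncation at the first '.' (like the C version).
fn short_hostname(nodename: &str) -> String {
    let mut name = nodename.to_string();
    if let Some(dot) = name.find('.') {
        name.truncate(dot);
    }
    name
}

fn main() {
    println!("{}", short_hostname("pve1.example.com")); // pve1
    println!("{}", short_hostname("pve1")); // pve1
}
```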
+
+fn resolve_node_ip(nodename: &str) -> Result<std::net::IpAddr> {
+    use std::net::ToSocketAddrs;
+
+    let addr_iter = (nodename, 0)
+        .to_socket_addrs()
+        .context("Failed to resolve node IP")?;
+
+    for addr in addr_iter {
+        let ip = addr.ip();
+        // Skip loopback addresses
+        if !ip.is_loopback() {
+            return Ok(ip);
+        }
+    }
+
+    Err(PmxcfsError::Configuration(format!(
+        "Unable to resolve node name '{nodename}' to a non-loopback IP address"
+    ))
+    .into())
+}
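The loopback-skipping loop reduces to a `find` over the resolver's results. A std-only sketch, using literal addresses instead of a live DNS lookup:

```rust
use std::net::IpAddr;

/// Pick the first non-loopback address from a resolver's results,
/// as resolve_node_ip() does with the to_socket_addrs() iterator.
fn first_non_loopback(addrs: impl IntoIterator<Item = IpAddr>) -> Option<IpAddr> {
    addrs.into_iter().find(|ip| !ip.is_loopback())
}

fn main() {
    let addrs: Vec<IpAddr> = vec![
        "127.0.0.1".parse().unwrap(),
        "192.0.2.10".parse().unwrap(), // documentation-range address
    ];
    println!("{:?}", first_non_loopback(addrs)); // Some(192.0.2.10)
}
```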
+
+fn get_www_data_gid() -> Result<u32> {
+    let group = nix::unistd::Group::from_name("www-data")
+        .map_err(|e| PmxcfsError::System(format!("Unable to get www-data group: {e}")))?
+        .ok_or_else(|| PmxcfsError::System("www-data group not found".into()))?;
+    Ok(group.gid.as_raw())
+}
+
+fn create_directories(gid: u32, paths: &PathConfig, is_test_mode: bool) -> Result<()> {
+    // Create varlib directory
+    fs::DirBuilder::new()
+        .recursive(true)
+        .mode(0o750)
+        .create(&paths.varlib_dir)
+        .with_context(|| format!("Failed to create {}", paths.varlib_dir.display()))?;
+
+    // Create run directory
+    fs::DirBuilder::new()
+        .recursive(true)
+        .mode(0o750)
+        .create(&paths.run_dir)
+        .with_context(|| format!("Failed to create {}", paths.run_dir.display()))?;
+
+    // Set ownership for run directory (skip in test mode - doesn't require root)
+    if !is_test_mode {
+        let run_dir_cstr = std::ffi::CString::new(paths.run_dir.to_string_lossy().as_ref())
+            .expect("run dir path must not contain interior NUL bytes");
+        unsafe {
+            if libc::chown(run_dir_cstr.as_ptr(), 0, gid as libc::gid_t) != 0 {
+                return Err(PmxcfsError::System(format!(
+                    "Failed to set ownership on {}",
+                    paths.run_dir.display()
+                ))
+                .into());
+            }
+        }
+    }
+
+    Ok(())
+}
+
+fn import_corosync_conf(memdb: &MemDb, corosync_conf_path: &std::path::Path) -> Result<()> {
+    if let Ok(content) = fs::read_to_string(corosync_conf_path) {
+        info!(
+            "Importing corosync.conf from {}",
+            corosync_conf_path.display()
+        );
+        let mtime = std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH)?
+            .as_secs() as u32;
+
+        memdb.create("/corosync.conf", 0, 0, mtime)?;
+        memdb.write("/corosync.conf", 0, 0, mtime, content.as_bytes(), false)?;
+    }
+
+    Ok(())
+}
+
+/// Initialize cluster services (DFSM, QuorumService)
+///
+/// Returns (database_dfsm, status_dfsm, quorum_service) for cluster mode
+fn setup_cluster_services(
+    memdb: &MemDb,
+    config: Arc<Config>,
+    status: Arc<status::Status>,
+    corosync_conf_path: &std::path::Path,
+) -> Result<ClusterServices> {
+    // Sync corosync configuration
+    let corosync_conf_str = corosync_conf_path
+        .to_str()
+        .context("corosync.conf path is not valid UTF-8")?;
+    memdb.sync_corosync_conf(Some(corosync_conf_str), true)?;
+
+    // Create main DFSM for database synchronization (pmxcfs_v1 CPG group)
+    // Note: nodeid will be obtained via cpg_local_get() during init_cpg()
+    info!("Creating main DFSM instance (pmxcfs_v1)");
+    let database_callbacks = MemDbCallbacks::new(
+        memdb.clone(),
+        status.clone(),
+        Some(corosync_conf_path.to_path_buf()),
+    );
+    let database_dfsm = Arc::new(Dfsm::new(
+        config.cluster_name().to_string(),
+        database_callbacks.clone(),
+    )?);
+    database_callbacks.set_dfsm(&database_dfsm);
+    info!("Main DFSM created successfully");
+
+    // Create status DFSM for ephemeral data synchronization (pve_kvstore_v1 CPG group)
+    // Note: nodeid will be obtained via cpg_local_get() during init_cpg()
+    // IMPORTANT: Use protocol version 0 to match C implementation's kvstore DFSM
+    info!("Creating status DFSM instance (pve_kvstore_v1)");
+    let status_callbacks: Arc<dyn Callbacks<Message = KvStoreMessage>> =
+        Arc::new(StatusCallbacks::new(status.clone()));
+    let status_dfsm = Arc::new(Dfsm::new_with_protocol_version(
+        "pve_kvstore_v1".to_string(),
+        status_callbacks,
+        0, // Protocol version 0 to match C's kvstore
+    )?);
+    info!("Status DFSM created successfully");
+
+    // Create QuorumService (owns quorum handle, matching C's service_quorum)
+    info!("Creating QuorumService");
+    let quorum_service = Arc::new(QuorumService::new(status));
+    info!("QuorumService created successfully");
+
+    Ok((database_dfsm, status_dfsm, quorum_service))
+}
+
+/// Setup and mount FUSE filesystem
+///
+/// Returns a task handle for the FUSE loop
+fn setup_fuse(
+    mount_path: &std::path::Path,
+    memdb: MemDb,
+    config: Arc<Config>,
+    dfsm: Option<Arc<Dfsm<FuseMessage>>>,
+    plugins: Arc<plugins::PluginRegistry>,
+    status: Arc<status::Status>,
+) -> Result<tokio::task::JoinHandle<()>> {
+    // Unmount if already mounted (matching C's umount2(CFSDIR, MNT_FORCE))
+    let mount_path_cstr = std::ffi::CString::new(mount_path.to_string_lossy().as_ref())
+        .expect("mount path must not contain interior NUL bytes");
+    unsafe {
+        libc::umount2(mount_path_cstr.as_ptr(), libc::MNT_FORCE);
+    }
+
+    // Create mount directory
+    fs::DirBuilder::new()
+        .recursive(true)
+        .mode(0o750)
+        .create(mount_path)
+        .with_context(|| format!("Failed to create mount point {}", mount_path.display()))?;
+
+    // Spawn FUSE filesystem in background task
+    let mount_path = mount_path.to_path_buf();
+    let fuse_task = tokio::spawn(async move {
+        if let Err(e) = fuse::mount_fuse(&mount_path, memdb, config, dfsm, plugins, status).await {
+            tracing::error!("FUSE filesystem error: {}", e);
+        }
+    });
+
+    Ok(fuse_task)
+}
+
+/// Setup cluster services (quorum, confdb, dcdb, status sync)
+///
+/// Returns a shutdown handle if services were started, None otherwise
+fn setup_services(
+    dfsm: Option<&Arc<Dfsm<FuseMessage>>>,
+    status_dfsm: Option<&Arc<Dfsm<KvStoreMessage>>>,
+    quorum_service: Option<Arc<pmxcfs_rs::quorum_service::QuorumService>>,
+    has_corosync_conf: bool,
+    force_local: bool,
+    status: Arc<status::Status>,
+    corosync_conf_path: std::path::PathBuf,
+) -> Result<Option<ServiceManagerHandle>> {
+    if dfsm.is_none() && status_dfsm.is_none() && quorum_service.is_none() {
+        return Ok(None);
+    }
+
+    let mut manager = ServiceManager::new();
+
+    // Add ClusterDatabaseService (service_dcdb equivalent)
+    if let Some(dfsm_instance) = dfsm {
+        info!("Adding ClusterDatabaseService to ServiceManager");
+        manager.add_service(Box::new(ClusterDatabaseService::new(Arc::clone(
+            dfsm_instance,
+        ))))?;
+    }
+
+    // Add StatusSyncService (service_status / kvstore equivalent)
+    if let Some(status_dfsm_instance) = status_dfsm {
+        info!("Adding StatusSyncService to ServiceManager");
+        manager.add_service(Box::new(StatusSyncService::new(Arc::clone(
+            status_dfsm_instance,
+        ))))?;
+    }
+
+    // Add ClusterConfigService (service_confdb equivalent) - monitors Corosync configuration
+    if has_corosync_conf && !force_local {
+        info!("Adding ClusterConfigService to ServiceManager");
+        manager.add_service(Box::new(ClusterConfigService::new(
+            status,
+            corosync_conf_path.clone(),
+        )))?;
+    }
+
+    // Add QuorumService (service_quorum equivalent)
+    if let Some(quorum_instance) = quorum_service {
+        info!("Adding QuorumService to ServiceManager");
+        // Extract QuorumService from Arc - ServiceManager will manage it
+        match Arc::try_unwrap(quorum_instance) {
+            Ok(service) => {
+                manager.add_service(Box::new(service))?;
+            }
+            Err(_) => {
+                anyhow::bail!("Cannot unwrap QuorumService Arc - multiple references exist");
+            }
+        }
+    }
+
+    // Get shutdown token before spawning (for graceful shutdown)
+    let shutdown_token = manager.shutdown_token();
+
+    // Spawn ServiceManager in background task
+    let handle = manager.spawn();
+
+    Ok(Some(ServiceManagerHandle {
+        shutdown_token,
+        task: handle,
+    }))
+}
+
+/// Handle for managing ServiceManager lifecycle
+struct ServiceManagerHandle {
+    shutdown_token: tokio_util::sync::CancellationToken,
+    task: tokio::task::JoinHandle<()>,
+}
+
+impl ServiceManagerHandle {
+    /// Gracefully shutdown the ServiceManager with timeout
+    ///
+    /// Signals shutdown via cancellation token, then awaits task completion
+    /// with a timeout. Matches C's cfs_loop_stop_worker() behavior.
+    async fn shutdown(self, timeout: std::time::Duration) -> Result<()> {
+        // Signal graceful shutdown (matches C's stop_worker_flag)
+        self.shutdown_token.cancel();
+
+        // Await completion with timeout
+        match tokio::time::timeout(timeout, self.task).await {
+            Ok(Ok(())) => {
+                info!("ServiceManager shut down cleanly");
+                Ok(())
+            }
+            Ok(Err(e)) => {
+                tracing::warn!("ServiceManager task panicked: {}", e);
+                Err(anyhow::anyhow!("ServiceManager task panicked: {e}"))
+            }
+            Err(_) => {
+                tracing::warn!("ServiceManager shutdown timed out after {:?}", timeout);
+                Err(anyhow::anyhow!("ServiceManager shutdown timed out"))
+            }
+        }
+    }
+}
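The shutdown pattern above (cancel, then await with a deadline) can be illustrated without tokio. A std-only analogue, assuming a worker that polls a stop flag and confirms its exit over a channel (the names here are illustrative, not the patch's types):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;
use std::time::Duration;

/// Std-only analogue of ServiceManagerHandle::shutdown(): flip a stop
/// flag (standing in for CancellationToken::cancel()), then wait for
/// the worker's exit confirmation, bounded by a timeout.
fn shutdown_worker(
    stop: &AtomicBool,
    done: &mpsc::Receiver<()>,
    timeout: Duration,
) -> Result<(), String> {
    stop.store(true, Ordering::SeqCst);
    done.recv_timeout(timeout)
        .map_err(|_| "shutdown timed out".to_string())
}

fn main() {
    let stop = Arc::new(AtomicBool::new(false));
    let (tx, rx) = mpsc::channel();
    let worker_stop = Arc::clone(&stop);
    thread::spawn(move || {
        // Service loop: poll the stop flag like a cancelled token.
        while !worker_stop.load(Ordering::SeqCst) {
            thread::sleep(Duration::from_millis(5));
        }
        let _ = tx.send(()); // confirm clean exit
    });
    let result = shutdown_worker(&stop, &rx, Duration::from_secs(1));
    println!("{:?}", result); // Ok(())
}
```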
+
+#[cfg(test)]
+mod tests {
+    use super::should_import_startup_corosync_conf;
+
+    #[test]
+    fn startup_import_only_happens_for_fresh_db() {
+        assert!(should_import_startup_corosync_conf(false, false, false));
+        assert!(!should_import_startup_corosync_conf(true, false, false));
+        assert!(!should_import_startup_corosync_conf(false, true, false));
+        assert!(!should_import_startup_corosync_conf(false, false, true));
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/memdb_callbacks.rs b/src/pmxcfs-rs/pmxcfs/src/memdb_callbacks.rs
new file mode 100644
index 000000000..c4e2cc781
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/memdb_callbacks.rs
@@ -0,0 +1,1222 @@
+/// DFSM callbacks implementation for memdb synchronization
+///
+/// This module implements the DfsmCallbacks trait to integrate the DFSM
+/// state machine with the memdb database for cluster-wide synchronization.
+use anyhow::{Context, Result};
+use parking_lot::{Mutex, RwLock};
+use std::path::PathBuf;
+use std::sync::{Arc, Weak};
+
+use pmxcfs_dfsm::{Callbacks, DfsmBroadcast, FuseMessage, NodeSyncInfo};
+use pmxcfs_memdb::{MemDb, MemDbIndex};
+
+/// DFSM callbacks for memdb synchronization
+pub struct MemDbCallbacks {
+    memdb: MemDb,
+    status: Arc<pmxcfs_status::Status>,
+    corosync_conf_path: Option<PathBuf>,
+    dfsm: RwLock<Weak<pmxcfs_dfsm::Dfsm<FuseMessage>>>,
+    pending_leader_index: Mutex<Option<MemDbIndex>>,
+}
+
+impl MemDbCallbacks {
+    /// Create new callbacks for a memdb instance
+    pub fn new(
+        memdb: MemDb,
+        status: Arc<pmxcfs_status::Status>,
+        corosync_conf_path: Option<PathBuf>,
+    ) -> Arc<Self> {
+        Arc::new(Self {
+            memdb,
+            status,
+            corosync_conf_path,
+            dfsm: RwLock::new(Weak::new()),
+            pending_leader_index: Mutex::new(None),
+        })
+    }
+
+    /// Set the DFSM instance (called after DFSM is created)
+    pub fn set_dfsm(&self, dfsm: &Arc<pmxcfs_dfsm::Dfsm<FuseMessage>>) {
+        *self.dfsm.write() = Arc::downgrade(dfsm);
+    }
+
+    /// Get the DFSM instance if available
+    fn get_dfsm(&self) -> Option<Arc<pmxcfs_dfsm::Dfsm<FuseMessage>>> {
+        self.dfsm.read().upgrade()
+    }
+
+    /// Update version counters based on path changes
+    /// Matches the C implementation's update_node_status_version logic
+    fn update_version_counters(&self, path: &str) {
+        // Trim leading slash but use FULL path for version tracking
+        let path = path.trim_start_matches('/');
+
+        // Update path-specific version counter (use full path, not just first component)
+        self.status.increment_path_version(path);
+
+        // Update vmlist version for VM configuration changes
+        if path.starts_with("qemu-server/") || path.starts_with("lxc/") {
+            self.status.increment_vmlist_version();
+        }
+    }
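The vmlist branch of the method above is a pure path-prefix predicate. Isolated as a sketch (it mirrors only the check shown here, after the leading slash is trimmed):

```rust
/// Mirror the vmlist branch of update_version_counters(): after
/// trimming the leading '/', VM config paths bump the vmlist version.
fn bumps_vmlist_version(path: &str) -> bool {
    let path = path.trim_start_matches('/');
    path.starts_with("qemu-server/") || path.starts_with("lxc/")
}

fn main() {
    println!("{}", bumps_vmlist_version("/qemu-server/100.conf")); // true
    println!("{}", bumps_vmlist_version("/lxc/101.conf")); // true
    println!("{}", bumps_vmlist_version("/storage.cfg")); // false
}
```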
+
+    fn is_local_origin(&self, nodeid: u32, pid: u32) -> bool {
+        self.get_dfsm()
+            .map(|dfsm| dfsm.get_nodeid() == nodeid && dfsm.get_pid() == pid)
+            .unwrap_or(false)
+    }
+
+    fn sync_runtime_corosync_conf(&self, notify_corosync: bool) {
+        let Some(path) = &self.corosync_conf_path else {
+            return;
+        };
+
+        if let Err(error) = self
+            .memdb
+            .sync_corosync_conf(path.to_str(), notify_corosync)
+        {
+            tracing::warn!(
+                "MemDbCallbacks: failed to sync corosync.conf to '{}': {}",
+                path.display(),
+                error
+            );
+        }
+    }
+}
+
+impl Callbacks for MemDbCallbacks {
+    type Message = FuseMessage;
+
+    /// Deliver an application message
+    /// Returns (message_result, processed) where processed indicates if message was handled
+    fn deliver_message(
+        &self,
+        nodeid: u32,
+        pid: u32,
+        fuse_message: FuseMessage,
+        timestamp: u64,
+    ) -> std::io::Result<(i32, bool)> {
+        // C-style delivery: ALL nodes (including originator) process messages
+        // No loopback check needed - the originator waits for this delivery
+        // and uses the result as the FUSE operation return value
+
+        tracing::debug!(
+            "MemDbCallbacks: delivering FUSE message from node {}/{} at timestamp {}",
+            nodeid,
+            pid,
+            timestamp
+        );
+
+        let mtime = timestamp as u32;
+
+        // Dispatch to dedicated handler for each message type
+        match fuse_message {
+            FuseMessage::Create { ref path } => {
+                let result = self.handle_create(path, nodeid, mtime);
+                if result >= 0 && path == "/corosync.conf" {
+                    self.sync_runtime_corosync_conf(self.is_local_origin(nodeid, pid));
+                }
+                Ok((result, result >= 0))
+            }
+            FuseMessage::Mkdir { ref path } => {
+                let result = self.handle_mkdir(path, nodeid, mtime);
+                Ok((result, result >= 0))
+            }
+            FuseMessage::Write {
+                ref path,
+                offset,
+                truncate,
+                ref data,
+            } => {
+                let result = self.handle_write(path, nodeid, offset, data, truncate, mtime);
+                if result >= 0 && path == "/corosync.conf" {
+                    self.sync_runtime_corosync_conf(self.is_local_origin(nodeid, pid));
+                }
+                Ok((result, result >= 0))
+            }
+            FuseMessage::Delete { ref path } => {
+                let result = self.handle_delete(path, nodeid, mtime);
+                Ok((result, result >= 0))
+            }
+            FuseMessage::Rename { ref from, ref to } => {
+                let result = self.handle_rename(from, to, nodeid, mtime);
+                if result >= 0 && to == "/corosync.conf" {
+                    self.sync_runtime_corosync_conf(self.is_local_origin(nodeid, pid));
+                }
+                Ok((result, result >= 0))
+            }
+            FuseMessage::Mtime {
+                ref path,
+                mtime: msg_mtime,
+            } => {
+                // Use mtime from message, not from timestamp (C: dcdb.c:900-901)
+                let result = self.handle_mtime(path, nodeid, msg_mtime);
+                Ok((result, result >= 0))
+            }
+            FuseMessage::UnlockRequest { path, checksum } => {
+                self.handle_unlock_request(path, checksum).map_err(|e| {
+                    tracing::warn!("handle_unlock_request failed: {e}");
+                    pmxcfs_api_types::errno::eio()
+                })?;
+                Ok((0, true))
+            }
+            FuseMessage::Unlock { path, checksum } => {
+                self.handle_unlock(path, checksum, nodeid, mtime)
+                    .map_err(|e| {
+                        tracing::warn!("handle_unlock failed: {e}");
+                        pmxcfs_api_types::errno::eio()
+                    })?;
+                Ok((0, true))
+            }
+        }
+    }
+
+    /// Compute state checksum for verification
+    /// Should compute SHA-256 checksum of current state
+    fn compute_checksum(&self, output: &mut [u8; 32]) -> Result<()> {
+        tracing::debug!("MemDbCallbacks: computing database checksum");
+
+        let checksum = self
+            .memdb
+            .compute_database_checksum()
+            .context("Failed to compute database checksum")?;
+
+        output.copy_from_slice(&checksum);
+
+        tracing::debug!("MemDbCallbacks: checksum = {:016x?}", &checksum[..8]);
+        Ok(())
+    }
+
+    /// Get current state for synchronization
+    fn get_state(&self) -> Result<Vec<u8>> {
+        tracing::debug!("MemDbCallbacks: generating state for synchronization");
+
+        // Generate MemDbIndex from current database
+        let index = self
+            .memdb
+            .encode_index()
+            .context("Failed to encode database index")?;
+
+        // Serialize to wire format
+        let serialized = index.serialize();
+
+        tracing::info!(
+            "MemDbCallbacks: state generated - version={}, entries={}, bytes={}",
+            index.version,
+            index.size,
+            serialized.len()
+        );
+
+        Ok(serialized)
+    }
+
+    /// Process state update during synchronization
+    /// Called when all states have been collected from nodes
+    fn process_state_update(&self, states: &[NodeSyncInfo]) -> Result<bool> {
+        tracing::info!(
+            "MemDbCallbacks: processing state update from {} nodes",
+            states.len()
+        );
+
+        // Parse all indices from node states
+        let mut indices: Vec<(u32, u32, MemDbIndex)> = Vec::new();
+
+        for node in states {
+            if let Some(state_data) = &node.state {
+                match MemDbIndex::deserialize(state_data) {
+                    Ok(index) => {
+                        tracing::info!(
+                            "MemDbCallbacks: node {}/{} - version={}, entries={}, mtime={}",
+                            node.node_id,
+                            node.pid,
+                            index.version,
+                            index.size,
+                            index.mtime
+                        );
+                        indices.push((node.node_id, node.pid, index));
+                    }
+                    Err(e) => {
+                        tracing::error!(
+                            "MemDbCallbacks: failed to parse index from node {}/{}: {}",
+                            node.node_id,
+                            node.pid,
+                            e
+                        );
+                    }
+                }
+            }
+        }
+
+        if indices.is_empty() {
+            tracing::warn!("MemDbCallbacks: no valid indices from any node");
+            return Ok(true);
+        }
+
+        // Find leader (highest version, or if tie, highest mtime)
+        // Matches C's dcdb_choose_leader_with_highest_index()
+        let mut leader_idx = 0;
+        for i in 1..indices.len() {
+            let (_, _, current_index) = &indices[i];
+            let (_, _, leader_index) = &indices[leader_idx];
+
+            if current_index > leader_index {
+                leader_idx = i;
+            }
+        }
+
+        let (leader_nodeid, leader_pid, leader_index) = &indices[leader_idx];
+        tracing::info!(
+            "MemDbCallbacks: elected leader: {}/{} (version={}, mtime={})",
+            leader_nodeid,
+            leader_pid,
+            leader_index.version,
+            leader_index.mtime
+        );
+
+        // Build list of synced nodes (those whose index matches leader exactly)
+        let mut synced_nodes = Vec::new();
+        for (nodeid, pid, index) in &indices {
+            // Check if indices are identical (same version, mtime, entry count, and entries).
+            // IndexEntry derives PartialEq; Vec<IndexEntry> == checks length then elements.
+            // size is checked first as a cheap guard before the full entry comparison.
+            let is_synced = index.version == leader_index.version
+                && index.mtime == leader_index.mtime
+                && index.size == leader_index.size
+                && index.entries == leader_index.entries;
+
+            if is_synced {
+                synced_nodes.push((*nodeid, *pid));
+                tracing::info!(
+                    "MemDbCallbacks: node {}/{} is synced with leader",
+                    nodeid,
+                    pid
+                );
+            } else {
+                tracing::info!("MemDbCallbacks: node {}/{} needs updates", nodeid, pid);
+            }
+        }
+
+        // Get DFSM instance to check if we're the leader
+        let dfsm = self.get_dfsm();
+
+        // Determine if WE are the leader
+        let we_are_leader = dfsm
+            .as_ref()
+            .map(|d| d.get_nodeid() == *leader_nodeid && d.get_pid() == *leader_pid)
+            .unwrap_or(false);
+
+        // Determine if WE are synced
+        let we_are_synced = dfsm
+            .as_ref()
+            .map(|d| {
+                let our_nodeid = d.get_nodeid();
+                let our_pid = d.get_pid();
+                synced_nodes
+                    .iter()
+                    .any(|(nid, pid)| *nid == our_nodeid && *pid == our_pid)
+            })
+            .unwrap_or(false);
+
+        if we_are_leader {
+            self.pending_leader_index.lock().take();
+            tracing::info!("MemDbCallbacks: we are the leader, sending updates to followers");
+
+            // Send updates to followers
+            if let Some(dfsm) = dfsm {
+                self.send_updates_to_followers(&dfsm, leader_index, &indices)?;
+            } else {
+                tracing::error!("MemDbCallbacks: cannot send updates - DFSM not available");
+            }
+
+            // Leader is always synced
+            Ok(true)
+        } else if we_are_synced {
+            self.pending_leader_index.lock().take();
+            tracing::info!("MemDbCallbacks: we are synced with leader");
+            Ok(true)
+        } else {
+            *self.pending_leader_index.lock() = Some(leader_index.clone());
+            tracing::info!("MemDbCallbacks: we need updates from leader, entering Update mode");
+            Ok(false)
+        }
+    }
+
+    /// Process incremental update from leader
+    ///
+    /// Deserializes a TreeEntry from the wire format and applies it to the local database.
+    /// Matches C's dcdb_parse_update_inode() function.
+    fn process_update(&self, nodeid: u32, pid: u32, data: &[u8]) -> Result<()> {
+        tracing::debug!(
+            "MemDbCallbacks: processing update from {}/{} ({} bytes)",
+            nodeid,
+            pid,
+            data.len()
+        );
+
+        // Deserialize TreeEntry from C wire format
+        let tree_entry = pmxcfs_memdb::TreeEntry::deserialize_from_update(data)
+            .context("Failed to deserialize TreeEntry from update message")?;
+
+        tracing::info!(
+            "MemDbCallbacks: received update for inode {} ({}), version={}",
+            tree_entry.inode,
+            tree_entry.name,
+            tree_entry.version
+        );
+
+        // Apply the entry to our local database
+        self.memdb
+            .apply_tree_entry(tree_entry)
+            .context("Failed to apply TreeEntry to database")?;
+
+        tracing::debug!("MemDbCallbacks: update applied successfully");
+        Ok(())
+    }
+
+    /// Commit synchronized state
+    fn commit_state(&self) -> Result<()> {
+        tracing::info!("MemDbCallbacks: committing synchronized state");
+        if let Some(leader_index) = self.pending_leader_index.lock().take() {
+            let removed_inodes = self
+                .memdb
+                .converge_to_index(&leader_index)
+                .context("Failed to converge follower state to leader index")?;
+            tracing::info!(
+                "MemDbCallbacks: converged follower state, removed {} stale inodes",
+                removed_inodes.len()
+            );
+        }
+
+        // Increment all path versions to notify clients of database reload
+        // Matches C's record_memdb_reload() called in database.c:607
+        self.status.increment_all_path_versions();
+
+        // Recreate VM list after database changes (matching C's bdb_backend_commit_update)
+        // This ensures VM list is updated whenever the cluster database is synchronized
+        self.status.scan_vmlist(&self.memdb);
+        self.sync_runtime_corosync_conf(false);
+
+        Ok(())
+    }
+
+    /// Called when cluster becomes synced
+    fn on_synced(&self) {
+        tracing::info!("MemDbCallbacks: cluster is now fully synchronized");
+    }
+
+    fn cleanup_sync_resources(&self, _states: &[NodeSyncInfo]) {
+        self.pending_leader_index.lock().take();
+    }
+}
+
+// Helper methods for MemDbCallbacks (not part of trait)
+impl MemDbCallbacks {
+    /// Extract errno from an io::Error, defaulting to EACCES for non-OS errors.
+    fn errno_from(e: &std::io::Error) -> i32 {
+        e.raw_os_error().unwrap_or(libc::EACCES)
+    }
+
+    /// Check if a path exists, then apply an operation.
+    /// Returns 0 on success, negative errno on failure.
+    /// If the path does not exist, returns `missing_errno` (negative).
+    fn with_existing_path(
+        &self,
+        path: &str,
+        missing_errno: i32,
+        op: impl FnOnce() -> Result<(), std::io::Error>,
+    ) -> i32 {
+        match self.memdb.exists(path) {
+            Ok(true) => match op() {
+                Ok(()) => 0,
+                Err(e) => -Self::errno_from(&e),
+            },
+            Ok(false) => missing_errno,
+            Err(e) => {
+                tracing::warn!(
+                    "MemDbCallbacks: failed to check if '{}' exists: {}",
+                    path,
+                    e
+                );
+                -libc::EIO
+            }
+        }
+    }
+
+    /// Handle Create message - create an empty file
+    fn handle_create(&self, path: &str, writer: u32, mtime: u32) -> i32 {
+        self.create_entry(path, 0, writer, mtime)
+    }
+
+    /// Handle Mkdir message - create a directory
+    fn handle_mkdir(&self, path: &str, writer: u32, mtime: u32) -> i32 {
+        self.create_entry(path, libc::S_IFDIR, writer, mtime)
+    }
+
+    /// Shared logic for creating files and directories.
+    /// Returns 0 on success, negative errno on failure.
+    fn create_entry(&self, path: &str, mode: u32, writer: u32, mtime: u32) -> i32 {
+        let kind = if mode == libc::S_IFDIR {
+            "directory"
+        } else {
+            "file"
+        };
+        match self.memdb.create_replicated(path, mode, writer, mtime) {
+            Ok(_) => {
+                tracing::info!("MemDbCallbacks: created {} '{}'", kind, path);
+                self.update_version_counters(path);
+                0
+            }
+            Err(e) => {
+                tracing::warn!(
+                    "MemDbCallbacks: failed to create {} '{}': {}",
+                    kind,
+                    path,
+                    e
+                );
+                -Self::errno_from(&e)
+            }
+        }
+    }
+
+    /// Handle Write message - write data to a file
+    /// Returns 0 on success, negative errno on failure
+    fn handle_write(
+        &self,
+        path: &str,
+        writer: u32,
+        offset: u64,
+        data: &[u8],
+        truncate: bool,
+        mtime: u32,
+    ) -> i32 {
+        // Create file if it doesn't exist
+        match self.memdb.exists(path) {
+            Err(e) => {
+                tracing::warn!(
+                    "MemDbCallbacks: failed to check if '{}' exists: {}",
+                    path,
+                    e
+                );
+                return -libc::EIO;
+            }
+            Ok(false) => {
+                if let Err(e) = self.memdb.create_replicated(path, 0, writer, mtime) {
+                    tracing::warn!("MemDbCallbacks: failed to create '{}': {}", path, e);
+                    return -Self::errno_from(&e);
+                }
+            }
+            Ok(true) => {}
+        }
+
+        // Write data
+        if !data.is_empty() || truncate {
+            match self
+                .memdb
+                .write(path, offset, writer, mtime, data, truncate)
+            {
+                Ok(_) => {
+                    tracing::info!(
+                        "MemDbCallbacks: wrote {} bytes to '{}' at offset {} (truncate={})",
+                        data.len(),
+                        path,
+                        offset,
+                        truncate
+                    );
+                    self.update_version_counters(path);
+                    0
+                }
+                Err(e) => {
+                    tracing::warn!("MemDbCallbacks: failed to write to '{}': {}", path, e);
+                    -Self::errno_from(&e)
+                }
+            }
+        } else {
+            0
+        }
+    }
+
+    /// Handle Delete message - delete a file or directory
+    /// Returns 0 on success, negative errno on failure
+    fn handle_delete(&self, path: &str, writer: u32, mtime: u32) -> i32 {
+        match self.memdb.exists(path) {
+            Ok(true) => match self.memdb.delete(path, writer, mtime) {
+                Ok(_) => {
+                    tracing::info!("MemDbCallbacks: deleted '{}'", path);
+                    self.update_version_counters(path);
+                    0
+                }
+                Err(e) => {
+                    tracing::warn!("MemDbCallbacks: failed to delete '{}': {}", path, e);
+                    -Self::errno_from(&e)
+                }
+            },
+            Ok(false) => {
+                tracing::debug!("MemDbCallbacks: path '{}' already deleted", path);
+                0 // Not an error - already deleted
+            }
+            Err(e) => {
+                tracing::warn!(
+                    "MemDbCallbacks: failed to check if '{}' exists: {}",
+                    path,
+                    e
+                );
+                -libc::EIO
+            }
+        }
+    }
+
+    /// Handle Rename message - rename a file or directory
+    /// Returns 0 on success, negative errno on failure
+    fn handle_rename(&self, from: &str, to: &str, writer: u32, mtime: u32) -> i32 {
+        let result = self.with_existing_path(from, -libc::ENOENT, || {
+            self.memdb.rename(from, to, writer, mtime)
+        });
+        if result == 0 {
+            tracing::info!("MemDbCallbacks: renamed '{}' to '{}'", from, to);
+            self.update_version_counters(from);
+            self.update_version_counters(to);
+        } else if result == -libc::ENOENT {
+            tracing::debug!(
+                "MemDbCallbacks: source path '{}' not found for rename",
+                from
+            );
+        } else {
+            tracing::warn!(
+                "MemDbCallbacks: failed to rename '{}' to '{}': errno={}",
+                from,
+                to,
+                -result
+            );
+        }
+        result
+    }
+
+    /// Handle Mtime message - update modification time
+    /// Returns 0 on success, negative errno on failure
+    fn handle_mtime(&self, path: &str, nodeid: u32, mtime: u32) -> i32 {
+        let result = self.with_existing_path(path, -libc::ENOENT, || {
+            self.memdb.set_mtime(path, nodeid, mtime)
+        });
+        if result == 0 {
+            tracing::info!(
+                "MemDbCallbacks: updated mtime for '{}' from node {}",
+                path,
+                nodeid
+            );
+            self.update_version_counters(path);
+        } else if result == -libc::ENOENT {
+            tracing::debug!("MemDbCallbacks: path '{}' not found for mtime update", path);
+        } else {
+            tracing::warn!(
+                "MemDbCallbacks: failed to update mtime for '{}': errno={}",
+                path,
+                -result
+            );
+        }
+        result
+    }
+
+    /// Handle UnlockRequest message - check if lock expired and broadcast Unlock if needed
+    ///
+    /// Only the leader processes unlock requests (C: dcdb.c:830-838)
+    fn handle_unlock_request(&self, path: String, checksum: [u8; 32]) -> Result<()> {
+        tracing::debug!("MemDbCallbacks: processing unlock request for: {}", path);
+
+        // Only the leader (lowest nodeid) should process unlock requests
+        let Some(dfsm) = self.get_dfsm() else {
+            tracing::warn!("DFSM not available, cannot process unlock request");
+            return Ok(());
+        };
+        if !dfsm.is_leader() {
+            tracing::debug!("Not leader, ignoring unlock request for: {}", path);
+            return Ok(());
+        }
+
+        // Get the lock entry to compute checksum
+        if let Some(entry) = self.memdb.lookup_path(&path)
+            && entry.is_dir()
+            && pmxcfs_memdb::is_lock_path(&path)
+            && entry.compute_checksum() == checksum
+        {
+            // Check if lock expired (C: dcdb.c:834)
+            if self.memdb.lock_expired(&path, &checksum) {
+                tracing::info!("Lock expired, sending unlock message for: {}", path);
+                // Send Unlock message to cluster (C: dcdb.c:836)
+                dfsm.broadcast(FuseMessage::Unlock {
+                    path: path.clone(),
+                    checksum,
+                });
+            } else {
+                tracing::debug!("Lock not expired for: {}", path);
+            }
+        }
+
+        Ok(())
+    }
+
+    /// Handle Unlock message - delete an expired lock
+    ///
+    /// This is broadcast by the leader when a lock expires (C: dcdb.c:834)
+    fn handle_unlock(
+        &self,
+        path: String,
+        checksum: [u8; 32],
+        writer: u32,
+        mtime: u32,
+    ) -> Result<()> {
+        tracing::info!("MemDbCallbacks: processing unlock message for: {}", path);
+
+        if let Some(entry) = self.memdb.lookup_path(&path)
+            && entry.is_dir()
+            && pmxcfs_memdb::is_lock_path(&path)
+            && entry.compute_checksum() == checksum
+        {
+            if let Err(e) = self.memdb.delete(&path, writer, mtime) {
+                tracing::warn!("Failed to delete lock {}: {}", path, e);
+            } else {
+                tracing::info!("Successfully deleted lock: {}", path);
+                self.update_version_counters(&path);
+            }
+        }
+
+        Ok(())
+    }
+
+    /// Send updates to followers (leader only)
+    ///
+    /// Compares the leader index with each follower and sends Update messages
+    /// for entries that differ. Matches C's dcdb_create_and_send_updates().
+    fn send_updates_to_followers(
+        &self,
+        dfsm: &pmxcfs_dfsm::Dfsm<FuseMessage>,
+        leader_index: &MemDbIndex,
+        all_indices: &[(u32, u32, MemDbIndex)],
+    ) -> Result<()> {
+        use std::collections::HashSet;
+
+        // Collect all inodes that need updating across all followers
+        let mut inodes_to_update: HashSet<u64> = HashSet::new();
+        let mut any_follower_needs_updates = false;
+
+        for (_nodeid, _pid, follower_index) in all_indices {
+            // Skip if this is us (the leader) - check if indices are identical
+            // Must match the same check in process_state_update()
+            let is_synced = follower_index.version == leader_index.version
+                && follower_index.mtime == leader_index.mtime
+                && follower_index.size == leader_index.size
+                && follower_index.entries == leader_index.entries;
+
+            if is_synced {
+                continue;
+            }
+
+            // This follower needs updates
+            any_follower_needs_updates = true;
+
+            // Find differences between leader and this follower
+            let diffs = leader_index.find_differences(follower_index);
+            tracing::debug!(
+                "MemDbCallbacks: found {} differing inodes for follower",
+                diffs.len()
+            );
+            inodes_to_update.extend(diffs);
+        }
+
+        // If no follower needs updates at all, we're done
+        if !any_follower_needs_updates {
+            tracing::info!("MemDbCallbacks: no updates needed, all nodes are synced");
+            dfsm.send_update_complete()?;
+            return Ok(());
+        }
+
+        tracing::info!(
+            "MemDbCallbacks: sending updates ({} differing entries)",
+            inodes_to_update.len()
+        );
+
+        // Send Update for every differing inode, including root (inode 0).
+        //
+        // C's dcdb_create_and_send_updates iterates master->entries[] which includes
+        // inode 0 (root).  When root versions differ (e.g. after a node rejoin), the
+        // leader must send Update for root so the follower can write the new version/mtime/
+        // writer to its SQLite DB (as inode 0 / VERSIONFILENAME).  Without this update,
+        // the follower's root stays at the old version, and the byte-level index comparison
+        // in bdb_backend_commit_update fails.
+        let mut sent_count = 0;
+        for inode in inodes_to_update {
+            // Look up the TreeEntry for this inode
+            match self.memdb.get_entry_by_inode(inode) {
+                Some(tree_entry) => {
+                    tracing::info!(
+                        "MemDbCallbacks: sending UPDATE for inode {:#018x} (name='{}', parent={:#018x}, type={}, version={}, size={})",
+                        inode,
+                        tree_entry.name,
+                        tree_entry.parent,
+                        tree_entry.entry_type,
+                        tree_entry.version,
+                        tree_entry.size
+                    );
+
+                    if let Err(e) = dfsm.send_update(tree_entry) {
+                        tracing::error!(
+                            "MemDbCallbacks: failed to send update for inode {}: {}",
+                            inode,
+                            e
+                        );
+                        // Continue sending other updates even if one fails
+                    } else {
+                        sent_count += 1;
+                    }
+                }
+                None => {
+                    tracing::error!(
+                        "MemDbCallbacks: cannot find TreeEntry for inode {} in database",
+                        inode
+                    );
+                }
+            }
+        }
+
+        tracing::info!("MemDbCallbacks: sent {} updates", sent_count);
+
+        // Send UpdateComplete to signal end of updates
+        dfsm.send_update_complete()?;
+        tracing::info!("MemDbCallbacks: sent UpdateComplete");
+
+        Ok(())
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use pmxcfs_test_utils::{create_minimal_test_db, create_test_config};
+    use std::fs;
+    use tempfile::TempDir;
+
+    fn make_node_state(node_id: u32, pid: u32, memdb: &MemDb) -> NodeSyncInfo {
+        NodeSyncInfo {
+            node_id,
+            pid,
+            state: Some(memdb.encode_index().unwrap().serialize()),
+            synced: false,
+        }
+    }
+
+    fn corosync_config(version: u64) -> Vec<u8> {
+        format!("totem {{\n    config_version: {version}\n}}\n").into_bytes()
+    }
+
+    #[test]
+    fn test_deliver_truncate_write_preserves_c_semantics() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        memdb.create("/truncate-test.txt", 0, 0, 100).unwrap();
+        memdb
+            .write("/truncate-test.txt", 0, 0, 101, b"abcdef", false)
+            .unwrap();
+
+        let (result, processed) = callbacks
+            .deliver_message(
+                2,
+                2000,
+                FuseMessage::Write {
+                    path: "/truncate-test.txt".to_string(),
+                    offset: 2,
+                    truncate: true,
+                    data: Vec::new(),
+                },
+                102,
+            )
+            .unwrap();
+
+        assert_eq!(result, 0);
+        assert!(processed);
+        assert_eq!(memdb.read("/truncate-test.txt", 0, 16).unwrap(), b"ab");
+    }
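+
+    // Hedged sketch (not part of the original patch): illustrates the
+    // `with_existing_path` contract using the same test helpers as the
+    // surrounding tests - a missing path maps to the caller-supplied errno
+    // (here ENOENT via handle_mtime) rather than a generic EACCES.
+    #[test]
+    fn test_mtime_on_missing_path_returns_enoent() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        assert_eq!(callbacks.handle_mtime("/no-such-file", 1, 100), -libc::ENOENT);
+    }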
+
+    #[test]
+    fn test_cluster_lock_mtime_uses_originating_writer() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        memdb.create("/priv", libc::S_IFDIR, 0, 90).unwrap();
+        memdb.create("/priv/lock", libc::S_IFDIR, 0, 90).unwrap();
+
+        let (mkdir_result, processed) = callbacks
+            .deliver_message(
+                7,
+                7000,
+                FuseMessage::Mkdir {
+                    path: "/priv/lock/test-lock".to_string(),
+                },
+                100,
+            )
+            .unwrap();
+
+        assert_eq!(mkdir_result, 0);
+        assert!(processed);
+
+        let entry = memdb.lookup_path("/priv/lock/test-lock").unwrap();
+        assert_eq!(entry.writer, 7);
+
+        let (mtime_result, processed) = callbacks
+            .deliver_message(
+                7,
+                7000,
+                FuseMessage::Mtime {
+                    path: "/priv/lock/test-lock".to_string(),
+                    mtime: 120,
+                },
+                120,
+            )
+            .unwrap();
+
+        assert_eq!(mtime_result, 0);
+        assert!(processed);
+        assert_eq!(
+            memdb.lookup_path("/priv/lock/test-lock").unwrap().mtime,
+            120
+        );
+    }
+
+    #[test]
+    fn test_stale_unlock_does_not_delete_reacquired_lock() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        memdb.create("/priv", libc::S_IFDIR, 0, 90).unwrap();
+        memdb.create("/priv/lock", libc::S_IFDIR, 0, 90).unwrap();
+        memdb
+            .create("/priv/lock/test-lock", libc::S_IFDIR, 7, 100)
+            .unwrap();
+
+        let stale_checksum = memdb
+            .lookup_path("/priv/lock/test-lock")
+            .unwrap()
+            .compute_checksum();
+
+        memdb.set_mtime("/priv/lock/test-lock", 7, 120).unwrap();
+
+        let (result, processed) = callbacks
+            .deliver_message(
+                7,
+                7000,
+                FuseMessage::Unlock {
+                    path: "/priv/lock/test-lock".to_string(),
+                    checksum: stale_checksum,
+                },
+                121,
+            )
+            .unwrap();
+
+        assert_eq!(result, 0);
+        assert!(processed);
+        assert!(memdb.exists("/priv/lock/test-lock").unwrap());
+    }
+
+    #[test]
+    fn test_delete_uses_dfsm_timestamp() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        memdb.create("/delete-me.txt", 0, 0, 100).unwrap();
+
+        let (result, processed) = callbacks
+            .deliver_message(
+                9,
+                9000,
+                FuseMessage::Delete {
+                    path: "/delete-me.txt".to_string(),
+                },
+                3456,
+            )
+            .unwrap();
+
+        assert_eq!(result, 0);
+        assert!(processed);
+        assert!(!memdb.exists("/delete-me.txt").unwrap());
+
+        let root = memdb.get_entry_by_inode(pmxcfs_memdb::ROOT_INODE).unwrap();
+        assert_eq!(root.writer, 9);
+        assert_eq!(root.mtime, 3456);
+    }
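+
+    // Hedged sketch (not part of the original patch): replicated deletes are
+    // idempotent - a Delete for an already-removed path reports success
+    // instead of ENOENT, matching the Ok(false) branch in handle_delete().
+    #[test]
+    fn test_delete_missing_path_is_idempotent() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        assert_eq!(callbacks.handle_delete("/never-existed.txt", 4, 200), 0);
+        assert!(!memdb.exists("/never-existed.txt").unwrap());
+    }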
+
+    #[test]
+    fn test_rename_uses_dfsm_timestamp() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        memdb.create("/rename-me.txt", 0, 0, 100).unwrap();
+
+        let (result, processed) = callbacks
+            .deliver_message(
+                5,
+                5000,
+                FuseMessage::Rename {
+                    from: "/rename-me.txt".to_string(),
+                    to: "/renamed.txt".to_string(),
+                },
+                4321,
+            )
+            .unwrap();
+
+        assert_eq!(result, 0);
+        assert!(processed);
+        assert!(!memdb.exists("/rename-me.txt").unwrap());
+        assert!(memdb.exists("/renamed.txt").unwrap());
+
+        let root = memdb.get_entry_by_inode(pmxcfs_memdb::ROOT_INODE).unwrap();
+        assert_eq!(root.writer, 5);
+        assert_eq!(root.mtime, 4321);
+    }
+
+    #[test]
+    fn test_runtime_write_syncs_corosync_conf_to_host_file() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let host_dir = TempDir::new().unwrap();
+        let host_path = host_dir.path().join("corosync.conf");
+        fs::write(&host_path, corosync_config(1)).unwrap();
+
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, Some(host_path.clone()));
+
+        let (result, processed) = callbacks
+            .deliver_message(
+                2,
+                2000,
+                FuseMessage::Create {
+                    path: "/corosync.conf".to_string(),
+                },
+                100,
+            )
+            .unwrap();
+        assert_eq!(result, 0);
+        assert!(processed);
+
+        let (result, processed) = callbacks
+            .deliver_message(
+                2,
+                2000,
+                FuseMessage::Write {
+                    path: "/corosync.conf".to_string(),
+                    offset: 0,
+                    truncate: true,
+                    data: corosync_config(2),
+                },
+                101,
+            )
+            .unwrap();
+
+        assert_eq!(result, 0);
+        assert!(processed);
+        assert_eq!(fs::read(&host_path).unwrap(), corosync_config(2));
+    }
+
+    #[test]
+    fn test_runtime_rename_syncs_corosync_conf_to_host_file() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let host_dir = TempDir::new().unwrap();
+        let host_path = host_dir.path().join("corosync.conf");
+        fs::write(&host_path, corosync_config(2)).unwrap();
+
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, Some(host_path.clone()));
+
+        memdb.create("/corosync.conf.new", 0, 0, 100).unwrap();
+        memdb
+            .write("/corosync.conf.new", 0, 0, 101, &corosync_config(3), true)
+            .unwrap();
+
+        let (result, processed) = callbacks
+            .deliver_message(
+                3,
+                3000,
+                FuseMessage::Rename {
+                    from: "/corosync.conf.new".to_string(),
+                    to: "/corosync.conf".to_string(),
+                },
+                102,
+            )
+            .unwrap();
+
+        assert_eq!(result, 0);
+        assert!(processed);
+        assert_eq!(fs::read(&host_path).unwrap(), corosync_config(3));
+    }
+
+    #[test]
+    fn test_commit_state_syncs_corosync_conf_to_host_file() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let host_dir = TempDir::new().unwrap();
+        let host_path = host_dir.path().join("corosync.conf");
+        fs::write(&host_path, corosync_config(4)).unwrap();
+
+        memdb.create("/corosync.conf", 0, 0, 100).unwrap();
+        memdb
+            .write("/corosync.conf", 0, 0, 101, &corosync_config(5), true)
+            .unwrap();
+
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, Some(host_path.clone()));
+
+        callbacks.commit_state().unwrap();
+
+        assert_eq!(fs::read(&host_path).unwrap(), corosync_config(5));
+    }
+
+    #[test]
+    fn test_commit_state_removes_follower_only_inode() {
+        let (_leader_tmpdir, leader_memdb) = create_minimal_test_db().unwrap();
+        let (_follower_tmpdir, follower_memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(follower_memdb.clone(), status, None);
+
+        leader_memdb.create("/keep.txt", 0, 1, 100).unwrap();
+        leader_memdb
+            .write("/keep.txt", 0, 1, 101, b"shared", false)
+            .unwrap();
+        leader_memdb.create("/leader-temp.txt", 0, 1, 102).unwrap();
+        leader_memdb.delete("/leader-temp.txt", 1, 103).unwrap();
+
+        follower_memdb.create("/keep.txt", 0, 2, 100).unwrap();
+        follower_memdb
+            .write("/keep.txt", 0, 2, 101, b"shared", false)
+            .unwrap();
+        follower_memdb.create("/stale.txt", 0, 2, 102).unwrap();
+        follower_memdb
+            .write("/stale.txt", 0, 2, 103, b"stale", false)
+            .unwrap();
+
+        let sync_required = callbacks
+            .process_state_update(&[
+                make_node_state(1, 1000, &leader_memdb),
+                make_node_state(2, 2000, &follower_memdb),
+            ])
+            .unwrap();
+
+        assert!(!sync_required);
+        callbacks.commit_state().unwrap();
+
+        assert!(follower_memdb.lookup_path("/keep.txt").is_some());
+        assert!(follower_memdb.lookup_path("/stale.txt").is_none());
+    }
+
+    #[test]
+    fn test_commit_state_refreshes_lock_cache_and_cleans_stale_nested_lock() {
+        let (_leader_tmpdir, leader_memdb) = create_minimal_test_db().unwrap();
+        let (_follower_tmpdir, follower_memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(follower_memdb.clone(), status, None);
+
+        for memdb in [&leader_memdb, &follower_memdb] {
+            memdb.create("/priv", libc::S_IFDIR, 0, 90).unwrap();
+            memdb.create("/priv/lock", libc::S_IFDIR, 0, 91).unwrap();
+            memdb
+                .create("/priv/lock/qemu-server", libc::S_IFDIR, 0, 92)
+                .unwrap();
+        }
+
+        leader_memdb
+            .acquire_lock("/qemu-server/100.conf", &[1u8; 32])
+            .unwrap();
+        leader_memdb.create("/leader-temp.txt", 0, 1, 100).unwrap();
+        leader_memdb.delete("/leader-temp.txt", 1, 101).unwrap();
+
+        follower_memdb
+            .acquire_lock("/qemu-server/200.conf", &[2u8; 32])
+            .unwrap();
+        assert!(follower_memdb.is_locked("/qemu-server/200.conf"));
+
+        let sync_required = callbacks
+            .process_state_update(&[
+                make_node_state(1, 1000, &leader_memdb),
+                make_node_state(2, 2000, &follower_memdb),
+            ])
+            .unwrap();
+
+        assert!(!sync_required);
+
+        let live_lock_entry = leader_memdb
+            .lookup_path("/priv/lock/qemu-server/100.conf")
+            .unwrap();
+        assert!(!follower_memdb.is_locked("/qemu-server/100.conf"));
+
+        callbacks
+            .process_update(1, 1000, &live_lock_entry.serialize_for_update())
+            .unwrap();
+        assert!(!follower_memdb.is_locked("/qemu-server/100.conf"));
+
+        callbacks.commit_state().unwrap();
+
+        assert!(follower_memdb.is_locked("/qemu-server/100.conf"));
+        assert!(
+            !follower_memdb
+                .exists("/priv/lock/qemu-server/200.conf")
+                .unwrap()
+        );
+        assert!(!follower_memdb.is_locked("/qemu-server/200.conf"));
+    }
+
+    #[test]
+    fn test_handle_create_duplicate_returns_eexist() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        // First create succeeds
+        let result = callbacks.handle_create("/dup.txt", 1, 100);
+        assert_eq!(result, 0);
+
+        // Second create of same path should return -EEXIST, not -EACCES
+        let result = callbacks.handle_create("/dup.txt", 1, 101);
+        assert_eq!(
+            result,
+            -libc::EEXIST,
+            "duplicate create should return EEXIST"
+        );
+    }
+
+    #[test]
+    fn test_handle_delete_nonempty_dir_returns_enotempty() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        // Create parent dir and child file
+        memdb.create("/parent", libc::S_IFDIR, 1, 100).unwrap();
+        memdb.create("/parent/child.txt", 0, 1, 101).unwrap();
+
+        // Deleting non-empty directory should return -ENOTEMPTY, not -EACCES
+        let result = callbacks.handle_delete("/parent", 1, 102);
+        assert_eq!(
+            result,
+            -libc::ENOTEMPTY,
+            "deleting non-empty dir should return ENOTEMPTY"
+        );
+    }
+
+    #[test]
+    fn test_handle_mkdir_duplicate_returns_eexist() {
+        let (_tmpdir, memdb) = create_minimal_test_db().unwrap();
+        let status = pmxcfs_status::init_with_config(create_test_config(false));
+        let callbacks = MemDbCallbacks::new(memdb.clone(), status, None);
+
+        let result = callbacks.handle_mkdir("/mydir", 1, 100);
+        assert_eq!(result, 0);
+
+        let result = callbacks.handle_mkdir("/mydir", 1, 101);
+        assert_eq!(
+            result,
+            -libc::EEXIST,
+            "duplicate mkdir should return EEXIST"
+        );
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/path.rs b/src/pmxcfs-rs/pmxcfs/src/path.rs
new file mode 100644
index 000000000..ab7aee460
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/path.rs
@@ -0,0 +1,45 @@
+//! Shared path helpers matching the C implementation.
+
+/// Matches C's `path_is_private()` logic from `cfs-utils.c`.
+pub fn is_private_path(path: &str) -> bool {
+    let path = path.trim_start_matches('/');
+
+    if path.starts_with("priv") && (path.len() == 4 || path.as_bytes()[4] == b'/') {
+        return true;
+    }
+
+    if let Some(after_nodes) = path.strip_prefix("nodes/")
+        && let Some(slash_pos) = after_nodes.find('/')
+    {
+        let after_nodename = &after_nodes[slash_pos..];
+        if after_nodename.starts_with("/priv") {
+            let priv_end = slash_pos + 5;
+            if after_nodes.len() == priv_end || after_nodes.as_bytes()[priv_end] == b'/' {
+                return true;
+            }
+        }
+    }
+
+    false
+}
+
+#[cfg(test)]
+mod tests {
+    use super::is_private_path;
+
+    #[test]
+    fn detects_private_paths() {
+        assert!(is_private_path("priv"));
+        assert!(is_private_path("/priv/token.cfg"));
+        assert!(is_private_path("nodes/node1/priv"));
+        assert!(is_private_path("/nodes/node1/priv/secret.cfg"));
+    }
+
+    #[test]
+    fn ignores_non_private_paths() {
+        assert!(!is_private_path("private"));
+        assert!(!is_private_path("nodes"));
+        assert!(!is_private_path("nodes/node1/private"));
+        assert!(!is_private_path("corosync.conf"));
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/README.md b/src/pmxcfs-rs/pmxcfs/src/plugins/README.md
new file mode 100644
index 000000000..53b0249cb
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/README.md
@@ -0,0 +1,203 @@
+# PMXCFS Plugin System
+
+## Overview
+
+The plugin system provides virtual files in the `/etc/pve` filesystem whose content is generated on the fly when read. These files expose cluster status, configuration, and monitoring data.
+
+## Plugin Types
+
+### Function Plugins
+
+These plugins generate dynamic content when read:
+
+- `.version` - Cluster version and status information
+- `.members` - Cluster membership information
+- `.vmlist` - List of VMs and containers
+- `.rrd` - Round-robin database dump
+- `.clusterlog` - Cluster log entries
+- `.debug` - Debug mode toggle
+
+### Symlink Plugins
+
+These plugins create symlinks to node-specific directories:
+
+- `local/` → `nodes/{nodename}/`
+- `qemu-server/` → `nodes/{nodename}/qemu-server/`
+- `lxc/` → `nodes/{nodename}/lxc/`
+- `openvz/` → `nodes/{nodename}/openvz/` (legacy)
+
+## Plugin File Formats
+
+### .version Plugin
+
+**Format**: JSON
+
+**Fields**:
+- `api` - API version (integer)
+- `clinfo` - Cluster info version (integer)
+- `cluster` - Cluster information object
+  - `name` - Cluster name (string)
+  - `nodes` - Number of nodes (integer)
+  - `quorate` - Quorum status (1 or 0)
+- `starttime` - Daemon start time (Unix timestamp)
+- `version` - Software version (string)
+- `vmlist` - VM list version (integer)
+
+**Example**:
+```json
+{
+  "api": 1,
+  "clinfo": 2,
+  "cluster": {
+    "name": "pmxcfs",
+    "nodes": 3,
+    "quorate": 1
+  },
+  "starttime": 1699876543,
+  "version": "9.0.6",
+  "vmlist": 5
+}
+```
+
+### .members Plugin
+
+**Format**: JSON
+
+**Fields**:
+- `nodename` - Local node name (string)
+- `version` - Cluster info version (integer)
+- `cluster` - Cluster information object (only present once cluster info is known)
+  - `name` - Cluster name (string)
+  - `version` - Cluster configuration version (integer)
+  - `nodes` - Number of nodes (integer)
+  - `quorate` - Quorum status (1 or 0)
+- `nodelist` - Object mapping node names to node objects (only present once cluster info is known)
+  - `id` - Node ID (integer)
+  - `online` - Online status (1 or 0)
+  - `ip` - Node IP address (string; omitted when unknown)
+
+**Example**:
+```json
+{
+  "nodename": "node1",
+  "version": 2,
+  "cluster": {
+    "name": "pmxcfs",
+    "version": 2,
+    "nodes": 3,
+    "quorate": 1
+  },
+  "nodelist": {
+    "node1": { "id": 1, "online": 1, "ip": "192.168.1.10" },
+    "node2": { "id": 2, "online": 1, "ip": "192.168.1.11" },
+    "node3": { "id": 3, "online": 0, "ip": "192.168.1.12" }
+  }
+}
+```
+
+### .vmlist Plugin
+
+**Format**: INI-style with sections
+
+**Sections**:
+- `[qemu]` - QEMU/KVM virtual machines
+- `[lxc]` - Linux containers
+
+**Entry Format**: `VMID<TAB>NODE<TAB>VERSION`
+- `VMID` - VM/container ID (integer)
+- `NODE` - Node name where the VM is defined (string)
+- `VERSION` - Configuration version (integer)
+
+**Example**:
+```
+[qemu]
+100	node1	2
+101	node2	1
+
+[lxc]
+200	node1	1
+201	node3	2
+```
+
+### .rrd Plugin
+
+**Format**: Plain text, one schema-prefixed record per line
+
+**Line Format**: `{schema}/{id}:{timestamp}:{field1}:{field2}:...`
+- `schema` - RRD schema name (e.g., `pve-node-9.0`, `pve-vm-9.0`, `pve-storage-9.0`)
+- `id` - Resource identifier (node name, VMID, or storage name)
+- `timestamp` - Unix timestamp
+- `fields` - Colon-separated metric values
+
+Schemas include node metrics, VM metrics, and storage metrics with appropriate fields for each type.
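+
+An illustrative record for a node entry (the metric values here are invented for the example; the real field list depends on the schema):
+
+```
+pve-node-9.0/node1:1699876543:0.15:4.20:...
+```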
+
+### .clusterlog Plugin
+
+**Format**: JSON with data array
+
+**Fields**:
+- `data` - Array of log entry objects
+  - `uid` - Log entry UID (integer)
+  - `time` - Unix timestamp (integer)
+  - `pri` - Syslog priority (integer)
+  - `tag` - Log tag (string)
+  - `pid` - Process ID (integer)
+  - `node` - Node name (string)
+  - `user` - User or process identifier (string)
+  - `msg` - Log message (string)
+
+**Example**:
+```json
+{
+  "data": [
+    {
+      "uid": 42,
+      "time": 1699876543,
+      "pri": 6,
+      "tag": "task",
+      "pid": 1234,
+      "node": "node1",
+      "user": "root",
+      "msg": "Started VM 100"
+    }
+  ]
+}
+```
+
+### .debug Plugin
+
+**Format**: Plain text - the debug level as a number followed by a newline (e.g. `0\n`)
+
+**Values**:
+- `0` - Debug logging disabled
+- non-zero - Debug logging enabled
+
+**Behavior**:
+- Reading returns the current debug level
+- Writing a non-zero level enables debug logging at runtime
+- Writing `0` disables debug logging
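+
+For example, debug logging can be toggled at runtime from a shell (illustrative; `/etc/pve` is the usual mount point):
+
+```
+echo 1 > /etc/pve/.debug   # enable debug logging
+cat /etc/pve/.debug        # read back the current level
+echo 0 > /etc/pve/.debug   # disable again
+```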
+
+## Implementation Details
+
+### Registry
+
+The plugin registry (`registry.rs`) maintains all plugin definitions and handles lookups.
+
+### Plugin Trait
+
+All plugins implement a common `Plugin` trait that defines:
+- `name()` - Plugin file name (e.g. `.version`)
+- `read()` - Generate plugin content
+- `write()` and `supports_write()` - Handle writes (only the `.debug` plugin is writable)
+- `mode()` - File permission bits
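+
+In sketch form (signatures taken from the plugin implementations in this patch; the `Send + Sync` bound is inferred from the registry storing `Arc<dyn Plugin>` values):
+
+```rust
+pub trait Plugin: Send + Sync {
+    fn name(&self) -> &str;                             // plugin file name, e.g. ".version"
+    fn read(&self) -> anyhow::Result<Vec<u8>>;          // generate content on each read
+    fn write(&self, data: &[u8]) -> anyhow::Result<()>; // only .debug overrides this
+    fn supports_write(&self) -> bool;                   // defaults to false
+    fn mode(&self) -> u32;                              // permission bits, e.g. 0o440
+}
+```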
+
+### Integration with FUSE
+
+Plugins are integrated into the FUSE filesystem layer and appear as regular files in `/etc/pve`.
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/clusterlog.rs b/src/pmxcfs-rs/pmxcfs/src/plugins/clusterlog.rs
new file mode 100644
index 000000000..da7c72daf
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/clusterlog.rs
@@ -0,0 +1,275 @@
+/// .clusterlog Plugin - Cluster Log Entries
+///
+/// This plugin provides cluster log entries in JSON format matching the C implementation:
+/// ```json
+/// {
+///   "data": [
+///     {"uid": 42, "time": 1234567890, "pri": 6, "tag": "cluster", "pid": 1234, "node": "node1", "user": "root", "msg": "starting cluster log"}
+///   ]
+/// }
+/// ```
+///
+/// The format is compatible with the C implementation, which uses `clog_dump_json`
+/// to write JSON data to clients.
+///
+/// Default `max_entries`: 50 (matching the C implementation)
+use pmxcfs_status::Status;
+use std::sync::Arc;
+
+use super::Plugin;
+
+/// Clusterlog plugin - provides cluster log entries
+pub struct ClusterlogPlugin {
+    status: Arc<Status>,
+    max_entries: usize,
+}
+
+impl ClusterlogPlugin {
+    pub fn new(status: Arc<Status>) -> Self {
+        Self {
+            status,
+            max_entries: 50,
+        }
+    }
+
+    /// Create with custom entry limit
+    #[allow(dead_code)] // Used in tests for custom entry limits
+    pub fn new_with_limit(status: Arc<Status>, max_entries: usize) -> Self {
+        Self {
+            status,
+            max_entries,
+        }
+    }
+
+    /// Generate clusterlog content (C-compatible JSON format)
+    fn generate_content(&self) -> String {
+        self.status.dump_cluster_log_json(self.max_entries, "")
+    }
+}
+
+impl Plugin for ClusterlogPlugin {
+    fn name(&self) -> &str {
+        ".clusterlog"
+    }
+
+    fn read(&self) -> anyhow::Result<Vec<u8>> {
+        Ok(self.generate_content().into_bytes())
+    }
+
+    fn mode(&self) -> u32 {
+        0o440
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use pmxcfs_status as status;
+    use std::time::{SystemTime, UNIX_EPOCH};
+
+    /// Test helper: add a log message to the cluster log
+    fn add_log_message(
+        status: &status::Status,
+        node: String,
+        priority: u8,
+        ident: String,
+        tag: String,
+        pid: u32,
+        message: String,
+    ) {
+        let timestamp = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .unwrap()
+            .as_secs();
+        let entry = status::ClusterLogEntry {
+            uid: 0,
+            timestamp,
+            priority,
+            tag,
+            pid,
+            node,
+            ident,
+            message,
+        };
+        status.add_log_entry(entry);
+    }
+
+    #[tokio::test]
+    async fn test_clusterlog_format() {
+        // Initialize status subsystem without RRD persistence (not needed for test)
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = status::init_with_config(config);
+
+        // Test that it returns valid JSON
+        let plugin = ClusterlogPlugin::new(status.clone());
+        let result = plugin.generate_content();
+
+        // Should be valid JSON
+        assert!(
+            serde_json::from_str::<serde_json::Value>(&result).is_ok(),
+            "Should return valid JSON"
+        );
+    }
+
+    #[tokio::test]
+    async fn test_clusterlog_with_entries() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = status::init_with_config(config);
+
+        // Clear any existing log entries from other tests
+        status.clear_cluster_log();
+
+        // Add some log entries
+        add_log_message(
+            &status,
+            "node1".to_string(),
+            6, // Info priority
+            "pmxcfs".to_string(),
+            "cluster".to_string(),
+            1234,
+            "Node joined cluster".to_string(),
+        );
+
+        add_log_message(
+            &status,
+            "node2".to_string(),
+            4, // Warning priority
+            "pvestatd".to_string(),
+            "status".to_string(),
+            4321,
+            "High load detected".to_string(),
+        );
+
+        // Get clusterlog
+        let plugin = ClusterlogPlugin::new(status.clone());
+        let result = plugin.generate_content();
+
+        // Parse JSON
+        let json: serde_json::Value = serde_json::from_str(&result).expect("Should be valid JSON");
+
+        // Verify structure
+        assert!(json.get("data").is_some(), "Should have 'data' field");
+        let data = json["data"].as_array().expect("data should be array");
+        let stored_entries = status.get_log_entries(2);
+
+        // Should have at least 2 entries
+        assert!(data.len() >= 2, "Should have at least 2 entries");
+        assert_eq!(stored_entries.len(), 2, "Should have 2 stored entries");
+
+        // Verify first entry has all required fields
+        let first_entry = &data[0];
+        assert!(first_entry.get("uid").is_some(), "Should have uid");
+        assert!(first_entry.get("time").is_some(), "Should have time");
+        assert!(first_entry.get("pri").is_some(), "Should have pri");
+        assert!(first_entry.get("tag").is_some(), "Should have tag");
+        assert!(first_entry.get("pid").is_some(), "Should have pid");
+        assert!(first_entry.get("node").is_some(), "Should have node");
+        assert!(first_entry.get("user").is_some(), "Should have user");
+        assert!(first_entry.get("msg").is_some(), "Should have msg");
+
+        assert!(
+            first_entry["uid"].as_u64().unwrap() == stored_entries[0].uid as u64,
+            "uid should come from the stored entry"
+        );
+        assert_eq!(
+            first_entry["pid"].as_u64().unwrap(),
+            stored_entries[0].pid as u64
+        );
+    }
+
+    #[tokio::test]
+    async fn test_clusterlog_entry_limit() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = status::init_with_config(config);
+
+        // Add 10 log entries
+        for i in 0..10 {
+            add_log_message(
+                &status,
+                format!("node{i}"),
+                6,
+                "test".to_string(),
+                "test".to_string(),
+                i as u32,
+                format!("Test message {i}"),
+            );
+        }
+
+        // Request only 5 entries
+        let plugin = ClusterlogPlugin::new_with_limit(status, 5);
+        let result = plugin.generate_content();
+        let json: serde_json::Value = serde_json::from_str(&result).unwrap();
+        let data = json["data"].as_array().unwrap();
+
+        // Should have at most 5 entries
+        assert!(data.len() <= 5, "Should respect entry limit");
+    }
+
+    #[tokio::test]
+    async fn test_clusterlog_field_types() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = status::init_with_config(config);
+
+        add_log_message(
+            &status,
+            "testnode".to_string(),
+            5,
+            "testident".to_string(),
+            "testtag".to_string(),
+            9876,
+            "Test message content".to_string(),
+        );
+
+        let plugin = ClusterlogPlugin::new(status);
+        let result = plugin.generate_content();
+        let json: serde_json::Value = serde_json::from_str(&result).unwrap();
+        let data = json["data"].as_array().unwrap();
+
+        if let Some(entry) = data.first() {
+            // uid should be number
+            assert!(entry["uid"].is_u64(), "uid should be number");
+
+            // time should be number
+            assert!(entry["time"].is_u64(), "time should be number");
+
+            // pri should be number
+            assert!(entry["pri"].is_u64(), "pri should be number");
+
+            // tag should be string
+            assert!(entry["tag"].is_string(), "tag should be string");
+            assert_eq!(entry["tag"].as_str().unwrap(), "testtag");
+
+            // pid should be preserved from the stored entry
+            assert!(entry["pid"].is_u64(), "pid should be number");
+            assert_eq!(entry["pid"].as_u64().unwrap(), 9876);
+
+            // node should be string
+            assert!(entry["node"].is_string(), "node should be string");
+            assert_eq!(entry["node"].as_str().unwrap(), "testnode");
+
+            // user should be string
+            assert!(entry["user"].is_string(), "user should be string");
+            assert_eq!(entry["user"].as_str().unwrap(), "testident");
+
+            // msg should be string
+            assert!(entry["msg"].is_string(), "msg should be string");
+            assert_eq!(entry["msg"].as_str().unwrap(), "Test message content");
+        }
+    }
+
+    #[tokio::test]
+    async fn test_clusterlog_empty() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = status::init_with_config(config);
+
+        // Get clusterlog without any entries (or clear existing ones)
+        let plugin = ClusterlogPlugin::new_with_limit(status, 0);
+        let result = plugin.generate_content();
+        let json: serde_json::Value = serde_json::from_str(&result).unwrap();
+
+        // Should have data field with empty array
+        assert!(json.get("data").is_some());
+        let data = json["data"].as_array().unwrap();
+        assert_eq!(data.len(), 0, "Should have empty data array");
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/debug.rs b/src/pmxcfs-rs/pmxcfs/src/plugins/debug.rs
new file mode 100644
index 000000000..de45a5fd8
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/debug.rs
@@ -0,0 +1,131 @@
+/// .debug Plugin - Debug Level Control
+///
+/// This plugin provides read/write access to debug settings, matching the C implementation.
+/// Format: the debug level as text followed by a newline, e.g. "0\n" or "1\n"
+///
+/// When written, this actually changes the tracing filter level at runtime,
+/// matching the C implementation's behavior where cfs.debug controls cfs_debug() macro output.
+use anyhow::Result;
+use pmxcfs_config::Config;
+use std::sync::Arc;
+
+use super::Plugin;
+
+/// Debug plugin - provides debug level control
+pub struct DebugPlugin {
+    config: Arc<Config>,
+}
+
+impl DebugPlugin {
+    pub fn new(config: Arc<Config>) -> Self {
+        Self { config }
+    }
+
+    /// Generate debug setting content (read operation)
+    fn generate_content(&self) -> String {
+        let level = self.config.debug_level();
+        format!("{level}\n")
+    }
+
+    /// Handle debug plugin write operation
+    ///
+    /// This changes the tracing filter level at runtime to match C implementation behavior.
+    /// In C, writing to .debug sets cfs.debug which controls cfs_debug() macro output.
+    fn handle_write(&self, data: &str) -> Result<()> {
+        let level: u8 = data
+            .trim()
+            .parse()
+            .map_err(|_| anyhow::anyhow!("Invalid debug level: must be a number"))?;
+
+        // Update debug level in config
+        self.config.set_debug_level(level);
+
+        // Actually change the tracing filter level at runtime
+        // This matches C implementation where cfs.debug controls logging
+        if let Err(e) = crate::logging::set_debug_level(level) {
+            tracing::error!("Failed to update log level: {}", e);
+            // Don't fail - just log error. The level is still stored.
+        }
+
+        if level > 0 {
+            tracing::info!("Debug mode enabled (level {})", level);
+            tracing::debug!("Debug logging is now active");
+        } else {
+            tracing::info!("Debug mode disabled");
+        }
+
+        Ok(())
+    }
+}
+
+impl Plugin for DebugPlugin {
+    fn name(&self) -> &str {
+        ".debug"
+    }
+
+    fn read(&self) -> anyhow::Result<Vec<u8>> {
+        Ok(self.generate_content().into_bytes())
+    }
+
+    fn write(&self, data: &[u8]) -> Result<()> {
+        let text = std::str::from_utf8(data)?;
+        self.handle_write(text)
+    }
+
+    fn supports_write(&self) -> bool {
+        true
+    }
+
+    fn mode(&self) -> u32 {
+        0o640
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_debug_read() {
+        let config =
+            Config::new("test", "127.0.0.1".parse().unwrap(), 33, 0, false, "pmxcfs").into_shared();
+        let plugin = DebugPlugin::new(config);
+        let result = plugin.generate_content();
+        assert_eq!(result, "0\n");
+    }
+
+    #[test]
+    fn test_debug_write() {
+        let config =
+            Config::new("test", "127.0.0.1".parse().unwrap(), 33, 0, false, "pmxcfs").into_shared();
+
+        let plugin = DebugPlugin::new(config.clone());
+        let result = plugin.handle_write("1");
+        // Note: This will fail to actually change the log level if the reload handle
+        // hasn't been initialized (which is expected in unit tests without full setup).
+        // The function should still succeed - it just warns about not being able to reload.
+        assert!(result.is_ok());
+
+        // Verify the stored level changed
+        assert_eq!(config.debug_level(), 1);
+
+        // Test setting it back to 0
+        let result = plugin.handle_write("0");
+        assert!(result.is_ok());
+        assert_eq!(config.debug_level(), 0);
+    }
+
+    #[test]
+    fn test_invalid_debug_level() {
+        let config =
+            Config::new("test", "127.0.0.1".parse().unwrap(), 33, 0, false, "pmxcfs").into_shared();
+
+        let plugin = DebugPlugin::new(config.clone());
+
+        let result = plugin.handle_write("invalid");
+        assert!(result.is_err());
+
+        let result = plugin.handle_write("");
+        assert!(result.is_err());
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/members.rs b/src/pmxcfs-rs/pmxcfs/src/plugins/members.rs
new file mode 100644
index 000000000..990bbb6e0
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/members.rs
@@ -0,0 +1,146 @@
+/// .members Plugin - Cluster Member Information
+///
+/// This plugin provides information about cluster members in JSON format:
+/// ```json
+/// {
+///   "nodename": "node1",
+///   "version": 5,
+///   "cluster": {
+///     "name": "mycluster",
+///     "version": 1,
+///     "nodes": 3,
+///     "quorate": 1
+///   },
+///   "nodelist": {
+///     "node1": { "id": 1, "online": 1, "ip": "192.168.1.10" },
+///     "node2": { "id": 2, "online": 1, "ip": "192.168.1.11" }
+///   }
+/// }
+/// ```
+use pmxcfs_config::Config;
+use pmxcfs_status::Status;
+use std::sync::Arc;
+
+use super::Plugin;
+use crate::status_messages::build_cluster_info_value;
+
+/// Members plugin - provides cluster member information
+pub struct MembersPlugin {
+    config: Arc<Config>,
+    status: Arc<Status>,
+}
+
+impl MembersPlugin {
+    pub fn new(config: Arc<Config>, status: Arc<Status>) -> Self {
+        Self { config, status }
+    }
+
+    /// Generate members information content
+    fn generate_content(&self) -> String {
+        build_cluster_info_value(&self.config, &self.status).to_string()
+    }
+}
+
+impl Plugin for MembersPlugin {
+    fn name(&self) -> &str {
+        ".members"
+    }
+
+    fn read(&self) -> anyhow::Result<Vec<u8>> {
+        Ok(self.generate_content().into_bytes())
+    }
+
+    fn mode(&self) -> u32 {
+        0o440
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[tokio::test]
+    async fn test_members_format() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config);
+
+        let config = Config::new(
+            "testnode",
+            "127.0.0.1".parse().unwrap(),
+            33,
+            0,
+            false,
+            "testcluster",
+        )
+        .into_shared();
+
+        // Initialize cluster
+        status.init_cluster("testcluster".to_string());
+
+        let plugin = MembersPlugin::new(config, status);
+        let result = plugin.generate_content();
+        let parsed: serde_json::Value = serde_json::from_str(&result).unwrap();
+
+        // Should have nodename
+        assert_eq!(parsed["nodename"], "testnode");
+
+        // Should have version
+        assert!(parsed["version"].is_number());
+
+        assert!(parsed.get("cluster").is_none());
+        assert!(parsed.get("nodelist").is_none());
+    }
+
+    #[tokio::test]
+    async fn test_members_no_cluster() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config);
+
+        let config = Config::new(
+            "standalone",
+            "192.168.1.100".parse().unwrap(),
+            33,
+            0,
+            false,
+            "testcluster",
+        )
+        .into_shared();
+
+        // Don't set cluster info - should still work
+        let plugin = MembersPlugin::new(config, status);
+        let result = plugin.generate_content();
+        let parsed: serde_json::Value = serde_json::from_str(&result).unwrap();
+
+        assert_eq!(parsed["nodename"], "standalone");
+        assert!(parsed.get("cluster").is_none());
+        assert!(parsed.get("nodelist").is_none());
+    }
+
+    #[tokio::test]
+    async fn test_members_use_runtime_iphash_and_config_version() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config.clone());
+
+        status
+            .update_cluster_info(
+                "testcluster".to_string(),
+                9,
+                vec![
+                    (1, "testnode".to_string(), "192.168.1.10".to_string()),
+                    (2, "node2".to_string(), "192.168.1.11".to_string()),
+                ],
+            )
+            .unwrap();
+        status.set_quorate(true);
+        status
+            .set_node_status("nodeip".to_string(), b"10.0.0.1\0".to_vec())
+            .await
+            .unwrap();
+
+        let plugin = MembersPlugin::new(config, status);
+        let parsed: serde_json::Value = serde_json::from_str(&plugin.generate_content()).unwrap();
+
+        assert_eq!(parsed["cluster"]["version"], 9);
+        assert_eq!(parsed["cluster"]["nodes"], 2);
+        assert_eq!(parsed["nodelist"]["testnode"]["ip"], "10.0.0.1");
+        assert!(parsed["nodelist"]["node2"].get("ip").is_none());
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/mod.rs b/src/pmxcfs-rs/pmxcfs/src/plugins/mod.rs
new file mode 100644
index 000000000..9af9f8024
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/mod.rs
@@ -0,0 +1,30 @@
+/// Plugin system for special files and dynamic content
+///
+/// This module implements plugins for:
+/// - func: Dynamic files generated by callbacks (.version, .members, etc.)
+/// - link: Symbolic links
+///
+/// Each plugin is implemented in its own source file:
+/// - version.rs: .version plugin - cluster version information
+/// - members.rs: .members plugin - cluster member list
+/// - vmlist.rs: .vmlist plugin - VM/CT list
+/// - rrd.rs: .rrd plugin - system metrics
+/// - clusterlog.rs: .clusterlog plugin - cluster log entries
+/// - debug.rs: .debug plugin - debug level control
+mod clusterlog;
+mod debug;
+mod members;
+mod registry;
+mod rrd;
+mod types;
+mod version;
+mod vmlist;
+
+// Re-export core types (only Plugin trait is used outside this module)
+pub use types::Plugin;
+
+// Re-export registry
+pub use registry::{PluginRegistry, init_plugins};
+
+#[cfg(test)]
+pub use registry::init_plugins_for_test;
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/registry.rs b/src/pmxcfs-rs/pmxcfs/src/plugins/registry.rs
new file mode 100644
index 000000000..ba8c44581
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/registry.rs
@@ -0,0 +1,328 @@
+/// Plugin registry and initialization
+use parking_lot::RwLock;
+use std::collections::BTreeMap;
+use std::sync::Arc;
+
+use super::clusterlog::ClusterlogPlugin;
+use super::debug::DebugPlugin;
+use super::members::MembersPlugin;
+use super::rrd::RrdPlugin;
+use super::types::{LinkPlugin, Plugin};
+use super::version::VersionPlugin;
+use super::vmlist::VmlistPlugin;
+
+/// Plugin registry
+pub struct PluginRegistry {
+    plugins: RwLock<BTreeMap<String, Arc<dyn Plugin>>>,
+}
+
+impl Default for PluginRegistry {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl PluginRegistry {
+    pub fn new() -> Self {
+        Self {
+            plugins: RwLock::new(BTreeMap::new()),
+        }
+    }
+
+    /// Register a plugin
+    pub fn register(&self, plugin: Arc<dyn Plugin>) {
+        let name = plugin.name().to_string();
+        self.plugins.write().insert(name, plugin);
+    }
+
+    /// Get a plugin by name
+    pub fn get(&self, name: &str) -> Option<Arc<dyn Plugin>> {
+        self.plugins.read().get(name).cloned()
+    }
+
+    /// Check if a path is a plugin
+    pub fn is_plugin(&self, name: &str) -> bool {
+        self.plugins.read().contains_key(name)
+    }
+
+    /// List all plugin names
+    pub fn list(&self) -> Vec<String> {
+        self.plugins.read().keys().cloned().collect()
+    }
+
+    /// List all plugins with their names in a single lock acquisition
+    pub fn list_all(&self) -> Vec<(String, Arc<dyn Plugin>)> {
+        self.plugins
+            .read()
+            .iter()
+            .map(|(name, plugin)| (name.clone(), plugin.clone()))
+            .collect()
+    }
+
+    /// Get a plugin by its sorted index (same order as `list()`)
+    pub fn get_by_index(&self, idx: usize) -> Option<Arc<dyn Plugin>> {
+        self.plugins.read().values().nth(idx).cloned()
+    }
+}
+
+/// Initialize the plugin registry with default plugins
+pub fn init_plugins(
+    config: Arc<pmxcfs_config::Config>,
+    status: Arc<pmxcfs_status::Status>,
+) -> Arc<PluginRegistry> {
+    tracing::info!("Initializing plugin system for node: {}", config.nodename());
+
+    let registry = Arc::new(PluginRegistry::new());
+
+    // .version - cluster version information
+    let version_plugin = Arc::new(VersionPlugin::new(config.clone(), status.clone()));
+    registry.register(version_plugin);
+
+    // .members - cluster member list
+    let members_plugin = Arc::new(MembersPlugin::new(config.clone(), status.clone()));
+    registry.register(members_plugin);
+
+    // .vmlist - VM list
+    let vmlist_plugin = Arc::new(VmlistPlugin::new(status.clone()));
+    registry.register(vmlist_plugin);
+
+    // .rrd - RRD data
+    let rrd_plugin = Arc::new(RrdPlugin::new(status.clone()));
+    registry.register(rrd_plugin);
+
+    // .clusterlog - cluster log
+    let clusterlog_plugin = Arc::new(ClusterlogPlugin::new(status.clone()));
+    registry.register(clusterlog_plugin);
+
+    // .debug - debug settings (read/write)
+    let debug_plugin = Arc::new(DebugPlugin::new(config.clone()));
+    registry.register(debug_plugin);
+
+    // Symbolic link plugins - point to nodes/{nodename}/ subdirectories
+    // These provide convenient access to node-specific directories from the root
+    let nodename = config.nodename();
+
+    // local -> nodes/{nodename}
+    let local_link = Arc::new(LinkPlugin::new("local", format!("nodes/{nodename}")));
+    registry.register(local_link);
+
+    // qemu-server -> nodes/{nodename}/qemu-server
+    let qemu_link = Arc::new(LinkPlugin::new(
+        "qemu-server",
+        format!("nodes/{nodename}/qemu-server"),
+    ));
+    registry.register(qemu_link);
+
+    // openvz -> nodes/{nodename}/openvz (legacy support)
+    let openvz_link = Arc::new(LinkPlugin::new(
+        "openvz",
+        format!("nodes/{nodename}/openvz"),
+    ));
+    registry.register(openvz_link);
+
+    // lxc -> nodes/{nodename}/lxc
+    let lxc_link = Arc::new(LinkPlugin::new("lxc", format!("nodes/{nodename}/lxc")));
+    registry.register(lxc_link);
+
+    let total_plugins = registry.list().len();
+    tracing::info!("Registered {} plugins", total_plugins);
+
+    registry
+}
+
+#[cfg(test)]
+/// Test-only helper to create a plugin registry with a simple nodename
+pub fn init_plugins_for_test(nodename: &str) -> Arc<PluginRegistry> {
+    use pmxcfs_config::Config;
+
+    // Create config with the specified nodename for testing
+    let config = Config::new(
+        nodename,
+        "127.0.0.1".parse().unwrap(),
+        33,
+        0,
+        false,
+        "pmxcfs",
+    )
+    .into_shared();
+    let status = pmxcfs_status::init_with_config(config.clone());
+
+    init_plugins(config, status)
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_registry_func_plugins_exist() {
+        let registry = init_plugins_for_test("testnode");
+
+        let func_plugins = vec![
+            ".version",
+            ".members",
+            ".vmlist",
+            ".rrd",
+            ".clusterlog",
+            ".debug",
+        ];
+
+        for plugin_name in func_plugins {
+            assert!(
+                registry.is_plugin(plugin_name),
+                "{plugin_name} should be registered"
+            );
+
+            let plugin = registry.get(plugin_name);
+            assert!(plugin.is_some(), "{plugin_name} should be accessible");
+            assert_eq!(plugin.unwrap().name(), plugin_name);
+        }
+    }
+
+    #[test]
+    fn test_registry_link_plugins_exist() {
+        let registry = init_plugins_for_test("testnode");
+
+        let link_plugins = vec!["local", "qemu-server", "openvz", "lxc"];
+
+        for plugin_name in link_plugins {
+            assert!(
+                registry.is_plugin(plugin_name),
+                "{plugin_name} link should be registered"
+            );
+
+            let plugin = registry.get(plugin_name);
+            assert!(plugin.is_some(), "{plugin_name} link should be accessible");
+            assert_eq!(plugin.unwrap().name(), plugin_name);
+        }
+    }
+
+    #[test]
+    fn test_registry_list_is_sorted() {
+        let registry = PluginRegistry::new();
+
+        registry.register(Arc::new(LinkPlugin::new("zeta", "/zeta")));
+        registry.register(Arc::new(LinkPlugin::new("alpha", "/alpha")));
+        registry.register(Arc::new(LinkPlugin::new(".beta", "/beta")));
+
+        assert_eq!(registry.list(), vec![".beta", "alpha", "zeta"]);
+    }
+
+    #[test]
+    fn test_registry_link_targets_use_nodename() {
+        // Test with different nodenames
+        let test_cases = vec![
+            ("node1", "nodes/node1"),
+            ("pve-test", "nodes/pve-test"),
+            ("cluster-node-03", "nodes/cluster-node-03"),
+        ];
+
+        for (nodename, expected_local_target) in test_cases {
+            let registry = init_plugins_for_test(nodename);
+
+            // Test local link
+            let local = registry.get("local").expect("local link should exist");
+            let data = local.read().expect("should read link target");
+            let target = String::from_utf8(data).expect("target should be UTF-8");
+            assert_eq!(
+                target, expected_local_target,
+                "local link should point to nodes/{nodename} for {nodename}"
+            );
+
+            // Test qemu-server link
+            let qemu = registry
+                .get("qemu-server")
+                .expect("qemu-server link should exist");
+            let data = qemu.read().expect("should read link target");
+            let target = String::from_utf8(data).expect("target should be UTF-8");
+            assert_eq!(
+                target,
+                format!("nodes/{nodename}/qemu-server"),
+                "qemu-server link should include nodename"
+            );
+
+            // Test lxc link
+            let lxc = registry.get("lxc").expect("lxc link should exist");
+            let data = lxc.read().expect("should read link target");
+            let target = String::from_utf8(data).expect("target should be UTF-8");
+            assert_eq!(
+                target,
+                format!("nodes/{nodename}/lxc"),
+                "lxc link should include nodename"
+            );
+
+            // Test openvz link (legacy)
+            let openvz = registry.get("openvz").expect("openvz link should exist");
+            let data = openvz.read().expect("should read link target");
+            let target = String::from_utf8(data).expect("target should be UTF-8");
+            assert_eq!(
+                target,
+                format!("nodes/{nodename}/openvz"),
+                "openvz link should include nodename"
+            );
+        }
+    }
+
+    #[test]
+    fn test_registry_nonexistent_plugin() {
+        let registry = init_plugins_for_test("testnode");
+
+        assert!(!registry.is_plugin(".nonexistent"));
+        assert!(registry.get(".nonexistent").is_none());
+    }
+
+    #[test]
+    fn test_registry_plugin_modes() {
+        let registry = init_plugins_for_test("testnode");
+
+        // .debug should be writable (0o640)
+        let debug = registry.get(".debug").expect(".debug should exist");
+        assert_eq!(debug.mode(), 0o640, ".debug should have writable mode");
+
+        // All other func plugins should be read-only (0o440)
+        let readonly_plugins = vec![".version", ".members", ".vmlist", ".rrd", ".clusterlog"];
+        for plugin_name in readonly_plugins {
+            let plugin = registry.get(plugin_name).unwrap();
+            assert_eq!(plugin.mode(), 0o440, "{plugin_name} should be read-only");
+        }
+
+        // Link plugins should have 0o777
+        let links = vec!["local", "qemu-server", "openvz", "lxc"];
+        for link_name in links {
+            let link = registry.get(link_name).unwrap();
+            assert_eq!(link.mode(), 0o777, "{link_name} should have 777 mode");
+        }
+    }
+
+    #[test]
+    fn test_link_plugins_are_symlinks() {
+        let registry = init_plugins_for_test("testnode");
+
+        // Link plugins should be identified as symlinks
+        let link_plugins = vec!["local", "qemu-server", "openvz", "lxc"];
+        for link_name in link_plugins {
+            let link = registry.get(link_name).unwrap();
+            assert!(
+                link.is_symlink(),
+                "{link_name} should be identified as a symlink"
+            );
+        }
+
+        // Func plugins should NOT be identified as symlinks
+        let func_plugins = vec![
+            ".version",
+            ".members",
+            ".vmlist",
+            ".rrd",
+            ".clusterlog",
+            ".debug",
+        ];
+        for plugin_name in func_plugins {
+            let plugin = registry.get(plugin_name).unwrap();
+            assert!(
+                !plugin.is_symlink(),
+                "{plugin_name} should NOT be identified as a symlink"
+            );
+        }
+    }
+}
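Stripped of the pmxcfs-specific plugins, the registry above is a name-keyed map of trait objects behind a lock, and the sorted `list()` order falls out of `BTreeMap`'s key ordering. A minimal self-contained sketch of that shape (using std's `RwLock` instead of the parking_lot one the patch uses; the `Link` type here is purely illustrative):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

// Minimal stand-in for the Plugin trait; only the name is needed here.
trait Plugin: Send + Sync {
    fn name(&self) -> &str;
}

struct Link(&'static str);
impl Plugin for Link {
    fn name(&self) -> &str { self.0 }
}

struct Registry {
    plugins: RwLock<BTreeMap<String, Arc<dyn Plugin>>>,
}

impl Registry {
    fn new() -> Self {
        Self { plugins: RwLock::new(BTreeMap::new()) }
    }

    fn register(&self, p: Arc<dyn Plugin>) {
        self.plugins.write().unwrap().insert(p.name().to_string(), p);
    }

    // BTreeMap iterates in key order, so the listing is implicitly sorted
    // (dot-prefixed names sort first in byte order).
    fn list(&self) -> Vec<String> {
        self.plugins.read().unwrap().keys().cloned().collect()
    }
}
```

This is why `test_registry_list_is_sorted` needs no explicit sort call: ordering is a property of the map, not of the listing code.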
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/rrd.rs b/src/pmxcfs-rs/pmxcfs/src/plugins/rrd.rs
new file mode 100644
index 000000000..406b76727
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/rrd.rs
@@ -0,0 +1,97 @@
+/// .rrd Plugin - RRD (Round-Robin Database) Metrics
+///
+/// This plugin provides system metrics in a text format matching the C implementation:
+/// ```text
+/// pve2-node/nodename:timestamp:uptime:loadavg:maxcpu:cpu:iowait:memtotal:memused:...
+/// pve2.3-vm/100:timestamp:status:uptime:...
+/// ```
+///
+/// The format is compatible with the C implementation which uses rrd_update
+/// to write data to RRD files on disk.
+///
+/// Data aging: Entries older than 5 minutes are automatically removed.
+use pmxcfs_status::Status;
+use std::sync::Arc;
+
+use super::Plugin;
+
+/// RRD plugin - provides system metrics
+pub struct RrdPlugin {
+    status: Arc<Status>,
+}
+
+impl RrdPlugin {
+    pub fn new(status: Arc<Status>) -> Self {
+        Self { status }
+    }
+
+    /// Generate RRD content (C-compatible text format)
+    fn generate_content(&self) -> String {
+        // Get RRD dump in text format from status module
+        // Format: "key:data\n" for each entry
+        // The status module handles data aging (removes entries >5 minutes old)
+        self.status.get_rrd_dump()
+    }
+}
+
+impl Plugin for RrdPlugin {
+    fn name(&self) -> &str {
+        ".rrd"
+    }
+
+    fn read(&self) -> anyhow::Result<Vec<u8>> {
+        Ok(self.generate_content().into_bytes())
+    }
+
+    fn mode(&self) -> u32 {
+        0o440
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[tokio::test]
+    async fn test_rrd_empty() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config);
+
+        let plugin = RrdPlugin::new(status);
+        let result = plugin.generate_content();
+        // Empty RRD data should return just a NUL terminator (C compatibility)
+        assert_eq!(result, "\0");
+    }
+
+    #[tokio::test]
+    async fn test_rrd_with_data() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config);
+
+        // Add some RRD data with proper schema
+        // Note: RRD file creation will fail (no rrdcached in tests), but in-memory storage works
+        // Node RRD (pve2 format): timestamp + 12 values
+        // (loadavg, maxcpu, cpu, iowait, memtotal, memused, swaptotal, swapused, roottotal, rootused, netin, netout)
+        let _ = status.set_rrd_data(
+            "pve2-node/testnode".to_string(),
+            "1234567890:0.5:4:1.2:0.25:8000000000:4000000000:2000000000:100000000:10000000000:5000000000:1000000:500000".to_string(),
+        ).await; // May fail if rrdcached not running, but in-memory storage succeeds
+
+        // VM RRD (pve2.3 format): timestamp + 10 values
+        // (maxcpu, cpu, maxmem, mem, maxdisk, disk, netin, netout, diskread, diskwrite)
+        let _ = status
+            .set_rrd_data(
+                "pve2.3-vm/100".to_string(),
+                "1234567890:4:2.5:4096:2048:100000:50000:1000000:500000:10000:5000".to_string(),
+            )
+            .await; // May fail if rrdcached not running, but in-memory storage succeeds
+
+        let plugin = RrdPlugin::new(status);
+        let result = plugin.generate_content();
+
+        // Should contain both entries (from in-memory storage)
+        assert!(result.contains("pve2-node/testnode"));
+        assert!(result.contains("pve2.3-vm/100"));
+        assert!(result.contains("1234567890"));
+    }
+}
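The dump format the plugin exposes is simple enough to sketch without the status crate: one `key:data` line per entry, plus a trailing NUL byte. That the NUL is also appended to non-empty dumps is an assumption here, mirrored from the empty-dump test above; the real aging and storage logic lives in pmxcfs-status.

```rust
use std::collections::BTreeMap;

// Sketch of the ".rrd" dump format: "key:data\n" per entry, NUL-terminated
// for C-client compatibility (an empty dump is just "\0").
fn rrd_dump(entries: &BTreeMap<String, String>) -> String {
    let mut out = String::new();
    for (key, data) in entries {
        out.push_str(key);
        out.push(':');
        out.push_str(data);
        out.push('\n');
    }
    out.push('\0');
    out
}
```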
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/types.rs b/src/pmxcfs-rs/pmxcfs/src/plugins/types.rs
new file mode 100644
index 000000000..8929bea96
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/types.rs
@@ -0,0 +1,117 @@
+/// Core plugin types and trait definitions
+use anyhow::Result;
+
+/// Plugin trait for special file handlers
+///
+/// Note: We can't use `const NAME: &'static str` as an associated constant because
+/// it would make the trait not object-safe (dyn Plugin wouldn't work). Instead,
+/// each implementation provides the name via the name() method.
+pub trait Plugin: Send + Sync {
+    /// Get plugin name
+    fn name(&self) -> &str;
+
+    /// Read content from this plugin
+    fn read(&self) -> Result<Vec<u8>>;
+
+    /// Write content to this plugin (if supported)
+    fn write(&self, _data: &[u8]) -> Result<()> {
+        Err(anyhow::anyhow!("Write not supported for this plugin"))
+    }
+
+    /// Returns true if this plugin supports write operations
+    fn supports_write(&self) -> bool {
+        false
+    }
+
+    /// Get file mode
+    fn mode(&self) -> u32;
+
+    /// Check if this is a symbolic link
+    fn is_symlink(&self) -> bool {
+        false
+    }
+}
+
+/// Link plugin - symbolic links
+pub struct LinkPlugin {
+    name: &'static str,
+    target: String,
+}
+
+impl LinkPlugin {
+    pub fn new(name: &'static str, target: impl Into<String>) -> Self {
+        Self {
+            name,
+            target: target.into(),
+        }
+    }
+}
+
+impl Plugin for LinkPlugin {
+    fn name(&self) -> &str {
+        self.name
+    }
+
+    fn read(&self) -> Result<Vec<u8>> {
+        Ok(self.target.as_bytes().to_vec())
+    }
+
+    fn mode(&self) -> u32 {
+        0o777 // Symbolic links
+    }
+
+    fn is_symlink(&self) -> bool {
+        true
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    // ===== LinkPlugin Tests =====
+
+    #[test]
+    fn test_link_plugin_creation() {
+        let plugin = LinkPlugin::new("testlink", "/target/path");
+        assert_eq!(plugin.name(), "testlink");
+        assert!(plugin.is_symlink());
+    }
+
+    #[test]
+    fn test_link_plugin_read_target() {
+        let target = "/path/to/target";
+        let plugin = LinkPlugin::new("mylink", target);
+
+        let result = plugin.read().unwrap();
+        assert_eq!(result, target.as_bytes());
+    }
+
+    #[test]
+    fn test_link_plugin_mode() {
+        let plugin = LinkPlugin::new("link", "/target");
+        assert_eq!(
+            plugin.mode(),
+            0o777,
+            "Symbolic links should have mode 0o777"
+        );
+    }
+
+    #[test]
+    fn test_link_plugin_write_not_supported() {
+        let plugin = LinkPlugin::new("readonly", "/target");
+        let result = plugin.write(b"test data");
+
+        assert!(result.is_err(), "LinkPlugin should not support write");
+        assert!(result.unwrap_err().to_string().contains("not supported"));
+    }
+
+    #[test]
+    fn test_link_plugin_with_unicode_target() {
+        let target = "/path/with/üñïçödé/target";
+        let plugin = LinkPlugin::new("unicode", target);
+
+        let result = plugin.read().unwrap();
+        assert_eq!(String::from_utf8(result).unwrap(), target);
+    }
+}
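The object-safety note in the `Plugin` trait doc is worth making concrete: a trait with an associated `const` is not dyn-compatible, so `Arc<dyn Plugin>` would stop compiling; a `name(&self)` method dispatches through the vtable instead. A minimal sketch of that design, including the "read-only by default" `write()` (types and names here are illustrative, not the patch's actual trait):

```rust
// Why `name(&self)` instead of `const NAME`: associated consts have no
// vtable entry, so a trait carrying one cannot be used as `dyn Trait`.
trait Plugin {
    fn name(&self) -> &str;

    // Default implementation: plugins are read-only unless they opt in.
    fn write(&self, _data: &[u8]) -> Result<(), String> {
        Err("Write not supported for this plugin".to_string())
    }
}

struct ReadOnly;
impl Plugin for ReadOnly {
    fn name(&self) -> &str { ".example" }
}

// Dynamic dispatch works because every trait item is a method.
fn describe(p: &dyn Plugin) -> String {
    p.name().to_string()
}
```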
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/version.rs b/src/pmxcfs-rs/pmxcfs/src/plugins/version.rs
new file mode 100644
index 000000000..f5435c868
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/version.rs
@@ -0,0 +1,140 @@
+/// .version Plugin - Cluster Version Information
+///
+/// This plugin provides comprehensive version information in JSON format:
+/// ```text
+/// {
+///   "starttime": 1234567890,
+///   "clinfo": 5,
+///   "vmlist": 12,
+///   "qemu-server": 3,
+///   "lxc": 2,
+///   "nodes": 1
+/// }
+/// ```
+///
+/// All version counters are now maintained in the Status module (status/mod.rs)
+/// to match the C implementation where they are stored in cfs_status.
+use pmxcfs_config::Config;
+use pmxcfs_status::Status;
+use std::sync::Arc;
+
+use super::Plugin;
+use crate::status_messages::build_version_value;
+
+/// Version plugin - provides cluster version information
+pub struct VersionPlugin {
+    config: Arc<Config>,
+    status: Arc<Status>,
+}
+
+impl VersionPlugin {
+    pub fn new(config: Arc<Config>, status: Arc<Status>) -> Self {
+        Self { config, status }
+    }
+
+    /// Generate version information content
+    fn generate_content(&self) -> String {
+        build_version_value(&self.config, &self.status).to_string()
+    }
+}
+
+impl Plugin for VersionPlugin {
+    fn name(&self) -> &str {
+        ".version"
+    }
+
+    fn read(&self) -> anyhow::Result<Vec<u8>> {
+        Ok(self.generate_content().into_bytes())
+    }
+
+    fn mode(&self) -> u32 {
+        0o440
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[tokio::test]
+    async fn test_version_format() {
+        // Create Status instance without RRD persistence (not needed for test)
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config);
+
+        // Create Config instance
+        let config = Config::new(
+            "testnode",
+            "127.0.0.1".parse().unwrap(),
+            33,
+            0,
+            false,
+            "testcluster",
+        )
+        .into_shared();
+
+        // Initialize cluster
+        status.init_cluster("testcluster".to_string());
+
+        let plugin = VersionPlugin::new(config, status);
+        let result = plugin.generate_content();
+        let parsed: serde_json::Value = serde_json::from_str(&result).unwrap();
+
+        assert!(parsed["starttime"].is_number());
+        assert!(parsed["clinfo"].is_number());
+        assert!(parsed["vmlist"].is_number());
+        assert!(parsed["kvstore"].is_object());
+        assert!(parsed.get("corosync.conf").is_some());
+    }
+
+    #[tokio::test]
+    async fn test_increment_versions() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config);
+
+        let initial_clinfo = status.get_cluster_version();
+        status.increment_cluster_version();
+        assert_eq!(status.get_cluster_version(), initial_clinfo + 1);
+
+        let initial_vmlist = status.get_vmlist_version();
+        status.increment_vmlist_version();
+        assert_eq!(status.get_vmlist_version(), initial_vmlist + 1);
+    }
+
+    #[tokio::test]
+    async fn test_path_versions() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config);
+
+        // Use actual paths from memdb_change_array
+        status.increment_path_version("corosync.conf");
+        status.increment_path_version("corosync.conf");
+        assert!(status.get_path_version("corosync.conf") >= 2);
+
+        status.increment_path_version("user.cfg");
+        assert!(status.get_path_version("user.cfg") >= 1);
+    }
+
+    #[tokio::test]
+    async fn test_version_includes_local_and_remote_kv_versions() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config.clone());
+
+        status
+            .update_cluster_info(
+                "testcluster".to_string(),
+                2,
+                vec![(2, "node2".to_string(), "192.168.1.11".to_string())],
+            )
+            .unwrap();
+        status
+            .set_node_status("foo".to_string(), b"bar".to_vec())
+            .await
+            .unwrap();
+        status.set_node_kv(2, "remote".to_string(), b"value".to_vec());
+
+        let plugin = VersionPlugin::new(config, status);
+        let parsed: serde_json::Value = serde_json::from_str(&plugin.generate_content()).unwrap();
+
+        assert_eq!(parsed["kvstore"]["testnode"]["foo"], 1);
+        assert_eq!(parsed["kvstore"]["node2"]["remote"], 1);
+    }
+}
diff --git a/src/pmxcfs-rs/pmxcfs/src/plugins/vmlist.rs b/src/pmxcfs-rs/pmxcfs/src/plugins/vmlist.rs
new file mode 100644
index 000000000..8cf05f8fb
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/plugins/vmlist.rs
@@ -0,0 +1,120 @@
+/// .vmlist Plugin - Virtual Machine List
+///
+/// This plugin provides the VM/CT list in JSON format:
+/// ```text
+/// {
+///   "version": 1,
+///   "ids": {
+///     "100": { "node": "node1", "type": "qemu", "version": 1 },
+///     "101": { "node": "node2", "type": "lxc", "version": 1 }
+///   }
+/// }
+/// ```
+use pmxcfs_status::Status;
+use serde_json::json;
+use std::sync::Arc;
+
+use super::Plugin;
+
+/// Vmlist plugin - provides VM/CT list
+pub struct VmlistPlugin {
+    status: Arc<Status>,
+}
+
+impl VmlistPlugin {
+    pub fn new(status: Arc<Status>) -> Self {
+        Self { status }
+    }
+
+    /// Generate vmlist content
+    fn generate_content(&self) -> String {
+        let vmlist = self.status.get_vmlist();
+        let vmlist_version = self.status.get_vmlist_version();
+
+        // Convert to JSON format expected by Proxmox
+        // Format: {"version":N,"ids":{vmid:{"node":"nodename","type":"qemu|lxc","version":M}}}
+        let mut ids = serde_json::Map::new();
+
+        for (vmid, entry) in vmlist {
+            let vm_obj = json!({
+                "node": entry.node,
+                "type": entry.vm_type.to_string(),
+                "version": entry.version
+            });
+
+            ids.insert(vmid.to_string(), vm_obj);
+        }
+
+        json!({
+            "version": vmlist_version,
+            "ids": ids
+        })
+        .to_string()
+    }
+}
+
+impl Plugin for VmlistPlugin {
+    fn name(&self) -> &str {
+        ".vmlist"
+    }
+
+    fn read(&self) -> anyhow::Result<Vec<u8>> {
+        Ok(self.generate_content().into_bytes())
+    }
+
+    fn mode(&self) -> u32 {
+        0o440
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[tokio::test]
+    async fn test_vmlist_format() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config);
+
+        let plugin = VmlistPlugin::new(status);
+        let result = plugin.generate_content();
+        let parsed: serde_json::Value = serde_json::from_str(&result).unwrap();
+
+        // Should have version
+        assert!(parsed["version"].is_number());
+
+        // Should have ids object
+        assert!(parsed["ids"].is_object());
+    }
+
+    #[tokio::test]
+    async fn test_vmlist_versions() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = pmxcfs_status::init_with_config(config);
+
+        // Register a VM
+        status.register_vm(100, pmxcfs_status::VmType::Qemu, "node1".to_string());
+
+        let plugin = VmlistPlugin::new(status.clone());
+        let result = plugin.generate_content();
+        let parsed: serde_json::Value = serde_json::from_str(&result).unwrap();
+
+        // Root version should be >= 1
+        assert!(parsed["version"].as_u64().unwrap() >= 1);
+
+        // VM should have version 1
+        assert_eq!(parsed["ids"]["100"]["version"], 1);
+        assert_eq!(parsed["ids"]["100"]["type"], "qemu");
+        assert_eq!(parsed["ids"]["100"]["node"], "node1");
+
+        // Update the VM - version should increment
+        status.register_vm(100, pmxcfs_status::VmType::Qemu, "node1".to_string());
+
+        let result2 = plugin.generate_content();
+        let parsed2: serde_json::Value = serde_json::from_str(&result2).unwrap();
+
+        // Root version should have incremented
+        assert!(parsed2["version"].as_u64().unwrap() > parsed["version"].as_u64().unwrap());
+
+        // VM version should have incremented to 2
+        assert_eq!(parsed2["ids"]["100"]["version"], 2);
+    }
+}
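The wire format assembled by `generate_content()` can be sketched without serde_json or the status crate; the JSON is hand-rolled here purely to show the shape (`{"version":N,"ids":{vmid:{...}}}`), with the field names taken from the format documented above:

```rust
struct VmEntry {
    vmid: u32,
    node: &'static str,
    vm_type: &'static str,
    version: u64,
}

// Dependency-free sketch of the ".vmlist" output; the real plugin builds
// this with serde_json, which also handles escaping.
fn render_vmlist(root_version: u64, entries: &[VmEntry]) -> String {
    let ids: Vec<String> = entries
        .iter()
        .map(|e| {
            format!(
                "\"{}\":{{\"node\":\"{}\",\"type\":\"{}\",\"version\":{}}}",
                e.vmid, e.node, e.vm_type, e.version
            )
        })
        .collect();
    format!("{{\"version\":{},\"ids\":{{{}}}}}", root_version, ids.join(","))
}
```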
diff --git a/src/pmxcfs-rs/pmxcfs/src/quorum_service.rs b/src/pmxcfs-rs/pmxcfs/src/quorum_service.rs
new file mode 100644
index 000000000..f82f68568
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/quorum_service.rs
@@ -0,0 +1,203 @@
+//! Quorum service for cluster membership tracking
+//!
+//! This service tracks quorum status via Corosync quorum API and updates Status.
+//! It implements the Service trait for automatic retry and lifecycle management.
+
+use async_trait::async_trait;
+use parking_lot::RwLock;
+use pmxcfs_services::{Service, ServiceError};
+use rust_corosync::{self as corosync, CsError, NodeId, quorum};
+use std::sync::Arc;
+use std::sync::atomic::{AtomicBool, Ordering};
+
+use pmxcfs_status::Status;
+
+/// Quorum service (matching C's service_quorum)
+///
+/// Tracks cluster quorum status and member list changes. Automatically
+/// retries connection if Corosync is unavailable or restarts.
+pub struct QuorumService {
+    quorum_handle: RwLock<Option<quorum::Handle>>,
+    status: Arc<Status>,
+    /// Tracks whether start() / initialize() has been called
+    initialized: AtomicBool,
+}
+
+impl QuorumService {
+    /// Create a new quorum service
+    pub fn new(status: Arc<Status>) -> Self {
+        Self {
+            quorum_handle: RwLock::new(None),
+            status,
+            initialized: AtomicBool::new(false),
+        }
+    }
+}
+
+#[async_trait]
+impl Service for QuorumService {
+    fn name(&self) -> &str {
+        "quorum"
+    }
+
+    async fn initialize(&mut self) -> pmxcfs_services::Result<std::os::unix::io::RawFd> {
+        tracing::info!("Initializing quorum tracking");
+
+        // Quorum notification callback
+        fn quorum_notification(
+            handle: &quorum::Handle,
+            quorate: bool,
+            ring_id: quorum::RingId,
+            member_list: Vec<NodeId>,
+        ) {
+            tracing::info!(
+                "Quorum notification: quorate={}, ring_id=({},{}), members={:?}",
+                quorate,
+                u32::from(ring_id.nodeid),
+                ring_id.seq,
+                member_list
+            );
+
+            if quorate {
+                tracing::info!("Cluster is now quorate with {} members", member_list.len());
+            } else {
+                tracing::warn!("Cluster lost quorum");
+            }
+
+            // Retrieve QuorumService from handle context
+            let context = match quorum::context_get(*handle) {
+                Ok(ctx) => ctx,
+                Err(e) => {
+                    tracing::error!(
+                        "Failed to get quorum context: {} - quorum status not updated",
+                        e
+                    );
+                    return;
+                }
+            };
+
+            if context == 0 {
+                tracing::error!("BUG: Quorum context is null - quorum status not updated");
+                return;
+            }
+
+            // Safety: We stored a valid Arc<QuorumService> pointer in initialize()
+            unsafe {
+                let service_ptr = context as *const QuorumService;
+                let service = &*service_ptr;
+                service.status.set_quorate(quorate);
+            }
+        }
+
+        // Nodelist change notification callback
+        fn nodelist_notification(
+            _handle: &quorum::Handle,
+            ring_id: quorum::RingId,
+            member_list: Vec<NodeId>,
+            joined_list: Vec<NodeId>,
+            left_list: Vec<NodeId>,
+        ) {
+            tracing::info!(
+                "Nodelist change: ring_id=({},{}), members={:?}, joined={:?}, left={:?}",
+                u32::from(ring_id.nodeid),
+                ring_id.seq,
+                member_list,
+                joined_list,
+                left_list
+            );
+        }
+
+        let model_data = quorum::ModelData::ModelV1(quorum::Model1Data {
+            flags: quorum::Model1Flags::None,
+            quorum_notification_fn: Some(quorum_notification),
+            nodelist_notification_fn: Some(nodelist_notification),
+        });
+
+        // Initialize quorum connection
+        let (handle, _quorum_type) = quorum::initialize(&model_data, 0).map_err(|e| {
+            ServiceError::InitializationFailed(format!("quorum_initialize failed: {e:?}"))
+        })?;
+
+        // Store self pointer as context for callbacks
+        // The pointer is stable: `self` already lives on the heap inside the
+        // Box<dyn Service>, so it cannot move while the callback is registered.
+        let self_ptr = self as *const Self as u64;
+        quorum::context_set(handle, self_ptr).map_err(|e| {
+            quorum::finalize(handle).ok();
+            ServiceError::InitializationFailed(format!("Failed to set quorum context: {e:?}"))
+        })?;
+
+        self.initialized.store(true, Ordering::SeqCst);
+        tracing::debug!("Stored QuorumService context: 0x{:x}", self_ptr);
+
+        // Start tracking
+        quorum::trackstart(handle, corosync::TrackFlags::Changes).map_err(|e| {
+            quorum::finalize(handle).ok();
+            ServiceError::InitializationFailed(format!("quorum_trackstart failed: {e:?}"))
+        })?;
+
+        // Get file descriptor for event monitoring
+        let fd = quorum::fd_get(handle).map_err(|e| {
+            quorum::finalize(handle).ok();
+            ServiceError::InitializationFailed(format!("quorum_fd_get failed: {e:?}"))
+        })?;
+
+        // Dispatch once to get initial state
+        if let Err(e) = quorum::dispatch(handle, corosync::DispatchFlags::One) {
+            tracing::warn!("Initial quorum dispatch failed: {:?}", e);
+        }
+
+        *self.quorum_handle.write() = Some(handle);
+
+        tracing::info!("Quorum tracking initialized successfully with fd {}", fd);
+        Ok(fd)
+    }
+
+    async fn dispatch(&mut self) -> pmxcfs_services::Result<bool> {
+        let handle = self.quorum_handle.read().ok_or_else(|| {
+            ServiceError::DispatchFailed("Quorum handle not initialized".to_string())
+        })?;
+
+        // Dispatch all pending events
+        match quorum::dispatch(handle, corosync::DispatchFlags::All) {
+            Ok(_) => Ok(true),
+            Err(CsError::CsErrTryAgain) => {
+                // TRY_AGAIN is expected, continue normally
+                Ok(true)
+            }
+            Err(CsError::CsErrLibrary) | Err(CsError::CsErrBadHandle) => {
+                // Connection lost, need to reinitialize
+                tracing::warn!(
+                    "Quorum connection lost (library error), requesting reinitialization"
+                );
+                Ok(false)
+            }
+            Err(e) => {
+                tracing::error!("Quorum dispatch failed: {:?}", e);
+                Err(ServiceError::DispatchFailed(format!(
+                    "quorum_dispatch failed: {e:?}"
+                )))
+            }
+        }
+    }
+
+    async fn finalize(&mut self) -> pmxcfs_services::Result<()> {
+        tracing::info!("Finalizing quorum service");
+
+        // Clear quorate status
+        self.status.set_quorate(false);
+
+        // Finalize quorum handle
+        if let Some(handle) = self.quorum_handle.write().take()
+            && let Err(e) = quorum::finalize(handle)
+        {
+            tracing::warn!("Error finalizing quorum: {:?}", e);
+        }
+
+        // Clear initialized flag
+        self.initialized.store(false, Ordering::SeqCst);
+
+        tracing::info!("Quorum service finalized");
+        Ok(())
+    }
+}
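The context round-trip used in `initialize()` and `quorum_notification()` is a standard pattern for C-style callback APIs that only carry an integer context slot: stash a raw pointer as `u64`, cast it back inside the callback. A standalone sketch of just that mechanism (the `Service` struct here is illustrative, not the patch's `QuorumService`):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct Service {
    quorate: AtomicBool,
}

// The callback API only carries a u64 "context", so the service stores a
// raw pointer to itself there. Safe only while the service outlives the
// registration, which the service lifecycle must guarantee.
fn store_context(svc: &Service) -> u64 {
    svc as *const Service as u64
}

fn callback(context: u64, quorate: bool) {
    if context == 0 {
        return; // defensive: never dereference a null context
    }
    // Safety: context was produced by store_context() from a live &Service.
    let svc = unsafe { &*(context as *const Service) };
    svc.quorate.store(quorate, Ordering::SeqCst);
}
```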
diff --git a/src/pmxcfs-rs/pmxcfs/src/restart_flag.rs b/src/pmxcfs-rs/pmxcfs/src/restart_flag.rs
new file mode 100644
index 000000000..3c897b3af
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/restart_flag.rs
@@ -0,0 +1,60 @@
+//! Restart flag management
+//!
+//! This module provides RAII-based restart flag management. The flag is
+//! created on shutdown to signal that pmxcfs is restarting (not stopping).
+
+use std::ffi::CString;
+use std::fs::File;
+use std::io::Write;
+use std::path::{Path, PathBuf};
+use tracing::{info, warn};
+
+/// RAII wrapper for restart flag
+///
+/// Creates a flag file on construction to signal pmxcfs restart.
+/// The file is NOT automatically removed (it's consumed by the next startup).
+pub struct RestartFlag;
+
+impl RestartFlag {
+    /// Create a restart flag file
+    ///
+    /// This signals that pmxcfs is restarting (not permanently shutting down).
+    ///
+    /// # Arguments
+    ///
+    /// * `path` - Path where the restart flag should be created
+    /// * `gid` - Group ID to set for the file
+    pub fn create(path: PathBuf, gid: u32) -> Self {
+        // Create the restart flag file
+        match File::create(&path) {
+            Ok(mut file) => {
+                if let Err(e) = file.flush() {
+                    warn!(error = %e, path = %path.display(), "Failed to flush restart flag");
+                }
+
+                // Set ownership (root:gid)
+                Self::set_ownership(&path, gid);
+                info!(path = %path.display(), "Created restart flag");
+            }
+            Err(e) => {
+                warn!(error = %e, path = %path.display(), "Failed to create restart flag");
+            }
+        }
+
+        Self
+    }
+
+    /// Set file ownership to root:gid
+    fn set_ownership(path: &Path, gid: u32) {
+        let path_str = path.to_string_lossy();
+        if let Ok(path_cstr) = CString::new(path_str.as_ref()) {
+            // Safety: chown is called with a valid C string and valid UID/GID
+            unsafe {
+                if libc::chown(path_cstr.as_ptr(), 0, gid as libc::gid_t) != 0 {
+                    let error = std::io::Error::last_os_error();
+                    warn!(error = %error, "Failed to change ownership of restart flag");
+                }
+            }
+        }
+    }
+}
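The `RestartFlag` above is only half of the protocol: the flag file is deliberately left on disk so that the *next* startup can detect a restart and consume it. That consume side is not part of this hunk; the following standalone sketch (hypothetical helper name, std only) illustrates the intended check-and-remove semantics.

```rust
use std::fs;
use std::path::Path;

/// Check for a restart flag left behind by a previous shutdown and consume it.
/// Returns true if this startup follows a restart rather than a cold boot.
fn consume_restart_flag(path: &Path) -> bool {
    if path.exists() {
        // Best-effort removal: a stale flag must not survive into the next cycle.
        let _ = fs::remove_file(path);
        true
    } else {
        false
    }
}

fn main() {
    let path = std::env::temp_dir().join("pmxcfs-restart-flag-demo");
    // Simulate the shutdown side: create the flag file.
    fs::write(&path, b"").expect("create flag");
    assert!(consume_restart_flag(&path)); // first startup sees the flag
    assert!(!consume_restart_flag(&path)); // flag was consumed, second check is clean
    println!("restart flag consumed exactly once");
}
```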
diff --git a/src/pmxcfs-rs/pmxcfs/src/status_callbacks.rs b/src/pmxcfs-rs/pmxcfs/src/status_callbacks.rs
new file mode 100644
index 000000000..f6deb84ae
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/status_callbacks.rs
@@ -0,0 +1,420 @@
+//! DFSM Callbacks for Status Synchronization (kvstore)
+//!
+//! This module implements the DfsmCallbacks trait for the status kvstore DFSM instance.
+//! It handles synchronization of ephemeral status data across the cluster:
+//! - Key-value status updates from nodes (RRD data, IP addresses, etc.)
+//! - Cluster log entries
+//!
+//! Equivalent to C implementation's kvstore DFSM callbacks in status.c
+//!
+//! Note: The kvstore DFSM doesn't use FuseMessage like the main database DFSM.
+//! It uses raw CPG messages for lightweight status synchronization.
+//! Most DfsmCallbacks methods are stubbed since status data is ephemeral and
+//! doesn't require the full database synchronization machinery.
+
+use pmxcfs_dfsm::{Callbacks, KvStoreMessage, NodeSyncInfo};
+use pmxcfs_status::Status;
+use std::sync::Arc;
+use tracing::{debug, warn};
+
+/// Callbacks for status synchronization DFSM (kvstore)
+///
+/// This implements the DfsmCallbacks trait but only uses basic CPG event handling.
+/// Most methods are stubbed since kvstore doesn't use database synchronization.
+pub struct StatusCallbacks {
+    status: Arc<Status>,
+}
+
+impl StatusCallbacks {
+    /// Create new status callbacks
+    pub fn new(status: Arc<Status>) -> Self {
+        Self { status }
+    }
+}
+
+impl Callbacks for StatusCallbacks {
+    type Message = KvStoreMessage;
+
+    /// Deliver a message - handles KvStore messages for status synchronization
+    ///
+    /// The kvstore DFSM handles KvStore messages (UPDATE, LOG, etc.) for
+    /// ephemeral status data synchronization across the cluster.
+    fn deliver_message(
+        &self,
+        nodeid: u32,
+        pid: u32,
+        kvstore_message: KvStoreMessage,
+        timestamp: u64,
+    ) -> std::io::Result<(i32, bool)> {
+        debug!(nodeid, pid, timestamp, "Delivering KvStore message");
+
+        // Handle different KvStore message types
+        match kvstore_message {
+            KvStoreMessage::Update { key, value } => {
+                debug!(key, value_len = value.len(), "KvStore UPDATE");
+
+                // Store the key-value data for this node (matches C's cfs_kvstore_node_set)
+                self.status.set_node_kv(nodeid, key, value);
+                Ok((0, true))
+            }
+            KvStoreMessage::Log {
+                time,
+                priority,
+                node,
+                ident,
+                tag,
+                message,
+            } => {
+                debug!(
+                    time, priority, %node, %ident, %tag, %message,
+                    "KvStore LOG"
+                );
+
+                // Add log entry to cluster log
+                if let Err(e) = self
+                    .status
+                    .add_remote_cluster_log(time, priority, node, ident, tag, message)
+                {
+                    warn!(error = %e, "Failed to add cluster log entry");
+                }
+
+                Ok((0, true))
+            }
+            KvStoreMessage::UpdateComplete => {
+                debug!("KvStore UpdateComplete");
+                Ok((0, true))
+            }
+        }
+    }
+
+    /// Compute checksum (not used by kvstore - ephemeral data doesn't need checksums)
+    fn compute_checksum(&self, output: &mut [u8; 32]) -> anyhow::Result<()> {
+        // Status data is ephemeral and doesn't use checksums
+        output.fill(0);
+        Ok(())
+    }
+
+    /// Get state for synchronization (returns cluster log state)
+    ///
+    /// Returns the cluster log in C-compatible binary format (clog_base_t).
+    /// This enables mixed C/Rust cluster operation - C nodes can deserialize
+    /// the state we send, and we can deserialize states from C nodes.
+    fn get_state(&self) -> anyhow::Result<Vec<u8>> {
+        debug!("Status kvstore: get_state called - serializing cluster log");
+        self.status.get_cluster_log_state()
+    }
+
+    /// Process state update (handles cluster log state sync)
+    ///
+    /// Deserializes cluster log states from remote nodes and merges them with
+    /// the local log. This enables cluster-wide log synchronization in mixed
+    /// C/Rust clusters.
+    fn process_state_update(&self, states: &[NodeSyncInfo]) -> anyhow::Result<bool> {
+        debug!(
+            "Status kvstore: process_state_update called with {} states",
+            states.len()
+        );
+
+        if states.is_empty() {
+            return Ok(true);
+        }
+
+        self.status.merge_cluster_log_states(states)?;
+        Ok(true)
+    }
+
+    /// Process incremental update (not used by kvstore)
+    ///
+    /// Kvstore uses direct CPG messages (UPDATE, LOG) instead of incremental sync
+    fn process_update(&self, _nodeid: u32, _pid: u32, _data: &[u8]) -> anyhow::Result<()> {
+        warn!("Status kvstore: received unexpected process_update call");
+        Ok(())
+    }
+
+    /// Commit state (no-op for kvstore - ephemeral data, no database commit)
+    fn commit_state(&self) -> anyhow::Result<()> {
+        // No commit needed for ephemeral status data
+        Ok(())
+    }
+
+    /// Called when cluster becomes synced
+    fn on_synced(&self) {
+        debug!("Status kvstore: cluster synced");
+    }
+
+    fn on_membership_change(&self, member_list: &[pmxcfs_api_types::MemberInfo]) {
+        self.status.update_members(member_list);
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use pmxcfs_status::ClusterLogEntry;
+    use std::time::{SystemTime, UNIX_EPOCH};
+
+    #[test]
+    fn test_kvstore_update_message_handling() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status.clone());
+
+        // Initialize cluster and register node 2
+        status.init_cluster("test-cluster".to_string());
+        status.register_node(2, "node2".to_string(), "192.168.1.11".to_string());
+
+        // Simulate receiving a kvstore UPDATE message from node 2
+        let key = "test-key".to_string();
+        let value = b"test-value".to_vec();
+        let message = KvStoreMessage::Update {
+            key: key.clone(),
+            value: value.clone(),
+        };
+
+        let result = callbacks.deliver_message(2, 1000, message, 12345);
+        assert!(result.is_ok(), "deliver_message should succeed");
+
+        let (res, continue_processing) = result.unwrap();
+        assert_eq!(res, 0, "Result code should be 0 for success");
+        assert!(continue_processing, "Should continue processing");
+
+        // Verify the data was stored in kvstore
+        let stored_value = status.get_node_kv(2, &key);
+        assert_eq!(
+            stored_value,
+            Some(value),
+            "Should store the key-value pair for node 2"
+        );
+    }
+
+    #[test]
+    fn test_kvstore_update_multiple_nodes() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status.clone());
+
+        // Initialize cluster and register nodes
+        status.init_cluster("test-cluster".to_string());
+        status.register_node(1, "node1".to_string(), "192.168.1.10".to_string());
+        status.register_node(2, "node2".to_string(), "192.168.1.11".to_string());
+
+        // Store data from multiple nodes
+        let msg1 = KvStoreMessage::Update {
+            key: "ip".to_string(),
+            value: b"192.168.1.10".to_vec(),
+        };
+        let msg2 = KvStoreMessage::Update {
+            key: "ip".to_string(),
+            value: b"192.168.1.11".to_vec(),
+        };
+
+        callbacks.deliver_message(1, 1000, msg1, 12345).unwrap();
+        callbacks.deliver_message(2, 1001, msg2, 12346).unwrap();
+
+        // Verify each node's data is stored separately
+        assert_eq!(
+            status.get_node_kv(1, "ip"),
+            Some(b"192.168.1.10".to_vec()),
+            "Node 1 IP should be stored"
+        );
+        assert_eq!(
+            status.get_node_kv(2, "ip"),
+            Some(b"192.168.1.11".to_vec()),
+            "Node 2 IP should be stored"
+        );
+    }
+
+    #[test]
+    fn test_kvstore_rrd_update_message_populates_rrd_dump() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status.clone());
+
+        status.init_cluster("test-cluster".to_string());
+        status.register_node(2, "node2".to_string(), "192.168.1.11".to_string());
+
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .unwrap()
+            .as_secs();
+        let message = KvStoreMessage::Update {
+            key: "rrd/pve2-node/node2".to_string(),
+            value: format!("{now}:12345:1.5:8:0.5:0.1:16000:8000:4000:0:100:50:1000:2000\0")
+                .into_bytes(),
+        };
+
+        let result = callbacks.deliver_message(2, 1000, message, 12345);
+        assert!(result.is_ok(), "RRD update delivery should succeed");
+
+        let dump = status.get_rrd_dump();
+        assert!(
+            dump.contains("pve2-node/node2:"),
+            "RRD updates delivered through callbacks should appear in the dump"
+        );
+        assert_eq!(
+            status.get_node_kv(2, "rrd/pve2-node/node2"),
+            None,
+            "RRD messages should bypass the generic kvstore"
+        );
+    }
+
+    #[test]
+    fn test_kvstore_log_message_handling() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status.clone());
+
+        // Clear any existing log entries
+        status.clear_cluster_log();
+
+        // Simulate receiving a LOG message
+        let message = KvStoreMessage::Log {
+            time: 1234567890,
+            priority: 6, // LOG_INFO
+            node: "node1".to_string(),
+            ident: "pmxcfs".to_string(),
+            tag: "cluster".to_string(),
+            message: "Test log entry".to_string(),
+        };
+
+        let result = callbacks.deliver_message(1, 1000, message, 12345);
+        assert!(result.is_ok(), "LOG message delivery should succeed");
+
+        // Verify the log entry was added
+        let log_entries = status.get_log_entries(10);
+        assert_eq!(log_entries.len(), 1, "Should have 1 log entry");
+        assert_eq!(log_entries[0].node, "node1");
+        assert_eq!(log_entries[0].message, "Test log entry");
+        assert_eq!(log_entries[0].priority, 6);
+    }
+
+    #[test]
+    fn test_kvstore_update_complete_message() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status.clone());
+
+        let message = KvStoreMessage::UpdateComplete;
+
+        let result = callbacks.deliver_message(1, 1000, message, 12345);
+        assert!(result.is_ok(), "UpdateComplete should succeed");
+
+        let (res, continue_processing) = result.unwrap();
+        assert_eq!(res, 0);
+        assert!(continue_processing);
+    }
+
+    #[test]
+    fn test_compute_checksum_returns_zeros() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status);
+
+        let mut checksum = [0u8; 32];
+        let result = callbacks.compute_checksum(&mut checksum);
+
+        assert!(result.is_ok(), "compute_checksum should succeed");
+        assert_eq!(
+            checksum, [0u8; 32],
+            "Checksum should be all zeros for ephemeral data"
+        );
+    }
+
+    #[test]
+    fn test_get_state_returns_cluster_log() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status.clone());
+
+        // Add a log entry first
+        status.clear_cluster_log();
+        let entry = ClusterLogEntry {
+            uid: 0,
+            timestamp: 1234567890,
+            priority: 6,
+            tag: "test".to_string(),
+            pid: 0,
+            node: "node1".to_string(),
+            ident: "pmxcfs".to_string(),
+            message: "Test message".to_string(),
+        };
+        status.add_log_entry(entry);
+
+        // Get state should return serialized cluster log
+        let result = callbacks.get_state();
+        assert!(result.is_ok(), "get_state should succeed");
+
+        let state = result.unwrap();
+        assert!(
+            !state.is_empty(),
+            "State should not be empty when cluster log has entries"
+        );
+    }
+
+    #[test]
+    fn test_process_state_update_with_empty_states() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status);
+
+        let states: Vec<NodeSyncInfo> = vec![];
+        let result = callbacks.process_state_update(&states);
+
+        assert!(result.is_ok(), "Empty state update should succeed");
+        assert!(result.unwrap(), "Should return true for empty states");
+    }
+
+    #[test]
+    fn test_process_update_logs_warning() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status);
+
+        // process_update is not used by kvstore, but should not fail
+        let result = callbacks.process_update(1, 1000, &[1, 2, 3]);
+        assert!(
+            result.is_ok(),
+            "process_update should succeed even though not used"
+        );
+    }
+
+    #[test]
+    fn test_commit_state_is_noop() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status);
+
+        let result = callbacks.commit_state();
+        assert!(result.is_ok(), "commit_state should succeed (no-op)");
+    }
+
+    #[test]
+    fn test_membership_change_updates_online_state() {
+        let config = pmxcfs_test_utils::create_test_config(false);
+        let status = Arc::new(Status::new(config, None));
+        let callbacks = StatusCallbacks::new(status.clone());
+
+        status
+            .update_cluster_info(
+                "test-cluster".to_string(),
+                1,
+                vec![
+                    (1, "node1".to_string(), "192.168.1.10".to_string()),
+                    (2, "node2".to_string(), "192.168.1.11".to_string()),
+                ],
+            )
+            .unwrap();
+
+        callbacks.on_membership_change(&[pmxcfs_api_types::MemberInfo {
+            node_id: 2,
+            pid: 1000,
+            joined_at: 12345,
+        }]);
+
+        let cluster_info = status
+            .get_cluster_info()
+            .expect("cluster info should exist");
+        assert!(!cluster_info.nodes_by_id[&1].online);
+        assert!(cluster_info.nodes_by_id[&2].online);
+    }
+}
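The RRD test above (`test_kvstore_rrd_update_message_populates_rrd_dump`) relies on key-prefix routing inside `Status::set_node_kv`: keys beginning with `rrd/` feed the RRD cache and bypass the generic per-node kvstore. That dispatch lives in the `pmxcfs-status` crate, not in this patch; the sketch below (hypothetical names) just illustrates the routing decision the test asserts.

```rust
/// Where an incoming kvstore UPDATE should be stored.
#[derive(Debug, PartialEq)]
enum KvRoute {
    Rrd,     // "rrd/..." keys feed the RRD cache
    Generic, // everything else lands in the per-node kvstore
}

fn route_kv_key(key: &str) -> KvRoute {
    if key.starts_with("rrd/") {
        KvRoute::Rrd
    } else {
        KvRoute::Generic
    }
}

fn main() {
    // Matches the assertions in the RRD callback test: the RRD key populates
    // the dump, while a plain key is visible through get_node_kv.
    assert_eq!(route_kv_key("rrd/pve2-node/node2"), KvRoute::Rrd);
    assert_eq!(route_kv_key("ip"), KvRoute::Generic);
    println!("routing ok");
}
```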
diff --git a/src/pmxcfs-rs/pmxcfs/src/status_messages.rs b/src/pmxcfs-rs/pmxcfs/src/status_messages.rs
new file mode 100644
index 000000000..dd0553f06
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/src/status_messages.rs
@@ -0,0 +1,211 @@
+use pmxcfs_config::Config;
+use pmxcfs_status::{ClusterInfo, Status};
+use serde_json::{Map, Value};
+
+fn cluster_member_ip(status: &Status, nodename: &str) -> Option<String> {
+    status.get_node_ip(nodename).filter(|ip| !ip.is_empty())
+}
+
+pub fn build_cluster_info_value(config: &Config, status: &Status) -> Value {
+    let mut response = Map::new();
+    response.insert(
+        "nodename".to_string(),
+        Value::String(config.nodename().to_string()),
+    );
+    response.insert(
+        "version".to_string(),
+        Value::from(status.get_cluster_version()),
+    );
+
+    let Some(cluster_info) = status.get_cluster_info() else {
+        return Value::Object(response);
+    };
+
+    if cluster_info.nodes_by_id.is_empty() {
+        return Value::Object(response);
+    }
+
+    append_cluster_fields(&mut response, status, &cluster_info);
+    Value::Object(response)
+}
+
+pub fn build_version_value(config: &Config, status: &Status) -> Value {
+    let mut response = Map::new();
+    response.insert(
+        "starttime".to_string(),
+        Value::from(status.get_start_time()),
+    );
+    response.insert(
+        "clinfo".to_string(),
+        Value::from(status.get_cluster_version()),
+    );
+    response.insert(
+        "vmlist".to_string(),
+        Value::from(status.get_vmlist_version()),
+    );
+
+    let mut path_versions: Vec<_> = status.get_all_path_versions().into_iter().collect();
+    path_versions.sort_unstable_by(|a, b| a.0.cmp(&b.0));
+    for (path, version) in path_versions {
+        response.insert(path, Value::from(version));
+    }
+
+    let mut kvstore = Map::new();
+
+    let mut local_versions: Vec<_> = status.get_local_kv_versions().into_iter().collect();
+    local_versions.sort_unstable_by(|a, b| a.0.cmp(&b.0));
+    let mut local_entry = Map::new();
+    for (key, version) in local_versions {
+        local_entry.insert(key, Value::from(version));
+    }
+    kvstore.insert(config.nodename().to_string(), Value::Object(local_entry));
+
+    let mut remote_versions: Vec<_> = status.get_remote_kv_versions().into_iter().collect();
+    remote_versions.sort_unstable_by(|a, b| a.0.cmp(&b.0));
+    for (nodename, versions) in remote_versions {
+        let mut node_entry = Map::new();
+        let mut versions: Vec<_> = versions.into_iter().collect();
+        versions.sort_unstable_by(|a, b| a.0.cmp(&b.0));
+        for (key, version) in versions {
+            node_entry.insert(key, Value::from(version));
+        }
+        kvstore.insert(nodename, Value::Object(node_entry));
+    }
+
+    response.insert("kvstore".to_string(), Value::Object(kvstore));
+    Value::Object(response)
+}
+
+fn append_cluster_fields(
+    response: &mut Map<String, Value>,
+    status: &Status,
+    cluster_info: &ClusterInfo,
+) {
+    response.insert(
+        "cluster".to_string(),
+        serde_json::json!({
+            "name": cluster_info.cluster_name,
+            "version": cluster_info.config_version,
+            "nodes": cluster_info.nodes_by_id.len(),
+            "quorate": if status.is_quorate() { 1 } else { 0 },
+        }),
+    );
+
+    let mut nodelist = Map::new();
+    let mut nodes: Vec<_> = cluster_info.nodes_by_id.values().collect();
+    nodes.sort_by_key(|node| node.node_id);
+
+    for node in nodes {
+        let mut entry = Map::new();
+        entry.insert("id".to_string(), Value::from(node.node_id));
+        entry.insert(
+            "online".to_string(),
+            Value::from(if node.online { 1 } else { 0 }),
+        );
+
+        if let Some(ip) = cluster_member_ip(status, &node.name) {
+            entry.insert("ip".to_string(), Value::String(ip));
+        }
+
+        nodelist.insert(node.name.clone(), Value::Object(entry));
+    }
+
+    response.insert("nodelist".to_string(), Value::Object(nodelist));
+}
+
+#[cfg(test)]
+mod tests {
+    use super::{build_cluster_info_value, build_version_value};
+    use pmxcfs_api_types::MemberInfo;
+    use pmxcfs_config::Config;
+    use pmxcfs_status::Status;
+    use std::sync::Arc;
+    use std::time::{SystemTime, UNIX_EPOCH};
+
+    fn test_config() -> Arc<Config> {
+        Config::new(
+            "node1",
+            "10.0.0.1".parse().unwrap(),
+            33,
+            0,
+            false,
+            "test-cluster",
+        )
+        .into_shared()
+    }
+
+    #[tokio::test]
+    async fn cluster_info_matches_c_empty_shape() {
+        let config = test_config();
+        let status = Status::new(config.clone(), None);
+
+        let value = build_cluster_info_value(&config, &status);
+
+        assert_eq!(value["nodename"], "node1");
+        assert!(value.get("cluster").is_none());
+        assert!(value.get("nodelist").is_none());
+    }
+
+    #[tokio::test]
+    async fn cluster_info_omits_missing_ip_and_uses_config_version() {
+        let config = test_config();
+        let status = Status::new(config.clone(), None);
+        status
+            .update_cluster_info(
+                "test-cluster".to_string(),
+                17,
+                vec![
+                    (1, "node1".to_string(), "10.0.0.1".to_string()),
+                    (2, "node2".to_string(), "10.0.0.2".to_string()),
+                ],
+            )
+            .unwrap();
+        status.set_quorate(true);
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .unwrap_or_default()
+            .as_secs();
+        status.update_members(&[MemberInfo {
+            node_id: 1,
+            pid: 1234,
+            joined_at: now,
+        }]);
+        status
+            .set_node_status("nodeip".to_string(), b"10.10.10.1\0".to_vec())
+            .await
+            .unwrap();
+
+        let value = build_cluster_info_value(&config, &status);
+        assert_eq!(value["cluster"]["version"], 17);
+        assert_eq!(value["cluster"]["nodes"], 2);
+        assert_eq!(value["nodelist"]["node1"]["ip"], "10.10.10.1");
+        assert!(value["nodelist"]["node2"].get("ip").is_none());
+    }
+
+    #[tokio::test]
+    async fn version_value_includes_local_and_remote_kv_versions() {
+        let config = test_config();
+        let status = Status::new(config.clone(), None);
+        status
+            .update_cluster_info(
+                "test-cluster".to_string(),
+                4,
+                vec![(2, "node2".to_string(), "10.0.0.2".to_string())],
+            )
+            .unwrap();
+        status
+            .set_node_status("foo".to_string(), b"bar".to_vec())
+            .await
+            .unwrap();
+        status.set_node_kv(2, "remote".to_string(), b"value".to_vec());
+
+        let value = build_version_value(&config, &status);
+
+        assert!(value.get("starttime").unwrap().is_u64());
+        assert_eq!(value["clinfo"], status.get_cluster_version());
+        assert_eq!(value["vmlist"], status.get_vmlist_version());
+        assert_eq!(value["kvstore"]["node1"]["foo"], 1);
+        assert_eq!(value["kvstore"]["node2"]["remote"], 1);
+        assert!(value.get("corosync.conf").is_some());
+    }
+}
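`build_version_value` sorts each key set (`sort_unstable_by` on the string key) before inserting into the response map so the emitted JSON is deterministic across runs. A `BTreeMap` gives the same ordered view by construction; a minimal std-only sketch of the effect being relied on:

```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    // Versions typically arrive in arbitrary hash order...
    let mut versions: HashMap<String, u64> = HashMap::new();
    versions.insert("vmlist".to_string(), 3);
    versions.insert("corosync.conf".to_string(), 7);
    versions.insert("clinfo".to_string(), 2);

    // ...but collecting into a BTreeMap yields a deterministic, sorted view,
    // equivalent to the explicit sort_unstable_by in build_version_value.
    let sorted: BTreeMap<String, u64> = versions.into_iter().collect();
    let keys: Vec<&str> = sorted.keys().map(String::as_str).collect();
    assert_eq!(keys, ["clinfo", "corosync.conf", "vmlist"]);
    println!("deterministic key order: {keys:?}");
}
```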
diff --git a/src/pmxcfs-rs/pmxcfs/tests/fuse_basic_test.rs b/src/pmxcfs-rs/pmxcfs/tests/fuse_basic_test.rs
new file mode 100644
index 000000000..a727efd9a
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/tests/fuse_basic_test.rs
@@ -0,0 +1,216 @@
+/// Basic FUSE subsystem test
+///
+/// This test verifies core FUSE functionality without actually mounting,
+/// to avoid test complexity and timeouts.
+use anyhow::Result;
+use pmxcfs_config::Config;
+use pmxcfs_memdb::MemDb;
+use pmxcfs_rs::plugins;
+use tempfile::TempDir;
+
+#[test]
+fn test_fuse_subsystem_components() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+
+    // 1. Create memdb with test data
+    let memdb = MemDb::open(&db_path, true)?;
+
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    memdb.create("/testdir", libc::S_IFDIR, 0, now)?;
+    memdb.create("/testdir/file1.txt", libc::S_IFREG, 0, now)?;
+    memdb.write("/testdir/file1.txt", 0, 0, now, b"Hello pmxcfs!", false)?;
+
+    // 2. Create config
+    println!("\n2. Creating FUSE configuration...");
+    let config = Config::new(
+        "testnode",
+        "127.0.0.1".parse().unwrap(),
+        1000,
+        0,
+        true,
+        "test-cluster",
+    )
+    .into_shared();
+
+    // 3. Initialize status and plugins
+    println!("\n3. Initializing status and plugin registry...");
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+    let plugins = plugins::init_plugins(config.clone(), status);
+    let plugin_list = plugins.list();
+    println!("   Available plugins: {plugin_list:?}");
+    assert!(!plugin_list.is_empty(), "Should have some plugins");
+
+    // 4. Verify plugin functionality
+    for plugin_name in &plugin_list {
+        if let Some(plugin) = plugins.get(plugin_name) {
+            match plugin.read() {
+                Ok(data) => {
+                    println!(
+                        "   [OK] Plugin '{}' readable ({} bytes)",
+                        plugin_name,
+                        data.len()
+                    );
+                }
+                Err(e) => {
+                    println!("   [WARN]  Plugin '{plugin_name}' error: {e}");
+                }
+            }
+        }
+    }
+
+    // 5. Verify memdb data is accessible
+    println!("\n5. Verifying memdb data accessibility...");
+    assert!(memdb.exists("/testdir")?, "testdir should exist");
+    assert!(
+        memdb.exists("/testdir/file1.txt")?,
+        "file1.txt should exist"
+    );
+
+    let data = memdb.read("/testdir/file1.txt", 0, 1024)?;
+    assert_eq!(&data[..], b"Hello pmxcfs!");
+
+    // 6. Test write operations
+    let new_data = b"Modified!";
+    memdb.write("/testdir/file1.txt", 0, 0, now, new_data, true)?;
+    let data = memdb.read("/testdir/file1.txt", 0, 1024)?;
+    assert_eq!(&data[..], b"Modified!");
+
+    // 7. Test directory operations
+    memdb.create("/newdir", libc::S_IFDIR, 0, now)?;
+    memdb.create("/newdir/newfile.txt", libc::S_IFREG, 0, now)?;
+    memdb.write("/newdir/newfile.txt", 0, 0, now, b"New content", false)?;
+
+    let entries = memdb.readdir("/")?;
+    let dir_names: Vec<&String> = entries.iter().map(|e| &e.name).collect();
+    println!("   Root entries: {dir_names:?}");
+    assert!(
+        dir_names.iter().any(|n| n == &"testdir"),
+        "testdir should be in root"
+    );
+    assert!(
+        dir_names.iter().any(|n| n == &"newdir"),
+        "newdir should be in root"
+    );
+
+    // 8. Test deletion
+    memdb.delete("/newdir/newfile.txt", 0, 1000)?;
+    memdb.delete("/newdir", 0, 1000)?;
+    assert!(!memdb.exists("/newdir")?, "newdir should be deleted");
+
+    Ok(())
+}
+
+#[test]
+fn test_fuse_private_path_detection() -> Result<()> {
+    // This tests the logic that would be used in the FUSE filesystem
+    // to determine if paths should have restricted permissions
+
+    let test_cases = vec![
+        ("/priv", true, "root priv should be private"),
+        ("/priv/test", true, "priv subdir should be private"),
+        ("/nodes/node1/priv", true, "node priv should be private"),
+        (
+            "/nodes/node1/priv/data",
+            true,
+            "node priv subdir should be private",
+        ),
+        (
+            "/nodes/node1/config",
+            false,
+            "node config should not be private",
+        ),
+        ("/testdir", false, "testdir should not be private"),
+        (
+            "/private",
+            false,
+            "private (not priv) should not be private",
+        ),
+    ];
+
+    for (path, expected, description) in test_cases {
+        let is_private = is_private_path(path);
+        assert_eq!(is_private, expected, "Failed for {path}: {description}");
+    }
+
+    Ok(())
+}
+
+// Helper function matching the logic in filesystem.rs
+fn is_private_path(path: &str) -> bool {
+    let path = path.trim_start_matches('/');
+
+    // Check if path starts with "priv" or "priv/"
+    if path.starts_with("priv") && (path.len() == 4 || path.as_bytes().get(4) == Some(&b'/')) {
+        return true;
+    }
+
+    // Check for "nodes/*/priv" or "nodes/*/priv/*" pattern
+    if let Some(after_nodes) = path.strip_prefix("nodes/")
+        && let Some(slash_pos) = after_nodes.find('/')
+    {
+        let after_nodename = &after_nodes[slash_pos..];
+
+        if after_nodename.starts_with("/priv") {
+            let priv_end = slash_pos + 5;
+            if after_nodes.len() == priv_end || after_nodes.as_bytes().get(priv_end) == Some(&b'/')
+            {
+                return true;
+            }
+        }
+    }
+
+    false
+}
+
+#[test]
+fn test_fuse_inode_path_mapping() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+    let memdb = MemDb::open(&db_path, true)?;
+
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    // Create nested directory structure
+    memdb.create("/a", libc::S_IFDIR, 0, now)?;
+    memdb.create("/a/b", libc::S_IFDIR, 0, now)?;
+    memdb.create("/a/b/c", libc::S_IFDIR, 0, now)?;
+    memdb.create("/a/b/c/file.txt", libc::S_IFREG, 0, now)?;
+    memdb.write("/a/b/c/file.txt", 0, 0, now, b"deep file", false)?;
+
+    // Verify we can look up deep paths
+    let entry = memdb
+        .lookup_path("/a/b/c/file.txt")
+        .ok_or_else(|| anyhow::anyhow!("Failed to lookup deep path"))?;
+
+    println!("   Inode: {}", entry.inode);
+    println!("   Size: {}", entry.size);
+    assert!(entry.inode > 1, "Should have valid inode");
+    assert_eq!(entry.size, 9, "File size should match");
+
+    // Verify parent relationships
+    println!("\n3. Verifying parent relationships...");
+    let c_entry = memdb
+        .lookup_path("/a/b/c")
+        .ok_or_else(|| anyhow::anyhow!("Failed to lookup /a/b/c"))?;
+    let b_entry = memdb
+        .lookup_path("/a/b")
+        .ok_or_else(|| anyhow::anyhow!("Failed to lookup /a/b"))?;
+
+    assert_eq!(
+        entry.parent, c_entry.inode,
+        "file.txt parent should be c directory"
+    );
+    assert_eq!(
+        c_entry.parent, b_entry.inode,
+        "c parent should be b directory"
+    );
+
+    Ok(())
+}
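The private-path check at the top of this hunk can be sketched standalone as follows. This is a simplified illustration outside the patch; `is_private_path` is a hypothetical name, and the real code works byte-wise to avoid allocating, but the matching rule is the same: a path is private if it is `priv`, lives under `priv/`, or sits at `nodes/<node>/priv`:

```rust
// Sketch of the private-path rule: "priv", "priv/...",
// "nodes/<node>/priv", or "nodes/<node>/priv/..." are private.
fn is_private_path(path: &str) -> bool {
    if path == "priv" || path.starts_with("priv/") {
        return true;
    }
    // For "nodes/<node>/...", apply the same rule to the part
    // after the node name.
    if let Some(after_nodes) = path.strip_prefix("nodes/") {
        if let Some((_node, rest)) = after_nodes.split_once('/') {
            return rest == "priv" || rest.starts_with("priv/");
        }
    }
    false
}

fn main() {
    assert!(is_private_path("priv"));
    assert!(is_private_path("priv/shadow.cfg"));
    assert!(is_private_path("nodes/node1/priv"));
    assert!(is_private_path("nodes/node1/priv/token"));
    assert!(!is_private_path("private")); // prefix match only, not a component
    assert!(!is_private_path("nodes/priv")); // "priv" here is a node name
    println!("ok");
}
```

The component-boundary checks (`path.len() == 4` / next byte is `/` in the patched code) are what keep `private` from matching; the sketch gets the same effect with `==` plus a `priv/` prefix test.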
diff --git a/src/pmxcfs-rs/pmxcfs/tests/fuse_cluster_test.rs b/src/pmxcfs-rs/pmxcfs/tests/fuse_cluster_test.rs
new file mode 100644
index 000000000..3e56c5046
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/tests/fuse_cluster_test.rs
@@ -0,0 +1,217 @@
+//! FUSE Cluster Synchronization Tests
+//!
+//! Tests for pmxcfs FUSE operations that trigger DFSM broadcasts
+//! and synchronize across cluster nodes. These tests verify that
+//! file operations made through FUSE properly propagate to other nodes.
+use anyhow::Result;
+use pmxcfs_dfsm::{Callbacks, Dfsm, FuseMessage, NodeSyncInfo};
+use pmxcfs_memdb::MemDb;
+use pmxcfs_rs::fuse;
+use pmxcfs_rs::plugins;
+use std::fs;
+use std::io::Write;
+use std::sync::{Arc, Mutex};
+use std::time::Duration;
+use tempfile::TempDir;
+
+/// Verify that the FUSE filesystem mounted successfully; panic if not
+async fn verify_fuse_mounted(path: &std::path::Path) {
+    use std::process::Command;
+    let needle = format!(" {} ", path.display());
+    let is_mounted = tokio::task::spawn_blocking(move || {
+        Command::new("mount")
+            .output()
+            .ok()
+            .map(|o| String::from_utf8_lossy(&o.stdout).contains(&needle))
+            .unwrap_or(false)
+    })
+    .await
+    .expect("spawn_blocking failed");
+
+    if !is_mounted {
+        panic!(
+            "FUSE mount failed (likely permissions issue).\n\
+             To run FUSE integration tests, either:\n\
+             1. Run with sudo: sudo -E cargo test --test fuse_cluster_test\n\
+             2. Enable user_allow_other in /etc/fuse.conf and add your user to the 'fuse' group"
+        );
+    }
+}
+
+/// Test callbacks for DFSM - minimal implementation for testing
+struct TestDfsmCallbacks {
+    memdb: MemDb,
+    broadcasts: Arc<Mutex<Vec<String>>>, // Track broadcast operations
+}
+
+impl TestDfsmCallbacks {
+    fn new(memdb: MemDb) -> Self {
+        Self {
+            memdb,
+            broadcasts: Arc::new(Mutex::new(Vec::new())),
+        }
+    }
+
+    #[allow(dead_code)]
+    fn get_broadcasts(&self) -> Vec<String> {
+        self.broadcasts.lock().unwrap().clone()
+    }
+}
+
+impl Callbacks for TestDfsmCallbacks {
+    type Message = FuseMessage;
+
+    fn deliver_message(
+        &self,
+        _nodeid: u32,
+        _pid: u32,
+        message: FuseMessage,
+        _timestamp: u64,
+    ) -> std::io::Result<(i32, bool)> {
+        // Track the broadcast for testing
+        let msg_desc = match &message {
+            FuseMessage::Write { path, .. } => format!("write:{path}"),
+            FuseMessage::Create { path } => format!("create:{path}"),
+            FuseMessage::Mkdir { path } => format!("mkdir:{path}"),
+            FuseMessage::Delete { path } => format!("delete:{path}"),
+            FuseMessage::Rename { from, to } => format!("rename:{from}→{to}"),
+            _ => "other".to_string(),
+        };
+        self.broadcasts.lock().unwrap().push(msg_desc);
+        Ok((0, true))
+    }
+
+    fn compute_checksum(&self, output: &mut [u8; 32]) -> Result<()> {
+        *output = self.memdb.compute_database_checksum()?;
+        Ok(())
+    }
+
+    fn process_state_update(&self, _states: &[NodeSyncInfo]) -> Result<bool> {
+        Ok(true) // Indicate we're in sync for testing
+    }
+
+    fn process_update(&self, _nodeid: u32, _pid: u32, _data: &[u8]) -> Result<()> {
+        Ok(())
+    }
+
+    fn commit_state(&self) -> Result<()> {
+        Ok(())
+    }
+
+    fn on_synced(&self) {}
+
+    fn get_state(&self) -> Result<Vec<u8>> {
+        // Return empty state for testing
+        Ok(Vec::new())
+    }
+}
+
+#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
+#[ignore = "Requires FUSE mount permissions (user_allow_other in /etc/fuse.conf)"]
+async fn test_fuse_write_triggers_broadcast() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+    let mount_path = temp_dir.path().join("mnt");
+
+    fs::create_dir_all(&mount_path)?;
+
+    let memdb = MemDb::open(&db_path, true)?;
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    // Create test directory
+    memdb.create("/testdir", libc::S_IFDIR, 0, now)?;
+
+    // Create DFSM instance with test callbacks
+    let callbacks = Arc::new(TestDfsmCallbacks::new(memdb.clone()));
+    let dfsm = Arc::new(Dfsm::new("test-cluster".to_string(), callbacks.clone())?);
+
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    // Spawn FUSE mount with DFSM
+    let mount_path_clone = mount_path.clone();
+    let memdb_clone = memdb.clone();
+    let dfsm_clone = dfsm.clone();
+    let fuse_task = tokio::spawn(async move {
+        if let Err(e) = fuse::mount_fuse(
+            &mount_path_clone,
+            memdb_clone,
+            config,
+            Some(dfsm_clone),
+            plugins,
+            status,
+        )
+        .await
+        {
+            eprintln!("FUSE mount error: {e}");
+        }
+    });
+
+    tokio::time::sleep(Duration::from_secs(2)).await;
+    verify_fuse_mounted(&mount_path).await;
+
+    // Test: Write to file via FUSE should trigger broadcast
+    let test_file = mount_path.join("testdir/broadcast-test.txt");
+    let mut file = fs::File::create(&test_file)?;
+    file.write_all(b"test data for broadcast")?;
+    drop(file);
+    println!("[OK] File written via FUSE");
+
+    // Give time for broadcast
+    tokio::time::sleep(Duration::from_millis(100)).await;
+
+    // Verify file exists in memdb
+    assert!(
+        memdb.exists("/testdir/broadcast-test.txt")?,
+        "File should exist in memdb"
+    );
+    let data = memdb.read("/testdir/broadcast-test.txt", 0, 1024)?;
+    assert_eq!(&data[..], b"test data for broadcast");
+    println!("[OK] File data verified in memdb");
+
+    // Cleanup
+    fs::remove_file(&test_file)?;
+    fuse_task.abort();
+    tokio::time::sleep(Duration::from_millis(100)).await;
+    let _ = std::process::Command::new("fusermount3")
+        .arg("-u")
+        .arg(&mount_path)
+        .output();
+
+    Ok(())
+}
+
+/// Verify that the test DFSM callbacks compute checksums and track
+/// delivered messages correctly. Additional FUSE + DFSM tests can
+/// follow the same pattern.
+#[test]
+fn test_dfsm_callbacks_implementation() {
+    let temp_dir = TempDir::new().unwrap();
+    let db_path = temp_dir.path().join("test.db");
+    let memdb = MemDb::open(&db_path, true).unwrap();
+
+    let callbacks = TestDfsmCallbacks::new(memdb);
+
+    // Test checksum computation
+    let mut checksum = [0u8; 32];
+    assert!(callbacks.compute_checksum(&mut checksum).is_ok());
+
+    // Test message delivery tracking
+    let result = callbacks.deliver_message(
+        1,
+        100,
+        FuseMessage::Create {
+            path: "/test".to_string(),
+        },
+        12345,
+    );
+    assert!(result.is_ok());
+
+    let broadcasts = callbacks.get_broadcasts();
+    assert_eq!(broadcasts.len(), 1);
+    assert_eq!(broadcasts[0], "create:/test");
+}
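The broadcast-tracking assertion above (`broadcasts[0] == "create:/test"`) relies on `deliver_message` reducing each message to a short `op:path` string. A minimal self-contained mirror of that reduction, using a local stand-in for the `pmxcfs_dfsm::FuseMessage` type:

```rust
// Local stand-in for pmxcfs_dfsm::FuseMessage, covering only the
// variants the test callbacks describe.
enum FuseMessage {
    Write { path: String },
    Create { path: String },
    Delete { path: String },
    Rename { from: String, to: String },
}

// Same reduction as deliver_message() in the test callbacks:
// one compact "op:path" string per message.
fn describe(msg: &FuseMessage) -> String {
    match msg {
        FuseMessage::Write { path } => format!("write:{path}"),
        FuseMessage::Create { path } => format!("create:{path}"),
        FuseMessage::Delete { path } => format!("delete:{path}"),
        FuseMessage::Rename { from, to } => format!("rename:{from}→{to}"),
    }
}

fn main() {
    let msg = FuseMessage::Create { path: "/test".to_string() };
    assert_eq!(describe(&msg), "create:/test");
    println!("ok");
}
```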
diff --git a/src/pmxcfs-rs/pmxcfs/tests/fuse_integration_test.rs b/src/pmxcfs-rs/pmxcfs/tests/fuse_integration_test.rs
new file mode 100644
index 000000000..9e46077f4
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/tests/fuse_integration_test.rs
@@ -0,0 +1,405 @@
+//! Integration tests for the FUSE filesystem with proxmox-fuse-rs.
+//!
+//! These tests verify that the FUSE subsystem works correctly after
+//! migrating from fuser to proxmox-fuse-rs.
+use anyhow::Result;
+use pmxcfs_memdb::MemDb;
+use pmxcfs_rs::fuse;
+use pmxcfs_rs::plugins;
+use std::fs;
+use std::io::{Read, Write};
+use std::time::Duration;
+use tempfile::TempDir;
+
+/// Verify that the FUSE filesystem mounted successfully; panic if not
+fn verify_fuse_mounted(path: &std::path::Path) {
+    use std::process::Command;
+
+    let output = Command::new("mount").output().ok();
+
+    let is_mounted = if let Some(output) = output {
+        let mount_output = String::from_utf8_lossy(&output.stdout);
+        mount_output.contains(&format!(" {} ", path.display()))
+    } else {
+        false
+    };
+
+    if !is_mounted {
+        panic!(
+            "FUSE mount failed (likely permissions issue).\n\
+             To run FUSE integration tests, either:\n\
+             1. Run with sudo: sudo -E cargo test --test fuse_integration_test\n\
+             2. Enable user_allow_other in /etc/fuse.conf and add your user to the 'fuse' group\n\
+             3. Or skip these tests: cargo test --test fuse_basic_test"
+        );
+    }
+}
+
+#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
+#[ignore = "Requires FUSE mount permissions (run with sudo or configure /etc/fuse.conf)"]
+async fn test_fuse_mount_and_basic_operations() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+    let mount_path = temp_dir.path().join("mnt");
+
+    // Create mount point
+    fs::create_dir_all(&mount_path)?;
+
+    // Create database
+    let memdb = MemDb::open(&db_path, true)?;
+
+    // Create some test data in memdb
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    memdb.create("/testdir", libc::S_IFDIR, 0, now)?;
+    memdb.create("/testdir/file1.txt", libc::S_IFREG, 0, now)?;
+    memdb.write(
+        "/testdir/file1.txt",
+        0,
+        0,
+        now,
+        b"Hello from pmxcfs!",
+        false,
+    )?;
+
+    memdb.create("/nodes", libc::S_IFDIR, 0, now)?;
+    memdb.create("/nodes/testnode", libc::S_IFDIR, 0, now)?;
+    memdb.create("/nodes/testnode/config", libc::S_IFREG, 0, now)?;
+    memdb.write(
+        "/nodes/testnode/config",
+        0,
+        0,
+        now,
+        b"test=configuration",
+        false,
+    )?;
+
+    // Create config, status, and plugins (no RRD persistence needed for test)
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    // Spawn FUSE mount in background
+    println!("\n2. Mounting FUSE filesystem...");
+    let mount_path_clone = mount_path.clone();
+    let memdb_clone = memdb.clone();
+    let config_clone = config.clone();
+    let plugins_clone = plugins.clone();
+    let status_clone = status.clone();
+
+    let fuse_task = tokio::spawn(async move {
+        if let Err(e) = fuse::mount_fuse(
+            &mount_path_clone,
+            memdb_clone,
+            config_clone,
+            None, // no cluster
+            plugins_clone,
+            status_clone,
+        )
+        .await
+        {
+            eprintln!("FUSE mount error: {e}");
+        }
+    });
+
+    // Give FUSE time to initialize and check if mount succeeded
+    tokio::time::sleep(Duration::from_millis(500)).await;
+
+    // Verify FUSE mounted successfully
+    verify_fuse_mounted(&mount_path);
+
+    // Test 1: Check if mount point is accessible
+    let root_entries = fs::read_dir(&mount_path)?;
+    let mut entry_names: Vec<String> = root_entries
+        .filter_map(|e| e.ok())
+        .map(|e| e.file_name().to_string_lossy().to_string())
+        .collect();
+    entry_names.sort();
+
+    println!("   Root directory entries: {entry_names:?}");
+    assert!(
+        entry_names.contains(&"testdir".to_string()),
+        "testdir should be visible"
+    );
+    assert!(
+        entry_names.contains(&"nodes".to_string()),
+        "nodes should be visible"
+    );
+
+    // Test 2: Read existing file
+    let file_path = mount_path.join("testdir/file1.txt");
+    let mut file = fs::File::open(&file_path)?;
+    let mut contents = String::new();
+    file.read_to_string(&mut contents)?;
+    assert_eq!(contents, "Hello from pmxcfs!");
+    println!("   Read: '{contents}'");
+
+    // Test 3: Write to existing file
+    let mut file = fs::OpenOptions::new()
+        .write(true)
+        .truncate(true)
+        .open(&file_path)?;
+    file.write_all(b"Modified content!")?;
+    drop(file);
+
+    // Verify write
+    let mut file = fs::File::open(&file_path)?;
+    let mut contents = String::new();
+    file.read_to_string(&mut contents)?;
+    assert_eq!(contents, "Modified content!");
+    println!("   After write: '{contents}'");
+
+    // Test 4: Create new file
+    let new_file_path = mount_path.join("testdir/newfile.txt");
+    let mut new_file = fs::File::create(&new_file_path)?;
+    new_file.write_all(b"New file content")?;
+    drop(new_file);
+
+    // Verify creation
+    let mut file = fs::File::open(&new_file_path)?;
+    let mut contents = String::new();
+    file.read_to_string(&mut contents)?;
+    assert_eq!(contents, "New file content");
+    println!("   Created and verified: newfile.txt");
+
+    // Test 5: Create directory
+    let new_dir_path = mount_path.join("newdir");
+    fs::create_dir(&new_dir_path)?;
+
+    // Verify directory exists
+    assert!(new_dir_path.exists());
+    assert!(new_dir_path.is_dir());
+
+    // Test 6: List directory
+    let testdir_entries = fs::read_dir(mount_path.join("testdir"))?;
+    let mut file_names: Vec<String> = testdir_entries
+        .filter_map(|e| e.ok())
+        .map(|e| e.file_name().to_string_lossy().to_string())
+        .collect();
+    file_names.sort();
+
+    println!("   testdir entries: {file_names:?}");
+    assert!(
+        file_names.contains(&"file1.txt".to_string()),
+        "file1.txt should exist"
+    );
+    assert!(
+        file_names.contains(&"newfile.txt".to_string()),
+        "newfile.txt should exist"
+    );
+
+    // Test 7: Get file metadata
+    let metadata = fs::metadata(&file_path)?;
+    println!("   File size: {} bytes", metadata.len());
+    println!("   Is file: {}", metadata.is_file());
+    println!("   Is dir: {}", metadata.is_dir());
+    assert!(metadata.is_file());
+    assert!(!metadata.is_dir());
+
+    // Test 8: Test plugin files
+    let plugin_files = vec![".version", ".members", ".vmlist", ".rrd", ".clusterlog"];
+
+    for plugin_name in &plugin_files {
+        let plugin_path = mount_path.join(plugin_name);
+        if plugin_path.exists() {
+            match fs::File::open(&plugin_path) {
+                Ok(mut file) => {
+                    let mut contents = Vec::new();
+                    file.read_to_end(&mut contents)?;
+                    println!(
+                        "   [OK] Plugin '{}' readable ({} bytes)",
+                        plugin_name,
+                        contents.len()
+                    );
+                }
+                Err(e) => {
+                    println!("   [WARN] Plugin '{plugin_name}' exists but not readable: {e}");
+                }
+            }
+        } else {
+            println!("   [INFO] Plugin '{plugin_name}' not present");
+        }
+    }
+
+    // Test 9: Delete file
+    fs::remove_file(&new_file_path)?;
+    assert!(!new_file_path.exists());
+
+    // Test 10: Delete directory
+    fs::remove_dir(&new_dir_path)?;
+    assert!(!new_dir_path.exists());
+
+    // Test 11: Verify changes persisted to memdb
+    println!("\n11. Verifying memdb persistence...");
+    assert!(
+        memdb.exists("/testdir/file1.txt")?,
+        "file1.txt should exist in memdb"
+    );
+    assert!(
+        !memdb.exists("/testdir/newfile.txt")?,
+        "newfile.txt should be deleted from memdb"
+    );
+    assert!(
+        !memdb.exists("/newdir")?,
+        "newdir should be deleted from memdb"
+    );
+
+    let read_data = memdb.read("/testdir/file1.txt", 0, 1024)?;
+    assert_eq!(
+        &read_data[..],
+        b"Modified content!",
+        "File content should be updated in memdb"
+    );
+
+    // Cleanup: unmount filesystem
+    fuse_task.abort();
+    tokio::time::sleep(Duration::from_millis(100)).await;
+
+    // Force unmount
+    let _ = std::process::Command::new("umount")
+        .arg("-l")
+        .arg(&mount_path)
+        .output();
+
+    Ok(())
+}
+
+#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
+#[ignore = "Requires FUSE mount permissions (run with sudo or configure /etc/fuse.conf)"]
+async fn test_fuse_concurrent_operations() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+    let mount_path = temp_dir.path().join("mnt");
+
+    fs::create_dir_all(&mount_path)?;
+
+    let memdb = MemDb::open(&db_path, true)?;
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    memdb.create("/testdir", libc::S_IFDIR, 0, now)?;
+
+    // Spawn FUSE mount
+    let mount_path_clone = mount_path.clone();
+    let memdb_clone = memdb.clone();
+    let fuse_task = tokio::spawn(async move {
+        let _ = fuse::mount_fuse(
+            &mount_path_clone,
+            memdb_clone,
+            config,
+            None,
+            plugins,
+            status,
+        )
+        .await;
+    });
+
+    tokio::time::sleep(Duration::from_millis(500)).await;
+
+    // Verify FUSE mounted successfully
+    verify_fuse_mounted(&mount_path);
+
+    // Create multiple files concurrently
+    let mut tasks = vec![];
+    for i in 0..5 {
+        let mount = mount_path.clone();
+        let task = tokio::task::spawn_blocking(move || -> Result<()> {
+            let file_path = mount.join(format!("testdir/file{i}.txt"));
+            let mut file = fs::File::create(&file_path)?;
+            file.write_all(format!("Content {i}").as_bytes())?;
+            Ok(())
+        });
+        tasks.push(task);
+    }
+
+    // Wait for all tasks
+    for task in tasks {
+        task.await??;
+    }
+
+    // Read all files and verify
+    for i in 0..5 {
+        let file_path = mount_path.join(format!("testdir/file{i}.txt"));
+        let mut file = fs::File::open(&file_path)?;
+        let mut contents = String::new();
+        file.read_to_string(&mut contents)?;
+        assert_eq!(contents, format!("Content {i}"));
+    }
+
+    // Cleanup
+    fuse_task.abort();
+    tokio::time::sleep(Duration::from_millis(100)).await;
+    let _ = std::process::Command::new("umount")
+        .arg("-l")
+        .arg(&mount_path)
+        .output();
+
+    Ok(())
+}
+
+#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
+#[ignore = "Requires FUSE mount permissions (run with sudo or configure /etc/fuse.conf)"]
+async fn test_fuse_error_handling() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+    let mount_path = temp_dir.path().join("mnt");
+
+    fs::create_dir_all(&mount_path)?;
+
+    let memdb = MemDb::open(&db_path, true)?;
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    // Spawn FUSE mount
+    let mount_path_clone = mount_path.clone();
+    let memdb_clone = memdb.clone();
+    let fuse_task = tokio::spawn(async move {
+        let _ = fuse::mount_fuse(
+            &mount_path_clone,
+            memdb_clone,
+            config,
+            None,
+            plugins,
+            status,
+        )
+        .await;
+    });
+
+    tokio::time::sleep(Duration::from_millis(500)).await;
+
+    // Verify FUSE mounted successfully
+    verify_fuse_mounted(&mount_path);
+
+    let result = fs::File::open(mount_path.join("nonexistent.txt"));
+    assert!(result.is_err(), "Should fail to open non-existent file");
+
+    let result = fs::remove_file(mount_path.join("nonexistent.txt"));
+    assert!(result.is_err(), "Should fail to delete non-existent file");
+
+    let result = fs::create_dir(mount_path.join("nonexistent/subdir"));
+    assert!(
+        result.is_err(),
+        "Should fail to create dir in non-existent parent"
+    );
+
+    // Cleanup
+    fuse_task.abort();
+    tokio::time::sleep(Duration::from_millis(100)).await;
+    let _ = std::process::Command::new("umount")
+        .arg("-l")
+        .arg(&mount_path)
+        .output();
+
+    Ok(())
+}
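The `verify_fuse_mounted` helpers above shell out to `mount` and substring-match `" <path> "` in its output. A sketch of the same check done by parsing the mount table line by line, which also avoids false positives when one mount point is a prefix of another (`path_is_mounted` is a hypothetical helper; reading `/proc/self/mounts` directly would be another option):

```rust
// Each line of `mount` output looks like:
//   "<source> on <target> type <fstype> (<options>)"
// Extract <target> from each line and compare it exactly.
fn path_is_mounted(mount_table: &str, path: &str) -> bool {
    mount_table.lines().any(|line| {
        line.split(" on ")
            .nth(1)
            .and_then(|rest| rest.split(" type ").next())
            .map(|target| target == path)
            .unwrap_or(false)
    })
}

fn main() {
    let table = "\
pmxcfs on /tmp/x/mnt type fuse (rw,nosuid,nodev)
tmpfs on /tmp type tmpfs (rw)";
    assert!(path_is_mounted(table, "/tmp/x/mnt"));
    assert!(path_is_mounted(table, "/tmp"));
    assert!(!path_is_mounted(table, "/tmp/x")); // prefix of a mount, not a mount
    println!("ok");
}
```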
diff --git a/src/pmxcfs-rs/pmxcfs/tests/fuse_locks_test.rs b/src/pmxcfs-rs/pmxcfs/tests/fuse_locks_test.rs
new file mode 100644
index 000000000..84ec48d5b
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/tests/fuse_locks_test.rs
@@ -0,0 +1,375 @@
+//! FUSE Lock Operations Tests
+//!
+//! Tests for pmxcfs lock operations through the FUSE interface.
+//! Locks are implemented as directories under /priv/lock/ and use
+//! setattr(mtime) for renewal and release operations.
+use anyhow::Result;
+use pmxcfs_memdb::MemDb;
+use pmxcfs_rs::fuse;
+use pmxcfs_rs::plugins;
+use std::fs;
+use std::os::unix::fs::MetadataExt;
+use std::time::Duration;
+use tempfile::TempDir;
+
+/// Verify that the FUSE filesystem mounted successfully; panic if not
+fn verify_fuse_mounted(path: &std::path::Path) {
+    use std::process::Command;
+
+    let output = Command::new("mount").output().ok();
+
+    let is_mounted = if let Some(output) = output {
+        let mount_output = String::from_utf8_lossy(&output.stdout);
+        mount_output.contains(&format!(" {} ", path.display()))
+    } else {
+        false
+    };
+
+    if !is_mounted {
+        panic!(
+            "FUSE mount failed (likely permissions issue).\n\
+             To run FUSE integration tests, either:\n\
+             1. Run with sudo: sudo -E cargo test --test fuse_locks_test\n\
+             2. Enable user_allow_other in /etc/fuse.conf\n\
+             3. Or skip these tests: cargo test --lib"
+        );
+    }
+}
+
+#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
+#[ignore = "Requires FUSE mount permissions (run with sudo or configure /etc/fuse.conf)"]
+async fn test_lock_creation_and_access() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+    let mount_path = temp_dir.path().join("mnt");
+
+    fs::create_dir_all(&mount_path)?;
+
+    let memdb = MemDb::open(&db_path, true)?;
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    // Create lock directory structure in memdb
+    memdb.create("/priv", libc::S_IFDIR, 0, now)?;
+    memdb.create("/priv/lock", libc::S_IFDIR, 0, now)?;
+
+    // Spawn FUSE mount
+    let mount_path_clone = mount_path.clone();
+    let memdb_clone = memdb.clone();
+    let fuse_task = tokio::spawn(async move {
+        let _ = fuse::mount_fuse(
+            &mount_path_clone,
+            memdb_clone,
+            config,
+            None, // no cluster
+            plugins,
+            status,
+        )
+        .await;
+    });
+
+    tokio::time::sleep(Duration::from_millis(500)).await;
+    verify_fuse_mounted(&mount_path);
+
+    // Test 1: Create lock directory via FUSE (mkdir)
+    let lock_path = mount_path.join("priv/lock/test-resource");
+    fs::create_dir(&lock_path)?;
+    println!("[OK] Lock directory created via FUSE");
+
+    // Test 2: Verify lock exists and is a directory
+    assert!(lock_path.exists(), "Lock should exist");
+    assert!(lock_path.is_dir(), "Lock should be a directory");
+    println!("[OK] Lock directory accessible");
+
+    // Test 3: Verify lock is in memdb
+    assert!(
+        memdb.exists("/priv/lock/test-resource")?,
+        "Lock should exist in memdb"
+    );
+    println!("[OK] Lock persisted to memdb");
+
+    // Test 4: Verify lock path detection
+    assert!(
+        pmxcfs_memdb::is_lock_path("/priv/lock/test-resource"),
+        "Path should be detected as lock path"
+    );
+    println!("[OK] Lock path correctly identified");
+
+    // Test 5: List locks via FUSE readdir
+    let lock_dir_entries: Vec<_> = fs::read_dir(mount_path.join("priv/lock"))?
+        .filter_map(|e| e.ok())
+        .map(|e| e.file_name().to_string_lossy().to_string())
+        .collect();
+    assert!(
+        lock_dir_entries.contains(&"test-resource".to_string()),
+        "Lock should appear in directory listing"
+    );
+    println!("[OK] Lock visible in readdir");
+
+    // Cleanup
+    fs::remove_dir(&lock_path)?;
+    fuse_task.abort();
+    tokio::time::sleep(Duration::from_millis(100)).await;
+    let _ = std::process::Command::new("umount")
+        .arg("-l")
+        .arg(&mount_path)
+        .output();
+
+    Ok(())
+}
+
+#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
+#[ignore = "Requires FUSE mount permissions (run with sudo or configure /etc/fuse.conf)"]
+async fn test_lock_renewal_via_mtime_update() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+    let mount_path = temp_dir.path().join("mnt");
+
+    fs::create_dir_all(&mount_path)?;
+
+    let memdb = MemDb::open(&db_path, true)?;
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    // Create lock directory structure
+    memdb.create("/priv", libc::S_IFDIR, 0, now)?;
+    memdb.create("/priv/lock", libc::S_IFDIR, 0, now)?;
+
+    // Spawn FUSE mount
+    let mount_path_clone = mount_path.clone();
+    let memdb_clone = memdb.clone();
+    let fuse_task = tokio::spawn(async move {
+        let _ = fuse::mount_fuse(
+            &mount_path_clone,
+            memdb_clone,
+            config,
+            None,
+            plugins,
+            status,
+        )
+        .await;
+    });
+
+    tokio::time::sleep(Duration::from_millis(500)).await;
+    verify_fuse_mounted(&mount_path);
+
+    // Create lock via FUSE
+    let lock_path = mount_path.join("priv/lock/renewal-test");
+    fs::create_dir(&lock_path)?;
+    println!("[OK] Lock directory created");
+
+    // Get initial metadata
+    let metadata1 = fs::metadata(&lock_path)?;
+    let mtime1 = metadata1.mtime();
+    println!("  Initial mtime: {mtime1}");
+
+    // Wait a moment
+    tokio::time::sleep(Duration::from_millis(100)).await;
+
+    // Test lock renewal: update mtime using filetime crate
+    // (This simulates the lock renewal mechanism used by Proxmox VE)
+    use filetime::{FileTime, set_file_mtime};
+    let new_time = FileTime::now();
+    set_file_mtime(&lock_path, new_time)?;
+    println!("[OK] Lock mtime updated (renewal)");
+
+    // Verify mtime was updated
+    let metadata2 = fs::metadata(&lock_path)?;
+    let mtime2 = metadata2.mtime();
+    println!("  Updated mtime: {mtime2}");
+
+    // Note: Due to filesystem timestamp granularity, we just verify the operation succeeded
+    // The actual lock renewal logic is tested at the memdb level
+    println!("[OK] Lock renewal operation completed");
+
+    // Cleanup
+    fs::remove_dir(&lock_path)?;
+    fuse_task.abort();
+    tokio::time::sleep(Duration::from_millis(100)).await;
+    let _ = std::process::Command::new("umount")
+        .arg("-l")
+        .arg(&mount_path)
+        .output();
+
+    Ok(())
+}
+
+#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
+#[ignore = "Requires FUSE mount permissions (run with sudo or configure /etc/fuse.conf)"]
+async fn test_lock_unlock_via_mtime_zero() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+    let mount_path = temp_dir.path().join("mnt");
+
+    fs::create_dir_all(&mount_path)?;
+
+    let memdb = MemDb::open(&db_path, true)?;
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    // Create lock directory structure
+    memdb.create("/priv", libc::S_IFDIR, 0, now)?;
+    memdb.create("/priv/lock", libc::S_IFDIR, 0, now)?;
+
+    // Spawn FUSE mount (without DFSM so unlock happens locally)
+    let mount_path_clone = mount_path.clone();
+    let memdb_clone = memdb.clone();
+    let fuse_task = tokio::spawn(async move {
+        let _ = fuse::mount_fuse(
+            &mount_path_clone,
+            memdb_clone,
+            config,
+            None,
+            plugins,
+            status,
+        )
+        .await;
+    });
+
+    tokio::time::sleep(Duration::from_millis(500)).await;
+    verify_fuse_mounted(&mount_path);
+
+    // Create lock via FUSE
+    let lock_path = mount_path.join("priv/lock/unlock-test");
+    fs::create_dir(&lock_path)?;
+    println!("[OK] Lock directory created");
+
+    // Verify lock exists
+    assert!(lock_path.exists(), "Lock should exist");
+    assert!(
+        memdb.exists("/priv/lock/unlock-test")?,
+        "Lock should exist in memdb"
+    );
+
+    // Test unlock: set mtime to 0 (Unix epoch)
+    // This is the unlock signal in pmxcfs
+    use filetime::{FileTime, set_file_mtime};
+    let zero_time = FileTime::from_unix_time(0, 0);
+    set_file_mtime(&lock_path, zero_time)?;
+    println!("[OK] Lock unlock requested (mtime=0)");
+
+    // Give time for unlock processing
+    tokio::time::sleep(Duration::from_millis(200)).await;
+
+    // Without DFSM, the lock is only deleted locally if it has expired.
+    // Since we just created it, it is not expired and should still exist.
+    // (This matches the C behavior: only delete if lock_expired() returns true.)
+    assert!(
+        lock_path.exists(),
+        "Lock should still exist (not expired yet)"
+    );
+    println!("[OK] Unlock handled correctly (lock not expired, kept)");
+
+    // Cleanup
+    fs::remove_dir(&lock_path)?;
+    fuse_task.abort();
+    tokio::time::sleep(Duration::from_millis(100)).await;
+    let _ = std::process::Command::new("umount")
+        .arg("-l")
+        .arg(&mount_path)
+        .output();
+
+    Ok(())
+}
+
+#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
+#[ignore = "Requires FUSE mount permissions (run with sudo or configure /etc/fuse.conf)"]
+async fn test_multiple_locks() -> Result<()> {
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("test.db");
+    let mount_path = temp_dir.path().join("mnt");
+
+    fs::create_dir_all(&mount_path)?;
+
+    let memdb = MemDb::open(&db_path, true)?;
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    // Create lock directory structure
+    memdb.create("/priv", libc::S_IFDIR, 0, now)?;
+    memdb.create("/priv/lock", libc::S_IFDIR, 0, now)?;
+
+    // Spawn FUSE mount
+    let mount_path_clone = mount_path.clone();
+    let memdb_clone = memdb.clone();
+    let fuse_task = tokio::spawn(async move {
+        let _ = fuse::mount_fuse(
+            &mount_path_clone,
+            memdb_clone,
+            config,
+            None,
+            plugins,
+            status,
+        )
+        .await;
+    });
+
+    tokio::time::sleep(Duration::from_millis(500)).await;
+    verify_fuse_mounted(&mount_path);
+
+    // Test: Create multiple locks simultaneously
+    let lock_names = vec!["vm-100-disk-0", "vm-101-disk-0", "vm-102-disk-0"];
+
+    for name in &lock_names {
+        let lock_path = mount_path.join(format!("priv/lock/{name}"));
+        fs::create_dir(&lock_path)?;
+        println!("[OK] Lock '{name}' created");
+    }
+
+    // Verify all locks exist
+    let lock_dir_entries: Vec<_> = fs::read_dir(mount_path.join("priv/lock"))?
+        .filter_map(|e| e.ok())
+        .map(|e| e.file_name().to_string_lossy().to_string())
+        .collect();
+
+    for name in &lock_names {
+        assert!(
+            lock_dir_entries.contains(&name.to_string()),
+            "Lock '{name}' should be in directory listing"
+        );
+        assert!(
+            memdb.exists(&format!("/priv/lock/{name}"))?,
+            "Lock '{name}' should exist in memdb"
+        );
+    }
+    println!("[OK] All locks accessible");
+
+    // Cleanup
+    for name in &lock_names {
+        let lock_path = mount_path.join(format!("priv/lock/{name}"));
+        fs::remove_dir(&lock_path)?;
+    }
+
+    fuse_task.abort();
+    tokio::time::sleep(Duration::from_millis(100)).await;
+    let _ = std::process::Command::new("umount")
+        .arg("-l")
+        .arg(&mount_path)
+        .output();
+
+    Ok(())
+}
diff --git a/src/pmxcfs-rs/pmxcfs/tests/local_integration.rs b/src/pmxcfs-rs/pmxcfs/tests/local_integration.rs
new file mode 100644
index 000000000..05ebf0d55
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/tests/local_integration.rs
@@ -0,0 +1,438 @@
+// Local integration tests that don't require containers
+// Tests for MemDb functionality and basic plugin integration
+
+use anyhow::Result;
+use pmxcfs_memdb::MemDb;
+use pmxcfs_rs::plugins;
+use pmxcfs_test_utils::*;
+
+/// Test basic MemDb CRUD operations
+#[test]
+fn test_memdb_create_read_write() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    // Create a file
+    memdb.create("/test-file.txt", libc::S_IFREG, 0, TEST_MTIME)?;
+
+    // Write content
+    let content = b"Hello, World!";
+    memdb.write("/test-file.txt", 0, 0, TEST_MTIME, content, false)?;
+
+    // Read it back
+    let data = memdb.read("/test-file.txt", 0, 1024)?;
+    assert_eq!(data, content, "File content should match");
+
+    Ok(())
+}
+
+/// Test directory operations
+#[test]
+fn test_memdb_directories() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    // Create directory structure
+    memdb.create("/nodes", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/testnode", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/testnode/qemu-server", libc::S_IFDIR, 0, TEST_MTIME)?;
+
+    // List directory
+    let entries = memdb.readdir("/nodes/testnode")?;
+    assert_eq!(entries.len(), 1, "Should have 1 entry");
+    assert_eq!(entries[0].name, "qemu-server");
+
+    // Verify directory exists
+    assert!(memdb.exists("/nodes")?);
+    assert!(memdb.exists("/nodes/testnode")?);
+    assert!(memdb.exists("/nodes/testnode/qemu-server")?);
+
+    Ok(())
+}
+
+/// Test file operations: rename and delete
+#[test]
+fn test_memdb_file_operations() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    // Create and write file
+    memdb.create("/old-name.txt", libc::S_IFREG, 0, TEST_MTIME)?;
+    memdb.write("/old-name.txt", 0, 0, TEST_MTIME, b"test", false)?;
+
+    // Test rename
+    memdb.rename("/old-name.txt", "/new-name.txt", 0, 1000)?;
+    assert!(!memdb.exists("/old-name.txt")?, "Old name should not exist");
+    assert!(memdb.exists("/new-name.txt")?, "New name should exist");
+
+    // Verify content survived rename
+    let data = memdb.read("/new-name.txt", 0, 1024)?;
+    assert_eq!(data, b"test");
+
+    // Test delete
+    memdb.delete("/new-name.txt", 0, 1000)?;
+    assert!(!memdb.exists("/new-name.txt")?, "File should be deleted");
+
+    Ok(())
+}
+
+/// Test database persistence across reopens
+#[test]
+fn test_memdb_persistence() -> Result<()> {
+    let temp_dir = tempfile::TempDir::new()?;
+    let db_path = temp_dir.path().join("persist.db");
+
+    // Create and populate database
+    {
+        let memdb = MemDb::open(&db_path, true)?;
+        memdb.create("/persistent.txt", libc::S_IFREG, 0, TEST_MTIME)?;
+        memdb.write(
+            "/persistent.txt",
+            0,
+            0,
+            TEST_MTIME,
+            b"persistent data",
+            false,
+        )?;
+    }
+
+    // Reopen database and verify data persists
+    {
+        let memdb = MemDb::open(&db_path, false)?;
+        let data = memdb.read("/persistent.txt", 0, 1024)?;
+        assert_eq!(
+            data, b"persistent data",
+            "Data should persist across reopens"
+        );
+    }
+
+    Ok(())
+}
+
+/// Test write with offset (partial write/append)
+#[test]
+fn test_memdb_write_offset() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    memdb.create("/offset-test.txt", libc::S_IFREG, 0, TEST_MTIME)?;
+
+    // Write at offset 0
+    memdb.write("/offset-test.txt", 0, 0, TEST_MTIME, b"Hello", false)?;
+
+    // Write at offset 5 (append)
+    memdb.write("/offset-test.txt", 5, 0, TEST_MTIME, b", World!", false)?;
+
+    // Read full content
+    let data = memdb.read("/offset-test.txt", 0, 1024)?;
+    assert_eq!(data, b"Hello, World!");
+
+    Ok(())
+}
+
+/// Test write with truncation
+///
+/// Verifies the truncation semantics: truncate=true must clear the
+/// existing content before writing.
+#[test]
+fn test_memdb_write_truncate() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    memdb.create("/truncate-test.txt", libc::S_IFREG, 0, TEST_MTIME)?;
+
+    // Write initial content
+    memdb.write(
+        "/truncate-test.txt",
+        0,
+        0,
+        TEST_MTIME,
+        b"Hello, World!",
+        false,
+    )?;
+
+    // Overwrite with truncate=true (should clear first, then write)
+    memdb.write("/truncate-test.txt", 0, 0, TEST_MTIME, b"Hi", true)?;
+
+    // Should only have "Hi"
+    let data = memdb.read("/truncate-test.txt", 0, 1024)?;
+    assert_eq!(data, b"Hi", "Truncate should clear file before writing");
+
+    Ok(())
+}
+
+/// Test file size limit (C implementation limits to 1MB)
+#[test]
+fn test_memdb_file_size_limit() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    memdb.create("/large.bin", libc::S_IFREG, 0, TEST_MTIME)?;
+
+    // Exactly 1MB should be accepted
+    let one_mb = vec![0u8; 1024 * 1024];
+    assert!(
+        memdb
+            .write("/large.bin", 0, 0, TEST_MTIME, &one_mb, false)
+            .is_ok(),
+        "1MB file should be accepted"
+    );
+
+    // Over 1MB should fail
+    let over_one_mb = vec![0u8; 1024 * 1024 + 1];
+    assert!(
+        memdb
+            .write("/large.bin", 0, 0, TEST_MTIME, &over_one_mb, false)
+            .is_err(),
+        "Over 1MB file should be rejected"
+    );
+
+    Ok(())
+}
+
+/// Test plugin initialization and basic functionality
+#[test]
+fn test_plugin_initialization() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+
+    let plugin_registry = plugins::init_plugins(config, status);
+
+    // Verify plugins are registered
+    let plugin_list = plugin_registry.list();
+    assert!(!plugin_list.is_empty(), "Should have plugins registered");
+
+    // Verify expected plugins exist
+    assert!(
+        plugin_registry.get(".version").is_some(),
+        "Should have .version plugin"
+    );
+    assert!(
+        plugin_registry.get(".vmlist").is_some(),
+        "Should have .vmlist plugin"
+    );
+    assert!(
+        plugin_registry.get(".rrd").is_some(),
+        "Should have .rrd plugin"
+    );
+    assert!(
+        plugin_registry.get(".members").is_some(),
+        "Should have .members plugin"
+    );
+    assert!(
+        plugin_registry.get(".clusterlog").is_some(),
+        "Should have .clusterlog plugin"
+    );
+
+    Ok(())
+}
+
+/// Test .version plugin output
+#[test]
+fn test_version_plugin() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let plugins = plugins::init_plugins(config, status);
+
+    let version_plugin = plugins
+        .get(".version")
+        .expect(".version plugin should exist");
+
+    let version_data = version_plugin.read()?;
+
+    // Verify it's valid JSON
+    let version_json: serde_json::Value = serde_json::from_slice(&version_data)?;
+    assert!(version_json.is_object(), "Version should be JSON object");
+    assert!(version_json["starttime"].is_number());
+    assert!(version_json["kvstore"].is_object());
+
+    Ok(())
+}
+
+/// Test error case: reading non-existent file
+#[test]
+fn test_memdb_error_nonexistent_file() {
+    let (_temp_dir, memdb) = create_minimal_test_db().unwrap();
+
+    let result = memdb.read("/does-not-exist.txt", 0, 1024);
+    assert!(result.is_err(), "Reading non-existent file should fail");
+}
+
+/// Test error case: creating file in non-existent directory
+#[test]
+fn test_memdb_error_no_parent_directory() {
+    let (_temp_dir, memdb) = create_minimal_test_db().unwrap();
+
+    let result = memdb.create("/nonexistent/file.txt", libc::S_IFREG, 0, TEST_MTIME);
+    assert!(
+        result.is_err(),
+        "Creating file in non-existent directory should fail"
+    );
+}
+
+/// Test error case: writing to non-existent file
+#[test]
+fn test_memdb_error_write_nonexistent() {
+    let (_temp_dir, memdb) = create_minimal_test_db().unwrap();
+
+    let result = memdb.write("/does-not-exist.txt", 0, 0, TEST_MTIME, b"test", false);
+    assert!(result.is_err(), "Writing to non-existent file should fail");
+}
+
+/// Test error case: deleting non-existent file
+#[test]
+fn test_memdb_error_delete_nonexistent() {
+    let (_temp_dir, memdb) = create_minimal_test_db().unwrap();
+
+    let result = memdb.delete("/does-not-exist.txt", 0, 1000);
+    assert!(result.is_err(), "Deleting non-existent file should fail");
+}
+
+/// Test that renaming a non-empty directory is rejected (pmxcfs POSIX restriction).
+///
+/// From the wiki: "You can't rename non-empty directories (because this makes it
+/// easier to guarantee that VMIDs are unique)."
+#[test]
+fn test_rename_nonempty_dir_rejected() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    memdb.create("/parent", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/parent/child.txt", 0, 0, TEST_MTIME)?;
+    memdb.create("/dest", libc::S_IFDIR, 0, TEST_MTIME)?;
+
+    let result = memdb.rename("/parent", "/dest/parent", 0, TEST_MTIME);
+    assert!(result.is_err(), "Renaming non-empty directory must fail");
+
+    // Original directory and its children should still be intact
+    assert!(
+        memdb.lookup_path("/parent/child.txt").is_some(),
+        "Child should still exist after rejected rename"
+    );
+
+    Ok(())
+}
+
+/// Test that renaming a VM config file is blocked when the VMID already exists
+/// on a different node (C parity: memdb_rename lines 1098-1105).
+///
+/// The wiki guarantees strong consistency checks to avoid duplicate VM IDs.
+#[test]
+fn test_rename_blocked_for_duplicate_vmid() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    // Set up two node directories with VM config dirs
+    memdb.create("/nodes", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node1", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node1/qemu-server", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node2", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node2/qemu-server", libc::S_IFDIR, 0, TEST_MTIME)?;
+
+    // node1 already has VM 100
+    memdb.create("/nodes/node1/qemu-server/100.conf", 0, 0, TEST_MTIME)?;
+
+    // Trying to rename any file to /nodes/node2/qemu-server/100.conf must fail (VMID conflict)
+    memdb.create("/nodes/node2/qemu-server/200.conf", 0, 0, TEST_MTIME)?;
+    let result = memdb.rename(
+        "/nodes/node2/qemu-server/200.conf",
+        "/nodes/node2/qemu-server/100.conf",
+        0,
+        TEST_MTIME,
+    );
+    assert!(
+        result.is_err(),
+        "Rename to duplicate VMID on different node must fail"
+    );
+
+    Ok(())
+}
+
+/// Test that renaming a VM config within the same VMID (e.g. moving between
+/// nodes) is allowed (C: "if (!(from_node && vmid == from_vmid))").
+#[test]
+fn test_rename_allowed_for_same_vmid_move() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    memdb.create("/nodes", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node1", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node1/qemu-server", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node2", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node2/qemu-server", libc::S_IFDIR, 0, TEST_MTIME)?;
+
+    memdb.create("/nodes/node1/qemu-server/100.conf", 0, 0, TEST_MTIME)?;
+
+    // Moving VM 100 from node1 to node2 with the same VMID must succeed
+    let result = memdb.rename(
+        "/nodes/node1/qemu-server/100.conf",
+        "/nodes/node2/qemu-server/100.conf",
+        0,
+        TEST_MTIME,
+    );
+    assert!(
+        result.is_ok(),
+        "Moving a VM config to the same VMID on another node must succeed: {result:?}"
+    );
+
+    Ok(())
+}
+
+/// Test that creating a VM config file is blocked when the VMID already exists
+/// on a different node (C parity: memdb_create() lines 720-725).
+///
+/// The wiki guarantees strong consistency checks to avoid duplicate VM IDs.
+#[test]
+fn test_create_blocked_for_duplicate_vmid() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    memdb.create("/nodes", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node1", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node1/qemu-server", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node2", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node2/qemu-server", libc::S_IFDIR, 0, TEST_MTIME)?;
+
+    // node1 already has VM 100
+    memdb.create("/nodes/node1/qemu-server/100.conf", 0, 0, TEST_MTIME)?;
+
+    // Trying to create /nodes/node2/qemu-server/100.conf must fail (VMID conflict)
+    let result = memdb.create("/nodes/node2/qemu-server/100.conf", 0, 0, TEST_MTIME);
+    assert!(
+        result.is_err(),
+        "Creating duplicate VMID on different node must fail"
+    );
+
+    // But a different VMID on node2 must succeed
+    let result2 = memdb.create("/nodes/node2/qemu-server/200.conf", 0, 0, TEST_MTIME);
+    assert!(
+        result2.is_ok(),
+        "Creating a different VMID on another node must succeed: {result2:?}"
+    );
+
+    Ok(())
+}
+
+/// Test that creating a VM config file with the same VMID on the same node is blocked
+/// by the "already exists" check (not the VMID uniqueness check).
+#[test]
+fn test_create_same_vmid_same_node_blocked() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    memdb.create("/nodes", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node1", libc::S_IFDIR, 0, TEST_MTIME)?;
+    memdb.create("/nodes/node1/qemu-server", libc::S_IFDIR, 0, TEST_MTIME)?;
+
+    memdb.create("/nodes/node1/qemu-server/100.conf", 0, 0, TEST_MTIME)?;
+
+    // Creating the same path again must fail
+    let result = memdb.create("/nodes/node1/qemu-server/100.conf", 0, 0, TEST_MTIME);
+    assert!(result.is_err(), "Creating already-existing file must fail");
+
+    Ok(())
+}
+
+/// Test that renaming an empty directory is allowed.
+#[test]
+fn test_rename_empty_dir_allowed() -> Result<()> {
+    let (_temp_dir, memdb) = create_minimal_test_db()?;
+
+    memdb.create("/emptydir", libc::S_IFDIR, 0, TEST_MTIME)?;
+
+    let result = memdb.rename("/emptydir", "/renamed-dir", 0, TEST_MTIME);
+    assert!(result.is_ok(), "Renaming an empty directory should succeed");
+    assert!(memdb.lookup_path("/renamed-dir").is_some());
+    assert!(memdb.lookup_path("/emptydir").is_none());
+
+    Ok(())
+}
diff --git a/src/pmxcfs-rs/pmxcfs/tests/quorum_behavior.rs b/src/pmxcfs-rs/pmxcfs/tests/quorum_behavior.rs
new file mode 100644
index 000000000..ed7e2d9a9
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/tests/quorum_behavior.rs
@@ -0,0 +1,302 @@
+//! Quorum-Dependent Behavior Tests
+//!
+//! Tests for pmxcfs behavior that changes based on quorum state.
+//! These tests verify plugin behavior (especially symlinks) and
+//! operations that should be blocked/allowed based on quorum.
+//!
+//! Note: These tests do NOT require FUSE mounting - they test the
+//! plugin layer directly, which is accessible without root permissions.
+use anyhow::Result;
+use pmxcfs_rs::plugins;
+use pmxcfs_test_utils::*;
+
+/// Test that the .members plugin remains readable and reports the quorum
+/// state in its JSON payload.
+#[test]
+fn test_members_plugin_quorum_behavior() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let plugins = plugins::init_plugins(config, status.clone());
+    status
+        .update_cluster_info(
+            TEST_CLUSTER_NAME.to_string(),
+            3,
+            vec![(1, TEST_NODE_NAME.to_string(), "10.0.0.1".to_string())],
+        )
+        .unwrap();
+
+    let members_plugin = plugins
+        .get(".members")
+        .expect(".members plugin should exist");
+
+    // With quorum, .members should be readable and report quorate=1.
+    status.set_quorate(true);
+
+    let data = members_plugin.read()?;
+    let members_json: serde_json::Value = serde_json::from_slice(&data)?;
+    assert!(
+        members_json.is_object(),
+        ".members should contain a JSON object"
+    );
+    assert_eq!(members_json["cluster"]["quorate"], 1);
+
+    // Without quorum, the C implementation still exposes .members as a function plugin.
+    status.set_quorate(false);
+
+    let data_without_quorum = members_plugin.read()?;
+    let members_without_quorum: serde_json::Value = serde_json::from_slice(&data_without_quorum)?;
+    assert!(
+        members_without_quorum.is_object(),
+        ".members should remain readable without quorum"
+    );
+    assert_eq!(members_without_quorum["cluster"]["quorate"], 0);
+
+    Ok(())
+}
+
+/// Test .vmlist plugin behavior with and without quorum
+#[test]
+fn test_vmlist_plugin_quorum_behavior() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let plugins = plugins::init_plugins(config, status.clone());
+
+    // Register a test VM
+    clear_test_vms(status.as_ref());
+    status.register_vm(100, pmxcfs_status::VmType::Qemu, TEST_NODE_NAME.to_string());
+
+    let vmlist_plugin = plugins.get(".vmlist").expect(".vmlist plugin should exist");
+
+    // Test 1: With quorum, .vmlist works normally
+    status.set_quorate(true);
+
+    let data = vmlist_plugin.read()?;
+    let vmlist_str = String::from_utf8(data)?;
+
+    // Verify valid JSON
+    let vmlist_json: serde_json::Value = serde_json::from_str(&vmlist_str)?;
+    assert!(vmlist_json.is_object(), ".vmlist should be JSON object");
+
+    // Verify our test VM is present
+    assert!(
+        vmlist_str.contains("\"100\""),
+        "Should contain registered VM 100"
+    );
+
+    // Test 2: Without quorum (in local mode, should still work)
+    status.set_quorate(false);
+
+    let result = vmlist_plugin.read();
+    // In local mode, vmlist should still be accessible
+    assert!(
+        result.is_ok(),
+        "In local mode, .vmlist should work without quorum"
+    );
+
+    Ok(())
+}
+
+/// Test .version plugin is unaffected by quorum state
+#[test]
+fn test_version_plugin_unaffected_by_quorum() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let plugins = plugins::init_plugins(config, status.clone());
+
+    let version_plugin = plugins
+        .get(".version")
+        .expect(".version plugin should exist");
+
+    // Test with quorum
+    status.set_quorate(true);
+    let data_with = version_plugin.read()?;
+    let version_with: serde_json::Value = serde_json::from_slice(&data_with)?;
+    assert!(version_with.is_object(), "Version should be JSON object");
+    assert!(version_with.get("starttime").is_some());
+    assert!(version_with.get("kvstore").is_some());
+
+    // Test without quorum
+    status.set_quorate(false);
+    let data_without = version_plugin.read()?;
+    let version_without: serde_json::Value = serde_json::from_slice(&data_without)?;
+    assert!(version_without.is_object(), "Version should be JSON object");
+    assert!(version_without.get("starttime").is_some());
+    assert!(version_without.get("kvstore").is_some());
+
+    // Filesystem version info should be the same regardless of quorum state.
+    assert_eq!(
+        version_with.get("kvstore"),
+        version_without.get("kvstore"),
+        "Version counters should be the same with/without quorum"
+    );
+
+    Ok(())
+}
+
+/// Test .rrd plugin behavior
+#[test]
+fn test_rrd_plugin_functionality() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let plugins = plugins::init_plugins(config, status.clone());
+
+    let rrd_plugin = plugins.get(".rrd").expect(".rrd plugin should exist");
+
+    status.set_quorate(true);
+
+    // RRD plugin should be readable and return valid UTF-8 (may be empty initially)
+    let data = rrd_plugin.read()?;
+    let _rrd_str = String::from_utf8(data)?;
+
+    Ok(())
+}
+
+/// Test .clusterlog plugin behavior
+#[test]
+fn test_clusterlog_plugin_functionality() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let plugins = plugins::init_plugins(config, status.clone());
+
+    let log_plugin = plugins
+        .get(".clusterlog")
+        .expect(".clusterlog plugin should exist");
+
+    status.set_quorate(true);
+
+    // Clusterlog should be readable
+    let data = log_plugin.read()?;
+    // Should be valid text (even if empty)
+    let _log_str = String::from_utf8(data)?;
+
+    Ok(())
+}
+
+/// Test quorum state changes work correctly
+#[test]
+fn test_quorum_state_transitions() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let _plugins = plugins::init_plugins(config, status.clone());
+
+    // Test state transitions
+    status.set_quorate(false);
+    assert!(
+        !status.is_quorate(),
+        "Should not be quorate after set_quorate(false)"
+    );
+
+    status.set_quorate(true);
+    assert!(
+        status.is_quorate(),
+        "Should be quorate after set_quorate(true)"
+    );
+
+    status.set_quorate(false);
+    assert!(!status.is_quorate(), "Should not be quorate again");
+
+    // Multiple calls to same state should be idempotent
+    status.set_quorate(true);
+    status.set_quorate(true);
+    assert!(
+        status.is_quorate(),
+        "Multiple set_quorate(true) should work"
+    );
+
+    Ok(())
+}
+
+/// Test plugin registry lists all expected plugins
+#[test]
+fn test_plugin_registry_completeness() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let plugins = plugins::init_plugins(config, status);
+
+    let plugin_list = plugins.list();
+
+    // Verify minimum expected plugins exist
+    let expected_plugins = vec![".version", ".members", ".vmlist", ".rrd", ".clusterlog"];
+
+    for plugin_name in expected_plugins {
+        assert!(
+            plugin_list.contains(&plugin_name.to_string()),
+            "Plugin registry should contain {plugin_name}"
+        );
+    }
+
+    assert!(!plugin_list.is_empty(), "Should have at least some plugins");
+    assert!(
+        plugin_list.len() >= 5,
+        "Should have at least 5 core plugins"
+    );
+
+    Ok(())
+}
+
+/// Test that .clusterlog respects the C-compatible 50-entry limit.
+///
+/// The C implementation calls clog_dump_json with max_entries=50.
+/// Verify the plugin cap holds regardless of how many entries are stored.
+#[test]
+fn test_clusterlog_respects_50_entry_limit() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let plugins = plugins::init_plugins(config, status.clone());
+
+    status.clear_cluster_log();
+
+    // Add 75 entries — well above the 50-entry cap
+    for i in 0u64..75 {
+        status.add_log_entry(pmxcfs_status::ClusterLogEntry {
+            uid: i as u32,
+            timestamp: 1_700_000_000 + i,
+            priority: 6,
+            tag: "test".to_string(),
+            pid: i as u32,
+            node: "node1".to_string(),
+            ident: "pmxcfs".to_string(),
+            message: format!("Test message {i}"),
+        });
+    }
+
+    let log_plugin = plugins
+        .get(".clusterlog")
+        .expect(".clusterlog plugin should exist");
+    let data = log_plugin.read()?;
+    let json: serde_json::Value = serde_json::from_slice(&data)?;
+    let entries = json["data"].as_array().expect("data should be array");
+
+    assert!(
+        entries.len() <= 50,
+        ".clusterlog must cap at 50 entries (C parity), got {}",
+        entries.len()
+    );
+    // Sanity: we should have some entries (not empty)
+    assert!(!entries.is_empty(), ".clusterlog should have entries");
+
+    Ok(())
+}
+
+/// Test async quorum change notification
+#[tokio::test]
+async fn test_quorum_change_async() -> Result<()> {
+    let config = create_test_config(true);
+    let status = pmxcfs_status::init_with_config(create_test_config(false));
+    let _plugins = plugins::init_plugins(config, status.clone());
+
+    // Initial state
+    status.set_quorate(true);
+    assert!(status.is_quorate());
+
+    // Simulate async quorum loss
+    status.set_quorate(false);
+    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
+    assert!(!status.is_quorate(), "Quorum loss should be immediate");
+
+    // Simulate async quorum regain
+    status.set_quorate(true);
+    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
+    assert!(status.is_quorate(), "Quorum regain should be immediate");
+
+    Ok(())
+}
diff --git a/src/pmxcfs-rs/pmxcfs/tests/single_node_functional.rs b/src/pmxcfs-rs/pmxcfs/tests/single_node_functional.rs
new file mode 100644
index 000000000..4bda36186
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/tests/single_node_functional.rs
@@ -0,0 +1,357 @@
+//! Single-node functional test
+//!
+//! This test simulates a complete single-node pmxcfs deployment
+//! without requiring root privileges or actual FUSE mounting.
+use anyhow::Result;
+use pmxcfs_config::Config;
+use pmxcfs_memdb::MemDb;
+use pmxcfs_rs::plugins::{PluginRegistry, init_plugins};
+use pmxcfs_status::{Status, VmType};
+use std::sync::Arc;
+use tempfile::TempDir;
+
+/// Helper to initialize plugins for testing
+fn init_test_plugins(nodename: &str, status: Arc<Status>) -> Arc<PluginRegistry> {
+    let config = Config::new(
+        nodename,
+        "127.0.0.1".parse().unwrap(),
+        33,
+        0,
+        false,
+        "pmxcfs",
+    )
+    .into_shared();
+    init_plugins(config, status)
+}
+
+/// Test complete single-node workflow
+#[tokio::test]
+async fn test_single_node_workflow() -> Result<()> {
+    println!("\n=== Single-Node Functional Test ===\n");
+
+    // Initialize status subsystem
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config);
+
+    // Clear any VMs from previous tests
+    let existing_vms: Vec<u32> = status.get_vmlist().keys().copied().collect();
+    for vmid in existing_vms {
+        status.delete_vm(vmid);
+    }
+
+    let plugins = init_test_plugins("localhost", status.clone());
+    println!(
+        "\n1. Plugin system initialized ({} plugins)",
+        plugins.list().len()
+    );
+
+    // Create temporary database
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("pmxcfs.db");
+    println!("\n2. Creating database at {}", db_path.display());
+
+    let db = MemDb::open(&db_path, true)?;
+
+    // Test directory structure creation
+    println!("\n3. Creating directory structure...");
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    db.create("/nodes", libc::S_IFDIR, 0, now)?;
+    db.create("/nodes/localhost", libc::S_IFDIR, 0, now)?;
+    db.create("/nodes/localhost/qemu-server", libc::S_IFDIR, 0, now)?;
+    db.create("/nodes/localhost/lxc", libc::S_IFDIR, 0, now)?;
+    db.create("/nodes/localhost/priv", libc::S_IFDIR, 0, now)?;
+
+    db.create("/priv", libc::S_IFDIR, 0, now)?;
+    db.create("/priv/lock", libc::S_IFDIR, 0, now)?;
+    db.create("/priv/lock/qemu-server", libc::S_IFDIR, 0, now)?;
+    db.create("/priv/lock/lxc", libc::S_IFDIR, 0, now)?;
+    db.create("/qemu-server", libc::S_IFDIR, 0, now)?;
+    db.create("/lxc", libc::S_IFDIR, 0, now)?;
+
+    // Test configuration file creation
+    println!("\n4. Creating configuration files...");
+
+    // Create corosync.conf
+    let corosync_conf = b"totem {\n  version: 2\n  cluster_name: test\n}\n";
+    db.create("/corosync.conf", libc::S_IFREG, 0, now)?;
+    db.write("/corosync.conf", 0, 0, now, corosync_conf, false)?;
+    println!(
+        "   [OK] Created /corosync.conf ({} bytes)",
+        corosync_conf.len()
+    );
+
+    // Create datacenter.cfg
+    let datacenter_cfg = b"keyboard: en-us\n";
+    db.create("/datacenter.cfg", libc::S_IFREG, 0, now)?;
+    db.write("/datacenter.cfg", 0, 0, now, datacenter_cfg, false)?;
+    println!(
+        "   [OK] Created /datacenter.cfg ({} bytes)",
+        datacenter_cfg.len()
+    );
+
+    // Create some VM configs
+    let vm_config = b"cores: 2\nmemory: 2048\nnet0: virtio=00:00:00:00:00:01,bridge=vmbr0\n";
+    db.create("/qemu-server/100.conf", libc::S_IFREG, 0, now)?;
+    db.write("/qemu-server/100.conf", 0, 0, now, vm_config, false)?;
+
+    db.create("/qemu-server/101.conf", libc::S_IFREG, 0, now)?;
+    db.write("/qemu-server/101.conf", 0, 0, now, vm_config, false)?;
+
+    // Create LXC container config
+    let ct_config = b"cores: 1\nmemory: 512\nrootfs: local:100/vm-100-disk-0.raw\n";
+    db.create("/lxc/200.conf", libc::S_IFREG, 0, now)?;
+    db.write("/lxc/200.conf", 0, 0, now, ct_config, false)?;
+
+    // Create private file
+    let private_data = b"secret token data";
+    db.create("/priv/token.cfg", libc::S_IFREG, 0, now)?;
+    db.write("/priv/token.cfg", 0, 0, now, private_data, false)?;
+
+    // Test file operations
+
+    // Read back corosync.conf
+    let read_data = db.read("/corosync.conf", 0, 1024)?;
+    assert_eq!(&read_data[..], corosync_conf);
+
+    // Test file size limit (1MB)
+    let large_data = vec![0u8; 1024 * 1024]; // Exactly 1MB
+    db.create("/large.bin", libc::S_IFREG, 0, now)?;
+    let result = db.write("/large.bin", 0, 0, now, &large_data, false);
+    assert!(result.is_ok(), "1MB file should be accepted");
+
+    // Test directory listing
+    let entries = db.readdir("/qemu-server")?;
+    assert_eq!(entries.len(), 2, "Should have 2 VM configs");
+
+    // Test rename
+    db.rename("/qemu-server/101.conf", "/qemu-server/102.conf", 0, 1000)?;
+    assert!(db.exists("/qemu-server/102.conf")?);
+    assert!(!db.exists("/qemu-server/101.conf")?);
+
+    // Test delete
+    db.delete("/large.bin", 0, 1000)?;
+    assert!(!db.exists("/large.bin")?);
+
+    // Test VM list management
+
+    // Clear VMs again right before this section to avoid test interference
+    let existing_vms: Vec<u32> = status.get_vmlist().keys().copied().collect();
+    for vmid in existing_vms {
+        status.delete_vm(vmid);
+    }
+
+    status.register_vm(100, VmType::Qemu, "localhost".to_string());
+    status.register_vm(102, VmType::Qemu, "localhost".to_string());
+    status.register_vm(200, VmType::Lxc, "localhost".to_string());
+
+    let vmlist = status.get_vmlist();
+    assert_eq!(vmlist.len(), 3, "Should have 3 VMs registered");
+
+    // Verify VM types
+    assert_eq!(vmlist.get(&100).unwrap().vm_type, VmType::Qemu);
+    assert_eq!(vmlist.get(&200).unwrap().vm_type, VmType::Lxc);
+
+    // Test lock management
+
+    let lock_path = "/priv/lock/qemu-server/100.conf";
+    let csum = [1u8; 32];
+
+    db.acquire_lock(lock_path, &csum)?;
+    assert!(db.is_locked(lock_path));
+
+    db.release_lock(lock_path, &csum)?;
+    assert!(!db.is_locked(lock_path));
+
+    // Test checksum operations
+
+    let checksum = db.compute_database_checksum()?;
+    println!(
+        "   [OK] Database checksum: {:02x}{:02x}{:02x}{:02x}...",
+        checksum[0], checksum[1], checksum[2], checksum[3]
+    );
+
+    // Modify database and verify checksum changes
+    db.write("/datacenter.cfg", 0, 0, now, b"keyboard: de\n", false)?;
+    let new_checksum = db.compute_database_checksum()?;
+    assert_ne!(
+        checksum, new_checksum,
+        "Checksum should change after modification"
+    );
+
+    // Test database encoding
+    let _encoded = db.encode_database()?;
+
+    // Test RRD data collection
+
+    // Set RRD data in C-compatible format
+    // Format: "key:timestamp:val1:val2:val3:..."
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs();
+
+    status
+        .set_rrd_data(
+            "pve2-node/localhost".to_string(),
+            format!("{now}:0:1.5:4:45.5:2.1:8000000000:6000000000:0:0:0:0:1000000:500000"),
+        )
+        .await?;
+
+    let rrd_dump = status.get_rrd_dump();
+    assert!(
+        rrd_dump.contains("pve2-node/localhost"),
+        "Should have node data"
+    );
+    let num_entries = rrd_dump.lines().count();
+
+    // Test cluster log
+    use pmxcfs_status::ClusterLogEntry;
+    let log_entry = ClusterLogEntry {
+        uid: 0,
+        timestamp: now,
+        priority: 6, // Info priority
+        tag: "startup".to_string(),
+        pid: 0,
+        node: "localhost".to_string(),
+        ident: "pmxcfs".to_string(),
+        message: "Cluster filesystem started".to_string(),
+    };
+    status.add_log_entry(log_entry);
+
+    let log_entries = status.get_log_entries(100);
+    assert_eq!(log_entries.len(), 1);
+
+    // Test plugin system
+
+    // Test .version plugin
+    if let Some(plugin) = plugins.get(".version") {
+        let content = plugin.read()?;
+        let version_json: serde_json::Value = serde_json::from_slice(&content)?;
+        assert!(version_json["starttime"].is_number());
+        assert!(version_json["kvstore"].is_object());
+    }
+
+    // Test .vmlist plugin
+    if let Some(plugin) = plugins.get(".vmlist") {
+        let content = plugin.read()?;
+        let vmlist_str = String::from_utf8(content)?;
+        assert!(vmlist_str.contains("\"100\""));
+        assert!(vmlist_str.contains("\"200\""));
+        assert!(vmlist_str.contains("qemu"));
+        assert!(vmlist_str.contains("lxc"));
+        println!(
+            "   [OK] .vmlist plugin: {} bytes, {} VMs",
+            vmlist_str.len(),
+            vmlist.len()
+        );
+    }
+
+    // Test .rrd plugin
+    if let Some(plugin) = plugins.get(".rrd") {
+        let content = plugin.read()?;
+        let rrd_str = String::from_utf8(content)?;
+        // Should contain the node RRD data in C-compatible format
+        assert!(
+            rrd_str.contains("pve2-node/localhost"),
+            "RRD should contain node data"
+        );
+    }
+
+    // Test database persistence
+
+    drop(db); // Close database
+
+    // Reopen and verify data persists
+    let db = MemDb::open(&db_path, false)?;
+    assert!(db.exists("/corosync.conf")?);
+    assert!(db.exists("/qemu-server/100.conf")?);
+    assert!(db.exists("/lxc/200.conf")?);
+
+    let read_conf = db.read("/corosync.conf", 0, 1024)?;
+    assert_eq!(&read_conf[..], corosync_conf);
+
+    // Test state export
+
+    let all_entries = db.get_all_entries()?;
+
+    // Verify entry structure
+    let root_entry = db.lookup_path("/").expect("Root should exist");
+    assert_eq!(root_entry.inode, 0); // Root inode is 0
+    assert!(root_entry.is_dir());
+
+    println!("\n=== Single-Node Test Complete ===\n");
+    println!("\nTest Summary:");
+    println!("\nDatabase Statistics:");
+    println!("  • Total entries: {}", all_entries.len());
+    println!("  • VMs/CTs tracked: {}", vmlist.len());
+    println!("  • RRD entries: {num_entries}");
+    println!("  • Cluster log entries: 1");
+    println!(
+        "  • Database size: {} bytes",
+        std::fs::metadata(&db_path)?.len()
+    );
+
+    Ok(())
+}
+
+/// Test simulated multi-operation workflow
+#[tokio::test]
+async fn test_realistic_workflow() -> Result<()> {
+    println!("\n=== Realistic Workflow Test ===\n");
+
+    let temp_dir = TempDir::new()?;
+    let db_path = temp_dir.path().join("pmxcfs.db");
+    let db = MemDb::open(&db_path, true)?;
+
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config);
+
+    // Clear any VMs from previous tests
+    let existing_vms: Vec<u32> = status.get_vmlist().keys().copied().collect();
+    for vmid in existing_vms {
+        status.delete_vm(vmid);
+    }
+
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)?
+        .as_secs() as u32;
+
+    println!("Scenario: Creating a new VM");
+
+    // 1. Check if VMID is available
+    let vmid = 103;
+    assert!(!status.vm_exists(vmid));
+
+    // 2. Acquire lock for VM creation
+    let lock_path = format!("/priv/lock/qemu-server/{vmid}.conf");
+    let csum = [1u8; 32];
+
+    // Create lock directories first
+    db.create("/priv", libc::S_IFDIR, 0, now).ok();
+    db.create("/priv/lock", libc::S_IFDIR, 0, now).ok();
+    db.create("/priv/lock/qemu-server", libc::S_IFDIR, 0, now)
+        .ok();
+
+    db.acquire_lock(&lock_path, &csum)?;
+
+    // 3. Create VM configuration
+    let config_path = format!("/qemu-server/{vmid}.conf");
+    db.create("/qemu-server", libc::S_IFDIR, 0, now).ok(); // May already exist
+    let vm_config = format!("name: test-vm-{vmid}\ncores: 4\nmemory: 4096\nbootdisk: scsi0\n");
+    db.create(&config_path, libc::S_IFREG, 0, now)?;
+    db.write(&config_path, 0, 0, now, vm_config.as_bytes(), false)?;
+
+    // 4. Register VM in cluster
+    status.register_vm(vmid, VmType::Qemu, "localhost".to_string());
+
+    // 5. Release lock
+    db.release_lock(&lock_path, &csum)?;
+
+    // 6. Verify VM is accessible
+    assert!(db.exists(&config_path)?);
+    assert!(status.vm_exists(vmid));
+
+    Ok(())
+}
diff --git a/src/pmxcfs-rs/pmxcfs/tests/symlink_quorum_test.rs b/src/pmxcfs-rs/pmxcfs/tests/symlink_quorum_test.rs
new file mode 100644
index 000000000..3b92027fd
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs/tests/symlink_quorum_test.rs
@@ -0,0 +1,145 @@
+//! Test for quorum-aware symlink permissions
+//!
+//! This test verifies that symlink plugins correctly adjust their permissions
+//! based on quorum status, matching the C implementation behavior in cfs-plug-link.c:68-72.
+use pmxcfs_memdb::MemDb;
+use pmxcfs_rs::{fuse, plugins};
+use std::fs;
+use std::time::Duration;
+use tempfile::TempDir;
+
+#[tokio::test]
+#[ignore = "Requires FUSE mount permissions (run with sudo or configure /etc/fuse.conf)"]
+async fn test_symlink_permissions_with_quorum() -> Result<(), Box<dyn std::error::Error>> {
+    let test_dir = TempDir::new()?;
+    let db_path = test_dir.path().join("test.db");
+    let mount_path = test_dir.path().join("mnt");
+
+    fs::create_dir_all(&mount_path)?;
+
+    // Create MemDb and status (no RRD persistence needed for test)
+    let memdb = MemDb::open(&db_path, true)?;
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+
+    // Test with quorum enabled (should have 0o777 permissions)
+    status.set_quorate(true);
+    let plugins = plugins::init_plugins(config.clone(), status.clone());
+
+    // Spawn FUSE mount
+    let mount_path_clone = mount_path.clone();
+    let memdb_clone = memdb.clone();
+    let config_clone = config.clone();
+    let plugins_clone = plugins.clone();
+    let status_clone = status.clone();
+
+    let fuse_task = tokio::spawn(async move {
+        if let Err(e) = fuse::mount_fuse(
+            &mount_path_clone,
+            memdb_clone,
+            config_clone,
+            None,
+            plugins_clone,
+            status_clone,
+        )
+        .await
+        {
+            eprintln!("FUSE mount error: {e}");
+        }
+    });
+
+    // Give FUSE time to mount
+    tokio::time::sleep(Duration::from_secs(2)).await;
+
+    // Check if the symlink exists
+    let local_link = mount_path.join("local");
+    if local_link.exists() {
+        let metadata = fs::symlink_metadata(&local_link)?;
+        let permissions = metadata.permissions();
+        #[cfg(unix)]
+        {
+            use std::os::unix::fs::PermissionsExt;
+            let mode = permissions.mode();
+            let link_perms = mode & 0o777;
+            println!("   Link 'local' permissions: {link_perms:04o}");
+            // Note: On most systems, symlink permissions are always 0777
+            // This test mainly ensures the code path works correctly
+        }
+    } else {
+        println!("   [WARN] Symlink 'local' not visible (may be a FUSE mounting issue)");
+    }
+
+    // Cleanup
+    fuse_task.abort();
+    tokio::time::sleep(Duration::from_millis(100)).await;
+
+    // Remount with quorum disabled
+    let mount_path2 = test_dir.path().join("mnt2");
+    fs::create_dir_all(&mount_path2)?;
+
+    status.set_quorate(false);
+    let plugins2 = plugins::init_plugins(config.clone(), status.clone());
+
+    let mount_path_clone2 = mount_path2.clone();
+    let memdb_clone2 = memdb.clone();
+    let fuse_task2 = tokio::spawn(async move {
+        let _ = fuse::mount_fuse(
+            &mount_path_clone2,
+            memdb_clone2,
+            config,
+            None,
+            plugins2,
+            status,
+        )
+        .await;
+    });
+
+    tokio::time::sleep(Duration::from_secs(2)).await;
+
+    let local_link2 = mount_path2.join("local");
+    if local_link2.exists() {
+        let metadata = fs::symlink_metadata(&local_link2)?;
+        let permissions = metadata.permissions();
+        #[cfg(unix)]
+        {
+            use std::os::unix::fs::PermissionsExt;
+            let mode = permissions.mode();
+            let link_perms = mode & 0o777;
+            println!("   Link 'local' permissions: {link_perms:04o}");
+        }
+    } else {
+        println!("   [WARN] Symlink 'local' not visible (may be a FUSE mounting issue)");
+    }
+
+    // Cleanup
+    fuse_task2.abort();
+
+    println!("   Note: Actual permission enforcement depends on FUSE and kernel");
+
+    Ok(())
+}
+
+#[test]
+fn test_link_plugin_has_quorum_aware_mode() {
+    // This is a unit test to verify the LinkPlugin mode is computed correctly
+    let _test_dir = TempDir::new().unwrap();
+
+    // Create status with quorum (no async needed, no RRD persistence)
+    let config = pmxcfs_test_utils::create_test_config(false);
+    let status = pmxcfs_status::init_with_config(config.clone());
+    status.set_quorate(true);
+    let registry_quorate = plugins::init_plugins(config.clone(), status.clone());
+
+    // Check that symlinks are identified correctly
+    let local_plugin = registry_quorate
+        .get("local")
+        .expect("local symlink should exist");
+    assert!(local_plugin.is_symlink(), "local should be a symlink");
+
+    // The mode itself is still 0o777, but the filesystem layer will use quorum status
+    assert_eq!(
+        local_plugin.mode(),
+        0o777,
+        "Link plugin base mode should be 0o777"
+    );
+}
-- 
2.47.3






Thread overview: 13+ messages
2026-03-23 11:32 [PATCH pve-cluster v3 00/13] Rewrite pmxcfs with Rust Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 01/13] pmxcfs-rs: add pmxcfs-api-types crate Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 02/13] pmxcfs-rs: add pmxcfs-config crate Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 03/13] pmxcfs-rs: add pmxcfs-logger crate Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 04/13] pmxcfs-rs: add pmxcfs-rrd crate Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 05/13] pmxcfs-rs: add pmxcfs-memdb crate Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 06/13] pmxcfs-rs: add pmxcfs-status and pmxcfs-test-utils crates Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 07/13] pmxcfs-rs: add pmxcfs-services crate Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 08/13] pmxcfs-rs: add pmxcfs-ipc crate Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 09/13] pmxcfs-rs: add pmxcfs-dfsm crate Kefu Chai
2026-03-23 11:32 ` [PATCH pve-cluster v3 10/13] pmxcfs-rs: vendor patched rust-corosync for CPG compatibility Kefu Chai
2026-03-23 11:32 ` Kefu Chai [this message]
2026-03-23 11:32 ` [PATCH pve-cluster v3 13/13] pmxcfs-rs: add project documentation Kefu Chai
