public inbox for pdm-devel@lists.proxmox.com
* [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host
@ 2026-03-12 13:52 Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 01/26] sys: procfs: don't read from sysfs during unit tests Lukas Wagner
                   ` (26 more replies)
  0 siblings, 27 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

This series adds metric collection for physical PDM hosts.

The patches for `proxmox` introduce three new crates:
  - proxmox-disks: broken out from proxmox-backup, needed to read disk stats
  - proxmox-parallel-handler: also broken out from proxmox-backup,
    needed as a dependency for proxmox-disks. Since the scope was manageable,
    this series also improves the existing code a bit by adding a dedicated error type,
    some documentation, and basic unit tests
  - proxmox-procfs: a new home for procfs-related modules. This patch series adds
    a `pressure` module for reading pressure stall information for the host and cgroups.
    The general idea is that other procfs helpers should eventually move from proxmox-sys into
    this new crate, but to avoid scope creep this is not done as part of this
    series
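
The kernel's pressure stall interface (e.g. /proc/pressure/cpu) exposes lines like
`some avg10=0.12 avg60=0.34 avg300=0.56 total=123456`. As a rough, self-contained sketch
of how such a line can be parsed (not the series' actual implementation, which lives in
proxmox-procfs/src/pressure.rs):

```rust
/// One parsed line of pressure stall information ("some" or "full" scope).
#[derive(Debug, PartialEq)]
struct PressureLine {
    kind: String, // "some" or "full"
    avg10: f64,   // 10s sliding-window stall percentage
    avg60: f64,   // 60s sliding-window stall percentage
    avg300: f64,  // 300s sliding-window stall percentage
    total: u64,   // accumulated stall time in microseconds
}

/// Parse a single PSI line; returns None on any malformed field.
fn parse_pressure_line(line: &str) -> Option<PressureLine> {
    let mut parts = line.split_whitespace();
    let kind = parts.next()?.to_string();
    let (mut avg10, mut avg60, mut avg300, mut total) = (0.0, 0.0, 0.0, 0);
    for field in parts {
        let (key, value) = field.split_once('=')?;
        match key {
            "avg10" => avg10 = value.parse().ok()?,
            "avg60" => avg60 = value.parse().ok()?,
            "avg300" => avg300 = value.parse().ok()?,
            "total" => total = value.parse().ok()?,
            _ => return None, // unknown key: treat as malformed
        }
    }
    Some(PressureLine { kind, avg10, avg60, avg300, total })
}

fn main() {
    let line = "some avg10=0.12 avg60=0.34 avg300=0.56 total=123456";
    let parsed = parse_pressure_line(line).unwrap();
    assert_eq!(parsed.kind, "some");
    assert_eq!(parsed.total, 123456);
    println!("{parsed:?}");
}
```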

The patches for proxmox-backup simply switch over to the moved implementations in proxmox-disks
and proxmox-parallel-handler.

The patches for proxmox-yew-comp slightly adapt the existing NodeStatusPanel to allow the application
to inject child components into the same panel.

The proxmox-datacenter-manager patches do some initial refactoring (naming), and then add the needed
collection loop, API types and UI elements.


proxmox:

Lukas Wagner (12):
  sys: procfs: don't read from sysfs during unit tests
  parallel-handler: import code from Proxmox Backup Server
  parallel-handler: introduce custom error type
  parallel-handler: add documentation
  parallel-handler: add simple unit-test suite
  disks: import from Proxmox Backup Server
  disks: fix typo in `initialize_gpt_disk`
  disks: add parts of gather_disk_stats from PBS
  disks: gate api macro behind 'api-types' feature
  disks: clippy: collapse if-let chains where possible
  procfs: add helpers for querying pressure stall information
  time: use u64 parse helper from nom

 Cargo.toml                                    |   10 +
 proxmox-disks/Cargo.toml                      |   34 +
 proxmox-disks/debian/changelog                |    5 +
 proxmox-disks/debian/control                  |   94 ++
 proxmox-disks/debian/copyright                |   18 +
 proxmox-disks/debian/debcargo.toml            |    7 +
 proxmox-disks/src/lib.rs                      | 1434 +++++++++++++++++
 proxmox-disks/src/lvm.rs                      |   60 +
 proxmox-disks/src/parse_helpers.rs            |   52 +
 proxmox-disks/src/smart.rs                    |  228 +++
 proxmox-disks/src/zfs.rs                      |  205 +++
 proxmox-disks/src/zpool_list.rs               |  294 ++++
 proxmox-disks/src/zpool_status.rs             |  496 ++++++
 proxmox-parallel-handler/Cargo.toml           |   16 +
 proxmox-parallel-handler/debian/changelog     |    5 +
 proxmox-parallel-handler/debian/control       |   36 +
 proxmox-parallel-handler/debian/copyright     |   18 +
 proxmox-parallel-handler/debian/debcargo.toml |    7 +
 proxmox-parallel-handler/src/lib.rs           |  344 ++++
 proxmox-procfs/Cargo.toml                     |   18 +
 proxmox-procfs/debian/changelog               |    5 +
 proxmox-procfs/debian/control                 |   50 +
 proxmox-procfs/debian/copyright               |   18 +
 proxmox-procfs/debian/debcargo.toml           |    7 +
 proxmox-procfs/src/lib.rs                     |    1 +
 proxmox-procfs/src/pressure.rs                |  334 ++++
 proxmox-sys/src/linux/procfs/mod.rs           |   30 +-
 proxmox-time/src/parse_helpers.rs             |    5 -
 proxmox-time/src/time_span.rs                 |    4 +-
 29 files changed, 3818 insertions(+), 17 deletions(-)
 create mode 100644 proxmox-disks/Cargo.toml
 create mode 100644 proxmox-disks/debian/changelog
 create mode 100644 proxmox-disks/debian/control
 create mode 100644 proxmox-disks/debian/copyright
 create mode 100644 proxmox-disks/debian/debcargo.toml
 create mode 100644 proxmox-disks/src/lib.rs
 create mode 100644 proxmox-disks/src/lvm.rs
 create mode 100644 proxmox-disks/src/parse_helpers.rs
 create mode 100644 proxmox-disks/src/smart.rs
 create mode 100644 proxmox-disks/src/zfs.rs
 create mode 100644 proxmox-disks/src/zpool_list.rs
 create mode 100644 proxmox-disks/src/zpool_status.rs
 create mode 100644 proxmox-parallel-handler/Cargo.toml
 create mode 100644 proxmox-parallel-handler/debian/changelog
 create mode 100644 proxmox-parallel-handler/debian/control
 create mode 100644 proxmox-parallel-handler/debian/copyright
 create mode 100644 proxmox-parallel-handler/debian/debcargo.toml
 create mode 100644 proxmox-parallel-handler/src/lib.rs
 create mode 100644 proxmox-procfs/Cargo.toml
 create mode 100644 proxmox-procfs/debian/changelog
 create mode 100644 proxmox-procfs/debian/control
 create mode 100644 proxmox-procfs/debian/copyright
 create mode 100644 proxmox-procfs/debian/debcargo.toml
 create mode 100644 proxmox-procfs/src/lib.rs
 create mode 100644 proxmox-procfs/src/pressure.rs


proxmox-backup:

Lukas Wagner (3):
  tools: move ParallelHandler to new proxmox-parallel-handler crate
  tools: replace disks module with proxmox-disks
  metric collection: use blockdev_stat_for_path from proxmox_disks

 Cargo.toml                                  |    6 +
 src/api2/admin/datastore.rs                 |   10 +-
 src/api2/config/datastore.rs                |    2 +-
 src/api2/node/disks/directory.rs            |   10 +-
 src/api2/node/disks/mod.rs                  |   20 +-
 src/api2/node/disks/zfs.rs                  |   14 +-
 src/api2/tape/restore.rs                    |   25 +-
 src/backup/verify.rs                        |    3 +-
 src/bin/proxmox_backup_manager/disk.rs      |    9 +-
 src/server/metric_collection/mod.rs         |   50 +-
 src/server/pull.rs                          |    5 +-
 src/tape/pool_writer/new_chunks_iterator.rs |    3 +-
 src/tools/disks/lvm.rs                      |   60 -
 src/tools/disks/mod.rs                      | 1394 -------------------
 src/tools/disks/smart.rs                    |  227 ---
 src/tools/disks/zfs.rs                      |  205 ---
 src/tools/mod.rs                            |    3 -
 src/tools/parallel_handler.rs               |  160 ---
 18 files changed, 62 insertions(+), 2144 deletions(-)
 delete mode 100644 src/tools/disks/lvm.rs
 delete mode 100644 src/tools/disks/mod.rs
 delete mode 100644 src/tools/disks/smart.rs
 delete mode 100644 src/tools/disks/zfs.rs
 delete mode 100644 src/tools/parallel_handler.rs


proxmox-yew-comp:

Lukas Wagner (3):
  node status panel: add `children` property
  RRDGrid: fix size observer by attaching node reference to rendered
    container
  RRDGrid: add padding and increase gap between elements

 src/node_status_panel.rs | 16 ++++++++++++++++
 src/rrd_grid.rs          |  5 +++--
 2 files changed, 19 insertions(+), 2 deletions(-)


proxmox-datacenter-manager:

Lukas Wagner (8):
  metric collection: clarify naming for remote metric collection
  metric collection: fix minor typo in error message
  metric collection: collect PDM host metrics in a new collection task
  api: fix /nodes/localhost/rrddata endpoint
  pdm: node rrd data: rename 'total-time' to
    'metric-collection-total-time'
  pdm-api-types: add PDM host metric fields
  ui: node status: add RRD graphs for PDM host metrics
  ui: lxc/qemu/node: use RRD value render helpers

 Cargo.toml                                    |   2 +
 cli/client/src/metric_collection.rs           |   4 +-
 debian/control                                |   1 +
 lib/pdm-api-types/src/metric_collection.rs    |   2 +-
 lib/pdm-api-types/src/rrddata.rs              |  74 ++++-
 lib/pdm-client/src/lib.rs                     |   8 +-
 server/Cargo.toml                             |   2 +
 server/src/api/metric_collection.rs           |  10 +-
 server/src/api/nodes/mod.rs                   |   2 +-
 server/src/api/nodes/rrddata.rs               |  73 +++-
 server/src/api/remotes.rs                     |   2 +-
 server/src/api/rrd_common.rs                  |   2 +-
 .../local_collection_task.rs                  | 199 +++++++++++
 server/src/metric_collection/mod.rs           |  40 ++-
 ...tion_task.rs => remote_collection_task.rs} |   8 +-
 server/src/metric_collection/rrd_task.rs      | 187 ++++++++++-
 server/src/metric_collection/state.rs         |   2 +-
 ui/src/administration/node_status.rs          | 312 +++++++++++++++++-
 ui/src/pbs/node/overview.rs                   |  29 +-
 ui/src/pve/lxc/overview.rs                    |  34 +-
 ui/src/pve/node/overview.rs                   |  29 +-
 ui/src/pve/qemu/overview.rs                   |  34 +-
 ui/src/renderer.rs                            |  49 +++
 23 files changed, 954 insertions(+), 151 deletions(-)
 create mode 100644 server/src/metric_collection/local_collection_task.rs
 rename server/src/metric_collection/{collection_task.rs => remote_collection_task.rs} (99%)


Summary over all repositories:
  72 files changed, 4853 insertions(+), 2314 deletions(-)

-- 
Generated by murpp 0.10.0




^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 01/26] sys: procfs: don't read from sysfs during unit tests
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 02/26] parallel-handler: import code from Proxmox Backup Server Lukas Wagner
                   ` (25 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

When running tests in sbuild, /sys/kernel/mm/ksm/pages_sharing cannot be
read; the test fails with a 'permission denied' error. To solve this, we
move reading the file to the caller, allowing us to provide a static
string as input during the test.
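
The pattern used here — keeping I/O at the outer layer and giving the parser a plain
string — can be sketched as follows (hypothetical names, not the actual patch):

```rust
// Thin I/O wrapper: reads the sysfs file and hands the raw text to the parser.
// Unit tests bypass this function entirely, so they never touch sysfs.
fn read_ksm_pages_sharing() -> std::io::Result<u64> {
    let raw = std::fs::read_to_string("/sys/kernel/mm/ksm/pages_sharing")?;
    parse_pages_sharing(&raw)
        .ok_or_else(|| std::io::Error::new(std::io::ErrorKind::InvalidData, "not a number"))
}

// Pure parsing half: trivially testable with a static string.
fn parse_pages_sharing(raw: &str) -> Option<u64> {
    raw.trim_end().parse::<u64>().ok()
}

fn main() {
    // A test feeds a literal instead of reading from /sys.
    assert_eq!(parse_pages_sharing("2\n"), Some(2));
    assert_eq!(parse_pages_sharing("garbage\n"), None);
    // read_ksm_pages_sharing() would be exercised on a real host only.
    let _ = read_ksm_pages_sharing;
}
```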

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 proxmox-sys/src/linux/procfs/mod.rs | 30 +++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/proxmox-sys/src/linux/procfs/mod.rs b/proxmox-sys/src/linux/procfs/mod.rs
index 202b6a45..d399cdc3 100644
--- a/proxmox-sys/src/linux/procfs/mod.rs
+++ b/proxmox-sys/src/linux/procfs/mod.rs
@@ -434,10 +434,16 @@ pub fn read_meminfo() -> Result<ProcFsMemInfo, Error> {
     let path = "/proc/meminfo";
 
     let meminfo_str = std::fs::read_to_string(path)?;
-    parse_proc_meminfo(&meminfo_str)
+
+    let pages_sharing = match read_firstline("/sys/kernel/mm/ksm/pages_sharing") {
+        Ok(p) => Some(p),
+        Err(err) if err.kind() == std::io::ErrorKind::NotFound => None,
+        Err(err) => bail!("unable to get KSM pages_sharing - {err}"),
+    };
+    parse_proc_meminfo(&meminfo_str, pages_sharing.as_deref())
 }
 
-fn parse_proc_meminfo(text: &str) -> Result<ProcFsMemInfo, Error> {
+fn parse_proc_meminfo(text: &str, pages_sharing: Option<&str>) -> Result<ProcFsMemInfo, Error> {
     let mut meminfo = ProcFsMemInfo {
         memtotal: 0,
         memfree: 0,
@@ -471,10 +477,9 @@ fn parse_proc_meminfo(text: &str) -> Result<ProcFsMemInfo, Error> {
 
     meminfo.swapused = meminfo.swaptotal - meminfo.swapfree;
 
-    meminfo.memshared = match read_firstline("/sys/kernel/mm/ksm/pages_sharing") {
-        Ok(spages_line) => spages_line.trim_end().parse::<u64>()? * 4096,
-        Err(err) if err.kind() == std::io::ErrorKind::NotFound => 0,
-        Err(err) => bail!("unable to get KSM pages_sharing - {err}"),
+    meminfo.memshared = match pages_sharing {
+        Some(pages_sharing) => pages_sharing.trim_end().parse::<u64>()? * 4096,
+        None => 0,
     };
 
     Ok(meminfo)
@@ -482,8 +487,7 @@ fn parse_proc_meminfo(text: &str) -> Result<ProcFsMemInfo, Error> {
 
 #[test]
 fn test_read_proc_meminfo() {
-    let meminfo = parse_proc_meminfo(
-        "MemTotal:       32752584 kB
+    let proc_meminfo = "MemTotal:       32752584 kB
 MemFree:         2106048 kB
 MemAvailable:   13301592 kB
 Buffers:               0 kB
@@ -538,9 +542,14 @@ Hugetlb:               0 kB
 DirectMap4k:      237284 kB
 DirectMap2M:    13281280 kB
 DirectMap1G:    22020096 kB
+";
+
+    let pages_sharing = Some(
+        "2
 ",
-    )
-    .expect("successful parsed a sample /proc/meminfo entry");
+    );
+    let meminfo = parse_proc_meminfo(proc_meminfo, pages_sharing)
+        .expect("successful parsed a sample /proc/meminfo entry");
 
     assert_eq!(meminfo.memtotal, 33538646016);
     assert_eq!(meminfo.memused, 19917815808);
@@ -549,6 +558,7 @@ DirectMap1G:    22020096 kB
     assert_eq!(meminfo.swapfree, 2048);
     assert_eq!(meminfo.swaptotal, 3072);
     assert_eq!(meminfo.swapused, 1024);
+    assert_eq!(meminfo.memshared, 8192);
 }
 
 #[derive(Clone, Debug)]
-- 
2.47.3






* [PATCH proxmox 02/26] parallel-handler: import code from Proxmox Backup Server
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 01/26] sys: procfs: don't read from sysfs during unit tests Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 03/26] parallel-handler: introduce custom error type Lukas Wagner
                   ` (24 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Code is left unchanged; improvements will follow in subsequent commits.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 Cargo.toml                                    |   2 +
 proxmox-parallel-handler/Cargo.toml           |  15 ++
 proxmox-parallel-handler/debian/changelog     |   5 +
 proxmox-parallel-handler/debian/control       |  36 ++++
 proxmox-parallel-handler/debian/copyright     |  18 ++
 proxmox-parallel-handler/debian/debcargo.toml |   7 +
 proxmox-parallel-handler/src/lib.rs           | 160 ++++++++++++++++++
 7 files changed, 243 insertions(+)
 create mode 100644 proxmox-parallel-handler/Cargo.toml
 create mode 100644 proxmox-parallel-handler/debian/changelog
 create mode 100644 proxmox-parallel-handler/debian/control
 create mode 100644 proxmox-parallel-handler/debian/copyright
 create mode 100644 proxmox-parallel-handler/debian/debcargo.toml
 create mode 100644 proxmox-parallel-handler/src/lib.rs

diff --git a/Cargo.toml b/Cargo.toml
index 1cb5f09e..97593a5d 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -33,6 +33,7 @@ members = [
     "proxmox-notify",
     "proxmox-oci",
     "proxmox-openid",
+    "proxmox-parallel-handler",
     "proxmox-pgp",
     "proxmox-product-config",
     "proxmox-rate-limiter",
@@ -162,6 +163,7 @@ proxmox-lang = { version = "1.5", path = "proxmox-lang" }
 proxmox-log = { version = "1.0.0", path = "proxmox-log" }
 proxmox-login = { version = "1.0.0", path = "proxmox-login" }
 proxmox-network-types = { version = "1.0.0", path = "proxmox-network-types" }
+proxmox-parallel-handler = { version = "1.0.0", path = "proxmox-parallel-handler" }
 proxmox-pgp = { version = "1.0.0", path = "proxmox-pgp" }
 proxmox-product-config = { version = "1.0.0", path = "proxmox-product-config" }
 proxmox-config-digest = { version = "1.0.0", path = "proxmox-config-digest" }
diff --git a/proxmox-parallel-handler/Cargo.toml b/proxmox-parallel-handler/Cargo.toml
new file mode 100644
index 00000000..e55e7c63
--- /dev/null
+++ b/proxmox-parallel-handler/Cargo.toml
@@ -0,0 +1,15 @@
+[package]
+name = "proxmox-parallel-handler"
+description = "thread pool which runs a closure in parallel"
+version = "1.0.0"
+
+authors.workspace = true
+edition.workspace = true
+exclude.workspace = true
+homepage.workspace = true
+license.workspace = true
+repository.workspace = true
+
+[dependencies]
+anyhow.workspace = true
+crossbeam-channel.workspace = true
diff --git a/proxmox-parallel-handler/debian/changelog b/proxmox-parallel-handler/debian/changelog
new file mode 100644
index 00000000..df00d3f6
--- /dev/null
+++ b/proxmox-parallel-handler/debian/changelog
@@ -0,0 +1,5 @@
+rust-proxmox-parallel-handler (1.0.0-1) unstable; urgency=medium
+
+  * initial version -- imported from proxmox-backup
+
+ -- Proxmox Support Team <support@proxmox.com>  Thu, 26 Feb 2026 15:54:07 +0100
diff --git a/proxmox-parallel-handler/debian/control b/proxmox-parallel-handler/debian/control
new file mode 100644
index 00000000..ff356562
--- /dev/null
+++ b/proxmox-parallel-handler/debian/control
@@ -0,0 +1,36 @@
+Source: rust-proxmox-parallel-handler
+Section: rust
+Priority: optional
+Build-Depends: debhelper-compat (= 13),
+ dh-sequence-cargo
+Build-Depends-Arch: cargo:native <!nocheck>,
+ rustc:native <!nocheck>,
+ libstd-rust-dev <!nocheck>,
+ librust-anyhow-1+default-dev <!nocheck>,
+ librust-crossbeam-channel-0.5+default-dev <!nocheck>,
+ librust-thiserror-2+default-dev <!nocheck>
+Maintainer: Proxmox Support Team <support@proxmox.com>
+Standards-Version: 4.7.2
+Vcs-Git: git://git.proxmox.com/git/proxmox.git
+Vcs-Browser: https://git.proxmox.com/?p=proxmox.git
+Homepage: https://proxmox.com
+X-Cargo-Crate: proxmox-parallel-handler
+
+Package: librust-proxmox-parallel-handler-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-anyhow-1+default-dev,
+ librust-crossbeam-channel-0.5+default-dev,
+ librust-thiserror-2+default-dev
+Provides:
+ librust-proxmox-parallel-handler+default-dev (= ${binary:Version}),
+ librust-proxmox-parallel-handler-1-dev (= ${binary:Version}),
+ librust-proxmox-parallel-handler-1+default-dev (= ${binary:Version}),
+ librust-proxmox-parallel-handler-1.0-dev (= ${binary:Version}),
+ librust-proxmox-parallel-handler-1.0+default-dev (= ${binary:Version}),
+ librust-proxmox-parallel-handler-1.0.0-dev (= ${binary:Version}),
+ librust-proxmox-parallel-handler-1.0.0+default-dev (= ${binary:Version})
+Description: Thread pool which runs a closure in parallel - Rust source code
+ Source code for Debianized Rust crate "proxmox-parallel-handler"
diff --git a/proxmox-parallel-handler/debian/copyright b/proxmox-parallel-handler/debian/copyright
new file mode 100644
index 00000000..01138fa0
--- /dev/null
+++ b/proxmox-parallel-handler/debian/copyright
@@ -0,0 +1,18 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+
+Files:
+ *
+Copyright: 2026 Proxmox Server Solutions GmbH <support@proxmox.com>
+License: AGPL-3.0-or-later
+ This program is free software: you can redistribute it and/or modify it under
+ the terms of the GNU Affero General Public License as published by the Free
+ Software Foundation, either version 3 of the License, or (at your option) any
+ later version.
+ .
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU Affero General Public License along
+ with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/proxmox-parallel-handler/debian/debcargo.toml b/proxmox-parallel-handler/debian/debcargo.toml
new file mode 100644
index 00000000..b7864cdb
--- /dev/null
+++ b/proxmox-parallel-handler/debian/debcargo.toml
@@ -0,0 +1,7 @@
+overlay = "."
+crate_src_path = ".."
+maintainer = "Proxmox Support Team <support@proxmox.com>"
+
+[source]
+vcs_git = "git://git.proxmox.com/git/proxmox.git"
+vcs_browser = "https://git.proxmox.com/?p=proxmox.git"
diff --git a/proxmox-parallel-handler/src/lib.rs b/proxmox-parallel-handler/src/lib.rs
new file mode 100644
index 00000000..75eab184
--- /dev/null
+++ b/proxmox-parallel-handler/src/lib.rs
@@ -0,0 +1,160 @@
+//! A thread pool which run a closure in parallel.
+
+use std::sync::{Arc, Mutex};
+use std::thread::JoinHandle;
+
+use anyhow::{bail, format_err, Error};
+use crossbeam_channel::{bounded, Sender};
+
+/// A handle to send data to the worker thread (implements clone)
+pub struct SendHandle<I> {
+    input: Sender<I>,
+    abort: Arc<Mutex<Option<String>>>,
+}
+
+/// Returns the first error happened, if any
+fn check_abort(abort: &Mutex<Option<String>>) -> Result<(), Error> {
+    let guard = abort.lock().unwrap();
+    if let Some(err_msg) = &*guard {
+        return Err(format_err!("{}", err_msg));
+    }
+    Ok(())
+}
+
+impl<I: Send> SendHandle<I> {
+    /// Send data to the worker threads
+    pub fn send(&self, input: I) -> Result<(), Error> {
+        check_abort(&self.abort)?;
+        match self.input.send(input) {
+            Ok(()) => Ok(()),
+            Err(_) => bail!("send failed - channel closed"),
+        }
+    }
+}
+
+/// A thread pool which run the supplied closure
+///
+/// The send command sends data to the worker threads. If one handler
+/// returns an error, we mark the channel as failed and it is no
+/// longer possible to send data.
+///
+/// When done, the 'complete()' method needs to be called to check for
+/// outstanding errors.
+pub struct ParallelHandler<I> {
+    handles: Vec<JoinHandle<()>>,
+    name: String,
+    input: Option<SendHandle<I>>,
+}
+
+impl<I> Clone for SendHandle<I> {
+    fn clone(&self) -> Self {
+        Self {
+            input: self.input.clone(),
+            abort: Arc::clone(&self.abort),
+        }
+    }
+}
+
+impl<I: Send + 'static> ParallelHandler<I> {
+    /// Create a new thread pool, each thread processing incoming data
+    /// with 'handler_fn'.
+    pub fn new<F>(name: &str, threads: usize, handler_fn: F) -> Self
+    where
+        F: Fn(I) -> Result<(), Error> + Send + Clone + 'static,
+    {
+        let mut handles = Vec::new();
+        let (input_tx, input_rx) = bounded::<I>(threads);
+
+        let abort = Arc::new(Mutex::new(None));
+
+        for i in 0..threads {
+            let input_rx = input_rx.clone();
+            let abort = Arc::clone(&abort);
+            let handler_fn = handler_fn.clone();
+
+            handles.push(
+                std::thread::Builder::new()
+                    .name(format!("{name} ({i})"))
+                    .spawn(move || loop {
+                        let data = match input_rx.recv() {
+                            Ok(data) => data,
+                            Err(_) => return,
+                        };
+                        if let Err(err) = (handler_fn)(data) {
+                            let mut guard = abort.lock().unwrap();
+                            if guard.is_none() {
+                                *guard = Some(err.to_string());
+                            }
+                        }
+                    })
+                    .unwrap(),
+            );
+        }
+        Self {
+            handles,
+            name: name.to_string(),
+            input: Some(SendHandle {
+                input: input_tx,
+                abort,
+            }),
+        }
+    }
+
+    /// Returns a cloneable channel to send data to the worker threads
+    pub fn channel(&self) -> SendHandle<I> {
+        self.input.as_ref().unwrap().clone()
+    }
+
+    /// Send data to the worker threads
+    pub fn send(&self, input: I) -> Result<(), Error> {
+        self.input.as_ref().unwrap().send(input)?;
+        Ok(())
+    }
+
+    /// Wait for worker threads to complete and check for errors
+    pub fn complete(mut self) -> Result<(), Error> {
+        let input = self.input.take().unwrap();
+        let abort = Arc::clone(&input.abort);
+        check_abort(&abort)?;
+        drop(input);
+
+        let msg_list = self.join_threads();
+
+        // an error might be encountered while waiting for the join
+        check_abort(&abort)?;
+
+        if msg_list.is_empty() {
+            return Ok(());
+        }
+        Err(format_err!("{}", msg_list.join("\n")))
+    }
+
+    fn join_threads(&mut self) -> Vec<String> {
+        let mut msg_list = Vec::new();
+
+        let mut i = 0;
+        while let Some(handle) = self.handles.pop() {
+            if let Err(panic) = handle.join() {
+                if let Some(panic_msg) = panic.downcast_ref::<&str>() {
+                    msg_list.push(format!("thread {} ({i}) panicked: {panic_msg}", self.name));
+                } else if let Some(panic_msg) = panic.downcast_ref::<String>() {
+                    msg_list.push(format!("thread {} ({i}) panicked: {panic_msg}", self.name));
+                } else {
+                    msg_list.push(format!("thread {} ({i}) panicked", self.name));
+                }
+            }
+            i += 1;
+        }
+        msg_list
+    }
+}
+
+// Note: We make sure that all threads will be joined
+impl<I> Drop for ParallelHandler<I> {
+    fn drop(&mut self) {
+        drop(self.input.take());
+        while let Some(handle) = self.handles.pop() {
+            let _ = handle.join();
+        }
+    }
+}
-- 
2.47.3






* [PATCH proxmox 03/26] parallel-handler: introduce custom error type
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 01/26] sys: procfs: don't read from sysfs during unit tests Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 02/26] parallel-handler: import code from Proxmox Backup Server Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 04/26] parallel-handler: add documentation Lukas Wagner
                   ` (23 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Derive a custom error type using `thiserror`. The handler functions still
use anyhow::Error, since changing them would require larger changes in
the callers.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 proxmox-parallel-handler/Cargo.toml |  1 +
 proxmox-parallel-handler/src/lib.rs | 67 +++++++++++++++++++----------
 2 files changed, 46 insertions(+), 22 deletions(-)

diff --git a/proxmox-parallel-handler/Cargo.toml b/proxmox-parallel-handler/Cargo.toml
index e55e7c63..5fe67889 100644
--- a/proxmox-parallel-handler/Cargo.toml
+++ b/proxmox-parallel-handler/Cargo.toml
@@ -13,3 +13,4 @@ repository.workspace = true
 [dependencies]
 anyhow.workspace = true
 crossbeam-channel.workspace = true
+thiserror.workspace = true
diff --git a/proxmox-parallel-handler/src/lib.rs b/proxmox-parallel-handler/src/lib.rs
index 75eab184..4c2ac118 100644
--- a/proxmox-parallel-handler/src/lib.rs
+++ b/proxmox-parallel-handler/src/lib.rs
@@ -3,9 +3,25 @@
 use std::sync::{Arc, Mutex};
 use std::thread::JoinHandle;
 
-use anyhow::{bail, format_err, Error};
 use crossbeam_channel::{bounded, Sender};
 
+#[derive(Debug, thiserror::Error)]
+pub enum Error {
+    #[error("send failed - channel closed")]
+    ChannelClosed,
+
+    #[error("handler failed: {0}")]
+    HandlerFailed(String),
+
+    #[error("thread {name} panicked")]
+    ThreadPanicked {
+        /// The name of the thread.
+        name: String,
+        /// The panic message extracted from the panic payload.
+        message: Option<String>,
+    },
+}
+
 /// A handle to send data to the worker thread (implements clone)
 pub struct SendHandle<I> {
     input: Sender<I>,
@@ -16,7 +32,7 @@ pub struct SendHandle<I> {
 fn check_abort(abort: &Mutex<Option<String>>) -> Result<(), Error> {
     let guard = abort.lock().unwrap();
     if let Some(err_msg) = &*guard {
-        return Err(format_err!("{}", err_msg));
+        return Err(Error::HandlerFailed(err_msg.clone()));
     }
     Ok(())
 }
@@ -25,10 +41,7 @@ impl<I: Send> SendHandle<I> {
     /// Send data to the worker threads
     pub fn send(&self, input: I) -> Result<(), Error> {
         check_abort(&self.abort)?;
-        match self.input.send(input) {
-            Ok(()) => Ok(()),
-            Err(_) => bail!("send failed - channel closed"),
-        }
+        self.input.send(input).map_err(|_| Error::ChannelClosed)
     }
 }
 
@@ -42,7 +55,6 @@ impl<I: Send> SendHandle<I> {
 /// outstanding errors.
 pub struct ParallelHandler<I> {
     handles: Vec<JoinHandle<()>>,
-    name: String,
     input: Option<SendHandle<I>>,
 }
 
@@ -60,7 +72,7 @@ impl<I: Send + 'static> ParallelHandler<I> {
     /// with 'handler_fn'.
     pub fn new<F>(name: &str, threads: usize, handler_fn: F) -> Self
     where
-        F: Fn(I) -> Result<(), Error> + Send + Clone + 'static,
+        F: Fn(I) -> Result<(), anyhow::Error> + Send + Clone + 'static,
     {
         let mut handles = Vec::new();
         let (input_tx, input_rx) = bounded::<I>(threads);
@@ -83,7 +95,7 @@ impl<I: Send + 'static> ParallelHandler<I> {
                         if let Err(err) = (handler_fn)(data) {
                             let mut guard = abort.lock().unwrap();
                             if guard.is_none() {
-                                *guard = Some(err.to_string());
+                                *guard = Some(format!("{err:#}"));
                             }
                         }
                     })
@@ -92,7 +104,6 @@ impl<I: Send + 'static> ParallelHandler<I> {
         }
         Self {
             handles,
-            name: name.to_string(),
             input: Some(SendHandle {
                 input: input_tx,
                 abort,
@@ -118,32 +129,44 @@ impl<I: Send + 'static> ParallelHandler<I> {
         check_abort(&abort)?;
         drop(input);
 
-        let msg_list = self.join_threads();
+        let mut msg_list = self.join_threads();
 
         // an error might be encountered while waiting for the join
         check_abort(&abort)?;
 
-        if msg_list.is_empty() {
-            return Ok(());
+        if let Some(e) = msg_list.pop() {
+            // Any error here is due to a thread panicking - let's just report that
+            // last panic that occurred.
+            Err(e)
+        } else {
+            Ok(())
         }
-        Err(format_err!("{}", msg_list.join("\n")))
     }
 
-    fn join_threads(&mut self) -> Vec<String> {
+    fn join_threads(&mut self) -> Vec<Error> {
         let mut msg_list = Vec::new();
 
-        let mut i = 0;
         while let Some(handle) = self.handles.pop() {
+            let thread_name = handle.thread().name().unwrap_or("<unknown>").to_string();
+
             if let Err(panic) = handle.join() {
-                if let Some(panic_msg) = panic.downcast_ref::<&str>() {
-                    msg_list.push(format!("thread {} ({i}) panicked: {panic_msg}", self.name));
-                } else if let Some(panic_msg) = panic.downcast_ref::<String>() {
-                    msg_list.push(format!("thread {} ({i}) panicked: {panic_msg}", self.name));
+                if let Some(message) = panic.downcast_ref::<&str>() {
+                    msg_list.push(Error::ThreadPanicked {
+                        name: thread_name,
+                        message: Some(message.to_string()),
+                    });
+                } else if let Some(message) = panic.downcast_ref::<String>() {
+                    msg_list.push(Error::ThreadPanicked {
+                        name: thread_name,
+                        message: Some(message.to_string()),
+                    });
                 } else {
-                    msg_list.push(format!("thread {} ({i}) panicked", self.name));
+                    msg_list.push(Error::ThreadPanicked {
+                        name: thread_name,
+                        message: None,
+                    });
                 }
             }
-            i += 1;
         }
         msg_list
     }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 04/26] parallel-handler: add documentation
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (2 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 03/26] parallel-handler: introduce custom error type Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 05/26] parallel-handler: add simple unit-test suite Lukas Wagner
                   ` (22 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Add some module-level as well as function/type documentation.
---
 proxmox-parallel-handler/src/lib.rs | 120 ++++++++++++++++++++++++----
 1 file changed, 104 insertions(+), 16 deletions(-)

diff --git a/proxmox-parallel-handler/src/lib.rs b/proxmox-parallel-handler/src/lib.rs
index 4c2ac118..38f6a48a 100644
--- a/proxmox-parallel-handler/src/lib.rs
+++ b/proxmox-parallel-handler/src/lib.rs
@@ -1,18 +1,56 @@
-//! A thread pool which run a closure in parallel.
+//! A thread pool that runs a closure in parallel across multiple worker threads.
+//!
+//! This crate provides [`ParallelHandler`], a simple thread pool that distributes work items of
+//! type `I` to a fixed number of worker threads, each executing the same handler closure. Work is
+//! submitted through a bounded [`crossbeam_channel`].
+//!
+//! If any worker's handler returns an error, the pool is marked as failed and subsequent
+//! [`send`](ParallelHandler::send) calls will return the first recorded error. After all items
+//! have been submitted, call [`complete`](ParallelHandler::complete) to join the worker threads
+//! and surface any errors (including thread panics).
+//!
+//! # Example
+//!
+//! ```
+//! use proxmox_parallel_handler::ParallelHandler;
+//!
+//! let pool = ParallelHandler::new("example", 4, |value: u64| {
+//!     println!("processing {value}");
+//!     Ok(())
+//! });
+//!
+//! for i in 0..100 {
+//!     pool.send(i)?;
+//! }
+//!
+//! pool.complete()?;
+//! # Ok::<(), proxmox_parallel_handler::Error>(())
+//! ```
 
 use std::sync::{Arc, Mutex};
 use std::thread::JoinHandle;
 
 use crossbeam_channel::{bounded, Sender};
 
+/// Errors returned by [`ParallelHandler`] and [`SendHandle`] operations.
 #[derive(Debug, thiserror::Error)]
 pub enum Error {
+    /// The internal channel has been closed.
+    ///
+    /// This typically means the worker threads have already shut down, either because
+    /// [`ParallelHandler::complete`] was called or the pool was dropped.
     #[error("send failed - channel closed")]
     ChannelClosed,
 
+    /// A worker thread's handler closure returned an error.
+    ///
+    /// Contains the formatted error message from the first handler that failed.
+    /// Once a handler fails, all subsequent [`send`](SendHandle::send) calls will
+/// return this error.
     #[error("handler failed: {0}")]
     HandlerFailed(String),
 
+    /// A worker thread panicked.
     #[error("thread {name} panicked")]
     ThreadPanicked {
         /// The name of the thread.
@@ -22,13 +60,17 @@ pub enum Error {
     },
 }
 
-/// A handle to send data to the worker thread (implements clone)
+/// A cloneable handle for sending work items to a [`ParallelHandler`]'s worker threads.
+///
+/// Obtained via [`ParallelHandler::channel`]. Multiple clones of the same `SendHandle` share the
+/// underlying channel and abort state, so they can be used from different threads or tasks to
+/// submit work concurrently.
 pub struct SendHandle<I> {
     input: Sender<I>,
     abort: Arc<Mutex<Option<String>>>,
 }
 
-/// Returns the first error happened, if any
+/// Returns the first error which happened, if any.
 fn check_abort(abort: &Mutex<Option<String>>) -> Result<(), Error> {
     let guard = abort.lock().unwrap();
     if let Some(err_msg) = &*guard {
@@ -38,21 +80,35 @@ fn check_abort(abort: &Mutex<Option<String>>) -> Result<(), Error> {
 }
 
 impl<I: Send> SendHandle<I> {
-    /// Send data to the worker threads
+    /// Send a work item to the worker threads.
+    ///
+    /// The item is placed into the bounded channel and will be picked up by the next idle
+    /// worker. If all workers are busy, this call blocks until a worker becomes available.
+    ///
+    /// # Errors
+    ///
+    ///  - [`Error::HandlerFailed`] if any worker has already returned an error
+    ///  - [`Error::ChannelClosed`] if the channel has been closed (e.g. the pool was dropped).
     pub fn send(&self, input: I) -> Result<(), Error> {
         check_abort(&self.abort)?;
         self.input.send(input).map_err(|_| Error::ChannelClosed)
     }
 }
 
-/// A thread pool which run the supplied closure
+/// A thread pool that runs the supplied closure on each work item in parallel.
 ///
-/// The send command sends data to the worker threads. If one handler
-/// returns an error, we mark the channel as failed and it is no
-/// longer possible to send data.
+/// `ParallelHandler` spawns a fixed number of worker threads at construction time. Each thread
+/// receives work items of type `I` through a shared bounded channel and processes them with a
+/// cloned copy of the handler closure.
 ///
-/// When done, the 'complete()' method needs to be called to check for
-/// outstanding errors.
+/// # Error handling
+///
+/// If any handler invocation returns an error, the pool records the first error message and
+/// enters a failed state. Subsequent [`send`](Self::send) calls will immediately return
+/// [`Error::HandlerFailed`] rather than enqueueing more work.
+///
+/// If the `ParallelHandler` is dropped without calling `complete`, the [`Drop`] implementation
+/// still joins all threads, but any errors are silently discarded.
 pub struct ParallelHandler<I> {
     handles: Vec<JoinHandle<()>>,
     input: Option<SendHandle<I>>,
@@ -68,8 +124,14 @@ impl<I> Clone for SendHandle<I> {
 }
 
 impl<I: Send + 'static> ParallelHandler<I> {
-    /// Create a new thread pool, each thread processing incoming data
-    /// with 'handler_fn'.
+    /// Create a new thread pool with `threads` workers, each processing incoming data with
+    /// `handler_fn`.
+    ///
+    /// # Parameters
+    ///
+    /// - `name` - A human-readable name used in thread names and error messages.
+    /// - `threads` - The number of worker threads to spawn.
+    /// - `handler_fn` - The closure invoked for every work item.
     pub fn new<F>(name: &str, threads: usize, handler_fn: F) -> Self
     where
         F: Fn(I) -> Result<(), anyhow::Error> + Send + Clone + 'static,
@@ -99,6 +161,8 @@ impl<I: Send + 'static> ParallelHandler<I> {
                             }
                         }
                     })
+                    // unwrap is fine, `spawn` only panics if a thread name with null bytes is
+                    // set
                     .unwrap(),
             );
         }
@@ -111,18 +175,39 @@ impl<I: Send + 'static> ParallelHandler<I> {
         }
     }
 
-    /// Returns a cloneable channel to send data to the worker threads
+    /// Returns a cloneable [`SendHandle`] that can be used to send work items to the worker
+    /// threads.
+    ///
+    /// This is useful when you need to send items from multiple threads or tasks concurrently.
+    /// Each clone of the returned handle shares the same underlying channel.
     pub fn channel(&self) -> SendHandle<I> {
+        // unwrap: fine as long as Self::complete has not been called yet. Since
+        // Self::complete takes self, this cannot happen for any of our callers.
         self.input.as_ref().unwrap().clone()
     }
 
-    /// Send data to the worker threads
+    /// Send a work item to the worker threads.
+    ///
+    /// Convenience wrapper around the internal [`SendHandle::send`]. Blocks if the bounded
+    /// channel is full (i.e. all workers are busy).
+    ///
+    /// # Errors
+    ///
+    ///  - [`Error::HandlerFailed`] if any worker has already returned an error
+    ///  - [`Error::ChannelClosed`] if the channel has been closed.
     pub fn send(&self, input: I) -> Result<(), Error> {
+        // unwrap: fine as long as Self::complete has not been called yet. Since
+        // Self::complete takes self, this cannot happen for any of our callers.
         self.input.as_ref().unwrap().send(input)?;
         Ok(())
     }
 
-    /// Wait for worker threads to complete and check for errors
+    /// Close the channel, wait for all worker threads to finish, and check for errors.
+    ///
+    /// # Errors
+    ///
+    /// - [`Error::HandlerFailed`] - if any handler returned an error.
+    /// - [`Error::ThreadPanicked`] - if a worker thread panicked.
     pub fn complete(mut self) -> Result<(), Error> {
         let input = self.input.take().unwrap();
         let abort = Arc::clone(&input.abort);
@@ -172,7 +257,10 @@ impl<I: Send + 'static> ParallelHandler<I> {
     }
 }
 
-// Note: We make sure that all threads will be joined
+/// Dropping a `ParallelHandler` closes the channel and joins all worker threads.
+///
+/// Any errors that occurred in handler closures or thread panics are silently discarded.
+/// Prefer calling [`ParallelHandler::complete`] explicitly if you need to observe errors.
 impl<I> Drop for ParallelHandler<I> {
     fn drop(&mut self) {
         drop(self.input.take());
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 05/26] parallel-handler: add simple unit-test suite
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (3 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 04/26] parallel-handler: add documentation Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 06/26] disks: import from Proxmox Backup Server Lukas Wagner
                   ` (21 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Add a simple test suite to the new proxmox-parallel-handler crate to
avoid any future regressions.
---
 proxmox-parallel-handler/src/lib.rs | 73 +++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/proxmox-parallel-handler/src/lib.rs b/proxmox-parallel-handler/src/lib.rs
index 38f6a48a..91dac17d 100644
--- a/proxmox-parallel-handler/src/lib.rs
+++ b/proxmox-parallel-handler/src/lib.rs
@@ -269,3 +269,76 @@ impl<I> Drop for ParallelHandler<I> {
         }
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use std::sync::atomic::{AtomicUsize, Ordering};
+
+    #[test]
+    fn test_send_on_pool() {
+        let count = Arc::new(AtomicUsize::new(0));
+        let count_clone = Arc::clone(&count);
+
+        let pool = ParallelHandler::new("ok", 2, move |_: u32| {
+            count_clone.fetch_add(1, Ordering::Relaxed);
+            Ok(())
+        });
+
+        for i in 0..10 {
+            pool.send(i).unwrap();
+        }
+
+        pool.complete().unwrap();
+        assert_eq!(count.load(Ordering::Relaxed), 10);
+    }
+
+    #[test]
+    fn test_send_on_handle() {
+        let count = Arc::new(AtomicUsize::new(0));
+        let count_clone = Arc::clone(&count);
+
+        let pool = ParallelHandler::new("chan", 2, move |_: u32| {
+            count_clone.fetch_add(1, Ordering::Relaxed);
+            Ok(())
+        });
+
+        let handle = pool.channel();
+        for i in 0..5 {
+            handle.send(i).unwrap();
+        }
+        drop(handle);
+
+        pool.complete().unwrap();
+        assert_eq!(count.load(Ordering::Relaxed), 5);
+    }
+
+    #[test]
+    fn handler_error_is_propagated_on_complete() {
+        let pool = ParallelHandler::new("fail", 1, |_: u32| {
+            anyhow::bail!("boom");
+        });
+
+        pool.send(1).unwrap();
+        let err = pool.complete().unwrap_err();
+
+        match err {
+            Error::HandlerFailed(msg) => assert!(msg.contains("boom")),
+            _ => panic!("invalid error variant"),
+        }
+    }
+
+    #[test]
+    fn thread_panic_is_reported_on_complete() {
+        let pool = ParallelHandler::new("panic", 1, |_: u32| -> Result<(), anyhow::Error> {
+            panic!("boom");
+        });
+
+        pool.send(1).unwrap();
+        let err = pool.complete().unwrap_err();
+        match err {
+            Error::ThreadPanicked { message, .. } => assert!(message.unwrap().contains("boom")),
+            _ => panic!("invalid error variant"),
+        }
+    }
+}
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 06/26] disks: import from Proxmox Backup Server
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (4 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 05/26] parallel-handler: add simple unit-test suite Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-16 13:13   ` Arthur Bied-Charreton
  2026-03-12 13:52 ` [PATCH proxmox 07/26] disks: fix typo in `initialize_gpt_disk` Lukas Wagner
                   ` (20 subsequent siblings)
  26 siblings, 1 reply; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

This is based on the disks module from PBS and left unchanged.

The version has not been set to 1.0 yet since it seems like this crate
could use a bit of cleanup (custom error type instead of anyhow,
documentation).
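
For illustration, such a dedicated error type could look roughly like the following sketch. This is purely hypothetical - the variant names and messages are invented here, not part of the imported code - and it uses only the standard library:

```rust
use std::fmt;
use std::path::PathBuf;

// Hypothetical error type that could eventually replace `anyhow::Error`
// in proxmox-disks. Variants shown are illustrative only.
#[derive(Debug)]
pub enum DiskError {
    /// The given path does not refer to a block device.
    NotABlockDevice(PathBuf),
    /// An underlying I/O operation failed.
    Io(std::io::Error),
}

impl fmt::Display for DiskError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DiskError::NotABlockDevice(path) => {
                write!(f, "not a block device: {}", path.display())
            }
            DiskError::Io(err) => write!(f, "I/O error: {err}"),
        }
    }
}

impl std::error::Error for DiskError {
    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
        match self {
            DiskError::Io(err) => Some(err),
            _ => None,
        }
    }
}

// Allows `?` on io::Result values inside the crate.
impl From<std::io::Error> for DiskError {
    fn from(err: std::io::Error) -> Self {
        DiskError::Io(err)
    }
}

fn main() {
    let err = DiskError::NotABlockDevice(PathBuf::from("/dev/sda1"));
    assert_eq!(err.to_string(), "not a block device: /dev/sda1");
}
```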

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 Cargo.toml                         |    6 +
 proxmox-disks/Cargo.toml           |   30 +
 proxmox-disks/debian/changelog     |    5 +
 proxmox-disks/debian/control       |   94 ++
 proxmox-disks/debian/copyright     |   18 +
 proxmox-disks/debian/debcargo.toml |    7 +
 proxmox-disks/src/lib.rs           | 1396 ++++++++++++++++++++++++++++
 proxmox-disks/src/lvm.rs           |   60 ++
 proxmox-disks/src/parse_helpers.rs |   52 ++
 proxmox-disks/src/smart.rs         |  227 +++++
 proxmox-disks/src/zfs.rs           |  205 ++++
 proxmox-disks/src/zpool_list.rs    |  294 ++++++
 proxmox-disks/src/zpool_status.rs  |  496 ++++++++++
 13 files changed, 2890 insertions(+)
 create mode 100644 proxmox-disks/Cargo.toml
 create mode 100644 proxmox-disks/debian/changelog
 create mode 100644 proxmox-disks/debian/control
 create mode 100644 proxmox-disks/debian/copyright
 create mode 100644 proxmox-disks/debian/debcargo.toml
 create mode 100644 proxmox-disks/src/lib.rs
 create mode 100644 proxmox-disks/src/lvm.rs
 create mode 100644 proxmox-disks/src/parse_helpers.rs
 create mode 100644 proxmox-disks/src/smart.rs
 create mode 100644 proxmox-disks/src/zfs.rs
 create mode 100644 proxmox-disks/src/zpool_list.rs
 create mode 100644 proxmox-disks/src/zpool_status.rs

diff --git a/Cargo.toml b/Cargo.toml
index 97593a5d..8f3886bd 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -15,6 +15,7 @@ members = [
     "proxmox-config-digest",
     "proxmox-daemon",
     "proxmox-deb-version",
+    "proxmox-disks",
     "proxmox-dns-api",
     "proxmox-fixed-string",
     "proxmox-docgen",
@@ -112,6 +113,9 @@ mail-parser = "0.11"
 md5 = "0.7.0"
 native-tls = "0.2"
 nix = "0.29"
+nom = "7"
+# used by proxmox-disks, can be replaced by OnceLock from std once it supports get_or_try_init
+once_cell = "1.3.1"
 openssl = "0.10"
 pam-sys = "0.5"
 percent-encoding = "2.1"
@@ -139,6 +143,7 @@ tracing = "0.1"
 tracing-journald = "0.3.1"
 tracing-log = { version = "0.2", default-features = false }
 tracing-subscriber = "0.3.16"
+udev = "0.9"
 url = "2.2"
 walkdir = "2"
 zstd = "0.13"
@@ -154,6 +159,7 @@ proxmox-async = { version = "0.5.0", path = "proxmox-async" }
 proxmox-base64 = {  version = "1.0.0", path = "proxmox-base64" }
 proxmox-compression = { version = "1.0.0", path = "proxmox-compression" }
 proxmox-daemon = { version = "1.0.0", path = "proxmox-daemon" }
+proxmox-disks = { version = "0.1.0", path = "proxmox-disks" }
 proxmox-fixed-string = { version = "0.1.0", path = "proxmox-fixed-string" }
 proxmox-http = { version = "1.0.5", path = "proxmox-http" }
 proxmox-http-error = { version = "1.0.0", path = "proxmox-http-error" }
diff --git a/proxmox-disks/Cargo.toml b/proxmox-disks/Cargo.toml
new file mode 100644
index 00000000..29bf56fe
--- /dev/null
+++ b/proxmox-disks/Cargo.toml
@@ -0,0 +1,30 @@
+[package]
+name = "proxmox-disks"
+description = "disk management and utilities"
+version = "0.1.0"
+
+authors.workspace = true
+edition.workspace = true
+exclude.workspace = true
+homepage.workspace = true
+license.workspace = true
+repository.workspace = true
+
+[dependencies]
+anyhow.workspace = true
+crossbeam-channel.workspace = true
+libc.workspace = true
+nix.workspace = true
+nom.workspace = true
+once_cell.workspace = true
+regex.workspace = true
+serde_json.workspace = true
+serde.workspace = true
+udev.workspace = true
+
+proxmox-io.workspace = true
+proxmox-lang.workspace = true
+proxmox-log.workspace = true
+proxmox-parallel-handler.workspace = true
+proxmox-schema = { workspace = true, features = [ "api-macro", "api-types" ] }
+proxmox-sys.workspace = true
diff --git a/proxmox-disks/debian/changelog b/proxmox-disks/debian/changelog
new file mode 100644
index 00000000..d41a2000
--- /dev/null
+++ b/proxmox-disks/debian/changelog
@@ -0,0 +1,5 @@
+rust-proxmox-disks (0.1.0-1) unstable; urgency=medium
+
+  * initial version.
+
+ -- Proxmox Support Team <support@proxmox.com>  Tue, 10 Mar 2026 15:05:21 +0100
diff --git a/proxmox-disks/debian/control b/proxmox-disks/debian/control
new file mode 100644
index 00000000..2b5dfb68
--- /dev/null
+++ b/proxmox-disks/debian/control
@@ -0,0 +1,94 @@
+Source: rust-proxmox-disks
+Section: rust
+Priority: optional
+Build-Depends: debhelper-compat (= 13),
+ dh-sequence-cargo
+Build-Depends-Arch: cargo:native <!nocheck>,
+ rustc:native <!nocheck>,
+ libstd-rust-dev <!nocheck>,
+ librust-anyhow-1+default-dev <!nocheck>,
+ librust-crossbeam-channel-0.5+default-dev <!nocheck>,
+ librust-libc-0.2+default-dev (>= 0.2.107-~~) <!nocheck>,
+ librust-nix-0.29+default-dev <!nocheck>,
+ librust-nom-7+default-dev <!nocheck>,
+ librust-once-cell-1+default-dev (>= 1.3.1-~~) <!nocheck>,
+ librust-proxmox-io-1+default-dev (>= 1.2.1-~~) <!nocheck>,
+ librust-proxmox-lang-1+default-dev (>= 1.5-~~) <!nocheck>,
+ librust-proxmox-log-1+default-dev <!nocheck>,
+ librust-proxmox-parallel-handler-1+default-dev <!nocheck>,
+ librust-proxmox-schema-5+api-types-dev (>= 5.0.1-~~) <!nocheck>,
+ librust-proxmox-schema-5+default-dev (>= 5.0.1-~~) <!nocheck>,
+ librust-proxmox-sys-1+default-dev <!nocheck>,
+ librust-regex-1+default-dev (>= 1.5-~~) <!nocheck>,
+ librust-serde-1+default-dev <!nocheck>,
+ librust-serde-json-1+default-dev <!nocheck>,
+ librust-udev-0.9+default-dev <!nocheck>
+Maintainer: Proxmox Support Team <support@proxmox.com>
+Standards-Version: 4.7.2
+Vcs-Git: git://git.proxmox.com/git/proxmox.git
+Vcs-Browser: https://git.proxmox.com/?p=proxmox.git
+Homepage: https://proxmox.com
+X-Cargo-Crate: proxmox-disks
+
+Package: librust-proxmox-disks-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-anyhow-1+default-dev,
+ librust-crossbeam-channel-0.5+default-dev,
+ librust-libc-0.2+default-dev (>= 0.2.107-~~),
+ librust-nix-0.29+default-dev,
+ librust-nom-7+default-dev,
+ librust-once-cell-1+default-dev (>= 1.3.1-~~),
+ librust-proxmox-io-1+default-dev (>= 1.2.1-~~),
+ librust-proxmox-lang-1+default-dev (>= 1.5-~~),
+ librust-proxmox-log-1+default-dev,
+ librust-proxmox-parallel-handler-1+default-dev,
+ librust-proxmox-sys-1+default-dev,
+ librust-regex-1+default-dev (>= 1.5-~~),
+ librust-serde-1+default-dev,
+ librust-serde-json-1+default-dev,
+ librust-udev-0.9+default-dev
+Recommends:
+ librust-proxmox-disks+default-dev (= ${binary:Version})
+Suggests:
+ librust-proxmox-disks+api-types-dev (= ${binary:Version})
+Provides:
+ librust-proxmox-disks-0-dev (= ${binary:Version}),
+ librust-proxmox-disks-0.1-dev (= ${binary:Version}),
+ librust-proxmox-disks-0.1.0-dev (= ${binary:Version})
+Description: Disk management and utilities - Rust source code
+ Source code for Debianized Rust crate "proxmox-disks"
+
+Package: librust-proxmox-disks+api-types-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-proxmox-disks-dev (= ${binary:Version}),
+ librust-proxmox-schema-5+api-macro-dev (>= 5.0.1-~~),
+ librust-proxmox-schema-5+api-types-dev (>= 5.0.1-~~)
+Provides:
+ librust-proxmox-disks-0+api-types-dev (= ${binary:Version}),
+ librust-proxmox-disks-0.1+api-types-dev (= ${binary:Version}),
+ librust-proxmox-disks-0.1.0+api-types-dev (= ${binary:Version})
+Description: Disk management and utilities - feature "api-types"
+ This metapackage enables feature "api-types" for the Rust proxmox-disks crate,
+ by pulling in any additional dependencies needed by that feature.
+
+Package: librust-proxmox-disks+default-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-proxmox-disks-dev (= ${binary:Version}),
+ librust-proxmox-schema-5+api-types-dev (>= 5.0.1-~~),
+ librust-proxmox-schema-5+default-dev (>= 5.0.1-~~)
+Provides:
+ librust-proxmox-disks-0+default-dev (= ${binary:Version}),
+ librust-proxmox-disks-0.1+default-dev (= ${binary:Version}),
+ librust-proxmox-disks-0.1.0+default-dev (= ${binary:Version})
+Description: Disk management and utilities - feature "default"
+ This metapackage enables feature "default" for the Rust proxmox-disks crate, by
+ pulling in any additional dependencies needed by that feature.
diff --git a/proxmox-disks/debian/copyright b/proxmox-disks/debian/copyright
new file mode 100644
index 00000000..01138fa0
--- /dev/null
+++ b/proxmox-disks/debian/copyright
@@ -0,0 +1,18 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+
+Files:
+ *
+Copyright: 2026 Proxmox Server Solutions GmbH <support@proxmox.com>
+License: AGPL-3.0-or-later
+ This program is free software: you can redistribute it and/or modify it under
+ the terms of the GNU Affero General Public License as published by the Free
+ Software Foundation, either version 3 of the License, or (at your option) any
+ later version.
+ .
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU Affero General Public License along
+ with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/proxmox-disks/debian/debcargo.toml b/proxmox-disks/debian/debcargo.toml
new file mode 100644
index 00000000..b7864cdb
--- /dev/null
+++ b/proxmox-disks/debian/debcargo.toml
@@ -0,0 +1,7 @@
+overlay = "."
+crate_src_path = ".."
+maintainer = "Proxmox Support Team <support@proxmox.com>"
+
+[source]
+vcs_git = "git://git.proxmox.com/git/proxmox.git"
+vcs_browser = "https://git.proxmox.com/?p=proxmox.git"
diff --git a/proxmox-disks/src/lib.rs b/proxmox-disks/src/lib.rs
new file mode 100644
index 00000000..e6056c14
--- /dev/null
+++ b/proxmox-disks/src/lib.rs
@@ -0,0 +1,1396 @@
+//! Disk query/management utilities.
+
+use std::collections::{HashMap, HashSet};
+use std::ffi::{OsStr, OsString};
+use std::io;
+use std::os::unix::ffi::{OsStrExt, OsStringExt};
+use std::os::unix::fs::{FileExt, MetadataExt, OpenOptionsExt};
+use std::path::{Path, PathBuf};
+use std::sync::{Arc, LazyLock};
+
+use anyhow::{bail, format_err, Context as _, Error};
+use libc::dev_t;
+use once_cell::sync::OnceCell;
+
+use ::serde::{Deserialize, Serialize};
+
+use proxmox_lang::{io_bail, io_format_err};
+use proxmox_log::info;
+use proxmox_parallel_handler::ParallelHandler;
+use proxmox_schema::api;
+use proxmox_sys::linux::procfs::{mountinfo::Device, MountInfo};
+
+use proxmox_schema::api_types::{
+    BLOCKDEVICE_DISK_AND_PARTITION_NAME_REGEX, BLOCKDEVICE_NAME_REGEX, UUID_REGEX,
+};
+
+mod zfs;
+pub use zfs::*;
+mod zpool_status;
+pub use zpool_status::*;
+mod zpool_list;
+pub use zpool_list::*;
+mod lvm;
+pub use lvm::*;
+mod smart;
+pub use smart::*;
+
+mod parse_helpers;
+
+static ISCSI_PATH_REGEX: LazyLock<regex::Regex> =
+    LazyLock::new(|| regex::Regex::new(r"host[^/]*/session[^/]*").unwrap());
+
+/// Disk management context.
+///
+/// This provides access to disk information with some caching for faster querying of multiple
+/// devices.
+pub struct DiskManage {
+    mount_info: OnceCell<MountInfo>,
+    mounted_devices: OnceCell<HashSet<dev_t>>,
+}
+
+/// Information for a device as returned by lsblk.
+#[derive(Deserialize)]
+pub struct LsblkInfo {
+    /// Path to the device.
+    path: String,
+    /// Partition type GUID.
+    #[serde(rename = "parttype")]
+    partition_type: Option<String>,
+    /// File system label.
+    #[serde(rename = "fstype")]
+    file_system_type: Option<String>,
+    /// File system UUID.
+    uuid: Option<String>,
+}
+
+impl DiskManage {
+    /// Create a new disk management context.
+    pub fn new() -> Arc<Self> {
+        Arc::new(Self {
+            mount_info: OnceCell::new(),
+            mounted_devices: OnceCell::new(),
+        })
+    }
+
+    /// Get the current mount info. This simply caches the result of `MountInfo::read` from the
+    /// `proxmox::sys` module.
+    pub fn mount_info(&self) -> Result<&MountInfo, Error> {
+        self.mount_info.get_or_try_init(MountInfo::read)
+    }
+
+    /// Get a `Disk` from a device node (eg. `/dev/sda`).
+    pub fn disk_by_node<P: AsRef<Path>>(self: Arc<Self>, devnode: P) -> io::Result<Disk> {
+        let devnode = devnode.as_ref();
+
+        let meta = std::fs::metadata(devnode)?;
+        if (meta.mode() & libc::S_IFBLK) == libc::S_IFBLK {
+            self.disk_by_dev_num(meta.rdev())
+        } else {
+            io_bail!("not a block device: {:?}", devnode);
+        }
+    }
+
+    /// Get a `Disk` for a specific device number.
+    pub fn disk_by_dev_num(self: Arc<Self>, devnum: dev_t) -> io::Result<Disk> {
+        self.disk_by_sys_path(format!(
+            "/sys/dev/block/{}:{}",
+            unsafe { libc::major(devnum) },
+            unsafe { libc::minor(devnum) },
+        ))
+    }
+
+    /// Get a `Disk` for a path in `/sys`.
+    pub fn disk_by_sys_path<P: AsRef<Path>>(self: Arc<Self>, path: P) -> io::Result<Disk> {
+        let device = udev::Device::from_syspath(path.as_ref())?;
+        Ok(Disk {
+            manager: self,
+            device,
+            info: Default::default(),
+        })
+    }
+
+    /// Get a `Disk` for a name in `/sys/block/<name>`.
+    pub fn disk_by_name(self: Arc<Self>, name: &str) -> io::Result<Disk> {
+        let syspath = format!("/sys/block/{name}");
+        self.disk_by_sys_path(syspath)
+    }
+
+    /// Get a `Disk` for a name in `/sys/class/block/<name>`.
+    pub fn partition_by_name(self: Arc<Self>, name: &str) -> io::Result<Disk> {
+        let syspath = format!("/sys/class/block/{name}");
+        self.disk_by_sys_path(syspath)
+    }
+
+    /// Gather information about mounted disks:
+    fn mounted_devices(&self) -> Result<&HashSet<dev_t>, Error> {
+        self.mounted_devices
+            .get_or_try_init(|| -> Result<_, Error> {
+                let mut mounted = HashSet::new();
+
+                for (_id, mp) in self.mount_info()? {
+                    let source = match mp.mount_source.as_deref() {
+                        Some(s) => s,
+                        None => continue,
+                    };
+
+                    let path = Path::new(source);
+                    if !path.is_absolute() {
+                        continue;
+                    }
+
+                    let meta = match std::fs::metadata(path) {
+                        Ok(meta) => meta,
+                        Err(ref err) if err.kind() == io::ErrorKind::NotFound => continue,
+                        Err(other) => return Err(Error::from(other)),
+                    };
+
+                    if (meta.mode() & libc::S_IFBLK) != libc::S_IFBLK {
+                        // not a block device
+                        continue;
+                    }
+
+                    mounted.insert(meta.rdev());
+                }
+
+                Ok(mounted)
+            })
+    }
+
+    /// Information about file system type and used device for a path
+    ///
+    /// Returns tuple (fs_type, device, mount_source)
+    pub fn find_mounted_device(
+        &self,
+        path: &std::path::Path,
+    ) -> Result<Option<(String, Device, Option<OsString>)>, Error> {
+        let stat = nix::sys::stat::stat(path)?;
+        let device = Device::from_dev_t(stat.st_dev);
+
+        let root_path = std::path::Path::new("/");
+
+        for (_id, entry) in self.mount_info()? {
+            if entry.root == root_path && entry.device == device {
+                return Ok(Some((
+                    entry.fs_type.clone(),
+                    entry.device,
+                    entry.mount_source.clone(),
+                )));
+            }
+        }
+
+        Ok(None)
+    }
+
+    /// Check whether a specific device node is mounted.
+    ///
+    /// Note that this tries to `stat` the sources of all mount points without caching the result
+    /// of doing so, so this is always somewhat expensive.
+    pub fn is_devnum_mounted(&self, dev: dev_t) -> Result<bool, Error> {
+        self.mounted_devices().map(|mounted| mounted.contains(&dev))
+    }
+}
+
+/// Queries (and caches) various information about a specific disk.
+///
+/// This belongs to a `Disks` and provides information for a single disk.
+pub struct Disk {
+    manager: Arc<DiskManage>,
+    device: udev::Device,
+    info: DiskInfo,
+}
+
+/// Helper struct (so we can initialize this with Default)
+///
+/// We probably want this to be serializable to the same hash type we use in perl currently.
+#[derive(Default)]
+struct DiskInfo {
+    size: OnceCell<u64>,
+    vendor: OnceCell<Option<OsString>>,
+    model: OnceCell<Option<OsString>>,
+    rotational: OnceCell<Option<bool>>,
+    // for perl: #[serde(rename = "devpath")]
+    ata_rotation_rate_rpm: OnceCell<Option<u64>>,
+    // for perl: #[serde(rename = "devpath")]
+    device_path: OnceCell<Option<PathBuf>>,
+    wwn: OnceCell<Option<OsString>>,
+    serial: OnceCell<Option<OsString>>,
+    // for perl: #[serde(skip_serializing)]
+    partition_table_type: OnceCell<Option<OsString>>,
+    // for perl: #[serde(skip_serializing)]
+    partition_entry_scheme: OnceCell<Option<OsString>>,
+    // for perl: #[serde(skip_serializing)]
+    partition_entry_uuid: OnceCell<Option<OsString>>,
+    // for perl: #[serde(skip_serializing)]
+    partition_entry_type: OnceCell<Option<OsString>>,
+    gpt: OnceCell<bool>,
+    // ???
+    bus: OnceCell<Option<OsString>>,
+    // ???
+    fs_type: OnceCell<Option<OsString>>,
+    // ???
+    has_holders: OnceCell<bool>,
+    // ???
+    is_mounted: OnceCell<bool>,
+}
+
+impl Disk {
+    /// Try to get the device number for this disk.
+    ///
+    /// (In udev this can fail...)
+    pub fn devnum(&self) -> Result<dev_t, Error> {
+        // not sure when this can fail...
+        self.device
+            .devnum()
+            .ok_or_else(|| format_err!("failed to get device number"))
+    }
+
+    /// Get the sys-name of this device. (The final component in the `/sys` path).
+    pub fn sysname(&self) -> &OsStr {
+        self.device.sysname()
+    }
+
+    /// Get this disk's `/sys` path.
+    pub fn syspath(&self) -> &Path {
+        self.device.syspath()
+    }
+
+    /// Get the device node in `/dev`, if any.
+    pub fn device_path(&self) -> Option<&Path> {
+        //self.device.devnode()
+        self.info
+            .device_path
+            .get_or_init(|| self.device.devnode().map(Path::to_owned))
+            .as_ref()
+            .map(PathBuf::as_path)
+    }
+
+    /// Get the parent device.
+    pub fn parent(&self) -> Option<Self> {
+        self.device.parent().map(|parent| Self {
+            manager: self.manager.clone(),
+            device: parent,
+            info: Default::default(),
+        })
+    }
+
+    /// Read from a file in this device's sys path.
+    ///
+    /// Note: path must be a relative path!
+    pub fn read_sys(&self, path: &Path) -> io::Result<Option<Vec<u8>>> {
+        assert!(path.is_relative());
+
+        std::fs::read(self.syspath().join(path))
+            .map(Some)
+            .or_else(|err| {
+                if err.kind() == io::ErrorKind::NotFound {
+                    Ok(None)
+                } else {
+                    Err(err)
+                }
+            })
+    }
+
+    /// Convenience wrapper for reading a `/sys` file which contains just a simple `OsString`.
+    pub fn read_sys_os_str<P: AsRef<Path>>(&self, path: P) -> io::Result<Option<OsString>> {
+        Ok(self.read_sys(path.as_ref())?.map(|mut v| {
+            if Some(&b'\n') == v.last() {
+                v.pop();
+            }
+            OsString::from_vec(v)
+        }))
+    }
+
+    /// Convenience wrapper for reading a `/sys` file which contains just a simple utf-8 string.
+    pub fn read_sys_str<P: AsRef<Path>>(&self, path: P) -> io::Result<Option<String>> {
+        Ok(match self.read_sys(path.as_ref())? {
+            Some(data) => Some(String::from_utf8(data).map_err(io::Error::other)?),
+            None => None,
+        })
+    }
+
+    /// Convenience wrapper for unsigned integer `/sys` values up to 64 bit.
+    pub fn read_sys_u64<P: AsRef<Path>>(&self, path: P) -> io::Result<Option<u64>> {
+        Ok(match self.read_sys_str(path)? {
+            Some(data) => Some(data.trim().parse().map_err(io::Error::other)?),
+            None => None,
+        })
+    }
+
+    /// Get the disk's size in bytes.
+    pub fn size(&self) -> io::Result<u64> {
+        Ok(*self.info.size.get_or_try_init(|| {
+            self.read_sys_u64("size")?.map(|s| s * 512).ok_or_else(|| {
+                io_format_err!(
+                    "failed to get disk size from {:?}",
+                    self.syspath().join("size"),
+                )
+            })
+        })?)
+    }
+
+    /// Get the device vendor (`/sys/.../device/vendor`) entry if available.
+    pub fn vendor(&self) -> io::Result<Option<&OsStr>> {
+        Ok(self
+            .info
+            .vendor
+            .get_or_try_init(|| self.read_sys_os_str("device/vendor"))?
+            .as_ref()
+            .map(OsString::as_os_str))
+    }
+
+    /// Get the device model from the udev `ID_MODEL` property, if available.
+    pub fn model(&self) -> Option<&OsStr> {
+        self.info
+            .model
+            .get_or_init(|| self.device.property_value("ID_MODEL").map(OsStr::to_owned))
+            .as_ref()
+            .map(OsString::as_os_str)
+    }
+
+    /// Check whether this is a rotational disk.
+    ///
+    /// Returns `None` if there's no `queue/rotational` file, in which case no information is
+    /// known. `Some(false)` if `queue/rotational` is zero, `Some(true)` if it has a non-zero
+    /// value.
+    pub fn rotational(&self) -> io::Result<Option<bool>> {
+        Ok(*self
+            .info
+            .rotational
+            .get_or_try_init(|| -> io::Result<Option<bool>> {
+                Ok(self.read_sys_u64("queue/rotational")?.map(|n| n != 0))
+            })?)
+    }
+
+    /// Get the WWN if available.
+    pub fn wwn(&self) -> Option<&OsStr> {
+        self.info
+            .wwn
+            .get_or_init(|| self.device.property_value("ID_WWN").map(|v| v.to_owned()))
+            .as_ref()
+            .map(OsString::as_os_str)
+    }
+
+    /// Get the device serial if available.
+    pub fn serial(&self) -> Option<&OsStr> {
+        self.info
+            .serial
+            .get_or_init(|| {
+                self.device
+                    .property_value("ID_SERIAL_SHORT")
+                    .map(|v| v.to_owned())
+            })
+            .as_ref()
+            .map(OsString::as_os_str)
+    }
+
+    /// Get the ATA rotation rate value from udev. This is not necessarily the same as sysfs'
+    /// `rotational` value.
+    pub fn ata_rotation_rate_rpm(&self) -> Option<u64> {
+        *self.info.ata_rotation_rate_rpm.get_or_init(|| {
+            std::str::from_utf8(
+                self.device
+                    .property_value("ID_ATA_ROTATION_RATE_RPM")?
+                    .as_bytes(),
+            )
+            .ok()?
+            .parse()
+            .ok()
+        })
+    }
+
+    /// Get the partition table type, if any.
+    pub fn partition_table_type(&self) -> Option<&OsStr> {
+        self.info
+            .partition_table_type
+            .get_or_init(|| {
+                self.device
+                    .property_value("ID_PART_TABLE_TYPE")
+                    .map(|v| v.to_owned())
+            })
+            .as_ref()
+            .map(OsString::as_os_str)
+    }
+
+    /// Check if this contains a GPT partition table.
+    pub fn has_gpt(&self) -> bool {
+        *self.info.gpt.get_or_init(|| {
+            self.partition_table_type()
+                .map(|s| s == "gpt")
+                .unwrap_or(false)
+        })
+    }
+
+    /// Get the partitioning scheme of which this device is a partition.
+    pub fn partition_entry_scheme(&self) -> Option<&OsStr> {
+        self.info
+            .partition_entry_scheme
+            .get_or_init(|| {
+                self.device
+                    .property_value("ID_PART_ENTRY_SCHEME")
+                    .map(|v| v.to_owned())
+            })
+            .as_ref()
+            .map(OsString::as_os_str)
+    }
+
+    /// Check if this is a partition.
+    pub fn is_partition(&self) -> bool {
+        self.partition_entry_scheme().is_some()
+    }
+
+    /// Get the type of partition entry (i.e. the type UUID from the entry in the GPT partition table).
+    pub fn partition_entry_type(&self) -> Option<&OsStr> {
+        self.info
+            .partition_entry_type
+            .get_or_init(|| {
+                self.device
+                    .property_value("ID_PART_ENTRY_TYPE")
+                    .map(|v| v.to_owned())
+            })
+            .as_ref()
+            .map(OsString::as_os_str)
+    }
+
+    /// Get the partition entry UUID (i.e. the UUID from the entry in the GPT partition table).
+    pub fn partition_entry_uuid(&self) -> Option<&OsStr> {
+        self.info
+            .partition_entry_uuid
+            .get_or_init(|| {
+                self.device
+                    .property_value("ID_PART_ENTRY_UUID")
+                    .map(|v| v.to_owned())
+            })
+            .as_ref()
+            .map(OsString::as_os_str)
+    }
+
+    /// Get the bus type used for this disk.
+    pub fn bus(&self) -> Option<&OsStr> {
+        self.info
+            .bus
+            .get_or_init(|| self.device.property_value("ID_BUS").map(|v| v.to_owned()))
+            .as_ref()
+            .map(OsString::as_os_str)
+    }
+
+    /// Attempt to guess the disk type.
+    pub fn guess_disk_type(&self) -> io::Result<DiskType> {
+        Ok(match self.rotational()? {
+            Some(false) => DiskType::Ssd,
+            Some(true) => DiskType::Hdd,
+            None => match self.ata_rotation_rate_rpm() {
+                Some(_) => DiskType::Hdd,
+                None => match self.bus() {
+                    Some(bus) if bus == "usb" => DiskType::Usb,
+                    _ => DiskType::Unknown,
+                },
+            },
+        })
+    }
+
+    /// Get the file system type found on the disk, if any.
+    ///
+    /// Note that `None` may also just mean "unknown".
+    pub fn fs_type(&self) -> Option<&OsStr> {
+        self.info
+            .fs_type
+            .get_or_init(|| {
+                self.device
+                    .property_value("ID_FS_TYPE")
+                    .map(|v| v.to_owned())
+            })
+            .as_ref()
+            .map(OsString::as_os_str)
+    }
+
+    /// Check if there are any "holders" in `/sys`. This usually means the device is in use by
+    /// another kernel driver like the device mapper.
+    pub fn has_holders(&self) -> io::Result<bool> {
+        Ok(*self
+            .info
+            .has_holders
+            .get_or_try_init(|| -> io::Result<bool> {
+                let mut subdir = self.syspath().to_owned();
+                subdir.push("holders");
+                for entry in std::fs::read_dir(subdir)? {
+                    match entry?.file_name().as_bytes() {
+                        b"." | b".." => (),
+                        _ => return Ok(true),
+                    }
+                }
+                Ok(false)
+            })?)
+    }
+
+    /// Check if this disk is mounted.
+    pub fn is_mounted(&self) -> Result<bool, Error> {
+        Ok(*self
+            .info
+            .is_mounted
+            .get_or_try_init(|| self.manager.is_devnum_mounted(self.devnum()?))?)
+    }
+
+    /// Read block device stats
+    ///
+    /// see <https://www.kernel.org/doc/Documentation/block/stat.txt>
+    pub fn read_stat(&self) -> std::io::Result<Option<BlockDevStat>> {
+        if let Some(stat) = self.read_sys(Path::new("stat"))? {
+            // the stat file is ASCII; use a lossy conversion instead of the
+            // `from_utf8_unchecked` call, which is unsound on invalid UTF-8
+            let stat = String::from_utf8_lossy(&stat);
+            let stat: Vec<u64> = stat
+                .split_ascii_whitespace()
+                .map(|s| s.parse().unwrap_or_default())
+                .collect();
+
+            if stat.len() < 15 {
+                return Ok(None);
+            }
+
+            return Ok(Some(BlockDevStat {
+                read_ios: stat[0],
+                read_sectors: stat[2],
+                write_ios: stat[4] + stat[11],     // write + discard
+                write_sectors: stat[6] + stat[13], // write + discard
+                io_ticks: stat[10],
+            }));
+        }
+        Ok(None)
+    }
+
+    /// List device partitions
+    pub fn partitions(&self) -> Result<HashMap<u64, Disk>, Error> {
+        let sys_path = self.syspath();
+        let device = self.sysname().to_string_lossy().to_string();
+
+        let mut map = HashMap::new();
+
+        for item in proxmox_sys::fs::read_subdir(libc::AT_FDCWD, sys_path)? {
+            let item = item?;
+            let name = match item.file_name().to_str() {
+                Ok(name) => name,
+                Err(_) => continue, // skip non utf8 entries
+            };
+
+            if !name.starts_with(&device) {
+                continue;
+            }
+
+            let mut part_path = sys_path.to_owned();
+            part_path.push(name);
+
+            let disk_part = self.manager.clone().disk_by_sys_path(&part_path)?;
+
+            if let Some(partition) = disk_part.read_sys_u64("partition")? {
+                map.insert(partition, disk_part);
+            }
+        }
+
+        Ok(map)
+    }
+}
+
+#[api()]
+#[derive(Debug, Serialize, Deserialize)]
+#[serde(rename_all = "lowercase")]
+/// This is just a rough estimate for a "type" of disk.
+pub enum DiskType {
+    /// We know nothing.
+    Unknown,
+
+    /// May also be a USB-HDD.
+    Hdd,
+
+    /// May also be a USB-SSD.
+    Ssd,
+
+    /// Some kind of USB disk, but we don't know more than that.
+    Usb,
+}
+
+#[derive(Debug)]
+/// Represents the contents of the `/sys/block/<dev>/stat` file.
+pub struct BlockDevStat {
+    pub read_ios: u64,
+    pub read_sectors: u64,
+    pub write_ios: u64,
+    pub write_sectors: u64,
+    pub io_ticks: u64, // milliseconds
+}
+
+/// Use lsblk to read partition type uuids and file system types.
+pub fn get_lsblk_info() -> Result<Vec<LsblkInfo>, Error> {
+    let mut command = std::process::Command::new("lsblk");
+    command.args(["--json", "-o", "path,parttype,fstype,uuid"]);
+
+    let output = proxmox_sys::command::run_command(command, None)?;
+
+    let mut output: serde_json::Value = output.parse()?;
+
+    Ok(serde_json::from_value(output["blockdevices"].take())?)
+}
+
+/// Get set of devices with a file system label.
+///
+/// The set contains the raw unix device numbers (`dev_t` as `u64`).
+fn get_file_system_devices(lsblk_info: &[LsblkInfo]) -> Result<HashSet<u64>, Error> {
+    let mut device_set: HashSet<u64> = HashSet::new();
+
+    for info in lsblk_info.iter() {
+        if info.file_system_type.is_some() {
+            let meta = std::fs::metadata(&info.path)?;
+            device_set.insert(meta.rdev());
+        }
+    }
+
+    Ok(device_set)
+}
+
+#[api()]
+#[derive(Debug, Serialize, Deserialize, Eq, PartialEq)]
+#[serde(rename_all = "lowercase")]
+/// What a block device partition is used for.
+pub enum PartitionUsageType {
+    /// Partition is not used (as far as we can tell)
+    Unused,
+    /// Partition is used by LVM
+    LVM,
+    /// Partition is used by ZFS
+    ZFS,
+    /// Partition is ZFS reserved
+    ZfsReserved,
+    /// Partition is an EFI partition
+    EFI,
+    /// Partition is a BIOS partition
+    BIOS,
+    /// Partition contains a file system label
+    FileSystem,
+}
+
+#[api()]
+#[derive(Debug, Serialize, Deserialize, Eq, PartialEq)]
+#[serde(rename_all = "lowercase")]
+/// What a block device (disk) is used for.
+pub enum DiskUsageType {
+    /// Disk is not used (as far as we can tell)
+    Unused,
+    /// Disk is mounted
+    Mounted,
+    /// Disk is used by LVM
+    LVM,
+    /// Disk is used by ZFS
+    ZFS,
+    /// Disk is used by device-mapper
+    DeviceMapper,
+    /// Disk has partitions
+    Partitions,
+    /// Disk contains a file system label
+    FileSystem,
+}
+
+#[api()]
+#[derive(Debug, Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+/// Basic information about a partition
+pub struct PartitionInfo {
+    /// The partition name
+    pub name: String,
+    /// What the partition is used for
+    pub used: PartitionUsageType,
+    /// Is the partition mounted
+    pub mounted: bool,
+    /// The filesystem of the partition
+    pub filesystem: Option<String>,
+    /// The partition devpath
+    pub devpath: Option<String>,
+    /// Size in bytes
+    pub size: Option<u64>,
+    /// GPT partition
+    pub gpt: bool,
+    /// UUID
+    pub uuid: Option<String>,
+}
+
+#[api(
+    properties: {
+        used: {
+            type: DiskUsageType,
+        },
+        "disk-type": {
+            type: DiskType,
+        },
+        status: {
+            type: SmartStatus,
+        },
+        partitions: {
+            optional: true,
+            items: {
+                type: PartitionInfo
+            }
+        }
+    }
+)]
+#[derive(Debug, Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+/// Information about how a Disk is used
+pub struct DiskUsageInfo {
+    /// Disk name (`/sys/block/<name>`)
+    pub name: String,
+    pub used: DiskUsageType,
+    pub disk_type: DiskType,
+    pub status: SmartStatus,
+    /// Disk wearout
+    pub wearout: Option<f64>,
+    /// Vendor
+    pub vendor: Option<String>,
+    /// Model
+    pub model: Option<String>,
+    /// WWN
+    pub wwn: Option<String>,
+    /// Disk size
+    pub size: u64,
+    /// Serial number
+    pub serial: Option<String>,
+    /// Partitions on the device
+    pub partitions: Option<Vec<PartitionInfo>>,
+    /// Linux device path (/dev/xxx)
+    pub devpath: Option<String>,
+    /// Set if disk contains a GPT partition table
+    pub gpt: bool,
+    /// RPM
+    pub rpm: Option<u64>,
+}
+
+fn scan_partitions(
+    disk_manager: Arc<DiskManage>,
+    lvm_devices: &HashSet<u64>,
+    zfs_devices: &HashSet<u64>,
+    device: &str,
+) -> Result<DiskUsageType, Error> {
+    let mut sys_path = std::path::PathBuf::from("/sys/block");
+    sys_path.push(device);
+
+    let mut used = DiskUsageType::Unused;
+
+    let mut found_lvm = false;
+    let mut found_zfs = false;
+    let mut found_mountpoints = false;
+    let mut found_dm = false;
+    let mut found_partitions = false;
+
+    for item in proxmox_sys::fs::read_subdir(libc::AT_FDCWD, &sys_path)? {
+        let item = item?;
+        let name = match item.file_name().to_str() {
+            Ok(name) => name,
+            Err(_) => continue, // skip non utf8 entries
+        };
+        if !name.starts_with(device) {
+            continue;
+        }
+
+        found_partitions = true;
+
+        let mut part_path = sys_path.clone();
+        part_path.push(name);
+
+        let data = disk_manager.clone().disk_by_sys_path(&part_path)?;
+
+        let devnum = data.devnum()?;
+
+        if lvm_devices.contains(&devnum) {
+            found_lvm = true;
+        }
+
+        if data.is_mounted()? {
+            found_mountpoints = true;
+        }
+
+        if data.has_holders()? {
+            found_dm = true;
+        }
+
+        if zfs_devices.contains(&devnum) {
+            found_zfs = true;
+        }
+    }
+
+    if found_mountpoints {
+        used = DiskUsageType::Mounted;
+    } else if found_lvm {
+        used = DiskUsageType::LVM;
+    } else if found_zfs {
+        used = DiskUsageType::ZFS;
+    } else if found_dm {
+        used = DiskUsageType::DeviceMapper;
+    } else if found_partitions {
+        used = DiskUsageType::Partitions;
+    }
+
+    Ok(used)
+}
+
+pub struct DiskUsageQuery {
+    smart: bool,
+    partitions: bool,
+}
+
+impl Default for DiskUsageQuery {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl DiskUsageQuery {
+    pub const fn new() -> Self {
+        Self {
+            smart: true,
+            partitions: false,
+        }
+    }
+
+    pub fn smart(&mut self, smart: bool) -> &mut Self {
+        self.smart = smart;
+        self
+    }
+
+    pub fn partitions(&mut self, partitions: bool) -> &mut Self {
+        self.partitions = partitions;
+        self
+    }
+
+    pub fn query(&self) -> Result<HashMap<String, DiskUsageInfo>, Error> {
+        get_disks(None, !self.smart, self.partitions)
+    }
+
+    pub fn find(&self, disk: &str) -> Result<DiskUsageInfo, Error> {
+        let mut map = get_disks(Some(vec![disk.to_string()]), !self.smart, self.partitions)?;
+        if let Some(info) = map.remove(disk) {
+            Ok(info)
+        } else {
+            bail!("failed to get disk usage info - internal error"); // should not happen
+        }
+    }
+
+    pub fn find_all(&self, disks: Vec<String>) -> Result<HashMap<String, DiskUsageInfo>, Error> {
+        get_disks(Some(disks), !self.smart, self.partitions)
+    }
+}
+
+fn get_partitions_info(
+    partitions: HashMap<u64, Disk>,
+    lvm_devices: &HashSet<u64>,
+    zfs_devices: &HashSet<u64>,
+    file_system_devices: &HashSet<u64>,
+    lsblk_infos: &[LsblkInfo],
+) -> Vec<PartitionInfo> {
+    partitions
+        .values()
+        .map(|disk| {
+            let devpath = disk
+                .device_path()
+                .map(|p| p.to_string_lossy().to_string());
+
+            let mut used = PartitionUsageType::Unused;
+
+            if let Ok(devnum) = disk.devnum() {
+                if lvm_devices.contains(&devnum) {
+                    used = PartitionUsageType::LVM;
+                } else if zfs_devices.contains(&devnum) {
+                    used = PartitionUsageType::ZFS;
+                } else if file_system_devices.contains(&devnum) {
+                    used = PartitionUsageType::FileSystem;
+                }
+            }
+
+            let mounted = disk.is_mounted().unwrap_or(false);
+            let mut filesystem = None;
+            let mut uuid = None;
+            if let Some(devpath) = devpath.as_ref() {
+                for info in lsblk_infos.iter().filter(|i| i.path.eq(devpath)) {
+                    uuid = info.uuid.clone().filter(|uuid| UUID_REGEX.is_match(uuid));
+                    used = match info.partition_type.as_deref() {
+                        Some("21686148-6449-6e6f-744e-656564454649") => PartitionUsageType::BIOS,
+                        Some("c12a7328-f81f-11d2-ba4b-00a0c93ec93b") => PartitionUsageType::EFI,
+                        Some("6a945a3b-1dd2-11b2-99a6-080020736631") => {
+                            PartitionUsageType::ZfsReserved
+                        }
+                        _ => used,
+                    };
+                    if used == PartitionUsageType::FileSystem {
+                        filesystem.clone_from(&info.file_system_type);
+                    }
+                }
+            }
+
+            PartitionInfo {
+                name: disk.sysname().to_str().unwrap_or("?").to_string(),
+                devpath,
+                used,
+                mounted,
+                filesystem,
+                size: disk.size().ok(),
+                gpt: disk.has_gpt(),
+                uuid,
+            }
+        })
+        .collect()
+}
+
+/// Get disk usage information for multiple disks
+fn get_disks(
+    // filter - list of device names (without leading /dev)
+    disks: Option<Vec<String>>,
+    // do not include data from smartctl
+    no_smart: bool,
+    // include partitions
+    include_partitions: bool,
+) -> Result<HashMap<String, DiskUsageInfo>, Error> {
+    let disk_manager = DiskManage::new();
+
+    let lsblk_info = get_lsblk_info()?;
+
+    let zfs_devices =
+        zfs_devices(&lsblk_info, None).or_else(|err| -> Result<HashSet<u64>, Error> {
+            eprintln!("error getting zfs devices: {err}");
+            Ok(HashSet::new())
+        })?;
+
+    let lvm_devices = get_lvm_devices(&lsblk_info)?;
+
+    let file_system_devices = get_file_system_devices(&lsblk_info)?;
+
+    // fixme: ceph journals/volumes
+
+    let mut result = HashMap::new();
+    let mut device_paths = Vec::new();
+
+    for item in proxmox_sys::fs::scan_subdir(libc::AT_FDCWD, "/sys/block", &BLOCKDEVICE_NAME_REGEX)?
+    {
+        let item = item?;
+
+        let name = item.file_name().to_str().unwrap().to_string();
+
+        if let Some(ref disks) = disks {
+            if !disks.contains(&name) {
+                continue;
+            }
+        }
+
+        let sys_path = format!("/sys/block/{name}");
+
+        if let Ok(target) = std::fs::read_link(&sys_path) {
+            if let Some(target) = target.to_str() {
+                if ISCSI_PATH_REGEX.is_match(target) {
+                    continue;
+                } // skip iSCSI devices
+            }
+        }
+
+        let disk = disk_manager.clone().disk_by_sys_path(&sys_path)?;
+
+        let devnum = disk.devnum()?;
+
+        let size = match disk.size() {
+            Ok(size) => size,
+            Err(_) => continue, // skip devices with unreadable size
+        };
+
+        let disk_type = match disk.guess_disk_type() {
+            Ok(disk_type) => disk_type,
+            Err(_) => continue, // skip devices with undetectable type
+        };
+
+        let mut usage = DiskUsageType::Unused;
+
+        if lvm_devices.contains(&devnum) {
+            usage = DiskUsageType::LVM;
+        }
+
+        match disk.is_mounted() {
+            Ok(true) => usage = DiskUsageType::Mounted,
+            Ok(false) => {}
+            Err(_) => continue, // skip devices with undetectable mount status
+        }
+
+        if zfs_devices.contains(&devnum) {
+            usage = DiskUsageType::ZFS;
+        }
+
+        let vendor = disk
+            .vendor()
+            .unwrap_or(None)
+            .map(|s| s.to_string_lossy().trim().to_string());
+
+        let model = disk.model().map(|s| s.to_string_lossy().into_owned());
+
+        let serial = disk.serial().map(|s| s.to_string_lossy().into_owned());
+
+        let devpath = disk
+            .device_path()
+            .map(|p| p.to_string_lossy().to_string());
+
+        device_paths.push((name.clone(), devpath.clone()));
+
+        let wwn = disk.wwn().map(|s| s.to_string_lossy().into_owned());
+
+        let partitions: Option<Vec<PartitionInfo>> = if include_partitions {
+            disk.partitions().ok().map(|parts| {
+                get_partitions_info(
+                    parts,
+                    &lvm_devices,
+                    &zfs_devices,
+                    &file_system_devices,
+                    &lsblk_info,
+                )
+            })
+        } else {
+            None
+        };
+
+        if usage != DiskUsageType::Mounted {
+            match scan_partitions(disk_manager.clone(), &lvm_devices, &zfs_devices, &name) {
+                Ok(part_usage) => {
+                    if part_usage != DiskUsageType::Unused {
+                        usage = part_usage;
+                    }
+                }
+                Err(_) => continue, // skip devices if scan_partitions fails
+            };
+        }
+
+        if usage == DiskUsageType::Unused && file_system_devices.contains(&devnum) {
+            usage = DiskUsageType::FileSystem;
+        }
+
+        if usage == DiskUsageType::Unused && disk.has_holders()? {
+            usage = DiskUsageType::DeviceMapper;
+        }
+
+        let info = DiskUsageInfo {
+            name: name.clone(),
+            vendor,
+            model,
+            partitions,
+            serial,
+            devpath,
+            size,
+            wwn,
+            disk_type,
+            status: SmartStatus::Unknown,
+            wearout: None,
+            used: usage,
+            gpt: disk.has_gpt(),
+            rpm: disk.ata_rotation_rate_rpm(),
+        };
+
+        result.insert(name, info);
+    }
+
+    if !no_smart {
+        let (tx, rx) = crossbeam_channel::bounded(result.len());
+
+        let parallel_handler =
+            ParallelHandler::new("smartctl data", 4, move |device: (String, String)| {
+                match get_smart_data(Path::new(&device.1), false) {
+                    Ok(smart_data) => tx.send((device.0, smart_data))?,
+                    // do not fail the whole disk output just because smartctl couldn't query one
+                    Err(err) => {
+                        proxmox_log::error!("failed to gather smart data for {} – {err}", device.1)
+                    }
+                }
+                Ok(())
+            });
+
+        for (name, path) in device_paths.into_iter() {
+            if let Some(p) = path {
+                parallel_handler.send((name, p))?;
+            }
+        }
+
+        parallel_handler.complete()?;
+        while let Ok(msg) = rx.recv() {
+            if let Some(value) = result.get_mut(&msg.0) {
+                value.wearout = msg.1.wearout;
+                value.status = msg.1.status;
+            }
+        }
+    }
+    Ok(result)
+}
+
+/// Try to reload the partition table
+pub fn reread_partition_table(disk: &Disk) -> Result<(), Error> {
+    let disk_path = match disk.device_path() {
+        Some(path) => path,
+        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
+    };
+
+    let mut command = std::process::Command::new("blockdev");
+    command.arg("--rereadpt");
+    command.arg(disk_path);
+
+    proxmox_sys::command::run_command(command, None)?;
+
+    Ok(())
+}
+
+/// Initialize disk by writing a GPT partition table
+pub fn inititialize_gpt_disk(disk: &Disk, uuid: Option<&str>) -> Result<(), Error> {
+    let disk_path = match disk.device_path() {
+        Some(path) => path,
+        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
+    };
+
+    let uuid = uuid.unwrap_or("R"); // R .. random disk GUID
+
+    let mut command = std::process::Command::new("sgdisk");
+    command.arg(disk_path);
+    command.args(["-U", uuid]);
+
+    proxmox_sys::command::run_command(command, None)?;
+
+    Ok(())
+}
+
+/// Wipes all labels, the first 200 MiB, and the last 4096 bytes of a disk/partition.
+/// If called with a partition, also sets the partition type to 0x83 'Linux filesystem'.
+pub fn wipe_blockdev(disk: &Disk) -> Result<(), Error> {
+    let disk_path = match disk.device_path() {
+        Some(path) => path,
+        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
+    };
+
+    let is_partition = disk.is_partition();
+
+    let mut to_wipe: Vec<PathBuf> = Vec::new();
+
+    let partitions_map = disk.partitions()?;
+    for part_disk in partitions_map.values() {
+        let part_path = match part_disk.device_path() {
+            Some(path) => path,
+            None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
+        };
+        to_wipe.push(part_path.to_path_buf());
+    }
+
+    to_wipe.push(disk_path.to_path_buf());
+
+    info!("Wiping block device {}", disk_path.display());
+
+    let mut wipefs_command = std::process::Command::new("wipefs");
+    wipefs_command.arg("--all").args(&to_wipe);
+
+    let wipefs_output = proxmox_sys::command::run_command(wipefs_command, None)?;
+    info!("wipefs output: {wipefs_output}");
+
+    zero_disk_start_and_end(disk)?;
+
+    if is_partition {
+        // set the partition type to 0x83 'Linux filesystem'
+        change_parttype(disk, "8300")?;
+    }
+
+    Ok(())
+}
+
+pub fn zero_disk_start_and_end(disk: &Disk) -> Result<(), Error> {
+    let disk_path = match disk.device_path() {
+        Some(path) => path,
+        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
+    };
+
+    let disk_size = disk.size()?;
+    let file = std::fs::OpenOptions::new()
+        .write(true)
+        .custom_flags(libc::O_CLOEXEC | libc::O_DSYNC)
+        .open(disk_path)
+        .with_context(|| format!("failed to open device {disk_path:?} for writing"))?;
+    let write_size = disk_size.min(200 * 1024 * 1024);
+    let zeroes = proxmox_io::boxed::zeroed(write_size as usize);
+    file.write_all_at(&zeroes, 0)
+        .with_context(|| format!("failed to wipe start of device {disk_path:?}"))?;
+    if disk_size > write_size {
+        file.write_all_at(&zeroes[0..4096], disk_size - 4096)
+            .with_context(|| format!("failed to wipe end of device {disk_path:?}"))?;
+    }
+    Ok(())
+}
+
+pub fn change_parttype(part_disk: &Disk, part_type: &str) -> Result<(), Error> {
+    let part_path = match part_disk.device_path() {
+        Some(path) => path,
+        None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
+    };
+    if let Ok(stat) = nix::sys::stat::stat(part_path) {
+        let mut sgdisk_command = std::process::Command::new("sgdisk");
+        let major = unsafe { libc::major(stat.st_rdev) };
+        let minor = unsafe { libc::minor(stat.st_rdev) };
+        let partnum_path = &format!("/sys/dev/block/{major}:{minor}/partition");
+        let partnum: u32 = std::fs::read_to_string(partnum_path)?.trim_end().parse()?;
+        sgdisk_command.arg(format!("-t{partnum}:{part_type}"));
+        let part_disk_parent = match part_disk.parent() {
+            Some(disk) => disk,
+            None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
+        };
+        let part_disk_parent_path = match part_disk_parent.device_path() {
+            Some(path) => path,
+            None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
+        };
+        sgdisk_command.arg(part_disk_parent_path);
+        let sgdisk_output = proxmox_sys::command::run_command(sgdisk_command, None)?;
+        info!("sgdisk output: {sgdisk_output}");
+    }
+    Ok(())
+}
+
+/// Create a single linux partition using the whole available space
+pub fn create_single_linux_partition(disk: &Disk) -> Result<Disk, Error> {
+    let disk_path = match disk.device_path() {
+        Some(path) => path,
+        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
+    };
+
+    let mut command = std::process::Command::new("sgdisk");
+    command.args(["-n1", "-t1:8300"]);
+    command.arg(disk_path);
+
+    proxmox_sys::command::run_command(command, None)?;
+
+    let mut partitions = disk.partitions()?;
+
+    match partitions.remove(&1) {
+        Some(partition) => Ok(partition),
+        None => bail!("unable to lookup device partition"),
+    }
+}
+
+#[api()]
+#[derive(Debug, Copy, Clone, Serialize, Deserialize, Eq, PartialEq)]
+#[serde(rename_all = "lowercase")]
+/// A file system type supported by our tooling.
+pub enum FileSystemType {
+    /// Linux Ext4
+    Ext4,
+    /// XFS
+    Xfs,
+}
+
+impl std::fmt::Display for FileSystemType {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        let text = match self {
+            FileSystemType::Ext4 => "ext4",
+            FileSystemType::Xfs => "xfs",
+        };
+        write!(f, "{text}")
+    }
+}
+
+impl std::str::FromStr for FileSystemType {
+    type Err = serde_json::Error;
+
+    fn from_str(s: &str) -> Result<Self, Self::Err> {
+        use serde::de::IntoDeserializer;
+        Self::deserialize(s.into_deserializer())
+    }
+}
+
+/// Create a file system on a disk or disk partition
+pub fn create_file_system(disk: &Disk, fs_type: FileSystemType) -> Result<(), Error> {
+    let disk_path = match disk.device_path() {
+        Some(path) => path,
+        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
+    };
+
+    let fs_type = fs_type.to_string();
+
+    let mut command = std::process::Command::new("mkfs");
+    command.args(["-t", &fs_type]);
+    command.arg(disk_path);
+
+    proxmox_sys::command::run_command(command, None)?;
+
+    Ok(())
+}
+
+/// Block device name completion helper
+pub fn complete_disk_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+    let dir =
+        match proxmox_sys::fs::scan_subdir(libc::AT_FDCWD, "/sys/block", &BLOCKDEVICE_NAME_REGEX) {
+            Ok(dir) => dir,
+            Err(_) => return vec![],
+        };
+
+    dir.flatten()
+        .map(|item| item.file_name().to_str().unwrap().to_string())
+        .collect()
+}
+
+/// Block device partition name completion helper
+pub fn complete_partition_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+    let dir = match proxmox_sys::fs::scan_subdir(
+        libc::AT_FDCWD,
+        "/sys/class/block",
+        &BLOCKDEVICE_DISK_AND_PARTITION_NAME_REGEX,
+    ) {
+        Ok(dir) => dir,
+        Err(_) => return vec![],
+    };
+
+    dir.flatten()
+        .map(|item| item.file_name().to_str().unwrap().to_string())
+        .collect()
+}
+
+/// Read the FS UUID (parse blkid output)
+///
+/// Note: Calling blkid is more reliable than using the udev ID_FS_UUID property.
+pub fn get_fs_uuid(disk: &Disk) -> Result<String, Error> {
+    let disk_path = match disk.device_path() {
+        Some(path) => path,
+        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
+    };
+
+    let mut command = std::process::Command::new("blkid");
+    command.args(["-o", "export"]);
+    command.arg(disk_path);
+
+    let output = proxmox_sys::command::run_command(command, None)?;
+
+    for line in output.lines() {
+        if let Some(uuid) = line.strip_prefix("UUID=") {
+            return Ok(uuid.to_string());
+        }
+    }
+
+    bail!("get_fs_uuid failed - missing UUID");
+}
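As an aside, the `blkid -o export` scanning done by `get_fs_uuid` boils down to finding the first `UUID=` line; a hypothetical stdlib-only sketch (the helper name `extract_uuid` is made up for illustration):

```rust
/// Hypothetical stdlib-only sketch of the UUID extraction in `get_fs_uuid`:
/// scan `blkid -o export` style output for the first line starting with
/// `UUID=` and return the value part.
fn extract_uuid(output: &str) -> Option<&str> {
    output.lines().find_map(|line| line.strip_prefix("UUID="))
}
```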
+
+/// Mount a disk by its UUID and the mount point.
+pub fn mount_by_uuid(uuid: &str, mount_point: &Path) -> Result<(), Error> {
+    let mut command = std::process::Command::new("mount");
+    command.arg(format!("UUID={uuid}"));
+    command.arg(mount_point);
+
+    proxmox_sys::command::run_command(command, None)?;
+    Ok(())
+}
+
+/// Create bind mount.
+pub fn bind_mount(path: &Path, target: &Path) -> Result<(), Error> {
+    let mut command = std::process::Command::new("mount");
+    command.arg("--bind");
+    command.arg(path);
+    command.arg(target);
+
+    proxmox_sys::command::run_command(command, None)?;
+    Ok(())
+}
+
+/// Unmount a disk by its mount point.
+pub fn unmount_by_mountpoint(path: &Path) -> Result<(), Error> {
+    let mut command = std::process::Command::new("umount");
+    command.arg(path);
+
+    proxmox_sys::command::run_command(command, None)?;
+    Ok(())
+}
diff --git a/proxmox-disks/src/lvm.rs b/proxmox-disks/src/lvm.rs
new file mode 100644
index 00000000..1456a21c
--- /dev/null
+++ b/proxmox-disks/src/lvm.rs
@@ -0,0 +1,60 @@
+use std::collections::HashSet;
+use std::os::unix::fs::MetadataExt;
+use std::sync::LazyLock;
+
+use anyhow::Error;
+use serde_json::Value;
+
+use super::LsblkInfo;
+
+static LVM_UUIDS: LazyLock<HashSet<&'static str>> = LazyLock::new(|| {
+    let mut set = HashSet::new();
+    set.insert("e6d6d379-f507-44c2-a23c-238f2a3df928");
+    set
+});
+
+/// Get set of devices used by LVM (pvs).
+///
+/// The set is keyed by the raw unix device number (dev_t is u64).
+pub fn get_lvm_devices(lsblk_info: &[LsblkInfo]) -> Result<HashSet<u64>, Error> {
+    const PVS_BIN_PATH: &str = "pvs";
+
+    let mut command = std::process::Command::new(PVS_BIN_PATH);
+    command.args([
+        "--reportformat",
+        "json",
+        "--noheadings",
+        "--readonly",
+        "-o",
+        "pv_name",
+    ]);
+
+    let output = proxmox_sys::command::run_command(command, None)?;
+
+    let mut device_set: HashSet<u64> = HashSet::new();
+
+    for info in lsblk_info.iter() {
+        if let Some(partition_type) = &info.partition_type {
+            if LVM_UUIDS.contains(partition_type.as_str()) {
+                let meta = std::fs::metadata(&info.path)?;
+                device_set.insert(meta.rdev());
+            }
+        }
+    }
+
+    let output: Value = output.parse()?;
+
+    match output["report"][0]["pv"].as_array() {
+        Some(list) => {
+            for info in list {
+                if let Some(pv_name) = info["pv_name"].as_str() {
+                    let meta = std::fs::metadata(pv_name)?;
+                    device_set.insert(meta.rdev());
+                }
+            }
+        }
+        None => return Ok(device_set),
+    }
+
+    Ok(device_set)
+}
diff --git a/proxmox-disks/src/parse_helpers.rs b/proxmox-disks/src/parse_helpers.rs
new file mode 100644
index 00000000..563866d6
--- /dev/null
+++ b/proxmox-disks/src/parse_helpers.rs
@@ -0,0 +1,52 @@
+use anyhow::{bail, Error};
+
+use nom::{
+    bytes::complete::take_while1,
+    combinator::all_consuming,
+    error::{ContextError, VerboseError},
+};
+
+pub(crate) type IResult<I, O, E = VerboseError<I>> = Result<(I, O), nom::Err<E>>;
+
+fn verbose_err<'a>(i: &'a str, ctx: &'static str) -> VerboseError<&'a str> {
+    VerboseError::add_context(i, ctx, VerboseError { errors: vec![] })
+}
+
+pub(crate) fn parse_error<'a>(
+    i: &'a str,
+    context: &'static str,
+) -> nom::Err<VerboseError<&'a str>> {
+    nom::Err::Error(verbose_err(i, context))
+}
+
+pub(crate) fn parse_failure<'a>(
+    i: &'a str,
+    context: &'static str,
+) -> nom::Err<VerboseError<&'a str>> {
+    nom::Err::Failure(verbose_err(i, context))
+}
+
+/// Recognizes one or more non-whitespace characters
+pub(crate) fn notspace1(i: &str) -> IResult<&str, &str> {
+    take_while1(|c| !(c == ' ' || c == '\t' || c == '\n'))(i)
+}
+
+/// Parse complete input, generate verbose error message with line numbers
+pub(crate) fn parse_complete<'a, F, O>(what: &str, i: &'a str, parser: F) -> Result<O, Error>
+where
+    F: FnMut(&'a str) -> IResult<&'a str, O>,
+{
+    match all_consuming(parser)(i) {
+        Err(nom::Err::Error(err)) | Err(nom::Err::Failure(err)) => {
+            bail!(
+                "unable to parse {} - {}",
+                what,
+                nom::error::convert_error(i, err)
+            );
+        }
+        Err(err) => {
+            bail!("unable to parse {} - {}", what, err);
+        }
+        Ok((_, data)) => Ok(data),
+    }
+}
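For reference, what `notspace1` recognizes can be sketched without nom (a hypothetical stdlib-only equivalent, returning the nom-style `(remaining, matched)` pair):

```rust
/// Stdlib-only sketch of the `notspace1` combinator: split off the longest
/// leading run of characters that are not space, tab or newline, failing
/// on empty input or leading whitespace.
fn notspace1(i: &str) -> Result<(&str, &str), &'static str> {
    let end = i
        .find(|c: char| c == ' ' || c == '\t' || c == '\n')
        .unwrap_or(i.len());
    if end == 0 {
        Err("expected at least one non-whitespace character")
    } else {
        // (remaining input, matched token), mirroring nom's tuple order
        Ok((&i[end..], &i[..end]))
    }
}
```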
diff --git a/proxmox-disks/src/smart.rs b/proxmox-disks/src/smart.rs
new file mode 100644
index 00000000..1d41cee2
--- /dev/null
+++ b/proxmox-disks/src/smart.rs
@@ -0,0 +1,227 @@
+use std::sync::LazyLock;
+use std::{
+    collections::{HashMap, HashSet},
+    path::Path,
+};
+
+use ::serde::{Deserialize, Serialize};
+use anyhow::Error;
+
+use proxmox_schema::api;
+
+#[api()]
+#[derive(Debug, Serialize, Deserialize)]
+#[serde(rename_all = "lowercase")]
+/// SMART status
+pub enum SmartStatus {
+    /// Smart tests passed - everything is OK
+    Passed,
+    /// Smart tests failed - disk has problems
+    Failed,
+    /// Unknown status
+    Unknown,
+}
+
+#[api()]
+#[derive(Debug, Serialize, Deserialize)]
+/// SMART Attribute
+pub struct SmartAttribute {
+    /// Attribute name
+    name: String,
+    // FIXME: remove value with next major release (PBS 3.0)
+    /// duplicate of raw - kept for API stability
+    value: String,
+    /// Attribute raw value
+    raw: String,
+    // the rest of the values are available for ATA type
+    /// ATA Attribute ID
+    #[serde(skip_serializing_if = "Option::is_none")]
+    id: Option<u64>,
+    /// ATA Flags
+    #[serde(skip_serializing_if = "Option::is_none")]
+    flags: Option<String>,
+    /// ATA normalized value (0..100)
+    #[serde(skip_serializing_if = "Option::is_none")]
+    normalized: Option<f64>,
+    /// ATA worst
+    #[serde(skip_serializing_if = "Option::is_none")]
+    worst: Option<f64>,
+    /// ATA threshold
+    #[serde(skip_serializing_if = "Option::is_none")]
+    threshold: Option<f64>,
+}
+
+#[api(
+    properties: {
+        status: {
+            type: SmartStatus,
+        },
+        wearout: {
+            description: "Wearout level.",
+            type: f64,
+            optional: true,
+        },
+        attributes: {
+            description: "SMART attributes.",
+            type: Array,
+            items: {
+                type: SmartAttribute,
+            },
+        },
+    },
+)]
+#[derive(Debug, Serialize, Deserialize)]
+/// Data from smartctl
+pub struct SmartData {
+    pub status: SmartStatus,
+    pub wearout: Option<f64>,
+    pub attributes: Vec<SmartAttribute>,
+}
+
+/// Read smartctl data for a disk (/dev/XXX).
+pub fn get_smart_data(disk_path: &Path, health_only: bool) -> Result<SmartData, Error> {
+    const SMARTCTL_BIN_PATH: &str = "smartctl";
+
+    let mut command = std::process::Command::new(SMARTCTL_BIN_PATH);
+    command.arg("-H");
+    if !health_only {
+        command.args(["-A", "-j"]);
+    }
+
+    command.arg(disk_path);
+
+    let output = proxmox_sys::command::run_command(
+        command,
+        Some(
+            |exitcode| (exitcode & 0b0011) == 0, // only bits 0-1 are fatal errors
+        ),
+    )?;
+
+    let output: serde_json::Value = output.parse()?;
+
+    let mut wearout = None;
+
+    let mut attributes = Vec::new();
+    let mut wearout_candidates = HashMap::new();
+
+    // ATA devices
+    if let Some(list) = output["ata_smart_attributes"]["table"].as_array() {
+        for item in list {
+            let id = match item["id"].as_u64() {
+                Some(id) => id,
+                None => continue, // skip attributes without id
+            };
+
+            let name = match item["name"].as_str() {
+                Some(name) => name.to_string(),
+                None => continue, // skip attributes without name
+            };
+
+            let raw_value = match item["raw"]["string"].as_str() {
+                Some(value) => value.to_string(),
+                None => continue, // skip attributes without raw value
+            };
+
+            let flags = match item["flags"]["string"].as_str() {
+                Some(flags) => flags.to_string(),
+                None => continue, // skip attributes without flags
+            };
+
+            let normalized = match item["value"].as_f64() {
+                Some(v) => v,
+                None => continue, // skip attributes without normalized value
+            };
+
+            let worst = match item["worst"].as_f64() {
+                Some(v) => v,
+                None => continue, // skip attributes without worst entry
+            };
+
+            let threshold = match item["thresh"].as_f64() {
+                Some(v) => v,
+                None => continue, // skip attributes without threshold entry
+            };
+
+            if WEAROUT_FIELD_NAMES.contains(&name as &str) {
+                wearout_candidates.insert(name.clone(), normalized);
+            }
+
+            attributes.push(SmartAttribute {
+                name,
+                value: raw_value.clone(),
+                raw: raw_value,
+                id: Some(id),
+                flags: Some(flags),
+                normalized: Some(normalized),
+                worst: Some(worst),
+                threshold: Some(threshold),
+            });
+        }
+    }
+
+    if !wearout_candidates.is_empty() {
+        for field in WEAROUT_FIELD_ORDER {
+            if let Some(value) = wearout_candidates.get(field as &str) {
+                wearout = Some(*value);
+                break;
+            }
+        }
+    }
+
+    // NVME devices
+    if let Some(list) = output["nvme_smart_health_information_log"].as_object() {
+        for (name, value) in list {
+            if name == "percentage_used" {
+                // derive wearout from the NVMe percentage_used value, allowing decimals
+                if let Some(v) = value.as_f64() {
+                    if v <= 100.0 {
+                        wearout = Some(100.0 - v);
+                    }
+                }
+            }
+            if let Some(value) = value.as_f64() {
+                attributes.push(SmartAttribute {
+                    name: name.to_string(),
+                    value: value.to_string(),
+                    raw: value.to_string(),
+                    id: None,
+                    flags: None,
+                    normalized: None,
+                    worst: None,
+                    threshold: None,
+                });
+            }
+        }
+    }
+
+    let status = match output["smart_status"]["passed"].as_bool() {
+        None => SmartStatus::Unknown,
+        Some(true) => SmartStatus::Passed,
+        Some(false) => SmartStatus::Failed,
+    };
+
+    Ok(SmartData {
+        status,
+        wearout,
+        attributes,
+    })
+}
+
+static WEAROUT_FIELD_ORDER: &[&str] = &[
+    "Media_Wearout_Indicator",
+    "SSD_Life_Left",
+    "Wear_Leveling_Count",
+    "Perc_Write/Erase_Ct_BC",
+    "Perc_Rated_Life_Remain",
+    "Remaining_Lifetime_Perc",
+    "Percent_Lifetime_Remain",
+    "Lifetime_Left",
+    "PCT_Life_Remaining",
+    "Lifetime_Remaining",
+    "Percent_Life_Remaining",
+    "Percent_Lifetime_Used",
+    "Perc_Rated_Life_Used",
+];
+
+static WEAROUT_FIELD_NAMES: LazyLock<HashSet<&'static str>> =
+    LazyLock::new(|| WEAROUT_FIELD_ORDER.iter().cloned().collect());
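The wearout resolution above (collect candidates from the ATA table, then take the first field present in the fixed priority order) can be sketched standalone; `pick_wearout` is a hypothetical helper and the priority list is truncated for brevity:

```rust
use std::collections::HashMap;

/// First-match-wins priority order (truncated sketch of WEAROUT_FIELD_ORDER).
const ORDER: &[&str] = &["Media_Wearout_Indicator", "SSD_Life_Left"];

/// Hypothetical sketch of the wearout selection: return the normalized
/// value of the highest-priority candidate field that is present.
fn pick_wearout(candidates: &HashMap<String, f64>) -> Option<f64> {
    ORDER.iter().find_map(|field| candidates.get(*field).copied())
}
```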
diff --git a/proxmox-disks/src/zfs.rs b/proxmox-disks/src/zfs.rs
new file mode 100644
index 00000000..0babb887
--- /dev/null
+++ b/proxmox-disks/src/zfs.rs
@@ -0,0 +1,205 @@
+use std::collections::HashSet;
+use std::os::unix::fs::MetadataExt;
+use std::path::PathBuf;
+use std::sync::{LazyLock, Mutex};
+
+use anyhow::{bail, Error};
+
+use proxmox_schema::const_regex;
+
+use super::*;
+
+static ZFS_UUIDS: LazyLock<HashSet<&'static str>> = LazyLock::new(|| {
+    let mut set = HashSet::new();
+    set.insert("6a898cc3-1dd2-11b2-99a6-080020736631"); // apple
+    set.insert("516e7cba-6ecf-11d6-8ff8-00022d09712b"); // bsd
+    set
+});
+
+fn get_pool_from_dataset(dataset: &str) -> &str {
+    if let Some(idx) = dataset.find('/') {
+        dataset[0..idx].as_ref()
+    } else {
+        dataset
+    }
+}
+
+/// Returns kernel IO-stats for zfs pools
+pub fn zfs_pool_stats(pool: &OsStr) -> Result<Option<BlockDevStat>, Error> {
+    let mut path = PathBuf::from("/proc/spl/kstat/zfs");
+    path.push(pool);
+    path.push("io");
+
+    let text = match proxmox_sys::fs::file_read_optional_string(&path)? {
+        Some(text) => text,
+        None => {
+            return Ok(None);
+        }
+    };
+
+    let lines: Vec<&str> = text.lines().collect();
+
+    if lines.len() < 3 {
+        bail!("unable to parse {:?} - got less than 3 lines", path);
+    }
+
+    // https://github.com/openzfs/zfs/blob/master/lib/libspl/include/sys/kstat.h#L578
+    // nread    nwritten reads    writes   wtime    wlentime wupdate  rtime    rlentime rupdate  wcnt     rcnt
+    // Note: w -> wait (wtime -> wait time)
+    // Note: r -> run  (rtime -> run time)
+    // All times are nanoseconds
+    let stat: Vec<u64> = lines[2]
+        .split_ascii_whitespace()
+        .map(|s| s.parse().unwrap_or_default())
+        .collect();
+
+    let ticks = (stat[4] + stat[7]) / 1_000_000; // convert nanoseconds to milliseconds
+
+    let stat = BlockDevStat {
+        read_sectors: stat[0] >> 9,
+        write_sectors: stat[1] >> 9,
+        read_ios: stat[2],
+        write_ios: stat[3],
+        io_ticks: ticks,
+    };
+
+    Ok(Some(stat))
+}
+
+/// Get set of devices used by zfs (or a specific zfs pool)
+///
+/// The set is keyed by the raw unix device number (dev_t is u64).
+pub fn zfs_devices(lsblk_info: &[LsblkInfo], pool: Option<String>) -> Result<HashSet<u64>, Error> {
+    let list = zpool_list(pool.as_ref(), true)?;
+
+    let mut device_set = HashSet::new();
+    for entry in list {
+        for device in entry.devices {
+            let meta = std::fs::metadata(device)?;
+            device_set.insert(meta.rdev());
+        }
+    }
+    if pool.is_none() {
+        for info in lsblk_info.iter() {
+            if let Some(partition_type) = &info.partition_type {
+                if ZFS_UUIDS.contains(partition_type.as_str()) {
+                    let meta = std::fs::metadata(&info.path)?;
+                    device_set.insert(meta.rdev());
+                }
+            }
+        }
+    }
+
+    Ok(device_set)
+}
+
+const ZFS_KSTAT_BASE_PATH: &str = "/proc/spl/kstat/zfs";
+const_regex! {
+    OBJSET_REGEX = r"^objset-0x[a-fA-F0-9]+$";
+}
+
+static ZFS_DATASET_OBJSET_MAP: LazyLock<Mutex<HashMap<String, (String, String)>>> =
+    LazyLock::new(|| Mutex::new(HashMap::new()));
+
+// parses /proc/spl/kstat/zfs/POOL/objset-ID files
+// they have the following format:
+//
+// 0 0 0x00 0 0000 00000000000 000000000000000000
+// name                            type data
+// dataset_name                    7    pool/dataset
+// writes                          4    0
+// nwritten                        4    0
+// reads                           4    0
+// nread                           4    0
+// nunlinks                        4    0
+// nunlinked                       4    0
+//
+// we are only interested in dataset_name, writes, nwritten, reads and nread
+fn parse_objset_stat(pool: &str, objset_id: &str) -> Result<(String, BlockDevStat), Error> {
+    let path = PathBuf::from(format!("{ZFS_KSTAT_BASE_PATH}/{pool}/{objset_id}"));
+
+    let text = match proxmox_sys::fs::file_read_optional_string(path)? {
+        Some(text) => text,
+        None => bail!("could not read '{}' stat file", objset_id),
+    };
+
+    let mut dataset_name = String::new();
+    let mut stat = BlockDevStat {
+        read_sectors: 0,
+        write_sectors: 0,
+        read_ios: 0,
+        write_ios: 0,
+        io_ticks: 0,
+    };
+
+    for (i, line) in text.lines().enumerate() {
+        if i < 2 {
+            continue;
+        }
+
+        let mut parts = line.split_ascii_whitespace();
+        let name = parts.next();
+        parts.next(); // discard type
+        let value = parts.next().ok_or_else(|| format_err!("no value found"))?;
+        match name {
+            Some("dataset_name") => dataset_name = value.to_string(),
+            Some("writes") => stat.write_ios = value.parse().unwrap_or_default(),
+            Some("nwritten") => stat.write_sectors = value.parse::<u64>().unwrap_or_default() / 512,
+            Some("reads") => stat.read_ios = value.parse().unwrap_or_default(),
+            Some("nread") => stat.read_sectors = value.parse::<u64>().unwrap_or_default() / 512,
+            _ => {}
+        }
+    }
+
+    Ok((dataset_name, stat))
+}
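The objset-file format described in the comment above parses with plain string splitting; a hypothetical stdlib-only sketch (keeping just the dataset name plus two counters, rather than the full `BlockDevStat`):

```rust
/// Hypothetical stdlib-only sketch of `parse_objset_stat`: skip the two
/// header lines, then read (name, type, value) columns and keep the
/// fields of interest.
fn parse_objset(text: &str) -> (String, u64, u64) {
    let mut dataset = String::new();
    let (mut writes, mut reads) = (0u64, 0u64);
    for line in text.lines().skip(2) {
        let mut parts = line.split_ascii_whitespace();
        let name = parts.next();
        parts.next(); // discard the type column
        let value = parts.next().unwrap_or("0");
        match name {
            Some("dataset_name") => dataset = value.to_string(),
            Some("writes") => writes = value.parse().unwrap_or_default(),
            Some("reads") => reads = value.parse().unwrap_or_default(),
            _ => {}
        }
    }
    (dataset, writes, reads)
}
```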
+
+fn get_mapping(dataset: &str) -> Option<(String, String)> {
+    ZFS_DATASET_OBJSET_MAP
+        .lock()
+        .unwrap()
+        .get(dataset)
+        .map(|c| c.to_owned())
+}
+
+/// Updates the dataset <-> objset_map
+pub(crate) fn update_zfs_objset_map(pool: &str) -> Result<(), Error> {
+    let mut map = ZFS_DATASET_OBJSET_MAP.lock().unwrap();
+    map.clear();
+    let path = PathBuf::from(format!("{ZFS_KSTAT_BASE_PATH}/{pool}"));
+
+    proxmox_sys::fs::scandir(
+        libc::AT_FDCWD,
+        &path,
+        &OBJSET_REGEX,
+        |_l2_fd, filename, _type| {
+            let (name, _) = parse_objset_stat(pool, filename)?;
+            map.insert(name, (pool.to_string(), filename.to_string()));
+            Ok(())
+        },
+    )?;
+
+    Ok(())
+}
+
+/// Gets io stats for the dataset from /proc/spl/kstat/zfs/POOL/objset-ID
+pub fn zfs_dataset_stats(dataset: &str) -> Result<BlockDevStat, Error> {
+    let mut mapping = get_mapping(dataset);
+    if mapping.is_none() {
+        let pool = get_pool_from_dataset(dataset);
+        update_zfs_objset_map(pool)?;
+        mapping = get_mapping(dataset);
+    }
+    let (pool, objset_id) =
+        mapping.ok_or_else(|| format_err!("could not find objset id for dataset"))?;
+
+    match parse_objset_stat(&pool, &objset_id) {
+        Ok((_, stat)) => Ok(stat),
+        Err(err) => {
+            // on error remove dataset from map, it probably vanished or the
+            // mapping was incorrect
+            ZFS_DATASET_OBJSET_MAP.lock().unwrap().remove(dataset);
+            Err(err)
+        }
+    }
+}
diff --git a/proxmox-disks/src/zpool_list.rs b/proxmox-disks/src/zpool_list.rs
new file mode 100644
index 00000000..4083629f
--- /dev/null
+++ b/proxmox-disks/src/zpool_list.rs
@@ -0,0 +1,294 @@
+use anyhow::{bail, Error};
+
+use crate::parse_helpers::{notspace1, IResult};
+
+use nom::{
+    bytes::complete::{take_till, take_till1, take_while1},
+    character::complete::{char, digit1, line_ending, space0, space1},
+    combinator::{all_consuming, map_res, opt, recognize},
+    multi::many0,
+    sequence::{preceded, tuple},
+};
+
+#[derive(Debug, PartialEq)]
+pub struct ZFSPoolUsage {
+    pub size: u64,
+    pub alloc: u64,
+    pub free: u64,
+    pub dedup: f64,
+    pub frag: u64,
+}
+
+#[derive(Debug, PartialEq)]
+pub struct ZFSPoolInfo {
+    pub name: String,
+    pub health: String,
+    pub usage: Option<ZFSPoolUsage>,
+    pub devices: Vec<String>,
+}
+
+fn parse_optional_u64(i: &str) -> IResult<&str, Option<u64>> {
+    if let Some(rest) = i.strip_prefix('-') {
+        Ok((rest, None))
+    } else {
+        let (i, value) = map_res(recognize(digit1), str::parse)(i)?;
+        Ok((i, Some(value)))
+    }
+}
+
+fn parse_optional_f64(i: &str) -> IResult<&str, Option<f64>> {
+    if let Some(rest) = i.strip_prefix('-') {
+        Ok((rest, None))
+    } else {
+        let (i, value) = nom::number::complete::double(i)?;
+        Ok((i, Some(value)))
+    }
+}
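The `-` handling shared by `parse_optional_u64` and `parse_optional_f64` reflects that `zpool list -p` prints `-` for fields that do not apply (e.g. on `special`/`logs` rows); a hypothetical stdlib sketch, collapsing parse errors into `None` for brevity where the nom versions report them:

```rust
/// Stdlib sketch of the optional-field handling: `-` means "not
/// applicable" and maps to None instead of a parse failure.
fn optional_u64(field: &str) -> Option<u64> {
    if field == "-" {
        None
    } else {
        field.parse().ok()
    }
}
```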
+
+fn parse_pool_device(i: &str) -> IResult<&str, String> {
+    let (i, (device, _, _rest)) = tuple((
+        preceded(space1, take_till1(|c| c == ' ' || c == '\t')),
+        space1,
+        preceded(take_till(|c| c == '\n'), char('\n')),
+    ))(i)?;
+
+    Ok((i, device.to_string()))
+}
+
+fn parse_zpool_list_header(i: &str) -> IResult<&str, ZFSPoolInfo> {
+    // name, size, allocated, free, checkpoint, expandsize, fragmentation, capacity, dedupratio, health, altroot.
+
+    let (i, (text, size, alloc, free, _, _, frag, _, dedup, health, _altroot, _eol)) = tuple((
+        take_while1(|c| char::is_alphanumeric(c) || c == '-' || c == ':' || c == '_' || c == '.'), // name
+        preceded(space1, parse_optional_u64), // size
+        preceded(space1, parse_optional_u64), // allocated
+        preceded(space1, parse_optional_u64), // free
+        preceded(space1, notspace1),          // checkpoint
+        preceded(space1, notspace1),          // expandsize
+        preceded(space1, parse_optional_u64), // fragmentation
+        preceded(space1, notspace1),          // capacity
+        preceded(space1, parse_optional_f64), // dedup
+        preceded(space1, notspace1),          // health
+        opt(preceded(space1, notspace1)),     // optional altroot
+        line_ending,
+    ))(i)?;
+
+    let status = if let (Some(size), Some(alloc), Some(free), Some(frag), Some(dedup)) =
+        (size, alloc, free, frag, dedup)
+    {
+        ZFSPoolInfo {
+            name: text.into(),
+            health: health.into(),
+            usage: Some(ZFSPoolUsage {
+                size,
+                alloc,
+                free,
+                frag,
+                dedup,
+            }),
+            devices: Vec::new(),
+        }
+    } else {
+        ZFSPoolInfo {
+            name: text.into(),
+            health: health.into(),
+            usage: None,
+            devices: Vec::new(),
+        }
+    };
+
+    Ok((i, status))
+}
+
+fn parse_zpool_list_item(i: &str) -> IResult<&str, ZFSPoolInfo> {
+    let (i, mut stat) = parse_zpool_list_header(i)?;
+    let (i, devices) = many0(parse_pool_device)(i)?;
+
+    for device_path in devices.into_iter().filter(|n| n.starts_with("/dev/")) {
+        stat.devices.push(device_path);
+    }
+
+    let (i, _) = many0(tuple((space0, char('\n'))))(i)?; // skip empty lines
+
+    Ok((i, stat))
+}
+
+/// Parse zpool list output
+///
+/// Note: This does not reveal any details on how the pool uses the devices, because
+/// the zpool list output format is not really defined...
+fn parse_zpool_list(i: &str) -> Result<Vec<ZFSPoolInfo>, Error> {
+    match all_consuming(many0(parse_zpool_list_item))(i) {
+        Err(nom::Err::Error(err)) | Err(nom::Err::Failure(err)) => {
+            bail!(
+                "unable to parse zfs list output - {}",
+                nom::error::convert_error(i, err)
+            );
+        }
+        Err(err) => {
+            bail!("unable to parse zfs list output - {}", err);
+        }
+        Ok((_, ce)) => Ok(ce),
+    }
+}
+
+/// Run zpool list and return parsed output
+///
+/// Devices are only included when run with the verbose flag
+/// set; without it, device lists are empty.
+pub fn zpool_list(pool: Option<&String>, verbose: bool) -> Result<Vec<ZFSPoolInfo>, Error> {
+    // Note: zpool list verbose output can include entries for 'special', 'cache' and 'logs'
+    // and maybe other things.
+
+    let mut command = std::process::Command::new("zpool");
+    command.args(["list", "-H", "-p", "-P"]);
+
+    // Note: We do not use -o to define output properties, because zpool command ignores
+    // that completely for special vdevs and devices
+
+    if verbose {
+        command.arg("-v");
+    }
+
+    if let Some(pool) = pool {
+        command.arg(pool);
+    }
+
+    let output = proxmox_sys::command::run_command(command, None)?;
+
+    parse_zpool_list(&output)
+}
+
+#[test]
+fn test_zfs_parse_list() -> Result<(), Error> {
+    let output = "";
+
+    let data = parse_zpool_list(output)?;
+    let expect = Vec::new();
+
+    assert_eq!(data, expect);
+
+    let output = "btest	427349245952	405504	427348840448	-	-	0	0	1.00	ONLINE	-\n";
+    let data = parse_zpool_list(output)?;
+    let expect = vec![ZFSPoolInfo {
+        name: "btest".to_string(),
+        health: "ONLINE".to_string(),
+        devices: Vec::new(),
+        usage: Some(ZFSPoolUsage {
+            size: 427349245952,
+            alloc: 405504,
+            free: 427348840448,
+            dedup: 1.0,
+            frag: 0,
+        }),
+    }];
+
+    assert_eq!(data, expect);
+
+    let output = "\
+rpool	535260299264      402852388864      132407910400      -          -          22         75         1.00      ONLINE   -
+            /dev/disk/by-id/ata-Crucial_CT500MX200SSD1_154210EB4078-part3    498216206336      392175546368      106040659968      -          -          22         78         -          ONLINE
+special                                                                                             -         -         -            -             -         -         -         -   -
+            /dev/sda2          37044092928       10676842496       26367250432       -          -          63         28         -          ONLINE
+logs                                                                                                 -         -         -            -             -         -         -         -   -
+            /dev/sda3          4831838208         1445888 4830392320         -          -          0          0          -          ONLINE
+
+";
+
+    let data = parse_zpool_list(output)?;
+    let expect = vec![
+        ZFSPoolInfo {
+            name: String::from("rpool"),
+            health: String::from("ONLINE"),
+            devices: vec![String::from(
+                "/dev/disk/by-id/ata-Crucial_CT500MX200SSD1_154210EB4078-part3",
+            )],
+            usage: Some(ZFSPoolUsage {
+                size: 535260299264,
+                alloc: 402852388864,
+                free: 132407910400,
+                dedup: 1.0,
+                frag: 22,
+            }),
+        },
+        ZFSPoolInfo {
+            name: String::from("special"),
+            health: String::from("-"),
+            devices: vec![String::from("/dev/sda2")],
+            usage: None,
+        },
+        ZFSPoolInfo {
+            name: String::from("logs"),
+            health: String::from("-"),
+            devices: vec![String::from("/dev/sda3")],
+            usage: None,
+        },
+    ];
+
+    assert_eq!(data, expect);
+
+    let output = "\
+b-test	427349245952	761856	427348484096	-	-	0	0	1.00	ONLINE	-
+	mirror	213674622976	438272	213674184704	-	-	0	0	-	ONLINE
+	/dev/sda1	-	-	-	-	-	-	-	-	ONLINE
+	/dev/sda2	-	-	-	-	-	-	-	-	ONLINE
+	mirror	213674622976	323584	213674299392	-	-	0	0	-	ONLINE
+	/dev/sda3	-	-	-	-	-	-	-	-	ONLINE
+	/dev/sda4	-	-	-	-	-	-	-	-	ONLINE
+logs               -      -      -        -         -      -      -      -  -
+	/dev/sda5	213674622976	0	213674622976	-	-	0	0	-	ONLINE
+";
+
+    let data = parse_zpool_list(output)?;
+    let expect = vec![
+        ZFSPoolInfo {
+            name: String::from("b-test"),
+            health: String::from("ONLINE"),
+            usage: Some(ZFSPoolUsage {
+                size: 427349245952,
+                alloc: 761856,
+                free: 427348484096,
+                dedup: 1.0,
+                frag: 0,
+            }),
+            devices: vec![
+                String::from("/dev/sda1"),
+                String::from("/dev/sda2"),
+                String::from("/dev/sda3"),
+                String::from("/dev/sda4"),
+            ],
+        },
+        ZFSPoolInfo {
+            name: String::from("logs"),
+            health: String::from("-"),
+            usage: None,
+            devices: vec![String::from("/dev/sda5")],
+        },
+    ];
+
+    assert_eq!(data, expect);
+
+    let output = "\
+b.test	427349245952	761856	427348484096	-	-	0	0	1.00	ONLINE	-
+	mirror	213674622976	438272	213674184704	-	-	0	0	-	ONLINE
+	/dev/sda1	-	-	-	-	-	-	-	-	ONLINE
+";
+
+    let data = parse_zpool_list(output)?;
+    let expect = vec![ZFSPoolInfo {
+        name: String::from("b.test"),
+        health: String::from("ONLINE"),
+        usage: Some(ZFSPoolUsage {
+            size: 427349245952,
+            alloc: 761856,
+            free: 427348484096,
+            dedup: 1.0,
+            frag: 0,
+        }),
+        devices: vec![String::from("/dev/sda1")],
+    }];
+
+    assert_eq!(data, expect);
+
+    Ok(())
+}
diff --git a/proxmox-disks/src/zpool_status.rs b/proxmox-disks/src/zpool_status.rs
new file mode 100644
index 00000000..674dbe63
--- /dev/null
+++ b/proxmox-disks/src/zpool_status.rs
@@ -0,0 +1,496 @@
+use std::mem::{replace, take};
+
+use anyhow::{bail, Error};
+use serde::{Deserialize, Serialize};
+use serde_json::{Map, Value};
+
+use crate::parse_helpers::{notspace1, parse_complete, parse_error, parse_failure, IResult};
+
+use nom::{
+    bytes::complete::{tag, take_while, take_while1},
+    character::complete::{line_ending, space0, space1},
+    combinator::opt,
+    error::VerboseError,
+    multi::{many0, many1},
+    sequence::preceded,
+};
+
+#[derive(Debug, Serialize, Deserialize)]
+pub struct ZFSPoolVDevState {
+    pub name: String,
+    pub lvl: u64,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub state: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub read: Option<u64>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub write: Option<u64>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub cksum: Option<u64>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub msg: Option<String>,
+}
+
+fn expand_tab_length(input: &str) -> usize {
+    input.chars().map(|c| if c == '\t' { 8 } else { 1 }).sum()
+}
+
+fn parse_zpool_status_vdev(i: &str) -> IResult<&str, ZFSPoolVDevState> {
+    let (n, indent) = space0(i)?;
+
+    let indent_len = expand_tab_length(indent);
+
+    if (indent_len & 1) != 0 {
+        return Err(parse_failure(n, "wrong indent length"));
+    }
+    let i = n;
+
+    let indent_level = (indent_len as u64) / 2;
+
+    let (i, vdev_name) = notspace1(i)?;
+
+    if let Ok((n, _)) = preceded(space0::<&str, VerboseError<&str>>, line_ending)(i) {
+        // special device
+        let vdev = ZFSPoolVDevState {
+            name: vdev_name.to_string(),
+            lvl: indent_level,
+            state: None,
+            read: None,
+            write: None,
+            cksum: None,
+            msg: None,
+        };
+        return Ok((n, vdev));
+    }
+
+    let (i, state) = preceded(space1, notspace1)(i)?;
+    if let Ok((n, _)) = preceded(space0::<&str, VerboseError<&str>>, line_ending)(i) {
+        // spares
+        let vdev = ZFSPoolVDevState {
+            name: vdev_name.to_string(),
+            lvl: indent_level,
+            state: Some(state.to_string()),
+            read: None,
+            write: None,
+            cksum: None,
+            msg: None,
+        };
+        return Ok((n, vdev));
+    }
+
+    let (i, read) = preceded(space1, nom::character::complete::u64)(i)?;
+    let (i, write) = preceded(space1, nom::character::complete::u64)(i)?;
+    let (i, cksum) = preceded(space1, nom::character::complete::u64)(i)?;
+    let (i, msg) = opt(preceded(space1, take_while(|c| c != '\n')))(i)?;
+    let (i, _) = line_ending(i)?;
+
+    let vdev = ZFSPoolVDevState {
+        name: vdev_name.to_string(),
+        lvl: indent_level,
+        state: Some(state.to_string()),
+        read: Some(read),
+        write: Some(write),
+        cksum: Some(cksum),
+        msg: msg.map(String::from),
+    };
+
+    Ok((i, vdev))
+}
+
+fn parse_zpool_status_tree(i: &str) -> IResult<&str, Vec<ZFSPoolVDevState>> {
+    // skip header
+    let (i, _) = tag("NAME")(i)?;
+    let (i, _) = space1(i)?;
+    let (i, _) = tag("STATE")(i)?;
+    let (i, _) = space1(i)?;
+    let (i, _) = tag("READ")(i)?;
+    let (i, _) = space1(i)?;
+    let (i, _) = tag("WRITE")(i)?;
+    let (i, _) = space1(i)?;
+    let (i, _) = tag("CKSUM")(i)?;
+    let (i, _) = line_ending(i)?;
+
+    // parse vdev list
+    many1(parse_zpool_status_vdev)(i)
+}
+
+fn space_indented_line(indent: usize) -> impl Fn(&str) -> IResult<&str, &str> {
+    move |i| {
+        let mut len = 0;
+        let mut n = i;
+        loop {
+            if n.starts_with('\t') {
+                len += 8;
+            } else if n.starts_with(' ') {
+                len += 1;
+            } else {
+                break;
+            }
+            n = &n[1..];
+            if len >= indent {
+                break;
+            }
+        }
+        if len != indent {
+            return Err(parse_error(i, "not correctly indented"));
+        }
+
+        take_while1(|c| c != '\n')(n)
+    }
+}
+
+fn parse_zpool_status_field(i: &str) -> IResult<&str, (String, String)> {
+    let (i, prefix) = take_while1(|c| c != ':')(i)?;
+    let (i, _) = tag(":")(i)?;
+    let (i, mut value) = take_while(|c| c != '\n')(i)?;
+    if value.starts_with(' ') {
+        value = &value[1..];
+    }
+
+    let (mut i, _) = line_ending(i)?;
+
+    let field = prefix.trim().to_string();
+
+    let prefix_len = expand_tab_length(prefix);
+
+    let indent: usize = prefix_len + 2;
+
+    let mut parse_continuation = opt(space_indented_line(indent));
+
+    let mut value = value.to_string();
+
+    if field == "config" {
+        let (n, _) = line_ending(i)?;
+        i = n;
+    }
+
+    loop {
+        let (n, cont) = parse_continuation(i)?;
+
+        if let Some(cont) = cont {
+            let (n, _) = line_ending(n)?;
+            i = n;
+            if !value.is_empty() {
+                value.push('\n');
+            }
+            value.push_str(cont);
+        } else {
+            if field == "config" {
+                let (n, _) = line_ending(i)?;
+                value.push('\n');
+                i = n;
+            }
+            break;
+        }
+    }
+
+    Ok((i, (field, value)))
+}
+
+pub fn parse_zpool_status_config_tree(i: &str) -> Result<Vec<ZFSPoolVDevState>, Error> {
+    parse_complete("zfs status config tree", i, parse_zpool_status_tree)
+}
+
+fn parse_zpool_status(input: &str) -> Result<Vec<(String, String)>, Error> {
+    parse_complete("zfs status output", input, many0(parse_zpool_status_field))
+}
+
+pub fn vdev_list_to_tree(vdev_list: &[ZFSPoolVDevState]) -> Result<Value, Error> {
+    indented_list_to_tree(vdev_list, |vdev| {
+        let node = serde_json::to_value(vdev).unwrap();
+        (node, vdev.lvl)
+    })
+}
+
+fn indented_list_to_tree<'a, T, F, I>(items: I, to_node: F) -> Result<Value, Error>
+where
+    T: 'a,
+    I: IntoIterator<Item = &'a T>,
+    F: Fn(&T) -> (Value, u64),
+{
+    struct StackItem {
+        node: Map<String, Value>,
+        level: u64,
+        children_of_parent: Vec<Value>,
+    }
+
+    let mut stack = Vec::<StackItem>::new();
+    // hold current node and the children of the current parent (as that's where we insert)
+    let mut cur = StackItem {
+        node: Map::<String, Value>::new(),
+        level: 0,
+        children_of_parent: Vec::new(),
+    };
+
+    for item in items {
+        let (node, node_level) = to_node(item);
+        let vdev_level = 1 + node_level;
+        let mut node = match node {
+            Value::Object(map) => map,
+            _ => bail!("to_node returned wrong type"),
+        };
+
+        node.insert("leaf".to_string(), Value::Bool(true));
+
+        // if required, go back up (possibly multiple levels):
+        while vdev_level < cur.level {
+            cur.children_of_parent.push(Value::Object(cur.node));
+            let mut parent = stack.pop().unwrap();
+            parent
+                .node
+                .insert("children".to_string(), Value::Array(cur.children_of_parent));
+            parent.node.insert("leaf".to_string(), Value::Bool(false));
+            cur = parent;
+
+            if vdev_level > cur.level {
+                // when we encounter mismatching levels like "0, 2, 1" instead of "0, 1, 2, 1"
+                bail!("broken indentation between levels");
+            }
+        }
+
+        if vdev_level > cur.level {
+            // indented further, push our current state and start a new "map"
+            stack.push(StackItem {
+                node: replace(&mut cur.node, node),
+                level: replace(&mut cur.level, vdev_level),
+                children_of_parent: take(&mut cur.children_of_parent),
+            });
+        } else {
+            // same indentation level, add to children of the previous level:
+            cur.children_of_parent
+                .push(Value::Object(replace(&mut cur.node, node)));
+        }
+    }
+
+    while !stack.is_empty() {
+        cur.children_of_parent.push(Value::Object(cur.node));
+        let mut parent = stack.pop().unwrap();
+        parent
+            .node
+            .insert("children".to_string(), Value::Array(cur.children_of_parent));
+        parent.node.insert("leaf".to_string(), Value::Bool(false));
+        cur = parent;
+    }
+
+    Ok(Value::Object(cur.node))
+}
+
+#[test]
+fn test_vdev_list_to_tree() {
+    const DEFAULT: ZFSPoolVDevState = ZFSPoolVDevState {
+        name: String::new(),
+        lvl: 0,
+        state: None,
+        read: None,
+        write: None,
+        cksum: None,
+        msg: None,
+    };
+
+    #[rustfmt::skip]
+    let input = vec![
+        //ZFSPoolVDevState { name: "root".to_string(), lvl: 0, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev1".to_string(), lvl: 1, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev1-disk1".to_string(), lvl: 2, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev1-disk2".to_string(), lvl: 2, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev2".to_string(), lvl: 1, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev2-g1".to_string(), lvl: 2, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev2-g1-d1".to_string(), lvl: 3, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev2-g1-d2".to_string(), lvl: 3, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev2-g2".to_string(), lvl: 2, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev3".to_string(), lvl: 1, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev4".to_string(), lvl: 1, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev4-g1".to_string(), lvl: 2, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev4-g1-d1".to_string(), lvl: 3, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev4-g1-d1-x1".to_string(), lvl: 4, ..DEFAULT },
+        ZFSPoolVDevState { name: "vdev4-g2".to_string(), lvl: 2, ..DEFAULT }, // up by 2
+    ];
+
+    const EXPECTED: &str = "{\
+        \"children\":[{\
+            \"children\":[{\
+                \"leaf\":true,\
+                \"lvl\":2,\"name\":\"vdev1-disk1\"\
+            },{\
+                \"leaf\":true,\
+                \"lvl\":2,\"name\":\"vdev1-disk2\"\
+            }],\
+            \"leaf\":false,\
+            \"lvl\":1,\"name\":\"vdev1\"\
+        },{\
+            \"children\":[{\
+                \"children\":[{\
+                    \"leaf\":true,\
+                    \"lvl\":3,\"name\":\"vdev2-g1-d1\"\
+                },{\
+                    \"leaf\":true,\
+                    \"lvl\":3,\"name\":\"vdev2-g1-d2\"\
+                }],\
+                \"leaf\":false,\
+                \"lvl\":2,\"name\":\"vdev2-g1\"\
+            },{\
+                \"leaf\":true,\
+                \"lvl\":2,\"name\":\"vdev2-g2\"\
+            }],\
+            \"leaf\":false,\
+            \"lvl\":1,\"name\":\"vdev2\"\
+        },{\
+            \"leaf\":true,\
+            \"lvl\":1,\"name\":\"vdev3\"\
+        },{\
+            \"children\":[{\
+                \"children\":[{\
+                    \"children\":[{\
+                        \"leaf\":true,\
+                        \"lvl\":4,\"name\":\"vdev4-g1-d1-x1\"\
+                    }],\
+                    \"leaf\":false,\
+                    \"lvl\":3,\"name\":\"vdev4-g1-d1\"\
+                }],\
+                \"leaf\":false,\
+                \"lvl\":2,\"name\":\"vdev4-g1\"\
+            },{\
+                \"leaf\":true,\
+                \"lvl\":2,\"name\":\"vdev4-g2\"\
+            }],\
+            \"leaf\":false,\
+            \"lvl\":1,\"name\":\"vdev4\"\
+        }],\
+        \"leaf\":false\
+   }";
+    let expected: Value =
+        serde_json::from_str(EXPECTED).expect("failed to parse expected json value");
+
+    let tree = vdev_list_to_tree(&input).expect("failed to turn valid vdev list into a tree");
+    assert_eq!(tree, expected);
+}
+
+pub fn zpool_status(pool: &str) -> Result<Vec<(String, String)>, Error> {
+    let mut command = std::process::Command::new("zpool");
+    command.args(["status", "-p", "-P", pool]);
+
+    let output = proxmox_sys::command::run_command(command, None)?;
+
+    parse_zpool_status(&output)
+}
+
+#[cfg(test)]
+fn test_parse(output: &str) -> Result<(), Error> {
+    let mut found_config = false;
+
+    for (k, v) in parse_zpool_status(output)? {
+        println!("<{k}> => '{v}'");
+        if k == "config" {
+            let vdev_list = parse_zpool_status_config_tree(&v)?;
+            let _tree = vdev_list_to_tree(&vdev_list);
+            found_config = true;
+        }
+    }
+    if !found_config {
+        bail!("got zpool status without config key");
+    }
+
+    Ok(())
+}
+
+#[test]
+fn test_zpool_status_parser() -> Result<(), Error> {
+    let output = r###"  pool: tank
+ state: DEGRADED
+status: One or more devices could not be opened.  Sufficient replicas exist for
+	the pool to continue functioning in a degraded state.
+action: Attach the missing device and online it using 'zpool online'.
+   see: http://www.sun.com/msg/ZFS-8000-2Q
+ scrub: none requested
+config:
+
+	NAME        STATE     READ WRITE CKSUM
+	tank        DEGRADED     0     0     0
+	  mirror-0  DEGRADED     0     0     0
+	    c1t0d0  ONLINE       0     0     0
+	    c1t2d0  ONLINE       0     0     0
+	    c1t1d0  UNAVAIL      0     0     0  cannot open
+	  mirror-1  DEGRADED     0     0     0
+	tank1       DEGRADED     0     0     0
+	tank2       DEGRADED     0     0     0
+
+errors: No known data errors
+"###;
+
+    test_parse(output)
+}
+
+#[test]
+fn test_zpool_status_parser2() -> Result<(), Error> {
+    // Note: this input contains TABS
+    let output = r###"  pool: btest
+ state: ONLINE
+  scan: none requested
+config:
+
+	NAME           STATE     READ WRITE CKSUM
+	btest          ONLINE       0     0     0
+	  mirror-0     ONLINE       0     0     0
+	    /dev/sda1  ONLINE       0     0     0
+	    /dev/sda2  ONLINE       0     0     0
+	  mirror-1     ONLINE       0     0     0
+	    /dev/sda3  ONLINE       0     0     0
+	    /dev/sda4  ONLINE       0     0     0
+	logs
+	  /dev/sda5    ONLINE       0     0     0
+
+errors: No known data errors
+"###;
+    test_parse(output)
+}
+
+#[test]
+fn test_zpool_status_parser3() -> Result<(), Error> {
+    let output = r###"  pool: bt-est
+ state: ONLINE
+  scan: none requested
+config:
+
+	NAME           STATE     READ WRITE CKSUM
+	bt-est          ONLINE       0     0     0
+	  mirror-0     ONLINE       0     0     0
+	    /dev/sda1  ONLINE       0     0     0
+	    /dev/sda2  ONLINE       0     0     0
+	  mirror-1     ONLINE       0     0     0
+	    /dev/sda3  ONLINE       0     0     0
+	    /dev/sda4  ONLINE       0     0     0
+	logs
+	  /dev/sda5    ONLINE       0     0     0
+
+errors: No known data errors
+"###;
+
+    test_parse(output)
+}
+
+#[test]
+fn test_zpool_status_parser_spares() -> Result<(), Error> {
+    let output = r###"  pool: tank
+ state: ONLINE
+  scan: none requested
+config:
+
+	NAME           STATE     READ WRITE CKSUM
+	tank          ONLINE       0     0     0
+	  mirror-0     ONLINE       0     0     0
+	    /dev/sda1  ONLINE       0     0     0
+	    /dev/sda2  ONLINE       0     0     0
+	  mirror-1     ONLINE       0     0     0
+	    /dev/sda3  ONLINE       0     0     0
+	    /dev/sda4  ONLINE       0     0     0
+	logs
+	  /dev/sda5    ONLINE       0     0     0
+        spares
+          /dev/sdb     AVAIL
+          /dev/sdc     AVAIL
+
+errors: No known data errors
+"###;
+
+    test_parse(output)
+}
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 07/26] disks: fix typo in `initialize_gpt_disk`
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (5 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 06/26] disks: import from Proxmox Backup Server Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 08/26] disks: add parts of gather_disk_stats from PBS Lukas Wagner
                   ` (19 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 proxmox-disks/src/lib.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/proxmox-disks/src/lib.rs b/proxmox-disks/src/lib.rs
index e6056c14..d672854a 100644
--- a/proxmox-disks/src/lib.rs
+++ b/proxmox-disks/src/lib.rs
@@ -1136,7 +1136,7 @@ pub fn reread_partition_table(disk: &Disk) -> Result<(), Error> {
 }
 
 /// Initialize disk by writing a GPT partition table
-pub fn inititialize_gpt_disk(disk: &Disk, uuid: Option<&str>) -> Result<(), Error> {
+pub fn initialize_gpt_disk(disk: &Disk, uuid: Option<&str>) -> Result<(), Error> {
     let disk_path = match disk.device_path() {
         Some(path) => path,
         None => bail!("disk {:?} has no node in /dev", disk.syspath()),
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 08/26] disks: add parts of gather_disk_stats from PBS
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (6 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 07/26] disks: fix typo in `initialize_gpt_disk` Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 09/26] disks: gate api macro behind 'api-types' feature Lukas Wagner
                   ` (18 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Move one part of the `gather_disk_stats` function from PBS to
proxmox-disks as a new method of the `DiskManage` type. We need this in
both PBS and PDM, and its structure and behavior indicated that it
should rather be a method on this type.

The code has been refactored a bit to make it more readable.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 proxmox-disks/src/lib.rs | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/proxmox-disks/src/lib.rs b/proxmox-disks/src/lib.rs
index d672854a..4039b4cd 100644
--- a/proxmox-disks/src/lib.rs
+++ b/proxmox-disks/src/lib.rs
@@ -189,6 +189,43 @@ impl DiskManage {
     pub fn is_devnum_mounted(&self, dev: dev_t) -> Result<bool, Error> {
         self.mounted_devices().map(|mounted| mounted.contains(&dev))
     }
+
+    /// Query [`BlockDevStat`] for a given path.
+    pub fn blockdev_stat_for_path<P: AsRef<Path>>(
+        self: Arc<DiskManage>,
+        path: P,
+    ) -> Result<BlockDevStat, Error> {
+        let (fs_type, device, mount_source) = self
+            .find_mounted_device(path.as_ref())
+            .context("find_mounted_device failed")?
+            .ok_or_else(|| {
+                format_err!(
+                    "could not determine mounted device for path {}",
+                    path.as_ref().display()
+                )
+            })?;
+
+        if let Some(source) = mount_source
+            && fs_type == "zfs"
+        {
+            let dataset = source.into_string().map_err(|s| {
+                format_err!("could not convert {s:?} to string - invalid characters")
+            })?;
+
+            zfs_dataset_stats(&dataset)
+        } else {
+            let disk = self
+                .clone()
+                .disk_by_dev_num(device.into_dev_t())
+                .context("could not look up disk by device num")?;
+
+            disk.read_stat()
+                .with_context(|| format!("could not read stats for {}", path.as_ref().display()))?
+                .ok_or_else(|| {
+                    format_err!("could not read disk stats for {}", path.as_ref().display())
+                })
+        }
+    }
 }
 
 /// Queries (and caches) various information about a specific disk.
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 09/26] disks: gate api macro behind 'api-types' feature
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (7 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 08/26] disks: add parts of gather_disk_stats from PBS Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 10/26] disks: clippy: collapse if-let chains where possible Lukas Wagner
                   ` (17 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Some users of this crate don't need the derived API types, so it is best
to gate them behind a feature.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 proxmox-disks/Cargo.toml   |  6 +++++-
 proxmox-disks/src/lib.rs   | 18 ++++++++++--------
 proxmox-disks/src/smart.rs |  9 +++++----
 3 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/proxmox-disks/Cargo.toml b/proxmox-disks/Cargo.toml
index 29bf56fe..f581e464 100644
--- a/proxmox-disks/Cargo.toml
+++ b/proxmox-disks/Cargo.toml
@@ -26,5 +26,9 @@ proxmox-io.workspace = true
 proxmox-lang.workspace = true
 proxmox-log.workspace = true
 proxmox-parallel-handler.workspace = true
-proxmox-schema = { workspace = true, features = [ "api-macro", "api-types" ] }
+proxmox-schema = { workspace = true, features = [ "api-types" ], optional = true }
 proxmox-sys.workspace = true
+
+[features]
+default = [ "dep:proxmox-schema" ]
+api-types = [ "proxmox-schema/api-macro" ]
diff --git a/proxmox-disks/src/lib.rs b/proxmox-disks/src/lib.rs
index 4039b4cd..74907711 100644
--- a/proxmox-disks/src/lib.rs
+++ b/proxmox-disks/src/lib.rs
@@ -17,13 +17,15 @@ use ::serde::{Deserialize, Serialize};
 use proxmox_lang::{io_bail, io_format_err};
 use proxmox_log::info;
 use proxmox_parallel_handler::ParallelHandler;
-use proxmox_schema::api;
 use proxmox_sys::linux::procfs::{mountinfo::Device, MountInfo};
 
 use proxmox_schema::api_types::{
     BLOCKDEVICE_DISK_AND_PARTITION_NAME_REGEX, BLOCKDEVICE_NAME_REGEX, UUID_REGEX,
 };
 
+#[cfg(feature = "api-types")]
+use proxmox_schema::api;
+
 mod zfs;
 pub use zfs::*;
 mod zpool_status;
@@ -626,7 +628,7 @@ impl Disk {
     }
 }
 
-#[api()]
+#[cfg_attr(feature = "api-types", api)]
 #[derive(Debug, Serialize, Deserialize)]
 #[serde(rename_all = "lowercase")]
 /// This is just a rough estimate for a "type" of disk.
@@ -682,7 +684,7 @@ fn get_file_system_devices(lsblk_info: &[LsblkInfo]) -> Result<HashSet<u64>, Err
     Ok(device_set)
 }
 
-#[api()]
+#[cfg_attr(feature = "api-types", api)]
 #[derive(Debug, Serialize, Deserialize, Eq, PartialEq)]
 #[serde(rename_all = "lowercase")]
 /// What a block device partition is used for.
@@ -703,7 +705,7 @@ pub enum PartitionUsageType {
     FileSystem,
 }
 
-#[api()]
+#[cfg_attr(feature = "api-types", api)]
 #[derive(Debug, Serialize, Deserialize, Eq, PartialEq)]
 #[serde(rename_all = "lowercase")]
 /// What a block device (disk) is used for.
@@ -724,7 +726,7 @@ pub enum DiskUsageType {
     FileSystem,
 }
 
-#[api()]
+#[cfg_attr(feature = "api-types", api)]
 #[derive(Debug, Serialize, Deserialize)]
 #[serde(rename_all = "kebab-case")]
 /// Basic information about a partition
@@ -747,7 +749,7 @@ pub struct PartitionInfo {
     pub uuid: Option<String>,
 }
 
-#[api(
+#[cfg_attr(feature = "api-types", api(
     properties: {
         used: {
             type: DiskUsageType,
@@ -765,7 +767,7 @@ pub struct PartitionInfo {
             }
         }
     }
-)]
+))]
 #[derive(Debug, Serialize, Deserialize)]
 #[serde(rename_all = "kebab-case")]
 /// Information about how a Disk is used
@@ -1302,7 +1304,7 @@ pub fn create_single_linux_partition(disk: &Disk) -> Result<Disk, Error> {
     }
 }
 
-#[api()]
+#[cfg_attr(feature = "api-types", api)]
 #[derive(Debug, Copy, Clone, Serialize, Deserialize, Eq, PartialEq)]
 #[serde(rename_all = "lowercase")]
 /// A file system type supported by our tooling.
diff --git a/proxmox-disks/src/smart.rs b/proxmox-disks/src/smart.rs
index 1d41cee2..247bc4d3 100644
--- a/proxmox-disks/src/smart.rs
+++ b/proxmox-disks/src/smart.rs
@@ -7,9 +7,10 @@ use std::{
 use ::serde::{Deserialize, Serialize};
 use anyhow::Error;
 
+#[cfg(feature = "api-types")]
 use proxmox_schema::api;
 
-#[api()]
+#[cfg_attr(feature = "api-types", api)]
 #[derive(Debug, Serialize, Deserialize)]
 #[serde(rename_all = "lowercase")]
 /// SMART status
@@ -22,7 +23,7 @@ pub enum SmartStatus {
     Unknown,
 }
 
-#[api()]
+#[cfg_attr(feature = "api-types", api)]
 #[derive(Debug, Serialize, Deserialize)]
 /// SMART Attribute
 pub struct SmartAttribute {
@@ -51,7 +52,7 @@ pub struct SmartAttribute {
     threshold: Option<f64>,
 }
 
-#[api(
+#[cfg_attr(feature = "api-types", api(
     properties: {
         status: {
             type: SmartStatus,
@@ -69,7 +70,7 @@ pub struct SmartAttribute {
             },
         },
     },
-)]
+))]
 #[derive(Debug, Serialize, Deserialize)]
 /// Data from smartctl
 pub struct SmartData {
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 10/26] disks: clippy: collapse if-let chains where possible
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (8 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 09/26] disks: gate api macro behind 'api-types' feature Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox 11/26] procfs: add helpers for querying pressure stall information Lukas Wagner
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 proxmox-disks/src/lib.rs   | 21 ++++++++++-----------
 proxmox-disks/src/lvm.rs   | 10 +++++-----
 proxmox-disks/src/smart.rs |  8 ++++----
 proxmox-disks/src/zfs.rs   | 10 +++++-----
 4 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/proxmox-disks/src/lib.rs b/proxmox-disks/src/lib.rs
index 74907711..41e95b40 100644
--- a/proxmox-disks/src/lib.rs
+++ b/proxmox-disks/src/lib.rs
@@ -1009,21 +1009,20 @@ fn get_disks(
 
         let name = item.file_name().to_str().unwrap().to_string();
 
-        if let Some(ref disks) = disks {
-            if !disks.contains(&name) {
-                continue;
-            }
+        if let Some(ref disks) = disks
+            && !disks.contains(&name)
+        {
+            continue;
         }
 
         let sys_path = format!("/sys/block/{name}");
 
-        if let Ok(target) = std::fs::read_link(&sys_path) {
-            if let Some(target) = target.to_str() {
-                if ISCSI_PATH_REGEX.is_match(target) {
-                    continue;
-                } // skip iSCSI devices
-            }
-        }
+        if let Ok(target) = std::fs::read_link(&sys_path)
+            && let Some(target) = target.to_str()
+            && ISCSI_PATH_REGEX.is_match(target)
+        {
+            continue;
+        } // skip iSCSI devices
 
         let disk = disk_manager.clone().disk_by_sys_path(&sys_path)?;
 
diff --git a/proxmox-disks/src/lvm.rs b/proxmox-disks/src/lvm.rs
index 1456a21c..4c1ed4a4 100644
--- a/proxmox-disks/src/lvm.rs
+++ b/proxmox-disks/src/lvm.rs
@@ -34,11 +34,11 @@ pub fn get_lvm_devices(lsblk_info: &[LsblkInfo]) -> Result<HashSet<u64>, Error>
     let mut device_set: HashSet<u64> = HashSet::new();
 
     for info in lsblk_info.iter() {
-        if let Some(partition_type) = &info.partition_type {
-            if LVM_UUIDS.contains(partition_type.as_str()) {
-                let meta = std::fs::metadata(&info.path)?;
-                device_set.insert(meta.rdev());
-            }
+        if let Some(partition_type) = &info.partition_type
+            && LVM_UUIDS.contains(partition_type.as_str())
+        {
+            let meta = std::fs::metadata(&info.path)?;
+            device_set.insert(meta.rdev());
         }
     }
 
diff --git a/proxmox-disks/src/smart.rs b/proxmox-disks/src/smart.rs
index 247bc4d3..e27348a6 100644
--- a/proxmox-disks/src/smart.rs
+++ b/proxmox-disks/src/smart.rs
@@ -174,10 +174,10 @@ pub fn get_smart_data(disk_path: &Path, health_only: bool) -> Result<SmartData,
         for (name, value) in list {
             if name == "percentage_used" {
                 // extract wearout from nvme text, allow for decimal values
-                if let Some(v) = value.as_f64() {
-                    if v <= 100.0 {
-                        wearout = Some(100.0 - v);
-                    }
+                if let Some(v) = value.as_f64()
+                    && v <= 100.0
+                {
+                    wearout = Some(100.0 - v);
                 }
             }
             if let Some(value) = value.as_f64() {
diff --git a/proxmox-disks/src/zfs.rs b/proxmox-disks/src/zfs.rs
index 0babb887..874224a7 100644
--- a/proxmox-disks/src/zfs.rs
+++ b/proxmox-disks/src/zfs.rs
@@ -81,11 +81,11 @@ pub fn zfs_devices(lsblk_info: &[LsblkInfo], pool: Option<String>) -> Result<Has
     }
     if pool.is_none() {
         for info in lsblk_info.iter() {
-            if let Some(partition_type) = &info.partition_type {
-                if ZFS_UUIDS.contains(partition_type.as_str()) {
-                    let meta = std::fs::metadata(&info.path)?;
-                    device_set.insert(meta.rdev());
-                }
+            if let Some(partition_type) = &info.partition_type
+                && ZFS_UUIDS.contains(partition_type.as_str())
+            {
+                let meta = std::fs::metadata(&info.path)?;
+                device_set.insert(meta.rdev());
             }
         }
     }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 11/26] procfs: add helpers for querying pressure stall information
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (9 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 10/26] disks: clippy: collapse if-let chains where possible Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-16 13:25   ` Arthur Bied-Charreton
  2026-03-12 13:52 ` [PATCH proxmox 12/26] time: use u64 parse helper from nom Lukas Wagner
                   ` (15 subsequent siblings)
  26 siblings, 1 reply; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

This is put into a new crate, proxmox-procfs, since proxmox-sys is
already quite large and should be split in the future. The general idea
is that the contents of proxmox_sys::linux::procfs should be moved into
this new crate (potentially after some API cleanup) at some point.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 Cargo.toml                          |   2 +
 proxmox-procfs/Cargo.toml           |  18 ++
 proxmox-procfs/debian/changelog     |   5 +
 proxmox-procfs/debian/control       |  50 +++++
 proxmox-procfs/debian/copyright     |  18 ++
 proxmox-procfs/debian/debcargo.toml |   7 +
 proxmox-procfs/src/lib.rs           |   1 +
 proxmox-procfs/src/pressure.rs      | 334 ++++++++++++++++++++++++++++
 8 files changed, 435 insertions(+)
 create mode 100644 proxmox-procfs/Cargo.toml
 create mode 100644 proxmox-procfs/debian/changelog
 create mode 100644 proxmox-procfs/debian/control
 create mode 100644 proxmox-procfs/debian/copyright
 create mode 100644 proxmox-procfs/debian/debcargo.toml
 create mode 100644 proxmox-procfs/src/lib.rs
 create mode 100644 proxmox-procfs/src/pressure.rs

diff --git a/Cargo.toml b/Cargo.toml
index 8f3886bd..47847048 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -36,6 +36,7 @@ members = [
     "proxmox-openid",
     "proxmox-parallel-handler",
     "proxmox-pgp",
+    "proxmox-procfs",
     "proxmox-product-config",
     "proxmox-rate-limiter",
     "proxmox-resource-scheduling",
@@ -171,6 +172,7 @@ proxmox-login = { version = "1.0.0", path = "proxmox-login" }
 proxmox-network-types = { version = "1.0.0", path = "proxmox-network-types" }
 proxmox-parallel-handler = { version = "1.0.0", path = "proxmox-parallel-handler" }
 proxmox-pgp = { version = "1.0.0", path = "proxmox-pgp" }
+proxmox-procfs = { version = "0.1.0", path = "proxmox-procfs" }
 proxmox-product-config = { version = "1.0.0", path = "proxmox-product-config" }
 proxmox-config-digest = { version = "1.0.0", path = "proxmox-config-digest" }
 proxmox-rate-limiter = { version = "1.0.0", path = "proxmox-rate-limiter" }
diff --git a/proxmox-procfs/Cargo.toml b/proxmox-procfs/Cargo.toml
new file mode 100644
index 00000000..3c0fe1dd
--- /dev/null
+++ b/proxmox-procfs/Cargo.toml
@@ -0,0 +1,18 @@
+[package]
+name = "proxmox-procfs"
+description = "helpers for reading system information from /proc"
+version = "0.1.0"
+
+authors.workspace = true
+edition.workspace = true
+exclude.workspace = true
+homepage.workspace = true
+license.workspace = true
+repository.workspace = true
+
+[dependencies]
+serde = { workspace = true, optional = true, features = ["derive"] }
+thiserror.workspace = true
+
+[features]
+serde = ["dep:serde"]
diff --git a/proxmox-procfs/debian/changelog b/proxmox-procfs/debian/changelog
new file mode 100644
index 00000000..9eee8f0b
--- /dev/null
+++ b/proxmox-procfs/debian/changelog
@@ -0,0 +1,5 @@
+rust-proxmox-procfs (0.1.0-1) unstable; urgency=medium
+
+  * initial version
+
+ -- Proxmox Support Team <support@proxmox.com>  Thu, 26 Feb 2026 15:54:07 +0100
diff --git a/proxmox-procfs/debian/control b/proxmox-procfs/debian/control
new file mode 100644
index 00000000..6c4b4798
--- /dev/null
+++ b/proxmox-procfs/debian/control
@@ -0,0 +1,50 @@
+Source: rust-proxmox-procfs
+Section: rust
+Priority: optional
+Build-Depends: debhelper-compat (= 13),
+ dh-sequence-cargo
+Build-Depends-Arch: cargo:native <!nocheck>,
+ rustc:native <!nocheck>,
+ libstd-rust-dev <!nocheck>,
+ librust-thiserror-2+default-dev <!nocheck>
+Maintainer: Proxmox Support Team <support@proxmox.com>
+Standards-Version: 4.7.2
+Vcs-Git: git://git.proxmox.com/git/proxmox.git
+Vcs-Browser: https://git.proxmox.com/?p=proxmox.git
+Homepage: https://proxmox.com
+X-Cargo-Crate: proxmox-procfs
+
+Package: librust-proxmox-procfs-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-thiserror-2+default-dev
+Suggests:
+ librust-proxmox-procfs+serde-dev (= ${binary:Version})
+Provides:
+ librust-proxmox-procfs+default-dev (= ${binary:Version}),
+ librust-proxmox-procfs-0-dev (= ${binary:Version}),
+ librust-proxmox-procfs-0+default-dev (= ${binary:Version}),
+ librust-proxmox-procfs-0.1-dev (= ${binary:Version}),
+ librust-proxmox-procfs-0.1+default-dev (= ${binary:Version}),
+ librust-proxmox-procfs-0.1.0-dev (= ${binary:Version}),
+ librust-proxmox-procfs-0.1.0+default-dev (= ${binary:Version})
+Description: Helpers for reading system information from /proc - Rust source code
+ Source code for Debianized Rust crate "proxmox-procfs"
+
+Package: librust-proxmox-procfs+serde-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-proxmox-procfs-dev (= ${binary:Version}),
+ librust-serde-1+default-dev,
+ librust-serde-1+derive-dev
+Provides:
+ librust-proxmox-procfs-0+serde-dev (= ${binary:Version}),
+ librust-proxmox-procfs-0.1+serde-dev (= ${binary:Version}),
+ librust-proxmox-procfs-0.1.0+serde-dev (= ${binary:Version})
+Description: Helpers for reading system information from /proc - feature "serde"
+ This metapackage enables feature "serde" for the Rust proxmox-procfs crate, by
+ pulling in any additional dependencies needed by that feature.
diff --git a/proxmox-procfs/debian/copyright b/proxmox-procfs/debian/copyright
new file mode 100644
index 00000000..01138fa0
--- /dev/null
+++ b/proxmox-procfs/debian/copyright
@@ -0,0 +1,18 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+
+Files:
+ *
+Copyright: 2026 Proxmox Server Solutions GmbH <support@proxmox.com>
+License: AGPL-3.0-or-later
+ This program is free software: you can redistribute it and/or modify it under
+ the terms of the GNU Affero General Public License as published by the Free
+ Software Foundation, either version 3 of the License, or (at your option) any
+ later version.
+ .
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU Affero General Public License along
+ with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/proxmox-procfs/debian/debcargo.toml b/proxmox-procfs/debian/debcargo.toml
new file mode 100644
index 00000000..b7864cdb
--- /dev/null
+++ b/proxmox-procfs/debian/debcargo.toml
@@ -0,0 +1,7 @@
+overlay = "."
+crate_src_path = ".."
+maintainer = "Proxmox Support Team <support@proxmox.com>"
+
+[source]
+vcs_git = "git://git.proxmox.com/git/proxmox.git"
+vcs_browser = "https://git.proxmox.com/?p=proxmox.git"
diff --git a/proxmox-procfs/src/lib.rs b/proxmox-procfs/src/lib.rs
new file mode 100644
index 00000000..a5670f34
--- /dev/null
+++ b/proxmox-procfs/src/lib.rs
@@ -0,0 +1 @@
+pub mod pressure;
diff --git a/proxmox-procfs/src/pressure.rs b/proxmox-procfs/src/pressure.rs
new file mode 100644
index 00000000..452b3892
--- /dev/null
+++ b/proxmox-procfs/src/pressure.rs
@@ -0,0 +1,334 @@
+//! Utilities for reading [Pressure Stall Information][psi] for the system or cgroups.
+//!
+//! To read pressure data, refer to [`PressureData::read_system`] and [`PressureData::read_cgroup`].
+//! [`PressureData::read_file`] can be use for lower-level access, proving the path to the
+//! pressure file directly.
+//!
+//! # Examples
+//!
+//! Read system-wide CPU pressure:
+//!
+//! ```no_run
+//! use proxmox_procfs::pressure::{PressureData, Resource};
+//!
+//! let cpu = PressureData::read_system(Resource::Cpu).unwrap();
+//! println!("CPU some avg10: {:.2}%", cpu.some.average_10);
+//! ```
+//!
+//! Read cgroup-level memory pressure:
+//!
+//! ```no_run
+//! use proxmox_procfs::pressure::{PressureData, Resource};
+//!
+//! let mem = PressureData::read_cgroup("system.slice", Resource::Memory).unwrap();
+//! println!("mem some avg10: {:.2}%", mem.some.average_10);
+//! ```
+//!
+//! [psi]: https://docs.kernel.org/accounting/psi.html
+//!
+
+use std::ffi::OsStr;
+use std::fs::File;
+use std::io::{BufRead, BufReader, ErrorKind};
+use std::path::{Path, PathBuf};
+use std::str::FromStr;
+
+#[derive(thiserror::Error, Debug)]
+/// Error type for pressure-related errors.
+pub enum Error {
+    /// General IO error when reading the pressure stall information file.
+    #[error("could not read pressure stall info file: {0}")]
+    Io(#[from] std::io::Error),
+
+    /// Pressure stall info file does not exist.
+    /// This is a distinct error variant so that the caller can differentiate between a
+    /// disappeared cgroup (e.g. if the guest was stopped) and other kinds of IO errors.
+    #[error("pressure stall info file does not exist: {0}")]
+    NotFound(PathBuf),
+
+    /// The contents of the pressure stall file did not match the expected format.
+    /// This should not happen in practice.
+    #[error("unexpected pressure stall file format: {0}")]
+    InvalidFormat(String),
+}
+
+#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
+#[derive(Clone, Debug)]
+/// Pressure stall information data.
+pub struct PressureData {
+    /// At least some tasks were stalled on a given resource.
+    pub some: PressureRecord,
+    /// All non-idle tasks were stalled on a given resource.
+    ///
+    /// Note: When querying CPU pressure stall information on a system level,
+    /// all members in `full` contain 0 (see [here]).
+    ///
+    /// [here]: https://docs.kernel.org/accounting/psi.html#pressure-interface
+    pub full: PressureRecord,
+}
+
+#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
+#[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))]
+#[derive(Clone, Debug)]
+/// Individual record corresponding to one line from a pressure stall information file.
+pub struct PressureRecord {
+    /// Average pressure stall ratio over the last 10 seconds.
+    pub average_10: f64,
+    /// Average pressure stall ratio over the last 60 seconds.
+    pub average_60: f64,
+    /// Average pressure stall ratio over the last 300 seconds.
+    pub average_300: f64,
+    /// Total stall time in microseconds.
+    pub total: u64,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+enum PressureRecordKind {
+    Full,
+    Some,
+}
+
+impl FromStr for PressureRecordKind {
+    type Err = Error;
+
+    fn from_str(s: &str) -> Result<Self, Self::Err> {
+        match s {
+            "some" => Ok(Self::Some),
+            "full" => Ok(Self::Full),
+            _ => Err(Error::InvalidFormat(format!("invalid pressure kind '{s}'"))),
+        }
+    }
+}
+
+#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
+#[derive(Clone, Copy, Debug, PartialEq)]
+/// Which pressure stall information to query.
+pub enum Resource {
+    /// Query CPU pressure stall information.
+    Cpu,
+    /// Query memory pressure stall information.
+    Memory,
+    /// Query IO pressure stall information.
+    Io,
+}
+
+impl Resource {
+    fn into_proc_path(self) -> &'static Path {
+        match self {
+            Resource::Cpu => Path::new("/proc/pressure/cpu"),
+            Resource::Memory => Path::new("/proc/pressure/memory"),
+            Resource::Io => Path::new("/proc/pressure/io"),
+        }
+    }
+
+    fn into_cgroup_path_component(self) -> &'static OsStr {
+        match self {
+            Resource::Cpu => OsStr::new("cpu.pressure"),
+            Resource::Memory => OsStr::new("memory.pressure"),
+            Resource::Io => OsStr::new("io.pressure"),
+        }
+    }
+}
+
+impl PressureData {
+    /// Read pressure stall information for the entire host from `/proc/pressure/*`.
+    ///
+    /// ```no_run
+    /// use proxmox_procfs::pressure::*;
+    ///
+    /// let pressure = PressureData::read_system(Resource::Cpu).unwrap();
+    /// println!("{}", pressure.some.average_10);
+    ///
+    /// ```
+    pub fn read_system(what: Resource) -> Result<PressureData, Error> {
+        Self::read_file(what.into_proc_path())
+    }
+
+    /// Read pressure stall information for a cgroup.
+    ///
+    /// The `cgroup` parameter will be directly used to assemble the path for the PSI file. For
+    /// instance, if set to `lxc/101`, then `/sys/fs/cgroup/lxc/101/cpu.pressure` will be read.
+    ///
+    /// Note: This function will return [`Error::NotFound`] in case the pressure file does not exist,
+    /// usually meaning that the cgroup does not exist (any more). This distinct error variant allows
+    /// the caller to differentiate this case from other kinds of IO errors.
+    ///
+    /// ```no_run
+    /// use proxmox_procfs::pressure::{PressureData, Resource};
+    ///
+    /// let pressure = PressureData::read_cgroup("qemu.slice/100.scope", Resource::Cpu).unwrap();
+    /// println!("{}", pressure.some.average_10);
+    ///
+    /// let pressure = PressureData::read_cgroup("lxc/101", Resource::Io).unwrap();
+    /// println!("{}", pressure.some.average_10);
+    ///
+    /// ```
+    pub fn read_cgroup(cgroup: &str, resource: Resource) -> Result<PressureData, Error> {
+        let path = Path::new("/sys/fs/cgroup/")
+            .join(cgroup)
+            .join(resource.into_cgroup_path_component());
+
+        Self::read_file(&path)
+    }
+
+    /// Read pressure stall information from a provided path.
+    ///
+    /// ```no_run
+    /// use proxmox_procfs::pressure::{PressureData, Resource};
+    ///
+    /// let pressure = PressureData::read_file("/proc/pressure/io").unwrap();
+    /// println!("{}", pressure.some.average_10);
+    ///
+    /// ```
+    pub fn read_file<P: AsRef<Path>>(path: P) -> Result<PressureData, Error> {
+        let file = match File::open(path.as_ref()) {
+            Ok(file) => file,
+            Err(err) if err.kind() == ErrorKind::NotFound => {
+                return Err(Error::NotFound(path.as_ref().into()))
+            }
+            Err(err) => return Err(Error::Io(err)),
+        };
+
+        let reader = BufReader::new(file);
+
+        PressureData::read(reader)
+    }
+
+    fn read<R: BufRead>(mut reader: R) -> Result<Self, Error> {
+        // Depending on the length of the 'total' field, one line in the pressure output is around
+        // 60 characters long. Pre-alloc roughly double the size to pretty much eliminate the need
+        // for ever having to resize the vec.
+        let mut buf = Vec::with_capacity(128);
+        let (some_kind, some) = Self::read_pressure_line(&mut reader, &mut buf)?;
+        buf.clear();
+
+        let (full_kind, full) = Self::read_pressure_line(&mut reader, &mut buf)?;
+
+        if some_kind != PressureRecordKind::Some || full_kind != PressureRecordKind::Full {
+            return Err(Error::InvalidFormat(
+                "unexpected pressure record structure".into(),
+            ));
+        }
+
+        Ok(PressureData { some, full })
+    }
+
+    fn read_pressure_line<R: BufRead>(
+        reader: &mut R,
+        buf: &mut Vec<u8>,
+    ) -> Result<(PressureRecordKind, PressureRecord), Error> {
+        // The buffer should be empty. It is only passed by the caller as a performance
+        // optimization
+        debug_assert!(buf.is_empty());
+
+        reader.read_until(b'\n', buf)?;
+        // SAFETY: In production, `reader` is expected to read from
+        // procfs/sysfs pressure files, which only ever should return ASCII strings.
+        let line = unsafe { std::str::from_utf8_unchecked(buf) };
+
+        Self::read_record(line)
+    }
+
+    fn read_record(line: &str) -> Result<(PressureRecordKind, PressureRecord), Error> {
+        let mut iter = line.split_ascii_whitespace();
+
+        let kind = iter
+            .next()
+            .ok_or_else(|| Error::InvalidFormat("missing pressure kind field".into()))
+            .and_then(PressureRecordKind::from_str)?;
+
+        let average_10 = Self::parse_field(iter.next(), "avg10=")?;
+        let average_60 = Self::parse_field(iter.next(), "avg60=")?;
+        let average_300 = Self::parse_field(iter.next(), "avg300=")?;
+        let total = Self::parse_field(iter.next(), "total=")?;
+
+        Ok((
+            kind,
+            PressureRecord {
+                average_10,
+                average_60,
+                average_300,
+                total,
+            },
+        ))
+    }
+
+    fn parse_field<T: FromStr>(s: Option<&str>, prefix: &str) -> Result<T, Error>
+    where
+        <T as FromStr>::Err: std::fmt::Display,
+    {
+        s.and_then(|s| s.strip_prefix(prefix))
+            .ok_or_else(|| {
+                Error::InvalidFormat(format!("expected '{prefix}' prefix for next field"))
+            })?
+            .parse()
+            .map_err(|err| Error::InvalidFormat(format!("failed to parse '{prefix}': {err}")))
+    }
+}
+
+#[cfg(test)]
+mod test {
+    use super::*;
+
+    #[test]
+    fn test_read_psi() {
+        let s = "some avg10=1.42 avg60=2.09 avg300=1.42 total=40979658
+full avg10=0.08 avg60=0.18 avg300=0.13 total=22865313
+";
+
+        let mut reader = std::io::Cursor::new(s);
+        let stats = PressureData::read(&mut reader).unwrap();
+
+        assert_eq!(stats.some.total, 40979658);
+        assert!((stats.some.average_10 - 1.42).abs() < f64::EPSILON);
+        assert!((stats.some.average_60 - 2.09).abs() < f64::EPSILON);
+        assert!((stats.some.average_300 - 1.42).abs() < f64::EPSILON);
+
+        assert_eq!(stats.full.total, 22865313);
+        assert!((stats.full.average_10 - 0.08).abs() < f64::EPSILON);
+        assert!((stats.full.average_60 - 0.18).abs() < f64::EPSILON);
+        assert!((stats.full.average_300 - 0.13).abs() < f64::EPSILON);
+    }
+
+    #[test]
+    fn test_read_error() {
+        let s = "invalid avg10=1.42 avg60=2.09 avg300=1.42 total=40979658
+full avg10=0.08 avg60=0.18 avg300=0.13 total=22865313
+";
+
+        let mut reader = std::io::Cursor::new(s);
+        assert!(PressureData::read(&mut reader).is_err());
+    }
+
+    #[test]
+    fn test_invalid_field() {
+        let s = "some foo=1.42 avg60=2.09 avg300=1.42 total=40979658
+full avg10=0.08 avg60=0.18 avg300=0.13 total=22865313
+";
+
+        let mut reader = std::io::Cursor::new(s);
+        assert!(PressureData::read(&mut reader).is_err());
+    }
+
+    #[test]
+    fn test_read_system_pressure() {
+        for resource in [Resource::Io, Resource::Memory, Resource::Cpu] {
+            PressureData::read_system(resource).unwrap();
+        }
+    }
+
+    #[test]
+    fn test_read_cgroup_pressure() {
+        for resource in [Resource::Io, Resource::Memory, Resource::Cpu] {
+            PressureData::read_cgroup("system.slice", resource).unwrap();
+        }
+    }
+
+    #[test]
+    fn test_read_file_notfound() {
+        assert!(matches!(
+            PressureData::read_file("/invalid"),
+            Err(Error::NotFound(_))
+        ))
+    }
+}
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox 12/26] time: use u64 parse helper from nom
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (10 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 11/26] procfs: add helpers for querying pressure stall information Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox-backup 13/26] tools: move ParallelHandler to new proxmox-parallel-handler crate Lukas Wagner
                   ` (14 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

...instead of our own implementation.
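
For context, the removed `parse_u64` helper simply consumed a run of leading ASCII
digits and parsed it into a `u64`; `nom::character::complete::u64` provides the same
behavior (with overflow checking). The following stdlib-only sketch is illustrative,
not code from this series — the `parse_u64` shown here is a hypothetical
re-implementation of what the helper did:

```rust
// Hypothetical stdlib-only sketch of the removed helper: consume leading
// ASCII digits and parse them as a u64, returning the remaining input
// alongside the parsed value (as a nom-style parser would).
fn parse_u64(i: &str) -> Result<(&str, u64), String> {
    // find the end of the leading digit run
    let digits_end = i.find(|c: char| !c.is_ascii_digit()).unwrap_or(i.len());
    if digits_end == 0 {
        return Err(format!("expected digits at '{i}'"));
    }
    let (digits, rest) = i.split_at(digits_end);
    let num = digits
        .parse::<u64>()
        .map_err(|err| format!("invalid u64: {err}"))?;
    Ok((rest, num))
}

fn main() {
    // e.g. parsing the numeric prefix of a time span token like "15min"
    assert_eq!(parse_u64("15min"), Ok(("min", 15)));
    assert!(parse_u64("min").is_err());
}
```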

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 proxmox-time/src/parse_helpers.rs | 5 -----
 proxmox-time/src/time_span.rs     | 4 ++--
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/proxmox-time/src/parse_helpers.rs b/proxmox-time/src/parse_helpers.rs
index 5c46a9d5..d2ca9d10 100644
--- a/proxmox-time/src/parse_helpers.rs
+++ b/proxmox-time/src/parse_helpers.rs
@@ -21,11 +21,6 @@ pub(crate) fn parse_error<'a>(
     nom::Err::Error(err)
 }
 
-// Parse a 64 bit unsigned integer
-pub(crate) fn parse_u64(i: &str) -> IResult<&str, u64> {
-    map_res(recognize(digit1), str::parse)(i)
-}
-
 // Parse complete input, generate simple error message (use this for simple line input).
 pub(crate) fn parse_complete_line<'a, F, O>(what: &str, i: &'a str, parser: F) -> Result<O, Error>
 where
diff --git a/proxmox-time/src/time_span.rs b/proxmox-time/src/time_span.rs
index c3c47f44..951e82e5 100644
--- a/proxmox-time/src/time_span.rs
+++ b/proxmox-time/src/time_span.rs
@@ -117,7 +117,7 @@
 
 use anyhow::{Context, Error};
 
-use crate::parse_helpers::{parse_complete_line, parse_error, parse_u64, IResult};
+use crate::parse_helpers::{parse_complete_line, parse_error, IResult};
 
 // Seconds-per-unit constants. Month and year are the systemd definitions:
 // 1 month = 30.44 days = 2,630,016 s (exact), 1 year = 365.25 days = 31,557,600 s (exact).
@@ -776,7 +776,7 @@ fn parse_time_span_incomplete(mut i: &str) -> IResult<&str, TimeSpanParts> {
         if i.is_empty() {
             break;
         }
-        let (n, num) = parse_u64(i)?;
+        let (n, num) = nom::character::complete::u64(i)?;
         i = n.trim_start();
         parsed_any = true;
 
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox-backup 13/26] tools: move ParallelHandler to new proxmox-parallel-handler crate
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (11 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox 12/26] time: use u64 parse helper from nom Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox-backup 14/26] tools: replace disks module with proxmox-disks Lukas Wagner
                   ` (13 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

The newly extracted proxmox-disks crate requires the ParallelHandler
helper, so we need to extract that as well.
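
For readers unfamiliar with the helper being moved: ParallelHandler implements a
bounded fan-out — N worker threads drain a shared channel, the first handler error
is recorded, and subsequent sends fail. The stdlib-only sketch below illustrates
that pattern under hypothetical names (it is not the crate's actual API; the real
implementation uses crossbeam-channel, which supports concurrent receivers):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Illustrative stdlib-only sketch of the fan-out pattern: N workers share one
// receiver; the first handler error is recorded and aborts further sends.
fn run_parallel<I, F>(threads: usize, items: Vec<I>, handler: F) -> Result<(), String>
where
    I: Send + 'static,
    F: Fn(I) -> Result<(), String> + Send + Clone + 'static,
{
    let (tx, rx) = mpsc::sync_channel::<I>(threads);
    // mpsc receivers are single-consumer, so the sketch serializes receives
    // behind a mutex; crossbeam channels avoid this limitation.
    let rx = Arc::new(Mutex::new(rx));
    let abort: Arc<Mutex<Option<String>>> = Arc::new(Mutex::new(None));

    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let rx = Arc::clone(&rx);
            let abort = Arc::clone(&abort);
            let handler = handler.clone();
            thread::spawn(move || loop {
                let item = match rx.lock().unwrap().recv() {
                    Ok(item) => item,
                    Err(_) => return, // channel closed, no more work
                };
                if let Err(err) = handler(item) {
                    // record only the first error
                    abort.lock().unwrap().get_or_insert(err);
                }
            })
        })
        .collect();

    for item in items {
        if let Some(err) = abort.lock().unwrap().clone() {
            return Err(err); // stop sending after the first error
        }
        if tx.send(item).is_err() {
            return Err("channel closed".into());
        }
    }
    drop(tx); // close the channel so the workers exit

    for handle in handles {
        let _ = handle.join();
    }
    let guard = abort.lock().unwrap();
    match &*guard {
        Some(err) => Err(err.clone()),
        None => Ok(()),
    }
}

fn main() {
    let sum = Arc::new(Mutex::new(0u64));
    let s = Arc::clone(&sum);
    run_parallel(4, (1..=100).collect(), move |n: u64| {
        *s.lock().unwrap() += n;
        Ok(())
    })
    .unwrap();
    assert_eq!(*sum.lock().unwrap(), 5050);
}
```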

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 Cargo.toml                                  |   3 +
 src/api2/tape/restore.rs                    |  25 ++-
 src/backup/verify.rs                        |   3 +-
 src/server/pull.rs                          |   5 +-
 src/tape/pool_writer/new_chunks_iterator.rs |   3 +-
 src/tools/disks/mod.rs                      |   2 +-
 src/tools/mod.rs                            |   2 -
 src/tools/parallel_handler.rs               | 160 --------------------
 8 files changed, 21 insertions(+), 182 deletions(-)
 delete mode 100644 src/tools/parallel_handler.rs

diff --git a/Cargo.toml b/Cargo.toml
index cf993f514..57f6aa88e 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -75,6 +75,7 @@ proxmox-network-api = "1"
 proxmox-network-types = "1.0.1"
 proxmox-notify = "1"
 proxmox-openid = "1"
+proxmox-parallel-handler = "1"
 proxmox-product-config = "1"
 proxmox-rate-limiter = "1.0.0"
 proxmox-rest-server = { version = "1.0.5", features = [ "templates" ] }
@@ -235,6 +236,7 @@ proxmox-network-types.workspace = true
 proxmox-notify = { workspace = true, features = [ "pbs-context" ] }
 proxmox-openid.workspace = true
 proxmox-product-config.workspace = true
+proxmox-parallel-handler.workspace = true
 proxmox-rate-limiter = { workspace = true, features = [ "shared-rate-limiter" ] }
 proxmox-rest-server = { workspace = true, features = [ "rate-limited-stream" ] }
 proxmox-router = { workspace = true, features = [ "cli", "server"] }
@@ -300,6 +302,7 @@ proxmox-rrd-api-types.workspace = true
 #proxmox-network-types = { path = "../proxmox/proxmox-network-types" }
 #proxmox-notify = { path = "../proxmox/proxmox-notify" }
 #proxmox-openid = { path = "../proxmox/proxmox-openid" }
+#proxmox-parallel-handler = { path = "../proxmox/proxmox-parallel-handler" }
 #proxmox-product-config = { path = "../proxmox/proxmox-product-config" }
 #proxmox-rate-limiter = { path = "../proxmox/proxmox-rate-limiter" }
 #proxmox-rest-server = { path = "../proxmox/proxmox-rest-server" }
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 4f2ee3db6..011037216 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -10,6 +10,7 @@ use tracing::{info, warn};
 
 use proxmox_human_byte::HumanByte;
 use proxmox_io::ReadExt;
+use proxmox_parallel_handler::ParallelHandler;
 use proxmox_rest_server::WorkerTask;
 use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::{api, ApiType};
@@ -38,21 +39,17 @@ use pbs_tape::{
 
 use crate::backup::check_ns_modification_privs;
 use crate::tape::{assert_datastore_type, TapeNotificationMode};
-use crate::{
-    tape::{
-        drive::{lock_tape_device, request_and_load_media, set_tape_device_state, TapeDriver},
-        file_formats::{
-            CatalogArchiveHeader, ChunkArchiveDecoder, ChunkArchiveHeader, SnapshotArchiveHeader,
-            PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_1,
-            PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1,
-            PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0, PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
-            PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1,
-            PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_2,
-        },
-        lock_media_set, Inventory, MediaCatalog, MediaId, MediaSet, MediaSetCatalog,
-        TAPE_STATUS_DIR,
+use crate::tape::{
+    drive::{lock_tape_device, request_and_load_media, set_tape_device_state, TapeDriver},
+    file_formats::{
+        CatalogArchiveHeader, ChunkArchiveDecoder, ChunkArchiveHeader, SnapshotArchiveHeader,
+        PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_1,
+        PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1,
+        PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0, PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
+        PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1,
+        PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_2,
     },
-    tools::parallel_handler::ParallelHandler,
+    lock_media_set, Inventory, MediaCatalog, MediaId, MediaSet, MediaSetCatalog, TAPE_STATUS_DIR,
 };
 
 struct NamespaceMap {
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index f52d77815..9e24d254d 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -8,6 +8,7 @@ use anyhow::{bail, Error};
 use http_body_util::BodyExt;
 use tracing::{error, info, warn};
 
+use proxmox_parallel_handler::{ParallelHandler, SendHandle};
 use proxmox_worker_task::WorkerTaskContext;
 
 use pbs_api_types::{
@@ -20,8 +21,6 @@ use pbs_datastore::index::{ChunkReadInfo, IndexFile};
 use pbs_datastore::manifest::{BackupManifest, FileInfo};
 use pbs_datastore::{DataBlob, DataStore, DatastoreBackend, StoreProgress};
 
-use crate::tools::parallel_handler::{ParallelHandler, SendHandle};
-
 use crate::backup::hierarchy::ListAccessibleBackupGroups;
 
 /// A VerifyWorker encapsulates a task worker, datastore and information about which chunks have
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 57c5ef323..edc5e563d 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -8,9 +8,11 @@ use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
 
 use anyhow::{bail, format_err, Context, Error};
-use proxmox_human_byte::HumanByte;
 use tracing::{info, warn};
 
+use proxmox_human_byte::HumanByte;
+use proxmox_parallel_handler::ParallelHandler;
+
 use pbs_api_types::{
     print_store_and_ns, ArchiveType, Authid, BackupArchiveName, BackupDir, BackupGroup,
     BackupNamespace, GroupFilter, Operation, RateLimitConfig, Remote, SnapshotListItem,
@@ -34,7 +36,6 @@ use super::sync::{
     SkipReason, SyncSource, SyncSourceReader, SyncStats,
 };
 use crate::backup::{check_ns_modification_privs, check_ns_privs};
-use crate::tools::parallel_handler::ParallelHandler;
 
 pub(crate) struct PullTarget {
     store: Arc<DataStore>,
diff --git a/src/tape/pool_writer/new_chunks_iterator.rs b/src/tape/pool_writer/new_chunks_iterator.rs
index 0e29516f8..f077d823f 100644
--- a/src/tape/pool_writer/new_chunks_iterator.rs
+++ b/src/tape/pool_writer/new_chunks_iterator.rs
@@ -3,10 +3,11 @@ use std::sync::{Arc, Mutex};
 
 use anyhow::{format_err, Error};
 
+use proxmox_parallel_handler::ParallelHandler;
+
 use pbs_datastore::{DataBlob, DataStore, SnapshotReader};
 
 use crate::tape::CatalogSet;
-use crate::tools::parallel_handler::ParallelHandler;
 
 /// Chunk iterator which uses separate threads to read chunks
 ///
diff --git a/src/tools/disks/mod.rs b/src/tools/disks/mod.rs
index a86cbdf79..4197d0b0f 100644
--- a/src/tools/disks/mod.rs
+++ b/src/tools/disks/mod.rs
@@ -21,7 +21,7 @@ use proxmox_sys::linux::procfs::{mountinfo::Device, MountInfo};
 
 use pbs_api_types::{BLOCKDEVICE_DISK_AND_PARTITION_NAME_REGEX, BLOCKDEVICE_NAME_REGEX};
 
-use crate::tools::parallel_handler::ParallelHandler;
+use proxmox_parallel_handler::ParallelHandler;
 
 mod zfs;
 pub use zfs::*;
diff --git a/src/tools/mod.rs b/src/tools/mod.rs
index 6a975bde2..7f5acc0e3 100644
--- a/src/tools/mod.rs
+++ b/src/tools/mod.rs
@@ -20,8 +20,6 @@ pub mod statistics;
 pub mod systemd;
 pub mod ticket;
 
-pub mod parallel_handler;
-
 pub fn assert_if_modified(digest1: &str, digest2: &str) -> Result<(), Error> {
     if digest1 != digest2 {
         bail!("detected modified configuration - file changed by other user? Try again.");
diff --git a/src/tools/parallel_handler.rs b/src/tools/parallel_handler.rs
deleted file mode 100644
index 81e83bb13..000000000
--- a/src/tools/parallel_handler.rs
+++ /dev/null
@@ -1,160 +0,0 @@
-//! A thread pool which run a closure in parallel.
-
-use std::sync::{Arc, Mutex};
-use std::thread::JoinHandle;
-
-use anyhow::{bail, format_err, Error};
-use crossbeam_channel::{bounded, Sender};
-
-/// A handle to send data to the worker thread (implements clone)
-pub struct SendHandle<I> {
-    input: Sender<I>,
-    abort: Arc<Mutex<Option<String>>>,
-}
-
-/// Returns the first error happened, if any
-pub fn check_abort(abort: &Mutex<Option<String>>) -> Result<(), Error> {
-    let guard = abort.lock().unwrap();
-    if let Some(err_msg) = &*guard {
-        return Err(format_err!("{}", err_msg));
-    }
-    Ok(())
-}
-
-impl<I: Send> SendHandle<I> {
-    /// Send data to the worker threads
-    pub fn send(&self, input: I) -> Result<(), Error> {
-        check_abort(&self.abort)?;
-        match self.input.send(input) {
-            Ok(()) => Ok(()),
-            Err(_) => bail!("send failed - channel closed"),
-        }
-    }
-}
-
-/// A thread pool which run the supplied closure
-///
-/// The send command sends data to the worker threads. If one handler
-/// returns an error, we mark the channel as failed and it is no
-/// longer possible to send data.
-///
-/// When done, the 'complete()' method needs to be called to check for
-/// outstanding errors.
-pub struct ParallelHandler<I> {
-    handles: Vec<JoinHandle<()>>,
-    name: String,
-    input: Option<SendHandle<I>>,
-}
-
-impl<I> Clone for SendHandle<I> {
-    fn clone(&self) -> Self {
-        Self {
-            input: self.input.clone(),
-            abort: Arc::clone(&self.abort),
-        }
-    }
-}
-
-impl<I: Send + 'static> ParallelHandler<I> {
-    /// Create a new thread pool, each thread processing incoming data
-    /// with 'handler_fn'.
-    pub fn new<F>(name: &str, threads: usize, handler_fn: F) -> Self
-    where
-        F: Fn(I) -> Result<(), Error> + Send + Clone + 'static,
-    {
-        let mut handles = Vec::new();
-        let (input_tx, input_rx) = bounded::<I>(threads);
-
-        let abort = Arc::new(Mutex::new(None));
-
-        for i in 0..threads {
-            let input_rx = input_rx.clone();
-            let abort = Arc::clone(&abort);
-            let handler_fn = handler_fn.clone();
-
-            handles.push(
-                std::thread::Builder::new()
-                    .name(format!("{name} ({i})"))
-                    .spawn(move || loop {
-                        let data = match input_rx.recv() {
-                            Ok(data) => data,
-                            Err(_) => return,
-                        };
-                        if let Err(err) = (handler_fn)(data) {
-                            let mut guard = abort.lock().unwrap();
-                            if guard.is_none() {
-                                *guard = Some(err.to_string());
-                            }
-                        }
-                    })
-                    .unwrap(),
-            );
-        }
-        Self {
-            handles,
-            name: name.to_string(),
-            input: Some(SendHandle {
-                input: input_tx,
-                abort,
-            }),
-        }
-    }
-
-    /// Returns a cloneable channel to send data to the worker threads
-    pub fn channel(&self) -> SendHandle<I> {
-        self.input.as_ref().unwrap().clone()
-    }
-
-    /// Send data to the worker threads
-    pub fn send(&self, input: I) -> Result<(), Error> {
-        self.input.as_ref().unwrap().send(input)?;
-        Ok(())
-    }
-
-    /// Wait for worker threads to complete and check for errors
-    pub fn complete(mut self) -> Result<(), Error> {
-        let input = self.input.take().unwrap();
-        let abort = Arc::clone(&input.abort);
-        check_abort(&abort)?;
-        drop(input);
-
-        let msg_list = self.join_threads();
-
-        // an error might be encountered while waiting for the join
-        check_abort(&abort)?;
-
-        if msg_list.is_empty() {
-            return Ok(());
-        }
-        Err(format_err!("{}", msg_list.join("\n")))
-    }
-
-    fn join_threads(&mut self) -> Vec<String> {
-        let mut msg_list = Vec::new();
-
-        let mut i = 0;
-        while let Some(handle) = self.handles.pop() {
-            if let Err(panic) = handle.join() {
-                if let Some(panic_msg) = panic.downcast_ref::<&str>() {
-                    msg_list.push(format!("thread {} ({i}) panicked: {panic_msg}", self.name));
-                } else if let Some(panic_msg) = panic.downcast_ref::<String>() {
-                    msg_list.push(format!("thread {} ({i}) panicked: {panic_msg}", self.name));
-                } else {
-                    msg_list.push(format!("thread {} ({i}) panicked", self.name));
-                }
-            }
-            i += 1;
-        }
-        msg_list
-    }
-}
-
-// Note: We make sure that all threads will be joined
-impl<I> Drop for ParallelHandler<I> {
-    fn drop(&mut self) {
-        drop(self.input.take());
-        while let Some(handle) = self.handles.pop() {
-            let _ = handle.join();
-        }
-    }
-}
-- 
2.47.3

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox-backup 14/26] tools: replace disks module with proxmox-disks
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (12 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox-backup 13/26] tools: move ParallelHandler to new proxmox-parallel-handler crate Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-16 13:27   ` Arthur Bied-Charreton
  2026-03-12 13:52 ` [PATCH proxmox-backup 15/26] metric collection: use blockdev_stat_for_path from proxmox_disks Lukas Wagner
                   ` (12 subsequent siblings)
  26 siblings, 1 reply; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

This commit replaces the in-tree disks module with the new proxmox-disks
crate, into which the code was extracted to enable disk metric collection
in PDM.
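
Among other things, the extracted crate keeps the `/sys/block/<dev>/stat`
parsing used for disk metrics. For illustration, that parsing can be
sketched self-contained (field layout per the kernel's block stat
documentation; the struct mirrors the moved `BlockDevStat`, the function
name is hypothetical):

```rust
/// Parsed subset of /sys/block/<dev>/stat, mirroring the BlockDevStat
/// fields consumed by the metric collection code.
#[derive(Debug, PartialEq)]
pub struct BlockDevStat {
    pub read_ios: u64,
    pub read_sectors: u64,
    pub write_ios: u64,
    pub write_sectors: u64,
    pub io_ticks: u64,
}

/// Parse one stat line; returns None when fewer than 15 fields are
/// present (kernels without discard counters).
pub fn parse_block_stat(stat: &str) -> Option<BlockDevStat> {
    let fields: Vec<u64> = stat
        .split_ascii_whitespace()
        .map(|s| s.parse().unwrap_or_default())
        .collect();
    if fields.len() < 15 {
        return None;
    }
    Some(BlockDevStat {
        read_ios: fields[0],
        read_sectors: fields[2],
        write_ios: fields[4] + fields[11],     // writes + discards
        write_sectors: fields[6] + fields[13], // writes + discards
        io_ticks: fields[10],
    })
}
```
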

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 Cargo.toml                             |    3 +
 src/api2/admin/datastore.rs            |   10 +-
 src/api2/config/datastore.rs           |    2 +-
 src/api2/node/disks/directory.rs       |   10 +-
 src/api2/node/disks/mod.rs             |   20 +-
 src/api2/node/disks/zfs.rs             |   14 +-
 src/bin/proxmox_backup_manager/disk.rs |    9 +-
 src/server/metric_collection/mod.rs    |    3 +-
 src/tools/disks/lvm.rs                 |   60 -
 src/tools/disks/mod.rs                 | 1394 ------------------------
 src/tools/disks/smart.rs               |  227 ----
 src/tools/disks/zfs.rs                 |  205 ----
 src/tools/mod.rs                       |    1 -
 13 files changed, 33 insertions(+), 1925 deletions(-)
 delete mode 100644 src/tools/disks/lvm.rs
 delete mode 100644 src/tools/disks/mod.rs
 delete mode 100644 src/tools/disks/smart.rs
 delete mode 100644 src/tools/disks/zfs.rs

diff --git a/Cargo.toml b/Cargo.toml
index 57f6aa88e..03a98de64 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -62,6 +62,7 @@ proxmox-borrow = "1"
 proxmox-compression = "1.0.1"
 proxmox-config-digest = "1"
 proxmox-daemon = "1"
+proxmox-disks = "0.1"
 proxmox-fuse = "2"
 proxmox-docgen = "1"
 proxmox-http = { version = "1.0.2", features = [ "client", "http-helpers", "websocket" ] } # see below
@@ -224,6 +225,7 @@ proxmox-compression.workspace = true
 proxmox-config-digest.workspace = true
 proxmox-daemon.workspace = true
 proxmox-docgen.workspace = true
+proxmox-disks = { workspace = true, features = ["api-types"] }
 proxmox-http = { workspace = true, features = [ "body", "client-trait", "proxmox-async", "rate-limited-stream" ] } # pbs-client doesn't use these
 proxmox-human-byte.workspace = true
 proxmox-io.workspace = true
@@ -290,6 +292,7 @@ proxmox-rrd-api-types.workspace = true
 #proxmox-config-digest = { path = "../proxmox/proxmox-config-digest" }
 #proxmox-daemon = { path = "../proxmox/proxmox-daemon" }
 #proxmox-docgen = { path = "../proxmox/proxmox-docgen" }
+#proxmox-disks = { path = "../proxmox/proxmox-disks" }
 #proxmox-http = { path = "../proxmox/proxmox-http" }
 #proxmox-http-error = { path = "../proxmox/proxmox-http-error" }
 #proxmox-human-byte = { path = "../proxmox/proxmox-human-byte" }
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 88ad5d53b..d75e10e37 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -1874,7 +1874,7 @@ pub fn get_rrd_stats(
     _param: Value,
 ) -> Result<Value, Error> {
     let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
-    let disk_manager = crate::tools::disks::DiskManage::new();
+    let disk_manager = proxmox_disks::DiskManage::new();
 
     let mut rrd_fields = vec![
         "total",
@@ -2340,7 +2340,7 @@ fn setup_mounted_device(datastore: &DataStoreConfig, tmp_mount_path: &str) -> Re
         datastore.name, datastore.path, mount_point
     );
 
-    crate::tools::disks::bind_mount(Path::new(&full_store_path), Path::new(&mount_point))
+    proxmox_disks::bind_mount(Path::new(&full_store_path), Path::new(&mount_point))
 }
 
 /// Here we
@@ -2377,13 +2377,13 @@ pub fn do_mount_device(datastore: DataStoreConfig) -> Result<bool, Error> {
         )?;
 
         info!("temporarily mounting '{uuid}' to '{}'", tmp_mount_path);
-        crate::tools::disks::mount_by_uuid(uuid, Path::new(&tmp_mount_path))
+        proxmox_disks::mount_by_uuid(uuid, Path::new(&tmp_mount_path))
             .map_err(|e| format_err!("mounting to tmp path failed: {e}"))?;
 
         let setup_result = setup_mounted_device(&datastore, &tmp_mount_path);
 
         let mut unmounted = true;
-        if let Err(e) = crate::tools::disks::unmount_by_mountpoint(Path::new(&tmp_mount_path)) {
+        if let Err(e) = proxmox_disks::unmount_by_mountpoint(Path::new(&tmp_mount_path)) {
             unmounted = false;
             warn!("unmounting from tmp path '{tmp_mount_path} failed: {e}'");
         }
@@ -2614,7 +2614,7 @@ fn do_unmount_device(
 
     let mount_point = datastore.absolute_path();
     run_maintenance_locked(&datastore.name, MaintenanceType::Unmount, worker, || {
-        crate::tools::disks::unmount_by_mountpoint(Path::new(&mount_point))
+        proxmox_disks::unmount_by_mountpoint(Path::new(&mount_point))
     })
 }
 
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index f845fe2d0..034daf14b 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -8,6 +8,7 @@ use http_body_util::BodyExt;
 use serde_json::Value;
 use tracing::{info, warn};
 
+use proxmox_disks::unmount_by_mountpoint;
 use proxmox_router::{http_bail, Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::{api, param_bail, ApiType};
 use proxmox_section_config::SectionConfigData;
@@ -37,7 +38,6 @@ use proxmox_rest_server::WorkerTask;
 use proxmox_s3_client::{S3ObjectKey, S3_HTTP_REQUEST_TIMEOUT};
 
 use crate::server::jobstate;
-use crate::tools::disks::unmount_by_mountpoint;
 
 #[derive(Default, serde::Deserialize, serde::Serialize)]
 #[serde(rename_all = "kebab-case")]
diff --git a/src/api2/node/disks/directory.rs b/src/api2/node/disks/directory.rs
index c37d65b8d..cd8c6b4e5 100644
--- a/src/api2/node/disks/directory.rs
+++ b/src/api2/node/disks/directory.rs
@@ -1,11 +1,15 @@
+use std::os::linux::fs::MetadataExt;
 use std::sync::LazyLock;
 
 use ::serde::{Deserialize, Serialize};
 use anyhow::{bail, Error};
 use serde_json::json;
-use std::os::linux::fs::MetadataExt;
 use tracing::info;
 
+use proxmox_disks::{
+    create_file_system, create_single_linux_partition, get_fs_uuid, DiskManage, DiskUsageQuery,
+    DiskUsageType, FileSystemType,
+};
 use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::api;
 use proxmox_section_config::SectionConfigData;
@@ -15,10 +19,6 @@ use pbs_api_types::{
     PRIV_SYS_AUDIT, PRIV_SYS_MODIFY, UPID_SCHEMA,
 };
 
-use crate::tools::disks::{
-    create_file_system, create_single_linux_partition, get_fs_uuid, DiskManage, DiskUsageQuery,
-    DiskUsageType, FileSystemType,
-};
 use crate::tools::systemd::{self, types::*};
 
 use proxmox_rest_server::WorkerTask;
diff --git a/src/api2/node/disks/mod.rs b/src/api2/node/disks/mod.rs
index abcb8ee40..1c21bb91f 100644
--- a/src/api2/node/disks/mod.rs
+++ b/src/api2/node/disks/mod.rs
@@ -1,23 +1,21 @@
 use anyhow::{bail, Error};
 use serde_json::{json, Value};
-
-use proxmox_router::{
-    list_subdirs_api_method, Permission, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap,
-};
-use proxmox_schema::api;
-use proxmox_sortable_macro::sortable;
 use tracing::info;
 
 use pbs_api_types::{
     BLOCKDEVICE_DISK_AND_PARTITION_NAME_SCHEMA, BLOCKDEVICE_NAME_SCHEMA, NODE_SCHEMA,
     PRIV_SYS_AUDIT, PRIV_SYS_MODIFY, UPID_SCHEMA,
 };
-
-use crate::tools::disks::{
-    get_smart_data, inititialize_gpt_disk, wipe_blockdev, DiskManage, DiskUsageInfo,
-    DiskUsageQuery, DiskUsageType, SmartData,
+use proxmox_disks::{
+    get_smart_data, initialize_gpt_disk, wipe_blockdev, DiskManage, DiskUsageInfo, DiskUsageQuery,
+    DiskUsageType, SmartData,
 };
 use proxmox_rest_server::WorkerTask;
+use proxmox_router::{
+    list_subdirs_api_method, Permission, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap,
+};
+use proxmox_schema::api;
+use proxmox_sortable_macro::sortable;
 
 pub mod directory;
 pub mod zfs;
@@ -174,7 +172,7 @@ pub fn initialize_disk(
             let disk_manager = DiskManage::new();
             let disk_info = disk_manager.disk_by_name(&disk)?;
 
-            inititialize_gpt_disk(&disk_info, uuid.as_deref())?;
+            initialize_gpt_disk(&disk_info, uuid.as_deref())?;
 
             Ok(())
         },
diff --git a/src/api2/node/disks/zfs.rs b/src/api2/node/disks/zfs.rs
index 3e5a7decf..21f4a3073 100644
--- a/src/api2/node/disks/zfs.rs
+++ b/src/api2/node/disks/zfs.rs
@@ -2,20 +2,18 @@ use anyhow::{bail, Error};
 use serde_json::{json, Value};
 use tracing::{error, info};
 
-use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
-use proxmox_schema::api;
-
 use pbs_api_types::{
     DataStoreConfig, ZfsCompressionType, ZfsRaidLevel, ZpoolListItem, DATASTORE_SCHEMA,
     DISK_ARRAY_SCHEMA, DISK_LIST_SCHEMA, NODE_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY, UPID_SCHEMA,
     ZFS_ASHIFT_SCHEMA, ZPOOL_NAME_SCHEMA,
 };
-
-use crate::tools::disks::{
-    parse_zpool_status_config_tree, vdev_list_to_tree, zpool_list, zpool_status, DiskUsageType,
+use proxmox_disks::{
+    parse_zpool_status_config_tree, vdev_list_to_tree, zpool_list, zpool_status, DiskUsageQuery,
+    DiskUsageType,
 };
-
 use proxmox_rest_server::WorkerTask;
+use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
+use proxmox_schema::api;
 
 #[api(
     protected: true,
@@ -174,7 +172,7 @@ pub fn create_zpool(
         .map(|v| v.as_str().unwrap().to_string())
         .collect();
 
-    let disk_map = crate::tools::disks::DiskUsageQuery::new().query()?;
+    let disk_map = DiskUsageQuery::new().query()?;
     for disk in devices.iter() {
         match disk_map.get(disk) {
             Some(info) => {
diff --git a/src/bin/proxmox_backup_manager/disk.rs b/src/bin/proxmox_backup_manager/disk.rs
index cd7a0b7aa..f10c6e696 100644
--- a/src/bin/proxmox_backup_manager/disk.rs
+++ b/src/bin/proxmox_backup_manager/disk.rs
@@ -1,17 +1,14 @@
 use anyhow::{bail, Error};
 use serde_json::Value;
-
-use proxmox_router::{cli::*, ApiHandler, RpcEnvironment};
-use proxmox_schema::api;
 use std::io::IsTerminal;
 
 use pbs_api_types::{
     ZfsCompressionType, ZfsRaidLevel, BLOCKDEVICE_DISK_AND_PARTITION_NAME_SCHEMA,
     BLOCKDEVICE_NAME_SCHEMA, DATASTORE_SCHEMA, DISK_LIST_SCHEMA, ZFS_ASHIFT_SCHEMA,
 };
-use proxmox_backup::tools::disks::{
-    complete_disk_name, complete_partition_name, FileSystemType, SmartAttribute,
-};
+use proxmox_disks::{complete_disk_name, complete_partition_name, FileSystemType, SmartAttribute};
+use proxmox_router::{cli::*, ApiHandler, RpcEnvironment};
+use proxmox_schema::api;
 
 use proxmox_backup::api2;
 
diff --git a/src/server/metric_collection/mod.rs b/src/server/metric_collection/mod.rs
index 9b62cbb42..3fa6e9fbf 100644
--- a/src/server/metric_collection/mod.rs
+++ b/src/server/metric_collection/mod.rs
@@ -10,14 +10,13 @@ use anyhow::Error;
 use tokio::join;
 
 use pbs_api_types::{DataStoreConfig, Operation};
+use proxmox_disks::{zfs_dataset_stats, BlockDevStat, DiskManage};
 use proxmox_network_api::{get_network_interfaces, IpLink};
 use proxmox_sys::{
     fs::FileSystemInformation,
     linux::procfs::{Loadavg, ProcFsMemInfo, ProcFsNetDev, ProcFsStat},
 };
 
-use crate::tools::disks::{zfs_dataset_stats, BlockDevStat, DiskManage};
-
 mod metric_server;
 pub(crate) mod pull_metrics;
 pub(crate) mod rrd;
diff --git a/src/tools/disks/lvm.rs b/src/tools/disks/lvm.rs
deleted file mode 100644
index 1456a21c3..000000000
--- a/src/tools/disks/lvm.rs
+++ /dev/null
@@ -1,60 +0,0 @@
-use std::collections::HashSet;
-use std::os::unix::fs::MetadataExt;
-use std::sync::LazyLock;
-
-use anyhow::Error;
-use serde_json::Value;
-
-use super::LsblkInfo;
-
-static LVM_UUIDS: LazyLock<HashSet<&'static str>> = LazyLock::new(|| {
-    let mut set = HashSet::new();
-    set.insert("e6d6d379-f507-44c2-a23c-238f2a3df928");
-    set
-});
-
-/// Get set of devices used by LVM (pvs).
-///
-/// The set is indexed by using the unix raw device number (dev_t is u64)
-pub fn get_lvm_devices(lsblk_info: &[LsblkInfo]) -> Result<HashSet<u64>, Error> {
-    const PVS_BIN_PATH: &str = "pvs";
-
-    let mut command = std::process::Command::new(PVS_BIN_PATH);
-    command.args([
-        "--reportformat",
-        "json",
-        "--noheadings",
-        "--readonly",
-        "-o",
-        "pv_name",
-    ]);
-
-    let output = proxmox_sys::command::run_command(command, None)?;
-
-    let mut device_set: HashSet<u64> = HashSet::new();
-
-    for info in lsblk_info.iter() {
-        if let Some(partition_type) = &info.partition_type {
-            if LVM_UUIDS.contains(partition_type.as_str()) {
-                let meta = std::fs::metadata(&info.path)?;
-                device_set.insert(meta.rdev());
-            }
-        }
-    }
-
-    let output: Value = output.parse()?;
-
-    match output["report"][0]["pv"].as_array() {
-        Some(list) => {
-            for info in list {
-                if let Some(pv_name) = info["pv_name"].as_str() {
-                    let meta = std::fs::metadata(pv_name)?;
-                    device_set.insert(meta.rdev());
-                }
-            }
-        }
-        None => return Ok(device_set),
-    }
-
-    Ok(device_set)
-}
diff --git a/src/tools/disks/mod.rs b/src/tools/disks/mod.rs
deleted file mode 100644
index 4197d0b0f..000000000
--- a/src/tools/disks/mod.rs
+++ /dev/null
@@ -1,1394 +0,0 @@
-//! Disk query/management utilities for.
-
-use std::collections::{HashMap, HashSet};
-use std::ffi::{OsStr, OsString};
-use std::io;
-use std::os::unix::ffi::{OsStrExt, OsStringExt};
-use std::os::unix::fs::{FileExt, MetadataExt, OpenOptionsExt};
-use std::path::{Path, PathBuf};
-use std::sync::{Arc, LazyLock};
-
-use anyhow::{bail, format_err, Context as _, Error};
-use libc::dev_t;
-use once_cell::sync::OnceCell;
-
-use ::serde::{Deserialize, Serialize};
-
-use proxmox_lang::{io_bail, io_format_err};
-use proxmox_log::info;
-use proxmox_schema::api;
-use proxmox_sys::linux::procfs::{mountinfo::Device, MountInfo};
-
-use pbs_api_types::{BLOCKDEVICE_DISK_AND_PARTITION_NAME_REGEX, BLOCKDEVICE_NAME_REGEX};
-
-use proxmox_parallel_handler::ParallelHandler;
-
-mod zfs;
-pub use zfs::*;
-mod zpool_status;
-pub use zpool_status::*;
-mod zpool_list;
-pub use zpool_list::*;
-mod lvm;
-pub use lvm::*;
-mod smart;
-pub use smart::*;
-
-static ISCSI_PATH_REGEX: LazyLock<regex::Regex> =
-    LazyLock::new(|| regex::Regex::new(r"host[^/]*/session[^/]*").unwrap());
-
-/// Disk management context.
-///
-/// This provides access to disk information with some caching for faster querying of multiple
-/// devices.
-pub struct DiskManage {
-    mount_info: OnceCell<MountInfo>,
-    mounted_devices: OnceCell<HashSet<dev_t>>,
-}
-
-/// Information for a device as returned by lsblk.
-#[derive(Deserialize)]
-pub struct LsblkInfo {
-    /// Path to the device.
-    path: String,
-    /// Partition type GUID.
-    #[serde(rename = "parttype")]
-    partition_type: Option<String>,
-    /// File system label.
-    #[serde(rename = "fstype")]
-    file_system_type: Option<String>,
-    /// File system UUID.
-    uuid: Option<String>,
-}
-
-impl DiskManage {
-    /// Create a new disk management context.
-    pub fn new() -> Arc<Self> {
-        Arc::new(Self {
-            mount_info: OnceCell::new(),
-            mounted_devices: OnceCell::new(),
-        })
-    }
-
-    /// Get the current mount info. This simply caches the result of `MountInfo::read` from the
-    /// `proxmox::sys` module.
-    pub fn mount_info(&self) -> Result<&MountInfo, Error> {
-        self.mount_info.get_or_try_init(MountInfo::read)
-    }
-
-    /// Get a `Disk` from a device node (eg. `/dev/sda`).
-    pub fn disk_by_node<P: AsRef<Path>>(self: Arc<Self>, devnode: P) -> io::Result<Disk> {
-        let devnode = devnode.as_ref();
-
-        let meta = std::fs::metadata(devnode)?;
-        if (meta.mode() & libc::S_IFBLK) == libc::S_IFBLK {
-            self.disk_by_dev_num(meta.rdev())
-        } else {
-            io_bail!("not a block device: {:?}", devnode);
-        }
-    }
-
-    /// Get a `Disk` for a specific device number.
-    pub fn disk_by_dev_num(self: Arc<Self>, devnum: dev_t) -> io::Result<Disk> {
-        self.disk_by_sys_path(format!(
-            "/sys/dev/block/{}:{}",
-            unsafe { libc::major(devnum) },
-            unsafe { libc::minor(devnum) },
-        ))
-    }
-
-    /// Get a `Disk` for a path in `/sys`.
-    pub fn disk_by_sys_path<P: AsRef<Path>>(self: Arc<Self>, path: P) -> io::Result<Disk> {
-        let device = udev::Device::from_syspath(path.as_ref())?;
-        Ok(Disk {
-            manager: self,
-            device,
-            info: Default::default(),
-        })
-    }
-
-    /// Get a `Disk` for a name in `/sys/block/<name>`.
-    pub fn disk_by_name(self: Arc<Self>, name: &str) -> io::Result<Disk> {
-        let syspath = format!("/sys/block/{name}");
-        self.disk_by_sys_path(syspath)
-    }
-
-    /// Get a `Disk` for a name in `/sys/class/block/<name>`.
-    pub fn partition_by_name(self: Arc<Self>, name: &str) -> io::Result<Disk> {
-        let syspath = format!("/sys/class/block/{name}");
-        self.disk_by_sys_path(syspath)
-    }
-
-    /// Gather information about mounted disks:
-    fn mounted_devices(&self) -> Result<&HashSet<dev_t>, Error> {
-        self.mounted_devices
-            .get_or_try_init(|| -> Result<_, Error> {
-                let mut mounted = HashSet::new();
-
-                for (_id, mp) in self.mount_info()? {
-                    let source = match mp.mount_source.as_deref() {
-                        Some(s) => s,
-                        None => continue,
-                    };
-
-                    let path = Path::new(source);
-                    if !path.is_absolute() {
-                        continue;
-                    }
-
-                    let meta = match std::fs::metadata(path) {
-                        Ok(meta) => meta,
-                        Err(ref err) if err.kind() == io::ErrorKind::NotFound => continue,
-                        Err(other) => return Err(Error::from(other)),
-                    };
-
-                    if (meta.mode() & libc::S_IFBLK) != libc::S_IFBLK {
-                        // not a block device
-                        continue;
-                    }
-
-                    mounted.insert(meta.rdev());
-                }
-
-                Ok(mounted)
-            })
-    }
-
-    /// Information about file system type and used device for a path
-    ///
-    /// Returns tuple (fs_type, device, mount_source)
-    pub fn find_mounted_device(
-        &self,
-        path: &std::path::Path,
-    ) -> Result<Option<(String, Device, Option<OsString>)>, Error> {
-        let stat = nix::sys::stat::stat(path)?;
-        let device = Device::from_dev_t(stat.st_dev);
-
-        let root_path = std::path::Path::new("/");
-
-        for (_id, entry) in self.mount_info()? {
-            if entry.root == root_path && entry.device == device {
-                return Ok(Some((
-                    entry.fs_type.clone(),
-                    entry.device,
-                    entry.mount_source.clone(),
-                )));
-            }
-        }
-
-        Ok(None)
-    }
-
-    /// Check whether a specific device node is mounted.
-    ///
-    /// Note that this tries to `stat` the sources of all mount points without caching the result
-    /// of doing so, so this is always somewhat expensive.
-    pub fn is_devnum_mounted(&self, dev: dev_t) -> Result<bool, Error> {
-        self.mounted_devices().map(|mounted| mounted.contains(&dev))
-    }
-}
-
-/// Queries (and caches) various information about a specific disk.
-///
-/// This belongs to a `Disks` and provides information for a single disk.
-pub struct Disk {
-    manager: Arc<DiskManage>,
-    device: udev::Device,
-    info: DiskInfo,
-}
-
-/// Helper struct (so we can initialize this with Default)
-///
-/// We probably want this to be serializable to the same hash type we use in perl currently.
-#[derive(Default)]
-struct DiskInfo {
-    size: OnceCell<u64>,
-    vendor: OnceCell<Option<OsString>>,
-    model: OnceCell<Option<OsString>>,
-    rotational: OnceCell<Option<bool>>,
-    // for perl: #[serde(rename = "devpath")]
-    ata_rotation_rate_rpm: OnceCell<Option<u64>>,
-    // for perl: #[serde(rename = "devpath")]
-    device_path: OnceCell<Option<PathBuf>>,
-    wwn: OnceCell<Option<OsString>>,
-    serial: OnceCell<Option<OsString>>,
-    // for perl: #[serde(skip_serializing)]
-    partition_table_type: OnceCell<Option<OsString>>,
-    // for perl: #[serde(skip_serializing)]
-    partition_entry_scheme: OnceCell<Option<OsString>>,
-    // for perl: #[serde(skip_serializing)]
-    partition_entry_uuid: OnceCell<Option<OsString>>,
-    // for perl: #[serde(skip_serializing)]
-    partition_entry_type: OnceCell<Option<OsString>>,
-    gpt: OnceCell<bool>,
-    // ???
-    bus: OnceCell<Option<OsString>>,
-    // ???
-    fs_type: OnceCell<Option<OsString>>,
-    // ???
-    has_holders: OnceCell<bool>,
-    // ???
-    is_mounted: OnceCell<bool>,
-}
-
-impl Disk {
-    /// Try to get the device number for this disk.
-    ///
-    /// (In udev this can fail...)
-    pub fn devnum(&self) -> Result<dev_t, Error> {
-        // not sure when this can fail...
-        self.device
-            .devnum()
-            .ok_or_else(|| format_err!("failed to get device number"))
-    }
-
-    /// Get the sys-name of this device. (The final component in the `/sys` path).
-    pub fn sysname(&self) -> &OsStr {
-        self.device.sysname()
-    }
-
-    /// Get the this disk's `/sys` path.
-    pub fn syspath(&self) -> &Path {
-        self.device.syspath()
-    }
-
-    /// Get the device node in `/dev`, if any.
-    pub fn device_path(&self) -> Option<&Path> {
-        //self.device.devnode()
-        self.info
-            .device_path
-            .get_or_init(|| self.device.devnode().map(Path::to_owned))
-            .as_ref()
-            .map(PathBuf::as_path)
-    }
-
-    /// Get the parent device.
-    pub fn parent(&self) -> Option<Self> {
-        self.device.parent().map(|parent| Self {
-            manager: self.manager.clone(),
-            device: parent,
-            info: Default::default(),
-        })
-    }
-
-    /// Read from a file in this device's sys path.
-    ///
-    /// Note: path must be a relative path!
-    pub fn read_sys(&self, path: &Path) -> io::Result<Option<Vec<u8>>> {
-        assert!(path.is_relative());
-
-        std::fs::read(self.syspath().join(path))
-            .map(Some)
-            .or_else(|err| {
-                if err.kind() == io::ErrorKind::NotFound {
-                    Ok(None)
-                } else {
-                    Err(err)
-                }
-            })
-    }
-
-    /// Convenience wrapper for reading a `/sys` file which contains just a simple `OsString`.
-    pub fn read_sys_os_str<P: AsRef<Path>>(&self, path: P) -> io::Result<Option<OsString>> {
-        Ok(self.read_sys(path.as_ref())?.map(|mut v| {
-            if Some(&b'\n') == v.last() {
-                v.pop();
-            }
-            OsString::from_vec(v)
-        }))
-    }
-
-    /// Convenience wrapper for reading a `/sys` file which contains just a simple utf-8 string.
-    pub fn read_sys_str<P: AsRef<Path>>(&self, path: P) -> io::Result<Option<String>> {
-        Ok(match self.read_sys(path.as_ref())? {
-            Some(data) => Some(String::from_utf8(data).map_err(io::Error::other)?),
-            None => None,
-        })
-    }
-
-    /// Convenience wrapper for unsigned integer `/sys` values up to 64 bit.
-    pub fn read_sys_u64<P: AsRef<Path>>(&self, path: P) -> io::Result<Option<u64>> {
-        Ok(match self.read_sys_str(path)? {
-            Some(data) => Some(data.trim().parse().map_err(io::Error::other)?),
-            None => None,
-        })
-    }
-
-    /// Get the disk's size in bytes.
-    pub fn size(&self) -> io::Result<u64> {
-        Ok(*self.info.size.get_or_try_init(|| {
-            self.read_sys_u64("size")?.map(|s| s * 512).ok_or_else(|| {
-                io_format_err!(
-                    "failed to get disk size from {:?}",
-                    self.syspath().join("size"),
-                )
-            })
-        })?)
-    }
-
-    /// Get the device vendor (`/sys/.../device/vendor`) entry if available.
-    pub fn vendor(&self) -> io::Result<Option<&OsStr>> {
-        Ok(self
-            .info
-            .vendor
-            .get_or_try_init(|| self.read_sys_os_str("device/vendor"))?
-            .as_ref()
-            .map(OsString::as_os_str))
-    }
-
-    /// Get the device model (`/sys/.../device/model`) entry if available.
-    pub fn model(&self) -> Option<&OsStr> {
-        self.info
-            .model
-            .get_or_init(|| self.device.property_value("ID_MODEL").map(OsStr::to_owned))
-            .as_ref()
-            .map(OsString::as_os_str)
-    }
-
-    /// Check whether this is a rotational disk.
-    ///
-    /// Returns `None` if there's no `queue/rotational` file, in which case no information is
-    /// known. `Some(false)` if `queue/rotational` is zero, `Some(true)` if it has a non-zero
-    /// value.
-    pub fn rotational(&self) -> io::Result<Option<bool>> {
-        Ok(*self
-            .info
-            .rotational
-            .get_or_try_init(|| -> io::Result<Option<bool>> {
-                Ok(self.read_sys_u64("queue/rotational")?.map(|n| n != 0))
-            })?)
-    }
-
-    /// Get the WWN if available.
-    pub fn wwn(&self) -> Option<&OsStr> {
-        self.info
-            .wwn
-            .get_or_init(|| self.device.property_value("ID_WWN").map(|v| v.to_owned()))
-            .as_ref()
-            .map(OsString::as_os_str)
-    }
-
-    /// Get the device serial if available.
-    pub fn serial(&self) -> Option<&OsStr> {
-        self.info
-            .serial
-            .get_or_init(|| {
-                self.device
-                    .property_value("ID_SERIAL_SHORT")
-                    .map(|v| v.to_owned())
-            })
-            .as_ref()
-            .map(OsString::as_os_str)
-    }
-
-    /// Get the ATA rotation rate value from udev. This is not necessarily the same as sysfs'
-    /// `rotational` value.
-    pub fn ata_rotation_rate_rpm(&self) -> Option<u64> {
-        *self.info.ata_rotation_rate_rpm.get_or_init(|| {
-            std::str::from_utf8(
-                self.device
-                    .property_value("ID_ATA_ROTATION_RATE_RPM")?
-                    .as_bytes(),
-            )
-            .ok()?
-            .parse()
-            .ok()
-        })
-    }
-
-    /// Get the partition table type, if any.
-    pub fn partition_table_type(&self) -> Option<&OsStr> {
-        self.info
-            .partition_table_type
-            .get_or_init(|| {
-                self.device
-                    .property_value("ID_PART_TABLE_TYPE")
-                    .map(|v| v.to_owned())
-            })
-            .as_ref()
-            .map(OsString::as_os_str)
-    }
-
-    /// Check if this contains a GPT partition table.
-    pub fn has_gpt(&self) -> bool {
-        *self.info.gpt.get_or_init(|| {
-            self.partition_table_type()
-                .map(|s| s == "gpt")
-                .unwrap_or(false)
-        })
-    }
-
-    /// Get the partitioning scheme of which this device is a partition.
-    pub fn partition_entry_scheme(&self) -> Option<&OsStr> {
-        self.info
-            .partition_entry_scheme
-            .get_or_init(|| {
-                self.device
-                    .property_value("ID_PART_ENTRY_SCHEME")
-                    .map(|v| v.to_owned())
-            })
-            .as_ref()
-            .map(OsString::as_os_str)
-    }
-
-    /// Check if this is a partition.
-    pub fn is_partition(&self) -> bool {
-        self.partition_entry_scheme().is_some()
-    }
-
-    /// Get the type of partition entry (ie. type UUID from the entry in the GPT partition table).
-    pub fn partition_entry_type(&self) -> Option<&OsStr> {
-        self.info
-            .partition_entry_type
-            .get_or_init(|| {
-                self.device
-                    .property_value("ID_PART_ENTRY_TYPE")
-                    .map(|v| v.to_owned())
-            })
-            .as_ref()
-            .map(OsString::as_os_str)
-    }
-
-    /// Get the partition entry UUID (ie. the UUID from the entry in the GPT partition table).
-    pub fn partition_entry_uuid(&self) -> Option<&OsStr> {
-        self.info
-            .partition_entry_uuid
-            .get_or_init(|| {
-                self.device
-                    .property_value("ID_PART_ENTRY_UUID")
-                    .map(|v| v.to_owned())
-            })
-            .as_ref()
-            .map(OsString::as_os_str)
-    }
-
-    /// Get the bus type used for this disk.
-    pub fn bus(&self) -> Option<&OsStr> {
-        self.info
-            .bus
-            .get_or_init(|| self.device.property_value("ID_BUS").map(|v| v.to_owned()))
-            .as_ref()
-            .map(OsString::as_os_str)
-    }
-
-    /// Attempt to guess the disk type.
-    pub fn guess_disk_type(&self) -> io::Result<DiskType> {
-        Ok(match self.rotational()? {
-            Some(false) => DiskType::Ssd,
-            Some(true) => DiskType::Hdd,
-            None => match self.ata_rotation_rate_rpm() {
-                Some(_) => DiskType::Hdd,
-                None => match self.bus() {
-                    Some(bus) if bus == "usb" => DiskType::Usb,
-                    _ => DiskType::Unknown,
-                },
-            },
-        })
-    }
-
-    /// Get the file system type found on the disk, if any.
-    ///
-    /// Note that `None` may also just mean "unknown".
-    pub fn fs_type(&self) -> Option<&OsStr> {
-        self.info
-            .fs_type
-            .get_or_init(|| {
-                self.device
-                    .property_value("ID_FS_TYPE")
-                    .map(|v| v.to_owned())
-            })
-            .as_ref()
-            .map(OsString::as_os_str)
-    }
-
-    /// Check if there are any "holders" in `/sys`. This usually means the device is in use by
-    /// another kernel driver like the device mapper.
-    pub fn has_holders(&self) -> io::Result<bool> {
-        Ok(*self
-            .info
-            .has_holders
-            .get_or_try_init(|| -> io::Result<bool> {
-                let mut subdir = self.syspath().to_owned();
-                subdir.push("holders");
-                for entry in std::fs::read_dir(subdir)? {
-                    match entry?.file_name().as_bytes() {
-                        b"." | b".." => (),
-                        _ => return Ok(true),
-                    }
-                }
-                Ok(false)
-            })?)
-    }
-
-    /// Check if this disk is mounted.
-    pub fn is_mounted(&self) -> Result<bool, Error> {
-        Ok(*self
-            .info
-            .is_mounted
-            .get_or_try_init(|| self.manager.is_devnum_mounted(self.devnum()?))?)
-    }
-
-    /// Read block device stats
-    ///
-    /// see <https://www.kernel.org/doc/Documentation/block/stat.txt>
-    pub fn read_stat(&self) -> std::io::Result<Option<BlockDevStat>> {
-        if let Some(stat) = self.read_sys(Path::new("stat"))? {
-            let stat = unsafe { std::str::from_utf8_unchecked(&stat) };
-            let stat: Vec<u64> = stat
-                .split_ascii_whitespace()
-                .map(|s| s.parse().unwrap_or_default())
-                .collect();
-
-            if stat.len() < 15 {
-                return Ok(None);
-            }
-
-            return Ok(Some(BlockDevStat {
-                read_ios: stat[0],
-                read_sectors: stat[2],
-                write_ios: stat[4] + stat[11],     // write + discard
-                write_sectors: stat[6] + stat[13], // write + discard
-                io_ticks: stat[10],
-            }));
-        }
-        Ok(None)
-    }
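The index arithmetic in `read_stat` follows the field layout documented in the kernel's block/stat.txt (read I/Os at field 0, discards at fields 11/13, io_ticks at field 10). A minimal, self-contained sketch of that mapping, using an invented sample line for illustration:

```rust
// Sketch of the field mapping used by `read_stat` above; the sample
// stat line below is made up, not from a real device.
fn parse_block_stat(stat: &str) -> Option<(u64, u64, u64, u64, u64)> {
    let fields: Vec<u64> = stat
        .split_ascii_whitespace()
        .map(|s| s.parse().unwrap_or_default())
        .collect();
    if fields.len() < 15 {
        // older kernels without discard fields are treated as "no data"
        return None;
    }
    Some((
        fields[0],              // read I/Os
        fields[2],              // read sectors
        fields[4] + fields[11], // write I/Os + discard I/Os
        fields[6] + fields[13], // write sectors + discard sectors
        fields[10],             // io_ticks (milliseconds)
    ))
}

fn main() {
    // 17 whitespace-separated fields, as emitted by recent kernels
    let sample = "100 0 800 50 200 0 1600 90 0 120 140 10 0 80 5 0 0";
    let parsed = parse_block_stat(sample).unwrap();
    assert_eq!(parsed, (100, 800, 210, 1680, 140));
    println!("{parsed:?}");
}
```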
-
-    /// List device partitions
-    pub fn partitions(&self) -> Result<HashMap<u64, Disk>, Error> {
-        let sys_path = self.syspath();
-        let device = self.sysname().to_string_lossy().to_string();
-
-        let mut map = HashMap::new();
-
-        for item in proxmox_sys::fs::read_subdir(libc::AT_FDCWD, sys_path)? {
-            let item = item?;
-            let name = match item.file_name().to_str() {
-                Ok(name) => name,
-                Err(_) => continue, // skip non utf8 entries
-            };
-
-            if !name.starts_with(&device) {
-                continue;
-            }
-
-            let mut part_path = sys_path.to_owned();
-            part_path.push(name);
-
-            let disk_part = self.manager.clone().disk_by_sys_path(&part_path)?;
-
-            if let Some(partition) = disk_part.read_sys_u64("partition")? {
-                map.insert(partition, disk_part);
-            }
-        }
-
-        Ok(map)
-    }
-}
-
-#[api()]
-#[derive(Debug, Serialize, Deserialize)]
-#[serde(rename_all = "lowercase")]
-/// This is just a rough estimate for a "type" of disk.
-pub enum DiskType {
-    /// We know nothing.
-    Unknown,
-
-    /// May also be a USB-HDD.
-    Hdd,
-
-    /// May also be a USB-SSD.
-    Ssd,
-
-    /// Some kind of USB disk, but we don't know more than that.
-    Usb,
-}
-
-#[derive(Debug)]
-/// Represents the contents of the `/sys/block/<dev>/stat` file.
-pub struct BlockDevStat {
-    pub read_ios: u64,
-    pub read_sectors: u64,
-    pub write_ios: u64,
-    pub write_sectors: u64,
-    pub io_ticks: u64, // milliseconds
-}
-
-/// Use lsblk to read partition type uuids and file system types.
-pub fn get_lsblk_info() -> Result<Vec<LsblkInfo>, Error> {
-    let mut command = std::process::Command::new("lsblk");
-    command.args(["--json", "-o", "path,parttype,fstype,uuid"]);
-
-    let output = proxmox_sys::command::run_command(command, None)?;
-
-    let mut output: serde_json::Value = output.parse()?;
-
-    Ok(serde_json::from_value(output["blockdevices"].take())?)
-}
-
-/// Get set of devices with a file system label.
-///
-/// The set is indexed by using the unix raw device number (dev_t is u64)
-fn get_file_system_devices(lsblk_info: &[LsblkInfo]) -> Result<HashSet<u64>, Error> {
-    let mut device_set: HashSet<u64> = HashSet::new();
-
-    for info in lsblk_info.iter() {
-        if info.file_system_type.is_some() {
-            let meta = std::fs::metadata(&info.path)?;
-            device_set.insert(meta.rdev());
-        }
-    }
-
-    Ok(device_set)
-}
-
-#[api()]
-#[derive(Debug, Serialize, Deserialize, Eq, PartialEq)]
-#[serde(rename_all = "lowercase")]
-/// What a block device partition is used for.
-pub enum PartitionUsageType {
-    /// Partition is not used (as far as we can tell)
-    Unused,
-    /// Partition is used by LVM
-    LVM,
-    /// Partition is used by ZFS
-    ZFS,
-    /// Partition is ZFS reserved
-    ZfsReserved,
-    /// Partition is an EFI partition
-    EFI,
-    /// Partition is a BIOS partition
-    BIOS,
-    /// Partition contains a file system label
-    FileSystem,
-}
-
-#[api()]
-#[derive(Debug, Serialize, Deserialize, Eq, PartialEq)]
-#[serde(rename_all = "lowercase")]
-/// What a block device (disk) is used for.
-pub enum DiskUsageType {
-    /// Disk is not used (as far as we can tell)
-    Unused,
-    /// Disk is mounted
-    Mounted,
-    /// Disk is used by LVM
-    LVM,
-    /// Disk is used by ZFS
-    ZFS,
-    /// Disk is used by device-mapper
-    DeviceMapper,
-    /// Disk has partitions
-    Partitions,
-    /// Disk contains a file system label
-    FileSystem,
-}
-
-#[api()]
-#[derive(Debug, Serialize, Deserialize)]
-#[serde(rename_all = "kebab-case")]
-/// Basic information about a partition
-pub struct PartitionInfo {
-    /// The partition name
-    pub name: String,
-    /// What the partition is used for
-    pub used: PartitionUsageType,
-    /// Is the partition mounted
-    pub mounted: bool,
-    /// The filesystem of the partition
-    pub filesystem: Option<String>,
-    /// The partition devpath
-    pub devpath: Option<String>,
-    /// Size in bytes
-    pub size: Option<u64>,
-    /// GPT partition
-    pub gpt: bool,
-    /// UUID
-    pub uuid: Option<String>,
-}
-
-#[api(
-    properties: {
-        used: {
-            type: DiskUsageType,
-        },
-        "disk-type": {
-            type: DiskType,
-        },
-        status: {
-            type: SmartStatus,
-        },
-        partitions: {
-            optional: true,
-            items: {
-                type: PartitionInfo
-            }
-        }
-    }
-)]
-#[derive(Debug, Serialize, Deserialize)]
-#[serde(rename_all = "kebab-case")]
-/// Information about how a Disk is used
-pub struct DiskUsageInfo {
-    /// Disk name (`/sys/block/<name>`)
-    pub name: String,
-    pub used: DiskUsageType,
-    pub disk_type: DiskType,
-    pub status: SmartStatus,
-    /// Disk wearout
-    pub wearout: Option<f64>,
-    /// Vendor
-    pub vendor: Option<String>,
-    /// Model
-    pub model: Option<String>,
-    /// WWN
-    pub wwn: Option<String>,
-    /// Disk size
-    pub size: u64,
-    /// Serial number
-    pub serial: Option<String>,
-    /// Partitions on the device
-    pub partitions: Option<Vec<PartitionInfo>>,
-    /// Linux device path (/dev/xxx)
-    pub devpath: Option<String>,
-    /// Set if disk contains a GPT partition table
-    pub gpt: bool,
-    /// RPM
-    pub rpm: Option<u64>,
-}
-
-fn scan_partitions(
-    disk_manager: Arc<DiskManage>,
-    lvm_devices: &HashSet<u64>,
-    zfs_devices: &HashSet<u64>,
-    device: &str,
-) -> Result<DiskUsageType, Error> {
-    let mut sys_path = std::path::PathBuf::from("/sys/block");
-    sys_path.push(device);
-
-    let mut used = DiskUsageType::Unused;
-
-    let mut found_lvm = false;
-    let mut found_zfs = false;
-    let mut found_mountpoints = false;
-    let mut found_dm = false;
-    let mut found_partitions = false;
-
-    for item in proxmox_sys::fs::read_subdir(libc::AT_FDCWD, &sys_path)? {
-        let item = item?;
-        let name = match item.file_name().to_str() {
-            Ok(name) => name,
-            Err(_) => continue, // skip non utf8 entries
-        };
-        if !name.starts_with(device) {
-            continue;
-        }
-
-        found_partitions = true;
-
-        let mut part_path = sys_path.clone();
-        part_path.push(name);
-
-        let data = disk_manager.clone().disk_by_sys_path(&part_path)?;
-
-        let devnum = data.devnum()?;
-
-        if lvm_devices.contains(&devnum) {
-            found_lvm = true;
-        }
-
-        if data.is_mounted()? {
-            found_mountpoints = true;
-        }
-
-        if data.has_holders()? {
-            found_dm = true;
-        }
-
-        if zfs_devices.contains(&devnum) {
-            found_zfs = true;
-        }
-    }
-
-    if found_mountpoints {
-        used = DiskUsageType::Mounted;
-    } else if found_lvm {
-        used = DiskUsageType::LVM;
-    } else if found_zfs {
-        used = DiskUsageType::ZFS;
-    } else if found_dm {
-        used = DiskUsageType::DeviceMapper;
-    } else if found_partitions {
-        used = DiskUsageType::Partitions;
-    }
-
-    Ok(used)
-}
-
-pub struct DiskUsageQuery {
-    smart: bool,
-    partitions: bool,
-}
-
-impl Default for DiskUsageQuery {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-impl DiskUsageQuery {
-    pub const fn new() -> Self {
-        Self {
-            smart: true,
-            partitions: false,
-        }
-    }
-
-    pub fn smart(&mut self, smart: bool) -> &mut Self {
-        self.smart = smart;
-        self
-    }
-
-    pub fn partitions(&mut self, partitions: bool) -> &mut Self {
-        self.partitions = partitions;
-        self
-    }
-
-    pub fn query(&self) -> Result<HashMap<String, DiskUsageInfo>, Error> {
-        get_disks(None, !self.smart, self.partitions)
-    }
-
-    pub fn find(&self, disk: &str) -> Result<DiskUsageInfo, Error> {
-        let mut map = get_disks(Some(vec![disk.to_string()]), !self.smart, self.partitions)?;
-        if let Some(info) = map.remove(disk) {
-            Ok(info)
-        } else {
-            bail!("failed to get disk usage info - internal error"); // should not happen
-        }
-    }
-
-    pub fn find_all(&self, disks: Vec<String>) -> Result<HashMap<String, DiskUsageInfo>, Error> {
-        get_disks(Some(disks), !self.smart, self.partitions)
-    }
-}
-
-fn get_partitions_info(
-    partitions: HashMap<u64, Disk>,
-    lvm_devices: &HashSet<u64>,
-    zfs_devices: &HashSet<u64>,
-    file_system_devices: &HashSet<u64>,
-    lsblk_infos: &[LsblkInfo],
-) -> Vec<PartitionInfo> {
-    partitions
-        .values()
-        .map(|disk| {
-            let devpath = disk
-                .device_path()
-                .map(|p| p.to_owned())
-                .map(|p| p.to_string_lossy().to_string());
-
-            let mut used = PartitionUsageType::Unused;
-
-            if let Ok(devnum) = disk.devnum() {
-                if lvm_devices.contains(&devnum) {
-                    used = PartitionUsageType::LVM;
-                } else if zfs_devices.contains(&devnum) {
-                    used = PartitionUsageType::ZFS;
-                } else if file_system_devices.contains(&devnum) {
-                    used = PartitionUsageType::FileSystem;
-                }
-            }
-
-            let mounted = disk.is_mounted().unwrap_or(false);
-            let mut filesystem = None;
-            let mut uuid = None;
-            if let Some(devpath) = devpath.as_ref() {
-                for info in lsblk_infos.iter().filter(|i| i.path.eq(devpath)) {
-                    uuid = info
-                        .uuid
-                        .clone()
-                        .filter(|uuid| pbs_api_types::UUID_REGEX.is_match(uuid));
-                    used = match info.partition_type.as_deref() {
-                        Some("21686148-6449-6e6f-744e-656564454649") => PartitionUsageType::BIOS,
-                        Some("c12a7328-f81f-11d2-ba4b-00a0c93ec93b") => PartitionUsageType::EFI,
-                        Some("6a945a3b-1dd2-11b2-99a6-080020736631") => {
-                            PartitionUsageType::ZfsReserved
-                        }
-                        _ => used,
-                    };
-                    if used == PartitionUsageType::FileSystem {
-                        filesystem.clone_from(&info.file_system_type);
-                    }
-                }
-            }
-
-            PartitionInfo {
-                name: disk.sysname().to_str().unwrap_or("?").to_string(),
-                devpath,
-                used,
-                mounted,
-                filesystem,
-                size: disk.size().ok(),
-                gpt: disk.has_gpt(),
-                uuid,
-            }
-        })
-        .collect()
-}
-
-/// Get disk usage information for multiple disks
-fn get_disks(
-    // filter - list of device names (without leading /dev)
-    disks: Option<Vec<String>>,
-    // do not include data from smartctl
-    no_smart: bool,
-    // include partitions
-    include_partitions: bool,
-) -> Result<HashMap<String, DiskUsageInfo>, Error> {
-    let disk_manager = DiskManage::new();
-
-    let lsblk_info = get_lsblk_info()?;
-
-    let zfs_devices =
-        zfs_devices(&lsblk_info, None).or_else(|err| -> Result<HashSet<u64>, Error> {
-            eprintln!("error getting zfs devices: {err}");
-            Ok(HashSet::new())
-        })?;
-
-    let lvm_devices = get_lvm_devices(&lsblk_info)?;
-
-    let file_system_devices = get_file_system_devices(&lsblk_info)?;
-
-    // fixme: ceph journals/volumes
-
-    let mut result = HashMap::new();
-    let mut device_paths = Vec::new();
-
-    for item in proxmox_sys::fs::scan_subdir(libc::AT_FDCWD, "/sys/block", &BLOCKDEVICE_NAME_REGEX)?
-    {
-        let item = item?;
-
-        let name = item.file_name().to_str().unwrap().to_string();
-
-        if let Some(ref disks) = disks {
-            if !disks.contains(&name) {
-                continue;
-            }
-        }
-
-        let sys_path = format!("/sys/block/{name}");
-
-        if let Ok(target) = std::fs::read_link(&sys_path) {
-            if let Some(target) = target.to_str() {
-                if ISCSI_PATH_REGEX.is_match(target) {
-                    continue;
-                } // skip iSCSI devices
-            }
-        }
-
-        let disk = disk_manager.clone().disk_by_sys_path(&sys_path)?;
-
-        let devnum = disk.devnum()?;
-
-        let size = match disk.size() {
-            Ok(size) => size,
-            Err(_) => continue, // skip devices with unreadable size
-        };
-
-        let disk_type = match disk.guess_disk_type() {
-            Ok(disk_type) => disk_type,
-            Err(_) => continue, // skip devices with undetectable type
-        };
-
-        let mut usage = DiskUsageType::Unused;
-
-        if lvm_devices.contains(&devnum) {
-            usage = DiskUsageType::LVM;
-        }
-
-        match disk.is_mounted() {
-            Ok(true) => usage = DiskUsageType::Mounted,
-            Ok(false) => {}
-            Err(_) => continue, // skip devices with undetectable mount status
-        }
-
-        if zfs_devices.contains(&devnum) {
-            usage = DiskUsageType::ZFS;
-        }
-
-        let vendor = disk
-            .vendor()
-            .unwrap_or(None)
-            .map(|s| s.to_string_lossy().trim().to_string());
-
-        let model = disk.model().map(|s| s.to_string_lossy().into_owned());
-
-        let serial = disk.serial().map(|s| s.to_string_lossy().into_owned());
-
-        let devpath = disk
-            .device_path()
-            .map(|p| p.to_owned())
-            .map(|p| p.to_string_lossy().to_string());
-
-        device_paths.push((name.clone(), devpath.clone()));
-
-        let wwn = disk.wwn().map(|s| s.to_string_lossy().into_owned());
-
-        let partitions: Option<Vec<PartitionInfo>> = if include_partitions {
-            disk.partitions().map_or(None, |parts| {
-                Some(get_partitions_info(
-                    parts,
-                    &lvm_devices,
-                    &zfs_devices,
-                    &file_system_devices,
-                    &lsblk_info,
-                ))
-            })
-        } else {
-            None
-        };
-
-        if usage != DiskUsageType::Mounted {
-            match scan_partitions(disk_manager.clone(), &lvm_devices, &zfs_devices, &name) {
-                Ok(part_usage) => {
-                    if part_usage != DiskUsageType::Unused {
-                        usage = part_usage;
-                    }
-                }
-                Err(_) => continue, // skip devices if scan_partitions fails
-            };
-        }
-
-        if usage == DiskUsageType::Unused && file_system_devices.contains(&devnum) {
-            usage = DiskUsageType::FileSystem;
-        }
-
-        if usage == DiskUsageType::Unused && disk.has_holders()? {
-            usage = DiskUsageType::DeviceMapper;
-        }
-
-        let info = DiskUsageInfo {
-            name: name.clone(),
-            vendor,
-            model,
-            partitions,
-            serial,
-            devpath,
-            size,
-            wwn,
-            disk_type,
-            status: SmartStatus::Unknown,
-            wearout: None,
-            used: usage,
-            gpt: disk.has_gpt(),
-            rpm: disk.ata_rotation_rate_rpm(),
-        };
-
-        result.insert(name, info);
-    }
-
-    if !no_smart {
-        let (tx, rx) = crossbeam_channel::bounded(result.len());
-
-        let parallel_handler =
-            ParallelHandler::new("smartctl data", 4, move |device: (String, String)| {
-                match get_smart_data(Path::new(&device.1), false) {
-                    Ok(smart_data) => tx.send((device.0, smart_data))?,
-                    // do not fail the whole disk output just because smartctl couldn't query one
-                    Err(err) => log::error!("failed to gather smart data for {} – {err}", device.1),
-                }
-                Ok(())
-            });
-
-        for (name, path) in device_paths.into_iter() {
-            if let Some(p) = path {
-                parallel_handler.send((name, p))?;
-            }
-        }
-
-        parallel_handler.complete()?;
-        while let Ok(msg) = rx.recv() {
-            if let Some(value) = result.get_mut(&msg.0) {
-                value.wearout = msg.1.wearout;
-                value.status = msg.1.status;
-            }
-        }
-    }
-    Ok(result)
-}
-
-/// Try to reload the partition table
-pub fn reread_partition_table(disk: &Disk) -> Result<(), Error> {
-    let disk_path = match disk.device_path() {
-        Some(path) => path,
-        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
-    };
-
-    let mut command = std::process::Command::new("blockdev");
-    command.arg("--rereadpt");
-    command.arg(disk_path);
-
-    proxmox_sys::command::run_command(command, None)?;
-
-    Ok(())
-}
-
-/// Initialize disk by writing a GPT partition table
-pub fn inititialize_gpt_disk(disk: &Disk, uuid: Option<&str>) -> Result<(), Error> {
-    let disk_path = match disk.device_path() {
-        Some(path) => path,
-        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
-    };
-
-    let uuid = uuid.unwrap_or("R"); // R .. random disk GUID
-
-    let mut command = std::process::Command::new("sgdisk");
-    command.arg(disk_path);
-    command.args(["-U", uuid]);
-
-    proxmox_sys::command::run_command(command, None)?;
-
-    Ok(())
-}
-
-/// Wipes all labels, the first 200 MiB, and the last 4096 bytes of a disk/partition.
-/// If called with a partition, also sets the partition type to 0x83 'Linux filesystem'.
-pub fn wipe_blockdev(disk: &Disk) -> Result<(), Error> {
-    let disk_path = match disk.device_path() {
-        Some(path) => path,
-        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
-    };
-
-    let is_partition = disk.is_partition();
-
-    let mut to_wipe: Vec<PathBuf> = Vec::new();
-
-    let partitions_map = disk.partitions()?;
-    for part_disk in partitions_map.values() {
-        let part_path = match part_disk.device_path() {
-            Some(path) => path,
-            None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
-        };
-        to_wipe.push(part_path.to_path_buf());
-    }
-
-    to_wipe.push(disk_path.to_path_buf());
-
-    info!("Wiping block device {}", disk_path.display());
-
-    let mut wipefs_command = std::process::Command::new("wipefs");
-    wipefs_command.arg("--all").args(&to_wipe);
-
-    let wipefs_output = proxmox_sys::command::run_command(wipefs_command, None)?;
-    info!("wipefs output: {wipefs_output}");
-
-    zero_disk_start_and_end(disk)?;
-
-    if is_partition {
-        // set the partition type to 0x83 'Linux filesystem'
-        change_parttype(disk, "8300")?;
-    }
-
-    Ok(())
-}
-
-pub fn zero_disk_start_and_end(disk: &Disk) -> Result<(), Error> {
-    let disk_path = match disk.device_path() {
-        Some(path) => path,
-        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
-    };
-
-    let disk_size = disk.size()?;
-    let file = std::fs::OpenOptions::new()
-        .write(true)
-        .custom_flags(libc::O_CLOEXEC | libc::O_DSYNC)
-        .open(disk_path)
-        .with_context(|| format!("failed to open device {disk_path:?} for writing"))?;
-    let write_size = disk_size.min(200 * 1024 * 1024);
-    let zeroes = proxmox_io::boxed::zeroed(write_size as usize);
-    file.write_all_at(&zeroes, 0)
-        .with_context(|| format!("failed to wipe start of device {disk_path:?}"))?;
-    if disk_size > write_size {
-        file.write_all_at(&zeroes[0..4096], disk_size - 4096)
-            .with_context(|| format!("failed to wipe end of device {disk_path:?}"))?;
-    }
-    Ok(())
-}
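The offset arithmetic in `zero_disk_start_and_end` can be summarized as: zero `min(disk_size, 200 MiB)` at offset 0, and, if the disk is larger than that, additionally the final 4096 bytes. A rough sketch of just that region computation, with invented disk sizes:

```rust
// Sketch of the wipe-region arithmetic used by `zero_disk_start_and_end`.
// Sizes below are examples, not taken from a real device.
const HEAD_LEN: u64 = 200 * 1024 * 1024; // wipe up to 200 MiB at the start
const TAIL_LEN: u64 = 4096; // and the last 4096 bytes, if any remain

/// Returns the (offset, length) regions that would be zeroed.
fn wipe_regions(disk_size: u64) -> Vec<(u64, u64)> {
    let head = disk_size.min(HEAD_LEN);
    let mut regions = vec![(0, head)];
    if disk_size > head {
        regions.push((disk_size - TAIL_LEN, TAIL_LEN));
    }
    regions
}

fn main() {
    // A 1 GiB disk: 200 MiB head wipe plus a 4 KiB tail wipe.
    let gib: u64 = 1024 * 1024 * 1024;
    assert_eq!(wipe_regions(gib), vec![(0, HEAD_LEN), (gib - 4096, 4096)]);
    // A 1 MiB device is zeroed completely, with no separate tail write.
    assert_eq!(wipe_regions(1024 * 1024), vec![(0, 1024 * 1024)]);
    println!("ok");
}
```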
-
-pub fn change_parttype(part_disk: &Disk, part_type: &str) -> Result<(), Error> {
-    let part_path = match part_disk.device_path() {
-        Some(path) => path,
-        None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
-    };
-    if let Ok(stat) = nix::sys::stat::stat(part_path) {
-        let mut sgdisk_command = std::process::Command::new("sgdisk");
-        let major = unsafe { libc::major(stat.st_rdev) };
-        let minor = unsafe { libc::minor(stat.st_rdev) };
-        let partnum_path = &format!("/sys/dev/block/{major}:{minor}/partition");
-        let partnum: u32 = std::fs::read_to_string(partnum_path)?.trim_end().parse()?;
-        sgdisk_command.arg(format!("-t{partnum}:{part_type}"));
-        let part_disk_parent = match part_disk.parent() {
-            Some(disk) => disk,
-            None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
-        };
-        let part_disk_parent_path = match part_disk_parent.device_path() {
-            Some(path) => path,
-            None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
-        };
-        sgdisk_command.arg(part_disk_parent_path);
-        let sgdisk_output = proxmox_sys::command::run_command(sgdisk_command, None)?;
-        info!("sgdisk output: {sgdisk_output}");
-    }
-    Ok(())
-}
-
-/// Create a single linux partition using the whole available space
-pub fn create_single_linux_partition(disk: &Disk) -> Result<Disk, Error> {
-    let disk_path = match disk.device_path() {
-        Some(path) => path,
-        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
-    };
-
-    let mut command = std::process::Command::new("sgdisk");
-    command.args(["-n1", "-t1:8300"]);
-    command.arg(disk_path);
-
-    proxmox_sys::command::run_command(command, None)?;
-
-    let mut partitions = disk.partitions()?;
-
-    match partitions.remove(&1) {
-        Some(partition) => Ok(partition),
-        None => bail!("unable to lookup device partition"),
-    }
-}
-
-#[api()]
-#[derive(Debug, Copy, Clone, Serialize, Deserialize, Eq, PartialEq)]
-#[serde(rename_all = "lowercase")]
-/// A file system type supported by our tooling.
-pub enum FileSystemType {
-    /// Linux Ext4
-    Ext4,
-    /// XFS
-    Xfs,
-}
-
-impl std::fmt::Display for FileSystemType {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        let text = match self {
-            FileSystemType::Ext4 => "ext4",
-            FileSystemType::Xfs => "xfs",
-        };
-        write!(f, "{text}")
-    }
-}
-
-impl std::str::FromStr for FileSystemType {
-    type Err = serde_json::Error;
-
-    fn from_str(s: &str) -> Result<Self, Self::Err> {
-        use serde::de::IntoDeserializer;
-        Self::deserialize(s.into_deserializer())
-    }
-}
-
-/// Create a file system on a disk or disk partition
-pub fn create_file_system(disk: &Disk, fs_type: FileSystemType) -> Result<(), Error> {
-    let disk_path = match disk.device_path() {
-        Some(path) => path,
-        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
-    };
-
-    let fs_type = fs_type.to_string();
-
-    let mut command = std::process::Command::new("mkfs");
-    command.args(["-t", &fs_type]);
-    command.arg(disk_path);
-
-    proxmox_sys::command::run_command(command, None)?;
-
-    Ok(())
-}
-/// Block device name completion helper
-pub fn complete_disk_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
-    let dir =
-        match proxmox_sys::fs::scan_subdir(libc::AT_FDCWD, "/sys/block", &BLOCKDEVICE_NAME_REGEX) {
-            Ok(dir) => dir,
-            Err(_) => return vec![],
-        };
-
-    dir.flatten()
-        .map(|item| item.file_name().to_str().unwrap().to_string())
-        .collect()
-}
-
-/// Block device partition name completion helper
-pub fn complete_partition_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
-    let dir = match proxmox_sys::fs::scan_subdir(
-        libc::AT_FDCWD,
-        "/sys/class/block",
-        &BLOCKDEVICE_DISK_AND_PARTITION_NAME_REGEX,
-    ) {
-        Ok(dir) => dir,
-        Err(_) => return vec![],
-    };
-
-    dir.flatten()
-        .map(|item| item.file_name().to_str().unwrap().to_string())
-        .collect()
-}
-
-/// Read the FS UUID (parse blkid output)
-///
-/// Note: Calling blkid is more reliable than using the udev ID_FS_UUID property.
-pub fn get_fs_uuid(disk: &Disk) -> Result<String, Error> {
-    let disk_path = match disk.device_path() {
-        Some(path) => path,
-        None => bail!("disk {:?} has no node in /dev", disk.syspath()),
-    };
-
-    let mut command = std::process::Command::new("blkid");
-    command.args(["-o", "export"]);
-    command.arg(disk_path);
-
-    let output = proxmox_sys::command::run_command(command, None)?;
-
-    for line in output.lines() {
-        if let Some(uuid) = line.strip_prefix("UUID=") {
-            return Ok(uuid.to_string());
-        }
-    }
-
-    bail!("get_fs_uuid failed - missing UUID");
-}
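`get_fs_uuid` relies on the line-oriented `KEY=value` output of `blkid -o export`. The parse step, extracted into a testable helper (the sample output below is invented for illustration):

```rust
// Sketch of the UUID extraction done in `get_fs_uuid`; the sample
// `blkid -o export` output is made up.
fn find_uuid(blkid_export: &str) -> Option<String> {
    blkid_export
        .lines()
        .find_map(|line| line.strip_prefix("UUID=").map(str::to_string))
}

fn main() {
    let sample = "DEVNAME=/dev/sdb1\nUUID=1234-ABCD\nTYPE=ext4";
    assert_eq!(find_uuid(sample).as_deref(), Some("1234-ABCD"));
    // note: a PARTUUID= line does not match the UUID= prefix
    assert_eq!(find_uuid("DEVNAME=/dev/sdc1\nPARTUUID=deadbeef-01"), None);
    println!("ok");
}
```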
-
-/// Mount a disk by its UUID and the mount point.
-pub fn mount_by_uuid(uuid: &str, mount_point: &Path) -> Result<(), Error> {
-    let mut command = std::process::Command::new("mount");
-    command.arg(format!("UUID={uuid}"));
-    command.arg(mount_point);
-
-    proxmox_sys::command::run_command(command, None)?;
-    Ok(())
-}
-
-/// Create bind mount.
-pub fn bind_mount(path: &Path, target: &Path) -> Result<(), Error> {
-    let mut command = std::process::Command::new("mount");
-    command.arg("--bind");
-    command.arg(path);
-    command.arg(target);
-
-    proxmox_sys::command::run_command(command, None)?;
-    Ok(())
-}
-
-/// Unmount a disk by its mount point.
-pub fn unmount_by_mountpoint(path: &Path) -> Result<(), Error> {
-    let mut command = std::process::Command::new("umount");
-    command.arg(path);
-
-    proxmox_sys::command::run_command(command, None)?;
-    Ok(())
-}
diff --git a/src/tools/disks/smart.rs b/src/tools/disks/smart.rs
deleted file mode 100644
index 1d41cee24..000000000
--- a/src/tools/disks/smart.rs
+++ /dev/null
@@ -1,227 +0,0 @@
-use std::sync::LazyLock;
-use std::{
-    collections::{HashMap, HashSet},
-    path::Path,
-};
-
-use ::serde::{Deserialize, Serialize};
-use anyhow::Error;
-
-use proxmox_schema::api;
-
-#[api()]
-#[derive(Debug, Serialize, Deserialize)]
-#[serde(rename_all = "lowercase")]
-/// SMART status
-pub enum SmartStatus {
-    /// Smart tests passed - everything is OK
-    Passed,
-    /// Smart tests failed - disk has problems
-    Failed,
-    /// Unknown status
-    Unknown,
-}
-
-#[api()]
-#[derive(Debug, Serialize, Deserialize)]
-/// SMART Attribute
-pub struct SmartAttribute {
-    /// Attribute name
-    name: String,
-    // FIXME: remove value with next major release (PBS 3.0)
-    /// duplicate of raw - kept for API stability
-    value: String,
-    /// Attribute raw value
-    raw: String,
-    // the rest of the values are available for ATA type
-    /// ATA Attribute ID
-    #[serde(skip_serializing_if = "Option::is_none")]
-    id: Option<u64>,
-    /// ATA Flags
-    #[serde(skip_serializing_if = "Option::is_none")]
-    flags: Option<String>,
-    /// ATA normalized value (0..100)
-    #[serde(skip_serializing_if = "Option::is_none")]
-    normalized: Option<f64>,
-    /// ATA worst
-    #[serde(skip_serializing_if = "Option::is_none")]
-    worst: Option<f64>,
-    /// ATA threshold
-    #[serde(skip_serializing_if = "Option::is_none")]
-    threshold: Option<f64>,
-}
-
-#[api(
-    properties: {
-        status: {
-            type: SmartStatus,
-        },
-        wearout: {
-            description: "Wearout level.",
-            type: f64,
-            optional: true,
-        },
-        attributes: {
-            description: "SMART attributes.",
-            type: Array,
-            items: {
-                type: SmartAttribute,
-            },
-        },
-    },
-)]
-#[derive(Debug, Serialize, Deserialize)]
-/// Data from smartctl
-pub struct SmartData {
-    pub status: SmartStatus,
-    pub wearout: Option<f64>,
-    pub attributes: Vec<SmartAttribute>,
-}
-
-/// Read smartctl data for a disk (/dev/XXX).
-pub fn get_smart_data(disk_path: &Path, health_only: bool) -> Result<SmartData, Error> {
-    const SMARTCTL_BIN_PATH: &str = "smartctl";
-
-    let mut command = std::process::Command::new(SMARTCTL_BIN_PATH);
-    command.arg("-H");
-    if !health_only {
-        command.args(["-A", "-j"]);
-    }
-
-    command.arg(disk_path);
-
-    let output = proxmox_sys::command::run_command(
-        command,
-        Some(
-            |exitcode| (exitcode & 0b0011) == 0, // only bits 0-1 are fatal errors
-        ),
-    )?;
-
-    let output: serde_json::Value = output.parse()?;
-
-    let mut wearout = None;
-
-    let mut attributes = Vec::new();
-    let mut wearout_candidates = HashMap::new();
-
-    // ATA devices
-    if let Some(list) = output["ata_smart_attributes"]["table"].as_array() {
-        for item in list {
-            let id = match item["id"].as_u64() {
-                Some(id) => id,
-                None => continue, // skip attributes without id
-            };
-
-            let name = match item["name"].as_str() {
-                Some(name) => name.to_string(),
-                None => continue, // skip attributes without name
-            };
-
-            let raw_value = match item["raw"]["string"].as_str() {
-                Some(value) => value.to_string(),
-                None => continue, // skip attributes without raw value
-            };
-
-            let flags = match item["flags"]["string"].as_str() {
-                Some(flags) => flags.to_string(),
-                None => continue, // skip attributes without flags
-            };
-
-            let normalized = match item["value"].as_f64() {
-                Some(v) => v,
-                None => continue, // skip attributes without normalized value
-            };
-
-            let worst = match item["worst"].as_f64() {
-                Some(v) => v,
-                None => continue, // skip attributes without worst entry
-            };
-
-            let threshold = match item["thresh"].as_f64() {
-                Some(v) => v,
-                None => continue, // skip attributes without threshold entry
-            };
-
-            if WEAROUT_FIELD_NAMES.contains(&name as &str) {
-                wearout_candidates.insert(name.clone(), normalized);
-            }
-
-            attributes.push(SmartAttribute {
-                name,
-                value: raw_value.clone(),
-                raw: raw_value,
-                id: Some(id),
-                flags: Some(flags),
-                normalized: Some(normalized),
-                worst: Some(worst),
-                threshold: Some(threshold),
-            });
-        }
-    }
-
-    if !wearout_candidates.is_empty() {
-        for field in WEAROUT_FIELD_ORDER {
-            if let Some(value) = wearout_candidates.get(field as &str) {
-                wearout = Some(*value);
-                break;
-            }
-        }
-    }
-
-    // NVME devices
-    if let Some(list) = output["nvme_smart_health_information_log"].as_object() {
-        for (name, value) in list {
-            if name == "percentage_used" {
-                // extract wearout from nvme text, allow for decimal values
-                if let Some(v) = value.as_f64() {
-                    if v <= 100.0 {
-                        wearout = Some(100.0 - v);
-                    }
-                }
-            }
-            if let Some(value) = value.as_f64() {
-                attributes.push(SmartAttribute {
-                    name: name.to_string(),
-                    value: value.to_string(),
-                    raw: value.to_string(),
-                    id: None,
-                    flags: None,
-                    normalized: None,
-                    worst: None,
-                    threshold: None,
-                });
-            }
-        }
-    }
-
-    let status = match output["smart_status"]["passed"].as_bool() {
-        None => SmartStatus::Unknown,
-        Some(true) => SmartStatus::Passed,
-        Some(false) => SmartStatus::Failed,
-    };
-
-    Ok(SmartData {
-        status,
-        wearout,
-        attributes,
-    })
-}
-
-static WEAROUT_FIELD_ORDER: &[&str] = &[
-    "Media_Wearout_Indicator",
-    "SSD_Life_Left",
-    "Wear_Leveling_Count",
-    "Perc_Write/Erase_Ct_BC",
-    "Perc_Rated_Life_Remain",
-    "Remaining_Lifetime_Perc",
-    "Percent_Lifetime_Remain",
-    "Lifetime_Left",
-    "PCT_Life_Remaining",
-    "Lifetime_Remaining",
-    "Percent_Life_Remaining",
-    "Percent_Lifetime_Used",
-    "Perc_Rated_Life_Used",
-];
-
-static WEAROUT_FIELD_NAMES: LazyLock<HashSet<&'static str>> =
-    LazyLock::new(|| WEAROUT_FIELD_ORDER.iter().cloned().collect());
diff --git a/src/tools/disks/zfs.rs b/src/tools/disks/zfs.rs
deleted file mode 100644
index 0babb8870..000000000
--- a/src/tools/disks/zfs.rs
+++ /dev/null
@@ -1,205 +0,0 @@
-use std::collections::HashSet;
-use std::os::unix::fs::MetadataExt;
-use std::path::PathBuf;
-use std::sync::{LazyLock, Mutex};
-
-use anyhow::{bail, Error};
-
-use proxmox_schema::const_regex;
-
-use super::*;
-
-static ZFS_UUIDS: LazyLock<HashSet<&'static str>> = LazyLock::new(|| {
-    let mut set = HashSet::new();
-    set.insert("6a898cc3-1dd2-11b2-99a6-080020736631"); // apple
-    set.insert("516e7cba-6ecf-11d6-8ff8-00022d09712b"); // bsd
-    set
-});
-
-fn get_pool_from_dataset(dataset: &str) -> &str {
-    if let Some(idx) = dataset.find('/') {
-        dataset[0..idx].as_ref()
-    } else {
-        dataset
-    }
-}
-
-/// Returns kernel IO-stats for zfs pools
-pub fn zfs_pool_stats(pool: &OsStr) -> Result<Option<BlockDevStat>, Error> {
-    let mut path = PathBuf::from("/proc/spl/kstat/zfs");
-    path.push(pool);
-    path.push("io");
-
-    let text = match proxmox_sys::fs::file_read_optional_string(&path)? {
-        Some(text) => text,
-        None => {
-            return Ok(None);
-        }
-    };
-
-    let lines: Vec<&str> = text.lines().collect();
-
-    if lines.len() < 3 {
-        bail!("unable to parse {:?} - got less than 3 lines", path);
-    }
-
-    // https://github.com/openzfs/zfs/blob/master/lib/libspl/include/sys/kstat.h#L578
-    // nread    nwritten reads    writes   wtime    wlentime wupdate  rtime    rlentime rupdate  wcnt     rcnt
-    // Note: w -> wait (wtime -> wait time)
-    // Note: r -> run  (rtime -> run time)
-    // All times are nanoseconds
-    let stat: Vec<u64> = lines[2]
-        .split_ascii_whitespace()
-        .map(|s| s.parse().unwrap_or_default())
-        .collect();
-
-    let ticks = (stat[4] + stat[7]) / 1_000_000; // convert to milisec
-
-    let stat = BlockDevStat {
-        read_sectors: stat[0] >> 9,
-        write_sectors: stat[1] >> 9,
-        read_ios: stat[2],
-        write_ios: stat[3],
-        io_ticks: ticks,
-    };
-
-    Ok(Some(stat))
-}
-
-/// Get set of devices used by zfs (or a specific zfs pool)
-///
-/// The set is indexed by using the unix raw device number (dev_t is u64)
-pub fn zfs_devices(lsblk_info: &[LsblkInfo], pool: Option<String>) -> Result<HashSet<u64>, Error> {
-    let list = zpool_list(pool.as_ref(), true)?;
-
-    let mut device_set = HashSet::new();
-    for entry in list {
-        for device in entry.devices {
-            let meta = std::fs::metadata(device)?;
-            device_set.insert(meta.rdev());
-        }
-    }
-    if pool.is_none() {
-        for info in lsblk_info.iter() {
-            if let Some(partition_type) = &info.partition_type {
-                if ZFS_UUIDS.contains(partition_type.as_str()) {
-                    let meta = std::fs::metadata(&info.path)?;
-                    device_set.insert(meta.rdev());
-                }
-            }
-        }
-    }
-
-    Ok(device_set)
-}
-
-const ZFS_KSTAT_BASE_PATH: &str = "/proc/spl/kstat/zfs";
-const_regex! {
-    OBJSET_REGEX = r"^objset-0x[a-fA-F0-9]+$";
-}
-
-static ZFS_DATASET_OBJSET_MAP: LazyLock<Mutex<HashMap<String, (String, String)>>> =
-    LazyLock::new(|| Mutex::new(HashMap::new()));
-
-// parses /proc/spl/kstat/zfs/POOL/objset-ID files
-// they have the following format:
-//
-// 0 0 0x00 0 0000 00000000000 000000000000000000
-// name                            type data
-// dataset_name                    7    pool/dataset
-// writes                          4    0
-// nwritten                        4    0
-// reads                           4    0
-// nread                           4    0
-// nunlinks                        4    0
-// nunlinked                       4    0
-//
-// we are only interested in the dataset_name, writes, nwrites, reads and nread
-fn parse_objset_stat(pool: &str, objset_id: &str) -> Result<(String, BlockDevStat), Error> {
-    let path = PathBuf::from(format!("{ZFS_KSTAT_BASE_PATH}/{pool}/{objset_id}"));
-
-    let text = match proxmox_sys::fs::file_read_optional_string(path)? {
-        Some(text) => text,
-        None => bail!("could not parse '{}' stat file", objset_id),
-    };
-
-    let mut dataset_name = String::new();
-    let mut stat = BlockDevStat {
-        read_sectors: 0,
-        write_sectors: 0,
-        read_ios: 0,
-        write_ios: 0,
-        io_ticks: 0,
-    };
-
-    for (i, line) in text.lines().enumerate() {
-        if i < 2 {
-            continue;
-        }
-
-        let mut parts = line.split_ascii_whitespace();
-        let name = parts.next();
-        parts.next(); // discard type
-        let value = parts.next().ok_or_else(|| format_err!("no value found"))?;
-        match name {
-            Some("dataset_name") => dataset_name = value.to_string(),
-            Some("writes") => stat.write_ios = value.parse().unwrap_or_default(),
-            Some("nwritten") => stat.write_sectors = value.parse::<u64>().unwrap_or_default() / 512,
-            Some("reads") => stat.read_ios = value.parse().unwrap_or_default(),
-            Some("nread") => stat.read_sectors = value.parse::<u64>().unwrap_or_default() / 512,
-            _ => {}
-        }
-    }
-
-    Ok((dataset_name, stat))
-}
-
-fn get_mapping(dataset: &str) -> Option<(String, String)> {
-    ZFS_DATASET_OBJSET_MAP
-        .lock()
-        .unwrap()
-        .get(dataset)
-        .map(|c| c.to_owned())
-}
-
-/// Updates the dataset <-> objset_map
-pub(crate) fn update_zfs_objset_map(pool: &str) -> Result<(), Error> {
-    let mut map = ZFS_DATASET_OBJSET_MAP.lock().unwrap();
-    map.clear();
-    let path = PathBuf::from(format!("{ZFS_KSTAT_BASE_PATH}/{pool}"));
-
-    proxmox_sys::fs::scandir(
-        libc::AT_FDCWD,
-        &path,
-        &OBJSET_REGEX,
-        |_l2_fd, filename, _type| {
-            let (name, _) = parse_objset_stat(pool, filename)?;
-            map.insert(name, (pool.to_string(), filename.to_string()));
-            Ok(())
-        },
-    )?;
-
-    Ok(())
-}
-
-/// Gets io stats for the dataset from /proc/spl/kstat/zfs/POOL/objset-ID
-pub fn zfs_dataset_stats(dataset: &str) -> Result<BlockDevStat, Error> {
-    let mut mapping = get_mapping(dataset);
-    if mapping.is_none() {
-        let pool = get_pool_from_dataset(dataset);
-        update_zfs_objset_map(pool)?;
-        mapping = get_mapping(dataset);
-    }
-    let (pool, objset_id) =
-        mapping.ok_or_else(|| format_err!("could not find objset id for dataset"))?;
-
-    match parse_objset_stat(&pool, &objset_id) {
-        Ok((_, stat)) => Ok(stat),
-        Err(err) => {
-            // on error remove dataset from map, it probably vanished or the
-            // mapping was incorrect
-            ZFS_DATASET_OBJSET_MAP.lock().unwrap().remove(dataset);
-            Err(err)
-        }
-    }
-}
diff --git a/src/tools/mod.rs b/src/tools/mod.rs
index 7f5acc0e3..4a30c1f71 100644
--- a/src/tools/mod.rs
+++ b/src/tools/mod.rs
@@ -14,7 +14,6 @@ use pbs_datastore::backup_info::{BackupDir, BackupInfo};
 use pbs_datastore::manifest::BackupManifest;
 
 pub mod config;
-pub mod disks;
 pub mod fs;
 pub mod statistics;
 pub mod systemd;
-- 
2.47.3
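
The smart.rs code removed above picks a wearout value by scanning a priority-ordered list of ATA attribute names and taking the first candidate that was seen. A minimal standalone sketch of that selection logic (the priority list is shortened here for illustration):

```rust
use std::collections::HashMap;

/// Priority-ordered attribute names, mirroring the WEAROUT_FIELD_ORDER
/// list from the removed smart.rs; truncated for brevity.
const WEAROUT_FIELD_ORDER: &[&str] = &[
    "Media_Wearout_Indicator",
    "SSD_Life_Left",
    "Wear_Leveling_Count",
];

/// Return the first candidate present, following the priority order.
fn select_wearout(candidates: &HashMap<String, f64>) -> Option<f64> {
    WEAROUT_FIELD_ORDER
        .iter()
        .find_map(|field| candidates.get(*field).copied())
}

fn main() {
    let mut candidates = HashMap::new();
    candidates.insert("Wear_Leveling_Count".to_string(), 97.0);
    candidates.insert("SSD_Life_Left".to_string(), 93.0);
    // "SSD_Life_Left" wins: it comes earlier in the priority list.
    assert_eq!(select_wearout(&candidates), Some(93.0));
    println!("wearout: {:?}", select_wearout(&candidates));
}
```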





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH proxmox-backup 15/26] metric collection: use blockdev_stat_for_path from proxmox_disks
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (13 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox-backup 14/26] tools: replace disks module with proxmox-disks Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox-yew-comp 16/26] node status panel: add `children` property Lukas Wagner
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

This part of the gather_disk_stats helper has been moved to
proxmox_disks directly, so it makes sense to use the moved
implementation in PBS as well.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 src/server/metric_collection/mod.rs | 49 ++++++-----------------------
 1 file changed, 10 insertions(+), 39 deletions(-)

diff --git a/src/server/metric_collection/mod.rs b/src/server/metric_collection/mod.rs
index 3fa6e9fbf..bd197a386 100644
--- a/src/server/metric_collection/mod.rs
+++ b/src/server/metric_collection/mod.rs
@@ -7,10 +7,11 @@ use std::{
 };
 
 use anyhow::Error;
+use log::error;
 use tokio::join;
 
 use pbs_api_types::{DataStoreConfig, Operation};
-use proxmox_disks::{zfs_dataset_stats, BlockDevStat, DiskManage};
+use proxmox_disks::{BlockDevStat, DiskManage};
 use proxmox_network_api::{get_network_interfaces, IpLink};
 use proxmox_sys::{
     fs::FileSystemInformation,
@@ -221,7 +222,7 @@ fn collect_host_stats_sync() -> HostStats {
 fn collect_disk_stats_sync() -> (DiskStat, Vec<DiskStat>) {
     let disk_manager = DiskManage::new();
 
-    let root = gather_disk_stats(disk_manager.clone(), Path::new("/"), "host");
+    let root = gather_disk_stats(disk_manager.clone(), Path::new("/"), "root".into());
 
     let mut datastores = Vec::new();
     match pbs_config::datastore::config() {
@@ -245,7 +246,7 @@ fn collect_disk_stats_sync() -> (DiskStat, Vec<DiskStat>) {
                 datastores.push(gather_disk_stats(
                     disk_manager.clone(),
                     Path::new(&config.absolute_path()),
-                    &config.name,
+                    config.name.clone(),
                 ));
             }
         }
@@ -257,7 +258,7 @@ fn collect_disk_stats_sync() -> (DiskStat, Vec<DiskStat>) {
     (root, datastores)
 }
 
-fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, name: &str) -> DiskStat {
+fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, name: String) -> DiskStat {
     let usage = match proxmox_sys::fs::fs_info(path) {
         Ok(status) => Some(status),
         Err(err) => {
@@ -266,40 +267,10 @@ fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, name: &str) ->
         }
     };
 
-    let dev = match disk_manager.find_mounted_device(path) {
-        Ok(None) => None,
-        Ok(Some((fs_type, device, source))) => {
-            let mut device_stat = None;
-            match (fs_type.as_str(), source) {
-                ("zfs", Some(source)) => match source.into_string() {
-                    Ok(dataset) => match zfs_dataset_stats(&dataset) {
-                        Ok(stat) => device_stat = Some(stat),
-                        Err(err) => eprintln!("zfs_dataset_stats({dataset:?}) failed - {err}"),
-                    },
-                    Err(source) => {
-                        eprintln!("zfs_pool_stats({source:?}) failed - invalid characters")
-                    }
-                },
-                _ => {
-                    if let Ok(disk) = disk_manager.clone().disk_by_dev_num(device.into_dev_t()) {
-                        match disk.read_stat() {
-                            Ok(stat) => device_stat = stat,
-                            Err(err) => eprintln!("disk.read_stat {path:?} failed - {err}"),
-                        }
-                    }
-                }
-            }
-            device_stat
-        }
-        Err(err) => {
-            eprintln!("find_mounted_device failed - {err}");
-            None
-        }
-    };
+    let dev = disk_manager
+        .blockdev_stat_for_path(path)
+        .inspect_err(|err| error!("could not read blockdev stats: {err:#}"))
+        .ok();
 
-    DiskStat {
-        name: name.to_string(),
-        usage,
-        dev,
-    }
+    DiskStat { name, usage, dev }
 }
-- 
2.47.3
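
The replacement in this patch collapses the verbose match on the stat result into the `inspect_err(...).ok()` idiom: log the error as a side effect, then turn the `Result` into an `Option`. A self-contained sketch of that pattern (the `read_stat` helper is a hypothetical stand-in for `DiskManage::blockdev_stat_for_path`):

```rust
/// Hypothetical stat reader standing in for blockdev_stat_for_path;
/// the name and signature are illustrative only.
fn read_stat(path: &str) -> Result<u64, String> {
    if path == "/" {
        Ok(42)
    } else {
        Err(format!("no device for {path}"))
    }
}

fn main() {
    // Log the error as a side effect, then collapse Result into Option.
    let dev = read_stat("/missing")
        .inspect_err(|err| eprintln!("could not read blockdev stats: {err}"))
        .ok();
    assert_eq!(dev, None);

    // The success path passes through untouched.
    assert_eq!(read_stat("/").ok(), Some(42));
}
```

`Result::inspect_err` is stable since Rust 1.76, so this keeps the logging without an explicit `match` arm.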






* [PATCH proxmox-yew-comp 16/26] node status panel: add `children` property
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (14 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox-backup 15/26] metric collection: use blockdev_stat_for_path from proxmox_disks Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox-yew-comp 17/26] RRDGrid: fix size observer by attaching node reference to rendered container Lukas Wagner
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

This property allows passing child VNodes that should be rendered on
the same panel, making it possible to inject product-specific UI
elements.

We also implement the ContainerBuilder trait for NodeStatusPanel,
which provides builder methods such as `with_child`,
`with_optional_child`, etc.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 src/node_status_panel.rs | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/src/node_status_panel.rs b/src/node_status_panel.rs
index 357c328..6dca90a 100644
--- a/src/node_status_panel.rs
+++ b/src/node_status_panel.rs
@@ -35,6 +35,10 @@ pub struct NodeStatusPanel {
     #[builder(IntoPropValue, into_prop_value)]
     #[prop_or_default]
     power_management_buttons: bool,
+
+    /// Children that should be rendered on this panel
+    #[prop_or_default]
+    children: Vec<VNode>,
 }
 
 impl NodeStatusPanel {
@@ -49,6 +53,12 @@ impl Default for NodeStatusPanel {
     }
 }
 
+impl ContainerBuilder for NodeStatusPanel {
+    fn as_children_mut(&mut self) -> &mut Vec<VNode> {
+        &mut self.children
+    }
+}
+
 enum Msg {
     Error(Error),
     Loaded(Rc<NodeStatus>),
@@ -203,6 +213,8 @@ impl LoadableComponent for ProxmoxNodeStatusPanel {
     }
 
     fn main_view(&self, ctx: &LoadableComponentContext<Self>) -> Html {
+        let props = ctx.props();
+
         let status = self
             .node_status
             .as_ref()
@@ -222,6 +234,10 @@ impl LoadableComponent for ProxmoxNodeStatusPanel {
             .with_child(node_info(status))
             .with_optional_child(self.error.as_ref().map(|e| error_message(&e.to_string())));
 
+        for c in props.children.clone() {
+            panel = panel.with_child(c);
+        }
+
         if ctx.props().power_management_buttons {
             panel.add_tool(
                 ConfirmButton::new(tr!("Reboot"))
-- 
2.47.3
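
The `ContainerBuilder` impl above is what unlocks the chainable `with_child`/`with_optional_child` calls. A minimal, dependency-free sketch of that builder pattern, using plain `String`s in place of yew `VNode`s (trait and type names here only mimic the real API):

```rust
/// Minimal stand-in for the ContainerBuilder trait: any type exposing
/// its child list gets chainable builder helpers for free.
trait ContainerBuilder: Sized {
    fn as_children_mut(&mut self) -> &mut Vec<String>;

    fn with_child(mut self, child: impl Into<String>) -> Self {
        self.as_children_mut().push(child.into());
        self
    }

    fn with_optional_child(self, child: Option<impl Into<String>>) -> Self {
        match child {
            Some(c) => self.with_child(c),
            None => self,
        }
    }
}

#[derive(Default)]
struct Panel {
    children: Vec<String>,
}

impl ContainerBuilder for Panel {
    fn as_children_mut(&mut self) -> &mut Vec<String> {
        &mut self.children
    }
}

fn main() {
    // A consumer injects product-specific children via the builder chain.
    let panel = Panel::default()
        .with_child("cpu graph")
        .with_optional_child(None::<&str>)
        .with_child("disk graph");
    assert_eq!(panel.children, vec!["cpu graph", "disk graph"]);
}
```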






* [PATCH proxmox-yew-comp 17/26] RRDGrid: fix size observer by attaching node reference to rendered container
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (15 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox-yew-comp 16/26] node status panel: add `children` property Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH proxmox-yew-comp 18/26] RRDGrid: add padding and increase gap between elements Lukas Wagner
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Without it, the cast to web_sys::Element does not return anything and
the DomSizeObserver is never initialized.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 src/rrd_grid.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/rrd_grid.rs b/src/rrd_grid.rs
index 354f1bf..10f1db7 100644
--- a/src/rrd_grid.rs
+++ b/src/rrd_grid.rs
@@ -77,7 +77,7 @@ impl Component for ProxmoxRRDGrid {
                     .children(props.children.clone()),
             )
             .with_child(html! {<div class="pwt-flex-fill"/>})
-            .into()
+            .into_html_with_ref(self.node_ref.clone())
     }
 
     fn rendered(&mut self, ctx: &Context<Self>, first_render: bool) {
-- 
2.47.3






* [PATCH proxmox-yew-comp 18/26] RRDGrid: add padding and increase gap between elements
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (16 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox-yew-comp 17/26] RRDGrid: fix size observer by attaching node reference to rendered container Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH datacenter-manager 19/26] metric collection: clarify naming for remote metric collection Lukas Wagner
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

This seems visually a tiny bit nicer and avoids the overflow scrollbar
covering child elements.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 src/rrd_grid.rs | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/rrd_grid.rs b/src/rrd_grid.rs
index 10f1db7..271401b 100644
--- a/src/rrd_grid.rs
+++ b/src/rrd_grid.rs
@@ -69,7 +69,8 @@ impl Component for ProxmoxRRDGrid {
             .with_child(
                 Container::new()
                     .class(Display::Grid)
-                    .class("pwt-gap-2 pwt-w-100")
+                    .class("pwt-gap-4 pwt-w-100")
+                    .padding(4)
                     .attribute(
                         "style",
                         format!("grid-template-columns:repeat({}, 1fr);", self.cols),
-- 
2.47.3






* [PATCH datacenter-manager 19/26] metric collection: clarify naming for remote metric collection
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (17 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH proxmox-yew-comp 18/26] RRDGrid: add padding and increase gap between elements Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH datacenter-manager 20/26] metric collection: fix minor typo in error message Lukas Wagner
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Primarily to avoid confusion with the new task that will be added for
local metric collection.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 cli/client/src/metric_collection.rs           |  4 ++--
 lib/pdm-api-types/src/metric_collection.rs    |  2 +-
 lib/pdm-client/src/lib.rs                     |  6 +++---
 server/src/api/metric_collection.rs           | 10 +++++-----
 server/src/api/remotes.rs                     |  2 +-
 server/src/api/rrd_common.rs                  |  2 +-
 server/src/metric_collection/mod.rs           | 19 +++++++++++--------
 ...tion_task.rs => remote_collection_task.rs} |  8 ++++----
 server/src/metric_collection/rrd_task.rs      |  2 +-
 server/src/metric_collection/state.rs         |  2 +-
 10 files changed, 30 insertions(+), 27 deletions(-)
 rename server/src/metric_collection/{collection_task.rs => remote_collection_task.rs} (99%)

diff --git a/cli/client/src/metric_collection.rs b/cli/client/src/metric_collection.rs
index e9dbd804..77dcaab5 100644
--- a/cli/client/src/metric_collection.rs
+++ b/cli/client/src/metric_collection.rs
@@ -34,7 +34,7 @@ pub fn cli() -> CommandLineInterface {
 /// all.
 async fn trigger_metric_collection(remote: Option<String>) -> Result<(), Error> {
     client()?
-        .trigger_metric_collection(remote.as_deref())
+        .trigger_remote_metric_collection(remote.as_deref())
         .await?;
     Ok(())
 }
@@ -42,7 +42,7 @@ async fn trigger_metric_collection(remote: Option<String>) -> Result<(), Error>
 #[api]
 /// Show metric collection status.
 async fn metric_collection_status() -> Result<(), Error> {
-    let result = client()?.get_metric_collection_status().await?;
+    let result = client()?.get_remote_metric_collection_status().await?;
 
     let output_format = env().format_args.output_format;
     if output_format == OutputFormat::Text {
diff --git a/lib/pdm-api-types/src/metric_collection.rs b/lib/pdm-api-types/src/metric_collection.rs
index 5279c8a4..cda6ac2a 100644
--- a/lib/pdm-api-types/src/metric_collection.rs
+++ b/lib/pdm-api-types/src/metric_collection.rs
@@ -8,7 +8,7 @@ use proxmox_schema::api;
 #[derive(Clone, Deserialize, Serialize)]
 #[serde(rename_all = "kebab-case")]
 /// Per-remote collection status.
-pub struct MetricCollectionStatus {
+pub struct RemoteMetricCollectionStatus {
     /// The remote's name.
     pub remote: String,
     /// Any error that occured during the last collection attempt.
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index 0fee97a7..7ce9c244 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -347,7 +347,7 @@ impl<T: HttpApiClient> PdmClient<T> {
     }
 
     /// Trigger metric collection for a single remote or for all remotes, if no remote is provided.
-    pub async fn trigger_metric_collection(
+    pub async fn trigger_remote_metric_collection(
         &self,
         remote: Option<&str>,
     ) -> Result<(), proxmox_client::Error> {
@@ -368,9 +368,9 @@ impl<T: HttpApiClient> PdmClient<T> {
     }
 
     /// Get global metric collection status.
-    pub async fn get_metric_collection_status(
+    pub async fn get_remote_metric_collection_status(
         &self,
-    ) -> Result<Vec<pdm_api_types::MetricCollectionStatus>, Error> {
+    ) -> Result<Vec<pdm_api_types::RemoteMetricCollectionStatus>, Error> {
         let path = "/api2/extjs/remotes/metric-collection/status";
         Ok(self.0.get(path).await?.expect_json()?.data)
     }
diff --git a/server/src/api/metric_collection.rs b/server/src/api/metric_collection.rs
index b4c81c68..5a480b36 100644
--- a/server/src/api/metric_collection.rs
+++ b/server/src/api/metric_collection.rs
@@ -4,7 +4,7 @@ use proxmox_router::{Router, SubdirMap};
 use proxmox_schema::api;
 use proxmox_sortable_macro::sortable;
 
-use pdm_api_types::{remotes::REMOTE_ID_SCHEMA, MetricCollectionStatus};
+use pdm_api_types::{remotes::REMOTE_ID_SCHEMA, RemoteMetricCollectionStatus};
 
 use crate::metric_collection;
 
@@ -34,7 +34,7 @@ const SUBDIRS: SubdirMap = &sorted!([
 )]
 /// Trigger metric collection for a provided remote or for all remotes if no remote is passed.
 pub async fn trigger_metric_collection(remote: Option<String>) -> Result<(), Error> {
-    crate::metric_collection::trigger_metric_collection(remote, false).await?;
+    crate::metric_collection::trigger_remote_metric_collection(remote, false).await?;
 
     Ok(())
 }
@@ -44,11 +44,11 @@ pub async fn trigger_metric_collection(remote: Option<String>) -> Result<(), Err
         type: Array,
         description: "A list of metric collection statuses.",
         items: {
-            type: MetricCollectionStatus,
+            type: RemoteMetricCollectionStatus,
         }
     }
 )]
 /// Read metric collection status.
-fn get_metric_collection_status() -> Result<Vec<MetricCollectionStatus>, Error> {
-    metric_collection::get_status()
+fn get_metric_collection_status() -> Result<Vec<RemoteMetricCollectionStatus>, Error> {
+    metric_collection::remote_metric_collection_status()
 }
diff --git a/server/src/api/remotes.rs b/server/src/api/remotes.rs
index 9700611d..678e0ed2 100644
--- a/server/src/api/remotes.rs
+++ b/server/src/api/remotes.rs
@@ -324,7 +324,7 @@ pub async fn add_remote(mut entry: Remote, create_token: Option<String>) -> Resu
 
     pdm_config::remotes::save_config(remotes)?;
 
-    if let Err(e) = metric_collection::trigger_metric_collection(Some(name), false).await {
+    if let Err(e) = metric_collection::trigger_remote_metric_collection(Some(name), false).await {
         log::error!("could not trigger metric collection after adding remote: {e}");
     }
 
diff --git a/server/src/api/rrd_common.rs b/server/src/api/rrd_common.rs
index b5d1a786..8c0fb798 100644
--- a/server/src/api/rrd_common.rs
+++ b/server/src/api/rrd_common.rs
@@ -74,7 +74,7 @@ pub async fn get_rrd_datapoints<T: DataPoint + Send + 'static>(
         // is super slow or if the metric collection tasks currently busy with collecting other
         // metrics, we just return the data we already have, not the newest one.
         let _ = tokio::time::timeout(WAIT_FOR_NEWEST_METRIC_TIMEOUT, async {
-            metric_collection::trigger_metric_collection(Some(remote), true).await
+            metric_collection::trigger_remote_metric_collection(Some(remote), true).await
         })
         .await;
     }
diff --git a/server/src/metric_collection/mod.rs b/server/src/metric_collection/mod.rs
index 0e6860fc..6bc534f8 100644
--- a/server/src/metric_collection/mod.rs
+++ b/server/src/metric_collection/mod.rs
@@ -7,16 +7,16 @@ use nix::sys::stat::Mode;
 use tokio::sync::mpsc::{self, Sender};
 use tokio::sync::oneshot;
 
-use pdm_api_types::MetricCollectionStatus;
+use pdm_api_types::RemoteMetricCollectionStatus;
 use pdm_buildcfg::PDM_STATE_DIR_M;
 
-mod collection_task;
+mod remote_collection_task;
 pub mod rrd_cache;
 mod rrd_task;
 mod state;
 pub mod top_entities;
 
-use collection_task::{ControlMsg, MetricCollectionTask};
+use remote_collection_task::{ControlMsg, RemoteMetricCollectionTask};
 use rrd_cache::RrdCache;
 
 const RRD_CACHE_BASEDIR: &str = concat!(PDM_STATE_DIR_M!(), "/rrdb");
@@ -46,7 +46,7 @@ pub fn start_task() -> Result<(), Error> {
 
     tokio::spawn(async move {
         let metric_collection_task_future = pin!(async move {
-            match MetricCollectionTask::new(metric_data_tx, trigger_collection_rx) {
+            match RemoteMetricCollectionTask::new(metric_data_tx, trigger_collection_rx) {
                 Ok(mut task) => task.run().await,
                 Err(err) => log::error!("could not start metric collection task: {err}"),
             }
@@ -76,7 +76,10 @@ pub fn start_task() -> Result<(), Error> {
 ///
 /// Has no effect if the tx end of the channel has not been initialized yet.
 /// Returns an error if the mpsc channel has been closed already.
-pub async fn trigger_metric_collection(remote: Option<String>, wait: bool) -> Result<(), Error> {
+pub async fn trigger_remote_metric_collection(
+    remote: Option<String>,
+    wait: bool,
+) -> Result<(), Error> {
     let (done_sender, done_receiver) = oneshot::channel();
 
     if let Some(sender) = CONTROL_MESSAGE_TX.get() {
@@ -93,15 +96,15 @@ pub async fn trigger_metric_collection(remote: Option<String>, wait: bool) -> Re
 }
 
 /// Get each remote's metric collection status.
-pub fn get_status() -> Result<Vec<MetricCollectionStatus>, Error> {
+pub fn remote_metric_collection_status() -> Result<Vec<RemoteMetricCollectionStatus>, Error> {
     let (remotes, _) = pdm_config::remotes::config()?;
-    let state = collection_task::load_state()?;
+    let state = remote_collection_task::load_state()?;
 
     let mut result = Vec::new();
 
     for (remote, _) in remotes.into_iter() {
         if let Some(status) = state.get_status(&remote) {
-            result.push(MetricCollectionStatus {
+            result.push(RemoteMetricCollectionStatus {
                 remote,
                 error: status.error.clone(),
                 last_collection: status.last_collection,
diff --git a/server/src/metric_collection/collection_task.rs b/server/src/metric_collection/remote_collection_task.rs
similarity index 99%
rename from server/src/metric_collection/collection_task.rs
rename to server/src/metric_collection/remote_collection_task.rs
index cc1a460e..eca0e11d 100644
--- a/server/src/metric_collection/collection_task.rs
+++ b/server/src/metric_collection/remote_collection_task.rs
@@ -46,13 +46,13 @@ pub(super) enum ControlMsg {
 
 /// Task which periodically collects metrics from all remotes and stores
 /// them in the local metrics database.
-pub(super) struct MetricCollectionTask {
+pub(super) struct RemoteMetricCollectionTask {
     state: MetricCollectionState,
     metric_data_tx: Sender<RrdStoreRequest>,
     control_message_rx: Receiver<ControlMsg>,
 }
 
-impl MetricCollectionTask {
+impl RemoteMetricCollectionTask {
     /// Create a new metric collection task.
     pub(super) fn new(
         metric_data_tx: Sender<RrdStoreRequest>,
@@ -574,7 +574,7 @@ pub(super) mod tests {
 
         let (_control_tx, control_rx) = tokio::sync::mpsc::channel(10);
 
-        let mut task = MetricCollectionTask {
+        let mut task = RemoteMetricCollectionTask {
             state,
             metric_data_tx: tx,
             control_message_rx: control_rx,
@@ -644,7 +644,7 @@ pub(super) mod tests {
 
         let (_control_tx, control_rx) = tokio::sync::mpsc::channel(10);
 
-        let mut task = MetricCollectionTask {
+        let mut task = RemoteMetricCollectionTask {
             state,
             metric_data_tx: tx,
             control_message_rx: control_rx,
diff --git a/server/src/metric_collection/rrd_task.rs b/server/src/metric_collection/rrd_task.rs
index 48b6de9e..29137858 100644
--- a/server/src/metric_collection/rrd_task.rs
+++ b/server/src/metric_collection/rrd_task.rs
@@ -200,7 +200,7 @@ mod tests {
     use pve_api_types::{ClusterMetrics, ClusterMetricsData};
 
     use crate::{
-        metric_collection::collection_task::tests::get_create_options,
+        metric_collection::remote_collection_task::tests::get_create_options,
         test_support::temp::NamedTempDir,
     };
 
diff --git a/server/src/metric_collection/state.rs b/server/src/metric_collection/state.rs
index 7f68843e..fd313c48 100644
--- a/server/src/metric_collection/state.rs
+++ b/server/src/metric_collection/state.rs
@@ -92,7 +92,7 @@ impl MetricCollectionState {
 
 #[cfg(test)]
 mod tests {
-    use crate::metric_collection::collection_task::tests::get_create_options;
+    use crate::metric_collection::remote_collection_task::tests::get_create_options;
     use crate::test_support::temp::NamedTempFile;
 
     use super::*;
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH datacenter-manager 20/26] metric collection: fix minor typo in error message
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (18 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH datacenter-manager 19/26] metric collection: clarify naming for remote metric collection Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH datacenter-manager 21/26] metric collection: collect PDM host metrics in a new collection task Lukas Wagner
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 server/src/metric_collection/mod.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/server/src/metric_collection/mod.rs b/server/src/metric_collection/mod.rs
index 6bc534f8..3cd58148 100644
--- a/server/src/metric_collection/mod.rs
+++ b/server/src/metric_collection/mod.rs
@@ -41,7 +41,7 @@ pub fn start_task() -> Result<(), Error> {
 
     let (trigger_collection_tx, trigger_collection_rx) = mpsc::channel(128);
     if CONTROL_MESSAGE_TX.set(trigger_collection_tx).is_err() {
-        bail!("control message sender alread set");
+        bail!("control message sender already set");
     }
 
     tokio::spawn(async move {
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH datacenter-manager 21/26] metric collection: collect PDM host metrics in a new collection task
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (19 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH datacenter-manager 20/26] metric collection: fix minor typo in error message Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH datacenter-manager 22/26] api: fix /nodes/localhost/rrddata endpoint Lukas Wagner
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

The whole architecture is quite similar to the remote metric
collection. We introduce a task that fetches host metrics and sends them
via a channel to the RRD task, which is responsible for persisting them
in the RRD database.
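The channel-based hand-off described above can be sketched with a plain std producer/consumer pair. This is a simplified, synchronous illustration only: the actual implementation uses tokio channels, and `StoreRequest`/`HostMetrics` here are illustrative stand-ins for `RrdStoreRequest`/`PdmHostMetrics`.

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-ins for the real request and metrics types.
struct HostMetrics {
    cpu: f64,
}

enum StoreRequest {
    Host { timestamp: i64, metrics: Box<HostMetrics> },
}

// Spawn a "collection task" that sends one metrics sample, then receive it
// on the "RRD task" side and return the timestamp it carried.
fn roundtrip() -> i64 {
    let (tx, rx) = mpsc::channel::<StoreRequest>();

    let collector = thread::spawn(move || {
        tx.send(StoreRequest::Host {
            timestamp: 1_700_000_000,
            metrics: Box::new(HostMetrics { cpu: 0.25 }),
        })
        .unwrap();
    });

    // Single-variant enum, so this `let` pattern is irrefutable.
    let StoreRequest::Host { timestamp, metrics } = rx.recv().unwrap();
    assert!(metrics.cpu >= 0.0);

    collector.join().unwrap();
    timestamp
}

fn main() {
    println!("{}", roundtrip());
}
```

The metrics are boxed for the same reason as in the patch: keeping large payloads behind a pointer avoids bloating the size of every enum variant.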

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 Cargo.toml                                    |   2 +
 debian/control                                |   1 +
 server/Cargo.toml                             |   2 +
 .../local_collection_task.rs                  | 199 ++++++++++++++++++
 server/src/metric_collection/mod.rs           |  21 +-
 server/src/metric_collection/rrd_task.rs      | 185 ++++++++++++++++
 6 files changed, 405 insertions(+), 5 deletions(-)
 create mode 100644 server/src/metric_collection/local_collection_task.rs

diff --git a/Cargo.toml b/Cargo.toml
index 1adb8a0a..91741ea1 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -39,6 +39,7 @@ proxmox-auth-api = "1.0.5"
 proxmox-base64 = "1"
 proxmox-client = "1"
 proxmox-daemon = "1"
+proxmox-disks = "0.1"
 proxmox-docgen = "1"
 proxmox-http = { version = "1.0.4", features = [ "client", "http-helpers", "websocket" ] } # see below
 proxmox-human-byte = "1"
@@ -47,6 +48,7 @@ proxmox-ldap = { version = "1.1", features = ["sync"] }
 proxmox-lang = "1.1"
 proxmox-log = "1"
 proxmox-login = "1.0.2"
+proxmox-procfs = "0.1"
 proxmox-rest-server = "1"
 # some use "cli", some use "cli" and "server", pbs-config uses nothing
 proxmox-router = { version = "3.0.0", default-features = false }
diff --git a/debian/control b/debian/control
index 4ddc9efc..c61e8795 100644
--- a/debian/control
+++ b/debian/control
@@ -52,6 +52,7 @@ Build-Depends: debhelper-compat (= 13),
                librust-proxmox-config-digest-1+default-dev,
                librust-proxmox-config-digest-1+openssl-dev,
                librust-proxmox-daemon-1+default-dev,
+               librust-proxmox-disks-0.1+default-dev,
                librust-proxmox-dns-api-1+default-dev,
                librust-proxmox-dns-api-1+impl-dev,
                librust-proxmox-docgen-1+default-dev,
diff --git a/server/Cargo.toml b/server/Cargo.toml
index 6969549f..65170864 100644
--- a/server/Cargo.toml
+++ b/server/Cargo.toml
@@ -40,6 +40,7 @@ proxmox-async.workspace = true
 proxmox-auth-api = { workspace = true, features = [ "api", "ticket", "pam-authenticator", "password-authenticator" ] }
 proxmox-base64.workspace = true
 proxmox-daemon.workspace = true
+proxmox-disks.workspace = true
 proxmox-docgen.workspace = true
 proxmox-http = { workspace = true, features = [ "client-trait", "proxmox-async" ] } # pbs-client doesn't use these
 proxmox-lang.workspace = true
@@ -47,6 +48,7 @@ proxmox-ldap.workspace = true
 proxmox-log.workspace = true
 proxmox-login.workspace = true
 proxmox-openid.workspace = true
+proxmox-procfs.workspace = true
 proxmox-rest-server = { workspace = true, features = [ "templates" ] }
 proxmox-router = { workspace = true, features = [ "cli", "server"] }
 proxmox-rrd.workspace = true
diff --git a/server/src/metric_collection/local_collection_task.rs b/server/src/metric_collection/local_collection_task.rs
new file mode 100644
index 00000000..a70b3d96
--- /dev/null
+++ b/server/src/metric_collection/local_collection_task.rs
@@ -0,0 +1,199 @@
+use std::sync::Mutex;
+use std::time::Instant;
+use std::{collections::HashMap, time::Duration};
+
+use anyhow::{Context, Error};
+use tokio::{sync::mpsc::Sender, time::MissedTickBehavior};
+
+use proxmox_disks::DiskManage;
+use proxmox_log::{debug, error};
+use proxmox_network_api::IpLink;
+use proxmox_procfs::pressure::{PressureData, Resource};
+use proxmox_sys::fs;
+use proxmox_sys::linux::procfs;
+
+use super::rrd_task::RrdStoreRequest;
+
+const HOST_METRIC_COLLECTION_INTERVAL: Duration = Duration::from_secs(10);
+
+/// Task which periodically collects metrics from the PDM host and stores
+/// them in the local metrics database.
+pub(super) struct LocalMetricCollectionTask {
+    metric_data_tx: Sender<RrdStoreRequest>,
+}
+
+impl LocalMetricCollectionTask {
+    /// Create a new metric collection task.
+    pub(super) fn new(metric_data_tx: Sender<RrdStoreRequest>) -> Self {
+        Self { metric_data_tx }
+    }
+
+    /// Run the metric collection task.
+    ///
+    /// This function never returns.
+    pub(super) async fn run(&mut self) {
+        let mut timer = tokio::time::interval(HOST_METRIC_COLLECTION_INTERVAL);
+        timer.set_missed_tick_behavior(MissedTickBehavior::Skip);
+
+        loop {
+            timer.tick().await;
+            self.handle_tick().await;
+        }
+    }
+
+    /// Handle a timer tick.
+    async fn handle_tick(&mut self) {
+        let stats = match tokio::task::spawn_blocking(collect_host_metrics).await {
+            Ok(stats) => stats,
+            Err(err) => {
+                error!("join error while collecting host stats: {err}");
+                return;
+            }
+        };
+
+        let _ = self
+            .metric_data_tx
+            .send(RrdStoreRequest::Host {
+                timestamp: proxmox_time::epoch_i64(),
+                metrics: Box::new(stats),
+            })
+            .await;
+    }
+}
+
+/// Container type for various metrics of a PDM host.
+pub(super) struct PdmHostMetrics {
+    /// CPU statistics from `/proc/stat`.
+    pub proc: Option<procfs::ProcFsStat>,
+    /// Memory statistics from `/proc/meminfo`.
+    pub meminfo: Option<procfs::ProcFsMemInfo>,
+    /// System load stats from `/proc/loadavg`.
+    pub load: Option<procfs::Loadavg>,
+    /// Aggregated network device traffic for all physical NICs.
+    pub netstats: Option<NetDevStats>,
+    /// Block device stats for the root disk.
+    pub root_blockdev_stat: Option<proxmox_disks::BlockDevStat>,
+    /// File system usage for the root disk.
+    pub root_filesystem_info: Option<fs::FileSystemInformation>,
+    /// CPU pressure stall information for the host.
+    pub cpu_pressure: Option<PressureData>,
+    /// Memory pressure stall information for the host.
+    pub memory_pressure: Option<PressureData>,
+    /// IO pressure stall information for the host.
+    pub io_pressure: Option<PressureData>,
+}
+
+/// Aggregated network device traffic for all physical NICs.
+pub(super) struct NetDevStats {
+    /// Aggregate inbound traffic over all physical NICs in bytes.
+    pub netin: u64,
+    /// Aggregate outbound traffic over all physical NICs in bytes.
+    pub netout: u64,
+}
+
+fn collect_host_metrics() -> PdmHostMetrics {
+    let proc = procfs::read_proc_stat()
+        .inspect_err(|err| error!("failed to read '/proc/stat': {err:#}"))
+        .ok();
+
+    let meminfo = procfs::read_meminfo()
+        .inspect_err(|err| error!("failed to read '/proc/meminfo': {err:#}"))
+        .ok();
+
+    let cpu_pressure = PressureData::read_system(Resource::Cpu)
+        .inspect_err(|err| error!("failed to read CPU pressure stall information: {err:#}"))
+        .ok();
+
+    let memory_pressure = PressureData::read_system(Resource::Memory)
+        .inspect_err(|err| error!("failed to read memory pressure stall information: {err:#}"))
+        .ok();
+
+    let io_pressure = PressureData::read_system(Resource::Io)
+        .inspect_err(|err| error!("failed to read IO pressure stall information: {err:#}"))
+        .ok();
+
+    let load = procfs::read_loadavg()
+        .inspect_err(|err| error!("failed to read '/proc/loadavg': {err:#}"))
+        .ok();
+
+    let root_blockdev_stat = DiskManage::new()
+        .blockdev_stat_for_path("/")
+        .inspect_err(|err| error!("failed to collect blockdev statistics for '/': {err:#}"))
+        .ok();
+
+    let root_filesystem_info = proxmox_sys::fs::fs_info("/")
+        .inspect_err(|err| {
+            error!("failed to query filesystem usage for '/': {err:#}");
+        })
+        .ok();
+
+    let netstats = collect_netdev_metrics()
+        .inspect_err(|err| {
+            error!("failed to collect network device statistics: {err:#}");
+        })
+        .ok();
+
+    PdmHostMetrics {
+        proc,
+        meminfo,
+        load,
+        netstats,
+        root_blockdev_stat,
+        root_filesystem_info,
+        cpu_pressure,
+        memory_pressure,
+        io_pressure,
+    }
+}
+
+struct NetdevCacheEntry {
+    interfaces: HashMap<String, IpLink>,
+    timestamp: Instant,
+}
+
+const NETWORK_INTERFACE_CACHE_MAX_AGE: Duration = Duration::from_secs(300);
+static NETWORK_INTERFACE_CACHE: Mutex<Option<NetdevCacheEntry>> = Mutex::new(None);
+
+fn collect_netdev_metrics() -> Result<NetDevStats, Error> {
+    let net_devs = procfs::read_proc_net_dev()?;
+
+    let mut cache = NETWORK_INTERFACE_CACHE.lock().unwrap();
+
+    let now = Instant::now();
+
+    let needs_refresh = match cache.as_ref() {
+        Some(entry) => now.duration_since(entry.timestamp) > NETWORK_INTERFACE_CACHE_MAX_AGE,
+        None => true,
+    };
+
+    if needs_refresh {
+        cache.replace({
+            debug!("updating cached network devices");
+
+            let interfaces = proxmox_network_api::get_network_interfaces()
+                .context("failed to enumerate network devices")?;
+
+            NetdevCacheEntry {
+                interfaces,
+                timestamp: now,
+            }
+        });
+    }
+
+    // unwrap: at this point we *know* that the Option is Some
+    let ip_links = cache.as_ref().unwrap();
+
+    let mut netin = 0;
+    let mut netout = 0;
+
+    for net_dev in net_devs {
+        if let Some(ip_link) = ip_links.interfaces.get(&net_dev.device) {
+            if ip_link.is_physical() {
+                netin += net_dev.receive;
+                netout += net_dev.send;
+            }
+        }
+    }
+
+    Ok(NetDevStats { netin, netout })
+}
diff --git a/server/src/metric_collection/mod.rs b/server/src/metric_collection/mod.rs
index 3cd58148..8a945fab 100644
--- a/server/src/metric_collection/mod.rs
+++ b/server/src/metric_collection/mod.rs
@@ -10,6 +10,7 @@ use tokio::sync::oneshot;
 use pdm_api_types::RemoteMetricCollectionStatus;
 use pdm_buildcfg::PDM_STATE_DIR_M;
 
+mod local_collection_task;
 mod remote_collection_task;
 pub mod rrd_cache;
 mod rrd_task;
@@ -19,6 +20,8 @@ pub mod top_entities;
 use remote_collection_task::{ControlMsg, RemoteMetricCollectionTask};
 use rrd_cache::RrdCache;
 
+use crate::metric_collection::local_collection_task::LocalMetricCollectionTask;
+
 const RRD_CACHE_BASEDIR: &str = concat!(PDM_STATE_DIR_M!(), "/rrdb");
 
 static CONTROL_MESSAGE_TX: OnceLock<Sender<ControlMsg>> = OnceLock::new();
@@ -39,14 +42,22 @@ pub fn init() -> Result<(), Error> {
 pub fn start_task() -> Result<(), Error> {
     let (metric_data_tx, metric_data_rx) = mpsc::channel(128);
 
+    let cache = rrd_cache::get_cache();
+    tokio::spawn(async move {
+        let rrd_task_future = pin!(rrd_task::store_in_rrd_task(cache, metric_data_rx));
+        let abort_future = pin!(proxmox_daemon::shutdown_future());
+        futures::future::select(rrd_task_future, abort_future).await;
+    });
+
     let (trigger_collection_tx, trigger_collection_rx) = mpsc::channel(128);
     if CONTROL_MESSAGE_TX.set(trigger_collection_tx).is_err() {
         bail!("control message sender already set");
     }
 
+    let metric_data_tx_clone = metric_data_tx.clone();
     tokio::spawn(async move {
         let metric_collection_task_future = pin!(async move {
-            match RemoteMetricCollectionTask::new(metric_data_tx, trigger_collection_rx) {
+            match RemoteMetricCollectionTask::new(metric_data_tx_clone, trigger_collection_rx) {
                 Ok(mut task) => task.run().await,
                 Err(err) => log::error!("could not start metric collection task: {err}"),
             }
@@ -56,12 +67,12 @@ pub fn start_task() -> Result<(), Error> {
         futures::future::select(metric_collection_task_future, abort_future).await;
     });
 
-    let cache = rrd_cache::get_cache();
-
     tokio::spawn(async move {
-        let rrd_task_future = pin!(rrd_task::store_in_rrd_task(cache, metric_data_rx));
+        let metric_collection_task_future =
+            pin!(async move { LocalMetricCollectionTask::new(metric_data_tx).run().await });
+
         let abort_future = pin!(proxmox_daemon::shutdown_future());
-        futures::future::select(rrd_task_future, abort_future).await;
+        futures::future::select(metric_collection_task_future, abort_future).await;
     });
 
     Ok(())
diff --git a/server/src/metric_collection/rrd_task.rs b/server/src/metric_collection/rrd_task.rs
index 29137858..4cf18679 100644
--- a/server/src/metric_collection/rrd_task.rs
+++ b/server/src/metric_collection/rrd_task.rs
@@ -8,6 +8,7 @@ use proxmox_rrd::rrd::DataSourceType;
 use pbs_api_types::{MetricDataPoint, MetricDataType, Metrics};
 use pve_api_types::{ClusterMetrics, ClusterMetricsData, ClusterMetricsDataType};
 
+use super::local_collection_task::PdmHostMetrics;
 use super::rrd_cache::RrdCache;
 
 /// Store request for the RRD task.
@@ -45,6 +46,16 @@ pub(super) enum RrdStoreRequest {
         /// Statistics.
         stats: CollectionStats,
     },
+    /// Store PDM host metrics.
+    Host {
+        /// Timestamp at which the metrics were collected (UNIX epoch).
+        timestamp: i64,
+
+        /// Metric data for this PDM host.
+        // Boxed to avoid a clippy warning regarding large size differences between
+        // enum variants.
+        metrics: Box<PdmHostMetrics>,
+    },
 }
 
 /// Result for a [`RrdStoreRequest`].
@@ -117,6 +128,9 @@ pub(super) async fn store_in_rrd_task(
                 RrdStoreRequest::CollectionStats { timestamp, stats } => {
                     store_stats(&cache_clone, &stats, timestamp)
                 }
+                RrdStoreRequest::Host { timestamp, metrics } => {
+                    store_pdm_host_metrics(&cache_clone, timestamp, &metrics)
+                }
             };
         })
         .await;
@@ -194,6 +208,177 @@ fn store_stats(cache: &RrdCache, stats: &CollectionStats, timestamp: i64) {
     );
 }
 
+fn store_pdm_host_metrics(cache: &RrdCache, timestamp: i64, metrics: &PdmHostMetrics) {
+    if let Some(proc) = &metrics.proc {
+        cache.update_value(
+            "nodes/localhost/cpu-current",
+            proc.cpu,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+        cache.update_value(
+            "nodes/localhost/cpu-iowait",
+            proc.iowait_percent,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+    }
+
+    if let Some(load) = &metrics.load {
+        cache.update_value(
+            "nodes/localhost/cpu-avg1",
+            load.0,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+        cache.update_value(
+            "nodes/localhost/cpu-avg5",
+            load.1,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+        cache.update_value(
+            "nodes/localhost/cpu-avg15",
+            load.2,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+    }
+
+    if let Some(cpu_pressure) = &metrics.cpu_pressure {
+        cache.update_value(
+            "nodes/localhost/cpu-pressure-some-avg10",
+            cpu_pressure.some.average_10,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+
+        // NOTE: On a system level, 'full' CPU pressure is undefined and reported as 0,
+        // so it does not make sense to store it.
+        // https://docs.kernel.org/accounting/psi.html#pressure-interface
+    }
+
+    if let Some(meminfo) = &metrics.meminfo {
+        cache.update_value(
+            "nodes/localhost/mem-total",
+            meminfo.memtotal as f64,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+        cache.update_value(
+            "nodes/localhost/mem-used",
+            meminfo.memused as f64,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+        cache.update_value(
+            "nodes/localhost/swap-total",
+            meminfo.swaptotal as f64,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+        cache.update_value(
+            "nodes/localhost/swap-used",
+            meminfo.swapused as f64,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+    }
+
+    if let Some(memory_pressure) = &metrics.memory_pressure {
+        cache.update_value(
+            "nodes/localhost/mem-pressure-some-avg10",
+            memory_pressure.some.average_10,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+        cache.update_value(
+            "nodes/localhost/mem-pressure-full-avg10",
+            memory_pressure.full.average_10,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+    }
+
+    if let Some(netstats) = &metrics.netstats {
+        cache.update_value(
+            "nodes/localhost/net-in",
+            netstats.netin as f64,
+            timestamp,
+            DataSourceType::Derive,
+        );
+        cache.update_value(
+            "nodes/localhost/net-out",
+            netstats.netout as f64,
+            timestamp,
+            DataSourceType::Derive,
+        );
+    }
+
+    if let Some(disk) = &metrics.root_filesystem_info {
+        cache.update_value(
+            "nodes/localhost/disk-total",
+            disk.total as f64,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+        cache.update_value(
+            "nodes/localhost/disk-used",
+            disk.used as f64,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+    }
+
+    if let Some(stat) = &metrics.root_blockdev_stat {
+        cache.update_value(
+            "nodes/localhost/disk-read-iops",
+            stat.read_ios as f64,
+            timestamp,
+            DataSourceType::Derive,
+        );
+        cache.update_value(
+            "nodes/localhost/disk-write-iops",
+            stat.write_ios as f64,
+            timestamp,
+            DataSourceType::Derive,
+        );
+        cache.update_value(
+            "nodes/localhost/disk-read",
+            (stat.read_sectors * 512) as f64,
+            timestamp,
+            DataSourceType::Derive,
+        );
+        cache.update_value(
+            "nodes/localhost/disk-write",
+            (stat.write_sectors * 512) as f64,
+            timestamp,
+            DataSourceType::Derive,
+        );
+        cache.update_value(
+            "nodes/localhost/disk-io-ticks",
+            (stat.io_ticks as f64) / 1000.0,
+            timestamp,
+            DataSourceType::Derive,
+        );
+    }
+
+    if let Some(io_pressure) = &metrics.io_pressure {
+        cache.update_value(
+            "nodes/localhost/io-pressure-some-avg10",
+            io_pressure.some.average_10,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+        cache.update_value(
+            "nodes/localhost/io-pressure-full-avg10",
+            io_pressure.full.average_10,
+            timestamp,
+            DataSourceType::Gauge,
+        );
+    }
+}
+
 #[cfg(test)]
 mod tests {
     use proxmox_rrd_api_types::{RrdMode, RrdTimeframe};
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH datacenter-manager 22/26] api: fix /nodes/localhost/rrddata endpoint
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (20 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH datacenter-manager 21/26] metric collection: collect PDM host metrics in a new collection task Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH datacenter-manager 23/26] pdm: node rrd data: rename 'total-time' to 'metric-collection-total-time' Lukas Wagner
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

We didn't use this existing endpoint so far, which is why these mistakes
went undiscovered. First, there was a typo in the API handler path, and
second, the `node` parameter was missing from the handler itself.
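The second fix boils down to validating the new path parameter. A minimal sketch of that check, assuming a plain `Result` in place of proxmox_router's `http_bail!` (which returns an HTTP 400 in the real handler):

```rust
// Simplified stand-in for the handler's parameter validation; the real code
// rejects non-localhost nodes via http_bail!(BAD_REQUEST, ...).
fn validate_node(node: &str) -> Result<(), String> {
    if node != "localhost" {
        return Err(format!(
            "PDM only supports `localhost` as a `node` parameter, got `{node}`"
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_node("localhost").is_ok());
    assert!(validate_node("pve1").is_err());
}
```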

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 lib/pdm-client/src/lib.rs       |  2 +-
 server/src/api/nodes/mod.rs     |  2 +-
 server/src/api/nodes/rrddata.rs | 18 ++++++++++++++++--
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index 7ce9c244..2d98146b 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -380,7 +380,7 @@ impl<T: HttpApiClient> PdmClient<T> {
         &self,
         mode: RrdMode,
         timeframe: RrdTimeframe,
-    ) -> Result<pdm_api_types::rrddata::PdmNodeDatapoint, Error> {
+    ) -> Result<Vec<pdm_api_types::rrddata::PdmNodeDatapoint>, Error> {
         let path = ApiPathBuilder::new("/api2/extjs/nodes/localhost/rrddata")
             .arg("cf", mode)
             .arg("timeframe", timeframe)
diff --git a/server/src/api/nodes/mod.rs b/server/src/api/nodes/mod.rs
index bd1396bc..7903d63a 100644
--- a/server/src/api/nodes/mod.rs
+++ b/server/src/api/nodes/mod.rs
@@ -48,7 +48,7 @@ pub const SUBDIRS: SubdirMap = &sorted!([
     ("journal", &journal::ROUTER),
     ("network", &network::ROUTER),
     ("report", &report::ROUTER),
-    ("rrdata", &rrddata::ROUTER),
+    ("rrddata", &rrddata::ROUTER),
     ("sdn", &sdn::ROUTER),
     ("subscription", &subscription::ROUTER),
     ("status", &status::ROUTER),
diff --git a/server/src/api/nodes/rrddata.rs b/server/src/api/nodes/rrddata.rs
index 75900965..4c2302c8 100644
--- a/server/src/api/nodes/rrddata.rs
+++ b/server/src/api/nodes/rrddata.rs
@@ -1,10 +1,11 @@
 use anyhow::Error;
 use proxmox_rrd_api_types::{RrdMode, RrdTimeframe};
 
-use proxmox_router::Router;
+use proxmox_router::{http_bail, Router};
 use proxmox_schema::api;
 
 use pdm_api_types::rrddata::PdmNodeDatapoint;
+use pdm_api_types::NODE_SCHEMA;
 
 use crate::api::rrd_common::{self, DataPoint};
 
@@ -36,6 +37,9 @@ impl DataPoint for PdmNodeDatapoint {
             cf: {
                 type: RrdMode,
             },
+            node: {
+                schema: NODE_SCHEMA,
+            },
         },
     },
     returns: {
@@ -47,7 +51,17 @@ impl DataPoint for PdmNodeDatapoint {
     }
 )]
 /// Read RRD data for this PDM node.
-fn get_node_rrddata(timeframe: RrdTimeframe, cf: RrdMode) -> Result<Vec<PdmNodeDatapoint>, Error> {
+fn get_node_rrddata(
+    node: String,
+    timeframe: RrdTimeframe,
+    cf: RrdMode,
+) -> Result<Vec<PdmNodeDatapoint>, Error> {
+    if node != "localhost" {
+        http_bail!(
+            BAD_REQUEST,
+            "PDM only supports `localhost` as a `node` parameter"
+        );
+    }
     let base = "nodes/localhost";
     rrd_common::create_datapoints_from_rrd(base, timeframe, cf)
 }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH datacenter-manager 23/26] pdm: node rrd data: rename 'total-time' to 'metric-collection-total-time'
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (21 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH datacenter-manager 22/26] api: fix /nodes/localhost/rrddata endpoint Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH datacenter-manager 24/26] pdm-api-types: add PDM host metric fields Lukas Wagner
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

In the initial version of the remote metric collection series, there was
a separate API endpoint for metric collection RRD data, hence the short
name. Unfortunately, we forgot to rename the field when the metric
collection stats were moved to the PDM host stats.

Neither the client tool nor the UI used this field yet, and the PDM API
is not stabilized yet either, so it should be fine to simply rename the
field.
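For context, the on-the-wire name follows from serde's `#[serde(rename_all = "kebab-case")]` on the struct, so the Rust field `metric_collection_total_time` serializes as `metric-collection-total-time` (the string matched in `set_field`). A tiny sketch of that naming convention, with `to_kebab_case` as an illustrative helper, not part of the patch:

```rust
// serde's kebab-case convention maps snake_case Rust field names to
// kebab-case JSON keys by replacing underscores with hyphens.
fn to_kebab_case(field: &str) -> String {
    field.replace('_', "-")
}

fn main() {
    assert_eq!(
        to_kebab_case("metric_collection_total_time"),
        "metric-collection-total-time"
    );
}
```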

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 lib/pdm-api-types/src/rrddata.rs | 2 +-
 server/src/api/nodes/rrddata.rs  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/pdm-api-types/src/rrddata.rs b/lib/pdm-api-types/src/rrddata.rs
index 70619233..6eaaff3c 100644
--- a/lib/pdm-api-types/src/rrddata.rs
+++ b/lib/pdm-api-types/src/rrddata.rs
@@ -242,7 +242,7 @@ pub struct PdmNodeDatapoint {
 
     /// Total time in milliseconds needed for full metric collection run.
     #[serde(skip_serializing_if = "Option::is_none")]
-    pub total_time: Option<f64>,
+    pub metric_collection_total_time: Option<f64>,
 }
 
 #[api]
diff --git a/server/src/api/nodes/rrddata.rs b/server/src/api/nodes/rrddata.rs
index 4c2302c8..00c4eee0 100644
--- a/server/src/api/nodes/rrddata.rs
+++ b/server/src/api/nodes/rrddata.rs
@@ -23,7 +23,7 @@ impl DataPoint for PdmNodeDatapoint {
 
     fn set_field(&mut self, name: &str, value: f64) {
         if name == "metric-collection-total-time" {
-            self.total_time = Some(value);
+            self.metric_collection_total_time = Some(value);
         }
     }
 }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH datacenter-manager 24/26] pdm-api-types: add PDM host metric fields
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (22 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH datacenter-manager 23/26] pdm: node rrd data: rename 'total-time' to 'metric-collection-total-time' Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH datacenter-manager 25/26] ui: node status: add RRD graphs for PDM host metrics Lukas Wagner
                   ` (2 subsequent siblings)
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 lib/pdm-api-types/src/rrddata.rs | 72 +++++++++++++++++++++++++++++++-
 server/src/api/nodes/rrddata.rs  | 55 ++++++++++++++++++++++--
 2 files changed, 122 insertions(+), 5 deletions(-)

diff --git a/lib/pdm-api-types/src/rrddata.rs b/lib/pdm-api-types/src/rrddata.rs
index 6eaaff3c..452597a8 100644
--- a/lib/pdm-api-types/src/rrddata.rs
+++ b/lib/pdm-api-types/src/rrddata.rs
@@ -233,13 +233,81 @@ pub struct PbsDatastoreDataPoint {
 }
 
 #[api]
-#[derive(Serialize, Deserialize, Default)]
+#[derive(Serialize, Deserialize, Default, Debug)]
 #[serde(rename_all = "kebab-case")]
 /// RRD datapoint for statistics about the metric collection loop.
 pub struct PdmNodeDatapoint {
     /// Timestamp (UNIX epoch)
     pub time: u64,
-
+    /// Current CPU utilization
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub cpu_current: Option<f64>,
+    /// Current IO wait
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub cpu_iowait: Option<f64>,
+    /// CPU utilization, averaged over the last minute
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub cpu_avg1: Option<f64>,
+    /// CPU utilization, averaged over the last five minutes
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub cpu_avg5: Option<f64>,
+    /// CPU utilization, averaged over the last fifteen minutes
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub cpu_avg15: Option<f64>,
+    /// Total root disk size
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub disk_total: Option<f64>,
+    /// Total root disk usage
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub disk_used: Option<f64>,
+    /// Root disk read IOPS
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub disk_read_iops: Option<f64>,
+    /// Root disk write IOPS
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub disk_write_iops: Option<f64>,
+    /// Root disk read rate
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub disk_read: Option<f64>,
+    /// Root disk write rate
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub disk_write: Option<f64>,
+    /// Root disk IO ticks
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub disk_io_ticks: Option<f64>,
+    /// Total memory size
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub mem_total: Option<f64>,
+    /// Currently used memory
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub mem_used: Option<f64>,
+    /// Total swap size
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub swap_total: Option<f64>,
+    /// Current swap usage
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub swap_used: Option<f64>,
+    /// Inbound network data rate
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub net_in: Option<f64>,
+    /// Outbound network data rate
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub net_out: Option<f64>,
+    /// Average 'some' CPU pressure over the last 10 seconds.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub cpu_pressure_some_avg10: Option<f64>,
+    /// Average 'some' memory pressure over the last 10 seconds.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub mem_pressure_some_avg10: Option<f64>,
+    /// Average 'full' memory pressure over the last 10 seconds.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub mem_pressure_full_avg10: Option<f64>,
+    /// Average 'some' IO pressure over the last 10 seconds.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub io_pressure_some_avg10: Option<f64>,
+    /// Average 'full' IO pressure over the last 10 seconds.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub io_pressure_full_avg10: Option<f64>,
     /// Total time in milliseconds needed for full metric collection run.
     #[serde(skip_serializing_if = "Option::is_none")]
     pub metric_collection_total_time: Option<f64>,
diff --git a/server/src/api/nodes/rrddata.rs b/server/src/api/nodes/rrddata.rs
index 00c4eee0..8ba11a5f 100644
--- a/server/src/api/nodes/rrddata.rs
+++ b/server/src/api/nodes/rrddata.rs
@@ -18,12 +18,61 @@ impl DataPoint for PdmNodeDatapoint {
     }
 
     fn fields() -> &'static [&'static str] {
-        &["metric-collection-total-time"]
+        &[
+            "cpu-current",
+            "cpu-iowait",
+            "cpu-avg1",
+            "cpu-avg5",
+            "cpu-avg15",
+            "cpu-pressure-some-avg10",
+            "disk-total",
+            "disk-used",
+            "disk-read-iops",
+            "disk-write-iops",
+            "disk-read",
+            "disk-write",
+            "disk-io-ticks",
+            "io-pressure-some-avg10",
+            "io-pressure-full-avg10",
+            "mem-total",
+            "mem-used",
+            "mem-pressure-some-avg10",
+            "mem-pressure-full-avg10",
+            "swap-total",
+            "swap-used",
+            "net-in",
+            "net-out",
+            "metric-collection-total-time",
+        ]
     }
 
     fn set_field(&mut self, name: &str, value: f64) {
-        if name == "metric-collection-total-time" {
-            self.metric_collection_total_time = Some(value);
+        match name {
+            "cpu-current" => self.cpu_current = Some(value),
+            "cpu-iowait" => self.cpu_iowait = Some(value),
+            "cpu-avg1" => self.cpu_avg1 = Some(value),
+            "cpu-avg5" => self.cpu_avg5 = Some(value),
+            "cpu-avg15" => self.cpu_avg15 = Some(value),
+            "cpu-pressure-some-avg10" => self.cpu_pressure_some_avg10 = Some(value),
+            "disk-total" => self.disk_total = Some(value),
+            "disk-used" => self.disk_used = Some(value),
+            "disk-read-iops" => self.disk_read_iops = Some(value),
+            "disk-write-iops" => self.disk_write_iops = Some(value),
+            "disk-read" => self.disk_read = Some(value),
+            "disk-write" => self.disk_write = Some(value),
+            "disk-io-ticks" => self.disk_io_ticks = Some(value),
+            "io-pressure-some-avg10" => self.io_pressure_some_avg10 = Some(value),
+            "io-pressure-full-avg10" => self.io_pressure_full_avg10 = Some(value),
+            "mem-total" => self.mem_total = Some(value),
+            "mem-used" => self.mem_used = Some(value),
+            "mem-pressure-some-avg10" => self.mem_pressure_some_avg10 = Some(value),
+            "mem-pressure-full-avg10" => self.mem_pressure_full_avg10 = Some(value),
+            "swap-total" => self.swap_total = Some(value),
+            "swap-used" => self.swap_used = Some(value),
+            "net-in" => self.net_in = Some(value),
+            "net-out" => self.net_out = Some(value),
+            "metric-collection-total-time" => self.metric_collection_total_time = Some(value),
+            _ => log::error!("setting invalid field '{name}' in PdmNodeDatapoint"),
         }
     }
 }
-- 
2.47.3
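The `*_avg10` pressure fields added above mirror the kernel's PSI averages exposed under `/proc/pressure`. As a rough, hypothetical sketch (this is not the proxmox-procfs parser from this series; the function name is made up), extracting such an average could look like:

```rust
// Hypothetical sketch of reading a PSI average. A line in
// /proc/pressure/cpu looks like:
//   some avg10=1.23 avg60=0.45 avg300=0.10 total=123456
fn parse_avg10(line: &str) -> Option<f64> {
    line.split_whitespace()
        // find the token starting with "avg10=" and strip the prefix
        .find_map(|tok| tok.strip_prefix("avg10="))
        // parse the remaining "1.23" into an f64
        .and_then(|v| v.parse::<f64>().ok())
}

fn main() {
    let line = "some avg10=1.23 avg60=0.45 avg300=0.10 total=123456";
    assert_eq!(parse_avg10(line), Some(1.23));
    assert_eq!(
        parse_avg10("full avg10=0.00 avg60=0.00 avg300=0.00 total=0"),
        Some(0.0)
    );
}
```

The real module additionally distinguishes the `some` and `full` lines and the host vs. cgroup files, which this sketch omits.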





^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH datacenter-manager 25/26] ui: node status: add RRD graphs for PDM host metrics
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (23 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH datacenter-manager 24/26] pdm-api-types: add PDM host metric fields Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-12 13:52 ` [PATCH datacenter-manager 26/26] ui: lxc/qemu/node: use RRD value render helpers Lukas Wagner
  2026-03-16 13:42 ` [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Arthur Bied-Charreton
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

This adds RRD graphs to the existing node status panel. We add graphs
for:
  - CPU/IOWait
  - Load-Avg
  - Memory usage
  - Network utilization
  - Pressure (CPU, memory, IO)

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 ui/src/administration/node_status.rs | 312 ++++++++++++++++++++++++++-
 ui/src/renderer.rs                   |  49 +++++
 2 files changed, 354 insertions(+), 7 deletions(-)

diff --git a/ui/src/administration/node_status.rs b/ui/src/administration/node_status.rs
index a61f25a2..62684338 100644
--- a/ui/src/administration/node_status.rs
+++ b/ui/src/administration/node_status.rs
@@ -7,17 +7,28 @@ use proxmox_node_status::NodePowerCommand;
 use proxmox_time::epoch_i64;
 use proxmox_yew_comp::percent_encoding::percent_encode_component;
 use proxmox_yew_comp::utils::{copy_text_to_clipboard, render_epoch};
-use proxmox_yew_comp::{http_post, ConfirmButton, NodeStatusPanel};
-use pwt::prelude::*;
+use proxmox_yew_comp::{
+    http_post, ConfirmButton, NodeStatusPanel, RRDGraph, RRDGrid, RRDTimeframe,
+    RRDTimeframeSelector, Series,
+};
+use pwt::css::JustifyContent;
 use pwt::widget::{Button, Column, Container, Row};
 use pwt::AsyncAbortGuard;
+use pwt::{prelude::*, AsyncPool};
 
-use crate::get_nodename;
+use pdm_api_types::rrddata::PdmNodeDatapoint;
+
+use crate::{get_nodename, renderer};
 
 #[derive(Properties, Clone, PartialEq)]
-pub(crate) struct NodeStatus {}
+pub(crate) struct NodeStatus {
+    #[prop_or(60_000)]
+    /// The interval, in milliseconds, for refreshing the RRD data
+    pub rrd_interval: u32,
+}
 
 impl NodeStatus {
+    /// Create new [`NodeStatus`] panel.
     pub(crate) fn new() -> Self {
         yew::props!(Self {})
     }
@@ -31,20 +42,58 @@ impl From<NodeStatus> for VNode {
 
 enum Msg {
     Reload,
+    ReloadRrd,
+    UpdateRrdTimeframe(RRDTimeframe),
     Error(Error),
     RebootOrShutdown(NodePowerCommand),
     ShowSystemReport(bool),
     ShowPackageVersions(bool),
+    RrdLoadFinished(Result<Vec<PdmNodeDatapoint>, proxmox_client::Error>),
 }
 
 struct PdmNodeStatus {
+    time_data: Rc<Vec<i64>>,
+
+    cpu_data: Rc<Series>,
+    iowait_data: Rc<Series>,
+    load_data: Rc<Series>,
+    mem_data: Rc<Series>,
+    mem_total_data: Rc<Series>,
+    swap_data: Rc<Series>,
+    swap_total_data: Rc<Series>,
+    disk_usage_data: Rc<Series>,
+    disk_total_data: Rc<Series>,
+    disk_transfer_read_data: Rc<Series>,
+    disk_transfer_write_data: Rc<Series>,
+    disk_iops_read_data: Rc<Series>,
+    disk_iops_write_data: Rc<Series>,
+    cpu_pressure_some_data: Rc<Series>,
+    mem_pressure_some_data: Rc<Series>,
+    mem_pressure_full_data: Rc<Series>,
+    io_pressure_some_data: Rc<Series>,
+    io_pressure_full_data: Rc<Series>,
+    net_in: Rc<Series>,
+    net_out: Rc<Series>,
+
+    rrd_time_frame: RRDTimeframe,
     error: Option<Error>,
     abort_guard: Option<AsyncAbortGuard>,
     show_system_report: bool,
     show_package_versions: bool,
+
+    async_pool: AsyncPool,
+    _timeout: Option<gloo_timers::callback::Timeout>,
 }
 
 impl PdmNodeStatus {
+    async fn reload_rrd(rrd_time_frame: RRDTimeframe) -> Msg {
+        let res = crate::pdm_client()
+            .get_pdm_node_rrddata(rrd_time_frame.mode, rrd_time_frame.timeframe)
+            .await;
+
+        Msg::RrdLoadFinished(res)
+    }
+
     fn change_power_state(&mut self, ctx: &yew::Context<Self>, command: NodePowerCommand) {
         let link = ctx.link().clone();
         self.abort_guard.replace(AsyncAbortGuard::spawn(async move {
@@ -184,8 +233,37 @@ impl Component for PdmNodeStatus {
     type Message = Msg;
     type Properties = NodeStatus;
 
-    fn create(_ctx: &yew::Context<Self>) -> Self {
+    fn create(ctx: &yew::Context<Self>) -> Self {
+        ctx.link().send_message(Msg::ReloadRrd);
+
         Self {
+            time_data: Rc::new(Vec::new()),
+
+            cpu_data: empty_series(),
+            cpu_pressure_some_data: empty_series(),
+            mem_pressure_some_data: empty_series(),
+            mem_pressure_full_data: empty_series(),
+            io_pressure_some_data: empty_series(),
+            io_pressure_full_data: empty_series(),
+            iowait_data: empty_series(),
+            load_data: empty_series(),
+            mem_data: empty_series(),
+            mem_total_data: empty_series(),
+            swap_data: empty_series(),
+            swap_total_data: empty_series(),
+            net_in: empty_series(),
+            net_out: empty_series(),
+            disk_usage_data: empty_series(),
+            disk_total_data: empty_series(),
+            disk_transfer_read_data: empty_series(),
+            disk_transfer_write_data: empty_series(),
+            disk_iops_read_data: empty_series(),
+            disk_iops_write_data: empty_series(),
+
+            async_pool: AsyncPool::new(),
+            _timeout: None,
+
+            rrd_time_frame: RRDTimeframe::load(),
             error: None,
             abort_guard: None,
             show_system_report: false,
@@ -212,6 +290,121 @@ impl Component for PdmNodeStatus {
                 self.show_package_versions = show_package_versions;
                 true
             }
+            Msg::ReloadRrd => {
+                self._timeout = None;
+                let timeframe = self.rrd_time_frame;
+                self.async_pool.send_future(ctx.link().clone(), async move {
+                    Self::reload_rrd(timeframe).await
+                });
+                true
+            }
+            Msg::RrdLoadFinished(res) => match res {
+                Ok(data_points) => {
+                    self.error = None;
+                    let mut cpu_vec = Vec::with_capacity(data_points.len());
+                    let mut cpu_pressure_some_vec = Vec::with_capacity(data_points.len());
+                    let mut iowait_vec = Vec::with_capacity(data_points.len());
+                    let mut load_vec = Vec::with_capacity(data_points.len());
+                    let mut mem_vec = Vec::with_capacity(data_points.len());
+                    let mut mem_total_vec = Vec::with_capacity(data_points.len());
+                    let mut swap_vec = Vec::with_capacity(data_points.len());
+                    let mut swap_total_vec = Vec::with_capacity(data_points.len());
+                    let mut mem_pressure_some_vec = Vec::with_capacity(data_points.len());
+                    let mut mem_pressure_full_vec = Vec::with_capacity(data_points.len());
+                    let mut io_pressure_some_vec = Vec::with_capacity(data_points.len());
+                    let mut io_pressure_full_vec = Vec::with_capacity(data_points.len());
+                    let mut time_vec = Vec::with_capacity(data_points.len());
+                    let mut net_in_vec = Vec::with_capacity(data_points.len());
+                    let mut net_out_vec = Vec::with_capacity(data_points.len());
+                    let mut disk_usage_vec = Vec::with_capacity(data_points.len());
+                    let mut disk_total_vec = Vec::with_capacity(data_points.len());
+                    let mut disk_transfer_read_vec = Vec::with_capacity(data_points.len());
+                    let mut disk_transfer_write_vec = Vec::with_capacity(data_points.len());
+                    let mut disk_iops_read_vec = Vec::with_capacity(data_points.len());
+                    let mut disk_iops_write_vec = Vec::with_capacity(data_points.len());
+
+                    for data in data_points {
+                        cpu_vec.push(data.cpu_current.unwrap_or(f64::NAN));
+                        iowait_vec.push(data.cpu_iowait.unwrap_or(f64::NAN));
+                        load_vec.push(data.cpu_avg1.unwrap_or(f64::NAN));
+                        cpu_pressure_some_vec
+                            .push(data.cpu_pressure_some_avg10.unwrap_or(f64::NAN));
+                        mem_vec.push(data.mem_used.unwrap_or(f64::NAN));
+                        mem_total_vec.push(data.mem_total.unwrap_or(f64::NAN));
+                        swap_vec.push(data.swap_used.unwrap_or(f64::NAN));
+                        swap_total_vec.push(data.swap_total.unwrap_or(f64::NAN));
+                        mem_pressure_some_vec
+                            .push(data.mem_pressure_some_avg10.unwrap_or(f64::NAN));
+                        mem_pressure_full_vec
+                            .push(data.mem_pressure_full_avg10.unwrap_or(f64::NAN));
+                        net_in_vec.push(data.net_in.unwrap_or(f64::NAN));
+                        net_out_vec.push(data.net_out.unwrap_or(f64::NAN));
+                        io_pressure_some_vec.push(data.io_pressure_some_avg10.unwrap_or(f64::NAN));
+                        io_pressure_full_vec.push(data.io_pressure_full_avg10.unwrap_or(f64::NAN));
+
+                        disk_total_vec.push(data.disk_total.unwrap_or(f64::NAN));
+                        disk_usage_vec.push(data.disk_used.unwrap_or(f64::NAN));
+                        disk_transfer_read_vec.push(data.disk_read.unwrap_or(f64::NAN));
+                        disk_transfer_write_vec.push(data.disk_write.unwrap_or(f64::NAN));
+
+                        disk_iops_read_vec.push(data.disk_read_iops.unwrap_or(f64::NAN));
+                        disk_iops_write_vec.push(data.disk_write_iops.unwrap_or(f64::NAN));
+
+                        time_vec.push(data.time as i64);
+                    }
+
+                    self.cpu_data = Rc::new(Series::new(tr!("CPU usage"), cpu_vec));
+                    self.iowait_data = Rc::new(Series::new(tr!("IO delay"), iowait_vec));
+                    self.load_data = Rc::new(Series::new(tr!("Server Load"), load_vec));
+                    self.cpu_pressure_some_data =
+                        Rc::new(Series::new(tr!("Some"), cpu_pressure_some_vec));
+                    self.mem_data = Rc::new(Series::new(tr!("Used Memory"), mem_vec));
+                    self.mem_total_data = Rc::new(Series::new(tr!("Total Memory"), mem_total_vec));
+                    self.swap_data = Rc::new(Series::new(tr!("Used Swap"), swap_vec));
+                    self.swap_total_data = Rc::new(Series::new(tr!("Total Swap"), swap_total_vec));
+                    self.mem_pressure_some_data =
+                        Rc::new(Series::new(tr!("Some"), mem_pressure_some_vec));
+                    self.mem_pressure_full_data =
+                        Rc::new(Series::new(tr!("Full"), mem_pressure_full_vec));
+                    self.io_pressure_some_data =
+                        Rc::new(Series::new(tr!("Some"), io_pressure_some_vec));
+                    self.io_pressure_full_data =
+                        Rc::new(Series::new(tr!("Full"), io_pressure_full_vec));
+
+                    self.net_in = Rc::new(Series::new(tr!("Incoming"), net_in_vec));
+                    self.net_out = Rc::new(Series::new(tr!("Outgoing"), net_out_vec));
+
+                    self.disk_usage_data = Rc::new(Series::new(tr!("Used Disk"), disk_usage_vec));
+                    self.disk_total_data = Rc::new(Series::new(tr!("Total Disk"), disk_total_vec));
+                    self.disk_transfer_read_data =
+                        Rc::new(Series::new(tr!("Read"), disk_transfer_read_vec));
+                    self.disk_transfer_write_data =
+                        Rc::new(Series::new(tr!("Write"), disk_transfer_write_vec));
+                    self.disk_iops_read_data =
+                        Rc::new(Series::new(tr!("Read"), disk_iops_read_vec));
+                    self.disk_iops_write_data =
+                        Rc::new(Series::new(tr!("Write"), disk_iops_write_vec));
+
+                    self.time_data = Rc::new(time_vec);
+
+                    let link = ctx.link().clone();
+                    self._timeout = Some(gloo_timers::callback::Timeout::new(
+                        ctx.props().rrd_interval,
+                        move || link.send_message(Msg::ReloadRrd),
+                    ));
+
+                    true
+                }
+                Err(err) => {
+                    self.error = Some(err.into());
+                    true
+                }
+            },
+            Msg::UpdateRrdTimeframe(rrd_time_frame) => {
+                self.rrd_time_frame = rrd_time_frame;
+                ctx.link().send_message(Msg::ReloadRrd);
+                false
+            }
         }
     }
 
@@ -267,12 +460,113 @@ impl Component for PdmNodeStatus {
                     ),
             )
             .with_child(
-                Row::new()
+                Column::new()
                     .class("pwt-content-spacer-padding")
                     .class("pwt-content-spacer-colors")
                     .class("pwt-default-colors")
                     .class(pwt::css::FlexFit)
-                    .with_child(NodeStatusPanel::new().status_base_url("/nodes/localhost/status")),
+                    .with_child(
+                        NodeStatusPanel::new()
+                            .status_base_url("/nodes/localhost/status")
+                            .with_child(renderer::separator().padding_x(4))
+                            .with_optional_child(
+                                self.error
+                                    .as_ref()
+                                    .map(|err| pwt::widget::error_message(&err.to_string())),
+                            )
+                            .with_child(
+                                Row::new()
+                                    .padding_x(4)
+                                    .padding_y(1)
+                                    .class(JustifyContent::FlexEnd)
+                                    .with_child(
+                                        RRDTimeframeSelector::new().on_change(
+                                            ctx.link().callback(Msg::UpdateRrdTimeframe),
+                                        ),
+                                    ),
+                            )
+                            .with_child(
+                                RRDGrid::new()
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("CPU Usage"))
+                                            .render_value(renderer::rrd_value::render_cpu_usage)
+                                            .serie0(Some(self.cpu_data.clone()))
+                                            .serie1(Some(self.iowait_data.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("Server Load"))
+                                            .render_value(renderer::rrd_value::render_load)
+                                            .serie0(Some(self.load_data.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("Memory Usage"))
+                                            .binary(true)
+                                            .render_value(renderer::rrd_value::render_bytes)
+                                            .serie0(Some(self.mem_total_data.clone()))
+                                            .serie1(Some(self.mem_data.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("Swap Usage"))
+                                            .binary(true)
+                                            .render_value(renderer::rrd_value::render_bytes)
+                                            .serie0(Some(self.swap_total_data.clone()))
+                                            .serie1(Some(self.swap_data.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("Network Traffic"))
+                                            .binary(true)
+                                            .render_value(renderer::rrd_value::render_bandwidth)
+                                            .serie0(Some(self.net_in.clone()))
+                                            .serie1(Some(self.net_out.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("CPU Pressure Stall"))
+                                            .render_value(renderer::rrd_value::render_pressure)
+                                            .serie0(Some(self.cpu_pressure_some_data.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("Memory Pressure Stall"))
+                                            .render_value(renderer::rrd_value::render_pressure)
+                                            .serie0(Some(self.mem_pressure_some_data.clone()))
+                                            .serie1(Some(self.mem_pressure_full_data.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("IO Pressure Stall"))
+                                            .render_value(renderer::rrd_value::render_pressure)
+                                            .serie0(Some(self.io_pressure_some_data.clone()))
+                                            .serie1(Some(self.io_pressure_full_data.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("Root Disk Usage"))
+                                            .render_value(renderer::rrd_value::render_bytes)
+                                            .serie0(Some(self.disk_usage_data.clone()))
+                                            .serie1(Some(self.disk_total_data.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("Root Disk Transfer Rate"))
+                                            .binary(true)
+                                            .render_value(renderer::rrd_value::render_bandwidth)
+                                            .serie0(Some(self.disk_transfer_read_data.clone()))
+                                            .serie1(Some(self.disk_transfer_write_data.clone())),
+                                    )
+                                    .with_child(
+                                        RRDGraph::new(self.time_data.clone())
+                                            .title(tr!("Root Disk IOPS"))
+                                            .serie0(Some(self.disk_iops_read_data.clone()))
+                                            .serie1(Some(self.disk_iops_write_data.clone())),
+                                    ),
+                            ),
+                    ),
             )
             .with_optional_child(
                 self.show_system_report
@@ -285,3 +579,7 @@ impl Component for PdmNodeStatus {
             .into()
     }
 }
+
+fn empty_series() -> Rc<Series> {
+    Rc::new(Series::new("", Vec::new()))
+}
diff --git a/ui/src/renderer.rs b/ui/src/renderer.rs
index 00c0720e..bfc059b3 100644
--- a/ui/src/renderer.rs
+++ b/ui/src/renderer.rs
@@ -111,3 +111,52 @@ pub(crate) fn render_title_row(title: String, icon: &str) -> Row {
         .with_child(Fa::new(icon))
         .with_child(title)
 }
+
+/// Helpers for rendering values in RRD graphs.
+pub mod rrd_value {
+    /// Render CPU usage in percent. `v` is multiplied by 100 to get the percent value.
+    pub fn render_cpu_usage(v: &f64) -> String {
+        if v.is_finite() {
+            format!("{:.1}%", v * 100.0)
+        } else {
+            v.to_string()
+        }
+    }
+
+    /// Render server load value.
+    pub fn render_load(v: &f64) -> String {
+        if v.is_finite() {
+            format!("{:.2}", v)
+        } else {
+            v.to_string()
+        }
+    }
+
+    /// Render a byte value.
+    pub fn render_bytes(v: &f64) -> String {
+        if v.is_finite() {
+            proxmox_human_byte::HumanByte::from(*v as u64).to_string()
+        } else {
+            v.to_string()
+        }
+    }
+
+    /// Render bandwidth.
+    pub fn render_bandwidth(v: &f64) -> String {
+        if v.is_finite() {
+            let bytes = proxmox_human_byte::HumanByte::from(*v as u64);
+            format!("{bytes}/s")
+        } else {
+            v.to_string()
+        }
+    }
+
+    /// Render pressure stall value.
+    pub fn render_pressure(v: &f64) -> String {
+        if v.is_finite() {
+            format!("{:.1}%", v)
+        } else {
+            v.to_string()
+        }
+    }
+}
-- 
2.47.3
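The RRD update handler in the patch above maps each missing `Option` field to `f64::NAN` before building a `Series`, so gaps in the collected data render as gaps in the graphs rather than as misleading zeros. In isolation the pattern is (names are illustrative, not from the patch):

```rust
// Standalone sketch: RRD datapoints arrive as Option<f64>; mapping None
// to f64::NAN lets the graph widget skip the point instead of plotting 0.
fn to_series(points: &[Option<f64>]) -> Vec<f64> {
    points.iter().map(|p| p.unwrap_or(f64::NAN)).collect()
}

fn main() {
    let series = to_series(&[Some(0.25), None, Some(0.5)]);
    assert_eq!(series.len(), 3);
    assert!(series[1].is_nan()); // the gap survives as NaN
    assert_eq!(series[0], 0.25);
}
```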






* [PATCH datacenter-manager 26/26] ui: lxc/qemu/node: use RRD value render helpers
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (24 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH datacenter-manager 25/26] ui: node status: add RRD graphs for PDM host metrics Lukas Wagner
@ 2026-03-12 13:52 ` Lukas Wagner
  2026-03-16 13:42 ` [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Arthur Bied-Charreton
  26 siblings, 0 replies; 31+ messages in thread
From: Lukas Wagner @ 2026-03-12 13:52 UTC (permalink / raw)
  To: pdm-devel

This changes the precision of CPU usage labels slightly: previously two
decimal places were shown (24.42%), now only one (24.4%). Using one
decimal place seems a bit cleaner in the UI, and the additional
precision is not very useful for these kinds of values.
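The difference can be illustrated with Rust's formatting precision specifier (a standalone sketch, not code from the patch):

```rust
// Illustration of the precision change for CPU usage labels:
// the render helpers switch from two decimal places to one.
fn main() {
    let v: f64 = 0.2442; // fractional CPU usage as stored in the RRD
    assert_eq!(format!("{:.2}%", v * 100.0), "24.42%"); // old rendering
    assert_eq!(format!("{:.1}%", v * 100.0), "24.4%"); // new rendering
}
```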

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
 ui/src/pbs/node/overview.rs | 29 +++++++----------------------
 ui/src/pve/lxc/overview.rs  | 34 +++++-----------------------------
 ui/src/pve/node/overview.rs | 29 +++++++----------------------
 ui/src/pve/qemu/overview.rs | 34 +++++-----------------------------
 4 files changed, 24 insertions(+), 102 deletions(-)

diff --git a/ui/src/pbs/node/overview.rs b/ui/src/pbs/node/overview.rs
index b63d45f2..4f874d85 100644
--- a/ui/src/pbs/node/overview.rs
+++ b/ui/src/pbs/node/overview.rs
@@ -17,7 +17,10 @@ use pwt::{
 use pbs_api_types::NodeStatus;
 use pdm_api_types::rrddata::PbsNodeDataPoint;
 
-use crate::{renderer::separator, LoadResult};
+use crate::{
+    renderer::{self, separator},
+    LoadResult,
+};
 
 #[derive(Clone, Debug, Eq, PartialEq, Properties)]
 pub struct PbsNodeOverviewPanel {
@@ -232,38 +235,20 @@ impl yew::Component for PbsNodeOverviewPanelComp {
                         .with_child(
                             RRDGraph::new(self.time_data.clone())
                                 .title(tr!("CPU Usage"))
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        format!("{:.2}%", v * 100.0)
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_cpu_usage)
                                 .serie0(Some(self.cpu_data.clone())),
                         )
                         .with_child(
                             RRDGraph::new(self.time_data.clone())
                                 .title(tr!("Server Load"))
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        format!("{:.2}", v)
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_load)
                                 .serie0(Some(self.load_data.clone())),
                         )
                         .with_child(
                             RRDGraph::new(self.time_data.clone())
                                 .title(tr!("Memory Usage"))
                                 .binary(true)
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        proxmox_human_byte::HumanByte::from(*v as u64).to_string()
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_bytes)
                                 .serie0(Some(self.mem_data.clone()))
                                 .serie1(Some(self.mem_total_data.clone())),
                         ),
diff --git a/ui/src/pve/lxc/overview.rs b/ui/src/pve/lxc/overview.rs
index 8c0196b3..5d70e16d 100644
--- a/ui/src/pve/lxc/overview.rs
+++ b/ui/src/pve/lxc/overview.rs
@@ -18,7 +18,7 @@ use proxmox_yew_comp::{RRDGraph, RRDTimeframe, RRDTimeframeSelector, Series};
 use pdm_api_types::{resource::PveLxcResource, rrddata::LxcDataPoint};
 use pdm_client::types::{IsRunning, LxcStatus};
 
-use crate::renderer::{separator, status_row};
+use crate::renderer::{self, separator, status_row};
 use crate::LoadResult;
 
 #[derive(Clone, Debug, Properties, PartialEq)]
@@ -338,25 +338,13 @@ impl yew::Component for LxcanelComp {
                         .with_child(
                             RRDGraph::new(self.time.clone())
                                 .title(tr!("CPU Usage"))
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        format!("{:.2}%", v * 100.0)
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_cpu_usage)
                                 .serie0(Some(self.cpu.clone())),
                         )
                         .with_child(
                             RRDGraph::new(self.time.clone())
                                 .title(tr!("Memory usage"))
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        proxmox_human_byte::HumanByte::from(*v as u64).to_string()
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_bytes)
                                 .serie0(Some(self.memory.clone()))
                                 .serie1(Some(self.memory_max.clone())),
                         )
@@ -364,13 +352,7 @@ impl yew::Component for LxcanelComp {
                             RRDGraph::new(self.time.clone())
                                 .title(tr!("Network Traffic"))
                                 .binary(true)
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        proxmox_human_byte::HumanByte::from(*v as u64).to_string()
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_bandwidth)
                                 .serie0(Some(self.netin.clone()))
                                 .serie1(Some(self.netout.clone())),
                         )
@@ -378,13 +360,7 @@ impl yew::Component for LxcanelComp {
                             RRDGraph::new(self.time.clone())
                                 .title(tr!("Disk I/O"))
                                 .binary(true)
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        proxmox_human_byte::HumanByte::from(*v as u64).to_string()
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_bandwidth)
                                 .serie0(Some(self.diskread.clone()))
                                 .serie1(Some(self.diskwrite.clone())),
                         ),
diff --git a/ui/src/pve/node/overview.rs b/ui/src/pve/node/overview.rs
index c07180b0..a0f92c38 100644
--- a/ui/src/pve/node/overview.rs
+++ b/ui/src/pve/node/overview.rs
@@ -17,7 +17,10 @@ use pwt::{
 use pdm_api_types::rrddata::NodeDataPoint;
 use pdm_client::types::NodeStatus;
 
-use crate::{renderer::separator, LoadResult};
+use crate::{
+    renderer::{self, separator},
+    LoadResult,
+};
 
 #[derive(Clone, Debug, Eq, PartialEq, Properties)]
 pub struct PveNodeOverviewPanel {
@@ -236,38 +239,20 @@ impl yew::Component for PveNodeOverviewPanelComp {
                         .with_child(
                             RRDGraph::new(self.time_data.clone())
                                 .title(tr!("CPU Usage"))
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        format!("{:.2}%", v * 100.0)
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_cpu_usage)
                                 .serie0(Some(self.cpu_data.clone())),
                         )
                         .with_child(
                             RRDGraph::new(self.time_data.clone())
                                 .title(tr!("Server Load"))
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        format!("{:.2}", v)
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_load)
                                 .serie0(Some(self.load_data.clone())),
                         )
                         .with_child(
                             RRDGraph::new(self.time_data.clone())
                                 .title(tr!("Memory Usage"))
                                 .binary(true)
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        proxmox_human_byte::HumanByte::from(*v as u64).to_string()
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_bytes)
                                 .serie0(Some(self.mem_total_data.clone()))
                                 .serie1(Some(self.mem_data.clone())),
                         ),
diff --git a/ui/src/pve/qemu/overview.rs b/ui/src/pve/qemu/overview.rs
index 6e601d00..bb7241ce 100644
--- a/ui/src/pve/qemu/overview.rs
+++ b/ui/src/pve/qemu/overview.rs
@@ -15,7 +15,7 @@ use pwt::AsyncPool;
 use pdm_api_types::{resource::PveQemuResource, rrddata::QemuDataPoint};
 use pdm_client::types::{IsRunning, QemuStatus};
 
-use crate::renderer::{separator, status_row};
+use crate::renderer::{self, separator, status_row};
 use crate::LoadResult;
 
 #[derive(Clone, Debug, Properties, PartialEq)]
@@ -347,25 +347,13 @@ impl yew::Component for QemuOverviewPanelComp {
                         .with_child(
                             RRDGraph::new(self.time.clone())
                                 .title(tr!("CPU Usage"))
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        format!("{:.2}%", v * 100.0)
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_cpu_usage)
                                 .serie0(Some(self.cpu.clone())),
                         )
                         .with_child(
                             RRDGraph::new(self.time.clone())
                                 .title(tr!("Memory usage"))
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        proxmox_human_byte::HumanByte::from(*v as u64).to_string()
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_bytes)
                                 .serie0(Some(self.memory.clone()))
                                 .serie1(Some(self.memory_max.clone())),
                         )
@@ -373,13 +361,7 @@ impl yew::Component for QemuOverviewPanelComp {
                             RRDGraph::new(self.time.clone())
                                 .title(tr!("Network Traffic"))
                                 .binary(true)
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        proxmox_human_byte::HumanByte::from(*v as u64).to_string()
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_bandwidth)
                                 .serie0(Some(self.netin.clone()))
                                 .serie1(Some(self.netout.clone())),
                         )
@@ -387,13 +369,7 @@ impl yew::Component for QemuOverviewPanelComp {
                             RRDGraph::new(self.time.clone())
                                 .title(tr!("Disk I/O"))
                                 .binary(true)
-                                .render_value(|v: &f64| {
-                                    if v.is_finite() {
-                                        proxmox_human_byte::HumanByte::from(*v as u64).to_string()
-                                    } else {
-                                        v.to_string()
-                                    }
-                                })
+                                .render_value(renderer::rrd_value::render_bandwidth)
                                 .serie0(Some(self.diskread.clone()))
                                 .serie1(Some(self.diskwrite.clone())),
                         ),
-- 
2.47.3
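
The shared helpers these hunks switch to (`renderer::rrd_value::*`) are not shown in this excerpt. Reconstructed from the closures they replace, a minimal sketch could look like the following — note that the real `render_bytes`/`render_bandwidth` delegate to `proxmox_human_byte::HumanByte`, for which a simplified binary-unit formatter stands in here so the sketch is self-contained:

```rust
/// Hypothetical sketch of the helpers in ui/src/renderer/rrd_value.rs,
/// derived from the closures removed in the hunks above; the actual
/// implementations may differ.

/// Render a CPU usage fraction (0.0..=1.0) as a percentage.
pub fn render_cpu_usage(v: &f64) -> String {
    if v.is_finite() {
        format!("{:.2}%", v * 100.0)
    } else {
        v.to_string()
    }
}

/// Render a load average with two decimal places.
pub fn render_load(v: &f64) -> String {
    if v.is_finite() {
        format!("{:.2}", v)
    } else {
        v.to_string()
    }
}

/// Render a byte count. The real helper uses proxmox_human_byte::HumanByte;
/// this stand-in only handles binary (1024-based) units.
pub fn render_bytes(v: &f64) -> String {
    if !v.is_finite() {
        return v.to_string();
    }
    let mut value = *v;
    let units = ["B", "KiB", "MiB", "GiB", "TiB"];
    let mut unit = 0;
    while value >= 1024.0 && unit < units.len() - 1 {
        value /= 1024.0;
        unit += 1;
    }
    format!("{:.2} {}", value, units[unit])
}
```

Centralizing these removes four near-identical closures per overview panel and keeps the NaN/infinity fallback behavior in one place.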





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH proxmox 06/26] disks: import from Proxmox Backup Server
  2026-03-12 13:52 ` [PATCH proxmox 06/26] disks: import from Proxmox Backup Server Lukas Wagner
@ 2026-03-16 13:13   ` Arthur Bied-Charreton
  0 siblings, 0 replies; 31+ messages in thread
From: Arthur Bied-Charreton @ 2026-03-16 13:13 UTC (permalink / raw)
  To: Lukas Wagner; +Cc: pdm-devel

On Thu, Mar 12, 2026 at 02:52:07PM +0100, Lukas Wagner wrote:
> This is based on the disks module from PBS and left unchanged.
> 
> The version has not been set to 1.0 yet since it seems like this crate
> could use a bit of cleanup (a custom error type instead of anyhow,
> documentation).
> 
LGTM besides one small copy-paste error; comment inline.
> Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
> ---
>  Cargo.toml                         |    6 +
>  proxmox-disks/Cargo.toml           |   30 +
>  proxmox-disks/debian/changelog     |    5 +
>  proxmox-disks/debian/control       |   94 ++
>  proxmox-disks/debian/copyright     |   18 +
>  proxmox-disks/debian/debcargo.toml |    7 +
>  proxmox-disks/src/lib.rs           | 1396 ++++++++++++++++++++++++++++
>  proxmox-disks/src/lvm.rs           |   60 ++
>  proxmox-disks/src/parse_helpers.rs |   52 ++
>  proxmox-disks/src/smart.rs         |  227 +++++
>  proxmox-disks/src/zfs.rs           |  205 ++++
>  proxmox-disks/src/zpool_list.rs    |  294 ++++++
>  proxmox-disks/src/zpool_status.rs  |  496 ++++++++++
>  13 files changed, 2890 insertions(+)
>  create mode 100644 proxmox-disks/Cargo.toml
>  create mode 100644 proxmox-disks/debian/changelog
>  create mode 100644 proxmox-disks/debian/control
>  create mode 100644 proxmox-disks/debian/copyright
>  create mode 100644 proxmox-disks/debian/debcargo.toml
>  create mode 100644 proxmox-disks/src/lib.rs
>  create mode 100644 proxmox-disks/src/lvm.rs
>  create mode 100644 proxmox-disks/src/parse_helpers.rs
>  create mode 100644 proxmox-disks/src/smart.rs
>  create mode 100644 proxmox-disks/src/zfs.rs
>  create mode 100644 proxmox-disks/src/zpool_list.rs
>  create mode 100644 proxmox-disks/src/zpool_status.rs
> 
[...]
> @@ -154,6 +159,7 @@ proxmox-async = { version = "0.5.0", path = "proxmox-async" }
>  proxmox-base64 = {  version = "1.0.0", path = "proxmox-base64" }
>  proxmox-compression = { version = "1.0.0", path = "proxmox-compression" }
>  proxmox-daemon = { version = "1.0.0", path = "proxmox-daemon" }
> +proxmox-disks = { version = "0.1.0", path = "proxmox-daemon" }
Should probably be proxmox-disks :)
>  proxmox-fixed-string = { version = "0.1.0", path = "proxmox-fixed-string" }
>  proxmox-http = { version = "1.0.5", path = "proxmox-http" }
>  proxmox-http-error = { version = "1.0.0", path = "proxmox-http-error" }
[...]





* Re: [PATCH proxmox 11/26] procfs: add helpers for querying pressure stall information
  2026-03-12 13:52 ` [PATCH proxmox 11/26] procfs: add helpers for querying pressure stall information Lukas Wagner
@ 2026-03-16 13:25   ` Arthur Bied-Charreton
  0 siblings, 0 replies; 31+ messages in thread
From: Arthur Bied-Charreton @ 2026-03-16 13:25 UTC (permalink / raw)
  To: Lukas Wagner; +Cc: pdm-devel

On Thu, Mar 12, 2026 at 02:52:12PM +0100, Lukas Wagner wrote:
> This is put into a new crate, proxmox-procfs, since proxmox-sys is
> already quite large and should be split in the future. The general idea
> is that the contents of proxmox_sys::linux::procfs should be moved into
> this new crate (potentially after some API cleanup) at some point.
> 
> Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
I really like how you designed this; it reads very nicely!

Two comments inline.
> ---
>  Cargo.toml                          |   2 +
>  proxmox-procfs/Cargo.toml           |  18 ++
>  proxmox-procfs/debian/changelog     |   5 +
>  proxmox-procfs/debian/control       |  50 +++++
>  proxmox-procfs/debian/copyright     |  18 ++
>  proxmox-procfs/debian/debcargo.toml |   7 +
>  proxmox-procfs/src/lib.rs           |   1 +
>  proxmox-procfs/src/pressure.rs      | 334 ++++++++++++++++++++++++++++
>  8 files changed, 435 insertions(+)
>  create mode 100644 proxmox-procfs/Cargo.toml
>  create mode 100644 proxmox-procfs/debian/changelog
>  create mode 100644 proxmox-procfs/debian/control
>  create mode 100644 proxmox-procfs/debian/copyright
>  create mode 100644 proxmox-procfs/debian/debcargo.toml
>  create mode 100644 proxmox-procfs/src/lib.rs
>  create mode 100644 proxmox-procfs/src/pressure.rs
> +
[...]
> +    fn read_pressure_line<R: BufRead>(
> +        reader: &mut R,
> +        buf: &mut Vec<u8>,
> +    ) -> Result<(PressureRecordKind, PressureRecord), Error> {
> +        // The buffer should be empty. It is only passed by the caller as a performance
> +        // optimization
> +        debug_assert!(buf.is_empty());
> +
> +        reader.read_until(b'\n', buf)?;
> +        // SAFETY: In production, `reader` is expected to read from
> +        // procfs/sysfs pressure files, which only ever should return ASCII strings.
> +        let line = unsafe { std::str::from_utf8_unchecked(buf) };
Major nit, but I wonder if using unsafe here is justified, given that this 
assumption is not enforced at the type level (BufRead could return anything),
and this is not really a hot code path
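A safe variant could validate the bytes and surface a parse error instead. The following sketch uses a plain `String` error and a stand-in function shape, since the patch's actual `Error` type and `read_pressure_line` signature aren't shown here:

```rust
use std::io::BufRead;

// Sketch of the safe alternative to from_utf8_unchecked: validate UTF-8
// instead of assuming it. The String error type is a stand-in for the
// patch's real Error type, which is not shown in this excerpt.
fn read_line_checked<R: BufRead>(reader: &mut R, buf: &mut Vec<u8>) -> Result<String, String> {
    reader.read_until(b'\n', buf).map_err(|e| e.to_string())?;
    let line = std::str::from_utf8(buf)
        .map_err(|e| format!("pressure file is not valid UTF-8: {e}"))?;
    Ok(line.to_string())
}
```

The UTF-8 validation is a single linear scan over a short line, so the cost should be negligible on this path.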
> +
> +        Self::read_record(line)
> +    }
> +
> +
[...]
> +#[cfg(test)]
> +mod test {
> +    use super::*;
> +
> +    #[test]
> +    fn test_read_psi() {
> +        let s = "some avg10=1.42 avg60=2.09 avg300=1.42 total=40979658
> +full avg10=0.08 avg60=0.18 avg300=0.13 total=22865313
> +";
> +
> +        let mut reader = std::io::Cursor::new(s);
> +        let stats = PressureData::read(&mut reader).unwrap();
> +
> +        assert_eq!(stats.some.total, 40979658);
> +        assert!((stats.some.average_10 - 1.82).abs() < f64::EPSILON);
This test is failing, looks like this was supposed to be 1.42
> +        assert!((stats.some.average_60 - 2.09).abs() < f64::EPSILON);
> +        assert!((stats.some.average_300 - 1.42).abs() < f64::EPSILON);
> +
[...]
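
For context, each line in a pressure file follows the `key=value` format visible in the test data above (e.g. `some avg10=1.42 avg60=2.09 avg300=1.42 total=40979658`). A minimal standalone parser for one such line — illustrative only, not the patch's actual implementation — could look like:

```rust
// Parse one PSI line such as:
//   "some avg10=1.42 avg60=2.09 avg300=1.42 total=40979658"
// Returns (kind, avg10, avg60, avg300, total), or None on malformed input.
// This is a sketch for illustration, not the patch's parser.
fn parse_psi_line(line: &str) -> Option<(String, f64, f64, f64, u64)> {
    let mut parts = line.split_whitespace();
    let kind = parts.next()?.to_string(); // "some" or "full"
    let (mut avg10, mut avg60, mut avg300, mut total) = (0.0, 0.0, 0.0, 0u64);
    for part in parts {
        let (key, value) = part.split_once('=')?;
        match key {
            "avg10" => avg10 = value.parse().ok()?,
            "avg60" => avg60 = value.parse().ok()?,
            "avg300" => avg300 = value.parse().ok()?,
            "total" => total = value.parse().ok()?,
            _ => return None, // unknown field
        }
    }
    Some((kind, avg10, avg60, avg300, total))
}
```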





* Re: [PATCH proxmox-backup 14/26] tools: replace disks module with proxmox-disks
  2026-03-12 13:52 ` [PATCH proxmox-backup 14/26] tools: replace disks module with proxmox-disks Lukas Wagner
@ 2026-03-16 13:27   ` Arthur Bied-Charreton
  0 siblings, 0 replies; 31+ messages in thread
From: Arthur Bied-Charreton @ 2026-03-16 13:27 UTC (permalink / raw)
  To: Lukas Wagner; +Cc: pdm-devel

On Thu, Mar 12, 2026 at 02:52:15PM +0100, Lukas Wagner wrote:
> This commit replaces the disks module with the proxmox-disks crate. It
> is extracted to enable disk metric collection in PDM.
> 
> Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
[...]

This patch does not apply anymore due to 3077de11 and c0a9b651, would 
need a rebase





* Re: [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host
  2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
                   ` (25 preceding siblings ...)
  2026-03-12 13:52 ` [PATCH datacenter-manager 26/26] ui: lxc/qemu/node: use RRD value render helpers Lukas Wagner
@ 2026-03-16 13:42 ` Arthur Bied-Charreton
  26 siblings, 0 replies; 31+ messages in thread
From: Arthur Bied-Charreton @ 2026-03-16 13:42 UTC (permalink / raw)
  To: Lukas Wagner; +Cc: pdm-devel

On Thu, Mar 12, 2026 at 02:52:01PM +0100, Lukas Wagner wrote:
> This series adds metric collection for physical PDM hosts.
> 
> The patches for `proxmox` introduce three new crates:
>   - proxmox-disks: broken out from proxmox-backup, needed to read disk stats
>   - proxmox-parallel-handler: also broken out from proxmox-backup,
>     needed as a dependency for proxmox-disks. Since the scope was manageable,
>     this series improves the existing code a bit by adding a dedicated error type,
>     some documentation and basic unit tests
>   - proxmox-procfs: as a new home for procfs-related modules. This patch series adds
>     a `pressure` module for reading pressure stall information for the host and cgroups.
>     The general idea is that we should move other procfs helpers from proxmox-sys into
>     this new crate, but to avoid scope explosion this is not done as part of this
>     series.
> 
> The patches for proxmox-backup just switch over to the implementations that were moved into
> proxmox-disks and proxmox-parallel-handler.
> 
> The patches for proxmox-yew-comp slightly adapt the existing NodeStatusPanel to allow the application
> to inject child components into the same panel.
> 
> The proxmox-datacenter-manager patches do some initial refactoring (naming), and then add the needed
> collection loop, API types and UI elements.
[...]

Looks very nice. I gave this a spin on my PDM VM: everything works, the
metrics are being collected and updated, and they are consistent with what
I see on my machine.

I responded to some patches directly with comments; modulo those,
consider this series:

Reviewed-by: Arthur Bied-Charreton <a.bied-charreton@proxmox.com>
Tested-by: Arthur Bied-Charreton <a.bied-charreton@proxmox.com>





end of thread, other threads:[~2026-03-16 13:42 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-12 13:52 [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 01/26] sys: procfs: don't read from sysfs during unit tests Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 02/26] parallel-handler: import code from Proxmox Backup Server Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 03/26] parallel-handler: introduce custom error type Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 04/26] parallel-handler: add documentation Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 05/26] parallel-handler: add simple unit-test suite Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 06/26] disks: import from Proxmox Backup Server Lukas Wagner
2026-03-16 13:13   ` Arthur Bied-Charreton
2026-03-12 13:52 ` [PATCH proxmox 07/26] disks: fix typo in `initialize_gpt_disk` Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 08/26] disks: add parts of gather_disk_stats from PBS Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 09/26] disks: gate api macro behind 'api-types' feature Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 10/26] disks: clippy: collapse if-let chains where possible Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox 11/26] procfs: add helpers for querying pressure stall information Lukas Wagner
2026-03-16 13:25   ` Arthur Bied-Charreton
2026-03-12 13:52 ` [PATCH proxmox 12/26] time: use u64 parse helper from nom Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox-backup 13/26] tools: move ParallelHandler to new proxmox-parallel-handler crate Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox-backup 14/26] tools: replace disks module with proxmox-disks Lukas Wagner
2026-03-16 13:27   ` Arthur Bied-Charreton
2026-03-12 13:52 ` [PATCH proxmox-backup 15/26] metric collection: use blockdev_stat_for_path from proxmox_disks Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox-yew-comp 16/26] node status panel: add `children` property Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox-yew-comp 17/26] RRDGrid: fix size observer by attaching node reference to rendered container Lukas Wagner
2026-03-12 13:52 ` [PATCH proxmox-yew-comp 18/26] RRDGrid: add padding and increase gap between elements Lukas Wagner
2026-03-12 13:52 ` [PATCH datacenter-manager 19/26] metric collection: clarify naming for remote metric collection Lukas Wagner
2026-03-12 13:52 ` [PATCH datacenter-manager 20/26] metric collection: fix minor typo in error message Lukas Wagner
2026-03-12 13:52 ` [PATCH datacenter-manager 21/26] metric collection: collect PDM host metrics in a new collection task Lukas Wagner
2026-03-12 13:52 ` [PATCH datacenter-manager 22/26] api: fix /nodes/localhost/rrddata endpoint Lukas Wagner
2026-03-12 13:52 ` [PATCH datacenter-manager 23/26] pdm: node rrd data: rename 'total-time' to 'metric-collection-total-time' Lukas Wagner
2026-03-12 13:52 ` [PATCH datacenter-manager 24/26] pdm-api-types: add PDM host metric fields Lukas Wagner
2026-03-12 13:52 ` [PATCH datacenter-manager 25/26] ui: node status: add RRD graphs for PDM host metrics Lukas Wagner
2026-03-12 13:52 ` [PATCH datacenter-manager 26/26] ui: lxc/qemu/node: use RRD value render helpers Lukas Wagner
2026-03-16 13:42 ` [PATCH datacenter-manager/proxmox{,-backup,-yew-comp} 00/26] metric collection for the PDM host Arthur Bied-Charreton
