* [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics
@ 2026-02-09 9:15 Christian Ebner
2026-02-09 9:15 ` [PATCH v1 1/6] shared-memory: drop check for mmap file being located on tmpfs Christian Ebner
` (18 more replies)
0 siblings, 19 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
This patch series implements request and traffic counters for the s3
client, shared as atomic counters via a memory mapped file across all
s3 client instances.
The shared counters are instantiated on demand for individual
datastores with s3 backend when the s3 client is instantiated, and
stored as part of the datastore implementation by loading the mmapped
file on datastore instantiation and caching it for further access.
Further, the counters are loaded during rrd metrics collection and
exposed as charts in the datastore summary, including the total
request counts per method, the total upload and download traffic as
well as the average upload and download rates.
Usage statistics for the s3 backend are not included in this series
and will be tackled in a follow-up series.
Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=6563
proxmox:
Christian Ebner (6):
shared-memory: drop check for mmap file being located on tmpfs
s3-client: add persistent shared request counters for client
s3-client: add counters for upload/download traffic
s3-client: account for upload traffic on successful request sending
s3-client: account for downloaded bytes in incoming response body
pbs-api-types: define api type for s3 request statistics
pbs-api-types/src/datastore.rs | 28 +++
proxmox-s3-client/Cargo.toml | 4 +
proxmox-s3-client/debian/control | 2 +
proxmox-s3-client/examples/s3_client.rs | 1 +
proxmox-s3-client/src/client.rs | 70 +++++-
proxmox-s3-client/src/lib.rs | 7 +-
proxmox-s3-client/src/response_reader.rs | 75 +++++-
.../src/shared_request_counters.rs | 216 ++++++++++++++++++
proxmox-shared-memory/src/lib.rs | 7 -
9 files changed, 387 insertions(+), 23 deletions(-)
create mode 100644 proxmox-s3-client/src/shared_request_counters.rs
proxmox-backup:
Christian Ebner (11):
datastore: collect request statistics for s3 backed datastores
datastore: expose request counters for s3 backed datastores
api: s3: add endpoint to reset s3 request counters
bin: s3: expose request counter reset method as cli command
datastore: add helper method to get datastore backend type
ui: improve variable name indirectly fixing typo
ui: datastore summary: move store to be part of summary panel
ui: expose s3 request counter statistics in the datastore summary
metrics: collect s3 datastore statistics as rrd metrics
api: admin: expose s3 statistics in datastore rrd data
partially fix #6563: ui: expose s3 rrd charts in datastore summary
pbs-datastore/src/datastore.rs | 66 +++-
pbs-datastore/src/lib.rs | 2 +-
src/api2/admin/datastore.rs | 23 +-
src/api2/admin/s3.rs | 86 ++++-
src/api2/config/s3.rs | 18 +-
src/bin/proxmox_backup_manager/s3.rs | 33 ++
src/server/metric_collection/metric_server.rs | 8 +-
src/server/metric_collection/mod.rs | 65 +++-
src/server/metric_collection/pull_metrics.rs | 18 +-
src/server/metric_collection/rrd.rs | 34 +-
www/datastore/Summary.js | 341 ++++++++++++------
11 files changed, 556 insertions(+), 138 deletions(-)
Summary over all repositories:
20 files changed, 943 insertions(+), 161 deletions(-)
--
Generated by murpp 0.9.0
* [PATCH v1 1/6] shared-memory: drop check for mmap file being located on tmpfs
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 2/6] s3-client: add persistent shared request counters for client Christian Ebner
` (17 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
For some use cases, e.g. the s3 client request counters, the mmapped
file should outlive a reboot and must therefore be placed on
persistent storage, not just a tmpfs. Drop the explicit check for the
file being located on tmpfs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
proxmox-shared-memory/src/lib.rs | 7 -------
1 file changed, 7 deletions(-)
diff --git a/proxmox-shared-memory/src/lib.rs b/proxmox-shared-memory/src/lib.rs
index b067d1b9..10c2c132 100644
--- a/proxmox-shared-memory/src/lib.rs
+++ b/proxmox-shared-memory/src/lib.rs
@@ -105,13 +105,6 @@ impl<T: Sized + Init> SharedMemory<T> {
.ok_or_else(|| format_err!("bad path {:?}", path))?
.to_owned();
- if !dir_name.ends_with("shmemtest") {
- let statfs = nix::sys::statfs::statfs(&dir_name)?;
- if statfs.filesystem_type() != nix::sys::statfs::TMPFS_MAGIC {
- bail!("path {:?} is not on tmpfs", dir_name);
- }
- }
-
let oflag = OFlag::O_RDWR | OFlag::O_CLOEXEC;
// Try to open existing file
--
2.47.3
* [PATCH v1 2/6] s3-client: add persistent shared request counters for client
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
2026-02-09 9:15 ` [PATCH v1 1/6] shared-memory: drop check for mmap file being located on tmpfs Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-11 12:13 ` Robert Obkircher
2026-02-09 9:15 ` [PATCH v1 3/6] s3-client: add counters for upload/download traffic Christian Ebner
` (16 subsequent siblings)
18 siblings, 1 reply; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Implements atomic counters for api requests successfully sent to the
S3 API endpoint via the client, accounting for individual requests
and discriminating them based on their method.
Since multiple client instances might exist that access the API
concurrently, even in different processes, provide the atomic
counters via a shared memory mapping. This follows along the lines of
the shared traffic limiter implementation.
The counter mappings are conditionally constructed on client
instantiation, based on caller-given configuration options.
Keep sequentially consistent ordering of the counters, with the
intent to use them for statistics, soft limits and/or notifications
within Proxmox Backup Server.
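As a rough illustration, a caller could wire the counters up along
these lines (a minimal sketch; config values, identifiers and paths
are placeholders, not taken from this series):
    let request_counter_options = S3RequestCounterOptions {
        id: "endpoint-bucket-store".to_string(), // hypothetical counter id
        base_path: "/run/example/s3-counters".into(), // hypothetical base path
        user: pbs_config::backup_user()?, // assumes the PBS backup user helper
    };
    let options = S3ClientOptions::from_config(
        client_config, // S3ClientConfig loaded by the caller
        secret_key,
        Some(bucket),
        common_prefix,
        None, // no shared rate limiter in this sketch
        Some(request_counter_options),
    );
    let client = S3Client::new(options)?;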
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
proxmox-s3-client/Cargo.toml | 4 +
proxmox-s3-client/debian/control | 2 +
proxmox-s3-client/examples/s3_client.rs | 1 +
proxmox-s3-client/src/client.rs | 48 +++++-
proxmox-s3-client/src/lib.rs | 7 +-
.../src/shared_request_counters.rs | 152 ++++++++++++++++++
6 files changed, 212 insertions(+), 2 deletions(-)
create mode 100644 proxmox-s3-client/src/shared_request_counters.rs
diff --git a/proxmox-s3-client/Cargo.toml b/proxmox-s3-client/Cargo.toml
index a50fa715..1e31bca4 100644
--- a/proxmox-s3-client/Cargo.toml
+++ b/proxmox-s3-client/Cargo.toml
@@ -38,7 +38,9 @@ proxmox-base64 = { workspace = true, optional = true }
proxmox-http = { workspace = true, features = [ "body", "client", "client-trait" ], optional = true }
proxmox-human-byte.workspace = true
proxmox-rate-limiter = { workspace = true, features = [ "rate-limiter", "shared-rate-limiter" ], optional = true }
+proxmox-shared-memory = { workspace = true, optional = true }
proxmox-schema = { workspace = true, features = [ "api-macro", "api-types" ] }
+proxmox-sys = { workspace = true, optional = true }
proxmox-serde.workspace = true
proxmox-time = {workspace = true, optional = true }
@@ -65,6 +67,8 @@ impl = [
"dep:proxmox-base64",
"dep:proxmox-http",
"dep:proxmox-rate-limiter",
+ "dep:proxmox-shared-memory",
+ "dep:proxmox-sys",
"dep:proxmox-time",
]
diff --git a/proxmox-s3-client/debian/control b/proxmox-s3-client/debian/control
index 33418881..a534a107 100644
--- a/proxmox-s3-client/debian/control
+++ b/proxmox-s3-client/debian/control
@@ -85,6 +85,8 @@ Depends:
librust-proxmox-rate-limiter-1+default-dev,
librust-proxmox-rate-limiter-1+rate-limiter-dev,
librust-proxmox-rate-limiter-1+shared-rate-limiter-dev,
+ librust-proxmox-shared-memory-1+default-dev,
+ librust-proxmox-sys-1+default-dev,
librust-proxmox-time-2+default-dev (>= 2.1.0-~~),
librust-quick-xml-0.36+async-tokio-dev (>= 0.36.1-~~),
librust-quick-xml-0.36+default-dev (>= 0.36.1-~~),
diff --git a/proxmox-s3-client/examples/s3_client.rs b/proxmox-s3-client/examples/s3_client.rs
index ca69971c..329de47a 100644
--- a/proxmox-s3-client/examples/s3_client.rs
+++ b/proxmox-s3-client/examples/s3_client.rs
@@ -40,6 +40,7 @@ async fn run() -> Result<(), anyhow::Error> {
put_rate_limit: None,
provider_quirks: Vec::new(),
rate_limiter_config: None,
+ request_counter_config: None,
};
// Creating a client instance and connect to api endpoint
diff --git a/proxmox-s3-client/src/client.rs b/proxmox-s3-client/src/client.rs
index 83176b39..4a6a702b 100644
--- a/proxmox-s3-client/src/client.rs
+++ b/proxmox-s3-client/src/client.rs
@@ -1,5 +1,6 @@
use std::path::{Path, PathBuf};
use std::str::FromStr;
+use std::sync::atomic::Ordering;
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};
@@ -32,6 +33,7 @@ use crate::response_reader::{
CopyObjectResponse, DeleteObjectsResponse, GetObjectResponse, HeadObjectResponse,
ListBucketsResponse, ListObjectsV2Response, PutObjectResponse, ResponseReader,
};
+use crate::shared_request_counters::SharedRequestCounters;
/// Default timeout for s3 api requests.
pub const S3_HTTP_REQUEST_TIMEOUT: Duration = Duration::from_secs(30 * 60);
@@ -74,6 +76,21 @@ pub struct S3RateLimiterConfig {
burst_out: Option<u64>,
}
+/// Options for the s3 client's shared request counters
+pub struct S3RequestCounterOptions {
+ /// ID for the memory mapped file
+ pub id: String,
+ /// Base path for the shared memory mapped file
+ pub base_path: PathBuf,
+ /// User for the to-be-created shared memory mapped file and folders
+ pub user: User,
+}
+
+/// Configuration for the s3 client's shared request counters
+pub struct S3RequestCounterConfig {
+ options: S3RequestCounterOptions,
+}
+
/// Configuration options for client
pub struct S3ClientOptions {
/// Endpoint to access S3 object store.
@@ -100,6 +117,8 @@ pub struct S3ClientOptions {
pub provider_quirks: Vec<ProviderQuirks>,
/// Configuration options for the shared rate limiter.
pub rate_limiter_config: Option<S3RateLimiterConfig>,
+ /// Configuration options for the client's shared request counters.
+ pub request_counter_config: Option<S3RequestCounterConfig>,
}
impl S3ClientOptions {
@@ -110,6 +129,7 @@ impl S3ClientOptions {
bucket: Option<String>,
common_prefix: String,
rate_limiter_options: Option<S3RateLimiterOptions>,
+ request_counter_options: Option<S3RequestCounterOptions>,
) -> Self {
let rate_limiter_config = rate_limiter_options.map(|options| S3RateLimiterConfig {
options,
@@ -118,6 +138,8 @@ impl S3ClientOptions {
rate_out: config.rate_out.map(|human_bytes| human_bytes.as_u64()),
burst_out: config.burst_out.map(|human_bytes| human_bytes.as_u64()),
});
+ let request_counter_config =
+ request_counter_options.map(|options| S3RequestCounterConfig { options });
Self {
endpoint: config.endpoint,
port: config.port,
@@ -131,6 +153,7 @@ impl S3ClientOptions {
put_rate_limit: config.put_rate_limit,
provider_quirks: config.provider_quirks.unwrap_or_default(),
rate_limiter_config,
+ request_counter_config,
}
}
}
@@ -141,6 +164,7 @@ pub struct S3Client {
options: S3ClientOptions,
authority: Authority,
put_rate_limiter: Option<Arc<Mutex<RateLimiter>>>,
+ request_counters: Option<Arc<SharedRequestCounters>>,
}
impl S3Client {
@@ -213,6 +237,21 @@ impl S3Client {
}
}
+ let request_counters = if let Some(config) = options.request_counter_config.as_ref() {
+ let path = config
+ .options
+ .base_path
+ .join(format!("{}.shmem", config.options.id));
+ let request_counters = SharedRequestCounters::open_shared_memory_mapped(
+ &path,
+ config.options.user.clone(),
+ )
+ .context("failed to mmap shared S3 request counters")?;
+ Some(Arc::new(request_counters))
+ } else {
+ None
+ };
+
let client = Client::builder(TokioExecutor::new()).build::<_, Body>(https_connector);
let authority_template = if let Some(port) = options.port {
@@ -241,6 +280,7 @@ impl S3Client {
options,
authority,
put_rate_limiter,
+ request_counters,
})
}
@@ -392,7 +432,13 @@ impl S3Client {
};
match response {
- Ok(Ok(response)) => return Ok(response),
+ Ok(Ok(response)) => {
+ if let Some(counters) = self.request_counters.as_ref() {
+ let _prev = counters.increment(parts.method.clone(), Ordering::SeqCst);
+ }
+
+ return Ok(response);
+ }
Ok(Err(err)) => {
if retry >= MAX_S3_HTTP_REQUEST_RETRY - 1 {
return Err(err.into());
diff --git a/proxmox-s3-client/src/lib.rs b/proxmox-s3-client/src/lib.rs
index d02fd0dc..ceee41a2 100644
--- a/proxmox-s3-client/src/lib.rs
+++ b/proxmox-s3-client/src/lib.rs
@@ -21,7 +21,8 @@ pub use aws_sign_v4::uri_decode;
mod client;
#[cfg(feature = "impl")]
pub use client::{
- S3Client, S3ClientOptions, S3PathPrefix, S3RateLimiterOptions, S3_HTTP_REQUEST_TIMEOUT,
+ S3Client, S3ClientOptions, S3PathPrefix, S3RateLimiterOptions, S3RequestCounterOptions,
+ S3_HTTP_REQUEST_TIMEOUT,
};
#[cfg(feature = "impl")]
mod timestamps;
@@ -33,3 +34,7 @@ mod object_key;
pub use object_key::S3ObjectKey;
#[cfg(feature = "impl")]
mod response_reader;
+#[cfg(feature = "impl")]
+mod shared_request_counters;
+#[cfg(feature = "impl")]
+pub use shared_request_counters::SharedRequestCounters;
diff --git a/proxmox-s3-client/src/shared_request_counters.rs b/proxmox-s3-client/src/shared_request_counters.rs
new file mode 100644
index 00000000..b236490b
--- /dev/null
+++ b/proxmox-s3-client/src/shared_request_counters.rs
@@ -0,0 +1,152 @@
+use std::mem::MaybeUninit;
+use std::path::Path;
+use std::sync::atomic::{AtomicU64, Ordering};
+
+use anyhow::{bail, Error};
+use hyper::http::method::Method;
+use nix::sys::stat::Mode;
+use nix::unistd::User;
+
+use proxmox_shared_memory::{Init, SharedMemory};
+use proxmox_sys::fs::CreateOptions;
+
+const MEMORY_PAGE_SIZE: usize = 4096;
+/// Generated via openssl::sha::sha256(b"Proxmox shared request counters v1.0")[0..8]
+const PROXMOX_SHARED_REQUEST_COUNTERS_1_0: [u8; 8] = [224, 110, 88, 252, 26, 77, 180, 5];
+
+#[repr(C)]
+#[derive(Default)]
+struct RequestCounters {
+ delete: AtomicU64,
+ get: AtomicU64,
+ head: AtomicU64,
+ post: AtomicU64,
+ put: AtomicU64,
+}
+
+impl Init for RequestCounters {
+ fn initialize(this: &mut MaybeUninit<Self>) {
+ unsafe {
+ let this = &mut *this.as_mut_ptr();
+ *this = RequestCounters::default();
+ }
+ }
+}
+
+impl RequestCounters {
+ /// Increment the counter for given method, following the provided memory ordering constraints.
+ ///
+ /// Returns the previously stored value.
+ pub fn increment(&self, method: Method, ordering: Ordering) -> u64 {
+ match method {
+ Method::DELETE => self.delete.fetch_add(1, ordering),
+ Method::GET => self.get.fetch_add(1, ordering),
+ Method::HEAD => self.head.fetch_add(1, ordering),
+ Method::POST => self.post.fetch_add(1, ordering),
+ Method::PUT => self.put.fetch_add(1, ordering),
+ _ => 0,
+ }
+ }
+
+ /// Load current counter state for given method, following the provided memory ordering constraints
+ pub fn load(&self, method: Method, ordering: Ordering) -> u64 {
+ match method {
+ Method::DELETE => self.delete.load(ordering),
+ Method::GET => self.get.load(ordering),
+ Method::HEAD => self.head.load(ordering),
+ Method::POST => self.post.load(ordering),
+ Method::PUT => self.put.load(ordering),
+ _ => 0,
+ }
+ }
+
+ /// Reset all counters, following the provided memory ordering constraints
+ pub fn reset(&self, ordering: Ordering) {
+ self.delete.store(0, ordering);
+ self.get.store(0, ordering);
+ self.head.store(0, ordering);
+ self.post.store(0, ordering);
+ self.put.store(0, ordering);
+ }
+}
+
+#[repr(C)]
+struct MappableRequestCounters {
+ magic: [u8; 8],
+ counters: RequestCounters,
+ _page_size_padding: [u8; MEMORY_PAGE_SIZE
+ - PROXMOX_SHARED_REQUEST_COUNTERS_1_0.len()
+ - std::mem::size_of::<RequestCounters>()],
+}
+
+impl Init for MappableRequestCounters {
+ fn initialize(this: &mut MaybeUninit<Self>) {
+ unsafe {
+ let this = &mut *this.as_mut_ptr();
+ this.magic = PROXMOX_SHARED_REQUEST_COUNTERS_1_0;
+ this.counters = RequestCounters::default();
+ }
+ }
+
+ fn check_type_magic(this: &MaybeUninit<Self>) -> Result<(), Error> {
+ unsafe {
+ let this = &*this.as_ptr();
+ if this.magic != PROXMOX_SHARED_REQUEST_COUNTERS_1_0 {
+ bail!("incorrect magic number for request counters detected");
+ }
+ proxmox_shared_memory::check_subtype(&this.counters)?;
+ }
+ Ok(())
+ }
+}
+
+/// Atomic counters storing per-request method counts for the client.
+///
+/// If set, the counts can be filtered based on a path prefix.
+pub struct SharedRequestCounters {
+ shared_memory: SharedMemory<MappableRequestCounters>,
+}
+
+impl SharedRequestCounters {
+ /// Create a new shared counter instance.
+ ///
+ /// Opens or creates mmap file and accesses it via shared memory mapping.
+ pub fn open_shared_memory_mapped<P: AsRef<Path>>(path: P, user: User) -> Result<Self, Error> {
+ let path = path.as_ref();
+ if let Some(parent) = path.parent() {
+ let dir_opts = CreateOptions::new()
+ .perm(Mode::from_bits_truncate(0o770))
+ .owner(user.uid)
+ .group(user.gid);
+
+ proxmox_sys::fs::create_path(parent, Some(dir_opts), Some(dir_opts))?;
+ }
+
+ let file_opts = CreateOptions::new()
+ .perm(Mode::from_bits_truncate(0o660))
+ .owner(user.uid)
+ .group(user.gid);
+ let shared_memory = SharedMemory::open(path, file_opts)?;
+ Ok(Self { shared_memory })
+ }
+
+ /// Increment the counter for given method, following the provided memory ordering constraints
+ ///
+ /// Returns the previously stored value.
+ pub fn increment(&self, method: Method, ordering: Ordering) -> u64 {
+ self.shared_memory
+ .data()
+ .counters
+ .increment(method, ordering)
+ }
+
+ /// Load current counter state for given method, following the provided memory ordering constraints
+ pub fn load(&self, method: Method, ordering: Ordering) -> u64 {
+ self.shared_memory.data().counters.load(method, ordering)
+ }
+
+ /// Reset all counters, following the provided memory ordering constraints
+ pub fn reset(&self, ordering: Ordering) {
+ self.shared_memory.data().counters.reset(ordering)
+ }
+}
--
2.47.3
* [PATCH v1 3/6] s3-client: add counters for upload/download traffic
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
2026-02-09 9:15 ` [PATCH v1 1/6] shared-memory: drop check for mmap file being located on tmpfs Christian Ebner
2026-02-09 9:15 ` [PATCH v1 2/6] s3-client: add persistent shared request counters for client Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 4/6] s3-client: account for upload traffic on successful request sending Christian Ebner
` (15 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
In addition to accounting for requests, also allow tracking the
number of bytes uploaded or downloaded via the s3 clients.
The intention is to estimate the shared upload/download bandwidth in
Proxmox Backup Server as well as to easily estimate the total traffic
volume.
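For illustration, the traffic counters could be used directly on a
SharedRequestCounters instance along these lines (path and byte
counts are made up for the sketch):
    use std::sync::atomic::Ordering;
    let counters = SharedRequestCounters::open_shared_memory_mapped(
        "/run/example/s3-counters/endpoint-bucket.shmem", // hypothetical file
        user, // nix::unistd::User owning the mapping
    )?;
    counters.add_upload_traffic(4 * 1024 * 1024, Ordering::SeqCst); // 4 MiB sent
    let uploaded = counters.get_upload_traffic(Ordering::SeqCst);
    let downloaded = counters.get_download_traffic(Ordering::SeqCst);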
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
.../src/shared_request_counters.rs | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
diff --git a/proxmox-s3-client/src/shared_request_counters.rs b/proxmox-s3-client/src/shared_request_counters.rs
index b236490b..b3a43a0b 100644
--- a/proxmox-s3-client/src/shared_request_counters.rs
+++ b/proxmox-s3-client/src/shared_request_counters.rs
@@ -17,11 +17,15 @@ const PROXMOX_SHARED_REQUEST_COUNTERS_1_0: [u8; 8] = [224, 110, 88, 252, 26, 77,
#[repr(C)]
#[derive(Default)]
struct RequestCounters {
+ // request count
delete: AtomicU64,
get: AtomicU64,
head: AtomicU64,
post: AtomicU64,
put: AtomicU64,
+ // traffic in bytes
+ upload: AtomicU64,
+ download: AtomicU64,
}
impl Init for RequestCounters {
@@ -68,6 +72,30 @@ impl RequestCounters {
self.post.store(0, ordering);
self.put.store(0, ordering);
}
+
+ /// Account for new upload traffic.
+ ///
+ /// Returns the previously stored value.
+ pub fn add_upload_traffic(&self, count: u64, ordering: Ordering) -> u64 {
+ self.upload.fetch_add(count, ordering)
+ }
+
+ /// Returns upload traffic count.
+ pub fn get_upload_traffic(&self, ordering: Ordering) -> u64 {
+ self.upload.load(ordering)
+ }
+
+ /// Account for new download traffic.
+ ///
+ /// Returns the previously stored value.
+ pub fn add_download_traffic(&self, count: u64, ordering: Ordering) -> u64 {
+ self.download.fetch_add(count, ordering)
+ }
+
+ /// Returns download traffic count.
+ pub fn get_download_traffic(&self, ordering: Ordering) -> u64 {
+ self.download.load(ordering)
+ }
}
#[repr(C)]
@@ -149,4 +177,40 @@ impl SharedRequestCounters {
pub fn reset(&self, ordering: Ordering) {
self.shared_memory.data().counters.reset(ordering)
}
+
+ /// Account for new upload traffic.
+ ///
+ /// Returns the previously stored value.
+ pub fn add_upload_traffic(&self, count: u64, ordering: Ordering) -> u64 {
+ self.shared_memory
+ .data()
+ .counters
+ .add_upload_traffic(count, ordering)
+ }
+
+ /// Returns upload traffic count.
+ pub fn get_upload_traffic(&self, ordering: Ordering) -> u64 {
+ self.shared_memory
+ .data()
+ .counters
+ .get_upload_traffic(ordering)
+ }
+
+ /// Account for new download traffic.
+ ///
+ /// Returns the previously stored value.
+ pub fn add_download_traffic(&self, count: u64, ordering: Ordering) -> u64 {
+ self.shared_memory
+ .data()
+ .counters
+ .add_download_traffic(count, ordering)
+ }
+
+ /// Returns download traffic count.
+ pub fn get_download_traffic(&self, ordering: Ordering) -> u64 {
+ self.shared_memory
+ .data()
+ .counters
+ .get_download_traffic(ordering)
+ }
}
--
2.47.3
* [PATCH v1 4/6] s3-client: account for upload traffic on successful request sending
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (2 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 3/6] s3-client: add counters for upload/download traffic Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 5/6] s3-client: account for downloaded bytes in incoming response body Christian Ebner
` (14 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
If the request could be sent successfully, account for the uploaded
traffic in the request counters. Do not account the traffic if the
request could not be sent completely.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
proxmox-s3-client/src/client.rs | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/proxmox-s3-client/src/client.rs b/proxmox-s3-client/src/client.rs
index 4a6a702b..63adab29 100644
--- a/proxmox-s3-client/src/client.rs
+++ b/proxmox-s3-client/src/client.rs
@@ -435,6 +435,12 @@ impl S3Client {
Ok(Ok(response)) => {
if let Some(counters) = self.request_counters.as_ref() {
let _prev = counters.increment(parts.method.clone(), Ordering::SeqCst);
+ let transferred: u64 = body_bytes
+ .len()
+ .try_into()
+ .context("failed to account for upload traffic")?;
+ let _prev_uploaded =
+ counters.add_upload_traffic(transferred, Ordering::SeqCst);
}
return Ok(response);
--
2.47.3
* [PATCH v1 5/6] s3-client: account for downloaded bytes in incoming response body
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (3 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 4/6] s3-client: account for upload traffic on successful request sending Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 6/6] pbs-api-types: define api type for s3 request statistics Christian Ebner
` (13 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Keep track of the downloaded contents in get object responses by
accounting for the bytes passing through while collecting the
incoming body.
To do so, the shared request counters are stored via an atomic
reference counter and cloned along to the response reader and a new
`Content` type, which wraps `Incoming` and implements the `Body`
trait, where the accounting happens.
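From a consumer's point of view only the body type changes; a sketch
of reading an object, assuming the client's get_object() hands back
the GetObjectResponse shown in the diff below (client and key setup
omitted):
    use http_body_util::BodyExt;
    if let Some(response) = client.get_object(object_key).await? {
        // collecting the `Content` body polls it frame by frame, so each
        // frame's length gets added to the shared download counter on the way
        let data = response.content.collect().await?.to_bytes();
        println!("downloaded {} bytes", data.len());
    }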
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
proxmox-s3-client/src/client.rs | 16 ++---
proxmox-s3-client/src/response_reader.rs | 75 ++++++++++++++++++++++--
2 files changed, 77 insertions(+), 14 deletions(-)
diff --git a/proxmox-s3-client/src/client.rs b/proxmox-s3-client/src/client.rs
index 63adab29..1cb94698 100644
--- a/proxmox-s3-client/src/client.rs
+++ b/proxmox-s3-client/src/client.rs
@@ -490,7 +490,7 @@ impl S3Client {
.uri(self.build_uri("/", &[])?)
.body(Body::empty())?;
let response = self.send(request, Some(S3_HTTP_REQUEST_TIMEOUT)).await?;
- let response_reader = ResponseReader::new(response);
+ let response_reader = ResponseReader::new(response, self.request_counters.clone());
response_reader.list_buckets_response().await
}
@@ -506,7 +506,7 @@ impl S3Client {
.uri(self.build_uri(&object_key, &[])?)
.body(Body::empty())?;
let response = self.send(request, Some(S3_HTTP_REQUEST_TIMEOUT)).await?;
- let response_reader = ResponseReader::new(response);
+ let response_reader = ResponseReader::new(response, self.request_counters.clone());
response_reader.head_object_response().await
}
@@ -523,7 +523,7 @@ impl S3Client {
.body(Body::empty())?;
let response = self.send(request, Some(S3_HTTP_REQUEST_TIMEOUT)).await?;
- let response_reader = ResponseReader::new(response);
+ let response_reader = ResponseReader::new(response, self.request_counters.clone());
response_reader.get_object_response().await
}
@@ -553,7 +553,7 @@ impl S3Client {
.body(Body::empty())?;
let response = self.send(request, Some(S3_HTTP_REQUEST_TIMEOUT)).await?;
- let response_reader = ResponseReader::new(response);
+ let response_reader = ResponseReader::new(response, self.request_counters.clone());
response_reader.list_objects_v2_response().await
}
@@ -590,7 +590,7 @@ impl S3Client {
let request = request.body(object_data)?;
let response = self.send(request, timeout).await?;
- let response_reader = ResponseReader::new(response);
+ let response_reader = ResponseReader::new(response, self.request_counters.clone());
response_reader.put_object_response().await
}
@@ -604,7 +604,7 @@ impl S3Client {
.body(Body::empty())?;
let response = self.send(request, None).await?;
- let response_reader = ResponseReader::new(response);
+ let response_reader = ResponseReader::new(response, self.request_counters.clone());
response_reader.delete_object_response().await
}
@@ -631,7 +631,7 @@ impl S3Client {
.body(Body::from(body))?;
let response = self.send(request, Some(S3_HTTP_REQUEST_TIMEOUT)).await?;
- let response_reader = ResponseReader::new(response);
+ let response_reader = ResponseReader::new(response, self.request_counters.clone());
response_reader.delete_objects_response().await
}
@@ -662,7 +662,7 @@ impl S3Client {
.body(Body::empty())?;
let response = self.send(request, Some(S3_HTTP_REQUEST_TIMEOUT)).await?;
- let response_reader = ResponseReader::new(response);
+ let response_reader = ResponseReader::new(response, self.request_counters.clone());
response_reader.copy_object_response().await
}
diff --git a/proxmox-s3-client/src/response_reader.rs b/proxmox-s3-client/src/response_reader.rs
index e03b3bb0..5ab82efe 100644
--- a/proxmox-s3-client/src/response_reader.rs
+++ b/proxmox-s3-client/src/response_reader.rs
@@ -1,19 +1,24 @@
+use std::pin::Pin;
use std::str::FromStr;
+use std::sync::atomic::Ordering;
+use std::sync::Arc;
+use std::task::{Context as Ctx, Poll};
use anyhow::{anyhow, bail, Context, Error};
use http_body_util::BodyExt;
-use hyper::body::{Bytes, Incoming};
+use hyper::body::{Body, Bytes, Frame, Incoming, SizeHint};
use hyper::header::HeaderName;
use hyper::http::header;
use hyper::http::StatusCode;
use hyper::{HeaderMap, Response};
use serde::Deserialize;
-use crate::{HttpDate, LastModifiedTimestamp, S3ObjectKey};
+use crate::{HttpDate, LastModifiedTimestamp, S3ObjectKey, SharedRequestCounters};
/// Response reader to check S3 api response status codes and parse response body, if any.
pub(crate) struct ResponseReader {
response: Response<Incoming>,
+ request_counters: Option<Arc<SharedRequestCounters>>,
}
#[derive(Debug)]
@@ -105,7 +110,7 @@ pub struct GetObjectResponse {
/// Last modified http header.
pub last_modified: HttpDate,
/// Object content in http response body.
- pub content: Incoming,
+ pub content: Content,
}
#[derive(Debug)]
@@ -226,10 +231,64 @@ pub struct Bucket {
pub creation_date: LastModifiedTimestamp,
}
+/// Response content stream
+pub struct Content {
+ incoming: Incoming,
+ request_counters: Option<Arc<SharedRequestCounters>>,
+}
+
+impl Body for Content {
+ type Data = Bytes;
+ type Error = hyper::Error;
+
+ fn poll_frame(
+ mut self: Pin<&mut Self>,
+ cx: &mut Ctx<'_>,
+ ) -> Poll<Option<Result<Frame<Self::Data>, Self::Error>>> {
+ let mut this = self.as_mut();
+
+ let incoming = Pin::new(&mut this.incoming).poll_frame(cx);
+
+ if let Some(counter) = self.request_counters.as_ref() {
+ match incoming {
+ Poll::Pending => Poll::Pending,
+ Poll::Ready(f) => {
+ if let Some(Ok(frame)) = f {
+ let bytes = frame
+ .data_ref()
+ .map(|bytes| bytes.len() as u64)
+ .unwrap_or(0);
+ let _ = counter.add_download_traffic(bytes, Ordering::SeqCst);
+ Poll::Ready(Some(Ok(frame)))
+ } else {
+ Poll::Ready(None)
+ }
+ }
+ }
+ } else {
+ return incoming;
+ }
+ }
+
+ fn is_end_stream(&self) -> bool {
+ self.incoming.is_end_stream()
+ }
+
+ fn size_hint(&self) -> SizeHint {
+ self.incoming.size_hint()
+ }
+}
+
impl ResponseReader {
/// Create a new response reader to parse given response.
- pub(crate) fn new(response: Response<Incoming>) -> Self {
- Self { response }
+ pub(crate) fn new(
+ response: Response<Incoming>,
+ request_counters: Option<Arc<SharedRequestCounters>>,
+ ) -> Self {
+ Self {
+ response,
+ request_counters,
+ }
}
/// Read and parse the list object v2 response.
@@ -299,7 +358,11 @@ impl ResponseReader {
/// Returns with error if the object is not accessible, an unexpected status code is encountered
/// or the response headers or body cannot be parsed.
pub(crate) async fn get_object_response(self) -> Result<Option<GetObjectResponse>, Error> {
- let (parts, content) = self.response.into_parts();
+ let (parts, incoming) = self.response.into_parts();
+ let content = Content {
+ incoming,
+ request_counters: self.request_counters.clone(),
+ };
match parts.status {
StatusCode::OK => (),
--
2.47.3
* [PATCH v1 6/6] pbs-api-types: define api type for s3 request statistics
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (4 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 5/6] s3-client: account for downloaded bytes in incoming response body Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 01/11] datastore: collect request statistics for s3 backed datastores Christian Ebner
` (12 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Will be used as part of the status response for PBS datastores in
order to show the S3 request statistics in the UI.
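For illustration, a statistics value as included in the status
response could be built like this (all numbers are made up, the
remaining DataStoreStatus fields are omitted):
    let stats = S3Statistics {
        downloaded: 123_456_789, // bytes
        uploaded: 987_654_321,   // bytes
        get: 1024,
        post: 2,
        put: 512,
        head: 128,
        delete: 16,
    };
    // exposed via the optional "s3-statistics" key of DataStoreStatus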
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-api-types/src/datastore.rs | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index b4e7ccf5..a2ff4a1d 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -1677,6 +1677,10 @@ pub struct GarbageCollectionJobStatus {
type: Counts,
optional: true,
},
+ "s3-statistics": {
+ type: S3Statistics,
+ optional: true,
+ },
},
)]
#[derive(Serialize, Deserialize)]
@@ -1695,6 +1699,30 @@ pub struct DataStoreStatus {
/// Group/Snapshot counts
#[serde(skip_serializing_if = "Option::is_none")]
pub counts: Option<Counts>,
+ /// S3 backend statistics (on datastores with s3 backend only).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub s3_statistics: Option<S3Statistics>,
+}
+
+#[api()]
+#[derive(Serialize, Deserialize, Clone, PartialEq)]
+#[serde(rename_all = "kebab-case")]
+/// Statistics specific to the S3 backend
+pub struct S3Statistics {
+ /// Total downloaded (bytes).
+ pub downloaded: u64,
+ /// Total uploaded (bytes).
+ pub uploaded: u64,
+ /// Get requests
+ pub get: u64,
+ /// Post requests
+ pub post: u64,
+ /// Put requests
+ pub put: u64,
+ /// Head requests
+ pub head: u64,
+ /// Delete requests
+ pub delete: u64,
}
#[api(
--
2.47.3
* [PATCH v1 01/11] datastore: collect request statistics for s3 backed datastores
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (5 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 6/6] pbs-api-types: define api type for s3 request statistics Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 02/11] datastore: expose request counters " Christian Ebner
` (11 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Define and provide the s3 request counter options to the s3 client
invocations so that all requests and traffic are tracked. When no
datastore is involved, account the statistics for the bucket instead.
Bucket or even endpoint-wide statistics might then be gathered in the
future by parsing the corresponding memory mapped files and
collecting their contents.
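With these options, the shared memory files end up under the new base
path roughly as follows (names are illustrative):
    /var/lib/proxmox-backup/s3-statistics/<endpoint-id>-<bucket>-<datastore>.shmem
    /var/lib/proxmox-backup/s3-statistics/<endpoint-id>-<bucket>.shmem  (no datastore)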
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 15 +++++++++++++++
pbs-datastore/src/lib.rs | 2 +-
src/api2/admin/s3.rs | 17 +++++++++++++++--
src/api2/config/s3.rs | 18 +++++++++++++++---
4 files changed, 46 insertions(+), 6 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 7ad3d917d..957e900d6 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -17,6 +17,7 @@ use tracing::{info, warn};
use proxmox_human_byte::HumanByte;
use proxmox_s3_client::{
S3Client, S3ClientConf, S3ClientOptions, S3ObjectKey, S3PathPrefix, S3RateLimiterOptions,
+ S3RequestCounterOptions,
};
use proxmox_schema::ApiType;
@@ -75,6 +76,8 @@ pub const GROUP_NOTES_FILE_NAME: &str = "notes";
pub const GROUP_OWNER_FILE_NAME: &str = "owner";
/// Filename for in-use marker stored on S3 object store backend
pub const S3_DATASTORE_IN_USE_MARKER: &str = ".in-use";
+/// Base directory for storing shared memory mapped s3 request counters
+pub const S3_CLIENT_REQUEST_COUNTER_BASE_PATH: &str = "/var/lib/proxmox-backup/s3-statistics";
const S3_CLIENT_RATE_LIMITER_BASE_PATH: &str = pbs_buildcfg::rundir!("/s3/shmem/tbf");
const NAMESPACE_MARKER_FILENAME: &str = ".namespace";
// s3 put request times out after upload_size / 1 Kib/s, so about 2.3 hours for 8 MiB
@@ -426,6 +429,11 @@ impl DataStore {
user: pbs_config::backup_user()?,
base_path: S3_CLIENT_RATE_LIMITER_BASE_PATH.into(),
};
+ let request_counter_options = S3RequestCounterOptions {
+ id: format!("{s3_client_id}-{bucket}-{}", self.name()),
+ user: pbs_config::backup_user()?,
+ base_path: S3_CLIENT_REQUEST_COUNTER_BASE_PATH.into(),
+ };
let options = S3ClientOptions::from_config(
config.config,
@@ -433,6 +441,7 @@ impl DataStore {
Some(bucket),
self.name().to_owned(),
Some(rate_limiter_options),
+ Some(request_counter_options),
);
let s3_client = S3Client::new(options)?;
DatastoreBackend::S3(Arc::new(s3_client))
@@ -2659,6 +2668,11 @@ impl DataStore {
user: pbs_config::backup_user()?,
base_path: S3_CLIENT_RATE_LIMITER_BASE_PATH.into(),
};
+ let request_counter_options = S3RequestCounterOptions {
+ id: format!("{s3_client_id}-{bucket}-{}", datastore_config.name),
+ user: pbs_config::backup_user()?,
+ base_path: S3_CLIENT_REQUEST_COUNTER_BASE_PATH.into(),
+ };
let options = S3ClientOptions::from_config(
client_config.config,
@@ -2666,6 +2680,7 @@ impl DataStore {
Some(bucket),
datastore_config.name.to_owned(),
Some(rate_limiter_options),
+ Some(request_counter_options),
);
let s3_client = S3Client::new(options)
.context("failed to create s3 client")
diff --git a/pbs-datastore/src/lib.rs b/pbs-datastore/src/lib.rs
index 1f7c54ae8..afe340a65 100644
--- a/pbs-datastore/src/lib.rs
+++ b/pbs-datastore/src/lib.rs
@@ -217,7 +217,7 @@ pub use store_progress::StoreProgress;
mod datastore;
pub use datastore::{
check_backup_owner, ensure_datastore_is_mounted, get_datastore_mount_status, DataStore,
- DatastoreBackend, S3_DATASTORE_IN_USE_MARKER,
+ DatastoreBackend, S3_CLIENT_REQUEST_COUNTER_BASE_PATH, S3_DATASTORE_IN_USE_MARKER,
};
mod hierarchy;
diff --git a/src/api2/admin/s3.rs b/src/api2/admin/s3.rs
index 73388281b..d20cae483 100644
--- a/src/api2/admin/s3.rs
+++ b/src/api2/admin/s3.rs
@@ -6,8 +6,8 @@ use serde_json::Value;
use proxmox_http::Body;
use proxmox_router::{list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap};
use proxmox_s3_client::{
- S3Client, S3ClientConf, S3ClientOptions, S3ObjectKey, S3_BUCKET_NAME_SCHEMA,
- S3_CLIENT_ID_SCHEMA, S3_HTTP_REQUEST_TIMEOUT,
+ S3Client, S3ClientConf, S3ClientOptions, S3ObjectKey, S3RequestCounterOptions,
+ S3_BUCKET_NAME_SCHEMA, S3_CLIENT_ID_SCHEMA, S3_HTTP_REQUEST_TIMEOUT,
};
use proxmox_schema::*;
use proxmox_sortable_macro::sortable;
@@ -15,6 +15,7 @@ use proxmox_sortable_macro::sortable;
use pbs_api_types::PRIV_SYS_MODIFY;
use pbs_config::s3::S3_CFG_TYPE_ID;
+use pbs_datastore::S3_CLIENT_REQUEST_COUNTER_BASE_PATH;
#[api(
input: {
@@ -48,6 +49,17 @@ pub async fn check(
.lookup(S3_CFG_TYPE_ID, &s3_client_id)
.context("config lookup failed")?;
+ let request_counter_id = if let Some(store) = &store_prefix {
+ format!("{s3_client_id}-{bucket}-{store}")
+ } else {
+ format!("{s3_client_id}-{bucket}")
+ };
+ let request_counter_options = S3RequestCounterOptions {
+ id: request_counter_id,
+ user: pbs_config::backup_user()?,
+ base_path: S3_CLIENT_REQUEST_COUNTER_BASE_PATH.into(),
+ };
+
let store_prefix = store_prefix.unwrap_or_default();
let options = S3ClientOptions::from_config(
config.config,
@@ -55,6 +67,7 @@ pub async fn check(
Some(bucket),
store_prefix,
None,
+ Some(request_counter_options),
);
let test_object_key =
diff --git a/src/api2/config/s3.rs b/src/api2/config/s3.rs
index 27b3c4cc2..046e247e4 100644
--- a/src/api2/config/s3.rs
+++ b/src/api2/config/s3.rs
@@ -6,7 +6,7 @@ use serde_json::Value;
use proxmox_router::{http_bail, Permission, Router, RpcEnvironment};
use proxmox_s3_client::{
S3BucketListItem, S3Client, S3ClientConf, S3ClientConfig, S3ClientConfigUpdater,
- S3ClientConfigWithoutSecret, S3ClientOptions, S3_CLIENT_ID_SCHEMA,
+ S3ClientConfigWithoutSecret, S3ClientOptions, S3RequestCounterOptions, S3_CLIENT_ID_SCHEMA,
};
use proxmox_schema::{api, param_bail, ApiType};
@@ -16,6 +16,7 @@ use pbs_api_types::{
};
use pbs_config::s3::{self, S3_CFG_TYPE_ID};
use pbs_config::CachedUserInfo;
+use pbs_datastore::S3_CLIENT_REQUEST_COUNTER_BASE_PATH;
#[api(
input: {
@@ -350,8 +351,19 @@ pub async fn list_buckets(
.context("config lookup failed")?;
let empty_prefix = String::new();
- let options =
- S3ClientOptions::from_config(config.config, config.secret_key, None, empty_prefix, None);
+ let request_counter_options = S3RequestCounterOptions {
+ id,
+ user: pbs_config::backup_user()?,
+ base_path: S3_CLIENT_REQUEST_COUNTER_BASE_PATH.into(),
+ };
+ let options = S3ClientOptions::from_config(
+ config.config,
+ config.secret_key,
+ None,
+ empty_prefix,
+ None,
+ Some(request_counter_options),
+ );
let client = S3Client::new(options).context("client creation failed")?;
let list_buckets_response = client
.list_buckets()
--
2.47.3
* [PATCH v1 02/11] datastore: expose request counters for s3 backed datastores
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (6 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 01/11] datastore: collect request statistics for s3 backed datastores Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 03/11] api: s3: add endpoint to reset s3 request counters Christian Ebner
` (10 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Allows introspecting the current state of the request counters
related to a datastore, with the intention to show the request
counter statistics in the ui and to use them for soft limits and
warnings via the notification system in the future.
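A short sketch of how a consumer could read the exposed statistics
from an existing DataStore handle (output formatting is purely
illustrative):
    if let Some(stats) = datastore.s3_statistics() {
        println!(
            "GET: {}, PUT: {}, uploaded: {} B, downloaded: {} B",
            stats.get, stats.put, stats.uploaded, stats.downloaded
        );
    }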
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 48 ++++++++++++++++++++++++++++++----
src/api2/admin/datastore.rs | 4 +++
2 files changed, 47 insertions(+), 5 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 957e900d6..bbf4d30f4 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -3,12 +3,14 @@ use std::io::{self, Write};
use std::os::unix::ffi::OsStrExt;
use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
+use std::sync::atomic::Ordering;
use std::sync::{Arc, LazyLock, Mutex};
use std::time::{Duration, SystemTime};
use anyhow::{bail, format_err, Context, Error};
use http_body_util::BodyExt;
use hyper::body::Bytes;
+use hyper::Method;
use nix::unistd::{unlinkat, UnlinkatFlags};
use pbs_tools::lru_cache::LruCache;
use tokio::io::AsyncWriteExt;
@@ -17,7 +19,7 @@ use tracing::{info, warn};
use proxmox_human_byte::HumanByte;
use proxmox_s3_client::{
S3Client, S3ClientConf, S3ClientOptions, S3ObjectKey, S3PathPrefix, S3RateLimiterOptions,
- S3RequestCounterOptions,
+ S3RequestCounterOptions, SharedRequestCounters,
};
use proxmox_schema::ApiType;
@@ -32,7 +34,7 @@ use pbs_api_types::{
ArchiveType, Authid, BackupGroupDeleteStats, BackupNamespace, BackupType, ChunkOrder,
DataStoreConfig, DatastoreBackendConfig, DatastoreBackendType, DatastoreFSyncLevel,
DatastoreTuning, GarbageCollectionCacheStats, GarbageCollectionStatus, MaintenanceMode,
- MaintenanceType, Operation, UPID,
+ MaintenanceType, Operation, S3Statistics, UPID,
};
use pbs_config::s3::S3_CFG_TYPE_ID;
use pbs_config::{BackupLockGuard, ConfigVersionCache};
@@ -177,6 +179,7 @@ pub struct DataStoreImpl {
/// datastore.cfg cache generation number at lookup time, used to
/// invalidate this cached `DataStoreImpl`
config_generation: Option<usize>,
+ request_counters: Option<Arc<SharedRequestCounters>>,
}
impl DataStoreImpl {
@@ -194,6 +197,7 @@ impl DataStoreImpl {
lru_store_caching: None,
thread_settings: Default::default(),
config_generation: None,
+ request_counters: None,
})
}
}
@@ -451,6 +455,22 @@ impl DataStore {
Ok(backend_type)
}
+ /// Get the s3 statistics for this datastore
+ pub fn s3_statistics(&self) -> Option<S3Statistics> {
+ self.inner
+ .request_counters
+ .as_ref()
+ .map(|counters| S3Statistics {
+ get: counters.load(Method::GET, Ordering::SeqCst),
+ put: counters.load(Method::PUT, Ordering::SeqCst),
+ post: counters.load(Method::POST, Ordering::SeqCst),
+ delete: counters.load(Method::DELETE, Ordering::SeqCst),
+ head: counters.load(Method::HEAD, Ordering::SeqCst),
+ uploaded: counters.get_upload_traffic(Ordering::SeqCst),
+ downloaded: counters.get_download_traffic(Ordering::SeqCst),
+ })
+ }
+
pub fn cache(&self) -> Option<&LocalDatastoreLruCache> {
self.inner.lru_store_caching.as_ref()
}
@@ -654,7 +674,8 @@ impl DataStore {
.parse_property_string(config.backend.as_deref().unwrap_or(""))?,
)?;
- let lru_store_caching = if DatastoreBackendType::S3 == backend_config.ty.unwrap_or_default()
+ let (lru_store_caching, request_counters) = if DatastoreBackendType::S3
+ == backend_config.ty.unwrap_or_default()
{
let mut cache_capacity = 0;
if let Ok(fs_info) = proxmox_sys::fs::fs_info(&chunk_store.base_path()) {
@@ -676,9 +697,25 @@ impl DataStore {
);
let cache = LocalDatastoreLruCache::new(cache_capacity, chunk_store.clone());
- Some(cache)
+
+ let path = format!(
+ "{}/{}-{}-{}.shmem",
+ S3_CLIENT_REQUEST_COUNTER_BASE_PATH,
+ backend_config
+ .client
+ .as_ref()
+ .ok_or(format_err!("missing s3 endpoint id"))?,
+ backend_config
+ .bucket
+ .as_ref()
+ .ok_or(format_err!("missing s3 bucket"))?,
+ config.name,
+ );
+ let request_counters =
+ SharedRequestCounters::open_shared_memory_mapped(path, pbs_config::backup_user()?)?;
+ (Some(cache), Some(Arc::new(request_counters)))
} else {
- None
+ (None, None)
};
let thread_settings = DatastoreThreadSettings::new(
@@ -697,6 +734,7 @@ impl DataStore {
lru_store_caching,
thread_settings,
config_generation: generation,
+ request_counters,
})
}
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 88ad5d53b..d71112475 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -622,6 +622,8 @@ pub async fn status(
(None, None)
};
+ let s3_statistics = datastore.s3_statistics();
+
Ok(if store_stats {
let storage = crate::tools::fs::fs_info(datastore.base_path()).await?;
DataStoreStatus {
@@ -630,6 +632,7 @@ pub async fn status(
avail: storage.available,
gc_status,
counts,
+ s3_statistics,
}
} else {
DataStoreStatus {
@@ -638,6 +641,7 @@ pub async fn status(
avail: 0,
gc_status,
counts,
+ s3_statistics,
}
})
}
--
2.47.3
* [PATCH v1 03/11] api: s3: add endpoint to reset s3 request counters
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (7 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 02/11] datastore: expose request counters " Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 04/11] bin: s3: expose request counter reset method as cli command Christian Ebner
` (9 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Allows resetting the current counter states. This can be done
manually or possibly by a scheduled task in the future.
The intent is to start fresh in case of e.g. monthly limit warnings.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/admin/s3.rs | 71 ++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 68 insertions(+), 3 deletions(-)
diff --git a/src/api2/admin/s3.rs b/src/api2/admin/s3.rs
index d20cae483..a08215165 100644
--- a/src/api2/admin/s3.rs
+++ b/src/api2/admin/s3.rs
@@ -1,13 +1,16 @@
//! S3 bucket operations
-use anyhow::{Context, Error};
+use std::path::Path;
+use std::sync::atomic::Ordering;
+
+use anyhow::{bail, Context, Error};
use serde_json::Value;
use proxmox_http::Body;
use proxmox_router::{list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap};
use proxmox_s3_client::{
S3Client, S3ClientConf, S3ClientOptions, S3ObjectKey, S3RequestCounterOptions,
- S3_BUCKET_NAME_SCHEMA, S3_CLIENT_ID_SCHEMA, S3_HTTP_REQUEST_TIMEOUT,
+ SharedRequestCounters, S3_BUCKET_NAME_SCHEMA, S3_CLIENT_ID_SCHEMA, S3_HTTP_REQUEST_TIMEOUT,
};
use proxmox_schema::*;
use proxmox_sortable_macro::sortable;
@@ -95,8 +98,70 @@ pub async fn check(
Ok(Value::Null)
}
+#[api(
+ input: {
+ properties: {
+ "s3-endpoint-id": {
+ schema: S3_CLIENT_ID_SCHEMA,
+ },
+ bucket: {
+ schema: S3_BUCKET_NAME_SCHEMA,
+ },
+ "store-prefix": {
+ type: String,
+ description: "Store prefix within bucket for S3 object keys (commonly datastore name)",
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Reset the S3 request counters for matching endpoint, bucket or datastore (if prefix is given).
+pub async fn reset_counters(
+ s3_endpoint_id: String,
+ bucket: String,
+ store_prefix: Option<String>,
+ _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let (config, _digest) = pbs_config::s3::config()?;
+ // only check if the provided endpoint id exists
+ let _config: S3ClientConf = config
+ .lookup(S3_CFG_TYPE_ID, &s3_endpoint_id)
+ .context("config lookup failed")?;
+
+ let request_counter_id = if let Some(store) = &store_prefix {
+ format!("{s3_endpoint_id}-{bucket}-{store}")
+ } else {
+ format!("{s3_endpoint_id}-{bucket}")
+ };
+
+ let path = format!("{S3_CLIENT_REQUEST_COUNTER_BASE_PATH}/{request_counter_id}.shmem");
+ let path = Path::new(&path);
+ // Fail early to not create the file when opening shared memory map below. Accept that
+ // this can race, with a new counter file being created in the mean time, but that is
+ // not an issue.
+ if !path.is_file() {
+ bail!("Cannot find s3 counters file '{path:?}'");
+ }
+
+ let user = pbs_config::backup_user()?;
+ let request_counters = SharedRequestCounters::open_shared_memory_mapped(path, user)
+ .context("failed to open shared request counters")?;
+ request_counters.reset(Ordering::SeqCst);
+
+ Ok(())
+}
+
#[sortable]
-const S3_OPERATION_SUBDIRS: SubdirMap = &[("check", &Router::new().put(&API_METHOD_CHECK))];
+const S3_OPERATION_SUBDIRS: SubdirMap = &[
+ ("check", &Router::new().put(&API_METHOD_CHECK)),
+ (
+ "reset-counters",
+ &Router::new().put(&API_METHOD_RESET_COUNTERS),
+ ),
+];
const S3_OPERATION_ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(S3_OPERATION_SUBDIRS))
--
2.47.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH v1 04/11] bin: s3: expose request counter reset method as cli command
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (8 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 03/11] api: s3: add endpoint to reset s3 request counters Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 05/11] datastore: add helper method to get datastore backend type Christian Ebner
` (8 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Allows resetting the s3 request counters from the cli by calling the
corresponding api method. Place it as a subcommand of `s3 endpoint`,
since resetting the counters should only be allowed for existing
endpoints.
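Assuming the subcommand indeed lands under `s3 endpoint` as described
above, an invocation could look like this (identifiers are
placeholders):
    proxmox-backup-manager s3 endpoint reset-counters <s3-endpoint-id> <bucket> \
        --store-prefix <datastore-name>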
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/bin/proxmox_backup_manager/s3.rs | 33 ++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/src/bin/proxmox_backup_manager/s3.rs b/src/bin/proxmox_backup_manager/s3.rs
index a94371e09..64154466f 100644
--- a/src/bin/proxmox_backup_manager/s3.rs
+++ b/src/bin/proxmox_backup_manager/s3.rs
@@ -86,6 +86,33 @@ fn list_s3_clients(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Valu
Ok(Value::Null)
}
+#[api(
+ input: {
+ properties: {
+ "s3-endpoint-id": {
+ schema: S3_CLIENT_ID_SCHEMA,
+ },
+ bucket: {
+ schema: S3_BUCKET_NAME_SCHEMA,
+ },
+ "store-prefix": {
+ type: String,
+ description: "Store prefix within bucket for S3 object keys (commonly datastore name)",
+ optional: true,
+ },
+ },
+ },
+)]
+/// Reset the S3 request counters for matching endpoint, bucket or datastore (if prefix is given).
+async fn reset_counters(
+ s3_endpoint_id: String,
+ bucket: String,
+ store_prefix: Option<String>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ api2::admin::s3::reset_counters(s3_endpoint_id, bucket, store_prefix, rpcenv).await
+}
+
pub fn s3_commands() -> CommandLineInterface {
let endpoint_cmd_def = CliCommandMap::new()
.insert("list", CliCommand::new(&API_METHOD_LIST_S3_CLIENTS))
@@ -111,6 +138,12 @@ pub fn s3_commands() -> CommandLineInterface {
CliCommand::new(&API_METHOD_LIST_BUCKETS)
.arg_param(&["s3-endpoint-id"])
.completion_cb("s3-endpoint-id", pbs_config::s3::complete_s3_client_id),
+ )
+ .insert(
+ "reset-counters",
+ CliCommand::new(&API_METHOD_RESET_COUNTERS)
+ .arg_param(&["s3-endpoint-id", "bucket"])
+ .completion_cb("s3-endpoint-id", pbs_config::s3::complete_s3_client_id),
);
let cmd_def = CliCommandMap::new()
--
2.47.3
* [PATCH v1 05/11] datastore: add helper method to get datastore backend type
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (9 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 04/11] bin: s3: expose request counter reset method as cli command Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 06/11] ui: improve variable name indirectly fixing typo Christian Ebner
` (7 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Allows checking the type of the datastore backend without
instantiating the backend itself, as DataStore::backend() does.
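A minimal sketch of how a caller could branch on the new helper
(match arms are illustrative):
    match datastore.backend_type() {
        DatastoreBackendType::S3 => { /* s3 specific handling */ }
        _ => { /* filesystem backed datastore */ }
    }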
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index bbf4d30f4..91205e6c3 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -455,6 +455,11 @@ impl DataStore {
Ok(backend_type)
}
+ /// Get the backend type for this datastore based on its configuration
+ pub fn backend_type(&self) -> DatastoreBackendType {
+ self.inner.backend_config.ty.unwrap_or_default()
+ }
+
/// Get the s3 statistics for this datastore
pub fn s3_statistics(&self) -> Option<S3Statistics> {
self.inner
--
2.47.3
* [PATCH v1 06/11] ui: improve variable name indirectly fixing typo
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (10 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 05/11] datastore: add helper method to get datastore backend type Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 07/11] ui: datastore summary: move store to be part of summary panel Christian Ebner
` (6 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
`lastRequestWasFailue` not only has a typo but is also not that easy
to read, so use `lastRequestFailed` instead.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/datastore/Summary.js | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/www/datastore/Summary.js b/www/datastore/Summary.js
index cdb34aea3..cb412630d 100644
--- a/www/datastore/Summary.js
+++ b/www/datastore/Summary.js
@@ -394,12 +394,12 @@ Ext.define('PBS.DataStoreSummary', {
interval: 1000,
});
- let lastRequestWasFailue = false;
+ let lastRequestFailed = false;
me.mon(me.statusStore, 'load', (s, records, success) => {
let mountBtn = me.lookupReferenceHolder().lookupReference('mountButton');
let unmountBtn = me.lookupReferenceHolder().lookupReference('unmountButton');
if (!success) {
- lastRequestWasFailue = true;
+ lastRequestFailed = true;
me.statusStore.stopUpdate();
me.rrdstore.stopUpdate();
@@ -430,13 +430,13 @@ Ext.define('PBS.DataStoreSummary', {
});
} else {
// only trigger on edges, else we couple our interval to the info one
- if (lastRequestWasFailue) {
+ if (lastRequestFailed) {
me.down('pbsDataStoreInfo').fireEvent('activate');
me.rrdstore.startUpdate();
}
unmountBtn.setDisabled(false);
mountBtn.setDisabled(true);
- lastRequestWasFailue = false;
+ lastRequestFailed = false;
}
});
--
2.47.3
* [PATCH v1 07/11] ui: datastore summary: move store to be part of summary panel
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (11 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 06/11] ui: improve variable name indirectly fixing typo Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 08/11] ui: expose s3 request counter statistics in the datastore summary Christian Ebner
` (5 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Move the store from the datastore info panel to the parent datastore
summary panel and refactor the store load logic. This way, the same
view model can be reused by all child items, which is required to
show the s3 statistics (if present for the datastore) without
performing additional api requests.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/datastore/Summary.js | 175 +++++++++++++++++----------------------
1 file changed, 74 insertions(+), 101 deletions(-)
diff --git a/www/datastore/Summary.js b/www/datastore/Summary.js
index cb412630d..4dd7dc4ce 100644
--- a/www/datastore/Summary.js
+++ b/www/datastore/Summary.js
@@ -48,100 +48,6 @@ Ext.define('PBS.DataStoreInfo', {
extend: 'Ext.panel.Panel',
alias: 'widget.pbsDataStoreInfo',
- viewModel: {
- data: {
- countstext: '',
- usage: {},
- stillbad: 0,
- mountpoint: '',
- },
- },
-
- controller: {
- xclass: 'Ext.app.ViewController',
-
- onLoad: function (store, data, success) {
- let me = this;
- if (!success) {
- Proxmox.Utils.API2Request({
- url: `/config/datastore/${me.view.datastore}`,
- success: function (response) {
- let maintenanceString = response.result.data['maintenance-mode'];
- let removable = !!response.result.data['backing-device'];
- if (!maintenanceString && !removable) {
- me.view.el.mask(gettext('Datastore is not available'));
- return;
- }
-
- let [_type, msg] = PBS.Utils.parseMaintenanceMode(maintenanceString);
- let isUnplugged = !maintenanceString && removable;
- let maskMessage = isUnplugged
- ? gettext('Datastore is not mounted')
- : `${gettext('Datastore is in maintenance mode')}${msg ? ': ' + msg : ''}`;
-
- let maskIcon = isUnplugged
- ? 'fa pbs-unplugged-mask'
- : 'fa pbs-maintenance-mask';
- me.view.el.mask(maskMessage, maskIcon);
- },
- });
- return;
- }
- me.view.el.unmask();
-
- let vm = me.getViewModel();
-
- let counts = store.getById('counts').data.value;
- let used = store.getById('used').data.value;
- let total = store.getById('avail').data.value + used;
-
- let usage = Proxmox.Utils.render_size_usage(used, total, true);
- vm.set('usagetext', usage);
- vm.set('usage', used / total);
-
- let countstext = function (count) {
- count = count || {};
- return `${count.groups || 0} ${gettext('Groups')}, ${count.snapshots || 0} ${gettext('Snapshots')}`;
- };
- let gcstatus = store.getById('gc-status')?.data.value;
- if (gcstatus) {
- let dedup = PBS.Utils.calculate_dedup_factor(gcstatus);
- vm.set('deduplication', dedup.toFixed(2));
- vm.set('stillbad', gcstatus['still-bad']);
- }
-
- vm.set('ctcount', countstext(counts.ct));
- vm.set('vmcount', countstext(counts.vm));
- vm.set('hostcount', countstext(counts.host));
- },
-
- startStore: function () {
- this.store.startUpdate();
- },
- stopStore: function () {
- this.store.stopUpdate();
- },
- doSingleStoreLoad: function () {
- this.store.load();
- },
-
- init: function (view) {
- let me = this;
- let datastore = encodeURIComponent(view.datastore);
- me.store = Ext.create('Proxmox.data.ObjectStore', {
- interval: 5 * 1000,
- url: `/api2/json/admin/datastore/${datastore}/status/?verbose=true`,
- });
- me.store.on('load', me.onLoad, me);
- },
- },
-
- listeners: {
- activate: 'startStore',
- beforedestroy: 'stopStore',
- deactivate: 'stopStore',
- },
-
defaults: {
xtype: 'pmxInfoWidget',
},
@@ -237,6 +143,15 @@ Ext.define('PBS.DataStoreSummary', {
padding: 5,
},
+ viewModel: {
+ data: {
+ countstext: '',
+ usage: {},
+ stillbad: 0,
+ mountpoint: '',
+ },
+ },
+
tbar: [
{
xtype: 'button',
@@ -365,16 +280,19 @@ Ext.define('PBS.DataStoreSummary', {
listeners: {
activate: function () {
this.rrdstore.startUpdate();
+ this.infoStore.startUpdate();
},
afterrender: function () {
this.statusStore.startUpdate();
},
deactivate: function () {
this.rrdstore.stopUpdate();
+ this.infoStore.stopUpdate();
},
destroy: function () {
this.rrdstore.stopUpdate();
this.statusStore.stopUpdate();
+ this.infoStore.stopUpdate();
},
resize: function (panel) {
Proxmox.Utils.updateColumns(panel);
@@ -394,6 +312,11 @@ Ext.define('PBS.DataStoreSummary', {
interval: 1000,
});
+ me.infoStore = Ext.create('Proxmox.data.ObjectStore', {
+ interval: 5 * 1000,
+ url: `/api2/json/admin/datastore/${me.datastore}/status/?verbose=true`,
+ });
+
let lastRequestFailed = false;
me.mon(me.statusStore, 'load', (s, records, success) => {
let mountBtn = me.lookupReferenceHolder().lookupReference('mountButton');
@@ -403,10 +326,8 @@ Ext.define('PBS.DataStoreSummary', {
me.statusStore.stopUpdate();
me.rrdstore.stopUpdate();
-
- let infoPanelController = me.down('pbsDataStoreInfo').getController();
- infoPanelController.stopStore();
- infoPanelController.doSingleStoreLoad();
+ me.infoStore.stopUpdate();
+ me.infoStore.load();
Proxmox.Utils.API2Request({
url: `/config/datastore/${me.datastore}`,
@@ -431,7 +352,7 @@ Ext.define('PBS.DataStoreSummary', {
} else {
// only trigger on edges, else we couple our interval to the info one
if (lastRequestFailed) {
- me.down('pbsDataStoreInfo').fireEvent('activate');
+ me.infoStore.startUpdate();
me.rrdstore.startUpdate();
}
unmountBtn.setDisabled(false);
@@ -486,6 +407,60 @@ Ext.define('PBS.DataStoreSummary', {
},
});
+ me.mon(me.infoStore, 'load', (store, records, success) => {
+ if (!success) {
+ Proxmox.Utils.API2Request({
+ url: `/config/datastore/${me.datastore}`,
+ success: function (response) {
+ let maintenanceString = response.result.data['maintenance-mode'];
+ let removable = !!response.result.data['backing-device'];
+ if (!maintenanceString && !removable) {
+ me.down('pbsDataStoreInfo').mask(gettext('Datastore is not available'));
+ return;
+ }
+
+ let [_type, msg] = PBS.Utils.parseMaintenanceMode(maintenanceString);
+ let isUnplugged = !maintenanceString && removable;
+ let maskMessage = isUnplugged
+ ? gettext('Datastore is not mounted')
+ : `${gettext('Datastore is in maintenance mode')}${msg ? ': ' + msg : ''}`;
+
+ let maskIcon = isUnplugged
+ ? 'fa pbs-unplugged-mask'
+ : 'fa pbs-maintenance-mask';
+ me.down('pbsDataStoreInfo').mask(maskMessage, maskIcon);
+ },
+ });
+ return;
+ }
+ me.down('pbsDataStoreInfo').unmask();
+
+ let vm = me.getViewModel();
+
+ let counts = store.getById('counts').data.value;
+ let used = store.getById('used').data.value;
+ let total = store.getById('avail').data.value + used;
+
+ let usage = Proxmox.Utils.render_size_usage(used, total, true);
+ vm.set('usagetext', usage);
+ vm.set('usage', used / total);
+
+ let countstext = function (count) {
+ count = count || {};
+ return `${count.groups || 0} ${gettext('Groups')}, ${count.snapshots || 0} ${gettext('Snapshots')}`;
+ };
+ let gcstatus = store.getById('gc-status')?.data.value;
+ if (gcstatus) {
+ let dedup = PBS.Utils.calculate_dedup_factor(gcstatus);
+ vm.set('deduplication', dedup.toFixed(2));
+ vm.set('stillbad', gcstatus['still-bad']);
+ }
+
+ vm.set('ctcount', countstext(counts.ct));
+ vm.set('vmcount', countstext(counts.vm));
+ vm.set('hostcount', countstext(counts.host));
+ });
+
me.mon(
me.rrdstore,
'load',
@@ -500,7 +475,5 @@ Ext.define('PBS.DataStoreSummary', {
me.query('proxmoxRRDChart').forEach((chart) => {
chart.setStore(me.rrdstore);
});
-
- me.down('pbsDataStoreInfo').relayEvents(me, ['activate', 'deactivate']);
},
});
--
2.47.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH v1 08/11] ui: expose s3 request counter statistics in the datastore summary
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (12 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 07/11] ui: datastore summary: move store to be part of summary panel Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 09/11] metrics: collect s3 datastore statistics as rrd metrics Christian Ebner
` (4 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Show the current s3 request counter statistics for the datastore
backend. Use a dedicated info widget, shown only for s3 datastores.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/datastore/Summary.js | 110 +++++++++++++++++++++++++++++++++++++++
1 file changed, 110 insertions(+)
diff --git a/www/datastore/Summary.js b/www/datastore/Summary.js
index 4dd7dc4ce..f73827747 100644
--- a/www/datastore/Summary.js
+++ b/www/datastore/Summary.js
@@ -129,6 +129,95 @@ Ext.define('PBS.DataStoreInfo', {
],
});
+Ext.define('PBS.DataStoreS3Stats', {
+ extend: 'Ext.panel.Panel',
+ alias: 'widget.pbsDataStoreS3Stats',
+
+ defaults: {
+ xtype: 'pmxInfoWidget',
+ },
+
+ bodyPadding: 20,
+
+ items: [
+ {
+ xtype: 'box',
+ html: `<b>${gettext('S3 traffic:')}</b>`,
+ padding: '10 0 5 0',
+ },
+ {
+ iconCls: 'fa fa-fw fa-arrow-up',
+ title: gettext('Data uploaded'),
+ printBar: false,
+ bind: {
+ data: {
+ text: '{uploaded}',
+ },
+ },
+ },
+ {
+ iconCls: 'fa fa-fw fa-arrow-down',
+ title: gettext('Data downloaded'),
+ printBar: false,
+ bind: {
+ data: {
+ text: '{downloaded}',
+ },
+ },
+ },
+ {
+ xtype: 'box',
+ html: `<b>${gettext('S3 requests:')}</b>`,
+ padding: '10 0 5 0',
+ },
+ {
+ title: gettext('GET'),
+ printBar: false,
+ bind: {
+ data: {
+ text: '{get}',
+ },
+ },
+ },
+ {
+ title: gettext('PUT'),
+ printBar: false,
+ bind: {
+ data: {
+ text: '{put}',
+ },
+ },
+ },
+ {
+ title: gettext('POST'),
+ printBar: false,
+ bind: {
+ data: {
+ text: '{post}',
+ },
+ },
+ },
+ {
+ title: gettext('HEAD'),
+ printBar: false,
+ bind: {
+ data: {
+ text: '{head}',
+ },
+ },
+ },
+ {
+ title: gettext('DELETE'),
+ printBar: false,
+ bind: {
+ data: {
+ text: '{delete}',
+ },
+ },
+ },
+ ],
+});
+
Ext.define('PBS.DataStoreSummary', {
extend: 'Ext.panel.Panel',
alias: 'widget.pbsDataStoreSummary',
@@ -149,6 +238,7 @@ Ext.define('PBS.DataStoreSummary', {
usage: {},
stillbad: 0,
mountpoint: '',
+ showS3Stats: false,
},
},
@@ -243,10 +333,19 @@ Ext.define('PBS.DataStoreSummary', {
{
xtype: 'pbsDataStoreNotes',
flex: 1,
+ padding: '0 10 0 0',
cbind: {
datastore: '{datastore}',
},
},
+ {
+ xtype: 'pbsDataStoreS3Stats',
+ flex: 1,
+ title: gettext('S3 statistics'),
+ bind: {
+ visible: '{showS3Stats}',
+ },
+ },
],
},
{
@@ -455,6 +554,17 @@ Ext.define('PBS.DataStoreSummary', {
vm.set('deduplication', dedup.toFixed(2));
vm.set('stillbad', gcstatus['still-bad']);
}
+ let s3Stats = store.getById('s3-statistics')?.data.value;
+ if (s3Stats) {
+ vm.set('uploaded', Proxmox.Utils.format_size(s3Stats.uploaded));
+ vm.set('downloaded', Proxmox.Utils.format_size(s3Stats.downloaded));
+ vm.set('get', s3Stats.get);
+ vm.set('post', s3Stats.post);
+ vm.set('delete', s3Stats.delete);
+ vm.set('head', s3Stats.head);
+ vm.set('put', s3Stats.put);
+ vm.set('showS3Stats', true);
+ }
vm.set('ctcount', countstext(counts.ct));
vm.set('vmcount', countstext(counts.vm));
--
2.47.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH v1 09/11] metrics: collect s3 datastore statistics as rrd metrics
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (13 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 08/11] ui: expose s3 request counter statistics in the datastore summary Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-11 16:29 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 10/11] api: admin: expose s3 statistics in datastore rrd data Christian Ebner
` (3 subsequent siblings)
18 siblings, 1 reply; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
For datastores with s3 backend, load the shared s3 request counters
via the mmapped file and include them as rrd metrics. Combine the
pre-existing DiskStat with an optional S3Statistics into a common
DatastoreStats struct as a dedicated type for the internal method
interfaces.
Request counters are collected per method; total upload and download
traffic is recorded both as gauge values and as derived values to
obtain averaged rate statistics.
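For illustration, a minimal sketch (not the actual rrd code, names are
made up) of how such an averaged rate falls out of two successive
samples of a monotonically increasing counter:

    fn derived_rate(prev_total: u64, cur_total: u64, elapsed_secs: f64) -> f64 {
        // counters only grow (until an explicit reset), so the delta over the
        // collection interval divided by the interval length gives bytes/s
        let delta = cur_total.saturating_sub(prev_total) as f64;
        delta / elapsed_secs
    }

    fn main() {
        // e.g. two metric collection runs, 10 seconds apart
        let uploaded_before = 1_048_576u64;
        let uploaded_after = 3_145_728u64;
        let rate = derived_rate(uploaded_before, uploaded_after, 10.0);
        assert_eq!(rate, 209_715.2); // ~205 KiB/s averaged over the interval
    }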
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/metric_collection/metric_server.rs | 8 +--
src/server/metric_collection/mod.rs | 65 +++++++++++++++++--
src/server/metric_collection/pull_metrics.rs | 18 +++--
src/server/metric_collection/rrd.rs | 34 ++++++++--
4 files changed, 108 insertions(+), 17 deletions(-)
diff --git a/src/server/metric_collection/metric_server.rs b/src/server/metric_collection/metric_server.rs
index ba20628a0..4584fc14c 100644
--- a/src/server/metric_collection/metric_server.rs
+++ b/src/server/metric_collection/metric_server.rs
@@ -5,10 +5,10 @@ use serde_json::{json, Value};
use proxmox_metrics::MetricsData;
-use super::{DiskStat, HostStats};
+use super::{DatastoreStats, DiskStat, HostStats};
pub async fn send_data_to_metric_servers(
- stats: Arc<(HostStats, DiskStat, Vec<DiskStat>)>,
+ stats: Arc<(HostStats, DiskStat, Vec<DatastoreStats>)>,
) -> Result<(), Error> {
let (config, _digest) = pbs_config::metrics::config()?;
let channel_list = get_metric_server_connections(config)?;
@@ -66,10 +66,10 @@ pub async fn send_data_to_metric_servers(
for datastore in stats.2.iter() {
values.push(Arc::new(
- MetricsData::new("blockstat", ctime, datastore.to_value())?
+ MetricsData::new("blockstat", ctime, datastore.disk.to_value())?
.tag("object", "host")
.tag("host", nodename)
- .tag("datastore", datastore.name.clone()),
+ .tag("datastore", datastore.disk.name.clone()),
));
}
diff --git a/src/server/metric_collection/mod.rs b/src/server/metric_collection/mod.rs
index 9b62cbb42..db5830ad1 100644
--- a/src/server/metric_collection/mod.rs
+++ b/src/server/metric_collection/mod.rs
@@ -1,3 +1,4 @@
+use std::sync::atomic::Ordering;
use std::{
collections::HashMap,
path::Path,
@@ -6,11 +7,17 @@ use std::{
time::{Duration, Instant},
};
-use anyhow::Error;
+use anyhow::{format_err, Error};
+use hyper::Method;
use tokio::join;
-use pbs_api_types::{DataStoreConfig, Operation};
+use pbs_api_types::{
+ DataStoreConfig, DatastoreBackendConfig, DatastoreBackendType, Operation, S3Statistics,
+};
+use proxmox_lang::try_block;
use proxmox_network_api::{get_network_interfaces, IpLink};
+use proxmox_s3_client::SharedRequestCounters;
+use proxmox_schema::ApiType;
use proxmox_sys::{
fs::FileSystemInformation,
linux::procfs::{Loadavg, ProcFsMemInfo, ProcFsNetDev, ProcFsStat},
@@ -113,6 +120,11 @@ struct DiskStat {
dev: Option<BlockDevStat>,
}
+struct DatastoreStats {
+ disk: DiskStat,
+ s3_stats: Option<S3Statistics>,
+}
+
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, serde::Serialize, serde::Deserialize)]
enum NetdevType {
Physical,
@@ -219,7 +231,7 @@ fn collect_host_stats_sync() -> HostStats {
}
}
-fn collect_disk_stats_sync() -> (DiskStat, Vec<DiskStat>) {
+fn collect_disk_stats_sync() -> (DiskStat, Vec<DatastoreStats>) {
let disk_manager = DiskManage::new();
let root = gather_disk_stats(disk_manager.clone(), Path::new("/"), "host");
@@ -243,11 +255,54 @@ fn collect_disk_stats_sync() -> (DiskStat, Vec<DiskStat>) {
continue;
}
- datastores.push(gather_disk_stats(
+ let s3_stats: Option<S3Statistics> = try_block!({
+ let backend_config: DatastoreBackendConfig = serde_json::from_value(
+ DatastoreBackendConfig::API_SCHEMA
+ .parse_property_string(config.backend.as_deref().unwrap_or(""))?,
+ )?;
+ if backend_config.ty.unwrap_or_default() == DatastoreBackendType::S3 {
+ let path = format!(
+ "{}/{}-{}-{}.shmem",
+ pbs_datastore::S3_CLIENT_REQUEST_COUNTER_BASE_PATH,
+ backend_config
+ .client
+ .as_ref()
+ .ok_or(format_err!("missing s3 endpoint id"))?,
+ backend_config
+ .bucket
+ .as_ref()
+ .ok_or(format_err!("missing s3 bucket name"))?,
+ config.name,
+ );
+ let user = pbs_config::backup_user()?;
+ let counters =
+ SharedRequestCounters::open_shared_memory_mapped(path, user)?;
+ let s3_statistics = S3Statistics {
+ get: counters.load(Method::GET, Ordering::SeqCst),
+ put: counters.load(Method::PUT, Ordering::SeqCst),
+ post: counters.load(Method::POST, Ordering::SeqCst),
+ delete: counters.load(Method::DELETE, Ordering::SeqCst),
+ head: counters.load(Method::HEAD, Ordering::SeqCst),
+ uploaded: counters.get_upload_traffic(Ordering::SeqCst),
+ downloaded: counters.get_download_traffic(Ordering::SeqCst),
+ };
+ Ok(Some(s3_statistics))
+ } else {
+ Ok(None)
+ }
+ })
+ .unwrap_or_else(|err: Error| {
+ eprintln!("reading datastore s3 counters failed - {err}");
+ None
+ });
+
+ let disk = gather_disk_stats(
disk_manager.clone(),
Path::new(&config.absolute_path()),
&config.name,
- ));
+ );
+
+ datastores.push(DatastoreStats { disk, s3_stats });
}
}
Err(err) => {
diff --git a/src/server/metric_collection/pull_metrics.rs b/src/server/metric_collection/pull_metrics.rs
index e99662faf..4dcd336a5 100644
--- a/src/server/metric_collection/pull_metrics.rs
+++ b/src/server/metric_collection/pull_metrics.rs
@@ -6,13 +6,14 @@ use nix::sys::stat::Mode;
use pbs_api_types::{
MetricDataPoint,
MetricDataType::{self, Derive, Gauge},
+ S3Statistics,
};
use pbs_buildcfg::PROXMOX_BACKUP_RUN_DIR;
use proxmox_shared_cache::SharedCache;
use proxmox_sys::fs::CreateOptions;
use serde::{Deserialize, Serialize};
-use super::{DiskStat, HostStats, NetdevType, METRIC_COLLECTION_INTERVAL};
+use super::{DatastoreStats, DiskStat, HostStats, NetdevType, METRIC_COLLECTION_INTERVAL};
const METRIC_CACHE_TIME: Duration = Duration::from_secs(30 * 60);
const STORED_METRIC_GENERATIONS: u64 =
@@ -89,7 +90,7 @@ pub fn get_all_metrics(start_time: i64) -> Result<Vec<MetricDataPoint>, Error> {
pub(super) fn update_metrics(
host: &HostStats,
hostdisk: &DiskStat,
- datastores: &[DiskStat],
+ datastores: &[DatastoreStats],
) -> Result<(), Error> {
let mut points = MetricDataPoints::new(proxmox_time::epoch_i64());
@@ -129,8 +130,11 @@ pub(super) fn update_metrics(
update_disk_metrics(&mut points, hostdisk, "host");
for stat in datastores {
- let id = format!("datastore/{}", stat.name);
- update_disk_metrics(&mut points, stat, &id);
+ let id = format!("datastore/{}", stat.disk.name);
+ update_disk_metrics(&mut points, &stat.disk, &id);
+ if let Some(stat) = &stat.s3_stats {
+ update_s3_metrics(&mut points, stat, &id);
+ }
}
get_cache()?.set(&points, Duration::from_secs(2))?;
@@ -158,6 +162,12 @@ fn update_disk_metrics(points: &mut MetricDataPoints, disk: &DiskStat, id: &str)
}
}
+fn update_s3_metrics(points: &mut MetricDataPoints, stat: &S3Statistics, id: &str) {
+ let id = format!("{id}/s3");
+ points.add(Gauge, &id, "uploaded", stat.uploaded as f64);
+ points.add(Gauge, &id, "downloaded", stat.downloaded as f64);
+}
+
#[derive(Serialize, Deserialize)]
struct MetricDataPoints {
timestamp: i64,
diff --git a/src/server/metric_collection/rrd.rs b/src/server/metric_collection/rrd.rs
index 7b13b51ff..a0ea1a566 100644
--- a/src/server/metric_collection/rrd.rs
+++ b/src/server/metric_collection/rrd.rs
@@ -13,10 +13,11 @@ use proxmox_rrd::rrd::{AggregationFn, Archive, DataSourceType, Database};
use proxmox_rrd::Cache;
use proxmox_sys::fs::CreateOptions;
+use pbs_api_types::S3Statistics;
use pbs_buildcfg::PROXMOX_BACKUP_STATE_DIR_M;
use proxmox_rrd_api_types::{RrdMode, RrdTimeframe};
-use super::{DiskStat, HostStats, NetdevType};
+use super::{DatastoreStats, DiskStat, HostStats, NetdevType};
const RRD_CACHE_BASEDIR: &str = concat!(PROXMOX_BACKUP_STATE_DIR_M!(), "/rrdb");
@@ -148,7 +149,7 @@ fn update_derive(name: &str, value: f64) {
}
}
-pub(super) fn update_metrics(host: &HostStats, hostdisk: &DiskStat, datastores: &[DiskStat]) {
+pub(super) fn update_metrics(host: &HostStats, hostdisk: &DiskStat, datastores: &[DatastoreStats]) {
if let Some(stat) = &host.proc {
update_gauge("host/cpu", stat.cpu);
update_gauge("host/iowait", stat.iowait_percent);
@@ -182,8 +183,11 @@ pub(super) fn update_metrics(host: &HostStats, hostdisk: &DiskStat, datastores:
update_disk_metrics(hostdisk, "host");
for stat in datastores {
- let rrd_prefix = format!("datastore/{}", stat.name);
- update_disk_metrics(stat, &rrd_prefix);
+ let rrd_prefix = format!("datastore/{}", stat.disk.name);
+ update_disk_metrics(&stat.disk, &rrd_prefix);
+ if let Some(stats) = &stat.s3_stats {
+ update_s3_metrics(stats, &rrd_prefix);
+ }
}
}
@@ -212,3 +216,25 @@ fn update_disk_metrics(disk: &DiskStat, rrd_prefix: &str) {
update_derive(&rrd_key, (stat.io_ticks as f64) / 1000.0);
}
}
+
+fn update_s3_metrics(stats: &S3Statistics, rrd_prefix: &str) {
+ let rrd_key = format!("{rrd_prefix}/s3/total/uploaded");
+ update_gauge(&rrd_key, stats.uploaded as f64);
+ let rrd_key = format!("{rrd_prefix}/s3/total/downloaded");
+ update_gauge(&rrd_key, stats.downloaded as f64);
+ let rrd_key = format!("{rrd_prefix}/s3/uploaded");
+ update_derive(&rrd_key, stats.uploaded as f64);
+ let rrd_key = format!("{rrd_prefix}/s3/downloaded");
+ update_derive(&rrd_key, stats.downloaded as f64);
+
+ let rrd_key = format!("{rrd_prefix}/s3/total/get");
+ update_gauge(&rrd_key, stats.get as f64);
+ let rrd_key = format!("{rrd_prefix}/s3/total/put");
+ update_gauge(&rrd_key, stats.put as f64);
+ let rrd_key = format!("{rrd_prefix}/s3/total/post");
+ update_gauge(&rrd_key, stats.post as f64);
+ let rrd_key = format!("{rrd_prefix}/s3/total/head");
+ update_gauge(&rrd_key, stats.head as f64);
+ let rrd_key = format!("{rrd_prefix}/s3/total/delete");
+ update_gauge(&rrd_key, stats.delete as f64);
+}
--
2.47.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH v1 10/11] api: admin: expose s3 statistics in datastore rrd data
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (14 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 09/11] metrics: collect s3 datastore statistics as rrd metrics Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 11/11] partially fix #6563: ui: expose s3 rrd charts in datastore summary Christian Ebner
` (2 subsequent siblings)
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Include the additional s3-related rrd metrics in the api response to
expose them in the ui.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/admin/datastore.rs | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index d71112475..f4133011c 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -39,10 +39,10 @@ use pbs_api_types::{
print_ns_and_snapshot, print_store_and_ns, ArchiveType, Authid, BackupArchiveName,
BackupContent, BackupGroupDeleteStats, BackupNamespace, BackupType, Counts, CryptMode,
DataStoreConfig, DataStoreListItem, DataStoreMountStatus, DataStoreStatus,
- GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, MaintenanceMode,
- MaintenanceType, Operation, PruneJobOptions, SnapshotListItem, SyncJobConfig,
- BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA,
- BACKUP_TYPE_SCHEMA, CATALOG_NAME, CLIENT_LOG_BLOB_NAME, DATASTORE_SCHEMA,
+ DatastoreBackendType, GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus,
+ KeepOptions, MaintenanceMode, MaintenanceType, Operation, PruneJobOptions, SnapshotListItem,
+ SyncJobConfig, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA,
+ BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, CATALOG_NAME, CLIENT_LOG_BLOB_NAME, DATASTORE_SCHEMA,
IGNORE_VERIFIED_BACKUPS_SCHEMA, MAX_NAMESPACE_DEPTH, NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ,
PRIV_DATASTORE_VERIFY, PRIV_SYS_MODIFY, UPID, UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA,
@@ -1895,6 +1895,17 @@ pub fn get_rrd_stats(
Ok(Some((fs_type, _, _))) if fs_type.as_str() == "zfs" => {}
_ => rrd_fields.push("io_ticks"),
};
+ if datastore.backend_type() == DatastoreBackendType::S3 {
+ rrd_fields.push("s3/uploaded");
+ rrd_fields.push("s3/downloaded");
+ rrd_fields.push("s3/total/uploaded");
+ rrd_fields.push("s3/total/downloaded");
+ rrd_fields.push("s3/total/get");
+ rrd_fields.push("s3/total/put");
+ rrd_fields.push("s3/total/post");
+ rrd_fields.push("s3/total/head");
+ rrd_fields.push("s3/total/delete");
+ }
create_value_from_rrd(&format!("datastore/{store}"), &rrd_fields, timeframe, cf)
}
--
2.47.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH v1 11/11] partially fix #6563: ui: expose s3 rrd charts in datastore summary
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (15 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 10/11] api: admin: expose s3 statistics in datastore rrd data Christian Ebner
@ 2026-02-09 9:15 ` Christian Ebner
2026-02-09 9:39 ` [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
2026-02-16 12:15 ` superseded: " Christian Ebner
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:15 UTC (permalink / raw)
To: pbs-devel
Show the total request counts per method as well as the total and
derived upload/download s3 api statistics as rrd charts.
This partially fixes issue 6563; further information, such as usage
statistics for the s3 backend, will be implemented in a followup series.
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=6563
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/datastore/Summary.js | 48 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 48 insertions(+)
diff --git a/www/datastore/Summary.js b/www/datastore/Summary.js
index f73827747..251e79b97 100644
--- a/www/datastore/Summary.js
+++ b/www/datastore/Summary.js
@@ -22,6 +22,15 @@ Ext.define('pve-rrd-datastore', {
'write_ios',
'write_bytes',
'io_ticks',
+ 's3/uploaded',
+ 's3/downloaded',
+ 's3/total/uploaded',
+ 's3/total/downloaded',
+ 's3/total/get',
+ 's3/total/put',
+ 's3/total/post',
+ 's3/total/head',
+ 's3/total/delete',
{
name: 'io_delay',
calculate: function (data) {
@@ -348,6 +357,45 @@ Ext.define('PBS.DataStoreSummary', {
},
],
},
+ {
+ xtype: 'proxmoxRRDChart',
+ title: gettext('S3 API requests'),
+ fields: [
+ 's3/total/get',
+ 's3/total/put',
+ 's3/total/post',
+ 's3/total/head',
+ 's3/total/delete',
+ ],
+ fieldTitles: [
+ gettext('GET'),
+ gettext('PUT'),
+ gettext('POST'),
+ gettext('HEAD'),
+ gettext('DELETE'),
+ ],
+ bind: {
+ visible: '{showS3Stats}',
+ },
+ },
+ {
+ xtype: 'proxmoxRRDChart',
+ title: gettext('S3 API download/upload rate (bytes/second)'),
+ fields: ['s3/downloaded', 's3/uploaded'],
+ fieldTitles: [gettext('Download'), gettext('Upload')],
+ bind: {
+ visible: '{showS3Stats}',
+ },
+ },
+ {
+ xtype: 'proxmoxRRDChart',
+ title: gettext('S3 API total download/upload (bytes)'),
+ fields: ['s3/total/downloaded', 's3/total/uploaded'],
+ fieldTitles: [gettext('Download'), gettext('Upload')],
+ bind: {
+ visible: '{showS3Stats}',
+ },
+ },
{
xtype: 'proxmoxRRDChart',
title: gettext('Storage usage (bytes)'),
--
2.47.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (16 preceding siblings ...)
2026-02-09 9:15 ` [PATCH v1 11/11] partially fix #6563: ui: expose s3 rrd charts in datastore summary Christian Ebner
@ 2026-02-09 9:39 ` Christian Ebner
2026-02-16 12:15 ` superseded: " Christian Ebner
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-09 9:39 UTC (permalink / raw)
To: pbs-devel
Seems my murpp config resulted in the repo names not being included in
the patches' subject prefixes.
The first 6 patches are intended for the `proxmox` repo, the latter ones
for `proxmox-backup`.
Sorry for the inconvenience.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH v1 2/6] s3-client: add persistent shared request counters for client
2026-02-09 9:15 ` [PATCH v1 2/6] s3-client: add persistent shared request counters for client Christian Ebner
@ 2026-02-11 12:13 ` Robert Obkircher
2026-02-11 12:41 ` Christian Ebner
0 siblings, 1 reply; 26+ messages in thread
From: Robert Obkircher @ 2026-02-11 12:13 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
On 2/9/26 10:14, Christian Ebner wrote:
> Implements atomic counters for api requests successfully send to the
> S3 API endpoint via the client, accounting for individual requests
> discriminating based on their method.
>
> Since multiple client instances might exists accessing the API
> concurrently, even in different processes, provide the atomic
> counters via shared memory mapping. This follows along the lines of
> the shared traffic limiter implementation.
>
> The counter mappings are conditionally constructed on client
> instantiation by caller give configuration options.
>
> Keep sequential ordering of the counters with the intent to use them
> for statistics, soft limits and/or notifications within Proxmox
> Backup Server.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> proxmox-s3-client/Cargo.toml | 4 +
> proxmox-s3-client/debian/control | 2 +
> proxmox-s3-client/examples/s3_client.rs | 1 +
> proxmox-s3-client/src/client.rs | 48 +++++-
> proxmox-s3-client/src/lib.rs | 7 +-
> .../src/shared_request_counters.rs | 152 ++++++++++++++++++
> 6 files changed, 212 insertions(+), 2 deletions(-)
> create mode 100644 proxmox-s3-client/src/shared_request_counters.rs
>
> diff --git a/proxmox-s3-client/Cargo.toml b/proxmox-s3-client/Cargo.toml
> index a50fa715..1e31bca4 100644
> --- a/proxmox-s3-client/Cargo.toml
> +++ b/proxmox-s3-client/Cargo.toml
> @@ -38,7 +38,9 @@ proxmox-base64 = { workspace = true, optional = true }
> proxmox-http = { workspace = true, features = [ "body", "client", "client-trait" ], optional = true }
> proxmox-human-byte.workspace = true
> proxmox-rate-limiter = { workspace = true, features = [ "rate-limiter", "shared-rate-limiter" ], optional = true }
> +proxmox-shared-memory = { workspace = true, optional = true }
> proxmox-schema = { workspace = true, features = [ "api-macro", "api-types" ] }
> +proxmox-sys = { workspace = true, optional = true }
> proxmox-serde.workspace = true
> proxmox-time = {workspace = true, optional = true }
>
> @@ -65,6 +67,8 @@ impl = [
> "dep:proxmox-base64",
> "dep:proxmox-http",
> "dep:proxmox-rate-limiter",
> + "dep:proxmox-shared-memory",
> + "dep:proxmox-sys",
> "dep:proxmox-time",
> ]
>
> diff --git a/proxmox-s3-client/debian/control b/proxmox-s3-client/debian/control
> index 33418881..a534a107 100644
> --- a/proxmox-s3-client/debian/control
> +++ b/proxmox-s3-client/debian/control
> @@ -85,6 +85,8 @@ Depends:
> librust-proxmox-rate-limiter-1+default-dev,
> librust-proxmox-rate-limiter-1+rate-limiter-dev,
> librust-proxmox-rate-limiter-1+shared-rate-limiter-dev,
> + librust-proxmox-shared-memory-1+default-dev,
> + librust-proxmox-sys-1+default-dev,
> librust-proxmox-time-2+default-dev (>= 2.1.0-~~),
> librust-quick-xml-0.36+async-tokio-dev (>= 0.36.1-~~),
> librust-quick-xml-0.36+default-dev (>= 0.36.1-~~),
> diff --git a/proxmox-s3-client/examples/s3_client.rs b/proxmox-s3-client/examples/s3_client.rs
> index ca69971c..329de47a 100644
> --- a/proxmox-s3-client/examples/s3_client.rs
> +++ b/proxmox-s3-client/examples/s3_client.rs
> @@ -40,6 +40,7 @@ async fn run() -> Result<(), anyhow::Error> {
> put_rate_limit: None,
> provider_quirks: Vec::new(),
> rate_limiter_config: None,
> + request_counter_config: None,
> };
>
> // Creating a client instance and connect to api endpoint
> diff --git a/proxmox-s3-client/src/client.rs b/proxmox-s3-client/src/client.rs
> index 83176b39..4a6a702b 100644
> --- a/proxmox-s3-client/src/client.rs
> +++ b/proxmox-s3-client/src/client.rs
> @@ -1,5 +1,6 @@
> use std::path::{Path, PathBuf};
> use std::str::FromStr;
> +use std::sync::atomic::Ordering;
> use std::sync::{Arc, Mutex};
> use std::time::{Duration, Instant};
>
> @@ -32,6 +33,7 @@ use crate::response_reader::{
> CopyObjectResponse, DeleteObjectsResponse, GetObjectResponse, HeadObjectResponse,
> ListBucketsResponse, ListObjectsV2Response, PutObjectResponse, ResponseReader,
> };
> +use crate::shared_request_counters::SharedRequestCounters;
>
> /// Default timeout for s3 api requests.
> pub const S3_HTTP_REQUEST_TIMEOUT: Duration = Duration::from_secs(30 * 60);
> @@ -74,6 +76,21 @@ pub struct S3RateLimiterConfig {
> burst_out: Option<u64>,
> }
>
> +/// Options for the s3 client's shared request counters
> +pub struct S3RequestCounterOptions {
> + /// ID for the memory mapped file
> + pub id: String,
> + /// Base path for the shared memory mapped file
> + pub base_path: PathBuf,
> + /// User for the to be created shared memory mapped file and folders
> + pub user: User,
> +}
> +
> +/// Configuration for the s3 client's shared request counters
> +pub struct S3RequestCounterConfig {
> + options: S3RequestCounterOptions,
> +}
> +
> /// Configuration options for client
> pub struct S3ClientOptions {
> /// Endpoint to access S3 object store.
> @@ -100,6 +117,8 @@ pub struct S3ClientOptions {
> pub provider_quirks: Vec<ProviderQuirks>,
> /// Configuration options for the shared rate limiter.
> pub rate_limiter_config: Option<S3RateLimiterConfig>,
> + /// Configuration options for the client's shared request counters.
> + pub request_counter_config: Option<S3RequestCounterConfig>,
> }
>
> impl S3ClientOptions {
> @@ -110,6 +129,7 @@ impl S3ClientOptions {
> bucket: Option<String>,
> common_prefix: String,
> rate_limiter_options: Option<S3RateLimiterOptions>,
> + request_counter_options: Option<S3RequestCounterOptions>,
> ) -> Self {
> let rate_limiter_config = rate_limiter_options.map(|options| S3RateLimiterConfig {
> options,
> @@ -118,6 +138,8 @@ impl S3ClientOptions {
> rate_out: config.rate_out.map(|human_bytes| human_bytes.as_u64()),
> burst_out: config.burst_out.map(|human_bytes| human_bytes.as_u64()),
> });
> + let request_counter_config =
> + request_counter_options.map(|options| S3RequestCounterConfig { options });
> Self {
> endpoint: config.endpoint,
> port: config.port,
> @@ -131,6 +153,7 @@ impl S3ClientOptions {
> put_rate_limit: config.put_rate_limit,
> provider_quirks: config.provider_quirks.unwrap_or_default(),
> rate_limiter_config,
> + request_counter_config,
> }
> }
> }
> @@ -141,6 +164,7 @@ pub struct S3Client {
> options: S3ClientOptions,
> authority: Authority,
> put_rate_limiter: Option<Arc<Mutex<RateLimiter>>>,
> + request_counters: Option<Arc<SharedRequestCounters>>,
> }
>
> impl S3Client {
> @@ -213,6 +237,21 @@ impl S3Client {
> }
> }
>
> + let request_counters = if let Some(config) = options.request_counter_config.as_ref() {
> + let path = config
> + .options
> + .base_path
> + .join(format!("{}.shmem", config.options.id));
> + let request_counters = SharedRequestCounters::open_shared_memory_mapped(
> + &path,
> + config.options.user.clone(),
> + )
> + .context("failed to mmap shared S3 request counters")?;
> + Some(Arc::new(request_counters))
> + } else {
> + None
> + };
> +
> let client = Client::builder(TokioExecutor::new()).build::<_, Body>(https_connector);
>
> let authority_template = if let Some(port) = options.port {
> @@ -241,6 +280,7 @@ impl S3Client {
> options,
> authority,
> put_rate_limiter,
> + request_counters,
> })
> }
>
> @@ -392,7 +432,13 @@ impl S3Client {
> };
>
> match response {
> - Ok(Ok(response)) => return Ok(response),
> + Ok(Ok(response)) => {
> + if let Some(counters) = self.request_counters.as_ref() {
> + let _prev = counters.increment(parts.method.clone(), Ordering::SeqCst);
> + }
> +
> + return Ok(response);
> + }
> Ok(Err(err)) => {
> if retry >= MAX_S3_HTTP_REQUEST_RETRY - 1 {
> return Err(err.into());
> diff --git a/proxmox-s3-client/src/lib.rs b/proxmox-s3-client/src/lib.rs
> index d02fd0dc..ceee41a2 100644
> --- a/proxmox-s3-client/src/lib.rs
> +++ b/proxmox-s3-client/src/lib.rs
> @@ -21,7 +21,8 @@ pub use aws_sign_v4::uri_decode;
> mod client;
> #[cfg(feature = "impl")]
> pub use client::{
> - S3Client, S3ClientOptions, S3PathPrefix, S3RateLimiterOptions, S3_HTTP_REQUEST_TIMEOUT,
> + S3Client, S3ClientOptions, S3PathPrefix, S3RateLimiterOptions, S3RequestCounterOptions,
> + S3_HTTP_REQUEST_TIMEOUT,
> };
> #[cfg(feature = "impl")]
> mod timestamps;
> @@ -33,3 +34,7 @@ mod object_key;
> pub use object_key::S3ObjectKey;
> #[cfg(feature = "impl")]
> mod response_reader;
> +#[cfg(feature = "impl")]
> +mod shared_request_counters;
> +#[cfg(feature = "impl")]
> +pub use shared_request_counters::SharedRequestCounters;
> diff --git a/proxmox-s3-client/src/shared_request_counters.rs b/proxmox-s3-client/src/shared_request_counters.rs
> new file mode 100644
> index 00000000..b236490b
> --- /dev/null
> +++ b/proxmox-s3-client/src/shared_request_counters.rs
> @@ -0,0 +1,152 @@
> +use std::mem::MaybeUninit;
> +use std::path::Path;
> +use std::sync::atomic::{AtomicU64, Ordering};
> +
> +use anyhow::{bail, Error};
> +use hyper::http::method::Method;
> +use nix::sys::stat::Mode;
> +use nix::unistd::User;
> +
> +use proxmox_shared_memory::{Init, SharedMemory};
> +use proxmox_sys::fs::CreateOptions;
> +
> +const MEMORY_PAGE_SIZE: usize = 4096;
> +/// Generated via openssl::sha::sha256(b"Proxmox shared request counters v1.0")[0..8]
> +const PROXMOX_SHARED_REQUEST_COUNTERS_1_0: [u8; 8] = [224, 110, 88, 252, 26, 77, 180, 5];
> +
> +#[repr(C)]
> +#[derive(Default)]
> +struct RequestCounters {
> + delete: AtomicU64,
> + get: AtomicU64,
> + head: AtomicU64,
> + post: AtomicU64,
> + put: AtomicU64,
It probably doesn't matter (especially not with SeqCst), but aligning
each field to 64 bytes would avoid false sharing.
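Roughly what I have in mind, as a minimal sketch (the wrapper type and
the 64-byte cache line size are assumptions on my side, not taken from
the patch):

    use std::sync::atomic::{AtomicU64, Ordering};

    /// Pads each counter to its own cache line, so concurrent writers do
    /// not keep invalidating each other's line.
    #[repr(C, align(64))]
    #[derive(Default)]
    struct PaddedCounter(AtomicU64);

    #[repr(C)]
    #[derive(Default)]
    struct RequestCounters {
        delete: PaddedCounter,
        get: PaddedCounter,
        head: PaddedCounter,
        post: PaddedCounter,
        put: PaddedCounter,
    }

    fn main() {
        let counters = RequestCounters::default();
        counters.get.0.fetch_add(1, Ordering::SeqCst);
        assert_eq!(counters.get.0.load(Ordering::SeqCst), 1);
        // each counter now occupies a full 64-byte line
        assert_eq!(std::mem::size_of::<PaddedCounter>(), 64);
    }

Note that this would of course also change size_of::<RequestCounters>()
and therefore the padding arithmetic below.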
> +}
> +
> +impl Init for RequestCounters {
> + fn initialize(this: &mut MaybeUninit<Self>) {
> + unsafe {
> + let this = &mut *this.as_mut_ptr();
> + *this = RequestCounters::default();
If the memory is actually uninitialized this is undefined behavior
because `*this =` drops the previous value. The memory from mmap is
initialized, but I think it would still be better to use ptr::write.
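For example, a minimal standalone sketch of what I mean (simplified
stand-in types, not the actual struct from the patch):

    use std::mem::MaybeUninit;
    use std::sync::atomic::AtomicU64;

    #[repr(C)]
    #[derive(Default)]
    struct RequestCounters {
        delete: AtomicU64,
        get: AtomicU64,
    }

    fn initialize(this: &mut MaybeUninit<RequestCounters>) {
        // MaybeUninit::write (ptr::write underneath) overwrites the slot
        // without reading or dropping the previous, possibly uninitialized
        // value.
        this.write(RequestCounters::default());
    }

    fn main() {
        let mut slot = MaybeUninit::<RequestCounters>::uninit();
        initialize(&mut slot);
        // safety: the slot was initialized right above
        let counters = unsafe { slot.assume_init() };
        assert_eq!(counters.get.into_inner(), 0);
    }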
> + }
> + }
> +}
> +
> +impl RequestCounters {
> + /// Increment the counter for given method, following the provided memory ordering constrains.
> + ///
> + /// Returns the previously stored value.
> + pub fn increment(&self, method: Method, ordering: Ordering) -> u64 {
> + match method {
> + Method::DELETE => self.delete.fetch_add(1, ordering),
> + Method::GET => self.get.fetch_add(1, ordering),
> + Method::HEAD => self.head.fetch_add(1, ordering),
> + Method::POST => self.post.fetch_add(1, ordering),
> + Method::PUT => self.put.fetch_add(1, ordering),
> + _ => 0,
> + }
> + }
> +
> + /// Load current counter state for given method, following the provided memory ordering constrains
> + pub fn load(&self, method: Method, ordering: Ordering) -> u64 {
> + match method {
> + Method::DELETE => self.delete.load(ordering),
> + Method::GET => self.get.load(ordering),
> + Method::HEAD => self.head.load(ordering),
> + Method::POST => self.post.load(ordering),
> + Method::PUT => self.put.load(ordering),
> + _ => 0,
> + }
> + }
> +
> + /// Reset all counters, following the provided memory ordering constrains
> + pub fn reset(&self, ordering: Ordering) {
> + self.delete.store(0, ordering);
> + self.get.store(0, ordering);
> + self.head.store(0, ordering);
> + self.post.store(0, ordering);
> + self.put.store(0, ordering);
> + }
> +}
> +
> +#[repr(C)]
> +struct MappableRequestCounters {
> + magic: [u8; 8],
> + counters: RequestCounters,
> + _page_size_padding: [u8; MEMORY_PAGE_SIZE
> + - PROXMOX_SHARED_REQUEST_COUNTERS_1_0.len()
> + - std::mem::size_of::<RequestCounters>()],
Why do we need to hard-code a possibly incorrect assumption about the
page size? I know `SharedMemory` claims that the size needs to be a
multiple of 4096, but I don't understand why.
I guess this implicitly prevents unused padding at the end of the
struct which would be problematic if we add a field and run two
processes with different versions at the same time, but that doesn't
explain the page size.
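As a sketch of what would at least make the assumption explicit (the
layout here is a simplified stand-in, not the actual struct), the size
could be checked at compile time instead of being implied by the
padding arithmetic:

    const MEMORY_PAGE_SIZE: usize = 4096;

    #[repr(C)]
    struct MappableRequestCounters {
        magic: [u8; 8],
        counters: [u64; 5],
        _page_size_padding: [u8; MEMORY_PAGE_SIZE - 8 - 5 * 8],
    }

    // fails to compile if the padding arithmetic ever drifts from one page
    const _: () =
        assert!(std::mem::size_of::<MappableRequestCounters>() == MEMORY_PAGE_SIZE);

    fn main() {
        println!("{} bytes", std::mem::size_of::<MappableRequestCounters>());
    }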
> +}
> +
> +impl Init for MappableRequestCounters {
> + fn initialize(this: &mut MaybeUninit<Self>) {
> + unsafe {
> + let this = &mut *this.as_mut_ptr();
> + this.magic = PROXMOX_SHARED_REQUEST_COUNTERS_1_0;
> + this.counters = RequestCounters::default();
> + }
> + }
> +
> + fn check_type_magic(this: &MaybeUninit<Self>) -> Result<(), Error> {
> + unsafe {
> + let this = &*this.as_ptr();
> + if this.magic != PROXMOX_SHARED_REQUEST_COUNTERS_1_0 {
> + bail!("incorrect magic number for request counters detected");
> + }
> + proxmox_shared_memory::check_subtype(&this.counters)?;
> + }
> + Ok(())
> + }
> +}
> +
> +/// Atomic counters storing per-request method counts for the client.
> +///
> +/// If set, the counts can be filtered based on a path prefix.
> +pub struct SharedRequestCounters {
> + shared_memory: SharedMemory<MappableRequestCounters>,
> +}
> +
> +impl SharedRequestCounters {
> + /// Create a new shared counter instance.
> + ///
> + /// Opens or creates mmap file and accesses it via shared memory mapping.
> + pub fn open_shared_memory_mapped<P: AsRef<Path>>(path: P, user: User) -> Result<Self, Error> {
> + let path = path.as_ref();
> + if let Some(parent) = path.parent() {
> + let dir_opts = CreateOptions::new()
> + .perm(Mode::from_bits_truncate(0o770))
> + .owner(user.uid)
> + .group(user.gid);
> +
> + proxmox_sys::fs::create_path(parent, Some(dir_opts), Some(dir_opts))?;
> + }
> +
> + let file_opts = CreateOptions::new()
> + .perm(Mode::from_bits_truncate(0o660))
> + .owner(user.uid)
> + .group(user.gid);
> + let shared_memory = SharedMemory::open(path, file_opts)?;
> + Ok(Self { shared_memory })
> + }
> +
> + /// Increment the counter for given method, following the provided memory ordering constrains
> + ///
> + /// Returns the previously stored value.
> + pub fn increment(&self, method: Method, ordering: Ordering) -> u64 {
> + self.shared_memory
> + .data()
> + .counters
> + .increment(method, ordering)
> + }
> +
> + /// Load current counter state for given method, following the provided memory ordering constrains
> + pub fn load(&self, method: Method, ordering: Ordering) -> u64 {
> + self.shared_memory.data().counters.load(method, ordering)
> + }
> +
> + /// Reset all counters, following the provided memory ordering constrains
> + pub fn reset(&self, ordering: Ordering) {
> + self.shared_memory.data().counters.reset(ordering)
> + }
> +}
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH v1 2/6] s3-client: add persistent shared request counters for client
2026-02-11 12:13 ` Robert Obkircher
@ 2026-02-11 12:41 ` Christian Ebner
2026-02-12 9:55 ` Robert Obkircher
0 siblings, 1 reply; 26+ messages in thread
From: Christian Ebner @ 2026-02-11 12:41 UTC (permalink / raw)
To: Robert Obkircher, pbs-devel
On 2/11/26 1:12 PM, Robert Obkircher wrote:
>
> On 2/9/26 10:14, Christian Ebner wrote:
>> Implements atomic counters for api requests successfully send to the
>> S3 API endpoint via the client, accounting for individual requests
>> discriminating based on their method.
>>
>> Since multiple client instances might exists accessing the API
>> concurrently, even in different processes, provide the atomic
>> counters via shared memory mapping. This follows along the lines of
>> the shared traffic limiter implementation.
>>
>> The counter mappings are conditionally constructed on client
>> instantiation by caller give configuration options.
>>
>> Keep sequential ordering of the counters with the intent to use them
>> for statistics, soft limits and/or notifications within Proxmox
>> Backup Server.
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> proxmox-s3-client/Cargo.toml | 4 +
>> proxmox-s3-client/debian/control | 2 +
>> proxmox-s3-client/examples/s3_client.rs | 1 +
>> proxmox-s3-client/src/client.rs | 48 +++++-
>> proxmox-s3-client/src/lib.rs | 7 +-
>> .../src/shared_request_counters.rs | 152 ++++++++++++++++++
>> 6 files changed, 212 insertions(+), 2 deletions(-)
>> create mode 100644 proxmox-s3-client/src/shared_request_counters.rs
>>
>> diff --git a/proxmox-s3-client/Cargo.toml b/proxmox-s3-client/Cargo.toml
>> index a50fa715..1e31bca4 100644
>> --- a/proxmox-s3-client/Cargo.toml
>> +++ b/proxmox-s3-client/Cargo.toml
>> @@ -38,7 +38,9 @@ proxmox-base64 = { workspace = true, optional = true }
>> proxmox-http = { workspace = true, features = [ "body", "client", "client-trait" ], optional = true }
>> proxmox-human-byte.workspace = true
>> proxmox-rate-limiter = { workspace = true, features = [ "rate-limiter", "shared-rate-limiter" ], optional = true }
>> +proxmox-shared-memory = { workspace = true, optional = true }
>> proxmox-schema = { workspace = true, features = [ "api-macro", "api-types" ] }
>> +proxmox-sys = { workspace = true, optional = true }
>> proxmox-serde.workspace = true
>> proxmox-time = {workspace = true, optional = true }
>>
>> @@ -65,6 +67,8 @@ impl = [
>> "dep:proxmox-base64",
>> "dep:proxmox-http",
>> "dep:proxmox-rate-limiter",
>> + "dep:proxmox-shared-memory",
>> + "dep:proxmox-sys",
>> "dep:proxmox-time",
>> ]
>>
>> diff --git a/proxmox-s3-client/debian/control b/proxmox-s3-client/debian/control
>> index 33418881..a534a107 100644
>> --- a/proxmox-s3-client/debian/control
>> +++ b/proxmox-s3-client/debian/control
>> @@ -85,6 +85,8 @@ Depends:
>> librust-proxmox-rate-limiter-1+default-dev,
>> librust-proxmox-rate-limiter-1+rate-limiter-dev,
>> librust-proxmox-rate-limiter-1+shared-rate-limiter-dev,
>> + librust-proxmox-shared-memory-1+default-dev,
>> + librust-proxmox-sys-1+default-dev,
>> librust-proxmox-time-2+default-dev (>= 2.1.0-~~),
>> librust-quick-xml-0.36+async-tokio-dev (>= 0.36.1-~~),
>> librust-quick-xml-0.36+default-dev (>= 0.36.1-~~),
>> diff --git a/proxmox-s3-client/examples/s3_client.rs b/proxmox-s3-client/examples/s3_client.rs
>> index ca69971c..329de47a 100644
>> --- a/proxmox-s3-client/examples/s3_client.rs
>> +++ b/proxmox-s3-client/examples/s3_client.rs
>> @@ -40,6 +40,7 @@ async fn run() -> Result<(), anyhow::Error> {
>> put_rate_limit: None,
>> provider_quirks: Vec::new(),
>> rate_limiter_config: None,
>> + request_counter_config: None,
>> };
>>
>> // Creating a client instance and connect to api endpoint
>> diff --git a/proxmox-s3-client/src/client.rs b/proxmox-s3-client/src/client.rs
>> index 83176b39..4a6a702b 100644
>> --- a/proxmox-s3-client/src/client.rs
>> +++ b/proxmox-s3-client/src/client.rs
>> @@ -1,5 +1,6 @@
>> use std::path::{Path, PathBuf};
>> use std::str::FromStr;
>> +use std::sync::atomic::Ordering;
>> use std::sync::{Arc, Mutex};
>> use std::time::{Duration, Instant};
>>
>> @@ -32,6 +33,7 @@ use crate::response_reader::{
>> CopyObjectResponse, DeleteObjectsResponse, GetObjectResponse, HeadObjectResponse,
>> ListBucketsResponse, ListObjectsV2Response, PutObjectResponse, ResponseReader,
>> };
>> +use crate::shared_request_counters::SharedRequestCounters;
>>
>> /// Default timeout for s3 api requests.
>> pub const S3_HTTP_REQUEST_TIMEOUT: Duration = Duration::from_secs(30 * 60);
>> @@ -74,6 +76,21 @@ pub struct S3RateLimiterConfig {
>> burst_out: Option<u64>,
>> }
>>
>> +/// Options for the s3 client's shared request counters
>> +pub struct S3RequestCounterOptions {
>> + /// ID for the memory mapped file
>> + pub id: String,
>> + /// Base path for the shared memory mapped file
>> + pub base_path: PathBuf,
>> + /// User for the to be created shared memory mapped file and folders
>> + pub user: User,
>> +}
>> +
>> +/// Configuration for the s3 client's shared request counters
>> +pub struct S3RequestCounterConfig {
>> + options: S3RequestCounterOptions,
>> +}
>> +
>> /// Configuration options for client
>> pub struct S3ClientOptions {
>> /// Endpoint to access S3 object store.
>> @@ -100,6 +117,8 @@ pub struct S3ClientOptions {
>> pub provider_quirks: Vec<ProviderQuirks>,
>> /// Configuration options for the shared rate limiter.
>> pub rate_limiter_config: Option<S3RateLimiterConfig>,
>> + /// Configuration options for the client's shared request counters.
>> + pub request_counter_config: Option<S3RequestCounterConfig>,
>> }
>>
>> impl S3ClientOptions {
>> @@ -110,6 +129,7 @@ impl S3ClientOptions {
>> bucket: Option<String>,
>> common_prefix: String,
>> rate_limiter_options: Option<S3RateLimiterOptions>,
>> + request_counter_options: Option<S3RequestCounterOptions>,
>> ) -> Self {
>> let rate_limiter_config = rate_limiter_options.map(|options| S3RateLimiterConfig {
>> options,
>> @@ -118,6 +138,8 @@ impl S3ClientOptions {
>> rate_out: config.rate_out.map(|human_bytes| human_bytes.as_u64()),
>> burst_out: config.burst_out.map(|human_bytes| human_bytes.as_u64()),
>> });
>> + let request_counter_config =
>> + request_counter_options.map(|options| S3RequestCounterConfig { options });
>> Self {
>> endpoint: config.endpoint,
>> port: config.port,
>> @@ -131,6 +153,7 @@ impl S3ClientOptions {
>> put_rate_limit: config.put_rate_limit,
>> provider_quirks: config.provider_quirks.unwrap_or_default(),
>> rate_limiter_config,
>> + request_counter_config,
>> }
>> }
>> }
>> @@ -141,6 +164,7 @@ pub struct S3Client {
>> options: S3ClientOptions,
>> authority: Authority,
>> put_rate_limiter: Option<Arc<Mutex<RateLimiter>>>,
>> + request_counters: Option<Arc<SharedRequestCounters>>,
>> }
>>
>> impl S3Client {
>> @@ -213,6 +237,21 @@ impl S3Client {
>> }
>> }
>>
>> + let request_counters = if let Some(config) = options.request_counter_config.as_ref() {
>> + let path = config
>> + .options
>> + .base_path
>> + .join(format!("{}.shmem", config.options.id));
>> + let request_counters = SharedRequestCounters::open_shared_memory_mapped(
>> + &path,
>> + config.options.user.clone(),
>> + )
>> + .context("failed to mmap shared S3 request counters")?;
>> + Some(Arc::new(request_counters))
>> + } else {
>> + None
>> + };
>> +
>> let client = Client::builder(TokioExecutor::new()).build::<_, Body>(https_connector);
>>
>> let authority_template = if let Some(port) = options.port {
>> @@ -241,6 +280,7 @@ impl S3Client {
>> options,
>> authority,
>> put_rate_limiter,
>> + request_counters,
>> })
>> }
>>
>> @@ -392,7 +432,13 @@ impl S3Client {
>> };
>>
>> match response {
>> - Ok(Ok(response)) => return Ok(response),
>> + Ok(Ok(response)) => {
>> + if let Some(counters) = self.request_counters.as_ref() {
>> + let _prev = counters.increment(parts.method.clone(), Ordering::SeqCst);
>> + }
>> +
>> + return Ok(response);
>> + }
>> Ok(Err(err)) => {
>> if retry >= MAX_S3_HTTP_REQUEST_RETRY - 1 {
>> return Err(err.into());
>> diff --git a/proxmox-s3-client/src/lib.rs b/proxmox-s3-client/src/lib.rs
>> index d02fd0dc..ceee41a2 100644
>> --- a/proxmox-s3-client/src/lib.rs
>> +++ b/proxmox-s3-client/src/lib.rs
>> @@ -21,7 +21,8 @@ pub use aws_sign_v4::uri_decode;
>> mod client;
>> #[cfg(feature = "impl")]
>> pub use client::{
>> - S3Client, S3ClientOptions, S3PathPrefix, S3RateLimiterOptions, S3_HTTP_REQUEST_TIMEOUT,
>> + S3Client, S3ClientOptions, S3PathPrefix, S3RateLimiterOptions, S3RequestCounterOptions,
>> + S3_HTTP_REQUEST_TIMEOUT,
>> };
>> #[cfg(feature = "impl")]
>> mod timestamps;
>> @@ -33,3 +34,7 @@ mod object_key;
>> pub use object_key::S3ObjectKey;
>> #[cfg(feature = "impl")]
>> mod response_reader;
>> +#[cfg(feature = "impl")]
>> +mod shared_request_counters;
>> +#[cfg(feature = "impl")]
>> +pub use shared_request_counters::SharedRequestCounters;
>> diff --git a/proxmox-s3-client/src/shared_request_counters.rs b/proxmox-s3-client/src/shared_request_counters.rs
>> new file mode 100644
>> index 00000000..b236490b
>> --- /dev/null
>> +++ b/proxmox-s3-client/src/shared_request_counters.rs
>> @@ -0,0 +1,152 @@
>> +use std::mem::MaybeUninit;
>> +use std::path::Path;
>> +use std::sync::atomic::{AtomicU64, Ordering};
>> +
>> +use anyhow::{bail, Error};
>> +use hyper::http::method::Method;
>> +use nix::sys::stat::Mode;
>> +use nix::unistd::User;
>> +
>> +use proxmox_shared_memory::{Init, SharedMemory};
>> +use proxmox_sys::fs::CreateOptions;
>> +
>> +const MEMORY_PAGE_SIZE: usize = 4096;
>> +/// Generated via openssl::sha::sha256(b"Proxmox shared request counters v1.0")[0..8]
>> +const PROXMOX_SHARED_REQUEST_COUNTERS_1_0: [u8; 8] = [224, 110, 88, 252, 26, 77, 180, 5];
>> +
>> +#[repr(C)]
>> +#[derive(Default)]
>> +struct RequestCounters {
>> + delete: AtomicU64,
>> + get: AtomicU64,
>> + head: AtomicU64,
>> + post: AtomicU64,
>> + put: AtomicU64,
> It probably doesn't matter (especially not with SeqCst), but aligning
> each field to 64 bytes would avoid false sharing.
Can you elaborate a bit on this? I am not sure I understand what the
benefit of this would be.
>> +}
>> +
>> +impl Init for RequestCounters {
>> + fn initialize(this: &mut MaybeUninit<Self>) {
>> + unsafe {
>> + let this = &mut *this.as_mut_ptr();
>> + *this = RequestCounters::default();
> If the memory is actually uninitialized this is undefined behavior
> because `*this =` drops the previous value. The memory from mmap is
> initialized, but I think it would still be better to use ptr::write.
Okay, can adapt this accordingly, thanks!
>> + }
>> + }
>> +}
>> +
>> +impl RequestCounters {
>> + /// Increment the counter for given method, following the provided memory ordering constrains.
>> + ///
>> + /// Returns the previously stored value.
>> + pub fn increment(&self, method: Method, ordering: Ordering) -> u64 {
>> + match method {
>> + Method::DELETE => self.delete.fetch_add(1, ordering),
>> + Method::GET => self.get.fetch_add(1, ordering),
>> + Method::HEAD => self.head.fetch_add(1, ordering),
>> + Method::POST => self.post.fetch_add(1, ordering),
>> + Method::PUT => self.put.fetch_add(1, ordering),
>> + _ => 0,
>> + }
>> + }
>> +
>> + /// Load current counter state for given method, following the provided memory ordering constrains
>> + pub fn load(&self, method: Method, ordering: Ordering) -> u64 {
>> + match method {
>> + Method::DELETE => self.delete.load(ordering),
>> + Method::GET => self.get.load(ordering),
>> + Method::HEAD => self.head.load(ordering),
>> + Method::POST => self.post.load(ordering),
>> + Method::PUT => self.put.load(ordering),
>> + _ => 0,
>> + }
>> + }
>> +
>> + /// Reset all counters, following the provided memory ordering constrains
>> + pub fn reset(&self, ordering: Ordering) {
>> + self.delete.store(0, ordering);
>> + self.get.store(0, ordering);
>> + self.head.store(0, ordering);
>> + self.post.store(0, ordering);
>> + self.put.store(0, ordering);
>> + }
>> +}
>> +
>> +#[repr(C)]
>> +struct MappableRequestCounters {
>> + magic: [u8; 8],
>> + counters: RequestCounters,
>> + _page_size_padding: [u8; MEMORY_PAGE_SIZE
>> + - PROXMOX_SHARED_REQUEST_COUNTERS_1_0.len()
>> + - std::mem::size_of::<RequestCounters>()],
> Why do we need to hard-code a possibly incorrect assumption about the
> page size? I know `SharedMemory` claims that the size needs to be a
> multiple of 4096, but I don't understand why.
This padding was chosen exactly because of that doc comment and because
other users of `SharedMemory` do the same 4k page size padding.
I will however take a look at the implementation details there now that
you have sparked my interest; maybe that sheds some light.
Or maybe you already have further details, since you are working on the
patches to make the code compatible with kernels using non-default page
sizes?
> I guess this implicitly prevents unused padding at the end of the
> struct which would be problematic if we add a field and run two
> processes with different versions at the same time, but that doesn't
> explain the page size.
Being able to extend with further fields in the future has proven to be
beneficial, yes, e.g. for the token shadow config version as mentioned
in
https://lore.proxmox.com/pbs-devel/20260121151408.731516-2-s.rufinatscha@proxmox.com/T/
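As a rough sketch (a purely hypothetical later layout, just to
illustrate the idea), an extended version could carve an additional
counter out of the existing padding while the overall one-page size and
the offsets of the pre-existing fields stay stable:

    const MEMORY_PAGE_SIZE: usize = 4096;
    const MAGIC_LEN: usize = 8;

    #[repr(C)]
    struct RequestCounters {
        delete: u64,
        get: u64,
        head: u64,
        post: u64,
        put: u64,
    }

    #[repr(C)]
    struct MappableRequestCountersNext {
        magic: [u8; MAGIC_LEN],
        counters: RequestCounters,
        // new field added later, taking space that used to be padding
        patch: u64,
        _page_size_padding: [u8; MEMORY_PAGE_SIZE
            - MAGIC_LEN
            - std::mem::size_of::<RequestCounters>()
            - std::mem::size_of::<u64>()],
    }

    fn main() {
        // both versions still map exactly one page, so old and new readers
        // agree on the offsets of the fields they both know about
        assert_eq!(
            std::mem::size_of::<MappableRequestCountersNext>(),
            MEMORY_PAGE_SIZE
        );
    }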
>> +}
>> +
>> +impl Init for MappableRequestCounters {
>> + fn initialize(this: &mut MaybeUninit<Self>) {
>> + unsafe {
>> + let this = &mut *this.as_mut_ptr();
>> + this.magic = PROXMOX_SHARED_REQUEST_COUNTERS_1_0;
>> + this.counters = RequestCounters::default();
>> + }
>> + }
>> +
>> + fn check_type_magic(this: &MaybeUninit<Self>) -> Result<(), Error> {
>> + unsafe {
>> + let this = &*this.as_ptr();
>> + if this.magic != PROXMOX_SHARED_REQUEST_COUNTERS_1_0 {
>> + bail!("incorrect magic number for request counters detected");
>> + }
>> + proxmox_shared_memory::check_subtype(&this.counters)?;
>> + }
>> + Ok(())
>> + }
>> +}
>> +
>> +/// Atomic counters storing per-request method counts for the client.
>> +///
>> +/// If set, the counts can be filtered based on a path prefix.
>> +pub struct SharedRequestCounters {
>> + shared_memory: SharedMemory<MappableRequestCounters>,
>> +}
>> +
>> +impl SharedRequestCounters {
>> + /// Create a new shared counter instance.
>> + ///
>> + /// Opens or creates mmap file and accesses it via shared memory mapping.
>> + pub fn open_shared_memory_mapped<P: AsRef<Path>>(path: P, user: User) -> Result<Self, Error> {
>> + let path = path.as_ref();
>> + if let Some(parent) = path.parent() {
>> + let dir_opts = CreateOptions::new()
>> + .perm(Mode::from_bits_truncate(0o770))
>> + .owner(user.uid)
>> + .group(user.gid);
>> +
>> + proxmox_sys::fs::create_path(parent, Some(dir_opts), Some(dir_opts))?;
>> + }
>> +
>> + let file_opts = CreateOptions::new()
>> + .perm(Mode::from_bits_truncate(0o660))
>> + .owner(user.uid)
>> + .group(user.gid);
>> + let shared_memory = SharedMemory::open(path, file_opts)?;
>> + Ok(Self { shared_memory })
>> + }
>> +
>> +    /// Increment the counter for given method, following the provided memory ordering constraints
>> + ///
>> + /// Returns the previously stored value.
>> + pub fn increment(&self, method: Method, ordering: Ordering) -> u64 {
>> + self.shared_memory
>> + .data()
>> + .counters
>> + .increment(method, ordering)
>> + }
>> +
>> +    /// Load current counter state for given method, following the provided memory ordering constraints
>> + pub fn load(&self, method: Method, ordering: Ordering) -> u64 {
>> + self.shared_memory.data().counters.load(method, ordering)
>> + }
>> +
>> +    /// Reset all counters, following the provided memory ordering constraints
>> + pub fn reset(&self, ordering: Ordering) {
>> + self.shared_memory.data().counters.reset(ordering)
>> + }
>> +}
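As a quick usage sketch of the API above (path and user handling here are purely
illustrative, just to show the call flow):

use std::sync::atomic::Ordering;

use hyper::Method;
use nix::unistd::{Uid, User};
use proxmox_s3_client::SharedRequestCounters;

fn example() -> Result<(), anyhow::Error> {
    // Illustrative path; the client derives the real one from its configuration.
    let user = User::from_uid(Uid::effective())?
        .ok_or_else(|| anyhow::format_err!("current user not found"))?;
    let counters =
        SharedRequestCounters::open_shared_memory_mapped("/tmp/example-counters.shmem", user)?;
    counters.increment(Method::GET, Ordering::SeqCst);
    println!("GET requests: {}", counters.load(Method::GET, Ordering::SeqCst));
    Ok(())
}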
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH v1 09/11] metrics: collect s3 datastore statistics as rrd metrics
2026-02-09 9:15 ` [PATCH v1 09/11] metrics: collect s3 datastore statistics as rrd metrics Christian Ebner
@ 2026-02-11 16:29 ` Christian Ebner
0 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-11 16:29 UTC (permalink / raw)
To: pbs-devel
On 2/9/26 10:14 AM, Christian Ebner wrote:
> For datastores with s3 backend, load the shared s3 request counters
> via the mmapped file and include them as rrd metrics. Combine the
> pre-existing DiskStat with an optional S3Statistics into a common
> DatastoreStats struct as a dedicated type for the internal method
> interfaces.
>
> Request counters are collected per method; total upload and download
> traffic is recorded as gauge values, together with derived values to
> obtain averaged rate statistics.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> src/server/metric_collection/metric_server.rs | 8 +--
> src/server/metric_collection/mod.rs | 65 +++++++++++++++++--
> src/server/metric_collection/pull_metrics.rs | 18 +++--
> src/server/metric_collection/rrd.rs | 34 ++++++++--
> 4 files changed, 108 insertions(+), 17 deletions(-)
>
> diff --git a/src/server/metric_collection/metric_server.rs b/src/server/metric_collection/metric_server.rs
> index ba20628a0..4584fc14c 100644
> --- a/src/server/metric_collection/metric_server.rs
> +++ b/src/server/metric_collection/metric_server.rs
> @@ -5,10 +5,10 @@ use serde_json::{json, Value};
>
> use proxmox_metrics::MetricsData;
>
> -use super::{DiskStat, HostStats};
> +use super::{DatastoreStats, DiskStat, HostStats};
>
> pub async fn send_data_to_metric_servers(
> - stats: Arc<(HostStats, DiskStat, Vec<DiskStat>)>,
> + stats: Arc<(HostStats, DiskStat, Vec<DatastoreStats>)>,
> ) -> Result<(), Error> {
> let (config, _digest) = pbs_config::metrics::config()?;
> let channel_list = get_metric_server_connections(config)?;
> @@ -66,10 +66,10 @@ pub async fn send_data_to_metric_servers(
>
> for datastore in stats.2.iter() {
> values.push(Arc::new(
> - MetricsData::new("blockstat", ctime, datastore.to_value())?
> + MetricsData::new("blockstat", ctime, datastore.disk.to_value())?
> .tag("object", "host")
> .tag("host", nodename)
> - .tag("datastore", datastore.name.clone()),
> + .tag("datastore", datastore.disk.name.clone()),
> ));
> }
>
> diff --git a/src/server/metric_collection/mod.rs b/src/server/metric_collection/mod.rs
> index 9b62cbb42..db5830ad1 100644
> --- a/src/server/metric_collection/mod.rs
> +++ b/src/server/metric_collection/mod.rs
> @@ -1,3 +1,4 @@
> +use std::sync::atomic::Ordering;
> use std::{
> collections::HashMap,
> path::Path,
> @@ -6,11 +7,17 @@ use std::{
> time::{Duration, Instant},
> };
>
> -use anyhow::Error;
> +use anyhow::{format_err, Error};
> +use hyper::Method;
> use tokio::join;
>
> -use pbs_api_types::{DataStoreConfig, Operation};
> +use pbs_api_types::{
> + DataStoreConfig, DatastoreBackendConfig, DatastoreBackendType, Operation, S3Statistics,
> +};
> +use proxmox_lang::try_block;
> use proxmox_network_api::{get_network_interfaces, IpLink};
> +use proxmox_s3_client::SharedRequestCounters;
> +use proxmox_schema::ApiType;
> use proxmox_sys::{
> fs::FileSystemInformation,
> linux::procfs::{Loadavg, ProcFsMemInfo, ProcFsNetDev, ProcFsStat},
> @@ -113,6 +120,11 @@ struct DiskStat {
> dev: Option<BlockDevStat>,
> }
>
> +struct DatastoreStats {
> + disk: DiskStat,
> + s3_stats: Option<S3Statistics>,
> +}
> +
> #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, serde::Serialize, serde::Deserialize)]
> enum NetdevType {
> Physical,
> @@ -219,7 +231,7 @@ fn collect_host_stats_sync() -> HostStats {
> }
> }
>
> -fn collect_disk_stats_sync() -> (DiskStat, Vec<DiskStat>) {
> +fn collect_disk_stats_sync() -> (DiskStat, Vec<DatastoreStats>) {
> let disk_manager = DiskManage::new();
>
> let root = gather_disk_stats(disk_manager.clone(), Path::new("/"), "host");
> @@ -243,11 +255,54 @@ fn collect_disk_stats_sync() -> (DiskStat, Vec<DiskStat>) {
> continue;
> }
>
> - datastores.push(gather_disk_stats(
> + let s3_stats: Option<S3Statistics> = try_block!({
> + let backend_config: DatastoreBackendConfig = serde_json::from_value(
> + DatastoreBackendConfig::API_SCHEMA
> + .parse_property_string(config.backend.as_deref().unwrap_or(""))?,
> + )?;
> + if backend_config.ty.unwrap_or_default() == DatastoreBackendType::S3 {
> + let path = format!(
> + "{}/{}-{}-{}.shmem",
> + pbs_datastore::S3_CLIENT_REQUEST_COUNTER_BASE_PATH,
> + backend_config
> + .client
> + .as_ref()
> + .ok_or(format_err!("missing s3 endpoint id"))?,
> + backend_config
> + .bucket
> + .as_ref()
> + .ok_or(format_err!("missing s3 bucket name"))?,
> + config.name,
> + );
> + let user = pbs_config::backup_user()?;
> + let counters =
> + SharedRequestCounters::open_shared_memory_mapped(path, user)?;
This should not happen each time the metrics are collected, but rather
once per datastore on demand, keeping the mmapped counters open.
I will incorporate this in version 2 of the patch series as well.
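A rough sketch of one possible shape (names like `S3_COUNTER_CACHE` and
`cached_counters` are illustrative only, not necessarily what v2 will use):

use std::collections::HashMap;
use std::sync::{Arc, Mutex, OnceLock};

use anyhow::Error;
use proxmox_s3_client::SharedRequestCounters;

static S3_COUNTER_CACHE: OnceLock<Mutex<HashMap<String, Arc<SharedRequestCounters>>>> =
    OnceLock::new();

/// Open the shmem backed counters once per datastore and reuse the mapping afterwards.
fn cached_counters(store: &str, path: &str) -> Result<Arc<SharedRequestCounters>, Error> {
    let cache = S3_COUNTER_CACHE.get_or_init(|| Mutex::new(HashMap::new()));
    let mut map = cache.lock().unwrap();
    if let Some(counters) = map.get(store) {
        return Ok(Arc::clone(counters));
    }
    let user = pbs_config::backup_user()?;
    let counters = Arc::new(SharedRequestCounters::open_shared_memory_mapped(path, user)?);
    map.insert(store.to_string(), Arc::clone(&counters));
    Ok(counters)
}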
> + let s3_statistics = S3Statistics {
> + get: counters.load(Method::GET, Ordering::SeqCst),
> + put: counters.load(Method::PUT, Ordering::SeqCst),
> + post: counters.load(Method::POST, Ordering::SeqCst),
> + delete: counters.load(Method::DELETE, Ordering::SeqCst),
> + head: counters.load(Method::HEAD, Ordering::SeqCst),
> + uploaded: counters.get_upload_traffic(Ordering::SeqCst),
> + downloaded: counters.get_download_traffic(Ordering::SeqCst),
> + };
> + Ok(Some(s3_statistics))
> + } else {
> + Ok(None)
> + }
> + })
> + .unwrap_or_else(|err: Error| {
> + eprintln!("reading datastore s3 counters failed - {err}");
> + None
> + });
> +
> + let disk = gather_disk_stats(
> disk_manager.clone(),
> Path::new(&config.absolute_path()),
> &config.name,
> - ));
> + );
> +
> + datastores.push(DatastoreStats { disk, s3_stats });
> }
> }
> Err(err) => {
> diff --git a/src/server/metric_collection/pull_metrics.rs b/src/server/metric_collection/pull_metrics.rs
> index e99662faf..4dcd336a5 100644
> --- a/src/server/metric_collection/pull_metrics.rs
> +++ b/src/server/metric_collection/pull_metrics.rs
> @@ -6,13 +6,14 @@ use nix::sys::stat::Mode;
> use pbs_api_types::{
> MetricDataPoint,
> MetricDataType::{self, Derive, Gauge},
> + S3Statistics,
> };
> use pbs_buildcfg::PROXMOX_BACKUP_RUN_DIR;
> use proxmox_shared_cache::SharedCache;
> use proxmox_sys::fs::CreateOptions;
> use serde::{Deserialize, Serialize};
>
> -use super::{DiskStat, HostStats, NetdevType, METRIC_COLLECTION_INTERVAL};
> +use super::{DatastoreStats, DiskStat, HostStats, NetdevType, METRIC_COLLECTION_INTERVAL};
>
> const METRIC_CACHE_TIME: Duration = Duration::from_secs(30 * 60);
> const STORED_METRIC_GENERATIONS: u64 =
> @@ -89,7 +90,7 @@ pub fn get_all_metrics(start_time: i64) -> Result<Vec<MetricDataPoint>, Error> {
> pub(super) fn update_metrics(
> host: &HostStats,
> hostdisk: &DiskStat,
> - datastores: &[DiskStat],
> + datastores: &[DatastoreStats],
> ) -> Result<(), Error> {
> let mut points = MetricDataPoints::new(proxmox_time::epoch_i64());
>
> @@ -129,8 +130,11 @@ pub(super) fn update_metrics(
> update_disk_metrics(&mut points, hostdisk, "host");
>
> for stat in datastores {
> - let id = format!("datastore/{}", stat.name);
> - update_disk_metrics(&mut points, stat, &id);
> + let id = format!("datastore/{}", stat.disk.name);
> + update_disk_metrics(&mut points, &stat.disk, &id);
> + if let Some(stat) = &stat.s3_stats {
> + update_s3_metrics(&mut points, stat, &id);
> + }
> }
>
> get_cache()?.set(&points, Duration::from_secs(2))?;
> @@ -158,6 +162,12 @@ fn update_disk_metrics(points: &mut MetricDataPoints, disk: &DiskStat, id: &str)
> }
> }
>
> +fn update_s3_metrics(points: &mut MetricDataPoints, stat: &S3Statistics, id: &str) {
> + let id = format!("{id}/s3");
> + points.add(Gauge, &id, "uploaded", stat.uploaded as f64);
> + points.add(Gauge, &id, "downloaded", stat.downloaded as f64);
> +}
> +
> #[derive(Serialize, Deserialize)]
> struct MetricDataPoints {
> timestamp: i64,
> diff --git a/src/server/metric_collection/rrd.rs b/src/server/metric_collection/rrd.rs
> index 7b13b51ff..a0ea1a566 100644
> --- a/src/server/metric_collection/rrd.rs
> +++ b/src/server/metric_collection/rrd.rs
> @@ -13,10 +13,11 @@ use proxmox_rrd::rrd::{AggregationFn, Archive, DataSourceType, Database};
> use proxmox_rrd::Cache;
> use proxmox_sys::fs::CreateOptions;
>
> +use pbs_api_types::S3Statistics;
> use pbs_buildcfg::PROXMOX_BACKUP_STATE_DIR_M;
> use proxmox_rrd_api_types::{RrdMode, RrdTimeframe};
>
> -use super::{DiskStat, HostStats, NetdevType};
> +use super::{DatastoreStats, DiskStat, HostStats, NetdevType};
>
> const RRD_CACHE_BASEDIR: &str = concat!(PROXMOX_BACKUP_STATE_DIR_M!(), "/rrdb");
>
> @@ -148,7 +149,7 @@ fn update_derive(name: &str, value: f64) {
> }
> }
>
> -pub(super) fn update_metrics(host: &HostStats, hostdisk: &DiskStat, datastores: &[DiskStat]) {
> +pub(super) fn update_metrics(host: &HostStats, hostdisk: &DiskStat, datastores: &[DatastoreStats]) {
> if let Some(stat) = &host.proc {
> update_gauge("host/cpu", stat.cpu);
> update_gauge("host/iowait", stat.iowait_percent);
> @@ -182,8 +183,11 @@ pub(super) fn update_metrics(host: &HostStats, hostdisk: &DiskStat, datastores:
> update_disk_metrics(hostdisk, "host");
>
> for stat in datastores {
> - let rrd_prefix = format!("datastore/{}", stat.name);
> - update_disk_metrics(stat, &rrd_prefix);
> + let rrd_prefix = format!("datastore/{}", stat.disk.name);
> + update_disk_metrics(&stat.disk, &rrd_prefix);
> + if let Some(stats) = &stat.s3_stats {
> + update_s3_metrics(stats, &rrd_prefix);
> + }
> }
> }
>
> @@ -212,3 +216,25 @@ fn update_disk_metrics(disk: &DiskStat, rrd_prefix: &str) {
> update_derive(&rrd_key, (stat.io_ticks as f64) / 1000.0);
> }
> }
> +
> +fn update_s3_metrics(stats: &S3Statistics, rrd_prefix: &str) {
> + let rrd_key = format!("{rrd_prefix}/s3/total/uploaded");
> + update_gauge(&rrd_key, stats.uploaded as f64);
> + let rrd_key = format!("{rrd_prefix}/s3/total/downloaded");
> + update_gauge(&rrd_key, stats.downloaded as f64);
> + let rrd_key = format!("{rrd_prefix}/s3/uploaded");
> + update_derive(&rrd_key, stats.uploaded as f64);
> + let rrd_key = format!("{rrd_prefix}/s3/downloaded");
> + update_derive(&rrd_key, stats.downloaded as f64);
> +
> + let rrd_key = format!("{rrd_prefix}/s3/total/get");
> + update_gauge(&rrd_key, stats.get as f64);
> + let rrd_key = format!("{rrd_prefix}/s3/total/put");
> + update_gauge(&rrd_key, stats.put as f64);
> + let rrd_key = format!("{rrd_prefix}/s3/total/post");
> + update_gauge(&rrd_key, stats.post as f64);
> + let rrd_key = format!("{rrd_prefix}/s3/total/head");
> + update_gauge(&rrd_key, stats.head as f64);
> + let rrd_key = format!("{rrd_prefix}/s3/total/delete");
> + update_gauge(&rrd_key, stats.delete as f64);
> +}
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH v1 2/6] s3-client: add persistent shared request counters for client
2026-02-11 12:41 ` Christian Ebner
@ 2026-02-12 9:55 ` Robert Obkircher
2026-02-12 10:19 ` Christian Ebner
2026-02-13 13:37 ` Christian Ebner
0 siblings, 2 replies; 26+ messages in thread
From: Robert Obkircher @ 2026-02-12 9:55 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
On 2/11/26 13:40, Christian Ebner wrote:
> On 2/11/26 1:12 PM, Robert Obkircher wrote:
>>
>> On 2/9/26 10:14, Christian Ebner wrote:
>>> Implements atomic counters for API requests successfully sent to the
>>> S3 API endpoint via the client, accounting for individual requests
>>> discriminated by their method.
>>>
>>> Since multiple client instances might exist accessing the API
>>> concurrently, even in different processes, provide the atomic
>>> counters via shared memory mapping. This follows along the lines of
>>> the shared traffic limiter implementation.
>>>
>>> The counter mappings are conditionally constructed on client
>>> instantiation based on caller-given configuration options.
>>>
>>> Keep sequential ordering of the counters with the intent to use them
>>> for statistics, soft limits and/or notifications within Proxmox
>>> Backup Server.
>>>
>>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>>> ---
>>> proxmox-s3-client/Cargo.toml | 4 +
>>> proxmox-s3-client/debian/control | 2 +
>>> proxmox-s3-client/examples/s3_client.rs | 1 +
>>> proxmox-s3-client/src/client.rs | 48 +++++-
>>> proxmox-s3-client/src/lib.rs | 7 +-
>>>    .../src/shared_request_counters.rs       | 152 ++++++++++++++++++
>>> 6 files changed, 212 insertions(+), 2 deletions(-)
>>> create mode 100644 proxmox-s3-client/src/shared_request_counters.rs
>>>
>>> diff --git a/proxmox-s3-client/Cargo.toml b/proxmox-s3-client/Cargo.toml
>>> index a50fa715..1e31bca4 100644
>>> --- a/proxmox-s3-client/Cargo.toml
>>> +++ b/proxmox-s3-client/Cargo.toml
>>> @@ -38,7 +38,9 @@ proxmox-base64 = { workspace = true, optional =
>>> true }
>>> proxmox-http = { workspace = true, features = [ "body",
>>> "client", "client-trait" ], optional = true }
>>> proxmox-human-byte.workspace = true
>>> proxmox-rate-limiter = { workspace = true, features = [
>>> "rate-limiter", "shared-rate-limiter" ], optional = true }
>>> +proxmox-shared-memory = { workspace = true, optional = true }
>>> proxmox-schema = { workspace = true, features = [ "api-macro",
>>> "api-types" ] }
>>> +proxmox-sys = { workspace = true, optional = true }
>>> proxmox-serde.workspace = true
>>> proxmox-time = {workspace = true, optional = true }
>>> @@ -65,6 +67,8 @@ impl = [
>>> "dep:proxmox-base64",
>>> "dep:proxmox-http",
>>> "dep:proxmox-rate-limiter",
>>> + "dep:proxmox-shared-memory",
>>> + "dep:proxmox-sys",
>>> "dep:proxmox-time",
>>> ]
>>> diff --git a/proxmox-s3-client/debian/control b/proxmox-s3-client/debian/control
>>> index 33418881..a534a107 100644
>>> --- a/proxmox-s3-client/debian/control
>>> +++ b/proxmox-s3-client/debian/control
>>> @@ -85,6 +85,8 @@ Depends:
>>> librust-proxmox-rate-limiter-1+default-dev,
>>> librust-proxmox-rate-limiter-1+rate-limiter-dev,
>>> librust-proxmox-rate-limiter-1+shared-rate-limiter-dev,
>>> + librust-proxmox-shared-memory-1+default-dev,
>>> + librust-proxmox-sys-1+default-dev,
>>> librust-proxmox-time-2+default-dev (>= 2.1.0-~~),
>>> librust-quick-xml-0.36+async-tokio-dev (>= 0.36.1-~~),
>>> librust-quick-xml-0.36+default-dev (>= 0.36.1-~~),
>>> diff --git a/proxmox-s3-client/examples/s3_client.rs b/proxmox-s3-client/examples/s3_client.rs
>>> index ca69971c..329de47a 100644
>>> --- a/proxmox-s3-client/examples/s3_client.rs
>>> +++ b/proxmox-s3-client/examples/s3_client.rs
>>> @@ -40,6 +40,7 @@ async fn run() -> Result<(), anyhow::Error> {
>>> put_rate_limit: None,
>>> provider_quirks: Vec::new(),
>>> rate_limiter_config: None,
>>> + request_counter_config: None,
>>> };
>>> // Creating a client instance and connect to api endpoint
>>> diff --git a/proxmox-s3-client/src/client.rs b/proxmox-s3-client/src/client.rs
>>> index 83176b39..4a6a702b 100644
>>> --- a/proxmox-s3-client/src/client.rs
>>> +++ b/proxmox-s3-client/src/client.rs
>>> @@ -1,5 +1,6 @@
>>> use std::path::{Path, PathBuf};
>>> use std::str::FromStr;
>>> +use std::sync::atomic::Ordering;
>>> use std::sync::{Arc, Mutex};
>>> use std::time::{Duration, Instant};
>>> @@ -32,6 +33,7 @@ use crate::response_reader::{
>>> CopyObjectResponse, DeleteObjectsResponse,
>>> GetObjectResponse, HeadObjectResponse,
>>> ListBucketsResponse, ListObjectsV2Response,
>>> PutObjectResponse, ResponseReader,
>>> };
>>> +use crate::shared_request_counters::SharedRequestCounters;
>>> /// Default timeout for s3 api requests.
>>> pub const S3_HTTP_REQUEST_TIMEOUT: Duration =
>>> Duration::from_secs(30 * 60);
>>> @@ -74,6 +76,21 @@ pub struct S3RateLimiterConfig {
>>> burst_out: Option<u64>,
>>> }
>>> +/// Options for the s3 client's shared request counters
>>> +pub struct S3RequestCounterOptions {
>>> + /// ID for the memory mapped file
>>> + pub id: String,
>>> + /// Base path for the shared memory mapped file
>>> + pub base_path: PathBuf,
>>> +    /// User for the to-be-created shared memory mapped file and folders
>>> + pub user: User,
>>> +}
>>> +
>>> +/// Configuration for the s3 client's shared request counters
>>> +pub struct S3RequestCounterConfig {
>>> + options: S3RequestCounterOptions,
>>> +}
>>> +
>>> /// Configuration options for client
>>> pub struct S3ClientOptions {
>>> /// Endpoint to access S3 object store.
>>> @@ -100,6 +117,8 @@ pub struct S3ClientOptions {
>>> pub provider_quirks: Vec<ProviderQuirks>,
>>> /// Configuration options for the shared rate limiter.
>>> pub rate_limiter_config: Option<S3RateLimiterConfig>,
>>> +    /// Configuration options for the client's shared request counters.
>>> + pub request_counter_config: Option<S3RequestCounterConfig>,
>>> }
>>> impl S3ClientOptions {
>>> @@ -110,6 +129,7 @@ impl S3ClientOptions {
>>> bucket: Option<String>,
>>> common_prefix: String,
>>> rate_limiter_options: Option<S3RateLimiterOptions>,
>>> + request_counter_options: Option<S3RequestCounterOptions>,
>>> ) -> Self {
>>> let rate_limiter_config =
>>> rate_limiter_options.map(|options| S3RateLimiterConfig {
>>> options,
>>> @@ -118,6 +138,8 @@ impl S3ClientOptions {
>>> rate_out: config.rate_out.map(|human_bytes|
>>> human_bytes.as_u64()),
>>> burst_out: config.burst_out.map(|human_bytes|
>>> human_bytes.as_u64()),
>>> });
>>> + let request_counter_config =
>>> +            request_counter_options.map(|options| S3RequestCounterConfig { options });
>>> Self {
>>> endpoint: config.endpoint,
>>> port: config.port,
>>> @@ -131,6 +153,7 @@ impl S3ClientOptions {
>>> put_rate_limit: config.put_rate_limit,
>>> provider_quirks:
>>> config.provider_quirks.unwrap_or_default(),
>>> rate_limiter_config,
>>> + request_counter_config,
>>> }
>>> }
>>> }
>>> @@ -141,6 +164,7 @@ pub struct S3Client {
>>> options: S3ClientOptions,
>>> authority: Authority,
>>> put_rate_limiter: Option<Arc<Mutex<RateLimiter>>>,
>>> + request_counters: Option<Arc<SharedRequestCounters>>,
>>> }
>>> impl S3Client {
>>> @@ -213,6 +237,21 @@ impl S3Client {
>>> }
>>> }
>>> +        let request_counters = if let Some(config) = options.request_counter_config.as_ref() {
>>> + let path = config
>>> + .options
>>> + .base_path
>>> + .join(format!("{}.shmem", config.options.id));
>>> +            let request_counters = SharedRequestCounters::open_shared_memory_mapped(
>>> + &path,
>>> + config.options.user.clone(),
>>> + )
>>> + .context("failed to mmap shared S3 request counters")?;
>>> + Some(Arc::new(request_counters))
>>> + } else {
>>> + None
>>> + };
>>> +
>>>          let client = Client::builder(TokioExecutor::new()).build::<_, Body>(https_connector);
>>>          let authority_template = if let Some(port) = options.port {
>>> @@ -241,6 +280,7 @@ impl S3Client {
>>> options,
>>> authority,
>>> put_rate_limiter,
>>> + request_counters,
>>> })
>>> }
>>> @@ -392,7 +432,13 @@ impl S3Client {
>>> };
>>> match response {
>>> - Ok(Ok(response)) => return Ok(response),
>>> + Ok(Ok(response)) => {
>>> +                if let Some(counters) = self.request_counters.as_ref() {
>>> +                    let _prev = counters.increment(parts.method.clone(), Ordering::SeqCst);
>>> + }
>>> +
>>> + return Ok(response);
>>> + }
>>> Ok(Err(err)) => {
>>> if retry >= MAX_S3_HTTP_REQUEST_RETRY - 1 {
>>> return Err(err.into());
>>> diff --git a/proxmox-s3-client/src/lib.rs b/proxmox-s3-client/src/lib.rs
>>> index d02fd0dc..ceee41a2 100644
>>> --- a/proxmox-s3-client/src/lib.rs
>>> +++ b/proxmox-s3-client/src/lib.rs
>>> @@ -21,7 +21,8 @@ pub use aws_sign_v4::uri_decode;
>>> mod client;
>>> #[cfg(feature = "impl")]
>>> pub use client::{
>>> -    S3Client, S3ClientOptions, S3PathPrefix, S3RateLimiterOptions, S3_HTTP_REQUEST_TIMEOUT,
>>> +    S3Client, S3ClientOptions, S3PathPrefix, S3RateLimiterOptions, S3RequestCounterOptions,
>>> +    S3_HTTP_REQUEST_TIMEOUT,
>>> };
>>> #[cfg(feature = "impl")]
>>> mod timestamps;
>>> @@ -33,3 +34,7 @@ mod object_key;
>>> pub use object_key::S3ObjectKey;
>>> #[cfg(feature = "impl")]
>>> mod response_reader;
>>> +#[cfg(feature = "impl")]
>>> +mod shared_request_counters;
>>> +#[cfg(feature = "impl")]
>>> +pub use shared_request_counters::SharedRequestCounters;
>>> diff --git a/proxmox-s3-client/src/shared_request_counters.rs b/proxmox-s3-client/src/shared_request_counters.rs
>>> new file mode 100644
>>> index 00000000..b236490b
>>> --- /dev/null
>>> +++ b/proxmox-s3-client/src/shared_request_counters.rs
>>> @@ -0,0 +1,152 @@
>>> +use std::mem::MaybeUninit;
>>> +use std::path::Path;
>>> +use std::sync::atomic::{AtomicU64, Ordering};
>>> +
>>> +use anyhow::{bail, Error};
>>> +use hyper::http::method::Method;
>>> +use nix::sys::stat::Mode;
>>> +use nix::unistd::User;
>>> +
>>> +use proxmox_shared_memory::{Init, SharedMemory};
>>> +use proxmox_sys::fs::CreateOptions;
>>> +
>>> +const MEMORY_PAGE_SIZE: usize = 4096;
>>> +/// Generated via openssl::sha::sha256(b"Proxmox shared request counters v1.0")[0..8]
>>> +const PROXMOX_SHARED_REQUEST_COUNTERS_1_0: [u8; 8] = [224, 110, 88, 252, 26, 77, 180, 5];
>>> +
>>> +#[repr(C)]
>>> +#[derive(Default)]
>>> +struct RequestCounters {
>>> + delete: AtomicU64,
>>> + get: AtomicU64,
>>> + head: AtomicU64,
>>> + post: AtomicU64,
>>> + put: AtomicU64,
>> It probably doesn't matter (especially not with SeqCst), but aligning
>> each field to 64 bytes would avoid false sharing.
>
> Can you elaborate a bit on this? Not sure if I understand what the
> benefits for this would be.
If those 5 variables share the same cache line, then only one core can
write to any of them at a time, because exclusive ownership of that cache
line bounces back and forth between cores. I don't think it makes a
measurable difference for counting requests though.
>
>>> +}
>>> +
>>> +impl Init for RequestCounters {
>>> + fn initialize(this: &mut MaybeUninit<Self>) {
>>> + unsafe {
>>> + let this = &mut *this.as_mut_ptr();
>>> + *this = RequestCounters::default();
>> If the memory is actually uninitialized this is undefined behavior
>> because `*this =` drops the previous value. The memory from mmap is
>> initialized, but I think it would still be better to use ptr::write.
>
> Okay, can adapt this accordingly, thanks!
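>
> For reference, a minimal sketch of what that could look like, e.g. via
> `MaybeUninit::write` (which boils down to `ptr::write` and does not drop the
> old contents); this is just an illustration, not the final v2 code:
>
> impl Init for RequestCounters {
>     fn initialize(this: &mut MaybeUninit<Self>) {
>         // Writes the default value without reading or dropping the
>         // possibly uninitialized previous contents.
>         this.write(RequestCounters::default());
>     }
> }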
>
>>> + }
>>> + }
>>> +}
>>> +
>>> +impl RequestCounters {
>>> +    /// Increment the counter for given method, following the provided memory ordering constraints.
>>> +    ///
>>> +    /// Returns the previously stored value.
>>> +    pub fn increment(&self, method: Method, ordering: Ordering) -> u64 {
>>> + match method {
>>> + Method::DELETE => self.delete.fetch_add(1, ordering),
>>> + Method::GET => self.get.fetch_add(1, ordering),
>>> + Method::HEAD => self.head.fetch_add(1, ordering),
>>> + Method::POST => self.post.fetch_add(1, ordering),
>>> + Method::PUT => self.put.fetch_add(1, ordering),
>>> + _ => 0,
>>> + }
>>> + }
>>> +
>>> +    /// Load current counter state for given method, following the provided memory ordering constraints
>>> + pub fn load(&self, method: Method, ordering: Ordering) -> u64 {
>>> + match method {
>>> + Method::DELETE => self.delete.load(ordering),
>>> + Method::GET => self.get.load(ordering),
>>> + Method::HEAD => self.head.load(ordering),
>>> + Method::POST => self.post.load(ordering),
>>> + Method::PUT => self.put.load(ordering),
>>> + _ => 0,
>>> + }
>>> + }
>>> +
>>> +    /// Reset all counters, following the provided memory ordering constraints
>>> + pub fn reset(&self, ordering: Ordering) {
>>> + self.delete.store(0, ordering);
>>> + self.get.store(0, ordering);
>>> + self.head.store(0, ordering);
>>> + self.post.store(0, ordering);
>>> + self.put.store(0, ordering);
>>> + }
>>> +}
>>> +
>>> +#[repr(C)]
>>> +struct MappableRequestCounters {
>>> + magic: [u8; 8],
>>> + counters: RequestCounters,
>>> + _page_size_padding: [u8; MEMORY_PAGE_SIZE
>>> + - PROXMOX_SHARED_REQUEST_COUNTERS_1_0.len()
>>> + - std::mem::size_of::<RequestCounters>()],
>> Why do we need to hard-code a possibly incorrect assumption about the
>> page size? I know `SharedMemory` claims that the size needs to be a
>> multiple of 4096, but I don't understand why.
>
> This padding was chosen exactly because of that doc comment and
> since other users of `SharedMemory` do the same 4k page size padding.
>
> Will however have a look at the implementation details there, now
> that you sparked my interest, maybe that sheds some light.
>
> Or maybe you already have further details since you are working on
> the patches to make the code compatible with kernels with
> non-default page sizes?
That's why I was thinking about it, but I don't have further details yet.
>> I guess this implicitly prevents unused padding at the end of the
>> struct which would be problematic if we add a field and run two
>> processes with different versions at the same time, but that doesn't
>> explain the page size.
>
>> Being able to extend the struct with further fields in the future has
>> proven to be beneficial, yes, e.g. for the token shadow config version as
> mentioned in
> https://lore.proxmox.com/pbs-devel/20260121151408.731516-2-s.rufinatscha@proxmox.com/T/
>
>>> +}
>>> +
>>> +impl Init for MappableRequestCounters {
>>> + fn initialize(this: &mut MaybeUninit<Self>) {
>>> + unsafe {
>>> + let this = &mut *this.as_mut_ptr();
>>> + this.magic = PROXMOX_SHARED_REQUEST_COUNTERS_1_0;
>>> + this.counters = RequestCounters::default();
>>> + }
>>> + }
>>> +
>>> +    fn check_type_magic(this: &MaybeUninit<Self>) -> Result<(), Error> {
>>> + unsafe {
>>> + let this = &*this.as_ptr();
>>> + if this.magic != PROXMOX_SHARED_REQUEST_COUNTERS_1_0 {
>>> +                bail!("incorrect magic number for request counters detected");
>>> + }
>>> + proxmox_shared_memory::check_subtype(&this.counters)?;
>>> + }
>>> + Ok(())
>>> + }
>>> +}
>>> +
>>> +/// Atomic counters storing per-request method counts for the client.
>>> +///
>>> +/// If set, the counts can be filtered based on a path prefix.
>>> +pub struct SharedRequestCounters {
>>> + shared_memory: SharedMemory<MappableRequestCounters>,
>>> +}
>>> +
>>> +impl SharedRequestCounters {
>>> + /// Create a new shared counter instance.
>>> + ///
>>> +    /// Opens or creates mmap file and accesses it via shared memory mapping.
>>> +    pub fn open_shared_memory_mapped<P: AsRef<Path>>(path: P, user: User) -> Result<Self, Error> {
>>> + let path = path.as_ref();
>>> + if let Some(parent) = path.parent() {
>>> + let dir_opts = CreateOptions::new()
>>> + .perm(Mode::from_bits_truncate(0o770))
>>> + .owner(user.uid)
>>> + .group(user.gid);
>>> +
>>> +            proxmox_sys::fs::create_path(parent, Some(dir_opts), Some(dir_opts))?;
>>> + }
>>> +
>>> + let file_opts = CreateOptions::new()
>>> + .perm(Mode::from_bits_truncate(0o660))
>>> + .owner(user.uid)
>>> + .group(user.gid);
>>> + let shared_memory = SharedMemory::open(path, file_opts)?;
>>> + Ok(Self { shared_memory })
>>> + }
>>> +
>>> +    /// Increment the counter for given method, following the provided memory ordering constraints
>>> +    ///
>>> +    /// Returns the previously stored value.
>>> +    pub fn increment(&self, method: Method, ordering: Ordering) -> u64 {
>>> + self.shared_memory
>>> + .data()
>>> + .counters
>>> + .increment(method, ordering)
>>> + }
>>> +
>>> +    /// Load current counter state for given method, following the provided memory ordering constraints
>>> + pub fn load(&self, method: Method, ordering: Ordering) -> u64 {
>>> + self.shared_memory.data().counters.load(method, ordering)
>>> + }
>>> +
>>> +    /// Reset all counters, following the provided memory ordering constraints
>>> + pub fn reset(&self, ordering: Ordering) {
>>> + self.shared_memory.data().counters.reset(ordering)
>>> + }
>>> +}
>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH v1 2/6] s3-client: add persistent shared request counters for client
2026-02-12 9:55 ` Robert Obkircher
@ 2026-02-12 10:19 ` Christian Ebner
2026-02-13 13:37 ` Christian Ebner
1 sibling, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-12 10:19 UTC (permalink / raw)
To: Robert Obkircher, pbs-devel
On 2/12/26 10:54 AM, Robert Obkircher wrote:
>
> On 2/11/26 13:40, Christian Ebner wrote:
>>
>> Can you elaborate a bit on this? Not sure if I understand what the
>> benefits for this would be.
> If those 5 variables share the same cache line, then only one core can
> write to any of them at a time, because exclusive ownership of that cache
> line bounces back and forth between cores. I don't think it makes a
> measurable difference for counting requests though.
Thanks a lot for the input!
I already looked a bit into it yesterday after you sparked my interest:
it might indeed make sense to pad the items to 64 bytes so they end up in
different cache lines.
I will therefore create a wrapper type for the AtomicU64 counters and
align them using the alignment modifier [0].
Also, I am not sure anymore whether `Ordering::SeqCst` is required; this
could probably be relaxed to `Ordering::AcqRel` for the given use case.
[0]
https://doc.rust-lang.org/reference/type-layout.html#the-alignment-modifiers
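For illustration, a minimal sketch of such a wrapper (assuming 64-byte cache
lines, which is the common case on x86_64; the real v2 type may look different,
and the page-size padding would of course have to account for the larger struct):

use std::sync::atomic::AtomicU64;

/// Puts each counter on its own cache line to avoid false sharing.
#[repr(C, align(64))]
#[derive(Default)]
struct CacheLineAligned(AtomicU64);

#[repr(C)]
#[derive(Default)]
struct RequestCounters {
    delete: CacheLineAligned,
    get: CacheLineAligned,
    head: CacheLineAligned,
    post: CacheLineAligned,
    put: CacheLineAligned,
}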
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH v1 2/6] s3-client: add persistent shared request counters for client
2026-02-12 9:55 ` Robert Obkircher
2026-02-12 10:19 ` Christian Ebner
@ 2026-02-13 13:37 ` Christian Ebner
1 sibling, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-13 13:37 UTC (permalink / raw)
To: Robert Obkircher, pbs-devel
On 2/12/26 10:54 AM, Robert Obkircher wrote:
> On 2/11/26 13:40, Christian Ebner wrote:
>> On 2/11/26 1:12 PM, Robert Obkircher wrote:
>>> On 2/9/26 10:14, Christian Ebner wrote:
[...]
>>>> +
>>>> +#[repr(C)]
>>>> +struct MappableRequestCounters {
>>>> + magic: [u8; 8],
>>>> + counters: RequestCounters,
>>>> + _page_size_padding: [u8; MEMORY_PAGE_SIZE
>>>> + - PROXMOX_SHARED_REQUEST_COUNTERS_1_0.len()
>>>> + - std::mem::size_of::<RequestCounters>()],
>>> Why do we need to hard-code a possibly incorrect assumption about the
>>> page size? I know `SharedMemory` claims that the size needs to be a
>>> multiple of 4096, but I don't understand why.
>>
>> This padding was chosen exactly because of that doc comment and
>> since other users of `SharedMemory` do the same 4k page size padding.
>>
>> Will however have a look at the implementation details there, now
>> that you sparked my interest, maybe that sheds some light.
>>
>> Or maybe you already have further details since you are working on
>> the patches to make the code compatible with kernels with
>> non-default page sizes?
> That's why I was thinking about it, but I don't have further details yet.
>>> I guess this implicitly prevents unused padding at the end of the
>>> struct which would be problematic if we add a field and run two
>>> processes with different versions at the same time, but that doesn't
>>> explain the page size.
>>
>> Being able to extend the struct with further fields in the future has
>> proven to be beneficial, yes, e.g. for the token shadow config version as
>> mentioned in
>> https://lore.proxmox.com/pbs-devel/20260121151408.731516-2-s.rufinatscha@proxmox.com/T/
>>
So I took a closer look at the implementation: the page-size check for the
length is actually enforced there, not just suggested.
Also, CC'ing Dietmar as the author of the code, maybe he can tell us
more about the reasons why.
Here is what I found:
proxmox_shared_memory::SharedMemory mmaps the file on init [0] using
the implementation of proxmox_sys::mmap::Mmap::map_fd() [1], passing,
however, a hard-coded `offset` of 0.
This internally calls nix::sys::mman::mmap(), passing no address (so the
kernel is free to choose one; a given address would have to be page-aligned),
and passing along the zero offset set before.
The man page [2] for mmap states that `offset` must be a multiple of the
page size as returned by `sysconf(_SC_PAGE_SIZE)`, and that the contents
of a file mapping are initialized for length bytes starting from the given
offset. So with the hard-coded 0 this will always be fine.
For the `length` the only constraint is that it must be greater than zero:
"The length argument specifies the length of the mapping (which must be
greater than 0)."
It is, however, also specified [3] that:
"The system shall always zero-fill any partial page at the end of an
object. Further, the system shall never write out any modified portions
of the last page of an object which are beyond its end. References
within the address range starting at pa and continuing for len bytes to
whole pages following the end of an object shall result in delivery of a
SIGBUS signal."
So I guess that the chosen constraint on the length is simply best
practice to avoid writing to possibly invalid parts of the mappings.
Given that, I would either opt to keep the PAGE_SIZE constant as
implemented above or directly switch to reading it from
sysconf(_SC_PAGE_SIZE), in an effort to facilitate any transition to
Linux kernels using non-default page sizes.
What are your thoughts on that?
[0]
https://git.proxmox.com/?p=proxmox.git;a=blob;f=proxmox-shared-memory/src/lib.rs;h=b067d1b96e73740e82a5b7be922edb7ba162c650;hb=HEAD#l58
[1]
https://git.proxmox.com/?p=proxmox.git;a=blob;f=proxmox-sys/src/mmap.rs;h=8b8ad8a6ab7c102c156313e29e96816140912b44;hb=HEAD#l30
[2] https://man7.org/linux/man-pages/man2/mmap.2.html
[3] https://pubs.opengroup.org/onlinepubs/9799919799/
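For reference, a minimal sketch of reading the page size at runtime via nix
(already a dependency); note this yields a runtime value, so it could not
directly size the const padding array as currently written:

use nix::unistd::{sysconf, SysconfVar};

/// Query the kernel's page size, falling back to the common 4 KiB default.
fn page_size() -> usize {
    sysconf(SysconfVar::PAGE_SIZE)
        .ok()
        .flatten()
        .map(|v| v as usize)
        .unwrap_or(4096)
}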
^ permalink raw reply [flat|nested] 26+ messages in thread
* superseded: [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
` (17 preceding siblings ...)
2026-02-09 9:39 ` [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
@ 2026-02-16 12:15 ` Christian Ebner
18 siblings, 0 replies; 26+ messages in thread
From: Christian Ebner @ 2026-02-16 12:15 UTC (permalink / raw)
To: pbs-devel
superseded-by version 2:
https://lore.proxmox.com/pbs-devel/20260216121406.99617-1-c.ebner@proxmox.com/T/
^ permalink raw reply [flat|nested] 26+ messages in thread
end of thread, other threads:[~2026-02-16 12:14 UTC | newest]
Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-09 9:15 [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
2026-02-09 9:15 ` [PATCH v1 1/6] shared-memory: drop check for mmap file being located on tmpfs Christian Ebner
2026-02-09 9:15 ` [PATCH v1 2/6] s3-client: add persistent shared request counters for client Christian Ebner
2026-02-11 12:13 ` Robert Obkircher
2026-02-11 12:41 ` Christian Ebner
2026-02-12 9:55 ` Robert Obkircher
2026-02-12 10:19 ` Christian Ebner
2026-02-13 13:37 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 3/6] s3-client: add counters for upload/download traffic Christian Ebner
2026-02-09 9:15 ` [PATCH v1 4/6] s3-client: account for upload traffic on successful request sending Christian Ebner
2026-02-09 9:15 ` [PATCH v1 5/6] s3-client: account for downloaded bytes in incoming response body Christian Ebner
2026-02-09 9:15 ` [PATCH v1 6/6] pbs-api-types: define api type for s3 request statistics Christian Ebner
2026-02-09 9:15 ` [PATCH v1 01/11] datastore: collect request statistics for s3 backed datastores Christian Ebner
2026-02-09 9:15 ` [PATCH v1 02/11] datastore: expose request counters " Christian Ebner
2026-02-09 9:15 ` [PATCH v1 03/11] api: s3: add endpoint to reset s3 request counters Christian Ebner
2026-02-09 9:15 ` [PATCH v1 04/11] bin: s3: expose request counter reset method as cli command Christian Ebner
2026-02-09 9:15 ` [PATCH v1 05/11] datastore: add helper method to get datastore backend type Christian Ebner
2026-02-09 9:15 ` [PATCH v1 06/11] ui: improve variable name indirectly fixing typo Christian Ebner
2026-02-09 9:15 ` [PATCH v1 07/11] ui: datastore summary: move store to be part of summary panel Christian Ebner
2026-02-09 9:15 ` [PATCH v1 08/11] ui: expose s3 request counter statistics in the datastore summary Christian Ebner
2026-02-09 9:15 ` [PATCH v1 09/11] metrics: collect s3 datastore statistics as rrd metrics Christian Ebner
2026-02-11 16:29 ` Christian Ebner
2026-02-09 9:15 ` [PATCH v1 10/11] api: admin: expose s3 statistics in datastore rrd data Christian Ebner
2026-02-09 9:15 ` [PATCH v1 11/11] partially fix #6563: ui: expose s3 rrd charts in datastore summary Christian Ebner
2026-02-09 9:39 ` [PATCH v1 00/17] partially fix #6563: add s3 request and traffic counter statistics Christian Ebner
2026-02-16 12:15 ` superseded: " Christian Ebner