* [PATCH proxmox v3 1/7] pbs-api-types: derive FromStr for DatastoreTuning parsing
2026-05-12 13:10 [PATCH proxmox{,-backup} v3 0/7] fix sync level updates for chunk store Christian Ebner
@ 2026-05-12 13:10 ` Christian Ebner
2026-05-12 13:10 ` [PATCH proxmox-backup v3 2/7] datastore: GC: avoid double parsing of datastore tuning options Christian Ebner
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Christian Ebner @ 2026-05-12 13:10 UTC (permalink / raw)
To: pbs-devel
This facilitates parsing of the datastore tuning options for PBS,
allowing call sites to simply use str::parse().
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-api-types/src/datastore.rs | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index 098d2b7c..8d50e7ae 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -307,6 +307,16 @@ pub struct DatastoreTuning {
pub default_verification_readers: Option<usize>,
}
+impl FromStr for DatastoreTuning {
+ type Err = Error;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ let tuning: DatastoreTuning =
+ proxmox_schema::property_string::parse_with_schema(s, &DatastoreTuning::API_SCHEMA)?;
+ Ok(tuning)
+ }
+}
+
pub const DATASTORE_TUNING_STRING_SCHEMA: Schema = StringSchema::new("Datastore tuning options")
.format(&ApiStringFormat::PropertyString(
&DatastoreTuning::API_SCHEMA,
--
2.47.3
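The FromStr implementation above lets any call site obtain the tuning options via str::parse(). A minimal standalone sketch of the same pattern — with an illustrative Tuning type and keys, and a hand-rolled parser in place of proxmox_schema's property-string parser:

```rust
use std::str::FromStr;

// Illustrative stand-in for DatastoreTuning; the real type is parsed with
// proxmox_schema::property_string::parse_with_schema against its API schema.
#[derive(Debug, Default, PartialEq)]
struct Tuning {
    gc_cache_capacity: Option<usize>,
    sync_level: Option<String>,
}

impl FromStr for Tuning {
    type Err = String;

    // Parse a "key=value,key=value" property string into the struct.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let mut tuning = Tuning::default();
        for pair in s.split(',').filter(|p| !p.is_empty()) {
            let (key, value) = pair
                .split_once('=')
                .ok_or_else(|| format!("missing '=' in '{pair}'"))?;
            match key {
                "gc-cache-capacity" => {
                    tuning.gc_cache_capacity =
                        Some(value.parse().map_err(|e| format!("{e}"))?);
                }
                "sync-level" => tuning.sync_level = Some(value.to_string()),
                _ => return Err(format!("unknown key '{key}'")),
            }
        }
        Ok(tuning)
    }
}

fn main() {
    // With FromStr in place, callers simply use str::parse():
    let tuning: Tuning = "gc-cache-capacity=1024,sync-level=filesystem"
        .parse()
        .unwrap();
    assert_eq!(tuning.gc_cache_capacity, Some(1024));
    // The empty property string yields all defaults:
    assert_eq!("".parse::<Tuning>().unwrap(), Tuning::default());
    println!("ok");
}
```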
* [PATCH proxmox-backup v3 2/7] datastore: GC: avoid double parsing of datastore tuning options
From: Christian Ebner @ 2026-05-12 13:10 UTC (permalink / raw)
To: pbs-devel
The tuning options are currently parsed twice from the config during
garbage collection. This is not needed, however, as the same unchanged
config is used in both cases.
Use the first parsed tuning option instance instead.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 4 ----
1 file changed, 4 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 34efcd398..e39bdeb0a 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -2438,10 +2438,6 @@ impl DataStore {
);
}
- let tuning: DatastoreTuning = serde_json::from_value(
- DatastoreTuning::API_SCHEMA
- .parse_property_string(gc_store_config.tuning.as_deref().unwrap_or(""))?,
- )?;
let gc_cache_capacity = if let Some(capacity) = tuning.gc_cache_capacity {
info!("Using chunk digest cache capacity of {capacity}.");
capacity
--
2.47.3
* [PATCH proxmox-backup v3 3/7] pbs-config/datastore: add common helper for parsing tuning options
From: Christian Ebner @ 2026-05-12 13:10 UTC (permalink / raw)
To: pbs-devel
The datastore tuning options are parsed in several places. Add and use
a common helper, just like for the datastore backend config.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-config/src/datastore.rs | 12 +++++++++++-
pbs-datastore/src/datastore.rs | 25 ++++++-------------------
src/api2/config/datastore.rs | 12 ++++--------
3 files changed, 21 insertions(+), 28 deletions(-)
diff --git a/pbs-config/src/datastore.rs b/pbs-config/src/datastore.rs
index d75d6c557..704841fb2 100644
--- a/pbs-config/src/datastore.rs
+++ b/pbs-config/src/datastore.rs
@@ -6,7 +6,7 @@ use anyhow::Error;
use proxmox_schema::{AllOfSchema, ApiType};
use proxmox_section_config::{SectionConfig, SectionConfigData, SectionConfigPlugin};
-use pbs_api_types::{DataStoreConfig, DatastoreBackendConfig, DATASTORE_SCHEMA};
+use pbs_api_types::{DataStoreConfig, DatastoreBackendConfig, DatastoreTuning, DATASTORE_SCHEMA};
use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard, ConfigVersionCache};
@@ -119,6 +119,16 @@ pub fn parse_backend_config(config: &DataStoreConfig) -> Result<DatastoreBackend
config.backend.as_deref().unwrap_or("").parse()
}
+/// Parse the datastore tuning options from a datastore config.
+pub fn parse_datastore_tuning_options(config: &DataStoreConfig) -> Result<DatastoreTuning, Error> {
+ config
+ .tuning
+ .as_deref()
+ .unwrap_or("")
+ .parse()
+ .map_err(Error::from)
+}
+
/// Returns the datastore backend type from its name.
pub fn datastore_backend_type(store: &str) -> Result<pbs_api_types::DatastoreBackendType, Error> {
let (config, _) = self::config()?;
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index e39bdeb0a..b589ff1e0 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -33,8 +33,8 @@ use proxmox_worker_task::WorkerTaskContext;
use pbs_api_types::{
ArchiveType, Authid, BackupGroupDeleteStats, BackupNamespace, BackupType, ChunkOrder,
DataStoreConfig, DatastoreBackendConfig, DatastoreBackendType, DatastoreFSyncLevel,
- DatastoreTuning, GarbageCollectionCacheStats, GarbageCollectionStatus, MaintenanceMode,
- MaintenanceType, Operation, S3Statistics, MAX_NAMESPACE_DEPTH, UPID,
+ GarbageCollectionCacheStats, GarbageCollectionStatus, MaintenanceMode, MaintenanceType,
+ Operation, S3Statistics, MAX_NAMESPACE_DEPTH, UPID,
};
use pbs_config::s3::S3_CFG_TYPE_ID;
use pbs_config::{BackupLockGuard, ConfigVersionCache};
@@ -599,10 +599,7 @@ impl DataStore {
}
Arc::clone(&datastore.chunk_store)
} else {
- let tuning: DatastoreTuning = serde_json::from_value(
- DatastoreTuning::API_SCHEMA
- .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
- )?;
+ let tuning = pbs_config::datastore::parse_datastore_tuning_options(&config)?;
Arc::new(ChunkStore::open(
lookup.name,
config.absolute_path(),
@@ -691,10 +688,7 @@ impl DataStore {
ensure_datastore_is_mounted(&config)?;
- let tuning: DatastoreTuning = serde_json::from_value(
- DatastoreTuning::API_SCHEMA
- .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
- )?;
+ let tuning = pbs_config::datastore::parse_datastore_tuning_options(&config)?;
let chunk_store = ChunkStore::open(
&name,
config.absolute_path(),
@@ -735,11 +729,7 @@ impl DataStore {
GarbageCollectionStatus::default()
};
- let tuning: DatastoreTuning = serde_json::from_value(
- DatastoreTuning::API_SCHEMA
- .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
- )?;
-
+ let tuning = pbs_config::datastore::parse_datastore_tuning_options(&config)?;
let backend_config = pbs_config::datastore::parse_backend_config(&config)?;
let (lru_store_caching, request_counters) =
@@ -2393,10 +2383,7 @@ impl DataStore {
cache_stats: Some(GarbageCollectionCacheStats::default()),
..Default::default()
};
- let tuning: DatastoreTuning = serde_json::from_value(
- DatastoreTuning::API_SCHEMA
- .parse_property_string(gc_store_config.tuning.as_deref().unwrap_or(""))?,
- )?;
+ let tuning = pbs_config::datastore::parse_datastore_tuning_options(&gc_store_config)?;
let s3_client = match self.backend()? {
DatastoreBackend::Filesystem => None,
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index 16e85a636..99c7d8e47 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -14,9 +14,9 @@ use proxmox_uuid::Uuid;
use pbs_api_types::{
Authid, DataStoreConfig, DataStoreConfigUpdater, DatastoreBackendType, DatastoreNotify,
- DatastoreTuning, KeepOptions, MaintenanceMode, MaintenanceType, PruneJobConfig,
- PruneJobOptions, DATASTORE_SCHEMA, PRIV_DATASTORE_ALLOCATE, PRIV_DATASTORE_AUDIT,
- PRIV_DATASTORE_MODIFY, PRIV_SYS_MODIFY, PROXMOX_CONFIG_DIGEST_SCHEMA, UPID_SCHEMA,
+ KeepOptions, MaintenanceMode, MaintenanceType, PruneJobConfig, PruneJobOptions,
+ DATASTORE_SCHEMA, PRIV_DATASTORE_ALLOCATE, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY,
+ PRIV_SYS_MODIFY, PROXMOX_CONFIG_DIGEST_SCHEMA, UPID_SCHEMA,
};
use pbs_config::BackupLockGuard;
use pbs_datastore::chunk_store::ChunkStore;
@@ -121,11 +121,7 @@ pub(crate) fn do_create_datastore(
param_bail!("path", err);
}
- let tuning: DatastoreTuning = serde_json::from_value(
- DatastoreTuning::API_SCHEMA
- .parse_property_string(datastore.tuning.as_deref().unwrap_or(""))?,
- )?;
-
+ let tuning = pbs_config::datastore::parse_datastore_tuning_options(&datastore)?;
let (backend_type, backend_s3_client) =
match DataStore::s3_client_and_backend_from_datastore_config(&datastore)? {
(backend_type, Some(s3_client)) => {
--
2.47.3
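The shared helper centralizes the `tuning.as_deref().unwrap_or("").parse()` idiom so that a missing tuning string consistently means "all defaults". A standalone sketch under illustrative names (the real helper parses via DatastoreTuning's API schema):

```rust
use std::str::FromStr;

// Illustrative stand-ins for pbs_api_types::DataStoreConfig and
// DatastoreTuning.
struct DataStoreConfig {
    tuning: Option<String>, // raw property string as stored in the config
}

#[derive(Debug, Default, PartialEq)]
struct Tuning {
    chunk_order: Option<String>,
}

impl FromStr for Tuning {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let mut tuning = Tuning::default();
        for pair in s.split(',').filter(|p| !p.is_empty()) {
            match pair.split_once('=') {
                Some(("chunk-order", v)) => tuning.chunk_order = Some(v.into()),
                _ => return Err(format!("cannot parse '{pair}'")),
            }
        }
        Ok(tuning)
    }
}

/// Common helper: every call site parses the tuning options identically,
/// with a missing value treated as the empty property string (defaults).
fn parse_datastore_tuning_options(config: &DataStoreConfig) -> Result<Tuning, String> {
    config.tuning.as_deref().unwrap_or("").parse()
}

fn main() {
    let unset = DataStoreConfig { tuning: None };
    assert_eq!(parse_datastore_tuning_options(&unset).unwrap(), Tuning::default());

    let set = DataStoreConfig { tuning: Some("chunk-order=inode".into()) };
    let tuning = parse_datastore_tuning_options(&set).unwrap();
    assert_eq!(tuning.chunk_order.as_deref(), Some("inode"));
    println!("ok");
}
```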
* [PATCH proxmox-backup v3 4/7] datastore: restrict chunk store mutex scope to crate only
From: Christian Ebner @ 2026-05-12 13:10 UTC (permalink / raw)
To: pbs-devel
The mutex should never be used outside of the datastore
implementation; all operations on it go through other method calls.
Restrict the visibility of the method returning the mutex to the crate.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/chunk_store.rs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 68db88eab..2888dea39 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -423,7 +423,7 @@ impl ChunkStore {
ProcessLocker::oldest_shared_lock(self.locker.clone().unwrap())
}
- pub fn mutex(&self) -> &std::sync::Mutex<()> {
+ pub(crate) fn mutex(&self) -> &std::sync::Mutex<()> {
&self.mutex
}
--
2.47.3
* [PATCH proxmox-backup v3 5/7] datastore: avoid useless double borrowing of datastore
From: Christian Ebner @ 2026-05-12 13:10 UTC (permalink / raw)
To: pbs-devel
Getting the datastore entry from the cache already yields a reference,
so there is no need to borrow again.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index b589ff1e0..dbf9e7742 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -588,7 +588,7 @@ impl DataStore {
let entry = datastore_cache.get(lookup.name);
// reuse chunk store so that we keep using the same process locker instance!
- let chunk_store = if let Some(datastore) = &entry {
+ let chunk_store = if let Some(datastore) = entry {
// Re-use DataStoreImpl
if datastore.config_generation == gen_num && gen_num.is_some() {
update_active_operations(lookup.name, lookup.operation, 1)?;
--
2.47.3
* [PATCH proxmox-backup v3 6/7] datastore: move try_ensure_sync_level() implementation to chunk store
From: Christian Ebner @ 2026-05-12 13:10 UTC (permalink / raw)
To: pbs-devel
Keep the public interface for the datastore in place, but move the
method to the chunk store, dropping the now unneeded state stored on
DataStoreImpl. The path used for the syncfs call is not changed by the
move, as the previous implementation already used the chunk store
base path.
While moving, adapt the log message to use the tracing macro.
This is in preparation for fixing the sync level updates not being
propagated to the chunk store.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/chunk_store.rs | 13 +++++++++++++
pbs-datastore/src/datastore.rs | 22 +++++-----------------
2 files changed, 18 insertions(+), 17 deletions(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 2888dea39..e8c279a62 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -966,6 +966,19 @@ impl ChunkStore {
}
(chunk_path, counter)
}
+
+ pub(super) fn try_ensure_sync_level(&self) -> Result<(), Error> {
+ if self.sync_level != DatastoreFSyncLevel::Filesystem {
+ return Ok(());
+ }
+ let file = std::fs::File::open(self.base_path())?;
+ let fd = file.as_raw_fd();
+ info!("syncing filesystem");
+ if unsafe { libc::syncfs(fd) } < 0 {
+ bail!("error during syncfs: {}", std::io::Error::last_os_error());
+ }
+ Ok(())
+ }
}
#[derive(PartialEq)]
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index dbf9e7742..b67ab2f3a 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -32,9 +32,9 @@ use proxmox_worker_task::WorkerTaskContext;
use pbs_api_types::{
ArchiveType, Authid, BackupGroupDeleteStats, BackupNamespace, BackupType, ChunkOrder,
- DataStoreConfig, DatastoreBackendConfig, DatastoreBackendType, DatastoreFSyncLevel,
- GarbageCollectionCacheStats, GarbageCollectionStatus, MaintenanceMode, MaintenanceType,
- Operation, S3Statistics, MAX_NAMESPACE_DEPTH, UPID,
+ DataStoreConfig, DatastoreBackendConfig, DatastoreBackendType, GarbageCollectionCacheStats,
+ GarbageCollectionStatus, MaintenanceMode, MaintenanceType, Operation, S3Statistics,
+ MAX_NAMESPACE_DEPTH, UPID,
};
use pbs_config::s3::S3_CFG_TYPE_ID;
use pbs_config::{BackupLockGuard, ConfigVersionCache};
@@ -204,7 +204,6 @@ pub struct DataStoreImpl {
last_gc_status: Mutex<GarbageCollectionStatus>,
verify_new: bool,
chunk_order: ChunkOrder,
- sync_level: DatastoreFSyncLevel,
backend_config: DatastoreBackendConfig,
lru_store_caching: Option<LocalDatastoreLruCache>,
thread_settings: DatastoreThreadSettings,
@@ -225,7 +224,6 @@ impl DataStoreImpl {
last_gc_status: Mutex::new(GarbageCollectionStatus::default()),
verify_new: false,
chunk_order: Default::default(),
- sync_level: Default::default(),
backend_config: Default::default(),
lru_store_caching: None,
thread_settings: Default::default(),
@@ -775,7 +773,6 @@ impl DataStore {
last_gc_status: Mutex::new(gc_status),
verify_new: config.verify_new.unwrap_or(false),
chunk_order: tuning.chunk_order.unwrap_or_default(),
- sync_level: tuning.sync_level.unwrap_or_default(),
backend_config,
lru_store_caching,
thread_settings,
@@ -2995,19 +2992,10 @@ impl DataStore {
}
*/
- /// Syncs the filesystem of the datastore if 'sync_level' is set to
+ /// Syncs the filesystem of the chunk store base path if 'sync_level' is set to
/// [`DatastoreFSyncLevel::Filesystem`]. Uses syncfs(2).
pub fn try_ensure_sync_level(&self) -> Result<(), Error> {
- if self.inner.sync_level != DatastoreFSyncLevel::Filesystem {
- return Ok(());
- }
- let file = std::fs::File::open(self.base_path())?;
- let fd = file.as_raw_fd();
- log::info!("syncing filesystem");
- if unsafe { libc::syncfs(fd) } < 0 {
- bail!("error during syncfs: {}", std::io::Error::last_os_error());
- }
- Ok(())
+ self.inner.chunk_store.try_ensure_sync_level()
}
/// Destroy a datastore. This requires that there are no active operations on the datastore.
--
2.47.3
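The shape of this refactoring — the sync level owned by the chunk store, with DataStore keeping only a thin public wrapper — can be sketched as follows (illustrative types; the real method opens the base path and calls syncfs(2) via libc):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug)]
enum FSyncLevel {
    #[default]
    None,
    File,
    Filesystem,
}

struct ChunkStore {
    sync_level: FSyncLevel,
}

impl ChunkStore {
    // Crate-private in the real code: only syncs when the level asks for it.
    // Returns whether a filesystem sync would be performed.
    fn try_ensure_sync_level(&self) -> Result<bool, String> {
        if self.sync_level != FSyncLevel::Filesystem {
            return Ok(false); // nothing to do
        }
        // The real implementation opens self.base_path() and calls
        // libc::syncfs() on the raw file descriptor here.
        Ok(true)
    }
}

struct DataStore {
    chunk_store: ChunkStore,
}

impl DataStore {
    /// Public interface kept in place: callers are unchanged, the
    /// implementation simply delegates to the chunk store.
    pub fn try_ensure_sync_level(&self) -> Result<bool, String> {
        self.chunk_store.try_ensure_sync_level()
    }
}

fn main() {
    let ds = DataStore { chunk_store: ChunkStore { sync_level: FSyncLevel::Filesystem } };
    assert_eq!(ds.try_ensure_sync_level(), Ok(true));
    let ds = DataStore { chunk_store: ChunkStore { sync_level: FSyncLevel::None } };
    assert_eq!(ds.try_ensure_sync_level(), Ok(false));
    println!("ok");
}
```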
* [PATCH proxmox-backup v3 7/7] datastore: fix sync level update propagation to chunk store
From: Christian Ebner @ 2026-05-12 13:10 UTC (permalink / raw)
To: pbs-devel
Changing the datastore tuning options triggers an invalidation of the
datastore cache entry, leading to re-instantiation with the new
config parameters on the next datastore lookup.
Since commit 0bd9c8701 ("datastore: lookup: reuse ChunkStore on stale
datastore re-open") this no longer leads to re-creation of the chunk
store instance, in order to avoid dropping the process locker, which
would lead to losing any existing shared lock. As a consequence,
however, the sync level is not updated on the chunk store.
Fix this by:
- Storing the sync level as a runtime property of the chunk store, i.e.
  as state guarded by the mutex that synchronizes concurrent modifying
  access. The mutex is held where needed anyway.
- Passing the mutex guard as an additional parameter to the methods
  requiring the locked state. This encodes the requirement directly
  into the function signature instead of only labeling the method as
  unsafe.
- Ensuring the previous sync level is honored on config changes, for
  consistency. To do this, extend and rename try_ensure_sync_level()
  to also take the mutex guard and update the sync level if given.
Reported-by: Robert Obkircher <r.obkircher@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/chunk_store.rs | 73 +++++++++++++------
pbs-datastore/src/datastore.rs | 26 ++++++-
.../src/local_datastore_lru_cache.rs | 12 +--
3 files changed, 78 insertions(+), 33 deletions(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index e8c279a62..6c97e31d7 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -1,7 +1,7 @@
use std::os::unix::fs::MetadataExt;
use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
-use std::sync::{Arc, Mutex};
+use std::sync::{Arc, Mutex, MutexGuard};
use std::time::Duration;
use anyhow::{bail, format_err, Context, Error};
@@ -27,14 +27,20 @@ use crate::{DataBlob, LocalDatastoreLruCache};
const USING_MARKER_FILENAME_EXT: &str = "using";
+#[derive(Default)]
+/// Configurable runtime properties of a chunk store
+pub(crate) struct ChunkStoreProperties {
+ pub(crate) sync_level: DatastoreFSyncLevel,
+}
+
/// File system based chunk store
pub struct ChunkStore {
name: String, // used for error reporting
pub(crate) base: PathBuf,
chunk_dir: PathBuf,
- mutex: Mutex<()>,
+ // Mutex to sync chunk store access, including property updates
+ mutex: Arc<Mutex<ChunkStoreProperties>>,
locker: Option<Arc<Mutex<ProcessLocker>>>,
- sync_level: DatastoreFSyncLevel,
}
// TODO: what about sysctl setting vm.vfs_cache_pressure (0 - 100) ?
@@ -79,9 +85,8 @@ impl ChunkStore {
name: String::new(),
base: PathBuf::new(),
chunk_dir: PathBuf::new(),
- mutex: Mutex::new(()),
+ mutex: Arc::new(Mutex::new(Default::default())),
locker: None,
- sync_level: Default::default(),
}
}
@@ -204,16 +209,19 @@ impl ChunkStore {
base,
chunk_dir,
locker: Some(locker),
- mutex: Mutex::new(()),
- sync_level,
+ mutex: Arc::new(Mutex::new(ChunkStoreProperties { sync_level })),
})
}
- fn touch_chunk_no_lock(&self, digest: &[u8; 32]) -> Result<(), Error> {
+ fn touch_chunk_no_lock(
+ &self,
+ digest: &[u8; 32],
+ mutex_guard: MutexGuard<ChunkStoreProperties>,
+ ) -> Result<(), Error> {
// unwrap: only `None` in unit tests
assert!(self.locker.is_some());
- self.cond_touch_chunk_no_lock(digest, true)?;
+ self.cond_touch_chunk_no_lock(digest, true, mutex_guard)?;
Ok(())
}
@@ -226,14 +234,15 @@ impl ChunkStore {
digest: &[u8; 32],
assert_exists: bool,
) -> Result<bool, Error> {
- let _lock = self.mutex.lock();
- self.cond_touch_chunk_no_lock(digest, assert_exists)
+ let lock = self.mutex.lock().unwrap();
+ self.cond_touch_chunk_no_lock(digest, assert_exists, lock)
}
fn cond_touch_chunk_no_lock(
&self,
digest: &[u8; 32],
assert_exists: bool,
+ _mutex_guard: MutexGuard<ChunkStoreProperties>,
) -> Result<bool, Error> {
// unwrap: only `None` in unit tests
assert!(self.locker.is_some());
@@ -423,7 +432,11 @@ impl ChunkStore {
ProcessLocker::oldest_shared_lock(self.locker.clone().unwrap())
}
- pub(crate) fn mutex(&self) -> &std::sync::Mutex<()> {
+ /// Mutex to lock chunk store for exclusive access.
+ ///
+ /// Must be held when modifying chunk store contents and allows to update
+ /// chunk store runtime properties.
+ pub(crate) fn mutex(&self) -> &Mutex<ChunkStoreProperties> {
&self.mutex
}
@@ -665,18 +678,17 @@ impl ChunkStore {
//println!("DIGEST {}", hex::encode(digest));
- let _lock = self.mutex.lock();
+ let lock = self.mutex.lock().unwrap();
- // Safety: lock acquired above
- unsafe { self.insert_chunk_nolock(chunk, digest, true) }
+ self.insert_chunk_nolock(chunk, digest, true, lock)
}
- /// Safety: requires holding the chunk store mutex!
- pub(crate) unsafe fn insert_chunk_nolock(
+ pub(crate) fn insert_chunk_nolock(
&self,
chunk: &DataBlob,
digest: &[u8; 32],
warn_on_overwrite_empty: bool,
+ mutex_guard: MutexGuard<ChunkStoreProperties>,
) -> Result<(bool, u64), Error> {
// unwrap: only `None` in unit tests
assert!(self.locker.is_some());
@@ -694,7 +706,7 @@ impl ChunkStore {
}
let old_size = metadata.len();
if encoded_size == old_size {
- self.touch_chunk_no_lock(digest)?;
+ self.touch_chunk_no_lock(digest, mutex_guard)?;
return Ok((true, old_size));
} else if old_size == 0 {
if warn_on_overwrite_empty {
@@ -721,11 +733,11 @@ impl ChunkStore {
// compressed, the size mismatch could be caused by different zstd versions
// so let's keep the one that was uploaded first, bit-rot is hopefully detected by
// verification at some point..
- self.touch_chunk_no_lock(digest)?;
+ self.touch_chunk_no_lock(digest, mutex_guard)?;
return Ok((true, old_size));
} else if old_size < encoded_size {
log::debug!("Got another copy of chunk with digest '{digest_str}', existing chunk is smaller, discarding uploaded one.");
- self.touch_chunk_no_lock(digest)?;
+ self.touch_chunk_no_lock(digest, mutex_guard)?;
return Ok((true, old_size));
} else {
log::debug!("Got another copy of chunk with digest '{digest_str}', existing chunk is bigger, replacing with uploaded one.");
@@ -742,17 +754,18 @@ impl ChunkStore {
let gid = pbs_config::backup_group()?.gid;
create_options = create_options.owner(uid).group(gid);
}
+
proxmox_sys::fs::replace_file(
&chunk_path,
raw_data,
create_options,
- self.sync_level == DatastoreFSyncLevel::File,
+ mutex_guard.sync_level == DatastoreFSyncLevel::File,
)
.map_err(|err| {
format_err!("inserting chunk on store '{name}' failed for {digest_str} - {err}")
})?;
- if self.sync_level == DatastoreFSyncLevel::File {
+ if mutex_guard.sync_level == DatastoreFSyncLevel::File {
// fsync dir handle to persist the tmp rename
let dir = std::fs::File::open(chunk_dir_path)?;
nix::unistd::fsync(dir.as_raw_fd())
@@ -967,10 +980,22 @@ impl ChunkStore {
(chunk_path, counter)
}
- pub(super) fn try_ensure_sync_level(&self) -> Result<(), Error> {
- if self.sync_level != DatastoreFSyncLevel::Filesystem {
+ pub(super) fn try_ensure_sync_level_with_update(
+ &self,
+ mut mutex_guard: MutexGuard<ChunkStoreProperties>,
+ update_sync_level: Option<DatastoreFSyncLevel>,
+ ) -> Result<(), Error> {
+ let assure_sync_level = mutex_guard.sync_level;
+ if let Some(sync_level) = update_sync_level {
+ mutex_guard.sync_level = sync_level;
+ }
+
+ drop(mutex_guard); // never hold during syncfs
+
+ if assure_sync_level != DatastoreFSyncLevel::Filesystem {
return Ok(());
}
+
let file = std::fs::File::open(self.base_path())?;
let fd = file.as_raw_fd();
info!("syncing filesystem");
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index b67ab2f3a..6ec52d3f5 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -595,6 +595,20 @@ impl DataStore {
operation: Some(lookup.operation),
}));
}
+
+ let tuning = pbs_config::datastore::parse_datastore_tuning_options(&config)?;
+ let sync_level = tuning.sync_level.unwrap_or_default();
+
+ let mutex = datastore.chunk_store.mutex();
+ let mutex_guard = mutex.lock().unwrap();
+ if mutex_guard.sync_level != sync_level {
+ datastore
+ .chunk_store
+ .try_ensure_sync_level_with_update(mutex_guard, Some(sync_level))?;
+ } else {
+ drop(mutex_guard);
+ }
+
Arc::clone(&datastore.chunk_store)
} else {
let tuning = pbs_config::datastore::parse_datastore_tuning_options(&config)?;
@@ -2466,7 +2480,8 @@ impl DataStore {
Ok(guard) => guard,
Err(_) => continue,
};
- let _guard = self.inner.chunk_store.mutex().lock().unwrap();
+ let mutex = self.inner.chunk_store.mutex();
+ let _guard = mutex.lock().unwrap();
// Check local markers (created or atime updated during phase1) and
// keep or delete chunk based on that.
@@ -2995,7 +3010,11 @@ impl DataStore {
/// Syncs the filesystem of the chunk store base path if 'sync_level' is set to
/// [`DatastoreFSyncLevel::Filesystem`]. Uses syncfs(2).
pub fn try_ensure_sync_level(&self) -> Result<(), Error> {
- self.inner.chunk_store.try_ensure_sync_level()
+ let mutex = self.inner.chunk_store.mutex();
+ let mutex_guard = mutex.lock().unwrap();
+ self.inner
+ .chunk_store
+ .try_ensure_sync_level_with_update(mutex_guard, None)
}
/// Destroy a datastore. This requires that there are no active operations on the datastore.
@@ -3417,7 +3436,8 @@ impl DataStore {
.chunk_store
.lock_chunk(digest, CHUNK_LOCK_TIMEOUT)?;
}
- let _lock = self.inner.chunk_store.mutex().lock().unwrap();
+ let mutex = self.inner.chunk_store.mutex();
+ let _lock = mutex.lock().unwrap();
let (new_path, counter) = self.inner.chunk_store.next_bad_chunk_path(digest);
diff --git a/pbs-datastore/src/local_datastore_lru_cache.rs b/pbs-datastore/src/local_datastore_lru_cache.rs
index ac27d4637..cf38d4a57 100644
--- a/pbs-datastore/src/local_datastore_lru_cache.rs
+++ b/pbs-datastore/src/local_datastore_lru_cache.rs
@@ -34,12 +34,11 @@ impl LocalDatastoreLruCache {
///
/// Fails if the chunk cannot be inserted successfully.
pub fn insert(&self, digest: &[u8; 32], chunk: &DataBlob) -> Result<(), Error> {
- let _lock = self.store.mutex().lock().unwrap();
+ let shared_mutex = self.store.mutex();
+ let lock = shared_mutex.lock().unwrap();
+
+ self.store.insert_chunk_nolock(chunk, digest, false, lock)?;
- // Safety: lock acquire above
- unsafe {
- self.store.insert_chunk_nolock(chunk, digest, false)?;
- }
self.cache.insert(*digest, (), |digest| {
// Safety: lock acquired above, this is executed inline!
unsafe {
@@ -80,7 +79,8 @@ impl LocalDatastoreLruCache {
Ok(mut file) => match DataBlob::load_from_reader(&mut file) {
// File was still cached with contents, load response from file
Ok(chunk) => {
- let _lock = self.store.mutex().lock().unwrap();
+ let shared_mutex = self.store.mutex();
+ let _lock = shared_mutex.lock().unwrap();
self.cache.insert(*digest, (), |digest| {
// Safety: lock acquired above, this is executed inline
unsafe {
--
2.47.3
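The core idea of this patch — passing the MutexGuard as a parameter so the type system, rather than an unsafe annotation, proves the lock is held — can be sketched standalone (all names illustrative):

```rust
use std::sync::{Mutex, MutexGuard};

// Runtime properties guarded by the chunk store mutex, mirroring
// ChunkStoreProperties in the patch.
#[derive(Default)]
struct Properties {
    fsync_per_file: bool, // stand-in for DatastoreFSyncLevel::File
}

struct ChunkStore {
    mutex: Mutex<Properties>,
}

impl ChunkStore {
    // Before: `unsafe fn insert_chunk_nolock(...)` with the safety contract
    // "caller must hold the mutex". After: the guard is a parameter, so
    // calling without the lock held is a compile error, and the runtime
    // properties protected by the same mutex are readable through it.
    fn insert_chunk_nolock(
        &self,
        _digest: &[u8; 32],
        guard: MutexGuard<Properties>,
    ) -> bool {
        // The real code writes the chunk file here; report whether a
        // per-file fsync would be requested under the current settings.
        guard.fsync_per_file
    } // guard dropped here, releasing the lock

    fn insert_chunk(&self, digest: &[u8; 32]) -> bool {
        let guard = self.mutex.lock().unwrap();
        self.insert_chunk_nolock(digest, guard)
    }

    // Updating a runtime property goes through the same mutex, so callers
    // holding the guard always see a consistent value.
    fn set_fsync_per_file(&self, value: bool) {
        self.mutex.lock().unwrap().fsync_per_file = value;
    }
}

fn main() {
    let store = ChunkStore { mutex: Mutex::new(Properties::default()) };
    assert!(!store.insert_chunk(&[0u8; 32]));
    store.set_fsync_per_file(true);
    assert!(store.insert_chunk(&[0u8; 32]));
    println!("ok");
}
```

Taking the guard by value (rather than by reference) also lets the callee decide when to release the lock, as try_ensure_sync_level_with_update does by dropping it before the potentially slow syncfs.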