public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup 0/5] add 'sync-level' to datastore tuning options
@ 2022-05-20 12:42 Dominik Csapak
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 1/5] docs: add information about chunk order option for datastores Dominik Csapak
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Dominik Csapak @ 2022-05-20 12:42 UTC (permalink / raw)
  To: pbs-devel

this series is a successor to my previous RFC for it.[0]

changes from RFC:
* separate the introduction of the option from changing the default
* renaming to DatastoreFSyncLevel
* adding more and better comments + docs
* saving the whole level and not only a bool in the datastore/chunk_store
* adding an fsync on the dir handle for the 'file' case in insert_chunk (see the sketch after this mail)
* split the change to 'replace_file' into a separate patch

the first patch is mostly unrelated, but it introduces a place where
we can document the option, and it could be applied independently of
the remaining patches of this series.

the second patch only changes the use of replace_file in insert_chunk,
so that could also be applied independently.

0: https://lists.proxmox.com/pipermail/pbs-devel/2022-May/005118.html

Dominik Csapak (5):
  docs: add information about chunk order option for datastores
  pbs-datastore: chunk_store: use replace_file in insert_chunk
  datastore: implement sync-level tuning for datastores
  docs: add documentation about the 'sync-level' tuning
  datastore: make 'filesystem' the default sync-level

 docs/storage.rst                 | 60 +++++++++++++++++++++++++++++
 pbs-api-types/src/datastore.rs   | 32 ++++++++++++++++
 pbs-datastore/src/chunk_store.rs | 66 +++++++++++++++++++++++---------
 pbs-datastore/src/datastore.rs   | 37 ++++++++++++++++--
 src/api2/backup/environment.rs   |  2 +
 src/api2/config/datastore.rs     |  9 ++++-
 6 files changed, 181 insertions(+), 25 deletions(-)

-- 
2.30.2
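
The "fsync on the dir handle" item in the change list above refers to fsyncing the parent
directory after the atomic rename, so that the new directory entry itself is persisted.
A minimal standalone sketch of that pattern, assuming the anyhow and nix crates (both
already used by the chunk store code); the helper name is illustrative:

    use std::os::unix::io::AsRawFd;
    use std::path::Path;

    use anyhow::{format_err, Error};

    /// Persist the directory entry of `path` after an atomic rename by
    /// fsyncing its parent directory (illustrative helper, not part of the series).
    fn fsync_parent_dir(path: &Path) -> Result<(), Error> {
        let dir = path
            .parent()
            .ok_or_else(|| format_err!("unable to get parent dir"))?;
        let dir = std::fs::File::open(dir)?;
        nix::unistd::fsync(dir.as_raw_fd()).map_err(|err| format_err!("fsync failed: {err}"))?;
        Ok(())
    }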






* [pbs-devel] [PATCH proxmox-backup 1/5] docs: add information about chunk order option for datastores
  2022-05-20 12:42 [pbs-devel] [PATCH proxmox-backup 0/5] add 'sync-level' to datastore tuning options Dominik Csapak
@ 2022-05-20 12:42 ` Dominik Csapak
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 2/5] pbs-datastore: chunk_store: use replace_file in insert_chunk Dominik Csapak
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Dominik Csapak @ 2022-05-20 12:42 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 docs/storage.rst | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/docs/storage.rst b/docs/storage.rst
index d2a12e8c..ab59ad4d 100644
--- a/docs/storage.rst
+++ b/docs/storage.rst
@@ -315,3 +315,26 @@ There are a few per-datastore options:
 * :ref:`Notifications <maintenance_notification>`
 * :ref:`Maintenance Mode <maintenance_mode>`
 * Verification of incoming backups
+
+Tuning
+^^^^^^
+There are some tuning-related options for the datastore that are more advanced
+and only available on the CLI:
+
+* ``chunk-order``: Chunk order for verify & tape backup:
+
+  You can specify the order in which Proxmox Backup Server iterates the chunks
+  when doing a verify or backing up to tape. The two options are:
+
+  - `inode` (default): Sorts the chunks by their filesystem inode number before iterating
+    over them. This should be fine for most storage setups, especially spinning disks.
+  - `none`: Iterates the chunks in the order they appear in the
+    index file (.fidx/.didx). While this might slow down iteration on slow
+    storage, on very fast storage (for example, NVMe) the collecting and sorting
+    can take more time than is gained through the sorted iteration.
+  This option can be set with:
+
+.. code-block:: console
+
+  # proxmox-backup-manager datastore update <storename> --tuning 'chunk-order=none'
+
-- 
2.30.2
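
The `inode` ordering documented above boils down to stat-ing each chunk file and sorting
by inode number before reading. A rough standalone sketch of that idea using only std;
the function is illustrative and not the actual PBS chunk iterator:

    use std::os::unix::fs::MetadataExt;
    use std::path::PathBuf;

    /// Sort chunk paths by their filesystem inode number so that reads happen
    /// in roughly on-disk order (helps spinning disks, barely matters on NVMe).
    fn sort_by_inode(mut chunks: Vec<PathBuf>) -> Vec<PathBuf> {
        chunks.sort_by_key(|path| {
            // push entries whose metadata cannot be read to the end
            std::fs::metadata(path).map(|m| m.ino()).unwrap_or(u64::MAX)
        });
        chunks
    }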






* [pbs-devel] [PATCH proxmox-backup 2/5] pbs-datastore: chunk_store: use replace_file in insert_chunk
  2022-05-20 12:42 [pbs-devel] [PATCH proxmox-backup 0/5] add 'sync-level' to datastore tuning options Dominik Csapak
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 1/5] docs: add information about chunk order option for datastores Dominik Csapak
@ 2022-05-20 12:42 ` Dominik Csapak
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 3/5] datastore: implement sync-level tuning for datastores Dominik Csapak
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Dominik Csapak @ 2022-05-20 12:42 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 pbs-datastore/src/chunk_store.rs | 18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 5bfe9fac..1f169dee 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -461,21 +461,9 @@ impl ChunkStore {
             }
         }
 
-        let mut tmp_path = chunk_path.clone();
-        tmp_path.set_extension("tmp");
-
-        let mut file = std::fs::File::create(&tmp_path).map_err(|err| {
-            format_err!("creating chunk on store '{name}' failed for {digest_str} - {err}")
-        })?;
-
-        file.write_all(raw_data).map_err(|err| {
-            format_err!("writing chunk on store '{name}' failed for {digest_str} - {err}")
-        })?;
-
-        if let Err(err) = std::fs::rename(&tmp_path, &chunk_path) {
-            if std::fs::remove_file(&tmp_path).is_err() { /* ignore */ }
-            bail!("atomic rename on store '{name}' failed for chunk {digest_str} - {err}");
-        }
+        proxmox_sys::fs::replace_file(chunk_path, raw_data, CreateOptions::new(), false).map_err(
+            |err| format_err!("inserting chunk on store '{name}' failed for {digest_str} - {err}"),
+        )?;
 
         drop(lock);
 
-- 
2.30.2
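
For context, `proxmox_sys::fs::replace_file` does essentially what the removed code did
by hand: write the data to a temporary file, optionally fsync it (the boolean argument,
used by the next patch), and atomically rename it over the target. A simplified sketch
of that pattern, assuming the anyhow crate; the helper name and plain-std implementation
are illustrative, the real helper additionally applies the passed CreateOptions
(ownership/permissions):

    use std::io::Write;
    use std::path::Path;

    use anyhow::{format_err, Error};

    /// Write `data` to `target` atomically: write a temp file next to it,
    /// optionally fsync it, then rename it over the target path.
    fn replace_file_sketch(target: &Path, data: &[u8], fsync: bool) -> Result<(), Error> {
        let mut tmp_path = target.to_path_buf();
        tmp_path.set_extension("tmp");

        let mut file = std::fs::File::create(&tmp_path)?;
        file.write_all(data)?;
        if fsync {
            file.sync_all()?; // persist the file contents before the rename
        }

        std::fs::rename(&tmp_path, target).map_err(|err| {
            let _ = std::fs::remove_file(&tmp_path); // best-effort cleanup
            format_err!("atomic rename to {:?} failed - {err}", target)
        })
    }

The rename ensures a half-written chunk never becomes visible under its final name; only
the optional fsync decides whether the contents are already on disk at that point.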






* [pbs-devel] [PATCH proxmox-backup 3/5] datastore: implement sync-level tuning for datastores
  2022-05-20 12:42 [pbs-devel] [PATCH proxmox-backup 0/5] add 'sync-level' to datastore tuning options Dominik Csapak
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 1/5] docs: add information about chunk order option for datastores Dominik Csapak
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 2/5] pbs-datastore: chunk_store: use replace_file in insert_chunk Dominik Csapak
@ 2022-05-20 12:42 ` Dominik Csapak
  2022-05-23  7:13   ` Fabian Grünbichler
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 4/5] docs: add documentation about the 'sync-level' tuning Dominik Csapak
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 5/5] datastore: make 'filesystem' the default sync-level Dominik Csapak
  4 siblings, 1 reply; 8+ messages in thread
From: Dominik Csapak @ 2022-05-20 12:42 UTC (permalink / raw)
  To: pbs-devel

currently, we don't (f)sync on chunk insertion (or at any point after
that), which can lead to broken chunks in case of e.g. an unexpected
power loss. To fix that, offer a tuning option for datastores that
controls the level of syncs it does:

* None (default): same as current state, no (f)syncs done at any point
* Filesystem: at the end of a backup, the datastore issues
  a syncfs(2) to the filesystem of the datastore
* File: issues an fsync on each chunk as it gets inserted
  (using our 'replace_file' helper) and an fsync on the directory handle

a small benchmark showed the following (times in mm:ss):
setup: virtual pbs, 4 cores, 8GiB memory, ext4 on spinner

size                none    filesystem  file
2GiB (fits in RAM)   00:13   00:41       01:00
33GiB                05:21   05:31       13:45

so if the backup fits in memory, there is a large difference between all
of the modes (expected), but as soon as it exceeds the memory size,
the difference between not syncing and syncing the fs at the end becomes
much smaller.

I also tested on an NVMe, but there the syncs basically made no difference.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
I retested with the 'fsync on dirhandle' part, but the result
was not significantly different from before, neither on NVMe nor on
a spinning disk...

 pbs-api-types/src/datastore.rs   | 32 +++++++++++++++++
 pbs-datastore/src/chunk_store.rs | 60 ++++++++++++++++++++++++++------
 pbs-datastore/src/datastore.rs   | 37 +++++++++++++++++---
 src/api2/backup/environment.rs   |  2 ++
 src/api2/config/datastore.rs     |  9 +++--
 5 files changed, 124 insertions(+), 16 deletions(-)

diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index e2bf70aa..6563aa73 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -214,6 +214,37 @@ pub enum ChunkOrder {
     Inode,
 }
 
+#[api]
+#[derive(PartialEq, Serialize, Deserialize)]
+#[serde(rename_all = "lowercase")]
+/// The level of syncing that is done when writing into a datastore.
+pub enum DatastoreFSyncLevel {
+    /// No special fsync or syncfs calls are triggered. The system default dirty write back
+    /// mechanism ensures that data gets flushed eventually via the `dirty_writeback_centisecs`
+    /// and `dirty_expire_centisecs` kernel sysctls, defaulting to ~ 30s.
+    ///
+    /// This mode generally provides the best performance, as all write back can happen async,
+    /// which reduces IO pressure.
+    /// But it may cause data loss on power loss or system crash without any uninterruptible power supply.
+    None,
+    /// Triggers an fsync after writing any chunk on the datastore. While this can slow down
+    /// backups significantly, depending on the underlying file system and storage used, it
+    /// will ensure fine-grained consistency. But in practice there are no benefits over the
+    /// file system level sync, so you should prefer that one, as on most systems the file level
+    /// one is slower and causes more IO pressure compared to the file system level one.
+    File,
+    /// Trigger a filesystem wide sync after all backup data got written but before finishing the
+    /// task. This allows that every finished backup is fully written back to storage
+    /// while reducing the impact most file systems in contrast to the file level sync.
+    Filesystem,
+}
+
+impl Default for DatastoreFSyncLevel {
+    fn default() -> Self {
+        DatastoreFSyncLevel::None
+    }
+}
+
 #[api(
     properties: {
         "chunk-order": {
@@ -228,6 +259,7 @@ pub enum ChunkOrder {
 pub struct DatastoreTuning {
     /// Iterate chunks in this order
     pub chunk_order: Option<ChunkOrder>,
+    pub sync_level: Option<DatastoreFSyncLevel>,
 }
 
 pub const DATASTORE_TUNING_STRING_SCHEMA: Schema = StringSchema::new("Datastore tuning options")
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 1f169dee..86dd1c17 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -1,11 +1,10 @@
-use std::io::Write;
 use std::os::unix::io::AsRawFd;
 use std::path::{Path, PathBuf};
 use std::sync::{Arc, Mutex};
 
 use anyhow::{bail, format_err, Error};
 
-use pbs_api_types::GarbageCollectionStatus;
+use pbs_api_types::{DatastoreFSyncLevel, GarbageCollectionStatus};
 use proxmox_sys::fs::{create_dir, create_path, CreateOptions};
 use proxmox_sys::process_locker::{
     ProcessLockExclusiveGuard, ProcessLockSharedGuard, ProcessLocker,
@@ -22,6 +21,7 @@ pub struct ChunkStore {
     chunk_dir: PathBuf,
     mutex: Mutex<()>,
     locker: Option<Arc<Mutex<ProcessLocker>>>,
+    sync_level: DatastoreFSyncLevel,
 }
 
 // TODO: what about sysctl setting vm.vfs_cache_pressure (0 - 100) ?
@@ -68,6 +68,7 @@ impl ChunkStore {
             chunk_dir: PathBuf::new(),
             mutex: Mutex::new(()),
             locker: None,
+            sync_level: Default::default(),
         }
     }
 
@@ -88,6 +89,7 @@ impl ChunkStore {
         uid: nix::unistd::Uid,
         gid: nix::unistd::Gid,
         worker: Option<&dyn WorkerTaskContext>,
+        sync_level: DatastoreFSyncLevel,
     ) -> Result<Self, Error>
     where
         P: Into<PathBuf>,
@@ -144,7 +146,7 @@ impl ChunkStore {
             }
         }
 
-        Self::open(name, base)
+        Self::open(name, base, sync_level)
     }
 
     fn lockfile_path<P: Into<PathBuf>>(base: P) -> PathBuf {
@@ -153,7 +155,11 @@ impl ChunkStore {
         lockfile_path
     }
 
-    pub fn open<P: Into<PathBuf>>(name: &str, base: P) -> Result<Self, Error> {
+    pub fn open<P: Into<PathBuf>>(
+        name: &str,
+        base: P,
+        sync_level: DatastoreFSyncLevel,
+    ) -> Result<Self, Error> {
         let base: PathBuf = base.into();
 
         if !base.is_absolute() {
@@ -176,6 +182,7 @@ impl ChunkStore {
             chunk_dir,
             locker: Some(locker),
             mutex: Mutex::new(()),
+            sync_level,
         })
     }
 
@@ -461,9 +468,27 @@ impl ChunkStore {
             }
         }
 
-        proxmox_sys::fs::replace_file(chunk_path, raw_data, CreateOptions::new(), false).map_err(
-            |err| format_err!("inserting chunk on store '{name}' failed for {digest_str} - {err}"),
-        )?;
+        let chunk_dir_path = chunk_path
+            .parent()
+            .ok_or_else(|| format_err!("unable to get chunk dir"))?
+            .to_owned();
+
+        proxmox_sys::fs::replace_file(
+            chunk_path,
+            raw_data,
+            CreateOptions::new(),
+            self.sync_level == DatastoreFSyncLevel::File,
+        )
+        .map_err(|err| {
+            format_err!("inserting chunk on store '{name}' failed for {digest_str} - {err}")
+        })?;
+
+        if self.sync_level == DatastoreFSyncLevel::File {
+            // fsync dir handle to persist the tmp rename
+            let dir = std::fs::File::open(chunk_dir_path)?;
+            nix::unistd::fsync(dir.as_raw_fd())
+                .map_err(|err| format_err!("fsync failed: {err}"))?;
+        }
 
         drop(lock);
 
@@ -520,13 +545,21 @@ fn test_chunk_store1() {
 
     if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
 
-    let chunk_store = ChunkStore::open("test", &path);
+    let chunk_store = ChunkStore::open("test", &path, DatastoreFSyncLevel::None);
     assert!(chunk_store.is_err());
 
     let user = nix::unistd::User::from_uid(nix::unistd::Uid::current())
         .unwrap()
         .unwrap();
-    let chunk_store = ChunkStore::create("test", &path, user.uid, user.gid, None).unwrap();
+    let chunk_store = ChunkStore::create(
+        "test",
+        &path,
+        user.uid,
+        user.gid,
+        None,
+        DatastoreFSyncLevel::None,
+    )
+    .unwrap();
 
     let (chunk, digest) = crate::data_blob::DataChunkBuilder::new(&[0u8, 1u8])
         .build()
@@ -538,7 +571,14 @@ fn test_chunk_store1() {
     let (exists, _) = chunk_store.insert_chunk(&chunk, &digest).unwrap();
     assert!(exists);
 
-    let chunk_store = ChunkStore::create("test", &path, user.uid, user.gid, None);
+    let chunk_store = ChunkStore::create(
+        "test",
+        &path,
+        user.uid,
+        user.gid,
+        None,
+        DatastoreFSyncLevel::None,
+    );
     assert!(chunk_store.is_err());
 
     if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 5af8a295..613f3196 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -18,8 +18,8 @@ use proxmox_sys::WorkerTaskContext;
 use proxmox_sys::{task_log, task_warn};
 
 use pbs_api_types::{
-    Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig, DatastoreTuning,
-    GarbageCollectionStatus, HumanByte, Operation, UPID,
+    Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig, DatastoreFSyncLevel,
+    DatastoreTuning, GarbageCollectionStatus, HumanByte, Operation, UPID,
 };
 use pbs_config::ConfigVersionCache;
 
@@ -61,6 +61,7 @@ pub struct DataStoreImpl {
     chunk_order: ChunkOrder,
     last_generation: usize,
     last_update: i64,
+    sync_level: DatastoreFSyncLevel,
 }
 
 impl DataStoreImpl {
@@ -75,6 +76,7 @@ impl DataStoreImpl {
             chunk_order: ChunkOrder::None,
             last_generation: 0,
             last_update: 0,
+            sync_level: Default::default(),
         })
     }
 }
@@ -154,7 +156,13 @@ impl DataStore {
             }
         }
 
-        let chunk_store = ChunkStore::open(name, &config.path)?;
+        let tuning: DatastoreTuning = serde_json::from_value(
+            DatastoreTuning::API_SCHEMA
+                .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
+        )?;
+
+        let chunk_store =
+            ChunkStore::open(name, &config.path, tuning.sync_level.unwrap_or_default())?;
         let datastore = DataStore::with_store_and_config(chunk_store, config, generation, now)?;
 
         let datastore = Arc::new(datastore);
@@ -197,7 +205,12 @@ impl DataStore {
     ) -> Result<Arc<Self>, Error> {
         let name = config.name.clone();
 
-        let chunk_store = ChunkStore::open(&name, &config.path)?;
+        let tuning: DatastoreTuning = serde_json::from_value(
+            DatastoreTuning::API_SCHEMA
+                .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
+        )?;
+        let chunk_store =
+            ChunkStore::open(&name, &config.path, tuning.sync_level.unwrap_or_default())?;
         let inner = Arc::new(Self::with_store_and_config(chunk_store, config, 0, 0)?);
 
         if let Some(operation) = operation {
@@ -242,6 +255,7 @@ impl DataStore {
             chunk_order,
             last_generation,
             last_update,
+            sync_level: tuning.sync_level.unwrap_or_default(),
         })
     }
 
@@ -1257,4 +1271,19 @@ impl DataStore {
         todo!("split out the namespace");
     }
     */
+
+    /// Syncs the filesystem of the datastore if 'sync_level' is set to
+    /// [`DatastoreFSyncLevel::Filesystem`]. Uses syncfs(2).
+    pub fn try_ensure_sync_level(&self) -> Result<(), Error> {
+        if self.inner.sync_level != DatastoreFSyncLevel::Filesystem {
+            return Ok(());
+        }
+        let file = std::fs::File::open(self.base_path())?;
+        let fd = file.as_raw_fd();
+        log::info!("syncing filesystem");
+        if unsafe { libc::syncfs(fd) } < 0 {
+            bail!("error during syncfs: {}", std::io::Error::last_os_error());
+        }
+        Ok(())
+    }
 }
diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index 8c1c42db..98a13c6e 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -623,6 +623,8 @@ impl BackupEnvironment {
             }
         }
 
+        self.datastore.try_ensure_sync_level()?;
+
         // marks the backup as successful
         state.finished = true;
 
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index 28342c2c..9fdd73dc 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -11,8 +11,8 @@ use proxmox_section_config::SectionConfigData;
 use proxmox_sys::WorkerTaskContext;
 
 use pbs_api_types::{
-    Authid, DataStoreConfig, DataStoreConfigUpdater, DatastoreNotify, DATASTORE_SCHEMA,
-    PRIV_DATASTORE_ALLOCATE, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY,
+    Authid, DataStoreConfig, DataStoreConfigUpdater, DatastoreNotify, DatastoreTuning,
+    DATASTORE_SCHEMA, PRIV_DATASTORE_ALLOCATE, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY,
     PROXMOX_CONFIG_DIGEST_SCHEMA,
 };
 use pbs_config::BackupLockGuard;
@@ -70,6 +70,10 @@ pub(crate) fn do_create_datastore(
 ) -> Result<(), Error> {
     let path: PathBuf = datastore.path.clone().into();
 
+    let tuning: DatastoreTuning = serde_json::from_value(
+        DatastoreTuning::API_SCHEMA
+            .parse_property_string(datastore.tuning.as_deref().unwrap_or(""))?,
+    )?;
     let backup_user = pbs_config::backup_user()?;
     let _store = ChunkStore::create(
         &datastore.name,
@@ -77,6 +81,7 @@ pub(crate) fn do_create_datastore(
         backup_user.uid,
         backup_user.gid,
         worker,
+        tuning.sync_level.unwrap_or_default(),
     )?;
 
     config.set_data(&datastore.name, "datastore", &datastore)?;
-- 
2.30.2
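
The `filesystem` level relies on syncfs(2), which flushes all dirty data of the
filesystem backing the given file descriptor. Extracted from `try_ensure_sync_level`
above into a standalone form, the call looks roughly like this (assuming the libc and
anyhow crates; the function name is illustrative):

    use std::os::unix::io::AsRawFd;
    use std::path::Path;

    use anyhow::{bail, Error};

    /// Flush all dirty data of the filesystem that `path` lives on via syncfs(2).
    fn syncfs_path(path: &Path) -> Result<(), Error> {
        let file = std::fs::File::open(path)?;
        // syncfs() accepts any fd on the target filesystem and returns 0 on success
        if unsafe { libc::syncfs(file.as_raw_fd()) } < 0 {
            bail!("error during syncfs: {}", std::io::Error::last_os_error());
        }
        Ok(())
    }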






* [pbs-devel] [PATCH proxmox-backup 4/5] docs: add documentation about the 'sync-level' tuning
  2022-05-20 12:42 [pbs-devel] [PATCH proxmox-backup 0/5] add 'sync-level' to datastore tuning options Dominik Csapak
                   ` (2 preceding siblings ...)
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 3/5] datastore: implement sync-level tuning for datastores Dominik Csapak
@ 2022-05-20 12:42 ` Dominik Csapak
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 5/5] datastore: make 'filesystem' the default sync-level Dominik Csapak
  4 siblings, 0 replies; 8+ messages in thread
From: Dominik Csapak @ 2022-05-20 12:42 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 docs/storage.rst | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/docs/storage.rst b/docs/storage.rst
index ab59ad4d..8ca696cd 100644
--- a/docs/storage.rst
+++ b/docs/storage.rst
@@ -338,3 +338,40 @@ and only available on the CLI:
 
   # proxmox-backup-manager datastore update <storename> --tuning 'chunk-order=none'
 
+* ``sync-level``: Datastore fsync level:
+
+  You can set the level of syncing on the datastore for chunks, which influences
+  the crash resistance of backups in case of a power loss or hard shutoff.
+  There are currently three levels:
+
+  - `none` (default): Does not do any syncing when writing chunks. This is fast
+    and normally ok, since the kernel eventually flushes writes onto the disk.
+    The kernel sysctls `dirty_expire_centisecs` and `dirty_writeback_centisecs`
+    are used to tune that behaviour, while the default is to flush old data
+    after ~30s.
+
+  - `filesystem` : This triggers a ``syncfs(2)`` after a backup, but before
+    the task returns `OK`. This way it is ensured that the written backups
+    are on disk. This is a good balance between speed and consistency.
+    Note that the underlying storage device still needs to protect itself against
+    power loss, by flushing its internal ephemeral caches to the permanent storage layer.
+
+  - `file`: With this mode, an fsync is triggered on every chunk insertion, which
+    makes sure each and every chunk reaches the disk as soon as possible. While
+    this reaches the highest level of consistency, for many storage setups (especially
+    slower ones) this comes at the cost of speed. In general the `filesystem`
+    mode is better suited for most setups, but for very fast storage this mode
+    can be OK.
+
+  This can be set with:
+
+.. code-block:: console
+
+  # proxmox-backup-manager datastore update <storename> --tuning 'sync-level=filesystem'
+
+If you want to set multiple tuning options simultaneously, you can separate them
+with a comma, like this:
+
+.. code-block:: console
+
+  # proxmox-backup-manager datastore update <storename> --tuning 'sync-level=filesystem,chunk-order=none'
-- 
2.30.2
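
The lowercase values used in the tuning string (`none`, `file`, `filesystem`) map onto
the Rust enum variants via serde's `rename_all`. A small stand-in sketch of that mapping,
assuming serde (with derive) and serde_json; this is not the actual pbs-api-types type,
which additionally carries the #[api] schema and is parsed out of the property string
rather than from plain JSON:

    use serde::Deserialize;

    /// Local stand-in for the DatastoreFSyncLevel API type, showing how the
    /// lowercase names in the tuning string map onto the enum variants.
    #[derive(Debug, Deserialize, PartialEq)]
    #[serde(rename_all = "lowercase")]
    enum SyncLevel {
        None,
        File,
        Filesystem,
    }

    fn main() {
        let level: SyncLevel = serde_json::from_str("\"filesystem\"").unwrap();
        assert_eq!(level, SyncLevel::Filesystem);
        println!("parsed: {level:?}");
    }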






* [pbs-devel] [PATCH proxmox-backup 5/5] datastore: make 'filesystem' the default sync-level
  2022-05-20 12:42 [pbs-devel] [PATCH proxmox-backup 0/5] add 'sync-level' to datastore tuning options Dominik Csapak
                   ` (3 preceding siblings ...)
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 4/5] docs: add documentation about the 'sync-level' tuning Dominik Csapak
@ 2022-05-20 12:42 ` Dominik Csapak
  4 siblings, 0 replies; 8+ messages in thread
From: Dominik Csapak @ 2022-05-20 12:42 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 docs/storage.rst               | 4 ++--
 pbs-api-types/src/datastore.rs | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/storage.rst b/docs/storage.rst
index 8ca696cd..089d5e53 100644
--- a/docs/storage.rst
+++ b/docs/storage.rst
@@ -344,13 +344,13 @@ and only available on the CLI:
   the crash resistance of backups in case of a power loss or hard shutoff.
   There are currently three levels:
 
-  - `none` (default): Does not do any syncing when writing chunks. This is fast
+  - `none` : Does not do any syncing when writing chunks. This is fast
     and normally ok, since the kernel eventually flushes writes onto the disk.
     The kernel sysctls `dirty_expire_centisecs` and `dirty_writeback_centisecs`
     are used to tune that behaviour, while the default is to flush old data
     after ~30s.
 
-  - `filesystem` : This triggers a ``syncfs(2)`` after a backup, but before
+  - `filesystem` (default): This triggers a ``syncfs(2)`` after a backup, but before
     the task returns `OK`. This way it is ensured that the written backups
     are on disk. This is a good balance between speed and consistency.
     Note that the underlying storage device still needs to protect itself against
diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index 6563aa73..5bc45948 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -241,7 +241,7 @@ pub enum DatastoreFSyncLevel {
 
 impl Default for DatastoreFSyncLevel {
     fn default() -> Self {
-        DatastoreFSyncLevel::None
+        DatastoreFSyncLevel::Filesystem
     }
 }
 
-- 
2.30.2






* Re: [pbs-devel] [PATCH proxmox-backup 3/5] datastore: implement sync-level tuning for datastores
  2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 3/5] datastore: implement sync-level tuning for datastores Dominik Csapak
@ 2022-05-23  7:13   ` Fabian Grünbichler
  2022-05-24  8:14     ` Thomas Lamprecht
  0 siblings, 1 reply; 8+ messages in thread
From: Fabian Grünbichler @ 2022-05-23  7:13 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion

On May 20, 2022 2:42 pm, Dominik Csapak wrote:
> currently, we don't (f)sync on chunk insertion (or at any point after
> that), which can lead to broken chunks in case of e.g. an unexpected
> power loss. To fix that, offer a tuning option for datastores that
> controls the level of syncs it does:
> 
> * None (default): same as current state, no (f)syncs done at any point
> * Filesystem: at the end of a backup, the datastore issues
>   a syncfs(2) to the filesystem of the datastore
> * File: issues an fsync on each chunk as it gets inserted
>   (using our 'replace_file' helper) and an fsync on the directory handle
> 
> a small benchmark showed the following (times in mm:ss):
> setup: virtual pbs, 4 cores, 8GiB memory, ext4 on spinner
> 
> size                none    filesystem  file
> 2GiB (fits in RAM)   00:13   00:41       01:00
> 33GiB                05:21   05:31       13:45
> 
> so if the backup fits in memory, there is a large difference between all
> of the modes (expected), but as soon as it exceeds the memory size,
> the difference between not syncing and syncing the fs at the end becomes
> much smaller.
> 
> I also tested on an NVMe, but there the syncs basically made no difference.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> I retested with the 'fsync on dirhandle' part, but the result
> was not significantly different from before, neither on NVMe nor on
> a spinning disk...
> 
>  pbs-api-types/src/datastore.rs   | 32 +++++++++++++++++
>  pbs-datastore/src/chunk_store.rs | 60 ++++++++++++++++++++++++++------
>  pbs-datastore/src/datastore.rs   | 37 +++++++++++++++++---
>  src/api2/backup/environment.rs   |  2 ++
>  src/api2/config/datastore.rs     |  9 +++--
>  5 files changed, 124 insertions(+), 16 deletions(-)
> 
> diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
> index e2bf70aa..6563aa73 100644
> --- a/pbs-api-types/src/datastore.rs
> +++ b/pbs-api-types/src/datastore.rs
> @@ -214,6 +214,37 @@ pub enum ChunkOrder {
>      Inode,
>  }
>  
> +#[api]
> +#[derive(PartialEq, Serialize, Deserialize)]
> +#[serde(rename_all = "lowercase")]
> +/// The level of syncing that is done when writing into a datastore.
> +pub enum DatastoreFSyncLevel {
> +    /// No special fsync or syncfs calls are triggered. The system default dirty write back
> +    /// mechanism ensures that data gets flushed eventually via the `dirty_writeback_centisecs`
> +    /// and `dirty_expire_centisecs` kernel sysctls, defaulting to ~ 30s.
> +    ///
> +    /// This mode generally provides the best performance, as all write back can happen async,
> +    /// which reduces IO pressure.
> +    /// But it may cause data loss on power loss or system crash without any uninterruptible power supply.
> +    None,
> +    /// Triggers an fsync after writing any chunk on the datastore. While this can slow down
> +    /// backups significantly, depending on the underlying file system and storage used, it
> +    /// will ensure fine-grained consistency. But in practice there are no benefits over the
> +    /// file system level sync, so you should prefer that one, as on most systems the file level
> +    /// one is slower and causes more IO pressure compared to the file system level one.

I am not sure whether that "in practice" statement really holds
- we only tested the exact failure case on a few filesystems; there
  might be ones in use out there where a power loss can also lead to a
  truncated chunk, not only an empty/missing one. granted, both will be 
  detected on the next verification, but only the latter will be 
  automatically cleaned up by a subsequent backup task that uploads this 
  chunk..
- the FS underlying the datastore might be used for many datastores, or 
  even other, busy, non-datastore usage. not an ideal setup, but there 
  might be $reasons. in this case, syncfs might have a much bigger 
  negative effect (because of syncing out other, unrelated I/O) than 
  fsync.
- not sure what effect syncfs has if a datastore is really busy (as in, 
  has tons of basically no-op backups over a short period of time)

I'd rather mark 'Filesystem' as a good compromise, and the 'File' one as
the most consistent.

> +    File,
> +    /// Trigger a filesystem wide sync after all backup data got written but before finishing the
> +    /// task. This allows that every finished backup is fully written back to storage
> +    /// while reducing the impact most file systems in contrast to the file level sync.

missing 'on'? (reducing the impact *on* most file systems)?

> +    Filesystem,
> +}
> +
> +impl Default for DatastoreFSyncLevel {
> +    fn default() -> Self {
> +        DatastoreFSyncLevel::None
> +    }
> +}
> +
>  #[api(
>      properties: {
>          "chunk-order": {
> @@ -228,6 +259,7 @@ pub enum ChunkOrder {
>  pub struct DatastoreTuning {
>      /// Iterate chunks in this order
>      pub chunk_order: Option<ChunkOrder>,
> +    pub sync_level: Option<DatastoreFSyncLevel>,
>  }
>  
>  pub const DATASTORE_TUNING_STRING_SCHEMA: Schema = StringSchema::new("Datastore tuning options")
> diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
> index 1f169dee..86dd1c17 100644
> --- a/pbs-datastore/src/chunk_store.rs
> +++ b/pbs-datastore/src/chunk_store.rs
> @@ -1,11 +1,10 @@
> -use std::io::Write;
>  use std::os::unix::io::AsRawFd;
>  use std::path::{Path, PathBuf};
>  use std::sync::{Arc, Mutex};
>  
>  use anyhow::{bail, format_err, Error};
>  
> -use pbs_api_types::GarbageCollectionStatus;
> +use pbs_api_types::{DatastoreFSyncLevel, GarbageCollectionStatus};
>  use proxmox_sys::fs::{create_dir, create_path, CreateOptions};
>  use proxmox_sys::process_locker::{
>      ProcessLockExclusiveGuard, ProcessLockSharedGuard, ProcessLocker,
> @@ -22,6 +21,7 @@ pub struct ChunkStore {
>      chunk_dir: PathBuf,
>      mutex: Mutex<()>,
>      locker: Option<Arc<Mutex<ProcessLocker>>>,
> +    sync_level: DatastoreFSyncLevel,
>  }
>  
>  // TODO: what about sysctl setting vm.vfs_cache_pressure (0 - 100) ?
> @@ -68,6 +68,7 @@ impl ChunkStore {
>              chunk_dir: PathBuf::new(),
>              mutex: Mutex::new(()),
>              locker: None,
> +            sync_level: Default::default(),
>          }
>      }
>  
> @@ -88,6 +89,7 @@ impl ChunkStore {
>          uid: nix::unistd::Uid,
>          gid: nix::unistd::Gid,
>          worker: Option<&dyn WorkerTaskContext>,
> +        sync_level: DatastoreFSyncLevel,
>      ) -> Result<Self, Error>
>      where
>          P: Into<PathBuf>,
> @@ -144,7 +146,7 @@ impl ChunkStore {
>              }
>          }
>  
> -        Self::open(name, base)
> +        Self::open(name, base, sync_level)
>      }
>  
>      fn lockfile_path<P: Into<PathBuf>>(base: P) -> PathBuf {
> @@ -153,7 +155,11 @@ impl ChunkStore {
>          lockfile_path
>      }
>  
> -    pub fn open<P: Into<PathBuf>>(name: &str, base: P) -> Result<Self, Error> {
> +    pub fn open<P: Into<PathBuf>>(
> +        name: &str,
> +        base: P,
> +        sync_level: DatastoreFSyncLevel,
> +    ) -> Result<Self, Error> {
>          let base: PathBuf = base.into();
>  
>          if !base.is_absolute() {
> @@ -176,6 +182,7 @@ impl ChunkStore {
>              chunk_dir,
>              locker: Some(locker),
>              mutex: Mutex::new(()),
> +            sync_level,
>          })
>      }
>  
> @@ -461,9 +468,27 @@ impl ChunkStore {
>              }
>          }
>  
> -        proxmox_sys::fs::replace_file(chunk_path, raw_data, CreateOptions::new(), false).map_err(
> -            |err| format_err!("inserting chunk on store '{name}' failed for {digest_str} - {err}"),
> -        )?;
> +        let chunk_dir_path = chunk_path
> +            .parent()
> +            .ok_or_else(|| format_err!("unable to get chunk dir"))?
> +            .to_owned();
> +
> +        proxmox_sys::fs::replace_file(
> +            chunk_path,
> +            raw_data,
> +            CreateOptions::new(),
> +            self.sync_level == DatastoreFSyncLevel::File,
> +        )
> +        .map_err(|err| {
> +            format_err!("inserting chunk on store '{name}' failed for {digest_str} - {err}")
> +        })?;
> +
> +        if self.sync_level == DatastoreFSyncLevel::File {
> +            // fsync dir handle to persist the tmp rename
> +            let dir = std::fs::File::open(chunk_dir_path)?;
> +            nix::unistd::fsync(dir.as_raw_fd())
> +                .map_err(|err| format_err!("fsync failed: {err}"))?;
> +        }
>  
>          drop(lock);
>  
> @@ -520,13 +545,21 @@ fn test_chunk_store1() {
>  
>      if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
>  
> -    let chunk_store = ChunkStore::open("test", &path);
> +    let chunk_store = ChunkStore::open("test", &path, DatastoreFSyncLevel::None);
>      assert!(chunk_store.is_err());
>  
>      let user = nix::unistd::User::from_uid(nix::unistd::Uid::current())
>          .unwrap()
>          .unwrap();
> -    let chunk_store = ChunkStore::create("test", &path, user.uid, user.gid, None).unwrap();
> +    let chunk_store = ChunkStore::create(
> +        "test",
> +        &path,
> +        user.uid,
> +        user.gid,
> +        None,
> +        DatastoreFSyncLevel::None,
> +    )
> +    .unwrap();
>  
>      let (chunk, digest) = crate::data_blob::DataChunkBuilder::new(&[0u8, 1u8])
>          .build()
> @@ -538,7 +571,14 @@ fn test_chunk_store1() {
>      let (exists, _) = chunk_store.insert_chunk(&chunk, &digest).unwrap();
>      assert!(exists);
>  
> -    let chunk_store = ChunkStore::create("test", &path, user.uid, user.gid, None);
> +    let chunk_store = ChunkStore::create(
> +        "test",
> +        &path,
> +        user.uid,
> +        user.gid,
> +        None,
> +        DatastoreFSyncLevel::None,
> +    );
>      assert!(chunk_store.is_err());
>  
>      if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index 5af8a295..613f3196 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -18,8 +18,8 @@ use proxmox_sys::WorkerTaskContext;
>  use proxmox_sys::{task_log, task_warn};
>  
>  use pbs_api_types::{
> -    Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig, DatastoreTuning,
> -    GarbageCollectionStatus, HumanByte, Operation, UPID,
> +    Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig, DatastoreFSyncLevel,
> +    DatastoreTuning, GarbageCollectionStatus, HumanByte, Operation, UPID,
>  };
>  use pbs_config::ConfigVersionCache;
>  
> @@ -61,6 +61,7 @@ pub struct DataStoreImpl {
>      chunk_order: ChunkOrder,
>      last_generation: usize,
>      last_update: i64,
> +    sync_level: DatastoreFSyncLevel,
>  }
>  
>  impl DataStoreImpl {
> @@ -75,6 +76,7 @@ impl DataStoreImpl {
>              chunk_order: ChunkOrder::None,
>              last_generation: 0,
>              last_update: 0,
> +            sync_level: Default::default(),
>          })
>      }
>  }
> @@ -154,7 +156,13 @@ impl DataStore {
>              }
>          }
>  
> -        let chunk_store = ChunkStore::open(name, &config.path)?;
> +        let tuning: DatastoreTuning = serde_json::from_value(
> +            DatastoreTuning::API_SCHEMA
> +                .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
> +        )?;
> +
> +        let chunk_store =
> +            ChunkStore::open(name, &config.path, tuning.sync_level.unwrap_or_default())?;
>          let datastore = DataStore::with_store_and_config(chunk_store, config, generation, now)?;
>  
>          let datastore = Arc::new(datastore);
> @@ -197,7 +205,12 @@ impl DataStore {
>      ) -> Result<Arc<Self>, Error> {
>          let name = config.name.clone();
>  
> -        let chunk_store = ChunkStore::open(&name, &config.path)?;
> +        let tuning: DatastoreTuning = serde_json::from_value(
> +            DatastoreTuning::API_SCHEMA
> +                .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
> +        )?;
> +        let chunk_store =
> +            ChunkStore::open(&name, &config.path, tuning.sync_level.unwrap_or_default())?;
>          let inner = Arc::new(Self::with_store_and_config(chunk_store, config, 0, 0)?);
>  
>          if let Some(operation) = operation {
> @@ -242,6 +255,7 @@ impl DataStore {
>              chunk_order,
>              last_generation,
>              last_update,
> +            sync_level: tuning.sync_level.unwrap_or_default(),
>          })
>      }
>  
> @@ -1257,4 +1271,19 @@ impl DataStore {
>          todo!("split out the namespace");
>      }
>      */
> +
> +    /// Syncs the filesystem of the datastore if 'sync_level' is set to
> +    /// [`DatastoreFSyncLevel::Filesystem`]. Uses syncfs(2).
> +    pub fn try_ensure_sync_level(&self) -> Result<(), Error> {
> +        if self.inner.sync_level != DatastoreFSyncLevel::Filesystem {
> +            return Ok(());
> +        }
> +        let file = std::fs::File::open(self.base_path())?;
> +        let fd = file.as_raw_fd();
> +        log::info!("syncing filesystem");
> +        if unsafe { libc::syncfs(fd) } < 0 {
> +            bail!("error during syncfs: {}", std::io::Error::last_os_error());
> +        }
> +        Ok(())
> +    }
>  }
> diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
> index 8c1c42db..98a13c6e 100644
> --- a/src/api2/backup/environment.rs
> +++ b/src/api2/backup/environment.rs
> @@ -623,6 +623,8 @@ impl BackupEnvironment {
>              }
>          }
>  
> +        self.datastore.try_ensure_sync_level()?;
> +
>          // marks the backup as successful
>          state.finished = true;
>  
> diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
> index 28342c2c..9fdd73dc 100644
> --- a/src/api2/config/datastore.rs
> +++ b/src/api2/config/datastore.rs
> @@ -11,8 +11,8 @@ use proxmox_section_config::SectionConfigData;
>  use proxmox_sys::WorkerTaskContext;
>  
>  use pbs_api_types::{
> -    Authid, DataStoreConfig, DataStoreConfigUpdater, DatastoreNotify, DATASTORE_SCHEMA,
> -    PRIV_DATASTORE_ALLOCATE, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY,
> +    Authid, DataStoreConfig, DataStoreConfigUpdater, DatastoreNotify, DatastoreTuning,
> +    DATASTORE_SCHEMA, PRIV_DATASTORE_ALLOCATE, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY,
>      PROXMOX_CONFIG_DIGEST_SCHEMA,
>  };
>  use pbs_config::BackupLockGuard;
> @@ -70,6 +70,10 @@ pub(crate) fn do_create_datastore(
>  ) -> Result<(), Error> {
>      let path: PathBuf = datastore.path.clone().into();
>  
> +    let tuning: DatastoreTuning = serde_json::from_value(
> +        DatastoreTuning::API_SCHEMA
> +            .parse_property_string(datastore.tuning.as_deref().unwrap_or(""))?,
> +    )?;
>      let backup_user = pbs_config::backup_user()?;
>      let _store = ChunkStore::create(
>          &datastore.name,
> @@ -77,6 +81,7 @@ pub(crate) fn do_create_datastore(
>          backup_user.uid,
>          backup_user.gid,
>          worker,
> +        tuning.sync_level.unwrap_or_default(),
>      )?;
>  
>      config.set_data(&datastore.name, "datastore", &datastore)?;
> -- 
> 2.30.2
> 
> 
> 





* Re: [pbs-devel] [PATCH proxmox-backup 3/5] datastore: implement sync-level tuning for datastores
  2022-05-23  7:13   ` Fabian Grünbichler
@ 2022-05-24  8:14     ` Thomas Lamprecht
  0 siblings, 0 replies; 8+ messages in thread
From: Thomas Lamprecht @ 2022-05-24  8:14 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Fabian Grünbichler

On 23/05/2022 09:13, Fabian Grünbichler wrote:
> I am not sure whether that "in practice" statement really holds
> - we only tested the exact failure case on a few filesystems, there 
>   might be ones in use out there where a powerloss can also lead to a 
>   truncated chunk, not only an empty/missing one. granted, both will be 
>   detected on the next verification, but only the latter will be 
>   automatically cleaned up by a subsequent backup task that uploads this 
>   chunk..

I don't think partially written files can happen on a journaled FS in that way,
at least as long as the default-on write barriers are not disabled.
XFS, ext4 and btrfs are fine in that regard, FWICT; I didn't check others,
but I'd think that NFS, CIFS and ZFS would additionally be interesting.

> - the FS underlying the datastore might be used for many datastores, or 
>   even other, busy, non-datastore usage. not an ideal setup, but there 
>   might be $reasons. in this case, syncfs might have a much bigger 
>   negative effect (because of syncing out other, unrelated I/O) than 
>   fsync.

Yeah, edge cases exist, but "in practice" means the general case, in which
PBS is still its own appliance on which one won't also co-host a high-churn
Apache Cassandra DB or whatever, so this still holds, and in most cases it
will be more efficient than two fsyncs per chunk (which internally often
flush more than just the two inodes anyway).

If an admin still does it for $reasons, they can still switch to the fsync-based
level. I'd find it odd if the "in practice" in a doc comment would hinder
them from ever trying, especially as such setups most of the time get created
by users who don't care about best practice anyway.

> - not sure what effect syncfs has if a datastore is really busy (as in, 
>   has tons of basically no-op backups over a short period of time)

What effects do you imagine? It just starts a writeback kernel worker that
flushes all dirty inodes belonging to a super block at the time the syncfs
was called, in a lockless manner using RCU (see sync.c and fs-writeback.c
in the kernel's fs/ tree); new IO isn't stopped, and the inodes would have been
synced over the next 30s (default) anyway.

> 
> I'd rather mark 'Filesystem' as a good compromise, and the 'File' one as
> the most consistent.






end of thread, other threads:[~2022-05-24  8:14 UTC | newest]

Thread overview: 8+ messages
2022-05-20 12:42 [pbs-devel] [PATCH proxmox-backup 0/5] add 'sync-level' to datastore tuning options Dominik Csapak
2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 1/5] docs: add information about chunk order option for datastores Dominik Csapak
2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 2/5] pbs-datastore: chunk_store: use replace_file in insert_chunk Dominik Csapak
2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 3/5] datastore: implement sync-level tuning for datastores Dominik Csapak
2022-05-23  7:13   ` Fabian Grünbichler
2022-05-24  8:14     ` Thomas Lamprecht
2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 4/5] docs: add documentation about the 'sync-level' tuning Dominik Csapak
2022-05-20 12:42 ` [pbs-devel] [PATCH proxmox-backup 5/5] datastore: make 'filesystem' the default sync-level Dominik Csapak
