public inbox for pbs-devel@lists.proxmox.com
* [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection
@ 2026-04-30 15:05 Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox 1/3] pbs-api-types: add datastore operation variant for reclaiming storage Robert Obkircher
                   ` (12 more replies)
  0 siblings, 13 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

Add a maintenance mode that only allows garbage collection, and avoid
running out of space by checking how much is available before writes.

I mostly wanted to ask if it really makes sense to add the maintenance
mode. It could also just be treated as a special case where the entire
space is reserved.

I'm also not entirely happy with the "check before writes" approach, but
I couldn't come up with a better one either. Let me know if you have
other ideas.


proxmox:

Robert Obkircher (3):
  pbs-api-types: add datastore operation variant for reclaiming storage
  pbs-api-types: add GarbageCollection maintenance mode
  pbs-api-types: add reserved space to datastore tuning options

 pbs-api-types/src/datastore.rs   |  8 ++++++++
 pbs-api-types/src/maintenance.rs | 18 +++++++++++-------
 2 files changed, 19 insertions(+), 7 deletions(-)


proxmox-backup:

Robert Obkircher (10):
  task tracking: count Reclaim datastore operations as writes
  datastore: open datastores with Reclaim instead of Write operation
  fix #5797: www: display new GarbageCollection maintenance mode
  www: access active operation fields by name instead of index
  www: don't claim that all active writers are gc mode conflicts
  chunk_store: add method to limit file system usage
  chunk_store: check file system space before inserting new chunks
  datastore: check file system space for blobs and group notes
  api2: backup: check space for fixed and dynamic index files
  fix #7254: datastore: refuse new backups when capacity is almost full

 pbs-datastore/src/chunk_store.rs       | 46 ++++++++++++--
 pbs-datastore/src/datastore.rs         | 15 +++++
 pbs-datastore/src/file_system_limit.rs | 87 ++++++++++++++++++++++++++
 pbs-datastore/src/lib.rs               |  2 +
 pbs-datastore/src/task_tracking.rs     |  2 +
 src/api2/admin/datastore.rs            | 12 ++--
 src/api2/admin/namespace.rs            |  2 +-
 src/api2/backup/environment.rs         | 23 +++++++
 src/api2/backup/mod.rs                 | 19 +++++-
 src/api2/config/datastore.rs           |  2 +
 src/bin/proxmox-backup-proxy.rs        |  2 +-
 src/server/prune_job.rs                |  2 +-
 www/Utils.js                           | 24 +++++--
 www/datastore/OptionView.js            | 19 +++++-
 www/window/MaintenanceOptions.js       |  1 +
 15 files changed, 236 insertions(+), 22 deletions(-)
 create mode 100644 pbs-datastore/src/file_system_limit.rs


Summary over all repositories:
  17 files changed, 255 insertions(+), 29 deletions(-)

-- 
Generated by git-murpp 0.8.1




^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH proxmox 1/3] pbs-api-types: add datastore operation variant for reclaiming storage
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox 2/3] pbs-api-types: add GarbageCollection maintenance mode Robert Obkircher
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

The Reclaim variant will be used for Write operations that should be
allowed even when storage is almost full.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 pbs-api-types/src/maintenance.rs | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/pbs-api-types/src/maintenance.rs b/pbs-api-types/src/maintenance.rs
index fc7d240d..8a37756a 100644
--- a/pbs-api-types/src/maintenance.rs
+++ b/pbs-api-types/src/maintenance.rs
@@ -25,14 +25,16 @@ pub const MAINTENANCE_MESSAGE_SCHEMA: Schema =
 pub enum Operation {
     /// for any read operation like backup restore or RRD metric collection
     Read,
-    /// for any write/delete operation, like backup create or GC
+    /// for any read/write/delete operation, like backup create
     Write,
+    /// for read/write/delete operations that reduce or maintain
+    /// storage usage, like GC or prune.
+    Reclaim,
     /// for any purely logical operation on the in-memory state of the datastore, e.g., to check if
     /// some mutex could be locked (e.g., GC already running?)
     ///
     /// NOTE: one must *not* do any IO operations when only helding this Op state
     Lookup,
-    // GarbageCollect or Delete?
 }
 
 #[api]
@@ -110,7 +112,9 @@ impl MaintenanceMode {
             bail!("offline maintenance mode: {}", message);
         } else if self.ty == MaintenanceType::S3Refresh {
             bail!("S3 refresh maintenance mode: {}", message);
-        } else if self.ty == MaintenanceType::ReadOnly && Operation::Write == operation {
+        } else if self.ty == MaintenanceType::ReadOnly
+            && matches!(operation, Operation::Write | Operation::Reclaim)
+        {
             bail!("read-only maintenance mode: {}", message);
         }
         Ok(())
-- 
2.47.3






* [PATCH proxmox 2/3] pbs-api-types: add GarbageCollection maintenance mode
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox 1/3] pbs-api-types: add datastore operation variant for reclaiming storage Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox 3/3] pbs-api-types: add reserved space to datastore tuning options Robert Obkircher
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

This mode may be used to safely prune and garbage collect a datastore
without the risk of running out of space due to new backups.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 pbs-api-types/src/datastore.rs   | 1 +
 pbs-api-types/src/maintenance.rs | 8 ++++----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index 098d2b7c..47ae1819 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -635,6 +635,7 @@ impl DataStoreConfig {
 
         match current_type {
             Some(MaintenanceType::ReadOnly) => { /* always OK  */ }
+            Some(MaintenanceType::GarbageCollection) => { /* always OK  */ }
             Some(MaintenanceType::Offline) => { /* always OK  */ }
             Some(MaintenanceType::Unmount) => {
                 /* used to reset it after failed unmount, or alternative for aborting unmount task */
diff --git a/pbs-api-types/src/maintenance.rs b/pbs-api-types/src/maintenance.rs
index 8a37756a..a46764bc 100644
--- a/pbs-api-types/src/maintenance.rs
+++ b/pbs-api-types/src/maintenance.rs
@@ -42,12 +42,10 @@ pub enum Operation {
 #[serde(rename_all = "kebab-case")]
 /// Maintenance type.
 pub enum MaintenanceType {
-    // TODO:
-    //  - Add "GarbageCollection" or "DeleteOnly" as type and track GC (or all deletes) as separate
-    //    operation, so that one can enable a mode where nothing new can be added but stuff can be
-    //    cleaned
     /// Only read operations are allowed on the datastore.
     ReadOnly,
+    /// Allow reads and reclaim operations, but no new writes.
+    GarbageCollection,
     /// Neither read nor write operations are allowed on the datastore.
     Offline,
     /// The datastore is being deleted.
@@ -116,6 +114,8 @@ impl MaintenanceMode {
             && matches!(operation, Operation::Write | Operation::Reclaim)
         {
             bail!("read-only maintenance mode: {}", message);
+        } else if self.ty == MaintenanceType::GarbageCollection && operation == Operation::Write {
+            bail!("garbage-collection maintenance mode: {}", message);
         }
         Ok(())
     }
-- 
2.47.3






* [PATCH proxmox 3/3] pbs-api-types: add reserved space to datastore tuning options
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox 1/3] pbs-api-types: add datastore operation variant for reclaiming storage Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox 2/3] pbs-api-types: add GarbageCollection maintenance mode Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 01/10] task tracking: count Reclaim datastore operations as writes Robert Obkircher
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

This is a tuning option because the optimal value depends on the
number and speed of concurrent writers.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 pbs-api-types/src/datastore.rs | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index 47ae1819..36474f31 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -260,6 +260,10 @@ pub const GC_CACHE_CAPACITY_SCHEMA: Schema =
             type: ChunkOrder,
             optional: true,
         },
+        "reserved-space": {
+            type: HumanByte,
+            optional: true,
+        },
         "gc-atime-safety-check": {
             description:
                 "Check filesystem atime updates are honored during store creation and garbage \
@@ -295,6 +299,9 @@ pub struct DatastoreTuning {
     pub chunk_order: Option<ChunkOrder>,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub sync_level: Option<DatastoreFSyncLevel>,
+    /// Amount of reserved space used to prevent the file system from becoming completely full.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub reserved_space: Option<HumanByte>,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub gc_atime_safety_check: Option<bool>,
     #[serde(skip_serializing_if = "Option::is_none")]
-- 
2.47.3






* [PATCH proxmox-backup 01/10] task tracking: count Reclaim datastore operations as writes
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (2 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox 3/3] pbs-api-types: add reserved space to datastore tuning options Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 02/10] datastore: open datastores with Reclaim instead of Write operation Robert Obkircher
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

This ensures that changing a Write to a Reclaim does not break forward
compatibility when older versions read the counters.

Ideally, Write operations would also be tracked separately so they
could be displayed as conflicts of the GarbageCollection maintenance
mode. However, an extra field would lead to parse errors in older
versions, which are not always propagated (for example when cloning a
Datastore) and would thus result in incorrect values.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 pbs-datastore/src/task_tracking.rs | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/pbs-datastore/src/task_tracking.rs b/pbs-datastore/src/task_tracking.rs
index 44a4522dc..c8833759b 100644
--- a/pbs-datastore/src/task_tracking.rs
+++ b/pbs-datastore/src/task_tracking.rs
@@ -106,6 +106,7 @@ pub fn update_active_operations(
     let mut updated_active_operations = match operation {
         Operation::Read => ActiveOperationStats { read: 1, write: 0 },
         Operation::Write => ActiveOperationStats { read: 0, write: 1 },
+        Operation::Reclaim => ActiveOperationStats { read: 0, write: 1 },
         Operation::Lookup => ActiveOperationStats { read: 0, write: 0 },
     };
     let mut found_entry = false;
@@ -121,6 +122,7 @@ pub fn update_active_operations(
                             match operation {
                                 Operation::Read => task.active_operations.read += count,
                                 Operation::Write => task.active_operations.write += count,
+                                Operation::Reclaim => task.active_operations.write += count,
                                 Operation::Lookup => (), // no IO must happen there
                             };
                             updated_active_operations = task.active_operations;
-- 
2.47.3






* [PATCH proxmox-backup 02/10] datastore: open datastores with Reclaim instead of Write operation
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (3 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox-backup 01/10] task tracking: count Reclaim datastore operations as writes Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 03/10] fix #5797: www: display new GarbageCollection maintenance mode Robert Obkircher
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

These operations do not significantly increase storage requirements,
so they should be allowed even when the file system is almost full.

For now, this only includes the most important ones for reclaiming
space, but others, such as set_backup_owner and group/namespace moves,
could reasonably be supported as well.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 src/api2/admin/datastore.rs     | 12 ++++++------
 src/api2/admin/namespace.rs     |  2 +-
 src/bin/proxmox-backup-proxy.rs |  2 +-
 src/server/prune_job.rs         |  2 +-
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index a814c076c..60069c472 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -253,7 +253,7 @@ pub async fn delete_group(
             &auth_id,
             PRIV_DATASTORE_MODIFY,
             PRIV_DATASTORE_PRUNE,
-            Operation::Write,
+            Operation::Reclaim,
             &group,
         )?;
 
@@ -463,7 +463,7 @@ pub async fn delete_snapshot(
             &auth_id,
             PRIV_DATASTORE_MODIFY,
             PRIV_DATASTORE_PRUNE,
-            Operation::Write,
+            Operation::Reclaim,
             &backup_dir.group,
         )?;
 
@@ -1002,7 +1002,7 @@ pub fn prune(
         &auth_id,
         PRIV_DATASTORE_MODIFY,
         PRIV_DATASTORE_PRUNE,
-        Operation::Write,
+        Operation::Reclaim,
         &group,
     )?;
 
@@ -1174,7 +1174,7 @@ pub fn prune_datastore(
         true,
     )?;
 
-    let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Write))?;
+    let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Reclaim))?;
     let ns = prune_options.ns.clone().unwrap_or_default();
     let worker_id = format!("{store}:{ns}");
 
@@ -1212,7 +1212,7 @@ pub fn start_garbage_collection(
     _info: &ApiMethod,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {
-    let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Write))?;
+    let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Reclaim))?;
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
 
     let job = Job::new("garbage_collection", &store)
@@ -2306,7 +2306,7 @@ pub async fn set_protection(
             &auth_id,
             PRIV_DATASTORE_MODIFY,
             PRIV_DATASTORE_BACKUP,
-            Operation::Write,
+            Operation::Reclaim,
             &backup_dir.group,
         )?;
 
diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
index 56f420025..b29bcb68a 100644
--- a/src/api2/admin/namespace.rs
+++ b/src/api2/admin/namespace.rs
@@ -168,7 +168,7 @@ pub fn delete_namespace(
 
     check_ns_modification_privs(&store, &ns, &auth_id)?;
 
-    let lookup = crate::tools::lookup_with(&store, Operation::Write);
+    let lookup = crate::tools::lookup_with(&store, Operation::Reclaim);
     let datastore = DataStore::lookup_datastore(lookup)?;
 
     let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups)?;
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index b18550420..2d9f55d37 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -592,7 +592,7 @@ async fn schedule_datastore_garbage_collection() {
             Err(_) => continue, // could not get lock
         };
 
-        let lookup = lookup_with(&store, Operation::Write);
+        let lookup = lookup_with(&store, Operation::Reclaim);
         let datastore = match DataStore::lookup_datastore(lookup) {
             Ok(datastore) => datastore,
             Err(err) => {
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index ca5c67541..95ebb4965 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -133,7 +133,7 @@ pub fn do_prune_job(
     auth_id: &Authid,
     schedule: Option<String>,
 ) -> Result<String, Error> {
-    let lookup = crate::tools::lookup_with(&store, Operation::Write);
+    let lookup = crate::tools::lookup_with(&store, Operation::Reclaim);
     let datastore = DataStore::lookup_datastore(lookup)?;
 
     let worker_type = job.jobtype().to_string();
-- 
2.47.3






* [PATCH proxmox-backup 03/10] fix #5797: www: display new GarbageCollection maintenance mode
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (4 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox-backup 02/10] datastore: open datastores with Reclaim instead of Write operation Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 04/10] www: access active operation fields by name instead of index Robert Obkircher
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

Display the new mode and make it selectable. This does not yet display
the correct "conflicting tasks" count after selecting the mode.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 www/Utils.js                     | 3 +++
 www/window/MaintenanceOptions.js | 1 +
 2 files changed, 4 insertions(+)

diff --git a/www/Utils.js b/www/Utils.js
index bf4b025c7..f5c032ee2 100644
--- a/www/Utils.js
+++ b/www/Utils.js
@@ -846,6 +846,9 @@ Ext.define('PBS.Utils', {
             case 'read-only':
                 modeText = gettext('Read-only');
                 break;
+            case 'garbage-collection':
+                modeText = gettext('Garbage Collection');
+                break;
             case 'offline':
                 modeText = gettext('Offline');
                 break;
diff --git a/www/window/MaintenanceOptions.js b/www/window/MaintenanceOptions.js
index 9a735e5e8..9228f4ee9 100644
--- a/www/window/MaintenanceOptions.js
+++ b/www/window/MaintenanceOptions.js
@@ -5,6 +5,7 @@ Ext.define('PBS.form.maintenanceType', {
     comboItems: [
         ['__default__', gettext('None')],
         ['read-only', gettext('Read only')],
+        ['garbage-collection', gettext('Garbage Collection')],
         ['offline', gettext('Offline')],
     ],
 });
-- 
2.47.3






* [PATCH proxmox-backup 04/10] www: access active operation fields by name instead of index
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (5 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox-backup 03/10] fix #5797: www: display new GarbageCollection maintenance mode Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 05/10] www: don't claim that all active writers are gc mode conflicts Robert Obkircher
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

Avoid relying on (presumably sorted) field order, so that new fields
can be added to the API response in the future.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 www/datastore/OptionView.js | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/www/datastore/OptionView.js b/www/datastore/OptionView.js
index 2b35355ab..bac9eab0c 100644
--- a/www/datastore/OptionView.js
+++ b/www/datastore/OptionView.js
@@ -122,8 +122,13 @@ Ext.define('PBS.Datastore.Options', {
 
             view.mon(me.activeOperationsRstore, 'load', (store, data, success) => {
                 let activeTasks = me.getView().maintenanceActiveTasks;
-                activeTasks.read = data?.[0]?.data.value ?? 0;
-                activeTasks.write = data?.[1]?.data.value ?? 0;
+                if (success) {
+                    activeTasks.read = store.getById('read')?.data.value ?? 0;
+                    activeTasks.write = store.getById('write')?.data.value ?? 0;
+                } else {
+                    activeTasks.read = 0;
+                    activeTasks.write = 0;
+                }
             });
 
         },
-- 
2.47.3






* [PATCH proxmox-backup 05/10] www: don't claim that all active writers are gc mode conflicts
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (6 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox-backup 04/10] www: access active operation fields by name instead of index Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 06/10] chunk_store: add method to limit file system usage Robert Obkircher
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

Tracking the precise number of conflicts doesn't seem worth it, so
continue displaying the writer count with a different label.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 www/Utils.js | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/www/Utils.js b/www/Utils.js
index f5c032ee2..a9239b005 100644
--- a/www/Utils.js
+++ b/www/Utils.js
@@ -828,10 +828,17 @@ Ext.define('PBS.Utils', {
 
             if (conflictingTasks > 0) {
                 extra += '| <i class="fa fa-spinner fa-pulse fa-fw"></i> ';
-                extra += Ext.String.format(
-                    gettext('{0} conflicting tasks still active.'),
-                    conflictingTasks,
-                );
+                if (type === 'garbage-collection') {
+                    extra += Ext.String.format(
+                        gettext('{0} active writers.'),
+                        conflictingTasks,
+                    );
+                } else {
+                    extra += Ext.String.format(
+                        gettext('{0} conflicting tasks still active.'),
+                        conflictingTasks,
+                    );
+                }
             } else {
                 extra += '<i class="fa fa-check"></i>';
             }
-- 
2.47.3






* [PATCH proxmox-backup 06/10] chunk_store: add method to limit file system usage
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (7 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox-backup 05/10] www: don't claim that all active writers are gc mode conflicts Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 07/10] chunk_store: check file system space before inserting new chunks Robert Obkircher
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

Provide a way to check whether enough space is available to write new
backup data to a local chunk store. This is especially important on
copy-on-write file systems where GC and prune jobs need additional
space for metadata updates.

The check is not completely safe because multiple threads/processes
could perform it at the same time, but with a big enough reservation
it should be good enough in practice.

Caching is used to avoid unnecessary syscalls, but this is likely only
beneficial on NFS.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 pbs-datastore/src/chunk_store.rs       | 12 ++++
 pbs-datastore/src/file_system_limit.rs | 87 ++++++++++++++++++++++++++
 pbs-datastore/src/lib.rs               |  2 +
 3 files changed, 101 insertions(+)
 create mode 100644 pbs-datastore/src/file_system_limit.rs

diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 68db88eab..a02f437c1 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -23,6 +23,7 @@ use crate::data_blob::DataChunkBuilder;
 use crate::file_formats::{
     COMPRESSED_BLOB_MAGIC_1_0, ENCRYPTED_BLOB_MAGIC_1_0, UNCOMPRESSED_BLOB_MAGIC_1_0,
 };
+use crate::file_system_limit::FileSystemLimit;
 use crate::{DataBlob, LocalDatastoreLruCache};
 
 const USING_MARKER_FILENAME_EXT: &str = "using";
@@ -35,6 +36,7 @@ pub struct ChunkStore {
     mutex: Mutex<()>,
     locker: Option<Arc<Mutex<ProcessLocker>>>,
     sync_level: DatastoreFSyncLevel,
+    fs_limit: FileSystemLimit,
 }
 
 // TODO: what about sysctl setting vm.vfs_cache_pressure (0 - 100) ?
@@ -82,6 +84,7 @@ impl ChunkStore {
             mutex: Mutex::new(()),
             locker: None,
             sync_level: Default::default(),
+            fs_limit: FileSystemLimit::new(None),
         }
     }
 
@@ -206,6 +209,7 @@ impl ChunkStore {
             locker: Some(locker),
             mutex: Mutex::new(()),
             sync_level,
+            fs_limit: FileSystemLimit::new(None),
         })
     }
 
@@ -966,6 +970,10 @@ impl ChunkStore {
         }
         (chunk_path, counter)
     }
+
+    pub(crate) fn check_space(&self, size: u64) -> Result<(), Error> {
+        self.fs_limit.check_available(&self.base, size)
+    }
 }
 
 #[derive(PartialEq)]
@@ -1001,6 +1009,10 @@ fn test_chunk_store1() {
         .build()
         .unwrap();
 
+    chunk_store
+        .check_space(chunk.raw_size())
+        .expect("enough space");
+
     let (exists, _) = chunk_store.insert_chunk(&chunk, &digest).unwrap();
     assert!(!exists);
 
diff --git a/pbs-datastore/src/file_system_limit.rs b/pbs-datastore/src/file_system_limit.rs
new file mode 100644
index 000000000..fab62d046
--- /dev/null
+++ b/pbs-datastore/src/file_system_limit.rs
@@ -0,0 +1,87 @@
+use std::path::Path;
+use std::sync::atomic::{AtomicU64, Ordering};
+use std::time::{Duration, Instant};
+
+use anyhow::{bail, format_err, Error};
+
+/// Cached file system space availability check.
+///
+/// Supports reserving a safety buffer because multiple threads
+/// and processes may pass the check at the same time before they
+/// write.
+pub struct FileSystemLimit {
+    reserved: AtomicU64,
+    base: Instant,
+    elapsed_nanos: AtomicU64,
+    available: AtomicU64,
+}
+
+/// Encode `None` as `MAX` because nobody has enough storage to
+/// notice the difference.
+fn encode_reserved(bytes: Option<u64>) -> u64 {
+    bytes.map_or(u64::MAX, |b| b.min(u64::MAX - 1))
+}
+
+impl FileSystemLimit {
+    /// Specify the amount of reserved space for checks, or disable them with `None`.
+    pub fn new(reserved_space: Option<u64>) -> Self {
+        Self {
+            reserved: AtomicU64::new(encode_reserved(reserved_space)),
+            base: Instant::now(),
+            elapsed_nanos: AtomicU64::new(0),
+            available: AtomicU64::new(0),
+        }
+    }
+
+    /// Specify the amount of reserved space for checks, or disable them with `None`.
+    pub fn set_reserved_space(&self, bytes: Option<u64>) {
+        self.reserved
+            .store(encode_reserved(bytes), Ordering::Release);
+    }
+
+    /// Check if there is probably enough space to write `size` bytes.
+    ///
+    /// Repeated calls must specify paths to the same file system.
+    pub fn check_available(&self, path: &Path, size: u64) -> Result<(), Error> {
+        let reserved = self.reserved.load(Ordering::Acquire);
+        if reserved == u64::MAX {
+            return Ok(()); // disabled
+        }
+        let required = reserved.saturating_add(size);
+
+        let since_base = self.base.elapsed().as_nanos() as u64;
+        let last_update = self.elapsed_nanos.load(Ordering::Acquire);
+        let since_update = since_base.saturating_sub(last_update);
+
+        // Limit max age in case of unexpected changes like a manual resize of the file system.
+        if last_update != 0 && since_update as u128 <= Duration::from_secs(1).as_nanos() {
+            // Assume at most 100 GB/s (1 GB/s = 1 B/ns)
+            let max_written = 100 * since_update;
+
+            let available = self.available.load(Ordering::Acquire);
+            if required.saturating_add(max_written) <= available {
+                log::trace!("file_system_limit: cached, path={path:?}, available={available}, requested={size}, reserved={reserved}");
+                return Ok(());
+            }
+        }
+
+        // Repeated calls on a local file system take less than 2 microseconds,
+        // so it should be fine if multiple threads get here at the same time
+        // and race on the stores below.
+        let info = proxmox_sys::fs::fs_info(path)
+            .map_err(|e| format_err!("failed to read file system info for {path:?} - {e}"))?;
+
+        let available = info.available;
+        self.available.store(available, Ordering::Release);
+        self.elapsed_nanos.store(since_base, Ordering::Release);
+
+        log::trace!("file_system_limit: uncached, path={path:?}, available={available}, requested={size}, reserved={reserved}");
+        if required > available {
+            // The UI also shows this instead of `info.total`
+            let total = info.used + info.available;
+
+            bail!("Not enough space: path={path:?}, available={available}/{total}, requested={size}, reserved={reserved}");
+        }
+        Ok(())
+    }
+}
diff --git a/pbs-datastore/src/lib.rs b/pbs-datastore/src/lib.rs
index 6647ee2b6..6a0c58a91 100644
--- a/pbs-datastore/src/lib.rs
+++ b/pbs-datastore/src/lib.rs
@@ -222,6 +222,8 @@ pub use datastore::{
     S3_CLIENT_REQUEST_COUNTER_BASE_PATH, S3_DATASTORE_IN_USE_MARKER,
 };
 
+mod file_system_limit;
+
 mod hierarchy;
 pub use hierarchy::{
     ListGroups, ListGroupsType, ListNamespaces, ListNamespacesRecursive, ListSnapshots,
-- 
2.47.3






* [PATCH proxmox-backup 07/10] chunk_store: check file system space before inserting new chunks
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (8 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox-backup 06/10] chunk_store: add method to limit file system usage Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 08/10] datastore: check file system space for blobs and group notes Robert Obkircher
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

This is done here to cover all possible paths (including tape restore)
and to skip the space check for chunks that already exist.

Note that the atime update check invoked by gc and prune jobs tries to
insert a zero chunk. However, that chunk is already inserted during
datastore creation and never removed, so this should not prevent them
from running on a full disk.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 pbs-datastore/src/chunk_store.rs | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index a02f437c1..6a9dfbcef 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -736,6 +736,8 @@ impl ChunkStore {
             }
         }
 
+        self.check_space(encoded_size)?;
+
         let chunk_dir_path = chunk_path
             .parent()
             .ok_or_else(|| format_err!("unable to get chunk dir"))?;
-- 
2.47.3


* [PATCH proxmox-backup 08/10] datastore: check file system space for blobs and group notes
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (9 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox-backup 07/10] chunk_store: check file system space before inserting new chunks Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 09/10] api2: backup: check space for fixed and dynamic index files Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 10/10] fix #7254: datastore: refuse new backups when capacity is almost full Robert Obkircher
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

This covers upload_blob and upload_backup_log.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 pbs-datastore/src/datastore.rs | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 34efcd398..def88f30a 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -3386,6 +3386,7 @@ impl DataStore {
             .context("failed to set notes on s3 backend")?;
         }
         let notes_path = self.group_notes_path(backup_group.backup_ns(), backup_group.group());
+        self.check_space(notes.len() as u64)?;
         replace_file(notes_path, notes.as_bytes(), CreateOptions::new(), false)
             .context("failed to replace group notes file")?;
         Ok(())
@@ -3419,6 +3420,7 @@ impl DataStore {
 
         let mut path = snapshot.full_path();
         path.push(filename);
+        self.check_space(blob.raw_size())?;
         replace_file(&path, blob.raw_data(), CreateOptions::new(), false)?;
         Ok(())
     }
@@ -3496,6 +3498,10 @@ impl DataStore {
 
         result
     }
+
+    pub fn check_space(&self, size: u64) -> Result<(), Error> {
+        self.inner.chunk_store.check_space(size)
+    }
 }
 
 /// Track S3 object keys to be deleted by garbage collection while holding their file lock.
-- 
2.47.3


* [PATCH proxmox-backup 09/10] api2: backup: check space for fixed and dynamic index files
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (10 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox-backup 08/10] datastore: check file system space for blobs and group notes Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  2026-04-30 15:05 ` [PATCH proxmox-backup 10/10] fix #7254: datastore: refuse new backps when capacity is almost full Robert Obkircher
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

The dynamic index writer uses a 1 MiB buffer, so size checks only need
to include that.

The fixed index writer uses mmap+ftruncate, which makes it difficult
to tell whether file system space has already been reserved. Because
running out of space would risk the process getting killed with
SIGBUS, it is better to always check for the total size. On non-CoW
file systems the risk could be reduced further by switching to
fallocate.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 src/api2/backup/environment.rs | 23 +++++++++++++++++++++++
 src/api2/backup/mod.rs         | 19 ++++++++++++++++++-
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index ab623f1ff..c297d78c5 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -377,6 +377,24 @@ impl BackupEnvironment {
         Ok(uid)
     }
 
+    pub fn fixed_writer_check_space(&self, wid: usize, offset: u64) -> Result<(), Error> {
+        let mut state = self.state.lock().unwrap();
+
+        state.ensure_unfinished()?;
+
+        let data = match state.fixed_writers.get_mut(&wid) {
+            Some(data) => data,
+            None => bail!("fixed writer '{}' not registered", wid),
+        };
+
+        let content_size = data
+            .size
+            .unwrap_or_else(|| data.index.size().max(offset + data.chunk_size as u64));
+
+        self.datastore
+            .check_space(4096 + 32 * content_size.div_ceil(data.chunk_size as u64))
+    }
+
     /// Append chunk to dynamic writer
     pub fn dynamic_writer_append_chunk(
         &self,
@@ -533,6 +551,8 @@ impl BackupEnvironment {
             );
         }
 
+        self.datastore.check_space(1024 * 1024)?;
+
         let expected_csum = data.index.close()?;
         data.closed = true;
 
@@ -642,6 +662,9 @@ impl BackupEnvironment {
             }
         }
 
+        self.datastore
+            .check_space(4096 + data.index.index_length() as u64 * 32)?;
+
         let expected_csum = data.index.close()?;
         data.closed = true;
 
diff --git a/src/api2/backup/mod.rs b/src/api2/backup/mod.rs
index 86ec49487..0edaca601 100644
--- a/src/api2/backup/mod.rs
+++ b/src/api2/backup/mod.rs
@@ -437,6 +437,8 @@ fn create_dynamic_index(
         bail!("wrong archive extension: '{}'", archive_name);
     }
 
+    env.datastore.check_space(1024 * 1024)?;
+
     let mut path = env.backup_dir.relative_path();
     path.push(archive_name);
 
@@ -489,6 +491,8 @@ fn create_fixed_index(
         bail!("wrong archive extension: '{}'", archive_name);
     }
 
+    env.datastore.check_space(size.unwrap_or(4096 + 4096))?;
+
     let mut path = env.backup_dir.relative_path();
     path.push(&archive_name);
 
@@ -610,6 +614,10 @@ fn dynamic_append(
 
     env.debug(format!("dynamic_append {} chunks", digest_list.len()));
 
+    // BufWriter capacity + new data
+    env.datastore
+        .check_space(1024 * 1024 + digest_list.len() as u64 * 40)?;
+
     for (i, item) in digest_list.iter().enumerate() {
         let digest_str = item.as_str().unwrap();
         let digest = <[u8; 32]>::from_hex(digest_str)?;
@@ -683,10 +691,19 @@ fn fixed_append(
 
     env.debug(format!("fixed_append {} chunks", digest_list.len()));
 
+    let offset_list = offset_list
+        .iter()
+        .map(|o| o.as_u64().unwrap())
+        .collect::<Vec<_>>();
+
+    if let Some(max_offset) = offset_list.iter().max() {
+        env.fixed_writer_check_space(wid, *max_offset)?;
+    }
+
     for (i, item) in digest_list.iter().enumerate() {
         let digest_str = item.as_str().unwrap();
         let digest = <[u8; 32]>::from_hex(digest_str)?;
-        let offset = offset_list[i].as_u64().unwrap();
+        let offset = offset_list[i];
         let size = env
             .lookup_chunk(&digest)
             .ok_or_else(|| format_err!("no such chunk {}", digest_str))?;
-- 
2.47.3


* [PATCH proxmox-backup 10/10] fix #7254: datastore: refuse new backups when capacity is almost full
  2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
                   ` (11 preceding siblings ...)
  2026-04-30 15:05 ` [PATCH proxmox-backup 09/10] api2: backup: check space for fixed and dynamic index files Robert Obkircher
@ 2026-04-30 15:05 ` Robert Obkircher
  12 siblings, 0 replies; 14+ messages in thread
From: Robert Obkircher @ 2026-04-30 15:05 UTC (permalink / raw)
  To: pbs-devel

Add a datastore config option that can be used to reserve free space.

HumanByte is somewhat inaccurate (e.g. 1025 MiB rounds to 1024 MiB)
but it should be good enough for this case.

Setting this option is not backwards compatible, as older versions of
the parser will not recognize it. For example, running something like
`proxmox-backup-manager datastore list` will fail with older binaries.

Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
 pbs-datastore/src/chunk_store.rs | 34 +++++++++++++++++++++++++-------
 pbs-datastore/src/datastore.rs   |  9 +++++++++
 src/api2/config/datastore.rs     |  2 ++
 www/Utils.js                     |  6 ++++++
 www/datastore/OptionView.js      | 10 ++++++++++
 5 files changed, 54 insertions(+), 7 deletions(-)

diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 6a9dfbcef..515f007d3 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -10,6 +10,7 @@ use tracing::{info, warn};
 
 use pbs_api_types::{DatastoreFSyncLevel, GarbageCollectionStatus};
 use pbs_config::BackupLockGuard;
+use proxmox_human_byte::HumanByte;
 use proxmox_io::ReadExt;
 use proxmox_s3_client::S3Client;
 use proxmox_sys::fs::{create_dir, create_path, file_type_from_file_stat, CreateOptions};
@@ -105,6 +106,7 @@ impl ChunkStore {
         uid: nix::unistd::Uid,
         gid: nix::unistd::Gid,
         sync_level: DatastoreFSyncLevel,
+        reserved_space: Option<HumanByte>,
     ) -> Result<Self, Error>
     where
         P: Into<PathBuf>,
@@ -159,7 +161,7 @@ impl ChunkStore {
             }
         }
 
-        Self::open(name, base, sync_level)
+        Self::open(name, base, sync_level, reserved_space)
     }
 
     fn lockfile_path<P: Into<PathBuf>>(base: P) -> PathBuf {
@@ -193,6 +195,7 @@ impl ChunkStore {
         name: &str,
         base: P,
         sync_level: DatastoreFSyncLevel,
+        reserved_space: Option<HumanByte>,
     ) -> Result<Self, Error> {
         let base: PathBuf = base.into();
 
@@ -209,10 +212,14 @@ impl ChunkStore {
             locker: Some(locker),
             mutex: Mutex::new(()),
             sync_level,
-            fs_limit: FileSystemLimit::new(None),
+            fs_limit: FileSystemLimit::new(reserved_space.map(|s| s.as_u64())),
         })
     }
 
+    pub(crate) fn set_reserved_space(&self, bytes: Option<HumanByte>) {
+        self.fs_limit.set_reserved_space(bytes.map(|s| s.as_u64()));
+    }
+
     fn touch_chunk_no_lock(&self, digest: &[u8; 32]) -> Result<(), Error> {
         // unwrap: only `None` in unit tests
         assert!(self.locker.is_some());
@@ -998,14 +1005,21 @@ fn test_chunk_store1() {
     let temp_dir = TempDir::new().unwrap();
     let path = temp_dir.path();
 
-    let chunk_store = ChunkStore::open("test", path, DatastoreFSyncLevel::None);
+    let chunk_store = ChunkStore::open("test", path, DatastoreFSyncLevel::None, None);
     assert!(chunk_store.is_err());
 
     let user = nix::unistd::User::from_uid(nix::unistd::Uid::current())
         .unwrap()
         .unwrap();
-    let chunk_store =
-        ChunkStore::create("test", path, user.uid, user.gid, DatastoreFSyncLevel::None).unwrap();
+    let chunk_store = ChunkStore::create(
+        "test",
+        path,
+        user.uid,
+        user.gid,
+        DatastoreFSyncLevel::None,
+        Some(HumanByte::from(1u64 << 20)),
+    )
+    .unwrap();
 
     let (chunk, digest) = crate::data_blob::DataChunkBuilder::new(&[0u8, 1u8])
         .build()
@@ -1021,8 +1035,14 @@ fn test_chunk_store1() {
     let (exists, _) = chunk_store.insert_chunk(&chunk, &digest).unwrap();
     assert!(exists);
 
-    let chunk_store =
-        ChunkStore::create("test", path, user.uid, user.gid, DatastoreFSyncLevel::None);
+    let chunk_store = ChunkStore::create(
+        "test",
+        path,
+        user.uid,
+        user.gid,
+        DatastoreFSyncLevel::None,
+        None,
+    );
     assert!(chunk_store.is_err());
 
     temp_dir.close().unwrap();
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index def88f30a..46163f2bd 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -597,6 +597,13 @@ impl DataStore {
                     operation: Some(lookup.operation),
                 }));
             }
+            let tuning: DatastoreTuning = serde_json::from_value(
+                DatastoreTuning::API_SCHEMA
+                    .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
+            )?;
+            datastore
+                .chunk_store
+                .set_reserved_space(tuning.reserved_space);
             Arc::clone(&datastore.chunk_store)
         } else {
             let tuning: DatastoreTuning = serde_json::from_value(
@@ -607,6 +614,7 @@ impl DataStore {
                 lookup.name,
                 config.absolute_path(),
                 tuning.sync_level.unwrap_or_default(),
+                tuning.reserved_space,
             )?)
         };
 
@@ -699,6 +707,7 @@ impl DataStore {
             &name,
             config.absolute_path(),
             tuning.sync_level.unwrap_or_default(),
+            tuning.reserved_space,
         )?;
         let inner = Arc::new(Self::with_store_and_config(
             Arc::new(chunk_store),
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index 16e85a636..50019e8bd 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -170,6 +170,7 @@ pub(crate) fn do_create_datastore(
                 &datastore.name,
                 &path,
                 tuning.sync_level.unwrap_or_default(),
+                tuning.reserved_space,
             )
         })?
     } else {
@@ -207,6 +208,7 @@ pub(crate) fn do_create_datastore(
             backup_user.uid,
             backup_user.gid,
             tuning.sync_level.unwrap_or_default(),
+            tuning.reserved_space,
         )?
     };
 
diff --git a/www/Utils.js b/www/Utils.js
index a9239b005..ebb681bfa 100644
--- a/www/Utils.js
+++ b/www/Utils.js
@@ -909,6 +909,12 @@ Ext.define('PBS.Utils', {
         sync = PBS.Utils.tuningOptions['sync-level'][sync ?? '__default__'];
         options.push(`${gettext('Sync Level')}: ${sync}`);
 
+        let reserved_space = tuning['reserved-space'];
+        delete tuning['reserved-space'];
+        options.push(
+            `${gettext('Reserved Space')}: ${reserved_space ?? gettext('None')}`,
+        );
+
         let gc_atime_safety_check = tuning['gc-atime-safety-check'];
         delete tuning['gc-atime-safety-check'];
         options.push(
diff --git a/www/datastore/OptionView.js b/www/datastore/OptionView.js
index bac9eab0c..dbf12b99b 100644
--- a/www/datastore/OptionView.js
+++ b/www/datastore/OptionView.js
@@ -309,6 +309,16 @@ Ext.define('PBS.Datastore.Options', {
                             deleteEmpty: true,
                             value: '__default__',
                         },
+                        {
+                            xtype: 'pmxSizeField',
+                            name: 'reserved-space',
+                            fieldLabel: gettext('Reserved Space'),
+                            labelWidth: 200,
+                            unit: 'MiB',
+                            submitAutoScaledSizeUnit: true,
+                            allowZero: true,
+                            emptyText: gettext('None'),
+                        },
                         {
                             xtype: 'proxmoxcheckbox',
                             name: 'gc-atime-safety-check',
-- 
2.47.3


end of thread, other threads:[~2026-04-30 15:07 UTC | newest]

Thread overview: 14+ messages
2026-04-30 15:05 [RFC proxmox{,-backup} 00/13] gc maintenance mode and full datastore protection Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox 1/3] pbs-api-types: add datastore operation variant for reclaiming storage Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox 2/3] pbs-abi-types: add GarbageCollection maintenance mode Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox 3/3] pbs-api-types: add reserved space to datastore tuning options Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 01/10] task tracking: count Reclaim datastore operations as writes Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 02/10] datastore: open datastores with Reclaim instead of Write operation Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 03/10] fix #5797: www: display new GarbageCollection maintenance mode Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 04/10] www: access active operation fields by name instead of index Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 05/10] www: don't claim that all active writers are gc mode conflicts Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 06/10] chunk_store: add method to limit file system usage Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 07/10] chunk_store: check file system space before inserting new chunks Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 08/10] datastore: check file system space for blobs and group notes Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 09/10] api2: backup: check space for fixed and dynamic index files Robert Obkircher
2026-04-30 15:05 ` [PATCH proxmox-backup 10/10] fix #7254: datastore: refuse new backups when capacity is almost full Robert Obkircher
