* [pbs-devel] [PATCH proxmox-backup v3 1/6] GC: Move S3 delete list state and logic to a dedicated struct
2026-01-14 12:31 [pbs-devel] [PATCH proxmox-backup v3 0/6] followups for garbage collection Christian Ebner
@ 2026-01-14 12:31 ` Christian Ebner
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 2/6] chunk store: rename and limit scope for chunk store iterator Christian Ebner
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Christian Ebner @ 2026-01-14 12:31 UTC (permalink / raw)
To: pbs-devel
To better keep track of the state and declutter the call sites, bundle the
S3 delete list and its logic into a dedicated struct. Since the check for
empty lists is now performed as part of the deletion-related methods, the
call sites can drop it.
Further, avoid proxmox_time::epoch_i64() and use SystemTime and Duration
directly, as the helper is only a convenience wrapper around them
anyway [0]; this also avoids the conversion to raw unix epoch timestamps.
If the system time jumps to a point older than the time the first entry
was added to the delete list, delete all entries, as the chunk marker
files have already been deleted anyway and the chunks should be removed.
If, however, the age based on the new system time still lies within the
acceptable threshold range, the list is kept.
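The backwards-clock handling described above can be sketched standalone. This is an illustrative sketch, not the patch's code: the constant mirrors S3_DELETE_DEFER_LIMIT_SECONDS and the helper name `age_threshold_reached` is assumed here; the key point is that `SystemTime::elapsed()` returns `Err` when the clock jumped to before the recorded instant, and falling back to the threshold via `unwrap_or` treats the list as expired.

```rust
use std::time::{Duration, SystemTime};

// Hypothetical threshold, mirroring S3_DELETE_DEFER_LIMIT_SECONDS.
const AGE_THRESHOLD: Duration = Duration::from_secs(60 * 5);

// elapsed() errors if the system clock jumped backwards past the
// first insertion; treating that as "threshold reached" flushes the
// list, matching the commit's rationale.
fn age_threshold_reached(first_entry_added: SystemTime) -> bool {
    first_entry_added.elapsed().unwrap_or(AGE_THRESHOLD) >= AGE_THRESHOLD
}

fn main() {
    // entry added just now: threshold not reached
    assert!(!age_threshold_reached(SystemTime::now()));
    // entry added 10 minutes ago: threshold reached
    assert!(age_threshold_reached(
        SystemTime::now() - Duration::from_secs(600)
    ));
    // clock jumped to before the insertion: elapsed() is Err, list flushed
    assert!(age_threshold_reached(
        SystemTime::now() + Duration::from_secs(600)
    ));
    println!("ok");
}
```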
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- align threshold to greater-or-equal for the list to be deleted, for
both capacity and age.
- delete list if age was reached because of system time jumps to the
past.
- opted for keeping SystemTime and Duration over raw unix epoch for
stricter type checking.
pbs-datastore/src/datastore.rs | 137 +++++++++++++++++++--------------
1 file changed, 80 insertions(+), 57 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 60400f4b4..c32f232d9 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -24,7 +24,7 @@ use proxmox_sys::error::SysError;
use proxmox_sys::fs::{file_read_optional_string, replace_file, CreateOptions};
use proxmox_sys::linux::procfs::MountInfo;
use proxmox_sys::process_locker::{ProcessLockExclusiveGuard, ProcessLockSharedGuard};
-use proxmox_time::{epoch_i64, TimeSpan};
+use proxmox_time::TimeSpan;
use proxmox_worker_task::WorkerTaskContext;
use pbs_api_types::{
@@ -64,7 +64,7 @@ const CHUNK_LOCK_TIMEOUT: Duration = Duration::from_secs(3 * 60 * 60);
// s3 deletion batch size to avoid 1024 open files soft limit
const S3_DELETE_BATCH_LIMIT: usize = 100;
// max defer time for s3 batch deletions
-const S3_DELETE_DEFER_LIMIT_SECONDS: i64 = 60 * 5;
+const S3_DELETE_DEFER_LIMIT_SECONDS: Duration = Duration::from_secs(60 * 5);
/// checks if auth_id is owner, or, if owner is a token, if
/// auth_id is the user of the token
@@ -1689,40 +1689,8 @@ impl DataStore {
proxmox_async::runtime::block_on(s3_client.list_objects_v2(&prefix, None))
.context("failed to list chunk in s3 object store")?;
- let mut delete_list = Vec::with_capacity(S3_DELETE_BATCH_LIMIT);
- let mut delete_list_age = epoch_i64();
-
- let s3_delete_batch = |delete_list: &mut Vec<(S3ObjectKey, BackupLockGuard)>,
- s3_client: &Arc<S3Client>|
- -> Result<(), Error> {
- let delete_objects_result = proxmox_async::runtime::block_on(
- s3_client.delete_objects(
- &delete_list
- .iter()
- .map(|(key, _)| key.clone())
- .collect::<Vec<S3ObjectKey>>(),
- ),
- )?;
- if let Some(_err) = delete_objects_result.error {
- bail!("failed to delete some objects");
- }
- // drops all chunk flock guards
- delete_list.clear();
- Ok(())
- };
-
- let add_to_delete_list =
- |delete_list: &mut Vec<(S3ObjectKey, BackupLockGuard)>,
- delete_list_age: &mut i64,
- key: S3ObjectKey,
- _chunk_guard: BackupLockGuard| {
- // set age based on first insertion
- if delete_list.is_empty() {
- *delete_list_age = epoch_i64();
- }
- delete_list.push((key, _chunk_guard));
- };
-
+ let mut delete_list =
+ S3DeleteList::with_thresholds(S3_DELETE_BATCH_LIMIT, S3_DELETE_DEFER_LIMIT_SECONDS);
loop {
for content in list_bucket_result.contents {
worker.check_abort()?;
@@ -1779,12 +1747,7 @@ impl DataStore {
std::fs::remove_file(chunk_path)?;
}
}
- add_to_delete_list(
- &mut delete_list,
- &mut delete_list_age,
- content.key,
- _chunk_guard,
- );
+ delete_list.push(content.key, _chunk_guard);
Ok(())
},
)?;
@@ -1792,12 +1755,7 @@ impl DataStore {
} else {
gc_status.removed_chunks += 1;
gc_status.removed_bytes += content.size;
- add_to_delete_list(
- &mut delete_list,
- &mut delete_list_age,
- content.key,
- _chunk_guard,
- );
+ delete_list.push(content.key, _chunk_guard);
}
chunk_count += 1;
@@ -1806,12 +1764,7 @@ impl DataStore {
drop(_guard);
// limit pending deletes to avoid holding too many chunk flocks
- if delete_list.len() >= S3_DELETE_BATCH_LIMIT
- || (!delete_list.is_empty()
- && epoch_i64() - delete_list_age > S3_DELETE_DEFER_LIMIT_SECONDS)
- {
- s3_delete_batch(&mut delete_list, s3_client)?;
- }
+ delete_list.conditional_delete_and_drop_locks(s3_client)?;
}
// Process next batch of chunks if there is more
if list_bucket_result.is_truncated {
@@ -1827,9 +1780,7 @@ impl DataStore {
}
// delete the last batch of objects, if there are any remaining
- if !delete_list.is_empty() {
- s3_delete_batch(&mut delete_list, s3_client)?;
- }
+ delete_list.delete_and_drop_locks(s3_client)?;
info!("processed {chunk_count} total chunks");
@@ -2768,3 +2719,75 @@ impl DataStore {
result
}
}
+
+/// Track S3 object keys to be deleted by garbage collection while holding their file lock.
+struct S3DeleteList {
+ list: Vec<(S3ObjectKey, BackupLockGuard)>,
+ first_entry_added: SystemTime,
+ age_threshold: Duration,
+ capacity_threshold: usize,
+}
+
+impl S3DeleteList {
+ /// Create a new list instance with given capacity and age thresholds.
+ fn with_thresholds(capacity_threshold: usize, age_threshold: Duration) -> Self {
+ Self {
+ first_entry_added: SystemTime::now(), // init only, updated once added
+ list: Vec::with_capacity(capacity_threshold),
+ age_threshold,
+ capacity_threshold,
+ }
+ }
+
+ /// Pushes the current key and backup lock guard to the list, updating the delete list age if
+ /// the list was empty before the insert.
+ fn push(&mut self, key: S3ObjectKey, guard: BackupLockGuard) {
+ // set age based on first insertion
+ if self.list.is_empty() {
+ self.first_entry_added = SystemTime::now();
+ }
+ self.list.push((key, guard));
+ }
+
+ /// Delete the objects in the list via the provided S3 client instance.
+ /// Clears the list contents and frees the per-chunk file locks.
+ fn delete_and_drop_locks(&mut self, s3_client: &Arc<S3Client>) -> Result<(), Error> {
+ if self.list.is_empty() {
+ return Ok(());
+ }
+ let delete_objects_result = proxmox_async::runtime::block_on(
+ s3_client.delete_objects(
+ &self
+ .list
+ .iter()
+ .map(|(key, _)| key.clone())
+ .collect::<Vec<S3ObjectKey>>(),
+ ),
+ )?;
+ if delete_objects_result.error.is_some() {
+ bail!("failed to delete some objects");
+ }
+ // drops all chunk flock guards
+ self.list.clear();
+ Ok(())
+ }
+
+ /// Delete the object stored in the list if either the list equals or exceed the capacity
+ /// threshold or the delete list age threshold.
+ fn conditional_delete_and_drop_locks(
+ &mut self,
+ s3_client: &Arc<S3Client>,
+ ) -> Result<(), Error> {
+ if self.list.len() >= self.capacity_threshold
+ || (!self.list.is_empty()
+ && self
+ .first_entry_added
+ .elapsed()
+ .unwrap_or(self.age_threshold) // time jump to past, chunk markers already gone
+ >= self.age_threshold)
+ {
+ self.delete_and_drop_locks(s3_client)?;
+ }
+ Ok(())
+ }
+}
--
2.47.3
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
* [pbs-devel] [PATCH proxmox-backup v3 2/6] chunk store: rename and limit scope for chunk store iterator
2026-01-14 12:31 [pbs-devel] [PATCH proxmox-backup v3 0/6] followups for garbage collection Christian Ebner
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 1/6] GC: Move S3 delete list state and logic to a dedicated struct Christian Ebner
@ 2026-01-14 12:31 ` Christian Ebner
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 3/6] chunk store: return chunk extension and restrict chunk filename check Christian Ebner
` (4 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Christian Ebner @ 2026-01-14 12:31 UTC (permalink / raw)
To: pbs-devel
Since the introduction of marker files for datastores backed by s3
object stores, the iterator can also include the <digest>.using chunk
marker file entries, which have 0 size. Also, the returned entries for
regular datastores are not guaranteed to be of type file, as that is
only checked by stat-ing the entry afterwards, so the iterator is not
guaranteed to yield only chunks.
Therefore, rename the method to get_chunk_store_iterator(), which is
more generic and does not imply the returned entries are chunks.
While at it, also limit the method's scope, as it is not used outside
the module.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes
pbs-datastore/src/chunk_store.rs | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index cb4a29af9..084f62b80 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -276,7 +276,7 @@ impl ChunkStore {
Ok(true)
}
- pub fn get_chunk_iterator(
+ fn get_chunk_store_iterator(
&self,
) -> Result<
impl std::iter::FusedIterator<
@@ -397,7 +397,7 @@ impl ChunkStore {
let mut last_percentage = 0;
let mut chunk_count = 0;
- for (entry, percentage, bad) in self.get_chunk_iterator()? {
+ for (entry, percentage, bad) in self.get_chunk_store_iterator()? {
if last_percentage != percentage {
last_percentage = percentage;
info!("processed {percentage}% ({chunk_count} chunks)");
--
2.47.3
* [pbs-devel] [PATCH proxmox-backup v3 3/6] chunk store: return chunk extension and restrict chunk filename check
2026-01-14 12:31 [pbs-devel] [PATCH proxmox-backup v3 0/6] followups for garbage collection Christian Ebner
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 1/6] GC: Move S3 delete list state and logic to a dedicated struct Christian Ebner
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 2/6] chunk store: rename and limit scope for chunk store iterator Christian Ebner
@ 2026-01-14 12:31 ` Christian Ebner
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 4/6] datastore: move bad chunk touching logic to chunk store and lock it Christian Ebner
` (3 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Christian Ebner @ 2026-01-14 12:31 UTC (permalink / raw)
To: pbs-devel
Clearly distinguish the cases for regular, bad, and chunk marker files
by returning the respective extension as a ChunkExt enum variant from
the chunk iterator.
Restrict the filename check so it no longer only verifies the 64 hex
digits and the overall size and ending, but instead checks that each
individual byte in the filename matches, including the bad chunk number
and the marker file extension.
Directory entries which no longer match the stricter criteria are now
skipped.
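The byte-level classification can be sketched as a standalone function. Names here (`classify`, `ChunkExt`, the `"using"` extension literal) are assumptions mirroring the patch for illustration, not quotes of its API:

```rust
#[derive(Debug, PartialEq)]
enum ChunkExt {
    None,       // regular chunk
    Bad,        // <digest>.<i>.bad
    UsedMarker, // <digest>.using
}

// assumed value of USING_MARKER_FILENAME_EXT
const USING_EXT: &str = "using";

// Classify a directory entry name; None means "skip this entry".
fn classify(name: &[u8]) -> Option<ChunkExt> {
    // the first 64 bytes must be the hex digest
    if name.len() < 64 || !name.iter().take(64).all(u8::is_ascii_hexdigit) {
        return None;
    }
    if name.len() == 64 {
        return Some(ChunkExt::None);
    }
    // "<digest>.i.bad" with a single decimal digit i
    if name.len() == 64 + ".i.bad".len()
        && name[64] == b'.'
        && name[65].is_ascii_digit()
        && name[66] == b'.'
        && name.ends_with(b"bad")
    {
        return Some(ChunkExt::Bad);
    }
    // "<digest>.using" marker file
    if name.len() == 64 + 1 + USING_EXT.len()
        && name[64] == b'.'
        && name.ends_with(USING_EXT.as_bytes())
    {
        return Some(ChunkExt::UsedMarker);
    }
    None
}

fn main() {
    let digest = "a".repeat(64);
    assert_eq!(classify(digest.as_bytes()), Some(ChunkExt::None));
    assert_eq!(
        classify(format!("{digest}.3.bad").as_bytes()),
        Some(ChunkExt::Bad)
    );
    assert_eq!(
        classify(format!("{digest}.using").as_bytes()),
        Some(ChunkExt::UsedMarker)
    );
    // the old length-and-suffix check would have accepted sloppier names
    assert_eq!(classify(format!("{digest}.bad").as_bytes()), None);
    println!("ok");
}
```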
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- refactor to not duplicate the is_ascii_hexdigit check for better
readability
- stricter extension checks to truly match all expected bytes in the
filename
- inline trivial extension type checks instead of introducing helpers
pbs-datastore/src/chunk_store.rs | 58 ++++++++++++++++++++++++++------
1 file changed, 47 insertions(+), 11 deletions(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 084f62b80..5148d6c46 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -280,7 +280,11 @@ impl ChunkStore {
&self,
) -> Result<
impl std::iter::FusedIterator<
- Item = (Result<proxmox_sys::fs::ReadDirEntry, Error>, usize, bool),
+ Item = (
+ Result<proxmox_sys::fs::ReadDirEntry, Error>,
+ usize,
+ ChunkExt,
+ ),
>,
Error,
> {
@@ -315,21 +319,47 @@ impl ChunkStore {
Some(Ok(entry)) => {
// skip files if they're not a hash
let bytes = entry.file_name().to_bytes();
- if bytes.len() != 64 && bytes.len() != 64 + ".0.bad".len() {
+
+ if bytes.len() < 64 {
continue;
}
+
if !bytes.iter().take(64).all(u8::is_ascii_hexdigit) {
continue;
}
- let bad = bytes.ends_with(b".bad");
- return Some((Ok(entry), percentage, bad));
+ // regular chunk
+ if bytes.len() == 64 {
+ return Some((Ok(entry), percentage, ChunkExt::None));
+ }
+
+ // i-th bad chunk
+ if bytes.len() == 64 + ".i.bad".len()
+ && bytes[64] == b'.'
+ && bytes[65] >= b'0'
+ && bytes[65] <= b'9'
+ && bytes[66] == b'.'
+ && bytes.ends_with(b"bad")
+ {
+ return Some((Ok(entry), percentage, ChunkExt::Bad));
+ }
+
+ // chunk marker file
+ let marker_ext_bytes = USING_MARKER_FILENAME_EXT.as_bytes();
+ if bytes.len() == 64 + 1 + marker_ext_bytes.len()
+ && bytes[64] == b'.'
+ && bytes.ends_with(marker_ext_bytes)
+ {
+ return Some((Ok(entry), percentage, ChunkExt::UsedMarker));
+ }
+
+ continue;
}
Some(Err(err)) => {
// stop after first error
done = true;
// and pass the error through:
- return Some((Err(err), percentage, false));
+ return Some((Err(err), percentage, ChunkExt::None));
}
None => (), // open next directory
}
@@ -362,7 +392,7 @@ impl ChunkStore {
return Some((
Err(format_err!("unable to read subdir '{subdir}' - {err}")),
percentage,
- false,
+ ChunkExt::None,
));
}
}
@@ -397,7 +427,8 @@ impl ChunkStore {
let mut last_percentage = 0;
let mut chunk_count = 0;
- for (entry, percentage, bad) in self.get_chunk_store_iterator()? {
+ for (entry, percentage, chunk_ext) in self.get_chunk_store_iterator()? {
+ let bad = chunk_ext == ChunkExt::Bad;
if last_percentage != percentage {
last_percentage = percentage;
info!("processed {percentage}% ({chunk_count} chunks)");
@@ -428,10 +459,7 @@ impl ChunkStore {
drop(lock);
continue;
}
- if filename
- .to_bytes()
- .ends_with(USING_MARKER_FILENAME_EXT.as_bytes())
- {
+ if chunk_ext == ChunkExt::UsedMarker {
unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir).map_err(|err| {
format_err!("unlinking chunk using marker {filename:?} failed - {err}")
})?;
@@ -903,6 +931,14 @@ impl ChunkStore {
}
}
+#[derive(PartialEq)]
+/// Chunk iterator directory entry filename extension
+enum ChunkExt {
+ None,
+ Bad,
+ UsedMarker,
+}
+
#[test]
fn test_chunk_store1() {
let mut path = std::fs::canonicalize(".").unwrap(); // we need absolute path
--
2.47.3
* [pbs-devel] [PATCH proxmox-backup v3 4/6] datastore: move bad chunk touching logic to chunk store and lock it
2026-01-14 12:31 [pbs-devel] [PATCH proxmox-backup v3 0/6] followups for garbage collection Christian Ebner
` (2 preceding siblings ...)
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 3/6] chunk store: return chunk extension and restrict chunk filename check Christian Ebner
@ 2026-01-14 12:31 ` Christian Ebner
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 5/6] chunk store: move next bad chunk path generator into dedicated helper Christian Ebner
` (2 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Christian Ebner @ 2026-01-14 12:31 UTC (permalink / raw)
To: pbs-devel
Provide a dedicated method and encapsulate the bad chunk filename
generation and access time updates. This is an implementation detail
of the chunk store; garbage collection should not be concerned with it.
To ensure exclusive access to the chunk store, acquire the chunk store
mutex guard before performing access time updates. While there is
currently no code path which would lead to races, as the bad chunk
renaming (which generates the bad chunks) is concerned with file
existence only and garbage collection cannot be executed concurrently,
locking makes the access time updates future proof.
This also prepares for making the bad chunk filename generation a
dedicated helper, all contained within the chunk store module.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- lock chunk store before performing atime updates
pbs-datastore/src/chunk_store.rs | 21 ++++++++++++++++++++-
pbs-datastore/src/datastore.rs | 11 +----------
2 files changed, 21 insertions(+), 11 deletions(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 5148d6c46..8bc74faf7 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -242,7 +242,7 @@ impl ChunkStore {
self.cond_touch_path(&chunk_path, assert_exists)
}
- pub(super) fn cond_touch_path(&self, path: &Path, assert_exists: bool) -> Result<bool, Error> {
+ fn cond_touch_path(&self, path: &Path, assert_exists: bool) -> Result<bool, Error> {
// unwrap: only `None` in unit tests
assert!(self.locker.is_some());
@@ -276,6 +276,25 @@ impl ChunkStore {
Ok(true)
}
+ /// Update access timestamp on all bad chunks for given digest
+ ///
+ /// Gets exclusive access by acquiring the chunk store mutex guard.
+ pub(super) fn cond_touch_bad_chunks(&self, digest: &[u8; 32]) -> Result<bool, Error> {
+ let _lock = self.mutex.lock();
+
+ let (chunk_path, _digest_str) = self.chunk_path(digest);
+ let mut is_bad = false;
+ for i in 0..=9 {
+ let bad_ext = format!("{i}.bad");
+ let mut bad_path = chunk_path.clone();
+ bad_path.set_extension(bad_ext);
+ if self.cond_touch_path(&bad_path, false)? {
+ is_bad = true;
+ }
+ }
+ Ok(is_bad)
+ }
+
fn get_chunk_store_iterator(
&self,
) -> Result<
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index c32f232d9..a84c6fe54 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -1355,19 +1355,10 @@ impl DataStore {
}
if !self.inner.chunk_store.cond_touch_chunk(digest, false)? {
- let (chunk_path, _digest_str) = self.chunk_path(digest);
// touch any corresponding .bad files to keep them around, meaning if a chunk is
// rewritten correctly they will be removed automatically, as well as if no index
// file requires the chunk anymore (won't get to this loop then)
- let mut is_bad = false;
- for i in 0..=9 {
- let bad_ext = format!("{i}.bad");
- let mut bad_path = chunk_path.clone();
- bad_path.set_extension(bad_ext);
- if self.inner.chunk_store.cond_touch_path(&bad_path, false)? {
- is_bad = true;
- }
- }
+ let is_bad = self.inner.chunk_store.cond_touch_bad_chunks(digest)?;
if let Some(ref _s3_client) = s3_client {
// Do not retry here, this is very unlikely to happen as chunk markers will
--
2.47.3
* [pbs-devel] [PATCH proxmox-backup v3 5/6] chunk store: move next bad chunk path generator into dedicated helper
2026-01-14 12:31 [pbs-devel] [PATCH proxmox-backup v3 0/6] followups for garbage collection Christian Ebner
` (3 preceding siblings ...)
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 4/6] datastore: move bad chunk touching logic to chunk store and lock it Christian Ebner
@ 2026-01-14 12:31 ` Christian Ebner
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 6/6] chunk store: move bad chunk filename generation " Christian Ebner
2026-01-14 13:52 ` [pbs-devel] applied-series: [PATCH proxmox-backup v3 0/6] followups for garbage collection Fabian Grünbichler
6 siblings, 0 replies; 8+ messages in thread
From: Christian Ebner @ 2026-01-14 12:31 UTC (permalink / raw)
To: pbs-devel
Better encapsulate the logic, as this concerns the chunk store, and
prepare for placing the bad chunk filename generation logic in a
single place.
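The helper's slot-search loop can be sketched independently of the chunk store; here the injectable `exists` closure is an assumption standing in for the real filesystem check, so the sketch runs without a datastore:

```rust
use std::path::PathBuf;

// Illustrative re-implementation of the helper's loop: find the first
// free "<digest>.<counter>.bad" slot, giving up at counter 9.
fn next_bad_chunk_path(
    dir: &str,
    digest_str: &str,
    exists: impl Fn(&PathBuf) -> bool,
) -> (PathBuf, usize) {
    let mut path = PathBuf::from(dir).join(digest_str);
    let mut counter = 0;
    loop {
        path.set_file_name(format!("{digest_str}.{counter}.bad"));
        if exists(&path) && counter < 9 {
            counter += 1;
        } else {
            break;
        }
    }
    (path, counter)
}

fn main() {
    let digest = "ab".repeat(32);
    // pretend ".0.bad" and ".1.bad" already exist
    let taken = [format!("{digest}.0.bad"), format!("{digest}.1.bad")];
    let (path, counter) = next_bad_chunk_path("/store", &digest, |p| {
        taken
            .iter()
            .any(|t| p.file_name().unwrap().to_str().unwrap() == t)
    });
    assert_eq!(counter, 2);
    assert_eq!(path, PathBuf::from("/store").join(format!("{digest}.2.bad")));
    println!("ok");
}
```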
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- rebase onto changes in the rest of the series
pbs-datastore/src/chunk_store.rs | 16 ++++++++++++++++
pbs-datastore/src/datastore.rs | 11 +----------
2 files changed, 17 insertions(+), 10 deletions(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 8bc74faf7..16fb5f611 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -948,6 +948,22 @@ impl ChunkStore {
})?;
Ok(guard)
}
+
+ /// Generate the next bad chunk file path for given digest. Returns the path as well as the bad
+ /// chunk counter.
+ pub(crate) fn next_bad_chunk_path(&self, digest: &[u8; 32]) -> (PathBuf, usize) {
+ let (mut chunk_path, digest_str) = self.chunk_path(digest);
+ let mut counter = 0;
+ loop {
+ chunk_path.set_file_name(format!("{digest_str}.{counter}.bad"));
+ if chunk_path.exists() && counter < 9 {
+ counter += 1;
+ } else {
+ break;
+ }
+ }
+ (chunk_path, counter)
+ }
}
#[derive(PartialEq)]
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index a84c6fe54..60b98ac9b 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -2652,16 +2652,7 @@ impl DataStore {
}
let _lock = self.inner.chunk_store.mutex().lock().unwrap();
- let mut counter = 0;
- let mut new_path = path.clone();
- loop {
- new_path.set_file_name(format!("{digest_str}.{counter}.bad"));
- if new_path.exists() && counter < 9 {
- counter += 1;
- } else {
- break;
- }
- }
+ let (new_path, counter) = self.inner.chunk_store.next_bad_chunk_path(digest);
let result = match std::fs::rename(&path, &new_path) {
Ok(_) => Ok(Some(new_path)),
--
2.47.3
* [pbs-devel] [PATCH proxmox-backup v3 6/6] chunk store: move bad chunk filename generation into dedicated helper
2026-01-14 12:31 [pbs-devel] [PATCH proxmox-backup v3 0/6] followups for garbage collection Christian Ebner
` (4 preceding siblings ...)
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 5/6] chunk store: move next bad chunk path generator into dedicated helper Christian Ebner
@ 2026-01-14 12:31 ` Christian Ebner
2026-01-14 13:52 ` [pbs-devel] applied-series: [PATCH proxmox-backup v3 0/6] followups for garbage collection Fabian Grünbichler
6 siblings, 0 replies; 8+ messages in thread
From: Christian Ebner @ 2026-01-14 12:31 UTC (permalink / raw)
To: pbs-devel
Deduplicate the bad chunk filename generation logic by moving it into
an associated function of `ChunkExt`. Instead of set_extension(),
garbage collection now uses set_file_name() as well, which boils down
to a pop and a push [0], compared to an extension replacement [1].
[0] https://doc.rust-lang.org/src/std/path.rs.html#1455
[1] https://doc.rust-lang.org/src/std/path.rs.html#1524
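The semantic difference between the two methods is easy to demonstrate with the standard library alone. This is a general illustration, not a bug in the previous code (which cloned the base path each iteration): `set_extension()` only swaps the part after the final dot, so calling it on an already-suffixed name accumulates suffixes, while `set_file_name()` replaces the whole final component:

```rust
use std::path::PathBuf;

fn main() {
    // set_extension() treats only the text after the last dot as the
    // extension, so repeated calls on a suffixed name stack up:
    let mut p = PathBuf::from("/store/abcd");
    p.set_extension("0.bad"); // "/store/abcd.0.bad"
    p.set_extension("1.bad"); // stem is "abcd.0" -> "/store/abcd.0.1.bad"
    assert_eq!(p, PathBuf::from("/store/abcd.0.1.bad"));

    // set_file_name() pops the final component and pushes a new one,
    // which is why the series switches to it for bad chunk names:
    let mut q = PathBuf::from("/store/abcd.0.bad");
    q.set_file_name("abcd.1.bad"); // "/store/abcd.1.bad"
    assert_eq!(q, PathBuf::from("/store/abcd.1.bad"));
    println!("ok");
}
```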
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- rebase onto changes of the rest of the series
pbs-datastore/src/chunk_store.rs | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 16fb5f611..bd1dc353b 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -282,13 +282,11 @@ impl ChunkStore {
pub(super) fn cond_touch_bad_chunks(&self, digest: &[u8; 32]) -> Result<bool, Error> {
let _lock = self.mutex.lock();
- let (chunk_path, _digest_str) = self.chunk_path(digest);
+ let (mut chunk_path, digest_str) = self.chunk_path(digest);
let mut is_bad = false;
for i in 0..=9 {
- let bad_ext = format!("{i}.bad");
- let mut bad_path = chunk_path.clone();
- bad_path.set_extension(bad_ext);
- if self.cond_touch_path(&bad_path, false)? {
+ chunk_path.set_file_name(ChunkExt::bad_chunk_filename(&digest_str, i));
+ if self.cond_touch_path(&chunk_path, false)? {
is_bad = true;
}
}
@@ -955,7 +953,7 @@ impl ChunkStore {
let (mut chunk_path, digest_str) = self.chunk_path(digest);
let mut counter = 0;
loop {
- chunk_path.set_file_name(format!("{digest_str}.{counter}.bad"));
+ chunk_path.set_file_name(ChunkExt::bad_chunk_filename(&digest_str, counter));
if chunk_path.exists() && counter < 9 {
counter += 1;
} else {
@@ -974,6 +972,12 @@ enum ChunkExt {
UsedMarker,
}
+impl ChunkExt {
+ fn bad_chunk_filename(digest_str: &str, counter: usize) -> String {
+ format!("{digest_str}.{counter}.bad")
+ }
+}
+
#[test]
fn test_chunk_store1() {
let mut path = std::fs::canonicalize(".").unwrap(); // we need absolute path
--
2.47.3
* [pbs-devel] applied-series: [PATCH proxmox-backup v3 0/6] followups for garbage collection
2026-01-14 12:31 [pbs-devel] [PATCH proxmox-backup v3 0/6] followups for garbage collection Christian Ebner
` (5 preceding siblings ...)
2026-01-14 12:31 ` [pbs-devel] [PATCH proxmox-backup v3 6/6] chunk store: move bad chunk filename generation " Christian Ebner
@ 2026-01-14 13:52 ` Fabian Grünbichler
6 siblings, 0 replies; 8+ messages in thread
From: Fabian Grünbichler @ 2026-01-14 13:52 UTC (permalink / raw)
To: pbs-devel, Christian Ebner
On Wed, 14 Jan 2026 13:31:33 +0100, Christian Ebner wrote:
> This patch series contains some followup to the recently introduced
> changes to phase 2 of garbage collection on s3 backed datastores [0]
> and cleanups for marker files during phase 3 [1], as well as some
> refactoring of the chunk filename extension parsing and bad chunk
> filename generation.
>
> The first patch introduces a dedicated struct for tracking objects
> to be deleted from the s3 object store during phase 2, bundling the
> length and age based deletion threshold logic to the list state.
>
> [...]
Applied with a tiny rebase of the first patch, thanks!
[1/6] GC: Move S3 delete list state and logic to a dedicated struct
commit: 6e320ab47a312dca5232855205c8b1705c03ea10
[2/6] chunk store: rename and limit scope for chunk store iterator
commit: 6105ac10fb6867ff7a65457c94218e8b34bec0db
[3/6] chunk store: return chunk extension and restrict chunk filename check
commit: 8411b729c277daa8513d5c84b82d44d31f20543b
[4/6] datastore: move bad chunk touching logic to chunk store and lock it
commit: 27b9e4003a1564fa0b06cceae616f53f336aec4f
[5/6] chunk store: move next bad chunk path generator into dedicated helper
commit: a4079aa8998c8199a69637347959ba9561114295
[6/6] chunk store: move bad chunk filename generation into dedicated helper
commit: 99c212cdc871024c2f3976c6a3b27298a322aaae
Best regards,
--
Fabian Grünbichler <f.gruenbichler@proxmox.com>