From: Christian Ebner <c.ebner@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup v2 2/3] chunk store: fix race window between chunk stat and gc cleanup
Date: Thu,  6 Nov 2025 18:13:57 +0100	[thread overview]
Message-ID: <20251106171358.865503-3-c.ebner@proxmox.com> (raw)
In-Reply-To: <20251106171358.865503-1-c.ebner@proxmox.com>

Sweeping of unused chunks during garbage collection checks their
atime to distinguish between chunks that are still in use and chunks
that are no longer referenced. While garbage collection does lock the
chunk store by guarding its mutex before reading file stats and
deleting unused chunks, the conditional touch did not do so before
updating a chunk's atime (thereby also checking its presence).

Therefore there is a race window between the chunk's metadata being
read and the chunk being removed, during which the chunk may be
touched.

The race is however rare: for it to happen, the chunk must be older
than the cutoff time and not be referenced by any index file, as
otherwise its atime would already have been updated during phase 1.

Fix by guarding the chunk store mutex before touching a chunk.

To achieve this, rename and split off the internal touch-chunk
helpers to reflect that they do not acquire the chunk store lock,
while the one exposed outside the chunk store module does.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 1:
- make sure internal helpers already holding the mutex guard do not
  try to lock it again

 pbs-datastore/src/chunk_store.rs | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)
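For reference, the locked/unlocked helper split applied by this patch can be sketched as follows. This is a minimal, hypothetical illustration with a simplified `ChunkStore`; the method names mirror the patch, but the bodies are stand-ins for the real code, which stats and touches the chunk file on disk while holding the mutex:

```rust
use std::sync::Mutex;

// Simplified stand-in for pbs-datastore's ChunkStore; only the
// mutex relevant to the locking pattern is kept.
struct ChunkStore {
    mutex: Mutex<()>,
}

impl ChunkStore {
    // Public entry point: acquires the chunk store mutex, then
    // delegates to the lock-free helper.
    pub fn cond_touch_chunk(&self, digest: &[u8; 32], assert_exists: bool) -> Result<bool, String> {
        let _lock = self.mutex.lock().unwrap();
        self.cond_touch_chunk_no_lock(digest, assert_exists)
    }

    // Internal helper: the caller must already hold `self.mutex`.
    // Code paths that are already inside a locked section call this
    // directly and thus avoid deadlocking on a second lock attempt.
    fn cond_touch_chunk_no_lock(&self, _digest: &[u8; 32], assert_exists: bool) -> Result<bool, String> {
        // placeholder for the on-disk atime update / existence check
        Ok(assert_exists)
    }
}
```

Callers that already hold the mutex (such as the insert path below) use the `_no_lock` variant, while external callers go through the locking wrapper, closing the race against garbage collection's stat-and-remove sequence.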

diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 1262377d5..b88a0a096 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -204,18 +204,31 @@ impl ChunkStore {
         })
     }
 
-    fn touch_chunk(&self, digest: &[u8; 32]) -> Result<(), Error> {
+    fn touch_chunk_no_lock(&self, digest: &[u8; 32]) -> Result<(), Error> {
         // unwrap: only `None` in unit tests
         assert!(self.locker.is_some());
 
-        self.cond_touch_chunk(digest, true)?;
+        self.cond_touch_chunk_no_lock(digest, true)?;
         Ok(())
     }
 
+    /// Update the chunk files atime if it exists.
+    ///
+    /// If the chunk file does not exist, return with error if assert_exists is true, with
+    /// Ok(false) otherwise.
     pub(super) fn cond_touch_chunk(
         &self,
         digest: &[u8; 32],
         assert_exists: bool,
+    ) -> Result<bool, Error> {
+        let _lock = self.mutex.lock();
+        self.cond_touch_chunk_no_lock(digest, assert_exists)
+    }
+
+    fn cond_touch_chunk_no_lock(
+        &self,
+        digest: &[u8; 32],
+        assert_exists: bool,
     ) -> Result<bool, Error> {
         // unwrap: only `None` in unit tests
         assert!(self.locker.is_some());
@@ -587,7 +600,7 @@ impl ChunkStore {
             }
             let old_size = metadata.len();
             if encoded_size == old_size {
-                self.touch_chunk(digest)?;
+                self.touch_chunk_no_lock(digest)?;
                 return Ok((true, old_size));
             } else if old_size == 0 {
                 log::warn!("found empty chunk '{digest_str}' in store {name}, overwriting");
@@ -612,11 +625,11 @@ impl ChunkStore {
                 // compressed, the size mismatch could be caused by different zstd versions
                 // so let's keep the one that was uploaded first, bit-rot is hopefully detected by
                 // verification at some point..
-                self.touch_chunk(digest)?;
+                self.touch_chunk_no_lock(digest)?;
                 return Ok((true, old_size));
             } else if old_size < encoded_size {
                 log::debug!("Got another copy of chunk with digest '{digest_str}', existing chunk is smaller, discarding uploaded one.");
-                self.touch_chunk(digest)?;
+                self.touch_chunk_no_lock(digest)?;
                 return Ok((true, old_size));
             } else {
                 log::debug!("Got another copy of chunk with digest '{digest_str}', existing chunk is bigger, replacing with uploaded one.");
-- 
2.47.3



Thread overview: 4+ messages
2025-11-06 17:13 [pbs-devel] [PATCH proxmox-backup v2 0/3] fix GC atime update race window Christian Ebner
2025-11-06 17:13 ` [pbs-devel] [PATCH proxmox-backup v2 1/3] chunk store: limit scope for atime update helper methods Christian Ebner
2025-11-06 17:13 ` Christian Ebner [this message]
2025-11-06 17:13 ` [pbs-devel] [PATCH proxmox-backup v2 3/3] datastore: insert chunk marker and touch bad chunks in locked context Christian Ebner
