From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox Backup Server development discussion
	<pbs-devel@lists.proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup 2/7] gc: chunk store: rework atime check and gc status into common helper
Date: Mon, 06 Oct 2025 15:14:41 +0200
Message-ID: <1759750587.hi20p6x9ui.astroid@yuna.none>
In-Reply-To: <20251006104151.487202-3-c.ebner@proxmox.com>

On October 6, 2025 12:41 pm, Christian Ebner wrote:
> Move the shared code paths for the filesystem and s3 backends into a
> common helper to avoid code duplication, and adapt the callsites
> accordingly.
> 
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
>  pbs-datastore/src/chunk_store.rs | 69 ++++++++++++++++++++++----------
>  pbs-datastore/src/datastore.rs   | 29 +++++---------
>  2 files changed, 57 insertions(+), 41 deletions(-)
> 
> diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
> index 3c59612bb..0725ca3a7 100644
> --- a/pbs-datastore/src/chunk_store.rs
> +++ b/pbs-datastore/src/chunk_store.rs
> @@ -408,36 +408,27 @@ impl ChunkStore {
>  
>                  chunk_count += 1;
>  
> -                if stat.st_atime < min_atime {
> -                    //let age = now - stat.st_atime;
> -                    //println!("UNLINK {}  {:?}", age/(3600*24), filename);
> +                if Self::check_atime_and_update_gc_status(
> +                    stat.st_atime,
> +                    min_atime,
> +                    oldest_writer,
> +                    stat.st_size as u64,
> +                    bad,
> +                    status,
> +                ) {
>                      if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {

if this part were also handled by the helper (in a fashion that allows
using it for both the cache and the regular chunk store)

>                          if bad {
> +                            status.removed_bad -= 1;
>                              status.still_bad += 1;
> +                        } else {
> +                            status.removed_chunks += 1;
>                          }
> +                        status.removed_bytes -= stat.st_size as u64;

then this error handling would not leak outside of the helper, and we
could "simply" call the helper and bubble up any error it returns.
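
something like the following rough sketch could work (illustration
only, the closure-based removal and the exact names are made up here,
not taken from the patch):

    // rough sketch: removal is passed in as a closure, so the same
    // helper can serve the filesystem chunk store, the local cache and
    // the s3 backend, and the status counters are only updated once
    // the removal actually succeeded
    fn check_atime_and_update_gc_status(
        atime: i64,
        min_atime: i64,
        oldest_writer: i64,
        size: u64,
        bad: bool,
        gc_status: &mut GarbageCollectionStatus,
        remove: impl FnOnce() -> Result<(), anyhow::Error>,
    ) -> Result<(), anyhow::Error> {
        if atime < min_atime {
            // bubble up removal errors instead of undoing counter
            // updates at the callsite afterwards
            remove()?;
            if bad {
                gc_status.removed_bad += 1;
            } else {
                gc_status.removed_chunks += 1;
            }
            gc_status.removed_bytes += size;
        } else if atime < oldest_writer {
            if bad {
                gc_status.still_bad += 1;
            } else {
                gc_status.pending_chunks += 1;
            }
            gc_status.pending_bytes += size;
        } else {
            if !bad {
                gc_status.disk_chunks += 1;
            }
            gc_status.disk_bytes += size;
        }
        Ok(())
    }

the filesystem callsite would then pass the unlinkat call as the
closure, the s3 path the cache eviction and delete_list push, and both
could simply use `?` on the returned result.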

>                          bail!(
>                              "unlinking chunk {filename:?} failed on store '{}' - {err}",
>                              self.name,
>                          );
>                      }
> -                    if bad {
> -                        status.removed_bad += 1;
> -                    } else {
> -                        status.removed_chunks += 1;
> -                    }
> -                    status.removed_bytes += stat.st_size as u64;
> -                } else if stat.st_atime < oldest_writer {
> -                    if bad {
> -                        status.still_bad += 1;
> -                    } else {
> -                        status.pending_chunks += 1;
> -                    }
> -                    status.pending_bytes += stat.st_size as u64;
> -                } else {
> -                    if !bad {
> -                        status.disk_chunks += 1;
> -                    }
> -                    status.disk_bytes += stat.st_size as u64;
>                  }
>              }
>              drop(lock);
> @@ -446,6 +437,42 @@ impl ChunkStore {
>          Ok(())
>      }
>  
> +    /// Check within which range the provided chunk's atime falls and update the garbage collection
> +    /// status accordingly.
> +    ///
> +    /// Returns true if the chunk file should be removed.
> +    pub(super) fn check_atime_and_update_gc_status(
> +        atime: i64,
> +        min_atime: i64,
> +        oldest_writer: i64,
> +        size: u64,
> +        bad: bool,
> +        gc_status: &mut GarbageCollectionStatus,
> +    ) -> bool {
> +        if atime < min_atime {
> +            if bad {
> +                gc_status.removed_bad += 1;
> +            } else {
> +                gc_status.removed_chunks += 1;
> +            }
> +            gc_status.removed_bytes += size;
> +            return true;
> +        } else if atime < oldest_writer {
> +            if bad {
> +                gc_status.still_bad += 1;
> +            } else {
> +                gc_status.pending_chunks += 1;
> +            }
> +            gc_status.pending_bytes += size;
> +        } else {
> +            if !bad {
> +                gc_status.disk_chunks += 1;
> +            }
> +            gc_status.disk_bytes += size;
> +        }
> +        false
> +    }
> +
>      /// Check if atime updates are honored by the filesystem backing the chunk store.
>      ///
>      /// Checks if the atime is always updated by utimensat taking into consideration the Linux
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index c2a82b8b8..e36af68fc 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -1676,30 +1676,19 @@ impl DataStore {
>                          .extension()
>                          .is_some_and(|ext| ext == "bad");
>  
> -                    if atime < min_atime {
> +                    if ChunkStore::check_atime_and_update_gc_status(
> +                        atime,
> +                        min_atime,
> +                        oldest_writer,
> +                        content.size,
> +                        bad,
> +                        &mut gc_status,
> +                    ) {
>                          if let Some(cache) = self.cache() {
>                              // ignore errors, phase 3 will retry cleanup anyways
>                              let _ = cache.remove(&digest);
>                          }
> -                        delete_list.push(content.key.clone());
> -                        if bad {
> -                            gc_status.removed_bad += 1;
> -                        } else {
> -                            gc_status.removed_chunks += 1;
> -                        }
> -                        gc_status.removed_bytes += content.size;
> -                    } else if atime < oldest_writer {
> -                        if bad {
> -                            gc_status.still_bad += 1;
> -                        } else {
> -                            gc_status.pending_chunks += 1;
> -                        }
> -                        gc_status.pending_bytes += content.size;
> -                    } else {
> -                        if !bad {
> -                            gc_status.disk_chunks += 1;
> -                        }
> -                        gc_status.disk_bytes += content.size;
> +                        delete_list.push(content.key);

nit: this removal of the clone could have happened in patch #1

>                      }
>  
>                      chunk_count += 1;
> -- 
> 2.47.3