From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox Backup Server development discussion
<pbs-devel@lists.proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup 2/2] GC: raise nofile soft limit to the hard limit on s3 backed stores
Date: Wed, 19 Nov 2025 13:34:38 +0100 [thread overview]
Message-ID: <1763555614.6dpj0qjsh7.astroid@yuna.none> (raw)
In-Reply-To: <20251118104529.254348-3-c.ebner@proxmox.com>
On November 18, 2025 11:45 am, Christian Ebner wrote:
> Since commit 86d5d073 ("GC: fix race with chunk upload/insert on s3
> backends"), per-chunk file locks are acquired during phase 2 of
> garbage collection for datastores backed by s3 object stores. This
> however means that up to 1000 file locks might be held at once, which
> can cause the limit on open file handles to be reached.
>
> Therefore, bump the nofile soft limit to the hard limit.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-datastore/src/datastore.rs | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index 0a5179230..ac22c10c5 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -11,6 +11,7 @@ use http_body_util::BodyExt;
> use hyper::body::Bytes;
> use nix::unistd::{unlinkat, UnlinkatFlags};
> use pbs_tools::lru_cache::LruCache;
> +use pbs_tools::raise_nofile_limit;
> use tokio::io::AsyncWriteExt;
> use tracing::{info, warn};
>
> @@ -1589,6 +1590,12 @@ impl DataStore {
> let s3_client = match self.backend()? {
> DatastoreBackend::Filesystem => None,
> DatastoreBackend::S3(s3_client) => {
> + // required for per-chunk file locks in GC phase 2 on S3 backed stores
> + let old_rlimit =
> + raise_nofile_limit().context("failed to raise open file handle limit")?;
> + if old_rlimit.rlim_max <= 4096 {
> + info!("limit for open file handles low: {}", old_rlimit.rlim_max);
> + }
shouldn't we just do this either in the service unit or at service
startup, instead of during every GC run?
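
for reference, the unit-file variant could be a simple drop-in (sketch
only, assuming GC runs inside proxmox-backup-proxy.service - adjust the
unit name and value as needed):

    # hypothetical drop-in: /etc/systemd/system/proxmox-backup-proxy.service.d/nofile.conf
    [Service]
    # raises both soft and hard limit; a soft:hard pair like 65536:1048576 also works
    LimitNOFILE=1048576

systemd would then apply the limit before the daemon starts, so no
setrlimit call at runtime would be needed.
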
> proxmox_async::runtime::block_on(s3_client.head_bucket())
> .context("failed to reach bucket")?;
> Some(s3_client)
> --
> 2.47.3
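
for context, the helper that patch 1/2 moves to pbs-tools boils down to a
getrlimit/setrlimit pair on RLIMIT_NOFILE; a rough sketch of what such a
helper might look like (illustration only, not the actual pbs-tools
implementation):

    use anyhow::{bail, Result};

    /// Raise the RLIMIT_NOFILE soft limit to the hard limit and return the
    /// previous limits (sketch, not the real pbs-tools helper).
    pub fn raise_nofile_limit() -> Result<libc::rlimit> {
        let mut rlimit = libc::rlimit { rlim_cur: 0, rlim_max: 0 };

        // query the current soft (rlim_cur) and hard (rlim_max) limits
        if unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut rlimit) } != 0 {
            bail!("getrlimit failed: {}", std::io::Error::last_os_error());
        }
        let old = rlimit;

        if rlimit.rlim_cur < rlimit.rlim_max {
            // raising the soft limit up to the hard limit needs no extra privileges
            rlimit.rlim_cur = rlimit.rlim_max;
            if unsafe { libc::setrlimit(libc::RLIMIT_NOFILE, &rlimit) } != 0 {
                bail!("setrlimit failed: {}", std::io::Error::last_os_error());
            }
        }

        Ok(old)
    }

doing that once at service startup (or via the unit file) would have the
same effect without repeating the syscalls on every GC run.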
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
Thread overview: 5+ messages
2025-11-18 10:45 [pbs-devel] [PATCH proxmox-backup 0/2] raise nofile limit for GC on S3 stores Christian Ebner
2025-11-18 10:45 ` [pbs-devel] [PATCH proxmox-backup 1/2] tools: move rlimit helper from pbs-client to pbs-tools Christian Ebner
2025-11-18 10:45 ` [pbs-devel] [PATCH proxmox-backup 2/2] GC: raise nofile soft limit to the hard limit on s3 backed stores Christian Ebner
2025-11-19 12:34 ` Fabian Grünbichler [this message]
2025-11-19 12:53 ` Christian Ebner