Date: Wed, 19 Nov 2025 13:34:38 +0100
From: Fabian Grünbichler
To: Proxmox Backup Server development discussion <pbs-devel@lists.proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup 2/2] GC: raise nofile soft limit to the hard limit on s3 backed stores
Message-Id: <1763555614.6dpj0qjsh7.astroid@yuna.none>
In-Reply-To: <20251118104529.254348-3-c.ebner@proxmox.com>

On November 18, 2025 11:45 am, Christian Ebner wrote:
> Since commit 86d5d073 ("GC: fix race with chunk upload/insert on s3
> backends"), per-chunk file locks are acquired during phase 2 of
> garbage collection for datastores backed by s3 object stores. This
> however means that up to 1000 file locks might be held at once, which
> can result in the limit of open file handles being reached.
>
> Therefore, bump the nofile soft limit to the hard limit.
>
> Signed-off-by: Christian Ebner
> ---
>  pbs-datastore/src/datastore.rs | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index 0a5179230..ac22c10c5 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -11,6 +11,7 @@ use http_body_util::BodyExt;
>  use hyper::body::Bytes;
>  use nix::unistd::{unlinkat, UnlinkatFlags};
>  use pbs_tools::lru_cache::LruCache;
> +use pbs_tools::raise_nofile_limit;
>  use tokio::io::AsyncWriteExt;
>  use tracing::{info, warn};
>
> @@ -1589,6 +1590,12 @@ impl DataStore {
>          let s3_client = match self.backend()? {
>              DatastoreBackend::Filesystem => None,
>              DatastoreBackend::S3(s3_client) => {
> +                // required for per-chunk file locks in GC phase 2 on S3 backed stores
> +                let old_rlimit =
> +                    raise_nofile_limit().context("failed to raise open file handle limit")?;
> +                if old_rlimit.rlim_max <= 4096 {
> +                    info!("limit for open file handles low: {}", old_rlimit.rlim_max);
> +                }

shouldn't we just do this either in the service unit, or at service
startup, instead of during every GC run?
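e.g., raising it once when the daemon comes up would look roughly like
the following - a minimal sketch using libc directly, just to illustrate
the mechanism; the actual pbs_tools::raise_nofile_limit helper may be
implemented differently, and the function name here is made up:

use std::io;

// Sketch: raise the RLIMIT_NOFILE soft limit to the hard limit once,
// returning the previous limits so callers can still warn when the
// hard limit itself is low. Name chosen for illustration only.
fn raise_nofile_limit_at_startup() -> io::Result<libc::rlimit> {
    let mut rlim = libc::rlimit {
        rlim_cur: 0,
        rlim_max: 0,
    };
    // SAFETY: rlim is a valid, writable rlimit struct.
    if unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut rlim) } != 0 {
        return Err(io::Error::last_os_error());
    }
    let old = rlim;
    rlim.rlim_cur = rlim.rlim_max; // soft limit = hard limit
    if unsafe { libc::setrlimit(libc::RLIMIT_NOFILE, &rlim) } != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(old)
}

or, even simpler, set LimitNOFILE= in the daemon's systemd service unit
and avoid touching rlimits in the GC code path entirely.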
>                  proxmox_async::runtime::block_on(s3_client.head_bucket())
>                      .context("failed to reach bucket")?;
>                  Some(s3_client)
> --
> 2.47.3

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel