From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Christian Ebner <c.ebner@proxmox.com>,
Proxmox Backup Server development discussion
<pbs-devel@lists.proxmox.com>,
Thomas Lamprecht <t.lamprecht@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup v3] etc: raise nofile soft limit to hard limit for proxmox-backup-proxy
Date: Fri, 21 Nov 2025 08:43:11 +0100
Message-ID: <1763710199.7j2qmp71ya.astroid@yuna.none>
In-Reply-To: <8e4d6f6e-7b43-4049-8e02-11f4bc780bff@proxmox.com>
On November 20, 2025 6:23 pm, Thomas Lamprecht wrote:
> On 20.11.25 at 16:12, Christian Ebner wrote:
>> On 11/20/25 4:05 PM, Thomas Lamprecht wrote:
>>> On 20.11.25 at 15:32, Christian Ebner wrote:
>>>> This is acceptable since PBS does not directly depend on problematic
>>>> select() calls as verified via `nm` and does not use it in linked
>>>> libraries to the best of my knowledge.
>>>>
>>>
>>> Isn't above and
>>
>> With above I intended to state that the PBS code itself does not call into select(), while below are dependencies on shared objects which might call into select() according to their symbols.
>>
>
> And the systemd news entry you link to in the commit message clearly states:
>
> ----8<----
> Programs that want to take benefit of the increased limit have to "opt-in" into
> high file descriptors explicitly by raising their soft limit. Of course, when
> they do that they must acknowledge that they cannot use select() anymore (and
> **neither can any shared library they use — or any shared library used by any
> shared library they use and so on**).
> ---->8----
>
> I just checked the apt repo, and it includes various select calls. Most seem
> to center around downloading packages and such, but I would not bet that no
> such select() hides anywhere in the code paths we use.
>
> PAM uses select() in pam_loginuid, which might be part of the login call,
> although it only uses it if require_auditd is enabled (which I don't think it
> is). I have not checked the others yet.
>
> I mean, one option might be to provide our own preloaded select wrapper
> overriding the glibc one and keep some FDs below 1024 reserved for that, but
> I really really dislike doing such things. Similar in spirit would be providing
> a select()-compatible implementation using poll() via LD_PRELOAD, but that is
> also far from great...
>
> Moving either GC, or all the things that might call select as per your list,
> into a dedicated process might be the nicer thing to do. But as mentioned
> off-list, I'll try to walk through the problem and code again tomorrow and see
> if I can find some other viable options (or maybe you/Fabian have some ideas);
> with my current knowledge I cannot really accept doing this bump.
if we move something, we should move the things (potentially) calling
select, as we can then benefit from higher FD limits for all the regular
operations. 1k open FDs is not much even without the newly added locks,
and we already had users running into this before those were introduced,
who fixed it by raising the limit with a systemd override or other means
(or did not fix it at all):
https://forum.proxmox.com/threads/too-many-open-files-os-error-24.73094/
https://forum.proxmox.com/threads/garbage-collect-job-fails-with-emfile-too-many-open-files.152687/
https://forum.proxmox.com/threads/tasks-fail-with-too-many-open-files-os-error-24.126770/
https://forum.proxmox.com/threads/sync-from-pbs-to-pbs-failed-too-many-open-files.113036/
https://forum.proxmox.com/threads/another-sync-error.73417/
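for reference, such a manual override is just a drop-in along these lines (the
limit value here is an arbitrary example, not a recommendation):

# systemctl edit proxmox-backup-proxy.service
[Service]
LimitNOFILE=65536
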
the only alternative I see at the moment would be to either
- reduce the lock granularity of the newly introduced lock (e.g.,
lock-per-chunk-prefix, see the sketch right below this list)
- reduce the batch size (which determines the number of concurrently
held locks in GC) for S3 deletion
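the former could look roughly like the following (purely hypothetical sketch,
not existing PBS code, and glossing over how the real per-chunk locks are
implemented):

// Hypothetical sketch: share one lock per leading digest byte instead of
// one lock per chunk, so GC never holds more than 256 guards (and thus
// FDs, if the locks were file-based) at the same time.
use std::sync::{Mutex, MutexGuard};

pub struct ChunkPrefixLocks {
    locks: Vec<Mutex<()>>, // index = first byte of the chunk digest
}

impl ChunkPrefixLocks {
    pub fn new() -> Self {
        Self {
            locks: (0..256).map(|_| Mutex::new(())).collect(),
        }
    }

    /// Lock the shard covering all chunks whose digest starts with digest[0].
    pub fn lock(&self, digest: &[u8; 32]) -> MutexGuard<'_, ()> {
        self.locks[digest[0] as usize].lock().unwrap()
    }
}

the obvious downside being that unrelated chunks sharing a prefix contend on
the same lock, and a guard can no longer be tied to one specific chunk.
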
the latter would be a fairly simple patch, but make GC potentially a bit
more expensive (more delete requests to S3):
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 0a5179230..20372190c 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -1716,6 +1716,24 @@ impl DataStore {
                }
                chunk_count += 1;
+
+                drop(_guard);
+
+                if delete_list.len() > 100 {
+                    let delete_objects_result = proxmox_async::runtime::block_on(
+                        s3_client.delete_objects(
+                            &delete_list
+                                .iter()
+                                .map(|(key, _)| key.clone())
+                                .collect::<Vec<S3ObjectKey>>(),
+                        ),
+                    )?;
+                    if let Some(_err) = delete_objects_result.error {
+                        bail!("failed to delete some objects");
+                    }
+                    // release all chunk guards
+                    delete_list.clear();
+                }
            }
            if !delete_list.is_empty() {
Thread overview: 9+ messages
2025-11-20 14:31 Christian Ebner
2025-11-20 15:05 ` Thomas Lamprecht
2025-11-20 15:12 ` Christian Ebner
2025-11-20 17:23 ` Thomas Lamprecht
2025-11-21 7:02 ` Christian Ebner
2025-11-21 7:43 ` Fabian Grünbichler [this message]
2025-11-21 8:00 ` Christian Ebner
2025-11-21 9:06 ` Fabian Grünbichler
2025-11-21 9:07 ` Fabian Grünbichler