public inbox for pbs-devel@lists.proxmox.com
From: Samuel Rufinatscha <s.rufinatscha@proxmox.com>
To: "Proxmox Backup Server development discussion"
	<pbs-devel@lists.proxmox.com>,
	"Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup 0/3] datastore: remove config reload on hot path
Date: Wed, 12 Nov 2025 18:27:17 +0100	[thread overview]
Message-ID: <7c911437-6102-4ae6-91af-c98a9b9df69b@proxmox.com> (raw)
In-Reply-To: <1762935902.jw31anddp2.astroid@yuna.none>

On 11/12/25 12:27 PM, Fabian Grünbichler wrote:
> On November 11, 2025 1:29 pm, Samuel Rufinatscha wrote:
>> Hi,
>>
>> this series reduces CPU time in datastore lookups by avoiding repeated
>> datastore.cfg reads/parses in both `lookup_datastore()` and
>> `DataStore::Drop`. It also adds a TTL so manual config edits are
>> noticed without reintroducing hashing on every request.
>>
>> While investigating #6049 [1], cargo-flamegraph [2] showed hotspots during
>> repeated `/status` calls in `lookup_datastore()` and in `Drop`,
>> dominated by `pbs_config::datastore::config()` (config parse).
>>
>> The parsing cost itself should be investigated in a future effort.
>> Furthermore, cargo-flamegraph showed that when the API is accessed
>> with token-based auth, a significant amount of time is spent in
>> validation on every request, likely related to bcrypt; this, too,
>> should be revisited separately.
> 
> please file a bug for the token part, if there isn't already one!
> 

Thanks for the in-depth review, Fabian! I filed a bug report for the 
token part and attached the relevant flamegraph, which should help 
narrow down the issue: https://bugzilla.proxmox.com/show_bug.cgi?id=7017

> thanks for diving into this, it already looks promising, even though the
> effect on more "normal" systems with more reasonable numbers of
> datastores and clients will be less pronounced ;)
> 
> the big TL;DR would be that we trade faster datastore lookups (which
> happen quite frequently, in particular if there are many datastores with
> clients checking their status) against slightly delayed reload of the
> configuration in case of manual, behind-our-backs edits, with one
> particular corner case that is slightly problematic, but also a bit
> contrived:
> - datastore is looked up
> - config is edited (manually) to set maintenance mode to one that
>    requires removing from the datastore map once the last task exits
> - last task drops the datastore struct
> - no regular edits happened in the meantime
> - the Drop handler doesn't know it needs to remove the datastore from
>    the map
> - open FD is held by proxy, datastore fails to be unmounted/..
> 
> we could solve this issue by not only bumping the generation on save,
> but also when we reload the config (in particular if we cache the whole
> config!). that would make the Drop handler efficient enough for idle
> systems that have mostly lookups but no long running tasks. as soon as a
> datastore has long running tasks, the last such task will likely exit
> long after the TTL for its config lookup has expired, so will need to do
> a refresh - although that refresh could again be from the global cache,
> instead of from disk? still wouldn't close the window entirely, but make
> it pretty unlikely to be hit in practice..
> 

Good idea! I will add the bump to the `lookup_datastore()` slow path, 
directly after `(config, digest)` is read: if the digest changed but the 
generation has not, the generation gets incremented. This should also 
help avoid unnecessary cache invalidations.
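The slow-path bump could look roughly like this; a minimal sketch, assuming a shared generation counter next to the last observed config digest (the names `ConfigCache` and `note_reload` are illustrative, not the actual proxmox-backup types):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;

// Illustrative stand-in for the shared config-version state.
struct ConfigCache {
    generation: AtomicU64,
    last_digest: Mutex<[u8; 32]>,
}

impl ConfigCache {
    // Called in the lookup_datastore() slow path right after
    // (config, digest) was read from disk: if the on-disk digest
    // changed without a corresponding save, bump the shared
    // generation so Drop handlers notice the manual edit. If the
    // digest is unchanged, leave the generation alone, avoiding an
    // unnecessary cache invalidation. Returns the current generation.
    fn note_reload(&self, new_digest: [u8; 32]) -> u64 {
        let mut last = self.last_digest.lock().unwrap();
        if *last != new_digest {
            *last = new_digest;
            // fetch_add returns the previous value, so +1 is the new one.
            self.generation.fetch_add(1, Ordering::SeqCst) + 1
        } else {
            self.generation.load(Ordering::SeqCst)
        }
    }
}
```

The digest comparison is what keeps the bump idempotent: a manual edit increments the generation exactly once, no matter how many slow-path reloads observe it afterwards.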

In Drop we then re-check the config only if the shared generation 
differs from the generation cached in the tag, or if the tag's TTL has 
expired; otherwise we reuse the cached decision.
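The Drop-side check could be sketched like this; again only an illustration under assumed names (`CacheTag`, `STALE_TTL`, `needs_config_refresh` are hypothetical, not the actual proxmox-backup API):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, Instant};

// Shared generation counter, bumped on config save and on observed
// manual edits (illustrative stand-in for ConfigVersionCache).
static SHARED_GENERATION: AtomicU64 = AtomicU64::new(0);

// Hypothetical TTL after which a cached decision is considered stale.
const STALE_TTL: Duration = Duration::from_secs(60);

// Per-datastore tag recording what the cached decision was based on.
struct CacheTag {
    generation: u64,    // shared generation observed when caching
    cached_at: Instant, // when the cached decision was made
}

// Returns true if Drop must re-read the config (slow path), false if
// the cached maintenance-mode decision can be reused.
fn needs_config_refresh(tag: &CacheTag, now: Instant) -> bool {
    let shared = SHARED_GENERATION.load(Ordering::Acquire);
    shared != tag.generation || now.duration_since(tag.cached_at) >= STALE_TTL
}
```

The generation comparison catches edits that went through the API or were noticed by a slow-path reload; the TTL bounds how long the contrived behind-our-backs window from the cover letter discussion can stay open.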

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel


Thread overview: 11+ messages
2025-11-11 12:29 Samuel Rufinatscha
2025-11-11 12:29 ` [pbs-devel] [PATCH proxmox-backup 1/3] partial fix #6049: datastore: impl ConfigVersionCache fast path for lookups Samuel Rufinatscha
2025-11-12 13:24   ` Fabian Grünbichler
2025-11-13 12:59     ` Samuel Rufinatscha
2025-11-11 12:29 ` [pbs-devel] [PATCH proxmox-backup 2/3] partial fix #6049: datastore: use config fast-path in Drop Samuel Rufinatscha
2025-11-12 11:24   ` Fabian Grünbichler
2025-11-12 15:20     ` Samuel Rufinatscha
2025-11-11 12:29 ` [pbs-devel] [PATCH proxmox-backup 3/3] datastore: add TTL fallback to catch manual config edits Samuel Rufinatscha
2025-11-12 11:27 ` [pbs-devel] [PATCH proxmox-backup 0/3] datastore: remove config reload on hot path Fabian Grünbichler
2025-11-12 17:27   ` Samuel Rufinatscha [this message]
2025-11-14 15:08 ` [pbs-devel] superseded: " Samuel Rufinatscha
