From: Christian Ebner <c.ebner@proxmox.com>
To: Proxmox Backup Server development discussion
<pbs-devel@lists.proxmox.com>,
Samuel Rufinatscha <s.rufinatscha@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup v4 2/4] pbs-config: cache verified API token secrets
Date: Tue, 10 Feb 2026 13:54:01 +0100
Message-ID: <1f0a4451-a767-46d2-976d-7c7cef86cd9b@proxmox.com>
In-Reply-To: <20260121151408.731516-3-s.rufinatscha@proxmox.com>
One suggestion below:
On 1/21/26 4:13 PM, Samuel Rufinatscha wrote:
> Adds an in-memory cache of successfully verified token secrets.
> Subsequent requests for the same token+secret combination only perform a
> comparison using openssl::memcmp::eq and avoid re-running the password
> hash. The cache is updated when a token secret is set and cleared when a
> token is deleted.
>
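For context, the fast path amounts to a constant-time comparison against the
cached plaintext. A minimal standalone sketch (with `String` standing in for
`Authid`, and without the shared-generation check and global cache the patch
uses):

use std::collections::HashMap;

// Look up the cached plaintext for this token and compare it in constant
// time instead of re-running the password hash.
fn fast_path_matches(cache: &HashMap<String, String>, tokenid: &str, presented: &str) -> bool {
    match cache.get(tokenid) {
        // openssl::memcmp::eq expects equal-length slices, so check lengths first.
        Some(cached) if cached.len() == presented.len() => {
            openssl::memcmp::eq(cached.as_bytes(), presented.as_bytes())
        }
        _ => false,
    }
}
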
> Signed-off-by: Samuel Rufinatscha <s.rufinatscha@proxmox.com>
> ---
> Changes from v3 to v4:
> * Add gen param to invalidate_cache_state()
> * Validates the generation bump after obtaining write lock in
> apply_api_mutation
> * Pass lock to apply_api_mutation
> * Remove unnecessary gen check cache_try_secret_matches
> * Adjusted commit message
>
> Changes from v2 to v3:
> * Replaced process-local cache invalidation (AtomicU64
> API_MUTATION_GENERATION) with a cross-process shared generation via
> ConfigVersionCache.
> * Validate shared generation before/after the constant-time secret
> compare; only insert into cache if the generation is unchanged.
> * invalidate_cache_state() on insert if shared generation changed.
>
> Changes from v1 to v2:
> * Replace OnceCell with LazyLock, and std::sync::RwLock with
> parking_lot::RwLock.
> * Add API_MUTATION_GENERATION and guard cache inserts
> to prevent “zombie inserts” across concurrent set/delete.
> * Refactor cache operations into cache_try_secret_matches,
> cache_try_insert_secret, and centralize write-side behavior in
> apply_api_mutation.
> * Switch fast-path cache access to try_read/try_write (best-effort).
>
> Cargo.toml | 1 +
> pbs-config/Cargo.toml | 1 +
> pbs-config/src/token_shadow.rs | 160 ++++++++++++++++++++++++++++++++-
> 3 files changed, 159 insertions(+), 3 deletions(-)
>
> diff --git a/Cargo.toml b/Cargo.toml
> index 0da18383..aed66fe3 100644
> --- a/Cargo.toml
> +++ b/Cargo.toml
> @@ -143,6 +143,7 @@ nom = "7"
> num-traits = "0.2"
> once_cell = "1.3.1"
> openssl = "0.10.40"
> +parking_lot = "0.12"
> percent-encoding = "2.1"
> pin-project-lite = "0.2"
> regex = "1.5.5"
> diff --git a/pbs-config/Cargo.toml b/pbs-config/Cargo.toml
> index 74afb3c6..eb81ce00 100644
> --- a/pbs-config/Cargo.toml
> +++ b/pbs-config/Cargo.toml
> @@ -13,6 +13,7 @@ libc.workspace = true
> nix.workspace = true
> once_cell.workspace = true
> openssl.workspace = true
> +parking_lot.workspace = true
> regex.workspace = true
> serde.workspace = true
> serde_json.workspace = true
> diff --git a/pbs-config/src/token_shadow.rs b/pbs-config/src/token_shadow.rs
> index 640fabbf..d5aa5de2 100644
> --- a/pbs-config/src/token_shadow.rs
> +++ b/pbs-config/src/token_shadow.rs
> @@ -1,6 +1,8 @@
> use std::collections::HashMap;
> +use std::sync::LazyLock;
>
> use anyhow::{bail, format_err, Error};
> +use parking_lot::RwLock;
> use serde::{Deserialize, Serialize};
> use serde_json::{from_value, Value};
>
> @@ -13,6 +15,18 @@ use crate::{open_backup_lockfile, BackupLockGuard};
> const LOCK_FILE: &str = pbs_buildcfg::configdir!("/token.shadow.lock");
> const CONF_FILE: &str = pbs_buildcfg::configdir!("/token.shadow");
>
> +/// Global in-memory cache for successfully verified API token secrets.
> +/// The cache stores plain text secrets for token Authids that have already been
> +/// verified against the hashed values in `token.shadow`. This allows for cheap
> +/// subsequent authentications for the same token+secret combination, avoiding
> +/// recomputing the password hash on every request.
> +static TOKEN_SECRET_CACHE: LazyLock<RwLock<ApiTokenSecretCache>> = LazyLock::new(|| {
> + RwLock::new(ApiTokenSecretCache {
> + secrets: HashMap::new(),
> + shared_gen: 0,
> + })
> +});
> +
> #[derive(Serialize, Deserialize)]
> #[serde(rename_all = "kebab-case")]
> /// ApiToken id / secret pair
> @@ -54,9 +68,27 @@ pub fn verify_secret(tokenid: &Authid, secret: &str) -> Result<(), Error> {
> bail!("not an API token ID");
> }
>
> + // Fast path
> + if cache_try_secret_matches(tokenid, secret) {
> + return Ok(());
> + }
> +
> + // Slow path
> + // First, capture the shared generation before doing the hash verification.
> + let gen_before = token_shadow_shared_gen();
> +
> let data = read_file()?;
> match data.get(tokenid) {
> - Some(hashed_secret) => proxmox_sys::crypt::verify_crypt_pw(secret, hashed_secret),
> + Some(hashed_secret) => {
> + proxmox_sys::crypt::verify_crypt_pw(secret, hashed_secret)?;
> +
> + // Try to cache only if nothing changed while verifying the secret.
> + if let Some(gen) = gen_before {
> + cache_try_insert_secret(tokenid.clone(), secret.to_owned(), gen);
> + }
> +
> + Ok(())
> + }
> None => bail!("invalid API token"),
> }
> }
> @@ -75,13 +107,15 @@ fn set_secret(tokenid: &Authid, secret: &str) -> Result<(), Error> {
> bail!("not an API token ID");
> }
>
> - let _guard = lock_config()?;
> + let guard = lock_config()?;
>
> let mut data = read_file()?;
> let hashed_secret = proxmox_sys::crypt::encrypt_pw(secret)?;
> data.insert(tokenid.clone(), hashed_secret);
> write_file(data)?;
>
> + apply_api_mutation(guard, tokenid, Some(secret));
> +
> Ok(())
> }
>
> @@ -91,11 +125,131 @@ pub fn delete_secret(tokenid: &Authid) -> Result<(), Error> {
> bail!("not an API token ID");
> }
>
> - let _guard = lock_config()?;
> + let guard = lock_config()?;
>
> let mut data = read_file()?;
> data.remove(tokenid);
> write_file(data)?;
>
> + apply_api_mutation(guard, tokenid, None);
> +
> Ok(())
> }
> +
> +struct ApiTokenSecretCache {
> + /// Keys are token Authids, values are the corresponding plain text secrets.
> + /// Entries are added after a successful on-disk verification in
> + /// `verify_secret` or when a new token secret is generated by
> + /// `generate_and_set_secret`. Used to avoid repeated
> + /// password-hash computation on subsequent authentications.
> + secrets: HashMap<Authid, CachedSecret>,
> + /// Shared generation to detect mutations of the underlying token.shadow file.
> + shared_gen: usize,
> +}
> +
> +/// Cached secret.
> +struct CachedSecret {
> + secret: String,
> +}
> +
> +fn cache_try_insert_secret(tokenid: Authid, secret: String, shared_gen_before: usize) {
> + let Some(mut cache) = TOKEN_SECRET_CACHE.try_write() else {
> + return;
> + };
> +
> + let Some(shared_gen_now) = token_shadow_shared_gen() else {
> + return;
> + };
> +
> + // If this process missed a generation bump, its cache is stale.
> + if cache.shared_gen != shared_gen_now {
> + invalidate_cache_state_and_set_gen(&mut cache, shared_gen_now);
> + }
> +
> + // If a mutation happened while we were verifying the secret, do not insert.
> + if shared_gen_now == shared_gen_before {
> + cache.secrets.insert(tokenid, CachedSecret { secret });
> + }
> +}
> +
> +/// Tries to match the given token secret against the cached secret.
> +///
> +/// Verifies the generation/version before doing the constant-time
> +/// comparison to reduce TOCTOU risk. During token rotation or deletion
> +/// tokens for in-flight requests may still validate against the previous
> +/// generation.
> +fn cache_try_secret_matches(tokenid: &Authid, secret: &str) -> bool {
> + let Some(cache) = TOKEN_SECRET_CACHE.try_read() else {
> + return false;
> + };
> + let Some(entry) = cache.secrets.get(tokenid) else {
> + return false;
> + };
> + let Some(current_gen) = token_shadow_shared_gen() else {
> + return false;
> + };
> +
> + if current_gen == cache.shared_gen {
> + return openssl::memcmp::eq(entry.secret.as_bytes(), secret.as_bytes());
> + }
> +
> + false
> +}
> +
> +fn apply_api_mutation(_guard: BackupLockGuard, tokenid: &Authid, new_secret: Option<&str>) {
> + // Signal cache invalidation to other processes (best-effort).
> + let bumped_gen = bump_token_shadow_shared_gen();
> +
> + let mut cache = TOKEN_SECRET_CACHE.write();
> +
> + // If we cannot get the current generation, we cannot trust the cache
> + let Some(current_gen) = token_shadow_shared_gen() else {
> + invalidate_cache_state_and_set_gen(&mut cache, 0);
> + return;
> + };
> +
> + // If we cannot bump the shared generation, or if it changed after
> + // obtaining the cache write lock, we cannot trust the cache
> + if bumped_gen != Some(current_gen) {
> + invalidate_cache_state_and_set_gen(&mut cache, current_gen);
> + return;
> + }
> +
> + // Update to the post-mutation generation.
> + cache.shared_gen = current_gen;
> +
> + // Apply the new mutation.
> + match new_secret {
> + Some(secret) => {
> + cache.secrets.insert(
> + tokenid.clone(),
> + CachedSecret {
> + secret: secret.to_owned(),
> + },
> + );
> + }
> + None => {
> + cache.secrets.remove(tokenid);
> + }
> + }
> +}
> +
> +/// Get the current shared generation.
> +fn token_shadow_shared_gen() -> Option<usize> {
> + crate::ConfigVersionCache::new()
> + .ok()
> + .map(|cvc| cvc.token_shadow_generation())
> +}
> +
> +/// Bump and return the new shared generation.
> +fn bump_token_shadow_shared_gen() -> Option<usize> {
> + crate::ConfigVersionCache::new()
> + .ok()
> + .map(|cvc| cvc.increase_token_shadow_generation() + 1)
> +}
> +
> +/// Invalidates local cache contents and sets/updates the cached generation.
> +fn invalidate_cache_state_and_set_gen(cache: &mut ApiTokenSecretCache, gen: usize) {
> + cache.secrets.clear();
> + cache.shared_gen = gen;
> +}
The function above operates on the cache, so why not make it a method of it?
And also bundle the generation bumps with the cache mutations, so they cannot
be forgotten.
Something along the lines of the following diff on top of this patch:
diff --git a/pbs-config/src/token_shadow.rs b/pbs-config/src/token_shadow.rs
index d5aa5de28..a8104f142 100644
--- a/pbs-config/src/token_shadow.rs
+++ b/pbs-config/src/token_shadow.rs
@@ -136,6 +136,11 @@ pub fn delete_secret(tokenid: &Authid) -> Result<(), Error> {
     Ok(())
 }
 
+/// Cached secret.
+struct CachedSecret {
+    secret: String,
+}
+
 struct ApiTokenSecretCache {
     /// Keys are token Authids, values are the corresponding plain text secrets.
     /// Entries are added after a successful on-disk verification in
@@ -147,9 +152,22 @@ struct ApiTokenSecretCache {
     shared_gen: usize,
 }
 
-/// Cached secret.
-struct CachedSecret {
-    secret: String,
+impl ApiTokenSecretCache {
+    /// Invalidates local cache contents and sets/updates the cached generation.
+    fn invalidate_state_and_set_gen(&mut self, gen: usize) {
+        self.secrets.clear();
+        self.shared_gen = gen;
+    }
+
+    fn insert_and_set_gen(&mut self, tokenid: Authid, secret: CachedSecret, gen: usize) {
+        self.secrets.insert(tokenid.clone(), secret);
+        self.shared_gen = gen;
+    }
+
+    fn evict_and_set_gen(&mut self, tokenid: &Authid, gen: usize) {
+        self.secrets.remove(tokenid);
+        self.shared_gen = gen;
+    }
 }
 
 fn cache_try_insert_secret(tokenid: Authid, secret: String, shared_gen_before: usize) {
@@ -163,12 +181,12 @@ fn cache_try_insert_secret(tokenid: Authid, secret: String, shared_gen_before: u
 
     // If this process missed a generation bump, its cache is stale.
     if cache.shared_gen != shared_gen_now {
-        invalidate_cache_state_and_set_gen(&mut cache, shared_gen_now);
+        cache.invalidate_state_and_set_gen(shared_gen_now);
     }
 
     // If a mutation happened while we were verifying the secret, do not insert.
     if shared_gen_now == shared_gen_before {
-        cache.secrets.insert(tokenid, CachedSecret { secret });
+        cache.insert_and_set_gen(tokenid, CachedSecret { secret }, shared_gen_now);
     }
 }
 
@@ -204,33 +222,24 @@ fn apply_api_mutation(_guard: BackupLockGuard, tokenid: &Authid, new_secret: Opt
 
     // If we cannot get the current generation, we cannot trust the cache
     let Some(current_gen) = token_shadow_shared_gen() else {
-        invalidate_cache_state_and_set_gen(&mut cache, 0);
+        cache.invalidate_state_and_set_gen(0);
         return;
     };
 
     // If we cannot bump the shared generation, or if it changed after
     // obtaining the cache write lock, we cannot trust the cache
     if bumped_gen != Some(current_gen) {
-        invalidate_cache_state_and_set_gen(&mut cache, current_gen);
+        cache.invalidate_state_and_set_gen(current_gen);
         return;
     }
 
-    // Update to the post-mutation generation.
-    cache.shared_gen = current_gen;
-
     // Apply the new mutation.
     match new_secret {
         Some(secret) => {
-            cache.secrets.insert(
-                tokenid.clone(),
-                CachedSecret {
-                    secret: secret.to_owned(),
-                },
-            );
-        }
-        None => {
-            cache.secrets.remove(tokenid);
+            let cached_secret = CachedSecret { secret: secret.to_owned() };
+            cache.insert_and_set_gen(tokenid.clone(), cached_secret, current_gen);
         }
+        None => cache.evict_and_set_gen(tokenid, current_gen),
     }
 }
 
@@ -248,8 +257,3 @@ fn bump_token_shadow_shared_gen() -> Option<usize> {
         .map(|cvc| cvc.increase_token_shadow_generation() + 1)
 }
-
-/// Invalidates local cache contents and sets/updates the cached generation.
-fn invalidate_cache_state_and_set_gen(cache: &mut ApiTokenSecretCache, gen: usize) {
-    cache.secrets.clear();
-    cache.shared_gen = gen;
-}
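
Untangled from the diff, the bundled operations boil down to something like
the following self-contained sketch (simplified: `String` instead of `Authid`,
a plain struct instead of the global LazyLock<RwLock<...>>; method names as in
the diff above):

use std::collections::HashMap;

struct CachedSecret {
    secret: String,
}

struct ApiTokenSecretCache {
    secrets: HashMap<String, CachedSecret>,
    shared_gen: usize,
}

impl ApiTokenSecretCache {
    /// Drop all entries and record the generation the cache now reflects.
    fn invalidate_state_and_set_gen(&mut self, gen: usize) {
        self.secrets.clear();
        self.shared_gen = gen;
    }

    /// Insert (or replace) an entry and record the generation in the same step.
    fn insert_and_set_gen(&mut self, tokenid: String, secret: CachedSecret, gen: usize) {
        self.secrets.insert(tokenid, secret);
        self.shared_gen = gen;
    }

    /// Remove an entry and record the generation in the same step.
    fn evict_and_set_gen(&mut self, tokenid: &str, gen: usize) {
        self.secrets.remove(tokenid);
        self.shared_gen = gen;
    }
}

fn main() {
    let mut cache = ApiTokenSecretCache {
        secrets: HashMap::new(),
        shared_gen: 0,
    };

    // Every mutation has to state which generation the cache now reflects,
    // so the generation update cannot be forgotten at a call site.
    cache.insert_and_set_gen(
        "user@pbs!token".to_string(),
        CachedSecret { secret: "s3cr3t".to_string() },
        1,
    );
    cache.evict_and_set_gen("user@pbs!token", 2);
    cache.invalidate_state_and_set_gen(3);
    assert!(cache.secrets.is_empty());
    assert_eq!(cache.shared_gen, 3);
}

That way a caller cannot touch the secrets map without also deciding which
generation the cache state corresponds to.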