From: "Lukas Wagner" <l.wagner@proxmox.com>
To: "Thomas Lamprecht" <t.lamprecht@proxmox.com>,
"Lukas Wagner" <l.wagner@proxmox.com>
Cc: pdm-devel@lists.proxmox.com
Subject: Re: [PATCH datacenter-manager 4/4] remote-updates: switch over to new api_cache
Date: Fri, 15 May 2026 14:56:55 +0200 [thread overview]
Message-ID: <DIJ9MRW8VM5U.3C4D5N0OP7B02@proxmox.com> (raw)
In-Reply-To: <20260515090637.950992-4-t.lamprecht@proxmox.com>
On Fri May 15, 2026 at 11:06 AM CEST, Thomas Lamprecht wrote:
> On Wed, 13 May 2026 15:54:57 +0200, Lukas Wagner wrote:
>> diff --git a/server/src/remote_updates.rs b/server/src/remote_updates.rs
>> --- a/server/src/remote_updates.rs
>> +++ b/server/src/remote_updates.rs
>> @@ -179,10 +167,11 @@ async fn update_cached_summary_for_node(
>> node: String,
>> node_data: NodeUpdateSummary,
>> ) -> Result<(), Error> {
>> - let mut file = File::open(UPDATE_CACHE)?;
>> - let mut cache_content: UpdateSummary = serde_json::from_reader(&mut file)?;
>> - let remote_entry =
>> - cache_content
>> + let cache = api_cache::write_global().await?;
>> + let cache_content = cache.get::<UpdateSummary>(UPDATE_SUMMARY_CACHE_KEY).await?;
>> +
>> + if let Some(mut entry) = cache_content {
>> + let remote_entry = entry
>> .remotes
>> .entry(remote.id)
>> .or_insert_with(|| RemoteUpdateSummary {
>> @@ -191,15 +180,9 @@ async fn update_cached_summary_for_node(
>> status: RemoteUpdateStatus::Success,
>> });
>>
>> - remote_entry.nodes.insert(node, node_data);
> [...]
>> + remote_entry.nodes.insert(node, node_data);
>> + cache.set(UPDATE_SUMMARY_CACHE_KEY, entry).await?;
>> + }
>>
>> Ok(())
>> }
>
> Small behaviour change worth a second look: the old code did
> `File::open(UPDATE_CACHE)?`, which returned an error if the cache file
> did not exist.
Yeah, that was actually problematic before. It just did not surface,
since we currently trigger the remote update fetch for all remotes very
soon after daemon startup.
> The new code uses `cache.get(..)`, which returns `Ok(None)` for that
> case, and the `if let Some(..)` then silently skips the write.
> So, if `list_available_updates` is called before any refresh has
> populated the cache, its result is now thrown away silently instead of
> surfaced as an error.
> If you want this code path to be able to create the initial cache entry
> as well, replacing the `if let Some(..)` with `let mut entry =
> cache_content.unwrap_or_default();` would do it.
>
Did exactly that, thanks!
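To make the suggested change concrete, here is a minimal, self-contained sketch (the `UpdateSummary` stand-in and `merge_node_update` helper are simplified placeholders, not the real types from server/src/remote_updates.rs): with `unwrap_or_default()`, a missing cache entry yields a fresh default summary instead of silently dropping the write.

```rust
use std::collections::HashMap;

// Simplified stand-in for the real UpdateSummary type.
#[derive(Default, Debug)]
struct UpdateSummary {
    remotes: HashMap<String, Vec<String>>,
}

// Sketch of the fix: `None` (cache not yet populated) falls back to a
// default summary, so the node entry is created rather than discarded.
fn merge_node_update(
    cached: Option<UpdateSummary>,
    remote: &str,
    node: String,
) -> UpdateSummary {
    let mut entry = cached.unwrap_or_default();
    entry
        .remotes
        .entry(remote.to_string())
        .or_insert_with(Vec::new)
        .push(node);
    entry
}

fn main() {
    // Cold cache: the entry is created instead of being thrown away.
    let summary = merge_node_update(None, "pve-cluster", "node1".into());
    assert_eq!(summary.remotes["pve-cluster"], vec!["node1".to_string()]);
    println!("ok");
}
```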
>> @@ -212,7 +195,7 @@ pub async fn refresh_update_summary_cache(remotes: Vec<Remote>) -> Result<(), Er
>> .do_for_all_remote_nodes(remotes.clone().into_iter(), fetch_available_updates)
>> .await;
>>
>> - let mut content = get_cached_summary_or_default()?;
>> + let mut content = get_cached_summary_or_default().await?;
>>
>> // Clean out any remotes that might have been removed from the remote config in the meanwhile.
>> content
>> @@ -275,8 +258,28 @@ pub async fn refresh_update_summary_cache(remotes: Vec<Remote>) -> Result<(), Er
>> }
>> }
>>
>> - let options = proxmox_product_config::default_create_options();
>> - proxmox_sys::fs::replace_file(UPDATE_CACHE, &serde_json::to_vec(&content)?, options, true)?;
>> + cleanup_old_cachefile().await?;
>> +
>> + let cache = api_cache::write_global().await?;
>> + cache.set(UPDATE_SUMMARY_CACHE_KEY, content).await?;
>> +
>> + Ok(())
>> +}
>
> Two things on the final write:
>
> - `get_cached_summary_or_default()` above takes a read lock and drops
> it again before `write_global()` is called down here. If
> `update_cached_summary_for_node` runs in that gap, the entry it just
> wrote will be overwritten by the `cache.set` below. Holding a single
> write lock for the whole function would prevent that. Not sure if we
> strictly need that guarantee though.
Moved the write_global() up top to make sure that the lock is held while
updating the values in the UpdateSummary data structure.
Thanks!
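The race being closed here can be illustrated with a small, self-contained sketch (this uses std::sync::RwLock and a plain HashMap, not the real async api_cache API): by taking the write guard once, before reading the current value, no other writer can slip in between the read and the write-back.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Illustration only: holding a single write guard for the whole
// read-modify-write means a concurrent update cannot be lost in the
// gap between reading the cached value and writing the new one back.
fn refresh(cache: &RwLock<HashMap<String, u64>>, key: &str) {
    let mut guard = cache.write().unwrap();
    let current = guard.get(key).copied().unwrap_or_default();
    guard.insert(key.to_string(), current + 1);
    // Guard dropped here; no other writer could interleave above.
}

fn main() {
    let cache = RwLock::new(HashMap::new());
    refresh(&cache, "update-summary");
    refresh(&cache, "update-summary");
    // Both increments survive; nothing was overwritten.
    assert_eq!(cache.read().unwrap()["update-summary"], 2);
    println!("ok");
}
```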
>
> - `cleanup_old_cachefile` runs before the new `cache.set`. If the `set`
> ever fails, the old file is already gone, so the next refresh starts
> from an empty cache.
We don't really migrate contents over from the old cachefile, but start
from a clean slate anyway with the new cache, so I don't think this
makes any difference?
> Either move the cleanup after the successful
> write, or check whether the old file exists first so the cleanup only
> runs once.
The cleanup just makes sure that the old, unused cachefile does not
litter /var/cache/... - it is also fine to run 'cleanup_old_cachefile'
multiple times: it re-attempts to delete the file, but explicitly
catches the 'file does not exist' case and does not log or return an
error then.
I changed the order of operations anyway (so, set before cleanup), but
I don't really think it makes a difference. (But maybe I'm missing
something?)
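A minimal sketch of such an idempotent cleanup, as described above (the function name and path here are illustrative, not the actual implementation): deleting a file that is already gone is treated as success, so the call is safe to repeat and safe to order before or after the cache write.

```rust
use std::fs;
use std::io::ErrorKind;

// Removing an already-removed file is not an error: NotFound is
// swallowed, every other I/O error is still propagated.
fn cleanup_old_cachefile(path: &str) -> std::io::Result<()> {
    match fs::remove_file(path) {
        Ok(()) => Ok(()),
        Err(err) if err.kind() == ErrorKind::NotFound => Ok(()),
        Err(err) => Err(err),
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("old-update-cache.json");
    fs::write(&path, b"{}")?;
    let path = path.to_string_lossy().into_owned();
    cleanup_old_cachefile(&path)?; // removes the file
    cleanup_old_cachefile(&path)?; // already gone: still Ok
    assert!(!std::path::Path::new(&path).exists());
    println!("ok");
    Ok(())
}
```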