From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Lukas Wagner <l.wagner@proxmox.com>
Cc: pdm-devel@lists.proxmox.com
Subject: Re: [PATCH datacenter-manager 4/4] remote-updates: switch over to new api_cache
Date: Fri, 15 May 2026 11:06:32 +0200
Message-ID: <20260515090637.950992-4-t.lamprecht@proxmox.com>
In-Reply-To: <20260513135457.573414-5-l.wagner@proxmox.com>
References: <20260513135457.573414-1-l.wagner@proxmox.com> <20260513135457.573414-5-l.wagner@proxmox.com>
List-Id: Proxmox Datacenter Manager development discussion

On Wed, 13 May 2026 15:54:57 +0200, Lukas Wagner wrote:
> diff --git a/server/src/remote_updates.rs b/server/src/remote_updates.rs
> --- a/server/src/remote_updates.rs
> +++ b/server/src/remote_updates.rs
> @@ -179,10 +167,11 @@ async fn update_cached_summary_for_node(
>      node: String,
>      node_data: NodeUpdateSummary,
>  ) -> Result<(), Error> {
> -    let mut file = File::open(UPDATE_CACHE)?;
> -    let mut cache_content: UpdateSummary = serde_json::from_reader(&mut file)?;
> -    let remote_entry =
> -        cache_content
> +    let cache = api_cache::write_global().await?;
> +    let cache_content = cache.get::<UpdateSummary>(UPDATE_SUMMARY_CACHE_KEY).await?;
> +
> +    if let Some(mut entry) = cache_content {
> +        let remote_entry = entry
>              .remotes
>              .entry(remote.id)
>              .or_insert_with(|| RemoteUpdateSummary {
> @@ -191,15 +180,9 @@ async fn update_cached_summary_for_node(
>                  status: RemoteUpdateStatus::Success,
>              });
>
> -        remote_entry.nodes.insert(node, node_data);
[...]
> +        remote_entry.nodes.insert(node, node_data);
> +        cache.set(UPDATE_SUMMARY_CACHE_KEY, entry).await?;
> +    }
>
>      Ok(())
>  }

Small behaviour change worth a second look: the old code did
`File::open(UPDATE_CACHE)?`, which returned an error if the cache file
did not exist. The new code uses `cache.get(..)`, which returns
`Ok(None)` for that case, and the `if let Some(..)` then silently skips
the write. So, if `list_available_updates` is called before any refresh
has populated the cache, its result is now thrown away silently instead
of surfaced as an error.

If you want this code path to be able to create the initial cache entry
as well, replacing the `if let Some(..)` with
`let mut entry = cache_content.unwrap_or_default();` would do it.

> @@ -212,7 +195,7 @@ pub async fn refresh_update_summary_cache(remotes: Vec) -> Result<(), Er
>          .do_for_all_remote_nodes(remotes.clone().into_iter(), fetch_available_updates)
>          .await;
>
> -    let mut content = get_cached_summary_or_default()?;
> +    let mut content = get_cached_summary_or_default().await?;
>
>      // Clean out any remotes that might have been removed from the remote config in the meanwhile.
>      content
> @@ -275,8 +258,28 @@ pub async fn refresh_update_summary_cache(remotes: Vec) -> Result<(), Er
>          }
>      }
>
> -    let options = proxmox_product_config::default_create_options();
> -    proxmox_sys::fs::replace_file(UPDATE_CACHE, &serde_json::to_vec(&content)?, options, true)?;
> +    cleanup_old_cachefile().await?;
> +
> +    let cache = api_cache::write_global().await?;
> +    cache.set(UPDATE_SUMMARY_CACHE_KEY, content).await?;
> +
> +    Ok(())
> +}

Two things on the final write:

- `get_cached_summary_or_default()` above takes a read lock and drops
  it again before `write_global()` is called down here. If
  `update_cached_summary_for_node` runs in that gap, the entry it just
  wrote will be overwritten by the `cache.set` below. Holding a single
  write lock for the whole function would prevent that. Not sure if we
  strictly need that guarantee though.

- `cleanup_old_cachefile` runs before the new `cache.set`. If the `set`
  ever fails, the old file is already gone, so the next refresh starts
  from an empty cache. Either move the cleanup after the successful
  write, or check whether the old file exists first so the cleanup only
  runs once.
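To make the missing-entry point concrete, here is a minimal,
self-contained model of the two behaviours. The types are boiled down
to a toy `Summary` with a string payload, so this is purely
illustrative, not the real PDM types:

```rust
use std::collections::HashMap;

// Toy stand-in for UpdateSummary: remote name -> per-node data,
// with the node payload reduced to a plain String for the example.
#[derive(Default)]
struct Summary {
    remotes: HashMap<String, String>,
}

// Patched behaviour: a missing cache entry (None) silently skips the write.
fn update_if_let(cached: Option<Summary>, remote: &str, data: &str) -> Option<Summary> {
    if let Some(mut entry) = cached {
        entry.remotes.insert(remote.to_string(), data.to_string());
        return Some(entry);
    }
    None // the freshly fetched data is dropped here without any error
}

// Suggested variant: start from a default summary, so the first write
// can create the initial cache entry too.
fn update_or_default(cached: Option<Summary>, remote: &str, data: &str) -> Summary {
    let mut entry = cached.unwrap_or_default();
    entry.remotes.insert(remote.to_string(), data.to_string());
    entry
}

fn main() {
    // Before any refresh the cache is empty: the first variant loses
    // the data, the second one creates the entry.
    assert!(update_if_let(None, "pve-a", "3 updates").is_none());
    let entry = update_or_default(None, "pve-a", "3 updates");
    assert_eq!(entry.remotes["pve-a"], "3 updates");
}
```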
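On the lock gap: what I mean by holding one write lock across the whole
read-modify-write, sketched with a plain `std::sync::RwLock` over a toy
map rather than the async api_cache lock:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Toy cache: key -> update count, guarded like api_cache's global lock
// (std::sync::RwLock here instead of the async variant, for brevity).
fn refresh(cache: &RwLock<HashMap<String, u32>>) {
    // Take the write lock once, up front, and hold it across the whole
    // read-modify-write. A concurrent update_cached_summary_for_node-style
    // writer can then no longer slip in between the get and the set.
    let mut guard = cache.write().unwrap();
    let mut content = guard.get("summary").copied().unwrap_or_default();
    content += 1; // stand-in for merging the freshly fetched update data
    guard.insert("summary".to_string(), content);
}

fn main() {
    let cache = RwLock::new(HashMap::new());
    refresh(&cache);
    refresh(&cache);
    // No interleaving writer could be lost between get and set above.
    assert_eq!(cache.read().unwrap()["summary"], 2);
}
```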
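And on the cleanup ordering, an untested sketch of the reordered tail
against the same kind of toy state (names made up, not the real API):

```rust
use std::collections::HashMap;

// Toy model: the legacy cache file is a bool, the new cache a map.
struct State {
    old_file_exists: bool,
    cache: HashMap<String, String>,
}

// Reordered refresh tail: write the new entry first and only remove the
// legacy file once that write has succeeded, so a failed set() does not
// leave us with neither the old file nor a new cache entry.
fn finish_refresh(state: &mut State, set_result: Result<String, ()>) -> Result<(), ()> {
    let content = set_result?; // cache.set(..) failed -> bail out, old file kept
    state.cache.insert("remote-updates".to_string(), content);
    if state.old_file_exists {
        state.old_file_exists = false; // cleanup_old_cachefile()
    }
    Ok(())
}

fn main() {
    let mut state = State { old_file_exists: true, cache: HashMap::new() };
    // A failing set leaves the legacy file in place for the next refresh.
    assert!(finish_refresh(&mut state, Err(())).is_err());
    assert!(state.old_file_exists);
    // A successful set writes the entry and then cleans up the old file.
    assert!(finish_refresh(&mut state, Ok("fresh".to_string())).is_ok());
    assert!(!state.old_file_exists);
}
```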