From: Michael Köppl <m.koeppl@proxmox.com>
To: "Christian Ebner", Michael Köppl
Date: Thu, 19 Mar 2026 15:47:59 +0100
Subject: Re: [PATCH proxmox-backup v2 1/3] api: move statefile loading into compute_schedule_status
References: <20260319110318.70346-1-m.koeppl@proxmox.com> <20260319110318.70346-2-m.koeppl@proxmox.com> <75768b61-8ff5-4e0a-b7e4-3b5fafbdd1c9@proxmox.com>
In-Reply-To: <75768b61-8ff5-4e0a-b7e4-3b5fafbdd1c9@proxmox.com>
List-Id: Proxmox Backup Server development discussion

On Thu Mar 19, 2026 at 12:24 PM CET, Christian Ebner wrote:
> one comment inline.
>
> On 3/19/26 12:03 PM, Michael Köppl wrote:
>> Centralize loading of the job statefiles in compute_schedule_status,
>> reducing code duplication across the job management API endpoints.
>>
>> Signed-off-by: Michael Köppl
>> ---
>>  src/api2/admin/datastore.rs | 13 +++----------
>>  src/api2/admin/prune.rs     |  9 +++------
>>  src/api2/admin/sync.rs      |  9 +++------
>>  src/api2/admin/verify.rs    |  9 +++------
>>  src/api2/tape/backup.rs     |  9 +++------
>>  src/server/jobstate.rs      |  8 ++++++--
>>  6 files changed, 21 insertions(+), 36 deletions(-)
>>
>> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
>> index cca340553..4018e0301 100644
>> --- a/src/api2/admin/datastore.rs
>> +++ b/src/api2/admin/datastore.rs
>> @@ -70,7 +70,7 @@ use proxmox_rest_server::{formatter, worker_is_active, WorkerTask};
>>  use crate::api2::backup::optional_ns_param;
>>  use crate::api2::node::rrd::create_value_from_rrd;
>>  use crate::backup::{check_ns_privs_full, ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK};
>> -use crate::server::jobstate::{compute_schedule_status, Job, JobState};
>> +use crate::server::jobstate::{compute_schedule_status, Job};
>>  use crate::tools::{backup_info_to_snapshot_list_item, get_all_snapshot_files,
>>      read_backup_index};
>>
>>  // helper to unify common sequence of checks:
>> @@ -1167,19 +1167,12 @@ pub fn garbage_collection_status(
>>
>>      let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
>>      let status_in_memory = datastore.last_gc_status();
>> -    let state_file = JobState::load("garbage_collection", &store)
>> -        .map_err(|err| log::error!("could not open GC statefile for {store}: {err}"))
>> -        .ok();
>>
>>      let mut last = proxmox_time::epoch_i64();
>>
>>      if let Some(ref upid) = status_in_memory.upid {
>> -        let mut computed_schedule: JobScheduleStatus = JobScheduleStatus::default();
>> -        if let Some(state) = state_file {
>> -            if let Ok(cs) = compute_schedule_status(&state, Some(upid)) {
>> -                computed_schedule = cs;
>> -            }
>> -        }
>> +        let computed_schedule: JobScheduleStatus =
>> +            compute_schedule_status("garbage_collection", &store, Some(upid))?;
>
> This alters behavior: now the state file is never even loaded if
> status_in_memory.upid is None, so no error is logged in that case.
>
> So this must be expanded by an else branch where the loading is
> attempted also for that case and the potential error logged.

Missed that while refactoring, sorry for the oversight. I also noticed
an additional change in behavior regarding the handling of any *other*
error that might occur in compute_schedule_status: previously, we would
basically ignore any error and return the default status here, e.g. if
the UPID of a started job could not be parsed. To match that behavior,
I could just do

let computed_schedule: JobScheduleStatus =
    compute_schedule_status("garbage_collection", &store, Some(upid))
        .unwrap_or_else(|_| JobScheduleStatus::default());

But the question is whether the behavior here *should* differ from all
other endpoints if the UPID could not be parsed, since everywhere else
we'd still return an error in that case.
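To make the two behaviors concrete, here is a minimal, self-contained sketch. The struct and function are hypothetical stand-ins for the real `JobScheduleStatus` and `compute_schedule_status` from `src/server/jobstate.rs` (only the call shape from the patch, job type / job id / optional UPID, is taken from the diff above; the failure modes are invented for illustration). It contrasts propagating the error with `?` against swallowing it with `unwrap_or_else`:

```rust
// Hypothetical stand-in for the real JobScheduleStatus type.
#[derive(Debug, Default, PartialEq)]
struct JobScheduleStatus {
    last_run_upid: Option<String>,
}

// Hypothetical stand-in: loading the statefile or parsing the UPID may fail.
fn compute_schedule_status(
    _job_type: &str,
    _id: &str,
    upid: Option<&str>,
) -> Result<JobScheduleStatus, String> {
    let upid = upid.ok_or_else(|| "no UPID available".to_string())?;
    if !upid.starts_with("UPID:") {
        return Err(format!("unable to parse UPID '{upid}'"));
    }
    Ok(JobScheduleStatus {
        last_run_upid: Some(upid.to_string()),
    })
}

fn main() {
    // Old garbage_collection_status behavior: any error is ignored and
    // the default status is returned instead.
    let lenient = compute_schedule_status("garbage_collection", "store1", Some("garbage"))
        .unwrap_or_else(|_| JobScheduleStatus::default());
    assert_eq!(lenient, JobScheduleStatus::default());

    // Behavior of the other endpoints: the error is propagated to the
    // caller (here inspected directly instead of using `?`).
    let strict = compute_schedule_status("garbage_collection", "store1", Some("garbage"));
    assert!(strict.is_err());

    // A well-formed UPID succeeds in both variants.
    let ok = compute_schedule_status("garbage_collection", "store1", Some("UPID:task"));
    assert!(ok.is_ok());
}
```

The `unwrap_or_else` variant reproduces the old lenient behavior exactly, at the cost of silently hiding statefile or UPID errors from the API client; the `?` variant is consistent with the other endpoints but changes what this endpoint returns on a corrupt statefile.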