From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <3b18d413-11f2-4179-aabd-4e9f000924ea@proxmox.com>
Date: Thu, 19 Mar 2026 16:27:02 +0100
Subject: Re: [PATCH proxmox-backup v2 1/3] api: move statefile loading into compute_schedule_status
From: Christian Ebner <c.ebner@proxmox.com>
To: Michael Köppl, pbs-devel@lists.proxmox.com
References: <20260319110318.70346-1-m.koeppl@proxmox.com> <20260319110318.70346-2-m.koeppl@proxmox.com> <75768b61-8ff5-4e0a-b7e4-3b5fafbdd1c9@proxmox.com>
List-Id: Proxmox Backup Server development discussion
On 3/19/26 3:47 PM, Michael Köppl wrote:
> On Thu Mar 19, 2026 at 12:24 PM CET, Christian Ebner wrote:
>> one comment inline.
>>
>> On 3/19/26 12:03 PM, Michael Köppl wrote:
>>> Centralize loading of the job statefiles in compute_schedule_status,
>>> reducing code duplication across the job management API endpoints.
>>>
>>> Signed-off-by: Michael Köppl
>>> ---
>>>  src/api2/admin/datastore.rs | 13 +++----------
>>>  src/api2/admin/prune.rs     |  9 +++------
>>>  src/api2/admin/sync.rs      |  9 +++------
>>>  src/api2/admin/verify.rs    |  9 +++------
>>>  src/api2/tape/backup.rs     |  9 +++------
>>>  src/server/jobstate.rs      |  8 ++++++--
>>>  6 files changed, 21 insertions(+), 36 deletions(-)
>>>
>>> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
>>> index cca340553..4018e0301 100644
>>> --- a/src/api2/admin/datastore.rs
>>> +++ b/src/api2/admin/datastore.rs
>>> @@ -70,7 +70,7 @@ use proxmox_rest_server::{formatter, worker_is_active, WorkerTask};
>>>  use crate::api2::backup::optional_ns_param;
>>>  use crate::api2::node::rrd::create_value_from_rrd;
>>>  use crate::backup::{check_ns_privs_full, ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK};
>>> -use crate::server::jobstate::{compute_schedule_status, Job, JobState};
>>> +use crate::server::jobstate::{compute_schedule_status, Job};
>>>  use crate::tools::{backup_info_to_snapshot_list_item, get_all_snapshot_files, read_backup_index};
>>>
>>>  // helper to unify common sequence of checks:
>>> @@ -1167,19 +1167,12 @@ pub fn garbage_collection_status(
>>>
>>>      let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
>>>      let status_in_memory = datastore.last_gc_status();
>>> -    let state_file = JobState::load("garbage_collection", &store)
>>> -        .map_err(|err| log::error!("could not open GC statefile for {store}: {err}"))
>>> -        .ok();
>>>
>>>      let mut last = proxmox_time::epoch_i64();
>>>
>>>      if let Some(ref upid) = status_in_memory.upid {
>>> -        let mut computed_schedule: JobScheduleStatus = JobScheduleStatus::default();
>>> -        if let Some(state) = state_file {
>>> -            if let Ok(cs) = compute_schedule_status(&state, Some(upid)) {
>>> -                computed_schedule = cs;
>>> -            }
>>> -        }
>>> +        let computed_schedule: JobScheduleStatus =
>>> +            compute_schedule_status("garbage_collection", &store, Some(upid))?;
>>
>> This alters behavior: now loading the state file is never attempted if
>> status_in_memory.upid is None, so no error is logged in that case.
>>
>> So this must be expanded by an else branch where the loading is also
>> attempted for that case and the potential error logged.
>
> Missed that while refactoring, sorry for the oversight. Also noticed
> that there is an additional change in behavior regarding the handling
> of any *other* error that might occur in compute_schedule_status:
> previously, we would basically ignore any error and return the default
> status here, e.g. if the UPID could not be parsed for a started job. To
> match this behavior, I could just do
>
>     let computed_schedule: JobScheduleStatus =
>         compute_schedule_status("garbage_collection", &store, Some(upid))
>             .unwrap_or_else(|_| JobScheduleStatus::default());
>
> But the question is if the behavior here *should* differ from all other
> endpoints if the UPID could not be parsed? Because everywhere else we'd
> still return an error in that case.

True: this was introduced with commit fe1d34d2e ("api: garbage collect
job status") and then adapted with commit 3ae21d87c ("GC: flatten
existing status into job status").

So I guess this is related to the mentioned renaming. Maybe Fabian can
give us a clue?
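To illustrate the two error-handling strategies under discussion, here is a minimal, self-contained Rust sketch. `JobScheduleStatus` and `compute_schedule_status` below are simplified stand-ins, not the real proxmox-backup types: the real function takes a job type, store, and optional UPID, while this stub just takes a flag to force a failure.

```rust
/// Stand-in for the real JobScheduleStatus (hypothetical, simplified).
#[derive(Debug, Default, PartialEq)]
struct JobScheduleStatus {
    last_run_endtime: Option<i64>,
}

/// Stand-in for compute_schedule_status; fails e.g. when a UPID
/// cannot be parsed. The `fail` flag simulates that error path.
fn compute_schedule_status(fail: bool) -> Result<JobScheduleStatus, String> {
    if fail {
        Err("could not parse UPID".to_string())
    } else {
        Ok(JobScheduleStatus {
            last_run_endtime: Some(1),
        })
    }
}

/// Strategy A (what the patch does, matching the other endpoints):
/// propagate the error to the API caller with `?`.
fn status_propagating(fail: bool) -> Result<JobScheduleStatus, String> {
    let status = compute_schedule_status(fail)?;
    Ok(status)
}

/// Strategy B (the previous GC-endpoint behavior): swallow the error
/// and fall back to the default status via `unwrap_or_else`.
fn status_with_fallback(fail: bool) -> JobScheduleStatus {
    compute_schedule_status(fail).unwrap_or_else(|_| JobScheduleStatus::default())
}

fn main() {
    // On failure, strategy A surfaces the error to the caller…
    assert!(status_propagating(true).is_err());
    // …while strategy B silently returns the default status.
    assert_eq!(status_with_fallback(true), JobScheduleStatus::default());
    // On success, both yield the computed status.
    assert_eq!(status_with_fallback(false).last_run_endtime, Some(1));
}
```

The open question in the thread is exactly the choice between these two: strategy B preserves the old GC-endpoint behavior, while strategy A makes it consistent with the other job endpoints.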