public inbox for pbs-devel@lists.proxmox.com
From: Christian Ebner <c.ebner@proxmox.com>
To: "Michael Köppl" <m.koeppl@proxmox.com>, pbs-devel@lists.proxmox.com
Subject: Re: [PATCH proxmox-backup v2 1/3] api: move statefile loading into compute_schedule_status
Date: Thu, 19 Mar 2026 16:27:02 +0100
Message-ID: <3b18d413-11f2-4179-aabd-4e9f000924ea@proxmox.com>
In-Reply-To: <DH6UARBSGJYA.172KMABDLD4GQ@proxmox.com>

On 3/19/26 3:47 PM, Michael Köppl wrote:
> On Thu Mar 19, 2026 at 12:24 PM CET, Christian Ebner wrote:
>> one comment inline.
>>
>> On 3/19/26 12:03 PM, Michael Köppl wrote:
>>> Centralize loading of the job statefiles in compute_schedule_status,
>>> reducing code duplication across the job management API endpoints.
>>>
>>> Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
>>> ---
>>>    src/api2/admin/datastore.rs | 13 +++----------
>>>    src/api2/admin/prune.rs     |  9 +++------
>>>    src/api2/admin/sync.rs      |  9 +++------
>>>    src/api2/admin/verify.rs    |  9 +++------
>>>    src/api2/tape/backup.rs     |  9 +++------
>>>    src/server/jobstate.rs      |  8 ++++++--
>>>    6 files changed, 21 insertions(+), 36 deletions(-)
>>>
>>> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
>>> index cca340553..4018e0301 100644
>>> --- a/src/api2/admin/datastore.rs
>>> +++ b/src/api2/admin/datastore.rs
>>> @@ -70,7 +70,7 @@ use proxmox_rest_server::{formatter, worker_is_active, WorkerTask};
>>>    use crate::api2::backup::optional_ns_param;
>>>    use crate::api2::node::rrd::create_value_from_rrd;
>>>    use crate::backup::{check_ns_privs_full, ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK};
>>> -use crate::server::jobstate::{compute_schedule_status, Job, JobState};
>>> +use crate::server::jobstate::{compute_schedule_status, Job};
>>>    use crate::tools::{backup_info_to_snapshot_list_item, get_all_snapshot_files, read_backup_index};
>>>    
>>>    // helper to unify common sequence of checks:
>>> @@ -1167,19 +1167,12 @@ pub fn garbage_collection_status(
>>>    
>>>        let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
>>>        let status_in_memory = datastore.last_gc_status();
>>> -    let state_file = JobState::load("garbage_collection", &store)
>>> -        .map_err(|err| log::error!("could not open GC statefile for {store}: {err}"))
>>> -        .ok();
>>>    
>>>        let mut last = proxmox_time::epoch_i64();
>>>    
>>>        if let Some(ref upid) = status_in_memory.upid {
>>> -        let mut computed_schedule: JobScheduleStatus = JobScheduleStatus::default();
>>> -        if let Some(state) = state_file {
>>> -            if let Ok(cs) = compute_schedule_status(&state, Some(upid)) {
>>> -                computed_schedule = cs;
>>> -            }
>>> -        }
>>> +        let computed_schedule: JobScheduleStatus =
>>> +            compute_schedule_status("garbage_collection", &store, Some(upid))?;
>>
>> This alters behavior: the state file is now never loaded if
>> status_in_memory.upid is None, so no error is logged in that case.
>>
>> So this must be expanded with an else branch where loading is also
>> attempted for that case and a potential error is logged.
> 
> Missed that while refactoring, sorry for the oversight. I also noticed
> an additional change in behavior regarding the handling of any *other*
> error that might occur in compute_schedule_status: previously, we would
> basically ignore any error and return the default status here, e.g. if
> the UPID could not be parsed for a started job. To match this behavior,
> I could just do
> 
>      let computed_schedule: JobScheduleStatus =
>          compute_schedule_status("garbage_collection", &store, Some(upid))
>              .unwrap_or_else(|_| JobScheduleStatus::default());
> 
> But the question is whether the behavior here *should* differ from all
> other endpoints if the UPID cannot be parsed? Everywhere else we'd still
> return an error in that case.

True: this was introduced with commit fe1d34d2e ("api: garbage collect 
job status") and then adapted with commit 3ae21d87c ("GC: flatten 
existing status into job status"). So I guess this is related to the 
mentioned renaming.

Maybe Fabian can give us a clue?
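
For reference, a minimal sketch of how the two points above could be 
combined in garbage_collection_status(): Michael's unwrap_or_else 
fallback for the started-job case plus the suggested else branch that 
still loads the statefile so errors get logged. The signature of the 
new compute_schedule_status and the log messages are assumptions based 
on this series, not the actual implementation:

    if let Some(ref upid) = status_in_memory.upid {
        // match the pre-patch behavior: ignore errors (e.g. an unparsable
        // UPID) and fall back to the default schedule status
        let computed_schedule: JobScheduleStatus =
            compute_schedule_status("garbage_collection", &store, Some(upid))
                .unwrap_or_else(|err| {
                    log::error!("could not compute schedule status for {store}: {err}");
                    JobScheduleStatus::default()
                });
        // ... use computed_schedule as before ...
    } else if let Err(err) = compute_schedule_status("garbage_collection", &store, None) {
        // still attempt to load the statefile when there is no in-memory
        // UPID, so a corrupted or unreadable file gets logged instead of
        // being silently skipped
        log::error!("could not open GC statefile for {store}: {err}");
    }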





Thread overview: 8+ messages
2026-03-19 11:03 [PATCH proxmox-backup v2 0/3] fix #7400: improve handling of corrupted job statefiles Michael Köppl
2026-03-19 11:03 ` [PATCH proxmox-backup v2 1/3] api: move statefile loading into compute_schedule_status Michael Köppl
2026-03-19 11:24   ` Christian Ebner
2026-03-19 14:47     ` Michael Köppl
2026-03-19 15:27       ` Christian Ebner [this message]
2026-03-19 11:03 ` [PATCH proxmox-backup v2 2/3] fix #7400: api: gracefully handle corrupted job statefiles Michael Köppl
2026-03-19 11:23   ` Christian Ebner
2026-03-19 11:03 ` [PATCH proxmox-backup v2 3/3] fix #7400: proxy: self-heal " Michael Köppl
