From: Lukas Wagner <l.wagner@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup v5 01/10] api: garbage collect job status
Date: Thu, 18 Apr 2024 12:16:57 +0200
Message-ID: <20240418101706.237597-2-l.wagner@proxmox.com>
In-Reply-To: <20240418101706.237597-1-l.wagner@proxmox.com>

From: Stefan Lendl <s.lendl@proxmox.com>

Adds an API endpoint on the datastore that reports the gc job status,
including:
 - Schedule
 - State (of last run)
 - Duration (of last run)
 - Last Run
 - Next Run (if scheduled)
 - Pending Chunks (of last run)
 - Pending Bytes (of last run)
 - Removed Chunks (of last run)
 - Removed Bytes (of last run)

Adds a dedicated endpoint admin/gc that reports the gc job status for
all datastores, including the ones without a gc-schedule.
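
The per-datastore status is exposed at
admin/datastore/{store}/gc-job-status, the overview at admin/gc
(optionally narrowed to a single datastore via admin/gc/{store}). For
illustration, a reply for a datastore with a configured schedule could
look roughly like this (values are made up, field names follow the
kebab-case serialization of GarbageCollectionJobStatus):

  {
    "store": "store1",
    "schedule": "daily",
    "last-run-upid": "UPID:...",
    "last-run-state": "OK",
    "last-run-endtime": 1713427200,
    "next-run": 1713484800,
    "duration": 120,
    "removed-bytes": 1048576,
    "removed-chunks": 53,
    "pending-bytes": 2097152,
    "pending-chunks": 12
  }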

Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Originally-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
---
 pbs-api-types/src/datastore.rs |  46 ++++++++++++
 src/api2/admin/datastore.rs    | 131 ++++++++++++++++++++++++++++++++-
 src/api2/admin/gc.rs           |  57 ++++++++++++++
 src/api2/admin/mod.rs          |   2 +
 4 files changed, 233 insertions(+), 3 deletions(-)
 create mode 100644 src/api2/admin/gc.rs
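
For reviewers, the schedule handling in the new handler boils down to
roughly the following condensed sketch (error handling trimmed; the
helper name and the "daily" example schedule are just stand-ins, not
part of the patch):

  use anyhow::Error;
  use pbs_api_types::UPID;
  use proxmox_time::CalendarEvent;

  // Next run is derived from the configured gc-schedule, the duration of
  // the last run from its end time minus the start time encoded in the
  // last run's UPID.
  fn next_run_and_duration(
      schedule: &str,        // e.g. "daily"
      last_run_upid: &str,
      last_run_endtime: i64, // epoch seconds
  ) -> Result<(Option<i64>, i64), Error> {
      let event: CalendarEvent = schedule.parse()?;
      let next_run = event.compute_next_event(last_run_endtime)?;
      let duration = last_run_endtime - last_run_upid.parse::<UPID>()?.starttime;
      Ok((next_run, duration))
  }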

diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index 5e13c157..872c4bc7 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -1273,6 +1273,52 @@ pub struct GarbageCollectionStatus {
     pub still_bad: usize,
 }
 
+#[api(
+    properties: {
+        "last-run-upid": {
+            optional: true,
+            type: UPID,
+        },
+    },
+)]
+#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
+#[serde(rename_all = "kebab-case")]
+/// Garbage Collection general info
+pub struct GarbageCollectionJobStatus {
+    /// Datastore
+    pub store: String,
+    /// upid of the last run gc job
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub last_run_upid: Option<String>,
+    /// Sum of removed bytes.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub removed_bytes: Option<u64>,
+    /// Number of removed chunks
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub removed_chunks: Option<usize>,
+    /// Sum of pending bytes
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub pending_bytes: Option<u64>,
+    /// Number of pending chunks
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub pending_chunks: Option<usize>,
+    /// Schedule of the gc job
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub schedule: Option<String>,
+    /// Time of the next gc run
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub next_run: Option<i64>,
+    /// Endtime of the last gc run
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub last_run_endtime: Option<i64>,
+    /// State of the last gc run
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub last_run_state: Option<String>,
+    /// Duration of last gc run
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub duration: Option<i64>,
+}
+
 #[api(
     properties: {
         "gc-status": {
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 35628c59..77f9fe3d 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -27,18 +27,20 @@ use proxmox_sys::fs::{
     file_read_firstline, file_read_optional_string, replace_file, CreateOptions,
 };
 use proxmox_sys::{task_log, task_warn};
+use proxmox_time::CalendarEvent;
 
 use pxar::accessor::aio::Accessor;
 use pxar::EntryKind;
 
 use pbs_api_types::{
     print_ns_and_snapshot, print_store_and_ns, Authid, BackupContent, BackupNamespace, BackupType,
-    Counts, CryptMode, DataStoreListItem, DataStoreStatus, GarbageCollectionStatus, GroupListItem,
+    Counts, CryptMode, DataStoreConfig, DataStoreListItem, DataStoreStatus,
+    GarbageCollectionJobStatus, GarbageCollectionStatus, GroupListItem, JobScheduleStatus,
     KeepOptions, Operation, PruneJobOptions, RRDMode, RRDTimeFrame, SnapshotListItem,
     SnapshotVerifyState, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA,
     BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, DATASTORE_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA,
     MAX_NAMESPACE_DEPTH, NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
-    PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, PRIV_DATASTORE_VERIFY,
+    PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, PRIV_DATASTORE_VERIFY, UPID,
     UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA,
 };
 use pbs_client::pxar::{create_tar, create_zip};
@@ -67,7 +69,7 @@ use crate::backup::{
     ListAccessibleBackupGroups, NS_PRIVS_OK,
 };
 
-use crate::server::jobstate::Job;
+use crate::server::jobstate::{compute_schedule_status, Job, JobState};
 
 const GROUP_NOTES_FILE_NAME: &str = "notes";
 
@@ -1238,6 +1240,125 @@ pub fn garbage_collection_status(
     Ok(status)
 }
 
+#[api(
+    input: {
+        properties: {
+            store: {
+                schema: DATASTORE_SCHEMA,
+            },
+        },
+    },
+    returns: {
+        type: GarbageCollectionJobStatus,
+    },
+    access: {
+        permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_AUDIT, false),
+    },
+)]
+/// Garbage collection job status.
+pub fn garbage_collection_job_status(
+    store: String,
+    _info: &ApiMethod,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<GarbageCollectionJobStatus, Error> {
+    let (config, _) = pbs_config::datastore::config()?;
+    let store_config: DataStoreConfig = config.lookup("datastore", &store)?;
+
+    let mut info = GarbageCollectionJobStatus {
+        store: store.clone(),
+        schedule: store_config.gc_schedule,
+        ..Default::default()
+    };
+
+    let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+    let status_in_memory = datastore.last_gc_status();
+    let state_file = JobState::load("garbage_collection", &store)
+        .map_err(|err| log::error!("could not open statefile for {store}: {err}"))
+        .ok();
+
+    let mut selected_upid = None;
+    if status_in_memory.upid.is_some() {
+        selected_upid = status_in_memory.upid;
+    } else if let Some(JobState::Finished { upid, .. }) = &state_file {
+        selected_upid = Some(upid.to_owned());
+    }
+
+    info.last_run_upid = selected_upid.clone();
+
+    match selected_upid {
+        Some(upid) => {
+            info.removed_bytes = Some(status_in_memory.removed_bytes);
+            info.removed_chunks = Some(status_in_memory.removed_chunks);
+            info.pending_bytes = Some(status_in_memory.pending_bytes);
+            info.pending_chunks = Some(status_in_memory.pending_chunks);
+
+            let mut computed_schedule: JobScheduleStatus = JobScheduleStatus::default();
+            let mut duration = None;
+            if let Some(state) = state_file {
+                if let Ok(cs) = compute_schedule_status(&state, info.last_run_upid.as_deref()) {
+                    computed_schedule = cs;
+                }
+            }
+
+            if let Some(endtime) = computed_schedule.last_run_endtime {
+                computed_schedule.next_run = info
+                    .schedule
+                    .as_ref()
+                    .and_then(|s| {
+                        s.parse::<CalendarEvent>()
+                            .map_err(|err| log::error!("{err}"))
+                            .ok()
+                    })
+                    .and_then(|e| {
+                        e.compute_next_event(endtime)
+                            .map_err(|err| log::error!("{err}"))
+                            .ok()
+                    })
+                    .and_then(|ne| ne);
+
+                if let Ok(parsed_upid) = upid.parse::<UPID>() {
+                    duration = Some(endtime - parsed_upid.starttime);
+                }
+            }
+
+            info.next_run = computed_schedule.next_run;
+            info.last_run_endtime = computed_schedule.last_run_endtime;
+            info.last_run_state = computed_schedule.last_run_state;
+            info.duration = duration;
+        }
+        None => {
+            if let Some(schedule) = &info.schedule {
+                info.next_run = schedule
+                    .parse::<CalendarEvent>()
+                    .map_err(|err| log::error!("{err}"))
+                    .ok()
+                    .and_then(|e| {
+                        e.compute_next_event(proxmox_time::epoch_i64())
+                            .map_err(|err| log::error!("{err}"))
+                            .ok()
+                    })
+                    .and_then(|ne| ne);
+            }
+        }
+    }
+
+    Ok(info)
+}
+
 #[api(
     returns: {
         description: "List the accessible datastores.",
@@ -2304,6 +2425,10 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
             .get(&API_METHOD_GARBAGE_COLLECTION_STATUS)
             .post(&API_METHOD_START_GARBAGE_COLLECTION),
     ),
+    (
+        "gc-job-status",
+        &Router::new().get(&API_METHOD_GARBAGE_COLLECTION_JOB_STATUS),
+    ),
     (
         "group-notes",
         &Router::new()
diff --git a/src/api2/admin/gc.rs b/src/api2/admin/gc.rs
new file mode 100644
index 00000000..7535f369
--- /dev/null
+++ b/src/api2/admin/gc.rs
@@ -0,0 +1,57 @@
+use anyhow::Error;
+use pbs_api_types::GarbageCollectionJobStatus;
+
+use proxmox_router::{ApiMethod, Permission, Router, RpcEnvironment};
+use proxmox_schema::api;
+
+use pbs_api_types::DATASTORE_SCHEMA;
+
+use serde_json::Value;
+
+use crate::api2::admin::datastore::{garbage_collection_job_status, get_datastore_list};
+
+#[api(
+    input: {
+        properties: {
+            store: {
+                schema: DATASTORE_SCHEMA,
+                optional: true,
+            },
+        },
+    },
+    returns: {
+        description: "List configured gc jobs and their status",
+        type: Array,
+        items: { type: GarbageCollectionJobStatus },
+    },
+    access: {
+        permission: &Permission::Anybody,
+        description: "Requires Datastore.Audit or Datastore.Modify on datastore.",
+    },
+)]
+/// List all GC jobs (max one per datastore)
+pub fn list_all_gc_jobs(
+    store: Option<String>,
+    _param: Value,
+    _info: &ApiMethod,
+    rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<GarbageCollectionJobStatus>, Error> {
+    let gc_info = match store {
+        Some(store) => {
+            garbage_collection_job_status(store, _info, rpcenv).map(|info| vec![info])?
+        }
+        None => get_datastore_list(Value::Null, _info, rpcenv)?
+            .into_iter()
+            .map(|store_list_item| store_list_item.store)
+            .filter_map(|store| garbage_collection_job_status(store, _info, rpcenv).ok())
+            .collect::<Vec<_>>(),
+    };
+
+    Ok(gc_info)
+}
+
+const GC_ROUTER: Router = Router::new().get(&API_METHOD_LIST_ALL_GC_JOBS);
+
+pub const ROUTER: Router = Router::new()
+    .get(&API_METHOD_LIST_ALL_GC_JOBS)
+    .match_all("store", &GC_ROUTER);
diff --git a/src/api2/admin/mod.rs b/src/api2/admin/mod.rs
index 168dc038..a1c49f8e 100644
--- a/src/api2/admin/mod.rs
+++ b/src/api2/admin/mod.rs
@@ -5,6 +5,7 @@ use proxmox_router::{Router, SubdirMap};
 use proxmox_sortable_macro::sortable;
 
 pub mod datastore;
+pub mod gc;
 pub mod metrics;
 pub mod namespace;
 pub mod prune;
@@ -17,6 +18,7 @@ const SUBDIRS: SubdirMap = &sorted!([
     ("datastore", &datastore::ROUTER),
     ("metrics", &metrics::ROUTER),
     ("prune", &prune::ROUTER),
+    ("gc", &gc::ROUTER),
     ("sync", &sync::ROUTER),
     ("traffic-control", &traffic_control::ROUTER),
     ("verify", &verify::ROUTER),
-- 
2.39.2





Thread overview: 13+ messages
2024-04-18 10:16 [pbs-devel] [PATCH proxmox-backup v5 00/10] Add GC job status to datastore and global prune job view Lukas Wagner
2024-04-18 10:16 ` Lukas Wagner [this message]
2024-04-18 10:16 ` [pbs-devel] [PATCH proxmox-backup v5 02/10] fix #3217: ui: global prune and gc " Lukas Wagner
2024-04-18 10:16 ` [pbs-devel] [PATCH proxmox-backup v5 03/10] ui: move prune and gc widget to config Lukas Wagner
2024-04-18 10:17 ` [pbs-devel] [PATCH proxmox-backup v5 04/10] ui: hide datastore column in local gc view Lukas Wagner
2024-04-18 10:17 ` [pbs-devel] [PATCH proxmox-backup v5 05/10] ui: order Prune & GC before Sync Jobs Lukas Wagner
2024-04-18 10:17 ` [pbs-devel] [PATCH proxmox-backup v5 06/10] fix #4723: cli: list gc jobs with proxmox-backup-manager Lukas Wagner
2024-04-18 10:17 ` [pbs-devel] [PATCH proxmox-backup v5 07/10] ui: show removed and pending data of last run in bytes Lukas Wagner
2024-04-18 10:17 ` [pbs-devel] [PATCH proxmox-backup v5 08/10] ui: configure width and flex on GC Jobs columns Lukas Wagner
2024-04-18 10:17 ` [pbs-devel] [PATCH proxmox-backup v5 09/10] ui: gcview: fix eslint warnings Lukas Wagner
2024-04-18 10:17 ` [pbs-devel] [PATCH proxmox-backup v5 10/10] proxmox-backup-mgr: gc jobs: pretty-print bytes/duration/timestamps Lukas Wagner
2024-04-18 12:11 ` [pbs-devel] [PATCH proxmox-backup v5 00/10] Add GC job status to datastore and global prune job view Gabriel Goller
2024-04-23 12:37 ` [pbs-devel] applied-series: " Fabian Grünbichler
