public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup 0/3] follow ups for gc job status series
@ 2024-04-22 11:05 Dominik Csapak
  2024-04-22 11:05 ` [pbs-devel] [PATCH proxmox-backup 1/3] ui: gc view: use beforedestroy for stopping the store Dominik Csapak
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Dominik Csapak @ 2024-04-22 11:05 UTC (permalink / raw)
  To: pbs-devel

this is a follow-up series for Lukas'/Stefan's/Gabriel's series:
    Add GC job status to datastore and global prune job view. [0]

I did not find too many issues, mainly one exception and some bad
scrolling/sizing behavior.

0: https://lists.proxmox.com/pipermail/pbs-devel/2024-April/008694.html

Dominik Csapak (3):
  ui: gc view: use beforedestroy for stopping the store
  ui: prune/gc view: improve sizing & scrolling behaviour
  ui: gc view: remove unnecessary widths in columns

 www/config/GCView.js     | 10 ++++++----
 www/config/PruneAndGC.js | 10 +++++++++-
 2 files changed, 15 insertions(+), 5 deletions(-)

-- 
2.39.2




* [pbs-devel] [PATCH proxmox-backup 1/3] ui: gc view: use beforedestroy for stopping the store
  2024-04-22 11:05 [pbs-devel] [PATCH proxmox-backup 0/3] follow ups for gc job status series Dominik Csapak
@ 2024-04-22 11:05 ` Dominik Csapak
  2024-04-22 11:05 ` [pbs-devel] [PATCH proxmox-backup 2/3] ui: prune/gc view: improve sizing & scrolling behaviour Dominik Csapak
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2024-04-22 11:05 UTC (permalink / raw)
  To: pbs-devel

because during destroy, the controller (and with it the referenced
handler function) might not be there anymore
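
for illustration, a minimal sketch of the difference (hypothetical class
names, assuming a standard ExtJS view/view-controller pair and a
Proxmox-style diff store):

    Ext.define('Demo.StoreViewController', {
        extend: 'Ext.app.ViewController',
        alias: 'controller.demo-storeview',

        stopStore: function() {
            // stop the periodic background reload of the wrapped rstore
            this.getView().getStore().rstore.stopUpdate();
        },
    });

    Ext.define('Demo.StoreView', {
        extend: 'Ext.grid.Panel',
        controller: 'demo-storeview',

        listeners: {
            // 'beforedestroy' fires while the view controller is still
            // attached, so the string handler 'stopStore' can be resolved;
            // by the time 'destroy' fires, the controller may already be
            // torn down and the lookup can fail.
            beforedestroy: 'stopStore',
            deactivate: 'stopStore',
        },
    });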

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/config/GCView.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/config/GCView.js b/www/config/GCView.js
index 852dca5d..de43bc25 100644
--- a/www/config/GCView.js
+++ b/www/config/GCView.js
@@ -111,7 +111,7 @@ Ext.define('PBS.config.GCJobView', {
 
     listeners: {
 	activate: 'startStore',
-	destroy: 'stopStore',
+	beforedestroy: 'stopStore',
 	deactivate: 'stopStore',
 	itemdblclick: 'editGCJob',
     },
-- 
2.39.2




* [pbs-devel] [PATCH proxmox-backup 2/3] ui: prune/gc view: improve sizing & scrolling behaviour
  2024-04-22 11:05 [pbs-devel] [PATCH proxmox-backup 0/3] follow ups for gc job status series Dominik Csapak
  2024-04-22 11:05 ` [pbs-devel] [PATCH proxmox-backup 1/3] ui: gc view: use beforedestroy for stopping the store Dominik Csapak
@ 2024-04-22 11:05 ` Dominik Csapak
  2024-04-22 11:05 ` [pbs-devel] [PATCH proxmox-backup 3/3] ui: gc view: remove unnecessary widths in columns Dominik Csapak
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2024-04-22 11:05 UTC (permalink / raw)
  To: pbs-devel

before, this panel was only used with a fixed-size top list and only
for a single datastore (which limits the number of prune jobs a bit).

since we now show the gc jobs of all datastores here too, as well as all
of their prune jobs, this panel can get much bigger.

To improve its scrolling and sizing behaviour, make the prune jobs panel
`flex: 1`, so it fills out the rest of the view, and add a splitter
between the two panels so one can resize them on the fly. To prevent
either panel from becoming too small, set an appropriate minHeight for
both and make the surrounding panel scrollable.

To avoid persisting the height in its state, we have to filter that out
for the GCView.
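
A minimal sketch of the intended layout (hypothetical panel, not the
actual PruneAndGC definition):

    Ext.create('Ext.panel.Panel', {
        width: 600,
        height: 500,
        layout: { type: 'vbox', align: 'stretch' },
        scrollable: true, // outer panel scrolls once the minHeights kick in
        items: [
            {
                xtype: 'panel',
                title: 'GC Jobs',
                minHeight: 125, // roughly one row of content
            },
            {
                xtype: 'splitter',
                performCollapse: false, // only resize, don't collapse a panel
            },
            {
                xtype: 'panel',
                title: 'Prune Jobs',
                flex: 1, // fills the remaining height of the view
                minHeight: 160,
            },
        ],
        renderTo: Ext.getBody(),
    });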

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
we probably still have to adapt the color of the splitter?
not sure about its visibility

 www/config/GCView.js     |  5 +++++
 www/config/PruneAndGC.js | 10 +++++++++-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/www/config/GCView.js b/www/config/GCView.js
index de43bc25..76fb262e 100644
--- a/www/config/GCView.js
+++ b/www/config/GCView.js
@@ -107,6 +107,10 @@ Ext.define('PBS.config.GCJobView', {
 	stopStore: function() { this.getView().getStore().rstore.stopUpdate(); },
 	reload: function() { this.getView().getStore().rstore.load(); },
 
+
+	filterState: function(view, state) {
+	    delete state.height;
+	},
     },
 
     listeners: {
@@ -114,6 +118,7 @@ Ext.define('PBS.config.GCJobView', {
 	beforedestroy: 'stopStore',
 	deactivate: 'stopStore',
 	itemdblclick: 'editGCJob',
+	beforestatesave: 'filterState',
     },
 
     store: {
diff --git a/www/config/PruneAndGC.js b/www/config/PruneAndGC.js
index a1163402..b85c2961 100644
--- a/www/config/PruneAndGC.js
+++ b/www/config/PruneAndGC.js
@@ -14,6 +14,7 @@ Ext.define('PBS.config.PruneAndGC', {
 	collapsible: false,
 	margin: '7 10 3 10',
     },
+    scrollable: true,
     items: [
 	{
 	    xtype: 'pbsGCJobView',
@@ -22,6 +23,11 @@ Ext.define('PBS.config.PruneAndGC', {
 	    cbind: {
 		datastore: '{datastore}',
 	    },
+	    minHeight: 125, // shows at least one line of content
+	},
+	{
+	    xtype: 'splitter',
+	    performCollapse: false,
 	},
 	{
 	    xtype: 'pbsPruneJobView',
@@ -30,12 +36,14 @@ Ext.define('PBS.config.PruneAndGC', {
 	    cbind: {
 		datastore: '{datastore}',
 	    },
+	    flex: 1,
+	    minHeight: 160, // shows at least one line of content
 	},
     ],
     initComponent: function() {
 	let me = this;
 
-	let subPanelIds = me.items.map(el => el.itemId);
+	let subPanelIds = me.items.map(el => el.itemId).filter(id => !!id);
 
 	me.callParent();
 
-- 
2.39.2




* [pbs-devel] [PATCH proxmox-backup 3/3] ui: gc view: remove unnecessary widths in columns
  2024-04-22 11:05 [pbs-devel] [PATCH proxmox-backup 0/3] follow ups for gc job status series Dominik Csapak
  2024-04-22 11:05 ` [pbs-devel] [PATCH proxmox-backup 1/3] ui: gc view: use beforedestroy for stopping the store Dominik Csapak
  2024-04-22 11:05 ` [pbs-devel] [PATCH proxmox-backup 2/3] ui: prune/gc view: improve sizing & scrolling behaviour Dominik Csapak
@ 2024-04-22 11:05 ` Dominik Csapak
  2024-04-22 11:38 ` [pbs-devel] [PATCH proxmox-backup 0/4] more follow-ups Fabian Grünbichler
  2024-04-23 12:36 ` [pbs-devel] applied: [PATCH proxmox-backup 0/3] follow ups for gc job status series Fabian Grünbichler
  4 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2024-04-22 11:05 UTC (permalink / raw)
  To: pbs-devel

setting both `width` and `flex` on a column does not work as intended:
the `flex` value takes priority and the `width` is ignored. So remove
the unused `width` properties.
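
a small illustration (hypothetical grid, not the GC view itself):

    Ext.create('Ext.grid.Panel', {
        width: 500,
        store: { fields: ['store'], data: [{ store: 'store1' }] },
        columns: [
            {
                header: 'Datastore',
                dataIndex: 'store',
                flex: 2,        // proportional share of the remaining width
                minWidth: 120,  // still respected while flexing
                maxWidth: 300,
                // width: 150,  // would be ignored in favour of flex
            },
        ],
        renderTo: Ext.getBody(),
    });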

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/config/GCView.js | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/www/config/GCView.js b/www/config/GCView.js
index 76fb262e..8cdf63f9 100644
--- a/www/config/GCView.js
+++ b/www/config/GCView.js
@@ -166,7 +166,6 @@ Ext.define('PBS.config.GCJobView', {
 	    renderer: Ext.String.htmlEncode,
 	    sortable: true,
 	    hideable: false,
-	    width: 150,
 	    minWidth: 120,
 	    maxWidth: 300,
 	    flex: 2,
@@ -177,7 +176,6 @@ Ext.define('PBS.config.GCJobView', {
 	    sortable: false,
 	    hideable: false,
 	    renderer: (value) => value ? value : Proxmox.Utils.NoneText,
-	    width: 85,
 	    minWidth: 85,
 	    flex: 1,
 	},
@@ -202,7 +200,6 @@ Ext.define('PBS.config.GCJobView', {
 	    dataIndex: 'last-run-state',
 	    renderer: PBS.Utils.render_task_status,
 	    sortable: true,
-	    width: 100,
 	    minWidth: 80,
 	    flex: 1,
 	},
-- 
2.39.2




* [pbs-devel] [PATCH proxmox-backup 0/4] more follow-ups
  2024-04-22 11:05 [pbs-devel] [PATCH proxmox-backup 0/3] follow ups for gc job status series Dominik Csapak
                   ` (2 preceding siblings ...)
  2024-04-22 11:05 ` [pbs-devel] [PATCH proxmox-backup 3/3] ui: gc view: remove unnecessary widths in columns Dominik Csapak
@ 2024-04-22 11:38 ` Fabian Grünbichler
  2024-04-22 11:38   ` [pbs-devel] [PATCH proxmox-backup 1/4] GC: flatten existing status into job status Fabian Grünbichler
                     ` (3 more replies)
  2024-04-23 12:36 ` [pbs-devel] applied: [PATCH proxmox-backup 0/3] follow ups for gc job status series Fabian Grünbichler
  4 siblings, 4 replies; 10+ messages in thread
From: Fabian Grünbichler @ 2024-04-22 11:38 UTC (permalink / raw)
  To: pbs-devel

the series in general is definitely a step in the right direction - but
I really dislike the amount of duplication. IMHO we can collapse the new
job status into the existing status (or rather, flatten the existing
status into the new job status). the only downside is that we can't
flatten a non-existing status except as default values (all zeroes), but
that was already the case with the old endpoint as well (which returned
all zeroes despite no GC ever having run). the UI (as well as any
third-party client) can easily handle this, and it only affects stores
where GC has never run, so IMHO that's an okay tradeoff.
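
for illustration, a minimal client-side sketch (hypothetical render
helper, along the lines of what the GCView renderers in the follow-up
patches end up doing):

    // with the flattened shape, a store that never ran GC has no 'upid'
    // and its counters sit at their zero defaults
    let renderRemovedBytes = function(value, metadata, record) {
        if (!record.data.upid) {
            return '-';
        }
        return Proxmox.Utils.format_size(value, true);
    };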

what do you think?

besides that, some of the R-b trailers have a typo; I would correct
those as well and push the combination of all three (the original
series, Dominik's and my follow-up patches) unless there are objections.

Fabian Grünbichler (4):
  GC: flatten existing status into job status
  api: merge garbage-collection-status and -job-status
  ui: don't re-calculate GC duration
  GC status: reduce code duplication

 pbs-api-types/src/datastore.rs |  26 ++----
 src/api2/admin/datastore.rs    | 153 +++++++++------------------------
 src/api2/admin/gc.rs           |   8 +-
 www/Utils.js                   |   8 +-
 www/config/GCView.js           |  21 ++---
 5 files changed, 62 insertions(+), 154 deletions(-)

-- 
2.39.2




* [pbs-devel] [PATCH proxmox-backup 1/4] GC: flatten existing status into job status
  2024-04-22 11:38 ` [pbs-devel] [PATCH proxmox-backup 0/4] more follow-ups Fabian Grünbichler
@ 2024-04-22 11:38   ` Fabian Grünbichler
  2024-04-22 11:38   ` [pbs-devel] [PATCH proxmox-backup 2/4] api: merge garbage-collection-status and -job-status Fabian Grünbichler
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Fabian Grünbichler @ 2024-04-22 11:38 UTC (permalink / raw)
  To: pbs-devel

to avoid drifting definitions and reduce duplication. with the next major
release, the 'upid' field could then be renamed and aliased to be in line with
the other jobs, which all use 'last-run-upid'. doing it now would break
existing callers of the GC status endpoint (or consumers of the on-disk status
file).

the main difference is that the GC status fields are now not optional (except
for the UPID) in the job status, since flattening an optional value is not
possible. this only affects datastores that were never GCed at all, and only
direct API consumers, since the UI handles those fields correctly.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 pbs-api-types/src/datastore.rs | 26 +++++---------------
 src/api2/admin/datastore.rs    | 43 ++++++++++------------------------
 www/Utils.js                   |  8 ++++++-
 www/config/GCView.js           | 12 +++++-----
 4 files changed, 31 insertions(+), 58 deletions(-)

diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index 872c4bc7..5d787369 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -1246,7 +1246,7 @@ pub struct TypeCounts {
         },
     },
 )]
-#[derive(Clone, Default, Serialize, Deserialize, PartialEq)]
+#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
 #[serde(rename_all = "kebab-case")]
 /// Garbage collection status.
 pub struct GarbageCollectionStatus {
@@ -1275,11 +1275,10 @@ pub struct GarbageCollectionStatus {
 
 #[api(
     properties: {
-        "last-run-upid": {
-            optional: true,
-            type: UPID,
+        "status": {
+            type: GarbageCollectionStatus,
         },
-    },
+    }
 )]
 #[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
 #[serde(rename_all = "kebab-case")]
@@ -1287,21 +1286,8 @@ pub struct GarbageCollectionStatus {
 pub struct GarbageCollectionJobStatus {
     /// Datastore
     pub store: String,
-    /// upid of the last run gc job
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub last_run_upid: Option<String>,
-    /// Sum of removed bytes.
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub removed_bytes: Option<u64>,
-    /// Number of removed chunks
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub removed_chunks: Option<usize>,
-    /// Sum of pending bytes
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub pending_bytes: Option<u64>,
-    /// Number of pending chunks
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub pending_chunks: Option<usize>,
+    #[serde(flatten)]
+    pub status: GarbageCollectionStatus,
     /// Schedule of the gc job
     #[serde(skip_serializing_if = "Option::is_none")]
     pub schedule: Option<String>,
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 77f9fe3d..da2b545a 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -35,13 +35,13 @@ use pxar::EntryKind;
 use pbs_api_types::{
     print_ns_and_snapshot, print_store_and_ns, Authid, BackupContent, BackupNamespace, BackupType,
     Counts, CryptMode, DataStoreConfig, DataStoreListItem, DataStoreStatus,
-    GarbageCollectionJobStatus, GarbageCollectionStatus, GroupListItem, JobScheduleStatus,
-    KeepOptions, Operation, PruneJobOptions, RRDMode, RRDTimeFrame, SnapshotListItem,
-    SnapshotVerifyState, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA,
-    BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, DATASTORE_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA,
-    MAX_NAMESPACE_DEPTH, NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
-    PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, PRIV_DATASTORE_VERIFY, UPID,
-    UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA,
+    GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, Operation,
+    PruneJobOptions, RRDMode, RRDTimeFrame, SnapshotListItem, SnapshotVerifyState,
+    BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA,
+    BACKUP_TYPE_SCHEMA, DATASTORE_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA, MAX_NAMESPACE_DEPTH,
+    NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY,
+    PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, PRIV_DATASTORE_VERIFY, UPID, UPID_SCHEMA,
+    VERIFICATION_OUTDATED_AFTER_SCHEMA,
 };
 use pbs_client::pxar::{create_tar, create_zip};
 use pbs_config::CachedUserInfo;
@@ -1273,35 +1273,15 @@ pub fn garbage_collection_job_status(
     let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
     let status_in_memory = datastore.last_gc_status();
     let state_file = JobState::load("garbage_collection", &store)
-        .map_err(|err| {
-            log::error!(
-                "could not open statefile for {:?}: {}",
-                info.last_run_upid,
-                err
-            )
-        })
+        .map_err(|err| log::error!("could not open GC statefile for {store}: {err}"))
         .ok();
 
-    let mut selected_upid = None;
-    if status_in_memory.upid.is_some() {
-        selected_upid = status_in_memory.upid;
-    } else if let Some(JobState::Finished { upid, .. }) = &state_file {
-        selected_upid = Some(upid.to_owned());
-    }
-
-    info.last_run_upid = selected_upid.clone();
-
-    match selected_upid {
-        Some(upid) => {
-            info.removed_bytes = Some(status_in_memory.removed_bytes);
-            info.removed_chunks = Some(status_in_memory.removed_chunks);
-            info.pending_bytes = Some(status_in_memory.pending_bytes);
-            info.pending_chunks = Some(status_in_memory.pending_chunks);
-
+    match status_in_memory.upid {
+        Some(ref upid) => {
             let mut computed_schedule: JobScheduleStatus = JobScheduleStatus::default();
             let mut duration = None;
             if let Some(state) = state_file {
-                if let Ok(cs) = compute_schedule_status(&state, info.last_run_upid.as_deref()) {
+                if let Ok(cs) = compute_schedule_status(&state, Some(&upid)) {
                     computed_schedule = cs;
                 }
             }
@@ -1327,6 +1307,7 @@ pub fn garbage_collection_job_status(
                 }
             }
 
+            info.status = status_in_memory;
             info.next_run = computed_schedule.next_run;
             info.last_run_endtime = computed_schedule.last_run_endtime;
             info.last_run_state = computed_schedule.last_run_state;
diff --git a/www/Utils.js b/www/Utils.js
index acd6e0d8..0c1f78b6 100644
--- a/www/Utils.js
+++ b/www/Utils.js
@@ -200,7 +200,13 @@ Ext.define('PBS.Utils', {
     },
 
     render_task_status: function(value, metadata, record, rowIndex, colIndex, store) {
-	if (!record.data['last-run-upid'] && !store.getById('last-run-upid')?.data.value) {
+	// GC tasks use 'upid' for backwards-compat, rest use 'last-run-upid'
+	if (
+	    !record.data['last-run-upid'] &&
+	    !store.getById('last-run-upid')?.data.value &&
+	    !record.data.upid &&
+	    !store.getById('upid')?.data.value
+	) {
 	    return '-';
 	}
 
diff --git a/www/config/GCView.js b/www/config/GCView.js
index 8cdf63f9..982ab939 100644
--- a/www/config/GCView.js
+++ b/www/config/GCView.js
@@ -1,14 +1,14 @@
 Ext.define('pbs-gc-jobs-status', {
     extend: 'Ext.data.Model',
     fields: [
-	'store', 'last-run-upid', 'removed-bytes', 'pending-bytes', 'schedule',
+	'store', 'upid', 'removed-bytes', 'pending-bytes', 'schedule',
 	'next-run', 'last-run-endtime', 'last-run-state',
 	{
 	    name: 'duration',
 	    calculate: function(data) {
 		let endtime = data['last-run-endtime'];
 		if (!endtime) return undefined;
-		let task = Proxmox.Utils.parse_task_upid(data['last-run-upid']);
+		let task = Proxmox.Utils.parse_task_upid(data['upid']);
 		return endtime - task.starttime;
 	    },
 	},
@@ -97,7 +97,7 @@ Ext.define('PBS.config.GCJobView', {
 	showTaskLog: function() {
 	    let me = this;
 
-	    let upid = this.getData()['last-run-upid'];
+	    let upid = this.getData().upid;
 	    if (!upid) return;
 
 	    Ext.create('Proxmox.window.TaskViewer', { upid }).show();
@@ -147,7 +147,7 @@ Ext.define('PBS.config.GCJobView', {
 	    xtype: 'proxmoxButton',
 	    text: gettext('Show Log'),
 	    handler: 'showTaskLog',
-	    enableFn: (rec) => !!rec.data["last-run-upid"],
+	    enableFn: (rec) => !!rec.data.upid,
 	    disabled: true,
 	},
 	{
@@ -214,7 +214,7 @@ Ext.define('PBS.config.GCJobView', {
 	{
 	    header: gettext('Removed Data'),
 	    dataIndex: 'removed-bytes',
-	    renderer: (value) => value !== undefined
+	    renderer: (value, meta, record) => record.data.upid !== null
 		? Proxmox.Utils.format_size(value, true) : "-",
 	    sortable: false,
 	    minWidth: 85,
@@ -223,7 +223,7 @@ Ext.define('PBS.config.GCJobView', {
 	{
 	    header: gettext('Pending Data'),
 	    dataIndex: 'pending-bytes',
-	    renderer: (value) => value !== undefined
+	    renderer: (value, meta, record) => record.data.upid !== null
 		? Proxmox.Utils.format_size(value, true) : "-",
 	    sortable: false,
 	    minWidth: 80,
-- 
2.39.2




* [pbs-devel] [PATCH proxmox-backup 2/4] api: merge garbage-collection-status and -job-status
  2024-04-22 11:38 ` [pbs-devel] [PATCH proxmox-backup 0/4] more follow-ups Fabian Grünbichler
  2024-04-22 11:38   ` [pbs-devel] [PATCH proxmox-backup 1/4] GC: flatten existing status into job status Fabian Grünbichler
@ 2024-04-22 11:38   ` Fabian Grünbichler
  2024-04-22 11:38   ` [pbs-devel] [PATCH proxmox-backup 3/4] ui: don't re-calculate GC duration Fabian Grünbichler
  2024-04-22 11:38   ` [pbs-devel] [PATCH proxmox-backup 4/4] GC status: reduce code duplication Fabian Grünbichler
  3 siblings, 0 replies; 10+ messages in thread
From: Fabian Grünbichler @ 2024-04-22 11:38 UTC (permalink / raw)
  To: pbs-devel

the latter was newly introduced, and both now return basically the same
information. the new, extended (job) status struct is a strict superset
of the old status struct, so this is not a breaking change API-wise.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/api2/admin/datastore.rs | 34 +---------------------------------
 src/api2/admin/gc.rs        |  8 +++-----
 2 files changed, 4 insertions(+), 38 deletions(-)

diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index da2b545a..e7a823b5 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -1212,34 +1212,6 @@ pub fn start_garbage_collection(
     Ok(json!(upid_str))
 }
 
-#[api(
-    input: {
-        properties: {
-            store: {
-                schema: DATASTORE_SCHEMA,
-            },
-        },
-    },
-    returns: {
-        type: GarbageCollectionStatus,
-    },
-    access: {
-        permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_AUDIT, false),
-    },
-)]
-/// Garbage collection status.
-pub fn garbage_collection_status(
-    store: String,
-    _info: &ApiMethod,
-    _rpcenv: &mut dyn RpcEnvironment,
-) -> Result<GarbageCollectionStatus, Error> {
-    let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
-
-    let status = datastore.last_gc_status();
-
-    Ok(status)
-}
-
 #[api(
     input: {
         properties: {
@@ -1256,7 +1228,7 @@ pub fn garbage_collection_status(
     },
 )]
 /// Garbage collection status.
-pub fn garbage_collection_job_status(
+pub fn garbage_collection_status(
     store: String,
     _info: &ApiMethod,
     _rpcenv: &mut dyn RpcEnvironment,
@@ -2406,10 +2378,6 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
             .get(&API_METHOD_GARBAGE_COLLECTION_STATUS)
             .post(&API_METHOD_START_GARBAGE_COLLECTION),
     ),
-    (
-        "gc-job-status",
-        &Router::new().get(&API_METHOD_GARBAGE_COLLECTION_JOB_STATUS),
-    ),
     (
         "group-notes",
         &Router::new()
diff --git a/src/api2/admin/gc.rs b/src/api2/admin/gc.rs
index 7535f369..bca06897 100644
--- a/src/api2/admin/gc.rs
+++ b/src/api2/admin/gc.rs
@@ -8,7 +8,7 @@ use pbs_api_types::DATASTORE_SCHEMA;
 
 use serde_json::Value;
 
-use crate::api2::admin::datastore::{garbage_collection_job_status, get_datastore_list};
+use crate::api2::admin::datastore::{garbage_collection_status, get_datastore_list};
 
 #[api(
     input: {
@@ -37,13 +37,11 @@ pub fn list_all_gc_jobs(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Vec<GarbageCollectionJobStatus>, Error> {
     let gc_info = match store {
-        Some(store) => {
-            garbage_collection_job_status(store, _info, rpcenv).map(|info| vec![info])?
-        }
+        Some(store) => garbage_collection_status(store, _info, rpcenv).map(|info| vec![info])?,
         None => get_datastore_list(Value::Null, _info, rpcenv)?
             .into_iter()
             .map(|store_list_item| store_list_item.store)
-            .filter_map(|store| garbage_collection_job_status(store, _info, rpcenv).ok())
+            .filter_map(|store| garbage_collection_status(store, _info, rpcenv).ok())
             .collect::<Vec<_>>(),
     };
 
-- 
2.39.2




* [pbs-devel] [PATCH proxmox-backup 3/4] ui: don't re-calculate GC duration
  2024-04-22 11:38 ` [pbs-devel] [PATCH proxmox-backup 0/4] more follow-ups Fabian Grünbichler
  2024-04-22 11:38   ` [pbs-devel] [PATCH proxmox-backup 1/4] GC: flatten existing status into job status Fabian Grünbichler
  2024-04-22 11:38   ` [pbs-devel] [PATCH proxmox-backup 2/4] api: merge garbage-collection-status and -job-status Fabian Grünbichler
@ 2024-04-22 11:38   ` Fabian Grünbichler
  2024-04-22 11:38   ` [pbs-devel] [PATCH proxmox-backup 4/4] GC status: reduce code duplication Fabian Grünbichler
  3 siblings, 0 replies; 10+ messages in thread
From: Fabian Grünbichler @ 2024-04-22 11:38 UTC (permalink / raw)
  To: pbs-devel

it is returned by the API anyway

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 www/config/GCView.js | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/www/config/GCView.js b/www/config/GCView.js
index 982ab939..bcea72a5 100644
--- a/www/config/GCView.js
+++ b/www/config/GCView.js
@@ -2,16 +2,7 @@ Ext.define('pbs-gc-jobs-status', {
     extend: 'Ext.data.Model',
     fields: [
 	'store', 'upid', 'removed-bytes', 'pending-bytes', 'schedule',
-	'next-run', 'last-run-endtime', 'last-run-state',
-	{
-	    name: 'duration',
-	    calculate: function(data) {
-		let endtime = data['last-run-endtime'];
-		if (!endtime) return undefined;
-		let task = Proxmox.Utils.parse_task_upid(data['upid']);
-		return endtime - task.starttime;
-	    },
-	},
+	'next-run', 'last-run-endtime', 'last-run-state', 'duration',
     ],
     idProperty: 'store',
     proxy: {
-- 
2.39.2




* [pbs-devel] [PATCH proxmox-backup 4/4] GC status: reduce code duplication
  2024-04-22 11:38 ` [pbs-devel] [PATCH proxmox-backup 0/4] more follow-ups Fabian Grünbichler
                     ` (2 preceding siblings ...)
  2024-04-22 11:38   ` [pbs-devel] [PATCH proxmox-backup 3/4] ui: don't re-calculate GC duration Fabian Grünbichler
@ 2024-04-22 11:38   ` Fabian Grünbichler
  3 siblings, 0 replies; 10+ messages in thread
From: Fabian Grünbichler @ 2024-04-22 11:38 UTC (permalink / raw)
  To: pbs-devel

the schedule handling is the same whether there was a last run or not, so let's
do it once and not twice. the duration can be stored right away, instead of
using an intermediate variable.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/api2/admin/datastore.rs | 84 ++++++++++++++-----------------------
 1 file changed, 31 insertions(+), 53 deletions(-)

diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index e7a823b5..3ea17499 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -1248,67 +1248,45 @@ pub fn garbage_collection_status(
         .map_err(|err| log::error!("could not open GC statefile for {store}: {err}"))
         .ok();
 
-    match status_in_memory.upid {
-        Some(ref upid) => {
-            let mut computed_schedule: JobScheduleStatus = JobScheduleStatus::default();
-            let mut duration = None;
-            if let Some(state) = state_file {
-                if let Ok(cs) = compute_schedule_status(&state, Some(&upid)) {
-                    computed_schedule = cs;
-                }
-            }
-
-            if let Some(endtime) = computed_schedule.last_run_endtime {
-                computed_schedule.next_run = info
-                    .schedule
-                    .as_ref()
-                    .and_then(|s| {
-                        s.parse::<CalendarEvent>()
-                            .map_err(|err| log::error!("{err}"))
-                            .ok()
-                    })
-                    .and_then(|e| {
-                        e.compute_next_event(endtime)
-                            .map_err(|err| log::error!("{err}"))
-                            .ok()
-                    })
-                    .and_then(|ne| ne);
+    let mut last = proxmox_time::epoch_i64();
 
-                if let Ok(parsed_upid) = upid.parse::<UPID>() {
-                    duration = Some(endtime - parsed_upid.starttime);
-                }
+    if let Some(ref upid) = status_in_memory.upid {
+        let mut computed_schedule: JobScheduleStatus = JobScheduleStatus::default();
+        if let Some(state) = state_file {
+            if let Ok(cs) = compute_schedule_status(&state, Some(&upid)) {
+                computed_schedule = cs;
             }
-
-            info.status = status_in_memory;
-            info.next_run = computed_schedule.next_run;
-            info.last_run_endtime = computed_schedule.last_run_endtime;
-            info.last_run_state = computed_schedule.last_run_state;
-            info.duration = duration;
         }
-        None => {
-            if let Some(schedule) = &info.schedule {
-                info.next_run = schedule
-                    .parse::<CalendarEvent>()
-                    .map_err(|err| log::error!("{err}"))
-                    .ok()
-                    .and_then(|e| {
-                        e.compute_next_event(proxmox_time::epoch_i64())
-                            .map_err(|err| log::error!("{err}"))
-                            .ok()
-                    })
-                    .and_then(|ne| ne);
 
-                if let Ok(event) = schedule.parse::<CalendarEvent>() {
-                    if let Ok(next_event) = event.compute_next_event(proxmox_time::epoch_i64()) {
-                        info.next_run = next_event;
-                    }
-                }
-            } else {
-                return Ok(info);
+        if let Some(endtime) = computed_schedule.last_run_endtime {
+            last = endtime;
+            if let Ok(parsed_upid) = upid.parse::<UPID>() {
+                info.duration = Some(endtime - parsed_upid.starttime);
             }
         }
+
+        info.next_run = computed_schedule.next_run;
+        info.last_run_endtime = computed_schedule.last_run_endtime;
+        info.last_run_state = computed_schedule.last_run_state;
     }
 
+    info.next_run = info
+        .schedule
+        .as_ref()
+        .and_then(|s| {
+            s.parse::<CalendarEvent>()
+                .map_err(|err| log::error!("{err}"))
+                .ok()
+        })
+        .and_then(|e| {
+            e.compute_next_event(last)
+                .map_err(|err| log::error!("{err}"))
+                .ok()
+        })
+        .and_then(|ne| ne);
+
+    info.status = status_in_memory;
+
     Ok(info)
 }
 
-- 
2.39.2




* [pbs-devel] applied: [PATCH proxmox-backup 0/3] follow ups for gc job status series
  2024-04-22 11:05 [pbs-devel] [PATCH proxmox-backup 0/3] follow ups for gc job status series Dominik Csapak
                   ` (3 preceding siblings ...)
  2024-04-22 11:38 ` [pbs-devel] [PATCH proxmox-backup 0/4] more follow-ups Fabian Grünbichler
@ 2024-04-23 12:36 ` Fabian Grünbichler
  4 siblings, 0 replies; 10+ messages in thread
From: Fabian Grünbichler @ 2024-04-23 12:36 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion

applied this series together with the follow-ups I sent as a reply.

On April 22, 2024 1:05 pm, Dominik Csapak wrote:
> this is a follow up series for Lukas/Stefans/Gabriels series:
>     Add GC job status to datastore and global prune job view. [0]
> 
> did not find too many issues, mainly one exception and bad
> scrolling/sizing behavior.
> 
> 0: https://lists.proxmox.com/pipermail/pbs-devel/2024-April/008694.html
> 
> Dominik Csapak (3):
>   ui: gc view: use beforedestroy for stopping the store
>   ui: prune/gc view: improve sizing & scrolling behaviour
>   ui: gc view: remove unnecessary widths in columns
> 
>  www/config/GCView.js     | 10 ++++++----
>  www/config/PruneAndGC.js | 10 +++++++++-
>  2 files changed, 15 insertions(+), 5 deletions(-)
> 
> -- 
> 2.39.2

