public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup v2 0/2] add transfer-last parameter to pull/sync job
@ 2023-01-11 14:52 Stefan Hanreich
  2023-01-11 14:52 ` [pbs-devel] [PATCH proxmox-backup v2 1/2] partial fix #3701: sync/pull: add transfer-last parameter Stefan Hanreich
  2023-01-11 14:52 ` [pbs-devel] [PATCH proxmox-backup v2 2/2] ui: sync job: add transfer-last parameter to ui Stefan Hanreich
  0 siblings, 2 replies; 4+ messages in thread
From: Stefan Hanreich @ 2023-01-11 14:52 UTC (permalink / raw)
  To: pbs-devel

This patch series adds the option of specifying the transfer-last
parameter, which limits the number of backups transferred. If specified, only
the newest N backups are transferred, instead of all new backups.

This can be particularly useful in situations where the target PBS has less
disk space than the source PBS. It can also be used to limit the bandwidth
used by the sync job.
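The selection logic can be sketched roughly as follows (a hypothetical
standalone helper for illustration only; the patch inlines this logic in
pull_group()): given a chronologically ordered snapshot list, only the last
`transfer_last` entries get pulled.

```rust
/// Sketch of the transfer-last cutoff: given `total` snapshots ordered
/// oldest-to-newest and an optional `transfer_last` limit, return the
/// index of the first snapshot that should actually be transferred.
/// (Hypothetical helper for illustration, not the actual patch code.)
fn transfer_cutoff(total: usize, transfer_last: Option<usize>) -> usize {
    match transfer_last {
        // Keep only the newest `n`; saturating_sub guards against
        // `n` being larger than the number of available snapshots.
        Some(n) => total.saturating_sub(n),
        // No limit configured: transfer everything (cutoff at index 0).
        None => 0,
    }
}

fn main() {
    // 8 remote snapshots, transfer-last = 2: the first 6 are skipped.
    assert_eq!(transfer_cutoff(8, Some(2)), 6);
    // transfer-last larger than the list: nothing is skipped.
    assert_eq!(transfer_cutoff(3, Some(10)), 0);
    // No transfer-last set: nothing is skipped.
    assert_eq!(transfer_cutoff(8, None), 0);
    println!("ok");
}
```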

Part of a series of patches that attempt to fix #3701

Changes from v1 -> v2:
* made condition for deciding which backups to skip clearer
* changed type of transfer-last to usize instead of u64
* split api/ui changes into two commits

Stefan Hanreich (2):
  partial fix #3701: sync/pull: add transfer-last parameter
  ui: sync job: add transfer-last parameter to ui

 pbs-api-types/src/jobs.rs         | 11 +++++++++++
 src/api2/config/sync.rs           |  9 +++++++++
 src/api2/pull.rs                  | 10 +++++++++-
 src/bin/proxmox-backup-manager.rs | 11 ++++++++++-
 src/server/pull.rs                | 17 ++++++++++++++++-
 www/config/SyncView.js            |  9 ++++++++-
 www/window/SyncJobEdit.js         | 13 +++++++++++++
 7 files changed, 76 insertions(+), 4 deletions(-)

-- 
2.30.2




^ permalink raw reply	[flat|nested] 4+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v2 1/2] partial fix #3701: sync/pull: add transfer-last parameter
  2023-01-11 14:52 [pbs-devel] [PATCH proxmox-backup v2 0/2] add transfer-last parameter to pull/sync job Stefan Hanreich
@ 2023-01-11 14:52 ` Stefan Hanreich
  2023-01-17 12:16   ` Fabian Grünbichler
  2023-01-11 14:52 ` [pbs-devel] [PATCH proxmox-backup v2 2/2] ui: sync job: add transfer-last parameter to ui Stefan Hanreich
  1 sibling, 1 reply; 4+ messages in thread
From: Stefan Hanreich @ 2023-01-11 14:52 UTC (permalink / raw)
  To: pbs-devel

Specifying the transfer-last parameter limits the number of backups
that get synced via the pull command/sync job. Only the N latest
backups are pulled/synced; all other backups are skipped.

This is particularly useful in situations where the sync target has
less disk space than the source, since syncing all backups from the
source is not possible if there is not enough disk space on the target.
Additionally, this can be used to limit the amount of data
transferred, reducing load on the network.

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
I had to make slight adjustments to Wolfgang's proposed condition because it
wouldn't work in cases where transfer-last was greater than the total number of
backups available.
Nevertheless, the condition should now be a lot less obtuse and easier to read.

 pbs-api-types/src/jobs.rs         | 11 +++++++++++
 src/api2/config/sync.rs           |  9 +++++++++
 src/api2/pull.rs                  | 10 +++++++++-
 src/bin/proxmox-backup-manager.rs | 11 ++++++++++-
 src/server/pull.rs                | 17 ++++++++++++++++-
 5 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/pbs-api-types/src/jobs.rs b/pbs-api-types/src/jobs.rs
index cf7618c4..b9f57719 100644
--- a/pbs-api-types/src/jobs.rs
+++ b/pbs-api-types/src/jobs.rs
@@ -444,6 +444,11 @@ pub const GROUP_FILTER_SCHEMA: Schema = StringSchema::new(
 pub const GROUP_FILTER_LIST_SCHEMA: Schema =
     ArraySchema::new("List of group filters.", &GROUP_FILTER_SCHEMA).schema();
 
+pub const TRANSFER_LAST_SCHEMA: Schema =
+    IntegerSchema::new("The maximum amount of snapshots to be transferred (per group).")
+        .minimum(1)
+        .schema();
+
 #[api(
     properties: {
         id: {
@@ -493,6 +498,10 @@ pub const GROUP_FILTER_LIST_SCHEMA: Schema =
             schema: GROUP_FILTER_LIST_SCHEMA,
             optional: true,
         },
+        "transfer-last": {
+            schema: TRANSFER_LAST_SCHEMA,
+            optional: true,
+        },
     }
 )]
 #[derive(Serialize, Deserialize, Clone, Updater, PartialEq)]
@@ -522,6 +531,8 @@ pub struct SyncJobConfig {
     pub group_filter: Option<Vec<GroupFilter>>,
     #[serde(flatten)]
     pub limit: RateLimitConfig,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub transfer_last: Option<usize>,
 }
 
 impl SyncJobConfig {
diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index bd7373df..01e5f2ce 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -215,6 +215,8 @@ pub enum DeletableProperty {
     RemoteNs,
     /// Delete the max_depth property,
     MaxDepth,
+    /// Delete the transfer_last property,
+    TransferLast,
 }
 
 #[api(
@@ -309,6 +311,9 @@ pub fn update_sync_job(
                 DeletableProperty::MaxDepth => {
                     data.max_depth = None;
                 }
+                DeletableProperty::TransferLast => {
+                    data.transfer_last = None;
+                }
             }
         }
     }
@@ -343,6 +348,9 @@ pub fn update_sync_job(
     if let Some(group_filter) = update.group_filter {
         data.group_filter = Some(group_filter);
     }
+    if let Some(transfer_last) = update.transfer_last {
+        data.transfer_last = Some(transfer_last);
+    }
 
     if update.limit.rate_in.is_some() {
         data.limit.rate_in = update.limit.rate_in;
@@ -507,6 +515,7 @@ acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
         group_filter: None,
         schedule: None,
         limit: pbs_api_types::RateLimitConfig::default(), // no limit
+        transfer_last: None,
     };
 
     // should work without ACLs
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index b2473ec8..daeba7cf 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -10,6 +10,7 @@ use pbs_api_types::{
     Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, DATASTORE_SCHEMA,
     GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
     PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA,
+    TRANSFER_LAST_SCHEMA,
 };
 use pbs_config::CachedUserInfo;
 use proxmox_rest_server::WorkerTask;
@@ -76,6 +77,7 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
             sync_job.max_depth,
             sync_job.group_filter.clone(),
             sync_job.limit.clone(),
+            sync_job.transfer_last,
         )
     }
 }
@@ -201,7 +203,11 @@ pub fn do_sync_job(
             limit: {
                 type: RateLimitConfig,
                 flatten: true,
-            }
+            },
+            "transfer-last": {
+                schema: TRANSFER_LAST_SCHEMA,
+                optional: true,
+            },
         },
     },
     access: {
@@ -225,6 +231,7 @@ async fn pull(
     max_depth: Option<usize>,
     group_filter: Option<Vec<GroupFilter>>,
     limit: RateLimitConfig,
+    transfer_last: Option<usize>,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -257,6 +264,7 @@ async fn pull(
         max_depth,
         group_filter,
         limit,
+        transfer_last,
     )?;
     let client = pull_params.client().await?;
 
diff --git a/src/bin/proxmox-backup-manager.rs b/src/bin/proxmox-backup-manager.rs
index 06330c78..9ea5830c 100644
--- a/src/bin/proxmox-backup-manager.rs
+++ b/src/bin/proxmox-backup-manager.rs
@@ -13,7 +13,7 @@ use pbs_api_types::percent_encoding::percent_encode_component;
 use pbs_api_types::{
     BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, DATASTORE_SCHEMA,
     GROUP_FILTER_LIST_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA, NS_MAX_DEPTH_SCHEMA,
-    REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA, UPID_SCHEMA,
+    REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA, TRANSFER_LAST_SCHEMA, UPID_SCHEMA,
     VERIFICATION_OUTDATED_AFTER_SCHEMA,
 };
 use pbs_client::{display_task_log, view_task_result};
@@ -272,6 +272,10 @@ fn task_mgmt_cli() -> CommandLineInterface {
                 schema: OUTPUT_FORMAT,
                 optional: true,
             },
+            "transfer-last": {
+                schema: TRANSFER_LAST_SCHEMA,
+                optional: true,
+            },
         }
    }
 )]
@@ -287,6 +291,7 @@ async fn pull_datastore(
     max_depth: Option<usize>,
     group_filter: Option<Vec<GroupFilter>>,
     limit: RateLimitConfig,
+    transfer_last: Option<usize>,
     param: Value,
 ) -> Result<Value, Error> {
     let output_format = get_output_format(&param);
@@ -319,6 +324,10 @@ async fn pull_datastore(
         args["remove-vanished"] = Value::from(remove_vanished);
     }
 
+    if transfer_last.is_some() {
+        args["transfer-last"] = json!(transfer_last)
+    }
+
     let mut limit_json = json!(limit);
     let limit_map = limit_json
         .as_object_mut()
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 65eedf2c..81f4faf3 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -1,5 +1,6 @@
 //! Sync datastore from remote server
 
+use std::cmp::min;
 use std::collections::{HashMap, HashSet};
 use std::io::{Seek, SeekFrom};
 use std::sync::atomic::{AtomicUsize, Ordering};
@@ -59,6 +60,8 @@ pub(crate) struct PullParameters {
     group_filter: Option<Vec<GroupFilter>>,
     /// Rate limits for all transfers from `remote`
     limit: RateLimitConfig,
+    /// How many snapshots should be transferred at most (taking the newest N snapshots)
+    transfer_last: Option<usize>,
 }
 
 impl PullParameters {
@@ -78,6 +81,7 @@ impl PullParameters {
         max_depth: Option<usize>,
         group_filter: Option<Vec<GroupFilter>>,
         limit: RateLimitConfig,
+        transfer_last: Option<usize>,
     ) -> Result<Self, Error> {
         let store = DataStore::lookup_datastore(store, Some(Operation::Write))?;
 
@@ -109,6 +113,7 @@ impl PullParameters {
             max_depth,
             group_filter,
             limit,
+            transfer_last,
         })
     }
 
@@ -573,7 +578,7 @@ impl std::fmt::Display for SkipInfo {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(
             f,
-            "skipped: {} snapshot(s) ({}) older than the newest local snapshot",
+            "skipped: {} snapshot(s) ({}) - older than the newest local snapshot or excluded by transfer-last",
             self.count,
             self.affected().map_err(|_| std::fmt::Error)?
         )
@@ -646,6 +651,11 @@ async fn pull_group(
         count: 0,
     };
 
+    let total_amount = list.len();
+
+    let mut transfer_amount = params.transfer_last.unwrap_or(total_amount);
+    transfer_amount = min(transfer_amount, total_amount);
+
     for (pos, item) in list.into_iter().enumerate() {
         let snapshot = item.backup;
 
@@ -668,6 +678,11 @@ async fn pull_group(
             }
         }
 
+        if pos < (total_amount - transfer_amount) {
+            skip_info.update(snapshot.time);
+            continue;
+        }
+
         // get updated auth_info (new tickets)
         let auth_info = client.login().await?;
 
-- 
2.30.2





* [pbs-devel] [PATCH proxmox-backup v2 2/2] ui: sync job: add transfer-last parameter to ui
  2023-01-11 14:52 [pbs-devel] [PATCH proxmox-backup v2 0/2] add transfer-last parameter to pull/sync job Stefan Hanreich
  2023-01-11 14:52 ` [pbs-devel] [PATCH proxmox-backup v2 1/2] partial fix #3701: sync/pull: add transfer-last parameter Stefan Hanreich
@ 2023-01-11 14:52 ` Stefan Hanreich
  1 sibling, 0 replies; 4+ messages in thread
From: Stefan Hanreich @ 2023-01-11 14:52 UTC (permalink / raw)
  To: pbs-devel

Adds a new column to the sync job view that shows the current setting
of the transfer-last parameter. This column is hidden by default.

Also adds an input field for configuring the transfer-last parameter of
a sync job in the create/edit view.

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
 www/config/SyncView.js    |  9 ++++++++-
 www/window/SyncJobEdit.js | 13 +++++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/www/config/SyncView.js b/www/config/SyncView.js
index a90e9a70..bf9072cb 100644
--- a/www/config/SyncView.js
+++ b/www/config/SyncView.js
@@ -3,7 +3,7 @@ Ext.define('pbs-sync-jobs-status', {
     fields: [
 	'id', 'owner', 'remote', 'remote-store', 'remote-ns', 'store', 'ns',
 	'schedule', 'group-filter', 'next-run', 'last-run-upid', 'last-run-state',
-	'last-run-endtime',
+	'last-run-endtime', 'transfer-last',
 	{
 	    name: 'duration',
 	    calculate: function(data) {
@@ -241,6 +241,13 @@ Ext.define('PBS.config.SyncJobView', {
 	    renderer: v => v ? Ext.String.htmlEncode(v) : gettext('All'),
 	    width: 80,
 	},
+	{
+	    header: gettext('Transfer Last'),
+	    dataIndex: 'transfer-last',
+	    flex: 1,
+	    sortable: true,
+	    hidden: true,
+	},
 	{
 	    header: gettext('Schedule'),
 	    dataIndex: 'schedule',
diff --git a/www/window/SyncJobEdit.js b/www/window/SyncJobEdit.js
index 948ad5da..64447ebc 100644
--- a/www/window/SyncJobEdit.js
+++ b/www/window/SyncJobEdit.js
@@ -131,6 +131,19 @@ Ext.define('PBS.window.SyncJobEdit', {
 			submitAutoScaledSizeUnit: true,
 			// NOTE: handle deleteEmpty in onGetValues due to bandwidth field having a cbind too
 		    },
+		    {
+			fieldLabel: gettext('Transfer Last'),
+			xtype: 'pbsPruneKeepInput',
+			name: 'transfer-last',
+			emptyText: gettext('all'),
+			autoEl: {
+			    tag: 'div',
+			    'data-qtip': gettext('The maximum amount of snapshots to be transferred (per group)'),
+			},
+			cbind: {
+			    deleteEmpty: '{!isCreate}',
+			},
+		    },
 		],
 
 		column2: [
-- 
2.30.2





* Re: [pbs-devel] [PATCH proxmox-backup v2 1/2] partial fix #3701: sync/pull: add transfer-last parameter
  2023-01-11 14:52 ` [pbs-devel] [PATCH proxmox-backup v2 1/2] partial fix #3701: sync/pull: add transfer-last parameter Stefan Hanreich
@ 2023-01-17 12:16   ` Fabian Grünbichler
  0 siblings, 0 replies; 4+ messages in thread
From: Fabian Grünbichler @ 2023-01-17 12:16 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion

On January 11, 2023 3:52 pm, Stefan Hanreich wrote:
> Specifying the transfer-last parameter limits the amount of backups
> that get synced via the pull command/sync job. The parameter specifies
> how many of the N latest backups should get pulled/synced. All other
> backups will get skipped.
> 
> This is particularly useful in situations where the sync target has
> less disk space than the source. Syncing all backups from the source
> is not possible if there is not enough disk space on the target.
> Additionally this can be used for limiting the amount of data
> transferred, reducing load on the network.
> 
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
> I had to make slight adjustments to Wolfgang's proposed condition because it
> wouldn't work in cases where transfer-last was greater than the total amount of
> backups available.
> Nevertheless, the condition should now be a lot less obtuse and easier to read.

see below for a suggestion for further improvement

one high-level comment: I am not sure about the interaction between re-syncing
the last synced snapshot and transfer-last. I think I'd prefer the last synced
snapshot to always be re-synced (the main purpose is to get the backup log/notes
in case backup and pull/sync align just right, so not much data should be
transferred/used), which is not happening at the moment (but that's trivially
implemented, it just needs an additional condition in the transfer-last check..).
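The additional condition could look roughly like this (a sketch only; all
names are assumed stand-ins for the loop state in pull_group(), and
`last_sync_time` is a hypothetical variable holding the timestamp of the
newest already-synced snapshot):

```rust
// Sketch of the skip check with the re-sync exception suggested above:
// a snapshot below the transfer-last cutoff is skipped *unless* it is
// the newest already-synced snapshot, which gets re-synced anyway to
// pick up the backup log/notes. Hypothetical, not the patch code.
fn should_skip(pos: usize, cutoff: usize, snapshot_time: i64, last_sync_time: Option<i64>) -> bool {
    pos < cutoff && Some(snapshot_time) != last_sync_time
}

fn main() {
    // Below the cutoff and not previously synced: skip.
    assert!(should_skip(1, 6, 100, Some(400)));
    // Below the cutoff but matches the last synced snapshot: re-sync it.
    assert!(!should_skip(1, 6, 400, Some(400)));
    // At or above the cutoff: always transfer.
    assert!(!should_skip(6, 6, 100, Some(400)));
    println!("ok");
}
```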

> 
>  pbs-api-types/src/jobs.rs         | 11 +++++++++++
>  src/api2/config/sync.rs           |  9 +++++++++
>  src/api2/pull.rs                  | 10 +++++++++-
>  src/bin/proxmox-backup-manager.rs | 11 ++++++++++-
>  src/server/pull.rs                | 17 ++++++++++++++++-
>  5 files changed, 55 insertions(+), 3 deletions(-)
> 

[..]

> diff --git a/src/server/pull.rs b/src/server/pull.rs
> index 65eedf2c..81f4faf3 100644
> --- a/src/server/pull.rs
> +++ b/src/server/pull.rs
> @@ -1,5 +1,6 @@
>  //! Sync datastore from remote server
>  
> +use std::cmp::min;
>  use std::collections::{HashMap, HashSet};
>  use std::io::{Seek, SeekFrom};
>  use std::sync::atomic::{AtomicUsize, Ordering};
> @@ -59,6 +60,8 @@ pub(crate) struct PullParameters {
>      group_filter: Option<Vec<GroupFilter>>,
>      /// Rate limits for all transfers from `remote`
>      limit: RateLimitConfig,
> +    /// How many snapshots should be transferred at most (taking the newest N snapshots)
> +    transfer_last: Option<usize>,
>  }
>  
>  impl PullParameters {
> @@ -78,6 +81,7 @@ impl PullParameters {
>          max_depth: Option<usize>,
>          group_filter: Option<Vec<GroupFilter>>,
>          limit: RateLimitConfig,
> +        transfer_last: Option<usize>,
>      ) -> Result<Self, Error> {
>          let store = DataStore::lookup_datastore(store, Some(Operation::Write))?;
>  
> @@ -109,6 +113,7 @@ impl PullParameters {
>              max_depth,
>              group_filter,
>              limit,
> +            transfer_last,
>          })
>      }
>  
> @@ -573,7 +578,7 @@ impl std::fmt::Display for SkipInfo {
>      fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
>          write!(
>              f,
> -            "skipped: {} snapshot(s) ({}) older than the newest local snapshot",
> +            "skipped: {} snapshot(s) ({}) - older than the newest local snapshot or excluded by transfer-last",

it would possibly be nicer to store a "reason" in SkipInfo, and count the
"already synced" and "excluded by transfer-last" snapshots separately (if only
so that an admin can verify that transfer-last has the desired effect).
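Such a split could be sketched with a reason enum, e.g. (a design sketch,
not the SkipInfo struct as it exists in the patch):

```rust
/// Hypothetical split of SkipInfo by skip reason, so the sync log can
/// report "already synced" and "excluded by transfer-last" separately.
#[derive(Clone, Copy, PartialEq, Debug)]
enum SkipReason {
    AlreadySynced,
    TransferLast,
}

struct SkipInfo {
    oldest: i64,
    newest: i64,
    count: u64,
    reason: SkipReason,
}

impl SkipInfo {
    fn new(reason: SkipReason) -> Self {
        SkipInfo { oldest: i64::MAX, newest: i64::MIN, count: 0, reason }
    }

    fn update(&mut self, backup_time: i64) {
        self.count += 1;
        self.oldest = self.oldest.min(backup_time);
        self.newest = self.newest.max(backup_time);
    }
}

fn main() {
    // One counter per reason lets the log print both totals independently.
    let mut resync = SkipInfo::new(SkipReason::AlreadySynced);
    let mut cutoff = SkipInfo::new(SkipReason::TransferLast);
    resync.update(100);
    cutoff.update(200);
    cutoff.update(300);
    assert_eq!(resync.count, 1);
    assert_eq!(cutoff.count, 2);
    assert_eq!(cutoff.reason, SkipReason::TransferLast);
    println!("ok");
}
```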

>              self.count,
>              self.affected().map_err(|_| std::fmt::Error)?
>          )
> @@ -646,6 +651,11 @@ async fn pull_group(
>          count: 0,
>      };
>  
> +    let total_amount = list.len();
> +
> +    let mut transfer_amount = params.transfer_last.unwrap_or(total_amount);
> +    transfer_amount = min(transfer_amount, total_amount);

I'd prefer this to just calculate the cutoff for the check below, e.g.

let XXX = params
        .transfer_last
        .map(|count| total_amount.saturating_sub(count))
        .unwrap_or_default();

(or an equivalent match construct, or some variant of your unwrap+min construct
- doesn't really matter as long as it treats underflows correctly)

> +
>      for (pos, item) in list.into_iter().enumerate() {
>          let snapshot = item.backup;
>  
> @@ -668,6 +678,11 @@ async fn pull_group(
>              }
>          }
>  
> +        if pos < (total_amount - transfer_amount) {

because this screams "underflow" to me (even though that is checked above in the
code in your variant, there might be more stuff added in-between and once it's
no longer on the same screen this adds cognitive overhead ;)), whereas

if pos < cutoff {
    ..
}

makes it clear that there is no chance of a bad underflow occurring *here*

> +            skip_info.update(snapshot.time);
> +            continue;
> +        }
> +
>          // get updated auth_info (new tickets)
>          let auth_info = client.login().await?;


one more related thing that might have room for improvement:

the progress info counts snapshots skipped cause of transfer-last as "done", but
the order of logging is:
- print re-sync (if applicable)
- print pulled snapshot progress
- print info about skipped snapshots (if applicable)

it might be better (with SkipInfo split) to print
- print info about skipped < last_sync
- print re-sync (see below though)
- print info about skipped transfer-last
- print pulled snapshot progress ("done" can now include skipped snapshots without
confusion)

e.g., an example of the status quo (one snapshot synced, transfer-last = 2, a
total of 8 remote snapshots):

2023-01-17T12:49:46+01:00: percentage done: 33.33% (1/3 groups)
2023-01-17T12:49:46+01:00: skipped: 2 snapshot(s) (2022-12-13T11:09:43Z .. 2022-12-16T09:44:34Z) - older than the newest local snapshot or excluded by transfer-last
2023-01-17T12:49:46+01:00: sync snapshot vm/104/2023-01-16T13:09:55Z
2023-01-17T12:49:46+01:00: sync archive qemu-server.conf.blob
2023-01-17T12:49:46+01:00: sync archive drive-scsi0.img.fidx
2023-01-17T12:49:46+01:00: downloaded 0 bytes (0.00 MiB/s)
2023-01-17T12:49:46+01:00: got backup log file "client.log.blob"
2023-01-17T12:49:46+01:00: sync snapshot vm/104/2023-01-16T13:09:55Z done
2023-01-17T12:49:46+01:00: percentage done: 62.50% (1/3 groups, 7/8 snapshots in group #2)
2023-01-17T12:49:46+01:00: sync snapshot vm/104/2023-01-16T13:24:58Z
2023-01-17T12:49:46+01:00: sync archive qemu-server.conf.blob
2023-01-17T12:49:46+01:00: sync archive drive-scsi0.img.fidx
2023-01-17T12:49:46+01:00: downloaded 0 bytes (0.00 MiB/s)
2023-01-17T12:49:46+01:00: got backup log file "client.log.blob"
2023-01-17T12:49:46+01:00: sync snapshot vm/104/2023-01-16T13:24:58Z done
2023-01-17T12:49:46+01:00: percentage done: 66.67% (2/3 groups)
2023-01-17T12:49:46+01:00: skipped: 6 snapshot(s) (2022-12-16T09:25:03Z .. 2023-01-12T09:43:15Z) - older than the newest local snapshot or excluded by transfer-last


if we revamp the progress display, it might also make sense to improve the
output for groups that are already 100% synced:

2023-01-17T12:49:46+01:00: re-sync snapshot vm/800/2023-01-16T12:28:10Z
2023-01-17T12:49:46+01:00: no data changes
2023-01-17T12:49:46+01:00: re-sync snapshot vm/800/2023-01-16T12:28:10Z done
2023-01-17T12:49:46+01:00: percentage done: 100.00% (3/3 groups)

IMHO we could remove the "re-sync .. done" line (the next line will be a
progress line anyway, either for this snapshot or the whole group!), and instead
add an opening line ("sync group XXX..") that helps when scanning the log.





end of thread, other threads:[~2023-01-17 12:17 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-11 14:52 [pbs-devel] [PATCH proxmox-backup v2 0/2] add transfer-last parameter to pull/sync job Stefan Hanreich
2023-01-11 14:52 ` [pbs-devel] [PATCH proxmox-backup v2 1/2] partial fix #3701: sync/pull: add transfer-last parameter Stefan Hanreich
2023-01-17 12:16   ` Fabian Grünbichler
2023-01-11 14:52 ` [pbs-devel] [PATCH proxmox-backup v2 2/2] ui: sync job: add transfer-last parameter to ui Stefan Hanreich
