From: Stefan Hanreich <s.hanreich@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH v4 proxmox-backup 2/3] sync job: pull: improve log output
Date: Tue, 18 Apr 2023 16:59:46 +0200
Message-ID: <20230418145947.3003473-3-s.hanreich@proxmox.com>
In-Reply-To: <20230418145947.3003473-1-s.hanreich@proxmox.com>

Adding an opening line for every group makes parsing the log easier.

We can also remove the 're-sync [...] done' line, because the next
line should be a progress line anyway.

The new output for sync job/pull logs is ordered as follows (illustrated below):

- skipped because already synced (happens on most runs, except the first)
- re-sync of the last synced snapshot (if it still exists on the source)
- skipped because of transfer-last (if set and it skips anything)
- sync of new snapshots (if any exist)
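
For illustration, a single group's section of the task log would then be
structured roughly like this (the group name, snapshot names and timestamp
ranges are placeholders; the skip summaries come from the Display impl of
SkipInfo below):

  sync group vm/100
  skipped: 2 snapshot(s) (<oldest> .. <newest>) - older than the newest local snapshot
  re-sync snapshot vm/100/<last synced snapshot>
  skipped: 1 snapshot(s) (<timestamp>) - due to transfer-last
  [progress and sync lines for the remaining new snapshots]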

Suggested-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
 src/server/pull.rs | 54 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 41 insertions(+), 13 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 0219d47e..e50037ed 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -535,19 +535,39 @@ async fn pull_snapshot_from(
     } else {
         task_log!(worker, "re-sync snapshot {}", snapshot.dir());
         pull_snapshot(worker, reader, snapshot, downloaded_chunks).await?;
-        task_log!(worker, "re-sync snapshot {} done", snapshot.dir());
     }
 
     Ok(())
 }
 
+enum SkipReason {
+    AlreadySynced,
+    TransferLast,
+}
+
 struct SkipInfo {
     oldest: i64,
     newest: i64,
     count: u64,
+    skip_reason: SkipReason,
 }
 
 impl SkipInfo {
+    fn new(skip_reason: SkipReason) -> Self {
+        SkipInfo {
+            oldest: i64::MAX,
+            newest: i64::MIN,
+            count: 0,
+            skip_reason,
+        }
+    }
+
+    fn reset(&mut self) {
+        self.count = 0;
+        self.oldest = i64::MAX;
+        self.newest = i64::MIN;
+    }
+
     fn update(&mut self, backup_time: i64) {
         self.count += 1;
 
@@ -575,11 +595,17 @@ impl SkipInfo {
 
 impl std::fmt::Display for SkipInfo {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        let reason_string = match self.skip_reason {
+            SkipReason::AlreadySynced => "older than the newest local snapshot",
+            SkipReason::TransferLast => "due to transfer-last",
+        };
+
         write!(
             f,
-            "skipped: {} snapshot(s) ({}) older than the newest local snapshot",
+            "skipped: {} snapshot(s) ({}) - {}",
             self.count,
-            self.affected().map_err(|_| std::fmt::Error)?
+            self.affected().map_err(|_| std::fmt::Error)?,
+            reason_string
         )
     }
 }
@@ -610,6 +636,8 @@ async fn pull_group(
     remote_ns: BackupNamespace,
     progress: &mut StoreProgress,
 ) -> Result<(), Error> {
+    task_log!(worker, "sync group {}", group);
+
     let path = format!(
         "api2/json/admin/datastore/{}/snapshots",
         params.source.store()
@@ -645,11 +673,8 @@ async fn pull_group(
 
     progress.group_snapshots = list.len() as u64;
 
-    let mut skip_info = SkipInfo {
-        oldest: i64::MAX,
-        newest: i64::MIN,
-        count: 0,
-    };
+    let mut already_synced_skip_info = SkipInfo::new(SkipReason::AlreadySynced);
+    let mut transfer_last_skip_info = SkipInfo::new(SkipReason::TransferLast);
 
     let total_amount = list.len();
 
@@ -674,12 +699,19 @@ async fn pull_group(
         remote_snapshots.insert(snapshot.time);
 
         if last_sync_time > snapshot.time {
-            skip_info.update(snapshot.time);
+            already_synced_skip_info.update(snapshot.time);
             continue;
+        } else if already_synced_skip_info.count > 0 {
+            task_log!(worker, "{}", already_synced_skip_info);
+            already_synced_skip_info.reset();
         }
 
         if pos < cutoff && last_sync_time != snapshot.time {
+            transfer_last_skip_info.update(snapshot.time);
             continue;
+        } else if transfer_last_skip_info.count > 0 {
+            task_log!(worker, "{}", transfer_last_skip_info);
+            transfer_last_skip_info.reset();
         }
 
         // get updated auth_info (new tickets)
@@ -739,10 +771,6 @@ async fn pull_group(
         }
     }
 
-    if skip_info.count > 0 {
-        task_log!(worker, "{}", skip_info);
-    }
-
     Ok(())
 }
 
-- 
2.30.2
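
For reference, a minimal, self-contained sketch of the SkipInfo/SkipReason
bookkeeping introduced above. It is simplified: the real code lives in
src/server/pull.rs, logs through task_log!, and its affected() helper renders
proper timestamps rather than the raw epoch values printed here; the update()
body and the example timestamps are likewise illustrative.

    enum SkipReason {
        AlreadySynced,
        TransferLast,
    }

    struct SkipInfo {
        oldest: i64,
        newest: i64,
        count: u64,
        skip_reason: SkipReason,
    }

    impl SkipInfo {
        fn new(skip_reason: SkipReason) -> Self {
            SkipInfo {
                oldest: i64::MAX,
                newest: i64::MIN,
                count: 0,
                skip_reason,
            }
        }

        // cleared after its summary line has been logged, so the same
        // instance can collect the next run of skipped snapshots
        fn reset(&mut self) {
            self.count = 0;
            self.oldest = i64::MAX;
            self.newest = i64::MIN;
        }

        fn update(&mut self, backup_time: i64) {
            self.count += 1;
            self.oldest = self.oldest.min(backup_time);
            self.newest = self.newest.max(backup_time);
        }

        // simplified stand-in for the upstream affected() helper
        fn affected(&self) -> String {
            if self.count == 1 {
                format!("{}", self.oldest)
            } else {
                format!("{} .. {}", self.oldest, self.newest)
            }
        }
    }

    impl std::fmt::Display for SkipInfo {
        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
            let reason_string = match self.skip_reason {
                SkipReason::AlreadySynced => "older than the newest local snapshot",
                SkipReason::TransferLast => "due to transfer-last",
            };
            write!(
                f,
                "skipped: {} snapshot(s) ({}) - {}",
                self.count,
                self.affected(),
                reason_string
            )
        }
    }

    fn main() {
        let mut already_synced = SkipInfo::new(SkipReason::AlreadySynced);
        already_synced.update(1681560000);
        // skipped: 1 snapshot(s) (1681560000) - older than the newest local snapshot
        println!("{already_synced}");

        let mut transfer_last = SkipInfo::new(SkipReason::TransferLast);
        transfer_last.update(1681646400);
        transfer_last.update(1681732800);
        // skipped: 2 snapshot(s) (1681646400 .. 1681732800) - due to transfer-last
        println!("{transfer_last}");

        transfer_last.reset();
        assert_eq!(transfer_last.count, 0);
    }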




