public inbox for pbs-devel@lists.proxmox.com
From: Dominik Csapak <d.csapak@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup] backup/verify: improve speed by sorting chunks by inode
Date: Tue, 13 Apr 2021 16:35:36 +0200
Message-ID: <20210413143536.19004-1-d.csapak@proxmox.com>

instead of reading the chunks from disk in the order they appear in the
index file, stat them all first, sort them by inode number, and read them
in that order.

this can have a very positive impact on read speed on spinning disks,
even with the additional cost of stat'ing every chunk.
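
roughly, the access pattern looks like this (a standalone sketch for
illustration only -- the function and variable names here are made up,
the actual change is in the diff below):

    use std::fs;
    use std::os::unix::fs::MetadataExt; // for Metadata::ino()
    use std::path::PathBuf;

    // stat all chunks first, sort by inode number, then read in that
    // order. on typical filesystems ascending inode order correlates
    // with the on-disk layout, which avoids most seeking on spinners.
    fn read_chunks_sorted_by_inode(chunk_paths: &[PathBuf]) -> std::io::Result<()> {
        let mut chunk_list: Vec<(usize, u64)> = Vec::with_capacity(chunk_paths.len());

        for (pos, path) in chunk_paths.iter().enumerate() {
            let metadata = fs::metadata(path)?; // one stat() per chunk
            chunk_list.push((pos, metadata.ino()));
        }

        chunk_list.sort_unstable_by_key(|&(_, ino)| ino);

        for (pos, _ino) in chunk_list {
            let _data = fs::read(&chunk_paths[pos])?; // mostly sequential now
        }
        Ok(())
    }

the extra stat pass is not free, but on spinning disks the reordered
reads more than make up for it (see the benchmarks below).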

memory footprint should be tolerable: for 1_000_000 chunks we need
~16MiB of memory (a Vec of one 64bit position + 64bit inode per chunk)
(assuming 4MiB chunks, such an index would reference 4TiB of data)
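
for reference, each entry is one (u64, u64) tuple, i.e. 16 bytes, so:

    1_000_000 entries * 16 bytes = 16_000_000 bytes ≈ 15.3 MiB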

two small benchmarks (single spinning disk, ext4) showed an improvement
from ~430 seconds to ~330 seconds for a 32GiB fixed index, and from
~160 seconds to ~120 seconds for a 10GiB dynamic index

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
it would be great if other people could also benchmark this patch on
different setups (in addition to mine), to confirm or disprove my results

 src/backup/datastore.rs |  5 +++++
 src/backup/verify.rs    | 32 +++++++++++++++++++++++++++++---
 2 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/src/backup/datastore.rs b/src/backup/datastore.rs
index 28dda7e7..8162c269 100644
--- a/src/backup/datastore.rs
+++ b/src/backup/datastore.rs
@@ -686,6 +686,11 @@ impl DataStore {
     }
 
 
+    pub fn stat_chunk(&self, digest: &[u8; 32]) -> Result<std::fs::Metadata, Error> {
+        let (chunk_path, _digest_str) = self.chunk_store.chunk_path(digest);
+        std::fs::metadata(chunk_path).map_err(Error::from)
+    }
+
     pub fn load_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
 
         let (chunk_path, digest_str) = self.chunk_store.chunk_path(digest);
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index ac4a6c29..9173bd9d 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -159,13 +159,16 @@ fn verify_index_chunks(
         }
     );
 
-    for pos in 0..index.index_count() {
+    let index_count = index.index_count();
+    let mut chunk_list = Vec::with_capacity(index_count);
 
+    use std::os::unix::fs::MetadataExt;
+
+    for pos in 0..index_count {
         verify_worker.worker.check_abort()?;
         crate::tools::fail_on_shutdown()?;
 
         let info = index.chunk_info(pos).unwrap();
-        let size = info.size();
 
         if verify_worker.verified_chunks.lock().unwrap().contains(&info.digest) {
             continue; // already verified
@@ -178,15 +181,38 @@ fn verify_index_chunks(
             continue;
         }
 
+        match verify_worker.datastore.stat_chunk(&info.digest) {
+            Err(err) => {
+                verify_worker.corrupt_chunks.lock().unwrap().insert(info.digest);
+                task_log!(verify_worker.worker, "can't verify chunk, stat failed - {}", err);
+                errors.fetch_add(1, Ordering::SeqCst);
+                rename_corrupted_chunk(verify_worker.datastore.clone(), &info.digest, &verify_worker.worker);
+            }
+            Ok(metadata) => {
+                chunk_list.push((pos, metadata.ino()));
+            }
+        }
+    }
+
+    chunk_list.sort_unstable_by(|(_, ino_a), (_, ino_b)| {
+        ino_a.cmp(&ino_b)
+    });
+
+    for (pos, _) in chunk_list {
+        verify_worker.worker.check_abort()?;
+        crate::tools::fail_on_shutdown()?;
+
+        let info = index.chunk_info(pos).unwrap();
+
         match verify_worker.datastore.load_chunk(&info.digest) {
             Err(err) => {
                 verify_worker.corrupt_chunks.lock().unwrap().insert(info.digest);
                 task_log!(verify_worker.worker, "can't verify chunk, load failed - {}", err);
                 errors.fetch_add(1, Ordering::SeqCst);
                 rename_corrupted_chunk(verify_worker.datastore.clone(), &info.digest, &verify_worker.worker);
-                continue;
             }
             Ok(chunk) => {
+                let size = info.size();
                 read_bytes += chunk.raw_size();
                 decoder_pool.send((chunk, info.digest, size))?;
                 decoded_bytes += size;
-- 
2.20.1

Thread overview: 6+ messages
2021-04-13 14:35 Dominik Csapak [this message]
2021-04-14 13:24 ` Fabian Grünbichler
2021-04-14 15:42 ` [pbs-devel] applied: " Thomas Lamprecht
2021-04-14  7:17 [pbs-devel] " Dietmar Maurer
2021-04-14 16:44 Dietmar Maurer
2021-04-15  7:19 ` Fabian Grünbichler
