From: Dominik Csapak <d.csapak@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup 1/2] datastore: data blob: increase compression throughput
Date: Tue, 23 Jul 2024 12:10:36 +0200	[thread overview]
Message-ID: <20240723101037.1596714-1-d.csapak@proxmox.com> (raw)

by not using `zstd::stream::copy_encode`, because that has an allocation
pattern that reduces throughput if the target/source storage and the
network are faster than the chunk creation.

instead use `zstd::bulk::compress_to_buffer`, which shouldn't do any big
allocations, since we provide the target buffer.
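
As a rough sketch (the `compress_into` helper name is made up for
illustration and is not part of the patch), the change boils down to
compressing into a preallocated buffer instead of streaming into a
growing Vec:

    // hypothetical helper, for illustration only: compress `data` into a
    // caller-provided buffer so zstd does not grow an intermediate Vec
    fn compress_into(data: &[u8], level: i32) -> std::io::Result<Vec<u8>> {
        // reserve one output byte per input byte; if the compressed result
        // would be larger, compress_to_buffer errors out and the caller can
        // fall back to storing the data uncompressed
        let mut buf = vec![0u8; data.len()];
        let size = zstd::bulk::compress_to_buffer(data, &mut buf[..], level)?;
        buf.truncate(size);
        Ok(buf)
    }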

To handle the case that the target buffer is too small, we now ignore
all zstd errors and continue with the uncompressed data, logging any
error except the one for a too-small target buffer.

For now, we have to parse the error string for that, as `zstd` maps all
errors to `io::ErrorKind::Other`. Until that changes, there is no other
way to differentiate between the different kinds of errors.
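
A minimal sketch of that error classification (the `try_compress`
function is a made-up name, not part of the patch; the error string it
matches is the same one the patch below checks for):

    // sketch only: try to compress `data` into an equally sized buffer and
    // return None (i.e. keep the data uncompressed) on any zstd error,
    // logging only the unexpected ones
    fn try_compress(data: &[u8]) -> Option<Vec<u8>> {
        let mut buf = vec![0u8; data.len()];
        match zstd::bulk::compress_to_buffer(data, &mut buf[..], 1) {
            Ok(size) => {
                buf.truncate(size);
                Some(buf)
            }
            // expected for incompressible chunks, so no log entry; parsing
            // the message is a stop-gap until zstd exposes real error kinds
            Err(err) if err.to_string().contains("Destination buffer is too small") => None,
            Err(err) => {
                log::error!("zstd compression error: {err}");
                None
            }
        }
    }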

In my local benchmarks from tmpfs to tmpfs on localhost, where I
previously maxed out at ~450MiB/s, I now get ~625MiB/s throughput.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---

Note: if we want a different behavior for the errors, that's also OK
with me, but zstd errors should be rare, I guess (except the target
buffer one), and in that case I find it better to continue with
uncompressed data. If it was a transient error, the next upload of the
chunk will replace the uncompressed one anyway, provided it's smaller.

 pbs-datastore/src/data_blob.rs | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/pbs-datastore/src/data_blob.rs b/pbs-datastore/src/data_blob.rs
index a7a55fb7..92242076 100644
--- a/pbs-datastore/src/data_blob.rs
+++ b/pbs-datastore/src/data_blob.rs
@@ -136,7 +136,8 @@ impl DataBlob {
 
             DataBlob { raw_data }
         } else {
-            let max_data_len = data.len() + std::mem::size_of::<DataBlobHeader>();
+            let header_len = std::mem::size_of::<DataBlobHeader>();
+            let max_data_len = data.len() + header_len;
             if compress {
                 let mut comp_data = Vec::with_capacity(max_data_len);
 
@@ -147,15 +148,25 @@ impl DataBlob {
                 unsafe {
                     comp_data.write_le_value(head)?;
                 }
-
-                zstd::stream::copy_encode(data, &mut comp_data, 1)?;
-
-                if comp_data.len() < max_data_len {
-                    let mut blob = DataBlob {
-                        raw_data: comp_data,
-                    };
-                    blob.set_crc(blob.compute_crc());
-                    return Ok(blob);
+                comp_data.resize(max_data_len, 0u8);
+
+                match zstd::bulk::compress_to_buffer(data, &mut comp_data[header_len..], 1) {
+                    Ok(size) if size <= data.len() => {
+                        comp_data.resize(header_len + size, 0u8);
+                        let mut blob = DataBlob {
+                            raw_data: comp_data,
+                        };
+                        blob.set_crc(blob.compute_crc());
+                        return Ok(blob);
+                    }
+                    // if the size is bigger than the data, or any error is returned, continue with
+                    // the uncompressed archive, but log all errors besides "buffer too small"
+                    Ok(_) => {}
+                    Err(err) => {
+                        if !err.to_string().contains("Destination buffer is too small") {
+                            log::error!("zstd compression error: {err}");
+                        }
+                    }
                 }
             }
 
-- 
2.39.2


