public inbox for pbs-devel@lists.proxmox.com
 help / color / mirror / Atom feed
* [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup
@ 2023-09-22  7:16 Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset Christian Ebner
                   ` (20 more replies)
  0 siblings, 21 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

This (still rather rough) series of patches prototypes a possible
approach to improve pxar file-level backup creation speed.
The series is intended to gather first feedback on the implementation
approach and to find possible pitfalls I might not be aware of.

The current approach is to skip encoding of regular file payloads whose
metadata (currently mtime and size) did not change compared to a
previous backup run. Instead of re-encoding these files, a reference to
a newly introduced appendix section of the pxar archive is written. The
appendix section is created as a concatenation of indexed chunks from
the previous backup run, thereby containing the sequential file
payloads at calculated offsets relative to the start of the appendix
section.

Metadata comparison and calculation of the chunks to be indexed for the
appendix section are performed using the catalog of a previous backup
as reference. In order to be able to calculate the offsets, the current
catalog format is extended to include each file's offset within the
pxar archive byte stream. This allows finding the required chunk
indexes, the start padding within the concatenated chunks and the total
bytes introduced by the chunks.
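
The offset bookkeeping described above can be sketched as follows (a
minimal illustration only; all names are hypothetical and do not match
the actual implementation):

```rust
// Hypothetical sketch of the appendix offset arithmetic: given the
// appendix start within the archive, a file's offset relative to the
// appendix start (as recorded in the catalog), and the file size,
// compute the byte range of the payload within the archive stream.
fn payload_range(appendix_start: u64, offset_in_appendix: u64, file_size: u64) -> (u64, u64) {
    let start = appendix_start + offset_in_appendix;
    (start, start + file_size)
}

fn main() {
    // e.g. appendix starts at byte 4096, payload 512 bytes in, 1024 bytes long
    let (start, end) = payload_range(4096, 512, 1024);
    assert_eq!((start, end), (4608, 5632));
    println!("payload spans bytes {start}..{end}");
}
```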

During encoding, the chunks needed for the appendix section are
injected into the pxar archive after forcing a chunk boundary once
regular pxar encoding is finished. Finally, a pxar archive containing
an appendix section is marked as such by appending a final pxar goodbye
lookup table that only contains the offset to the appendix section
start and the total size of that section, which is needed for random
access, e.g. when mounting the archive via the fuse filesystem
implementation.

Currently, the code assumes the reference backup (the previous run) to
be a regular backup without an appendix section, and the catalog for
that backup to already contain the required additional offset
information.

An invocation therefore looks like:
```bash
proxmox-backup-client backup <label>.pxar:<source-path>
proxmox-backup-client backup <label>.pxar:<source-path> --incremental
```

pxar:

Christian Ebner (8):
  fix #3174: encoder: impl fn new for LinkOffset
  fix #3174: decoder: factor out skip_bytes from skip_entry
  fix #3174: decoder: impl skip_bytes for sync dec
  fix #3174: metadata: impl fn to calc byte size
  fix #3174: enc/dec: impl PXAR_APPENDIX_REF entrytype
  fix #3174: enc/dec: impl PXAR_APPENDIX entrytype
  fix #3174: encoder: add helper to incr encoder pos
  fix #3174: enc/dec: impl PXAR_APPENDIX_TAIL entrytype

 examples/mk-format-hashes.rs | 11 +++++
 examples/pxarcmd.rs          |  4 +-
 src/accessor/mod.rs          | 46 ++++++++++++++++++++
 src/decoder/mod.rs           | 38 +++++++++++++---
 src/decoder/sync.rs          |  6 +++
 src/encoder/aio.rs           | 36 ++++++++++++++--
 src/encoder/mod.rs           | 84 +++++++++++++++++++++++++++++++++++-
 src/encoder/sync.rs          | 32 +++++++++++++-
 src/format/mod.rs            | 16 +++++++
 src/lib.rs                   | 54 +++++++++++++++++++++++
 10 files changed, 312 insertions(+), 15 deletions(-)

proxmox-backup:

Christian Ebner (12):
  fix #3174: index: add fn index list from start/end-offsets
  fix #3174: index: add fn digest for DynamicEntry
  fix #3174: api: double catalog upload size
  fix #3174: catalog: incl pxar archives file offset
  fix #3174: archiver/extractor: impl appendix ref
  fix #3174: extractor: impl seq restore from appendix
  fix #3174: archiver: store ref to previous backup
  fix #3174: upload stream: impl reused chunk injector
  fix #3174: chunker: add forced boundaries
  fix #3174: backup writer: inject queued chunk in upload stream
  fix #3174: archiver: reuse files with unchanged metadata
  fix #3174: client: Add incremental flag to backup creation

 examples/test_chunk_speed2.rs                 |   9 +-
 pbs-client/src/backup_writer.rs               |  88 ++++---
 pbs-client/src/chunk_stream.rs                |  41 +++-
 pbs-client/src/inject_reused_chunks.rs        | 123 ++++++++++
 pbs-client/src/lib.rs                         |   1 +
 pbs-client/src/pxar/create.rs                 | 217 ++++++++++++++++--
 pbs-client/src/pxar/extract.rs                | 141 ++++++++++++
 pbs-client/src/pxar/mod.rs                    |   2 +-
 pbs-client/src/pxar/tools.rs                  |   9 +
 pbs-client/src/pxar_backup_stream.rs          |   8 +-
 pbs-datastore/src/catalog.rs                  | 122 ++++++++--
 pbs-datastore/src/dynamic_index.rs            |  38 +++
 proxmox-backup-client/src/main.rs             | 142 +++++++++++-
 .../src/proxmox_restore_daemon/api.rs         |  15 +-
 pxar-bin/src/main.rs                          |  22 +-
 src/api2/backup/upload_chunk.rs               |   4 +-
 src/tape/file_formats/snapshot_archive.rs     |   2 +-
 tests/catar.rs                                |   3 +
 18 files changed, 886 insertions(+), 101 deletions(-)
 create mode 100644 pbs-client/src/inject_reused_chunks.rs

-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-27 12:08   ` Wolfgang Bumiller
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 2/20] fix #3174: decoder: factor out skip_bytes from skip_entry Christian Ebner
                   ` (19 subsequent siblings)
  20 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Allows generating a new LinkOffset for storing the offset of regular
files in the backup catalog, based on the provided archiver position.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/encoder/mod.rs | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
index 0d342ec..710ed98 100644
--- a/src/encoder/mod.rs
+++ b/src/encoder/mod.rs
@@ -31,6 +31,12 @@ pub use sync::Encoder;
 pub struct LinkOffset(u64);
 
 impl LinkOffset {
+    /// Create a new link from the raw byte offset.
+    #[inline]
+    pub fn new(raw: u64) -> Self {
+        Self(raw)
+    }
+
     /// Get the raw byte offset of this link.
     #[inline]
     pub fn raw(self) -> u64 {
-- 
2.39.2
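A minimal usage sketch of the new constructor. The newtype is mirrored
here only so the example is self-contained; real code should use the
`LinkOffset` type from the pxar crate:

```rust
// Mirror of pxar's LinkOffset newtype, reproduced for illustration.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct LinkOffset(u64);

impl LinkOffset {
    /// Create a new link from the raw byte offset.
    #[inline]
    pub fn new(raw: u64) -> Self {
        Self(raw)
    }

    /// Get the raw byte offset of this link.
    #[inline]
    pub fn raw(self) -> u64 {
        self.0
    }
}

fn main() {
    // Round-trip: an offset read back from the catalog can now be
    // turned into a LinkOffset without going through the encoder.
    let link = LinkOffset::new(1234);
    assert_eq!(link.raw(), 1234);
}
```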





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC pxar 2/20] fix #3174: decoder: factor out skip_bytes from skip_entry
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-27 11:32   ` Wolfgang Bumiller
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 3/20] fix #3174: decoder: impl skip_bytes for sync dec Christian Ebner
                   ` (18 subsequent siblings)
  20 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Allows skipping bytes independently of the current entry's header, as
implemented by skip_entry.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/decoder/mod.rs | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/src/decoder/mod.rs b/src/decoder/mod.rs
index d1fb911..2ca263b 100644
--- a/src/decoder/mod.rs
+++ b/src/decoder/mod.rs
@@ -562,20 +562,25 @@ impl<I: SeqRead> DecoderImpl<I> {
     // These utilize additional information and hence are not part of the `dyn SeqRead` impl.
     //
 
-    async fn skip_entry(&mut self, offset: u64) -> io::Result<()> {
-        let mut len = self.current_header.content_size() - offset;
+    async fn skip_bytes(&mut self, len: usize) -> io::Result<()> {
+        let mut remaining = len;
         let scratch = scratch_buffer();
-        while len >= (scratch.len() as u64) {
+        while remaining >= scratch.len() {
             seq_read_exact(&mut self.input, scratch).await?;
-            len -= scratch.len() as u64;
+            remaining -= scratch.len();
         }
-        let len = len as usize;
-        if len > 0 {
-            seq_read_exact(&mut self.input, &mut scratch[..len]).await?;
+        if remaining > 0 {
+            seq_read_exact(&mut self.input, &mut scratch[..remaining]).await?;
         }
         Ok(())
     }
 
+    async fn skip_entry(&mut self, offset: u64) -> io::Result<()> {
+        let len =
+            usize::try_from(self.current_header.content_size() - offset).map_err(io_err_other)?;
+        self.skip_bytes(len).await
+    }
+
     async fn read_entry_as_bytes(&mut self) -> io::Result<Vec<u8>> {
         let size = usize::try_from(self.current_header.content_size()).map_err(io_err_other)?;
         let data = seq_read_exact_data(&mut self.input, size).await?;
-- 
2.39.2
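The factored-out scratch-buffer loop can be sketched in standalone form
against `std::io::Read` (the real code works on the async `SeqRead`
trait and a shared scratch buffer; this is only an illustration):

```rust
use std::io::{self, Read};

// Skip `len` bytes from a sequential reader by repeatedly reading into
// a small scratch buffer, mirroring the skip_bytes loop in the patch.
fn skip_bytes<R: Read>(input: &mut R, len: usize) -> io::Result<()> {
    let mut scratch = [0u8; 8]; // tiny for the example; real code uses a larger buffer
    let mut remaining = len;
    while remaining >= scratch.len() {
        input.read_exact(&mut scratch)?;
        remaining -= scratch.len();
    }
    if remaining > 0 {
        input.read_exact(&mut scratch[..remaining])?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let data: Vec<u8> = (0u8..32).collect();
    let mut cursor = io::Cursor::new(data);
    skip_bytes(&mut cursor, 10)?;
    let mut next = [0u8; 1];
    cursor.read_exact(&mut next)?;
    assert_eq!(next[0], 10); // first byte after the skipped range
    Ok(())
}
```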





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC pxar 3/20] fix #3174: decoder: impl skip_bytes for sync dec
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 2/20] fix #3174: decoder: factor out skip_bytes from skip_entry Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size Christian Ebner
                   ` (17 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Allows skipping over a given number of bytes during pxar archive
decoding. This is used to skip over unneeded contents in a pxar
appendix.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/decoder/sync.rs | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/src/decoder/sync.rs b/src/decoder/sync.rs
index 5597a03..a3820c8 100644
--- a/src/decoder/sync.rs
+++ b/src/decoder/sync.rs
@@ -81,6 +81,12 @@ impl<T: SeqRead> Decoder<T> {
     pub fn enable_goodbye_entries(&mut self, on: bool) {
         self.inner.with_goodbye_tables = on;
     }
+
+    /// Skip bytes in decoder stream.
+    pub fn skip_bytes(&mut self, len: u64) -> io::Result<()> {
+        let len = usize::try_from(len).unwrap();
+        poll_result_once(self.inner.skip_bytes(len))
+    }
 }
 
 impl<T: SeqRead> Iterator for Decoder<T> {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (2 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 3/20] fix #3174: decoder: impl skip_bytes for sync dec Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-27 11:38   ` Wolfgang Bumiller
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 5/20] fix #3174: enc/dec: impl PXAR_APPENDIX_REF entrytype Christian Ebner
                   ` (16 subsequent siblings)
  20 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Add a helper function to calculate the byte size of pxar Metadata
objects, needed to recalculate offsets when creating archives with
appendix sections.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/lib.rs | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/src/lib.rs b/src/lib.rs
index 210c4b1..ed7ba40 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -144,6 +144,50 @@ impl Metadata {
     pub fn builder_from_stat(stat: &libc::stat) -> MetadataBuilder {
         MetadataBuilder::new(0).fill_from_stat(stat)
     }
+
+    /// Calculate the number of bytes when serialized in pxar archive
+    pub fn calculate_byte_len(&self) -> usize {
+        let mut bytes = mem::size_of::<format::Header>();
+        bytes += mem::size_of_val(&self.stat);
+        for xattr in &self.xattrs {
+            bytes += mem::size_of::<format::Header>();
+            bytes += mem::size_of_val(xattr);
+        }
+        for acl_user in &self.acl.users {
+            bytes += mem::size_of::<format::Header>();
+            bytes += mem::size_of_val(acl_user);
+        }
+        for acl_group in &self.acl.groups {
+            bytes += mem::size_of::<format::Header>();
+            bytes += mem::size_of_val(acl_group);
+        }
+        if let Some(group_obj) = &self.acl.group_obj {
+            bytes += mem::size_of::<format::Header>();
+            bytes += mem::size_of_val(group_obj);
+        }
+        if let Some(default) = &self.acl.default {
+            bytes += mem::size_of::<format::Header>();
+            bytes += mem::size_of_val(default);
+        }
+        for acl_default_user in &self.acl.default_users {
+            bytes += mem::size_of::<format::Header>();
+            bytes += mem::size_of_val(acl_default_user);
+        }
+        for acl_default_group in &self.acl.default_groups {
+            bytes += mem::size_of::<format::Header>();
+            bytes += mem::size_of_val(acl_default_group);
+        }
+        if let Some(fcaps) = &self.fcaps {
+            bytes += mem::size_of::<format::Header>();
+            bytes += mem::size_of_val(fcaps);
+        }
+        if let Some(quota_project_id) = &self.quota_project_id {
+            bytes += mem::size_of::<format::Header>();
+            bytes += mem::size_of_val(quota_project_id);
+        }
+
+        bytes
+    }
 }
 
 impl From<MetadataBuilder> for Metadata {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC pxar 5/20] fix #3174: enc/dec: impl PXAR_APPENDIX_REF entrytype
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (3 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 6/20] fix #3174: enc/dec: impl PXAR_APPENDIX entrytype Christian Ebner
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Add an additional entry type for regular files to store a reference to
the appendix section of the pxar archive. The position relative to the
appendix start is stored, in order to be able to access the file
payload within the appendix.

This new entry type is used to reference the contents of existing file
payload chunks for unchanged file payloads.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 examples/mk-format-hashes.rs |  5 +++++
 src/decoder/mod.rs           | 10 ++++++++++
 src/encoder/aio.rs           | 18 ++++++++++++++++++
 src/encoder/mod.rs           | 36 ++++++++++++++++++++++++++++++++++++
 src/encoder/sync.rs          | 16 ++++++++++++++++
 src/format/mod.rs            |  5 +++++
 src/lib.rs                   |  6 ++++++
 7 files changed, 96 insertions(+)

diff --git a/examples/mk-format-hashes.rs b/examples/mk-format-hashes.rs
index 1ad606c..8b4f5de 100644
--- a/examples/mk-format-hashes.rs
+++ b/examples/mk-format-hashes.rs
@@ -41,6 +41,11 @@ const CONSTANTS: &[(&str, &str, &str)] = &[
         "PXAR_PAYLOAD",
         "__PROXMOX_FORMAT_PXAR_PAYLOAD__",
     ),
+    (
+        "Marks the beginning of an appendix reference for regular files",
+        "PXAR_APPENDIX_REF",
+        "__PROXMOX_FORMAT_PXAR_APPENDIX_REF__",
+    ),
     (
         "Marks item as entry of goodbye table",
         "PXAR_GOODBYE",
diff --git a/src/decoder/mod.rs b/src/decoder/mod.rs
index 2ca263b..143a01d 100644
--- a/src/decoder/mod.rs
+++ b/src/decoder/mod.rs
@@ -526,6 +526,16 @@ impl<I: SeqRead> DecoderImpl<I> {
                 self.entry.kind = EntryKind::Device(self.read_device().await?);
                 return Ok(ItemResult::Entry);
             }
+            format::PXAR_APPENDIX_REF => {
+                let bytes = self.read_entry_as_bytes().await?;
+                let appendix_offset = u64::from_le_bytes(bytes[0..8].try_into().unwrap());
+                let file_size = u64::from_le_bytes(bytes[8..16].try_into().unwrap());
+                self.entry.kind = EntryKind::AppendixRef {
+                    appendix_offset,
+                    file_size,
+                };
+                return Ok(ItemResult::Entry);
+            }
             format::PXAR_PAYLOAD => {
                 let offset = seq_read_position(&mut self.input).await.transpose()?;
                 self.entry.kind = EntryKind::File {
diff --git a/src/encoder/aio.rs b/src/encoder/aio.rs
index ad25fea..1d8e635 100644
--- a/src/encoder/aio.rs
+++ b/src/encoder/aio.rs
@@ -112,6 +112,24 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
         self.inner.finish().await
     }
 
+    /// Add reference to archive appendix
+    pub async fn add_appendix_ref<PF: AsRef<Path>>(
+        &mut self,
+        metadata: &Metadata,
+        file_name: PF,
+        appendix_offset: u64,
+        file_size: u64,
+    ) -> io::Result<()> {
+        self.inner
+            .add_appendix_ref(
+                metadata,
+                file_name.as_ref(),
+                appendix_offset,
+                file_size,
+            )
+            .await
+    }
+
     /// Add a symbolic link to the archive.
     pub async fn add_symlink<PF: AsRef<Path>, PT: AsRef<Path>>(
         &mut self,
diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
index 710ed98..ddb0125 100644
--- a/src/encoder/mod.rs
+++ b/src/encoder/mod.rs
@@ -437,6 +437,42 @@ impl<'a, T: SeqWrite + 'a> EncoderImpl<'a, T> {
         Ok(offset)
     }
 
+    /// Add reference to pxar archive appendix
+    pub async fn add_appendix_ref(
+        &mut self,
+        metadata: &Metadata,
+        file_name: &Path,
+        appendix_offset: u64,
+        file_size: u64,
+    ) -> io::Result<()> {
+        self.check()?;
+
+        let file_offset = self.position();
+        let file_name = file_name.as_os_str().as_bytes();
+        self.start_file_do(Some(metadata), file_name).await?;
+
+        let mut data = Vec::with_capacity(2 * 8);
+        data.extend(&appendix_offset.to_le_bytes());
+        data.extend(&file_size.to_le_bytes());
+        seq_write_pxar_entry(
+            self.output.as_mut(),
+            format::PXAR_APPENDIX_REF,
+            &data,
+            &mut self.state.write_position,
+        )
+        .await?;
+
+        let end_offset = self.position();
+
+        self.state.items.push(GoodbyeItem {
+            hash: format::hash_filename(file_name),
+            offset: file_offset,
+            size: end_offset - file_offset,
+        });
+
+        Ok(())
+    }
+
     /// Return a file offset usable with `add_hardlink`.
     pub async fn add_symlink(
         &mut self,
diff --git a/src/encoder/sync.rs b/src/encoder/sync.rs
index 1ec91b8..6cac7eb 100644
--- a/src/encoder/sync.rs
+++ b/src/encoder/sync.rs
@@ -110,6 +110,22 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
         poll_result_once(self.inner.finish())
     }
 
+    /// Add reference to archive appendix
+    pub async fn add_appendix_ref<PF: AsRef<Path>>(
+        &mut self,
+        metadata: &Metadata,
+        file_name: PF,
+        appendix_offset: u64,
+        file_size: u64,
+    ) -> io::Result<()> {
+        poll_result_once(self.inner.add_appendix_ref(
+            metadata,
+            file_name.as_ref(),
+            appendix_offset,
+            file_size,
+        ))
+    }
+
     /// Add a symbolic link to the archive.
     pub fn add_symlink<PF: AsRef<Path>, PT: AsRef<Path>>(
         &mut self,
diff --git a/src/format/mod.rs b/src/format/mod.rs
index 72a193c..5eb7562 100644
--- a/src/format/mod.rs
+++ b/src/format/mod.rs
@@ -22,6 +22,7 @@
 //!   * `FCAPS`             -- file capability in Linux disk format
 //!   * `QUOTA_PROJECT_ID`  -- the ext4/xfs quota project ID
 //!   * `PAYLOAD`           -- file contents, if it is one
+//!   * `APPENDIX_REF`      -- start offset and size of a file entry relative to the appendix start
 //!   * `SYMLINK`           -- symlink target, if it is one
 //!   * `DEVICE`            -- device major/minor, if it is a block/char device
 //!
@@ -99,6 +100,8 @@ pub const PXAR_QUOTA_PROJID: u64 = 0xe07540e82f7d1cbb;
 pub const PXAR_HARDLINK: u64 = 0x51269c8422bd7275;
 /// Marks the beginnig of the payload (actual content) of regular files
 pub const PXAR_PAYLOAD: u64 = 0x28147a1b0b7c1a25;
+/// Marks the beginning of an appendix reference for regular files
+pub const PXAR_APPENDIX_REF: u64 = 0x849b4a17e0234f8e;
 /// Marks item as entry of goodbye table
 pub const PXAR_GOODBYE: u64 = 0x2fec4fa642d5731d;
 /// The end marker used in the GOODBYE object
@@ -151,6 +154,7 @@ impl Header {
             PXAR_ACL_GROUP_OBJ => size_of::<acl::GroupObject>() as u64,
             PXAR_QUOTA_PROJID => size_of::<QuotaProjectId>() as u64,
             PXAR_ENTRY => size_of::<Stat>() as u64,
+            PXAR_APPENDIX_REF => u64::MAX - (size_of::<Self>() as u64),
             PXAR_PAYLOAD | PXAR_GOODBYE => u64::MAX - (size_of::<Self>() as u64),
             _ => u64::MAX - (size_of::<Self>() as u64),
         }
@@ -191,6 +195,7 @@ impl Display for Header {
             PXAR_ACL_GROUP_OBJ => "ACL_GROUP_OBJ",
             PXAR_QUOTA_PROJID => "QUOTA_PROJID",
             PXAR_ENTRY => "ENTRY",
+            PXAR_APPENDIX_REF => "APPENDIX_REF",
             PXAR_PAYLOAD => "PAYLOAD",
             PXAR_GOODBYE => "GOODBYE",
             _ => "UNKNOWN",
diff --git a/src/lib.rs b/src/lib.rs
index ed7ba40..6a30fc3 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -410,6 +410,12 @@ pub enum EntryKind {
         offset: Option<u64>,
     },
 
+    /// Reference to pxar archive appendix
+    AppendixRef {
+        appendix_offset: u64,
+        file_size: u64,
+    },
+
     /// Directory entry. When iterating through an archive, the contents follow next.
     Directory,
 
-- 
2.39.2
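The 16-byte payload layout of the new PXAR_APPENDIX_REF entry (two
little-endian u64 values, appendix offset followed by file size) can be
sketched in isolation; the helper names here are illustrative only:

```rust
// Encode the PXAR_APPENDIX_REF payload: appendix offset, then file
// size, both as little-endian u64 (matching the encoder side above).
fn encode_appendix_ref(appendix_offset: u64, file_size: u64) -> Vec<u8> {
    let mut data = Vec::with_capacity(2 * 8);
    data.extend(&appendix_offset.to_le_bytes());
    data.extend(&file_size.to_le_bytes());
    data
}

// Decode it back, mirroring the decoder side of the patch.
fn decode_appendix_ref(bytes: &[u8]) -> (u64, u64) {
    let appendix_offset = u64::from_le_bytes(bytes[0..8].try_into().unwrap());
    let file_size = u64::from_le_bytes(bytes[8..16].try_into().unwrap());
    (appendix_offset, file_size)
}

fn main() {
    let encoded = encode_appendix_ref(0x1000, 512);
    assert_eq!(encoded.len(), 16);
    assert_eq!(decode_appendix_ref(&encoded), (0x1000, 512));
}
```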





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC pxar 6/20] fix #3174: enc/dec: impl PXAR_APPENDIX entrytype
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (4 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 5/20] fix #3174: enc/dec: impl PXAR_APPENDIX_REF entrytype Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos Christian Ebner
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Add an additional entry type marking the start of a pxar archive
appendix section. The appendix is a concatenation of possibly
uncorrelated chunks and therefore no longer follows the pxar archive
format. The appendix is only used to access the file metadata and
payloads when a PXAR_APPENDIX_REF entry was encountered in the archive
before this point.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 examples/mk-format-hashes.rs |  1 +
 src/decoder/mod.rs           |  9 +++++++++
 src/encoder/aio.rs           |  7 +++++++
 src/encoder/mod.rs           | 20 ++++++++++++++++++++
 src/encoder/sync.rs          |  7 +++++++
 src/format/mod.rs            |  7 +++++++
 src/lib.rs                   |  4 ++++
 7 files changed, 55 insertions(+)

diff --git a/examples/mk-format-hashes.rs b/examples/mk-format-hashes.rs
index 8b4f5de..f068edd 100644
--- a/examples/mk-format-hashes.rs
+++ b/examples/mk-format-hashes.rs
@@ -12,6 +12,7 @@ const CONSTANTS: &[(&str, &str, &str)] = &[
         "__PROXMOX_FORMAT_ENTRY__",
     ),
     ("", "PXAR_FILENAME", "__PROXMOX_FORMAT_FILENAME__"),
+    ("", "PXAR_APPENDIX", "__PROXMOX_FORMAT_APPENDIX__"),
     ("", "PXAR_SYMLINK", "__PROXMOX_FORMAT_SYMLINK__"),
     ("", "PXAR_DEVICE", "__PROXMOX_FORMAT_DEVICE__"),
     ("", "PXAR_XATTR", "__PROXMOX_FORMAT_XATTR__"),
diff --git a/src/decoder/mod.rs b/src/decoder/mod.rs
index 143a01d..e54ab41 100644
--- a/src/decoder/mod.rs
+++ b/src/decoder/mod.rs
@@ -291,6 +291,7 @@ impl<I: SeqRead> DecoderImpl<I> {
                         continue;
                     }
                 }
+                format::PXAR_APPENDIX => return Ok(Some(self.entry.take())),
                 _ => io_bail!(
                     "expected filename or directory-goodbye pxar entry, got: {}",
                     self.current_header,
@@ -536,6 +537,14 @@ impl<I: SeqRead> DecoderImpl<I> {
                 };
                 return Ok(ItemResult::Entry);
             }
+            format::PXAR_APPENDIX => {
+                let bytes = self.read_entry_as_bytes().await?;
+                let total = u64::from_le_bytes(bytes[0..8].try_into().unwrap());
+                self.entry.kind = EntryKind::Appendix {
+                    total,
+                };
+                return Ok(ItemResult::Entry);
+            }
             format::PXAR_PAYLOAD => {
                 let offset = seq_read_position(&mut self.input).await.transpose()?;
                 self.entry.kind = EntryKind::File {
diff --git a/src/encoder/aio.rs b/src/encoder/aio.rs
index 1d8e635..3b6c900 100644
--- a/src/encoder/aio.rs
+++ b/src/encoder/aio.rs
@@ -130,6 +130,13 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
             .await
     }
 
+    /// Add the appendix start entry marker
+    ///
+    /// Returns the LinkOffset pointing after the entry, the appendix start offset
+    pub async fn add_appendix(&mut self, full_size: u64) -> io::Result<LinkOffset> {
+        self.inner.add_appendix(full_size).await
+    }
+
     /// Add a symbolic link to the archive.
     pub async fn add_symlink<PF: AsRef<Path>, PT: AsRef<Path>>(
         &mut self,
diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
index ddb0125..a513ce6 100644
--- a/src/encoder/mod.rs
+++ b/src/encoder/mod.rs
@@ -473,6 +473,26 @@ impl<'a, T: SeqWrite + 'a> EncoderImpl<'a, T> {
         Ok(())
     }
 
+    /// Add the appendix start entry marker
+    ///
+    /// Returns the LinkOffset pointing after the entry, the appendix start offset
+    pub async fn add_appendix(&mut self, full_size: u64) -> io::Result<LinkOffset> {
+        self.check()?;
+
+        let data = &full_size.to_le_bytes().to_vec();
+        seq_write_pxar_entry(
+            self.output.as_mut(),
+            format::PXAR_APPENDIX,
+            &data,
+            &mut self.state.write_position,
+        )
+        .await?;
+
+        let offset = self.position();
+
+        Ok(LinkOffset(offset))
+    }
+
     /// Return a file offset usable with `add_hardlink`.
     pub async fn add_symlink(
         &mut self,
diff --git a/src/encoder/sync.rs b/src/encoder/sync.rs
index 6cac7eb..372ca12 100644
--- a/src/encoder/sync.rs
+++ b/src/encoder/sync.rs
@@ -126,6 +126,13 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
         ))
     }
 
+    /// Add the appendix start entry marker
+    ///
+    /// Returns the LinkOffset pointing after the entry, the appendix start offset
+    pub async fn add_appendix(&mut self, full_size: u64) -> io::Result<LinkOffset> {
+        poll_result_once(self.inner.add_appendix(full_size))
+    }
+
     /// Add a symbolic link to the archive.
     pub fn add_symlink<PF: AsRef<Path>, PT: AsRef<Path>>(
         &mut self,
diff --git a/src/format/mod.rs b/src/format/mod.rs
index 5eb7562..8254df9 100644
--- a/src/format/mod.rs
+++ b/src/format/mod.rs
@@ -35,6 +35,12 @@
 //!   * `<archive>`         -- serialization of the second directory entry
 //!   * ...
 //!   * `GOODBYE`           -- lookup table at the end of a list of directory entries
+//!
+//! For backups referencing previous backups to skip file payloads, the archive is followed by an
+//! appendix marker after which the concatenated pxar archive fragments containing the file
+//! payloads are appended. They are NOT guaranteed to follow the full pxar structure and should
+//! only be used to extract the file payloads by given offset.
+//!   * `APPENDIX`          -- pxar archive fragments containing file payloads
 
 use std::cmp::Ordering;
 use std::ffi::{CStr, OsStr};
@@ -85,6 +91,7 @@ pub const PXAR_ENTRY: u64 = 0xd5956474e588acef;
 /// Previous version of the entry struct
 pub const PXAR_ENTRY_V1: u64 = 0x11da850a1c1cceff;
 pub const PXAR_FILENAME: u64 = 0x16701121063917b3;
+pub const PXAR_APPENDIX: u64 = 0x9ff6c9507864b38d;
 pub const PXAR_SYMLINK: u64 = 0x27f971e7dbf5dc5f;
 pub const PXAR_DEVICE: u64 = 0x9fc9e906586d5ce9;
 pub const PXAR_XATTR: u64 = 0x0dab0229b57dcd03;
diff --git a/src/lib.rs b/src/lib.rs
index 6a30fc3..086c7f2 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -416,6 +416,10 @@ pub enum EntryKind {
         file_size: u64,
     },
 
+    Appendix {
+        total: u64,
+    },
+
     /// Directory entry. When iterating through an archive, the contents follow next.
     Directory,
 
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (5 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 6/20] fix #3174: enc/dec: impl PXAR_APPENDIX entrytype Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-27 12:07   ` Wolfgang Bumiller
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 8/20] fix #3174: enc/dec: impl PXAR_APPENDIX_TAIL entrytype Christian Ebner
                   ` (13 subsequent siblings)
  20 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Adds a helper to increase the encoder position by a given size. This is
used to increase the position when adding an appendix section to the
pxar stream, as these bytes are never encoded directly but rather
referenced by already existing chunks.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/encoder/aio.rs  | 5 +++++
 src/encoder/mod.rs  | 6 ++++++
 src/encoder/sync.rs | 5 +++++
 3 files changed, 16 insertions(+)

diff --git a/src/encoder/aio.rs b/src/encoder/aio.rs
index 3b6c900..3587f65 100644
--- a/src/encoder/aio.rs
+++ b/src/encoder/aio.rs
@@ -112,6 +112,11 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
         self.inner.finish().await
     }
 
+    /// Add size to encoders position and return new position.
+    pub fn position_add(&mut self, size: u64) -> u64 {
+        self.inner.position_add(size)
+    }
+
     /// Add reference to archive appendix
     pub async fn add_appendix_ref<PF: AsRef<Path>>(
         &mut self,
diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
index a513ce6..abe21f2 100644
--- a/src/encoder/mod.rs
+++ b/src/encoder/mod.rs
@@ -626,6 +626,12 @@ impl<'a, T: SeqWrite + 'a> EncoderImpl<'a, T> {
         self.state.write_position
     }
 
+    #[inline]
+    pub fn position_add(&mut self, size: u64) -> u64 {
+        self.state.write_position += size;
+        self.state.write_position
+    }
+
     pub async fn create_directory(
         &mut self,
         file_name: &Path,
diff --git a/src/encoder/sync.rs b/src/encoder/sync.rs
index 372ca12..b3d7c44 100644
--- a/src/encoder/sync.rs
+++ b/src/encoder/sync.rs
@@ -110,6 +110,11 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
         poll_result_once(self.inner.finish())
     }
 
+    /// Add size to the encoder's position and return the new position.
+    pub fn position_add(&mut self, size: u64) -> u64 {
+        self.inner.position_add(size)
+    }
+
     /// Add reference to archive appendix
     pub async fn add_appendix_ref<PF: AsRef<Path>>(
         &mut self,
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC pxar 8/20] fix #3174: enc/dec: impl PXAR_APPENDIX_TAIL entrytype
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (6 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 09/20] fix #3174: index: add fn index list from start/end-offsets Christian Ebner
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

The PXAR_APPENDIX_TAIL entry marks pxar archives containing an appendix
section. It has the same size as a goodbye tail marker item, so that
archives with and without such a section can easily be read and
distinguished.

This also implements the accessor used by e.g. the fuse implementation
to perform random I/O on the archive. The accessor reads the last entry
and stores the appendix offset if present, in order to recalculate the
actual file payload offset when encountering an appendix reference
entry in the archive.
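
A rough sketch of the distinction (simplified, not the actual pxar types):
the archive tail is a 24-byte item of three little-endian u64s (hash,
offset, size), and matching the hash against the two tail marker constants
from this patch tells the two archive flavors apart. The function name
`classify_tail` is made up for illustration.

```rust
// Constants copied from src/format/mod.rs in this series.
const PXAR_GOODBYE_TAIL_MARKER: u64 = 0xef5eed5b753e1555;
const PXAR_APPENDIX_TAIL: u64 = 0x5b1b9abb7ae454f1;

/// Returns Some(Some(appendix_offset)) for archives with an appendix,
/// Some(None) for plain archives, and None for an unrecognized tail.
fn classify_tail(tail: &[u8; 24]) -> Option<Option<u64>> {
    let hash = u64::from_le_bytes(tail[0..8].try_into().unwrap());
    let offset = u64::from_le_bytes(tail[8..16].try_into().unwrap());
    match hash {
        PXAR_APPENDIX_TAIL => Some(Some(offset)),
        PXAR_GOODBYE_TAIL_MARKER => Some(None),
        _ => None,
    }
}
```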

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 examples/mk-format-hashes.rs |  5 ++++
 examples/pxarcmd.rs          |  4 ++--
 src/accessor/mod.rs          | 46 ++++++++++++++++++++++++++++++++++++
 src/encoder/aio.rs           |  6 ++---
 src/encoder/mod.rs           | 16 ++++++++++++-
 src/encoder/sync.rs          |  4 ++--
 src/format/mod.rs            |  4 ++++
 7 files changed, 77 insertions(+), 8 deletions(-)

diff --git a/examples/mk-format-hashes.rs b/examples/mk-format-hashes.rs
index f068edd..7fb938d 100644
--- a/examples/mk-format-hashes.rs
+++ b/examples/mk-format-hashes.rs
@@ -57,6 +57,11 @@ const CONSTANTS: &[(&str, &str, &str)] = &[
         "PXAR_GOODBYE_TAIL_MARKER",
         "__PROXMOX_FORMAT_PXAR_GOODBYE_TAIL_MARKER__",
     ),
+    (
+        "Marks the end of an archive containing an appendix section",
+        "PXAR_APPENDIX_TAIL",
+        "__PROXMOX_FORMAT_APPENDIX_TAIL__",
+    ),
 ];
 
 fn main() {
diff --git a/examples/pxarcmd.rs b/examples/pxarcmd.rs
index e0c779d..c7848cc 100644
--- a/examples/pxarcmd.rs
+++ b/examples/pxarcmd.rs
@@ -105,7 +105,7 @@ fn cmd_create(mut args: std::env::ArgsOs) -> Result<(), Error> {
 
     let mut encoder = Encoder::create(file, &meta)?;
     add_directory(&mut encoder, dir, &dir_path, &mut HashMap::new())?;
-    encoder.finish()?;
+    encoder.finish(None)?;
 
     Ok(())
 }
@@ -145,7 +145,7 @@ fn add_directory<'a, T: SeqWrite + 'a>(
                 root_path,
                 &mut *hardlinks,
             )?;
-            dir.finish()?;
+            dir.finish(None)?;
         } else if file_type.is_symlink() {
             todo!("symlink handling");
         } else if file_type.is_file() {
diff --git a/src/accessor/mod.rs b/src/accessor/mod.rs
index 6a2de73..1c19d7b 100644
--- a/src/accessor/mod.rs
+++ b/src/accessor/mod.rs
@@ -182,6 +182,7 @@ pub(crate) struct AccessorImpl<T> {
     input: T,
     size: u64,
     caches: Arc<Caches>,
+    appendix_offset: Option<u64>,
 }
 
 impl<T: ReadAt> AccessorImpl<T> {
@@ -190,10 +191,22 @@ impl<T: ReadAt> AccessorImpl<T> {
             io_bail!("too small to contain a pxar archive");
         }
 
+        let tail_offset = size - (size_of::<GoodbyeItem>() as u64);
+        let tail: GoodbyeItem = read_entry_at(&input, tail_offset).await?;
+
+        let (appendix_offset, size) = if tail.hash == format::PXAR_APPENDIX_TAIL {
+            (Some(tail.offset), size - 40)
+        } else if tail.hash != format::PXAR_GOODBYE_TAIL_MARKER {
+            io_bail!("no goodbye tail marker found");
+        } else {
+            (None, size)
+        };
+
         Ok(Self {
             input,
             size,
             caches: Arc::new(Caches::default()),
+            appendix_offset,
         })
     }
 
@@ -207,6 +220,7 @@ impl<T: ReadAt> AccessorImpl<T> {
             self.size,
             "/".into(),
             Arc::clone(&self.caches),
+            self.appendix_offset,
         )
         .await
     }
@@ -263,6 +277,7 @@ impl<T: Clone + ReadAt> AccessorImpl<T> {
             self.size,
             "/".into(),
             Arc::clone(&self.caches),
+            self.appendix_offset,
         )
         .await
     }
@@ -274,6 +289,7 @@ impl<T: Clone + ReadAt> AccessorImpl<T> {
             offset,
             "/".into(),
             Arc::clone(&self.caches),
+            self.appendix_offset,
         )
         .await
     }
@@ -369,6 +385,7 @@ pub(crate) struct DirectoryImpl<T> {
     table: Arc<[GoodbyeItem]>,
     path: PathBuf,
     caches: Arc<Caches>,
+    appendix_offset: Option<u64>,
 }
 
 impl<T: Clone + ReadAt> DirectoryImpl<T> {
@@ -378,6 +395,7 @@ impl<T: Clone + ReadAt> DirectoryImpl<T> {
         end_offset: u64,
         path: PathBuf,
         caches: Arc<Caches>,
+        appendix_offset: Option<u64>,
     ) -> io::Result<DirectoryImpl<T>> {
         let tail = Self::read_tail_entry(&input, end_offset).await?;
 
@@ -407,6 +425,7 @@ impl<T: Clone + ReadAt> DirectoryImpl<T> {
             table: table.as_ref().map_or_else(|| Arc::new([]), Arc::clone),
             path,
             caches,
+            appendix_offset,
         };
 
         // sanity check:
@@ -516,6 +535,32 @@ impl<T: Clone + ReadAt> DirectoryImpl<T> {
             .next()
             .await
             .ok_or_else(|| io_format_err!("unexpected EOF while decoding directory entry"))??;
+
+        if let EntryKind::AppendixRef {
+            appendix_offset,
+            file_size,
+        } = entry.kind()
+        {
+            let appendix_start = match self.appendix_offset {
+                Some(appendix_start) => appendix_start,
+                None => io_bail!("missing required appendix start offset information"),
+            };
+
+            let name = file_name.ok_or_else(|| io_format_err!("missing required filename"))?;
+            let c_string = std::ffi::CString::new(name.as_os_str().as_bytes())?;
+            let start =
+                appendix_start + appendix_offset + 16 + c_string.as_bytes_with_nul().len() as u64;
+            let end = start + file_size;
+            decoder = self.get_decoder(start..end, file_name).await?;
+
+            let entry = decoder
+                .next()
+                .await
+                .ok_or_else(|| io_format_err!("unexpected EOF while decoding directory entry"))??;
+
+            return Ok((entry, decoder));
+        }
+
         Ok((entry, decoder))
     }
 
@@ -698,6 +743,7 @@ impl<T: Clone + ReadAt> FileEntryImpl<T> {
             self.entry_range_info.entry_range.end,
             self.entry.path.clone(),
             Arc::clone(&self.caches),
+            None,
         )
         .await
     }
diff --git a/src/encoder/aio.rs b/src/encoder/aio.rs
index 3587f65..2a877b1 100644
--- a/src/encoder/aio.rs
+++ b/src/encoder/aio.rs
@@ -108,8 +108,8 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
     }
 
     /// Finish this directory. This is mandatory, otherwise the `Drop` handler will `panic!`.
-    pub async fn finish(self) -> io::Result<()> {
-        self.inner.finish().await
+    pub async fn finish(self, appendix_tail: Option<(LinkOffset, u64)>) -> io::Result<()> {
+        self.inner.finish(appendix_tail).await
     }
 
     /// Add size to the encoder's position and return the new position.
@@ -333,7 +333,7 @@ mod test {
                     .await
                     .unwrap();
             }
-            encoder.finish().await.unwrap();
+            encoder.finish(None).await.unwrap();
         };
 
         fn test_send<T: Send>(_: T) {}
diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
index abe21f2..c3de8c2 100644
--- a/src/encoder/mod.rs
+++ b/src/encoder/mod.rs
@@ -822,7 +822,7 @@ impl<'a, T: SeqWrite + 'a> EncoderImpl<'a, T> {
         .await
     }
 
-    pub async fn finish(mut self) -> io::Result<()> {
+    pub async fn finish(mut self, appendix_tail: Option<(LinkOffset, u64)>) -> io::Result<()> {
         let tail_bytes = self.finish_goodbye_table().await?;
         seq_write_pxar_entry(
             self.output.as_mut(),
@@ -832,6 +832,20 @@ impl<'a, T: SeqWrite + 'a> EncoderImpl<'a, T> {
         )
         .await?;
 
+        if let Some((link_offset, size)) = appendix_tail {
+            let mut appendix_tail = Vec::new();
+            appendix_tail.append(&mut format::PXAR_APPENDIX_TAIL.to_le_bytes().to_vec());
+            appendix_tail.append(&mut link_offset.raw().to_le_bytes().to_vec());
+            appendix_tail.append(&mut size.to_le_bytes().to_vec());
+            seq_write_pxar_entry(
+                self.output.as_mut(),
+                format::PXAR_GOODBYE,
+                &appendix_tail,
+                &mut self.state.write_position,
+            )
+            .await?;
+        }
+
         if let EncoderOutput::Owned(output) = &mut self.output {
             flush(output).await?;
         }
diff --git a/src/encoder/sync.rs b/src/encoder/sync.rs
index b3d7c44..f16d2d6 100644
--- a/src/encoder/sync.rs
+++ b/src/encoder/sync.rs
@@ -106,8 +106,8 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
     }
 
     /// Finish this directory. This is mandatory, otherwise the `Drop` handler will `panic!`.
-    pub fn finish(self) -> io::Result<()> {
-        poll_result_once(self.inner.finish())
+    pub fn finish(self, appendix_tail: Option<(LinkOffset, u64)>) -> io::Result<()> {
+        poll_result_once(self.inner.finish(appendix_tail))
     }
 
     /// Add size to the encoder's position and return the new position.
diff --git a/src/format/mod.rs b/src/format/mod.rs
index 8254df9..8016ab1 100644
--- a/src/format/mod.rs
+++ b/src/format/mod.rs
@@ -41,6 +41,8 @@
 //! are appended. They are NOT guaranteed to follow the full pxar structure and should only be
 //! used to extract the file payloads by given offset.
 //!   * `APPENDIX`          -- pxar archive fragments containing file payloads
+//!   * final goodbye table
+//!   * `APPENDIX_TAIL`     -- marks the end of an archive containing a APPENDIX section
 
 use std::cmp::Ordering;
 use std::ffi::{CStr, OsStr};
@@ -113,6 +115,8 @@ pub const PXAR_APPENDIX_REF: u64 = 0x849b4a17e0234f8e;
 pub const PXAR_GOODBYE: u64 = 0x2fec4fa642d5731d;
 /// The end marker used in the GOODBYE object
 pub const PXAR_GOODBYE_TAIL_MARKER: u64 = 0xef5eed5b753e1555;
+/// Marks the end of an archive containing an appendix section
+pub const PXAR_APPENDIX_TAIL: u64 = 0x5b1b9abb7ae454f1;
 
 #[derive(Debug, Endian)]
 #[repr(C)]
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 09/20] fix #3174: index: add fn index list from start/end-offsets
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (7 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 8/20] fix #3174: enc/dec: impl PXAR_APPENDIX_TAIL entrytype Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 10/20] fix #3174: index: add fn digest for DynamicEntry Christian Ebner
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Adds a function to get the list of DynamicEntry items from a chunk
index covering the data between a given start and end offset.
This is needed in order to reference file payloads and reuse the
chunks containing them from a previous backup run.

The index entries are normalized, meaning each entry's end is set equal
to the chunk's size instead of the cumulative end offset.

In addition to the list of index entries, the padding from the start of
the first chunk to the requested start offset as well as the total size
of the covered chunks are returned. These are needed to update the
encoder position when reusing the chunks and to calculate the offsets
for decoding.
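
The padding and size calculation can be sketched as follows, under
assumptions: plain slices stand in for the DynamicIndexReader (which uses a
binary search instead of the linear scan here), and `ends` holds the
cumulative end offsets of the indexed chunks. The function name is
hypothetical.

```rust
// Returns the first and last chunk index covering [start, end], the
// padding from the first covered chunk's start to `start`, and the
// total size of the covered chunks.
fn covering_range(ends: &[u64], start: u64, end: u64) -> (usize, usize, u64, u64) {
    // First chunk whose end lies past the start offset.
    let first = ends.iter().position(|&e| e > start).unwrap();
    // First chunk whose end reaches the end offset.
    let last = ends.iter().position(|&e| e >= end).unwrap();
    let range_start = if first == 0 { 0 } else { ends[first - 1] };
    let padding_start = start - range_start;
    let size = ends[last] - range_start;
    (first, last, padding_start, size)
}
```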

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-datastore/src/dynamic_index.rs | 33 ++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/pbs-datastore/src/dynamic_index.rs b/pbs-datastore/src/dynamic_index.rs
index 71a5082e..59e57160 100644
--- a/pbs-datastore/src/dynamic_index.rs
+++ b/pbs-datastore/src/dynamic_index.rs
@@ -188,6 +188,39 @@ impl DynamicIndexReader {
             self.binary_search(middle_idx + 1, middle_end, end_idx, end, offset)
         }
     }
+
+    /// List of chunk indices containing the data from start_offset to end_offset
+    pub fn indices(
+        &self,
+        start_offset: u64,
+        end_offset: u64,
+    ) -> Result<(Vec<DynamicEntry>, usize, u64), Error> {
+        let end_idx = self.index.len() - 1;
+        let chunk_end = self.chunk_end(end_idx);
+        let start = self.binary_search(0, 0, end_idx, chunk_end, start_offset)?;
+        let end = self.binary_search(0, 0, end_idx, chunk_end, end_offset)?;
+
+        let offset_first = if start == 0 {
+            0
+        } else {
+            self.index[start - 1].end()
+        };
+
+        let size = (self.index[end].end() - offset_first).try_into()?;
+        let padding_start = start_offset - offset_first;
+
+        let mut indices = Vec::new();
+        let mut prev_end = offset_first;
+        for dynamic_entry in &self.index[start..end + 1] {
+            let mut dynamic_entry = dynamic_entry.clone();
+            // Normalize end to be equal to size
+            dynamic_entry.end_le -= prev_end;
+            prev_end += dynamic_entry.end_le;
+            indices.push(dynamic_entry);
+        }
+
+        Ok((indices, size, padding_start))
+    }
 }
 
 impl IndexFile for DynamicIndexReader {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 10/20] fix #3174: index: add fn digest for DynamicEntry
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (8 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 09/20] fix #3174: index: add fn index list from start/end-offsets Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 11/20] fix #3174: api: double catalog upload size Christian Ebner
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-datastore/src/dynamic_index.rs | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/pbs-datastore/src/dynamic_index.rs b/pbs-datastore/src/dynamic_index.rs
index 59e57160..67809e57 100644
--- a/pbs-datastore/src/dynamic_index.rs
+++ b/pbs-datastore/src/dynamic_index.rs
@@ -72,6 +72,11 @@ impl DynamicEntry {
     pub fn end(&self) -> u64 {
         u64::from_le(self.end_le)
     }
+
+    #[inline]
+    pub fn digest(&self) -> [u8; 32] {
+        self.digest.clone()
+    }
 }
 
 pub struct DynamicIndexReader {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 11/20] fix #3174: api: double catalog upload size
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (9 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 10/20] fix #3174: index: add fn digest for DynamicEntry Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 12/20] fix #3174: catalog: incl pxar archives file offset Christian Ebner
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Double the maximum catalog upload size, as the catalog needs to contain
the additional archive offset information required for referencing file
payloads in an appendix when the catalog is used to perform backups
without re-reading file payloads.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/api2/backup/upload_chunk.rs | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/api2/backup/upload_chunk.rs b/src/api2/backup/upload_chunk.rs
index 20259660..84485bfd 100644
--- a/src/api2/backup/upload_chunk.rs
+++ b/src/api2/backup/upload_chunk.rs
@@ -126,7 +126,7 @@ pub const API_METHOD_UPLOAD_FIXED_CHUNK: ApiMethod = ApiMethod::new(
                 false,
                 &IntegerSchema::new("Chunk size.")
                     .minimum(1)
-                    .maximum(1024 * 1024 * 16)
+                    .maximum(1024 * 1024 * 32)
                     .schema()
             ),
             (
@@ -135,7 +135,7 @@ pub const API_METHOD_UPLOAD_FIXED_CHUNK: ApiMethod = ApiMethod::new(
                 &IntegerSchema::new("Encoded chunk size.")
                     .minimum((std::mem::size_of::<DataBlobHeader>() as isize) + 1)
                     .maximum(
-                        1024 * 1024 * 16
+                        1024 * 1024 * 32
                             + (std::mem::size_of::<EncryptedDataBlobHeader>() as isize)
                     )
                     .schema()
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 12/20] fix #3174: catalog: incl pxar archives file offset
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (10 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 11/20] fix #3174: api: double catalog upload size Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 13/20] fix #3174: archiver/extractor: impl appendix ref Christian Ebner
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Include the stream offset for regular files in the backup catalog.
This allows calculating a file's payload offset relative to the
appendix start offset in the pxar archive, so future backup runs can
use the catalog as reference to skip over unchanged file payloads,
referencing the existing chunks instead.
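
The accessor side of this calculation (from the pxar patches earlier in the
series) can be sketched with a hypothetical helper: a file's payload inside
the appendix starts after the 16-byte entry header and the NUL-terminated
file name, relative to the appendix start.

```rust
// Hedged sketch, not the actual accessor code: compute the byte range of
// a file payload inside the archive from the stored offsets.
fn payload_range(
    appendix_start: u64,
    appendix_offset: u64,
    file_name: &str,
    file_size: u64,
) -> (u64, u64) {
    let name_len = file_name.len() as u64 + 1; // trailing NUL byte
    let start = appendix_start + appendix_offset + 16 + name_len;
    (start, start + file_size)
}
```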

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-client/src/pxar/create.rs                 |  30 +++--
 pbs-datastore/src/catalog.rs                  | 122 +++++++++++++++---
 .../src/proxmox_restore_daemon/api.rs         |   1 +
 3 files changed, 121 insertions(+), 32 deletions(-)

diff --git a/pbs-client/src/pxar/create.rs b/pbs-client/src/pxar/create.rs
index e7053d9e..0f23ed2f 100644
--- a/pbs-client/src/pxar/create.rs
+++ b/pbs-client/src/pxar/create.rs
@@ -390,12 +390,6 @@ impl Archiver {
         patterns_count: usize,
     ) -> Result<(), Error> {
         let content = generate_pxar_excludes_cli(&self.patterns[..patterns_count]);
-        if let Some(ref catalog) = self.catalog {
-            catalog
-                .lock()
-                .unwrap()
-                .add_file(file_name, content.len() as u64, 0)?;
-        }
 
         let mut metadata = Metadata::default();
         metadata.stat.mode = pxar::format::mode::IFREG | 0o600;
@@ -405,6 +399,14 @@ impl Archiver {
             .await?;
         file.write_all(&content).await?;
 
+        if let Some(ref catalog) = self.catalog {
+            let link_offset = file.file_offset();
+            catalog
+                .lock()
+                .unwrap()
+                .add_file(file_name, content.len() as u64, 0, link_offset)?;
+        }
+
         Ok(())
     }
 
@@ -572,17 +574,19 @@ impl Archiver {
                 }
 
                 let file_size = stat.st_size as u64;
-                if let Some(ref catalog) = self.catalog {
-                    catalog
-                        .lock()
-                        .unwrap()
-                        .add_file(c_file_name, file_size, stat.st_mtime)?;
-                }
-
                 let offset: LinkOffset = self
                     .add_regular_file(encoder, fd, file_name, &metadata, file_size)
                     .await?;
 
+                if let Some(ref catalog) = self.catalog {
+                    catalog.lock().unwrap().add_file(
+                        c_file_name,
+                        file_size,
+                        stat.st_mtime,
+                        offset,
+                    )?;
+                }
+
                 if stat.st_nlink > 1 {
                     self.hardlinks
                         .insert(link_info, (self.path.clone(), offset));
diff --git a/pbs-datastore/src/catalog.rs b/pbs-datastore/src/catalog.rs
index 86e20c92..1cc5421d 100644
--- a/pbs-datastore/src/catalog.rs
+++ b/pbs-datastore/src/catalog.rs
@@ -7,6 +7,7 @@ use anyhow::{bail, format_err, Error};
 use serde::{Deserialize, Serialize};
 
 use pathpatterns::{MatchList, MatchType};
+use pxar::encoder::LinkOffset;
 
 use proxmox_io::ReadExt;
 use proxmox_schema::api;
@@ -20,7 +21,13 @@ use crate::file_formats::PROXMOX_CATALOG_FILE_MAGIC_1_0;
 pub trait BackupCatalogWriter {
     fn start_directory(&mut self, name: &CStr) -> Result<(), Error>;
     fn end_directory(&mut self) -> Result<(), Error>;
-    fn add_file(&mut self, name: &CStr, size: u64, mtime: i64) -> Result<(), Error>;
+    fn add_file(
+        &mut self,
+        name: &CStr,
+        size: u64,
+        mtime: i64,
+        link_offset: LinkOffset,
+    ) -> Result<(), Error>;
     fn add_symlink(&mut self, name: &CStr) -> Result<(), Error>;
     fn add_hardlink(&mut self, name: &CStr) -> Result<(), Error>;
     fn add_block_device(&mut self, name: &CStr) -> Result<(), Error>;
@@ -94,8 +101,14 @@ pub struct DirEntry {
 /// Used to specific additional attributes inside DirEntry
 #[derive(Clone, Debug, PartialEq, Eq)]
 pub enum DirEntryAttribute {
-    Directory { start: u64 },
-    File { size: u64, mtime: i64 },
+    Directory {
+        start: u64,
+    },
+    File {
+        size: u64,
+        mtime: i64,
+        link_offset: LinkOffset,
+    },
     Symlink,
     Hardlink,
     BlockDevice,
@@ -105,7 +118,14 @@ pub enum DirEntryAttribute {
 }
 
 impl DirEntry {
-    fn new(etype: CatalogEntryType, name: Vec<u8>, start: u64, size: u64, mtime: i64) -> Self {
+    fn new(
+        etype: CatalogEntryType,
+        name: Vec<u8>,
+        start: u64,
+        size: u64,
+        mtime: i64,
+        link_offset: Option<LinkOffset>,
+    ) -> Self {
         match etype {
             CatalogEntryType::Directory => DirEntry {
                 name,
@@ -113,7 +133,11 @@ impl DirEntry {
             },
             CatalogEntryType::File => DirEntry {
                 name,
-                attr: DirEntryAttribute::File { size, mtime },
+                attr: DirEntryAttribute::File {
+                    size,
+                    mtime,
+                    link_offset: link_offset.unwrap(),
+                },
             },
             CatalogEntryType::Symlink => DirEntry {
                 name,
@@ -197,13 +221,19 @@ impl DirInfo {
             }
             DirEntry {
                 name,
-                attr: DirEntryAttribute::File { size, mtime },
+                attr:
+                    DirEntryAttribute::File {
+                        size,
+                        mtime,
+                        link_offset,
+                    },
             } => {
                 writer.write_all(&[CatalogEntryType::File as u8])?;
                 catalog_encode_u64(writer, name.len() as u64)?;
                 writer.write_all(name)?;
                 catalog_encode_u64(writer, *size)?;
                 catalog_encode_i64(writer, *mtime)?;
+                catalog_encode_u64(writer, link_offset.raw())?;
             }
             DirEntry {
                 name,
@@ -271,7 +301,9 @@ impl DirInfo {
         Ok((self.name, data))
     }
 
-    fn parse<C: FnMut(CatalogEntryType, &[u8], u64, u64, i64) -> Result<bool, Error>>(
+    fn parse<
+        C: FnMut(CatalogEntryType, &[u8], u64, u64, i64, Option<LinkOffset>) -> Result<bool, Error>,
+    >(
         data: &[u8],
         mut callback: C,
     ) -> Result<(), Error> {
@@ -300,14 +332,22 @@ impl DirInfo {
             let cont = match etype {
                 CatalogEntryType::Directory => {
                     let offset = catalog_decode_u64(&mut cursor)?;
-                    callback(etype, name, offset, 0, 0)?
+                    callback(etype, name, offset, 0, 0, None)?
                 }
                 CatalogEntryType::File => {
                     let size = catalog_decode_u64(&mut cursor)?;
                     let mtime = catalog_decode_i64(&mut cursor)?;
-                    callback(etype, name, 0, size, mtime)?
+                    let link_offset = catalog_decode_u64(&mut cursor)?;
+                    callback(
+                        etype,
+                        name,
+                        0,
+                        size,
+                        mtime,
+                        Some(LinkOffset::new(link_offset)),
+                    )?
                 }
-                _ => callback(etype, name, 0, 0, 0)?,
+                _ => callback(etype, name, 0, 0, 0, None)?,
             };
             if !cont {
                 return Ok(());
@@ -407,7 +447,13 @@ impl<W: Write> BackupCatalogWriter for CatalogWriter<W> {
         Ok(())
     }
 
-    fn add_file(&mut self, name: &CStr, size: u64, mtime: i64) -> Result<(), Error> {
+    fn add_file(
+        &mut self,
+        name: &CStr,
+        size: u64,
+        mtime: i64,
+        link_offset: LinkOffset,
+    ) -> Result<(), Error> {
         let dir = self
             .dirstack
             .last_mut()
@@ -415,7 +461,11 @@ impl<W: Write> BackupCatalogWriter for CatalogWriter<W> {
         let name = name.to_bytes().to_vec();
         dir.entries.push(DirEntry {
             name,
-            attr: DirEntryAttribute::File { size, mtime },
+            attr: DirEntryAttribute::File {
+                size,
+                mtime,
+                link_offset,
+            },
         });
         Ok(())
     }
@@ -550,8 +600,15 @@ impl<R: Read + Seek> CatalogReader<R> {
 
         let mut entry_list = Vec::new();
 
-        DirInfo::parse(&data, |etype, name, offset, size, mtime| {
-            let entry = DirEntry::new(etype, name.to_vec(), start - offset, size, mtime);
+        DirInfo::parse(&data, |etype, name, offset, size, mtime, link_offset| {
+            let entry = DirEntry::new(
+                etype,
+                name.to_vec(),
+                start - offset,
+                size,
+                mtime,
+                link_offset,
+            );
             entry_list.push(entry);
             Ok(true)
         })?;
@@ -600,12 +657,19 @@ impl<R: Read + Seek> CatalogReader<R> {
         let data = self.read_raw_dirinfo_block(start)?;
 
         let mut item = None;
-        DirInfo::parse(&data, |etype, name, offset, size, mtime| {
+        DirInfo::parse(&data, |etype, name, offset, size, mtime, link_offset| {
             if name != filename {
                 return Ok(true);
             }
 
-            let entry = DirEntry::new(etype, name.to_vec(), start - offset, size, mtime);
+            let entry = DirEntry::new(
+                etype,
+                name.to_vec(),
+                start - offset,
+                size,
+                mtime,
+                link_offset,
+            );
             item = Some(entry);
             Ok(false) // stop parsing
         })?;
@@ -628,7 +692,7 @@ impl<R: Read + Seek> CatalogReader<R> {
     pub fn dump_dir(&mut self, prefix: &std::path::Path, start: u64) -> Result<(), Error> {
         let data = self.read_raw_dirinfo_block(start)?;
 
-        DirInfo::parse(&data, |etype, name, offset, size, mtime| {
+        DirInfo::parse(&data, |etype, name, offset, size, mtime, link_offset| {
             let mut path = std::path::PathBuf::from(prefix);
             let name: &OsStr = OsStrExt::from_bytes(name);
             path.push(name);
@@ -648,7 +712,14 @@ impl<R: Read + Seek> CatalogReader<R> {
                         mtime_string = s;
                     }
 
-                    log::info!("{} {:?} {} {}", etype, path, size, mtime_string,);
+                    log::info!(
+                        "{} {:?} {} {} {:?}",
+                        etype,
+                        path,
+                        size,
+                        mtime_string,
+                        link_offset
+                    );
                 }
                 _ => {
                     log::info!("{} {:?}", etype, path);
@@ -705,9 +776,15 @@ impl<R: Read + Seek> CatalogReader<R> {
             components.push(b'/');
             components.extend(&direntry.name);
             let mut entry = ArchiveEntry::new(&components, Some(&direntry.attr));
-            if let DirEntryAttribute::File { size, mtime } = direntry.attr {
+            if let DirEntryAttribute::File {
+                size,
+                mtime,
+                link_offset,
+            } = direntry.attr
+            {
                 entry.size = size.into();
                 entry.mtime = mtime.into();
+                entry.link_offset = Some(link_offset.raw());
             }
             res.push(entry);
         }
@@ -916,6 +993,9 @@ pub struct ArchiveEntry {
     /// The file "last modified" time stamp, if entry_type is 'f' (file)
     #[serde(skip_serializing_if = "Option::is_none")]
     pub mtime: Option<i64>,
+    /// The file link offset in the pxar archive, if entry_type is 'f' (file)
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub link_offset: Option<u64>,
 }
 
 impl ArchiveEntry {
@@ -946,6 +1026,10 @@ impl ArchiveEntry {
                 Some(DirEntryAttribute::File { mtime, .. }) => Some(*mtime),
                 _ => None,
             },
+            link_offset: match entry_type {
+                Some(DirEntryAttribute::File { link_offset, .. }) => Some(link_offset.raw()),
+                _ => None,
+            },
         }
     }
 }
diff --git a/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs b/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs
index c4e97d33..95e3593b 100644
--- a/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs
+++ b/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs
@@ -109,6 +109,7 @@ fn get_dir_entry(path: &Path) -> Result<DirEntryAttribute, Error> {
         libc::S_IFREG => DirEntryAttribute::File {
             size: stat.st_size as u64,
             mtime: stat.st_mtime,
+            link_offset: pxar::encoder::LinkOffset::new(0),
         },
         libc::S_IFDIR => DirEntryAttribute::Directory { start: 0 },
         _ => bail!("unsupported file type: {}", stat.st_mode),
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 13/20] fix #3174: archiver/extractor: impl appendix ref
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (11 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 12/20] fix #3174: catalog: incl pxar archives file offset Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 14/20] fix #3174: extractor: impl seq restore from appendix Christian Ebner
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Implements the functionality to create and extract appendix references
via the pbs client.
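As a rough illustration of the concept (the names below are illustrative, not the actual pxar API): an appendix reference stores the file's payload offset relative to the start of the appendix section together with its size, so an extractor can locate the payload once the absolute position of the appendix in the archive is known.

```rust
// Illustrative sketch only -- AppendixRef and payload_range are made-up
// names, not the real pxar types. The idea: an appendix reference keeps
// an offset relative to the appendix start plus the payload size.
struct AppendixRef {
    appendix_offset: u64, // offset relative to the appendix section start
    file_size: u64,       // payload size in bytes
}

impl AppendixRef {
    /// Absolute byte range of the payload within the archive stream,
    /// given the absolute start offset of the appendix section.
    fn payload_range(&self, appendix_start: u64) -> std::ops::Range<u64> {
        let start = appendix_start + self.appendix_offset;
        start..start + self.file_size
    }
}

fn main() {
    let r = AppendixRef { appendix_offset: 512, file_size: 4096 };
    let range = r.payload_range(1_048_576);
    println!("payload at {}..{}", range.start, range.end);
}
```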

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-client/src/pxar/create.rs  | 13 +++++++++++++
 pbs-client/src/pxar/extract.rs | 16 ++++++++++++++++
 pbs-client/src/pxar/tools.rs   |  8 ++++++++
 3 files changed, 37 insertions(+)

diff --git a/pbs-client/src/pxar/create.rs b/pbs-client/src/pxar/create.rs
index 0f23ed2f..0468abe9 100644
--- a/pbs-client/src/pxar/create.rs
+++ b/pbs-client/src/pxar/create.rs
@@ -734,6 +734,19 @@ impl Archiver {
         Ok(out.file_offset())
     }
 
+    async fn add_appendix_ref<T: SeqWrite + Send>(
+        &mut self,
+        encoder: &mut Encoder<'_, T>,
+        file_name: &Path,
+        metadata: &Metadata,
+        appendix_offset: u64,
+        file_size: u64,
+    ) -> Result<(), Error> {
+        Ok(encoder
+            .add_appendix_ref(metadata, file_name, appendix_offset, file_size)
+            .await?)
+    }
+
     async fn add_symlink<T: SeqWrite + Send>(
         &mut self,
         encoder: &mut Encoder<'_, T>,
diff --git a/pbs-client/src/pxar/extract.rs b/pbs-client/src/pxar/extract.rs
index f78e06c2..d2d42749 100644
--- a/pbs-client/src/pxar/extract.rs
+++ b/pbs-client/src/pxar/extract.rs
@@ -74,6 +74,7 @@ struct ExtractorIterState {
     err_path_stack: Vec<OsString>,
     current_match: bool,
     end_reached: bool,
+    appendix_list: Vec<(PathBuf, u64, u64)>,
 }
 
 /// An [`Iterator`] that encapsulates the process of extraction in [extract_archive].
@@ -98,6 +99,7 @@ impl ExtractorIterState {
             err_path_stack: Vec::new(),
             current_match: options.extract_match_default,
             end_reached: false,
+            appendix_list: Vec::new(),
         }
     }
 }
@@ -373,6 +375,20 @@ where
                 }
                 .context(PxarExtractContext::ExtractFile)
             }
+            (
+                true,
+                EntryKind::AppendixRef {
+                    appendix_offset,
+                    file_size,
+                },
+            ) => {
+                self.state.appendix_list.push((
+                    entry.path().to_path_buf(),
+                    *appendix_offset,
+                    *file_size,
+                ));
+                Ok(())
+            }
             (false, _) => Ok(()), // skip this
         };
 
diff --git a/pbs-client/src/pxar/tools.rs b/pbs-client/src/pxar/tools.rs
index 0cfbaf5b..aac5a1e7 100644
--- a/pbs-client/src/pxar/tools.rs
+++ b/pbs-client/src/pxar/tools.rs
@@ -156,6 +156,14 @@ pub fn format_multi_line_entry(entry: &Entry) -> String {
 
     let (size, link, type_name) = match entry.kind() {
         EntryKind::File { size, .. } => (format!("{}", *size), String::new(), "file"),
+        EntryKind::AppendixRef {
+            appendix_offset,
+            file_size,
+        } => (
+            format!("{} {}", appendix_offset, file_size),
+            String::new(),
+            "appendix ref",
+        ),
         EntryKind::Symlink(link) => (
             "0".to_string(),
             format!(" -> {:?}", link.as_os_str()),
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 14/20] fix #3174: extractor: impl seq restore from appendix
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (12 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 13/20] fix #3174: archiver/extractor: impl appendix ref Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 15/20] fix #3174: archiver: store ref to previous backup Christian Ebner
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Restores the file payloads for all AppendixRef entries encountered
during the sequential restore of the pxar archive.
This is done by iterating over all the files listed in the
corresponding state variable, opening each parent directory component
while storing its metadata for later restore, then creating the file
and writing its contents to it.

When leaving the directories, their metadata is restored.
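The offset bookkeeping during this sequential restore can be sketched as follows (simplified and illustrative; `metadata_bytes` stands in for the combined entry header, filename and metadata overhead computed in the patch):

```rust
// Simplified sketch of the skip/consumed accounting used while restoring
// files from the appendix section (illustrative, not the actual API).
// Each tuple is (offset_in_appendix, metadata_bytes, payload_size).
fn skip_plan(appendix: &[(u64, u64, u64)]) -> Vec<u64> {
    let mut consumed = 0u64;
    let mut skips = Vec::new();
    for &(offset, metadata_bytes, size) in appendix {
        // bytes the decoder must skip before this entry starts
        let skip = offset - consumed;
        skips.push(skip);
        // advance past the skipped gap, the entry headers and the payload
        consumed += skip + metadata_bytes + size;
    }
    skips
}

fn main() {
    // first file directly at the appendix start,
    // second file after an 840 byte gap
    let plan = skip_plan(&[(0, 64, 4096), (5000, 64, 1024)]);
    println!("skips: {:?}", plan);
}
```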

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-client/src/pxar/create.rs  |   4 +-
 pbs-client/src/pxar/extract.rs | 125 +++++++++++++++++++++++++++++++++
 pbs-client/src/pxar/tools.rs   |   1 +
 3 files changed, 128 insertions(+), 2 deletions(-)

diff --git a/pbs-client/src/pxar/create.rs b/pbs-client/src/pxar/create.rs
index 0468abe9..c0fc5e2d 100644
--- a/pbs-client/src/pxar/create.rs
+++ b/pbs-client/src/pxar/create.rs
@@ -43,7 +43,7 @@ pub struct PxarCreateOptions {
     pub skip_lost_and_found: bool,
 }
 
-fn detect_fs_type(fd: RawFd) -> Result<i64, Error> {
+pub fn detect_fs_type(fd: RawFd) -> Result<i64, Error> {
     let mut fs_stat = std::mem::MaybeUninit::uninit();
     let res = unsafe { libc::fstatfs(fd, fs_stat.as_mut_ptr()) };
     Errno::result(res)?;
@@ -776,7 +776,7 @@ impl Archiver {
     }
 }
 
-fn get_metadata(
+pub fn get_metadata(
     fd: RawFd,
     stat: &FileStat,
     flags: Flags,
diff --git a/pbs-client/src/pxar/extract.rs b/pbs-client/src/pxar/extract.rs
index d2d42749..3570eb01 100644
--- a/pbs-client/src/pxar/extract.rs
+++ b/pbs-client/src/pxar/extract.rs
@@ -313,6 +313,131 @@ where
 
                 res
             }
+            (_, EntryKind::Appendix { total }) => {
+                // Bytes consumed in decoder since encountering the appendix marker
+                let mut consumed = 0;
+
+                for (path, offset, size) in &self.state.appendix_list {
+                    self.extractor.allow_existing_dirs = true;
+
+                    let components = match path.parent() {
+                        Some(components) => components,
+                        None => return Some(Err(format_err!("expected path with parent"))),
+                    };
+
+                    // Open dir path components, skipping the root component, get metadata
+                    for dir in components.iter().skip(1) {
+                        let parent_fd = match self.extractor.dir_stack.last_dir_fd(true) {
+                            Ok(parent_fd) => parent_fd,
+                            Err(err) => return Some(Err(err)),
+                        };
+                        let fs_magic = match crate::pxar::create::detect_fs_type(parent_fd.as_raw_fd()) {
+                            Ok(fs_magic) => fs_magic,
+                            Err(err) => return Some(Err(err)),
+                        };
+
+                        let mut fs_feature_flags = Flags::from_magic(fs_magic);
+                        let file_name = match CString::new(dir.as_bytes()) {
+                            Ok(file_name) => file_name,
+                            Err(err) => return Some(Err(err.into())),
+                        };
+                        let fd = match proxmox_sys::fd::openat(
+                            &parent_fd,
+                            file_name.as_ref(),
+                            OFlag::O_NOATIME,
+                            Mode::empty(),
+                        ) {
+                            Ok(fd) => fd,
+                            Err(err) => return Some(Err(err.into())),
+                        };
+                        let stat = match nix::sys::stat::fstat(fd.as_raw_fd()) {
+                            Ok(stat) => stat,
+                            Err(err) => return Some(Err(err.into())),
+                        };
+                        let metadata = match crate::pxar::create::get_metadata(
+                            fd.as_raw_fd(),
+                            &stat,
+                            fs_feature_flags,
+                            fs_magic,
+                            &mut fs_feature_flags,
+                        ) {
+                            Ok(metadata) => metadata,
+                            Err(err) => return Some(Err(err)),
+                        };
+
+                        match self.extractor.enter_directory(dir.to_os_string(), metadata.clone(), true) {
+                            Ok(()) => (),
+                            Err(err) => return Some(Err(err)),
+                        };
+                    }
+
+                    let skip = *offset - consumed;
+                    match self.decoder.skip_bytes(skip) {
+                        Ok(()) => (),
+                        Err(err) => return Some(Err(err.into())),
+                    };
+
+                    let entry = match self.decoder.next() {
+                        Some(Ok(entry)) => entry,
+                        Some(Err(err)) => return Some(Err(err.into())),
+                        None => return Some(Err(format_err!("expected entry"))),
+                    };
+
+                    let file_name_os = entry.file_name();
+                    let file_name_bytes = file_name_os.as_bytes();
+
+                    let file_name = match CString::new(file_name_bytes) {
+                        Ok(file_name_ref) => file_name_ref,
+                        Err(err) => return Some(Err(format_err!(err))),
+                    };
+
+                    let metadata = entry.metadata();
+
+                    self.extractor.set_path(path.as_os_str().to_owned());
+
+                    let contents = self.decoder.contents();
+
+                    let result = if let Some(mut contents) = contents {
+                        self.extractor.extract_file(
+                            &file_name,
+                            metadata,
+                            *size,
+                            &mut contents,
+                            self.extractor
+                                .overwrite_flags
+                                .contains(OverwriteFlags::FILE),
+                        )
+                    } else {
+                        Err(format_err!(
+                            "found regular file entry without contents in archive"
+                        ))
+                    }
+                    .context(PxarExtractContext::ExtractFile);
+                    if let Err(err) = result {
+                        return Some(Err(err));
+                    }
+
+                    // Iter over all dir path components, skipping the root component, set metadata
+                    for _dir in components.iter().skip(1) {
+                        match self.extractor.leave_directory() {
+                            Ok(()) => (),
+                            Err(err) => return Some(Err(err)),
+                        }
+                    }
+
+                    // Entry header
+                    let mut metadata_bytes = std::mem::size_of::<pxar::format::Header>();
+                    // Filename payload
+                    metadata_bytes += std::mem::size_of_val(file_name.as_bytes()) + 1;
+                    // Metadata with headers and payloads
+                    metadata_bytes += metadata.calculate_byte_len();
+                    // Payload header
+                    metadata_bytes += std::mem::size_of::<pxar::format::Header>();
+
+                    consumed += skip + metadata_bytes as u64 + *size;
+                }
+
+                let skip = *total - consumed;
+                match self.decoder.skip_bytes(skip) {
+                    Ok(()) => (),
+                    Err(err) => return Some(Err(err.into())),
+                }
+
+                Ok(())
+            }
             (true, EntryKind::Symlink(link)) => {
                 self.callback(entry.path());
                 self.extractor
diff --git a/pbs-client/src/pxar/tools.rs b/pbs-client/src/pxar/tools.rs
index aac5a1e7..174a7351 100644
--- a/pbs-client/src/pxar/tools.rs
+++ b/pbs-client/src/pxar/tools.rs
@@ -156,6 +156,7 @@ pub fn format_multi_line_entry(entry: &Entry) -> String {
 
     let (size, link, type_name) = match entry.kind() {
         EntryKind::File { size, .. } => (format!("{}", *size), String::new(), "file"),
+        EntryKind::Appendix { total } => (format!("{total}"), String::new(), "appendix"),
         EntryKind::AppendixRef {
             appendix_offset,
             file_size,
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 15/20] fix #3174: archiver: store ref to previous backup
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (13 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 14/20] fix #3174: extractor: impl seq restore from appendix Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 16/20] fix #3174: upload stream: impl reused chunk injector Christian Ebner
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Adds the stateful information needed to access a previous backup's
data, such as its index and catalog.

This patch only introduces the struct and interface; by setting all
occurrences to None it remains unused for now.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-client/src/pxar/create.rs                 | 19 +++++++++++++++++--
 pbs-client/src/pxar/mod.rs                    |  2 +-
 proxmox-backup-client/src/main.rs             |  1 +
 .../src/proxmox_restore_daemon/api.rs         |  1 +
 pxar-bin/src/main.rs                          | 17 +++++++++--------
 5 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/pbs-client/src/pxar/create.rs b/pbs-client/src/pxar/create.rs
index c0fc5e2d..5feb7e6e 100644
--- a/pbs-client/src/pxar/create.rs
+++ b/pbs-client/src/pxar/create.rs
@@ -24,14 +24,15 @@ use proxmox_io::vec;
 use proxmox_lang::c_str;
 use proxmox_sys::fs::{self, acl, xattr};
 
-use pbs_datastore::catalog::BackupCatalogWriter;
+use pbs_datastore::catalog::{BackupCatalogWriter, CatalogReader};
+use pbs_datastore::dynamic_index::DynamicIndexReader;
 
 use crate::pxar::metadata::errno_is_unsupported;
 use crate::pxar::tools::assert_single_path_component;
 use crate::pxar::Flags;
 
 /// Pxar options for creating a pxar archive/stream
-#[derive(Default, Clone)]
+#[derive(Default)]
 pub struct PxarCreateOptions {
     /// Device/mountpoint st_dev numbers that should be included. None for no limitation.
     pub device_set: Option<HashSet<u64>>,
@@ -41,6 +42,18 @@ pub struct PxarCreateOptions {
     pub entries_max: usize,
     /// Skip lost+found directory
     pub skip_lost_and_found: bool,
+    /// Reference state for partial backups
+    pub previous_ref: Option<PxarPrevRef>,
+}
+
+/// Contains stateful information of previous backup snapshots for partial backups
+pub struct PxarPrevRef {
+    /// Reference index for partial backups
+    pub index: DynamicIndexReader,
+    /// Reference catalog for partial backups
+    pub catalog: CatalogReader<std::fs::File>,
+    /// Reference archive name for partial backups
+    pub archive_name: String,
 }
 
 pub fn detect_fs_type(fd: RawFd) -> Result<i64, Error> {
@@ -128,6 +141,7 @@ struct Archiver {
     device_set: Option<HashSet<u64>>,
     hardlinks: HashMap<HardLinkInfo, (PathBuf, LinkOffset)>,
     file_copy_buffer: Vec<u8>,
+    previous_ref: Option<PxarPrevRef>,
 }
 
 type Encoder<'a, T> = pxar::encoder::aio::Encoder<'a, T>;
@@ -192,6 +206,7 @@ where
         device_set,
         hardlinks: HashMap::new(),
         file_copy_buffer: vec::undefined(4 * 1024 * 1024),
+        previous_ref: options.previous_ref,
     };
 
     archiver
diff --git a/pbs-client/src/pxar/mod.rs b/pbs-client/src/pxar/mod.rs
index 14674b9b..24315f5f 100644
--- a/pbs-client/src/pxar/mod.rs
+++ b/pbs-client/src/pxar/mod.rs
@@ -56,7 +56,7 @@ pub(crate) mod tools;
 mod flags;
 pub use flags::Flags;
 
-pub use create::{create_archive, PxarCreateOptions};
+pub use create::{create_archive, PxarCreateOptions, PxarPrevRef};
 pub use extract::{
     create_tar, create_zip, extract_archive, extract_sub_dir, extract_sub_dir_seq, ErrorHandler,
     OverwriteFlags, PxarExtractContext, PxarExtractOptions,
diff --git a/proxmox-backup-client/src/main.rs b/proxmox-backup-client/src/main.rs
index 1a13291a..509fa22c 100644
--- a/proxmox-backup-client/src/main.rs
+++ b/proxmox-backup-client/src/main.rs
@@ -993,6 +993,7 @@ async fn create_backup(
                     patterns: pattern_list.clone(),
                     entries_max: entries_max as usize,
                     skip_lost_and_found,
+                    previous_ref: None,
                 };
 
                 let upload_options = UploadOptions {
diff --git a/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs b/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs
index 95e3593b..27f20a1c 100644
--- a/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs
+++ b/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs
@@ -353,6 +353,7 @@ fn extract(
                         device_set: None,
                         patterns,
                         skip_lost_and_found: false,
+                        previous_ref: None,
                     };
 
                     let pxar_writer = TokioWriter::new(writer);
diff --git a/pxar-bin/src/main.rs b/pxar-bin/src/main.rs
index bc044035..9376a2c1 100644
--- a/pxar-bin/src/main.rs
+++ b/pxar-bin/src/main.rs
@@ -330,13 +330,6 @@ async fn create_archive(
         Some(HashSet::new())
     };
 
-    let options = pbs_client::pxar::PxarCreateOptions {
-        entries_max: entries_max as usize,
-        device_set,
-        patterns,
-        skip_lost_and_found: false,
-    };
-
     let source = PathBuf::from(source);
 
     let dir = nix::dir::Dir::open(
@@ -349,7 +342,7 @@ async fn create_archive(
         .create_new(true)
         .write(true)
         .mode(0o640)
-        .open(archive)?;
+        .open(archive.clone())?;
 
     let writer = std::io::BufWriter::with_capacity(1024 * 1024, file);
     let mut feature_flags = Flags::DEFAULT;
@@ -372,6 +365,14 @@ async fn create_archive(
         feature_flags.remove(Flags::WITH_SOCKETS);
     }
 
+    let options = pbs_client::pxar::PxarCreateOptions {
+        entries_max: entries_max as usize,
+        device_set,
+        patterns,
+        skip_lost_and_found: false,
+        previous_ref: None,
+    };
+
     let writer = pxar::encoder::sync::StandardWriter::new(writer);
     pbs_client::pxar::create_archive(
         dir,
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 16/20] fix #3174: upload stream: impl reused chunk injector
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (14 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 15/20] fix #3174: archiver: store ref to previous backup Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 17/20] fix #3174: chunker: add forced boundaries Christian Ebner
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

In order to be included in the backup's index file, the reused chunks
which store the payloads of files skipped during pxar encoding have to
be inserted after the encoder has written the pxar appendix entry type.

The chunker forces a chunk boundary after this marker and queues the
list of chunks to be uploaded thereafter. This patch implements the
logic to inject the chunks into the chunk upload stream after such a
boundary is requested, by looping over the queued chunks and inserting
them into the stream.
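The decision whether a queued injection point falls exactly at, within, or beyond the chunk currently being emitted can be modeled like this (a simplified sketch of the poll logic in this patch, not the actual stream types):

```rust
// Simplified model of where a queued injection boundary lands relative
// to the chunk currently being emitted (illustrative, not the real types).
#[derive(Debug, PartialEq)]
enum Action {
    InjectNow,      // boundary is exactly at the current stream offset
    EmitThenInject, // boundary lies within this chunk's span
    EmitOnly,       // boundary is further ahead, keep it queued
}

fn injection_action(boundary: u64, offset: u64, chunk_size: u64) -> Action {
    if boundary == offset {
        Action::InjectNow
    } else if boundary <= offset + chunk_size {
        Action::EmitThenInject
    } else {
        Action::EmitOnly
    }
}

fn main() {
    println!("{:?}", injection_action(0, 0, 4096));
    println!("{:?}", injection_action(1000, 0, 4096));
    println!("{:?}", injection_action(9000, 0, 4096));
}
```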

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-client/src/inject_reused_chunks.rs | 123 +++++++++++++++++++++++++
 pbs-client/src/lib.rs                  |   1 +
 2 files changed, 124 insertions(+)
 create mode 100644 pbs-client/src/inject_reused_chunks.rs

diff --git a/pbs-client/src/inject_reused_chunks.rs b/pbs-client/src/inject_reused_chunks.rs
new file mode 100644
index 00000000..01cb1350
--- /dev/null
+++ b/pbs-client/src/inject_reused_chunks.rs
@@ -0,0 +1,123 @@
+use std::collections::VecDeque;
+use std::pin::Pin;
+use std::sync::atomic::{AtomicUsize, Ordering};
+use std::sync::{Arc, Mutex};
+use std::task::{Context, Poll};
+
+use anyhow::Error;
+use futures::{ready, Stream};
+use pin_project_lite::pin_project;
+
+use pbs_datastore::dynamic_index::DynamicEntry;
+
+pin_project! {
+    pub struct InjectReusedChunksQueue<S> {
+        #[pin]
+        input: S,
+        current: Option<InjectChunks>,
+        injection_queue: Arc<Mutex<VecDeque<InjectChunks>>>,
+        stream_len: Arc<AtomicUsize>,
+        index_csum: Arc<Mutex<Option<openssl::sha::Sha256>>>,
+    }
+}
+
+#[derive(Debug)]
+pub struct InjectChunks {
+    pub boundary: u64,
+    pub chunks: Vec<DynamicEntry>,
+    pub size: usize,
+}
+
+pub enum InjectedChunksInfo {
+    Known(Vec<(u64, [u8; 32])>),
+    Raw((u64, bytes::BytesMut)),
+}
+
+pub trait InjectReusedChunks: Sized {
+    fn inject_reused_chunks(
+        self,
+        injection_queue: Arc<Mutex<VecDeque<InjectChunks>>>,
+        stream_len: Arc<AtomicUsize>,
+        index_csum: Arc<Mutex<Option<openssl::sha::Sha256>>>,
+    ) -> InjectReusedChunksQueue<Self>;
+}
+
+impl<S> InjectReusedChunks for S
+where
+    S: Stream<Item = Result<bytes::BytesMut, Error>>,
+{
+    fn inject_reused_chunks(
+        self,
+        injection_queue: Arc<Mutex<VecDeque<InjectChunks>>>,
+        stream_len: Arc<AtomicUsize>,
+        index_csum: Arc<Mutex<Option<openssl::sha::Sha256>>>,
+    ) -> InjectReusedChunksQueue<Self> {
+        let current = injection_queue.lock().unwrap().pop_front();
+
+        InjectReusedChunksQueue {
+            input: self,
+            current,
+            injection_queue,
+            stream_len,
+            index_csum,
+        }
+    }
+}
+
+impl<S> Stream for InjectReusedChunksQueue<S>
+where
+    S: Stream<Item = Result<bytes::BytesMut, Error>>,
+{
+    type Item = Result<InjectedChunksInfo, Error>;
+
+    fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
+        let mut this = self.project();
+        loop {
+            let current = this.current.take();
+            if let Some(current) = current {
+                let mut chunks = Vec::new();
+                let mut guard = this.index_csum.lock().unwrap();
+                let csum = guard.as_mut().unwrap();
+
+                for chunk in current.chunks {
+                    let offset = this
+                        .stream_len
+                        .fetch_add(chunk.end() as usize, Ordering::SeqCst)
+                        as u64;
+                    let digest = chunk.digest();
+                    chunks.push((offset, digest));
+                    // Chunk end is assumed to be normalized to chunk size here
+                    let end_offset = offset + chunk.end();
+                    csum.update(&end_offset.to_le_bytes());
+                    csum.update(&digest);
+                }
+                let chunk_info = InjectedChunksInfo::Known(chunks);
+                return Poll::Ready(Some(Ok(chunk_info)));
+            }
+
+            match ready!(this.input.as_mut().poll_next(cx)) {
+                None => return Poll::Ready(None),
+                Some(Err(err)) => return Poll::Ready(Some(Err(err))),
+                Some(Ok(raw)) => {
+                    let chunk_size = raw.len();
+                    let offset = this.stream_len.fetch_add(chunk_size, Ordering::SeqCst) as u64;
+                    let mut injections = this.injection_queue.lock().unwrap();
+                    if let Some(inject) = injections.pop_front() {
+                        if inject.boundary == offset {
+                            let _ = this.current.insert(inject);
+                            // Should be injected here, directly jump to next loop iteration
+                            continue;
+                        } else if inject.boundary <= offset + chunk_size as u64 {
+                            let _ = this.current.insert(inject);
+                        } else {
+                            injections.push_front(inject);
+                        }
+                    }
+                    let data = InjectedChunksInfo::Raw((offset, raw));
+
+                    return Poll::Ready(Some(Ok(data)));
+                }
+            }
+        }
+    }
+}
diff --git a/pbs-client/src/lib.rs b/pbs-client/src/lib.rs
index 21cf8556..8bf26381 100644
--- a/pbs-client/src/lib.rs
+++ b/pbs-client/src/lib.rs
@@ -8,6 +8,7 @@ pub mod pxar;
 pub mod tools;
 
 mod merge_known_chunks;
+mod inject_reused_chunks;
 pub mod pipe_to_stream;
 
 mod http_client;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 17/20] fix #3174: chunker: add forced boundaries
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (15 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 16/20] fix #3174: upload stream: impl reused chunk injector Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 18/20] fix #3174: backup writer: inject queued chunk in upload stream Christian Ebner
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Allow forcing a boundary while chunking and injecting reused chunks
into the stream. Double-ended queues are used to coordinate the
boundaries and chunk injection between archiver, chunker and uploader.

The archiver gets an interface to request a boundary and to push the
list of chunks to be injected after that boundary. The chunker reads
this queue, creates the boundary and passes the list of chunks to
inject on to the uploader via a second, dedicated double-ended queue.
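The forced-boundary check inside the chunker can be sketched as follows (simplified and illustrative; the real code splits a `BytesMut` buffer and pushes the injection entry onto a queue consumed by the uploader):

```rust
// Simplified sketch of the forced-boundary split in the chunk stream
// (illustrative, not the actual implementation). Returns the length to
// split off and the updated consumed counter, or None if the boundary
// does not yet fall inside the buffered data.
fn forced_split(
    consumed: u64,
    buffered: u64,
    boundary: u64,
    inject_size: u64,
) -> Option<(u64, u64)> {
    if boundary <= consumed + buffered {
        let split = boundary - consumed;
        // the injected chunks' size is added to consumed so that later
        // chunk stream offsets stay in sync with the rest of the archive
        Some((split, consumed + split + inject_size))
    } else {
        None
    }
}

fn main() {
    // boundary at 120 with 100 bytes consumed and 50 buffered:
    // split off 20 bytes, then account for 30 bytes of injected chunks
    println!("{:?}", forced_split(100, 50, 120, 30));
    // boundary beyond the buffered data: nothing to do yet
    println!("{:?}", forced_split(100, 50, 200, 30));
}
```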

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 examples/test_chunk_speed2.rs                 |  9 +++-
 pbs-client/src/backup_writer.rs               |  6 ++-
 pbs-client/src/chunk_stream.rs                | 41 ++++++++++++++++++-
 pbs-client/src/pxar/create.rs                 | 10 ++++-
 pbs-client/src/pxar_backup_stream.rs          |  8 +++-
 proxmox-backup-client/src/main.rs             | 36 ++++++++++++----
 .../src/proxmox_restore_daemon/api.rs         | 13 +++++-
 pxar-bin/src/main.rs                          |  5 ++-
 tests/catar.rs                                |  3 ++
 9 files changed, 114 insertions(+), 17 deletions(-)

diff --git a/examples/test_chunk_speed2.rs b/examples/test_chunk_speed2.rs
index 3f69b436..e8bac726 100644
--- a/examples/test_chunk_speed2.rs
+++ b/examples/test_chunk_speed2.rs
@@ -1,5 +1,7 @@
 use anyhow::Error;
 use futures::*;
+use std::collections::VecDeque;
+use std::sync::{Arc, Mutex};
 
 extern crate proxmox_backup;
 
@@ -26,7 +28,12 @@ async fn run() -> Result<(), Error> {
         .map_err(Error::from);
 
     //let chunk_stream = FixedChunkStream::new(stream, 4*1024*1024);
-    let mut chunk_stream = ChunkStream::new(stream, None);
+    let mut chunk_stream = ChunkStream::new(
+        stream,
+        None,
+        Arc::new(Mutex::new(VecDeque::new())),
+        Arc::new(Mutex::new(VecDeque::new())),
+    );
 
     let start_time = std::time::Instant::now();
 
diff --git a/pbs-client/src/backup_writer.rs b/pbs-client/src/backup_writer.rs
index 8a03d8ea..cc6dd49a 100644
--- a/pbs-client/src/backup_writer.rs
+++ b/pbs-client/src/backup_writer.rs
@@ -1,4 +1,4 @@
-use std::collections::HashSet;
+use std::collections::{HashSet, VecDeque};
 use std::future::Future;
 use std::os::unix::fs::OpenOptionsExt;
 use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
@@ -23,6 +23,7 @@ use pbs_tools::crypt_config::CryptConfig;
 
 use proxmox_human_byte::HumanByte;
 
+use super::inject_reused_chunks::InjectChunks;
 use super::merge_known_chunks::{MergeKnownChunks, MergedChunkInfo};
 
 use super::{H2Client, HttpClient};
@@ -265,6 +266,7 @@ impl BackupWriter {
         archive_name: &str,
         stream: impl Stream<Item = Result<bytes::BytesMut, Error>>,
         options: UploadOptions,
+        injection_queue: Arc<Mutex<VecDeque<InjectChunks>>>,
     ) -> Result<BackupStats, Error> {
         let known_chunks = Arc::new(Mutex::new(HashSet::new()));
 
@@ -341,6 +343,7 @@ impl BackupWriter {
                 None
             },
             options.compress,
+            injection_queue,
         )
         .await?;
 
@@ -637,6 +640,7 @@ impl BackupWriter {
         known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
         crypt_config: Option<Arc<CryptConfig>>,
         compress: bool,
+        injection_queue: Arc<Mutex<VecDeque<InjectChunks>>>,
     ) -> impl Future<Output = Result<UploadStats, Error>> {
         let total_chunks = Arc::new(AtomicUsize::new(0));
         let total_chunks2 = total_chunks.clone();
diff --git a/pbs-client/src/chunk_stream.rs b/pbs-client/src/chunk_stream.rs
index 895f6eae..1373502f 100644
--- a/pbs-client/src/chunk_stream.rs
+++ b/pbs-client/src/chunk_stream.rs
@@ -1,5 +1,7 @@
 use std::pin::Pin;
 use std::task::{Context, Poll};
+use std::sync::{Arc, Mutex};
+use std::collections::VecDeque;
 
 use anyhow::Error;
 use bytes::BytesMut;
@@ -8,21 +10,34 @@ use futures::stream::{Stream, TryStream};
 
 use pbs_datastore::Chunker;
 
+use crate::inject_reused_chunks::InjectChunks;
+
 /// Split input stream into dynamic sized chunks
 pub struct ChunkStream<S: Unpin> {
     input: S,
     chunker: Chunker,
     buffer: BytesMut,
     scan_pos: usize,
+    consumed: u64,
+    boundaries: Arc<Mutex<VecDeque<InjectChunks>>>,
+    injections: Arc<Mutex<VecDeque<InjectChunks>>>,
 }
 
 impl<S: Unpin> ChunkStream<S> {
-    pub fn new(input: S, chunk_size: Option<usize>) -> Self {
+    pub fn new(
+        input: S,
+        chunk_size: Option<usize>,
+        boundaries: Arc<Mutex<VecDeque<InjectChunks>>>,
+        injections: Arc<Mutex<VecDeque<InjectChunks>>>,
+    ) -> Self {
         Self {
             input,
             chunker: Chunker::new(chunk_size.unwrap_or(4 * 1024 * 1024)),
             buffer: BytesMut::new(),
             scan_pos: 0,
+            consumed: 0,
+            boundaries,
+            injections,
         }
     }
 }
@@ -40,6 +55,29 @@ where
     fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
         let this = self.get_mut();
         loop {
+            { 
+                // Make sure to release this lock and don't hold it longer than required
+                let mut boundaries = this.boundaries.lock().unwrap();
+                if let Some(inject) = boundaries.pop_front() {
+                    let max = this.consumed + this.buffer.len() as u64;
+                    if inject.boundary <= max {
+                        let chunk_size = (inject.boundary - this.consumed) as usize;
+                        let result = this.buffer.split_to(chunk_size);
+                        this.consumed += chunk_size as u64;
+                        this.scan_pos = 0;
+
+                        // Add the size of the injected chunks to consumed, so chunk stream offsets
+                        // are in sync with the rest of the archive.
+                        this.consumed += inject.size as u64;
+
+                        this.injections.lock().unwrap().push_back(inject);
+
+                        return Poll::Ready(Some(Ok(result)));
+                    }
+                    boundaries.push_front(inject);
+                }
+            }
+
             if this.scan_pos < this.buffer.len() {
                 let boundary = this.chunker.scan(&this.buffer[this.scan_pos..]);
 
@@ -50,6 +88,7 @@ where
                     // continue poll
                 } else if chunk_size <= this.buffer.len() {
                     let result = this.buffer.split_to(chunk_size);
+                    this.consumed += chunk_size as u64;
                     this.scan_pos = 0;
                     return Poll::Ready(Some(Ok(result)));
                 } else {
diff --git a/pbs-client/src/pxar/create.rs b/pbs-client/src/pxar/create.rs
index 5feb7e6e..d6afc465 100644
--- a/pbs-client/src/pxar/create.rs
+++ b/pbs-client/src/pxar/create.rs
@@ -1,4 +1,4 @@
-use std::collections::{HashMap, HashSet};
+use std::collections::{HashMap, HashSet, VecDeque};
 use std::ffi::{CStr, CString, OsStr};
 use std::fmt;
 use std::io::{self, Read};
@@ -25,8 +25,9 @@ use proxmox_lang::c_str;
 use proxmox_sys::fs::{self, acl, xattr};
 
 use pbs_datastore::catalog::{BackupCatalogWriter, CatalogReader};
-use pbs_datastore::dynamic_index::DynamicIndexReader;
+use pbs_datastore::dynamic_index::{DynamicEntry, DynamicIndexReader};
 
+use crate::inject_reused_chunks::InjectChunks;
 use crate::pxar::metadata::errno_is_unsupported;
 use crate::pxar::tools::assert_single_path_component;
 use crate::pxar::Flags;
@@ -142,6 +143,8 @@ struct Archiver {
     hardlinks: HashMap<HardLinkInfo, (PathBuf, LinkOffset)>,
     file_copy_buffer: Vec<u8>,
     previous_ref: Option<PxarPrevRef>,
+    forced_boundaries: Arc<Mutex<VecDeque<InjectChunks>>>,
+    inject: (usize, Vec<DynamicEntry>),
 }
 
 type Encoder<'a, T> = pxar::encoder::aio::Encoder<'a, T>;
@@ -153,6 +156,7 @@ pub async fn create_archive<T, F>(
     callback: F,
     catalog: Option<Arc<Mutex<dyn BackupCatalogWriter + Send>>>,
     options: PxarCreateOptions,
+    forced_boundaries: Arc<Mutex<VecDeque<InjectChunks>>>,
 ) -> Result<(), Error>
 where
     T: SeqWrite + Send,
@@ -207,6 +211,8 @@ where
         hardlinks: HashMap::new(),
         file_copy_buffer: vec::undefined(4 * 1024 * 1024),
         previous_ref: options.previous_ref,
+        forced_boundaries,
+        inject: (0, Vec::new()),
     };
 
     archiver
diff --git a/pbs-client/src/pxar_backup_stream.rs b/pbs-client/src/pxar_backup_stream.rs
index 22a6ffdc..d18ba470 100644
--- a/pbs-client/src/pxar_backup_stream.rs
+++ b/pbs-client/src/pxar_backup_stream.rs
@@ -1,3 +1,4 @@
+use std::collections::VecDeque;
 use std::io::Write;
 //use std::os::unix::io::FromRawFd;
 use std::path::Path;
@@ -17,6 +18,8 @@ use proxmox_io::StdChannelWriter;
 
 use pbs_datastore::catalog::CatalogWriter;
 
+use crate::inject_reused_chunks::InjectChunks;
+
 /// Stream implementation to encode and upload .pxar archives.
 ///
 /// The hyper client needs an async Stream for file upload, so we
@@ -40,6 +43,7 @@ impl PxarBackupStream {
         dir: Dir,
         catalog: Arc<Mutex<CatalogWriter<W>>>,
         options: crate::pxar::PxarCreateOptions,
+        boundaries: Arc<Mutex<VecDeque<InjectChunks>>>,
     ) -> Result<Self, Error> {
         let (tx, rx) = std::sync::mpsc::sync_channel(10);
 
@@ -64,6 +68,7 @@ impl PxarBackupStream {
                 },
                 Some(catalog),
                 options,
+                boundaries,
             )
             .await
             {
@@ -87,10 +92,11 @@ impl PxarBackupStream {
         dirname: &Path,
         catalog: Arc<Mutex<CatalogWriter<W>>>,
         options: crate::pxar::PxarCreateOptions,
+        boundaries: Arc<Mutex<VecDeque<InjectChunks>>>,
     ) -> Result<Self, Error> {
         let dir = nix::dir::Dir::open(dirname, OFlag::O_DIRECTORY, Mode::empty())?;
 
-        Self::new(dir, catalog, options)
+        Self::new(dir, catalog, options, boundaries)
     }
 }
 
diff --git a/proxmox-backup-client/src/main.rs b/proxmox-backup-client/src/main.rs
index 509fa22c..5945ae5d 100644
--- a/proxmox-backup-client/src/main.rs
+++ b/proxmox-backup-client/src/main.rs
@@ -1,4 +1,4 @@
-use std::collections::HashSet;
+use std::collections::{HashSet, VecDeque};
 use std::io::{self, Read, Seek, SeekFrom, Write};
 use std::path::{Path, PathBuf};
 use std::pin::Pin;
@@ -192,8 +192,17 @@ async fn backup_directory<P: AsRef<Path>>(
     pxar_create_options: pbs_client::pxar::PxarCreateOptions,
     upload_options: UploadOptions,
 ) -> Result<BackupStats, Error> {
-    let pxar_stream = PxarBackupStream::open(dir_path.as_ref(), catalog, pxar_create_options)?;
-    let mut chunk_stream = ChunkStream::new(pxar_stream, chunk_size);
+    let boundaries = Arc::new(Mutex::new(VecDeque::new()));
+    let pxar_stream = PxarBackupStream::open(
+        dir_path.as_ref(),
+        catalog,
+        pxar_create_options,
+        boundaries.clone(),
+    )?;
+
+    let injections = Arc::new(Mutex::new(VecDeque::new()));
+    let mut chunk_stream =
+        ChunkStream::new(pxar_stream, chunk_size, boundaries, injections.clone());
 
     let (tx, rx) = mpsc::channel(10); // allow to buffer 10 chunks
 
@@ -211,7 +220,7 @@ async fn backup_directory<P: AsRef<Path>>(
     }
 
     let stats = client
-        .upload_stream(archive_name, stream, upload_options)
+        .upload_stream(archive_name, stream, upload_options, injections)
         .await?;
 
     Ok(stats)
@@ -237,8 +246,9 @@ async fn backup_image<P: AsRef<Path>>(
         bail!("cannot backup image with dynamic chunk size!");
     }
 
+    let injection_queue = Arc::new(Mutex::new(VecDeque::new()));
     let stats = client
-        .upload_stream(archive_name, stream, upload_options)
+        .upload_stream(archive_name, stream, upload_options, injection_queue)
         .await?;
 
     Ok(stats)
@@ -529,7 +539,14 @@ fn spawn_catalog_upload(
     let (catalog_tx, catalog_rx) = std::sync::mpsc::sync_channel(10); // allow to buffer 10 writes
     let catalog_stream = proxmox_async::blocking::StdChannelStream(catalog_rx);
     let catalog_chunk_size = 512 * 1024;
-    let catalog_chunk_stream = ChunkStream::new(catalog_stream, Some(catalog_chunk_size));
+    let boundaries = Arc::new(Mutex::new(VecDeque::new()));
+    let injections = Arc::new(Mutex::new(VecDeque::new()));
+    let catalog_chunk_stream = ChunkStream::new(
+        catalog_stream,
+        Some(catalog_chunk_size),
+        boundaries,
+        injections.clone(),
+    );
 
     let catalog_writer = Arc::new(Mutex::new(CatalogWriter::new(TokioWriterAdapter::new(
         StdChannelWriter::new(catalog_tx),
@@ -545,7 +562,12 @@ fn spawn_catalog_upload(
 
     tokio::spawn(async move {
         let catalog_upload_result = client
-            .upload_stream(CATALOG_NAME, catalog_chunk_stream, upload_options)
+            .upload_stream(
+                CATALOG_NAME,
+                catalog_chunk_stream,
+                upload_options,
+                injections,
+            )
             .await;
 
         if let Err(ref err) = catalog_upload_result {
diff --git a/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs b/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs
index 27f20a1c..8f6b96ab 100644
--- a/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs
+++ b/proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs
@@ -1,8 +1,10 @@
 ///! File-restore API running inside the restore VM
+use std::collections::VecDeque;
 use std::ffi::OsStr;
 use std::fs;
 use std::os::unix::ffi::OsStrExt;
 use std::path::{Path, PathBuf};
+use std::sync::{Arc, Mutex};
 
 use anyhow::{bail, Error};
 use futures::FutureExt;
@@ -357,8 +359,15 @@ fn extract(
                     };
 
                     let pxar_writer = TokioWriter::new(writer);
-                    create_archive(dir, pxar_writer, Flags::DEFAULT, |_| Ok(()), None, options)
-                        .await
+                    create_archive(
+                        dir,
+                        pxar_writer,
+                        Flags::DEFAULT,
+                        |_| Ok(()),
+                        None,
+                        options,
+                        Arc::new(Mutex::new(VecDeque::new())),
+                    ).await
                 }
                 .await;
                 if let Err(err) = result {
diff --git a/pxar-bin/src/main.rs b/pxar-bin/src/main.rs
index 9376a2c1..c019f3e4 100644
--- a/pxar-bin/src/main.rs
+++ b/pxar-bin/src/main.rs
@@ -1,10 +1,10 @@
-use std::collections::HashSet;
+use std::collections::{HashSet, VecDeque};
 use std::ffi::OsStr;
 use std::fs::OpenOptions;
 use std::os::unix::fs::OpenOptionsExt;
 use std::path::{Path, PathBuf};
 use std::sync::atomic::{AtomicBool, Ordering};
-use std::sync::Arc;
+use std::sync::{Arc, Mutex};
 
 use anyhow::{bail, format_err, Error};
 use futures::future::FutureExt;
@@ -384,6 +384,7 @@ async fn create_archive(
         },
         None,
         options,
+        Arc::new(Mutex::new(VecDeque::new())),
     )
     .await?;
 
diff --git a/tests/catar.rs b/tests/catar.rs
index 36bb4f3b..d69cb37b 100644
--- a/tests/catar.rs
+++ b/tests/catar.rs
@@ -1,4 +1,6 @@
 use std::process::Command;
+use std::sync::{Arc, Mutex};
+use std::collections::VecDeque;
 
 use anyhow::Error;
 
@@ -40,6 +42,7 @@ fn run_test(dir_name: &str) -> Result<(), Error> {
         |_| Ok(()),
         None,
         options,
+        Arc::new(Mutex::new(VecDeque::new())),
     ))?;
 
     Command::new("cmp")
-- 
2.39.2

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 18/20] fix #3174: backup writer: inject queued chunks in upload stream
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (16 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 17/20] fix #3174: chunker: add forced boundaries Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 19/20] fix #3174: archiver: reuse files with unchanged metadata Christian Ebner
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

Inject the queued chunks into the backup writer's upload stream, thereby
including them in the index file.
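
The dispatch this patch adds to the upload stream can be modeled roughly as
follows. This is a hypothetical, heavily simplified synchronous sketch, not
the real pbs-client API: injected chunks reuse their recorded (offset, digest)
pairs and skip hashing entirely, while raw payload still goes through the
normal digest path. `toy_digest` stands in for the real SHA-256 computation.

```rust
#[derive(Debug, PartialEq)]
enum ChunkInput {
    // Chunks reused from the previous snapshot: (offset, digest) pairs.
    Known(Vec<(u64, [u8; 32])>),
    // Newly read payload that still needs to be digested.
    Raw(u64, Vec<u8>),
}

#[derive(Debug, PartialEq)]
enum MergedChunk {
    Known(Vec<(u64, [u8; 32])>),
    New { offset: u64, len: u64, digest: [u8; 32] },
}

// Stand-in for the real SHA-256 digest; illustrative only.
fn toy_digest(data: &[u8]) -> [u8; 32] {
    let mut d = [0u8; 32];
    for (i, b) in data.iter().enumerate() {
        d[i % 32] ^= *b;
    }
    d
}

fn process(input: ChunkInput) -> MergedChunk {
    match input {
        // Injected chunks are passed through as already-known index entries.
        ChunkInput::Known(chunks) => MergedChunk::Known(chunks),
        // Raw data takes the usual path: digest it and emit a new chunk.
        ChunkInput::Raw(offset, data) => MergedChunk::New {
            offset,
            len: data.len() as u64,
            digest: toy_digest(&data),
        },
    }
}
```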

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-client/src/backup_writer.rs | 84 +++++++++++++++++++--------------
 1 file changed, 49 insertions(+), 35 deletions(-)

diff --git a/pbs-client/src/backup_writer.rs b/pbs-client/src/backup_writer.rs
index cc6dd49a..0f18b1ff 100644
--- a/pbs-client/src/backup_writer.rs
+++ b/pbs-client/src/backup_writer.rs
@@ -23,7 +23,7 @@ use pbs_tools::crypt_config::CryptConfig;
 
 use proxmox_human_byte::HumanByte;
 
-use super::inject_reused_chunks::InjectChunks;
+use super::inject_reused_chunks::{InjectChunks, InjectReusedChunks, InjectedChunksInfo};
 use super::merge_known_chunks::{MergeKnownChunks, MergedChunkInfo};
 
 use super::{H2Client, HttpClient};
@@ -667,48 +667,62 @@ impl BackupWriter {
         let index_csum_2 = index_csum.clone();
 
         stream
-            .and_then(move |data| {
-                let chunk_len = data.len();
+            .inject_reused_chunks(injection_queue, stream_len, index_csum.clone())
+            .and_then(move |chunk_info| {
+                match chunk_info {
+                    InjectedChunksInfo::Known(chunks) => {
+                        total_chunks.fetch_add(chunks.len(), Ordering::SeqCst);
+                        future::ok(MergedChunkInfo::Known(chunks))
+                    }
+                    InjectedChunksInfo::Raw((offset, data)) => {
+                        let chunk_len = data.len();
 
-                total_chunks.fetch_add(1, Ordering::SeqCst);
-                let offset = stream_len.fetch_add(chunk_len, Ordering::SeqCst) as u64;
+                        total_chunks.fetch_add(1, Ordering::SeqCst);
 
-                let mut chunk_builder = DataChunkBuilder::new(data.as_ref()).compress(compress);
+                        let mut chunk_builder =
+                            DataChunkBuilder::new(data.as_ref()).compress(compress);
 
-                if let Some(ref crypt_config) = crypt_config {
-                    chunk_builder = chunk_builder.crypt_config(crypt_config);
-                }
+                        if let Some(ref crypt_config) = crypt_config {
+                            chunk_builder = chunk_builder.crypt_config(crypt_config);
+                        }
 
-                let mut known_chunks = known_chunks.lock().unwrap();
-                let digest = chunk_builder.digest();
+                        let mut known_chunks = known_chunks.lock().unwrap();
 
-                let mut guard = index_csum.lock().unwrap();
-                let csum = guard.as_mut().unwrap();
+                        let digest = chunk_builder.digest();
 
-                let chunk_end = offset + chunk_len as u64;
+                        let mut guard = index_csum.lock().unwrap();
+                        let csum = guard.as_mut().unwrap();
 
-                if !is_fixed_chunk_size {
-                    csum.update(&chunk_end.to_le_bytes());
-                }
-                csum.update(digest);
+                        let chunk_end = offset + chunk_len as u64;
 
-                let chunk_is_known = known_chunks.contains(digest);
-                if chunk_is_known {
-                    known_chunk_count.fetch_add(1, Ordering::SeqCst);
-                    reused_len.fetch_add(chunk_len, Ordering::SeqCst);
-                    future::ok(MergedChunkInfo::Known(vec![(offset, *digest)]))
-                } else {
-                    let compressed_stream_len2 = compressed_stream_len.clone();
-                    known_chunks.insert(*digest);
-                    future::ready(chunk_builder.build().map(move |(chunk, digest)| {
-                        compressed_stream_len2.fetch_add(chunk.raw_size(), Ordering::SeqCst);
-                        MergedChunkInfo::New(ChunkInfo {
-                            chunk,
-                            digest,
-                            chunk_len: chunk_len as u64,
-                            offset,
-                        })
-                    }))
+                        if !is_fixed_chunk_size {
+                            csum.update(&chunk_end.to_le_bytes());
+                        }
+                        csum.update(digest);
+
+                        let chunk_is_known = known_chunks.contains(digest);
+                        if chunk_is_known {
+                            known_chunk_count.fetch_add(1, Ordering::SeqCst);
+                            reused_len.fetch_add(chunk_len, Ordering::SeqCst);
+
+                            future::ok(MergedChunkInfo::Known(vec![(offset, *digest)]))
+                        } else {
+                            let compressed_stream_len2 = compressed_stream_len.clone();
+                            known_chunks.insert(*digest);
+
+                            future::ready(chunk_builder.build().map(move |(chunk, digest)| {
+                                compressed_stream_len2
+                                    .fetch_add(chunk.raw_size(), Ordering::SeqCst);
+
+                                MergedChunkInfo::New(ChunkInfo {
+                                    chunk,
+                                    digest,
+                                    chunk_len: chunk_len as u64,
+                                    offset,
+                                })
+                            }))
+                        }
+                    }
                 }
             })
             .merge_known_chunks()
-- 
2.39.2
^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 19/20] fix #3174: archiver: reuse files with unchanged metadata
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (17 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 18/20] fix #3174: backup writer: inject queued chunks in upload stream Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-26  7:01   ` Christian Ebner
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 20/20] fix #3174: client: Add incremental flag to backup creation Christian Ebner
  2023-09-26  7:15 ` [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
  20 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

During pxar archive encoding, check regular files against the metadata
stored in the previous backup's catalog, if present.

Files with unchanged metadata whose size exceeds a given threshold are
not re-encoded; instead, their entries are marked as appendix references
in the pxar archive and the chunks containing the file payload are
appended to the appendix.
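
The reuse decision boils down to a small predicate. The sketch below is
illustrative (it is not the actual `create.rs` code and the names are
placeholders): a regular file is only reused when its size exceeds the
threshold and both mtime and size match the entry recorded in the previous
backup's catalog.

```rust
// Reuse threshold in bytes, matching the constant introduced by this series.
const MAX_FILE_SIZE: u64 = 128;

struct CatalogFileEntry {
    size: u64,
    mtime: i64,
}

fn can_reuse(prev: Option<&CatalogFileEntry>, cur_size: u64, cur_mtime: i64) -> bool {
    if cur_size <= MAX_FILE_SIZE {
        // Small files are simply re-encoded; reuse is not worth the overhead.
        return false;
    }
    match prev {
        // Unchanged metadata: reference the previous payload chunks.
        Some(e) => e.size == cur_size && e.mtime == cur_mtime,
        // No previous catalog entry: the payload must be encoded.
        None => false,
    }
}
```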

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-client/src/pxar/create.rs             | 149 +++++++++++++++++++++-
 src/tape/file_formats/snapshot_archive.rs |   2 +-
 2 files changed, 147 insertions(+), 4 deletions(-)

diff --git a/pbs-client/src/pxar/create.rs b/pbs-client/src/pxar/create.rs
index d6afc465..cb9af26f 100644
--- a/pbs-client/src/pxar/create.rs
+++ b/pbs-client/src/pxar/create.rs
@@ -24,7 +24,7 @@ use proxmox_io::vec;
 use proxmox_lang::c_str;
 use proxmox_sys::fs::{self, acl, xattr};
 
-use pbs_datastore::catalog::{BackupCatalogWriter, CatalogReader};
+use pbs_datastore::catalog::{BackupCatalogWriter, CatalogReader, DirEntryAttribute};
 use pbs_datastore::dynamic_index::{DynamicEntry, DynamicIndexReader};
 
 use crate::inject_reused_chunks::InjectChunks;
@@ -32,6 +32,8 @@ use crate::pxar::metadata::errno_is_unsupported;
 use crate::pxar::tools::assert_single_path_component;
 use crate::pxar::Flags;
 
+const MAX_FILE_SIZE: u64 = 128;
+
 /// Pxar options for creating a pxar archive/stream
 #[derive(Default)]
 pub struct PxarCreateOptions {
@@ -218,7 +220,14 @@ where
     archiver
         .archive_dir_contents(&mut encoder, source_dir, true)
         .await?;
-    encoder.finish().await?;
+
+    if archiver.inject.1.len() > 0 {
+        let (appendix_offset, appendix_size) = archiver.add_appendix(&mut encoder).await?;
+        encoder.finish(Some((appendix_offset, appendix_size))).await?;
+    } else {
+        encoder.finish(None).await?;
+    }
+
     Ok(())
 }
 
@@ -529,6 +538,132 @@ impl Archiver {
         Ok(())
     }
 
+    async fn add_appendix<T: SeqWrite + Send>(
+        &mut self,
+        encoder: &mut Encoder<'_, T>,
+    ) -> Result<(LinkOffset, u64), Error> {
+        let total = self
+            .inject
+            .1
+            .iter()
+            .fold(0, |sum, inject| sum + inject.end());
+        let appendix_offset = encoder.add_appendix(total).await?;
+        let mut boundaries = self.forced_boundaries.lock().unwrap();
+        let mut position = encoder.position_add(0);
+
+        // Inject reused chunks in batches of 128 so as to not exceed the upload POST request size limit
+        for injects in self.inject.1.chunks(128) {
+            let size = injects
+                .iter()
+                .fold(0, |sum, inject| sum + inject.end() as usize);
+            let inject_chunks = InjectChunks {
+                boundary: position,
+                chunks: injects.to_vec(),
+                size,
+            };
+            boundaries.push_back(inject_chunks);
+            position = encoder.position_add(size as u64);
+        }
+
+        Ok((appendix_offset, total))
+    }
+
+    async fn reuse_if_metadata_unchanged<T: SeqWrite + Send>(
+        &mut self,
+        encoder: &mut Encoder<'_, T>,
+        c_file_name: &CStr,
+        metadata: &Metadata,
+        stat: &FileStat,
+    ) -> Result<bool, Error> {
+        let prev_ref = match self.previous_ref {
+            None => return Ok(false),
+            Some(ref mut prev_ref) => prev_ref
+        };
+
+        let path = Path::new(prev_ref.archive_name.as_str()).join(self.path.clone());
+        let catalog_entry = prev_ref
+            .catalog
+            .lookup_recursive(path.as_os_str().as_bytes())?;
+
+        match catalog_entry.attr {
+            DirEntryAttribute::File {
+                size,
+                mtime,
+                link_offset,
+            } => {
+                let file_size = stat.st_size as u64;
+                if mtime == stat.st_mtime && size == file_size {
+                    if let Some(ref catalog) = self.catalog {
+                        catalog.lock().unwrap().add_file(
+                            c_file_name,
+                            file_size,
+                            stat.st_mtime,
+                            link_offset,
+                        )?;
+                    }
+
+                    // Filename header
+                    let mut metadata_bytes = std::mem::size_of::<pxar::format::Header>();
+                    // Filename payload
+                    metadata_bytes += std::mem::size_of_val(c_file_name);
+                    // Metadata with headers and payloads
+                    metadata_bytes += metadata.calculate_byte_len();
+                    // Payload header
+                    metadata_bytes += std::mem::size_of::<pxar::format::Header>();
+
+                    let metadata_bytes = u64::try_from(metadata_bytes)?;
+                    let chunk_start_offset = link_offset.raw();
+                    let start = chunk_start_offset;
+                    let end = chunk_start_offset + metadata_bytes + file_size;
+                    let (indices, total_size, padding_start) =
+                        prev_ref.index.indices(start, end)?;
+
+                    let mut appendix_offset = self.inject.0 as u64 + padding_start;
+
+                    if let (Some(current_end), Some(new_start)) =
+                        (self.inject.1.last(), indices.first())
+                    {
+                        if new_start.digest() == current_end.digest() {
+                            // Already got that chunk, do not append it again and correct
+                            // appendix_offset to be relative to chunk before this one
+                            appendix_offset -= new_start.end();
+                            if indices.len() > 1 {
+                                // Append all following chunks
+                                self.inject.0 += indices[1..]
+                                    .iter()
+                                    .fold(0, |sum, index| sum + index.end() as usize);
+                                self.inject.1.extend_from_slice(&indices[1..]);
+                            }
+                        }
+                    } else {
+                        self.inject.0 += total_size;
+                        self.inject.1.extend_from_slice(&indices);
+                    }
+
+                    let file_name: &Path = OsStr::from_bytes(c_file_name.to_bytes()).as_ref();
+                    let _offset = self
+                        .add_appendix_ref(
+                            encoder,
+                            file_name,
+                            &metadata,
+                            appendix_offset,
+                            file_size,
+                        )
+                        .await?;
+
+                    return Ok(true);
+                }
+            }
+            DirEntryAttribute::Hardlink => {
+                // Catalog contains a hardlink, but the hard link was not present in the current
+                // pxar archive. So be sure to reencode this file instead of reusing it.
+                return Ok(false)
+            }
+            _ => println!("Unexpected attribute type, expected 'File' or 'Hardlink'"),
+        }
+        Ok(false)
+    }
+
     async fn add_entry<T: SeqWrite + Send>(
         &mut self,
         encoder: &mut Encoder<'_, T>,
@@ -595,6 +730,14 @@ impl Archiver {
                 }
 
                 let file_size = stat.st_size as u64;
+                if file_size > MAX_FILE_SIZE
+                    && self
+                        .reuse_if_metadata_unchanged(encoder, c_file_name, &metadata, stat)
+                        .await?
+                {
+                    return Ok(());
+                }
+
                 let offset: LinkOffset = self
                     .add_regular_file(encoder, fd, file_name, &metadata, file_size)
                     .await?;
@@ -712,7 +855,7 @@ impl Archiver {
         self.fs_feature_flags = old_fs_feature_flags;
         self.current_st_dev = old_st_dev;
 
-        encoder.finish().await?;
+        encoder.finish(None).await?;
         result
     }
 
diff --git a/src/tape/file_formats/snapshot_archive.rs b/src/tape/file_formats/snapshot_archive.rs
index 252384b5..4bbf4727 100644
--- a/src/tape/file_formats/snapshot_archive.rs
+++ b/src/tape/file_formats/snapshot_archive.rs
@@ -88,7 +88,7 @@ pub fn tape_write_snapshot_archive<'a>(
                 proxmox_lang::io_bail!("file '{}' shrunk while reading", filename);
             }
         }
-        encoder.finish()?;
+        encoder.finish(None)?;
         Ok(())
     });
 
-- 
2.39.2
^ permalink raw reply	[flat|nested] 40+ messages in thread

* [pbs-devel] [RFC proxmox-backup 20/20] fix #3174: client: Add incremental flag to backup creation
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (18 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 19/20] fix #3174: archiver: reuse files with unchanged metadata Christian Ebner
@ 2023-09-22  7:16 ` Christian Ebner
  2023-09-26  7:11   ` Christian Ebner
  2023-09-26  7:15 ` [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
  20 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-22  7:16 UTC (permalink / raw)
  To: pbs-devel

When set, the catalog of the previous backup run and the corresponding
index file are fetched from the server and used as a reference during
pxar archive creation.
This allows the archiver to skip encoding the payloads of unchanged
regular files and to reference their existing chunks for inclusion in
the new backup's index file instead, creating a pxar archive whose
appendix section contains the payloads as a concatenation of chunks.
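
The control flow this adds to create_backup() can be condensed to the
following sketch. The types are simplified placeholders (the real code builds
a pbs_client::pxar::PxarPrevRef from a downloaded dynamic index and catalog);
only the gating logic is shown: without --incremental, or without a previous
manifest to reference, the archiver gets no reference data and encodes
everything.

```rust
struct PrevRef {
    archive_name: String,
    // In the real code this also carries the previous catalog reader and
    // the previous dynamic index.
}

fn build_previous_ref(
    incremental: bool,
    have_previous_manifest: bool,
    target: &str,
) -> Option<PrevRef> {
    if !incremental || !have_previous_manifest {
        // Full backup, or nothing to reference: every payload is re-encoded.
        return None;
    }
    // Placeholder for downloading the previous dynamic index and catalog.
    Some(PrevRef {
        archive_name: target.to_string(),
    })
}
```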

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 proxmox-backup-client/src/main.rs | 107 ++++++++++++++++++++++++++++--
 1 file changed, 103 insertions(+), 4 deletions(-)

diff --git a/proxmox-backup-client/src/main.rs b/proxmox-backup-client/src/main.rs
index 5945ae5d..90c73a55 100644
--- a/proxmox-backup-client/src/main.rs
+++ b/proxmox-backup-client/src/main.rs
@@ -1,5 +1,6 @@
 use std::collections::{HashSet, VecDeque};
 use std::io::{self, Read, Seek, SeekFrom, Write};
+use std::os::unix::fs::OpenOptionsExt;
 use std::path::{Path, PathBuf};
 use std::pin::Pin;
 use std::sync::{Arc, Mutex};
@@ -687,6 +688,12 @@ fn spawn_catalog_upload(
                optional: true,
                default: false,
            },
+           "incremental": {
+               type: Boolean,
+               description: "Only read files modified since last full-backup.",
+               optional: true,
+               default: false,
+           },
        }
    }
 )]
@@ -696,6 +703,7 @@ async fn create_backup(
     all_file_systems: bool,
     skip_lost_and_found: bool,
     dry_run: bool,
+    incremental: bool,
     _info: &ApiMethod,
     _rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {
@@ -849,7 +857,17 @@ async fn create_backup(
 
     let backup_time = backup_time_opt.unwrap_or_else(epoch_i64);
 
-    let client = connect_rate_limited(&repo, rate_limit)?;
+    let client = connect_rate_limited(&repo, rate_limit.clone())?;
+    let backup_group = BackupGroup::new(backup_type, backup_id);
+
+    let previous_snapshot = if incremental {
+        let snapshot =
+            api_datastore_latest_snapshot(&client, &repo.store(), &backup_ns, backup_group).await?;
+        Some(snapshot)
+    } else {
+        None
+    };
+
     record_repository(&repo);
 
     let snapshot = BackupDir::from((backup_type, backup_id.to_owned(), backup_time));
@@ -959,8 +977,8 @@ async fn create_backup(
         log::info!("{} {} '{}' to '{}' as {}", what, desc, file, repo, target);
     };
 
-    for (backup_type, filename, target, size) in upload_list {
-        match (backup_type, dry_run) {
+    for (backup_spec_type, filename, target, size) in upload_list {
+        match (backup_spec_type, dry_run) {
             // dry-run
             (BackupSpecificationType::CONFIG, true) => log_file("config file", &filename, &target),
             (BackupSpecificationType::LOGFILE, true) => log_file("log file", &filename, &target),
@@ -1010,12 +1028,44 @@ async fn create_backup(
                     .unwrap()
                     .start_directory(std::ffi::CString::new(target.as_str())?.as_c_str())?;
 
+                let known_chunks = Arc::new(Mutex::new(HashSet::new()));
+                let previous_ref = if incremental {
+                    match previous_manifest {
+                        None => None,
+                        Some(ref manifest) => {
+                            let reference_index = client
+                                .download_previous_dynamic_index(
+                                    &target,
+                                    &manifest,
+                                    known_chunks.clone(),
+                                )
+                                .await?;
+
+                            let reference_catalog = download_reference_catalog(
+                                &repo,
+                                previous_snapshot.as_ref().unwrap(),
+                                &backup_ns,
+                                crypt_config.clone(),
+                            )
+                            .await?;
+
+                            Some(pbs_client::pxar::PxarPrevRef {
+                                index: reference_index,
+                                catalog: reference_catalog,
+                                archive_name: target.clone(),
+                            })
+                        }
+                    }
+                } else {
+                    None
+                };
+
                 let pxar_options = pbs_client::pxar::PxarCreateOptions {
                     device_set: devices.clone(),
                     patterns: pattern_list.clone(),
                     entries_max: entries_max as usize,
                     skip_lost_and_found,
-                    previous_ref: None,
+                    previous_ref,
                 };
 
                 let upload_options = UploadOptions {
@@ -1116,6 +1166,55 @@ async fn create_backup(
     Ok(Value::Null)
 }
 
+async fn download_reference_catalog(
+    repo: &BackupRepository,
+    previous_snapshot: &BackupDir,
+    backup_ns: &BackupNamespace,
+    crypt_config: Option<Arc<CryptConfig>>,
+) -> Result<CatalogReader<std::fs::File>, Error> {
+    let http_reader_client = connect(&repo)?;
+    let backup_reader = BackupReader::start(
+        http_reader_client,
+        crypt_config.clone(),
+        repo.store(),
+        &backup_ns,
+        &previous_snapshot,
+        true,
+    )
+    .await?;
+
+    let (manifest, _) = backup_reader.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
+
+    let index = backup_reader
+        .download_dynamic_index(&manifest, CATALOG_NAME)
+        .await?;
+    let most_used = index.find_most_used_chunks(8);
+    let file_info = manifest.lookup_file_info(CATALOG_NAME)?;
+
+    let chunk_reader = RemoteChunkReader::new(
+        backup_reader,
+        crypt_config.clone(),
+        file_info.chunk_crypt_mode(),
+        most_used,
+    );
+
+    let mut reader = BufferedDynamicReader::new(index, chunk_reader);
+
+    let mut catalogfile = std::fs::OpenOptions::new()
+        .write(true)
+        .read(true)
+        .custom_flags(libc::O_TMPFILE)
+        .open("/tmp")?;
+
+    std::io::copy(&mut reader, &mut catalogfile)
+        .map_err(|err| format_err!("failed to download reference catalog - {}", err))?;
+
+    catalogfile.seek(SeekFrom::Start(0))?;
+
+    Ok(CatalogReader::new(catalogfile))
+}
+
 async fn dump_image<W: Write>(
     client: Arc<BackupReader>,
     crypt_config: Option<Arc<CryptConfig>>,
-- 
2.39.2





^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [pbs-devel] [RFC proxmox-backup 19/20] fix #3174: archiver: reuse files with unchanged metadata
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 19/20] fix #3174: archiver: reuse files with unchanged metadata Christian Ebner
@ 2023-09-26  7:01   ` Christian Ebner
  0 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-26  7:01 UTC (permalink / raw)
  To: pbs-devel

During testing, a restore issue was discovered for some pxar archives created with this patch series (in this case the backup of a Linux kernel source tree). It seems the optimization intended to avoid appending duplicate consecutive chunks (see further comments inline) leads to some chunks not being indexed as expected. A restore therefore fails with a premature end of the archive.

I will have a look at how to better approach this and provide a fix in the next version of the patch series.

Thanks also to Fabian for helping with testing and trying to reproduce this issue.
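The consecutive-chunk bookkeeping in question can be illustrated with a toy model (hypothetical names, plain strings standing in for chunk digests; this is a sketch of the intended behavior, not the patch's actual code):

```rust
// Toy model of the appendix chunk queue: when the first chunk of a new
// file's range has the same digest as the last chunk already queued, it
// must not be queued a second time, while all following chunks of the
// range still need to be appended. The restore failure described above
// stems from this bookkeeping going wrong for some inputs.
fn extend_dedup(queued: &mut Vec<&'static str>, new_chunks: &[&'static str]) {
    let duplicate_boundary = matches!(
        (queued.last(), new_chunks.first()),
        (Some(last), Some(first)) if last == first
    );
    let start = if duplicate_boundary { 1 } else { 0 };
    queued.extend_from_slice(&new_chunks[start..]);
}
```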

> On 22.09.2023 09:16 CEST Christian Ebner <c.ebner@proxmox.com> wrote:
> 
>  
> During pxar archive encoding, check regular files against their
> previous backup catalog's metadata, if present.
> 
> Instead of re-encoding files with unchanged metadata with file size over
> a given threshold limit, mark the entries as appendix references in the
> pxar archive and append the chunks containing the file payload in the
> appendix.
> 
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
>  pbs-client/src/pxar/create.rs             | 149 +++++++++++++++++++++-
>  src/tape/file_formats/snapshot_archive.rs |   2 +-
>  2 files changed, 147 insertions(+), 4 deletions(-)
> 
> diff --git a/pbs-client/src/pxar/create.rs b/pbs-client/src/pxar/create.rs
> index d6afc465..cb9af26f 100644
> --- a/pbs-client/src/pxar/create.rs
> +++ b/pbs-client/src/pxar/create.rs
> @@ -24,7 +24,7 @@ use proxmox_io::vec;
>  use proxmox_lang::c_str;
>  use proxmox_sys::fs::{self, acl, xattr};
>  
> -use pbs_datastore::catalog::{BackupCatalogWriter, CatalogReader};
> +use pbs_datastore::catalog::{BackupCatalogWriter, CatalogReader, DirEntryAttribute};
>  use pbs_datastore::dynamic_index::{DynamicEntry, DynamicIndexReader};
>  
>  use crate::inject_reused_chunks::InjectChunks;
> @@ -32,6 +32,8 @@ use crate::pxar::metadata::errno_is_unsupported;
>  use crate::pxar::tools::assert_single_path_component;
>  use crate::pxar::Flags;
>  
> +const MAX_FILE_SIZE: u64 = 128;
> +
>  /// Pxar options for creating a pxar archive/stream
>  #[derive(Default)]
>  pub struct PxarCreateOptions {
> @@ -218,7 +220,14 @@ where
>      archiver
>          .archive_dir_contents(&mut encoder, source_dir, true)
>          .await?;
> -    encoder.finish().await?;
> +
> +    if archiver.inject.1.len() > 0 {
> +        let (appendix_offset, appendix_size) = archiver.add_appendix(&mut encoder).await?;
> +        encoder.finish(Some((appendix_offset, appendix_size))).await?;
> +    } else {
> +        encoder.finish(None).await?;
> +    }
> +
>      Ok(())
>  }
>  
> @@ -529,6 +538,132 @@ impl Archiver {
>          Ok(())
>      }
>  
> +    async fn add_appendix<T: SeqWrite + Send>(
> +        &mut self,
> +        encoder: &mut Encoder<'_, T>,
> +    ) -> Result<(LinkOffset, u64), Error> {
> +        let total = self
> +            .inject
> +            .1
> +            .iter()
> +            .fold(0, |sum, inject| sum + inject.end());
> +        let appendix_offset = encoder.add_appendix(total).await?;
> +        let mut boundaries = self.forced_boundaries.lock().unwrap();
> +        let mut position = encoder.position_add(0);
> +
> +        // Inject reused chunks in batches of 128 to not exceed the upload post request size limit
> +        for injects in self.inject.1.chunks(128) {
> +            let size = injects
> +                .iter()
> +                .fold(0, |sum, inject| sum + inject.end() as usize);
> +            let inject_chunks = InjectChunks {
> +                boundary: position,
> +                chunks: injects.to_vec(),
> +                size,
> +            };
> +            boundaries.push_back(inject_chunks);
> +            position = encoder.position_add(size as u64);
> +        }
> +
> +        Ok((appendix_offset, total))
> +    }
> +
> +    async fn reuse_if_metadata_unchanged<T: SeqWrite + Send>(
> +        &mut self,
> +        encoder: &mut Encoder<'_, T>,
> +        c_file_name: &CStr,
> +        metadata: &Metadata,
> +        stat: &FileStat,
> +    ) -> Result<bool, Error> {
> +        let prev_ref = match self.previous_ref {
> +            None => return Ok(false),
> +            Some(ref mut prev_ref) => prev_ref
> +        };
> +
> +        let path = Path::new(prev_ref.archive_name.as_str()).join(self.path.clone());
> +        let catalog_entry = prev_ref
> +            .catalog
> +            .lookup_recursive(path.as_os_str().as_bytes())?;
> +
> +        match catalog_entry.attr {
> +            DirEntryAttribute::File {
> +                size,
> +                mtime,
> +                link_offset,
> +            } => {
> +                let file_size = stat.st_size as u64;
> +                if mtime == stat.st_mtime && size == file_size {
> +                    if let Some(ref catalog) = self.catalog {
> +                        catalog.lock().unwrap().add_file(
> +                            c_file_name,
> +                            file_size,
> +                            stat.st_mtime,
> +                            link_offset,
> +                        )?;
> +                    }
> +
> +                    // Filename header
> +                    let mut metadata_bytes = std::mem::size_of::<pxar::format::Header>();
> +                    // Filename payload
> +                    metadata_bytes += std::mem::size_of_val(c_file_name);
> +                    // Metadata with headers and payloads
> +                    metadata_bytes += metadata.calculate_byte_len();
> +                    // Payload header
> +                    metadata_bytes += std::mem::size_of::<pxar::format::Header>();
> +
> +                    let metadata_bytes = u64::try_from(metadata_bytes)?;
> +                    let chunk_start_offset = link_offset.raw();
> +                    let start = chunk_start_offset;
> +                    let end = chunk_start_offset + metadata_bytes + file_size;
> +                    let (indices, total_size, padding_start) =
> +                        prev_ref.index.indices(start, end)?;
> +
> +                    let mut appendix_offset = self.inject.0 as u64 + padding_start;
> +

The intention here was to not append the same chunk multiple times, e.g. when it is shared by multiple unchanged consecutive files.
> +                    if let (Some(current_end), Some(new_start)) =
> +                        (self.inject.1.last(), indices.first())
> +                    {
> +                        if new_start.digest() == current_end.digest() {
> +                            // Already got that chunk, do not append it again and correct
> +                            // appendix_offset to be relative to chunk before this one
> +                            appendix_offset -= new_start.end();
> +                            if indices.len() > 1 {
> +                                // Append all following chunks
> +                                self.inject.0 += indices[1..]
> +                                    .iter()
> +                                    .fold(0, |sum, index| sum + index.end() as usize);
> +                                self.inject.1.extend_from_slice(&indices[1..]);
> +                            }
> +                        }
> +                    } else {

If only the else path is used for creating the backup, restore works as expected, but at the cost of a hugely increased restore time, as a lot of data now has to be read and skipped.

> +                        self.inject.0 += total_size;
> +                        self.inject.1.extend_from_slice(&indices);
> +                    }
> +
> +                    let file_name: &Path = OsStr::from_bytes(c_file_name.to_bytes()).as_ref();
> +                    let _offset = self
> +                        .add_appendix_ref(
> +                            encoder,
> +                            file_name,
> +                            &metadata,
> +                            appendix_offset,
> +                            file_size,
> +                        )
> +                        .await?;
> +
> +                    return Ok(true);
> +                }
> +            }
> +            DirEntryAttribute::Hardlink => {
> +                // Catalog contains a hardlink, but the hard link was not present in the current
> +                // pxar archive. So be sure to reencode this file instead of reusing it.
> +                return Ok(false)
> +            }
> +            _ => println!("Unexpected attribute type, expected 'File' or 'Hardlink'"),
> +        }
> +        Ok(false)
> +    }
> +
>      async fn add_entry<T: SeqWrite + Send>(
>          &mut self,
>          encoder: &mut Encoder<'_, T>,
> @@ -595,6 +730,14 @@ impl Archiver {
>                  }
>  
>                  let file_size = stat.st_size as u64;
> +                if file_size > MAX_FILE_SIZE
> +                    && self
> +                        .reuse_if_metadata_unchanged(encoder, c_file_name, &metadata, stat)
> +                        .await?
> +                {
> +                    return Ok(());
> +                }
> +
>                  let offset: LinkOffset = self
>                      .add_regular_file(encoder, fd, file_name, &metadata, file_size)
>                      .await?;
> @@ -712,7 +855,7 @@ impl Archiver {
>          self.fs_feature_flags = old_fs_feature_flags;
>          self.current_st_dev = old_st_dev;
>  
> -        encoder.finish().await?;
> +        encoder.finish(None).await?;
>          result
>      }
>  
> diff --git a/src/tape/file_formats/snapshot_archive.rs b/src/tape/file_formats/snapshot_archive.rs
> index 252384b5..4bbf4727 100644
> --- a/src/tape/file_formats/snapshot_archive.rs
> +++ b/src/tape/file_formats/snapshot_archive.rs
> @@ -88,7 +88,7 @@ pub fn tape_write_snapshot_archive<'a>(
>                  proxmox_lang::io_bail!("file '{}' shrunk while reading", filename);
>              }
>          }
> -        encoder.finish()?;
> +        encoder.finish(None)?;
>          Ok(())
>      });
>  
> -- 
> 2.39.2





* Re: [pbs-devel] [RFC proxmox-backup 20/20] fix #3174: client: Add incremental flag to backup creation
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 20/20] fix #3174: client: Add incremental flag to backup creation Christian Ebner
@ 2023-09-26  7:11   ` Christian Ebner
  0 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-26  7:11 UTC (permalink / raw)
  To: pbs-devel

As discussed off-list with Fabian and Thomas, the naming choice for the flag is not okay, as the term "incremental" is already taken in the Proxmox Backup Server context and might only lead to confusion.
Further, it might be useful to extend the flag into a more structured option with parameters in the future, so Thomas suggested `--change-detection-mode <data|metadata>`, as it clearly states the change in functionality.

A new version of the patch series will include this change in naming.
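As a rough sketch of where that naming could lead (hypothetical, not the actual implementation), the suggested option might map to an enum along these lines:

```rust
// Hypothetical sketch of the suggested `--change-detection-mode` option.
#[derive(Clone, Copy, Debug, Default, PartialEq)]
enum ChangeDetectionMode {
    // Current behavior: re-read and re-encode all file payloads.
    #[default]
    Data,
    // Skip re-encoding files whose metadata (mtime/size) is unchanged.
    Metadata,
}

impl std::str::FromStr for ChangeDetectionMode {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "data" => Ok(Self::Data),
            "metadata" => Ok(Self::Metadata),
            other => Err(format!("invalid change detection mode '{other}'")),
        }
    }
}
```

A structured option like this also leaves room for per-mode parameters later, which a plain boolean flag would not.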

> On 22.09.2023 09:16 CEST Christian Ebner <c.ebner@proxmox.com> wrote:
> 
>  
> When set, the catalog for the previous backup run and the corresponding
> index file are fetched from the server and used as reference during pxar
> archive creation.
> This allows the archiver to skip encoding of file payloads for unchanged
> regular files and to reference their existing chunks for inclusion in the
> new backup's index file instead, creating a pxar archive with an appendix
> section containing the payloads as a concatenation of chunks.
> 
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
>  proxmox-backup-client/src/main.rs | 107 ++++++++++++++++++++++++++++--
>  1 file changed, 103 insertions(+), 4 deletions(-)
> 
> diff --git a/proxmox-backup-client/src/main.rs b/proxmox-backup-client/src/main.rs
> index 5945ae5d..90c73a55 100644
> --- a/proxmox-backup-client/src/main.rs
> +++ b/proxmox-backup-client/src/main.rs
> @@ -1,5 +1,6 @@
>  use std::collections::{HashSet, VecDeque};
>  use std::io::{self, Read, Seek, SeekFrom, Write};
> +use std::os::unix::fs::OpenOptionsExt;
>  use std::path::{Path, PathBuf};
>  use std::pin::Pin;
>  use std::sync::{Arc, Mutex};
> @@ -687,6 +688,12 @@ fn spawn_catalog_upload(
>                 optional: true,
>                 default: false,
>             },
> +           "incremental": {
> +               type: Boolean,
> +               description: "Only read files modified since last full-backup.",
> +               optional: true,
> +               default: false,
> +           },
>         }
>     }
>  )]
> @@ -696,6 +703,7 @@ async fn create_backup(
>      all_file_systems: bool,
>      skip_lost_and_found: bool,
>      dry_run: bool,
> +    incremental: bool,
>      _info: &ApiMethod,
>      _rpcenv: &mut dyn RpcEnvironment,
>  ) -> Result<Value, Error> {
> @@ -849,7 +857,17 @@ async fn create_backup(
>  
>      let backup_time = backup_time_opt.unwrap_or_else(epoch_i64);
>  
> -    let client = connect_rate_limited(&repo, rate_limit)?;
> +    let client = connect_rate_limited(&repo, rate_limit.clone())?;
> +    let backup_group = BackupGroup::new(backup_type, backup_id);
> +
> +    let previous_snapshot = if incremental {
> +        let snapshot =
> +            api_datastore_latest_snapshot(&client, &repo.store(), &backup_ns, backup_group).await?;
> +        Some(snapshot)
> +    } else {
> +        None
> +    };
> +
>      record_repository(&repo);
>  
>      let snapshot = BackupDir::from((backup_type, backup_id.to_owned(), backup_time));
> @@ -959,8 +977,8 @@ async fn create_backup(
>          log::info!("{} {} '{}' to '{}' as {}", what, desc, file, repo, target);
>      };
>  
> -    for (backup_type, filename, target, size) in upload_list {
> -        match (backup_type, dry_run) {
> +    for (backup_spec_type, filename, target, size) in upload_list {
> +        match (backup_spec_type, dry_run) {
>              // dry-run
>              (BackupSpecificationType::CONFIG, true) => log_file("config file", &filename, &target),
>              (BackupSpecificationType::LOGFILE, true) => log_file("log file", &filename, &target),
> @@ -1010,12 +1028,44 @@ async fn create_backup(
>                      .unwrap()
>                      .start_directory(std::ffi::CString::new(target.as_str())?.as_c_str())?;
>  
> +                let known_chunks = Arc::new(Mutex::new(HashSet::new()));
> +                let previous_ref = if incremental {
> +                    match previous_manifest {
> +                        None => None,
> +                        Some(ref manifest) => {
> +                            let reference_index = client
> +                                .download_previous_dynamic_index(
> +                                    &target,
> +                                    &manifest,
> +                                    known_chunks.clone(),
> +                                )
> +                                .await?;
> +
> +                            let reference_catalog = download_reference_catalog(
> +                                &repo,
> +                                previous_snapshot.as_ref().unwrap(),
> +                                &backup_ns,
> +                                crypt_config.clone(),
> +                            )
> +                            .await?;
> +
> +                            Some(pbs_client::pxar::PxarPrevRef {
> +                                index: reference_index,
> +                                catalog: reference_catalog,
> +                                archive_name: target.clone(),
> +                            })
> +                        }
> +                    }
> +                } else {
> +                    None
> +                };
> +
>                  let pxar_options = pbs_client::pxar::PxarCreateOptions {
>                      device_set: devices.clone(),
>                      patterns: pattern_list.clone(),
>                      entries_max: entries_max as usize,
>                      skip_lost_and_found,
> -                    previous_ref: None,
> +                    previous_ref,
>                  };
>  
>                  let upload_options = UploadOptions {
> @@ -1116,6 +1166,55 @@ async fn create_backup(
>      Ok(Value::Null)
>  }
>  
> +async fn download_reference_catalog(
> +    repo: &BackupRepository,
> +    previous_snapshot: &BackupDir,
> +    backup_ns: &BackupNamespace,
> +    crypt_config: Option<Arc<CryptConfig>>,
> +) -> Result<CatalogReader<std::fs::File>, Error> {
> +    let http_reader_client = connect(&repo)?;
> +    let backup_reader = BackupReader::start(
> +        http_reader_client,
> +        crypt_config.clone(),
> +        repo.store(),
> +        &backup_ns,
> +        &previous_snapshot,
> +        true,
> +    )
> +    .await?;
> +
> +    let (manifest, _) = backup_reader.download_manifest().await?;
> +    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
> +
> +    let index = backup_reader
> +        .download_dynamic_index(&manifest, CATALOG_NAME)
> +        .await?;
> +    let most_used = index.find_most_used_chunks(8);
> +    let file_info = manifest.lookup_file_info(CATALOG_NAME)?;
> +
> +    let chunk_reader = RemoteChunkReader::new(
> +        backup_reader,
> +        crypt_config.clone(),
> +        file_info.chunk_crypt_mode(),
> +        most_used,
> +    );
> +
> +    let mut reader = BufferedDynamicReader::new(index, chunk_reader);
> +
> +    let mut catalogfile = std::fs::OpenOptions::new()
> +        .write(true)
> +        .read(true)
> +        .custom_flags(libc::O_TMPFILE)
> +        .open("/tmp")?;
> +
> +    std::io::copy(&mut reader, &mut catalogfile)
> +        .map_err(|err| format_err!("failed to download reference catalog - {}", err))?;
> +
> +    catalogfile.seek(SeekFrom::Start(0))?;
> +
> +    Ok(CatalogReader::new(catalogfile))
> +}
> +
>  async fn dump_image<W: Write>(
>      client: Arc<BackupReader>,
>      crypt_config: Option<Arc<CryptConfig>>,
> -- 
> 2.39.2





* Re: [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup
  2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
                   ` (19 preceding siblings ...)
  2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 20/20] fix #3174: client: Add incremental flag to backup creation Christian Ebner
@ 2023-09-26  7:15 ` Christian Ebner
  20 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-26  7:15 UTC (permalink / raw)
  To: pbs-devel

Thomas suggested including some form of benchmark, which might be useful not only for measuring performance, but could also serve as a regression test in a CI pipeline and/or be used to optimize possible tunable parameters.
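As a minimal illustration of the idea (hypothetical, not tied to the actual client code), such a benchmark could start out as simple wall-clock timing around an archive creation run:

```rust
use std::time::{Duration, Instant};

// Minimal wall-clock timing helper: runs the given closure (e.g. an
// archive creation run) and reports the elapsed time. A real benchmark
// would repeat runs and compare the numbers across versions.
fn time_run<F: FnOnce()>(label: &str, f: F) -> Duration {
    let start = Instant::now();
    f();
    let elapsed = start.elapsed();
    println!("{label}: {elapsed:?}");
    elapsed
}
```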

> On 22.09.2023 09:16 CEST Christian Ebner <c.ebner@proxmox.com> wrote:
> 
>  
> This (still rather rough) series of patches prototypes a possible
> approach to improve the pxar file level backup creation speed.
> The series is intended to get a first feedback on the implementation
> approach and to find possible pitfalls I might not be aware of.
> 
> The current approach is to skip encoding of regular file payloads,
> for which metadata (currently mtime and size) did not change as
> compared to a previous backup run. Instead of re-encoding the files, a
> reference to a newly introduced appendix section of the pxar archive
> will be written. The appendix section will be created as a concatenation
> of indexed chunks from the previous backup run, thereby containing the
> sequential file payload at a calculated offset with respect to the
> starting point of the appendix section.
> 
> Metadata comparison and calculation of the chunks to be indexed for the
> appendix section is performed using the catalog of a previous backup as
> reference. In order to be able to calculate the offsets, the current
> catalog format is extended to include the file offset with respect to
> the pxar archive byte stream. This allows finding the required chunk
> indices, the start padding within the concatenated chunks and the total
> bytes introduced by the chunks.
> 
> During encoding, the chunks needed for the appendix section are injected
> in the pxar archive after forcing a chunk boundary when regular pxar
> encoding is finished. Finally, pxar archives containing an appendix
> section are marked as such by appending a final pxar goodbye lookup
> table only containing the offset to the appendix section start and total
> size of that section, needed for random access, e.g. for mounting the
> archive via the fuse filesystem implementation.
> 
> Currently, the code assumes the reference backup (for which the previous
> run is used) to be a regular backup without appendix section, and the
> catalog for that backup to already contain the required additional
> offset information.
> 
> An invocation therefore looks like:
> ```bash
> proxmox-backup-client backup <label>.pxar:<source-path>
> proxmox-backup-client backup <label>.pxar:<source-path> --incremental
> ```
> 
> pxar:
> 
> Christian Ebner (8):
>   fix #3174: encoder: impl fn new for LinkOffset
>   fix #3174: decoder: factor out skip_bytes from skip_entry
>   fix #3174: decoder: impl skip_bytes for sync dec
>   fix #3174: metadata: impl fn to calc byte size
>   fix #3174: enc/dec: impl PXAR_APPENDIX_REF entrytype
>   fix #3174: enc/dec: impl PXAR_APPENDIX entrytype
>   fix #3174: encoder: add helper to incr encoder pos
>   fix #3174: enc/dec: impl PXAR_APPENDIX_TAIL entrytype
> 
>  examples/mk-format-hashes.rs | 11 +++++
>  examples/pxarcmd.rs          |  4 +-
>  src/accessor/mod.rs          | 46 ++++++++++++++++++++
>  src/decoder/mod.rs           | 38 +++++++++++++---
>  src/decoder/sync.rs          |  6 +++
>  src/encoder/aio.rs           | 36 ++++++++++++++--
>  src/encoder/mod.rs           | 84 +++++++++++++++++++++++++++++++++++-
>  src/encoder/sync.rs          | 32 +++++++++++++-
>  src/format/mod.rs            | 16 +++++++
>  src/lib.rs                   | 54 +++++++++++++++++++++++
>  10 files changed, 312 insertions(+), 15 deletions(-)
> 
> proxmox-backup:
> 
> Christian Ebner (12):
>   fix #3174: index: add fn index list from start/end-offsets
>   fix #3174: index: add fn digest for DynamicEntry
>   fix #3174: api: double catalog upload size
>   fix #3174: catalog: incl pxar archives file offset
>   fix #3174: archiver/extractor: impl appendix ref
>   fix #3174: extractor: impl seq restore from appendix
>   fix #3174: archiver: store ref to previous backup
>   fix #3174: upload stream: impl reused chunk injector
>   fix #3174: chunker: add forced boundaries
>   fix #3174: backup writer: inject queued chunk in upload steam
>   fix #3174: archiver: reuse files with unchanged metadata
>   fix #3174: client: Add incremental flag to backup creation
> 
>  examples/test_chunk_speed2.rs                 |   9 +-
>  pbs-client/src/backup_writer.rs               |  88 ++++---
>  pbs-client/src/chunk_stream.rs                |  41 +++-
>  pbs-client/src/inject_reused_chunks.rs        | 123 ++++++++++
>  pbs-client/src/lib.rs                         |   1 +
>  pbs-client/src/pxar/create.rs                 | 217 ++++++++++++++++--
>  pbs-client/src/pxar/extract.rs                | 141 ++++++++++++
>  pbs-client/src/pxar/mod.rs                    |   2 +-
>  pbs-client/src/pxar/tools.rs                  |   9 +
>  pbs-client/src/pxar_backup_stream.rs          |   8 +-
>  pbs-datastore/src/catalog.rs                  | 122 ++++++++--
>  pbs-datastore/src/dynamic_index.rs            |  38 +++
>  proxmox-backup-client/src/main.rs             | 142 +++++++++++-
>  .../src/proxmox_restore_daemon/api.rs         |  15 +-
>  pxar-bin/src/main.rs                          |  22 +-
>  src/api2/backup/upload_chunk.rs               |   4 +-
>  src/tape/file_formats/snapshot_archive.rs     |   2 +-
>  tests/catar.rs                                |   3 +
>  18 files changed, 886 insertions(+), 101 deletions(-)
>  create mode 100644 pbs-client/src/inject_reused_chunks.rs
> 
> -- 
> 2.39.2





* Re: [pbs-devel] [RFC pxar 2/20] fix #3174: decoder: factor out skip_bytes from skip_entry
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 2/20] fix #3174: decoder: factor out skip_bytes from skip_entry Christian Ebner
@ 2023-09-27 11:32   ` Wolfgang Bumiller
  2023-09-27 11:53     ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Wolfgang Bumiller @ 2023-09-27 11:32 UTC (permalink / raw)
  To: Christian Ebner; +Cc: pbs-devel

On Fri, Sep 22, 2023 at 09:16:03AM +0200, Christian Ebner wrote:
> Allows to skip bytes independently of the current entries header, as is
> implemented by skip_entry.
> 
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
>  src/decoder/mod.rs | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/src/decoder/mod.rs b/src/decoder/mod.rs
> index d1fb911..2ca263b 100644
> --- a/src/decoder/mod.rs
> +++ b/src/decoder/mod.rs
> @@ -562,20 +562,25 @@ impl<I: SeqRead> DecoderImpl<I> {
>      // These utilize additional information and hence are not part of the `dyn SeqRead` impl.
>      //
>  
> -    async fn skip_entry(&mut self, offset: u64) -> io::Result<()> {
> -        let mut len = self.current_header.content_size() - offset;
> +    async fn skip_bytes(&mut self, len: usize) -> io::Result<()> {

I think `len` here should be an `u64`, `skip_entry` otherwise gets
limited to 32 bits on some systems for no reason :-)
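A standalone sync sketch of the suggested signature (using `std::io::Read` in place of the crate's `SeqRead` for brevity; names are illustrative) could look like:

```rust
use std::io::Read;

// Skip `len` bytes from a reader through a fixed scratch buffer. Taking
// `len: u64` keeps the skippable length independent of the platform's
// usize width, as suggested above.
fn skip_bytes<R: Read>(input: &mut R, mut len: u64) -> std::io::Result<()> {
    let mut scratch = [0u8; 4096];
    while len > 0 {
        // chunk is at most 4096, so the cast to usize is lossless
        let chunk = len.min(scratch.len() as u64) as usize;
        input.read_exact(&mut scratch[..chunk])?;
        len -= chunk as u64;
    }
    Ok(())
}
```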

> +        let mut remaining = len;
>          let scratch = scratch_buffer();
> -        while len >= (scratch.len() as u64) {
> +        while remaining >= scratch.len() {
>              seq_read_exact(&mut self.input, scratch).await?;
> -            len -= scratch.len() as u64;
> +            remaining -= scratch.len();
>          }
> -        let len = len as usize;
> -        if len > 0 {
> -            seq_read_exact(&mut self.input, &mut scratch[..len]).await?;
> +        if remaining > 0 {
> +            seq_read_exact(&mut self.input, &mut scratch[..remaining]).await?;
>          }
>          Ok(())
>      }
>  
> +    async fn skip_entry(&mut self, offset: u64) -> io::Result<()> {
> +        let len =
> +            usize::try_from(self.current_header.content_size() - offset).map_err(io_err_other)?;
> +        self.skip_bytes(len).await
> +    }
> +
>      async fn read_entry_as_bytes(&mut self) -> io::Result<Vec<u8>> {
>          let size = usize::try_from(self.current_header.content_size()).map_err(io_err_other)?;
>          let data = seq_read_exact_data(&mut self.input, size).await?;
> -- 
> 2.39.2





* Re: [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size Christian Ebner
@ 2023-09-27 11:38   ` Wolfgang Bumiller
  2023-09-27 11:55     ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Wolfgang Bumiller @ 2023-09-27 11:38 UTC (permalink / raw)
  To: Christian Ebner; +Cc: pbs-devel

On Fri, Sep 22, 2023 at 09:16:05AM +0200, Christian Ebner wrote:
> Add a helper function to calculate the byte size of pxar Metadata
> objects, needed to be able to recalculate offsets when creating archives
> with appendix sections.
> 
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
>  src/lib.rs | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/src/lib.rs b/src/lib.rs
> index 210c4b1..ed7ba40 100644
> --- a/src/lib.rs
> +++ b/src/lib.rs
> @@ -144,6 +144,50 @@ impl Metadata {
>      pub fn builder_from_stat(stat: &libc::stat) -> MetadataBuilder {
>          MetadataBuilder::new(0).fill_from_stat(stat)
>      }
> +
> +    /// Calculate the number of bytes when serialized in pxar archive
> +    pub fn calculate_byte_len(&self) -> usize {

This looks like a maintenance nightmare with extra sprinkles.

Can we instead create a specialized `SeqWrite` type that just counts
bytes instead of actually writing anything and call the encoder's
`encode_metadata()` with it?
> (Either by factoring `encode_metadata()` out further or by creating an
actual `EncoderImpl` instance with it...)
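The byte-counting idea can be sketched with `std::io::Write` (the actual pxar `SeqWrite` trait is poll-based, so a real implementation would mirror this shape on that trait instead):

```rust
// A writer that discards all data and only counts the bytes it was asked
// to write. Feeding the metadata encoding through such a sink yields the
// serialized byte length without duplicating the format's layout
// knowledge in a separate hand-written calculation.
#[derive(Default)]
struct CountingWriter {
    bytes: u64,
}

impl std::io::Write for CountingWriter {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        self.bytes += buf.len() as u64; // count, discard data
        Ok(buf.len())
    }
    fn flush(&mut self) -> std::io::Result<()> {
        Ok(())
    }
}
```

The design benefit is that any future change to the metadata encoding is automatically reflected in the computed size.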

> +        let mut bytes = mem::size_of::<format::Header>();
> +        bytes += mem::size_of_val(&self.stat);
> +        for xattr in &self.xattrs {
> +            bytes += mem::size_of::<format::Header>();
> +            bytes += mem::size_of_val(xattr);
> +        }
> +        for acl_user in &self.acl.users {
> +            bytes += mem::size_of::<format::Header>();
> +            bytes += mem::size_of_val(acl_user);
> +        }
> +        for acl_group in &self.acl.groups {
> +            bytes += mem::size_of::<format::Header>();
> +            bytes += mem::size_of_val(acl_group);
> +        }
> +        if let Some(group_obj) = &self.acl.group_obj {
> +            bytes += mem::size_of::<format::Header>();
> +            bytes += mem::size_of_val(group_obj);
> +        }
> +        if let Some(default) = &self.acl.default {
> +            bytes += mem::size_of::<format::Header>();
> +            bytes += mem::size_of_val(default);
> +        }
> +        for acl_default_user in &self.acl.default_users {
> +            bytes += mem::size_of::<format::Header>();
> +            bytes += mem::size_of_val(acl_default_user);
> +        }
> +        for acl_default_group in &self.acl.default_groups {
> +            bytes += mem::size_of::<format::Header>();
> +            bytes += mem::size_of_val(acl_default_group);
> +        }
> +        if let Some(fcaps) = &self.fcaps {
> +            bytes += mem::size_of::<format::Header>();
> +            bytes += mem::size_of_val(fcaps);
> +        }
> +        if let Some(quota_project_id) = &self.quota_project_id {
> +            bytes += mem::size_of::<format::Header>();
> +            bytes += mem::size_of_val(quota_project_id);
> +        }
> +
> +        bytes
> +    }
>  }
>  
>  impl From<MetadataBuilder> for Metadata {
> -- 
> 2.39.2





* Re: [pbs-devel] [RFC pxar 2/20] fix #3174: decoder: factor out skip_bytes from skip_entry
  2023-09-27 11:32   ` Wolfgang Bumiller
@ 2023-09-27 11:53     ` Christian Ebner
  0 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-27 11:53 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel


> On 27.09.2023 13:32 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> 
>  
> On Fri, Sep 22, 2023 at 09:16:03AM +0200, Christian Ebner wrote:
> > Allows to skip bytes independently of the current entries header, as is
> > implemented by skip_entry.
> > 
> > Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> > ---
> >  src/decoder/mod.rs | 19 ++++++++++++-------
> >  1 file changed, 12 insertions(+), 7 deletions(-)
> > 
> > diff --git a/src/decoder/mod.rs b/src/decoder/mod.rs
> > index d1fb911..2ca263b 100644
> > --- a/src/decoder/mod.rs
> > +++ b/src/decoder/mod.rs
> > @@ -562,20 +562,25 @@ impl<I: SeqRead> DecoderImpl<I> {
> >      // These utilize additional information and hence are not part of the `dyn SeqRead` impl.
> >      //
> >  
> > -    async fn skip_entry(&mut self, offset: u64) -> io::Result<()> {
> > -        let mut len = self.current_header.content_size() - offset;
> > +    async fn skip_bytes(&mut self, len: usize) -> io::Result<()> {
> 
> I think `len` here should be an `u64`, `skip_entry` otherwise gets
> limited to 32 bits on some systems for no reason :-)

Yes, I will change this in the next version of the patch series, thx!
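For reference, a standalone sketch of the u64 variant over a plain `std::io::Read` (the signature and the 4 KiB scratch size are illustrative assumptions; the crate's real implementation is async over `SeqRead` and reuses a shared scratch buffer):

```rust
use std::io::{self, Read};

/// Skip `len` bytes from `input`, reading through a fixed scratch buffer,
/// so the skippable amount is not limited to 32 bits on 32-bit targets.
fn skip_bytes<R: Read>(input: &mut R, mut len: u64) -> io::Result<()> {
    let mut scratch = [0u8; 4096];
    while len > 0 {
        // Read at most one scratch buffer's worth per iteration.
        let chunk = len.min(scratch.len() as u64) as usize;
        input.read_exact(&mut scratch[..chunk])?;
        len -= chunk as u64;
    }
    Ok(())
}
```

With a `u64` length, `skip_entry` can pass `content_size() - offset` through unchanged, without the `usize::try_from` conversion.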

> 
> > +        let mut remaining = len;
> >          let scratch = scratch_buffer();
> > -        while len >= (scratch.len() as u64) {
> > +        while remaining >= scratch.len() {
> >              seq_read_exact(&mut self.input, scratch).await?;
> > -            len -= scratch.len() as u64;
> > +            remaining -= scratch.len();
> >          }
> > -        let len = len as usize;
> > -        if len > 0 {
> > -            seq_read_exact(&mut self.input, &mut scratch[..len]).await?;
> > +        if remaining > 0 {
> > +            seq_read_exact(&mut self.input, &mut scratch[..remaining]).await?;
> >          }
> >          Ok(())
> >      }
> >  
> > +    async fn skip_entry(&mut self, offset: u64) -> io::Result<()> {
> > +        let len =
> > +            usize::try_from(self.current_header.content_size() - offset).map_err(io_err_other)?;
> > +        self.skip_bytes(len).await
> > +    }
> > +
> >      async fn read_entry_as_bytes(&mut self) -> io::Result<Vec<u8>> {
> >          let size = usize::try_from(self.current_header.content_size()).map_err(io_err_other)?;
> >          let data = seq_read_exact_data(&mut self.input, size).await?;
> > -- 
> > 2.39.2





* Re: [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size
  2023-09-27 11:38   ` Wolfgang Bumiller
@ 2023-09-27 11:55     ` Christian Ebner
  2023-09-28  8:07       ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-27 11:55 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel


> On 27.09.2023 13:38 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> 
>  
> On Fri, Sep 22, 2023 at 09:16:05AM +0200, Christian Ebner wrote:
> > Add a helper function to calculate the byte size of pxar Metadata
> > objects, needed to be able to recalculate offsets when creating archives
> > with appendix sections.
> > 
> > Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> > ---
> >  src/lib.rs | 44 ++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 44 insertions(+)
> > 
> > diff --git a/src/lib.rs b/src/lib.rs
> > index 210c4b1..ed7ba40 100644
> > --- a/src/lib.rs
> > +++ b/src/lib.rs
> > @@ -144,6 +144,50 @@ impl Metadata {
> >      pub fn builder_from_stat(stat: &libc::stat) -> MetadataBuilder {
> >          MetadataBuilder::new(0).fill_from_stat(stat)
> >      }
> > +
> > +    /// Calculate the number of bytes when serialized in pxar archive
> > +    pub fn calculate_byte_len(&self) -> usize {
> 
> This looks like a maintenance nightmare with extra sprinkles.

;)

> 
> Can we instead create a specialized `SeqWrite` type that just counts
> bytes instead of actually writing anything and call the encoder's
> `encode_metadata()` with it?
> (Either by factorizing `encode_metadat()` out further or by creating an
> actual `EncoderImpl` instance with it...)

Yes, I will have a look on how to improve this and include this as well in the next version of the patch series, thx for the hints.
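A byte-counting writer along the lines suggested above could look roughly like this (sketched against `std::io::Write` rather than the crate's async `SeqWrite` trait; the type name is made up):

```rust
use std::io::{self, Write};

/// Writer that discards all data and only tracks how many bytes
/// would have been written.
#[derive(Default)]
struct ByteCounter {
    bytes: u64,
}

impl Write for ByteCounter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        // Pretend the full buffer was written; only count it.
        self.bytes += buf.len() as u64;
        Ok(buf.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}
```

Feeding such a writer to the existing `encode_metadata()` would yield the serialized size without duplicating the serialization logic in a hand-written size calculation.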

> 
> > +        let mut bytes = mem::size_of::<format::Header>();
> > +        bytes += mem::size_of_val(&self.stat);
> > +        for xattr in &self.xattrs {
> > +            bytes += mem::size_of::<format::Header>();
> > +            bytes += mem::size_of_val(xattr);
> > +        }
> > +        for acl_user in &self.acl.users {
> > +            bytes += mem::size_of::<format::Header>();
> > +            bytes += mem::size_of_val(acl_user);
> > +        }
> > +        for acl_group in &self.acl.groups {
> > +            bytes += mem::size_of::<format::Header>();
> > +            bytes += mem::size_of_val(acl_group);
> > +        }
> > +        if let Some(group_obj) = &self.acl.group_obj {
> > +            bytes += mem::size_of::<format::Header>();
> > +            bytes += mem::size_of_val(group_obj);
> > +        }
> > +        if let Some(default) = &self.acl.default {
> > +            bytes += mem::size_of::<format::Header>();
> > +            bytes += mem::size_of_val(default);
> > +        }
> > +        for acl_default_user in &self.acl.default_users {
> > +            bytes += mem::size_of::<format::Header>();
> > +            bytes += mem::size_of_val(acl_default_user);
> > +        }
> > +        for acl_default_group in &self.acl.default_groups {
> > +            bytes += mem::size_of::<format::Header>();
> > +            bytes += mem::size_of_val(acl_default_group);
> > +        }
> > +        if let Some(fcaps) = &self.fcaps {
> > +            bytes += mem::size_of::<format::Header>();
> > +            bytes += mem::size_of_val(fcaps);
> > +        }
> > +        if let Some(quota_project_id) = &self.quota_project_id {
> > +            bytes += mem::size_of::<format::Header>();
> > +            bytes += mem::size_of_val(quota_project_id);
> > +        }
> > +
> > +        bytes
> > +    }
> >  }
> >  
> >  impl From<MetadataBuilder> for Metadata {
> > -- 
> > 2.39.2





* Re: [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos Christian Ebner
@ 2023-09-27 12:07   ` Wolfgang Bumiller
  2023-09-27 12:20     ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Wolfgang Bumiller @ 2023-09-27 12:07 UTC (permalink / raw)
  To: Christian Ebner; +Cc: pbs-devel

'incr' :S

On Fri, Sep 22, 2023 at 09:16:08AM +0200, Christian Ebner wrote:
> Adds a helper to allow to increase the encoder position by a given
> size. This is used to increase the position when adding an appendix
> section to the pxar stream, as these bytes are never encoded directly
> but rather referenced by already existing chunks.

Exposing this seems like a weird choice to me. Why exactly is this
needed? Why don't we instead expose methods to actually write the
appendix section instead?

> 
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
>  src/encoder/aio.rs  | 5 +++++
>  src/encoder/mod.rs  | 6 ++++++
>  src/encoder/sync.rs | 5 +++++
>  3 files changed, 16 insertions(+)
> 
> diff --git a/src/encoder/aio.rs b/src/encoder/aio.rs
> index 3b6c900..3587f65 100644
> --- a/src/encoder/aio.rs
> +++ b/src/encoder/aio.rs
> @@ -112,6 +112,11 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
>          self.inner.finish().await
>      }
>  
> +    /// Add size to encoders position and return new position.
> +    pub fn position_add(&mut self, size: u64) -> u64 {
> +        self.inner.position_add(size)
> +    }
> +
>      /// Add reference to archive appendix
>      pub async fn add_appendix_ref<PF: AsRef<Path>>(
>          &mut self,
> diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
> index a513ce6..abe21f2 100644
> --- a/src/encoder/mod.rs
> +++ b/src/encoder/mod.rs
> @@ -626,6 +626,12 @@ impl<'a, T: SeqWrite + 'a> EncoderImpl<'a, T> {
>          self.state.write_position
>      }
>  
> +    #[inline]
> +    pub fn position_add(&mut self, size: u64) -> u64 {
> +        self.state.write_position += size;
> +        self.state.write_position
> +    }
> +
>      pub async fn create_directory(
>          &mut self,
>          file_name: &Path,
> diff --git a/src/encoder/sync.rs b/src/encoder/sync.rs
> index 372ca12..b3d7c44 100644
> --- a/src/encoder/sync.rs
> +++ b/src/encoder/sync.rs
> @@ -110,6 +110,11 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
>          poll_result_once(self.inner.finish())
>      }
>  
> +    /// Add size to encoders position and return new position.
> +    pub fn position_add(&mut self, size: u64) -> u64 {
> +        self.inner.position_add(size)
> +    }
> +
>      /// Add reference to archive appendix
>      pub async fn add_appendix_ref<PF: AsRef<Path>>(
>          &mut self,
> -- 
> 2.39.2





* Re: [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset
  2023-09-22  7:16 ` [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset Christian Ebner
@ 2023-09-27 12:08   ` Wolfgang Bumiller
  2023-09-27 12:26     ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Wolfgang Bumiller @ 2023-09-27 12:08 UTC (permalink / raw)
  To: Christian Ebner; +Cc: pbs-devel

On Fri, Sep 22, 2023 at 09:16:02AM +0200, Christian Ebner wrote:
> Allows to generate a new LinkOffset for storing the offset of regular
> files in the backup catalog, based on the provided archiver position.
> 
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
>  src/encoder/mod.rs | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
> index 0d342ec..710ed98 100644
> --- a/src/encoder/mod.rs
> +++ b/src/encoder/mod.rs
> @@ -31,6 +31,12 @@ pub use sync::Encoder;
>  pub struct LinkOffset(u64);
>  
>  impl LinkOffset {
> +    /// Create a new link from the raw byte offset.
> +    #[inline]
> +    pub fn new(raw: u64) -> Self {

Not very happy about that. It was meant to be an opaque type that you
can't just create easily.
But oh well... if we need it...
Better than using u64 directly.

> +        Self(raw)
> +    }
> +
>      /// Get the raw byte offset of this link.
>      #[inline]
>      pub fn raw(self) -> u64 {
> -- 
> 2.39.2





* Re: [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos
  2023-09-27 12:07   ` Wolfgang Bumiller
@ 2023-09-27 12:20     ` Christian Ebner
  2023-09-28  7:04       ` Wolfgang Bumiller
  0 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-27 12:20 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel


> On 27.09.2023 14:07 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> 
>  
> 'incr' :S
> 
> On Fri, Sep 22, 2023 at 09:16:08AM +0200, Christian Ebner wrote:
> > Adds a helper to allow to increase the encoder position by a given
> > size. This is used to increase the position when adding an appendix
> > section to the pxar stream, as these bytes are never encoded directly
> > but rather referenced by already existing chunks.
> 
> Exposing this seems like a weird choice to me. Why exactly is this
> needed? Why don't we instead expose methods to actually write the
> appendix section instead?

This is needed in order to increase the byte offset of the encoder itself.
The appendix section is a list of chunks which are injected into the chunk
stream on upload, but never actually consumed by the encoder and subsequently
the chunker. So there is no direct writing of the appendix section to the
stream.

By adding the bytes, consistency with the rest of the pxar archive is assured,
as these chunks/bytes are present during decoding.

> 
> > 
> > Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> > ---
> >  src/encoder/aio.rs  | 5 +++++
> >  src/encoder/mod.rs  | 6 ++++++
> >  src/encoder/sync.rs | 5 +++++
> >  3 files changed, 16 insertions(+)
> > 
> > diff --git a/src/encoder/aio.rs b/src/encoder/aio.rs
> > index 3b6c900..3587f65 100644
> > --- a/src/encoder/aio.rs
> > +++ b/src/encoder/aio.rs
> > @@ -112,6 +112,11 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
> >          self.inner.finish().await
> >      }
> >  
> > +    /// Add size to encoders position and return new position.
> > +    pub fn position_add(&mut self, size: u64) -> u64 {
> > +        self.inner.position_add(size)
> > +    }
> > +
> >      /// Add reference to archive appendix
> >      pub async fn add_appendix_ref<PF: AsRef<Path>>(
> >          &mut self,
> > diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
> > index a513ce6..abe21f2 100644
> > --- a/src/encoder/mod.rs
> > +++ b/src/encoder/mod.rs
> > @@ -626,6 +626,12 @@ impl<'a, T: SeqWrite + 'a> EncoderImpl<'a, T> {
> >          self.state.write_position
> >      }
> >  
> > +    #[inline]
> > +    pub fn position_add(&mut self, size: u64) -> u64 {
> > +        self.state.write_position += size;
> > +        self.state.write_position
> > +    }
> > +
> >      pub async fn create_directory(
> >          &mut self,
> >          file_name: &Path,
> > diff --git a/src/encoder/sync.rs b/src/encoder/sync.rs
> > index 372ca12..b3d7c44 100644
> > --- a/src/encoder/sync.rs
> > +++ b/src/encoder/sync.rs
> > @@ -110,6 +110,11 @@ impl<'a, T: SeqWrite + 'a> Encoder<'a, T> {
> >          poll_result_once(self.inner.finish())
> >      }
> >  
> > +    /// Add size to encoders position and return new position.
> > +    pub fn position_add(&mut self, size: u64) -> u64 {
> > +        self.inner.position_add(size)
> > +    }
> > +
> >      /// Add reference to archive appendix
> >      pub async fn add_appendix_ref<PF: AsRef<Path>>(
> >          &mut self,
> > -- 
> > 2.39.2





* Re: [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset
  2023-09-27 12:08   ` Wolfgang Bumiller
@ 2023-09-27 12:26     ` Christian Ebner
  2023-09-28  6:49       ` Wolfgang Bumiller
  0 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-27 12:26 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel

> On 27.09.2023 14:08 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> 
>  
> On Fri, Sep 22, 2023 at 09:16:02AM +0200, Christian Ebner wrote:
> > Allows to generate a new LinkOffset for storing the offset of regular
> > files in the backup catalog, based on the provided archiver position.
> > 
> > Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> > ---
> >  src/encoder/mod.rs | 6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
> > index 0d342ec..710ed98 100644
> > --- a/src/encoder/mod.rs
> > +++ b/src/encoder/mod.rs
> > @@ -31,6 +31,12 @@ pub use sync::Encoder;
> >  pub struct LinkOffset(u64);
> >  
> >  impl LinkOffset {
> > +    /// Create a new link from the raw byte offset.
> > +    #[inline]
> > +    pub fn new(raw: u64) -> Self {
> 
> Not very happy about that. It was meant to be an opaque type that you
> can't just create easily.
> But oh well... if we need it...
> Better than using u64 directly.

Might it be worth creating dedicated types or enums for the different offsets
needed? I do need some way to handle the different offsets and this just seemed
better than some raw u64, in order to have at least some type checking.
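For what it's worth, such dedicated offset types could be sketched as newtypes (the name `AppendixRefOffset` is purely illustrative; at this point the crate only has the opaque `LinkOffset`):

```rust
/// Offset of an entry within the main archive stream (mirrors the
/// crate's opaque `LinkOffset`).
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct LinkOffset(u64);

/// Offset relative to the start of the appendix section. A distinct
/// type, so it cannot accidentally be passed where a `LinkOffset`
/// (an absolute archive offset) is expected.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct AppendixRefOffset(u64);

impl AppendixRefOffset {
    /// Create an offset from a raw byte value.
    pub fn new(raw: u64) -> Self {
        Self(raw)
    }

    /// Get the raw byte offset.
    pub fn raw(self) -> u64 {
        self.0
    }
}
```

The compiler then rejects, e.g., passing an appendix-relative offset to `add_hardlink`, which expects an absolute archive offset.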

> 
> > +        Self(raw)
> > +    }
> > +
> >      /// Get the raw byte offset of this link.
> >      #[inline]
> >      pub fn raw(self) -> u64 {
> > -- 
> > 2.39.2





* Re: [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset
  2023-09-27 12:26     ` Christian Ebner
@ 2023-09-28  6:49       ` Wolfgang Bumiller
  2023-09-28  7:52         ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Wolfgang Bumiller @ 2023-09-28  6:49 UTC (permalink / raw)
  To: Christian Ebner; +Cc: pbs-devel

On Wed, Sep 27, 2023 at 02:26:03PM +0200, Christian Ebner wrote:
> > On 27.09.2023 14:08 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> > 
> >  
> > On Fri, Sep 22, 2023 at 09:16:02AM +0200, Christian Ebner wrote:
> > > Allows to generate a new LinkOffset for storing the offset of regular
> > > files in the backup catalog, based on the provided archiver position.
> > > 
> > > Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> > > ---
> > >  src/encoder/mod.rs | 6 ++++++
> > >  1 file changed, 6 insertions(+)
> > > 
> > > diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
> > > index 0d342ec..710ed98 100644
> > > --- a/src/encoder/mod.rs
> > > +++ b/src/encoder/mod.rs
> > > @@ -31,6 +31,12 @@ pub use sync::Encoder;
> > >  pub struct LinkOffset(u64);
> > >  
> > >  impl LinkOffset {
> > > +    /// Create a new link from the raw byte offset.
> > > +    #[inline]
> > > +    pub fn new(raw: u64) -> Self {
> > 
> > Not very happy about that. It was meant to be an opaque type that you
> > can't just create easily.
> > But oh well... if we need it...
> > Better than using u64 directly.
> 
> Might it be worth creating dedicated types or enums for the different offsets
> needed? I do need some way to handle the different offsets and this just seemed
> better than some raw u64, in order to have at least some type checking.

For different purposes I would *prefer* different types, if
possible.
Since your "new" offsets come from a new method (`app_appendix`), it
should IMO be possible.
That way it's clear that you shouldn't use these offsets for eg.
`add_hardlink` and you bind the value to a purpose via a type.





* Re: [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos
  2023-09-27 12:20     ` Christian Ebner
@ 2023-09-28  7:04       ` Wolfgang Bumiller
  2023-09-28  7:50         ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Wolfgang Bumiller @ 2023-09-28  7:04 UTC (permalink / raw)
  To: Christian Ebner; +Cc: pbs-devel, m.carrara

On Wed, Sep 27, 2023 at 02:20:18PM +0200, Christian Ebner wrote:
> 
> > On 27.09.2023 14:07 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> > 
> >  
> > 'incr' :S
> > 
> > On Fri, Sep 22, 2023 at 09:16:08AM +0200, Christian Ebner wrote:
> > > Adds a helper to allow to increase the encoder position by a given
> > > size. This is used to increase the position when adding an appendix
> > > section to the pxar stream, as these bytes are never encoded directly
> > > but rather referenced by already existing chunks.
> > 
> > Exposing this seems like a weird choice to me. Why exactly is this
> > needed? Why don't we instead expose methods to actually write the
> > appendix section instead?
> 
> This is needed in order to increase the byte offset of the encoder itself.
> The appendix section is a list of chunks which are injected in the chunk
> stream on upload, but never really consumed by the encoder and subsequently
> the chunker itself. So there is no direct writing of the appendix section to
> the stream.
> 
> By adding the bytes, consistency with the rest of the pxar archive is assured,
> as these chunks/bytes are present during decoding.

Ah so we inject the *contents* of the old pxar archive by way of sending
the chunks a writing "layer" above. Initially I thought the archive
would contain chunk ids, but this makes more sense. And is unfortunate
for the API :-)

Maybe consider marking the position modification as `unsafe fn`, though?
I mean it is a foot gun to break the resulting archive with, after all
;-)

But this means we don't have a direct way of creating incremental pxars
without a PBS context, doesn't it?
Would it make sense to have a method here which returns a Writer to
the `EncoderOutput` where we could in theory also just "dump in"
contents of another actual pxar file (where the byte counting happens
implicitly), which also has an extra `unsafe fn add_out_of_band_bytes()`
to do the raw byte count modification?

One advantage of having a "starting point" for this type of operation is
that we'd also force a `flush()` before out-of-band data gets written.
Otherwise, if we don't need/want this, we should probably just add a
`flush()` to the encoder we should call before adding any chunks out of
band, given that Max already tried to sneak in a BufRead/Writers into
the pxar crate for optimization purposes, IIRC ;-)
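A rough sketch of the buffered variant with the proposed `unsafe fn add_out_of_band_bytes()` (all names here are hypothetical, taken from the discussion, not from the crate):

```rust
use std::io::{self, BufWriter, Write};

/// Buffered output that tracks a logical write position, including bytes
/// injected out of band (i.e. bytes that never pass through this writer).
struct CountingOutput<W: Write> {
    inner: BufWriter<W>,
    position: u64,
}

impl<W: Write> CountingOutput<W> {
    fn new(inner: W) -> Self {
        Self { inner: BufWriter::new(inner), position: 0 }
    }

    fn write_all(&mut self, buf: &[u8]) -> io::Result<()> {
        self.inner.write_all(buf)?;
        self.position += buf.len() as u64;
        Ok(())
    }

    /// Account for bytes an upper layer injects directly into the stream.
    /// Flushes first, so buffered data cannot end up after the injected
    /// bytes. Unsafe: the caller must really inject exactly `size` bytes,
    /// otherwise the resulting archive offsets are broken.
    unsafe fn add_out_of_band_bytes(&mut self, size: u64) -> io::Result<u64> {
        self.inner.flush()?;
        self.position += size;
        Ok(self.position)
    }
}
```

This keeps the foot gun, but makes it explicit at the call site and guarantees the flush ordering mentioned above.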





* Re: [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos
  2023-09-28  7:04       ` Wolfgang Bumiller
@ 2023-09-28  7:50         ` Christian Ebner
  2023-09-28  8:32           ` Wolfgang Bumiller
  0 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-28  7:50 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel, m.carrara


> On 28.09.2023 09:04 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> 
>  
> On Wed, Sep 27, 2023 at 02:20:18PM +0200, Christian Ebner wrote:
> > 
> > > On 27.09.2023 14:07 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> > > 
> > >  
> > > 'incr' :S
> > > 
> > > On Fri, Sep 22, 2023 at 09:16:08AM +0200, Christian Ebner wrote:
> > > > Adds a helper to allow to increase the encoder position by a given
> > > > size. This is used to increase the position when adding an appendix
> > > > section to the pxar stream, as these bytes are never encoded directly
> > > > but rather referenced by already existing chunks.
> > > 
> > > Exposing this seems like a weird choice to me. Why exactly is this
> > > needed? Why don't we instead expose methods to actually write the
> > > appendix section instead?
> > 
> > This is needed in order to increase the byte offset of the encoder itself.
> > The appendix section is a list of chunks which are injected in the chunk
> > stream on upload, but never really consumed by the encoder and subsequently
> > the chunker itself. So there is no direct writing of the appendix section to
> > the stream.
> > 
> > By adding the bytes, consistency with the rest of the pxar archive is assured,
> > as these chunks/bytes are present during decoding.
> 
> Ah so we inject the *contents* of the old pxar archive by way of sending
> the chunks a writing "layer" above. Initially I thought the archive
> would contain chunk ids, but this makes more sense. And is unfortunate
> for the API :-)

Yes, an initial approach was to store the chunk ids inline, but that is not
necessary and added unneeded storage overhead. As is, the chunks are appended
to a list to be injected after encoding the regular part of the archive;
instead of the actual file payload, a PXAR_APPENDIX_REF entry with the
payload size and the offset relative to the PXAR_APPENDIX entry is stored.

This section then contains the concatenated referenced chunks, allowing file
payloads to be restored by sequentially skipping to the correct offset and
reading the payload from there.
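The "skip to the correct offset" step can be illustrated with a small helper that, given the sizes of the chunks concatenated into the appendix, maps an appendix-relative payload offset to the containing chunk and the padding within it (purely illustrative; the series computes this from the previous backup's dynamic index):

```rust
/// Given the sizes of the chunks concatenated into the appendix and an
/// offset relative to the appendix start, return the index of the chunk
/// containing that offset and the byte offset within that chunk.
fn locate_in_appendix(chunk_sizes: &[u64], offset: u64) -> Option<(usize, u64)> {
    let mut start = 0u64;
    for (idx, &size) in chunk_sizes.iter().enumerate() {
        if offset < start + size {
            // Offset falls inside this chunk; the difference is the
            // start padding within the concatenated chunk data.
            return Some((idx, offset - start));
        }
        start += size;
    }
    None // offset lies past the end of the appendix
}
```

For chunks of sizes 100, 50 and 200 bytes, offset 120 falls 20 bytes into the second chunk.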

> 
> Maybe consider marking the position modification as `unsafe fn`, though?
> I mean it is a foot gun to break the resulting archive with, after all
> ;-)

You are right in that this is to be seen as an unsafe operation. Maybe instead
of marking the function unsafe, the interface could take the list of chunks as
input and shift the position accordingly, thereby consuming the chunks and
storing them for injection afterwards.

That way the ownership of the chunk list would be moved to the encoder rather
than being part of the archiver, as it is now. The chunk list might then be
passed on from the encoder to be injected into the backup upload stream,
although I am not sure if and how to bypass the chunker in that case.

> 
> But this means we don't have a direct way of creating incremental pxars
> without a PBS context, doesn't it?

This is correct. At the moment the only way to create an incremental pxar
archive is to use the PBS context. Both the index file and the catalog are
required, which could in principle also be provided via command line
parameters, but ultimately the actual chunk data is needed as well. That is
currently only available when restoring the archive from a backup.

> Would it make sense to have a method here which returns a Writer to
> the `EncoderOutput` where we could in theory also just "dump in"
> contents of another actual pxar file (where the byte counting happens
> implicitly), which also has an extra `unsafe fn add_out_of_band_bytes()`
> to do the raw byte count modification?

Yes, this might be possible, but for creating the backup I want to avoid that
completely. It would require downloading the chunk data just to inject it for
reuse, which is probably far more expensive and defeats the purpose of reusing
the chunks to begin with.

If you intended this to be an addition to the current code, in order to create
a pxar archive with an appendix locally, without the PBS context, then yes.
This might be possible by passing the data in the form of a `MergedKnownChunk`,
which contains either the raw chunk data or the reused chunk's hash and size,
allowing either the data itself or the digest needed to index it to be passed.

> 
> One advantage of having a "starting point" for this type of operation is
> that we'd also force a `flush()` before out-of-band data gets written.
> Otherwise, if we don't need/want this, we should probably just add a
> `flush()` to the encoder we should call before adding any chunks out of
> band, given that Max already tried to sneak in a BufRead/Writers into
> the pxar crate for optimization purposes, IIRC ;-)

Good point, flushing is definitely required if writes are buffered, so as not
to break the byte stream.





* Re: [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset
  2023-09-28  6:49       ` Wolfgang Bumiller
@ 2023-09-28  7:52         ` Christian Ebner
  0 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-28  7:52 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel

Agreed, I will introduce a separate type for appendix reference offsets as well
as for other offsets, if needed, leaving the LinkOffset as is.

> On 28.09.2023 08:49 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> 
>  
> On Wed, Sep 27, 2023 at 02:26:03PM +0200, Christian Ebner wrote:
> > > On 27.09.2023 14:08 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> > > 
> > >  
> > > On Fri, Sep 22, 2023 at 09:16:02AM +0200, Christian Ebner wrote:
> > > > Allows to generate a new LinkOffset for storing the offset of regular
> > > > files in the backup catalog, based on the provided archiver position.
> > > > 
> > > > Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> > > > ---
> > > >  src/encoder/mod.rs | 6 ++++++
> > > >  1 file changed, 6 insertions(+)
> > > > 
> > > > diff --git a/src/encoder/mod.rs b/src/encoder/mod.rs
> > > > index 0d342ec..710ed98 100644
> > > > --- a/src/encoder/mod.rs
> > > > +++ b/src/encoder/mod.rs
> > > > @@ -31,6 +31,12 @@ pub use sync::Encoder;
> > > >  pub struct LinkOffset(u64);
> > > >  
> > > >  impl LinkOffset {
> > > > +    /// Create a new link from the raw byte offset.
> > > > +    #[inline]
> > > > +    pub fn new(raw: u64) -> Self {
> > > 
> > > Not very happy about that. It was meant to be an opaque type that you
> > > can't just create easily.
> > > But oh well... if we need it...
> > > Better than using u64 directly.
> > 
> > Might it be worth creating dedicated types or enums for the different offsets
> > needed? I do need some way to handle the different offsets and this just seemed
> > better than some raw u64, in order to have at least some type checking.
> 
> For different purposes I would *prefer* different types, if
> possible.
> Since your "new" offsets come from a new method (`app_appendix`), it
> should IMO be possible.
> That way it's clear that you shouldn't use these offsets for eg.
> `add_hardlink` and you bind the value to a purpose via a type.





* Re: [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size
  2023-09-27 11:55     ` Christian Ebner
@ 2023-09-28  8:07       ` Christian Ebner
  2023-09-28  9:00         ` Wolfgang Bumiller
  0 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2023-09-28  8:07 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel

I was giving this some more thought and am not really convinced that sending
this through an encoder instance, which digests the encoded byte stream and
counts the bytes, is the right approach here.

The purpose of this function is to calculate the bytes which I can easily skip
over *without* having to call any expensive encoding/decoding functionality.
I might get around this by simply calling the decoder on the byte stream, then
I do not need this at all (if I'm not missing something). Might that be the
better approach?

Additionally, and maybe even better, I might get rid of this entirely by
letting the PXAR_APPENDIX_REF offset point to the start of the file payload
entry instead of the file entry, as it does now, thereby being able to skip
over it blindly to begin with. Although I am not sure that is the best approach
for handling the metadata, which should ideally not be encoded twice, once
before the PXAR_APPENDIX_REF and once before the PXAR_PAYLOAD.

> On 27.09.2023 13:55 CEST Christian Ebner <c.ebner@proxmox.com> wrote:
> 
>  
> > On 27.09.2023 13:38 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> > 
> >  
> > On Fri, Sep 22, 2023 at 09:16:05AM +0200, Christian Ebner wrote:
> > > Add a helper function to calculate the byte size of pxar Metadata
> > > objects, needed to be able to recalculate offsets when creating archives
> > > with appendix sections.
> > > 
> > > Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> > > ---
> > >  src/lib.rs | 44 ++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 44 insertions(+)
> > > 
> > > diff --git a/src/lib.rs b/src/lib.rs
> > > index 210c4b1..ed7ba40 100644
> > > --- a/src/lib.rs
> > > +++ b/src/lib.rs
> > > @@ -144,6 +144,50 @@ impl Metadata {
> > >      pub fn builder_from_stat(stat: &libc::stat) -> MetadataBuilder {
> > >          MetadataBuilder::new(0).fill_from_stat(stat)
> > >      }
> > > +
> > > +    /// Calculate the number of bytes when serialized in pxar archive
> > > +    pub fn calculate_byte_len(&self) -> usize {
> > 
> > This looks like a maintenance nightmare with extra sprinkles.
> 
> ;)
> 
> > 
> > Can we instead create a specialized `SeqWrite` type that just counts
> > bytes instead of actually writing anything and call the encoder's
> > `encode_metadata()` with it?
> > (Either by factorizing `encode_metadat()` out further or by creating an
> > actual `EncoderImpl` instance with it...)
> 
> Yes, I will have a look on how to improve this and include this as well in the next version of the patch series, thx for the hints.
> 
> > 
> > > +        let mut bytes = mem::size_of::<format::Header>();
> > > +        bytes += mem::size_of_val(&self.stat);
> > > +        for xattr in &self.xattrs {
> > > +            bytes += mem::size_of::<format::Header>();
> > > +            bytes += mem::size_of_val(xattr);
> > > +        }
> > > +        for acl_user in &self.acl.users {
> > > +            bytes += mem::size_of::<format::Header>();
> > > +            bytes += mem::size_of_val(acl_user);
> > > +        }
> > > +        for acl_group in &self.acl.groups {
> > > +            bytes += mem::size_of::<format::Header>();
> > > +            bytes += mem::size_of_val(acl_group);
> > > +        }
> > > +        if let Some(group_obj) = &self.acl.group_obj {
> > > +            bytes += mem::size_of::<format::Header>();
> > > +            bytes += mem::size_of_val(group_obj);
> > > +        }
> > > +        if let Some(default) = &self.acl.default {
> > > +            bytes += mem::size_of::<format::Header>();
> > > +            bytes += mem::size_of_val(default);
> > > +        }
> > > +        for acl_default_user in &self.acl.default_users {
> > > +            bytes += mem::size_of::<format::Header>();
> > > +            bytes += mem::size_of_val(acl_default_user);
> > > +        }
> > > +        for acl_default_group in &self.acl.default_groups {
> > > +            bytes += mem::size_of::<format::Header>();
> > > +            bytes += mem::size_of_val(acl_default_group);
> > > +        }
> > > +        if let Some(fcaps) = &self.fcaps {
> > > +            bytes += mem::size_of::<format::Header>();
> > > +            bytes += mem::size_of_val(fcaps);
> > > +        }
> > > +        if let Some(quota_project_id) = &self.quota_project_id {
> > > +            bytes += mem::size_of::<format::Header>();
> > > +            bytes += mem::size_of_val(quota_project_id);
> > > +        }
> > > +
> > > +        bytes
> > > +    }
> > >  }
> > >  
> > >  impl From<MetadataBuilder> for Metadata {
> > > -- 
> > > 2.39.2




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos
  2023-09-28  7:50         ` Christian Ebner
@ 2023-09-28  8:32           ` Wolfgang Bumiller
  0 siblings, 0 replies; 40+ messages in thread
From: Wolfgang Bumiller @ 2023-09-28  8:32 UTC (permalink / raw)
  To: Christian Ebner; +Cc: pbs-devel

On Thu, Sep 28, 2023 at 09:50:03AM +0200, Christian Ebner wrote:
> 
> > On 28.09.2023 09:04 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> > 
> >  
> > On Wed, Sep 27, 2023 at 02:20:18PM +0200, Christian Ebner wrote:
> > > 
> > > > On 27.09.2023 14:07 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> > > > 
> > > >  
> > > > 'incr' :S
> > > > 
> > > > On Fri, Sep 22, 2023 at 09:16:08AM +0200, Christian Ebner wrote:
> > > > > Adds a helper to allow increasing the encoder position by a given
> > > > > size. This is used to increase the position when adding an appendix
> > > > > section to the pxar stream, as these bytes are never encoded directly
> > > > > but rather referenced by already existing chunks.
> > > > 
> > > > Exposing this seems like a weird choice to me. Why exactly is this
> > > > needed? Why don't we instead expose methods to actually write the
> > > > appendix section instead?
> > > 
> > > This is needed in order to increase the byte offset of the encoder itself.
> > > The appendix section is a list of chunks which are injected in the chunk
> > > stream on upload, but never really consumed by the encoder and subsequently
> > > the chunker itself. So there is no direct writing of the appendix section to
> > > the stream.
> > > 
> > > By adding the bytes, consistency with the rest of the pxar archive is assured,
> > > as these chunks/bytes are present during decoding.
> > 
> > Ah so we inject the *contents* of the old pxar archive by way of sending
> > the chunks a writing "layer" above. Initially I thought the archive
> > would contain chunk ids, but this makes more sense. And is unfortunate
> > for the API :-)
> 
> Yes, an initial approach was to store the chunk ids inline, but that is not
> necessary and added unneeded storage overhead. As is, the chunks are appended
> to a list to be injected after encoding the regular part of the archive,
> while instead of the actual file payload a PXAR_APPENDIX_REF entry with
> payload size and offset relative to the PXAR_APPENDIX entry is stored.
> 
> This section then contains the concatenated referenced chunks, allowing file
> payloads to be restored by sequentially skipping to the correct offset and
> reading the payload from there.
> 
> > 
> > Maybe consider marking the position modification as `unsafe fn`, though?
> > I mean it is a foot gun to break the resulting archive with, after all
> > ;-)
> 
> You are right in that this is to be seen as an unsafe operation. Maybe instead
> of marking the function unsafe, the interface could take the list of chunks as
> input and shift the position accordingly?
> It would thereby consume the chunks and store them for injection afterwards.
> 
> That way the ownership of the chunk list would be moved to the encoder rather than
> being part of the archiver, as is now. The chunk list might then be passed from the
> encoder to be injected to the backup upload stream, although I am not sure if and
> how to bypass the chunker in that case.
> 
> > 
> > But this means we don't have a direct way of creating incremental pxars
> > without a PBS context, doesn't it?
> 
> This is correct. At the moment the only way to create an incremental pxar
> archive is to use the PBS context. Both index file and catalog are required,
> which could in principle also be provided by a command line parameter, but
> finally also the actual chunk data is needed. That is currently only provided
> during restore of the archive from backup.
> 
> > Would it make sense to have a method here which returns a Writer to
> > the `EncoderOutput` where we could in theory also just "dump in"
> > contents of another actual pxar file (where the byte counting happens
> > implicitly), which also has an extra `unsafe fn add_out_of_band_bytes()`
> > to do the raw byte count modification?
> 
> Yes, this might be possible, but for creating the backup I want to avoid that
> completely. It would require downloading the chunk data just to inject it
> for reuse, which is probably way more expensive and defeats the purpose of
> reusing the chunks to begin with.

No I meant the `unsafe fn add_out_of_band_bytes()` was supposed to bump
just the counter exactly as we do now, and its `Write` interface
specifically only for *non* PBS backup creation.
But we don't need to flesh out the non-PBS-related API right now at all,
my main concern was to make the pxar API more difficult to use wrongly,
specifically the flushing ;-)
But sure, the PBS part is so separated from the pxar code that there's
never anything preventing you from inserting bogus data into the stream
anyway I suppose... but that's on the PBS code side and doesn't really
need to be taken into account from the pxar crate's API point of view.

> 
> If you intended this to be an addition to the current code, in order to create
> a pxar archive with appendix locally, without the PBS context, then yes.
> This might be possible by passing the data in form of a `MergedKnownChunk`,
> which contains either the raw chunk data or the reused chunks hash and size,
> allowing to pass either the data or the digest needed to index it.
> 
> > 
> > One advantage of having a "starting point" for this type of operation is
> > that we'd also force a `flush()` before out-of-band data gets written.
> > Otherwise, if we don't need/want this, we should probably just add a
> > `flush()` to the encoder we should call before adding any chunks out of
> > band, given that Max already tried to sneak in a BufRead/Writers into
> > the pxar crate for optimization purposes, IIRC ;-)
> 
> Good point, flushing is definitely required if writes are buffered, so as
> not to break the byte stream.





* Re: [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size
  2023-09-28  8:07       ` Christian Ebner
@ 2023-09-28  9:00         ` Wolfgang Bumiller
  2023-09-28  9:27           ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Wolfgang Bumiller @ 2023-09-28  9:00 UTC (permalink / raw)
  To: Christian Ebner; +Cc: pbs-devel

On Thu, Sep 28, 2023 at 10:07:40AM +0200, Christian Ebner wrote:
> I was giving this some more thought and am not really convinced that sending
> this through an encoder instance, which digests the encoded byte stream and counts
> the bytes, is the right approach here.

How about moving the logic `encode_metadata` from `Encoder` into
`Metadata` with an `Option<&mut SeqWrite>` parameter, not a full
Encoder, and just having the encoding vs counting logic live right next
to each other depending on whether the writer is Some?
That should be as cheap as it gets?

> 
> The purpose of this function is to calculate the bytes, which I can easily skip over
> *without* having to call any expensive encoding/decoding functionality.
> I might get around this by simply calling the decoder on the byte stream, then I do
> not need this at all (if I'm not missing something). Might that be the better approach?

I'm not sure decoding is that much cheaper than dummy-encoding...
depending on the data I'd say it could even be more expensive in some
cases? (rare cases though, only with lots of ACLs/xattrs around I
suppose...)

> 
> Additionally, and maybe even better, I might get rid of this also by letting the
> PXAR_APPENDIX_REF offset point to the start of the file payload entry, instead of the
> file entry as is now, thereby being able to blindly skip over this already to begin with.
> Although I am not sure if that is the best approach for handling the metadata, which should
> ideally not be encoded twice, once before the PXAR_APPENDIX_REF and the PXAR_PAYLOAD.

Not sure why skipping data would encode it twice? Or did you mean to
imply that previously we pointed to metadata, but when instead pointing
to the payload we need to instead encode it in the new archive which we
previously did not need to do?





* Re: [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size
  2023-09-28  9:00         ` Wolfgang Bumiller
@ 2023-09-28  9:27           ` Christian Ebner
  0 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2023-09-28  9:27 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel


> On 28.09.2023 11:00 CEST Wolfgang Bumiller <w.bumiller@proxmox.com> wrote:
> 
>  
> On Thu, Sep 28, 2023 at 10:07:40AM +0200, Christian Ebner wrote:
> > I was giving this some more thought and am not really convinced that sending
> > this through an encoder instance, which digests the encoded byte stream and counts
> > the bytes, is the right approach here.
> 
> How about moving the logic `encode_metadata` from `Encoder` into
> `Metadata` with an `Option<&mut SeqWrite>` parameter, not a full
> Encoder, and just having the encoding vs counting logic live right next
> to each other depending on whether the writer is Some?
> That should be as cheap as it gets?
>

Hmm, the Metadata should, however, not be concerned with how it might be encoded in different
contexts. That is something only the encoder should be concerned with.

> > 
> > The purpose of this function is to calculate the bytes, which I can easily skip over
> > *without* having to call any expensive encoding/decoding functionality.
> > I might get around this by simply calling the decoder on the byte stream, then I do
> > not need this at all (if I'm not missing something). Might that be the better approach?
> 
> I'm not sure decoding is that much cheaper than dummy-encoding...
> depending on the data I'd say it could even be more expensive in some
> cases? (rare cases though, only with lots of ACLs/xattrs around I
> suppose...)

Probably so, although the data structures are somewhat simple in this case.

> 
> > 
> > Additionally, and maybe even better, I might get rid of this also by letting the
> > PXAR_APPENDIX_REF offset point to the start of the file payload entry, instead of the
> > file entry as is now, thereby being able to blindly skip over this already to begin with.
> > Although I am not sure if that is the best approach for handling the metadata, which should
> > ideally not be encoded twice, once before the PXAR_APPENDIX_REF and the PXAR_PAYLOAD.
> 
> Not sure why skipping data would encode it twice? Or did you mean to
> imply that previously we pointed to metadata, but when instead pointing
> to the payload we need to instead encode it in the new archive which we
> previously did not need to do?

Yes, I was referring to the latter: having to encode the metadata also in the regular part of the archive
would give the same data twice, once newly encoded and once more in the appended chunks. Storing this
twice bloats the archive size unnecessarily.

Which brings me to another point I did not take into consideration so far: how to handle files whose metadata
changed but was not checked against. Since the catalog only contains size and mtime, only these are comparable.
But I need the current xattrs, ACLs etc... I will have to look up whether changing those actually changes the mtime as well.





end of thread, other threads:[~2023-09-28  9:28 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-09-22  7:16 [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC pxar 1/20] fix #3174: encoder: impl fn new for LinkOffset Christian Ebner
2023-09-27 12:08   ` Wolfgang Bumiller
2023-09-27 12:26     ` Christian Ebner
2023-09-28  6:49       ` Wolfgang Bumiller
2023-09-28  7:52         ` Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC pxar 2/20] fix #3174: decoder: factor out skip_bytes from skip_entry Christian Ebner
2023-09-27 11:32   ` Wolfgang Bumiller
2023-09-27 11:53     ` Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC pxar 3/20] fix #3174: decoder: impl skip_bytes for sync dec Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC pxar 4/20] fix #3174: metadata: impl fn to calc byte size Christian Ebner
2023-09-27 11:38   ` Wolfgang Bumiller
2023-09-27 11:55     ` Christian Ebner
2023-09-28  8:07       ` Christian Ebner
2023-09-28  9:00         ` Wolfgang Bumiller
2023-09-28  9:27           ` Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC pxar 5/20] fix #3174: enc/dec: impl PXAR_APPENDIX_REF entrytype Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC pxar 6/20] fix #3174: enc/dec: impl PXAR_APPENDIX entrytype Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC pxar 7/20] fix #3174: encoder: add helper to incr encoder pos Christian Ebner
2023-09-27 12:07   ` Wolfgang Bumiller
2023-09-27 12:20     ` Christian Ebner
2023-09-28  7:04       ` Wolfgang Bumiller
2023-09-28  7:50         ` Christian Ebner
2023-09-28  8:32           ` Wolfgang Bumiller
2023-09-22  7:16 ` [pbs-devel] [RFC pxar 8/20] fix #3174: enc/dec: impl PXAR_APPENDIX_TAIL entrytype Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 09/20] fix #3174: index: add fn index list from start/end-offsets Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 10/20] fix #3174: index: add fn digest for DynamicEntry Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 11/20] fix #3174: api: double catalog upload size Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 12/20] fix #3174: catalog: incl pxar archives file offset Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 13/20] fix #3174: archiver/extractor: impl appendix ref Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 14/20] fix #3174: extractor: impl seq restore from appendix Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 15/20] fix #3174: archiver: store ref to previous backup Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 16/20] fix #3174: upload stream: impl reused chunk injector Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 17/20] fix #3174: chunker: add forced boundaries Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 18/20] fix #3174: backup writer: inject queued chunk in upload steam Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 19/20] fix #3174: archiver: reuse files with unchanged metadata Christian Ebner
2023-09-26  7:01   ` Christian Ebner
2023-09-22  7:16 ` [pbs-devel] [RFC proxmox-backup 20/20] fix #3174: client: Add incremental flag to backup creation Christian Ebner
2023-09-26  7:11   ` Christian Ebner
2023-09-26  7:15 ` [pbs-devel] [RFC pxar proxmox-backup 00/20] fix #3174: improve file-level backup Christian Ebner
