public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH 00/22] Single file restore for VM images
@ 2021-02-16 17:06 Stefan Reiter
  2021-02-16 17:06 ` [pbs-devel] [PATCH pxar 01/22] decoder/aio: add contents() and content_size() calls Stefan Reiter
                   ` (22 more replies)
  0 siblings, 23 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

Implements CLI-based single file and directory restore for both pxar.didx
archives (containers, hosts) and img.fidx (VMs, raw block devices). The design
for VM restore uses a small virtual machine that the host communicates with via
virtio-vsock.
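
For illustration, a minimal sketch of the host side of such a vsock
connection, using raw libc calls (the CID and port are placeholders, and
this is *not* the actual VsockClient code from this series):

    use std::mem;
    use std::os::unix::io::FromRawFd;

    /// Connect to a guest VM over AF_VSOCK (hypothetical helper).
    fn vsock_connect(cid: u32, port: u32) -> std::io::Result<std::fs::File> {
        // SAFETY: plain libc socket calls; the fd moves into the File on success
        unsafe {
            let fd = libc::socket(libc::AF_VSOCK, libc::SOCK_STREAM, 0);
            if fd < 0 {
                return Err(std::io::Error::last_os_error());
            }
            let mut addr: libc::sockaddr_vm = mem::zeroed();
            addr.svm_family = libc::AF_VSOCK as libc::sa_family_t;
            addr.svm_cid = cid;   // guest CID, e.g. as given to QEMU via guest-cid=
            addr.svm_port = port; // port the daemon inside the VM listens on
            if libc::connect(
                fd,
                &addr as *const _ as *const libc::sockaddr,
                mem::size_of::<libc::sockaddr_vm>() as libc::socklen_t,
            ) < 0
            {
                let err = std::io::Error::last_os_error();
                libc::close(fd);
                return Err(err);
            }
            Ok(std::fs::File::from_raw_fd(fd))
        }
    }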

This is encapsulated in a new package called "proxmox-file-restore", providing a
binary of the same name. A second package is provided in a new git repository
called "proxmox-restore-vm-data", providing a minimal kernel image and a base
initramfs (without the daemon, which is included in proxmox-file-restore).

Requires my previously sent pxar asyncify series:
https://lists.proxmox.com/pipermail/pbs-devel/2020-December/001788.html

The first couple patches in the proxmox-backup repo are adapted versions of the
ones Dominik sent to the list a while ago:
https://lists.proxmox.com/pipermail/pbs-devel/2020-December/001788.html

A dependency bump on pxar is required in proxmox-backup, though this is best
done together with the changes from the aforementioned separate series.

Tested with ext4 and NTFS VMs, but theoretically includes support for many more
filesystems (see 'config-base' in the new proxmox-restore-vm-data repository).

Known issues/Missing features:
* GUI/PVE support
* PBS_PASSWORD/PBS_FINGERPRINT currently have to be set manually for VM restore
* ZFS/LVM/md/... support
* shell auto-complete for "proxmox-file-restore" doesn't work (and I don't know
  why...)
* some patches might include some sneaky rustfmt/clippy fixes that would fit
  better into a previous patch; sorry for that, rebasing so many patches is
  annoying ;)


pxar: Stefan Reiter (2):
  decoder/aio: add contents() and content_size() calls
  decoder: add peek()

 src/accessor/mod.rs |  3 +++
 src/decoder/aio.rs  | 53 +++++++++++++++++++++++++++++++++++++++++++--
 src/decoder/mod.rs  | 19 ++++++++++++++--
 src/decoder/sync.rs | 10 ++++++++-
 4 files changed, 80 insertions(+), 5 deletions(-)

proxmox-restore-vm-data: Stefan Reiter (1):
  initial commit

proxmox-backup: Dominik Csapak (5):
  api2/admin/datastore: refactor list_dir_content in catalog_reader
  api2/admin/datastore: accept "/" as path for root
  api2/admin/datastore: refactor create_zip into pxar/extract
  pxar/extract: add extract_sub_dir
  file-restore: add binary and basic commands

Stefan Reiter (14):
  pxar/extract: add sequential variants to create_zip, extract_sub_dir
  client: extract common functions to proxmox_client_tools module
  proxmox_client_tools: extract 'key' from client module
  file-restore: allow specifying output-format
  rest: implement tower service for UnixStream
  client: add VsockClient to connect to virtio-vsock VMs
  file-restore-daemon: add binary with virtio-vsock API server
  file-restore-daemon: add watchdog module
  file-restore-daemon: add disk module
  file-restore: add basic VM/block device support
  file-restore: improve logging of VM with logrotate
  debian/client: add postinst hook to rebuild file-restore initramfs
  file-restore(-daemon): implement list API
  file-restore: add 'extract' command for VM file restore

 Cargo.toml                                    |   5 +-
 Makefile                                      |  18 +-
 debian/control                                |  13 +
 debian/control.in                             |  10 +
 debian/proxmox-backup-client.triggers         |   1 +
 debian/proxmox-file-restore.bash-completion   |   1 +
 debian/proxmox-file-restore.bc                |   8 +
 debian/proxmox-file-restore.install           |   4 +
 debian/proxmox-file-restore.postinst          |  63 ++
 debian/proxmox-file-restore.triggers          |   1 +
 debian/rules                                  |   7 +-
 docs/Makefile                                 |  10 +-
 docs/command-line-tools.rst                   |   5 +
 docs/proxmox-file-restore/description.rst     |   4 +
 docs/proxmox-file-restore/man1.rst            |  28 +
 src/api2.rs                                   |   2 +-
 src/api2/admin/datastore.rs                   | 152 +---
 src/api2/helpers.rs                           |  31 +
 src/api2/types/file_restore.rs                |  15 +
 src/api2/types/mod.rs                         |  46 +
 src/backup/catalog.rs                         |  26 +
 src/bin/proxmox-backup-client.rs              | 799 +-----------------
 src/bin/proxmox-file-restore.rs               | 484 +++++++++++
 src/bin/proxmox-restore-daemon.rs             | 124 +++
 src/bin/proxmox_backup_client/catalog.rs      |   4 +-
 src/bin/proxmox_backup_client/mod.rs          |  30 -
 src/bin/proxmox_backup_client/snapshot.rs     |   3 +-
 .../key.rs                                    | 440 +++++++++-
 src/bin/proxmox_client_tools/mod.rs           | 392 +++++++++
 src/bin/proxmox_file_restore/block_driver.rs  | 221 +++++
 .../proxmox_file_restore/block_driver_qemu.rs | 478 +++++++++++
 src/bin/proxmox_file_restore/mod.rs           |   5 +
 src/bin/proxmox_restore_daemon/api.rs         | 316 +++++++
 src/bin/proxmox_restore_daemon/disk.rs        | 341 ++++++++
 src/bin/proxmox_restore_daemon/mod.rs         |   9 +
 src/bin/proxmox_restore_daemon/watchdog.rs    |  63 ++
 src/buildcfg.rs                               |  20 +
 src/client.rs                                 |   3 +
 src/client/vsock_client.rs                    | 259 ++++++
 src/pxar/extract.rs                           | 436 +++++++++-
 src/pxar/mod.rs                               |   5 +-
 src/server/rest.rs                            |  20 +
 www/window/FileBrowser.js                     |   1 +
 zsh-completions/_proxmox-file-restore         |  13 +
 44 files changed, 3940 insertions(+), 976 deletions(-)
 create mode 100644 debian/proxmox-backup-client.triggers
 create mode 100644 debian/proxmox-file-restore.bash-completion
 create mode 100644 debian/proxmox-file-restore.bc
 create mode 100644 debian/proxmox-file-restore.install
 create mode 100755 debian/proxmox-file-restore.postinst
 create mode 100644 debian/proxmox-file-restore.triggers
 create mode 100644 docs/proxmox-file-restore/description.rst
 create mode 100644 docs/proxmox-file-restore/man1.rst
 create mode 100644 src/api2/types/file_restore.rs
 create mode 100644 src/bin/proxmox-file-restore.rs
 create mode 100644 src/bin/proxmox-restore-daemon.rs
 rename src/bin/{proxmox_backup_client => proxmox_client_tools}/key.rs (52%)
 create mode 100644 src/bin/proxmox_client_tools/mod.rs
 create mode 100644 src/bin/proxmox_file_restore/block_driver.rs
 create mode 100644 src/bin/proxmox_file_restore/block_driver_qemu.rs
 create mode 100644 src/bin/proxmox_file_restore/mod.rs
 create mode 100644 src/bin/proxmox_restore_daemon/api.rs
 create mode 100644 src/bin/proxmox_restore_daemon/disk.rs
 create mode 100644 src/bin/proxmox_restore_daemon/mod.rs
 create mode 100644 src/bin/proxmox_restore_daemon/watchdog.rs
 create mode 100644 src/client/vsock_client.rs
 create mode 100644 zsh-completions/_proxmox-file-restore

-- 
2.20.1





* [pbs-devel] [PATCH pxar 01/22] decoder/aio: add contents() and content_size() calls
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-17  7:56   ` Wolfgang Bumiller
  2021-02-16 17:06 ` [pbs-devel] [PATCH pxar 02/22] decoder: add peek() Stefan Reiter
                   ` (21 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

Returns a tokio AsyncRead implementation for its "Contents", in keeping with
the aio theme.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/decoder/aio.rs | 43 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 42 insertions(+), 1 deletion(-)
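
As a usage sketch (not part of this patch), iterating an archive with the aio
decoder and draining a file's payload through the new contents() reader could
look like this, assuming the "tokio-io" feature and the from_tokio()
constructor:

    use tokio::io::AsyncReadExt;

    async fn dump_payload_sizes(file: tokio::fs::File) -> std::io::Result<()> {
        let mut decoder = pxar::decoder::aio::Decoder::from_tokio(file).await?;
        while let Some(entry) = decoder.next().await.transpose()? {
            let expected = decoder.content_size(); // None for entries without contents
            if let Some(mut contents) = decoder.contents() {
                let mut data = Vec::new();
                contents.read_to_end(&mut data).await?;
                println!("{:?}: {} bytes (expected {:?})", entry.path(), data.len(), expected);
            }
        }
        Ok(())
    }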

diff --git a/src/decoder/aio.rs b/src/decoder/aio.rs
index 82030b0..5cc6694 100644
--- a/src/decoder/aio.rs
+++ b/src/decoder/aio.rs
@@ -56,6 +56,18 @@ impl<T: SeqRead> Decoder<T> {
         self.inner.next_do().await.transpose()
     }
 
+    /// Get a reader for the contents of the current entry, if the entry has contents.
+    /// Only available for feature "tokio-io", since it returns an AsyncRead reader.
+    #[cfg(feature = "tokio-io")]
+    pub fn contents(&mut self) -> Option<Contents<T>> {
+        self.inner.content_reader().map(|inner| Contents { inner })
+    }
+
+    /// Get the size of the current contents, if the entry has contents.
+    pub fn content_size(&self) -> Option<u64> {
+        self.inner.content_size()
+    }
+
     /// Include goodbye tables in iteration.
     pub fn enable_goodbye_entries(&mut self, on: bool) {
         self.inner.with_goodbye_tables = on;
@@ -93,7 +105,36 @@ mod tok {
             }
         }
     }
+
+    pub struct Contents<'a, T: crate::decoder::SeqRead> {
+        pub(crate) inner: crate::decoder::Contents<'a, T>,
+    }
+
+    impl<'a, T: crate::decoder::SeqRead> tokio::io::AsyncRead for Contents<'a, T> {
+        fn poll_read(
+            self: Pin<&mut Self>,
+            cx: &mut Context<'_>,
+            buf: &mut tokio::io::ReadBuf<'_>,
+        ) -> Poll<io::Result<()>> {
+            unsafe {
+                // Safety: poll_seq_read will only write to the buffer, so we don't need to
+                // initialize it first, we can treat it as a &[u8] immediately as long as we uphold
+                // the ReadBuf invariants in the conditional below
+                let write_buf =
+                    &mut *(buf.unfilled_mut() as *mut [std::mem::MaybeUninit<u8>] as *mut [u8]);
+                let result = self
+                    .map_unchecked_mut(|this| &mut this.inner as &mut dyn crate::decoder::SeqRead)
+                    .poll_seq_read(cx, write_buf);
+                if let Poll::Ready(Ok(n)) = result {
+                    // if we've written data, advance both initialized and filled bytes cursor
+                    buf.assume_init(buf.filled().len() + n);
+                    buf.advance(n);
+                }
+                result.map(|_| Ok(()))
+            }
+        }
+    }
 }
 
 #[cfg(feature = "tokio-io")]
-use tok::TokioReader;
+use tok::{Contents, TokioReader};
-- 
2.20.1






* [pbs-devel] [PATCH pxar 02/22] decoder: add peek()
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
  2021-02-16 17:06 ` [pbs-devel] [PATCH pxar 01/22] decoder/aio: add contents() and content_size() calls Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-17  8:20   ` Wolfgang Bumiller
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-restore-vm-data 03/22] initial commit Stefan Reiter
                   ` (20 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

Allows peeking at the current element without advancing the decoder state
(except for the contents() and content_size() functions, which do advance it).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/accessor/mod.rs |  3 +++
 src/decoder/aio.rs  | 10 +++++++++-
 src/decoder/mod.rs  | 19 +++++++++++++++++--
 src/decoder/sync.rs | 10 +++++++++-
 4 files changed, 38 insertions(+), 4 deletions(-)
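
A short sketch of the intended contract (my reading of the doc comments below,
not code from this patch): peek() keeps returning the same entry until next()
consumes it, which enables simple look-ahead logic such as:

    use pxar::decoder::{aio::Decoder, SeqRead};
    use pxar::EntryKind;

    /// Consume leading socket entries, leaving the first other entry cached
    /// for the caller's next() call. Illustrative only.
    async fn skip_sockets<T: SeqRead>(decoder: &mut Decoder<T>) -> std::io::Result<()> {
        while let Some(entry) = decoder.peek().await.transpose()? {
            if !matches!(entry.kind(), EntryKind::Socket) {
                break; // the peeked entry stays cached for the caller
            }
            decoder.next().await.transpose()?; // consume the peeked socket entry
        }
        Ok(())
    }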

diff --git a/src/accessor/mod.rs b/src/accessor/mod.rs
index d02dc13..aa1b3f6 100644
--- a/src/accessor/mod.rs
+++ b/src/accessor/mod.rs
@@ -293,6 +293,7 @@ impl<T: Clone + ReadAt> AccessorImpl<T> {
         let entry = decoder
             .next()
             .await
+            .transpose()
             .ok_or_else(|| io_format_err!("unexpected EOF while decoding file entry"))??;
         Ok(FileEntryImpl {
             input: self.input.clone(),
@@ -334,6 +335,7 @@ impl<T: Clone + ReadAt> AccessorImpl<T> {
         let entry = decoder
             .next()
             .await
+            .transpose()
             .ok_or_else(|| io_format_err!("unexpected EOF while following a hardlink"))??;
 
         match entry.kind() {
@@ -516,6 +518,7 @@ impl<T: Clone + ReadAt> DirectoryImpl<T> {
         let entry = decoder
             .next()
             .await
+            .transpose()
             .ok_or_else(|| io_format_err!("unexpected EOF while decoding directory entry"))??;
         Ok((entry, decoder))
     }
diff --git a/src/decoder/aio.rs b/src/decoder/aio.rs
index 5cc6694..c553d45 100644
--- a/src/decoder/aio.rs
+++ b/src/decoder/aio.rs
@@ -53,7 +53,15 @@ impl<T: SeqRead> Decoder<T> {
     #[allow(clippy::should_implement_trait)]
     /// If this is a directory entry, get the next item inside the directory.
     pub async fn next(&mut self) -> Option<io::Result<Entry>> {
-        self.inner.next_do().await.transpose()
+        self.inner.next().await.transpose()
+    }
+
+    /// If this is a directory entry, get the next item inside the directory.
+    /// Does not advance the cursor, so multiple calls to peek() will return the same entry,
+    /// and the next call to next() will return that same entry once more before moving on.
+    /// NOTE: This *will* advance the state for contents() and content_size()!
+    pub async fn peek(&mut self) -> Option<io::Result<Entry>> {
+        self.inner.peek().await.transpose()
     }
 
     /// Get a reader for the contents of the current entry, if the entry has contents.
diff --git a/src/decoder/mod.rs b/src/decoder/mod.rs
index 2a5e79a..041226d 100644
--- a/src/decoder/mod.rs
+++ b/src/decoder/mod.rs
@@ -155,6 +155,7 @@ pub(crate) struct DecoderImpl<T> {
     path_lengths: Vec<usize>,
     state: State,
     with_goodbye_tables: bool,
+    peeked: Option<io::Result<Option<Entry>>>,
 
     /// The random access code uses decoders for sub-ranges which may not end in a `PAYLOAD` for
     /// entries like FIFOs or sockets, so there we explicitly allow an item to terminate with EOF.
@@ -218,6 +219,7 @@ impl<I: SeqRead> DecoderImpl<I> {
             path_lengths: Vec::new(),
             state: State::Begin,
             with_goodbye_tables: false,
+            peeked: None,
             eof_after_entry,
         };
 
@@ -227,8 +229,21 @@ impl<I: SeqRead> DecoderImpl<I> {
     }
 
     /// Get the next file entry, recursing into directories.
-    pub async fn next(&mut self) -> Option<io::Result<Entry>> {
-        self.next_do().await.transpose()
+    pub async fn next(&mut self) -> io::Result<Option<Entry>> {
+        if let Some(ent) = self.peeked.take() {
+            return ent;
+        }
+        self.next_do().await
+    }
+
+    pub async fn peek(&mut self) -> io::Result<Option<Entry>> {
+        self.peeked = Some(self.next().await);
+        match &self.peeked {
+            Some(Ok(ent)) => Ok(ent.clone()),
+            // io::Error does not implement Clone...
+            Some(Err(err)) => Err(io_format_err!("{}", err)),
+            None => unreachable!()
+        }
     }
 
     async fn next_do(&mut self) -> io::Result<Option<Entry>> {
diff --git a/src/decoder/sync.rs b/src/decoder/sync.rs
index 85b4865..c6a1bc3 100644
--- a/src/decoder/sync.rs
+++ b/src/decoder/sync.rs
@@ -63,7 +63,15 @@ impl<T: SeqRead> Decoder<T> {
     #[allow(clippy::should_implement_trait)]
     /// If this is a directory entry, get the next item inside the directory.
     pub fn next(&mut self) -> Option<io::Result<Entry>> {
-        poll_result_once(self.inner.next_do()).transpose()
+        poll_result_once(self.inner.next()).transpose()
+    }
+
+    /// If this is a directory entry, get the next item inside the directory.
+    /// Does not advance the cursor, so multiple calls to peek() will return the same entry,
+    /// and the next call to next() will return that same entry once more before moving on.
+    /// NOTE: This *will* advance the state for contents() and content_size()!
+    pub fn peek(&mut self) -> Option<io::Result<Entry>> {
+        poll_result_once(self.inner.peek()).transpose()
     }
 
     /// Get a reader for the contents of the current entry, if the entry has contents.
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-restore-vm-data 03/22] initial commit
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
  2021-02-16 17:06 ` [pbs-devel] [PATCH pxar 01/22] decoder/aio: add contents() and content_size() calls Stefan Reiter
  2021-02-16 17:06 ` [pbs-devel] [PATCH pxar 02/22] decoder: add peek() Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-03-15 18:35   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 04/22] api2/admin/datastore: refactor list_dir_content in catalog_reader Stefan Reiter
                   ` (19 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

proxmox-restore-vm-data provides the means to build a Debian package
containing a minimal Linux kernel and a corresponding initramfs image for use
in a file-restore VM.

Launched with QEMU/KVM, it boots to userspace in 1.6 seconds (on an AMD
2700X) and, compared to mounting backup archives directly on the host, has a
minimal attack surface: no network stack other than virtio-vsock, no auxiliary
device support (USB, etc.), and a userspace written in Rust.

Since our Rust binaries are currently not fully statically linked, we need to
include some shared libraries in the initramfs as well. This is done in
'build_initramfs.sh'.

A minimal /init is included as a Rust binary (init-shim-rs), doing only
the bare-minimum userspace setup before handing over control to the
file-restore daemon (see 'proxmox-backup' repository).

The Debian package ships an 'activate-noawait pbs-file-restore-initramfs'
trigger to rebuild the cached initramfs whenever the base image shipped here
is updated. The rebuild itself is taken care of by proxmox-file-restore.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

Brand new git repo! I called it proxmox-restore-vm-data for lack of any
smarter ideas; open to better names :)

I also decided, pretty last-minute, to include the 5.10 kernel and ZFS 2.0.3
from the current pve-kernel repository; it seems to work fine though (ZFS
isn't used atm anyway).


 .gitignore                                    |   9 ++
 .gitmodules                                   |   6 +
 Makefile                                      | 103 +++++++++++++
 build_initramfs.sh                            |  42 +++++
 config-base                                   | 144 ++++++++++++++++++
 debian/changelog                              |   6 +
 debian/compat                                 |   1 +
 debian/control                                |  34 +++++
 debian/copyright                              |  22 +++
 debian/install                                |   2 +
 debian/rules                                  |  13 ++
 debian/triggers                               |   1 +
 init-shim-rs/Cargo.lock                       |  51 +++++++
 init-shim-rs/Cargo.toml                       |   9 ++
 init-shim-rs/src/main.rs                      | 122 +++++++++++++++
 ...-OVERRIDE-do-not-build-xr-usb-serial.patch |  30 ++++
 ...2-FIXUP-syntax-error-in-Ubuntu-Sauce.patch |  26 ++++
 submodules/ubuntu-hirsute                     |   1 +
 submodules/zfsonlinux                         |   1 +
 19 files changed, 623 insertions(+)
 create mode 100644 .gitignore
 create mode 100644 .gitmodules
 create mode 100644 Makefile
 create mode 100755 build_initramfs.sh
 create mode 100644 config-base
 create mode 100644 debian/changelog
 create mode 100644 debian/compat
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100644 debian/install
 create mode 100755 debian/rules
 create mode 100644 debian/triggers
 create mode 100644 init-shim-rs/Cargo.lock
 create mode 100644 init-shim-rs/Cargo.toml
 create mode 100644 init-shim-rs/src/main.rs
 create mode 100644 patches/kernel/0001-OVERRIDE-do-not-build-xr-usb-serial.patch
 create mode 100644 patches/kernel/0002-FIXUP-syntax-error-in-Ubuntu-Sauce.patch
 create mode 160000 submodules/ubuntu-hirsute
 create mode 160000 submodules/zfsonlinux

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..d331656
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,9 @@
+build/
+init-shim-rs/target/
+
+*.deb
+*.dsc
+*.buildinfo
+*.changes
+*.prepared
+*.tar.gz
diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 0000000..fdd2bb0
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,6 @@
+[submodule "submodules/zfsonlinux"]
+	path = submodules/zfsonlinux
+	url = git://git.proxmox.com/git/mirror_zfs.git
+[submodule "submodules/ubuntu-hirsute"]
+	path = submodules/ubuntu-hirsute
+	url = git://git.proxmox.com/git/mirror_ubuntu-hirsute-kernel.git
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..2276f4b
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,103 @@
+include /usr/share/dpkg/pkg-info.mk
+include /usr/share/dpkg/architecture.mk
+
+PACKAGE=proxmox-restore-vm-data
+
+BUILDDIR=build
+INITRAMFS_BUILDDIR=build/initramfs
+
+ZFSONLINUX_SUBMODULE=submodules/zfsonlinux
+KERNEL_SUBMODULE=submodules/ubuntu-hirsute
+SHIM_DIR=init-shim-rs
+
+KERNEL_IMG=${BUILDDIR}/${KERNEL_SUBMODULE}/arch/x86/boot/bzImage
+INITRAMFS_IMG=${INITRAMFS_BUILDDIR}/initramfs.img
+
+CONFIG=config-base
+
+RUST_SRC=$(wildcard ${SHIM_DIR}/**/*.rs) ${SHIM_DIR}/Cargo.toml
+
+DEB=${PACKAGE}_${DEB_VERSION_UPSTREAM_REVISION}_${DEB_BUILD_ARCH}.deb
+DSC=${PACKAGE}_${DEB_VERSION_UPSTREAM_REVISION}.dsc
+
+all: deb
+
+submodules.prepared:
+	git submodule update --init ${KERNEL_SUBMODULE}
+	git submodule update --init --recursive ${ZFSONLINUX_SUBMODULE}
+	touch $@
+
+${BUILDDIR}.prepared: submodules.prepared ${CONFIG}
+	rm -rf ${BUILDDIR}
+	mkdir -p ${BUILDDIR}
+	cp -a submodules debian patches ${BUILDDIR}/
+	cp ${CONFIG} ${BUILDDIR}/${KERNEL_SUBMODULE}
+	cd ${BUILDDIR}/${KERNEL_SUBMODULE}; \
+		for p in ../../patches/kernel/*.patch; do \
+			patch -Np1 < $$p; \
+		done
+	touch $@
+
+kernel.prepared: ${BUILDDIR}.prepared
+	cd ${BUILDDIR}/${KERNEL_SUBMODULE}; \
+		KCONFIG_ALLCONFIG=${CONFIG} make allnoconfig && \
+		make -j$(nproc) prepare scripts
+	touch $@
+
+zfs.prepared: kernel.prepared
+	cd ${BUILDDIR}/${ZFSONLINUX_SUBMODULE}; \
+		sh autogen.sh && \
+		./configure \
+			--enable-linux-builtin \
+			--with-linux=../../${KERNEL_SUBMODULE} \
+			--with-linux-obj=../../${KERNEL_SUBMODULE} && \
+		./copy-builtin ../../${KERNEL_SUBMODULE}
+	# only now can we enable CONFIG_ZFS
+	cd ${BUILDDIR}/${KERNEL_SUBMODULE}; \
+		./scripts/config -e CONFIG_ZFS
+	touch $@
+
+${KERNEL_IMG}: zfs.prepared
+	cd ${BUILDDIR}/${KERNEL_SUBMODULE}; \
+	    make -j$(nproc)
+	mv ${BUILDDIR}/${KERNEL_SUBMODULE}/arch/x86/boot/bzImage ${BUILDDIR}/
+
+${INITRAMFS_IMG}: ${BUILDDIR}.prepared ${RUST_SRC} build_initramfs.sh
+	cd ${SHIM_DIR}; cargo build --release
+	sh build_initramfs.sh
+
+.PHONY: dinstall
+dinstall: deb
+	dpkg -i ${DEB}
+
+.PHONY: deb
+deb: ${DEB}
+${DEB}: ${KERNEL_IMG} ${INITRAMFS_IMG}
+	cd ${BUILDDIR}; dpkg-buildpackage -b -us -uc
+	lintian ${DEB}
+
+.PHONY: dsc
+dsc: ${DSC}
+${DSC}: ${KERNEL_IMG} ${INITRAMFS_IMG}
+	cd ${BUILDDIR}; dpkg-buildpackage -S -us -uc -d
+	lintian ${DSC}
+
+.PHONY: upload
+upload: ${DEB}
+	tar cf - ${DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
+	tar cf - ${DEB} | ssh -X repoman@repo.proxmox.com upload --product pve --dist buster
+
+.PHONY: test-run
+test-run: ${KERNEL_IMG} ${INITRAMFS_IMG}
+	# note: this will always fail since /proxmox-restore-daemon is not
+	# included in the initramfs, but it can be used to test the
+	# kernel/init-shim-rs builds
+	qemu-system-x86_64 -serial stdio -vnc none -enable-kvm \
+		-kernel build/${KERNEL_SUBMODULE}/arch/x86/boot/bzImage \
+		-initrd build/initramfs/initramfs.img
+
+.PHONY: clean
+clean:
+	rm -rf *~ ${BUILDDIR} ${INITRAMFS_BUILDDIR} *.prepared
+	rm -f ${PACKAGE}_${DEB_VERSION_UPSTREAM_REVISION}.tar.gz
+	rm -f *.deb *.changes *.buildinfo *.dsc
diff --git a/build_initramfs.sh b/build_initramfs.sh
new file mode 100755
index 0000000..72bb483
--- /dev/null
+++ b/build_initramfs.sh
@@ -0,0 +1,42 @@
+#!/bin/sh
+
+set -e
+
+ROOT="root"
+BUILDDIR="build/initramfs"
+INIT="../../init-shim-rs/target/release/init-shim-rs"
+
+PKGS=" \
+    libc6:amd64=2.28-10 \
+    libgcc1:amd64=1:8.3.0-6 \
+    libstdc++6:amd64=8.3.0-6 \
+    libssl1.1:amd64=1.1.1d-0+deb10u4 \
+    libattr1:amd64=1:2.4.48-4 \
+    libacl1:amd64=2.2.53-4
+"
+
+echo "Using build dir: $BUILDDIR"
+rm -rf "$BUILDDIR"
+mkdir -p "$BUILDDIR"
+cd "$BUILDDIR"
+mkdir "$ROOT"
+
+# add necessary packages to initramfs
+for pkg in $PKGS; do
+    apt-get download "$pkg"
+    dpkg-deb -x ./*.deb "$ROOT"
+    rm ./*.deb
+done
+
+rm -rf ${ROOT:?}/usr/share # contains only docs and debian stuff
+
+cp $INIT "$ROOT/init"
+chmod a+x "$ROOT/init" # just to be sure
+
+# tell daemon it's running in the correct environment
+touch "$ROOT/restore-vm-marker"
+
+fakeroot -- sh -c "
+    cd '$ROOT';
+    find . -print0 | cpio --null -oV --format=newc -F ../initramfs.img
+"
diff --git a/config-base b/config-base
new file mode 100644
index 0000000..db52460
--- /dev/null
+++ b/config-base
@@ -0,0 +1,144 @@
+CONFIG_LOCALVERSION="-pbs-restore"
+
+# kernel commandline override
+CONFIG_CMDLINE_BOOL=y
+CONFIG_CMDLINE="console=ttyS0"
+
+# NOTE: ZFS will be enabled from Makefile, since we can only activate it after
+# 'copy-builtin' creates the necessary Kconfig in the kernel tree.
+# CONFIG_ZFS=y
+
+# in case we crash the kernel, so we can at least read the stacktraces
+CONFIG_KALLSYMS=y
+
+# CPU settings
+CONFIG_64BIT=y
+CONFIG_MMU=y
+CONFIG_SMP=y
+CONFIG_X86_X2APIC=y
+CONFIG_ACPI=y
+# not super necessary, but avoids a warning
+CONFIG_RETPOLINE=y
+
+# basic kernel features
+CONFIG_MULTIUSER=y
+CONFIG_POSIX_TIMERS=y
+CONFIG_BUG=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_AIO=y
+CONFIG_BINFMT_ELF=y
+CONFIG_ELFCORE=y
+CONFIG_PRINTK=y
+CONFIG_EVENTFD=y
+CONFIG_MODULES=y
+
+# initramfs support
+CONFIG_TMPFS=y
+CONFIG_BLK_DEV_INITRD=y
+
+# paravirt acceleration
+CONFIG_KVM_GUEST=y
+CONFIG_PARAVIRT=y
+CONFIG_PARAVIRT_CLOCK=y
+CONFIG_PARAVIRT_SPINLOCKS=y
+CONFIG_VIRTIO_MENU=y
+CONFIG_VIRTIO=y
+
+# enable terminal on serial for debugging/logging
+CONFIG_TTY=y
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_INPUT_KEYBOARD=y
+
+# vsock support
+CONFIG_PCI=y
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_VSOCKETS=y
+CONFIG_VIRTIO_VSOCKETS=y
+CONFIG_VIRTIO_VSOCKETS_COMMON=y
+
+# block device support, especially virtio-scsi/blk
+CONFIG_BLOCK=y
+CONFIG_BLK_DEV=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_BLK_DEV_SR=y
+CONFIG_VIRT_DRIVERS=y
+CONFIG_SCSI=y
+CONFIG_SCSI_VIRTIO=y
+CONFIG_BLK_MQ_VIRTIO=y
+CONFIG_VIRTIO_BLK=y
+CONFIG_VIRTIO_MMIO=y
+CONFIG_VIRTIO_PCI=y
+
+# md/LVM/device-mapper support
+CONFIG_DM_CRYPT=y
+CONFIG_DM_MIRROR=y
+CONFIG_DM_RAID=y
+CONFIG_DM_SNAPSHOT=y
+CONFIG_DM_THIN_PROVISIONING=y
+CONFIG_DM_UNSTRIPED=y
+CONFIG_MD=y
+CONFIG_MD_AUTODETECT=y
+CONFIG_MD_LINEAR=y
+CONFIG_MD_RAID0=y
+CONFIG_MD_RAID1=y
+CONFIG_MD_RAID10=y
+CONFIG_MD_RAID456=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_BLK_DEV_DM=y
+
+# basic fs features
+CONFIG_FS_POSIX_ACL=y
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+CONFIG_PROC_FS=y
+# proc_sysctl is necessary for ZFS, it panics otherwise
+CONFIG_PROC_SYSCTL=y
+CONFIG_SYSFS=y
+CONFIG_NLS=y
+CONFIG_NLS_UTF8=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_MSDOS_PARTITION=y
+CONFIG_EFI_PARTITION=y
+
+# filesystem support
+CONFIG_MISC_FILESYSTEMS=y
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT2_FS_POSIX_ACL=y
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_XATTR=y
+CONFIG_EXT3_FS_POSIX_ACL=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_XATTR=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_REISERFS_FS=y
+CONFIG_REISERFS_FS_XATTR=y
+CONFIG_REISERFS_FS_POSIX_ACL=y
+CONFIG_JFS_FS=y
+CONFIG_JFS_POSIX_ACL=y
+CONFIG_XFS_FS=y
+CONFIG_XFS_POSIX_ACL=y
+CONFIG_BTRFS_FS=y
+CONFIG_BTRFS_FS_POSIX_ACL=y
+CONFIG_F2FS_FS=y
+CONFIG_F2FS_FS_XATTR=y
+CONFIG_F2FS_FS_POSIX_ACL=y
+CONFIG_F2FS_FS_COMPRESSION=y
+CONFIG_F2FS_FS_LZO=y
+CONFIG_F2FS_FS_LZ4=y
+CONFIG_F2FS_FS_ZSTD=y
+CONFIG_F2FS_FS_LZORLE=y
+CONFIG_HFS_FS=y
+CONFIG_HFSPLUS_FS=y
+CONFIG_BEFS_FS=y
+CONFIG_SYSV_FS=y
+CONFIG_UFS_FS=y
+CONFIG_ISO9660_FS=y
+CONFIG_NTFS_FS=y
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
diff --git a/debian/changelog b/debian/changelog
new file mode 100644
index 0000000..a389c1b
--- /dev/null
+++ b/debian/changelog
@@ -0,0 +1,6 @@
+proxmox-restore-vm-data (1.0.0-1) pbs; urgency=medium
+
+  * initial release
+
+ -- Proxmox Support Team <support@proxmox.com>  Tue, 16 Feb 2021 16:49:20 +0100
+
diff --git a/debian/compat b/debian/compat
new file mode 100644
index 0000000..f599e28
--- /dev/null
+++ b/debian/compat
@@ -0,0 +1 @@
+10
diff --git a/debian/control b/debian/control
new file mode 100644
index 0000000..ff128de
--- /dev/null
+++ b/debian/control
@@ -0,0 +1,34 @@
+Source: proxmox-restore-vm-data
+Section: admin
+Priority: optional
+Maintainer: Proxmox Support Team <support@proxmox.com>
+Build-Depends: asciidoc-base,
+               automake,
+               bc,
+               bison,
+               cpio,
+               debhelper (>= 10~),
+               dh-python,
+               flex,
+               gcc (>= 8.3.0-6),
+               git,
+               libdw-dev,
+               libelf-dev,
+               libtool,
+               lintian,
+               perl-modules,
+               python-minimal,
+               sed,
+               sphinx-common,
+               tar,
+               xmlto,
+               zlib1g-dev,
+Standards-Version: 4.5.1
+Homepage: https://www.proxmox.com
+
+Package: proxmox-restore-vm-data
+Architecture: amd64
+Recommends: proxmox-file-restore
+Description: VM kernel/initramfs images for PBS single file restore
+ Preconfigured images used as base for single file restore of PBS backup
+ snapshots. Useless on their own, use together with proxmox-file-restore.
diff --git a/debian/copyright b/debian/copyright
new file mode 100644
index 0000000..ce2c0a7
--- /dev/null
+++ b/debian/copyright
@@ -0,0 +1,22 @@
+Copyright (C) 2020 Proxmox Server Solutions GmbH
+
+This package contains a version of a linux kernel binary image (including
+patches by Ubuntu/Canonical and Proxmox) with built-in ZFS support.
+
+Linux is copyrighted by Linus Torvalds and others and distributed under the
+terms of the GPL-2.0 license.
+The complete text of the GNU General Public License can be found in
+`/usr/share/common-licenses/GPL-2'.
+
+ZFS is licensed under the Common Development and Distribution License (CDDL).
+
+The shipped initramfs image contains several files from other debian packages.
+For their copyright notices see the respective packages in the versions
+mentioned in build_initramfs.sh.
+
+The initramfs also contains a rust binary as /init, built from 'init-shim-rs'
+available in this package's sources. This binary is released under the terms of
+the AGPLv3 by Proxmox Server Solutions GmbH.
+
+This package was put together by Proxmox Server Solutions GmbH
+<support@proxmox.com>.
diff --git a/debian/install b/debian/install
new file mode 100644
index 0000000..5e83453
--- /dev/null
+++ b/debian/install
@@ -0,0 +1,2 @@
+bzImage /usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/
+initramfs/initramfs.img /usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/
diff --git a/debian/rules b/debian/rules
new file mode 100755
index 0000000..955dd78
--- /dev/null
+++ b/debian/rules
@@ -0,0 +1,13 @@
+#!/usr/bin/make -f
+# -*- makefile -*-
+# Sample debian/rules that uses debhelper.
+# This file was originally written by Joey Hess and Craig Small.
+# As a special exception, when this file is copied by dh-make into a
+# dh-make output file, you may use that output file without restriction.
+# This special exception was added by Craig Small in version 0.37 of dh-make.
+
+# Uncomment this to turn on verbose mode.
+#export DH_VERBOSE=1
+
+%:
+	dh $@
diff --git a/debian/triggers b/debian/triggers
new file mode 100644
index 0000000..a7abac5
--- /dev/null
+++ b/debian/triggers
@@ -0,0 +1 @@
+activate-noawait pbs-file-restore-initramfs
diff --git a/init-shim-rs/Cargo.lock b/init-shim-rs/Cargo.lock
new file mode 100644
index 0000000..a293b3c
--- /dev/null
+++ b/init-shim-rs/Cargo.lock
@@ -0,0 +1,51 @@
+# This file is automatically @generated by Cargo.
+# It is not intended for manual editing.
+[[package]]
+name = "anyhow"
+version = "1.0.34"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bf8dcb5b4bbaa28653b647d8c77bd4ed40183b48882e130c1f1ffb73de069fd7"
+
+[[package]]
+name = "bitflags"
+version = "1.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "cf1de2fe8c75bc145a2f577add951f8134889b4795d47466a54a5c846d691693"
+
+[[package]]
+name = "cc"
+version = "1.0.62"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f1770ced377336a88a67c473594ccc14eca6f4559217c34f64aac8f83d641b40"
+
+[[package]]
+name = "cfg-if"
+version = "0.1.10"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "4785bdd1c96b2a846b2bd7cc02e86b6b3dbf14e7e53446c4f54c92a361040822"
+
+[[package]]
+name = "init-shim-rs"
+version = "1.0.0"
+dependencies = [
+ "anyhow",
+ "nix",
+]
+
+[[package]]
+name = "libc"
+version = "0.2.80"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "4d58d1b70b004888f764dfbf6a26a3b0342a1632d33968e4a179d8011c760614"
+
+[[package]]
+name = "nix"
+version = "0.19.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "85db2feff6bf70ebc3a4793191517d5f0331100a2f10f9bf93b5e5214f32b7b7"
+dependencies = [
+ "bitflags",
+ "cc",
+ "cfg-if",
+ "libc",
+]
diff --git a/init-shim-rs/Cargo.toml b/init-shim-rs/Cargo.toml
new file mode 100644
index 0000000..013395c
--- /dev/null
+++ b/init-shim-rs/Cargo.toml
@@ -0,0 +1,9 @@
+[package]
+name = "init-shim-rs"
+version = "1.0.0"
+authors = ["Stefan Reiter <s.reiter@proxmox.com>"]
+edition = "2018"
+
+[dependencies]
+anyhow = "1.0"
+nix = "0.19"
diff --git a/init-shim-rs/src/main.rs b/init-shim-rs/src/main.rs
new file mode 100644
index 0000000..89aff7b
--- /dev/null
+++ b/init-shim-rs/src/main.rs
@@ -0,0 +1,122 @@
+use anyhow::Error;
+use std::ffi::CStr;
+use std::fs;
+
+const URANDOM_MAJ: u64 = 1;
+const URANDOM_MIN: u64 = 9;
+
+/// Set up a somewhat normal linux userspace environment before starting the restore daemon, and
+/// provide error messages to the user if doing so fails.
+///
+/// This is supposed to run as /init in an initramfs image.
+fn main() {
+    println!("[init-shim] beginning user space setup");
+
+    // /dev is mounted automatically
+    wrap_err("mount /sys", || do_mount("/sys", "sysfs"));
+    wrap_err("mount /proc", || do_mount("/proc", "proc"));
+
+    // make device nodes required by daemon
+    wrap_err("mknod /dev/urandom", || {
+        do_mknod("/dev/urandom", URANDOM_MAJ, URANDOM_MIN)
+    });
+
+    let uptime = read_uptime();
+    println!("[init-shim] reached daemon start after {:.2}s", uptime);
+
+    do_run("/proxmox-restore-daemon");
+}
+
+fn do_mount(target: &str, fstype: &str) -> Result<(), Error> {
+    use nix::mount::{mount, MsFlags};
+    fs::create_dir(target)?;
+    let none_type: Option<&CStr> = None;
+    mount(
+        none_type,
+        target,
+        Some(fstype),
+        MsFlags::MS_NOSUID | MsFlags::MS_NOEXEC,
+        none_type,
+    )?;
+    Ok(())
+}
+
+fn do_mknod(path: &str, maj: u64, min: u64) -> Result<(), Error> {
+    use nix::sys::stat;
+    let dev = stat::makedev(maj, min);
+    stat::mknod(path, stat::SFlag::S_IFCHR, stat::Mode::S_IRWXU, dev)?;
+    Ok(())
+}
+
+fn read_uptime() -> f32 {
+    let uptime = wrap_err("read /proc/uptime", || {
+        fs::read_to_string("/proc/uptime").map_err(|e| e.into())
+    });
+    // this can never fail on a sane kernel, so just unwrap
+    uptime
+        .split_ascii_whitespace()
+        .next()
+        .unwrap()
+        .parse()
+        .unwrap()
+}
+
+fn do_run(cmd: &str) -> ! {
+    use std::io::ErrorKind;
+    use std::process::Command;
+
+    let spawn_res = Command::new(cmd).env("RUST_BACKTRACE", "1").spawn();
+
+    match spawn_res {
+        Ok(mut child) => {
+            let res = wrap_err("wait failed", || child.wait().map_err(|e| e.into()));
+            error(&format!(
+                "child process {} (pid={} exitcode={}) exited unexpectedly, check log for more info",
+                cmd,
+                child.id(),
+                res.code().unwrap_or(-1),
+            ));
+        }
+        Err(err) if err.kind() == ErrorKind::NotFound => {
+            error(&format!(
+                concat!(
+                    "{} missing from image.\n",
+                    "This initramfs should only be run with proxmox-file-restore!"
+                ),
+                cmd
+            ));
+        }
+        Err(err) => {
+            error(&format!(
+                "unexpected error during start of {}: {}",
+                cmd, err
+            ));
+        }
+    }
+}
+
+fn wrap_err<R, F: FnOnce() -> Result<R, Error>>(op: &str, f: F) -> R {
+    match f() {
+        Ok(r) => r,
+        Err(e) => error(&format!("operation '{}' failed: {}", op, e)),
+    }
+}
+
+fn error(msg: &str) -> ! {
+    use nix::sys::reboot;
+
+    println!("\n--------");
+    println!("ERROR: Init shim failed\n");
+    println!("{}", msg);
+    println!("--------\n");
+
+    // in case a fatal error occurs we shut down the VM, there's no sense in continuing and this
+    // will certainly alert whoever started us up in the first place
+    let err = reboot::reboot(reboot::RebootMode::RB_POWER_OFF).unwrap_err();
+    println!("'reboot' syscall failed: {} - cannot continue", err);
+
+    // in case 'reboot' fails just loop forever
+    loop {
+        std::thread::sleep(std::time::Duration::from_secs(600));
+    }
+}
diff --git a/patches/kernel/0001-OVERRIDE-do-not-build-xr-usb-serial.patch b/patches/kernel/0001-OVERRIDE-do-not-build-xr-usb-serial.patch
new file mode 100644
index 0000000..7873602
--- /dev/null
+++ b/patches/kernel/0001-OVERRIDE-do-not-build-xr-usb-serial.patch
@@ -0,0 +1,30 @@
+From 4cf77185b43a29ad2d70749648cac83330030cf9 Mon Sep 17 00:00:00 2001
+From: Stefan Reiter <s.reiter@proxmox.com>
+Date: Tue, 17 Nov 2020 14:42:52 +0100
+Subject: [PATCH] OVERRIDE: do not build xr-usb-serial
+
+We don't have USB support in the kernel, so this will fail - and for
+some reason there's no Kconfig setting for this...
+
+Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
+---
+ ubuntu/Makefile | 3 ---
+ 1 file changed, 3 deletions(-)
+
+diff --git a/ubuntu/Makefile b/ubuntu/Makefile
+index 67c6d5b98b53..6e7264845b66 100644
+--- a/ubuntu/Makefile
++++ b/ubuntu/Makefile
+@@ -19,9 +19,6 @@ obj-$(CONFIG_HIO)             += hio/
+ ##
+ ##
+ ##
+-ifeq ($(ARCH),x86)
+-obj-y				+= xr-usb-serial/
+-endif
+ ##
+ ##
+ ##
+-- 
+2.20.1
+
diff --git a/patches/kernel/0002-FIXUP-syntax-error-in-Ubuntu-Sauce.patch b/patches/kernel/0002-FIXUP-syntax-error-in-Ubuntu-Sauce.patch
new file mode 100644
index 0000000..6273847
--- /dev/null
+++ b/patches/kernel/0002-FIXUP-syntax-error-in-Ubuntu-Sauce.patch
@@ -0,0 +1,26 @@
+From 2c972569ef5b641846773bee3b3a0191ba66165e Mon Sep 17 00:00:00 2001
+From: Stefan Reiter <s.reiter@proxmox.com>
+Date: Tue, 16 Feb 2021 17:14:41 +0100
+Subject: [PATCH] FIXUP: syntax error in Ubuntu Sauce
+
+Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
+---
+ include/linux/audit.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/linux/audit.h b/include/linux/audit.h
+index 55cc03c1bed8..8f84c9503827 100644
+--- a/include/linux/audit.h
++++ b/include/linux/audit.h
+@@ -253,7 +253,7 @@ static inline void audit_log_path_denied(int type, const char *operation)
+ static inline void audit_log_lsm(struct lsmblob *blob, bool exiting)
+ { }
+ static inline int audit_log_task_context(struct audit_buffer *ab,
+-					 struct lsmblob *blob);
++					 struct lsmblob *blob)
+ {
+ 	return 0;
+ }
+-- 
+2.20.1
+
diff --git a/submodules/ubuntu-hirsute b/submodules/ubuntu-hirsute
new file mode 160000
index 0000000..01f2ad6
--- /dev/null
+++ b/submodules/ubuntu-hirsute
@@ -0,0 +1 @@
+Subproject commit 01f2ad60c19fc07666c3cad5e6f527bc46af6303
diff --git a/submodules/zfsonlinux b/submodules/zfsonlinux
new file mode 160000
index 0000000..9f5f866
--- /dev/null
+++ b/submodules/zfsonlinux
@@ -0,0 +1 @@
+Subproject commit 9f5f86626620c52ad1bebf27d17cece6a28d39a0
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup 04/22] api2/admin/datastore: refactor list_dir_content in catalog_reader
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (2 preceding siblings ...)
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-restore-vm-data 03/22] initial commit Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-17  7:50   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 05/22] api2/admin/datastore: accept "/" as path for root Stefan Reiter
                   ` (18 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

From: Dominik Csapak <d.csapak@proxmox.com>

we will reuse this later in the client, so it needs to live somewhere the
client can access

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

[add strongly typed ArchiveEntry and put api code into helpers.rs]
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/api2/admin/datastore.rs | 53 ++++++-------------------------------
 src/api2/helpers.rs         | 31 ++++++++++++++++++++++
 src/api2/types/mod.rs       | 43 ++++++++++++++++++++++++++++++
 src/backup/catalog.rs       | 26 ++++++++++++++++++
 4 files changed, 108 insertions(+), 45 deletions(-)
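
For illustration, the strongly typed entry replacing the previous ad-hoc
json! object can be constructed like this (path, size and mtime are made-up
sample values):

    use crate::api2::types::ArchiveEntry;
    use crate::backup::DirEntryAttribute;

    fn example_entries() -> (ArchiveEntry, ArchiveEntry) {
        // a file entry: leaf = true, size/mtime get serialized
        let file = ArchiveEntry::new(
            b"/etc/hostname",
            &DirEntryAttribute::File { size: 42, mtime: 1_613_494_800 },
        );
        // a directory entry: leaf = false, size/mtime are skipped
        let dir = ArchiveEntry::new(b"/etc", &DirEntryAttribute::Directory { start: 0 });
        (file, dir)
    }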

diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 6f02e460..ab88d172 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -27,6 +27,7 @@ use pxar::EntryKind;
 
 use crate::api2::types::*;
 use crate::api2::node::rrd::create_value_from_rrd;
+use crate::api2::helpers;
 use crate::backup::*;
 use crate::config::datastore;
 use crate::config::cached_user_info::CachedUserInfo;
@@ -1294,7 +1295,7 @@ pub fn catalog(
     backup_time: i64,
     filepath: String,
     rpcenv: &mut dyn RpcEnvironment,
-) -> Result<Value, Error> {
+) -> Result<Vec<ArchiveEntry>, Error> {
     let datastore = DataStore::lookup_datastore(&store)?;
 
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -1326,52 +1327,14 @@ pub fn catalog(
     let reader = BufferedDynamicReader::new(index, chunk_reader);
 
     let mut catalog_reader = CatalogReader::new(reader);
-    let mut current = catalog_reader.root()?;
-    let mut components = vec![];
 
+    let path = if filepath != "root" {
+        base64::decode(filepath)?
+    } else {
+        vec![b'/']
+    };
 
-    if filepath != "root" {
-        components = base64::decode(filepath)?;
-        if !components.is_empty() && components[0] == b'/' {
-            components.remove(0);
-        }
-        for component in components.split(|c| *c == b'/') {
-            if let Some(entry) = catalog_reader.lookup(&current, component)? {
-                current = entry;
-            } else {
-                bail!("path {:?} not found in catalog", &String::from_utf8_lossy(&components));
-            }
-        }
-    }
-
-    let mut res = Vec::new();
-
-    for direntry in catalog_reader.read_dir(&current)? {
-        let mut components = components.clone();
-        components.push(b'/');
-        components.extend(&direntry.name);
-        let path = base64::encode(components);
-        let text = String::from_utf8_lossy(&direntry.name);
-        let mut entry = json!({
-            "filepath": path,
-            "text": text,
-            "type": CatalogEntryType::from(&direntry.attr).to_string(),
-            "leaf": true,
-        });
-        match direntry.attr {
-            DirEntryAttribute::Directory { start: _ } => {
-                entry["leaf"] = false.into();
-            },
-            DirEntryAttribute::File { size, mtime } => {
-                entry["size"] = size.into();
-                entry["mtime"] = mtime.into();
-            },
-            _ => {},
-        }
-        res.push(entry);
-    }
-
-    Ok(res.into())
+    helpers::list_dir_content(&mut catalog_reader, &path)
 }
 
 fn recurse_files<'a, T, W>(
diff --git a/src/api2/helpers.rs b/src/api2/helpers.rs
index 2a822654..41391b77 100644
--- a/src/api2/helpers.rs
+++ b/src/api2/helpers.rs
@@ -1,3 +1,4 @@
+use std::io::{Read, Seek};
 use std::path::PathBuf;
 
 use anyhow::Error;
@@ -6,6 +7,9 @@ use hyper::{Body, Response, StatusCode, header};
 
 use proxmox::http_bail;
 
+use crate::api2::types::ArchiveEntry;
+use crate::backup::{CatalogReader, DirEntryAttribute};
+
 pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, Error> {
     let file = match tokio::fs::File::open(path.clone()).await {
         Ok(file) => file,
@@ -27,3 +31,30 @@ pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, E
         .body(body)
         .unwrap())
 }
+
+/// Returns the list of content of the given path
+pub fn list_dir_content<R: Read + Seek>(
+    reader: &mut CatalogReader<R>,
+    path: &[u8],
+) -> Result<Vec<ArchiveEntry>, Error> {
+    let dir = reader.lookup_recursive(path)?;
+    let mut res = vec![];
+    let mut path = path.to_vec();
+    if !path.is_empty() && path[0] == b'/' {
+        path.remove(0);
+    }
+
+    for direntry in reader.read_dir(&dir)? {
+        let mut components = path.clone();
+        components.push(b'/');
+        components.extend(&direntry.name);
+        let mut entry = ArchiveEntry::new(&components, &direntry.attr);
+        if let DirEntryAttribute::File { size, mtime } = direntry.attr {
+            entry.size = size.into();
+            entry.mtime = mtime.into();
+        }
+        res.push(entry);
+    }
+
+    Ok(res)
+}
diff --git a/src/api2/types/mod.rs b/src/api2/types/mod.rs
index d9394586..4c663335 100644
--- a/src/api2/types/mod.rs
+++ b/src/api2/types/mod.rs
@@ -12,6 +12,8 @@ use crate::{
         CryptMode,
         Fingerprint,
         BACKUP_ID_REGEX,
+        DirEntryAttribute,
+        CatalogEntryType,
     },
     server::UPID,
     config::acl::Role,
@@ -1303,6 +1305,47 @@ pub struct DatastoreNotify {
     pub sync: Option<Notify>,
 }
 
+/// An entry in a hierarchy of files for restore and listing.
+#[api()]
+#[derive(Serialize, Deserialize)]
+pub struct ArchiveEntry {
+    /// Base64-encoded full path to the file, including the filename
+    pub filepath: String,
+    /// Displayable filename text for UIs
+    pub text: String,
+    /// File or directory type of this entry
+    #[serde(rename = "type")]
+    pub entry_type: String,
+    /// Is this entry a leaf node, or does it have children (i.e. a directory)?
+    pub leaf: bool,
+    /// The file size, if entry_type is 'f' (file)
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub size: Option<u64>,
+    /// The file "last modified" time stamp, if entry_type is 'f' (file)
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub mtime: Option<i64>,
+}
+
+impl ArchiveEntry {
+    pub fn new(filepath: &[u8], entry_type: &DirEntryAttribute) -> Self {
+        Self {
+            filepath: base64::encode(filepath),
+            text: String::from_utf8_lossy(filepath.split(|x| *x == b'/').last().unwrap())
+                .to_string(),
+            entry_type: CatalogEntryType::from(entry_type).to_string(),
+            leaf: !matches!(entry_type, DirEntryAttribute::Directory { .. }),
+            size: match entry_type {
+                DirEntryAttribute::File { size, .. } => Some(*size),
+                _ => None
+            },
+            mtime: match entry_type {
+                DirEntryAttribute::File { mtime, .. } => Some(*mtime),
+                _ => None
+            },
+        }
+    }
+}
+
 pub const DATASTORE_NOTIFY_STRING_SCHEMA: Schema = StringSchema::new(
     "Datastore notification setting")
     .format(&ApiStringFormat::PropertyString(&DatastoreNotify::API_SCHEMA))
diff --git a/src/backup/catalog.rs b/src/backup/catalog.rs
index 224e6bf7..a307f9d8 100644
--- a/src/backup/catalog.rs
+++ b/src/backup/catalog.rs
@@ -468,6 +468,32 @@ impl <R: Read + Seek> CatalogReader<R> {
         Ok(entry_list)
     }
 
+    /// Lookup a DirEntry from an absolute path
+    pub fn lookup_recursive(
+        &mut self,
+        path: &[u8],
+    ) -> Result<DirEntry, Error> {
+        let mut current = self.root()?;
+        if path == b"/" {
+            return Ok(current);
+        }
+
+        let components = if !path.is_empty() && path[0] == b'/' {
+            &path[1..]
+        } else {
+            path
+        }.split(|c| *c == b'/');
+
+        for comp in components {
+            if let Some(entry) = self.lookup(&current, comp)? {
+                current = entry;
+            } else {
+                bail!("path {:?} not found in catalog", String::from_utf8_lossy(&path));
+            }
+        }
+        Ok(current)
+    }
+
     /// Look up a DirEntry inside a parent directory
     pub fn lookup(
         &mut self,
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup 05/22] api2/admin/datastore: accept "/" as path for root
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (3 preceding siblings ...)
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 04/22] api2/admin/datastore: refactor list_dir_content in catalog_reader Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-17  7:50   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 06/22] api2/admin/datastore: refactor create_zip into pxar/extract Stefan Reiter
                   ` (17 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

From: Dominik Csapak <d.csapak@proxmox.com>

makes more sense than sending "root"

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/api2/admin/datastore.rs | 2 +-
 www/window/FileBrowser.js   | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index ab88d172..88f011e4 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -1328,7 +1328,7 @@ pub fn catalog(
 
     let mut catalog_reader = CatalogReader::new(reader);
 
-    let path = if filepath != "root" {
+    let path = if filepath != "root" && filepath != "/" {
         base64::decode(filepath)?
     } else {
         vec![b'/']
diff --git a/www/window/FileBrowser.js b/www/window/FileBrowser.js
index 01b5d79b..724e1791 100644
--- a/www/window/FileBrowser.js
+++ b/www/window/FileBrowser.js
@@ -185,6 +185,7 @@ Ext.define("PBS.window.FileBrowser", {
 	    store: {
 		autoLoad: false,
 		model: 'pbs-file-tree',
+		defaultRootId: '/',
 		nodeParam: 'filepath',
 		sorters: 'text',
 		proxy: {
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup 06/22] api2/admin/datastore: refactor create_zip into pxar/extract
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (4 preceding siblings ...)
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 05/22] api2/admin/datastore: accept "/" as path for root Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-17  7:50   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 07/22] pxar/extract: add extract_sub_dir Stefan Reiter
                   ` (16 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

From: Dominik Csapak <d.csapak@proxmox.com>

we will reuse that code in the client, so we need to move it somewhere
accessible from the client

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

[clippy fixes]
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/api2/admin/datastore.rs |  99 +++--------------------------
 src/pxar/extract.rs         | 120 +++++++++++++++++++++++++++++++++++-
 src/pxar/mod.rs             |   2 +-
 3 files changed, 130 insertions(+), 91 deletions(-)
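
A sketch of a standalone call site for the relocated helper (the argument
order is inferred from the pxar_file_download hunk below: async writer,
accessor, base path inside the archive, then a boolean flag; the flag's exact
meaning and the return type are assumptions, since the signature is only
partially visible here):

    use crate::pxar::create_zip;
    use pxar::accessor::aio::Accessor;

    async fn zip_whole_archive<T>(
        accessor: Accessor<T>,
        output: tokio::fs::File,
    ) -> Result<(), anyhow::Error>
    where
        T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
    {
        // zip everything below the archive root into `output`
        create_zip(output, accessor, std::ffi::OsString::from("/"), false).await
    }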

diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 88f011e4..a3e115f6 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -3,8 +3,6 @@
 use std::collections::HashSet;
 use std::ffi::OsStr;
 use std::os::unix::ffi::OsStrExt;
-use std::path::{Path, PathBuf};
-use std::pin::Pin;
 
 use anyhow::{bail, format_err, Error};
 use futures::*;
@@ -22,7 +20,7 @@ use proxmox::api::schema::*;
 use proxmox::tools::fs::{replace_file, CreateOptions};
 use proxmox::{http_err, identity, list_subdirs_api_method, sortable};
 
-use pxar::accessor::aio::{Accessor, FileContents, FileEntry};
+use pxar::accessor::aio::Accessor;
 use pxar::EntryKind;
 
 use crate::api2::types::*;
@@ -31,11 +29,11 @@ use crate::api2::helpers;
 use crate::backup::*;
 use crate::config::datastore;
 use crate::config::cached_user_info::CachedUserInfo;
+use crate::pxar::create_zip;
 
 use crate::server::{jobstate::Job, WorkerTask};
 use crate::tools::{
     self,
-    zip::{ZipEncoder, ZipEntry},
     AsyncChannelWriter, AsyncReaderStream, WrappedReaderStream,
 };
 
@@ -1337,66 +1335,6 @@ pub fn catalog(
     helpers::list_dir_content(&mut catalog_reader, &path)
 }
 
-fn recurse_files<'a, T, W>(
-    zip: &'a mut ZipEncoder<W>,
-    decoder: &'a mut Accessor<T>,
-    prefix: &'a Path,
-    file: FileEntry<T>,
-) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'a>>
-where
-    T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
-    W: tokio::io::AsyncWrite + Unpin + Send + 'static,
-{
-    Box::pin(async move {
-        let metadata = file.entry().metadata();
-        let path = file.entry().path().strip_prefix(&prefix)?.to_path_buf();
-
-        match file.kind() {
-            EntryKind::File { .. } => {
-                let entry = ZipEntry::new(
-                    path,
-                    metadata.stat.mtime.secs,
-                    metadata.stat.mode as u16,
-                    true,
-                );
-                zip.add_entry(entry, Some(file.contents().await?))
-                   .await
-                   .map_err(|err| format_err!("could not send file entry: {}", err))?;
-            }
-            EntryKind::Hardlink(_) => {
-                let realfile = decoder.follow_hardlink(&file).await?;
-                let entry = ZipEntry::new(
-                    path,
-                    metadata.stat.mtime.secs,
-                    metadata.stat.mode as u16,
-                    true,
-                );
-                zip.add_entry(entry, Some(realfile.contents().await?))
-                   .await
-                   .map_err(|err| format_err!("could not send file entry: {}", err))?;
-            }
-            EntryKind::Directory => {
-                let dir = file.enter_directory().await?;
-                let mut readdir = dir.read_dir();
-                let entry = ZipEntry::new(
-                    path,
-                    metadata.stat.mtime.secs,
-                    metadata.stat.mode as u16,
-                    false,
-                );
-                zip.add_entry::<FileContents<T>>(entry, None).await?;
-                while let Some(entry) = readdir.next().await {
-                    let entry = entry?.decode_entry().await?;
-                    recurse_files(zip, decoder, prefix, entry).await?;
-                }
-            }
-            _ => {} // ignore all else
-        };
-
-        Ok(())
-    })
-}
-
 #[sortable]
 pub const API_METHOD_PXAR_FILE_DOWNLOAD: ApiMethod = ApiMethod::new(
     &ApiHandler::AsyncHttp(&pxar_file_download),
@@ -1472,9 +1410,10 @@ pub fn pxar_file_download(
 
         let decoder = Accessor::new(reader, archive_size).await?;
         let root = decoder.open_root().await?;
+        let path = OsStr::from_bytes(file_path).to_os_string();
         let file = root
-            .lookup(OsStr::from_bytes(file_path)).await?
-            .ok_or_else(|| format_err!("error opening '{:?}'", file_path))?;
+            .lookup(&path).await?
+            .ok_or_else(|| format_err!("error opening '{:?}'", path))?;
 
         let body = match file.kind() {
             EntryKind::File { .. } => Body::wrap_stream(
@@ -1488,37 +1427,19 @@ pub fn pxar_file_download(
                     .map_err(move |err| {
                         eprintln!(
                             "error during streaming of hardlink '{:?}' - {}",
-                            filepath, err
+                            path, err
                         );
                         err
                     }),
             ),
             EntryKind::Directory => {
                 let (sender, receiver) = tokio::sync::mpsc::channel(100);
-                let mut prefix = PathBuf::new();
-                let mut components = file.entry().path().components();
-                components.next_back(); // discar last
-                for comp in components {
-                    prefix.push(comp);
-                }
-
                 let channelwriter = AsyncChannelWriter::new(sender, 1024 * 1024);
-
-                crate::server::spawn_internal_task(async move {
-                    let mut zipencoder = ZipEncoder::new(channelwriter);
-                    let mut decoder = decoder;
-                    recurse_files(&mut zipencoder, &mut decoder, &prefix, file)
-                        .await
-                        .map_err(|err| eprintln!("error during creating of zip: {}", err))?;
-
-                    zipencoder
-                        .finish()
-                        .await
-                        .map_err(|err| eprintln!("error during finishing of zip: {}", err))
-                });
-
+                crate::server::spawn_internal_task(
+                    create_zip(channelwriter, decoder, path.clone(), false)
+                );
                 Body::wrap_stream(ReceiverStream::new(receiver).map_err(move |err| {
-                    eprintln!("error during streaming of zip '{:?}' - {}", filepath, err);
+                    eprintln!("error during streaming of zip '{:?}' - {}", path, err);
                     err
                 }))
             }
diff --git a/src/pxar/extract.rs b/src/pxar/extract.rs
index 0a61c885..d246e7ec 100644
--- a/src/pxar/extract.rs
+++ b/src/pxar/extract.rs
@@ -5,9 +5,11 @@ use std::ffi::{CStr, CString, OsStr, OsString};
 use std::io;
 use std::os::unix::ffi::OsStrExt;
 use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
-use std::path::Path;
+use std::path::{Path, PathBuf};
 use std::sync::{Arc, Mutex};
+use std::pin::Pin;
 
+use futures::future::Future;
 use anyhow::{bail, format_err, Error};
 use nix::dir::Dir;
 use nix::fcntl::OFlag;
@@ -16,6 +18,7 @@ use nix::sys::stat::Mode;
 use pathpatterns::{MatchEntry, MatchList, MatchType};
 use pxar::format::Device;
 use pxar::Metadata;
+use pxar::accessor::aio::{Accessor, FileContents, FileEntry};
 
 use proxmox::c_result;
 use proxmox::tools::fs::{create_path, CreateOptions};
@@ -24,6 +27,8 @@ use crate::pxar::dir_stack::PxarDirStack;
 use crate::pxar::metadata;
 use crate::pxar::Flags;
 
+use crate::tools::zip::{ZipEncoder, ZipEntry};
+
 pub struct PxarExtractOptions<'a> {
     pub match_list: &'a[MatchEntry],
     pub extract_match_default: bool,
@@ -465,3 +470,116 @@ impl Extractor {
         )
     }
 }
+
+pub async fn create_zip<T, W, P>(
+    output: W,
+    decoder: Accessor<T>,
+    path: P,
+    verbose: bool,
+) -> Result<(), Error>
+where
+    T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
+    W: tokio::io::AsyncWrite + Unpin + Send + 'static,
+    P: AsRef<Path>,
+{
+    let root = decoder.open_root().await?;
+    let file = root
+        .lookup(&path).await?
+        .ok_or(format_err!("error opening '{:?}'", path.as_ref()))?;
+
+    let mut prefix = PathBuf::new();
+    let mut components = file.entry().path().components();
+    components.next_back(); // discar last
+    for comp in components {
+        prefix.push(comp);
+    }
+
+    let mut zipencoder = ZipEncoder::new(output);
+    let mut decoder = decoder;
+    recurse_files_zip(&mut zipencoder, &mut decoder, &prefix, file, verbose)
+        .await
+        .map_err(|err| {
+            eprintln!("error during creating of zip: {}", err);
+            err
+        })?;
+
+    zipencoder
+        .finish()
+        .await
+        .map_err(|err| {
+            eprintln!("error during finishing of zip: {}", err);
+            err
+        })
+}
+
+fn recurse_files_zip<'a, T, W>(
+    zip: &'a mut ZipEncoder<W>,
+    decoder: &'a mut Accessor<T>,
+    prefix: &'a Path,
+    file: FileEntry<T>,
+    verbose: bool,
+) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'a>>
+where
+    T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
+    W: tokio::io::AsyncWrite + Unpin + Send + 'static,
+{
+    use pxar::EntryKind;
+    Box::pin(async move {
+        let metadata = file.entry().metadata();
+        let path = file.entry().path().strip_prefix(&prefix)?.to_path_buf();
+
+        match file.kind() {
+            EntryKind::File { .. } => {
+                if verbose {
+                    eprintln!("adding '{}' to zip", path.display());
+                }
+                let entry = ZipEntry::new(
+                    path,
+                    metadata.stat.mtime.secs,
+                    metadata.stat.mode as u16,
+                    true,
+                );
+                zip.add_entry(entry, Some(file.contents().await?))
+                   .await
+                   .map_err(|err| format_err!("could not send file entry: {}", err))?;
+            }
+            EntryKind::Hardlink(_) => {
+                let realfile = decoder.follow_hardlink(&file).await?;
+                if verbose {
+                    eprintln!("adding '{}' to zip", path.display());
+                }
+                let entry = ZipEntry::new(
+                    path,
+                    metadata.stat.mtime.secs,
+                    metadata.stat.mode as u16,
+                    true,
+                );
+                zip.add_entry(entry, Some(realfile.contents().await?))
+                   .await
+                   .map_err(|err| format_err!("could not send file entry: {}", err))?;
+            }
+            EntryKind::Directory => {
+                let dir = file.enter_directory().await?;
+                let mut readdir = dir.read_dir();
+                if verbose {
+                    eprintln!("adding '{}' to zip", path.display());
+                }
+                let entry = ZipEntry::new(
+                    path,
+                    metadata.stat.mtime.secs,
+                    metadata.stat.mode as u16,
+                    false,
+                );
+                zip.add_entry::<FileContents<T>>(entry, None).await?;
+                while let Some(entry) = readdir.next().await {
+                    let entry = entry?.decode_entry().await?;
+                    recurse_files_zip(zip, decoder, prefix, entry, verbose).await?;
+                }
+            }
+            _ => {} // ignore all else
+        };
+
+        Ok(())
+    })
+}
+
diff --git a/src/pxar/mod.rs b/src/pxar/mod.rs
index 5d03591b..e2632653 100644
--- a/src/pxar/mod.rs
+++ b/src/pxar/mod.rs
@@ -59,7 +59,7 @@ mod flags;
 pub use flags::Flags;
 
 pub use create::{create_archive, PxarCreateOptions};
-pub use extract::{extract_archive, ErrorHandler, PxarExtractOptions};
+pub use extract::{create_zip, extract_archive, ErrorHandler, PxarExtractOptions};
 
 /// The format requires to build sorted directory lookup tables in
 /// memory, so we restrict the number of allowed entries to limit
-- 
2.20.1

^ permalink raw reply	[flat|nested] 50+ messages in thread
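
For illustration, a minimal sketch of how the relocated create_zip could be
driven outside the API handler above (assuming a tokio runtime; the generic
bounds are copied from the patch, and the output file plus the "some/sub/dir"
archive path are hypothetical stand-ins for the AsyncChannelWriter and the
client-supplied path):

    use proxmox_backup::pxar::create_zip;

    async fn zip_to_file<T>(
        accessor: pxar::accessor::aio::Accessor<T>,
    ) -> Result<(), anyhow::Error>
    where
        T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
    {
        // any tokio AsyncWrite works as output; the API handler streams
        // through an AsyncChannelWriter over HTTP instead
        let output = tokio::fs::File::create("/tmp/archive.zip").await?;
        create_zip(output, accessor, "some/sub/dir", true).await
    }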

* [pbs-devel] [PATCH proxmox-backup 07/22] pxar/extract: add extract_sub_dir
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (5 preceding siblings ...)
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 06/22] api2/admin/datastore: refactor create_zip into pxar/extract Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-17  7:51   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 08/22] pxar/extract: add sequential variants to create_zip, extract_sub_dir Stefan Reiter
                   ` (15 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

From: Dominik Csapak <d.csapak@proxmox.com>

to extract a subdirectory of a pxar archive into a given target
directory; this will be used in the client

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
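
A minimal usage sketch, assuming an aio Accessor `accessor` over some ReadAt
source (as in the previous patch) and hypothetical paths:

    use proxmox_backup::pxar::extract_sub_dir;

    // extract etc/network from the archive into /tmp/restore; the target
    // directory is created with mode 0700 if it does not exist yet
    extract_sub_dir("/tmp/restore", accessor, "etc/network", true).await?;
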
 src/pxar/extract.rs | 122 ++++++++++++++++++++++++++++++++++++++++++++
 src/pxar/mod.rs     |   2 +-
 2 files changed, 123 insertions(+), 1 deletion(-)

diff --git a/src/pxar/extract.rs b/src/pxar/extract.rs
index d246e7ec..b673b4b8 100644
--- a/src/pxar/extract.rs
+++ b/src/pxar/extract.rs
@@ -583,3 +583,125 @@ where
     })
 }
 
+
+pub async fn extract_sub_dir<T, DEST, PATH>(
+    destination: DEST,
+    mut decoder: Accessor<T>,
+    path: PATH,
+    verbose: bool,
+) -> Result<(), Error>
+where
+    T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
+    DEST: AsRef<Path>,
+    PATH: AsRef<Path>,
+{
+    let root = decoder.open_root().await?;
+
+    create_path(
+        &destination,
+        None,
+        Some(CreateOptions::new().perm(Mode::from_bits_truncate(0o700))),
+    )
+    .map_err(|err| format_err!("error creating directory {:?}: {}", destination.as_ref(), err))?;
+
+    let dir = Dir::open(
+        destination.as_ref(),
+        OFlag::O_DIRECTORY | OFlag::O_CLOEXEC,
+        Mode::empty(),
+    )
+    .map_err(|err| format_err!("unable to open target directory {:?}: {}", destination.as_ref(), err,))?;
+
+    let mut extractor =  Extractor::new(
+        dir,
+        root.lookup_self().await?.entry().metadata().clone(),
+        false,
+        Flags::DEFAULT,
+    );
+
+    let file = root
+        .lookup(&path).await?
+        .ok_or(format_err!("error opening '{:?}'", path.as_ref()))?;
+
+    recurse_files_extractor(&mut extractor, &mut decoder, file, verbose).await
+}
+
+fn recurse_files_extractor<'a, T>(
+    extractor: &'a mut Extractor,
+    decoder: &'a mut Accessor<T>,
+    file: FileEntry<T>,
+    verbose: bool,
+) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'a>>
+where
+    T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
+{
+    use pxar::EntryKind;
+    Box::pin(async move {
+        let metadata = file.entry().metadata();
+        let file_name_os = file.file_name();
+
+        // safety check: a file entry in an archive must never contain slashes:
+        if file_name_os.as_bytes().contains(&b'/') {
+            bail!("archive file entry contains slashes, which is invalid and a security concern");
+        }
+
+        let file_name = CString::new(file_name_os.as_bytes())
+            .map_err(|_| format_err!("encountered file name with null-bytes"))?;
+
+        if verbose {
+            eprintln!("extracting: {}", file.path().display());
+        }
+
+        match file.kind() {
+            EntryKind::Directory => {
+                extractor
+                    .enter_directory(file_name_os.to_owned(), metadata.clone(), true)
+                    .map_err(|err| format_err!("error at entry {:?}: {}", file_name_os, err))?;
+
+                let dir = file.enter_directory().await?;
+                let mut readdir = dir.read_dir();
+                while let Some(entry) = readdir.next().await {
+                    let entry = entry?.decode_entry().await?;
+                    let filename = entry.path().to_path_buf();
+
+                    // log errors and continue
+                    if let Err(err) = recurse_files_extractor(extractor, decoder, entry, verbose).await {
+                        eprintln!("error extracting {:?}: {}", filename.display(), err);
+                    }
+                }
+                extractor.leave_directory()?;
+            }
+            EntryKind::Symlink(link) => {
+                extractor.extract_symlink(&file_name, metadata, link.as_ref())?;
+            }
+            EntryKind::Hardlink(link) => {
+                extractor.extract_hardlink(&file_name, link.as_os_str())?;
+            }
+            EntryKind::Device(dev) => {
+                if extractor.contains_flags(Flags::WITH_DEVICE_NODES) {
+                    extractor.extract_device(&file_name, metadata, dev)?;
+                }
+            }
+            EntryKind::Fifo => {
+                if extractor.contains_flags(Flags::WITH_FIFOS) {
+                    extractor.extract_special(&file_name, metadata, 0)?;
+                }
+            }
+            EntryKind::Socket => {
+                if extractor.contains_flags(Flags::WITH_SOCKETS) {
+                    extractor.extract_special(&file_name, metadata, 0)?;
+                }
+            }
+            EntryKind::File { size, .. } => extractor.async_extract_file(
+                &file_name,
+                metadata,
+                *size,
+                &mut file.contents().await.map_err(|_| {
+                    format_err!("found regular file entry without contents in archive")
+                })?,
+            ).await?,
+            EntryKind::GoodbyeTable => {}, // ignore
+        }
+        Ok(())
+    })
+}
+
diff --git a/src/pxar/mod.rs b/src/pxar/mod.rs
index e2632653..d1302962 100644
--- a/src/pxar/mod.rs
+++ b/src/pxar/mod.rs
@@ -59,7 +59,7 @@ mod flags;
 pub use flags::Flags;
 
 pub use create::{create_archive, PxarCreateOptions};
-pub use extract::{create_zip, extract_archive, ErrorHandler, PxarExtractOptions};
+pub use extract::{create_zip, extract_archive, extract_sub_dir, ErrorHandler, PxarExtractOptions};
 
 /// The format requires to build sorted directory lookup tables in
 /// memory, so we restrict the number of allowed entries to limit
-- 
2.20.1

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 08/22] pxar/extract: add sequential variants to create_zip, extract_sub_dir
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (6 preceding siblings ...)
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 07/22] pxar/extract: add extract_sub_dir Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module Stefan Reiter
                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

For streaming pxar files directly from a restore source and extracting
them on the fly, we cannot create an Accessor, and instead have to live
with a sequential Decoder. This supports only the aio::Decoder variant,
since the functions are async anyway.

The original functionality remains in place; the new functions are
labelled with a _seq suffix. The recursive functions actually doing the
work are changed to take an EitherEntry enum that can contain either a
FileEntry together with an Accessor (recursive operation) or an Entry
together with a Decoder (linear operation).

If the _seq variants are given a decoder whose current position points
to a file, they will only extract/encode that file; if it points to a
directory, they will extract until they leave the directory they
started in.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
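
A sketch of the new sequential variant, assuming a tokio AsyncRead `input`
streaming raw pxar data and that pxar's tokio-io feature provides
Decoder::from_tokio (an assumption about the pxar crate, not part of this
patch):

    use proxmox_backup::pxar::create_zip_seq;
    use pxar::decoder::aio::Decoder;

    // no Accessor needed - the archive is consumed front to back
    let decoder = Decoder::from_tokio(input).await?;
    let output = tokio::fs::File::create("/tmp/archive.zip").await?;
    create_zip_seq(output, decoder, true).await?;
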
 src/pxar/extract.rs | 388 ++++++++++++++++++++++++++++++++------------
 src/pxar/mod.rs     |   5 +-
 2 files changed, 292 insertions(+), 101 deletions(-)

diff --git a/src/pxar/extract.rs b/src/pxar/extract.rs
index b673b4b8..66a5ed59 100644
--- a/src/pxar/extract.rs
+++ b/src/pxar/extract.rs
@@ -17,8 +17,9 @@ use nix::sys::stat::Mode;
 
 use pathpatterns::{MatchEntry, MatchList, MatchType};
 use pxar::format::Device;
-use pxar::Metadata;
+use pxar::{Entry, Metadata, EntryKind};
 use pxar::accessor::aio::{Accessor, FileContents, FileEntry};
+use pxar::decoder::aio::Decoder;
 
 use proxmox::c_result;
 use proxmox::tools::fs::{create_path, CreateOptions};
@@ -90,8 +91,6 @@ where
     let mut err_path_stack = vec![OsString::from("/")];
     let mut current_match = options.extract_match_default;
     while let Some(entry) = decoder.next() {
-        use pxar::EntryKind;
-
         let entry = entry.map_err(|err| format_err!("error reading pxar archive: {}", err))?;
 
         let file_name_os = entry.file_name();
@@ -471,9 +470,23 @@ impl Extractor {
     }
 }
 
+enum EitherEntry<
+    'a,
+    S: pxar::decoder::SeqRead + Unpin + Send + 'static,
+    T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
+> {
+    Entry(Entry, &'a mut Decoder<S>),
+    FileEntry(FileEntry<T>, &'a mut Accessor<T>),
+}
+
+// These types are never constructed, but we need some concrete type fulfilling S and T from
+// EitherEntry so rust is happy with its use in async fns
+type BogusSeqRead = pxar::decoder::sync::StandardReader<std::io::Empty>;
+type BogusReadAt = pxar::accessor::sync::FileRefReader<Arc<std::fs::File>>;
+
 pub async fn create_zip<T, W, P>(
     output: W,
-    decoder: Accessor<T>,
+    mut decoder: Accessor<T>,
     path: P,
     verbose: bool,
 ) -> Result<(), Error>
@@ -484,96 +497,174 @@ where
 {
     let root = decoder.open_root().await?;
     let file = root
-        .lookup(&path).await?
-        .ok_or(format_err!("error opening '{:?}'", path.as_ref()))?;
+        .lookup(&path)
+        .await?
+        .ok_or_else(|| format_err!("error opening '{:?}'", path.as_ref()))?;
 
     let mut prefix = PathBuf::new();
     let mut components = file.entry().path().components();
-    components.next_back(); // discar last
+    components.next_back(); // discard last
     for comp in components {
         prefix.push(comp);
     }
 
     let mut zipencoder = ZipEncoder::new(output);
-    let mut decoder = decoder;
-    recurse_files_zip(&mut zipencoder, &mut decoder, &prefix, file, verbose)
+    let entry: EitherEntry<BogusSeqRead, T> = EitherEntry::FileEntry(file, &mut decoder);
+    add_entry_to_zip(&mut zipencoder, entry, &prefix, verbose)
         .await
         .map_err(|err| {
             eprintln!("error during creating of zip: {}", err);
             err
         })?;
 
-    zipencoder
-        .finish()
-        .await
-        .map_err(|err| {
-            eprintln!("error during finishing of zip: {}", err);
-            err
-        })
+    zipencoder.finish().await.map_err(|err| {
+        eprintln!("error during finishing of zip: {}", err);
+        err
+    })
 }
 
-fn recurse_files_zip<'a, T, W>(
+pub async fn create_zip_seq<S, W>(
+    output: W,
+    mut decoder: Decoder<S>,
+    verbose: bool,
+) -> Result<(), Error>
+where
+    S: pxar::decoder::SeqRead + Unpin + Send + 'static,
+    W: tokio::io::AsyncWrite + Unpin + Send + 'static,
+{
+    decoder.enable_goodbye_entries(true);
+    let root = match decoder.peek().await {
+        Some(Ok(root)) => root,
+        Some(Err(err)) => bail!("error getting root entry from pxar: {}", err),
+        None => bail!("cannot extract empty archive"),
+    };
+
+    let mut prefix = PathBuf::new();
+    let mut components = root.path().components();
+    components.next_back(); // discard last
+    for comp in components {
+        prefix.push(comp);
+    }
+
+    let mut zipencoder = ZipEncoder::new(output);
+
+    let root_is_file = matches!(root.kind(), EntryKind::File { .. });
+    let mut dir_level = 0;
+
+    while let Some(file) = decoder.next().await {
+        match file {
+            Ok(file) => {
+                match file.kind() {
+                    EntryKind::Directory => dir_level += 1,
+                    EntryKind::GoodbyeTable => {
+                        dir_level -= 1;
+                        // only extract until we leave the directory we started in
+                        if dir_level == 0 {
+                            break;
+                        }
+                    }
+                    _ => {}
+                }
+
+                let entry: EitherEntry<S, BogusReadAt> = EitherEntry::Entry(file, &mut decoder);
+                add_entry_to_zip(&mut zipencoder, entry, &prefix, verbose)
+                    .await
+                    .map_err(|err| {
+                        eprintln!("error during creating of zip: {}", err);
+                        err
+                    })?;
+
+                if root_is_file {
+                    break;
+                }
+            }
+            Err(err) => bail!("error in decoder: {}", err),
+        }
+    }
+
+    zipencoder.finish().await.map_err(|err| {
+        eprintln!("error during finishing of zip: {}", err);
+        err
+    })
+}
+
+fn add_entry_to_zip<'a, S, T, W>(
     zip: &'a mut ZipEncoder<W>,
-    decoder: &'a mut Accessor<T>,
+    file: EitherEntry<'a, S, T>,
     prefix: &'a Path,
-    file: FileEntry<T>,
     verbose: bool,
 ) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'a>>
 where
+    S: pxar::decoder::SeqRead + Unpin + Send + 'static,
     T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
     W: tokio::io::AsyncWrite + Unpin + Send + 'static,
 {
-    use pxar::EntryKind;
     Box::pin(async move {
-        let metadata = file.entry().metadata();
-        let path = file.entry().path().strip_prefix(&prefix)?.to_path_buf();
+        let (metadata, path, kind) = match file {
+            EitherEntry::Entry(ref e, _) => (e.metadata(), e.path(), e.kind()),
+            EitherEntry::FileEntry(ref fe, _) => (fe.metadata(), fe.path(), fe.kind()),
+        };
 
-        match file.kind() {
+        if verbose && !matches!(kind, EntryKind::GoodbyeTable) {
+            eprintln!("adding '{}' to zip", path.display());
+        }
+
+        match kind {
             EntryKind::File { .. } => {
-                if verbose {
-                    eprintln!("adding '{}' to zip", path.display());
-                }
                 let entry = ZipEntry::new(
                     path,
                     metadata.stat.mtime.secs,
                     metadata.stat.mode as u16,
                     true,
                 );
-                zip.add_entry(entry, Some(file.contents().await?))
-                   .await
-                   .map_err(|err| format_err!("could not send file entry: {}", err))?;
+                let contents = match file {
+                    EitherEntry::Entry(_, dec) => Box::new(match dec.contents() {
+                        Some(con) => con,
+                        None => bail!("file without contents found"),
+                    })
+                        as Box<dyn tokio::io::AsyncRead + Unpin + Send>,
+                    EitherEntry::FileEntry(ref fe, _) => Box::new(
+                        fe.contents()
+                            .await
+                            .map_err(|err| format_err!("file with bad contents found: {}", err))?,
+                    )
+                        as Box<dyn tokio::io::AsyncRead + Unpin + Send>,
+                };
+                zip.add_entry(entry, Some(contents))
+                    .await
+                    .map_err(|err| format_err!("could not send file entry: {}", err))?;
             }
             EntryKind::Hardlink(_) => {
-                let realfile = decoder.follow_hardlink(&file).await?;
-                if verbose {
-                    eprintln!("adding '{}' to zip", path.display());
+                // we can't extract hardlinks in sequential extraction
+                if let EitherEntry::FileEntry(ref fe, ref accessor) = file {
+                    let realfile = accessor.follow_hardlink(&fe).await?;
+                    let entry = ZipEntry::new(
+                        path,
+                        metadata.stat.mtime.secs,
+                        metadata.stat.mode as u16,
+                        true,
+                    );
+                    zip.add_entry(entry, Some(realfile.contents().await?))
+                        .await
+                        .map_err(|err| format_err!("could not send file entry: {}", err))?;
                 }
-                let entry = ZipEntry::new(
-                    path,
-                    metadata.stat.mtime.secs,
-                    metadata.stat.mode as u16,
-                    true,
-                );
-                zip.add_entry(entry, Some(realfile.contents().await?))
-                   .await
-                   .map_err(|err| format_err!("could not send file entry: {}", err))?;
             }
             EntryKind::Directory => {
-                let dir = file.enter_directory().await?;
-                let mut readdir = dir.read_dir();
-                if verbose {
-                    eprintln!("adding '{}' to zip", path.display());
-                }
                 let entry = ZipEntry::new(
                     path,
                     metadata.stat.mtime.secs,
                     metadata.stat.mode as u16,
                     false,
                 );
-                zip.add_entry::<FileContents<T>>(entry, None).await?;
-                while let Some(entry) = readdir.next().await {
-                    let entry = entry?.decode_entry().await?;
-                    recurse_files_zip(zip, decoder, prefix, entry, verbose).await?;
+                if let EitherEntry::FileEntry(fe, a) = file {
+                    let dir = fe.enter_directory().await?;
+                    let mut readdir = dir.read_dir();
+                    zip.add_entry::<FileContents<T>>(entry, None).await?;
+                    while let Some(entry) = readdir.next().await {
+                        let entry = entry?.decode_entry().await?;
+                        let entry: EitherEntry<BogusSeqRead, T> = EitherEntry::FileEntry(entry, a);
+                        add_entry_to_zip(zip, entry, prefix, verbose).await?;
+                    }
                 }
             }
             _ => {} // ignore all else
@@ -583,6 +674,43 @@ where
     })
 }
 
+fn get_extractor<DEST>(destination: DEST, metadata: Metadata) -> Result<Extractor, Error>
+where
+    DEST: AsRef<Path>
+{
+    create_path(
+        &destination,
+        None,
+        Some(CreateOptions::new().perm(Mode::from_bits_truncate(0o700))),
+    )
+    .map_err(|err| {
+        format_err!(
+            "error creating directory {:?}: {}",
+            destination.as_ref(),
+            err
+        )
+    })?;
+
+    let dir = Dir::open(
+        destination.as_ref(),
+        OFlag::O_DIRECTORY | OFlag::O_CLOEXEC,
+        Mode::empty(),
+    )
+    .map_err(|err| {
+        format_err!(
+            "unable to open target directory {:?}: {}",
+            destination.as_ref(),
+            err,
+        )
+    })?;
+
+    Ok(Extractor::new(
+        dir,
+        metadata,
+        false,
+        Flags::DEFAULT,
+    ))
+}
 
 pub async fn extract_sub_dir<T, DEST, PATH>(
     destination: DEST,
@@ -597,47 +725,86 @@ where
 {
     let root = decoder.open_root().await?;
 
-    create_path(
-        &destination,
-        None,
-        Some(CreateOptions::new().perm(Mode::from_bits_truncate(0o700))),
-    )
-    .map_err(|err| format_err!("error creating directory {:?}: {}", destination.as_ref(), err))?;
-
-    let dir = Dir::open(
-        destination.as_ref(),
-        OFlag::O_DIRECTORY | OFlag::O_CLOEXEC,
-        Mode::empty(),
-    )
-    .map_err(|err| format_err!("unable to open target directory {:?}: {}", destination.as_ref(), err,))?;
-
-    let mut extractor =  Extractor::new(
-        dir,
+    let mut extractor = get_extractor(
+        destination,
         root.lookup_self().await?.entry().metadata().clone(),
-        false,
-        Flags::DEFAULT,
-    );
+    )?;
 
     let file = root
-        .lookup(&path).await?
-        .ok_or(format_err!("error opening '{:?}'", path.as_ref()))?;
+        .lookup(&path)
+        .await?
+        .ok_or_else(|| format_err!("error opening '{:?}'", path.as_ref()))?;
 
-    recurse_files_extractor(&mut extractor, &mut decoder, file, verbose).await
+    let entry: EitherEntry<BogusSeqRead, T> = EitherEntry::FileEntry(file, &mut decoder);
+    do_extract_sub_dir(&mut extractor, entry, verbose).await
 }
 
-fn recurse_files_extractor<'a, T>(
+pub async fn extract_sub_dir_seq<S, DEST>(
+    destination: DEST,
+    mut decoder: Decoder<S>,
+    verbose: bool,
+) -> Result<(), Error>
+where
+    S: pxar::decoder::SeqRead + Unpin + Send + 'static,
+    DEST: AsRef<Path>,
+{
+    decoder.enable_goodbye_entries(true);
+    let root = match decoder.peek().await {
+        Some(Ok(root)) => root,
+        Some(Err(err)) => bail!("error getting root entry from pxar: {}", err),
+        None => bail!("cannot extract empty archive"),
+    };
+
+    let mut extractor = get_extractor(destination, root.metadata().clone())?;
+    let root_is_file = matches!(root.kind(), EntryKind::File { .. });
+    let mut dir_level = 0;
+
+    while let Some(file) = decoder.next().await {
+        match file {
+            Ok(file) => {
+                match file.kind() {
+                    EntryKind::Directory => dir_level += 1,
+                    EntryKind::GoodbyeTable => {
+                        dir_level -= 1;
+                        // only extract until we leave the directory we started in
+                        if dir_level == 0 {
+                            break;
+                        }
+                    },
+                    _ => {}
+                }
+
+                let path = file.path().to_owned();
+                let entry: EitherEntry<S, BogusReadAt> = EitherEntry::Entry(file, &mut decoder);
+                if let Err(err) = do_extract_sub_dir(&mut extractor, entry, verbose).await {
+                    eprintln!("error extracting {}: {}", path.display(), err);
+                }
+
+                if root_is_file {
+                    break;
+                }
+            }
+            Err(err) => bail!("error in decoder: {}", err),
+        }
+    }
+
+    Ok(())
+}
+
+fn do_extract_sub_dir<'a, S, T>(
     extractor: &'a mut Extractor,
-    decoder: &'a mut Accessor<T>,
-    file: FileEntry<T>,
+    file: EitherEntry<'a, S, T>,
     verbose: bool,
 ) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'a>>
 where
+    S: pxar::decoder::SeqRead + Unpin + Send,
     T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
 {
-    use pxar::EntryKind;
     Box::pin(async move {
-        let metadata = file.entry().metadata();
-        let file_name_os = file.file_name();
+        let (metadata, file_name_os, path, kind) = match file {
+            EitherEntry::Entry(ref e, _) => (e.metadata(), e.file_name(), e.path(), e.kind()),
+            EitherEntry::FileEntry(ref fe, _) => (fe.metadata(), fe.file_name(), fe.path(), fe.kind()),
+        };
 
         // safety check: a file entry in an archive must never contain slashes:
         if file_name_os.as_bytes().contains(&b'/') {
@@ -647,28 +814,32 @@ where
         let file_name = CString::new(file_name_os.as_bytes())
             .map_err(|_| format_err!("encountered file name with null-bytes"))?;
 
-        if verbose {
-            eprintln!("extracting: {}", file.path().display());
+        if verbose && !matches!(kind, EntryKind::GoodbyeTable) {
+            eprintln!("extracting: {}", path.display());
         }
 
-        match file.kind() {
+        match kind {
             EntryKind::Directory => {
                 extractor
                     .enter_directory(file_name_os.to_owned(), metadata.clone(), true)
                     .map_err(|err| format_err!("error at entry {:?}: {}", file_name_os, err))?;
 
-                let dir = file.enter_directory().await?;
-                let mut readdir = dir.read_dir();
-                while let Some(entry) = readdir.next().await {
-                    let entry = entry?.decode_entry().await?;
-                    let filename = entry.path().to_path_buf();
+                // for EitherEntry::Entry we detect directory end with GoodbyeTable
+                if let EitherEntry::FileEntry(file, a) = file {
+                    let dir = file.enter_directory().await?;
+                    let mut readdir = dir.read_dir();
+                    while let Some(entry) = readdir.next().await {
+                        let entry = entry?.decode_entry().await?;
+                        let filename = entry.path().to_path_buf();
 
-                    // log errors and continue
-                    if let Err(err) = recurse_files_extractor(extractor, decoder, entry, verbose).await {
-                        eprintln!("error extracting {:?}: {}", filename.display(), err);
+                        // log errors and continue
+                        let entry: EitherEntry<BogusSeqRead, T> = EitherEntry::FileEntry(entry, a);
+                        if let Err(err) = do_extract_sub_dir(extractor, entry, verbose).await {
+                            eprintln!("error extracting {}: {}", filename.display(), err);
+                        }
                     }
+                    extractor.leave_directory()?;
                 }
-                extractor.leave_directory()?;
             }
             EntryKind::Symlink(link) => {
                 extractor.extract_symlink(&file_name, metadata, link.as_ref())?;
@@ -691,17 +862,34 @@ where
                     extractor.extract_special(&file_name, metadata, 0)?;
                 }
             }
-            EntryKind::File { size, .. } => extractor.async_extract_file(
-                &file_name,
-                metadata,
-                *size,
-                &mut file.contents().await.map_err(|_| {
-                    format_err!("found regular file entry without contents in archive")
-                })?,
-            ).await?,
-            EntryKind::GoodbyeTable => {}, // ignore
+            EntryKind::File { size, .. } => {
+                extractor
+                    .async_extract_file(
+                        &file_name,
+                        metadata,
+                        *size,
+                        &mut match file {
+                            EitherEntry::Entry(_, dec) => Box::new(match dec.contents() {
+                                Some(con) => con,
+                                None => bail!("file without contents found"),
+                            })
+                                as Box<dyn tokio::io::AsyncRead + Unpin + Send>,
+                            EitherEntry::FileEntry(ref fe, _) => {
+                                Box::new(fe.contents().await.map_err(|err| {
+                                    format_err!("file with bad contents found: {}", err)
+                                })?)
+                                    as Box<dyn tokio::io::AsyncRead + Unpin + Send>
+                            }
+                        },
+                    )
+                    .await?
+            }
+            EntryKind::GoodbyeTable => {
+                if let EitherEntry::Entry(_, _) = file {
+                    extractor.leave_directory()?;
+                }
+            }
         }
         Ok(())
     })
 }
-
diff --git a/src/pxar/mod.rs b/src/pxar/mod.rs
index d1302962..d5c42942 100644
--- a/src/pxar/mod.rs
+++ b/src/pxar/mod.rs
@@ -59,7 +59,10 @@ mod flags;
 pub use flags::Flags;
 
 pub use create::{create_archive, PxarCreateOptions};
-pub use extract::{create_zip, extract_archive, extract_sub_dir, ErrorHandler, PxarExtractOptions};
+pub use extract::{
+    create_zip, create_zip_seq, extract_archive, extract_sub_dir, extract_sub_dir_seq,
+    ErrorHandler, PxarExtractOptions,
+};
 
 /// The format requires to build sorted directory lookup tables in
 /// memory, so we restrict the number of allowed entries to limit
-- 
2.20.1

^ permalink raw reply	[flat|nested] 50+ messages in thread
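
The extraction counterpart follows the same pattern; a sketch under the same
assumptions as for create_zip_seq above (a tokio AsyncRead `input` with raw
pxar data, hypothetical target path):

    use proxmox_backup::pxar::extract_sub_dir_seq;
    use pxar::decoder::aio::Decoder;

    // starts at the decoder's current position and stops once the
    // GoodbyeTable of the starting directory is reached
    let decoder = Decoder::from_tokio(input).await?;
    extract_sub_dir_seq("/tmp/restore", decoder, true).await?;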

* [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (7 preceding siblings ...)
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 08/22] pxar/extract: add sequential variants to create_zip, extract_sub_dir Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-17  6:49   ` Dietmar Maurer
  2021-02-17  9:13   ` [pbs-devel] applied: " Dietmar Maurer
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 10/22] proxmox_client_tools: extract 'key' from client module Stefan Reiter
                   ` (13 subsequent siblings)
  22 siblings, 2 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

...including common schemata, connect(), extract_*() and completion
functions.

For later use with the proxmox-file-restore binary.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
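
A sketch of how a second binary is meant to consume the relocated helpers
(the repository string is a hypothetical example; PBS_PASSWORD and
PBS_FINGERPRINT are picked up from the environment as before):

    mod proxmox_client_tools;
    use proxmox_client_tools::connect;

    // parse a repository spec and build an authenticated HttpClient
    let repo: proxmox_backup::client::BackupRepository =
        "backup@pbs@localhost:store1".parse()?;
    let client = connect(&repo)?;
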
 src/bin/proxmox-backup-client.rs    | 361 +--------------------------
 src/bin/proxmox_client_tools/mod.rs | 366 ++++++++++++++++++++++++++++
 2 files changed, 369 insertions(+), 358 deletions(-)
 create mode 100644 src/bin/proxmox_client_tools/mod.rs

diff --git a/src/bin/proxmox-backup-client.rs b/src/bin/proxmox-backup-client.rs
index ebcbc983..1b8b5bec 100644
--- a/src/bin/proxmox-backup-client.rs
+++ b/src/bin/proxmox-backup-client.rs
@@ -1,4 +1,4 @@
-use std::collections::{HashSet, HashMap};
+use std::collections::HashSet;
 use std::convert::TryFrom;
 use std::io::{self, Read, Write, Seek, SeekFrom};
 use std::os::unix::io::{FromRawFd, RawFd};
@@ -33,7 +33,6 @@ use proxmox::{
 use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
 
 use proxmox_backup::tools;
-use proxmox_backup::api2::access::user::UserWithTokens;
 use proxmox_backup::api2::types::*;
 use proxmox_backup::api2::version;
 use proxmox_backup::client::*;
@@ -68,68 +67,8 @@ use proxmox_backup::backup::{
 mod proxmox_backup_client;
 use proxmox_backup_client::*;
 
-const ENV_VAR_PBS_FINGERPRINT: &str = "PBS_FINGERPRINT";
-const ENV_VAR_PBS_PASSWORD: &str = "PBS_PASSWORD";
-
-
-pub const REPO_URL_SCHEMA: Schema = StringSchema::new("Repository URL.")
-    .format(&BACKUP_REPO_URL)
-    .max_length(256)
-    .schema();
-
-pub const KEYFILE_SCHEMA: Schema =
-    StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
-        .schema();
-
-pub const KEYFD_SCHEMA: Schema =
-    IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
-        .minimum(0)
-        .schema();
-
-pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
-    "Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
-    .schema();
-
-pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
-    IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
-        .minimum(0)
-        .schema();
-
-const CHUNK_SIZE_SCHEMA: Schema = IntegerSchema::new(
-    "Chunk size in KB. Must be a power of 2.")
-    .minimum(64)
-    .maximum(4096)
-    .default(4096)
-    .schema();
-
-fn get_default_repository() -> Option<String> {
-    std::env::var("PBS_REPOSITORY").ok()
-}
-
-pub fn extract_repository_from_value(
-    param: &Value,
-) -> Result<BackupRepository, Error> {
-
-    let repo_url = param["repository"]
-        .as_str()
-        .map(String::from)
-        .or_else(get_default_repository)
-        .ok_or_else(|| format_err!("unable to get (default) repository"))?;
-
-    let repo: BackupRepository = repo_url.parse()?;
-
-    Ok(repo)
-}
-
-fn extract_repository_from_map(
-    param: &HashMap<String, String>,
-) -> Option<BackupRepository> {
-
-    param.get("repository")
-        .map(String::from)
-        .or_else(get_default_repository)
-        .and_then(|repo_url| repo_url.parse::<BackupRepository>().ok())
-}
+mod proxmox_client_tools;
+use proxmox_client_tools::*;
 
 fn record_repository(repo: &BackupRepository) {
 
@@ -179,52 +118,6 @@ fn record_repository(repo: &BackupRepository) {
     let _ = replace_file(path, new_data.to_string().as_bytes(), CreateOptions::new());
 }
 
-pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
-
-    let mut result = vec![];
-
-    let base = match BaseDirectories::with_prefix("proxmox-backup") {
-        Ok(v) => v,
-        _ => return result,
-    };
-
-    // usually $HOME/.cache/proxmox-backup/repo-list
-    let path = match base.place_cache_file("repo-list") {
-        Ok(v) => v,
-        _ => return result,
-    };
-
-    let data = file_get_json(&path, None).unwrap_or_else(|_| json!({}));
-
-    if let Some(map) = data.as_object() {
-        for (repo, _count) in map {
-            result.push(repo.to_owned());
-        }
-    }
-
-    result
-}
-
-fn connect(repo: &BackupRepository) -> Result<HttpClient, Error> {
-    connect_do(repo.host(), repo.port(), repo.auth_id())
-        .map_err(|err| format_err!("error building client for repository {} - {}", repo, err))
-}
-
-fn connect_do(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
-    let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
-
-    use std::env::VarError::*;
-    let password = match std::env::var(ENV_VAR_PBS_PASSWORD) {
-        Ok(p) => Some(p),
-        Err(NotUnicode(_)) => bail!(format!("{} contains bad characters", ENV_VAR_PBS_PASSWORD)),
-        Err(NotPresent) => None,
-    };
-
-    let options = HttpClientOptions::new_interactive(password, fingerprint);
-
-    HttpClient::new(server, port, auth_id, options)
-}
-
 async fn api_datastore_list_snapshots(
     client: &HttpClient,
     store: &str,
@@ -1483,27 +1376,6 @@ async fn create_backup(
     Ok(Value::Null)
 }
 
-fn complete_backup_source(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-
-    let mut result = vec![];
-
-    let data: Vec<&str> = arg.splitn(2, ':').collect();
-
-    if data.len() != 2 {
-        result.push(String::from("root.pxar:/"));
-        result.push(String::from("etc.pxar:/etc"));
-        return result;
-    }
-
-    let files = tools::complete_file_name(data[1], param);
-
-    for file in files {
-        result.push(format!("{}:{}", data[0], file));
-    }
-
-    result
-}
-
 async fn dump_image<W: Write>(
     client: Arc<BackupReader>,
     crypt_config: Option<Arc<CryptConfig>>,
@@ -1923,233 +1795,6 @@ async fn status(param: Value) -> Result<Value, Error> {
     Ok(Value::Null)
 }
 
-// like get, but simply ignore errors and return Null instead
-async fn try_get(repo: &BackupRepository, url: &str) -> Value {
-
-    let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
-    let password = std::env::var(ENV_VAR_PBS_PASSWORD).ok();
-
-    // ticket cache, but no questions asked
-    let options = HttpClientOptions::new_interactive(password, fingerprint)
-        .interactive(false);
-
-    let client = match HttpClient::new(repo.host(), repo.port(), repo.auth_id(), options) {
-        Ok(v) => v,
-        _ => return Value::Null,
-    };
-
-    let mut resp = match client.get(url, None).await {
-        Ok(v) => v,
-        _ => return Value::Null,
-    };
-
-    if let Some(map) = resp.as_object_mut() {
-        if let Some(data) = map.remove("data") {
-            return data;
-        }
-    }
-    Value::Null
-}
-
-fn complete_backup_group(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-    proxmox_backup::tools::runtime::main(async { complete_backup_group_do(param).await })
-}
-
-async fn complete_backup_group_do(param: &HashMap<String, String>) -> Vec<String> {
-
-    let mut result = vec![];
-
-    let repo = match extract_repository_from_map(param) {
-        Some(v) => v,
-        _ => return result,
-    };
-
-    let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
-
-    let data = try_get(&repo, &path).await;
-
-    if let Some(list) = data.as_array() {
-        for item in list {
-            if let (Some(backup_id), Some(backup_type)) =
-                (item["backup-id"].as_str(), item["backup-type"].as_str())
-            {
-                result.push(format!("{}/{}", backup_type, backup_id));
-            }
-        }
-    }
-
-    result
-}
-
-pub fn complete_group_or_snapshot(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-    proxmox_backup::tools::runtime::main(async { complete_group_or_snapshot_do(arg, param).await })
-}
-
-async fn complete_group_or_snapshot_do(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-
-    if arg.matches('/').count() < 2 {
-        let groups = complete_backup_group_do(param).await;
-        let mut result = vec![];
-        for group in groups {
-            result.push(group.to_string());
-            result.push(format!("{}/", group));
-        }
-        return result;
-    }
-
-    complete_backup_snapshot_do(param).await
-}
-
-fn complete_backup_snapshot(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-    proxmox_backup::tools::runtime::main(async { complete_backup_snapshot_do(param).await })
-}
-
-async fn complete_backup_snapshot_do(param: &HashMap<String, String>) -> Vec<String> {
-
-    let mut result = vec![];
-
-    let repo = match extract_repository_from_map(param) {
-        Some(v) => v,
-        _ => return result,
-    };
-
-    let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
-
-    let data = try_get(&repo, &path).await;
-
-    if let Some(list) = data.as_array() {
-        for item in list {
-            if let (Some(backup_id), Some(backup_type), Some(backup_time)) =
-                (item["backup-id"].as_str(), item["backup-type"].as_str(), item["backup-time"].as_i64())
-            {
-                if let Ok(snapshot) = BackupDir::new(backup_type, backup_id, backup_time) {
-                    result.push(snapshot.relative_path().to_str().unwrap().to_owned());
-                }
-            }
-        }
-    }
-
-    result
-}
-
-fn complete_server_file_name(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-    proxmox_backup::tools::runtime::main(async { complete_server_file_name_do(param).await })
-}
-
-async fn complete_server_file_name_do(param: &HashMap<String, String>) -> Vec<String> {
-
-    let mut result = vec![];
-
-    let repo = match extract_repository_from_map(param) {
-        Some(v) => v,
-        _ => return result,
-    };
-
-    let snapshot: BackupDir = match param.get("snapshot") {
-        Some(path) => {
-            match path.parse() {
-                Ok(v) => v,
-                _ => return result,
-            }
-        }
-        _ => return result,
-    };
-
-    let query = tools::json_object_to_query(json!({
-        "backup-type": snapshot.group().backup_type(),
-        "backup-id": snapshot.group().backup_id(),
-        "backup-time": snapshot.backup_time(),
-    })).unwrap();
-
-    let path = format!("api2/json/admin/datastore/{}/files?{}", repo.store(), query);
-
-    let data = try_get(&repo, &path).await;
-
-    if let Some(list) = data.as_array() {
-        for item in list {
-            if let Some(filename) = item["filename"].as_str() {
-                result.push(filename.to_owned());
-            }
-        }
-    }
-
-    result
-}
-
-fn complete_archive_name(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-    complete_server_file_name(arg, param)
-        .iter()
-        .map(|v| tools::format::strip_server_file_extension(&v))
-        .collect()
-}
-
-pub fn complete_pxar_archive_name(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-    complete_server_file_name(arg, param)
-        .iter()
-        .filter_map(|name| {
-            if name.ends_with(".pxar.didx") {
-                Some(tools::format::strip_server_file_extension(name))
-            } else {
-                None
-            }
-        })
-        .collect()
-}
-
-pub fn complete_img_archive_name(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-    complete_server_file_name(arg, param)
-        .iter()
-        .filter_map(|name| {
-            if name.ends_with(".img.fidx") {
-                Some(tools::format::strip_server_file_extension(name))
-            } else {
-                None
-            }
-        })
-        .collect()
-}
-
-fn complete_chunk_size(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
-
-    let mut result = vec![];
-
-    let mut size = 64;
-    loop {
-        result.push(size.to_string());
-        size *= 2;
-        if size > 4096 { break; }
-    }
-
-    result
-}
-
-fn complete_auth_id(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
-    proxmox_backup::tools::runtime::main(async { complete_auth_id_do(param).await })
-}
-
-async fn complete_auth_id_do(param: &HashMap<String, String>) -> Vec<String> {
-
-    let mut result = vec![];
-
-    let repo = match extract_repository_from_map(param) {
-        Some(v) => v,
-        _ => return result,
-    };
-
-    let data = try_get(&repo, "api2/json/access/users?include_tokens=true").await;
-
-    if let Ok(parsed) = serde_json::from_value::<Vec<UserWithTokens>>(data) {
-        for user in parsed {
-            result.push(user.userid.to_string());
-            for token in user.tokens {
-                result.push(token.tokenid.to_string());
-            }
-        }
-    };
-
-    result
-}
-
 use proxmox_backup::client::RemoteChunkReader;
 /// This is a workaround until we have cleaned up the chunk/reader/... infrastructure for better
 /// async use!
diff --git a/src/bin/proxmox_client_tools/mod.rs b/src/bin/proxmox_client_tools/mod.rs
new file mode 100644
index 00000000..7b69e8cb
--- /dev/null
+++ b/src/bin/proxmox_client_tools/mod.rs
@@ -0,0 +1,366 @@
+//! Shared tools useful for common CLI clients.
+
+use std::collections::HashMap;
+
+use anyhow::{bail, format_err, Error};
+use serde_json::{json, Value};
+use xdg::BaseDirectories;
+
+use proxmox::{
+    api::schema::*,
+    tools::fs::file_get_json,
+};
+
+use proxmox_backup::api2::access::user::UserWithTokens;
+use proxmox_backup::api2::types::*;
+use proxmox_backup::backup::BackupDir;
+use proxmox_backup::client::*;
+use proxmox_backup::tools;
+
+const ENV_VAR_PBS_FINGERPRINT: &str = "PBS_FINGERPRINT";
+const ENV_VAR_PBS_PASSWORD: &str = "PBS_PASSWORD";
+
+pub const REPO_URL_SCHEMA: Schema = StringSchema::new("Repository URL.")
+    .format(&BACKUP_REPO_URL)
+    .max_length(256)
+    .schema();
+
+pub const KEYFILE_SCHEMA: Schema =
+    StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
+        .schema();
+
+pub const KEYFD_SCHEMA: Schema =
+    IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
+        .minimum(0)
+        .schema();
+
+pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
+    "Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
+    .schema();
+
+pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
+    IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
+        .minimum(0)
+        .schema();
+
+pub const CHUNK_SIZE_SCHEMA: Schema = IntegerSchema::new("Chunk size in KB. Must be a power of 2.")
+    .minimum(64)
+    .maximum(4096)
+    .default(4096)
+    .schema();
+
+pub fn get_default_repository() -> Option<String> {
+    std::env::var("PBS_REPOSITORY").ok()
+}
+
+pub fn extract_repository_from_value(param: &Value) -> Result<BackupRepository, Error> {
+    let repo_url = param["repository"]
+        .as_str()
+        .map(String::from)
+        .or_else(get_default_repository)
+        .ok_or_else(|| format_err!("unable to get (default) repository"))?;
+
+    let repo: BackupRepository = repo_url.parse()?;
+
+    Ok(repo)
+}
+
+pub fn extract_repository_from_map(param: &HashMap<String, String>) -> Option<BackupRepository> {
+    param
+        .get("repository")
+        .map(String::from)
+        .or_else(get_default_repository)
+        .and_then(|repo_url| repo_url.parse::<BackupRepository>().ok())
+}
+
+pub fn connect(repo: &BackupRepository) -> Result<HttpClient, Error> {
+    connect_do(repo.host(), repo.port(), repo.auth_id())
+        .map_err(|err| format_err!("error building client for repository {} - {}", repo, err))
+}
+
+fn connect_do(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
+    let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
+
+    use std::env::VarError::*;
+    let password = match std::env::var(ENV_VAR_PBS_PASSWORD) {
+        Ok(p) => Some(p),
+        Err(NotUnicode(_)) => bail!(format!("{} contains bad characters", ENV_VAR_PBS_PASSWORD)),
+        Err(NotPresent) => None,
+    };
+
+    let options = HttpClientOptions::new_interactive(password, fingerprint);
+
+    HttpClient::new(server, port, auth_id, options)
+}
+
+/// like get, but simply ignore errors and return Null instead
+pub async fn try_get(repo: &BackupRepository, url: &str) -> Value {
+
+    let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
+    let password = std::env::var(ENV_VAR_PBS_PASSWORD).ok();
+
+    // ticket cache, but no questions asked
+    let options = HttpClientOptions::new_interactive(password, fingerprint)
+        .interactive(false);
+
+    let client = match HttpClient::new(repo.host(), repo.port(), repo.auth_id(), options) {
+        Ok(v) => v,
+        _ => return Value::Null,
+    };
+
+    let mut resp = match client.get(url, None).await {
+        Ok(v) => v,
+        _ => return Value::Null,
+    };
+
+    if let Some(map) = resp.as_object_mut() {
+        if let Some(data) = map.remove("data") {
+            return data;
+        }
+    }
+    Value::Null
+}
+
+pub fn complete_backup_group(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+    proxmox_backup::tools::runtime::main(async { complete_backup_group_do(param).await })
+}
+
+pub async fn complete_backup_group_do(param: &HashMap<String, String>) -> Vec<String> {
+
+    let mut result = vec![];
+
+    let repo = match extract_repository_from_map(param) {
+        Some(v) => v,
+        _ => return result,
+    };
+
+    let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
+
+    let data = try_get(&repo, &path).await;
+
+    if let Some(list) = data.as_array() {
+        for item in list {
+            if let (Some(backup_id), Some(backup_type)) =
+                (item["backup-id"].as_str(), item["backup-type"].as_str())
+            {
+                result.push(format!("{}/{}", backup_type, backup_id));
+            }
+        }
+    }
+
+    result
+}
+
+pub fn complete_group_or_snapshot(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+    proxmox_backup::tools::runtime::main(async { complete_group_or_snapshot_do(arg, param).await })
+}
+
+pub async fn complete_group_or_snapshot_do(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+
+    if arg.matches('/').count() < 2 {
+        let groups = complete_backup_group_do(param).await;
+        let mut result = vec![];
+        for group in groups {
+            result.push(group.to_string());
+            result.push(format!("{}/", group));
+        }
+        return result;
+    }
+
+    complete_backup_snapshot_do(param).await
+}
+
+pub fn complete_backup_snapshot(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+    proxmox_backup::tools::runtime::main(async { complete_backup_snapshot_do(param).await })
+}
+
+pub async fn complete_backup_snapshot_do(param: &HashMap<String, String>) -> Vec<String> {
+
+    let mut result = vec![];
+
+    let repo = match extract_repository_from_map(param) {
+        Some(v) => v,
+        _ => return result,
+    };
+
+    let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
+
+    let data = try_get(&repo, &path).await;
+
+    if let Some(list) = data.as_array() {
+        for item in list {
+            if let (Some(backup_id), Some(backup_type), Some(backup_time)) =
+                (item["backup-id"].as_str(), item["backup-type"].as_str(), item["backup-time"].as_i64())
+            {
+                if let Ok(snapshot) = BackupDir::new(backup_type, backup_id, backup_time) {
+                    result.push(snapshot.relative_path().to_str().unwrap().to_owned());
+                }
+            }
+        }
+    }
+
+    result
+}
+
+pub fn complete_server_file_name(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+    proxmox_backup::tools::runtime::main(async { complete_server_file_name_do(param).await })
+}
+
+pub async fn complete_server_file_name_do(param: &HashMap<String, String>) -> Vec<String> {
+
+    let mut result = vec![];
+
+    let repo = match extract_repository_from_map(param) {
+        Some(v) => v,
+        _ => return result,
+    };
+
+    let snapshot: BackupDir = match param.get("snapshot") {
+        Some(path) => {
+            match path.parse() {
+                Ok(v) => v,
+                _ => return result,
+            }
+        }
+        _ => return result,
+    };
+
+    let query = tools::json_object_to_query(json!({
+        "backup-type": snapshot.group().backup_type(),
+        "backup-id": snapshot.group().backup_id(),
+        "backup-time": snapshot.backup_time(),
+    })).unwrap();
+
+    let path = format!("api2/json/admin/datastore/{}/files?{}", repo.store(), query);
+
+    let data = try_get(&repo, &path).await;
+
+    if let Some(list) = data.as_array() {
+        for item in list {
+            if let Some(filename) = item["filename"].as_str() {
+                result.push(filename.to_owned());
+            }
+        }
+    }
+
+    result
+}
+
+pub fn complete_archive_name(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+    complete_server_file_name(arg, param)
+        .iter()
+        .map(|v| tools::format::strip_server_file_extension(&v))
+        .collect()
+}
+
+pub fn complete_pxar_archive_name(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+    complete_server_file_name(arg, param)
+        .iter()
+        .filter_map(|name| {
+            if name.ends_with(".pxar.didx") {
+                Some(tools::format::strip_server_file_extension(name))
+            } else {
+                None
+            }
+        })
+        .collect()
+}
+
+pub fn complete_img_archive_name(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+    complete_server_file_name(arg, param)
+        .iter()
+        .filter_map(|name| {
+            if name.ends_with(".img.fidx") {
+                Some(tools::format::strip_server_file_extension(name))
+            } else {
+                None
+            }
+        })
+        .collect()
+}
+
+pub fn complete_chunk_size(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+
+    let mut result = vec![];
+
+    let mut size = 64;
+    loop {
+        result.push(size.to_string());
+        size *= 2;
+        if size > 4096 { break; }
+    }
+
+    result
+}
+
+pub fn complete_auth_id(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+    proxmox_backup::tools::runtime::main(async { complete_auth_id_do(param).await })
+}
+
+pub async fn complete_auth_id_do(param: &HashMap<String, String>) -> Vec<String> {
+
+    let mut result = vec![];
+
+    let repo = match extract_repository_from_map(param) {
+        Some(v) => v,
+        _ => return result,
+    };
+
+    let data = try_get(&repo, "api2/json/access/users?include_tokens=true").await;
+
+    if let Ok(parsed) = serde_json::from_value::<Vec<UserWithTokens>>(data) {
+        for user in parsed {
+            result.push(user.userid.to_string());
+            for token in user.tokens {
+                result.push(token.tokenid.to_string());
+            }
+        }
+    };
+
+    result
+}
+
+pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+    let mut result = vec![];
+
+    let base = match BaseDirectories::with_prefix("proxmox-backup") {
+        Ok(v) => v,
+        _ => return result,
+    };
+
+    // usually $HOME/.cache/proxmox-backup/repo-list
+    let path = match base.place_cache_file("repo-list") {
+        Ok(v) => v,
+        _ => return result,
+    };
+
+    let data = file_get_json(&path, None).unwrap_or_else(|_| json!({}));
+
+    if let Some(map) = data.as_object() {
+        for (repo, _count) in map {
+            result.push(repo.to_owned());
+        }
+    }
+
+    result
+}
+
+pub fn complete_backup_source(arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+    let mut result = vec![];
+
+    let data: Vec<&str> = arg.splitn(2, ':').collect();
+
+    if data.len() != 2 {
+        result.push(String::from("root.pxar:/"));
+        result.push(String::from("etc.pxar:/etc"));
+        return result;
+    }
+
+    let files = tools::complete_file_name(data[1], param);
+
+    for file in files {
+        result.push(format!("{}:{}", data[0], file));
+    }
+
+    result
+}
-- 
2.20.1
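
All of the completion helpers above follow the same pattern: a synchronous
entry point that drives an async worker on the runtime, with try_get()
swallowing every error into Value::Null so that shell completion never
prints diagnostics. As a rough sketch of how such helpers get wired up
(API_METHOD_LIST_SNAPSHOTS and the parameter names are placeholders;
completion_cb() is the mechanism the client binaries use):

    use proxmox::api::cli::CliCommand;

    // Hypothetical command definition: each completion_cb() maps a CLI
    // parameter name to one of the
    // fn(&str, &HashMap<String, String>) -> Vec<String> helpers above.
    fn snapshots_cmd_def() -> CliCommand {
        CliCommand::new(&API_METHOD_LIST_SNAPSHOTS) // placeholder #[api] method
            .arg_param(&["group"])
            .completion_cb("repository", complete_repository)
            .completion_cb("group", complete_backup_group)
    }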
^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 10/22] proxmox_client_tools: extract 'key' from client module
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (8 preceding siblings ...)
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-17  9:11   ` Dietmar Maurer
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 11/22] file-restore: add binary and basic commands Stefan Reiter
                   ` (12 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

To be used by other command line tools. This requires moving the XDG
helpers as well, which find their place in the tools module quite cozily
IMHO.
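
As a rough illustration (not part of the patch), another binary could
then consume the shared key handling like this; announce_key is a made-up
helper, the items it uses are the ones moved below:

    mod proxmox_client_tools;
    use proxmox_client_tools::key::{crypto_parameters, format_key_source};

    // Hypothetical consumer: resolve the crypto setup from CLI parameters
    // and report where the encryption key came from.
    fn announce_key(param: &serde_json::Value) -> Result<(), anyhow::Error> {
        let crypto = crypto_parameters(param)?;
        if let Some(key) = &crypto.enc_key {
            eprintln!("{}", format_key_source(&key.source, "encryption"));
        }
        Ok(())
    }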

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/bin/proxmox-backup-client.rs              | 440 +-----------------
 src/bin/proxmox_backup_client/catalog.rs      |   4 +-
 src/bin/proxmox_backup_client/mod.rs          |  30 --
 src/bin/proxmox_backup_client/snapshot.rs     |   3 +-
 .../key.rs                                    | 440 +++++++++++++++++-
 src/bin/proxmox_client_tools/mod.rs           |  30 +-
 6 files changed, 474 insertions(+), 473 deletions(-)
 rename src/bin/{proxmox_backup_client => proxmox_client_tools}/key.rs (52%)

diff --git a/src/bin/proxmox-backup-client.rs b/src/bin/proxmox-backup-client.rs
index 1b8b5bec..794f783c 100644
--- a/src/bin/proxmox-backup-client.rs
+++ b/src/bin/proxmox-backup-client.rs
@@ -1,7 +1,5 @@
 use std::collections::HashSet;
-use std::convert::TryFrom;
 use std::io::{self, Read, Write, Seek, SeekFrom};
-use std::os::unix::io::{FromRawFd, RawFd};
 use std::path::{Path, PathBuf};
 use std::pin::Pin;
 use std::sync::{Arc, Mutex};
@@ -19,7 +17,7 @@ use pathpatterns::{MatchEntry, MatchType, PatternFlag};
 use proxmox::{
     tools::{
         time::{strftime_local, epoch_i64},
-        fs::{file_get_contents, file_get_json, replace_file, CreateOptions, image_size},
+        fs::{file_get_json, replace_file, CreateOptions, image_size},
     },
     api::{
         api,
@@ -68,7 +66,10 @@ mod proxmox_backup_client;
 use proxmox_backup_client::*;
 
 mod proxmox_client_tools;
-use proxmox_client_tools::*;
+use proxmox_client_tools::{
+    *,
+    key::{format_key_source, crypto_parameters},
+};
 
 fn record_repository(repo: &BackupRepository) {
 
@@ -499,437 +500,6 @@ fn spawn_catalog_upload(
     Ok(CatalogUploadResult { catalog_writer, result: catalog_result_rx })
 }
 
-#[derive(Clone, Debug, Eq, PartialEq)]
-enum KeySource {
-    DefaultKey,
-    Fd,
-    Path(String),
-}
-
-fn format_key_source(source: &KeySource, key_type: &str) -> String {
-    match source {
-        KeySource::DefaultKey => format!("Using default {} key..", key_type),
-        KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
-        KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
-    }
-}
-
-#[derive(Clone, Debug, Eq, PartialEq)]
-struct KeyWithSource {
-    pub source: KeySource,
-    pub key: Vec<u8>,
-}
-
-impl KeyWithSource {
-    pub fn from_fd(key: Vec<u8>) -> Self {
-        Self {
-            source: KeySource::Fd,
-            key,
-        }
-    }
-
-    pub fn from_default(key: Vec<u8>) -> Self {
-        Self {
-            source: KeySource::DefaultKey,
-            key,
-        }
-    }
-
-    pub fn from_path(path: String, key: Vec<u8>) -> Self {
-        Self {
-            source: KeySource::Path(path),
-            key,
-        }
-    }
-}
-
-#[derive(Debug, Eq, PartialEq)]
-struct CryptoParams {
-    mode: CryptMode,
-    enc_key: Option<KeyWithSource>,
-    // FIXME switch to openssl::rsa::rsa<openssl::pkey::Public> once that is Eq?
-    master_pubkey: Option<KeyWithSource>,
-}
-
-fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
-    let keyfile = match param.get("keyfile") {
-        Some(Value::String(keyfile)) => Some(keyfile),
-        Some(_) => bail!("bad --keyfile parameter type"),
-        None => None,
-    };
-
-    let key_fd = match param.get("keyfd") {
-        Some(Value::Number(key_fd)) => Some(
-            RawFd::try_from(key_fd
-                .as_i64()
-                .ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
-            )
-            .map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
-        ),
-        Some(_) => bail!("bad --keyfd parameter type"),
-        None => None,
-    };
-
-    let master_pubkey_file = match param.get("master-pubkey-file") {
-        Some(Value::String(keyfile)) => Some(keyfile),
-        Some(_) => bail!("bad --master-pubkey-file parameter type"),
-        None => None,
-    };
-
-    let master_pubkey_fd = match param.get("master-pubkey-fd") {
-        Some(Value::Number(key_fd)) => Some(
-            RawFd::try_from(key_fd
-                .as_i64()
-                .ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
-            )
-            .map_err(|err| format_err!("bad public master key fd: {:?}: {}", key_fd, err))?
-        ),
-        Some(_) => bail!("bad --master-pubkey-fd parameter type"),
-        None => None,
-    };
-
-    let mode: Option<CryptMode> = match param.get("crypt-mode") {
-        Some(mode) => Some(serde_json::from_value(mode.clone())?),
-        None => None,
-    };
-
-    let key = match (keyfile, key_fd) {
-        (None, None) => None,
-        (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
-        (Some(keyfile), None) => Some(KeyWithSource::from_path(
-            keyfile.clone(),
-            file_get_contents(keyfile)?,
-        )),
-        (None, Some(fd)) => {
-            let input = unsafe { std::fs::File::from_raw_fd(fd) };
-            let mut data = Vec::new();
-            let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
-                format_err!("error reading encryption key from fd {}: {}", fd, err)
-            })?;
-            Some(KeyWithSource::from_fd(data))
-        }
-    };
-
-    let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
-        (None, None) => None,
-        (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
-        (Some(keyfile), None) => Some(KeyWithSource::from_path(
-            keyfile.clone(),
-            file_get_contents(keyfile)?,
-        )),
-        (None, Some(fd)) => {
-            let input = unsafe { std::fs::File::from_raw_fd(fd) };
-            let mut data = Vec::new();
-            let _len: usize = { input }
-                .read_to_end(&mut data)
-                .map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
-            Some(KeyWithSource::from_fd(data))
-        }
-    };
-
-    let res = match mode {
-        // no crypt mode, enable encryption if keys are available
-        None => match (key, master_pubkey) {
-            // only default keys if available
-            (None, None) => match key::read_optional_default_encryption_key()? {
-                None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
-                enc_key => {
-                    let master_pubkey = key::read_optional_default_master_pubkey()?;
-                    CryptoParams {
-                        mode: CryptMode::Encrypt,
-                        enc_key,
-                        master_pubkey,
-                    }
-                },
-            },
-
-            // explicit master key, default enc key needed
-            (None, master_pubkey) => match key::read_optional_default_encryption_key()? {
-                None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
-                enc_key => {
-                    CryptoParams {
-                        mode: CryptMode::Encrypt,
-                        enc_key,
-                        master_pubkey,
-                    }
-                },
-            },
-
-            // explicit keyfile, maybe default master key
-            (enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: key::read_optional_default_master_pubkey()? },
-
-            // explicit keyfile and master key
-            (enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
-        },
-
-        // explicitly disabled encryption
-        Some(CryptMode::None) => match (key, master_pubkey) {
-            // no keys => OK, no encryption
-            (None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
-
-            // --keyfile and --crypt-mode=none
-            (Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
-
-            // --master-pubkey-file and --crypt-mode=none
-            (_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
-        },
-
-        // explicitly enabled encryption
-        Some(mode) => match (key, master_pubkey) {
-            // no key, maybe master key
-            (None, master_pubkey) => match key::read_optional_default_encryption_key()? {
-                None => bail!("--crypt-mode without --keyfile and no default key file available"),
-                enc_key => {
-                    eprintln!("Encrypting with default encryption key!");
-                    let master_pubkey = match master_pubkey {
-                        None => key::read_optional_default_master_pubkey()?,
-                        master_pubkey => master_pubkey,
-                    };
-
-                    CryptoParams {
-                        mode,
-                        enc_key,
-                        master_pubkey,
-                    }
-                },
-            },
-
-            // --keyfile and --crypt-mode other than none
-            (enc_key, master_pubkey) => {
-                let master_pubkey = match master_pubkey {
-                    None => key::read_optional_default_master_pubkey()?,
-                    master_pubkey => master_pubkey,
-                };
-
-                CryptoParams { mode, enc_key, master_pubkey }
-            },
-        },
-    };
-
-    Ok(res)
-}
-
-#[test]
-// WARNING: there must only be one test for crypto_parameters as the default key handling is not
-// safe w.r.t. concurrency
-fn test_crypto_parameters_handling() -> Result<(), Error> {
-    let some_key = vec![1;1];
-    let default_key = vec![2;1];
-
-    let some_master_key = vec![3;1];
-    let default_master_key = vec![4;1];
-
-    let keypath = "./target/testout/keyfile.test";
-    let master_keypath = "./target/testout/masterkeyfile.test";
-    let invalid_keypath = "./target/testout/invalid_keyfile.test";
-
-    let no_key_res = CryptoParams {
-        enc_key: None,
-        master_pubkey: None,
-        mode: CryptMode::None,
-    };
-    let some_key_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_path(
-            keypath.to_string(),
-            some_key.clone(),
-        )),
-        master_pubkey: None,
-        mode: CryptMode::Encrypt,
-    };
-    let some_key_some_master_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_path(
-            keypath.to_string(),
-            some_key.clone(),
-        )),
-        master_pubkey: Some(KeyWithSource::from_path(
-            master_keypath.to_string(),
-            some_master_key.clone(),
-        )),
-        mode: CryptMode::Encrypt,
-    };
-    let some_key_default_master_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_path(
-            keypath.to_string(),
-            some_key.clone(),
-        )),
-        master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
-        mode: CryptMode::Encrypt,
-    };
-
-    let some_key_sign_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_path(
-            keypath.to_string(),
-            some_key.clone(),
-        )),
-        master_pubkey: None,
-        mode: CryptMode::SignOnly,
-    };
-    let default_key_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_default(default_key.clone())),
-        master_pubkey: None,
-        mode: CryptMode::Encrypt,
-    };
-    let default_key_sign_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_default(default_key.clone())),
-        master_pubkey: None,
-        mode: CryptMode::SignOnly,
-    };
-
-    replace_file(&keypath, &some_key, CreateOptions::default())?;
-    replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
-
-    // no params, no default key == no key
-    let res = crypto_parameters(&json!({}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // keyfile param == key from keyfile
-    let res = crypto_parameters(&json!({"keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // crypt mode none == no key
-    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // crypt mode encrypt/sign-only, no keyfile, no default key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
-
-    // crypt mode none with explicit key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
-
-    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_sign_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // invalid keyfile parameter always errors
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
-
-    // now set a default key
-    unsafe { key::set_test_encryption_key(Ok(Some(default_key.clone()))); }
-
-    // and repeat
-
-    // no params but default key == default key
-    let res = crypto_parameters(&json!({}));
-    assert_eq!(res.unwrap(), default_key_res);
-
-    // keyfile param == key from keyfile
-    let res = crypto_parameters(&json!({"keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // crypt mode none == no key
-    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
-    assert_eq!(res.unwrap(), default_key_sign_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
-    assert_eq!(res.unwrap(), default_key_res);
-
-    // crypt mode none with explicit key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
-
-    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_sign_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // invalid keyfile parameter always errors
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
-
-    // now make default key retrieval error
-    unsafe { key::set_test_encryption_key(Err(format_err!("test error"))); }
-
-    // and repeat
-
-    // no params, default key retrieval errors == Error
-    assert!(crypto_parameters(&json!({})).is_err());
-
-    // keyfile param == key from keyfile
-    let res = crypto_parameters(&json!({"keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // crypt mode none == no key
-    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // crypt mode encrypt/sign-only, no keyfile, default key error == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
-
-    // crypt mode none with explicit key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
-
-    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_sign_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // invalid keyfile parameter always errors
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
-
-    // now remove default key again
-    unsafe { key::set_test_encryption_key(Ok(None)); }
-    // set a default master key
-    unsafe { key::set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
-
-    // and use an explicit master key
-    assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
-    // just a default == no key
-    let res = crypto_parameters(&json!({}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // keyfile param == key from keyfile
-    let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
-    assert_eq!(res.unwrap(), some_key_some_master_res);
-    // same with fallback to default master key
-    let res = crypto_parameters(&json!({"keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_default_master_res);
-
-    // crypt mode none == error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
-    // with just default master key == no key
-    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // crypt mode encrypt without enc key == error
-    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
-
-    // crypt mode none with explicit key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
-
-    // crypt mode encrypt with keyfile == key from keyfile with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
-    assert_eq!(res.unwrap(), some_key_some_master_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_default_master_res);
-
-    // invalid master keyfile parameter always errors when a key is passed, even with a valid
-    // default master key
-    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
-
-    Ok(())
-}
-
 #[api(
    input: {
        properties: {
diff --git a/src/bin/proxmox_backup_client/catalog.rs b/src/bin/proxmox_backup_client/catalog.rs
index 659200ff..80d72a55 100644
--- a/src/bin/proxmox_backup_client/catalog.rs
+++ b/src/bin/proxmox_backup_client/catalog.rs
@@ -15,7 +15,6 @@ use crate::{
     REPO_URL_SCHEMA,
     KEYFD_SCHEMA,
     extract_repository_from_value,
-    format_key_source,
     record_repository,
     key::get_encryption_key_password,
     decrypt_key,
@@ -25,7 +24,6 @@ use crate::{
     complete_group_or_snapshot,
     complete_pxar_archive_name,
     connect,
-    crypto_parameters,
     BackupDir,
     BackupGroup,
     BufferedDynamicReader,
@@ -38,6 +36,8 @@ use crate::{
     Shell,
 };
 
+use crate::proxmox_client_tools::key::{format_key_source, crypto_parameters};
+
 #[api(
    input: {
         properties: {
diff --git a/src/bin/proxmox_backup_client/mod.rs b/src/bin/proxmox_backup_client/mod.rs
index a14b0dc1..bc03f243 100644
--- a/src/bin/proxmox_backup_client/mod.rs
+++ b/src/bin/proxmox_backup_client/mod.rs
@@ -1,5 +1,3 @@
-use anyhow::{Context, Error};
-
 mod benchmark;
 pub use benchmark::*;
 mod mount;
@@ -11,31 +9,3 @@ pub use catalog::*;
 mod snapshot;
 pub use snapshot::*;
 
-pub mod key;
-
-pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
-    xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
-}
-
-/// Convenience helper for better error messages:
-pub fn find_xdg_file(
-    file_name: impl AsRef<std::path::Path>,
-    description: &'static str,
-) -> Result<Option<std::path::PathBuf>, Error> {
-    let file_name = file_name.as_ref();
-    base_directories()
-        .map(|base| base.find_config_file(file_name))
-        .with_context(|| format!("error searching for {}", description))
-}
-
-pub fn place_xdg_file(
-    file_name: impl AsRef<std::path::Path>,
-    description: &'static str,
-) -> Result<std::path::PathBuf, Error> {
-    let file_name = file_name.as_ref();
-    base_directories()
-        .and_then(|base| {
-            base.place_config_file(file_name).map_err(Error::from)
-        })
-        .with_context(|| format!("failed to place {} in xdg home", description))
-}
diff --git a/src/bin/proxmox_backup_client/snapshot.rs b/src/bin/proxmox_backup_client/snapshot.rs
index 5988ebf6..45ae63b3 100644
--- a/src/bin/proxmox_backup_client/snapshot.rs
+++ b/src/bin/proxmox_backup_client/snapshot.rs
@@ -30,11 +30,12 @@ use crate::{
     complete_backup_group,
     complete_repository,
     connect,
-    crypto_parameters,
     extract_repository_from_value,
     record_repository,
 };
 
+use crate::proxmox_client_tools::key::crypto_parameters;
+
 #[api(
    input: {
         properties: {
diff --git a/src/bin/proxmox_backup_client/key.rs b/src/bin/proxmox_client_tools/key.rs
similarity index 52%
rename from src/bin/proxmox_backup_client/key.rs
rename to src/bin/proxmox_client_tools/key.rs
index 6e18a026..11cf01e6 100644
--- a/src/bin/proxmox_backup_client/key.rs
+++ b/src/bin/proxmox_client_tools/key.rs
@@ -1,5 +1,7 @@
 use std::convert::TryFrom;
 use std::path::PathBuf;
+use std::os::unix::io::{FromRawFd, RawFd};
+use std::io::Read;
 
 use anyhow::{bail, format_err, Error};
 use serde_json::Value;
@@ -15,16 +17,224 @@ use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
 
 use proxmox_backup::{
     api2::types::{Kdf, KeyInfo, RsaPubKeyInfo, PASSWORD_HINT_SCHEMA},
-    backup::{rsa_decrypt_key_config, KeyConfig},
+    backup::{rsa_decrypt_key_config, CryptMode, KeyConfig},
     tools,
     tools::paperkey::{generate_paper_key, PaperkeyFormat},
 };
 
-use crate::KeyWithSource;
-
 pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
 pub const DEFAULT_MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
 
+#[derive(Clone, Debug, Eq, PartialEq)]
+pub enum KeySource {
+    DefaultKey,
+    Fd,
+    Path(String),
+}
+
+pub fn format_key_source(source: &KeySource, key_type: &str) -> String {
+    match source {
+        KeySource::DefaultKey => format!("Using default {} key..", key_type),
+        KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
+        KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
+    }
+}
+
+#[derive(Clone, Debug, Eq, PartialEq)]
+pub struct KeyWithSource {
+    pub source: KeySource,
+    pub key: Vec<u8>,
+}
+
+impl KeyWithSource {
+    pub fn from_fd(key: Vec<u8>) -> Self {
+        Self {
+            source: KeySource::Fd,
+            key,
+        }
+    }
+
+    pub fn from_default(key: Vec<u8>) -> Self {
+        Self {
+            source: KeySource::DefaultKey,
+            key,
+        }
+    }
+
+    pub fn from_path(path: String, key: Vec<u8>) -> Self {
+        Self {
+            source: KeySource::Path(path),
+            key,
+        }
+    }
+}
+
+#[derive(Debug, Eq, PartialEq)]
+pub struct CryptoParams {
+    pub mode: CryptMode,
+    pub enc_key: Option<KeyWithSource>,
+    // FIXME switch to openssl::rsa::rsa<openssl::pkey::Public> once that is Eq?
+    pub master_pubkey: Option<KeyWithSource>,
+}
+
+pub fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
+    let keyfile = match param.get("keyfile") {
+        Some(Value::String(keyfile)) => Some(keyfile),
+        Some(_) => bail!("bad --keyfile parameter type"),
+        None => None,
+    };
+
+    let key_fd = match param.get("keyfd") {
+        Some(Value::Number(key_fd)) => Some(
+            RawFd::try_from(key_fd
+                .as_i64()
+                .ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
+            )
+            .map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
+        ),
+        Some(_) => bail!("bad --keyfd parameter type"),
+        None => None,
+    };
+
+    let master_pubkey_file = match param.get("master-pubkey-file") {
+        Some(Value::String(keyfile)) => Some(keyfile),
+        Some(_) => bail!("bad --master-pubkey-file parameter type"),
+        None => None,
+    };
+
+    let master_pubkey_fd = match param.get("master-pubkey-fd") {
+        Some(Value::Number(key_fd)) => Some(
+            RawFd::try_from(key_fd
+                .as_i64()
+                .ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
+            )
+            .map_err(|err| format_err!("bad public master key fd: {:?}: {}", key_fd, err))?
+        ),
+        Some(_) => bail!("bad --master-pubkey-fd parameter type"),
+        None => None,
+    };
+
+    let mode: Option<CryptMode> = match param.get("crypt-mode") {
+        Some(mode) => Some(serde_json::from_value(mode.clone())?),
+        None => None,
+    };
+
+    let key = match (keyfile, key_fd) {
+        (None, None) => None,
+        (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
+        (Some(keyfile), None) => Some(KeyWithSource::from_path(
+            keyfile.clone(),
+            file_get_contents(keyfile)?,
+        )),
+        (None, Some(fd)) => {
+            let input = unsafe { std::fs::File::from_raw_fd(fd) };
+            let mut data = Vec::new();
+            let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
+                format_err!("error reading encryption key from fd {}: {}", fd, err)
+            })?;
+            Some(KeyWithSource::from_fd(data))
+        }
+    };
+
+    let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
+        (None, None) => None,
+        (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
+        (Some(keyfile), None) => Some(KeyWithSource::from_path(
+            keyfile.clone(),
+            file_get_contents(keyfile)?,
+        )),
+        (None, Some(fd)) => {
+            let input = unsafe { std::fs::File::from_raw_fd(fd) };
+            let mut data = Vec::new();
+            let _len: usize = { input }
+                .read_to_end(&mut data)
+                .map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
+            Some(KeyWithSource::from_fd(data))
+        }
+    };
+
+    let res = match mode {
+        // no crypt mode, enable encryption if keys are available
+        None => match (key, master_pubkey) {
+            // only default keys if available
+            (None, None) => match read_optional_default_encryption_key()? {
+                None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
+                enc_key => {
+                    let master_pubkey = read_optional_default_master_pubkey()?;
+                    CryptoParams {
+                        mode: CryptMode::Encrypt,
+                        enc_key,
+                        master_pubkey,
+                    }
+                },
+            },
+
+            // explicit master key, default enc key needed
+            (None, master_pubkey) => match read_optional_default_encryption_key()? {
+                None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
+                enc_key => {
+                    CryptoParams {
+                        mode: CryptMode::Encrypt,
+                        enc_key,
+                        master_pubkey,
+                    }
+                },
+            },
+
+            // explicit keyfile, maybe default master key
+            (enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: read_optional_default_master_pubkey()? },
+
+            // explicit keyfile and master key
+            (enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
+        },
+
+        // explicitly disabled encryption
+        Some(CryptMode::None) => match (key, master_pubkey) {
+            // no keys => OK, no encryption
+            (None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
+
+            // --keyfile and --crypt-mode=none
+            (Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
+
+            // --master-pubkey-file and --crypt-mode=none
+            (_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
+        },
+
+        // explicitly enabled encryption
+        Some(mode) => match (key, master_pubkey) {
+            // no key, maybe master key
+            (None, master_pubkey) => match read_optional_default_encryption_key()? {
+                None => bail!("--crypt-mode without --keyfile and no default key file available"),
+                enc_key => {
+                    eprintln!("Encrypting with default encryption key!");
+                    let master_pubkey = match master_pubkey {
+                        None => read_optional_default_master_pubkey()?,
+                        master_pubkey => master_pubkey,
+                    };
+
+                    CryptoParams {
+                        mode,
+                        enc_key,
+                        master_pubkey,
+                    }
+                },
+            },
+
+            // --keyfile and --crypt-mode other than none
+            (enc_key, master_pubkey) => {
+                let master_pubkey = match master_pubkey {
+                    None => read_optional_default_master_pubkey()?,
+                    master_pubkey => master_pubkey,
+                };
+
+                CryptoParams { mode, enc_key, master_pubkey }
+            },
+        },
+    };
+
+    Ok(res)
+}
+
 pub fn find_default_master_pubkey() -> Result<Option<PathBuf>, Error> {
     super::find_xdg_file(
         DEFAULT_MASTER_PUBKEY_FILE_NAME,
@@ -600,3 +810,227 @@ pub fn cli() -> CliCommandMap {
         .insert("show-master-pubkey", key_show_master_pubkey_cmd_def)
         .insert("paperkey", paper_key_cmd_def)
 }
+
+#[test]
+// WARNING: there must only be one test for crypto_parameters as the default key handling is not
+// safe w.r.t. concurrency
+fn test_crypto_parameters_handling() -> Result<(), Error> {
+    use serde_json::json;
+
+    let some_key = vec![1;1];
+    let default_key = vec![2;1];
+
+    let some_master_key = vec![3;1];
+    let default_master_key = vec![4;1];
+
+    let keypath = "./target/testout/keyfile.test";
+    let master_keypath = "./target/testout/masterkeyfile.test";
+    let invalid_keypath = "./target/testout/invalid_keyfile.test";
+
+    let no_key_res = CryptoParams {
+        enc_key: None,
+        master_pubkey: None,
+        mode: CryptMode::None,
+    };
+    let some_key_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_path(
+            keypath.to_string(),
+            some_key.clone(),
+        )),
+        master_pubkey: None,
+        mode: CryptMode::Encrypt,
+    };
+    let some_key_some_master_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_path(
+            keypath.to_string(),
+            some_key.clone(),
+        )),
+        master_pubkey: Some(KeyWithSource::from_path(
+            master_keypath.to_string(),
+            some_master_key.clone(),
+        )),
+        mode: CryptMode::Encrypt,
+    };
+    let some_key_default_master_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_path(
+            keypath.to_string(),
+            some_key.clone(),
+        )),
+        master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
+        mode: CryptMode::Encrypt,
+    };
+
+    let some_key_sign_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_path(
+            keypath.to_string(),
+            some_key.clone(),
+        )),
+        master_pubkey: None,
+        mode: CryptMode::SignOnly,
+    };
+    let default_key_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_default(default_key.clone())),
+        master_pubkey: None,
+        mode: CryptMode::Encrypt,
+    };
+    let default_key_sign_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_default(default_key.clone())),
+        master_pubkey: None,
+        mode: CryptMode::SignOnly,
+    };
+
+    replace_file(&keypath, &some_key, CreateOptions::default())?;
+    replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
+
+    // no params, no default key == no key
+    let res = crypto_parameters(&json!({}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // keyfile param == key from keyfile
+    let res = crypto_parameters(&json!({"keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // crypt mode none == no key
+    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // crypt mode encrypt/sign-only, no keyfile, no default key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
+
+    // crypt mode none with explicit key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
+
+    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_sign_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // invalid keyfile parameter always errors
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
+
+    // now set a default key
+    unsafe { set_test_encryption_key(Ok(Some(default_key.clone()))); }
+
+    // and repeat
+
+    // no params but default key == default key
+    let res = crypto_parameters(&json!({}));
+    assert_eq!(res.unwrap(), default_key_res);
+
+    // keyfile param == key from keyfile
+    let res = crypto_parameters(&json!({"keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // crypt mode none == no key
+    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
+    assert_eq!(res.unwrap(), default_key_sign_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
+    assert_eq!(res.unwrap(), default_key_res);
+
+    // crypt mode none with explicit key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
+
+    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_sign_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // invalid keyfile parameter always errors
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
+
+    // now make default key retrieval error
+    unsafe { set_test_encryption_key(Err(format_err!("test error"))); }
+
+    // and repeat
+
+    // no params, default key retrieval errors == Error
+    assert!(crypto_parameters(&json!({})).is_err());
+
+    // keyfile param == key from keyfile
+    let res = crypto_parameters(&json!({"keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // crypt mode none == no key
+    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // crypt mode encrypt/sign-only, no keyfile, default key error == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
+
+    // crypt mode none with explicit key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
+
+    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_sign_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // invalid keyfile parameter always errors
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
+
+    // now remove default key again
+    unsafe { set_test_encryption_key(Ok(None)); }
+    // set a default master key
+    unsafe { set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
+
+    // and use an explicit master key
+    assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
+    // just a default == no key
+    let res = crypto_parameters(&json!({}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // keyfile param == key from keyfile
+    let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
+    assert_eq!(res.unwrap(), some_key_some_master_res);
+    // same with fallback to default master key
+    let res = crypto_parameters(&json!({"keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_default_master_res);
+
+    // crypt mode none == error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
+    // with just default master key == no key
+    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // crypt mode encrypt without enc key == error
+    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
+
+    // crypt mode none with explicit key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
+
+    // crypt mode encrypt with keyfile == key from keyfile with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
+    assert_eq!(res.unwrap(), some_key_some_master_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_default_master_res);
+
+    // invalid master keyfile parameter always errors when a key is passed, even with a valid
+    // default master key
+    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
+
+    Ok(())
+}
+
diff --git a/src/bin/proxmox_client_tools/mod.rs b/src/bin/proxmox_client_tools/mod.rs
index 7b69e8cb..40698f1d 100644
--- a/src/bin/proxmox_client_tools/mod.rs
+++ b/src/bin/proxmox_client_tools/mod.rs
@@ -1,8 +1,7 @@
 //! Shared tools useful for common CLI clients.
-
 use std::collections::HashMap;
 
-use anyhow::{bail, format_err, Error};
+use anyhow::{bail, format_err, Context, Error};
 use serde_json::{json, Value};
 use xdg::BaseDirectories;
 
@@ -17,6 +16,8 @@ use proxmox_backup::backup::BackupDir;
 use proxmox_backup::client::*;
 use proxmox_backup::tools;
 
+pub mod key;
+
 const ENV_VAR_PBS_FINGERPRINT: &str = "PBS_FINGERPRINT";
 const ENV_VAR_PBS_PASSWORD: &str = "PBS_PASSWORD";
 
@@ -364,3 +365,28 @@ pub fn complete_backup_source(arg: &str, param: &HashMap<String, String>) -> Vec
 
     result
 }
+
+pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
+    xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
+}
+
+/// Convenience helper for better error messages:
+pub fn find_xdg_file(
+    file_name: impl AsRef<std::path::Path>,
+    description: &'static str,
+) -> Result<Option<std::path::PathBuf>, Error> {
+    let file_name = file_name.as_ref();
+    base_directories()
+        .map(|base| base.find_config_file(file_name))
+        .with_context(|| format!("error searching for {}", description))
+}
+
+pub fn place_xdg_file(
+    file_name: impl AsRef<std::path::Path>,
+    description: &'static str,
+) -> Result<std::path::PathBuf, Error> {
+    let file_name = file_name.as_ref();
+    base_directories()
+        .and_then(|base| base.place_config_file(file_name).map_err(Error::from))
+        .with_context(|| format!("failed to place {} in xdg home", description))
+}
-- 
2.20.1
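
The relocated XDG helpers keep the error-context handling in one place; a
small sketch of a typical caller (default_key_path is invented for
illustration, the file name matches DEFAULT_ENCRYPTION_KEY_FILE_NAME
above):

    use crate::proxmox_client_tools::{find_xdg_file, place_xdg_file};

    // Hypothetical caller: return the existing default key file if there
    // is one, otherwise the path where it should be created.
    fn default_key_path() -> Result<std::path::PathBuf, anyhow::Error> {
        match find_xdg_file("encryption-key.json", "default encryption key file")? {
            Some(existing) => Ok(existing),
            None => place_xdg_file("encryption-key.json", "default encryption key file"),
        }
    }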
^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 11/22] file-restore: add binary and basic commands
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (9 preceding siblings ...)
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 10/22] proxmox_client_tools: extract 'key' from client module Stefan Reiter
@ 2021-02-16 17:06 ` Stefan Reiter
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 12/22] file-restore: allow specifying output-format Stefan Reiter
                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:06 UTC (permalink / raw)
  To: pbs-devel

From: Dominik Csapak <d.csapak@proxmox.com>

For now it only supports the 'list' and 'extract' commands for
'pxar.didx' files. This should be the foundation for a general
file-restore interface that is shared with block-level snapshots.

This is packaged as a separate .deb file, since block-level restore will
need to depend on pve-qemu-kvm, which we want to keep separate from
proxmox-backup-client.

[original code for proxmox-file-restore.rs]
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

[code cleanups/clippy, use helpers::list_dir_content/ArchiveEntry, no
/block subdir for .fidx files, separate binary and package]
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
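
For orientation, the new binary follows the same CLI skeleton as the
existing client tools, roughly (the two *_cmd_def() helpers are
placeholders for the command definitions added in this patch):

    use proxmox::api::cli::{run_cli_command, CliCommandMap, CliEnvironment};

    fn main() {
        // Hypothetical sketch: a command map with the two commands this
        // patch introduces, driven by the shared tokio runtime.
        let cmd_def = CliCommandMap::new()
            .insert("list", list_cmd_def())         // placeholder
            .insert("extract", extract_cmd_def());  // placeholder

        run_cli_command(
            cmd_def,
            CliEnvironment::new(),
            Some(|future| proxmox_backup::tools::runtime::main(future)),
        );
    }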
---
 Cargo.toml                                  |   2 +-
 Makefile                                    |   9 +-
 debian/control                              |  11 +
 debian/control.in                           |  10 +
 debian/proxmox-file-restore.bash-completion |   1 +
 debian/proxmox-file-restore.bc              |   8 +
 debian/proxmox-file-restore.install         |   3 +
 debian/proxmox-file-restore.triggers        |   1 +
 debian/rules                                |   7 +-
 docs/Makefile                               |  10 +-
 docs/command-line-tools.rst                 |   5 +
 docs/proxmox-file-restore/description.rst   |   4 +
 docs/proxmox-file-restore/man1.rst          |  28 ++
 src/api2.rs                                 |   2 +-
 src/bin/proxmox-file-restore.rs             | 342 ++++++++++++++++++++
 zsh-completions/_proxmox-file-restore       |  13 +
 16 files changed, 449 insertions(+), 7 deletions(-)
 create mode 100644 debian/proxmox-file-restore.bash-completion
 create mode 100644 debian/proxmox-file-restore.bc
 create mode 100644 debian/proxmox-file-restore.install
 create mode 100644 debian/proxmox-file-restore.triggers
 create mode 100644 docs/proxmox-file-restore/description.rst
 create mode 100644 docs/proxmox-file-restore/man1.rst
 create mode 100644 src/bin/proxmox-file-restore.rs
 create mode 100644 zsh-completions/_proxmox-file-restore

diff --git a/Cargo.toml b/Cargo.toml
index a436e1ad..28ca8e64 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -60,7 +60,7 @@ serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
 siphasher = "0.3"
 syslog = "4.0"
-tokio = { version = "1.0", features = [ "fs", "io-util", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
+tokio = { version = "1.0", features = [ "fs", "io-util", "io-std", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
 tokio-openssl = "0.6.1"
 tokio-stream = "0.1.0"
 tokio-util = { version = "0.6", features = [ "codec" ] }
diff --git a/Makefile b/Makefile
index b2ef9d32..3b865083 100644
--- a/Makefile
+++ b/Makefile
@@ -9,6 +9,7 @@ SUBDIRS := etc www docs
 # Binaries usable by users
 USR_BIN := \
 	proxmox-backup-client 	\
+	proxmox-file-restore \
 	pxar			\
 	pmtx			\
 	pmt
@@ -46,9 +47,12 @@ SERVER_DEB=${PACKAGE}-server_${DEB_VERSION}_${ARCH}.deb
 SERVER_DBG_DEB=${PACKAGE}-server-dbgsym_${DEB_VERSION}_${ARCH}.deb
 CLIENT_DEB=${PACKAGE}-client_${DEB_VERSION}_${ARCH}.deb
 CLIENT_DBG_DEB=${PACKAGE}-client-dbgsym_${DEB_VERSION}_${ARCH}.deb
+RESTORE_DEB=proxmox-file-restore_${DEB_VERSION}_${ARCH}.deb
+RESTORE_DBG_DEB=proxmox-file-restore-dbgsym_${DEB_VERSION}_${ARCH}.deb
 DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb
 
-DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
+DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} \
+     ${RESTORE_DEB} ${RESTORE_DBG_DEB}
 
 DSC = rust-${PACKAGE}_${DEB_VERSION}.dsc
 
@@ -151,8 +155,9 @@ install: $(COMPILED_BINS)
 	$(MAKE) -C docs install
 
 .PHONY: upload
-upload: ${SERVER_DEB} ${CLIENT_DEB} ${DOC_DEB}
+upload: ${SERVER_DEB} ${CLIENT_DEB} ${RESTORE_DEB} ${DOC_DEB}
 	# check if working directory is clean
 	git diff --exit-code --stat && git diff --exit-code --stat --staged
 	tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
 	tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve,pmg" --dist buster
+	tar cf - ${RESTORE_DEB} ${RESTORE_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve,pmg" --dist buster
diff --git a/debian/control b/debian/control
index c0bc61bc..57d47a85 100644
--- a/debian/control
+++ b/debian/control
@@ -52,6 +52,7 @@ Build-Depends: debhelper (>= 11),
  librust-syslog-4+default-dev,
  librust-tokio-1+default-dev,
  librust-tokio-1+fs-dev,
+ librust-tokio-1+io-std-dev,
  librust-tokio-1+io-util-dev,
  librust-tokio-1+macros-dev,
  librust-tokio-1+net-dev,
@@ -145,3 +146,13 @@ Depends: libjs-extjs,
 Architecture: all
 Description: Proxmox Backup Documentation
  This package contains the Proxmox Backup Documentation files.
+
+Package: proxmox-file-restore
+Architecture: any
+Depends: ${misc:Depends},
+         ${shlibs:Depends},
+Recommends: pve-qemu-kvm (>= 5.0.0-9),
+Description: PBS single file restore for pxar and block device backups
+ This package contains the Proxmox Backup single file restore client for
+ restoring individual files and folders from both host/container and VM/block
+ device backups. It includes a block device restore driver using QEMU.
diff --git a/debian/control.in b/debian/control.in
index b4b4d22e..f9fb8fe4 100644
--- a/debian/control.in
+++ b/debian/control.in
@@ -42,3 +42,13 @@ Depends: libjs-extjs,
 Architecture: all
 Description: Proxmox Backup Documentation
  This package contains the Proxmox Backup Documentation files.
+
+Package: proxmox-file-restore
+Architecture: any
+Depends: ${misc:Depends},
+         ${shlibs:Depends},
+Recommends: pve-qemu-kvm (>= 5.0.0-9),
+Description: PBS single file restore for pxar and block device backups
+ This package contains the Proxmox Backup single file restore client for
+ restoring individual files and folders from both host/container and VM/block
+ device backups. It includes a block device restore driver using QEMU.
diff --git a/debian/proxmox-file-restore.bash-completion b/debian/proxmox-file-restore.bash-completion
new file mode 100644
index 00000000..7160209c
--- /dev/null
+++ b/debian/proxmox-file-restore.bash-completion
@@ -0,0 +1 @@
+debian/proxmox-file-restore.bc proxmox-file-restore
diff --git a/debian/proxmox-file-restore.bc b/debian/proxmox-file-restore.bc
new file mode 100644
index 00000000..646ebdd2
--- /dev/null
+++ b/debian/proxmox-file-restore.bc
@@ -0,0 +1,8 @@
+# proxmox-file-restore bash completion
+
+# see http://tiswww.case.edu/php/chet/bash/FAQ
+# and __ltrim_colon_completions() in /usr/share/bash-completion/bash_completion
+# this modifies global var, but I found no better way
+COMP_WORDBREAKS=${COMP_WORDBREAKS//:}
+
+complete -C 'proxmox-file-restore bashcomplete' proxmox-file-restore
diff --git a/debian/proxmox-file-restore.install b/debian/proxmox-file-restore.install
new file mode 100644
index 00000000..2082e46b
--- /dev/null
+++ b/debian/proxmox-file-restore.install
@@ -0,0 +1,3 @@
+usr/bin/proxmox-file-restore
+usr/share/man/man1/proxmox-file-restore.1
+usr/share/zsh/vendor-completions/_proxmox-file-restore
diff --git a/debian/proxmox-file-restore.triggers b/debian/proxmox-file-restore.triggers
new file mode 100644
index 00000000..998cda4b
--- /dev/null
+++ b/debian/proxmox-file-restore.triggers
@@ -0,0 +1 @@
+interest-noawait pbs-file-restore-initramfs
diff --git a/debian/rules b/debian/rules
index 22671c0a..ce2db72e 100755
--- a/debian/rules
+++ b/debian/rules
@@ -52,8 +52,11 @@ override_dh_dwz:
 
 override_dh_strip:
 	dh_strip
-	for exe in $$(find debian/proxmox-backup-client/usr \
-	  debian/proxmox-backup-server/usr -executable -type f); do \
+	for exe in $$(find \
+	    debian/proxmox-backup-client/usr \
+	    debian/proxmox-backup-server/usr \
+	    debian/proxmox-file-restore/usr \
+	    -executable -type f); do \
 	  debian/scripts/elf-strip-unused-dependencies.sh "$$exe" || true; \
 	done
 
diff --git a/docs/Makefile b/docs/Makefile
index 4dc0019b..f6af8916 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -5,6 +5,7 @@ GENERATED_SYNOPSIS := 						\
 	proxmox-backup-client/synopsis.rst			\
 	proxmox-backup-client/catalog-shell-synopsis.rst 	\
 	proxmox-backup-manager/synopsis.rst			\
+	proxmox-file-restore/synopsis.rst			\
 	pxar/synopsis.rst					\
 	pmtx/synopsis.rst					\
 	pmt/synopsis.rst					\
@@ -27,7 +28,8 @@ MAN1_PAGES := 				\
 	proxmox-tape.1			\
 	proxmox-backup-proxy.1		\
 	proxmox-backup-client.1		\
-	proxmox-backup-manager.1
+	proxmox-backup-manager.1	\
+	proxmox-file-restore.1
 
 MAN5_PAGES :=				\
 	media-pool.cfg.5		\
@@ -185,6 +187,12 @@ proxmox-backup-manager.1: proxmox-backup-manager/man1.rst  proxmox-backup-manage
 proxmox-backup-proxy.1: proxmox-backup-proxy/man1.rst  proxmox-backup-proxy/description.rst
 	rst2man $< >$@
 
+proxmox-file-restore/synopsis.rst: ${COMPILEDIR}/proxmox-file-restore
+	${COMPILEDIR}/proxmox-file-restore printdoc > proxmox-file-restore/synopsis.rst
+
+proxmox-file-restore.1: proxmox-file-restore/man1.rst  proxmox-file-restore/description.rst proxmox-file-restore/synopsis.rst
+	rst2man $< >$@
+
 .PHONY: onlinehelpinfo
 onlinehelpinfo:
 	@echo "Generating OnlineHelpInfo.js..."
diff --git a/docs/command-line-tools.rst b/docs/command-line-tools.rst
index 9b0a1290..bf3a92cc 100644
--- a/docs/command-line-tools.rst
+++ b/docs/command-line-tools.rst
@@ -6,6 +6,11 @@ Command Line Tools
 
 .. include:: proxmox-backup-client/description.rst
 
+``proxmox-file-restore``
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. include:: proxmox-file-restore/description.rst
+
 ``proxmox-backup-manager``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/docs/proxmox-file-restore/description.rst b/docs/proxmox-file-restore/description.rst
new file mode 100644
index 00000000..34872663
--- /dev/null
+++ b/docs/proxmox-file-restore/description.rst
@@ -0,0 +1,4 @@
+This is just a test.
+
+.. NOTE:: No further info.
+
diff --git a/docs/proxmox-file-restore/man1.rst b/docs/proxmox-file-restore/man1.rst
new file mode 100644
index 00000000..fe3625b1
--- /dev/null
+++ b/docs/proxmox-file-restore/man1.rst
@@ -0,0 +1,28 @@
+==========================
+proxmox-file-restore
+==========================
+
+.. include:: ../epilog.rst
+
+-----------------------------------------------------------------------
+Command line tool for restoring files and directories from PBS archives
+-----------------------------------------------------------------------
+
+:Author: |AUTHOR|
+:Version: Version |VERSION|
+:Manual section: 1
+
+
+Synopsis
+==========
+
+.. include:: synopsis.rst
+
+
+Description
+============
+
+.. include:: description.rst
+
+
+.. include:: ../pbs-copyright.rst
diff --git a/src/api2.rs b/src/api2.rs
index b7230f75..132e2c2a 100644
--- a/src/api2.rs
+++ b/src/api2.rs
@@ -12,7 +12,7 @@ pub mod version;
 pub mod ping;
 pub mod pull;
 pub mod tape;
-mod helpers;
+pub mod helpers;
 
 use proxmox::api::router::SubdirMap;
 use proxmox::api::Router;
diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
new file mode 100644
index 00000000..f2d2ce3a
--- /dev/null
+++ b/src/bin/proxmox-file-restore.rs
@@ -0,0 +1,342 @@
+use std::ffi::OsStr;
+use std::os::unix::ffi::OsStrExt;
+use std::path::PathBuf;
+use std::sync::Arc;
+
+use anyhow::{bail, format_err, Error};
+use serde_json::Value;
+
+use proxmox::api::{
+    api,
+    cli::{run_cli_command, CliCommand, CliCommandMap, CliEnvironment},
+};
+use pxar::accessor::aio::Accessor;
+
+use proxmox_backup::api2::{helpers, types::ArchiveEntry};
+use proxmox_backup::backup::{
+    decrypt_key, BackupDir, BufferedDynamicReader, CatalogReader, CryptConfig, CryptMode,
+    DirEntryAttribute, IndexFile, LocalDynamicReadAt, CATALOG_NAME,
+};
+use proxmox_backup::client::{BackupReader, RemoteChunkReader};
+use proxmox_backup::pxar::{create_zip, extract_sub_dir};
+use proxmox_backup::tools;
+
+// use "pub" so rust doesn't complain about "unused" functions in the module
+pub mod proxmox_client_tools;
+use proxmox_client_tools::{
+    complete_group_or_snapshot, complete_repository, connect, extract_repository_from_value, key,
+    key::{crypto_parameters, format_key_source},
+    KEYFD_SCHEMA, KEYFILE_SCHEMA, REPO_URL_SCHEMA,
+};
+
+enum ExtractPath {
+    ListArchives,
+    Pxar(String, Vec<u8>),
+}
+
+fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
+    let mut bytes = if base64 {
+        base64::decode(path)?
+    } else {
+        path.into_bytes()
+    };
+
+    if bytes == b"/" {
+        return Ok(ExtractPath::ListArchives);
+    }
+
+    while bytes.len() > 0 && bytes[0] == b'/' {
+        bytes.remove(0);
+    }
+
+    let (file, path) = {
+        let slash_pos = bytes.iter().position(|c| *c == b'/').unwrap_or(bytes.len());
+        let path = bytes.split_off(slash_pos);
+        let file = String::from_utf8(bytes)?;
+        (file, path)
+    };
+
+    if file.ends_with(".pxar.didx") {
+        Ok(ExtractPath::Pxar(file, path))
+    } else {
+        bail!("'{}' is not supported for file-restore", file);
+    }
+}
+
+#[api(
+   input: {
+       properties: {
+           repository: {
+               schema: REPO_URL_SCHEMA,
+               optional: true,
+           },
+           snapshot: {
+               type: String,
+               description: "Group/Snapshot path.",
+           },
+           "path": {
+               description: "Path to restore. Directories will be restored as .zip files.",
+               type: String,
+           },
+           "base64": {
+               type: Boolean,
+               description: "If set, 'path' will be interpreted as base64 encoded.",
+               optional: true,
+               default: false,
+           },
+           keyfile: {
+               schema: KEYFILE_SCHEMA,
+               optional: true,
+           },
+           "keyfd": {
+               schema: KEYFD_SCHEMA,
+               optional: true,
+           },
+           "crypt-mode": {
+               type: CryptMode,
+               optional: true,
+           },
+       }
+   }
+)]
+/// List a directory from a backup snapshot.
+async fn list(param: Value) -> Result<Vec<ArchiveEntry>, Error> {
+    let repo = extract_repository_from_value(&param)?;
+    let base64 = param["base64"].as_bool().unwrap_or(false);
+    let path = parse_path(
+        tools::required_string_param(&param, "path")?.to_string(),
+        base64,
+    )?;
+    let snapshot: BackupDir = tools::required_string_param(&param, "snapshot")?.parse()?;
+
+    let crypto = crypto_parameters(&param)?;
+    let crypt_config = match crypto.enc_key {
+        None => None,
+        Some(ref key) => {
+            let (key, _, _) =
+                decrypt_key(&key.key, &key::get_encryption_key_password).map_err(|err| {
+                    eprintln!("{}", format_key_source(&key.source, "encryption"));
+                    err
+                })?;
+            Some(Arc::new(CryptConfig::new(key)?))
+        }
+    };
+
+    let client = connect(&repo)?;
+    let client = BackupReader::start(
+        client,
+        crypt_config.clone(),
+        repo.store(),
+        &snapshot.group().backup_type(),
+        &snapshot.group().backup_id(),
+        snapshot.backup_time(),
+        true,
+    )
+    .await?;
+
+    let (manifest, _) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
+
+    match path {
+        ExtractPath::ListArchives => {
+            let mut entries = vec![];
+            for file in manifest.files() {
+                match file.filename.rsplitn(2, '.').next().unwrap() {
+                    "didx" => {}
+                    "fidx" => {}
+                    _ => continue, // ignore all non fidx/didx
+                }
+                let path = format!("/{}", file.filename);
+                let attr = DirEntryAttribute::Directory { start: 0 };
+                entries.push(ArchiveEntry::new(path.as_bytes(), &attr));
+            }
+
+            Ok(entries)
+        }
+        ExtractPath::Pxar(file, mut path) => {
+            let index = client
+                .download_dynamic_index(&manifest, CATALOG_NAME)
+                .await?;
+            let most_used = index.find_most_used_chunks(8);
+            let file_info = manifest.lookup_file_info(&CATALOG_NAME)?;
+            let chunk_reader = RemoteChunkReader::new(
+                client.clone(),
+                crypt_config,
+                file_info.chunk_crypt_mode(),
+                most_used,
+            );
+            let reader = BufferedDynamicReader::new(index, chunk_reader);
+            let mut catalog_reader = CatalogReader::new(reader);
+
+            let mut fullpath = file.into_bytes();
+            fullpath.append(&mut path);
+
+            helpers::list_dir_content(&mut catalog_reader, &fullpath)
+        }
+    }
+}
+
+#[api(
+   input: {
+       properties: {
+           repository: {
+               schema: REPO_URL_SCHEMA,
+               optional: true,
+           },
+           snapshot: {
+               type: String,
+               description: "Group/Snapshot path.",
+           },
+           "path": {
+               description: "Path to restore. Directories will be restored as .zip files if extracted to stdout.",
+               type: String,
+           },
+           "base64": {
+               type: Boolean,
+               description: "If set, 'path' will be interpreted as base64 encoded.",
+               optional: true,
+               default: false,
+           },
+           target: {
+               type: String,
+               optional: true,
+               description: "Target directory path. Use '-' to write to standard output.",
+           },
+           keyfile: {
+               schema: KEYFILE_SCHEMA,
+               optional: true,
+           },
+           "keyfd": {
+               schema: KEYFD_SCHEMA,
+               optional: true,
+           },
+           "crypt-mode": {
+               type: CryptMode,
+               optional: true,
+           },
+           verbose: {
+               type: Boolean,
+               description: "Print verbose information",
+               optional: true,
+               default: false,
+           }
+       }
+   }
+)]
+/// Restore files from a backup snapshot.
+async fn extract(param: Value) -> Result<Value, Error> {
+    let repo = extract_repository_from_value(&param)?;
+    let verbose = param["verbose"].as_bool().unwrap_or(false);
+    let base64 = param["base64"].as_bool().unwrap_or(false);
+    let orig_path = tools::required_string_param(&param, "path")?.to_string();
+    let path = parse_path(orig_path.clone(), base64)?;
+
+    let target = match param["target"].as_str() {
+        Some(target) if target == "-" => None,
+        Some(target) => Some(PathBuf::from(target)),
+        None => Some(std::env::current_dir()?),
+    };
+
+    let snapshot: BackupDir = tools::required_string_param(&param, "snapshot")?.parse()?;
+
+    let crypto = crypto_parameters(&param)?;
+    let crypt_config = match crypto.enc_key {
+        None => None,
+        Some(ref key) => {
+            let (key, _, _) =
+                decrypt_key(&key.key, &key::get_encryption_key_password).map_err(|err| {
+                    eprintln!("{}", format_key_source(&key.source, "encryption"));
+                    err
+                })?;
+            Some(Arc::new(CryptConfig::new(key)?))
+        }
+    };
+
+    match path {
+        ExtractPath::Pxar(archive_name, path) => {
+            let client = connect(&repo)?;
+            let client = BackupReader::start(
+                client,
+                crypt_config.clone(),
+                repo.store(),
+                &snapshot.group().backup_type(),
+                &snapshot.group().backup_id(),
+                snapshot.backup_time(),
+                true,
+            )
+            .await?;
+            let (manifest, _) = client.download_manifest().await?;
+            let file_info = manifest.lookup_file_info(&archive_name)?;
+            let index = client
+                .download_dynamic_index(&manifest, &archive_name)
+                .await?;
+            let most_used = index.find_most_used_chunks(8);
+            let chunk_reader = RemoteChunkReader::new(
+                client.clone(),
+                crypt_config,
+                file_info.chunk_crypt_mode(),
+                most_used,
+            );
+            let reader = BufferedDynamicReader::new(index, chunk_reader);
+
+            let archive_size = reader.archive_size();
+            let reader = LocalDynamicReadAt::new(reader);
+            let decoder = Accessor::new(reader, archive_size).await?;
+
+            let root = decoder.open_root().await?;
+            let file = root
+                .lookup(OsStr::from_bytes(&path))
+                .await?
+                .ok_or(format_err!("error opening '{:?}'", path))?;
+
+            if let Some(target) = target {
+                extract_sub_dir(target, decoder, OsStr::from_bytes(&path), verbose).await?;
+            } else {
+                match file.kind() {
+                    pxar::EntryKind::File { .. } => {
+                        tokio::io::copy(&mut file.contents().await?, &mut tokio::io::stdout())
+                            .await?;
+                    }
+                    _ => {
+                        create_zip(
+                            tokio::io::stdout(),
+                            decoder,
+                            OsStr::from_bytes(&path),
+                            verbose,
+                        )
+                        .await?;
+                    }
+                }
+            }
+        }
+        _ => {
+            bail!("cannot extract '{}'", orig_path);
+        }
+    }
+
+    Ok(Value::Null)
+}
+
+fn main() {
+    let list_cmd_def = CliCommand::new(&API_METHOD_LIST)
+        .arg_param(&["snapshot", "path"])
+        .completion_cb("repository", complete_repository)
+        .completion_cb("snapshot", complete_group_or_snapshot);
+
+    let restore_cmd_def = CliCommand::new(&API_METHOD_EXTRACT)
+        .arg_param(&["snapshot", "path", "target"])
+        .completion_cb("repository", complete_repository)
+        .completion_cb("snapshot", complete_group_or_snapshot)
+        .completion_cb("target", tools::complete_file_name);
+
+    let cmd_def = CliCommandMap::new()
+        .insert("list", list_cmd_def)
+        .insert("extract", restore_cmd_def);
+
+    let rpcenv = CliEnvironment::new();
+    run_cli_command(
+        cmd_def,
+        rpcenv,
+        Some(|future| proxmox_backup::tools::runtime::main(future)),
+    );
+}
diff --git a/zsh-completions/_proxmox-file-restore b/zsh-completions/_proxmox-file-restore
new file mode 100644
index 00000000..e2e48c7a
--- /dev/null
+++ b/zsh-completions/_proxmox-file-restore
@@ -0,0 +1,13 @@
+#compdef _proxmox-file-restore() proxmox-file-restore
+
+function _proxmox-file-restore() {
+    local cwords line point cmd curr prev
+    cwords=${#words[@]}
+    line=$words
+    point=${#line}
+    cmd=${words[1]}
+    curr=${words[cwords]}
+    prev=${words[cwords-1]}
+    compadd -- $(COMP_CWORD="$cwords" COMP_LINE="$line" COMP_POINT="$point" \
+        proxmox-file-restore bashcomplete "$cmd" "$curr" "$prev")
+}
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 12/22] file-restore: allow specifying output-format
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (10 preceding siblings ...)
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 11/22] file-restore: add binary and basic commands Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 13/22] rest: implement tower service for UnixStream Stefan Reiter
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

Makes CLI usage more comfortable by not just printing raw JSON to the
terminal.
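
The key mechanism is the now-declared 'returns' schema:
format_and_print_result_full() uses it to type and order the table
columns. Condensed from the hunk below (that get_output_format()
defaults to "text" is an assumption based on the usual proxmox CLI
behavior):

    // condensed sketch: render the entries according to the declared schema
    let output_format = get_output_format(&param);
    format_and_print_result_full(
        &mut json!(result),
        &API_METHOD_LIST.returns,   // schema declared via #[api(returns: ...)]
        &output_format,
        &options,
    );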

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/bin/proxmox-file-restore.rs | 42 +++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 5 deletions(-)

diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index f2d2ce3a..ec3378b0 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -4,11 +4,14 @@ use std::path::PathBuf;
 use std::sync::Arc;
 
 use anyhow::{bail, format_err, Error};
-use serde_json::Value;
+use serde_json::{json, Value};
 
 use proxmox::api::{
     api,
-    cli::{run_cli_command, CliCommand, CliCommandMap, CliEnvironment},
+    cli::{
+        default_table_format_options, format_and_print_result_full, get_output_format,
+        run_cli_command, CliCommand, CliCommandMap, CliEnvironment, ColumnConfig, OUTPUT_FORMAT,
+    },
 };
 use pxar::accessor::aio::Accessor;
 
@@ -96,11 +99,22 @@ fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
                type: CryptMode,
                optional: true,
            },
+           "output-format": {
+               schema: OUTPUT_FORMAT,
+               optional: true,
+           },
+       }
+   },
+   returns: {
+       description: "A list of elements under the given path",
+       type: Array,
+       items: {
+           type: ArchiveEntry,
        }
    }
 )]
 /// List a directory from a backup snapshot.
-async fn list(param: Value) -> Result<Vec<ArchiveEntry>, Error> {
+async fn list(param: Value) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
     let base64 = param["base64"].as_bool().unwrap_or(false);
     let path = parse_path(
@@ -137,7 +151,7 @@ async fn list(param: Value) -> Result<Vec<ArchiveEntry>, Error> {
     let (manifest, _) = client.download_manifest().await?;
     manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
 
-    match path {
+    let result = match path {
         ExtractPath::ListArchives => {
             let mut entries = vec![];
             for file in manifest.files() {
@@ -173,7 +187,25 @@ async fn list(param: Value) -> Result<Vec<ArchiveEntry>, Error> {
 
             helpers::list_dir_content(&mut catalog_reader, &fullpath)
         }
-    }
+    }?;
+
+    let options = default_table_format_options()
+        .sortby("type", false)
+        .sortby("text", false)
+        .column(ColumnConfig::new("type"))
+        .column(ColumnConfig::new("text").header("name"))
+        .column(ColumnConfig::new("mtime").header("last modified"))
+        .column(ColumnConfig::new("size"));
+
+    let output_format = get_output_format(&param);
+    format_and_print_result_full(
+        &mut json!(result),
+        &API_METHOD_LIST.returns,
+        &output_format,
+        &options,
+    );
+
+    Ok(Value::Null)
 }
 
 #[api(
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 13/22] rest: implement tower service for UnixStream
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (11 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 12/22] file-restore: allow specifying output-format Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-17  6:52   ` [pbs-devel] applied: " Dietmar Maurer
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 14/22] client: add VsockClient to connect to virtio-vsock VMs Stefan Reiter
                   ` (9 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

This allows anything that can be represented as a UnixStream to be used
as transport for an API server (e.g. virtio sockets).

A tower service expects an IP address as its peer, which we can't
reliably provide for Unix-socket-based transports, so just fake one.
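
For context, a minimal sketch of how this implementation gets consumed
(this mirrors a later patch in this series; 'receiver_stream' is assumed
to yield freshly accepted tokio UnixStream connections):

    // hyper calls the tower service once per accepted UnixStream
    let acceptor = hyper::server::accept::from_stream(receiver_stream);
    hyper::Server::builder(acceptor).serve(rest_server).await?;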

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/server/rest.rs | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/src/server/rest.rs b/src/server/rest.rs
index fc59be9a..9bf494fd 100644
--- a/src/server/rest.rs
+++ b/src/server/rest.rs
@@ -107,6 +107,26 @@ impl tower_service::Service<&tokio::net::TcpStream> for RestServer {
     }
 }
 
+impl tower_service::Service<&tokio::net::UnixStream> for RestServer {
+    type Response = ApiService;
+    type Error = Error;
+    type Future = Pin<Box<dyn Future<Output = Result<ApiService, Error>> + Send>>;
+
+    fn poll_ready(&mut self, _cx: &mut Context) -> Poll<Result<(), Self::Error>> {
+        Poll::Ready(Ok(()))
+    }
+
+    fn call(&mut self, _ctx: &tokio::net::UnixStream) -> Self::Future {
+        // TODO: Find a way to actually represent the vsock peer in the ApiService struct - for now
+        // it doesn't really matter, so just use a fake IP address
+        let fake_peer = "0.0.0.0:807".parse().unwrap();
+        future::ok(ApiService {
+            peer: fake_peer,
+            api_config: self.api_config.clone()
+        }).boxed()
+    }
+}
+
 pub struct ApiService {
     pub peer: std::net::SocketAddr,
     pub api_config: Arc<ApiConfig>,
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 14/22] client: add VsockClient to connect to virtio-vsock VMs
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (12 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 13/22] rest: implement tower service for UnixStream Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-17  7:24   ` [pbs-devel] applied: " Dietmar Maurer
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server Stefan Reiter
                   ` (8 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

Currently useful only for single file restore, but kept generic enough
to use any compatible API endpoint over a virtio-vsock[0,1] interface.

VsockClient is adapted and slimmed down from HttpClient.

A tower-compatible VsockConnector is implemented, using a wrapped
UnixStream as transport. The UnixStream has to be wrapped in a custom
struct to implement 'Connection'; Async{Read,Write} are simply forwarded
to the underlying stream.

[0] https://www.man7.org/linux/man-pages/man7/vsock.7.html
[1] https://wiki.qemu.org/Features/VirtioVsock
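
A minimal host-side usage sketch (the CID 42 is made up; in practice it
is whatever CID the restore VM was started with, and the 'status' path
refers to the daemon endpoint added later in this series):

    use proxmox_backup::client::{VsockClient, DEFAULT_VSOCK_PORT};

    let client = VsockClient::new(42, DEFAULT_VSOCK_PORT);
    // request_builder() turns this into "vsock://42:807/api2/json/status"
    let status = client.get("api2/json/status", None).await?;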

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/client.rs              |   3 +
 src/client/vsock_client.rs | 259 +++++++++++++++++++++++++++++++++++++
 2 files changed, 262 insertions(+)
 create mode 100644 src/client/vsock_client.rs

diff --git a/src/client.rs b/src/client.rs
index d50c26c2..1eae7dd1 100644
--- a/src/client.rs
+++ b/src/client.rs
@@ -19,6 +19,9 @@ pub mod pipe_to_stream;
 mod http_client;
 pub use http_client::*;
 
+mod vsock_client;
+pub use vsock_client::*;
+
 mod task_log;
 pub use task_log::*;
 
diff --git a/src/client/vsock_client.rs b/src/client/vsock_client.rs
new file mode 100644
index 00000000..ce3f7bc7
--- /dev/null
+++ b/src/client/vsock_client.rs
@@ -0,0 +1,259 @@
+use anyhow::{bail, format_err, Error};
+use futures::*;
+
+use core::task::Context;
+use std::pin::Pin;
+use std::task::Poll;
+
+use http::Uri;
+use http::{Request, Response};
+use hyper::client::connect::{Connected, Connection};
+use hyper::client::Client;
+use hyper::Body;
+use pin_project::pin_project;
+use serde_json::Value;
+use tokio::io::{ReadBuf, AsyncRead, AsyncWrite, AsyncWriteExt};
+use tokio::net::UnixStream;
+
+use crate::tools;
+use proxmox::api::error::HttpError;
+
+/// Ports below 1024 are privileged; this is intentional so only root (on the host) can connect
+pub const DEFAULT_VSOCK_PORT: u16 = 807;
+
+#[derive(Clone)]
+struct VsockConnector;
+
+#[pin_project]
+/// Wrapper around UnixStream so we can implement hyper::client::connect::Connection
+struct UnixConnection {
+    #[pin]
+    stream: UnixStream,
+}
+
+impl tower_service::Service<Uri> for VsockConnector {
+    type Response = UnixConnection;
+    type Error = Error;
+    type Future = Pin<Box<dyn Future<Output = Result<UnixConnection, Error>> + Send>>;
+
+    fn poll_ready(&mut self, _cx: &mut task::Context<'_>) -> Poll<Result<(), Self::Error>> {
+        Poll::Ready(Ok(()))
+    }
+
+    fn call(&mut self, dst: Uri) -> Self::Future {
+        use nix::sys::socket::*;
+        use std::os::unix::io::FromRawFd;
+
+        // connect can block, so run in a blocking task (though in reality it seems to immediately
+        // return with either ENODEV or ETIMEDOUT in case of error)
+        tokio::task::spawn_blocking(move || {
+            if dst.scheme_str().unwrap_or_default() != "vsock" {
+                bail!("invalid URI (scheme) for vsock connector: {}", dst);
+            }
+
+            let cid = match dst.host() {
+                Some(host) => host.parse().map_err(|err| {
+                    format_err!(
+                        "invalid URI (host not a number) for vsock connector: {} ({})",
+                        dst,
+                        err
+                    )
+                })?,
+                None => bail!("invalid URI (no host) for vsock connector: {}", dst),
+            };
+
+            let port = match dst.port_u16() {
+                Some(port) => port,
+                None => bail!("invalid URI (bad port) for vsock connector: {}", dst),
+            };
+
+            let sock_fd = socket(
+                AddressFamily::Vsock,
+                SockType::Stream,
+                SockFlag::empty(),
+                None,
+            )?;
+
+            let sock_addr = VsockAddr::new(cid, port as u32);
+            connect(sock_fd, &SockAddr::Vsock(sock_addr))?;
+
+            // connect sync, but set nonblock after (tokio requires it)
+            let std_stream = unsafe { std::os::unix::net::UnixStream::from_raw_fd(sock_fd) };
+            std_stream.set_nonblocking(true)?;
+
+            let stream = tokio::net::UnixStream::from_std(std_stream)?;
+            let connection = UnixConnection { stream };
+
+            Ok(connection)
+        })
+        // unravel the thread JoinHandle to a usable future
+        .map(|res| match res {
+            Ok(res) => res,
+            Err(err) => Err(format_err!("thread join error on vsock connect: {}", err)),
+        })
+        .boxed()
+    }
+}
+
+impl Connection for UnixConnection {
+    fn connected(&self) -> Connected {
+        Connected::new()
+    }
+}
+
+impl AsyncRead for UnixConnection {
+    fn poll_read(
+        self: Pin<&mut Self>,
+        cx: &mut Context<'_>,
+        buf: &mut ReadBuf,
+    ) -> Poll<Result<(), std::io::Error>> {
+        let this = self.project();
+        this.stream.poll_read(cx, buf)
+    }
+}
+
+impl AsyncWrite for UnixConnection {
+    fn poll_write(
+        self: Pin<&mut Self>,
+        cx: &mut Context<'_>,
+        buf: &[u8],
+    ) -> Poll<tokio::io::Result<usize>> {
+        let this = self.project();
+        this.stream.poll_write(cx, buf)
+    }
+
+    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<tokio::io::Result<()>> {
+        let this = self.project();
+        this.stream.poll_flush(cx)
+    }
+
+    fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<tokio::io::Result<()>> {
+        let this = self.project();
+        this.stream.poll_shutdown(cx)
+    }
+}
+
+/// Slimmed down version of HttpClient for virtio-vsock connections (file restore daemon)
+pub struct VsockClient {
+    client: Client<VsockConnector>,
+    cid: i32,
+    port: u16,
+}
+
+impl VsockClient {
+    pub fn new(cid: i32, port: u16) -> Self {
+        let conn = VsockConnector {};
+        let client = Client::builder().build::<_, Body>(conn);
+        Self { client, cid, port }
+    }
+
+    pub async fn get(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
+        let req = Self::request_builder(self.cid, self.port, "GET", path, data)?;
+        self.api_request(req).await
+    }
+
+    pub async fn post(&mut self, path: &str, data: Option<Value>) -> Result<Value, Error> {
+        let req = Self::request_builder(self.cid, self.port, "POST", path, data)?;
+        self.api_request(req).await
+    }
+
+    pub async fn download(
+        &mut self,
+        path: &str,
+        data: Option<Value>,
+        output: &mut (dyn AsyncWrite + Send + Unpin),
+    ) -> Result<(), Error> {
+        let req = Self::request_builder(self.cid, self.port, "GET", path, data)?;
+
+        let client = self.client.clone();
+
+        let resp = client.request(req)
+            .await
+            .map_err(|_| format_err!("vsock download request timed out"))?;
+        let status = resp.status();
+        if !status.is_success() {
+            Self::api_response(resp)
+                .await
+                .map(|_| ())?
+        } else {
+            resp.into_body()
+                .map_err(Error::from)
+                .try_fold(output, move |acc, chunk| async move {
+                    acc.write_all(&chunk).await?;
+                    Ok::<_, Error>(acc)
+                })
+                .await?;
+        }
+        Ok(())
+    }
+
+    async fn api_response(response: Response<Body>) -> Result<Value, Error> {
+        let status = response.status();
+        let data = hyper::body::to_bytes(response.into_body()).await?;
+
+        let text = String::from_utf8_lossy(&data).to_string();
+        if status.is_success() {
+            if text.is_empty() {
+                Ok(Value::Null)
+            } else {
+                let value: Value = serde_json::from_str(&text)?;
+                Ok(value)
+            }
+        } else {
+            Err(Error::from(HttpError::new(status, text)))
+        }
+    }
+
+    async fn api_request(&self, req: Request<Body>) -> Result<Value, Error> {
+        self.client
+            .request(req)
+            .map_err(Error::from)
+            .and_then(Self::api_response)
+            .await
+    }
+
+    pub fn request_builder(
+        cid: i32,
+        port: u16,
+        method: &str,
+        path: &str,
+        data: Option<Value>,
+    ) -> Result<Request<Body>, Error> {
+        let path = path.trim_matches('/');
+        let url: Uri = format!("vsock://{}:{}/{}", cid, port, path).parse()?;
+
+        if let Some(data) = data {
+            if method == "POST" {
+                let request = Request::builder()
+                    .method(method)
+                    .uri(url)
+                    .header(hyper::header::CONTENT_TYPE, "application/json")
+                    .body(Body::from(data.to_string()))?;
+                return Ok(request);
+            } else {
+                let query = tools::json_object_to_query(data)?;
+                let url: Uri = format!("vsock://{}:{}/{}?{}", cid, port, path, query).parse()?;
+                let request = Request::builder()
+                    .method(method)
+                    .uri(url)
+                    .header(
+                        hyper::header::CONTENT_TYPE,
+                        "application/x-www-form-urlencoded",
+                    )
+                    .body(Body::empty())?;
+                return Ok(request);
+            }
+        }
+
+        let request = Request::builder()
+            .method(method)
+            .uri(url)
+            .header(
+                hyper::header::CONTENT_TYPE,
+                "application/x-www-form-urlencoded",
+            )
+            .body(Body::empty())?;
+
+        Ok(request)
+    }
+}
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (13 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 14/22] client: add VsockClient to connect to virtio-vsock VMs Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-17 10:17   ` Dietmar Maurer
                     ` (2 more replies)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 16/22] file-restore-daemon: add watchdog module Stefan Reiter
                   ` (7 subsequent siblings)
  22 siblings, 3 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

Implements the base of a small daemon to run within a file-restore VM.

The binary spawns an API server on a virtio-vsock socket, listening for
connections from the host. This happens mostly manually via the standard
Unix socket API, since tokio/hyper do not have support for vsock built
in. Once we have the accept'ed file descriptor, we can create a
UnixStream and use our tower service implementation for that.

The binary is deliberately not installed in the usual $PATH location,
since it shouldn't be executed on the host by a user anyway.

For now, only one simple API call ('status') is implemented, to
demonstrate and test proxmox::api functionality.
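
For illustration, a successful call is expected to look roughly like
this (a sketch; the '/api2/json' prefix and the '{"data": ...}' envelope
follow the usual proxmox REST conventions and are assumptions here, not
spelled out by this patch):

    // GET vsock://<cid>:807/api2/json/status
    // -> {"data": {"uptime": 42}}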

Since the REST server implementation uses the log!() macro, we can
redirect its output to stdout by registering env_logger as the logging
target. env_logger is already in our dependency tree via zstd/bindgen.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 Cargo.toml                            |   1 +
 Makefile                              |   9 ++-
 debian/control                        |   1 +
 debian/proxmox-backup-client.install  |   1 +
 src/api2/types/file_restore.rs        |  12 +++
 src/api2/types/mod.rs                 |   3 +
 src/bin/proxmox-restore-daemon.rs     | 104 ++++++++++++++++++++++++++
 src/bin/proxmox_restore_daemon/api.rs |  45 +++++++++++
 src/bin/proxmox_restore_daemon/mod.rs |   3 +
 9 files changed, 178 insertions(+), 1 deletion(-)
 create mode 100644 src/api2/types/file_restore.rs
 create mode 100644 src/bin/proxmox-restore-daemon.rs
 create mode 100644 src/bin/proxmox_restore_daemon/api.rs
 create mode 100644 src/bin/proxmox_restore_daemon/mod.rs

diff --git a/Cargo.toml b/Cargo.toml
index 28ca8e64..de42c2ff 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -29,6 +29,7 @@ bitflags = "1.2.1"
 bytes = "1.0"
 crc32fast = "1"
 endian_trait = { version = "0.6", features = ["arrays"] }
+env_logger = "0.7"
 anyhow = "1.0"
 futures = "0.3"
 h2 = { version = "0.3", features = [ "stream" ] }
diff --git a/Makefile b/Makefile
index 3b865083..f177e79d 100644
--- a/Makefile
+++ b/Makefile
@@ -25,6 +25,10 @@ SERVICE_BIN := \
 	proxmox-backup-proxy \
 	proxmox-daily-update
 
+# Single file restore daemon
+RESTORE_BIN := \
+	proxmox-restore-daemon
+
 ifeq ($(BUILD_MODE), release)
 CARGO_BUILD_ARGS += --release
 COMPILEDIR := target/release
@@ -39,7 +43,7 @@ endif
 CARGO ?= cargo
 
 COMPILED_BINS := \
-	$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN))
+	$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN) $(RESTORE_BIN))
 
 export DEB_VERSION DEB_VERSION_UPSTREAM
 
@@ -151,6 +155,9 @@ install: $(COMPILED_BINS)
 	install -m4755 -o root -g root $(COMPILEDIR)/sg-tape-cmd $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/sg-tape-cmd
 	$(foreach i,$(SERVICE_BIN), \
 	    install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/ ;)
+	install -dm755 $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/file-restore
+	$(foreach i,$(RESTORE_BIN), \
+	    install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/file-restore/ ;)
 	$(MAKE) -C www install
 	$(MAKE) -C docs install
 
diff --git a/debian/control b/debian/control
index 57d47a85..f4d81732 100644
--- a/debian/control
+++ b/debian/control
@@ -15,6 +15,7 @@ Build-Depends: debhelper (>= 11),
  librust-crossbeam-channel-0.5+default-dev,
  librust-endian-trait-0.6+arrays-dev,
  librust-endian-trait-0.6+default-dev,
+ librust-env-logger-0.7+default-dev,
  librust-futures-0.3+default-dev,
  librust-h2-0.3+default-dev,
  librust-h2-0.3+stream-dev,
diff --git a/debian/proxmox-backup-client.install b/debian/proxmox-backup-client.install
index 74b568f1..b203f152 100644
--- a/debian/proxmox-backup-client.install
+++ b/debian/proxmox-backup-client.install
@@ -1,5 +1,6 @@
 usr/bin/proxmox-backup-client
 usr/bin/pxar
+usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/proxmox-restore-daemon
 usr/share/man/man1/proxmox-backup-client.1
 usr/share/man/man1/pxar.1
 usr/share/zsh/vendor-completions/_proxmox-backup-client
diff --git a/src/api2/types/file_restore.rs b/src/api2/types/file_restore.rs
new file mode 100644
index 00000000..cd8df16a
--- /dev/null
+++ b/src/api2/types/file_restore.rs
@@ -0,0 +1,12 @@
+use serde::{Deserialize, Serialize};
+use proxmox::api::api;
+
+#[api()]
+#[derive(Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+/// General status information about a running VM file-restore daemon
+pub struct RestoreDaemonStatus {
+    /// VM uptime in seconds
+    pub uptime: i64,
+}
+
diff --git a/src/api2/types/mod.rs b/src/api2/types/mod.rs
index 4c663335..763b86fd 100644
--- a/src/api2/types/mod.rs
+++ b/src/api2/types/mod.rs
@@ -34,6 +34,9 @@ pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GRO
 mod tape;
 pub use tape::*;
 
+mod file_restore;
+pub use file_restore::*;
+
 // File names: may not contain slashes, may not start with "."
 pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
     if name.starts_with('.') {
diff --git a/src/bin/proxmox-restore-daemon.rs b/src/bin/proxmox-restore-daemon.rs
new file mode 100644
index 00000000..1ec90794
--- /dev/null
+++ b/src/bin/proxmox-restore-daemon.rs
@@ -0,0 +1,104 @@
+//! Daemon binary to run inside a micro-VM for secure single file restore of disk images
+use anyhow::{bail, Error};
+use log::error;
+
+use std::os::unix::{
+    io::{FromRawFd, RawFd},
+    net,
+};
+use std::path::Path;
+
+use tokio::sync::mpsc;
+use tokio_stream::wrappers::ReceiverStream;
+
+use proxmox::api::RpcEnvironmentType;
+use proxmox_backup::client::DEFAULT_VSOCK_PORT;
+use proxmox_backup::server::{rest::*, ApiConfig};
+
+mod proxmox_restore_daemon;
+use proxmox_restore_daemon::*;
+
+/// Maximum number of pending requests. If saturated, virtio-vsock returns ETIMEDOUT immediately.
+/// We should never have more than a few requests in the queue, so use a low number.
+pub const MAX_PENDING: usize = 32;
+
+/// Will be present in base initramfs
+pub const VM_DETECT_FILE: &str = "/restore-vm-marker";
+
+/// This is expected to be run by 'proxmox-file-restore' within a mini-VM
+fn main() -> Result<(), Error> {
+    if !Path::new(VM_DETECT_FILE).exists() {
+        bail!(concat!(
+            "This binary is not supposed to be run manually. ",
+            "Please use 'proxmox-file-restore' instead."
+        ));
+    }
+
+    // don't have a real syslog (and no persistence), so use env_logger to print to a log file (via
+    // stdout to a serial terminal attached by QEMU)
+    env_logger::from_env(env_logger::Env::default().default_filter_or("info"))
+        .write_style(env_logger::WriteStyle::Never)
+        .init();
+
+    proxmox_backup::tools::runtime::main(run())
+}
+
+async fn run() -> Result<(), Error> {
+    let config = ApiConfig::new("", &ROUTER, RpcEnvironmentType::PUBLIC)?;
+    let rest_server = RestServer::new(config);
+
+    let vsock_fd = get_vsock_fd()?;
+    let connections = accept_vsock_connections(vsock_fd);
+    let receiver_stream = ReceiverStream::new(connections);
+    let acceptor = hyper::server::accept::from_stream(receiver_stream);
+
+    hyper::Server::builder(acceptor).serve(rest_server).await?;
+
+    bail!("hyper server exited");
+}
+
+fn accept_vsock_connections(
+    vsock_fd: RawFd,
+) -> mpsc::Receiver<Result<tokio::net::UnixStream, Error>> {
+    use nix::sys::socket::*;
+    let (sender, receiver) = mpsc::channel(MAX_PENDING);
+
+    tokio::spawn(async move {
+        loop {
+            let stream: Result<tokio::net::UnixStream, Error> = tokio::task::block_in_place(|| {
+                // we need to accept manually, as UnixListener aborts if socket type != AF_UNIX ...
+                let client_fd = accept(vsock_fd)?;
+                let stream = unsafe { net::UnixStream::from_raw_fd(client_fd) };
+                stream.set_nonblocking(true)?;
+                tokio::net::UnixStream::from_std(stream).map_err(|err| err.into())
+            });
+
+            match stream {
+                Ok(stream) => {
+                    if sender.send(Ok(stream)).await.is_err() {
+                        error!("connection accept channel was closed");
+                    }
+                }
+                Err(err) => {
+                    error!("error accepting vsock connection: {}", err);
+                }
+            }
+        }
+    });
+
+    receiver
+}
+
+fn get_vsock_fd() -> Result<RawFd, Error> {
+    use nix::sys::socket::*;
+    let sock_fd = socket(
+        AddressFamily::Vsock,
+        SockType::Stream,
+        SockFlag::empty(),
+        None,
+    )?;
+    let sock_addr = VsockAddr::new(libc::VMADDR_CID_ANY, DEFAULT_VSOCK_PORT as u32);
+    bind(sock_fd, &SockAddr::Vsock(sock_addr))?;
+    listen(sock_fd, MAX_PENDING)?;
+    Ok(sock_fd)
+}
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
new file mode 100644
index 00000000..3c642aaf
--- /dev/null
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -0,0 +1,45 @@
+//! File-restore API running inside the restore VM
+use anyhow::Error;
+use serde_json::Value;
+use std::fs;
+
+use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment, SubdirMap};
+use proxmox::list_subdirs_api_method;
+
+use proxmox_backup::api2::types::*;
+
+// NOTE: All API endpoints must have Permission::World, as the configs for authentication do not
+// exist within the restore VM. Safety is guaranteed since we use a low port, so only root on the
+// host can contact us - and there the proxmox-backup-client validates permissions already.
+
+const SUBDIRS: SubdirMap = &[("status", &Router::new().get(&API_METHOD_STATUS))];
+
+pub const ROUTER: Router = Router::new()
+    .get(&list_subdirs_api_method!(SUBDIRS))
+    .subdirs(SUBDIRS);
+
+fn read_uptime() -> Result<f32, Error> {
+    let uptime = fs::read_to_string("/proc/uptime")?;
+    // unwrap the Option; if /proc/uptime is empty we have bigger problems
+    Ok(uptime.split_ascii_whitespace().next().unwrap().parse()?)
+}
+
+#[api(
+    access: {
+        description: "Permissions are handled outside restore VM.",
+        permission: &Permission::World,
+    },
+    returns: {
+        type: RestoreDaemonStatus,
+    }
+)]
+/// General status information
+fn status(
+    _param: Value,
+    _info: &ApiMethod,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<RestoreDaemonStatus, Error> {
+    Ok(RestoreDaemonStatus {
+        uptime: read_uptime()? as i64,
+    })
+}
diff --git a/src/bin/proxmox_restore_daemon/mod.rs b/src/bin/proxmox_restore_daemon/mod.rs
new file mode 100644
index 00000000..d938a5bb
--- /dev/null
+++ b/src/bin/proxmox_restore_daemon/mod.rs
@@ -0,0 +1,3 @@
+//! File restore VM related functionality
+mod api;
+pub use api::*;
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 16/22] file-restore-daemon: add watchdog module
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (14 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-17 10:52   ` Wolfgang Bumiller
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 17/22] file-restore-daemon: add disk module Stefan Reiter
                   ` (6 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

Add a watchdog that will automatically shut down the VM after 10
minutes if no API call is received.

This is handled using the Unix 'alarm' syscall.
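
A sketch of the intended keepalive semantics, using made-up timestamps
and the TIMEOUT of 600 seconds from the patch:

    // t=0:   watchdog_init() -> watchdog_ping(): alarm(600), expiry at t=600
    // t=100: connection accepted -> watchdog_ping(): alarm(600), expiry
    //        pushed to t=700 (TRIGGERED=100, LAST_TRIGGERED=0)
    // t=100: handler sees 'keep-timeout' -> watchdog_undo_ping():
    //        remaining = 600 - (100 - 0) = 500 -> alarm(500), so the expiry
    //        is back at t=600, as if the call had never happened
    // t=600: no further API call -> SIGALRM -> alarm_handler() powers off the VM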

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/api2/types/file_restore.rs             |  3 ++
 src/bin/proxmox-restore-daemon.rs          |  5 ++
 src/bin/proxmox_restore_daemon/api.rs      | 22 ++++++--
 src/bin/proxmox_restore_daemon/mod.rs      |  3 ++
 src/bin/proxmox_restore_daemon/watchdog.rs | 63 ++++++++++++++++++++++
 5 files changed, 91 insertions(+), 5 deletions(-)
 create mode 100644 src/bin/proxmox_restore_daemon/watchdog.rs

diff --git a/src/api2/types/file_restore.rs b/src/api2/types/file_restore.rs
index cd8df16a..710c6d83 100644
--- a/src/api2/types/file_restore.rs
+++ b/src/api2/types/file_restore.rs
@@ -8,5 +8,8 @@ use proxmox::api::api;
 pub struct RestoreDaemonStatus {
     /// VM uptime in seconds
     pub uptime: i64,
+    /// time left until auto-shutdown; keep in mind that this is inaccurate when 'keep-timeout' is
+    /// not set, as the status call itself will then have reset the timer
+    pub timeout: i64,
 }
 
diff --git a/src/bin/proxmox-restore-daemon.rs b/src/bin/proxmox-restore-daemon.rs
index 1ec90794..d30da563 100644
--- a/src/bin/proxmox-restore-daemon.rs
+++ b/src/bin/proxmox-restore-daemon.rs
@@ -40,6 +40,9 @@ fn main() -> Result<(), Error> {
         .write_style(env_logger::WriteStyle::Never)
         .init();
 
+    // start watchdog, failure is a critical error as it leads to a scenario where we never exit
+    watchdog_init()?;
+
     proxmox_backup::tools::runtime::main(run())
 }
 
@@ -77,6 +80,8 @@ fn accept_vsock_connections(
                 Ok(stream) => {
                     if sender.send(Ok(stream)).await.is_err() {
                         error!("connection accept channel was closed");
+                    } else {
+                        watchdog_ping();
                     }
                 }
                 Err(err) => {
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
index 3c642aaf..8eb727df 100644
--- a/src/bin/proxmox_restore_daemon/api.rs
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -8,6 +8,8 @@ use proxmox::list_subdirs_api_method;
 
 use proxmox_backup::api2::types::*;
 
+use super::{watchdog_remaining, watchdog_undo_ping};
+
 // NOTE: All API endpoints must have Permission::World, as the configs for authentication do not
 // exist within the restore VM. Safety is guaranteed since we use a low port, so only root on the
 // host can contact us - and there the proxmox-backup-client validates permissions already.
@@ -25,6 +27,16 @@ fn read_uptime() -> Result<f32, Error> {
 }
 
 #[api(
+    input: {
+        properties: {
+            "keep-timeout": {
+                type: bool,
+                description: "If true, do not reset the watchdog timer on this API call.",
+                default: false,
+                optional: true,
+            },
+        },
+    },
     access: {
         description: "Permissions are handled outside restore VM.",
         permission: &Permission::World,
@@ -34,12 +46,12 @@ fn read_uptime() -> Result<f32, Error> {
     }
 )]
 /// General status information
-fn status(
-    _param: Value,
-    _info: &ApiMethod,
-    _rpcenv: &mut dyn RpcEnvironment,
-) -> Result<RestoreDaemonStatus, Error> {
+fn status(keep_timeout: bool) -> Result<RestoreDaemonStatus, Error> {
+    if keep_timeout {
+        watchdog_undo_ping();
+    }
     Ok(RestoreDaemonStatus {
         uptime: read_uptime()? as i64,
+        timeout: watchdog_remaining(false),
     })
 }
diff --git a/src/bin/proxmox_restore_daemon/mod.rs b/src/bin/proxmox_restore_daemon/mod.rs
index d938a5bb..6802d31c 100644
--- a/src/bin/proxmox_restore_daemon/mod.rs
+++ b/src/bin/proxmox_restore_daemon/mod.rs
@@ -1,3 +1,6 @@
 //! File restore VM related functionality
 mod api;
 pub use api::*;
+
+mod watchdog;
+pub use watchdog::*;
diff --git a/src/bin/proxmox_restore_daemon/watchdog.rs b/src/bin/proxmox_restore_daemon/watchdog.rs
new file mode 100644
index 00000000..f722be0b
--- /dev/null
+++ b/src/bin/proxmox_restore_daemon/watchdog.rs
@@ -0,0 +1,63 @@
+//! SIGALRM/alarm(1) based watchdog that shuts down the VM if not pinged for TIMEOUT
+use anyhow::Error;
+use std::sync::atomic::{AtomicI64, Ordering};
+
+use nix::sys::{reboot, signal::*};
+use nix::unistd::alarm;
+
+const TIMEOUT: u32 = 600; // seconds
+static TRIGGERED: AtomicI64 = AtomicI64::new(0);
+static LAST_TRIGGERED: AtomicI64 = AtomicI64::new(0);
+
+/// Handler is called when alarm-watchdog expires, immediately shuts down VM when triggered
+extern "C" fn alarm_handler(_signal: nix::libc::c_int) {
+    // use println! instead of log, since log might buffer and not print before shut down
+    println!("Watchdog expired, shutting down VM...");
+    let err = reboot::reboot(reboot::RebootMode::RB_POWER_OFF).unwrap_err();
+    println!("'reboot' syscall failed: {}", err);
+    std::process::exit(1);
+}
+
+/// Initialize alarm() based watchdog
+pub fn watchdog_init() -> Result<(), Error> {
+    unsafe {
+        sigaction(
+            Signal::SIGALRM,
+            &SigAction::new(
+                SigHandler::Handler(alarm_handler),
+                SaFlags::empty(),
+                SigSet::empty(),
+            ),
+        )?;
+    }
+
+    watchdog_ping();
+
+    Ok(())
+}
+
+/// Trigger watchdog keepalive
+pub fn watchdog_ping() {
+    alarm::set(TIMEOUT);
+    let cur_time = proxmox::tools::time::epoch_i64();
+    let last = TRIGGERED.swap(cur_time, Ordering::SeqCst);
+    LAST_TRIGGERED.store(last, Ordering::SeqCst);
+}
+
+/// Returns the remaining time in seconds before watchdog expiry if 'current' is true; otherwise,
+/// returns the remaining time as it was before the last ping (which is probably what you want in
+/// the API, as from an API call 'current'=true will *always* return TIMEOUT)
+pub fn watchdog_remaining(current: bool) -> i64 {
+    let cur_time = proxmox::tools::time::epoch_i64();
+    let last_time = (if current { &TRIGGERED } else { &LAST_TRIGGERED }).load(Ordering::SeqCst);
+    TIMEOUT as i64 - (cur_time - last_time)
+}
+
+/// Undo the last watchdog ping and set the timer back to its previous state; call this from the
+/// API to fake a non-resetting call
+pub fn watchdog_undo_ping() {
+    let set = watchdog_remaining(false);
+    TRIGGERED.store(LAST_TRIGGERED.load(Ordering::SeqCst), Ordering::SeqCst);
+    // make sure argument cannot be 0, as that would cancel any alarm
+    alarm::set(1.max(set) as u32);
+}
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 17/22] file-restore-daemon: add disk module
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (15 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 16/22] file-restore-daemon: add watchdog module Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 18/22] file-restore: add basic VM/block device support Stefan Reiter
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

Includes functionality for scanning and referring to partitions on
attached disks (i.e. snapshot images).

The structure is fairly modular, so adding ZFS/LVM/etc. support in the
future should be easy.

The path is encoded as "/disk/bucket/component/path/to/file", e.g.
"/drive-scsi0/part/0/etc/passwd". See the comments for further
explanations on the design.
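
As a worked example, the path above resolves as follows (a sketch; the
names refer to the types and functions added below):

    // "/drive-scsi0/part/0/etc/passwd"
    //  disk      = "drive-scsi0" -> fidx file name, i.e. one attached drive
    //  bucket    = "part"        -> Bucket::Partition (see Bucket::type_string())
    //  component = "0"           -> partition number, matched in Bucket::filter_mut()
    //  path      = "etc/passwd"  -> resolved relative to the mountpoint
    //                               returned by Filesystems::ensure_mounted()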

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/bin/proxmox-restore-daemon.rs      |  15 ++
 src/bin/proxmox_restore_daemon/disk.rs | 341 +++++++++++++++++++++++++
 src/bin/proxmox_restore_daemon/mod.rs  |   3 +
 3 files changed, 359 insertions(+)
 create mode 100644 src/bin/proxmox_restore_daemon/disk.rs

diff --git a/src/bin/proxmox-restore-daemon.rs b/src/bin/proxmox-restore-daemon.rs
index d30da563..987723ed 100644
--- a/src/bin/proxmox-restore-daemon.rs
+++ b/src/bin/proxmox-restore-daemon.rs
@@ -1,12 +1,14 @@
 //! Daemon binary to run inside a micro-VM for secure single file restore of disk images
 use anyhow::{bail, Error};
 use log::error;
+use lazy_static::lazy_static;
 
 use std::os::unix::{
     io::{FromRawFd, RawFd},
     net,
 };
 use std::path::Path;
+use std::sync::{Arc, Mutex};
 
 use tokio::sync::mpsc;
 use tokio_stream::wrappers::ReceiverStream;
@@ -25,6 +27,13 @@ pub const MAX_PENDING: usize = 32;
 /// Will be present in base initramfs
 pub const VM_DETECT_FILE: &str = "/restore-vm-marker";
 
+lazy_static! {
+    /// The current disks state. Use for accessing data on the attached snapshots.
+    pub static ref DISK_STATE: Arc<Mutex<DiskState>> = {
+        Arc::new(Mutex::new(DiskState::scan().unwrap()))
+    };
+}
+
 /// This is expected to be run by 'proxmox-file-restore' within a mini-VM
 fn main() -> Result<(), Error> {
     if !Path::new(VM_DETECT_FILE).exists() {
@@ -43,6 +52,12 @@ fn main() -> Result<(), Error> {
     // start watchdog, failure is a critical error as it leads to a scenario where we never exit
     watchdog_init()?;
 
+    // scan all attached disks now, before starting the API
+    // this will panic and stop the VM if anything goes wrong
+    {
+        let _disk_state = DISK_STATE.lock().unwrap();
+    }
+
     proxmox_backup::tools::runtime::main(run())
 }
 
diff --git a/src/bin/proxmox_restore_daemon/disk.rs b/src/bin/proxmox_restore_daemon/disk.rs
new file mode 100644
index 00000000..941a9a43
--- /dev/null
+++ b/src/bin/proxmox_restore_daemon/disk.rs
@@ -0,0 +1,341 @@
+//! Low-level disk (image) access functions for file restore VMs.
+use anyhow::{bail, format_err, Error};
+use log::{info, warn};
+use lazy_static::lazy_static;
+
+use std::collections::HashMap;
+use std::fs::{File, create_dir_all};
+use std::io::{BufRead, BufReader};
+use std::path::{Component, Path, PathBuf};
+
+use proxmox::const_regex;
+use proxmox::tools::fs;
+use proxmox_backup::api2::types::BLOCKDEVICE_NAME_REGEX;
+
+const_regex! {
+    VIRTIO_PART_REGEX = r"^vd[a-z]+(\d+)$";
+}
+
+lazy_static! {
+    static ref FS_OPT_MAP: HashMap<&'static str, &'static str> = {
+        let mut m = HashMap::new();
+
+        // otherwise ext complains about mounting read-only
+        m.insert("ext2", "noload");
+        m.insert("ext3", "noload");
+        m.insert("ext4", "noload");
+
+        // ufs2 has been the default since FreeBSD 5.0 (released in 2003), so let's assume that
+        // whatever the user is trying to restore is not using anything older...
+        m.insert("ufs", "ufstype=ufs2");
+
+        m
+    };
+}
+
+pub enum ResolveResult {
+    Path(PathBuf),
+    BucketTypes(Vec<&'static str>),
+    BucketComponents(Vec<String>),
+}
+
+struct PartitionBucketData {
+    dev_node: String,
+    number: i32,
+    mountpoint: Option<PathBuf>,
+}
+
+/// A "Bucket" represents a mapping found on a disk, e.g. a partition, a zfs dataset or an LV. A
+/// uniquely identifying path to a file then consists of four components:
+/// "/disk/bucket/component/path"
+/// where
+///   disk: fidx file name
+///   bucket: bucket type
+///   component: identifier of the specific bucket
+///   path: relative path of the file on the filesystem indicated by the other parts, may contain
+///         more subdirectories
+/// e.g.: "/drive-scsi0/part/0/etc/passwd"
+enum Bucket {
+    Partition(PartitionBucketData),
+}
+
+impl Bucket {
+    fn filter_mut<'a, A: AsRef<str>, B: AsRef<str>>(
+        haystack: &'a mut Vec<Bucket>,
+        typ: A,
+        comp: B,
+    ) -> Option<&'a mut Bucket> {
+        let typ = typ.as_ref();
+        let comp = comp.as_ref();
+        haystack
+            .iter_mut()
+            .find(|b| match b {
+                Bucket::Partition(data) => {
+                    typ == "part" && comp.parse::<i32>().unwrap() == data.number
+                }
+            })
+    }
+
+    fn type_string(&self) -> &'static str {
+        match self {
+            Bucket::Partition(_) => "part",
+        }
+    }
+
+    fn component_string(&self) -> String {
+        match self {
+            Bucket::Partition(data) => data.number.to_string(),
+        }
+    }
+}
+
+/// Functions related to the local filesystem. This mostly exists so we can use 'supported_fs' in
+/// try_mount while a Bucket is still mutably borrowed from DiskState.
+struct Filesystems {
+    supported_fs: Vec<String>,
+}
+
+impl Filesystems {
+    fn scan() -> Result<Self, Error> {
+        // detect kernel supported filesystems
+        let mut supported_fs = Vec::new();
+        for f in BufReader::new(File::open("/proc/filesystems")?).lines() {
+            if let Ok(f) = f {
+                // ZFS is treated specially, don't attempt to do a regular mount with it
+                if !f.starts_with("nodev") && f != "zfs" {
+                    supported_fs.push(f.trim().to_owned());
+                }
+            }
+        }
+
+        Ok(Self { supported_fs })
+    }
+
+    fn ensure_mounted(&self, bucket: &mut Bucket) -> Result<PathBuf, Error> {
+        match bucket {
+            Bucket::Partition(data) => {
+                // regular data partition à la "/dev/vdxN"
+                if let Some(mp) = &data.mountpoint {
+                    return Ok(mp.clone());
+                }
+
+                let mp = format!("/mnt{}/", data.dev_node);
+                self.try_mount(&data.dev_node, &mp)?;
+                let mp = PathBuf::from(mp);
+                data.mountpoint = Some(mp.clone());
+                Ok(mp)
+            }
+        }
+    }
+
+    fn try_mount(&self, source: &str, target: &str) -> Result<(), Error> {
+        use nix::mount::*;
+
+        create_dir_all(target)?;
+
+        // try all supported fs until one works - this is the way Busybox's 'mount' does it too:
+        // https://git.busybox.net/busybox/tree/util-linux/mount.c?id=808d93c0eca49e0b22056e23d965f0d967433fbb#n2152
+        // note that ZFS is intentionally left out (see scan())
+        let flags =
+            MsFlags::MS_RDONLY | MsFlags::MS_NOEXEC | MsFlags::MS_NOSUID | MsFlags::MS_NODEV;
+        for fs in &self.supported_fs {
+            let fs: &str = fs.as_ref();
+            let opts = FS_OPT_MAP.get(fs).copied();
+            match mount(Some(source), target, Some(fs), flags, opts) {
+                Ok(()) => {
+                    info!("mounting '{}' succeeded, fstype: '{}'", source, fs);
+                    return Ok(());
+                },
+                Err(err) => {
+                    warn!("mount error on '{}' ({}) - {}", source, fs, err);
+                }
+            }
+        }
+
+        bail!("all mounts failed or no supported file system")
+    }
+}
+
+pub struct DiskState {
+    filesystems: Filesystems,
+    disk_map: HashMap<String, Vec<Bucket>>,
+}
+
+impl DiskState {
+    /// Scan all disks for supported buckets.
+    pub fn scan() -> Result<Self, Error> {
+        // create mapping for virtio drives and .fidx files (via serial description)
+        // note: disks::DiskManager relies on udev, which we don't have
+        let mut disk_map = HashMap::new();
+        for entry in proxmox_backup::tools::fs::scan_subdir(
+            libc::AT_FDCWD,
+            "/sys/block",
+            &BLOCKDEVICE_NAME_REGEX,
+        )?
+        .filter_map(|x| x.ok())
+        {
+            let name = unsafe { entry.file_name_utf8_unchecked() };
+            if !name.starts_with("vd") {
+                continue;
+            }
+
+            let sys_path: &str = &format!("/sys/block/{}", name);
+
+            let serial = fs::file_read_string(&format!("{}/serial", sys_path));
+            let fidx = match serial {
+                Ok(serial) => serial,
+                Err(err) => {
+                    warn!("disk '{}': could not read serial file - {}", name, err);
+                    continue;
+                }
+            };
+
+            let mut parts = Vec::new();
+            for entry in proxmox_backup::tools::fs::scan_subdir(
+                libc::AT_FDCWD,
+                sys_path,
+                &VIRTIO_PART_REGEX,
+            )?
+            .filter_map(|x| x.ok())
+            {
+                let part_name = unsafe { entry.file_name_utf8_unchecked() };
+                let devnode = format!("/dev/{}", part_name);
+                let part_path = format!("/sys/block/{}/{}", name, part_name);
+
+                // create partition device node for further use
+                let dev_num_str = fs::file_read_firstline(&format!("{}/dev", part_path))?;
+                let split: Vec<&str> = dev_num_str.trim().split(':').collect();
+                if split.len() != 2 {
+                    bail!(
+                        "got invalid 'dev' content: '{}' - broken kernel?",
+                        dev_num_str
+                    );
+                }
+                Self::mknod_blk(&devnode, split[0].parse()?, split[1].parse()?)?;
+
+                let number = match fs::file_read_firstline(&format!("{}/partition", part_path))?
+                    .trim()
+                    .parse::<i32>()
+                {
+                    Ok(number) => number,
+                    Err(err) => bail!(
+                        "got invalid 'partition' number content - '{}' - broken kernel?",
+                        err
+                    ),
+                };
+
+                info!(
+                    "drive '{}' ('{}'): found partition '{}' ({})",
+                    name, fidx, devnode, number
+                );
+
+                let bucket = Bucket::Partition(PartitionBucketData {
+                    dev_node: devnode,
+                    mountpoint: None,
+                    number,
+                });
+
+                parts.push(bucket);
+            }
+
+            disk_map.insert(fidx.to_owned(), parts);
+        }
+
+        Ok(Self {
+            filesystems: Filesystems::scan()?,
+            disk_map,
+        })
+    }
+
+    /// Given a path like "/drive-scsi0.img.fidx/part/0/etc/passwd", this will mount the first
+    /// partition of 'drive-scsi0' on-demand (i.e. if not already mounted) and return a path
+    /// pointing to the requested file locally, e.g. "/mnt/vda1/etc/passwd", which can be used to
+    /// read the file.  Given a partial path, i.e. only "/drive-scsi0.img.fidx" or
+    /// "/drive-scsi0.img.fidx/part", it will return a list of available bucket types or bucket
+    /// components, respectively.
+    pub fn resolve(&mut self, path: &Path) -> Result<ResolveResult, Error> {
+        let mut cmp = path.components().peekable();
+        match cmp.peek() {
+            Some(Component::RootDir) | Some(Component::CurDir) => {
+                cmp.next();
+            }
+            None => bail!("empty path cannot be resolved to file location"),
+            _ => {}
+        }
+
+        let req_fidx = match cmp.next() {
+            Some(Component::Normal(x)) => x.to_string_lossy(),
+            _ => bail!("no or invalid image in path"),
+        };
+
+        let buckets = match self.disk_map.get_mut(req_fidx.as_ref()) {
+            Some(x) => x,
+            None => bail!("given image '{}' not found", req_fidx),
+        };
+
+        let bucket_type = match cmp.next() {
+            Some(Component::Normal(x)) => x.to_string_lossy(),
+            Some(c) => bail!("invalid bucket in path: {:?}", c),
+            None => {
+                // list bucket types available
+                let mut types = buckets
+                    .iter()
+                    .map(|b| b.type_string())
+                    .collect::<Vec<&'static str>>();
+                // dedup requires duplicates to be consecutive, which is the case - see scan()
+                types.dedup();
+                return Ok(ResolveResult::BucketTypes(types));
+            }
+        };
+
+        let component = match cmp.next() {
+            Some(Component::Normal(x)) => x.to_string_lossy(),
+            Some(c) => bail!("invalid bucket component in path: {:?}", c),
+            None => {
+                // list bucket components available
+                let comps = buckets
+                    .iter()
+                    .filter(|b| b.type_string() == bucket_type)
+                    .map(Bucket::component_string)
+                    .collect();
+                return Ok(ResolveResult::BucketComponents(comps));
+            }
+        };
+
+        let mut bucket = match Bucket::filter_mut(buckets, &bucket_type, &component) {
+            Some(bucket) => bucket,
+            None => bail!(
+                "bucket/component path not found: {}/{}/{}",
+                req_fidx,
+                bucket_type,
+                component
+            ),
+        };
+
+        // bucket found, check mount
+        let mountpoint = self.filesystems.ensure_mounted(&mut bucket).map_err(|err| {
+            format_err!(
+                "mounting '{}/{}/{}' failed: {}",
+                req_fidx,
+                bucket_type,
+                component,
+                err
+            )
+        })?;
+
+        let mut local_path = PathBuf::new();
+        local_path.push(mountpoint);
+        for rem in cmp {
+            local_path.push(rem);
+        }
+
+        Ok(ResolveResult::Path(local_path))
+    }
+
+    fn mknod_blk(path: &str, maj: u64, min: u64) -> Result<(), Error> {
+        use nix::sys::stat;
+        let dev = stat::makedev(maj, min);
+        stat::mknod(path, stat::SFlag::S_IFBLK, stat::Mode::S_IRWXU, dev)?;
+        Ok(())
+    }
+}
diff --git a/src/bin/proxmox_restore_daemon/mod.rs b/src/bin/proxmox_restore_daemon/mod.rs
index 6802d31c..16c31f0f 100644
--- a/src/bin/proxmox_restore_daemon/mod.rs
+++ b/src/bin/proxmox_restore_daemon/mod.rs
@@ -4,3 +4,6 @@ pub use api::*;
 
 mod watchdog;
 pub use watchdog::*;
+
+mod disk;
+pub use disk::*;
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 18/22] file-restore: add basic VM/block device support
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (16 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 17/22] file-restore-daemon: add disk module Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 19/22] file-restore: improve logging of VM with logrotate Stefan Reiter
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

Includes methods to start, stop and list QEMU file-restore VMs, as well
as CLI commands for the latter two.

The implementation is abstracted behind the concept of a
"BlockRestoreDriver", so other methods can be implemented later (e.g.
mapping directly to loop devices on the host, using other hypervisors
than QEMU, etc.).

The ability to start VMs is currently unused, but will be needed for further changes.

The design for the QEMU driver uses a locked 'map' file
(/run/proxmox-backup/restore-vm.map) containing a JSON encoding of
currently running VMs. VMs are addressed by a 'name', which is a
systemd-unit-encoded combination of repository and snapshot string that
uniquely identifies each VM.
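
For illustration, a map file with a single running VM might look like
this (hypothetical values; the key is the escaped name, the value the
serialized VMState):

    {"qemu_<escaped repo+snapshot>": {"pid": 12345, "cid": 11}}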

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 debian/proxmox-backup-client.install          |   1 -
 debian/proxmox-file-restore.install           |   1 +
 src/bin/proxmox-file-restore.rs               |  16 +-
 src/bin/proxmox_file_restore/block_driver.rs  | 157 +++++++
 .../proxmox_file_restore/block_driver_qemu.rs | 407 ++++++++++++++++++
 src/bin/proxmox_file_restore/mod.rs           |   5 +
 src/buildcfg.rs                               |  20 +
 7 files changed, 603 insertions(+), 4 deletions(-)
 create mode 100644 src/bin/proxmox_file_restore/block_driver.rs
 create mode 100644 src/bin/proxmox_file_restore/block_driver_qemu.rs
 create mode 100644 src/bin/proxmox_file_restore/mod.rs

diff --git a/debian/proxmox-backup-client.install b/debian/proxmox-backup-client.install
index b203f152..74b568f1 100644
--- a/debian/proxmox-backup-client.install
+++ b/debian/proxmox-backup-client.install
@@ -1,6 +1,5 @@
 usr/bin/proxmox-backup-client
 usr/bin/pxar
-usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/proxmox-restore-daemon
 usr/share/man/man1/proxmox-backup-client.1
 usr/share/man/man1/pxar.1
 usr/share/zsh/vendor-completions/_proxmox-backup-client
diff --git a/debian/proxmox-file-restore.install b/debian/proxmox-file-restore.install
index 2082e46b..d952836e 100644
--- a/debian/proxmox-file-restore.install
+++ b/debian/proxmox-file-restore.install
@@ -1,3 +1,4 @@
 usr/bin/proxmox-file-restore
 usr/share/man/man1/proxmox-file-restore.1
 usr/share/zsh/vendor-completions/_proxmox-file-restore
+usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/proxmox-restore-daemon
diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index ec3378b0..767cc057 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -32,6 +32,9 @@ use proxmox_client_tools::{
     KEYFD_SCHEMA, KEYFILE_SCHEMA, REPO_URL_SCHEMA,
 };
 
+mod proxmox_file_restore;
+use proxmox_file_restore::*;
+
 enum ExtractPath {
     ListArchives,
     Pxar(String, Vec<u8>),
@@ -48,7 +51,7 @@ fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
         return Ok(ExtractPath::ListArchives);
     }
 
-    while bytes.len() > 0 && bytes[0] == b'/' {
+    while !bytes.is_empty() && bytes[0] == b'/' {
         bytes.remove(0);
     }
 
@@ -319,7 +322,7 @@ async fn extract(param: Value) -> Result<Value, Error> {
             let file = root
                 .lookup(OsStr::from_bytes(&path))
                 .await?
-                .ok_or(format_err!("error opening '{:?}'", path))?;
+                .ok_or_else(|| format_err!("error opening '{:?}'", path))?;
 
             if let Some(target) = target {
                 extract_sub_dir(target, decoder, OsStr::from_bytes(&path), verbose).await?;
@@ -361,9 +364,16 @@ fn main() {
         .completion_cb("snapshot", complete_group_or_snapshot)
         .completion_cb("target", tools::complete_file_name);
 
+    let status_cmd_def = CliCommand::new(&API_METHOD_STATUS);
+    let stop_cmd_def = CliCommand::new(&API_METHOD_STOP)
+        .arg_param(&["name"])
+        .completion_cb("name", complete_block_driver_ids);
+
     let cmd_def = CliCommandMap::new()
         .insert("list", list_cmd_def)
-        .insert("extract", restore_cmd_def);
+        .insert("extract", restore_cmd_def)
+        .insert("status", status_cmd_def)
+        .insert("stop", stop_cmd_def);
 
     let rpcenv = CliEnvironment::new();
     run_cli_command(
diff --git a/src/bin/proxmox_file_restore/block_driver.rs b/src/bin/proxmox_file_restore/block_driver.rs
new file mode 100644
index 00000000..0ba67f34
--- /dev/null
+++ b/src/bin/proxmox_file_restore/block_driver.rs
@@ -0,0 +1,157 @@
+//! Abstraction layer over different methods of accessing a block backup
+use anyhow::{bail, Error};
+use serde::{Deserialize, Serialize};
+use serde_json::{json, Value};
+
+use std::collections::HashMap;
+use std::future::Future;
+use std::hash::BuildHasher;
+use std::pin::Pin;
+
+use proxmox_backup::backup::{BackupDir, BackupManifest};
+use proxmox_backup::client::BackupRepository;
+
+use proxmox::api::{api, cli::*};
+
+use super::block_driver_qemu::QemuBlockDriver;
+
+/// Contains details about a snapshot that is to be accessed by block file restore
+pub struct SnapRestoreDetails {
+    pub repo: BackupRepository,
+    pub snapshot: BackupDir,
+    pub manifest: BackupManifest,
+}
+
+pub type Async<R> = Pin<Box<dyn Future<Output = R> + Send>>;
+
+/// An abstract implementation for retrieving data out of a block file backup
+pub trait BlockRestoreDriver {
+    /// Return status of all running/mapped images, result value is (id, extra data), where id must
+    /// match with the ones returned from list()
+    fn status(&self) -> Async<Result<Vec<(String, Value)>, Error>>;
+    /// Stop/Close a running restore method
+    fn stop(&self, id: String) -> Async<Result<(), Error>>;
+    /// Returned ids must be prefixed with the driver type so that they cannot collide between drivers;
+    /// the returned values must be passable to stop()
+    fn list(&self) -> Vec<String>;
+}
+
+#[api()]
+#[derive(Debug, Serialize, Deserialize, PartialEq, Clone, Copy)]
+pub enum BlockDriverType {
+    /// Uses a small QEMU/KVM virtual machine to map images securely. Requires PVE-patched QEMU.
+    Qemu,
+}
+
+impl BlockDriverType {
+    fn resolve(&self) -> impl BlockRestoreDriver {
+        match self {
+            BlockDriverType::Qemu => QemuBlockDriver {},
+        }
+    }
+}
+
+const DEFAULT_DRIVER: BlockDriverType = BlockDriverType::Qemu;
+const ALL_DRIVERS: &[BlockDriverType] = &[BlockDriverType::Qemu];
+
+#[api(
+   input: {
+       properties: {
+            "driver": {
+                type: BlockDriverType,
+                optional: true,
+            },
+            "output-format": {
+                schema: OUTPUT_FORMAT,
+                optional: true,
+            },
+        },
+   },
+)]
+/// Retrieve status information about currently running/mapped restore images
+pub async fn status(driver: Option<BlockDriverType>, param: Value) -> Result<(), Error> {
+    let output_format = get_output_format(&param);
+    let text = output_format == "text";
+
+    let mut ret = json!({});
+
+    for dt in ALL_DRIVERS {
+        if driver.is_some() && &driver.unwrap() != dt {
+            continue;
+        }
+
+        let drv_name = format!("{:?}", dt);
+        let drv = dt.resolve();
+        match drv.status().await {
+            Ok(data) if data.is_empty() => {
+                if text {
+                    println!("{}: no mappings", drv_name);
+                } else {
+                    ret[drv_name] = json!({});
+                }
+            }
+            Ok(data) => {
+                if text {
+                    println!("{}:", drv_name);
+                }
+
+                ret[&drv_name]["ids"] = json!([]);
+                for (id, extra) in data {
+                    if text {
+                        println!("{} \t({})", id, extra);
+                    } else {
+                        ret[&drv_name]["ids"][id] = extra;
+                    }
+                }
+            }
+            Err(err) => {
+                if text {
+                    eprintln!("error getting status from driver '{}' - {}", drv_name, err);
+                } else {
+                    ret[drv_name] = json!({ "error": format!("{}", err) });
+                }
+            }
+        }
+    }
+
+    if !text {
+        format_and_print_result(&ret, &output_format);
+    }
+
+    Ok(())
+}
+
+#[api(
+   input: {
+       properties: {
+            "name": {
+                type: String,
+                description: "The name of the VM to stop.",
+            },
+        },
+   },
+)]
+/// Immediately stop/unmap a given image. Not typically necessary, as VMs will stop themselves
+/// after a timer anyway.
+pub async fn stop(name: String) -> Result<(), Error> {
+    for drv in ALL_DRIVERS.iter().map(BlockDriverType::resolve) {
+        if drv.list().contains(&name) {
+            return drv.stop(name).await;
+        }
+    }
+
+    bail!("no mapping with name '{}' found", name);
+}
+
+/// Autocompletion handler for block mappings
+pub fn complete_block_driver_ids<S: BuildHasher>(
+    _arg: &str,
+    _param: &HashMap<String, String, S>,
+) -> Vec<String> {
+    ALL_DRIVERS
+        .iter()
+        .map(BlockDriverType::resolve)
+        .map(|d| d.list())
+        .flatten()
+        .collect()
+}
diff --git a/src/bin/proxmox_file_restore/block_driver_qemu.rs b/src/bin/proxmox_file_restore/block_driver_qemu.rs
new file mode 100644
index 00000000..8bbea962
--- /dev/null
+++ b/src/bin/proxmox_file_restore/block_driver_qemu.rs
@@ -0,0 +1,407 @@
+//! Block file access via a small QEMU restore VM using the PBS block driver in QEMU
+use anyhow::{bail, format_err, Error};
+use futures::FutureExt;
+use serde::{Deserialize, Serialize};
+use serde_json::{json, Value};
+
+use std::collections::HashMap;
+use std::fs::{File, OpenOptions};
+use std::io::{prelude::*, SeekFrom};
+use std::path::PathBuf;
+use std::time::Duration;
+
+use tokio::time;
+
+use proxmox::tools::fs::{file_read_string, lock_file, make_tmp_file, CreateOptions};
+use proxmox_backup::backup::BackupDir;
+use proxmox_backup::buildcfg;
+use proxmox_backup::client::*;
+use proxmox_backup::tools;
+
+use super::block_driver::*;
+
+pub struct QemuBlockDriver {}
+
+#[derive(Clone, Hash, Serialize, Deserialize)]
+struct VMState {
+    pid: i32,
+    cid: i32,
+}
+
+struct VMStateMap {
+    map: HashMap<String, VMState>,
+    file: File,
+}
+
+impl VMStateMap {
+    fn open_file_raw(write: bool) -> Result<File, Error> {
+        // ensure the file is only created as root to get correct permissions
+        let running_uid = nix::unistd::Uid::effective();
+        if running_uid.is_root() {
+            std::fs::create_dir_all(buildcfg::PROXMOX_BACKUP_RUN_DIR)?;
+        }
+
+        OpenOptions::new()
+            .read(true)
+            .write(write)
+            .create(write && running_uid.is_root())
+            .open(buildcfg::PROXMOX_BACKUP_VM_MAP_FN)
+            .map_err(Error::from)
+    }
+
+    /// Acquire a lock on the state map and retrieve a deserialized version
+    fn load() -> Result<Self, Error> {
+        let mut file = Self::open_file_raw(true)?;
+        lock_file(&mut file, true, Some(std::time::Duration::from_secs(5)))?;
+        let map = serde_json::from_reader(&file).unwrap_or_default();
+        Ok(Self { map, file })
+    }
+
+    /// Load a read-only copy of the current VM map. Only use this for informational purposes,
+    /// like shell auto-completion; for anything requiring consistency use load()!
+    fn load_read_only() -> Result<HashMap<String, VMState>, Error> {
+        let file = Self::open_file_raw(false)?;
+        Ok(serde_json::from_reader(&file).unwrap_or_default())
+    }
+
+    /// Write back a potentially modified state map, consuming the held lock
+    fn write(mut self) -> Result<(), Error> {
+        self.file.seek(SeekFrom::Start(0))?;
+        self.file.set_len(0)?;
+        serde_json::to_writer(self.file, &self.map)?;
+
+        // drop ourselves including file lock
+        Ok(())
+    }
+
+    /// Return the map, but drop the lock immediately
+    fn read_only(self) -> HashMap<String, VMState> {
+        self.map
+    }
+}
+
+fn validate_img_existance() -> Result<(), Error> {
+    let kernel = PathBuf::from(buildcfg::PROXMOX_BACKUP_KERNEL_FN);
+    let initramfs = PathBuf::from(buildcfg::PROXMOX_BACKUP_INITRAMFS_FN);
+    if !kernel.exists() || !initramfs.exists() {
+        bail!("cannot run file-restore VM: package 'proxmox-file-restore' is not (correctly) installed");
+    }
+    Ok(())
+}
+
+fn make_name(repo: &BackupRepository, snap: &BackupDir) -> String {
+    let full = format!("qemu_{}/{}", repo, snap);
+    tools::systemd::escape_unit(&full, false)
+}
+
+fn try_kill_vm(pid: i32, name: &str) -> Result<(), Error> {
+    use nix::sys::signal::{kill, Signal};
+    use nix::unistd::Pid;
+
+    let pid = Pid::from_raw(pid);
+    if let Ok(()) = kill(pid, None) {
+        // process is running (and we could kill it), check if it is actually ours
+        if let Ok(cmdline) = file_read_string(format!("/proc/{}/cmdline", pid)) {
+            if cmdline.split('\0').any(|a| a == name) {
+                // yes, it's ours, kill it brutally with SIGKILL, no reason to take
+                // any chances - in this state it's most likely broken anyway
+                if let Err(err) = kill(pid, Signal::SIGKILL) {
+                    bail!(
+                        "reaping broken VM (pid {}) with SIGKILL failed: {}",
+                        pid,
+                        err
+                    );
+                }
+            }
+        }
+    }
+
+    Ok(())
+}
+
+/// remove non-responsive VMs from given map, returns 'true' if map was modified
+async fn cleanup_map(map: &mut HashMap<String, VMState>) -> bool {
+    let mut to_remove = Vec::new();
+    for (name, state) in map.iter() {
+        let client = VsockClient::new(state.cid, DEFAULT_VSOCK_PORT);
+        let res = client
+            .get("api2/json/status", Some(json!({"keep-timeout": true})))
+            .await;
+        if res.is_err() {
+            // VM is not reachable, remove from map then try reap
+            to_remove.push(name.clone());
+            if let Err(err) = try_kill_vm(state.pid, name) {
+                eprintln!("restore VM cleanup: {}", err);
+            }
+        }
+    }
+
+    for tr in &to_remove {
+        map.remove(tr);
+    }
+
+    !to_remove.is_empty()
+}
+
+async fn ensure_running(details: &SnapRestoreDetails) -> Result<VsockClient, Error> {
+    let name = make_name(&details.repo, &details.snapshot);
+    let mut state = VMStateMap::load()?;
+
+    cleanup_map(&mut state.map).await;
+
+    let new_cid;
+    match state.map.get(&name) {
+        Some(vm) => {
+            let client = VsockClient::new(vm.cid, DEFAULT_VSOCK_PORT);
+            let res = client.get("api2/json/status", None).await;
+            match res {
+                Ok(_) => {
+                    // VM is running and we just reset its timeout, nothing to do
+                    return Ok(client);
+                }
+                Err(err) => {
+                    eprintln!("dead VM detected: {}", err);
+                    // VM is dead, restart
+                    try_kill_vm(vm.pid, &name)?;
+                    let vms = start_vm(vm.cid, &name, details).await?;
+                    new_cid = vms.cid;
+                    state.map.insert(name, vms);
+                }
+            }
+        }
+        None => {
+            let cid = state
+                .map
+                .iter()
+                .map(|v| v.1.cid)
+                .max()
+                .unwrap_or(10) // some low CIDs have special meaning, start at 10 to avoid
+                + 1;
+
+            let vms = start_vm(cid, &name, details).await?;
+            new_cid = vms.cid;
+            state.map.insert(name, vms);
+        }
+    }
+
+    state.write()?;
+    Ok(VsockClient::new(new_cid, DEFAULT_VSOCK_PORT))
+}
+
+async fn start_vm(
+    mut cid: i32,
+    name: &str,
+    details: &SnapRestoreDetails,
+) -> Result<VMState, Error> {
+    use nix::sys::signal::kill;
+    use nix::unistd::Pid;
+    use std::os::unix::io::{AsRawFd, FromRawFd};
+
+    validate_img_existance()?;
+
+    if let Err(_) = std::env::var("PBS_PASSWORD") {
+        bail!("environment variable PBS_PASSWORD has to be set for QEMU VM restore");
+    }
+    if let Err(_) = std::env::var("PBS_FINGERPRINT") {
+        bail!("environment variable PBS_FINGERPRINT has to be set for QEMU VM restore");
+    }
+
+    let pid;
+    let (pid_fd, pid_path) = make_tmp_file("/tmp", CreateOptions::new())?;
+    nix::unistd::unlink(&pid_path)?;
+    tools::fd_change_cloexec(pid_fd.0, false)?;
+
+    let base_args = [
+        "-serial",
+        &format!(
+            "file:{}/file_restore_vm_{}.log",
+            buildcfg::PROXMOX_BACKUP_LOG_DIR,
+            {
+                let now = proxmox::tools::time::epoch_i64();
+                proxmox::tools::time::epoch_to_rfc3339(now)?
+            },
+        ),
+        "-vnc",
+        "none",
+        "-enable-kvm",
+        "-m",
+        "512",
+        "-name",
+        name,
+        "-kernel",
+        buildcfg::PROXMOX_BACKUP_KERNEL_FN,
+        "-initrd",
+        buildcfg::PROXMOX_BACKUP_INITRAMFS_FN,
+        "-append",
+        "quiet",
+        "-daemonize",
+        "-pidfile",
+        &format!("/dev/fd/{}", pid_fd.as_raw_fd()),
+    ];
+
+    // Generate drive arguments for all fidx files in backup snapshot
+    let mut drives = Vec::new();
+    let mut id = 0;
+    for file in details.manifest.files() {
+        if !file.filename.ends_with(".img.fidx") {
+            continue;
+        }
+        drives.push("-drive".to_owned());
+        drives.push(format!(
+            "file=pbs:repository={},,snapshot={},,archive={},read-only=on,if=none,id=drive{}",
+            details.repo, details.snapshot, file.filename, id
+        ));
+        drives.push("-device".to_owned());
+        // drive serial is used by VM to map .fidx files to /dev paths
+        drives.push(format!(
+            "virtio-blk-pci,drive=drive{},serial={}",
+            id, file.filename
+        ));
+        id += 1;
+    }
+
+    // Try starting QEMU in a loop to retry if we fail because of a bad 'cid' value
+    loop {
+        let mut qemu_cmd = std::process::Command::new("qemu-system-x86_64");
+        qemu_cmd.args(base_args.iter());
+        qemu_cmd.args(&drives);
+        qemu_cmd.arg("-device");
+        qemu_cmd.arg(format!(
+            "vhost-vsock-pci,guest-cid={},disable-legacy=on",
+            cid
+        ));
+
+        qemu_cmd.stdout(std::process::Stdio::null());
+        qemu_cmd.stderr(std::process::Stdio::piped());
+
+        let res = tokio::task::block_in_place(|| qemu_cmd.spawn()?.wait_with_output())?;
+
+        if res.status.success() {
+            // at this point QEMU is already daemonized and running, so if anything fails we
+            // technically leave behind a zombie-VM... this shouldn't matter, as it will stop
+            // itself soon enough (timer), and the following operations are unlikely to fail
+            let mut pid_file = unsafe { File::from_raw_fd(pid_fd.as_raw_fd()) };
+            std::mem::forget(pid_fd); // FD ownership was moved to pid_file above
+            let mut pidstr = String::new();
+            pid_file.read_to_string(&mut pidstr)?;
+            pid = pidstr.trim_end().parse().map_err(|err| {
+                format_err!("cannot parse PID returned by QEMU ('{}'): {}", &pidstr, err)
+            })?;
+            break;
+        } else {
+            let out = String::from_utf8_lossy(&res.stderr);
+            if out.contains("unable to set guest cid: Address already in use") {
+                // CID in use, try next higher one
+                eprintln!("CID '{}' in use by other VM, attempting next one", cid);
+                cid += 1;
+            } else {
+                eprint!("{}", out);
+                bail!("Starting VM failed. See QEMU output above for more information.");
+            }
+        }
+    }
+
+    // QEMU has started successfully, now wait for virtio socket to become ready
+    let pid_t = Pid::from_raw(pid);
+    for _ in 0..60 {
+        let client = VsockClient::new(cid, DEFAULT_VSOCK_PORT);
+        if let Ok(Ok(_)) =
+            time::timeout(Duration::from_secs(2), client.get("api2/json/status", None)).await
+        {
+            return Ok(VMState { pid, cid });
+        }
+        if kill(pid_t, None).is_err() {
+            // QEMU exited
+            bail!("VM exited before connection could be established");
+        }
+        time::sleep(Duration::from_millis(500)).await;
+    }
+
+    // start failed
+    if let Err(err) = try_kill_vm(pid, name) {
+        eprintln!("killing failed VM failed: {}", err);
+    }
+    bail!("starting VM timed out");
+}
+
+impl BlockRestoreDriver for QemuBlockDriver {
+    fn status(&self) -> Async<Result<Vec<(String, Value)>, Error>> {
+        async move {
+            let mut state_map = VMStateMap::load()?;
+            let modified = cleanup_map(&mut state_map.map).await;
+            let map = if modified {
+                let m = state_map.map.clone();
+                state_map.write()?;
+                m
+            } else {
+                state_map.read_only()
+            };
+            let mut result = Vec::new();
+
+            for (n, s) in map.iter() {
+                let client = VsockClient::new(s.cid, DEFAULT_VSOCK_PORT);
+                let resp = client
+                    .get("api2/json/status", Some(json!({"keep-timeout": true})))
+                    .await;
+                let name = tools::systemd::unescape_unit(n)
+                    .unwrap_or_else(|_| "<invalid name>".to_owned());
+                let mut extra = json!({"pid": s.pid, "cid": s.cid});
+
+                match resp {
+                    Ok(status) => match status["data"].as_object() {
+                        Some(map) => {
+                            for (k, v) in map.iter() {
+                                extra[k] = v.clone();
+                            }
+                        }
+                        None => {
+                            let err = format!(
+                                "invalid JSON received from /status call: {}",
+                                status.to_string()
+                            );
+                            extra["error"] = json!(err);
+                        }
+                    },
+                    Err(err) => {
+                        let err = format!("error during /status API call: {}", err);
+                        extra["error"] = json!(err);
+                    }
+                }
+
+                result.push((name, extra));
+            }
+
+            Ok(result)
+        }
+        .boxed()
+    }
+
+    fn stop(&self, id: String) -> Async<Result<(), Error>> {
+        async move {
+            let name = tools::systemd::escape_unit(&id, false);
+            let mut map = VMStateMap::load()?;
+            match map.map.get(&name) {
+                Some(state) => {
+                    try_kill_vm(state.pid, &name)?;
+                    map.map.remove(&name);
+                    map.write()?;
+                }
+                None => {
+                    bail!("VM with name '{}' not found", name);
+                }
+            }
+            Ok(())
+        }
+        .boxed()
+    }
+
+    fn list(&self) -> Vec<String> {
+        match VMStateMap::load_read_only() {
+            Ok(state) => state
+                .iter()
+                .filter_map(|(name, _)| tools::systemd::unescape_unit(&name).ok())
+                .collect(),
+            Err(_) => Vec::new(),
+        }
+    }
+}
diff --git a/src/bin/proxmox_file_restore/mod.rs b/src/bin/proxmox_file_restore/mod.rs
new file mode 100644
index 00000000..52a1259e
--- /dev/null
+++ b/src/bin/proxmox_file_restore/mod.rs
@@ -0,0 +1,5 @@
+//! Block device drivers and tools for single file restore
+pub mod block_driver;
+pub use block_driver::*;
+
+mod block_driver_qemu;
diff --git a/src/buildcfg.rs b/src/buildcfg.rs
index 9aff8b4b..28a518ad 100644
--- a/src/buildcfg.rs
+++ b/src/buildcfg.rs
@@ -10,6 +10,14 @@ macro_rules! PROXMOX_BACKUP_RUN_DIR_M { () => ("/run/proxmox-backup") }
 #[macro_export]
 macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
 
+#[macro_export]
+macro_rules! PROXMOX_BACKUP_CACHE_DIR_M { () => ("/var/cache/proxmox-backup") }
+
+#[macro_export]
+macro_rules! PROXMOX_BACKUP_FILE_RESTORE_BIN_DIR_M {
+    () => ("/usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore")
+}
+
 /// namespaced directory for in-memory (tmpfs) run state
 pub const PROXMOX_BACKUP_RUN_DIR: &str = PROXMOX_BACKUP_RUN_DIR_M!();
 
@@ -30,6 +38,18 @@ pub const PROXMOX_BACKUP_PROXY_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(
 /// the PID filename for the privileged api daemon
 pub const PROXMOX_BACKUP_API_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/api.pid");
 
+/// the filename for the file-restore VM state map
+pub const PROXMOX_BACKUP_VM_MAP_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/restore-vm.map");
+
+/// filename of the cached initramfs to use for booting single file restore VMs, this file is
+/// automatically created by APT hooks
+pub const PROXMOX_BACKUP_INITRAMFS_FN: &str =
+    concat!(PROXMOX_BACKUP_CACHE_DIR_M!(), "/file-restore-initramfs.img");
+
+/// filename of the kernel to use for booting single file restore VMs
+pub const PROXMOX_BACKUP_KERNEL_FN: &str =
+    concat!(PROXMOX_BACKUP_FILE_RESTORE_BIN_DIR_M!(), "/bzImage");
+
 /// Prepend configuration directory to a file name
 ///
 /// This is a simply way to get the full path for configuration files.
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 19/22] file-restore: improve logging of VM with logrotate
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (17 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 18/22] file-restore: add basic VM/block device support Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 20/22] debian/client: add postinst hook to rebuild file-restore initramfs Stefan Reiter
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

Keep the log files of the last 16 VM starts (log output generated by the
daemon binary via QEMU's serial-to-logfile interface). Also put them
into a separate /var/log/proxmox-backup/file-restore directory.
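
After a few VM starts the log directory would then look roughly like
this (a sketch; compression is disabled, so rotated files should keep
plain numeric suffixes):

    /var/log/proxmox-backup/file-restore/qemu.log    (latest VM start)
    /var/log/proxmox-backup/file-restore/qemu.log.1  (previous start)
    /var/log/proxmox-backup/file-restore/qemu.log.2
    ...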

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/bin/proxmox_file_restore/block_driver.rs  | 32 ++++++++++++++-
 .../proxmox_file_restore/block_driver_qemu.rs | 39 +++++++++++++++----
 2 files changed, 61 insertions(+), 10 deletions(-)

diff --git a/src/bin/proxmox_file_restore/block_driver.rs b/src/bin/proxmox_file_restore/block_driver.rs
index 0ba67f34..f2d5b00e 100644
--- a/src/bin/proxmox_file_restore/block_driver.rs
+++ b/src/bin/proxmox_file_restore/block_driver.rs
@@ -1,5 +1,5 @@
 //! Abstraction layer over different methods of accessing a block backup
-use anyhow::{bail, Error};
+use anyhow::{bail, format_err, Error};
 use serde::{Deserialize, Serialize};
 use serde_json::{json, Value};
 
@@ -8,10 +8,12 @@ use std::future::Future;
 use std::hash::BuildHasher;
 use std::pin::Pin;
 
-use proxmox_backup::backup::{BackupDir, BackupManifest};
+use proxmox_backup::backup::{backup_user, BackupDir, BackupManifest};
+use proxmox_backup::buildcfg;
 use proxmox_backup::client::BackupRepository;
 
 use proxmox::api::{api, cli::*};
+use proxmox::tools::fs::{create_path, CreateOptions};
 
 use super::block_driver_qemu::QemuBlockDriver;
 
@@ -155,3 +157,29 @@ pub fn complete_block_driver_ids<S: BuildHasher>(
         .flatten()
         .collect()
 }
+
+/// Create the /file-restore logging subdirectory with root ownership
+pub fn create_restore_log_dir() -> Result<String, Error> {
+    let logpath = format!("{}/file-restore", buildcfg::PROXMOX_BACKUP_LOG_DIR);
+
+    proxmox::try_block!({
+        let backup_user = backup_user()?;
+        let opts = CreateOptions::new()
+            .owner(backup_user.uid)
+            .group(backup_user.gid);
+
+        let opts_root = CreateOptions::new()
+            .owner(nix::unistd::ROOT)
+            .group(nix::unistd::Gid::from_raw(0));
+
+        create_path(buildcfg::PROXMOX_BACKUP_LOG_DIR, None, Some(opts))?;
+
+        // the QEMU logs may contain information from snapshots that users should not have access to, so
+        // restrict to root (just like running the restore command itself)
+        create_path(&logpath, None, Some(opts_root))?;
+        Ok(())
+    })
+    .map_err(|err: Error| format_err!("unable to create file-restore log dir - {}", err))?;
+
+    Ok(logpath)
+}
diff --git a/src/bin/proxmox_file_restore/block_driver_qemu.rs b/src/bin/proxmox_file_restore/block_driver_qemu.rs
index 8bbea962..d406d523 100644
--- a/src/bin/proxmox_file_restore/block_driver_qemu.rs
+++ b/src/bin/proxmox_file_restore/block_driver_qemu.rs
@@ -211,16 +211,39 @@ async fn start_vm(
     nix::unistd::unlink(&pid_path)?;
     tools::fd_change_cloexec(pid_fd.0, false)?;
 
+    let logpath = create_restore_log_dir()?;
+    let logfile = &format!("{}/qemu.log", logpath);
+    let mut logrotate = tools::logrotate::LogRotate::new(logfile, false)
+        .ok_or_else(|| format_err!("could not get QEMU log file names"))?;
+
+    if let Err(err) = logrotate.do_rotate(CreateOptions::default(), Some(16)) {
+        eprintln!("warning: logrotate for QEMU log file failed - {}", err);
+    }
+
+    // preface log file with information about the VM
+    let mut logfd = OpenOptions::new()
+        .append(true)
+        .create_new(true)
+        .open(logfile)?;
+    writeln!(
+        logfd,
+        "[{}] file restore VM log for '{}'",
+        {
+            let now = proxmox::tools::time::epoch_i64();
+            proxmox::tools::time::epoch_to_rfc3339(now)?
+        },
+        tools::systemd::unescape_unit(name).unwrap_or_else(|_| "<invalid name>".to_owned())
+    )?;
+    tools::fd_change_cloexec(logfd.as_raw_fd(), false)?;
+
     let base_args = [
-        "-serial",
+        "-chardev",
         &format!(
-            "file:{}/file_restore_vm_{}.log",
-            buildcfg::PROXMOX_BACKUP_LOG_DIR,
-            {
-                let now = proxmox::tools::time::epoch_i64();
-                proxmox::tools::time::epoch_to_rfc3339(now)?
-            },
+            "file,id=log,path=/dev/null,logfile=/dev/fd/{},logappend=on",
+            logfd.as_raw_fd()
         ),
+        "-serial",
+        "chardev:log",
         "-vnc",
         "none",
         "-enable-kvm",
@@ -296,7 +319,7 @@ async fn start_vm(
                 cid += 1;
             } else {
                 eprint!("{}", out);
-                bail!("Starting VM failed. See QEMU output above for more information.");
+                bail!("Starting VM failed. See output above for more information.");
             }
         }
     }
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 20/22] debian/client: add postinst hook to rebuild file-restore initramfs
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (18 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 19/22] file-restore: improve logging of VM with logrotate Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 21/22] file-restore(-daemon): implement list API Stefan Reiter
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

This will be triggered on updating proxmox-file-restore (via configure,
necessary since the daemon binary might change) and
proxmox-restore-vm-data (via 'activate-noawait', necessary since the
base image might change).
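
As a sketch of the activating side, proxmox-restore-vm-data would ship
a matching triggers file (hypothetical contents, mirroring the
interest-noawait entry added below):

    # debian/triggers in proxmox-restore-vm-data
    activate-noawait pbs-file-restore-initramfs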

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 debian/proxmox-backup-client.triggers |  1 +
 debian/proxmox-file-restore.postinst  | 63 +++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)
 create mode 100644 debian/proxmox-backup-client.triggers
 create mode 100755 debian/proxmox-file-restore.postinst

diff --git a/debian/proxmox-backup-client.triggers b/debian/proxmox-backup-client.triggers
new file mode 100644
index 00000000..998cda4b
--- /dev/null
+++ b/debian/proxmox-backup-client.triggers
@@ -0,0 +1 @@
+interest-noawait pbs-file-restore-initramfs
diff --git a/debian/proxmox-file-restore.postinst b/debian/proxmox-file-restore.postinst
new file mode 100755
index 00000000..7832c8a0
--- /dev/null
+++ b/debian/proxmox-file-restore.postinst
@@ -0,0 +1,63 @@
+#!/bin/sh
+
+set -e
+
+update_initramfs() {
+    # regenerate initramfs for single file restore VM
+    INST_PATH="/usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore"
+    CACHE_PATH="/var/cache/proxmox-backup/file-restore-initramfs.img"
+
+    # cleanup first, in case proxmox-file-restore was uninstalled, since we do
+    # not want an unusable image lying around
+    rm -f "$CACHE_PATH"
+
+    # no base image exists, i.e. proxmox-restore-vm-data is not installed;
+    # ignore, we will be called again if the user decides to install it later
+    [ -f "$INST_PATH/initramfs.img" ] || exit 0
+
+    echo "Updating file-restore initramfs..."
+
+    # avoid leftover temp file
+    cleanup() {
+        rm -f "$CACHE_PATH.tmp"
+    }
+    trap cleanup EXIT
+
+    mkdir -p "/var/cache/proxmox-backup"
+    cp "$INST_PATH/initramfs.img" "$CACHE_PATH.tmp"
+
+    # cpio uses passed in path as offset inside the archive as well, so we need
+    # to be in the same dir as the daemon binary to ensure it's placed in /
+    ( cd "$INST_PATH"; \
+        printf "./proxmox-restore-daemon" \
+        | cpio -o --format=newc -A -F "$CACHE_PATH.tmp" )
+    mv -f "$CACHE_PATH.tmp" "$CACHE_PATH"
+
+    trap - EXIT
+}
+
+case "$1" in
+    configure)
+        # in case restore daemon was updated
+        update_initramfs
+    ;;
+
+    triggered)
+        if [ "$2" = "pbs-file-restore-initramfs" ]; then
+            # in case base-image was updated
+            update_initramfs
+        else
+            echo "postinst called with unknown trigger name: \`$2'" >&2
+        fi
+    ;;
+
+    abort-upgrade|abort-remove|abort-deconfigure)
+    ;;
+
+    *)
+        echo "postinst called with unknown argument \`$1'" >&2
+        exit 1
+    ;;
+esac
+
+exit 0
-- 
2.20.1





^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 21/22] file-restore(-daemon): implement list API
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (19 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 20/22] debian/client: add postinst hook to rebuild file-restore initramfs Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 22/22] file-restore: add 'extract' command for VM file restore Stefan Reiter
  2021-02-16 17:11 ` [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

Allows listing files and directories on a block device snapshot.
The hierarchy displayed is:

/archive.img.fidx/bucket/component/<path>
e.g.
/drive-scsi0.img.fidx/part/2/etc/passwd
(corresponding to /etc/passwd on the second partition of drive-scsi0)
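
A hypothetical invocation (snapshot and repository are example values;
PBS_PASSWORD and PBS_FINGERPRINT must be exported for the VM to start):

    proxmox-file-restore list "vm/100/2021-02-16T17:06:00Z" \
        "/drive-scsi0.img.fidx/part/2/etc" --repository backup-host:store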

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/bin/proxmox-file-restore.rs               |  19 +++
 src/bin/proxmox_file_restore/block_driver.rs  |  19 +++
 .../proxmox_file_restore/block_driver_qemu.rs |  21 +++
 src/bin/proxmox_restore_daemon/api.rs         | 133 +++++++++++++++++-
 4 files changed, 188 insertions(+), 4 deletions(-)

diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index 767cc057..232931d9 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -38,6 +38,7 @@ use proxmox_file_restore::*;
 enum ExtractPath {
     ListArchives,
     Pxar(String, Vec<u8>),
+    VM(String, Vec<u8>),
 }
 
 fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
@@ -64,6 +65,8 @@ fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
 
     if file.ends_with(".pxar.didx") {
         Ok(ExtractPath::Pxar(file, path))
+    } else if file.ends_with(".img.fidx") {
+        Ok(ExtractPath::VM(file, path))
     } else {
         bail!("'{}' is not supported for file-restore", file);
     }
@@ -102,6 +105,10 @@ fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
                type: CryptMode,
                optional: true,
            },
+           "driver": {
+               type: BlockDriverType,
+               optional: true,
+           },
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
@@ -190,6 +197,18 @@ async fn list(param: Value) -> Result<Value, Error> {
 
             helpers::list_dir_content(&mut catalog_reader, &fullpath)
         }
+        ExtractPath::VM(file, path) => {
+            let details = SnapRestoreDetails {
+                manifest,
+                repo,
+                snapshot,
+            };
+            let driver: Option<BlockDriverType> = match param.get("driver") {
+                Some(drv) => Some(serde_json::from_value(drv.clone())?),
+                None => None,
+            };
+            data_list(driver, details, file, path).await
+        }
     }?;
 
     let options = default_table_format_options()
diff --git a/src/bin/proxmox_file_restore/block_driver.rs b/src/bin/proxmox_file_restore/block_driver.rs
index f2d5b00e..5ed35f25 100644
--- a/src/bin/proxmox_file_restore/block_driver.rs
+++ b/src/bin/proxmox_file_restore/block_driver.rs
@@ -8,6 +8,7 @@ use std::future::Future;
 use std::hash::BuildHasher;
 use std::pin::Pin;
 
+use proxmox_backup::api2::types::ArchiveEntry;
 use proxmox_backup::backup::{backup_user, BackupDir, BackupManifest};
 use proxmox_backup::buildcfg;
 use proxmox_backup::client::BackupRepository;
@@ -28,6 +29,14 @@ pub type Async<R> = Pin<Box<dyn Future<Output = R> + Send>>;
 
 /// An abstract implementation for retrieving data out of a block file backup
 pub trait BlockRestoreDriver {
+    /// List ArchiveEntry objects for the given image file and path
+    fn data_list(
+        &self,
+        details: SnapRestoreDetails,
+        img_file: String,
+        path: Vec<u8>,
+    ) -> Async<Result<Vec<ArchiveEntry>, Error>>;
+
     /// Return status of all running/mapped images, result value is (id, extra data), where id must
     /// match with the ones returned from list()
     fn status(&self) -> Async<Result<Vec<(String, Value)>, Error>>;
@@ -56,6 +65,16 @@ impl BlockDriverType {
 const DEFAULT_DRIVER: BlockDriverType = BlockDriverType::Qemu;
 const ALL_DRIVERS: &[BlockDriverType] = &[BlockDriverType::Qemu];
 
+pub async fn data_list(
+    driver: Option<BlockDriverType>,
+    details: SnapRestoreDetails,
+    img_file: String,
+    path: Vec<u8>,
+) -> Result<Vec<ArchiveEntry>, Error> {
+    let driver = driver.unwrap_or(DEFAULT_DRIVER).resolve();
+    driver.data_list(details, img_file, path).await
+}
+
 #[api(
    input: {
        properties: {
diff --git a/src/bin/proxmox_file_restore/block_driver_qemu.rs b/src/bin/proxmox_file_restore/block_driver_qemu.rs
index d406d523..3277af5d 100644
--- a/src/bin/proxmox_file_restore/block_driver_qemu.rs
+++ b/src/bin/proxmox_file_restore/block_driver_qemu.rs
@@ -13,6 +13,7 @@ use std::time::Duration;
 use tokio::time;
 
 use proxmox::tools::fs::{file_read_string, lock_file, make_tmp_file, CreateOptions};
+use proxmox_backup::api2::types::ArchiveEntry;
 use proxmox_backup::backup::BackupDir;
 use proxmox_backup::buildcfg;
 use proxmox_backup::client::*;
@@ -348,6 +349,26 @@ async fn start_vm(
 }
 
 impl BlockRestoreDriver for QemuBlockDriver {
+    fn data_list(
+        &self,
+        details: SnapRestoreDetails,
+        img_file: String,
+        mut path: Vec<u8>,
+    ) -> Async<Result<Vec<ArchiveEntry>, Error>> {
+        async move {
+            let client = ensure_running(&details).await?;
+            if !path.is_empty() && path[0] != b'/' {
+                path.insert(0, b'/');
+            }
+            let path = base64::encode(img_file.bytes().chain(path).collect::<Vec<u8>>());
+            let mut result = client
+                .get("api2/json/list", Some(json!({ "path": path })))
+                .await?;
+            serde_json::from_value(result["data"].take()).map_err(|err| err.into())
+        }
+        .boxed()
+    }
+
     fn status(&self) -> Async<Result<Vec<(String, Value)>, Error>> {
         async move {
             let mut state_map = VMStateMap::load()?;
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
index 8eb727df..125b5bfb 100644
--- a/src/bin/proxmox_restore_daemon/api.rs
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -1,20 +1,27 @@
 ///! File-restore API running inside the restore VM
-use anyhow::Error;
-use serde_json::Value;
+use anyhow::{bail, Error};
+use std::ffi::OsStr;
 use std::fs;
+use std::os::unix::ffi::OsStrExt;
+use std::path::{Path, PathBuf};
 
 use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment, SubdirMap};
 use proxmox::list_subdirs_api_method;
 
 use proxmox_backup::api2::types::*;
+use proxmox_backup::backup::DirEntryAttribute;
+use proxmox_backup::tools::fs::read_subdir;
 
-use super::{watchdog_remaining, watchdog_undo_ping};
+use super::{disk::ResolveResult, watchdog_remaining, watchdog_undo_ping};
 
 // NOTE: All API endpoints must have Permission::World, as the configs for authentication do not
 // exist within the restore VM. Safety is guaranteed since we use a low port, so only root on the
 // host can contact us - and there the proxmox-backup-client validates permissions already.
 
-const SUBDIRS: SubdirMap = &[("status", &Router::new().get(&API_METHOD_STATUS))];
+const SUBDIRS: SubdirMap = &[
+    ("list", &Router::new().get(&API_METHOD_LIST)),
+    ("status", &Router::new().get(&API_METHOD_STATUS)),
+];
 
 pub const ROUTER: Router = Router::new()
     .get(&list_subdirs_api_method!(SUBDIRS))
@@ -55,3 +62,121 @@ fn status(keep_timeout: bool) -> Result<RestoreDaemonStatus, Error> {
         timeout: watchdog_remaining(false),
     })
 }
+
+fn get_dir_entry(path: &Path) -> Result<DirEntryAttribute, Error> {
+    use nix::sys::stat;
+
+    let stat = stat::stat(path)?;
+    Ok(match stat.st_mode & libc::S_IFMT {
+        libc::S_IFREG => DirEntryAttribute::File {
+            size: stat.st_size as u64,
+            mtime: stat.st_mtime,
+        },
+        libc::S_IFDIR => DirEntryAttribute::Directory { start: 0 },
+        _ => bail!("unsupported file type: {}", stat.st_mode),
+    })
+}
+
+#[api(
+    input: {
+        properties: {
+            "path": {
+                type: String,
+                description: "base64-encoded path to list files and directories under",
+            },
+        },
+    },
+    access: {
+        description: "Permissions are handled outside restore VM.",
+        permission: &Permission::World,
+    },
+)]
+/// List file details for the given file, or the files and directories under the given path if it
+/// points to a directory.
+fn list(
+    path: String,
+    _info: &ApiMethod,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<ArchiveEntry>, Error> {
+    let mut res = Vec::new();
+
+    let param_path = base64::decode(path)?;
+    let mut path = param_path.clone();
+    if let Some(b'/') = path.last() {
+        path.pop();
+    }
+    let path_str = OsStr::from_bytes(&path[..]);
+    let param_path_buf = Path::new(path_str);
+
+    let mut disk_state = crate::DISK_STATE.lock().unwrap();
+    let query_result = disk_state.resolve(&param_path_buf)?;
+
+    match query_result {
+        ResolveResult::Path(vm_path) => {
+            let root_entry = get_dir_entry(&vm_path)?;
+            match root_entry {
+                DirEntryAttribute::File { .. } => {
+                    // list on file, return details
+                    res.push(ArchiveEntry::new(&param_path, &root_entry));
+                }
+                DirEntryAttribute::Directory { .. } => {
+                    // list on directory, return all contained files/dirs
+                    for f in read_subdir(libc::AT_FDCWD, &vm_path)? {
+                        if let Ok(f) = f {
+                            let name = f.file_name().to_bytes();
+                            let path = &Path::new(OsStr::from_bytes(name));
+                            if path.components().count() == 1 {
+                                // ignore '.' and '..'
+                                match path.components().next().unwrap() {
+                                    std::path::Component::CurDir
+                                    | std::path::Component::ParentDir => continue,
+                                    _ => {}
+                                }
+                            }
+
+                            let mut full_vm_path = PathBuf::new();
+                            full_vm_path.push(&vm_path);
+                            full_vm_path.push(path);
+                            let mut full_path = PathBuf::new();
+                            full_path.push(param_path_buf);
+                            full_path.push(path);
+
+                            let entry = get_dir_entry(&full_vm_path);
+                            if let Ok(entry) = entry {
+                                res.push(ArchiveEntry::new(
+                                    full_path.as_os_str().as_bytes(),
+                                    &entry,
+                                ));
+                            }
+                        }
+                    }
+                }
+                _ => unreachable!(),
+            }
+        }
+        ResolveResult::BucketTypes(types) => {
+            for t in types {
+                let mut t_path = path.clone();
+                t_path.push(b'/');
+                t_path.extend(t.as_bytes());
+                res.push(ArchiveEntry::new(
+                    &t_path[..],
+                    &DirEntryAttribute::Directory { start: 0 },
+                ));
+            }
+        }
+        ResolveResult::BucketComponents(comps) => {
+            for c in comps {
+                let mut c_path = path.clone();
+                c_path.push(b'/');
+                c_path.extend(c.as_bytes());
+                res.push(ArchiveEntry::new(
+                    &c_path[..],
+                    &DirEntryAttribute::Directory { start: 0 },
+                ));
+            }
+        }
+    }
+
+    Ok(res)
+}
-- 
2.20.1

* [pbs-devel] [PATCH proxmox-backup 22/22] file-restore: add 'extract' command for VM file restore
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (20 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 21/22] file-restore(-daemon): implement list API Stefan Reiter
@ 2021-02-16 17:07 ` Stefan Reiter
  2021-02-16 17:11 ` [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:07 UTC (permalink / raw)
  To: pbs-devel

Encodes the data into a streaming pxar archive on the restore VM, then
extracts it locally. This allows sharing most of the code with regular
pxar (container) restore.
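
As a rough sketch (simplified, with assumed buffer sizes - not the patch
code itself), the streaming hand-off works like this: the daemon writes
the pxar archive into one end of a tokio duplex pipe while the other end
is streamed out as the HTTP response body, which the client then feeds
straight into a sequential decoder.

    use tokio::io::{duplex, AsyncReadExt, AsyncWriteExt};

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let (mut writer, mut reader) = duplex(64 * 1024);

        // producer: stands in for create_archive() encoding into TokioWriter
        tokio::spawn(async move {
            writer.write_all(b"pxar archive bytes ...").await.ok();
            // dropping `writer` signals EOF to the reader side
        });

        // consumer: stands in for Body::wrap_stream(ReaderStream::new(reader))
        // on the daemon, and for Decoder::from_tokio(reader) on the client
        let mut buf = Vec::new();
        reader.read_to_end(&mut buf).await?;
        assert!(!buf.is_empty());
        Ok(())
    }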

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 Cargo.toml                                    |   2 +-
 debian/control                                |   1 +
 src/bin/proxmox-file-restore.rs               | 157 +++++++++++++-----
 src/bin/proxmox_file_restore/block_driver.rs  |  17 ++
 .../proxmox_file_restore/block_driver_qemu.rs |  27 +++
 src/bin/proxmox_restore_daemon/api.rs         | 140 +++++++++++++++-
 6 files changed, 302 insertions(+), 42 deletions(-)

diff --git a/Cargo.toml b/Cargo.toml
index de42c2ff..988496a4 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -64,7 +64,7 @@ syslog = "4.0"
 tokio = { version = "1.0", features = [ "fs", "io-util", "io-std", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
 tokio-openssl = "0.6.1"
 tokio-stream = "0.1.0"
-tokio-util = { version = "0.6", features = [ "codec" ] }
+tokio-util = { version = "0.6", features = [ "codec", "io" ] }
 tower-service = "0.3.0"
 udev = ">= 0.3, <0.5"
 url = "2.1"
diff --git a/debian/control b/debian/control
index f4d81732..661c2894 100644
--- a/debian/control
+++ b/debian/control
@@ -67,6 +67,7 @@ Build-Depends: debhelper (>= 11),
  librust-tokio-stream-0.1+default-dev,
  librust-tokio-util-0.6+codec-dev,
  librust-tokio-util-0.6+default-dev,
+ librust-tokio-util-0.6+io-dev,
  librust-tower-service-0.3+default-dev,
  librust-udev-0.4+default-dev | librust-udev-0.3+default-dev,
  librust-url-2+default-dev (>= 2.1-~~),
diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index 232931d9..48e4643f 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -14,6 +14,7 @@ use proxmox::api::{
     },
 };
 use pxar::accessor::aio::Accessor;
+use pxar::decoder::aio::Decoder;
 
 use proxmox_backup::api2::{helpers, types::ArchiveEntry};
 use proxmox_backup::backup::{
@@ -21,7 +22,7 @@ use proxmox_backup::backup::{
     DirEntryAttribute, IndexFile, LocalDynamicReadAt, CATALOG_NAME,
 };
 use proxmox_backup::client::{BackupReader, RemoteChunkReader};
-use proxmox_backup::pxar::{create_zip, extract_sub_dir};
+use proxmox_backup::pxar::{create_zip, create_zip_seq, extract_sub_dir, extract_sub_dir_seq};
 use proxmox_backup::tools;
 
 // use "pub" so rust doesn't complain about "unused" functions in the module
@@ -273,7 +274,11 @@ async fn list(param: Value) -> Result<Value, Error> {
                description: "Print verbose information",
                optional: true,
                default: false,
-           }
+           },
+           "driver": {
+               type: BlockDriverType,
+               optional: true,
+           },
        }
    }
 )]
@@ -306,20 +311,21 @@ async fn extract(param: Value) -> Result<Value, Error> {
         }
     };
 
+    let client = connect(&repo)?;
+    let client = BackupReader::start(
+        client,
+        crypt_config.clone(),
+        repo.store(),
+        &snapshot.group().backup_type(),
+        &snapshot.group().backup_id(),
+        snapshot.backup_time(),
+        true,
+    )
+    .await?;
+    let (manifest, _) = client.download_manifest().await?;
+
     match path {
         ExtractPath::Pxar(archive_name, path) => {
-            let client = connect(&repo)?;
-            let client = BackupReader::start(
-                client,
-                crypt_config.clone(),
-                repo.store(),
-                &snapshot.group().backup_type(),
-                &snapshot.group().backup_id(),
-                snapshot.backup_time(),
-                true,
-            )
-            .await?;
-            let (manifest, _) = client.download_manifest().await?;
             let file_info = manifest.lookup_file_info(&archive_name)?;
             let index = client
                 .download_dynamic_index(&manifest, &archive_name)
@@ -336,32 +342,23 @@ async fn extract(param: Value) -> Result<Value, Error> {
             let archive_size = reader.archive_size();
             let reader = LocalDynamicReadAt::new(reader);
             let decoder = Accessor::new(reader, archive_size).await?;
+            extract_to_target(decoder, &path, target, verbose).await?;
+        }
+        ExtractPath::VM(file, path) => {
+            let details = SnapRestoreDetails {
+                manifest,
+                repo,
+                snapshot,
+            };
+            let driver: Option<BlockDriverType> = match param.get("driver") {
+                Some(drv) => Some(serde_json::from_value(drv.clone())?),
+                None => None,
+            };
 
-            let root = decoder.open_root().await?;
-            let file = root
-                .lookup(OsStr::from_bytes(&path))
-                .await?
-                .ok_or_else(|| format_err!("error opening '{:?}'", path))?;
+            let reader = data_extract(driver, details, file, path.clone()).await?;
+            let decoder = Decoder::from_tokio(reader).await?;
 
-            if let Some(target) = target {
-                extract_sub_dir(target, decoder, OsStr::from_bytes(&path), verbose).await?;
-            } else {
-                match file.kind() {
-                    pxar::EntryKind::File { .. } => {
-                        tokio::io::copy(&mut file.contents().await?, &mut tokio::io::stdout())
-                            .await?;
-                    }
-                    _ => {
-                        create_zip(
-                            tokio::io::stdout(),
-                            decoder,
-                            OsStr::from_bytes(&path),
-                            verbose,
-                        )
-                        .await?;
-                    }
-                }
-            }
+            extract_to_target_seq(decoder, target, verbose).await?;
         }
         _ => {
             bail!("cannot extract '{}'", orig_path);
@@ -371,6 +368,90 @@ async fn extract(param: Value) -> Result<Value, Error> {
     Ok(Value::Null)
 }
 
+async fn extract_to_target_seq<T>(
+    mut decoder: Decoder<T>,
+    target: Option<PathBuf>,
+    verbose: bool,
+) -> Result<(), Error>
+where
+    T: pxar::decoder::SeqRead + Send + Unpin + 'static,
+{
+    // skip / directory for extraction
+    let _root = decoder.next().await.transpose()?;
+
+    // take a peek at the root of the data we want to extract - don't call next(), as that would
+    // mean it couldn't be read by the extraction functions below anymore
+    let mut data_root = match decoder.peek().await.transpose()? {
+        Some(r) => r,
+        None => bail!("no pxar entries found"),
+    };
+
+    // skip .pxarexclude-cli if it comes first for some reason
+    if matches!(data_root.kind(), pxar::EntryKind::File { .. })
+        && data_root.file_name().as_bytes() == b".pxarexclude-cli"
+    {
+        decoder.next().await;
+        data_root = match decoder.peek().await.transpose()? {
+            Some(r) => r,
+            None => bail!("no pxar entries found (after skipping .pxarexclude-cli)"),
+        };
+    }
+
+    if let Some(target) = target {
+        extract_sub_dir_seq(target, decoder, verbose).await?;
+    } else {
+        if matches!(data_root.kind(), pxar::EntryKind::File { .. }) {
+            match decoder.contents() {
+                Some(mut c) => {
+                    tokio::io::copy(&mut c, &mut tokio::io::stdout()).await?;
+                }
+                None => bail!("cannot extract pxar file entry without content"),
+            }
+        } else {
+            create_zip_seq(tokio::io::stdout(), decoder, verbose).await?;
+        }
+    }
+
+    Ok(())
+}
+
+async fn extract_to_target<T>(
+    decoder: Accessor<T>,
+    path: &[u8],
+    target: Option<PathBuf>,
+    verbose: bool,
+) -> Result<(), Error>
+where
+    T: pxar::accessor::ReadAt + Clone + Send + Sync + Unpin + 'static,
+{
+    let root = decoder.open_root().await?;
+    let file = root
+        .lookup(OsStr::from_bytes(&path))
+        .await?
+        .ok_or_else(|| format_err!("error opening '{:?}'", path))?;
+
+    if let Some(target) = target {
+        extract_sub_dir(target, decoder, OsStr::from_bytes(&path), verbose).await?;
+    } else {
+        match file.kind() {
+            pxar::EntryKind::File { .. } => {
+                tokio::io::copy(&mut file.contents().await?, &mut tokio::io::stdout()).await?;
+            }
+            _ => {
+                create_zip(
+                    tokio::io::stdout(),
+                    decoder,
+                    OsStr::from_bytes(&path),
+                    verbose,
+                )
+                .await?;
+            }
+        }
+    }
+
+    Ok(())
+}
+
 fn main() {
     let list_cmd_def = CliCommand::new(&API_METHOD_LIST)
         .arg_param(&["snapshot", "path"])
diff --git a/src/bin/proxmox_file_restore/block_driver.rs b/src/bin/proxmox_file_restore/block_driver.rs
index 5ed35f25..2815ab60 100644
--- a/src/bin/proxmox_file_restore/block_driver.rs
+++ b/src/bin/proxmox_file_restore/block_driver.rs
@@ -36,6 +36,13 @@ pub trait BlockRestoreDriver {
         img_file: String,
         path: Vec<u8>,
     ) -> Async<Result<Vec<ArchiveEntry>, Error>>;
+    /// Attempt to create a pxar archive of the given file path and return a reader instance for it
+    fn data_extract(
+        &self,
+        details: SnapRestoreDetails,
+        img_file: String,
+        path: Vec<u8>,
+    ) -> Async<Result<Box<dyn tokio::io::AsyncRead + Unpin + Send>, Error>>;
 
     /// Return status of all running/mapped images, result value is (id, extra data), where id must
     /// match with the ones returned from list()
@@ -75,6 +82,16 @@ pub async fn data_list(
     driver.data_list(details, img_file, path).await
 }
 
+pub async fn data_extract(
+    driver: Option<BlockDriverType>,
+    details: SnapRestoreDetails,
+    img_file: String,
+    path: Vec<u8>,
+) -> Result<Box<dyn tokio::io::AsyncRead + Send + Unpin>, Error> {
+    let driver = driver.unwrap_or(DEFAULT_DRIVER).resolve();
+    driver.data_extract(details, img_file, path).await
+}
+
 #[api(
    input: {
        properties: {
diff --git a/src/bin/proxmox_file_restore/block_driver_qemu.rs b/src/bin/proxmox_file_restore/block_driver_qemu.rs
index 3277af5d..205d933c 100644
--- a/src/bin/proxmox_file_restore/block_driver_qemu.rs
+++ b/src/bin/proxmox_file_restore/block_driver_qemu.rs
@@ -369,6 +369,33 @@ impl BlockRestoreDriver for QemuBlockDriver {
         .boxed()
     }
 
+    fn data_extract(
+        &self,
+        details: SnapRestoreDetails,
+        img_file: String,
+        mut path: Vec<u8>,
+    ) -> Async<Result<Box<dyn tokio::io::AsyncRead + Unpin + Send>, Error>> {
+        async move {
+            let mut client = ensure_running(&details).await?;
+            if !path.is_empty() && path[0] != b'/' {
+                path.insert(0, b'/');
+            }
+            let path = base64::encode(img_file.bytes().chain(path).collect::<Vec<u8>>());
+            let (mut tx, rx) = tokio::io::duplex(1024 * 4096);
+            tokio::spawn(async move {
+                if let Err(err) = client
+                    .download("api2/json/extract", Some(json!({ "path": path })), &mut tx)
+                    .await
+                {
+                    eprintln!("reading file extraction stream failed - {}", err);
+                }
+            });
+
+            Ok(Box::new(rx) as Box<dyn tokio::io::AsyncRead + Unpin + Send>)
+        }
+        .boxed()
+    }
+
     fn status(&self) -> Async<Result<Vec<(String, Value)>, Error>> {
         async move {
             let mut state_map = VMStateMap::load()?;
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
index 125b5bfb..281f2121 100644
--- a/src/bin/proxmox_restore_daemon/api.rs
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -1,16 +1,29 @@
 ///! File-restore API running inside the restore VM
 use anyhow::{bail, Error};
+use futures::FutureExt;
+use hyper::http::request::Parts;
+use hyper::{header, Body, Response, StatusCode};
+use log::error;
+use pathpatterns::{MatchEntry, MatchPattern, MatchType, Pattern};
+use serde_json::Value;
+
 use std::ffi::OsStr;
 use std::fs;
 use std::os::unix::ffi::OsStrExt;
 use std::path::{Path, PathBuf};
 
-use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment, SubdirMap};
-use proxmox::list_subdirs_api_method;
+use proxmox::api::{
+    api, schema::*, ApiHandler, ApiMethod, ApiResponseFuture, Permission, Router, RpcEnvironment,
+    SubdirMap,
+};
+use proxmox::{identity, list_subdirs_api_method, sortable};
 
 use proxmox_backup::api2::types::*;
 use proxmox_backup::backup::DirEntryAttribute;
-use proxmox_backup::tools::fs::read_subdir;
+use proxmox_backup::pxar::{create_archive, Flags, PxarCreateOptions, ENCODER_MAX_ENTRIES};
+use proxmox_backup::tools::{self, fs::read_subdir};
+
+use pxar::encoder::aio::TokioWriter;
 
 use super::{disk::ResolveResult, watchdog_remaining, watchdog_undo_ping};
 
@@ -19,6 +32,7 @@ use super::{disk::ResolveResult, watchdog_remaining, watchdog_undo_ping};
 // host can contact us - and there the proxmox-backup-client validates permissions already.
 
 const SUBDIRS: SubdirMap = &[
+    ("extract", &Router::new().get(&API_METHOD_EXTRACT)),
     ("list", &Router::new().get(&API_METHOD_LIST)),
     ("status", &Router::new().get(&API_METHOD_STATUS)),
 ];
@@ -180,3 +194,123 @@ fn list(
 
     Ok(res)
 }
+
+#[sortable]
+pub const API_METHOD_EXTRACT: ApiMethod = ApiMethod::new(
+    &ApiHandler::AsyncHttp(&extract),
+    &ObjectSchema::new(
+        "Extract a file or directory from the VM as a pxar archive.",
+        &sorted!([(
+            "path",
+            false,
+            &StringSchema::new("base64-encoded path to the file or directory to extract").schema()
+        )]),
+    ),
+)
+.access(None, &Permission::World);
+
+fn extract(
+    _parts: Parts,
+    _req_body: Body,
+    param: Value,
+    _info: &ApiMethod,
+    _rpcenv: Box<dyn RpcEnvironment>,
+) -> ApiResponseFuture {
+    async move {
+        let path = tools::required_string_param(&param, "path")?;
+
+        let param_path = base64::decode(path)?;
+        let mut path = param_path.clone();
+        if let Some(b'/') = path.last() {
+            path.pop();
+        }
+        let path_str = OsStr::from_bytes(&path[..]);
+        let param_path_buf = Path::new(path_str);
+
+        let query_result = {
+            let mut disk_state = crate::DISK_STATE.lock().unwrap();
+            disk_state.resolve(&param_path_buf)?
+        };
+
+        let vm_path = match query_result {
+            ResolveResult::Path(vm_path) => vm_path,
+            _ => bail!(
+                "invalid path, cannot restore meta-directory: {:?}",
+                param_path_buf
+            ),
+        };
+
+        // check here so we can return a real error message, failing in the async task will stop
+        // the transfer, but not return a useful message
+        if !vm_path.exists() {
+            bail!("file or directory {:?} does not exist", param_path_buf);
+        }
+
+        let (writer, reader) = tokio::io::duplex(1024 * 64);
+        let pxar_writer = TokioWriter::new(writer);
+
+        tokio::spawn(async move {
+            let result = async move {
+                // pxar always expects a directory as its root, so to accommodate files as well we
+                // encode the parent dir with a filter only matching the target instead
+                let mut patterns = vec![MatchEntry::new(
+                    MatchPattern::Pattern(Pattern::path(b"*").unwrap()),
+                    MatchType::Exclude,
+                )];
+
+                let name = match vm_path.file_name() {
+                    Some(name) => name,
+                    None => bail!("no file name found for path: {:?}", vm_path),
+                };
+
+                if vm_path.is_dir() {
+                    let mut pat = name.as_bytes().to_vec();
+                    patterns.push(MatchEntry::new(
+                        MatchPattern::Pattern(Pattern::path(pat.clone())?),
+                        MatchType::Include,
+                    ));
+                    pat.extend(b"/**/*".iter());
+                    patterns.push(MatchEntry::new(
+                        MatchPattern::Pattern(Pattern::path(pat)?),
+                        MatchType::Include,
+                    ));
+                } else {
+                    patterns.push(MatchEntry::new(
+                        MatchPattern::Literal(name.as_bytes().to_vec()),
+                        MatchType::Include,
+                    ));
+                }
+
+                let dir_path = vm_path.parent().unwrap_or_else(|| Path::new("/"));
+                let dir = nix::dir::Dir::open(
+                    dir_path,
+                    nix::fcntl::OFlag::O_NOFOLLOW,
+                    nix::sys::stat::Mode::empty(),
+                )?;
+
+                let options = PxarCreateOptions {
+                    entries_max: ENCODER_MAX_ENTRIES,
+                    device_set: None,
+                    patterns,
+                    verbose: false,
+                    skip_lost_and_found: false,
+                };
+                create_archive(dir, pxar_writer, Flags::DEFAULT, |_| Ok(()), None, options).await
+            }
+            .await;
+            if let Err(err) = result {
+                error!("pxar streaming task failed - {}", err);
+            }
+        });
+
+        let stream = tokio_util::io::ReaderStream::new(reader);
+
+        let body = Body::wrap_stream(stream);
+        Ok(Response::builder()
+            .status(StatusCode::OK)
+            .header(header::CONTENT_TYPE, "application/octet-stream")
+            .body(body)
+            .unwrap())
+    }
+    .boxed()
+}
-- 
2.20.1

* Re: [pbs-devel] [PATCH 00/22] Single file restore for VM images
  2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
                   ` (21 preceding siblings ...)
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 22/22] file-restore: add 'extract' command for VM file restore Stefan Reiter
@ 2021-02-16 17:11 ` Stefan Reiter
  22 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-16 17:11 UTC (permalink / raw)
  To: pbs-devel

On 16/02/2021 18:06, Stefan Reiter wrote:
> Implements CLI-based single file and directory restore for both pxar.didx
> archives (containers, hosts) and img.fidx (VMs, raw block devices). The design
> for VM restore uses a small virtual machine that the host communicates with via
> virtio-vsock.
> 
> This is encapsuled into a new package called "proxmox-file-restore", providing a
> binary of the same name. A second package is provided in a new git repository
> called "proxmox-restore-vm-data", providing a minimal kernel image and a base
> initramfs (without the daemon, which is included in proxmox-file-restore).
> 
> Requires my previously sent pxar asyncify series:
> https://lists.proxmox.com/pipermail/pbs-devel/2020-December/001788.html
> 

Whoops:
https://lists.proxmox.com/pipermail/pbs-devel/2021-February/002113.html

> The first couple patches in the proxmox-backup repo are adapted versions of the
> ones Dominik sent to the list a while ago:
> https://lists.proxmox.com/pipermail/pbs-devel/2020-December/001788.html
> 
> Dependency bump in proxmox-backup for pxar is required, though best done
> together with the changes from the aforementioned seperate series.
> 
> Tested with ext4 and NTFS VMs, but theoretically includes support for many more
> filesystems (see 'config-base' in the new proxmox-restore-vm-data repository).
> 
> Known issues/Missing features:
> * GUI/PVE support
> * PBS_PASSWORD/PBS_FINGERPRINT currently have to be set manually for VM restore
> * ZFS/LVM/md/... support
> * shell auto-complete for "proxmox-file-restore" doesn't work (and I don't know
>    why...)
> * some patches might include some sneaky rustfmt/clippy fixes that'd better fit
>    to a previous patch, sorry for that, rebasing so many patches is annoying ;)
> 
> 

* Re: [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module Stefan Reiter
@ 2021-02-17  6:49   ` Dietmar Maurer
  2021-02-17  7:58     ` Stefan Reiter
  2021-02-17  9:13   ` [pbs-devel] applied: " Dietmar Maurer
  1 sibling, 1 reply; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17  6:49 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

I thought we can put all client related code into src/client/

So why starting another lib inside src/bin/proxmox_client_tools/?


> On 02/16/2021 6:06 PM Stefan Reiter <s.reiter@proxmox.com> wrote:
> 
>  
> ...including common schemata, connect(), extract_*() and completion
> functions.
> 
> For later use with proxmox-file-restore binary.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/bin/proxmox-backup-client.rs    | 361 +--------------------------
>  src/bin/proxmox_client_tools/mod.rs | 366 ++++++++++++++++++++++++++++

* [pbs-devel] applied: [PATCH proxmox-backup 13/22] rest: implement tower service for UnixStream
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 13/22] rest: implement tower service for UnixStream Stefan Reiter
@ 2021-02-17  6:52   ` Dietmar Maurer
  0 siblings, 0 replies; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17  6:52 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

applied

* [pbs-devel] applied: [PATCH proxmox-backup 14/22] client: add VsockClient to connect to virtio-vsock VMs
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 14/22] client: add VsockClient to connect to virtio-vsock VMs Stefan Reiter
@ 2021-02-17  7:24   ` Dietmar Maurer
  0 siblings, 0 replies; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17  7:24 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

applied

* [pbs-devel] applied: [PATCH proxmox-backup 04/22] api2/admin/datastore: refactor list_dir_content in catalog_reader
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 04/22] api2/admin/datastore: refactor list_dir_content in catalog_reader Stefan Reiter
@ 2021-02-17  7:50   ` Thomas Lamprecht
  0 siblings, 0 replies; 50+ messages in thread
From: Thomas Lamprecht @ 2021-02-17  7:50 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 16.02.21 18:06, Stefan Reiter wrote:
> From: Dominik Csapak <d.csapak@proxmox.com>
> 
> we will reuse that later in the client, so we need it somewhere
> we can use from there
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> 
> [add strongly typed ArchiveEntry and put api code into helpers.rs]
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/api2/admin/datastore.rs | 53 ++++++-------------------------------
>  src/api2/helpers.rs         | 31 ++++++++++++++++++++++
>  src/api2/types/mod.rs       | 43 ++++++++++++++++++++++++++++++
>  src/backup/catalog.rs       | 26 ++++++++++++++++++
>  4 files changed, 108 insertions(+), 45 deletions(-)
> 
>

applied, thanks!

* [pbs-devel] applied: [PATCH proxmox-backup 05/22] api2/admin/datastore: accept "/" as path for root
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 05/22] api2/admin/datastore: accept "/" as path for root Stefan Reiter
@ 2021-02-17  7:50   ` Thomas Lamprecht
  0 siblings, 0 replies; 50+ messages in thread
From: Thomas Lamprecht @ 2021-02-17  7:50 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 16.02.21 18:06, Stefan Reiter wrote:
> From: Dominik Csapak <d.csapak@proxmox.com>
> 
> makes more sense than sending "root'"
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/api2/admin/datastore.rs | 2 +-
>  www/window/FileBrowser.js   | 1 +
>  2 files changed, 2 insertions(+), 1 deletion(-)
> 
>

applied, thanks!

* [pbs-devel] applied: [PATCH proxmox-backup 06/22] api2/admin/datastore: refactor create_zip into pxar/extract
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 06/22] api2/admin/datastore: refactor create_zip into pxar/extract Stefan Reiter
@ 2021-02-17  7:50   ` Thomas Lamprecht
  0 siblings, 0 replies; 50+ messages in thread
From: Thomas Lamprecht @ 2021-02-17  7:50 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter


On 16.02.21 18:06, Stefan Reiter wrote:
> From: Dominik Csapak <d.csapak@proxmox.com>
> 
> we will reuse that code in the client, so we need to move it to
> where we can access it from the client
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> 
> [clippy fixes]
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/api2/admin/datastore.rs |  99 +++--------------------------
>  src/pxar/extract.rs         | 120 +++++++++++++++++++++++++++++++++++-
>  src/pxar/mod.rs             |   2 +-
>  3 files changed, 130 insertions(+), 91 deletions(-)
> 
>

applied, thanks!

* [pbs-devel] applied: [PATCH proxmox-backup 07/22] pxar/extract: add extract_sub_dir
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 07/22] pxar/extract: add extract_sub_dir Stefan Reiter
@ 2021-02-17  7:51   ` Thomas Lamprecht
  0 siblings, 0 replies; 50+ messages in thread
From: Thomas Lamprecht @ 2021-02-17  7:51 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 16.02.21 18:06, Stefan Reiter wrote:
> From: Dominik Csapak <d.csapak@proxmox.com>
> 
> to extract some subdirectory of a pxar into a given target
> this will be used in the client
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/pxar/extract.rs | 122 ++++++++++++++++++++++++++++++++++++++++++++
>  src/pxar/mod.rs     |   2 +-
>  2 files changed, 123 insertions(+), 1 deletion(-)
> 
>

applied, thanks!

* Re: [pbs-devel] [PATCH pxar 01/22] decoder/aio: add contents() and content_size() calls
  2021-02-16 17:06 ` [pbs-devel] [PATCH pxar 01/22] decoder/aio: add contents() and content_size() calls Stefan Reiter
@ 2021-02-17  7:56   ` Wolfgang Bumiller
  0 siblings, 0 replies; 50+ messages in thread
From: Wolfgang Bumiller @ 2021-02-17  7:56 UTC (permalink / raw)
  To: Stefan Reiter; +Cc: pbs-devel

On Tue, Feb 16, 2021 at 06:06:49PM +0100, Stefan Reiter wrote:
> Returns a tokio AsyncRead implementation for its "Contents" to keep with
> the aio theme.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/decoder/aio.rs | 43 ++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 42 insertions(+), 1 deletion(-)
> 
> diff --git a/src/decoder/aio.rs b/src/decoder/aio.rs
> index 82030b0..5cc6694 100644
> --- a/src/decoder/aio.rs
> +++ b/src/decoder/aio.rs
> @@ -56,6 +56,18 @@ impl<T: SeqRead> Decoder<T> {
>          self.inner.next_do().await.transpose()
>      }
>  
> +    /// Get a reader for the contents of the current entry, if the entry has contents.
> +    /// Only available for feature "tokio-io", since it returns an AsyncRead reader.
> +    #[cfg(feature = "tokio-io")]

^ Don't do this.
We basically have our own async I/O "entry point" with the SeqRead
trait, and if you want to use it with another I/O runtime (say async-std
or w/e futures or w/e else there is), you want to be able to use
`Contents` there as well and just wrap it in an alternative to the
provided `TokioReader` manually.

So just leave `Contents` as a public just-`SeqRead` type.
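
E.g. something like this should then be possible downstream (a sketch
assuming the `futures` crate's AsyncRead; `FuturesReader` is a made-up
name, not part of this series):

    use std::io;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // hypothetical adapter for the futures-io ecosystem, analogous to TokioReader
    pub struct FuturesReader<T: pxar::decoder::SeqRead + Unpin> {
        inner: T,
    }

    impl<T: pxar::decoder::SeqRead + Unpin> futures::io::AsyncRead for FuturesReader<T> {
        fn poll_read(
            mut self: Pin<&mut Self>,
            cx: &mut Context<'_>,
            buf: &mut [u8],
        ) -> Poll<io::Result<usize>> {
            // SeqRead::poll_seq_read already matches this signature 1:1
            Pin::new(&mut self.inner).poll_seq_read(cx, buf)
        }
    }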

> +    pub fn contents(&mut self) -> Option<Contents<T>> {
> +        self.inner.content_reader().map(|inner| Contents { inner })
> +    }
> +
> +    /// Get the size of the current contents, if the entry has contents.
> +    pub fn content_size(&self) -> Option<u64> {
> +        self.inner.content_size()
> +    }
> +
>      /// Include goodbye tables in iteration.
>      pub fn enable_goodbye_entries(&mut self, on: bool) {
>          self.inner.with_goodbye_tables = on;
> @@ -93,7 +105,36 @@ mod tok {
>              }
>          }
>      }
> +
> +    pub struct Contents<'a, T: crate::decoder::SeqRead> {
> +        pub(crate) inner: crate::decoder::Contents<'a, T>,
^ no need for the `pub(crate)` then when you move it up
> +    }
> +
> +    impl<'a, T: crate::decoder::SeqRead> tokio::io::AsyncRead for Contents<'a, T> {
> +        fn poll_read(
> +            self: Pin<&mut Self>,
> +            cx: &mut Context<'_>,
> +            buf: &mut tokio::io::ReadBuf<'_>,
> +        ) -> Poll<io::Result<()>> {
> +            unsafe {
> +                // Safety: poll_seq_read will only write to the buffer, so we don't need to
> +                // initialize it first, we can treat is a &[u8] immediately as long as we uphold
> +                // the ReadBuf invariants in the conditional below

^ This comment is actually wrong. `poll_seq_read` will do whatever the
heck the implementer of the trait decides to do ;-)

Personally, I really don't mind doing this *anyway* until a definitive
"solution" actually lands in the *standard* library. Because if someone
f's up a read impl and it causes wild codegen bugs with tentacles, then...
sorry not sorry.
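
For reference, the cast can be avoided entirely with tokio's
`ReadBuf::initialize_unfilled()`, at the cost of zeroing the unfilled
part once - a rough sketch of the same poll_read:

    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut tokio::io::ReadBuf<'_>,
    ) -> Poll<io::Result<()>> {
        // zero-initializes the unfilled region once, then it's a plain &mut [u8]
        let write_buf = buf.initialize_unfilled();
        let result = unsafe {
            // only the pin projection stays unsafe
            self.map_unchecked_mut(|this| &mut this.inner as &mut dyn crate::decoder::SeqRead)
        }
        .poll_seq_read(cx, write_buf);
        if let Poll::Ready(Ok(n)) = result {
            buf.advance(n); // advance() also marks these bytes as filled
        }
        result.map(|res| res.map(|_| ()))
    }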

> +                let write_buf =
> +                    &mut *(buf.unfilled_mut() as *mut [std::mem::MaybeUninit<u8>] as *mut [u8]);
> +                let result = self
> +                    .map_unchecked_mut(|this| &mut this.inner as &mut dyn crate::decoder::SeqRead)
> +                    .poll_seq_read(cx, write_buf);
> +                if let Poll::Ready(Ok(n)) = result {
> +                    // if we've written data, advance both initialized and filled bytes cursor
> +                    buf.assume_init(buf.filled().len() + n);
> +                    buf.advance(n);
> +                }
> +                result.map(|_| Ok(()))
> +            }
> +        }
> +    }
>  }
>  
>  #[cfg(feature = "tokio-io")]
> -use tok::TokioReader;
> +use tok::{Contents, TokioReader};

^ Needs a `pub` otherwise the type *can* exist as the return type of a
function, but users have no way to actually *type out* the type...

* Re: [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module
  2021-02-17  6:49   ` Dietmar Maurer
@ 2021-02-17  7:58     ` Stefan Reiter
  2021-02-17  8:50       ` Dietmar Maurer
  0 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-17  7:58 UTC (permalink / raw)
  To: Dietmar Maurer, Proxmox Backup Server development discussion

On 17/02/2021 07:49, Dietmar Maurer wrote:
> I thought we can put all client related code into src/client/
> 
> So why starting another lib inside src/bin/proxmox_client_tools/?
> 

Why was (part of) it initially in src/bin/proxmox_backup_client then and 
not src/client/? I thought it was to separate the "binary-specific"
client code from more generic client code (such as used in pull or the 
QEMU library as well).

Wouldn't mind putting it in src/client either though.

> 
>> On 02/16/2021 6:06 PM Stefan Reiter <s.reiter@proxmox.com> wrote:
>>
>>   
>> ...including common schemata, connect(), extract_*() and completion
>> functions.
>>
>> For later use with proxmox-file-restore binary.
>>
>> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
>> ---
>>   src/bin/proxmox-backup-client.rs    | 361 +--------------------------
>>   src/bin/proxmox_client_tools/mod.rs | 366 ++++++++++++++++++++++++++++




^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [pbs-devel] [PATCH pxar 02/22] decoder: add peek()
  2021-02-16 17:06 ` [pbs-devel] [PATCH pxar 02/22] decoder: add peek() Stefan Reiter
@ 2021-02-17  8:20   ` Wolfgang Bumiller
  2021-02-17  8:38     ` Stefan Reiter
  0 siblings, 1 reply; 50+ messages in thread
From: Wolfgang Bumiller @ 2021-02-17  8:20 UTC (permalink / raw)
  To: Stefan Reiter; +Cc: pbs-devel

On Tue, Feb 16, 2021 at 06:06:50PM +0100, Stefan Reiter wrote:
> Allows peeking the current element, but will not advance the state
> (except for contents() and content_size() functions).
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/accessor/mod.rs |  3 +++
>  src/decoder/aio.rs  | 10 +++++++++-
>  src/decoder/mod.rs  | 19 +++++++++++++++++--
>  src/decoder/sync.rs | 10 +++++++++-
>  4 files changed, 38 insertions(+), 4 deletions(-)
> 
> diff --git a/src/accessor/mod.rs b/src/accessor/mod.rs
> index d02dc13..aa1b3f6 100644
> --- a/src/accessor/mod.rs
> +++ b/src/accessor/mod.rs
> @@ -293,6 +293,7 @@ impl<T: Clone + ReadAt> AccessorImpl<T> {
>          let entry = decoder
>              .next()
>              .await
> +            .transpose()
>              .ok_or_else(|| io_format_err!("unexpected EOF while decoding file entry"))??;
>          Ok(FileEntryImpl {
>              input: self.input.clone(),
> @@ -334,6 +335,7 @@ impl<T: Clone + ReadAt> AccessorImpl<T> {
>          let entry = decoder
>              .next()
>              .await
> +            .transpose()
>              .ok_or_else(|| io_format_err!("unexpected EOF while following a hardlink"))??;
>  
>          match entry.kind() {
> @@ -516,6 +518,7 @@ impl<T: Clone + ReadAt> DirectoryImpl<T> {
>          let entry = decoder
>              .next()
>              .await
> +            .transpose()
>              .ok_or_else(|| io_format_err!("unexpected EOF while decoding directory entry"))??;
>          Ok((entry, decoder))
>      }
> diff --git a/src/decoder/aio.rs b/src/decoder/aio.rs
> index 5cc6694..c553d45 100644
> --- a/src/decoder/aio.rs
> +++ b/src/decoder/aio.rs
> @@ -53,7 +53,15 @@ impl<T: SeqRead> Decoder<T> {
>      #[allow(clippy::should_implement_trait)]
>      /// If this is a directory entry, get the next item inside the directory.
>      pub async fn next(&mut self) -> Option<io::Result<Entry>> {
> -        self.inner.next_do().await.transpose()
> +        self.inner.next().await.transpose()
> +    }
> +
> +    /// If this is a directory entry, get the next item inside the directory.
> +    /// Do not advance the cursor, so multiple calls to peek() will return the same entry,
> +    /// and the next call to next() will read the item once again before moving on.
> +    /// NOTE: This *will* advance the state for contents() and content_size()!

^ Which is why I'm wondering whether we should maybe leave this up to
the *user* rather than provide a sort-of broken API here?

I'd rather have this be guarded by a Seek trait, but that too is
something we won't get from `std` and so we'd have to add one.

Why do we need this exactly?

And would this be solved by simply *generally* storing a
"current_entry"? Then we can have a `.current_entry() -> Option<&Entry>`
which works after at least `next()` call, and `.next()` working as
usual.  And we may just have `next()` also return a reference instead.
The user can `.clone()` if necessary. Or we return a mutable reference
and allow `.take()`, then the user is responsible for knowing whether
calling `.current_entry()` makes sense ;-)
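
Roughly this shape, as a toy model (String standing in for Entry, just
to illustrate the ownership):

    struct Decoder {
        current: Option<String>,            // stands in for Option<Entry>
        source: std::vec::IntoIter<String>, // stands in for the real input
    }

    impl Decoder {
        fn next(&mut self) -> Option<&mut String> {
            self.current = self.source.next();
            self.current.as_mut()
        }

        /// valid after at least one next(); callers may clone() or take()
        fn current_entry(&mut self) -> Option<&mut String> {
            self.current.as_mut()
        }
    }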

> +    pub async fn peek(&mut self) -> Option<io::Result<Entry>> {
> +        self.inner.peek().await.transpose()
>      }
>  
>      /// Get a reader for the contents of the current entry, if the entry has contents.
> diff --git a/src/decoder/mod.rs b/src/decoder/mod.rs
> index 2a5e79a..041226d 100644
> --- a/src/decoder/mod.rs
> +++ b/src/decoder/mod.rs
> @@ -155,6 +155,7 @@ pub(crate) struct DecoderImpl<T> {
>      path_lengths: Vec<usize>,
>      state: State,
>      with_goodbye_tables: bool,
> +    peeked: Option<io::Result<Option<Entry>>>,
>  
>      /// The random access code uses decoders for sub-ranges which may not end in a `PAYLOAD` for
>      /// entries like FIFOs or sockets, so there we explicitly allow an item to terminate with EOF.
> @@ -218,6 +219,7 @@ impl<I: SeqRead> DecoderImpl<I> {
>              path_lengths: Vec::new(),
>              state: State::Begin,
>              with_goodbye_tables: false,
> +            peeked: None,
>              eof_after_entry,
>          };
>  
> @@ -227,8 +229,21 @@ impl<I: SeqRead> DecoderImpl<I> {
>      }
>  
>      /// Get the next file entry, recursing into directories.
> -    pub async fn next(&mut self) -> Option<io::Result<Entry>> {
> -        self.next_do().await.transpose()
> +    pub async fn next(&mut self) -> io::Result<Option<Entry>> {
> +        if let Some(ent) = self.peeked.take() {
> +            return ent;
> +        }
> +        self.next_do().await
> +    }
> +
> +    pub async fn peek(&mut self) -> io::Result<Option<Entry>> {
> +        self.peeked = Some(self.next().await);
> +        match &self.peeked {
> +            Some(Ok(ent)) => Ok(ent.clone()),
> +            // io::Error does not implement Clone...
> +            Some(Err(err)) => Err(io_format_err!("{}", err)),
> +            None => unreachable!()
> +        }
>      }
>  
>      async fn next_do(&mut self) -> io::Result<Option<Entry>> {
> diff --git a/src/decoder/sync.rs b/src/decoder/sync.rs
> index 85b4865..c6a1bc3 100644
> --- a/src/decoder/sync.rs
> +++ b/src/decoder/sync.rs
> @@ -63,7 +63,15 @@ impl<T: SeqRead> Decoder<T> {
>      #[allow(clippy::should_implement_trait)]
>      /// If this is a directory entry, get the next item inside the directory.
>      pub fn next(&mut self) -> Option<io::Result<Entry>> {
> -        poll_result_once(self.inner.next_do()).transpose()
> +        poll_result_once(self.inner.next()).transpose()
> +    }
> +
> +    /// If this is a directory entry, get the next item inside the directory.
> +    /// Do not advance the cursor, so multiple calls to peek() will return the same entry,
> +    /// and the next call to next() will read the item once again before moving on.
> +    /// NOTE: This *will* advance the state for contents() and content_size()!
> +    pub fn peek(&mut self) -> Option<io::Result<Entry>> {
> +        poll_result_once(self.inner.peek()).transpose()
>      }
>  
>      /// Get a reader for the contents of the current entry, if the entry has contents.
> -- 
> 2.20.1

* Re: [pbs-devel] [PATCH pxar 02/22] decoder: add peek()
  2021-02-17  8:20   ` Wolfgang Bumiller
@ 2021-02-17  8:38     ` Stefan Reiter
  0 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-02-17  8:38 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel

On 17/02/2021 09:20, Wolfgang Bumiller wrote:
> On Tue, Feb 16, 2021 at 06:06:50PM +0100, Stefan Reiter wrote:
>> Allows peeking the current element, but will not advance the state
>> (except for contents() and content_size() functions).
>>
>> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
>> ---
>>   src/accessor/mod.rs |  3 +++
>>   src/decoder/aio.rs  | 10 +++++++++-
>>   src/decoder/mod.rs  | 19 +++++++++++++++++--
>>   src/decoder/sync.rs | 10 +++++++++-
>>   4 files changed, 38 insertions(+), 4 deletions(-)
>>
>> diff --git a/src/accessor/mod.rs b/src/accessor/mod.rs
>> index d02dc13..aa1b3f6 100644
>> --- a/src/accessor/mod.rs
>> +++ b/src/accessor/mod.rs
>> @@ -293,6 +293,7 @@ impl<T: Clone + ReadAt> AccessorImpl<T> {
>>           let entry = decoder
>>               .next()
>>               .await
>> +            .transpose()
>>               .ok_or_else(|| io_format_err!("unexpected EOF while decoding file entry"))??;
>>           Ok(FileEntryImpl {
>>               input: self.input.clone(),
>> @@ -334,6 +335,7 @@ impl<T: Clone + ReadAt> AccessorImpl<T> {
>>           let entry = decoder
>>               .next()
>>               .await
>> +            .transpose()
>>               .ok_or_else(|| io_format_err!("unexpected EOF while following a hardlink"))??;
>>   
>>           match entry.kind() {
>> @@ -516,6 +518,7 @@ impl<T: Clone + ReadAt> DirectoryImpl<T> {
>>           let entry = decoder
>>               .next()
>>               .await
>> +            .transpose()
>>               .ok_or_else(|| io_format_err!("unexpected EOF while decoding directory entry"))??;
>>           Ok((entry, decoder))
>>       }
>> diff --git a/src/decoder/aio.rs b/src/decoder/aio.rs
>> index 5cc6694..c553d45 100644
>> --- a/src/decoder/aio.rs
>> +++ b/src/decoder/aio.rs
>> @@ -53,7 +53,15 @@ impl<T: SeqRead> Decoder<T> {
>>       #[allow(clippy::should_implement_trait)]
>>       /// If this is a directory entry, get the next item inside the directory.
>>       pub async fn next(&mut self) -> Option<io::Result<Entry>> {
>> -        self.inner.next_do().await.transpose()
>> +        self.inner.next().await.transpose()
>> +    }
>> +
>> +    /// If this is a directory entry, get the next item inside the directory.
>> +    /// Do not advance the cursor, so multiple calls to peek() will return the same entry,
>> +    /// and the next call to next() will read the item once again before moving on.
>> +    /// NOTE: This *will* advance the state for contents() and content_size()!
> 
> ^ Which is why I'm wondering whether we should maybe leave this up to
> the *user* rather than provide a sort-of broken API here?
> 
> I'd rather have this be guarded by a Seek trait, but that too is
> something we won't get from `std` and so we'd have to add one.
> 
> Why do we need this exactly?

See patches 8 and 22 (specifically 'fn extract_to_target_seq') of the 
series. I didn't want to add more special casing to the sequential 
extractors; they are "special-cased" enough as it is IMO, so they work 
on the assumption that they can just call "next()" and get the root 
entry of what they want to extract. But I also need to check whether 
that entry is a file or a dir before calling them, which I do with peek().
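
I.e. roughly this call-site shape (condensed from patch 22, not
verbatim):

    // peek at the root entry without consuming it, then dispatch
    let entry = match decoder.peek().await.transpose()? {
        Some(entry) => entry,
        None => bail!("no pxar entries found"),
    };
    if matches!(entry.kind(), pxar::EntryKind::File { .. }) {
        // single file: stream its contents directly
    } else {
        // directory: hand the decoder, still positioned at the root entry,
        // to extract_sub_dir_seq()/create_zip_seq()
    }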

> 
> And would this be solved by simply *generally* storing a
> "current_entry"? Then we can have a `.current_entry() -> Option<&Entry>`
> which works after at least `next()` call, and `.next()` working as
> usual.  And we may just have `next()` also return a reference instead.
> The user can `.clone()` if necessary. Or we return a mutable reference
> and allow `.take()`, then the user is responsible for knowing whether
> calling `.current_entry()` makes sense ;-)
> 

current_entry() wouldn't help my use-case, and returning a reference is 
somewhat pointless since Entry is small and Clone anyway IIRC?

I believe there might be a way to avoid this patch entirely though if I 
give the sequential extractor API some more thought; if not, I'll think 
about your proposals for a v2.

>> +    pub async fn peek(&mut self) -> Option<io::Result<Entry>> {
>> +        self.inner.peek().await.transpose()
>>       }
>>   
>>       /// Get a reader for the contents of the current entry, if the entry has contents.
>> diff --git a/src/decoder/mod.rs b/src/decoder/mod.rs
>> index 2a5e79a..041226d 100644
>> --- a/src/decoder/mod.rs
>> +++ b/src/decoder/mod.rs
>> @@ -155,6 +155,7 @@ pub(crate) struct DecoderImpl<T> {
>>       path_lengths: Vec<usize>,
>>       state: State,
>>       with_goodbye_tables: bool,
>> +    peeked: Option<io::Result<Option<Entry>>>,
>>   
>>       /// The random access code uses decoders for sub-ranges which may not end in a `PAYLOAD` for
>>       /// entries like FIFOs or sockets, so there we explicitly allow an item to terminate with EOF.
>> @@ -218,6 +219,7 @@ impl<I: SeqRead> DecoderImpl<I> {
>>               path_lengths: Vec::new(),
>>               state: State::Begin,
>>               with_goodbye_tables: false,
>> +            peeked: None,
>>               eof_after_entry,
>>           };
>>   
>> @@ -227,8 +229,21 @@ impl<I: SeqRead> DecoderImpl<I> {
>>       }
>>   
>>       /// Get the next file entry, recursing into directories.
>> -    pub async fn next(&mut self) -> Option<io::Result<Entry>> {
>> -        self.next_do().await.transpose()
>> +    pub async fn next(&mut self) -> io::Result<Option<Entry>> {
>> +        if let Some(ent) = self.peeked.take() {
>> +            return ent;
>> +        }
>> +        self.next_do().await
>> +    }
>> +
>> +    pub async fn peek(&mut self) -> io::Result<Option<Entry>> {
>> +        self.peeked = Some(self.next().await);
>> +        match &self.peeked {
>> +            Some(Ok(ent)) => Ok(ent.clone()),
>> +            // io::Error does not implement Clone...
>> +            Some(Err(err)) => Err(io_format_err!("{}", err)),
>> +            None => unreachable!()
>> +        }
>>       }
>>   
>>       async fn next_do(&mut self) -> io::Result<Option<Entry>> {
>> diff --git a/src/decoder/sync.rs b/src/decoder/sync.rs
>> index 85b4865..c6a1bc3 100644
>> --- a/src/decoder/sync.rs
>> +++ b/src/decoder/sync.rs
>> @@ -63,7 +63,15 @@ impl<T: SeqRead> Decoder<T> {
>>       #[allow(clippy::should_implement_trait)]
>>       /// If this is a directory entry, get the next item inside the directory.
>>       pub fn next(&mut self) -> Option<io::Result<Entry>> {
>> -        poll_result_once(self.inner.next_do()).transpose()
>> +        poll_result_once(self.inner.next()).transpose()
>> +    }
>> +
>> +    /// If this is a directory entry, get the next item inside the directory.
>> +    /// Do not advance the cursor, so multiple calls to peek() will return the same entry,
>> +    /// and the next call to next() will read the item once again before moving on.
>> +    /// NOTE: This *will* advance the state for contents() and content_size()!
>> +    pub fn peek(&mut self) -> Option<io::Result<Entry>> {
>> +        poll_result_once(self.inner.peek()).transpose()
>>       }
>>   
>>       /// Get a reader for the contents of the current entry, if the entry has contents.
>> -- 
>> 2.20.1

* Re: [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module
  2021-02-17  7:58     ` Stefan Reiter
@ 2021-02-17  8:50       ` Dietmar Maurer
  2021-02-17  9:47         ` Stefan Reiter
  0 siblings, 1 reply; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17  8:50 UTC (permalink / raw)
  To: Stefan Reiter, Proxmox Backup Server development discussion


> On 02/17/2021 8:58 AM Stefan Reiter <s.reiter@proxmox.com> wrote:
> 
>  
> On 17/02/2021 07:49, Dietmar Maurer wrote:
> > I thought we can put all client related code into src/client/
> > 
> > So why starting another lib inside src/bin/proxmox_client_tools/?
> > 
> 
> Why was (part of) it initially in src/bin/proxmox_backup_client then and 
> > not src/client/? I thought it was to separate the "binary-specific"
> client code from more generic client code (such as used in pull or the 
> QEMU library as well).

Ok, but why do you move cli functions like:

fn paper_key()
fn show_key()


Those belong to proxmox-backup-client - or do we reuse them somewhere?

* Re: [pbs-devel] [PATCH proxmox-backup 10/22] proxmox_client_tools: extract 'key' from client module
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 10/22] proxmox_client_tools: extract 'key' from client module Stefan Reiter
@ 2021-02-17  9:11   ` Dietmar Maurer
  0 siblings, 0 replies; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17  9:11 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter


> On 02/16/2021 6:06 PM Stefan Reiter <s.reiter@proxmox.com> wrote:
> 
>  
> To be used by other command line tools. Requires moving XDG helpers as
> well, which find their place in the tools module quite cozily IMHO.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/bin/proxmox-backup-client.rs              | 440 +-----------------
>  src/bin/proxmox_backup_client/catalog.rs      |   4 +-
>  src/bin/proxmox_backup_client/mod.rs          |  30 --
>  src/bin/proxmox_backup_client/snapshot.rs     |   3 +-
>  .../key.rs                                    | 440 +++++++++++++++++-
>  src/bin/proxmox_client_tools/mod.rs           |  30 +-
>  6 files changed, 474 insertions(+), 473 deletions(-)
>  rename src/bin/{proxmox_backup_client => proxmox_client_tools}/key.rs (52%)

Also, this diff hides most code behind the rename.
This is extremely dangerous, and should be avoided (use an 
extra patch for renames).

* [pbs-devel] applied: [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module Stefan Reiter
  2021-02-17  6:49   ` Dietmar Maurer
@ 2021-02-17  9:13   ` Dietmar Maurer
  1 sibling, 0 replies; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17  9:13 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

applied

* Re: [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module
  2021-02-17  8:50       ` Dietmar Maurer
@ 2021-02-17  9:47         ` Stefan Reiter
  2021-02-17 10:12           ` Dietmar Maurer
  0 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-17  9:47 UTC (permalink / raw)
  To: Dietmar Maurer, Proxmox Backup Server development discussion



On 17/02/2021 09:50, Dietmar Maurer wrote:
> 
>> On 02/17/2021 8:58 AM Stefan Reiter <s.reiter@proxmox.com> wrote:
>>
>>   
>> On 17/02/2021 07:49, Dietmar Maurer wrote:
>>> I thought we can put all client related code into src/client/
>>>
>>> So why starting another lib inside src/bin/proxmox_client_tools/?
>>>
>>
>> Why was (part of) it initially in src/bin/proxmox_backup_client then and
>> not src/client/ ? I thought it was to seperate the "binary-specific"
>> client code from more generic client code (such as used in pull or the
>> QEMU library as well).
> 
> Ok, but why do you move cli functions like:
> 
> fn paper_key()
> fn show_key()
> 
> 
> Those belong to proxmox-backup-client - or do we reuse them somewhere?
> 

No, those just slipped in. I'll leave them and also make the rename a 
separate commit for a v2.

* Re: [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module
  2021-02-17  9:47         ` Stefan Reiter
@ 2021-02-17 10:12           ` Dietmar Maurer
  0 siblings, 0 replies; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17 10:12 UTC (permalink / raw)
  To: Stefan Reiter, Proxmox Backup Server development discussion

> > Those belong to proxmox-backup-client - or do we reuse them somewhere?
> > 
> 
> No, those just slipped in. I'll leave them and also make the rename a 
> seperate commit for a v2.

Just copy the code needed - (there is no need to do a rename?)

* Re: [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server Stefan Reiter
@ 2021-02-17 10:17   ` Dietmar Maurer
  2021-02-17 10:25   ` Dietmar Maurer
  2021-02-17 11:26   ` Dietmar Maurer
  2 siblings, 0 replies; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17 10:17 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

Please can we avoid env_logger?

> diff --git a/Cargo.toml b/Cargo.toml
> index 28ca8e64..de42c2ff 100644
> --- a/Cargo.toml
> +++ b/Cargo.toml
> @@ -29,6 +29,7 @@ bitflags = "1.2.1"
>  bytes = "1.0"
>  crc32fast = "1"
>  endian_trait = { version = "0.6", features = ["arrays"] }
> +env_logger = "0.7"

* Re: [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server Stefan Reiter
  2021-02-17 10:17   ` Dietmar Maurer
@ 2021-02-17 10:25   ` Dietmar Maurer
  2021-02-17 10:30     ` Stefan Reiter
  2021-02-17 11:26   ` Dietmar Maurer
  2 siblings, 1 reply; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17 10:25 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

> Since the REST server implementation uses the log!() macro, we can
> redirect its output to stdout by registering env_logger as the logging
> target. env_logger is already in our dependency tree via zstd/bindgen.

Initializing the syslog crate should be enough:

    if let Err(err) = syslog::init(
        syslog::Facility::LOG_DAEMON,
        log::LevelFilter::Info,
        Some("file-restore-daemon")) {
        bail!("unable to initialize syslog - {}", err);
    }

* Re: [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server
  2021-02-17 10:25   ` Dietmar Maurer
@ 2021-02-17 10:30     ` Stefan Reiter
  2021-02-17 11:13       ` Dietmar Maurer
  0 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-17 10:30 UTC (permalink / raw)
  To: Dietmar Maurer, Proxmox Backup Server development discussion

On 17/02/2021 11:25, Dietmar Maurer wrote:
>> Since the REST server implementation uses the log!() macro, we can
>> redirect its output to stdout by registering env_logger as the logging
>> target. env_logger is already in our dependency tree via zstd/bindgen.
> 
> Initializing the syslog crate should be enough:
> 
>     if let Err(err) = syslog::init(
>          syslog::Facility::LOG_DAEMON,
>          log::LevelFilter::Info,
>          Some("file-restore-daemon")) {
> >          bail!("unable to initialize syslog - {}", err);
>      }
> 

Wouldn't the syslog crate depend on the systemd journal though? Or does 
it fall back to stdout/stderr?




^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup 16/22] file-restore-daemon: add watchdog module
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 16/22] file-restore-daemon: add watchdog module Stefan Reiter
@ 2021-02-17 10:52   ` Wolfgang Bumiller
  2021-02-17 11:14     ` Stefan Reiter
  0 siblings, 1 reply; 50+ messages in thread
From: Wolfgang Bumiller @ 2021-02-17 10:52 UTC (permalink / raw)
  To: Stefan Reiter; +Cc: pbs-devel

On Tue, Feb 16, 2021 at 06:07:04PM +0100, Stefan Reiter wrote:
> Add a watchdog that will automatically shut down the VM after 10
> minutes, if no API call is received.
> 
> This is handled using the unix 'alarm' syscall.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/api2/types/file_restore.rs             |  3 ++
>  src/bin/proxmox-restore-daemon.rs          |  5 ++
>  src/bin/proxmox_restore_daemon/api.rs      | 22 ++++++--
>  src/bin/proxmox_restore_daemon/mod.rs      |  3 ++
>  src/bin/proxmox_restore_daemon/watchdog.rs | 63 ++++++++++++++++++++++
>  5 files changed, 91 insertions(+), 5 deletions(-)
>  create mode 100644 src/bin/proxmox_restore_daemon/watchdog.rs
> 
> diff --git a/src/api2/types/file_restore.rs b/src/api2/types/file_restore.rs
> index cd8df16a..710c6d83 100644
> --- a/src/api2/types/file_restore.rs
> +++ b/src/api2/types/file_restore.rs
> @@ -8,5 +8,8 @@ use proxmox::api::api;
>  pub struct RestoreDaemonStatus {
>      /// VM uptime in seconds
>      pub uptime: i64,
> +    /// time left until auto-shutdown, keep in mind that this is inaccurate when 'keep-timeout' is
> +    /// not set, as then after the status call the timer will have reset
> +    pub timeout: i64,
>  }
>  
> diff --git a/src/bin/proxmox-restore-daemon.rs b/src/bin/proxmox-restore-daemon.rs
> index 1ec90794..d30da563 100644
> --- a/src/bin/proxmox-restore-daemon.rs
> +++ b/src/bin/proxmox-restore-daemon.rs
> @@ -40,6 +40,9 @@ fn main() -> Result<(), Error> {
>          .write_style(env_logger::WriteStyle::Never)
>          .init();
>  
> +    // start watchdog, failure is a critical error as it leads to a scenario where we never exit
> +    watchdog_init()?;
> +
>      proxmox_backup::tools::runtime::main(run())
>  }
>  
> @@ -77,6 +80,8 @@ fn accept_vsock_connections(
>                  Ok(stream) => {
>                      if sender.send(Ok(stream)).await.is_err() {
>                          error!("connection accept channel was closed");
> +                    } else {
> +                        watchdog_ping();

Should the ping not also happen at every api call in case connections
get reused?

>                      }
>                  }
>                  Err(err) => {
> diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
> index 3c642aaf..8eb727df 100644
> --- a/src/bin/proxmox_restore_daemon/api.rs
> +++ b/src/bin/proxmox_restore_daemon/api.rs
> @@ -8,6 +8,8 @@ use proxmox::list_subdirs_api_method;
>  
>  use proxmox_backup::api2::types::*;
>  
> +use super::{watchdog_remaining, watchdog_undo_ping};
> +
>  // NOTE: All API endpoints must have Permission::World, as the configs for authentication do not
>  // exist within the restore VM. Safety is guaranteed since we use a low port, so only root on the
>  // host can contact us - and there the proxmox-backup-client validates permissions already.
> @@ -25,6 +27,16 @@ fn read_uptime() -> Result<f32, Error> {
>  }
>  
>  #[api(
> +    input: {
> +        properties: {
> +            "keep-timeout": {
> +                type: bool,
> +                description: "If true, do not reset the watchdog timer on this API call.",
> +                default: false,
> +                optional: true,
> +            },
> +        },
> +    },
>      access: {
>          description: "Permissions are handled outside restore VM.",
>          permission: &Permission::World,
> @@ -34,12 +46,12 @@ fn read_uptime() -> Result<f32, Error> {
>      }
>  )]
>  /// General status information
> -fn status(
> -    _param: Value,
> -    _info: &ApiMethod,
> -    _rpcenv: &mut dyn RpcEnvironment,
> -) -> Result<RestoreDaemonStatus, Error> {
> +fn status(keep_timeout: bool) -> Result<RestoreDaemonStatus, Error> {
> +    if keep_timeout {

This seems just weird. Do we really need this?

> +        watchdog_undo_ping();
> +    }
>      Ok(RestoreDaemonStatus {
>          uptime: read_uptime()? as i64,
> +        timeout: watchdog_remaining(false),
>      })
>  }
> diff --git a/src/bin/proxmox_restore_daemon/mod.rs b/src/bin/proxmox_restore_daemon/mod.rs
> index d938a5bb..6802d31c 100644
> --- a/src/bin/proxmox_restore_daemon/mod.rs
> +++ b/src/bin/proxmox_restore_daemon/mod.rs
> @@ -1,3 +1,6 @@
>  ///! File restore VM related functionality
>  mod api;
>  pub use api::*;
> +
> +mod watchdog;
> +pub use watchdog::*;
> diff --git a/src/bin/proxmox_restore_daemon/watchdog.rs b/src/bin/proxmox_restore_daemon/watchdog.rs
> new file mode 100644
> index 00000000..f722be0b
> --- /dev/null
> +++ b/src/bin/proxmox_restore_daemon/watchdog.rs
> @@ -0,0 +1,63 @@
> +//! SIGALRM/alarm(2) based watchdog that shuts down the VM if not pinged for TIMEOUT
> +use anyhow::Error;
> +use std::sync::atomic::{AtomicI64, Ordering};
> +
> +use nix::sys::{reboot, signal::*};
> +use nix::unistd::alarm;
> +
> +const TIMEOUT: u32 = 600; // seconds
> +static TRIGGERED: AtomicI64 = AtomicI64::new(0);
> +static LAST_TRIGGERED: AtomicI64 = AtomicI64::new(0);
> +
> +/// Handler is called when alarm-watchdog expires, immediately shuts down VM when triggered
> +extern "C" fn alarm_handler(_signal: nix::libc::c_int) {
> +    // use println! instead of log, since log might buffer and not print before shut down
> +    println!("Watchdog expired, shutting down VM...");
> +    let err = reboot::reboot(reboot::RebootMode::RB_POWER_OFF).unwrap_err();
> +    println!("'reboot' syscall failed: {}", err);
> +    std::process::exit(1);
> +}
> +
> +/// Initialize alarm() based watchdog
> +pub fn watchdog_init() -> Result<(), Error> {
> +    unsafe {
> +        sigaction(
> +            Signal::SIGALRM,

Please don't use this with async code. Threads, signal handlers and
async are really annoying to keep track of.  This is a perlism we really
shouldn't continue to use. We currently have only a single semi-acceptable
excuse for timing signals, and that's file locks with timeouts, as those
have no alternative (yet) - those btw. use the timer_create(2) API and
will hopefully at some point be replaced by io-uring...

Please just spawn a future using
    tokio::time::sleep(watchdog_remaining()).await
in a loop (and don't forget to initialize `TRIGGERED` to the current time
before spawning it of course ;-) ).
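
A rough sketch of that loop (untested; assuming it is spawned from
within the tokio runtime and that watchdog_remaining() keeps returning
the remaining seconds as in the patch):

    TRIGGERED.store(proxmox::tools::time::epoch_i64(), Ordering::SeqCst);
    tokio::spawn(async move {
        loop {
            let remaining = watchdog_remaining(true);
            if remaining <= 0 {
                // expired: take the same shutdown path as the signal handler
                println!("Watchdog expired, shutting down VM...");
                let err = reboot::reboot(reboot::RebootMode::RB_POWER_OFF).unwrap_err();
                println!("'reboot' syscall failed: {}", err);
                std::process::exit(1);
            }
            // pings move TRIGGERED forward, so re-check after waking up
            tokio::time::sleep(std::time::Duration::from_secs(remaining as u64)).await;
        }
    });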

> +            &SigAction::new(
> +                SigHandler::Handler(alarm_handler),
> +                SaFlags::empty(),
> +                SigSet::empty(),
> +            ),
> +        )?;
> +    }
> +
> +    watchdog_ping();
> +
> +    Ok(())
> +}
> +
> +/// Trigger watchdog keepalive
> +pub fn watchdog_ping() {
> +    alarm::set(TIMEOUT);

^ then this can just go

> +    let cur_time = proxmox::tools::time::epoch_i64();
> +    let last = TRIGGERED.swap(cur_time, Ordering::SeqCst);
> +    LAST_TRIGGERED.store(last, Ordering::SeqCst);
> +}
> +
> +/// Returns the remaining time before watchdog expiry in seconds if 'current' is true, otherwise it
> +/// returns the remaining time before the last ping (which is probably what you want in the API, as
> +/// from an API call 'current'=true will *always* return TIMEOUT)
> +pub fn watchdog_remaining(current: bool) -> i64 {
> +    let cur_time = proxmox::tools::time::epoch_i64();
> +    let last_time = (if current { &TRIGGERED } else { &LAST_TRIGGERED }).load(Ordering::SeqCst);
> +    TIMEOUT as i64 - (cur_time - last_time)
> +}
> +
> +/// Undo the last watchdog ping and set timer back to previous state, call this in the API to fake
> +/// a non-resetting call
> +pub fn watchdog_undo_ping() {

This still makes me cringe :-P

> +    let set = watchdog_remaining(false);
> +    TRIGGERED.store(LAST_TRIGGERED.load(Ordering::SeqCst), Ordering::SeqCst);
> +    // make sure argument cannot be 0, as that would cancel any alarm
> +    alarm::set(1.max(set) as u32);
> +}
> -- 
> 2.20.1




^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server
  2021-02-17 10:30     ` Stefan Reiter
@ 2021-02-17 11:13       ` Dietmar Maurer
  0 siblings, 0 replies; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17 11:13 UTC (permalink / raw)
  To: Stefan Reiter, Proxmox Backup Server development discussion


> On 02/17/2021 11:30 AM Stefan Reiter <s.reiter@proxmox.com> wrote:
> 
>  
> On 17/02/2021 11:25, Dietmar Maurer wrote:
> >> Since the REST server implementation uses the log!() macro, we can
> >> redirect its output to stdout by registering env_logger as the logging
> >> target. env_logger is already in our dependency tree via zstd/bindgen.
> > 
> > Initializing the syslog crate should be enough:
> > 
> >     if let Err(err) = syslog::init(
> >          syslog::Facility::LOG_DAEMON,
> >          log::LevelFilter::Info,
> >          Some("file-restore-daemon")) {
> >          bail!("unable to initialize syslog - {}", err);
> >      }
> > 
> 
> Wouldn't the syslog crate depend on the systemd journal though? Or does 
> it fall back to stdout/stderr?

Ah, I see - we do not have syslog running.

I guess it's OK to use env_logger then.
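
A minimal setup along those lines (a sketch, assuming env_logger 0.7
and matching the fragments visible in the patch) that routes log!()
output to stdout:

    env_logger::Builder::from_env(
            env_logger::Env::default().default_filter_or("info"))
        .target(env_logger::Target::Stdout)
        .write_style(env_logger::WriteStyle::Never)
        .init();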




^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup 16/22] file-restore-daemon: add watchdog module
  2021-02-17 10:52   ` Wolfgang Bumiller
@ 2021-02-17 11:14     ` Stefan Reiter
  2021-02-17 11:29       ` Wolfgang Bumiller
  0 siblings, 1 reply; 50+ messages in thread
From: Stefan Reiter @ 2021-02-17 11:14 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel

On 17/02/2021 11:52, Wolfgang Bumiller wrote:
> On Tue, Feb 16, 2021 at 06:07:04PM +0100, Stefan Reiter wrote:
>> Add a watchdog that will automatically shut down the VM after 10
>> minutes, if no API call is received.
>>
>> This is handled using the unix 'alarm' syscall.
>>
>> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
>> ---
>>   src/api2/types/file_restore.rs             |  3 ++
>>   src/bin/proxmox-restore-daemon.rs          |  5 ++
>>   src/bin/proxmox_restore_daemon/api.rs      | 22 ++++++--
>>   src/bin/proxmox_restore_daemon/mod.rs      |  3 ++
>>   src/bin/proxmox_restore_daemon/watchdog.rs | 63 ++++++++++++++++++++++
>>   5 files changed, 91 insertions(+), 5 deletions(-)
>>   create mode 100644 src/bin/proxmox_restore_daemon/watchdog.rs
>>
>> diff --git a/src/api2/types/file_restore.rs b/src/api2/types/file_restore.rs
>> index cd8df16a..710c6d83 100644
>> --- a/src/api2/types/file_restore.rs
>> +++ b/src/api2/types/file_restore.rs
>> @@ -8,5 +8,8 @@ use proxmox::api::api;
>>   pub struct RestoreDaemonStatus {
>>       /// VM uptime in seconds
>>       pub uptime: i64,
>> +    /// time left until auto-shutdown, keep in mind that this is inaccurate when 'keep-timeout' is
>> +    /// not set, as then after the status call the timer will have reset
>> +    pub timeout: i64,
>>   }
>>   
>> diff --git a/src/bin/proxmox-restore-daemon.rs b/src/bin/proxmox-restore-daemon.rs
>> index 1ec90794..d30da563 100644
>> --- a/src/bin/proxmox-restore-daemon.rs
>> +++ b/src/bin/proxmox-restore-daemon.rs
>> @@ -40,6 +40,9 @@ fn main() -> Result<(), Error> {
>>           .write_style(env_logger::WriteStyle::Never)
>>           .init();
>>   
>> +    // start watchdog, failure is a critical error as it leads to a scenario where we never exit
>> +    watchdog_init()?;
>> +
>>       proxmox_backup::tools::runtime::main(run())
>>   }
>>   
>> @@ -77,6 +80,8 @@ fn accept_vsock_connections(
>>                   Ok(stream) => {
>>                       if sender.send(Ok(stream)).await.is_err() {
>>                           error!("connection accept channel was closed");
>> +                    } else {
>> +                        watchdog_ping();
> 
> Should the ping not also happen at every api call in case connections
> get reused?
> 

I wanted to keep as much watchdog code as possible out of the API 
calls, lest some new code forgets to call a ping(), but yes, I didn't 
think of connection reuse (it doesn't currently happen anywhere, but 
still good to be safe).

>>                       }
>>                   }
>>                   Err(err) => {
>> diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
>> index 3c642aaf..8eb727df 100644
>> --- a/src/bin/proxmox_restore_daemon/api.rs
>> +++ b/src/bin/proxmox_restore_daemon/api.rs
>> @@ -8,6 +8,8 @@ use proxmox::list_subdirs_api_method;
>>   
>>   use proxmox_backup::api2::types::*;
>>   
>> +use super::{watchdog_remaining, watchdog_undo_ping};
>> +
>>   // NOTE: All API endpoints must have Permission::World, as the configs for authentication do not
>>   // exist within the restore VM. Safety is guaranteed since we use a low port, so only root on the
>>   // host can contact us - and there the proxmox-backup-client validates permissions already.
>> @@ -25,6 +27,16 @@ fn read_uptime() -> Result<f32, Error> {
>>   }
>>   
>>   #[api(
>> +    input: {
>> +        properties: {
>> +            "keep-timeout": {
>> +                type: bool,
>> +                description: "If true, do not reset the watchdog timer on this API call.",
>> +                default: false,
>> +                optional: true,
>> +            },
>> +        },
>> +    },
>>       access: {
>>           description: "Permissions are handled outside restore VM.",
>>           permission: &Permission::World,
>> @@ -34,12 +46,12 @@ fn read_uptime() -> Result<f32, Error> {
>>       }
>>   )]
>>   /// General status information
>> -fn status(
>> -    _param: Value,
>> -    _info: &ApiMethod,
>> -    _rpcenv: &mut dyn RpcEnvironment,
>> -) -> Result<RestoreDaemonStatus, Error> {
>> +fn status(keep_timeout: bool) -> Result<RestoreDaemonStatus, Error> {
>> +    if keep_timeout {
> 
> This seems just weird. Do we really need this?
> 

Not necessarily, but the idea I had in mind was someone running a script 
of sorts that calls 'proxmox-file-restore status' (for monitoring 
etc...) that would otherwise prevent the VMs from ever stopping...
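
(E.g. something like the following - the --keep-timeout flag here is
hypothetical, the patch itself only adds the daemon API parameter:)

    proxmox-file-restore status --keep-timeout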

>> +        watchdog_undo_ping();
>> +    }
>>       Ok(RestoreDaemonStatus {
>>           uptime: read_uptime()? as i64,
>> +        timeout: watchdog_remaining(false),
>>       })
>>   }
>> diff --git a/src/bin/proxmox_restore_daemon/mod.rs b/src/bin/proxmox_restore_daemon/mod.rs
>> index d938a5bb..6802d31c 100644
>> --- a/src/bin/proxmox_restore_daemon/mod.rs
>> +++ b/src/bin/proxmox_restore_daemon/mod.rs
>> @@ -1,3 +1,6 @@
>>   ///! File restore VM related functionality
>>   mod api;
>>   pub use api::*;
>> +
>> +mod watchdog;
>> +pub use watchdog::*;
>> diff --git a/src/bin/proxmox_restore_daemon/watchdog.rs b/src/bin/proxmox_restore_daemon/watchdog.rs
>> new file mode 100644
>> index 00000000..f722be0b
>> --- /dev/null
>> +++ b/src/bin/proxmox_restore_daemon/watchdog.rs
>> @@ -0,0 +1,63 @@
>> +//! SIGALRM/alarm(2) based watchdog that shuts down the VM if not pinged for TIMEOUT
>> +use anyhow::Error;
>> +use std::sync::atomic::{AtomicI64, Ordering};
>> +
>> +use nix::sys::{reboot, signal::*};
>> +use nix::unistd::alarm;
>> +
>> +const TIMEOUT: u32 = 600; // seconds
>> +static TRIGGERED: AtomicI64 = AtomicI64::new(0);
>> +static LAST_TRIGGERED: AtomicI64 = AtomicI64::new(0);
>> +
>> +/// Handler is called when alarm-watchdog expires, immediately shuts down VM when triggered
>> +extern "C" fn alarm_handler(_signal: nix::libc::c_int) {
>> +    // use println! instead of log, since log might buffer and not print before shut down
>> +    println!("Watchdog expired, shutting down VM...");
>> +    let err = reboot::reboot(reboot::RebootMode::RB_POWER_OFF).unwrap_err();
>> +    println!("'reboot' syscall failed: {}", err);
>> +    std::process::exit(1);
>> +}
>> +
>> +/// Initialize alarm() based watchdog
>> +pub fn watchdog_init() -> Result<(), Error> {
>> +    unsafe {
>> +        sigaction(
>> +            Signal::SIGALRM,
> 
> Please don't use this with async code. Threads, signal handlers and
> async are really annoying to keep track of.  This is a perlism we really
> shouldn't continue to use. We currently have only a single semi-acceptable
> excuse for timing signals, and that's file locks with timeouts, as those
> have no alternative (yet) - those btw. use the timer_create(2) API and
> will hopefully at some point be replaced by io-uring...
> 

I went with alarm() on the assumption that it might be a bit more 
reliable (tokio scheduler can get stuck?), and even had the idea to use 
an actual QEMU watchdog (though that had some other issues that I don't 
quite remember atm).

> Please just spawn a future using
>      tokio::time::sleep(watchdog_remaining()).await
> in a loop (and don't forget to initialize `TRIGGERED` to the current time
> before spawning it of course ;-) ).
> 

...though the probability of a tokio hang is probably low enough that 
this will do just fine too - I'll change it in v2.

>> +            &SigAction::new(
>> +                SigHandler::Handler(alarm_handler),
>> +                SaFlags::empty(),
>> +                SigSet::empty(),
>> +            ),
>> +        )?;
>> +    }
>> +
>> +    watchdog_ping();
>> +
>> +    Ok(())
>> +}
>> +
>> +/// Trigger watchdog keepalive
>> +pub fn watchdog_ping() {
>> +    alarm::set(TIMEOUT);
> 
> ^ then this can just go
> 
>> +    let cur_time = proxmox::tools::time::epoch_i64();
>> +    let last = TRIGGERED.swap(cur_time, Ordering::SeqCst);
>> +    LAST_TRIGGERED.store(last, Ordering::SeqCst);
>> +}
>> +
>> +/// Returns the remaining time before watchdog expiry in seconds if 'current' is true, otherwise it
>> +/// returns the remaining time before the last ping (which is probably what you want in the API, as
>> +/// from an API call 'current'=true will *always* return TIMEOUT)
>> +pub fn watchdog_remaining(current: bool) -> i64 {
>> +    let cur_time = proxmox::tools::time::epoch_i64();
>> +    let last_time = (if current { &TRIGGERED } else { &LAST_TRIGGERED }).load(Ordering::SeqCst);
>> +    TIMEOUT as i64 - (cur_time - last_time)
>> +}
>> +
>> +/// Undo the last watchdog ping and set timer back to previous state, call this in the API to fake
>> +/// a non-resetting call
>> +pub fn watchdog_undo_ping() {
> 
> This still makes me cringe :-P
> 
>> +    let set = watchdog_remaining(false);
>> +    TRIGGERED.store(LAST_TRIGGERED.load(Ordering::SeqCst), Ordering::SeqCst);
>> +    // make sure argument cannot be 0, as that would cancel any alarm
>> +    alarm::set(1.max(set) as u32);
>> +}
>> -- 
>> 2.20.1




^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server
  2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server Stefan Reiter
  2021-02-17 10:17   ` Dietmar Maurer
  2021-02-17 10:25   ` Dietmar Maurer
@ 2021-02-17 11:26   ` Dietmar Maurer
  2 siblings, 0 replies; 50+ messages in thread
From: Dietmar Maurer @ 2021-02-17 11:26 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

> diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
> new file mode 100644
> index 00000000..3c642aaf
> --- /dev/null
> +++ b/src/bin/proxmox_restore_daemon/api.rs
> @@ -0,0 +1,45 @@
> +///! File-restore API running inside the restore VM
> +use anyhow::Error;
> +use serde_json::Value;
> +use std::fs;
> +
> +use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment, SubdirMap};
> +use proxmox::list_subdirs_api_method;
> +
> +use proxmox_backup::api2::types::*;
> +
> +// NOTE: All API endpoints must have Permission::World, as the configs for authentication do not
> +// exist within the restore VM. Safety is guaranteed since we use a low port, so only root on the
> +// host can contact us - and there the proxmox-backup-client validates permissions already.

AFAIK, this assumption is wrong. Anyone can connect to a low port! 
Only bind() is restricted to root.

Also, don't we want to connect as user "backup"?




^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup 16/22] file-restore-daemon: add watchdog module
  2021-02-17 11:14     ` Stefan Reiter
@ 2021-02-17 11:29       ` Wolfgang Bumiller
  0 siblings, 0 replies; 50+ messages in thread
From: Wolfgang Bumiller @ 2021-02-17 11:29 UTC (permalink / raw)
  To: Stefan Reiter; +Cc: pbs-devel

On Wed, Feb 17, 2021 at 12:14:39PM +0100, Stefan Reiter wrote:
> On 17/02/2021 11:52, Wolfgang Bumiller wrote:
> > On Tue, Feb 16, 2021 at 06:07:04PM +0100, Stefan Reiter wrote:
> > > Add a watchdog that will automatically shut down the VM after 10
> > > minutes, if no API call is received.
> > > 
> > > This is handled using the unix 'alarm' syscall.
> > > 
> > > Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> > > ---
> > >   src/api2/types/file_restore.rs             |  3 ++
> > >   src/bin/proxmox-restore-daemon.rs          |  5 ++
> > >   src/bin/proxmox_restore_daemon/api.rs      | 22 ++++++--
> > >   src/bin/proxmox_restore_daemon/mod.rs      |  3 ++
> > >   src/bin/proxmox_restore_daemon/watchdog.rs | 63 ++++++++++++++++++++++
> > >   5 files changed, 91 insertions(+), 5 deletions(-)
> > >   create mode 100644 src/bin/proxmox_restore_daemon/watchdog.rs
> > > 
> > > diff --git a/src/api2/types/file_restore.rs b/src/api2/types/file_restore.rs
> > > index cd8df16a..710c6d83 100644
> > > --- a/src/api2/types/file_restore.rs
> > > +++ b/src/api2/types/file_restore.rs
> > > @@ -8,5 +8,8 @@ use proxmox::api::api;
> > >   pub struct RestoreDaemonStatus {
> > >       /// VM uptime in seconds
> > >       pub uptime: i64,
> > > +    /// time left until auto-shutdown, keep in mind that this is inaccurate when 'keep-timeout' is
> > > +    /// not set, as then after the status call the timer will have reset
> > > +    pub timeout: i64,
> > >   }
> > > diff --git a/src/bin/proxmox-restore-daemon.rs b/src/bin/proxmox-restore-daemon.rs
> > > index 1ec90794..d30da563 100644
> > > --- a/src/bin/proxmox-restore-daemon.rs
> > > +++ b/src/bin/proxmox-restore-daemon.rs
> > > @@ -40,6 +40,9 @@ fn main() -> Result<(), Error> {
> > >           .write_style(env_logger::WriteStyle::Never)
> > >           .init();
> > > +    // start watchdog, failure is a critical error as it leads to a scenario where we never exit
> > > +    watchdog_init()?;
> > > +
> > >       proxmox_backup::tools::runtime::main(run())
> > >   }
> > > @@ -77,6 +80,8 @@ fn accept_vsock_connections(
> > >                   Ok(stream) => {
> > >                       if sender.send(Ok(stream)).await.is_err() {
> > >                           error!("connection accept channel was closed");
> > > +                    } else {
> > > +                        watchdog_ping();
> > 
> > Should the ping not also happen at every api call in case connections
> > get reused?
> > 
> 
> I wanted to keep as much watchdog code as possible out of the API calls,
> lest some new code forgets to call a ping(), but yes, I didn't think of
> connection reuse (it doesn't currently happen anywhere, but still good
> to be safe).

So maybe the API handler should just get some kind of callback to
trigger before API calls.
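
Something like this, purely as an illustration (the hook name and the
wiring are made up, not an existing API):

    // hypothetical: register a callback when setting up the REST server
    rest_server.set_pre_request_hook(watchdog_ping);

    // ...and have the dispatcher call it before handling each request:
    if let Some(hook) = &self.pre_request_hook {
        hook();
    }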




^ permalink raw reply	[flat|nested] 50+ messages in thread

* [pbs-devel] applied: [PATCH proxmox-restore-vm-data 03/22] initial commit
  2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-restore-vm-data 03/22] initial commit Stefan Reiter
@ 2021-03-15 18:35   ` Thomas Lamprecht
  2021-03-16 15:33     ` Stefan Reiter
  0 siblings, 1 reply; 50+ messages in thread
From: Thomas Lamprecht @ 2021-03-15 18:35 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 16.02.21 18:06, Stefan Reiter wrote:
> proxmox-restore-vm-data provides means to build a debian package
> containing a minimalistic Linux kernel and a corresponding initramfs
> image for use in a file-restore VM.
> 
> Launched with QEMU/KVM, it boots in 1.6 seconds to userspace (on AMD
> 2700X) and has a minimal attack surface (no network stack other than
> virtio-vsock, no auxiliary device support (USB, etc...), userspace
> written in Rust) as opposed to mounting backup archives directly on the
> host.
> 
> Since our Rust binaries are currently not fully statically linked, we
> need to include some libraries into the initramfs as well. This is done
> in 'build_initramfs.sh'.
> 
> A minimal /init is included as a Rust binary (init-shim-rs), doing only
> the bare-minimum userspace setup before handing over control to the
> file-restore daemon (see 'proxmox-backup' repository).
> 
> The Debian package comes with an 'activate-noawait
> pbs-file-restore-initramfs' trigger activation to rebuild the cached
> initramfs when the base image shipped here updates. This is taken care
> of by proxmox-file-restore.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> 
> Brand new git repo! I called it proxmox-restore-vm-data for lack of any smarter
> ideas, open for better names :)
> 
> I also decided to include the 5.10 kernel and ZFS 2.0.3 from current pve-kernel
> repository pretty last-minute, it seems to work fine though (ZFS isn't used atm
> anyway).
> 
> 
>  .gitignore                                    |   9 ++
>  .gitmodules                                   |   6 +
>  Makefile                                      | 103 +++++++++++++
>  build_initramfs.sh                            |  42 +++++
>  config-base                                   | 144 ++++++++++++++++++
>  debian/changelog                              |   6 +
>  debian/compat                                 |   1 +
>  debian/control                                |  34 +++++
>  debian/copyright                              |  22 +++
>  debian/install                                |   2 +
>  debian/rules                                  |  13 ++
>  debian/triggers                               |   1 +
>  init-shim-rs/Cargo.lock                       |  51 +++++++
>  init-shim-rs/Cargo.toml                       |   9 ++
>  init-shim-rs/src/main.rs                      | 122 +++++++++++++++
>  ...-OVERRIDE-do-not-build-xr-usb-serial.patch |  30 ++++
>  ...2-FIXUP-syntax-error-in-Ubuntu-Sauce.patch |  26 ++++
>  submodules/ubuntu-hirsute                     |   1 +
>  submodules/zfsonlinux                         |   1 +
>  19 files changed, 623 insertions(+)
>  create mode 100644 .gitignore
>  create mode 100644 .gitmodules
>  create mode 100644 Makefile
>  create mode 100755 build_initramfs.sh
>  create mode 100644 config-base
>  create mode 100644 debian/changelog
>  create mode 100644 debian/compat
>  create mode 100644 debian/control
>  create mode 100644 debian/copyright
>  create mode 100644 debian/install
>  create mode 100755 debian/rules
>  create mode 100644 debian/triggers
>  create mode 100644 init-shim-rs/Cargo.lock
>  create mode 100644 init-shim-rs/Cargo.toml
>  create mode 100644 init-shim-rs/src/main.rs
>  create mode 100644 patches/kernel/0001-OVERRIDE-do-not-build-xr-usb-serial.patch
>  create mode 100644 patches/kernel/0002-FIXUP-syntax-error-in-Ubuntu-Sauce.patch
>  create mode 160000 submodules/ubuntu-hirsute
>  create mode 160000 submodules/zfsonlinux
> 
>

applied, thanks!

Did two big changes though:
* renamed to "proxmox-backup-restore-image"
* split build system into packaging and actual build

As quite a lot of stuff was changed, which I did in a few ~10 minute sessions with days/weeks
in between, please re-check:
https://git.proxmox.com/?p=proxmox-backup-restore-image.git;a=summary




^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [pbs-devel] applied: [PATCH proxmox-restore-vm-data 03/22] initial commit
  2021-03-15 18:35   ` [pbs-devel] applied: " Thomas Lamprecht
@ 2021-03-16 15:33     ` Stefan Reiter
  0 siblings, 0 replies; 50+ messages in thread
From: Stefan Reiter @ 2021-03-16 15:33 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox Backup Server development discussion

On 15/03/2021 19:35, Thomas Lamprecht wrote:
> On 16.02.21 18:06, Stefan Reiter wrote:
>> proxmox-restore-vm-data provides means to build a debian package
>> containing a minimalistic Linux kernel and a corresponding initramfs
>> image for use in a file-restore VM.
>>
>> Launched with QEMU/KVM, it boots in 1.6 seconds to userspace (on AMD
>> 2700X) and has a minimal attack surface (no network stack other than
>> virtio-vsock, no auxiliary device support (USB, etc...), userspace
>> written in Rust) as opposed to mounting backup archives directly on the
>> host.
>>
>> Since our Rust binaries are currently not fully statically linked, we
>> need to include some libraries into the initramfs as well. This is done
>> in 'build_initramfs.sh'.
>>
>> A minimal /init is included as a Rust binary (init-shim-rs), doing only
>> the bare-minimum userspace setup before handing over control to the
>> file-restore daemon (see 'proxmox-backup' repository).
>>
>> The Debian package comes with an 'activate-noawait
>> pbs-file-restore-initramfs' trigger activation to rebuild the cached
>> initramfs when the base image shipped here updates. This is taken care
>> of by proxmox-file-restore.
>>
>> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
>> ---
>>
>> Brand new git repo! I called it proxmox-restore-vm-data for lack of any smarter
>> ideas, open for better names :)
>>
>> I also decided to include the 5.10 kernel and ZFS 2.0.3 from current pve-kernel
>> repository pretty last-minute, it seems to work fine though (ZFS isn't used atm
>> anyway).
>>
>>
> 
> applied, thanks!
> 
> Did two big changes though:
> * renamed to "proxmox-backup-restore-image"
> * split build system into packaging and actual build
> 
> As quite a lot of stuff was changed, which I did in a few ~10 minute sessions with days/weeks
> in between, please re-check:
> https://git.proxmox.com/?p=proxmox-backup-restore-image.git;a=summary
> 

LGTM in general, though "make test-run" was broken (and debian/ was 
copied twice) - little follow-up below.

The follow-up also updates the kernel to 5.11, like in pve-kernel - I 
quickly tested my current v2 with that and it worked fine; better than 
having the intermediary 5.10 in there.

------------------------ >8 ------------------------

 From dd910d15e035f62335a5eb943753ade5dddce1a8 Mon Sep 17 00:00:00 2001
From: Stefan Reiter <s.reiter@proxmox.com>
Date: Tue, 16 Mar 2021 16:26:00 +0100
Subject: [PATCH] fixup "test-run" target and update kernel to 5.11.0

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
  src/Makefile                  | 6 +++---
  src/submodules/ubuntu-hirsute | 2 +-
  2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/Makefile b/src/Makefile
index dcfac03..37f385f 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -19,7 +19,7 @@ all: ${KERNEL_IMG} ${INITRAMFS_IMG}
  ${BUILDDIR}.prepared: ${CONFIG}
  	rm -rf ${BUILDDIR}
  	mkdir -p ${BUILDDIR}
-	cp -a submodules debian patches ${BUILDDIR}/
+	cp -a submodules patches ${BUILDDIR}/
  	cp ${CONFIG} ${BUILDDIR}/${KERNEL_SUBMODULE}
  	cd ${BUILDDIR}/${KERNEL_SUBMODULE}; \
  	   for p in ../../patches/kernel/*.patch; do \
@@ -60,8 +60,8 @@ test-run: ${KERNEL_IMG} ${INITRAMFS_IMG}
  	# included in the initramfs, but it can be used to test the
  	# kernel/init-shim-rs builds
  	qemu-system-x86_64 -serial stdio -vnc none -enable-kvm \
-	   -kernel ${BUILDDIR}/${KERNEL_IMG} \
-	   -initrd build/initramfs/initramfs.img
+	   -kernel ${KERNEL_IMG} \
+	   -initrd ${INITRAMFS_IMG}

  .PHONY: clean
  clean:
diff --git a/src/submodules/ubuntu-hirsute b/src/submodules/ubuntu-hirsute
index 01f2ad6..f488090 160000
--- a/src/submodules/ubuntu-hirsute
+++ b/src/submodules/ubuntu-hirsute
@@ -1 +1 @@
-Subproject commit 01f2ad60c19fc07666c3cad5e6f527bc46af6303
+Subproject commit f48809012350997899c3ce1afc47eb77f116fcf4
-- 
2.20.1




^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2021-03-16 15:33 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-16 17:06 [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
2021-02-16 17:06 ` [pbs-devel] [PATCH pxar 01/22] decoder/aio: add contents() and content_size() calls Stefan Reiter
2021-02-17  7:56   ` Wolfgang Bumiller
2021-02-16 17:06 ` [pbs-devel] [PATCH pxar 02/22] decoder: add peek() Stefan Reiter
2021-02-17  8:20   ` Wolfgang Bumiller
2021-02-17  8:38     ` Stefan Reiter
2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-restore-vm-data 03/22] initial commit Stefan Reiter
2021-03-15 18:35   ` [pbs-devel] applied: " Thomas Lamprecht
2021-03-16 15:33     ` Stefan Reiter
2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 04/22] api2/admin/datastore: refactor list_dir_content in catalog_reader Stefan Reiter
2021-02-17  7:50   ` [pbs-devel] applied: " Thomas Lamprecht
2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 05/22] api2/admin/datastore: accept "/" as path for root Stefan Reiter
2021-02-17  7:50   ` [pbs-devel] applied: " Thomas Lamprecht
2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 06/22] api2/admin/datastore: refactor create_zip into pxar/extract Stefan Reiter
2021-02-17  7:50   ` [pbs-devel] applied: " Thomas Lamprecht
2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 07/22] pxar/extract: add extract_sub_dir Stefan Reiter
2021-02-17  7:51   ` [pbs-devel] applied: " Thomas Lamprecht
2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 08/22] pxar/extract: add sequential variants to create_zip, extract_sub_dir Stefan Reiter
2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 09/22] client: extract common functions to proxmox_client_tools module Stefan Reiter
2021-02-17  6:49   ` Dietmar Maurer
2021-02-17  7:58     ` Stefan Reiter
2021-02-17  8:50       ` Dietmar Maurer
2021-02-17  9:47         ` Stefan Reiter
2021-02-17 10:12           ` Dietmar Maurer
2021-02-17  9:13   ` [pbs-devel] applied: " Dietmar Maurer
2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 10/22] proxmox_client_tools: extract 'key' from client module Stefan Reiter
2021-02-17  9:11   ` Dietmar Maurer
2021-02-16 17:06 ` [pbs-devel] [PATCH proxmox-backup 11/22] file-restore: add binary and basic commands Stefan Reiter
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 12/22] file-restore: allow specifying output-format Stefan Reiter
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 13/22] rest: implement tower service for UnixStream Stefan Reiter
2021-02-17  6:52   ` [pbs-devel] applied: " Dietmar Maurer
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 14/22] client: add VsockClient to connect to virtio-vsock VMs Stefan Reiter
2021-02-17  7:24   ` [pbs-devel] applied: " Dietmar Maurer
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 15/22] file-restore-daemon: add binary with virtio-vsock API server Stefan Reiter
2021-02-17 10:17   ` Dietmar Maurer
2021-02-17 10:25   ` Dietmar Maurer
2021-02-17 10:30     ` Stefan Reiter
2021-02-17 11:13       ` Dietmar Maurer
2021-02-17 11:26   ` Dietmar Maurer
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 16/22] file-restore-daemon: add watchdog module Stefan Reiter
2021-02-17 10:52   ` Wolfgang Bumiller
2021-02-17 11:14     ` Stefan Reiter
2021-02-17 11:29       ` Wolfgang Bumiller
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 17/22] file-restore-daemon: add disk module Stefan Reiter
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 18/22] file-restore: add basic VM/block device support Stefan Reiter
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 19/22] file-restore: improve logging of VM with logrotate Stefan Reiter
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 20/22] debian/client: add postinst hook to rebuild file-restore initramfs Stefan Reiter
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 21/22] file-restore(-daemon): implement list API Stefan Reiter
2021-02-16 17:07 ` [pbs-devel] [PATCH proxmox-backup 22/22] file-restore: add 'extract' command for VM file restore Stefan Reiter
2021-02-16 17:11 ` [pbs-devel] [PATCH 00/22] Single file restore for VM images Stefan Reiter
