public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH v3 00/20] Single file restore for VM images
@ 2021-03-31 10:21 Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 pxar 01/20] decoder/aio: add contents() and content_size() calls Stefan Reiter
                   ` (20 more replies)
  0 siblings, 21 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Implements CLI-based single file and directory restore for both pxar.didx
archives (containers, hosts) and img.fidx (VMs, raw block devices). The design
for VM restore uses a small virtual machine that the host communicates with via
virtio-vsock.

This is encapsulated in a new package called "proxmox-file-restore", providing a
binary of the same name. A second package is provided in a new git repository[0]
called "proxmox-backup-restore-image", providing a minimal kernel image and a
base initramfs (without the daemon, which is included in proxmox-file-restore).

A dependency bump on pxar is required in proxmox-backup.

Tested with ext4 and NTFS VMs, but the restore image supports many more
filesystems in principle.

Known issues/Missing features:
* GUI/PVE support
* PBS_PASSWORD/PBS_FINGERPRINT currently have to be set manually for VM restore
* ZFS/LVM/md/... support
* shell auto-complete for "proxmox-file-restore" doesn't seem to work (and I
  don't know why...)

[0] now already public at:
    https://git.proxmox.com/?p=proxmox-backup-restore-image.git;a=summary


v3:
* rebase on master
* pxar: fix usage of assume_init (Wolfgang)
* fix crash with '--output-format json' in 'proxmox-file-restore status'
* make ApiAuth a single trait, makes for less generic-creep (Wolfgang)
* redo extract_sub_dir for sequential decoders (Wolfgang)
* fix Filesystems::scan in daemon/disk.rs for zfs (Wolfgang)
* some minor code cleanups


v2:
* rebase on master
* drop applied patches
* pxar: make contents() call available without tokio-io feature (Wolfgang)
* pxar: drop peek() implementation, rework extractor to cope (Wolfgang)
* only move necessary functions to new key_source.rs (Dietmar)
* implement static ticket-based authentication for VMs, as relying on ports
  <1024 does not guarantee security (Dietmar, Wolfgang)
* allow running proxmox-file-restore as regular user by providing setuid-binary
  to start QEMU VMs (setgid kvm is not enough because of /dev/vhost-vsock)
  (Dietmar, Fabian)
* update debian/* with new proxmox-backup-restore-image naming (Thomas)
* encode zip file directly on the VM, only encode pxar when requested (Dominik)
* use tokio task in watchdog, instead of alarm() (Wolfgang)
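
The static ticket-based authentication mentioned above can be sketched roughly
as follows (a minimal, std-only sketch; names are illustrative, the series'
actual implementation lives in src/bin/proxmox_restore_daemon/auth.rs):

```rust
// Illustrative sketch: the restore VM is started with a random ticket, and
// every API call must present that ticket. The comparison runs over all
// bytes without an early exit, to avoid leaking a timing side channel.
fn ticket_matches(expected: &[u8], presented: &[u8]) -> bool {
    if expected.len() != presented.len() {
        return false;
    }
    let mut diff = 0u8;
    for (a, b) in expected.iter().zip(presented.iter()) {
        diff |= a ^ b;
    }
    diff == 0
}

fn main() {
    let ticket = b"random-ticket-generated-at-vm-start";
    assert!(ticket_matches(ticket, ticket));
    assert!(!ticket_matches(ticket, b"wrong"));
}
```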


pxar: Stefan Reiter (1):
  decoder/aio: add contents() and content_size() calls

 src/decoder/aio.rs | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

proxmox-backup: Dominik Csapak (1):
  file-restore: add binary and basic commands

Stefan Reiter (18):
  vsock_client: remove wrong comment
  vsock_client: remove some &mut restrictions and rustfmt
  vsock_client: support authorization header
  proxmox_client_tools: move common key related functions to
    key_source.rs
  file-restore: allow specifying output-format
  server/rest: extract auth to separate module
  server/rest: add ApiAuth trait to make user auth generic
  file-restore-daemon: add binary with virtio-vsock API server
  file-restore-daemon: add watchdog module
  file-restore-daemon: add disk module
  add tools/cpio encoding module
  file-restore: add qemu-helper setuid binary
  file-restore: add basic VM/block device support
  debian/client: add postinst hook to rebuild file-restore initramfs
  file-restore(-daemon): implement list API
  pxar/extract: add sequential variant of extract_sub_dir
  tools/zip: add zip_directory helper
  file-restore: add 'extract' command for VM file restore
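
The tools/cpio module in the shortlog encodes the restore initramfs; for
Linux initramfs images this is presumably the "newc" cpio variant. A hedged,
std-only sketch of a newc header encoder (field layout per the newc format;
function name and simplifications are illustrative, not the series' API):

```rust
// Sketch of a cpio "newc" entry header: 6-byte magic plus 13 fields of
// 8 uppercase hex digits each, followed by the NUL-terminated name, with
// header+name padded to a 4-byte boundary.
fn newc_header(ino: u64, mode: u64, filesize: u64, name: &str) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(b"070701"); // newc magic
    let fields = [
        ino,                    // c_ino
        mode,                   // c_mode
        0,                      // c_uid
        0,                      // c_gid
        1,                      // c_nlink
        0,                      // c_mtime
        filesize,               // c_filesize
        0, 0, 0, 0,             // c_devmajor/minor, c_rdevmajor/minor
        name.len() as u64 + 1,  // c_namesize (includes trailing NUL)
        0,                      // c_check (always 0 for "newc")
    ];
    for f in fields {
        out.extend_from_slice(format!("{:08X}", f).as_bytes());
    }
    out.extend_from_slice(name.as_bytes());
    out.push(0);
    while out.len() % 4 != 0 {
        out.push(0); // pad header + name to 4-byte alignment
    }
    out
}

fn main() {
    let h = newc_header(1, 0o100644, 4, "init");
    assert_eq!(&h[..6], b"070701");
    assert_eq!(h.len() % 4, 0);
}
```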

 Cargo.toml                                    |   5 +-
 Makefile                                      |  22 +-
 debian/control                                |  14 +
 debian/control.in                             |  11 +
 debian/proxmox-file-restore.bash-completion   |   1 +
 debian/proxmox-file-restore.bc                |   8 +
 debian/proxmox-file-restore.install           |   5 +
 debian/proxmox-file-restore.postinst          |  63 ++
 debian/proxmox-file-restore.triggers          |   1 +
 debian/rules                                  |   9 +-
 docs/Makefile                                 |  10 +-
 docs/command-line-tools.rst                   |   5 +
 docs/proxmox-file-restore/description.rst     |   3 +
 docs/proxmox-file-restore/man1.rst            |  28 +
 src/api2.rs                                   |   2 +-
 src/api2/types/file_restore.rs                |  15 +
 src/api2/types/mod.rs                         |   3 +
 src/bin/proxmox-backup-api.rs                 |  13 +-
 src/bin/proxmox-backup-client.rs              | 453 +-------------
 src/bin/proxmox-backup-proxy.rs               |   7 +-
 src/bin/proxmox-file-restore.rs               | 456 ++++++++++++++
 src/bin/proxmox-restore-daemon.rs             | 124 ++++
 src/bin/proxmox-restore-qemu-helper.rs        | 372 ++++++++++++
 src/bin/proxmox_backup_client/benchmark.rs    |   4 +-
 src/bin/proxmox_backup_client/catalog.rs      |   3 +-
 src/bin/proxmox_backup_client/key.rs          | 112 +---
 src/bin/proxmox_backup_client/mod.rs          |  28 -
 src/bin/proxmox_backup_client/mount.rs        |   4 +-
 src/bin/proxmox_backup_client/snapshot.rs     |   4 +-
 src/bin/proxmox_client_tools/key_source.rs    | 573 ++++++++++++++++++
 src/bin/proxmox_client_tools/mod.rs           |  65 +-
 src/bin/proxmox_file_restore/block_driver.rs  | 206 +++++++
 .../proxmox_file_restore/block_driver_qemu.rs | 362 +++++++++++
 src/bin/proxmox_file_restore/mod.rs           |   5 +
 src/bin/proxmox_restore_daemon/api.rs         | 369 +++++++++++
 src/bin/proxmox_restore_daemon/auth.rs        |  45 ++
 src/bin/proxmox_restore_daemon/disk.rs        | 329 ++++++++++
 src/bin/proxmox_restore_daemon/mod.rs         |  11 +
 src/bin/proxmox_restore_daemon/watchdog.rs    |  41 ++
 src/buildcfg.rs                               |  21 +
 src/client/vsock_client.rs                    |  78 +--
 src/pxar/extract.rs                           | 316 +++++++---
 src/pxar/mod.rs                               |   5 +-
 src/server.rs                                 |   2 +
 src/server/auth.rs                            | 140 +++++
 src/server/config.rs                          |  13 +-
 src/server/rest.rs                            | 130 +---
 src/tools.rs                                  |   1 +
 src/tools/cpio.rs                             |  73 +++
 src/tools/zip.rs                              |  77 +++
 zsh-completions/_proxmox-file-restore         |  13 +
 51 files changed, 3796 insertions(+), 864 deletions(-)
 create mode 100644 debian/proxmox-file-restore.bash-completion
 create mode 100644 debian/proxmox-file-restore.bc
 create mode 100644 debian/proxmox-file-restore.install
 create mode 100755 debian/proxmox-file-restore.postinst
 create mode 100644 debian/proxmox-file-restore.triggers
 create mode 100644 docs/proxmox-file-restore/description.rst
 create mode 100644 docs/proxmox-file-restore/man1.rst
 create mode 100644 src/api2/types/file_restore.rs
 create mode 100644 src/bin/proxmox-file-restore.rs
 create mode 100644 src/bin/proxmox-restore-daemon.rs
 create mode 100644 src/bin/proxmox-restore-qemu-helper.rs
 create mode 100644 src/bin/proxmox_client_tools/key_source.rs
 create mode 100644 src/bin/proxmox_file_restore/block_driver.rs
 create mode 100644 src/bin/proxmox_file_restore/block_driver_qemu.rs
 create mode 100644 src/bin/proxmox_file_restore/mod.rs
 create mode 100644 src/bin/proxmox_restore_daemon/api.rs
 create mode 100644 src/bin/proxmox_restore_daemon/auth.rs
 create mode 100644 src/bin/proxmox_restore_daemon/disk.rs
 create mode 100644 src/bin/proxmox_restore_daemon/mod.rs
 create mode 100644 src/bin/proxmox_restore_daemon/watchdog.rs
 create mode 100644 src/server/auth.rs
 create mode 100644 src/tools/cpio.rs
 create mode 100644 zsh-completions/_proxmox-file-restore

-- 
2.20.1




^ permalink raw reply	[flat|nested] 32+ messages in thread

* [pbs-devel] [PATCH v3 pxar 01/20] decoder/aio: add contents() and content_size() calls
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 11:54   ` [pbs-devel] applied: " Wolfgang Bumiller
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 02/20] vsock_client: remove wrong comment Stefan Reiter
                   ` (19 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Returns a decoder::Contents without a wrapper type, since in this case
we don't want to hide the SeqRead implementation (as done in
decoder::sync). For convenience, also implement AsyncRead if "tokio-io"
is enabled.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

proxmox-backup requires a dependency bump on this!

v3:
* assume_init takes just 'n', already calculates offset correctly

v2:
* make contents() call available without tokio-io feature
* drop peek() implementation

 src/decoder/aio.rs | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/src/decoder/aio.rs b/src/decoder/aio.rs
index 82030b0..55e6464 100644
--- a/src/decoder/aio.rs
+++ b/src/decoder/aio.rs
@@ -5,7 +5,7 @@ use std::io;
 #[cfg(feature = "tokio-fs")]
 use std::path::Path;
 
-use crate::decoder::{self, SeqRead};
+use crate::decoder::{self, Contents, SeqRead};
 use crate::Entry;
 
 /// Asynchronous `pxar` decoder.
@@ -56,6 +56,16 @@ impl<T: SeqRead> Decoder<T> {
         self.inner.next_do().await.transpose()
     }
 
+    /// Get a reader for the contents of the current entry, if the entry has contents.
+    pub fn contents(&mut self) -> Option<Contents<T>> {
+        self.inner.content_reader()
+    }
+
+    /// Get the size of the current contents, if the entry has contents.
+    pub fn content_size(&self) -> Option<u64> {
+        self.inner.content_size()
+    }
+
     /// Include goodbye tables in iteration.
     pub fn enable_goodbye_entries(&mut self, on: bool) {
         self.inner.with_goodbye_tables = on;
@@ -67,6 +77,7 @@ mod tok {
     use std::io;
     use std::pin::Pin;
     use std::task::{Context, Poll};
+    use crate::decoder::{Contents, SeqRead};
 
     /// Read adapter for `futures::io::AsyncRead`
     pub struct TokioReader<T> {
@@ -93,6 +104,29 @@ mod tok {
             }
         }
     }
+
+    impl<'a, T: crate::decoder::SeqRead> tokio::io::AsyncRead for Contents<'a, T> {
+        fn poll_read(
+            self: Pin<&mut Self>,
+            cx: &mut Context<'_>,
+            buf: &mut tokio::io::ReadBuf<'_>,
+        ) -> Poll<io::Result<()>> {
+            unsafe {
+                // Safety: poll_seq_read will *probably* only write to the buffer, so we don't
+            // initialize it first, instead we treat it as a &[u8] immediately and uphold the
+                // ReadBuf invariants in the conditional below.
+                let write_buf =
+                    &mut *(buf.unfilled_mut() as *mut [std::mem::MaybeUninit<u8>] as *mut [u8]);
+                let result = self.poll_seq_read(cx, write_buf);
+                if let Poll::Ready(Ok(n)) = result {
+                    // if we've written data, advance both initialized and filled bytes cursor
+                    buf.assume_init(n);
+                    buf.advance(n);
+                }
+                result.map(|_| Ok(()))
+            }
+        }
+    }
 }
 
 #[cfg(feature = "tokio-io")]
-- 
2.20.1






* [pbs-devel] [PATCH v3 proxmox-backup 02/20] vsock_client: remove wrong comment
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 pxar 01/20] decoder/aio: add contents() and content_size() calls Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-04-01  9:53   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 03/20] vsock_client: remove some &mut restrictions and rustfmt Stefan Reiter
                   ` (18 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

new in v3

 src/client/vsock_client.rs | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/client/vsock_client.rs b/src/client/vsock_client.rs
index d78f2a8a..5002b53d 100644
--- a/src/client/vsock_client.rs
+++ b/src/client/vsock_client.rs
@@ -18,7 +18,6 @@ use tokio::net::UnixStream;
 use crate::tools;
 use proxmox::api::error::HttpError;
 
-/// Port below 1024 is privileged, this is intentional so only root (on host) can connect
 pub const DEFAULT_VSOCK_PORT: u16 = 807;
 
 #[derive(Clone)]
-- 
2.20.1






* [pbs-devel] [PATCH v3 proxmox-backup 03/20] vsock_client: remove some &mut restrictions and rustfmt
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 pxar 01/20] decoder/aio: add contents() and content_size() calls Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 02/20] vsock_client: remove wrong comment Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-04-01  9:54   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 04/20] vsock_client: support authorization header Stefan Reiter
                   ` (17 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

new in v3

 src/client/vsock_client.rs | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/src/client/vsock_client.rs b/src/client/vsock_client.rs
index 5002b53d..a7740ac2 100644
--- a/src/client/vsock_client.rs
+++ b/src/client/vsock_client.rs
@@ -12,7 +12,7 @@ use hyper::client::Client;
 use hyper::Body;
 use pin_project::pin_project;
 use serde_json::Value;
-use tokio::io::{ReadBuf, AsyncRead, AsyncWrite, AsyncWriteExt};
+use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt, ReadBuf};
 use tokio::net::UnixStream;
 
 use crate::tools;
@@ -151,13 +151,13 @@ impl VsockClient {
         self.api_request(req).await
     }
 
-    pub async fn post(&mut self, path: &str, data: Option<Value>) -> Result<Value, Error> {
+    pub async fn post(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
         let req = Self::request_builder(self.cid, self.port, "POST", path, data)?;
         self.api_request(req).await
     }
 
     pub async fn download(
-        &mut self,
+        &self,
         path: &str,
         data: Option<Value>,
         output: &mut (dyn AsyncWrite + Send + Unpin),
@@ -166,14 +166,13 @@ impl VsockClient {
 
         let client = self.client.clone();
 
-        let resp = client.request(req)
+        let resp = client
+            .request(req)
             .await
             .map_err(|_| format_err!("vsock download request timed out"))?;
         let status = resp.status();
         if !status.is_success() {
-            Self::api_response(resp)
-                .await
-                .map(|_| ())?
+            Self::api_response(resp).await.map(|_| ())?
         } else {
             resp.into_body()
                 .map_err(Error::from)
-- 
2.20.1






* [pbs-devel] [PATCH v3 proxmox-backup 04/20] vsock_client: support authorization header
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (2 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 03/20] vsock_client: remove some &mut restrictions and rustfmt Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-04-01  9:54   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 05/20] proxmox_client_tools: move common key related functions to key_source.rs Stefan Reiter
                   ` (16 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Pass in an optional auth tag, which will be passed as an Authorization
header on every subsequent call.
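
The pattern above can be sketched std-only (the real code wraps hyper's
Request::builder in a closure; the Vec of header pairs and the struct name
here are purely illustrative):

```rust
// Sketch: build the common headers once, and append Authorization only
// when an auth token was supplied at construction time.
struct VsockClientSketch {
    auth: Option<String>,
}

impl VsockClientSketch {
    fn headers(&self, content_type: &str) -> Vec<(String, String)> {
        let mut headers = vec![("Content-Type".to_string(), content_type.to_string())];
        if let Some(auth) = &self.auth {
            // the same token is sent on every subsequent call
            headers.push(("Authorization".to_string(), auth.clone()));
        }
        headers
    }
}

fn main() {
    let with_auth = VsockClientSketch { auth: Some("ticket123".to_string()) };
    let without = VsockClientSketch { auth: None };
    assert_eq!(with_auth.headers("application/json").len(), 2);
    assert_eq!(without.headers("application/json").len(), 1);
}
```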

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

new in v3

 src/client/vsock_client.rs | 64 ++++++++++++++++++++------------------
 1 file changed, 33 insertions(+), 31 deletions(-)

diff --git a/src/client/vsock_client.rs b/src/client/vsock_client.rs
index a7740ac2..5dd9eb4b 100644
--- a/src/client/vsock_client.rs
+++ b/src/client/vsock_client.rs
@@ -137,22 +137,28 @@ pub struct VsockClient {
     client: Client<VsockConnector>,
     cid: i32,
     port: u16,
+    auth: Option<String>,
 }
 
 impl VsockClient {
-    pub fn new(cid: i32, port: u16) -> Self {
+    pub fn new(cid: i32, port: u16, auth: Option<String>) -> Self {
         let conn = VsockConnector {};
         let client = Client::builder().build::<_, Body>(conn);
-        Self { client, cid, port }
+        Self {
+            client,
+            cid,
+            port,
+            auth,
+        }
     }
 
     pub async fn get(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
-        let req = Self::request_builder(self.cid, self.port, "GET", path, data)?;
+        let req = self.request_builder("GET", path, data)?;
         self.api_request(req).await
     }
 
     pub async fn post(&self, path: &str, data: Option<Value>) -> Result<Value, Error> {
-        let req = Self::request_builder(self.cid, self.port, "POST", path, data)?;
+        let req = self.request_builder("POST", path, data)?;
         self.api_request(req).await
     }
 
@@ -162,7 +168,7 @@ impl VsockClient {
         data: Option<Value>,
         output: &mut (dyn AsyncWrite + Send + Unpin),
     ) -> Result<(), Error> {
-        let req = Self::request_builder(self.cid, self.port, "GET", path, data)?;
+        let req = self.request_builder("GET", path, data)?;
 
         let client = self.client.clone();
 
@@ -210,47 +216,43 @@ impl VsockClient {
             .await
     }
 
-    pub fn request_builder(
-        cid: i32,
-        port: u16,
+    fn request_builder(
+        &self,
         method: &str,
         path: &str,
         data: Option<Value>,
     ) -> Result<Request<Body>, Error> {
         let path = path.trim_matches('/');
-        let url: Uri = format!("vsock://{}:{}/{}", cid, port, path).parse()?;
+        let url: Uri = format!("vsock://{}:{}/{}", self.cid, self.port, path).parse()?;
+
+        let make_builder = |content_type: &str, url: &Uri| {
+            let mut builder = Request::builder()
+                .method(method)
+                .uri(url)
+                .header(hyper::header::CONTENT_TYPE, content_type);
+            if let Some(auth) = &self.auth {
+                builder = builder.header(hyper::header::AUTHORIZATION, auth);
+            }
+            builder
+        };
 
         if let Some(data) = data {
             if method == "POST" {
-                let request = Request::builder()
-                    .method(method)
-                    .uri(url)
-                    .header(hyper::header::CONTENT_TYPE, "application/json")
-                    .body(Body::from(data.to_string()))?;
+                let builder = make_builder("application/json", &url);
+                let request = builder.body(Body::from(data.to_string()))?;
                 return Ok(request);
             } else {
                 let query = tools::json_object_to_query(data)?;
-                let url: Uri = format!("vsock://{}:{}/{}?{}", cid, port, path, query).parse()?;
-                let request = Request::builder()
-                    .method(method)
-                    .uri(url)
-                    .header(
-                        hyper::header::CONTENT_TYPE,
-                        "application/x-www-form-urlencoded",
-                    )
-                    .body(Body::empty())?;
+                let url: Uri =
+                    format!("vsock://{}:{}/{}?{}", self.cid, self.port, path, query).parse()?;
+                let builder = make_builder("application/x-www-form-urlencoded", &url);
+                let request = builder.body(Body::empty())?;
                 return Ok(request);
             }
         }
 
-        let request = Request::builder()
-            .method(method)
-            .uri(url)
-            .header(
-                hyper::header::CONTENT_TYPE,
-                "application/x-www-form-urlencoded",
-            )
-            .body(Body::empty())?;
+        let builder = make_builder("application/x-www-form-urlencoded", &url);
+        let request = builder.body(Body::empty())?;
 
         Ok(request)
     }
-- 
2.20.1






* [pbs-devel] [PATCH v3 proxmox-backup 05/20] proxmox_client_tools: move common key related functions to key_source.rs
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (3 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 04/20] vsock_client: support authorization header Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-04-01  9:54   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 06/20] file-restore: add binary and basic commands Stefan Reiter
                   ` (15 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Add a new module containing key-related functions and schemata collected
from all over the codebase; moved code is changed as little as possible.

Requires adapting some 'use' statements across proxmox-backup-client and
putting the XDG helpers quite cozily into proxmox_client_tools/mod.rs

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v2:
* don't move entire key.rs, just what is necessary

 src/bin/proxmox-backup-client.rs           | 453 +---------------
 src/bin/proxmox_backup_client/benchmark.rs |   4 +-
 src/bin/proxmox_backup_client/catalog.rs   |   3 +-
 src/bin/proxmox_backup_client/key.rs       | 112 +---
 src/bin/proxmox_backup_client/mod.rs       |  28 -
 src/bin/proxmox_backup_client/mount.rs     |   4 +-
 src/bin/proxmox_backup_client/snapshot.rs  |   4 +-
 src/bin/proxmox_client_tools/key_source.rs | 573 +++++++++++++++++++++
 src/bin/proxmox_client_tools/mod.rs        |  48 +-
 9 files changed, 631 insertions(+), 598 deletions(-)
 create mode 100644 src/bin/proxmox_client_tools/key_source.rs

diff --git a/src/bin/proxmox-backup-client.rs b/src/bin/proxmox-backup-client.rs
index 45b26c7a..50703dcb 100644
--- a/src/bin/proxmox-backup-client.rs
+++ b/src/bin/proxmox-backup-client.rs
@@ -1,7 +1,5 @@
 use std::collections::HashSet;
-use std::convert::TryFrom;
 use std::io::{self, Read, Write, Seek, SeekFrom};
-use std::os::unix::io::{FromRawFd, RawFd};
 use std::path::{Path, PathBuf};
 use std::pin::Pin;
 use std::sync::{Arc, Mutex};
@@ -19,7 +17,7 @@ use pathpatterns::{MatchEntry, MatchType, PatternFlag};
 use proxmox::{
     tools::{
         time::{strftime_local, epoch_i64},
-        fs::{file_get_contents, file_get_json, replace_file, CreateOptions, image_size},
+        fs::{file_get_json, replace_file, CreateOptions, image_size},
     },
     api::{
         api,
@@ -71,8 +69,18 @@ use proxmox_backup::backup::{
 mod proxmox_backup_client;
 use proxmox_backup_client::*;
 
-mod proxmox_client_tools;
-use proxmox_client_tools::*;
+pub mod proxmox_client_tools;
+use proxmox_client_tools::{
+    complete_archive_name, complete_auth_id, complete_backup_group, complete_backup_snapshot,
+    complete_backup_source, complete_chunk_size, complete_group_or_snapshot,
+    complete_img_archive_name, complete_pxar_archive_name, complete_repository, connect,
+    extract_repository_from_value,
+    key_source::{
+        crypto_parameters, format_key_source, get_encryption_key_password, KEYFD_SCHEMA,
+        KEYFILE_SCHEMA, MASTER_PUBKEY_FD_SCHEMA, MASTER_PUBKEY_FILE_SCHEMA,
+    },
+    CHUNK_SIZE_SCHEMA, REPO_URL_SCHEMA,
+};
 
 fn record_repository(repo: &BackupRepository) {
 
@@ -503,437 +511,6 @@ fn spawn_catalog_upload(
     Ok(CatalogUploadResult { catalog_writer, result: catalog_result_rx })
 }
 
-#[derive(Clone, Debug, Eq, PartialEq)]
-enum KeySource {
-    DefaultKey,
-    Fd,
-    Path(String),
-}
-
-fn format_key_source(source: &KeySource, key_type: &str) -> String {
-    match source {
-        KeySource::DefaultKey => format!("Using default {} key..", key_type),
-        KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
-        KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
-    }
-}
-
-#[derive(Clone, Debug, Eq, PartialEq)]
-struct KeyWithSource {
-    pub source: KeySource,
-    pub key: Vec<u8>,
-}
-
-impl KeyWithSource {
-    pub fn from_fd(key: Vec<u8>) -> Self {
-        Self {
-            source: KeySource::Fd,
-            key,
-        }
-    }
-
-    pub fn from_default(key: Vec<u8>) -> Self {
-        Self {
-            source: KeySource::DefaultKey,
-            key,
-        }
-    }
-
-    pub fn from_path(path: String, key: Vec<u8>) -> Self {
-        Self {
-            source: KeySource::Path(path),
-            key,
-        }
-    }
-}
-
-#[derive(Debug, Eq, PartialEq)]
-struct CryptoParams {
-    mode: CryptMode,
-    enc_key: Option<KeyWithSource>,
-    // FIXME switch to openssl::rsa::rsa<openssl::pkey::Public> once that is Eq?
-    master_pubkey: Option<KeyWithSource>,
-}
-
-fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
-    let keyfile = match param.get("keyfile") {
-        Some(Value::String(keyfile)) => Some(keyfile),
-        Some(_) => bail!("bad --keyfile parameter type"),
-        None => None,
-    };
-
-    let key_fd = match param.get("keyfd") {
-        Some(Value::Number(key_fd)) => Some(
-            RawFd::try_from(key_fd
-                .as_i64()
-                .ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
-            )
-            .map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
-        ),
-        Some(_) => bail!("bad --keyfd parameter type"),
-        None => None,
-    };
-
-    let master_pubkey_file = match param.get("master-pubkey-file") {
-        Some(Value::String(keyfile)) => Some(keyfile),
-        Some(_) => bail!("bad --master-pubkey-file parameter type"),
-        None => None,
-    };
-
-    let master_pubkey_fd = match param.get("master-pubkey-fd") {
-        Some(Value::Number(key_fd)) => Some(
-            RawFd::try_from(key_fd
-                .as_i64()
-                .ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
-            )
-            .map_err(|err| format_err!("bad public master key fd: {:?}: {}", key_fd, err))?
-        ),
-        Some(_) => bail!("bad --master-pubkey-fd parameter type"),
-        None => None,
-    };
-
-    let mode: Option<CryptMode> = match param.get("crypt-mode") {
-        Some(mode) => Some(serde_json::from_value(mode.clone())?),
-        None => None,
-    };
-
-    let key = match (keyfile, key_fd) {
-        (None, None) => None,
-        (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
-        (Some(keyfile), None) => Some(KeyWithSource::from_path(
-            keyfile.clone(),
-            file_get_contents(keyfile)?,
-        )),
-        (None, Some(fd)) => {
-            let input = unsafe { std::fs::File::from_raw_fd(fd) };
-            let mut data = Vec::new();
-            let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
-                format_err!("error reading encryption key from fd {}: {}", fd, err)
-            })?;
-            Some(KeyWithSource::from_fd(data))
-        }
-    };
-
-    let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
-        (None, None) => None,
-        (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
-        (Some(keyfile), None) => Some(KeyWithSource::from_path(
-            keyfile.clone(),
-            file_get_contents(keyfile)?,
-        )),
-        (None, Some(fd)) => {
-            let input = unsafe { std::fs::File::from_raw_fd(fd) };
-            let mut data = Vec::new();
-            let _len: usize = { input }
-                .read_to_end(&mut data)
-                .map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
-            Some(KeyWithSource::from_fd(data))
-        }
-    };
-
-    let res = match mode {
-        // no crypt mode, enable encryption if keys are available
-        None => match (key, master_pubkey) {
-            // only default keys if available
-            (None, None) => match key::read_optional_default_encryption_key()? {
-                None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
-                enc_key => {
-                    let master_pubkey = key::read_optional_default_master_pubkey()?;
-                    CryptoParams {
-                        mode: CryptMode::Encrypt,
-                        enc_key,
-                        master_pubkey,
-                    }
-                },
-            },
-
-            // explicit master key, default enc key needed
-            (None, master_pubkey) => match key::read_optional_default_encryption_key()? {
-                None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
-                enc_key => {
-                    CryptoParams {
-                        mode: CryptMode::Encrypt,
-                        enc_key,
-                        master_pubkey,
-                    }
-                },
-            },
-
-            // explicit keyfile, maybe default master key
-            (enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: key::read_optional_default_master_pubkey()? },
-
-            // explicit keyfile and master key
-            (enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
-        },
-
-        // explicitly disabled encryption
-        Some(CryptMode::None) => match (key, master_pubkey) {
-            // no keys => OK, no encryption
-            (None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
-
-            // --keyfile and --crypt-mode=none
-            (Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
-
-            // --master-pubkey-file and --crypt-mode=none
-            (_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
-        },
-
-        // explicitly enabled encryption
-        Some(mode) => match (key, master_pubkey) {
-            // no key, maybe master key
-            (None, master_pubkey) => match key::read_optional_default_encryption_key()? {
-                None => bail!("--crypt-mode without --keyfile and no default key file available"),
-                enc_key => {
-                    eprintln!("Encrypting with default encryption key!");
-                    let master_pubkey = match master_pubkey {
-                        None => key::read_optional_default_master_pubkey()?,
-                        master_pubkey => master_pubkey,
-                    };
-
-                    CryptoParams {
-                        mode,
-                        enc_key,
-                        master_pubkey,
-                    }
-                },
-            },
-
-            // --keyfile and --crypt-mode other than none
-            (enc_key, master_pubkey) => {
-                let master_pubkey = match master_pubkey {
-                    None => key::read_optional_default_master_pubkey()?,
-                    master_pubkey => master_pubkey,
-                };
-
-                CryptoParams { mode, enc_key, master_pubkey }
-            },
-        },
-    };
-
-    Ok(res)
-}
-
-#[test]
-// WARNING: there must only be one test for crypto_parameters as the default key handling is not
-// safe w.r.t. concurrency
-fn test_crypto_parameters_handling() -> Result<(), Error> {
-    let some_key = vec![1;1];
-    let default_key = vec![2;1];
-
-    let some_master_key = vec![3;1];
-    let default_master_key = vec![4;1];
-
-    let keypath = "./target/testout/keyfile.test";
-    let master_keypath = "./target/testout/masterkeyfile.test";
-    let invalid_keypath = "./target/testout/invalid_keyfile.test";
-
-    let no_key_res = CryptoParams {
-        enc_key: None,
-        master_pubkey: None,
-        mode: CryptMode::None,
-    };
-    let some_key_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_path(
-            keypath.to_string(),
-            some_key.clone(),
-        )),
-        master_pubkey: None,
-        mode: CryptMode::Encrypt,
-    };
-    let some_key_some_master_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_path(
-            keypath.to_string(),
-            some_key.clone(),
-        )),
-        master_pubkey: Some(KeyWithSource::from_path(
-            master_keypath.to_string(),
-            some_master_key.clone(),
-        )),
-        mode: CryptMode::Encrypt,
-    };
-    let some_key_default_master_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_path(
-            keypath.to_string(),
-            some_key.clone(),
-        )),
-        master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
-        mode: CryptMode::Encrypt,
-    };
-
-    let some_key_sign_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_path(
-            keypath.to_string(),
-            some_key.clone(),
-        )),
-        master_pubkey: None,
-        mode: CryptMode::SignOnly,
-    };
-    let default_key_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_default(default_key.clone())),
-        master_pubkey: None,
-        mode: CryptMode::Encrypt,
-    };
-    let default_key_sign_res = CryptoParams {
-        enc_key: Some(KeyWithSource::from_default(default_key.clone())),
-        master_pubkey: None,
-        mode: CryptMode::SignOnly,
-    };
-
-    replace_file(&keypath, &some_key, CreateOptions::default())?;
-    replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
-
-    // no params, no default key == no key
-    let res = crypto_parameters(&json!({}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // keyfile param == key from keyfile
-    let res = crypto_parameters(&json!({"keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // crypt mode none == no key
-    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // crypt mode encrypt/sign-only, no keyfile, no default key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
-
-    // crypt mode none with explicit key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
-
-    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_sign_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // invalid keyfile parameter always errors
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
-
-    // now set a default key
-    unsafe { key::set_test_encryption_key(Ok(Some(default_key.clone()))); }
-
-    // and repeat
-
-    // no params but default key == default key
-    let res = crypto_parameters(&json!({}));
-    assert_eq!(res.unwrap(), default_key_res);
-
-    // keyfile param == key from keyfile
-    let res = crypto_parameters(&json!({"keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // crypt mode none == no key
-    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
-    assert_eq!(res.unwrap(), default_key_sign_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
-    assert_eq!(res.unwrap(), default_key_res);
-
-    // crypt mode none with explicit key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
-
-    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_sign_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // invalid keyfile parameter always errors
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
-
-    // now make default key retrieval error
-    unsafe { key::set_test_encryption_key(Err(format_err!("test error"))); }
-
-    // and repeat
-
-    // no params, default key retrieval errors == Error
-    assert!(crypto_parameters(&json!({})).is_err());
-
-    // keyfile param == key from keyfile
-    let res = crypto_parameters(&json!({"keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // crypt mode none == no key
-    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // crypt mode encrypt/sign-only, no keyfile, default key error == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
-
-    // crypt mode none with explicit key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
-
-    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_sign_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_res);
-
-    // invalid keyfile parameter always errors
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
-
-    // now remove default key again
-    unsafe { key::set_test_encryption_key(Ok(None)); }
-    // set a default master key
-    unsafe { key::set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
-
-    // and use an explicit master key
-    assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
-    // just a default == no key
-    let res = crypto_parameters(&json!({}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // keyfile param == key from keyfile
-    let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
-    assert_eq!(res.unwrap(), some_key_some_master_res);
-    // same with fallback to default master key
-    let res = crypto_parameters(&json!({"keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_default_master_res);
-
-    // crypt mode none == error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
-    // with just default master key == no key
-    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
-    assert_eq!(res.unwrap(), no_key_res);
-
-    // crypt mode encrypt without enc key == error
-    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
-
-    // crypt mode none with explicit key == Error
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
-
-    // crypt mode encrypt with keyfile == key from keyfile with correct mode
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
-    assert_eq!(res.unwrap(), some_key_some_master_res);
-    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
-    assert_eq!(res.unwrap(), some_key_default_master_res);
-
-    // invalid master keyfile parameter always errors when a key is passed, even with a valid
-    // default master key
-    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
-    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
-
-    Ok(())
-}
-
 #[api(
    input: {
        properties: {
@@ -1164,7 +741,7 @@ async fn create_backup(
             );
 
             let (key, created, fingerprint) =
-                decrypt_key(&key_with_source.key, &key::get_encryption_key_password)?;
+                decrypt_key(&key_with_source.key, &get_encryption_key_password)?;
             println!("Encryption key fingerprint: {}", fingerprint);
 
             let crypt_config = CryptConfig::new(key)?;
@@ -1514,7 +1091,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
         None => None,
         Some(ref key) => {
             let (key, _, _) =
-                decrypt_key(&key.key, &key::get_encryption_key_password).map_err(|err| {
+                decrypt_key(&key.key, &get_encryption_key_password).map_err(|err| {
                     eprintln!("{}", format_key_source(&key.source, "encryption"));
                     err
                 })?;
diff --git a/src/bin/proxmox_backup_client/benchmark.rs b/src/bin/proxmox_backup_client/benchmark.rs
index 1076dc19..c1673701 100644
--- a/src/bin/proxmox_backup_client/benchmark.rs
+++ b/src/bin/proxmox_backup_client/benchmark.rs
@@ -34,6 +34,8 @@ use crate::{
     connect,
 };
 
+use crate::proxmox_client_tools::key_source::get_encryption_key_password;
+
 #[api()]
 #[derive(Copy, Clone, Serialize)]
 /// Speed test result
@@ -152,7 +154,7 @@ pub async fn benchmark(
     let crypt_config = match keyfile {
         None => None,
         Some(path) => {
-            let (key, _, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
+            let (key, _, _) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
             let crypt_config = CryptConfig::new(key)?;
             Some(Arc::new(crypt_config))
         }
diff --git a/src/bin/proxmox_backup_client/catalog.rs b/src/bin/proxmox_backup_client/catalog.rs
index 659200ff..f4b0a1d5 100644
--- a/src/bin/proxmox_backup_client/catalog.rs
+++ b/src/bin/proxmox_backup_client/catalog.rs
@@ -17,7 +17,6 @@ use crate::{
     extract_repository_from_value,
     format_key_source,
     record_repository,
-    key::get_encryption_key_password,
     decrypt_key,
     api_datastore_latest_snapshot,
     complete_repository,
@@ -38,6 +37,8 @@ use crate::{
     Shell,
 };
 
+use crate::proxmox_client_tools::key_source::get_encryption_key_password;
+
 #[api(
    input: {
         properties: {
diff --git a/src/bin/proxmox_backup_client/key.rs b/src/bin/proxmox_backup_client/key.rs
index 76b135a2..c442fad9 100644
--- a/src/bin/proxmox_backup_client/key.rs
+++ b/src/bin/proxmox_backup_client/key.rs
@@ -20,114 +20,10 @@ use proxmox_backup::{
     tools::paperkey::{generate_paper_key, PaperkeyFormat},
 };
 
-use crate::KeyWithSource;
-
-pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
-pub const DEFAULT_MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
-
-pub fn find_default_master_pubkey() -> Result<Option<PathBuf>, Error> {
-    super::find_xdg_file(
-        DEFAULT_MASTER_PUBKEY_FILE_NAME,
-        "default master public key file",
-    )
-}
-
-pub fn place_default_master_pubkey() -> Result<PathBuf, Error> {
-    super::place_xdg_file(
-        DEFAULT_MASTER_PUBKEY_FILE_NAME,
-        "default master public key file",
-    )
-}
-
-pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
-    super::find_xdg_file(
-        DEFAULT_ENCRYPTION_KEY_FILE_NAME,
-        "default encryption key file",
-    )
-}
-
-pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
-    super::place_xdg_file(
-        DEFAULT_ENCRYPTION_KEY_FILE_NAME,
-        "default encryption key file",
-    )
-}
-
-#[cfg(not(test))]
-pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
-    find_default_encryption_key()?
-        .map(|path| file_get_contents(path).map(KeyWithSource::from_default))
-        .transpose()
-}
-
-#[cfg(not(test))]
-pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
-    find_default_master_pubkey()?
-        .map(|path| file_get_contents(path).map(KeyWithSource::from_default))
-        .transpose()
-}
-
-#[cfg(test)]
-static mut TEST_DEFAULT_ENCRYPTION_KEY: Result<Option<Vec<u8>>, Error> = Ok(None);
-
-#[cfg(test)]
-pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
-    // not safe when multiple concurrent test cases end up here!
-    unsafe {
-        match &TEST_DEFAULT_ENCRYPTION_KEY {
-            Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
-            Ok(None) => Ok(None),
-            Err(_) => bail!("test error"),
-        }
-    }
-}
-
-#[cfg(test)]
-// not safe when multiple concurrent test cases end up here!
-pub(crate) unsafe fn set_test_encryption_key(value: Result<Option<Vec<u8>>, Error>) {
-    TEST_DEFAULT_ENCRYPTION_KEY = value;
-}
-
-#[cfg(test)]
-static mut TEST_DEFAULT_MASTER_PUBKEY: Result<Option<Vec<u8>>, Error> = Ok(None);
-
-#[cfg(test)]
-pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
-    // not safe when multiple concurrent test cases end up here!
-    unsafe {
-        match &TEST_DEFAULT_MASTER_PUBKEY {
-            Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
-            Ok(None) => Ok(None),
-            Err(_) => bail!("test error"),
-        }
-    }
-}
-
-#[cfg(test)]
-// not safe when multiple concurrent test cases end up here!
-pub(crate) unsafe fn set_test_default_master_pubkey(value: Result<Option<Vec<u8>>, Error>) {
-    TEST_DEFAULT_MASTER_PUBKEY = value;
-}
-
-pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
-    // fixme: implement other input methods
-
-    use std::env::VarError::*;
-    match std::env::var("PBS_ENCRYPTION_PASSWORD") {
-        Ok(p) => return Ok(p.as_bytes().to_vec()),
-        Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
-        Err(NotPresent) => {
-            // Try another method
-        }
-    }
-
-    // If we're on a TTY, query the user for a password
-    if tty::stdin_isatty() {
-        return Ok(tty::read_password("Encryption Key Password: ")?);
-    }
-
-    bail!("no password input mechanism available");
-}
+use crate::proxmox_client_tools::key_source::{
+    find_default_encryption_key, find_default_master_pubkey, get_encryption_key_password,
+    place_default_encryption_key, place_default_master_pubkey,
+};
 
 #[api(
     input: {
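A note on the moved `get_encryption_key_password` helper: it first consults the `PBS_ENCRYPTION_PASSWORD` environment variable and only then falls back to a TTY prompt. A minimal, self-contained sketch of just the environment-variable step (the function name and the parameterized signature are mine, for illustration; the real helper calls `std::env::var` itself and then prompts on a TTY):

```rust
use std::env::VarError;

// Sketch of the first lookup step in get_encryption_key_password:
// PBS_ENCRYPTION_PASSWORD wins; Ok(None) means "fall through to the TTY prompt".
fn password_from_env(var: Result<String, VarError>) -> Result<Option<Vec<u8>>, String> {
    match var {
        Ok(p) => Ok(Some(p.into_bytes())),
        Err(VarError::NotUnicode(_)) => {
            Err("PBS_ENCRYPTION_PASSWORD contains bad characters".into())
        }
        // not set: the caller tries the next input method (TTY prompt)
        Err(VarError::NotPresent) => Ok(None),
    }
}
```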
diff --git a/src/bin/proxmox_backup_client/mod.rs b/src/bin/proxmox_backup_client/mod.rs
index a14b0dc1..d272dc8f 100644
--- a/src/bin/proxmox_backup_client/mod.rs
+++ b/src/bin/proxmox_backup_client/mod.rs
@@ -1,5 +1,3 @@
-use anyhow::{Context, Error};
-
 mod benchmark;
 pub use benchmark::*;
 mod mount;
@@ -13,29 +11,3 @@ pub use snapshot::*;
 
 pub mod key;
 
-pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
-    xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
-}
-
-/// Convenience helper for better error messages:
-pub fn find_xdg_file(
-    file_name: impl AsRef<std::path::Path>,
-    description: &'static str,
-) -> Result<Option<std::path::PathBuf>, Error> {
-    let file_name = file_name.as_ref();
-    base_directories()
-        .map(|base| base.find_config_file(file_name))
-        .with_context(|| format!("error searching for {}", description))
-}
-
-pub fn place_xdg_file(
-    file_name: impl AsRef<std::path::Path>,
-    description: &'static str,
-) -> Result<std::path::PathBuf, Error> {
-    let file_name = file_name.as_ref();
-    base_directories()
-        .and_then(|base| {
-            base.place_config_file(file_name).map_err(Error::from)
-        })
-        .with_context(|| format!("failed to place {} in xdg home", description))
-}
diff --git a/src/bin/proxmox_backup_client/mount.rs b/src/bin/proxmox_backup_client/mount.rs
index be6aca05..f3498e35 100644
--- a/src/bin/proxmox_backup_client/mount.rs
+++ b/src/bin/proxmox_backup_client/mount.rs
@@ -43,6 +43,8 @@ use crate::{
     BufferedDynamicReadAt,
 };
 
+use crate::proxmox_client_tools::key_source::get_encryption_key_password;
+
 #[sortable]
 const API_METHOD_MOUNT: ApiMethod = ApiMethod::new(
     &ApiHandler::Sync(&mount),
@@ -182,7 +184,7 @@ async fn mount_do(param: Value, pipe: Option<Fd>) -> Result<Value, Error> {
         None => None,
         Some(path) => {
             println!("Encryption key file: '{:?}'", path);
-            let (key, _, fingerprint) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
+            let (key, _, fingerprint) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
             println!("Encryption key fingerprint: '{}'", fingerprint);
             Some(Arc::new(CryptConfig::new(key)?))
         }
diff --git a/src/bin/proxmox_backup_client/snapshot.rs b/src/bin/proxmox_backup_client/snapshot.rs
index 5988ebf6..a98b1ca2 100644
--- a/src/bin/proxmox_backup_client/snapshot.rs
+++ b/src/bin/proxmox_backup_client/snapshot.rs
@@ -35,6 +35,8 @@ use crate::{
     record_repository,
 };
 
+use crate::proxmox_client_tools::key_source::get_encryption_key_password;
+
 #[api(
    input: {
         properties: {
@@ -239,7 +241,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
     let crypt_config = match crypto.enc_key {
         None => None,
         Some(key) => {
-            let (key, _created, _) = decrypt_key(&key.key, &crate::key::get_encryption_key_password)?;
+            let (key, _created, _) = decrypt_key(&key.key, &get_encryption_key_password)?;
             let crypt_config = CryptConfig::new(key)?;
             Some(Arc::new(crypt_config))
         }
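The core of the relocated `crypto_parameters` helper (new key_source.rs below) is a decision table over the explicit `--crypt-mode`, an explicit key, and the default key file. A condensed, self-contained sketch of that table, using booleans in place of real key material (simplified; master-key handling and key sources omitted, names are mine):

```rust
#[derive(Debug, PartialEq)]
enum Mode { None, Encrypt, SignOnly }

// mode: explicit --crypt-mode, if any
// key: an explicit --keyfile/--keyfd was given
// default_key: a default encryption key file exists
fn resolve(mode: Option<Mode>, key: bool, default_key: bool) -> Result<Mode, String> {
    match (mode, key) {
        // no --crypt-mode: encrypt if any key is available, else plain
        (None, true) => Ok(Mode::Encrypt),
        (None, false) if default_key => Ok(Mode::Encrypt),
        (None, false) => Ok(Mode::None),
        // --crypt-mode=none rejects an explicit key
        (Some(Mode::None), true) => {
            Err("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive".into())
        }
        (Some(Mode::None), false) => Ok(Mode::None),
        // encrypt/sign-only need some key: explicit wins, default is the fallback
        (Some(m), true) => Ok(m),
        (Some(m), false) if default_key => Ok(m),
        (Some(_), false) => {
            Err("--crypt-mode without --keyfile and no default key file available".into())
        }
    }
}
```

This mirrors the case split exercised by `test_crypto_parameters_handling`, minus the error paths for unreadable key files.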
diff --git a/src/bin/proxmox_client_tools/key_source.rs b/src/bin/proxmox_client_tools/key_source.rs
new file mode 100644
index 00000000..92132ba5
--- /dev/null
+++ b/src/bin/proxmox_client_tools/key_source.rs
@@ -0,0 +1,573 @@
+use std::convert::TryFrom;
+use std::path::PathBuf;
+use std::os::unix::io::{FromRawFd, RawFd};
+use std::io::Read;
+
+use anyhow::{bail, format_err, Error};
+use serde_json::Value;
+
+use proxmox::api::schema::*;
+use proxmox::sys::linux::tty;
+use proxmox::tools::fs::file_get_contents;
+
+use proxmox_backup::backup::CryptMode;
+
+pub const DEFAULT_ENCRYPTION_KEY_FILE_NAME: &str = "encryption-key.json";
+pub const DEFAULT_MASTER_PUBKEY_FILE_NAME: &str = "master-public.pem";
+
+pub const KEYFILE_SCHEMA: Schema =
+    StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
+        .schema();
+
+pub const KEYFD_SCHEMA: Schema =
+    IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
+        .minimum(0)
+        .schema();
+
+pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
+    "Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
+    .schema();
+
+pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
+    IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
+        .minimum(0)
+        .schema();
+
+#[derive(Clone, Debug, Eq, PartialEq)]
+pub enum KeySource {
+    DefaultKey,
+    Fd,
+    Path(String),
+}
+
+pub fn format_key_source(source: &KeySource, key_type: &str) -> String {
+    match source {
+        KeySource::DefaultKey => format!("Using default {} key..", key_type),
+        KeySource::Fd => format!("Using {} key from file descriptor..", key_type),
+        KeySource::Path(path) => format!("Using {} key from '{}'..", key_type, path),
+    }
+}
+
+#[derive(Clone, Debug, Eq, PartialEq)]
+pub struct KeyWithSource {
+    pub source: KeySource,
+    pub key: Vec<u8>,
+}
+
+impl KeyWithSource {
+    pub fn from_fd(key: Vec<u8>) -> Self {
+        Self {
+            source: KeySource::Fd,
+            key,
+        }
+    }
+
+    pub fn from_default(key: Vec<u8>) -> Self {
+        Self {
+            source: KeySource::DefaultKey,
+            key,
+        }
+    }
+
+    pub fn from_path(path: String, key: Vec<u8>) -> Self {
+        Self {
+            source: KeySource::Path(path),
+            key,
+        }
+    }
+}
+
+#[derive(Debug, Eq, PartialEq)]
+pub struct CryptoParams {
+    pub mode: CryptMode,
+    pub enc_key: Option<KeyWithSource>,
+    // FIXME switch to openssl::rsa::Rsa<openssl::pkey::Public> once that is Eq?
+    pub master_pubkey: Option<KeyWithSource>,
+}
+
+pub fn crypto_parameters(param: &Value) -> Result<CryptoParams, Error> {
+    let keyfile = match param.get("keyfile") {
+        Some(Value::String(keyfile)) => Some(keyfile),
+        Some(_) => bail!("bad --keyfile parameter type"),
+        None => None,
+    };
+
+    let key_fd = match param.get("keyfd") {
+        Some(Value::Number(key_fd)) => Some(
+            RawFd::try_from(key_fd
+                .as_i64()
+                .ok_or_else(|| format_err!("bad key fd: {:?}", key_fd))?
+            )
+            .map_err(|err| format_err!("bad key fd: {:?}: {}", key_fd, err))?
+        ),
+        Some(_) => bail!("bad --keyfd parameter type"),
+        None => None,
+    };
+
+    let master_pubkey_file = match param.get("master-pubkey-file") {
+        Some(Value::String(keyfile)) => Some(keyfile),
+        Some(_) => bail!("bad --master-pubkey-file parameter type"),
+        None => None,
+    };
+
+    let master_pubkey_fd = match param.get("master-pubkey-fd") {
+        Some(Value::Number(key_fd)) => Some(
+            RawFd::try_from(key_fd
+                .as_i64()
+                .ok_or_else(|| format_err!("bad master public key fd: {:?}", key_fd))?
+            )
+            .map_err(|err| format_err!("bad master public key fd: {:?}: {}", key_fd, err))?
+        ),
+        Some(_) => bail!("bad --master-pubkey-fd parameter type"),
+        None => None,
+    };
+
+    let mode: Option<CryptMode> = match param.get("crypt-mode") {
+        Some(mode) => Some(serde_json::from_value(mode.clone())?),
+        None => None,
+    };
+
+    let key = match (keyfile, key_fd) {
+        (None, None) => None,
+        (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
+        (Some(keyfile), None) => Some(KeyWithSource::from_path(
+            keyfile.clone(),
+            file_get_contents(keyfile)?,
+        )),
+        (None, Some(fd)) => {
+            let input = unsafe { std::fs::File::from_raw_fd(fd) };
+            let mut data = Vec::new();
+            let _len: usize = { input }.read_to_end(&mut data).map_err(|err| {
+                format_err!("error reading encryption key from fd {}: {}", fd, err)
+            })?;
+            Some(KeyWithSource::from_fd(data))
+        }
+    };
+
+    let master_pubkey = match (master_pubkey_file, master_pubkey_fd) {
+        (None, None) => None,
+        (Some(_), Some(_)) => bail!("--master-pubkey-file and --master-pubkey-fd are mutually exclusive"),
+        (Some(keyfile), None) => Some(KeyWithSource::from_path(
+            keyfile.clone(),
+            file_get_contents(keyfile)?,
+        )),
+        (None, Some(fd)) => {
+            let input = unsafe { std::fs::File::from_raw_fd(fd) };
+            let mut data = Vec::new();
+            let _len: usize = { input }
+                .read_to_end(&mut data)
+                .map_err(|err| format_err!("error reading master key from fd {}: {}", fd, err))?;
+            Some(KeyWithSource::from_fd(data))
+        }
+    };
+
+    let res = match mode {
+        // no crypt mode, enable encryption if keys are available
+        None => match (key, master_pubkey) {
+            // only default keys if available
+            (None, None) => match read_optional_default_encryption_key()? {
+                None => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
+                enc_key => {
+                    let master_pubkey = read_optional_default_master_pubkey()?;
+                    CryptoParams {
+                        mode: CryptMode::Encrypt,
+                        enc_key,
+                        master_pubkey,
+                    }
+                },
+            },
+
+            // explicit master key, default enc key needed
+            (None, master_pubkey) => match read_optional_default_encryption_key()? {
+                None => bail!("--master-pubkey-file/--master-pubkey-fd specified, but no key available"),
+                enc_key => {
+                    CryptoParams {
+                        mode: CryptMode::Encrypt,
+                        enc_key,
+                        master_pubkey,
+                    }
+                },
+            },
+
+            // explicit keyfile, maybe default master key
+            (enc_key, None) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey: read_optional_default_master_pubkey()? },
+
+            // explicit keyfile and master key
+            (enc_key, master_pubkey) => CryptoParams { mode: CryptMode::Encrypt, enc_key, master_pubkey },
+        },
+
+        // explicitly disabled encryption
+        Some(CryptMode::None) => match (key, master_pubkey) {
+            // no keys => OK, no encryption
+            (None, None) => CryptoParams { mode: CryptMode::None, enc_key: None, master_pubkey: None },
+
+            // --keyfile and --crypt-mode=none
+            (Some(_), _) => bail!("--keyfile/--keyfd and --crypt-mode=none are mutually exclusive"),
+
+            // --master-pubkey-file and --crypt-mode=none
+            (_, Some(_)) => bail!("--master-pubkey-file/--master-pubkey-fd and --crypt-mode=none are mutually exclusive"),
+        },
+
+        // explicitly enabled encryption
+        Some(mode) => match (key, master_pubkey) {
+            // no key, maybe master key
+            (None, master_pubkey) => match read_optional_default_encryption_key()? {
+                None => bail!("--crypt-mode without --keyfile and no default key file available"),
+                enc_key => {
+                    eprintln!("Encrypting with default encryption key!");
+                    let master_pubkey = match master_pubkey {
+                        None => read_optional_default_master_pubkey()?,
+                        master_pubkey => master_pubkey,
+                    };
+
+                    CryptoParams {
+                        mode,
+                        enc_key,
+                        master_pubkey,
+                    }
+                },
+            },
+
+            // --keyfile and --crypt-mode other than none
+            (enc_key, master_pubkey) => {
+                let master_pubkey = match master_pubkey {
+                    None => read_optional_default_master_pubkey()?,
+                    master_pubkey => master_pubkey,
+                };
+
+                CryptoParams { mode, enc_key, master_pubkey }
+            },
+        },
+    };
+
+    Ok(res)
+}
+
+pub fn find_default_master_pubkey() -> Result<Option<PathBuf>, Error> {
+    super::find_xdg_file(
+        DEFAULT_MASTER_PUBKEY_FILE_NAME,
+        "default master public key file",
+    )
+}
+
+pub fn place_default_master_pubkey() -> Result<PathBuf, Error> {
+    super::place_xdg_file(
+        DEFAULT_MASTER_PUBKEY_FILE_NAME,
+        "default master public key file",
+    )
+}
+
+pub fn find_default_encryption_key() -> Result<Option<PathBuf>, Error> {
+    super::find_xdg_file(
+        DEFAULT_ENCRYPTION_KEY_FILE_NAME,
+        "default encryption key file",
+    )
+}
+
+pub fn place_default_encryption_key() -> Result<PathBuf, Error> {
+    super::place_xdg_file(
+        DEFAULT_ENCRYPTION_KEY_FILE_NAME,
+        "default encryption key file",
+    )
+}
+
+#[cfg(not(test))]
+pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
+    find_default_encryption_key()?
+        .map(|path| file_get_contents(path).map(KeyWithSource::from_default))
+        .transpose()
+}
+
+#[cfg(not(test))]
+pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
+    find_default_master_pubkey()?
+        .map(|path| file_get_contents(path).map(KeyWithSource::from_default))
+        .transpose()
+}
+
+#[cfg(test)]
+static mut TEST_DEFAULT_ENCRYPTION_KEY: Result<Option<Vec<u8>>, Error> = Ok(None);
+
+#[cfg(test)]
+pub(crate) fn read_optional_default_encryption_key() -> Result<Option<KeyWithSource>, Error> {
+    // not safe when multiple concurrent test cases end up here!
+    unsafe {
+        match &TEST_DEFAULT_ENCRYPTION_KEY {
+            Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
+            Ok(None) => Ok(None),
+            Err(_) => bail!("test error"),
+        }
+    }
+}
+
+#[cfg(test)]
+// not safe when multiple concurrent test cases end up here!
+pub(crate) unsafe fn set_test_encryption_key(value: Result<Option<Vec<u8>>, Error>) {
+    TEST_DEFAULT_ENCRYPTION_KEY = value;
+}
+
+#[cfg(test)]
+static mut TEST_DEFAULT_MASTER_PUBKEY: Result<Option<Vec<u8>>, Error> = Ok(None);
+
+#[cfg(test)]
+pub(crate) fn read_optional_default_master_pubkey() -> Result<Option<KeyWithSource>, Error> {
+    // not safe when multiple concurrent test cases end up here!
+    unsafe {
+        match &TEST_DEFAULT_MASTER_PUBKEY {
+            Ok(Some(key)) => Ok(Some(KeyWithSource::from_default(key.clone()))),
+            Ok(None) => Ok(None),
+            Err(_) => bail!("test error"),
+        }
+    }
+}
+
+#[cfg(test)]
+// not safe when multiple concurrent test cases end up here!
+pub(crate) unsafe fn set_test_default_master_pubkey(value: Result<Option<Vec<u8>>, Error>) {
+    TEST_DEFAULT_MASTER_PUBKEY = value;
+}
+
+pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
+    // fixme: implement other input methods
+
+    use std::env::VarError::*;
+    match std::env::var("PBS_ENCRYPTION_PASSWORD") {
+        Ok(p) => return Ok(p.as_bytes().to_vec()),
+        Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
+        Err(NotPresent) => {
+            // Try another method
+        }
+    }
+
+    // If we're on a TTY, query the user for a password
+    if tty::stdin_isatty() {
+        return Ok(tty::read_password("Encryption Key Password: ")?);
+    }
+
+    bail!("no password input mechanism available");
+}
+
+#[test]
+// WARNING: there must only be one test for crypto_parameters as the default key handling is not
+// safe w.r.t. concurrency
+fn test_crypto_parameters_handling() -> Result<(), Error> {
+    use serde_json::json;
+    use proxmox::tools::fs::{replace_file, CreateOptions};
+
+    let some_key = vec![1;1];
+    let default_key = vec![2;1];
+
+    let some_master_key = vec![3;1];
+    let default_master_key = vec![4;1];
+
+    let keypath = "./target/testout/keyfile.test";
+    let master_keypath = "./target/testout/masterkeyfile.test";
+    let invalid_keypath = "./target/testout/invalid_keyfile.test";
+
+    let no_key_res = CryptoParams {
+        enc_key: None,
+        master_pubkey: None,
+        mode: CryptMode::None,
+    };
+    let some_key_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_path(
+            keypath.to_string(),
+            some_key.clone(),
+        )),
+        master_pubkey: None,
+        mode: CryptMode::Encrypt,
+    };
+    let some_key_some_master_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_path(
+            keypath.to_string(),
+            some_key.clone(),
+        )),
+        master_pubkey: Some(KeyWithSource::from_path(
+            master_keypath.to_string(),
+            some_master_key.clone(),
+        )),
+        mode: CryptMode::Encrypt,
+    };
+    let some_key_default_master_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_path(
+            keypath.to_string(),
+            some_key.clone(),
+        )),
+        master_pubkey: Some(KeyWithSource::from_default(default_master_key.clone())),
+        mode: CryptMode::Encrypt,
+    };
+
+    let some_key_sign_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_path(
+            keypath.to_string(),
+            some_key.clone(),
+        )),
+        master_pubkey: None,
+        mode: CryptMode::SignOnly,
+    };
+    let default_key_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_default(default_key.clone())),
+        master_pubkey: None,
+        mode: CryptMode::Encrypt,
+    };
+    let default_key_sign_res = CryptoParams {
+        enc_key: Some(KeyWithSource::from_default(default_key.clone())),
+        master_pubkey: None,
+        mode: CryptMode::SignOnly,
+    };
+
+    replace_file(&keypath, &some_key, CreateOptions::default())?;
+    replace_file(&master_keypath, &some_master_key, CreateOptions::default())?;
+
+    // no params, no default key == no key
+    let res = crypto_parameters(&json!({}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // keyfile param == key from keyfile
+    let res = crypto_parameters(&json!({"keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // crypt mode none == no key
+    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // crypt mode encrypt/sign-only, no keyfile, no default key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
+
+    // crypt mode none with explicit key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
+
+    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_sign_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // invalid keyfile parameter always errors
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
+
+    // now set a default key
+    unsafe { set_test_encryption_key(Ok(Some(default_key.clone()))); }
+
+    // and repeat
+
+    // no params but default key == default key
+    let res = crypto_parameters(&json!({}));
+    assert_eq!(res.unwrap(), default_key_res);
+
+    // keyfile param == key from keyfile
+    let res = crypto_parameters(&json!({"keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // crypt mode none == no key
+    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // crypt mode encrypt/sign-only, no keyfile, default key == default key with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "sign-only"}));
+    assert_eq!(res.unwrap(), default_key_sign_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt"}));
+    assert_eq!(res.unwrap(), default_key_res);
+
+    // crypt mode none with explicit key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
+
+    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_sign_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // invalid keyfile parameter always errors
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
+
+    // now make default key retrieval error
+    unsafe { set_test_encryption_key(Err(format_err!("test error"))); }
+
+    // and repeat
+
+    // no params, default key retrieval errors == Error
+    assert!(crypto_parameters(&json!({})).is_err());
+
+    // keyfile param == key from keyfile
+    let res = crypto_parameters(&json!({"keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // crypt mode none == no key
+    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // crypt mode encrypt/sign-only, no keyfile, default key error == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
+
+    // crypt mode none with explicit key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
+
+    // crypt mode sign-only/encrypt with keyfile == key from keyfile with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "sign-only", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_sign_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_res);
+
+    // invalid keyfile parameter always errors
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "none"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": invalid_keypath, "crypt-mode": "encrypt"})).is_err());
+
+    // now remove default key again
+    unsafe { set_test_encryption_key(Ok(None)); }
+    // set a default master key
+    unsafe { set_test_default_master_pubkey(Ok(Some(default_master_key.clone()))); }
+
+    // and use an explicit master key
+    assert!(crypto_parameters(&json!({"master-pubkey-file": master_keypath})).is_err());
+    // just a default == no key
+    let res = crypto_parameters(&json!({}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // keyfile param == key from keyfile
+    let res = crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": master_keypath}));
+    assert_eq!(res.unwrap(), some_key_some_master_res);
+    // same with fallback to default master key
+    let res = crypto_parameters(&json!({"keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_default_master_res);
+
+    // crypt mode none == error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "master-pubkey-file": master_keypath})).is_err());
+    // with just default master key == no key
+    let res = crypto_parameters(&json!({"crypt-mode": "none"}));
+    assert_eq!(res.unwrap(), no_key_res);
+
+    // crypt mode encrypt without enc key == error
+    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt", "master-pubkey-file": master_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"crypt-mode": "encrypt"})).is_err());
+
+    // crypt mode none with explicit key == Error
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath, "master-pubkey-file": master_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"crypt-mode": "none", "keyfile": keypath})).is_err());
+
+    // crypt mode encrypt with keyfile == key from keyfile with correct mode
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath, "master-pubkey-file": master_keypath}));
+    assert_eq!(res.unwrap(), some_key_some_master_res);
+    let res = crypto_parameters(&json!({"crypt-mode": "encrypt", "keyfile": keypath}));
+    assert_eq!(res.unwrap(), some_key_default_master_res);
+
+    // invalid master keyfile parameter always errors when a key is passed, even with a valid
+    // default master key
+    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "none"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "sign-only"})).is_err());
+    assert!(crypto_parameters(&json!({"keyfile": keypath, "master-pubkey-file": invalid_keypath,"crypt-mode": "encrypt"})).is_err());
+
+    Ok(())
+}
+
diff --git a/src/bin/proxmox_client_tools/mod.rs b/src/bin/proxmox_client_tools/mod.rs
index 7b69e8cb..73744ba2 100644
--- a/src/bin/proxmox_client_tools/mod.rs
+++ b/src/bin/proxmox_client_tools/mod.rs
@@ -1,8 +1,7 @@
 //! Shared tools useful for common CLI clients.
-
 use std::collections::HashMap;
 
-use anyhow::{bail, format_err, Error};
+use anyhow::{bail, format_err, Context, Error};
 use serde_json::{json, Value};
 use xdg::BaseDirectories;
 
@@ -17,6 +16,8 @@ use proxmox_backup::backup::BackupDir;
 use proxmox_backup::client::*;
 use proxmox_backup::tools;
 
+pub mod key_source;
+
 const ENV_VAR_PBS_FINGERPRINT: &str = "PBS_FINGERPRINT";
 const ENV_VAR_PBS_PASSWORD: &str = "PBS_PASSWORD";
 
@@ -25,24 +26,6 @@ pub const REPO_URL_SCHEMA: Schema = StringSchema::new("Repository URL.")
     .max_length(256)
     .schema();
 
-pub const KEYFILE_SCHEMA: Schema =
-    StringSchema::new("Path to encryption key. All data will be encrypted using this key.")
-        .schema();
-
-pub const KEYFD_SCHEMA: Schema =
-    IntegerSchema::new("Pass an encryption key via an already opened file descriptor.")
-        .minimum(0)
-        .schema();
-
-pub const MASTER_PUBKEY_FILE_SCHEMA: Schema = StringSchema::new(
-    "Path to master public key. The encryption key used for a backup will be encrypted using this key and appended to the backup.")
-    .schema();
-
-pub const MASTER_PUBKEY_FD_SCHEMA: Schema =
-    IntegerSchema::new("Pass a master public key via an already opened file descriptor.")
-        .minimum(0)
-        .schema();
-
 pub const CHUNK_SIZE_SCHEMA: Schema = IntegerSchema::new("Chunk size in KB. Must be a power of 2.")
     .minimum(64)
     .maximum(4096)
@@ -364,3 +347,28 @@ pub fn complete_backup_source(arg: &str, param: &HashMap<String, String>) -> Vec
 
     result
 }
+
+pub fn base_directories() -> Result<xdg::BaseDirectories, Error> {
+    xdg::BaseDirectories::with_prefix("proxmox-backup").map_err(Error::from)
+}
+
+/// Convenience helper for better error messages:
+pub fn find_xdg_file(
+    file_name: impl AsRef<std::path::Path>,
+    description: &'static str,
+) -> Result<Option<std::path::PathBuf>, Error> {
+    let file_name = file_name.as_ref();
+    base_directories()
+        .map(|base| base.find_config_file(file_name))
+        .with_context(|| format!("error searching for {}", description))
+}
+
+pub fn place_xdg_file(
+    file_name: impl AsRef<std::path::Path>,
+    description: &'static str,
+) -> Result<std::path::PathBuf, Error> {
+    let file_name = file_name.as_ref();
+    base_directories()
+        .and_then(|base| base.place_config_file(file_name).map_err(Error::from))
+        .with_context(|| format!("failed to place {} in xdg home", description))
+}
-- 
2.20.1
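
The two xdg helpers added above (`find_xdg_file` / `place_xdg_file`) wrap the `xdg` crate's `BaseDirectories`. The find-then-place pattern they implement can be sketched with the standard library only; this is a hypothetical, std-only illustration (function names and the hard-coded `proxmox-backup` prefix are assumptions, not the crate-backed implementation from the patch):

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Look for a config file under <base>/proxmox-backup, returning None
/// when it does not exist yet (mirrors "find": a missing file is not
/// an error, only a lookup miss). `base` stands in for $XDG_CONFIG_HOME.
fn find_config_file(base: &Path, file_name: &str) -> Option<PathBuf> {
    let path = base.join("proxmox-backup").join(file_name);
    if path.exists() {
        Some(path)
    } else {
        None
    }
}

/// "Place" a config file: ensure the parent directory exists and return
/// the full path where the caller may now create the file.
fn place_config_file(base: &Path, file_name: &str) -> std::io::Result<PathBuf> {
    let dir = base.join("proxmox-backup");
    fs::create_dir_all(&dir)?;
    Ok(dir.join(file_name))
}
```

The split into a fallible "place" (directory creation can fail) and an infallible-but-optional "find" matches how the patch distinguishes `Result<PathBuf, Error>` from `Result<Option<PathBuf>, Error>`.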

* [pbs-devel] [PATCH v3 proxmox-backup 06/20] file-restore: add binary and basic commands
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (4 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 05/20] proxmox_client_tools: move common key related functions to key_source.rs Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 07/20] file-restore: allow specifying output-format Stefan Reiter
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

From: Dominik Csapak <d.csapak@proxmox.com>

For now it only supports 'list' and 'extract' commands for 'pxar.didx'
files. This should be the foundation for a general file-restore
interface that is shared with block-level snapshots.

This is packaged as a separate .deb file, since for block-level restore
it will need to depend on pve-qemu-kvm, which we want to keep separate
from proxmox-backup-client.

[original code for proxmox-file-restore.rs]
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

[code cleanups/clippy, use helpers::list_dir_content/ArchiveEntry, no
/block subdir for .fidx files, separate binary and package]
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
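
The path convention the new binary uses (a bare "/" lists the archives, the first path component selects an archive, and the remainder addresses a file inside it) can be sketched as a small std-only function. `split_restore_path` is a hypothetical name mirroring the behavior of `parse_path` in the diff below, without the base64 and byte-path handling:

```rust
/// Split a restore path into (archive, path-inside-archive).
/// Returns (None, "") for the archive listing, errors for
/// anything that is not a supported .pxar.didx archive.
fn split_restore_path(path: &str) -> Result<(Option<String>, String), String> {
    // strip any number of leading slashes
    let trimmed = path.trim_start_matches('/');
    if trimmed.is_empty() {
        return Ok((None, String::new())); // "/" => list archives
    }

    // first component is the archive name, rest (slash included) is
    // the path inside the archive
    let (file, inner) = match trimmed.find('/') {
        Some(pos) => (&trimmed[..pos], &trimmed[pos..]),
        None => (trimmed, ""),
    };

    if file.ends_with(".pxar.didx") {
        Ok((Some(file.to_string()), inner.to_string()))
    } else {
        Err(format!("'{}' is not supported for file-restore", file))
    }
}
```

As in the patch, the inner path keeps its leading slash, and unsupported archive types (e.g. .img.fidx, added in a later patch) are rejected at parse time.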
---

v2:
* update debian/* with 'proxmox-backup-restore-image' naming

 Cargo.toml                                  |   2 +-
 Makefile                                    |   9 +-
 debian/control                              |  12 +
 debian/control.in                           |  11 +
 debian/proxmox-file-restore.bash-completion |   1 +
 debian/proxmox-file-restore.bc              |   8 +
 debian/proxmox-file-restore.install         |   3 +
 debian/rules                                |   7 +-
 docs/Makefile                               |  10 +-
 docs/command-line-tools.rst                 |   5 +
 docs/proxmox-file-restore/description.rst   |   3 +
 docs/proxmox-file-restore/man1.rst          |  28 ++
 src/api2.rs                                 |   2 +-
 src/bin/proxmox-file-restore.rs             | 350 ++++++++++++++++++++
 zsh-completions/_proxmox-file-restore       |  13 +
 15 files changed, 457 insertions(+), 7 deletions(-)
 create mode 100644 debian/proxmox-file-restore.bash-completion
 create mode 100644 debian/proxmox-file-restore.bc
 create mode 100644 debian/proxmox-file-restore.install
 create mode 100644 docs/proxmox-file-restore/description.rst
 create mode 100644 docs/proxmox-file-restore/man1.rst
 create mode 100644 src/bin/proxmox-file-restore.rs
 create mode 100644 zsh-completions/_proxmox-file-restore

diff --git a/Cargo.toml b/Cargo.toml
index 244040ad..4e4d18b4 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -60,7 +60,7 @@ serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
 siphasher = "0.3"
 syslog = "4.0"
-tokio = { version = "1.0", features = [ "fs", "io-util", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
+tokio = { version = "1.0", features = [ "fs", "io-util", "io-std", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
 tokio-openssl = "0.6.1"
 tokio-stream = "0.1.0"
 tokio-util = { version = "0.6", features = [ "codec" ] }
diff --git a/Makefile b/Makefile
index bf41c372..ec52d88f 100644
--- a/Makefile
+++ b/Makefile
@@ -9,6 +9,7 @@ SUBDIRS := etc www docs
 # Binaries usable by users
 USR_BIN := \
 	proxmox-backup-client 	\
+	proxmox-file-restore	\
 	pxar			\
 	proxmox-tape		\
 	pmtx			\
@@ -47,9 +48,12 @@ SERVER_DEB=${PACKAGE}-server_${DEB_VERSION}_${ARCH}.deb
 SERVER_DBG_DEB=${PACKAGE}-server-dbgsym_${DEB_VERSION}_${ARCH}.deb
 CLIENT_DEB=${PACKAGE}-client_${DEB_VERSION}_${ARCH}.deb
 CLIENT_DBG_DEB=${PACKAGE}-client-dbgsym_${DEB_VERSION}_${ARCH}.deb
+RESTORE_DEB=proxmox-file-restore_${DEB_VERSION}_${ARCH}.deb
+RESTORE_DBG_DEB=proxmox-file-restore-dbgsym_${DEB_VERSION}_${ARCH}.deb
 DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb
 
-DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB}
+DEBS=${SERVER_DEB} ${SERVER_DBG_DEB} ${CLIENT_DEB} ${CLIENT_DBG_DEB} \
+     ${RESTORE_DEB} ${RESTORE_DBG_DEB}
 
 DSC = rust-${PACKAGE}_${DEB_VERSION}.dsc
 
@@ -152,8 +156,9 @@ install: $(COMPILED_BINS)
 	$(MAKE) -C docs install
 
 .PHONY: upload
-upload: ${SERVER_DEB} ${CLIENT_DEB} ${DOC_DEB}
+upload: ${SERVER_DEB} ${CLIENT_DEB} ${RESTORE_DEB} ${DOC_DEB}
 	# check if working directory is clean
 	git diff --exit-code --stat && git diff --exit-code --stat --staged
 	tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
 	tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve,pmg" --dist buster
+	tar cf - ${RESTORE_DEB} ${RESTORE_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve,pmg" --dist buster
diff --git a/debian/control b/debian/control
index b58f701c..e4e64372 100644
--- a/debian/control
+++ b/debian/control
@@ -52,6 +52,7 @@ Build-Depends: debhelper (>= 11),
  librust-syslog-4+default-dev,
  librust-tokio-1+default-dev,
  librust-tokio-1+fs-dev,
+ librust-tokio-1+io-std-dev,
  librust-tokio-1+io-util-dev,
  librust-tokio-1+macros-dev,
  librust-tokio-1+net-dev,
@@ -146,3 +147,14 @@ Depends: libjs-extjs,
 Architecture: all
 Description: Proxmox Backup Documentation
  This package contains the Proxmox Backup Documentation files.
+
+Package: proxmox-file-restore
+Architecture: any
+Depends: ${misc:Depends},
+         ${shlibs:Depends},
+         proxmox-backup-restore-image,
+Recommends: pve-qemu-kvm (>= 5.0.0-9),
+Description: PBS single file restore for pxar and block device backups
+ This package contains the Proxmox Backup single file restore client for
+ restoring individual files and folders from both host/container and VM/block
+ device backups. It includes a block device restore driver using QEMU.
diff --git a/debian/control.in b/debian/control.in
index c6aee8ca..5d344664 100644
--- a/debian/control.in
+++ b/debian/control.in
@@ -43,3 +43,14 @@ Depends: libjs-extjs,
 Architecture: all
 Description: Proxmox Backup Documentation
  This package contains the Proxmox Backup Documentation files.
+
+Package: proxmox-file-restore
+Architecture: any
+Depends: ${misc:Depends},
+         ${shlibs:Depends},
+         proxmox-backup-restore-image,
+Recommends: pve-qemu-kvm (>= 5.0.0-9),
+Description: PBS single file restore for pxar and block device backups
+ This package contains the Proxmox Backup single file restore client for
+ restoring individual files and folders from both host/container and VM/block
+ device backups. It includes a block device restore driver using QEMU.
diff --git a/debian/proxmox-file-restore.bash-completion b/debian/proxmox-file-restore.bash-completion
new file mode 100644
index 00000000..7160209c
--- /dev/null
+++ b/debian/proxmox-file-restore.bash-completion
@@ -0,0 +1 @@
+debian/proxmox-file-restore.bc proxmox-file-restore
diff --git a/debian/proxmox-file-restore.bc b/debian/proxmox-file-restore.bc
new file mode 100644
index 00000000..646ebdd2
--- /dev/null
+++ b/debian/proxmox-file-restore.bc
@@ -0,0 +1,8 @@
+# proxmox-file-restore bash completion
+
+# see http://tiswww.case.edu/php/chet/bash/FAQ
+# and __ltrim_colon_completions() in /usr/share/bash-completion/bash_completion
+# this modifies a global var, but I found no better way
+COMP_WORDBREAKS=${COMP_WORDBREAKS//:}
+
+complete -C 'proxmox-file-restore bashcomplete' proxmox-file-restore
diff --git a/debian/proxmox-file-restore.install b/debian/proxmox-file-restore.install
new file mode 100644
index 00000000..2082e46b
--- /dev/null
+++ b/debian/proxmox-file-restore.install
@@ -0,0 +1,3 @@
+usr/bin/proxmox-file-restore
+usr/share/man/man1/proxmox-file-restore.1
+usr/share/zsh/vendor-completions/_proxmox-file-restore
diff --git a/debian/rules b/debian/rules
index 22671c0a..ce2db72e 100755
--- a/debian/rules
+++ b/debian/rules
@@ -52,8 +52,11 @@ override_dh_dwz:
 
 override_dh_strip:
 	dh_strip
-	for exe in $$(find debian/proxmox-backup-client/usr \
-	  debian/proxmox-backup-server/usr -executable -type f); do \
+	for exe in $$(find \
+	    debian/proxmox-backup-client/usr \
+	    debian/proxmox-backup-server/usr \
+	    debian/proxmox-file-restore/usr \
+	    -executable -type f); do \
 	  debian/scripts/elf-strip-unused-dependencies.sh "$$exe" || true; \
 	done
 
diff --git a/docs/Makefile b/docs/Makefile
index 05352b48..85d44ee4 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -5,6 +5,7 @@ GENERATED_SYNOPSIS := 						\
 	proxmox-backup-client/synopsis.rst			\
 	proxmox-backup-client/catalog-shell-synopsis.rst 	\
 	proxmox-backup-manager/synopsis.rst			\
+	proxmox-file-restore/synopsis.rst			\
 	pxar/synopsis.rst					\
 	pmtx/synopsis.rst					\
 	pmt/synopsis.rst					\
@@ -25,7 +26,8 @@ MAN1_PAGES := 				\
 	proxmox-tape.1			\
 	proxmox-backup-proxy.1		\
 	proxmox-backup-client.1		\
-	proxmox-backup-manager.1
+	proxmox-backup-manager.1	\
+	proxmox-file-restore.1
 
 MAN5_PAGES :=				\
 	media-pool.cfg.5		\
@@ -179,6 +181,12 @@ proxmox-backup-manager.1: proxmox-backup-manager/man1.rst  proxmox-backup-manage
 proxmox-backup-proxy.1: proxmox-backup-proxy/man1.rst  proxmox-backup-proxy/description.rst
 	rst2man $< >$@
 
+proxmox-file-restore/synopsis.rst: ${COMPILEDIR}/proxmox-file-restore
+	${COMPILEDIR}/proxmox-file-restore printdoc > proxmox-file-restore/synopsis.rst
+
+proxmox-file-restore.1: proxmox-file-restore/man1.rst  proxmox-file-restore/description.rst proxmox-file-restore/synopsis.rst
+	rst2man $< >$@
+
 .PHONY: onlinehelpinfo
 onlinehelpinfo:
 	@echo "Generating OnlineHelpInfo.js..."
diff --git a/docs/command-line-tools.rst b/docs/command-line-tools.rst
index 9b0a1290..bf3a92cc 100644
--- a/docs/command-line-tools.rst
+++ b/docs/command-line-tools.rst
@@ -6,6 +6,11 @@ Command Line Tools
 
 .. include:: proxmox-backup-client/description.rst
 
+``proxmox-file-restore``
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. include:: proxmox-file-restore/description.rst
+
 ``proxmox-backup-manager``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/docs/proxmox-file-restore/description.rst b/docs/proxmox-file-restore/description.rst
new file mode 100644
index 00000000..605dd12c
--- /dev/null
+++ b/docs/proxmox-file-restore/description.rst
@@ -0,0 +1,3 @@
+Command line tool for restoring files and directories from PBS archives. In contrast to
+proxmox-backup-client, this supports both container/host and VM backups.
+
diff --git a/docs/proxmox-file-restore/man1.rst b/docs/proxmox-file-restore/man1.rst
new file mode 100644
index 00000000..fe3625b1
--- /dev/null
+++ b/docs/proxmox-file-restore/man1.rst
@@ -0,0 +1,28 @@
+==========================
+proxmox-file-restore
+==========================
+
+.. include:: ../epilog.rst
+
+-----------------------------------------------------------------------
+Command line tool for restoring files and directories from PBS archives
+-----------------------------------------------------------------------
+
+:Author: |AUTHOR|
+:Version: Version |VERSION|
+:Manual section: 1
+
+
+Synopsis
+==========
+
+.. include:: synopsis.rst
+
+
+Description
+============
+
+.. include:: description.rst
+
+
+.. include:: ../pbs-copyright.rst
diff --git a/src/api2.rs b/src/api2.rs
index b7230f75..132e2c2a 100644
--- a/src/api2.rs
+++ b/src/api2.rs
@@ -12,7 +12,7 @@ pub mod version;
 pub mod ping;
 pub mod pull;
 pub mod tape;
-mod helpers;
+pub mod helpers;
 
 use proxmox::api::router::SubdirMap;
 use proxmox::api::Router;
diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
new file mode 100644
index 00000000..3cd0c73f
--- /dev/null
+++ b/src/bin/proxmox-file-restore.rs
@@ -0,0 +1,350 @@
+use std::ffi::OsStr;
+use std::os::unix::ffi::OsStrExt;
+use std::path::PathBuf;
+use std::sync::Arc;
+
+use anyhow::{bail, format_err, Error};
+use serde_json::Value;
+
+use proxmox::api::{
+    api,
+    cli::{run_cli_command, CliCommand, CliCommandMap, CliEnvironment},
+};
+use pxar::accessor::aio::Accessor;
+
+use proxmox_backup::api2::{helpers, types::ArchiveEntry};
+use proxmox_backup::backup::{
+    decrypt_key, BackupDir, BufferedDynamicReader, CatalogReader, CryptConfig, CryptMode,
+    DirEntryAttribute, IndexFile, LocalDynamicReadAt, CATALOG_NAME,
+};
+use proxmox_backup::client::{BackupReader, RemoteChunkReader};
+use proxmox_backup::pxar::{create_zip, extract_sub_dir};
+use proxmox_backup::tools;
+
+// use "pub" so rust doesn't complain about "unused" functions in the module
+pub mod proxmox_client_tools;
+use proxmox_client_tools::{
+    complete_group_or_snapshot, complete_repository, connect, extract_repository_from_value,
+    key_source::{
+        crypto_parameters, format_key_source, get_encryption_key_password, KEYFD_SCHEMA,
+        KEYFILE_SCHEMA,
+    },
+    REPO_URL_SCHEMA,
+};
+
+enum ExtractPath {
+    ListArchives,
+    Pxar(String, Vec<u8>),
+}
+
+fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
+    let mut bytes = if base64 {
+        base64::decode(path)?
+    } else {
+        path.into_bytes()
+    };
+
+    if bytes == b"/" {
+        return Ok(ExtractPath::ListArchives);
+    }
+
+    while !bytes.is_empty() && bytes[0] == b'/' {
+        bytes.remove(0);
+    }
+
+    let (file, path) = {
+        let slash_pos = bytes.iter().position(|c| *c == b'/').unwrap_or(bytes.len());
+        let path = bytes.split_off(slash_pos);
+        let file = String::from_utf8(bytes)?;
+        (file, path)
+    };
+
+    if file.ends_with(".pxar.didx") {
+        Ok(ExtractPath::Pxar(file, path))
+    } else {
+        bail!("'{}' is not supported for file-restore", file);
+    }
+}
+
+#[api(
+   input: {
+       properties: {
+           repository: {
+               schema: REPO_URL_SCHEMA,
+               optional: true,
+           },
+           snapshot: {
+               type: String,
+               description: "Group/Snapshot path.",
+           },
+           "path": {
+               description: "Path to restore. Directories will be restored as .zip files.",
+               type: String,
+           },
+           "base64": {
+               type: Boolean,
+               description: "If set, 'path' will be interpreted as base64 encoded.",
+               optional: true,
+               default: false,
+           },
+           keyfile: {
+               schema: KEYFILE_SCHEMA,
+               optional: true,
+           },
+           "keyfd": {
+               schema: KEYFD_SCHEMA,
+               optional: true,
+           },
+           "crypt-mode": {
+               type: CryptMode,
+               optional: true,
+           },
+       }
+   }
+)]
+/// List a directory from a backup snapshot.
+async fn list(
+    snapshot: String,
+    path: String,
+    base64: bool,
+    param: Value,
+) -> Result<Vec<ArchiveEntry>, Error> {
+    let repo = extract_repository_from_value(&param)?;
+    let snapshot: BackupDir = snapshot.parse()?;
+    let path = parse_path(path, base64)?;
+
+    let crypto = crypto_parameters(&param)?;
+    let crypt_config = match crypto.enc_key {
+        None => None,
+        Some(ref key) => {
+            let (key, _, _) =
+                decrypt_key(&key.key, &get_encryption_key_password).map_err(|err| {
+                    eprintln!("{}", format_key_source(&key.source, "encryption"));
+                    err
+                })?;
+            Some(Arc::new(CryptConfig::new(key)?))
+        }
+    };
+
+    let client = connect(&repo)?;
+    let client = BackupReader::start(
+        client,
+        crypt_config.clone(),
+        repo.store(),
+        &snapshot.group().backup_type(),
+        &snapshot.group().backup_id(),
+        snapshot.backup_time(),
+        true,
+    )
+    .await?;
+
+    let (manifest, _) = client.download_manifest().await?;
+    manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
+
+    match path {
+        ExtractPath::ListArchives => {
+            let mut entries = vec![];
+            for file in manifest.files() {
+                match file.filename.rsplitn(2, '.').next().unwrap() {
+                    "didx" => {}
+                    "fidx" => {}
+                    _ => continue, // ignore all non-fidx/didx files
+                }
+                let path = format!("/{}", file.filename);
+                let attr = DirEntryAttribute::Directory { start: 0 };
+                entries.push(ArchiveEntry::new(path.as_bytes(), &attr));
+            }
+
+            Ok(entries)
+        }
+        ExtractPath::Pxar(file, mut path) => {
+            let index = client
+                .download_dynamic_index(&manifest, CATALOG_NAME)
+                .await?;
+            let most_used = index.find_most_used_chunks(8);
+            let file_info = manifest.lookup_file_info(&CATALOG_NAME)?;
+            let chunk_reader = RemoteChunkReader::new(
+                client.clone(),
+                crypt_config,
+                file_info.chunk_crypt_mode(),
+                most_used,
+            );
+            let reader = BufferedDynamicReader::new(index, chunk_reader);
+            let mut catalog_reader = CatalogReader::new(reader);
+
+            let mut fullpath = file.into_bytes();
+            fullpath.append(&mut path);
+
+            helpers::list_dir_content(&mut catalog_reader, &fullpath)
+        }
+    }
+}
+
+#[api(
+   input: {
+       properties: {
+           repository: {
+               schema: REPO_URL_SCHEMA,
+               optional: true,
+           },
+           snapshot: {
+               type: String,
+               description: "Group/Snapshot path.",
+           },
+           "path": {
+               description: "Path to restore. Directories will be restored as .zip files if extracted to stdout.",
+               type: String,
+           },
+           "base64": {
+               type: Boolean,
+               description: "If set, 'path' will be interpreted as base64 encoded.",
+               optional: true,
+               default: false,
+           },
+           target: {
+               type: String,
+               optional: true,
+               description: "Target directory path. Use '-' to write to standard output.",
+           },
+           keyfile: {
+               schema: KEYFILE_SCHEMA,
+               optional: true,
+           },
+           "keyfd": {
+               schema: KEYFD_SCHEMA,
+               optional: true,
+           },
+           "crypt-mode": {
+               type: CryptMode,
+               optional: true,
+           },
+           verbose: {
+               type: Boolean,
+               description: "Print verbose information",
+               optional: true,
+               default: false,
+           }
+       }
+   }
+)]
+/// Restore files from a backup snapshot.
+async fn extract(
+    snapshot: String,
+    path: String,
+    base64: bool,
+    target: Option<String>,
+    verbose: bool,
+    param: Value,
+) -> Result<(), Error> {
+    let repo = extract_repository_from_value(&param)?;
+    let snapshot: BackupDir = snapshot.parse()?;
+    let orig_path = path;
+    let path = parse_path(orig_path.clone(), base64)?;
+
+    let target = match target {
+        Some(target) if target == "-" => None,
+        Some(target) => Some(PathBuf::from(target)),
+        None => Some(std::env::current_dir()?),
+    };
+
+    let crypto = crypto_parameters(&param)?;
+    let crypt_config = match crypto.enc_key {
+        None => None,
+        Some(ref key) => {
+            let (key, _, _) =
+                decrypt_key(&key.key, &get_encryption_key_password).map_err(|err| {
+                    eprintln!("{}", format_key_source(&key.source, "encryption"));
+                    err
+                })?;
+            Some(Arc::new(CryptConfig::new(key)?))
+        }
+    };
+
+    match path {
+        ExtractPath::Pxar(archive_name, path) => {
+            let client = connect(&repo)?;
+            let client = BackupReader::start(
+                client,
+                crypt_config.clone(),
+                repo.store(),
+                &snapshot.group().backup_type(),
+                &snapshot.group().backup_id(),
+                snapshot.backup_time(),
+                true,
+            )
+            .await?;
+            let (manifest, _) = client.download_manifest().await?;
+            let file_info = manifest.lookup_file_info(&archive_name)?;
+            let index = client
+                .download_dynamic_index(&manifest, &archive_name)
+                .await?;
+            let most_used = index.find_most_used_chunks(8);
+            let chunk_reader = RemoteChunkReader::new(
+                client.clone(),
+                crypt_config,
+                file_info.chunk_crypt_mode(),
+                most_used,
+            );
+            let reader = BufferedDynamicReader::new(index, chunk_reader);
+
+            let archive_size = reader.archive_size();
+            let reader = LocalDynamicReadAt::new(reader);
+            let decoder = Accessor::new(reader, archive_size).await?;
+
+            let root = decoder.open_root().await?;
+            let file = root
+                .lookup(OsStr::from_bytes(&path))
+                .await?
+                .ok_or(format_err!("error opening '{:?}'", path))?;
+
+            if let Some(target) = target {
+                extract_sub_dir(target, decoder, OsStr::from_bytes(&path), verbose).await?;
+            } else {
+                match file.kind() {
+                    pxar::EntryKind::File { .. } => {
+                        tokio::io::copy(&mut file.contents().await?, &mut tokio::io::stdout())
+                            .await?;
+                    }
+                    _ => {
+                        create_zip(
+                            tokio::io::stdout(),
+                            decoder,
+                            OsStr::from_bytes(&path),
+                            verbose,
+                        )
+                        .await?;
+                    }
+                }
+            }
+        }
+        _ => {
+            bail!("cannot extract '{}'", orig_path);
+        }
+    }
+
+    Ok(())
+}
+
+fn main() {
+    let list_cmd_def = CliCommand::new(&API_METHOD_LIST)
+        .arg_param(&["snapshot", "path"])
+        .completion_cb("repository", complete_repository)
+        .completion_cb("snapshot", complete_group_or_snapshot);
+
+    let restore_cmd_def = CliCommand::new(&API_METHOD_EXTRACT)
+        .arg_param(&["snapshot", "path", "target"])
+        .completion_cb("repository", complete_repository)
+        .completion_cb("snapshot", complete_group_or_snapshot)
+        .completion_cb("target", tools::complete_file_name);
+
+    let cmd_def = CliCommandMap::new()
+        .insert("list", list_cmd_def)
+        .insert("extract", restore_cmd_def);
+
+    let rpcenv = CliEnvironment::new();
+    run_cli_command(
+        cmd_def,
+        rpcenv,
+        Some(|future| proxmox_backup::tools::runtime::main(future)),
+    );
+}
diff --git a/zsh-completions/_proxmox-file-restore b/zsh-completions/_proxmox-file-restore
new file mode 100644
index 00000000..e2e48c7a
--- /dev/null
+++ b/zsh-completions/_proxmox-file-restore
@@ -0,0 +1,13 @@
+#compdef _proxmox-file-restore() proxmox-file-restore
+
+function _proxmox-file-restore() {
+    local cwords line point cmd curr prev
+    cwords=${#words[@]}
+    line=$words
+    point=${#line}
+    cmd=${words[1]}
+    curr=${words[cwords]}
+    prev=${words[cwords-1]}
+    compadd -- $(COMP_CWORD="$cwords" COMP_LINE="$line" COMP_POINT="$point" \
+        proxmox-file-restore bashcomplete "$cmd" "$curr" "$prev")
+}
-- 
2.20.1

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [pbs-devel] [PATCH v3 proxmox-backup 07/20] file-restore: allow specifying output-format
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (5 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 06/20] file-restore: add binary and basic commands Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 08/20] server/rest: extract auth to separate module Stefan Reiter
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Makes CLI usage more comfortable by not just printing raw JSON to the
terminal.
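
The split this patch introduces — the API call keeps returning structured
entries, only the final rendering step varies with "--output-format" — can be
sketched with plain std types. Everything below (the Entry struct, the render
helper, the hand-rolled JSON) is invented for illustration and is not the
proxmox ArchiveEntry/ColumnConfig/format_and_print_result_full API:

```rust
// Illustrative sketch only: the command builds structured records once,
// then picks a renderer based on the requested output format.
#[derive(Debug)]
struct Entry {
    entry_type: &'static str,
    text: &'static str,
    size: u64,
}

fn render(entries: &[Entry], output_format: &str) -> String {
    match output_format {
        // "json": machine-readable output, as before the patch
        "json" => entries
            .iter()
            .map(|e| {
                format!(
                    "{{\"type\":\"{}\",\"text\":\"{}\",\"size\":{}}}",
                    e.entry_type, e.text, e.size
                )
            })
            .collect::<Vec<_>>()
            .join(",\n"),
        // anything else: an aligned, human-readable table (the new default)
        _ => entries
            .iter()
            .map(|e| format!("{:<4} {:<16} {:>8}", e.entry_type, e.text, e.size))
            .collect::<Vec<_>>()
            .join("\n"),
    }
}

fn main() {
    let entries = [
        Entry { entry_type: "d", text: "etc", size: 0 },
        Entry { entry_type: "f", text: "fstab", size: 512 },
    ];
    println!("{}", render(&entries, "text"));
}
```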

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/bin/proxmox-file-restore.rs | 42 +++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 5 deletions(-)

diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index 3cd0c73f..f8affc03 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -4,11 +4,14 @@ use std::path::PathBuf;
 use std::sync::Arc;
 
 use anyhow::{bail, format_err, Error};
-use serde_json::Value;
+use serde_json::{json, Value};
 
 use proxmox::api::{
     api,
-    cli::{run_cli_command, CliCommand, CliCommandMap, CliEnvironment},
+    cli::{
+        default_table_format_options, format_and_print_result_full, get_output_format,
+        run_cli_command, CliCommand, CliCommandMap, CliEnvironment, ColumnConfig, OUTPUT_FORMAT,
+    },
 };
 use pxar::accessor::aio::Accessor;
 
@@ -99,6 +102,17 @@ fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
                type: CryptMode,
                optional: true,
            },
+           "output-format": {
+               schema: OUTPUT_FORMAT,
+               optional: true,
+           },
+       }
+   },
+   returns: {
+       description: "A list of elements under the given path",
+       type: Array,
+       items: {
+           type: ArchiveEntry,
        }
    }
 )]
@@ -108,7 +122,7 @@ async fn list(
     path: String,
     base64: bool,
     param: Value,
-) -> Result<Vec<ArchiveEntry>, Error> {
+) -> Result<(), Error> {
     let repo = extract_repository_from_value(&param)?;
     let snapshot: BackupDir = snapshot.parse()?;
     let path = parse_path(path, base64)?;
@@ -141,7 +155,7 @@ async fn list(
     let (manifest, _) = client.download_manifest().await?;
     manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
 
-    match path {
+    let result = match path {
         ExtractPath::ListArchives => {
             let mut entries = vec![];
             for file in manifest.files() {
@@ -177,7 +191,25 @@ async fn list(
 
             helpers::list_dir_content(&mut catalog_reader, &fullpath)
         }
-    }
+    }?;
+
+    let options = default_table_format_options()
+        .sortby("type", false)
+        .sortby("text", false)
+        .column(ColumnConfig::new("type"))
+        .column(ColumnConfig::new("text").header("name"))
+        .column(ColumnConfig::new("mtime").header("last modified"))
+        .column(ColumnConfig::new("size"));
+
+    let output_format = get_output_format(&param);
+    format_and_print_result_full(
+        &mut json!(result),
+        &API_METHOD_LIST.returns,
+        &output_format,
+        &options,
+    );
+
+    Ok(())
 }
 
 #[api(
-- 
2.20.1


* [pbs-devel] [PATCH v3 proxmox-backup 08/20] server/rest: extract auth to separate module
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (6 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 07/20] file-restore: allow specifying output-format Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-04-01  9:55   ` [pbs-devel] applied: " Thomas Lamprecht
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 09/20] server/rest: add ApiAuth trait to make user auth generic Stefan Reiter
                   ` (12 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/server.rs      |   2 +
 src/server/auth.rs | 102 +++++++++++++++++++++++++++++++++++++++++++++
 src/server/rest.rs |  96 +-----------------------------------------
 3 files changed, 105 insertions(+), 95 deletions(-)
 create mode 100644 src/server/auth.rs

diff --git a/src/server.rs b/src/server.rs
index 7c159c23..b6a37b92 100644
--- a/src/server.rs
+++ b/src/server.rs
@@ -89,3 +89,5 @@ mod report;
 pub use report::*;
 
 pub mod ticket;
+
+pub mod auth;
diff --git a/src/server/auth.rs b/src/server/auth.rs
new file mode 100644
index 00000000..24151886
--- /dev/null
+++ b/src/server/auth.rs
@@ -0,0 +1,102 @@
+//! Provides authentication primitives for the HTTP server
+use anyhow::{bail, format_err, Error};
+
+use crate::tools::ticket::Ticket;
+use crate::auth_helpers::*;
+use crate::tools;
+use crate::config::cached_user_info::CachedUserInfo;
+use crate::api2::types::{Authid, Userid};
+
+use hyper::header;
+use percent_encoding::percent_decode_str;
+
+pub struct UserAuthData {
+    ticket: String,
+    csrf_token: Option<String>,
+}
+
+pub enum AuthData {
+    User(UserAuthData),
+    ApiToken(String),
+}
+
+pub fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
+    if let Some(raw_cookie) = headers.get(header::COOKIE) {
+        if let Ok(cookie) = raw_cookie.to_str() {
+            if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
+                let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
+                    Some(Ok(v)) => Some(v.to_owned()),
+                    _ => None,
+                };
+                return Some(AuthData::User(UserAuthData {
+                    ticket,
+                    csrf_token,
+                }));
+            }
+        }
+    }
+
+    match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
+        Some(Ok(v)) => {
+            if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
+                Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
+            } else {
+                None
+            }
+        },
+        _ => None,
+    }
+}
+
+pub fn check_auth(
+    method: &hyper::Method,
+    auth_data: &AuthData,
+    user_info: &CachedUserInfo,
+) -> Result<Authid, Error> {
+    match auth_data {
+        AuthData::User(user_auth_data) => {
+            let ticket = user_auth_data.ticket.clone();
+            let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
+
+            let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
+                .verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
+                .require_full()?;
+
+            let auth_id = Authid::from(userid.clone());
+            if !user_info.is_active_auth_id(&auth_id) {
+                bail!("user account disabled or expired.");
+            }
+
+            if method != hyper::Method::GET {
+                if let Some(csrf_token) = &user_auth_data.csrf_token {
+                    verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
+                } else {
+                    bail!("missing CSRF prevention token");
+                }
+            }
+
+            Ok(auth_id)
+        },
+        AuthData::ApiToken(api_token) => {
+            let mut parts = api_token.splitn(2, ':');
+            let tokenid = parts.next()
+                .ok_or_else(|| format_err!("failed to split API token header"))?;
+            let tokenid: Authid = tokenid.parse()?;
+
+            if !user_info.is_active_auth_id(&tokenid) {
+                bail!("user account or token disabled or expired.");
+            }
+
+            let tokensecret = parts.next()
+                .ok_or_else(|| format_err!("failed to split API token header"))?;
+            let tokensecret = percent_decode_str(tokensecret)
+                .decode_utf8()
+                .map_err(|_| format_err!("failed to decode API token header"))?;
+
+            crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
+
+            Ok(tokenid)
+        }
+    }
+}
+
diff --git a/src/server/rest.rs b/src/server/rest.rs
index 150125ec..9a971890 100644
--- a/src/server/rest.rs
+++ b/src/server/rest.rs
@@ -34,6 +34,7 @@ use proxmox::http_err;
 use super::environment::RestEnvironment;
 use super::formatter::*;
 use super::ApiConfig;
+use super::auth::{check_auth, extract_auth_data};
 
 use crate::api2::types::{Authid, Userid};
 use crate::auth_helpers::*;
@@ -588,101 +589,6 @@ fn extract_lang_header(headers: &http::HeaderMap) -> Option<String> {
     None
 }
 
-struct UserAuthData {
-    ticket: String,
-    csrf_token: Option<String>,
-}
-
-enum AuthData {
-    User(UserAuthData),
-    ApiToken(String),
-}
-
-fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
-    if let Some(raw_cookie) = headers.get(header::COOKIE) {
-        if let Ok(cookie) = raw_cookie.to_str() {
-            if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
-                let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
-                    Some(Ok(v)) => Some(v.to_owned()),
-                    _ => None,
-                };
-                return Some(AuthData::User(UserAuthData { ticket, csrf_token }));
-            }
-        }
-    }
-
-    match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
-        Some(Ok(v)) => {
-            if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
-                Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
-            } else {
-                None
-            }
-        }
-        _ => None,
-    }
-}
-
-fn check_auth(
-    method: &hyper::Method,
-    auth_data: &AuthData,
-    user_info: &CachedUserInfo,
-) -> Result<Authid, Error> {
-    match auth_data {
-        AuthData::User(user_auth_data) => {
-            let ticket = user_auth_data.ticket.clone();
-            let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
-
-            let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
-                .verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
-                .require_full()?;
-
-            let auth_id = Authid::from(userid.clone());
-            if !user_info.is_active_auth_id(&auth_id) {
-                bail!("user account disabled or expired.");
-            }
-
-            if method != hyper::Method::GET {
-                if let Some(csrf_token) = &user_auth_data.csrf_token {
-                    verify_csrf_prevention_token(
-                        csrf_secret(),
-                        &userid,
-                        &csrf_token,
-                        -300,
-                        ticket_lifetime,
-                    )?;
-                } else {
-                    bail!("missing CSRF prevention token");
-                }
-            }
-
-            Ok(auth_id)
-        }
-        AuthData::ApiToken(api_token) => {
-            let mut parts = api_token.splitn(2, ':');
-            let tokenid = parts
-                .next()
-                .ok_or_else(|| format_err!("failed to split API token header"))?;
-            let tokenid: Authid = tokenid.parse()?;
-
-            if !user_info.is_active_auth_id(&tokenid) {
-                bail!("user account or token disabled or expired.");
-            }
-
-            let tokensecret = parts
-                .next()
-                .ok_or_else(|| format_err!("failed to split API token header"))?;
-            let tokensecret = percent_decode_str(tokensecret)
-                .decode_utf8()
-                .map_err(|_| format_err!("failed to decode API token header"))?;
-
-            crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
-
-            Ok(tokenid)
-        }
-    }
-}
-
 async fn handle_request(
     api: Arc<ApiConfig>,
     req: Request<Body>,
-- 
2.20.1


* [pbs-devel] [PATCH v3 proxmox-backup 09/20] server/rest: add ApiAuth trait to make user auth generic
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (7 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 08/20] server/rest: extract auth to separate module Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 12:55   ` Wolfgang Bumiller
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 10/20] file-restore-daemon: add binary with virtio-vsock API server Stefan Reiter
                   ` (11 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

This allows switching the base user identification/authentication method
in the rest server. Will initially be used for single file restore VMs,
where authentication is based on a ticket file, not the PBS user
backend (PAM/local).

To avoid putting generic types into the RestServer type for this, we
merge the two calls "extract_auth_data" and "check_auth" into a single
one, which can use whatever type it wants internally.
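
The trait-object approach described above can be sketched with std types
alone. Note this is a simplified illustration: the names below
(Auth, StaticTicketAuth, the header slice) are invented, whereas the real
ApiAuth::check_auth in this patch takes http::HeaderMap, hyper::Method and a
CachedUserInfo and returns an Authid:

```rust
use std::sync::Arc;

// One combined call: extract credentials from the request *and* verify them.
// Merging both steps into a single trait method is what lets the server
// config hold a non-generic `dyn` trait object.
trait Auth {
    fn check_auth(&self, headers: &[(&str, &str)]) -> Result<String, String>;
}

// A ticket-file style authenticator, as a restore VM daemon might use
// (hypothetical; the real daemon's auth is added in a later patch).
struct StaticTicketAuth {
    ticket: String,
}

impl Auth for StaticTicketAuth {
    fn check_auth(&self, headers: &[(&str, &str)]) -> Result<String, String> {
        match headers.iter().find(|(k, _)| *k == "Authorization") {
            Some((_, v)) if *v == self.ticket => Ok("restore-daemon".to_string()),
            Some(_) => Err("invalid ticket".to_string()),
            None => Err("no authentication credentials provided.".to_string()),
        }
    }
}

// Storing the authenticator as Arc<dyn Auth + Send + Sync> keeps the server
// config free of generic parameters, mirroring ApiConfig::api_auth.
struct ApiConfig {
    api_auth: Arc<dyn Auth + Send + Sync>,
}

fn main() {
    let config = ApiConfig {
        api_auth: Arc::new(StaticTicketAuth { ticket: "secret".into() }),
    };
    assert!(config.api_auth.check_auth(&[("Authorization", "secret")]).is_ok());
    assert!(config.api_auth.check_auth(&[]).is_err());
    println!("ok");
}
```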

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v3:
* merge both calls into one trait, that way it doesn't have to be generic

 src/bin/proxmox-backup-api.rs   |  13 ++-
 src/bin/proxmox-backup-proxy.rs |   7 +-
 src/server/auth.rs              | 192 +++++++++++++++++++-------------
 src/server/config.rs            |  13 ++-
 src/server/rest.rs              |  36 +++---
 5 files changed, 159 insertions(+), 102 deletions(-)

diff --git a/src/bin/proxmox-backup-api.rs b/src/bin/proxmox-backup-api.rs
index 7d800259..e514a801 100644
--- a/src/bin/proxmox-backup-api.rs
+++ b/src/bin/proxmox-backup-api.rs
@@ -6,8 +6,11 @@ use proxmox::api::RpcEnvironmentType;
 
 //use proxmox_backup::tools;
 //use proxmox_backup::api_schema::config::*;
-use proxmox_backup::server::rest::*;
-use proxmox_backup::server;
+use proxmox_backup::server::{
+    self,
+    auth::default_api_auth,
+    rest::*,
+};
 use proxmox_backup::tools::daemon;
 use proxmox_backup::auth_helpers::*;
 use proxmox_backup::config;
@@ -53,7 +56,11 @@ async fn run() -> Result<(), Error> {
     let _ = csrf_secret(); // load with lazy_static
 
     let mut config = server::ApiConfig::new(
-        buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PRIVILEGED)?;
+        buildcfg::JS_DIR,
+        &proxmox_backup::api2::ROUTER,
+        RpcEnvironmentType::PRIVILEGED,
+        default_api_auth(),
+    )?;
 
     let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
 
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 541d34b5..7e026455 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -14,6 +14,7 @@ use proxmox::api::RpcEnvironmentType;
 use proxmox_backup::{
     backup::DataStore,
     server::{
+        auth::default_api_auth,
         WorkerTask,
         ApiConfig,
         rest::*,
@@ -84,7 +85,11 @@ async fn run() -> Result<(), Error> {
     let _ = csrf_secret(); // load with lazy_static
 
     let mut config = ApiConfig::new(
-        buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PUBLIC)?;
+        buildcfg::JS_DIR,
+        &proxmox_backup::api2::ROUTER,
+        RpcEnvironmentType::PUBLIC,
+        default_api_auth(),
+    )?;
 
     // Enable experimental tape UI if tape.cfg exists
     if Path::new("/etc/proxmox-backup/tape.cfg").exists() {
diff --git a/src/server/auth.rs b/src/server/auth.rs
index 24151886..0a9a740c 100644
--- a/src/server/auth.rs
+++ b/src/server/auth.rs
@@ -1,102 +1,140 @@
 //! Provides authentication primitives for the HTTP server
-use anyhow::{bail, format_err, Error};
+use anyhow::{format_err, Error};
+
+use std::sync::Arc;
 
-use crate::tools::ticket::Ticket;
-use crate::auth_helpers::*;
-use crate::tools;
-use crate::config::cached_user_info::CachedUserInfo;
 use crate::api2::types::{Authid, Userid};
+use crate::auth_helpers::*;
+use crate::config::cached_user_info::CachedUserInfo;
+use crate::tools;
+use crate::tools::ticket::Ticket;
 
 use hyper::header;
 use percent_encoding::percent_decode_str;
 
-pub struct UserAuthData {
+pub enum AuthError {
+    Generic(Error),
+    NoData,
+}
+
+impl From<Error> for AuthError {
+    fn from(err: Error) -> Self {
+        AuthError::Generic(err)
+    }
+}
+
+pub trait ApiAuth {
+    fn check_auth(
+        &self,
+        headers: &http::HeaderMap,
+        method: &hyper::Method,
+        user_info: &CachedUserInfo,
+    ) -> Result<Authid, AuthError>;
+}
+
+struct UserAuthData {
     ticket: String,
     csrf_token: Option<String>,
 }
 
-pub enum AuthData {
+enum AuthData {
     User(UserAuthData),
     ApiToken(String),
 }
 
-pub fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
-    if let Some(raw_cookie) = headers.get(header::COOKIE) {
-        if let Ok(cookie) = raw_cookie.to_str() {
-            if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
-                let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
-                    Some(Ok(v)) => Some(v.to_owned()),
-                    _ => None,
-                };
-                return Some(AuthData::User(UserAuthData {
-                    ticket,
-                    csrf_token,
-                }));
-            }
-        }
-    }
-
-    match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
-        Some(Ok(v)) => {
-            if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
-                Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
-            } else {
-                None
-            }
-        },
-        _ => None,
-    }
+pub struct UserApiAuth {}
+pub fn default_api_auth() -> Arc<UserApiAuth> {
+    Arc::new(UserApiAuth {})
 }
 
-pub fn check_auth(
-    method: &hyper::Method,
-    auth_data: &AuthData,
-    user_info: &CachedUserInfo,
-) -> Result<Authid, Error> {
-    match auth_data {
-        AuthData::User(user_auth_data) => {
-            let ticket = user_auth_data.ticket.clone();
-            let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
-
-            let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
-                .verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
-                .require_full()?;
-
-            let auth_id = Authid::from(userid.clone());
-            if !user_info.is_active_auth_id(&auth_id) {
-                bail!("user account disabled or expired.");
-            }
-
-            if method != hyper::Method::GET {
-                if let Some(csrf_token) = &user_auth_data.csrf_token {
-                    verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
-                } else {
-                    bail!("missing CSRF prevention token");
+impl UserApiAuth {
+    fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
+        if let Some(raw_cookie) = headers.get(header::COOKIE) {
+            if let Ok(cookie) = raw_cookie.to_str() {
+                if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
+                    let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
+                        Some(Ok(v)) => Some(v.to_owned()),
+                        _ => None,
+                    };
+                    return Some(AuthData::User(UserAuthData { ticket, csrf_token }));
                 }
             }
+        }
 
-            Ok(auth_id)
-        },
-        AuthData::ApiToken(api_token) => {
-            let mut parts = api_token.splitn(2, ':');
-            let tokenid = parts.next()
-                .ok_or_else(|| format_err!("failed to split API token header"))?;
-            let tokenid: Authid = tokenid.parse()?;
-
-            if !user_info.is_active_auth_id(&tokenid) {
-                bail!("user account or token disabled or expired.");
+        match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
+            Some(Ok(v)) => {
+                if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
+                    Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
+                } else {
+                    None
+                }
             }
-
-            let tokensecret = parts.next()
-                .ok_or_else(|| format_err!("failed to split API token header"))?;
-            let tokensecret = percent_decode_str(tokensecret)
-                .decode_utf8()
-                .map_err(|_| format_err!("failed to decode API token header"))?;
-
-            crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
-
-            Ok(tokenid)
+            _ => None,
         }
     }
 }
 
+impl ApiAuth for UserApiAuth {
+    fn check_auth(
+        &self,
+        headers: &http::HeaderMap,
+        method: &hyper::Method,
+        user_info: &CachedUserInfo,
+    ) -> Result<Authid, AuthError> {
+        let auth_data = Self::extract_auth_data(headers);
+        match auth_data {
+            Some(AuthData::User(user_auth_data)) => {
+                let ticket = user_auth_data.ticket.clone();
+                let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
+
+                let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
+                    .verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
+                    .require_full()?;
+
+                let auth_id = Authid::from(userid.clone());
+                if !user_info.is_active_auth_id(&auth_id) {
+                    return Err(format_err!("user account disabled or expired.").into());
+                }
+
+                if method != hyper::Method::GET {
+                    if let Some(csrf_token) = &user_auth_data.csrf_token {
+                        verify_csrf_prevention_token(
+                            csrf_secret(),
+                            &userid,
+                            &csrf_token,
+                            -300,
+                            ticket_lifetime,
+                        )?;
+                    } else {
+                        return Err(format_err!("missing CSRF prevention token").into());
+                    }
+                }
+
+                Ok(auth_id)
+            }
+            Some(AuthData::ApiToken(api_token)) => {
+                let mut parts = api_token.splitn(2, ':');
+                let tokenid = parts
+                    .next()
+                    .ok_or_else(|| format_err!("failed to split API token header"))?;
+                let tokenid: Authid = tokenid.parse()?;
+
+                if !user_info.is_active_auth_id(&tokenid) {
+                    return Err(format_err!("user account or token disabled or expired.").into());
+                }
+
+                let tokensecret = parts
+                    .next()
+                    .ok_or_else(|| format_err!("failed to split API token header"))?;
+                let tokensecret = percent_decode_str(tokensecret)
+                    .decode_utf8()
+                    .map_err(|_| format_err!("failed to decode API token header"))?;
+
+                crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
+
+                Ok(tokenid)
+            }
+            None => Err(AuthError::NoData),
+        }
+    }
+}
diff --git a/src/server/config.rs b/src/server/config.rs
index 9094fa80..ad378b0a 100644
--- a/src/server/config.rs
+++ b/src/server/config.rs
@@ -13,6 +13,7 @@ use proxmox::api::{ApiMethod, Router, RpcEnvironmentType};
 use proxmox::tools::fs::{create_path, CreateOptions};
 
 use crate::tools::{FileLogger, FileLogOptions};
+use super::auth::ApiAuth;
 
 pub struct ApiConfig {
     basedir: PathBuf,
@@ -23,11 +24,16 @@ pub struct ApiConfig {
     template_files: RwLock<HashMap<String, (SystemTime, PathBuf)>>,
     request_log: Option<Arc<Mutex<FileLogger>>>,
     pub enable_tape_ui: bool,
+    pub api_auth: Arc<dyn ApiAuth + Send + Sync>,
 }
 
 impl ApiConfig {
-
-    pub fn new<B: Into<PathBuf>>(basedir: B, router: &'static Router, env_type: RpcEnvironmentType) -> Result<Self, Error> {
+    pub fn new<B: Into<PathBuf>>(
+        basedir: B,
+        router: &'static Router,
+        env_type: RpcEnvironmentType,
+        api_auth: Arc<dyn ApiAuth + Send + Sync>,
+    ) -> Result<Self, Error> {
         Ok(Self {
             basedir: basedir.into(),
             router,
@@ -37,7 +43,8 @@ impl ApiConfig {
             template_files: RwLock::new(HashMap::new()),
             request_log: None,
             enable_tape_ui: false,
-       })
+            api_auth,
+        })
     }
 
     pub fn find_method(
diff --git a/src/server/rest.rs b/src/server/rest.rs
index 9a971890..2d033510 100644
--- a/src/server/rest.rs
+++ b/src/server/rest.rs
@@ -14,7 +14,6 @@ use hyper::header::{self, HeaderMap};
 use hyper::http::request::Parts;
 use hyper::{Body, Request, Response, StatusCode};
 use lazy_static::lazy_static;
-use percent_encoding::percent_decode_str;
 use regex::Regex;
 use serde_json::{json, Value};
 use tokio::fs::File;
@@ -31,16 +30,15 @@ use proxmox::api::{
 };
 use proxmox::http_err;
 
+use super::auth::AuthError;
 use super::environment::RestEnvironment;
 use super::formatter::*;
 use super::ApiConfig;
-use super::auth::{check_auth, extract_auth_data};
 
 use crate::api2::types::{Authid, Userid};
 use crate::auth_helpers::*;
 use crate::config::cached_user_info::CachedUserInfo;
 use crate::tools;
-use crate::tools::ticket::Ticket;
 use crate::tools::FileLogger;
 
 extern "C" {
@@ -614,6 +612,7 @@ async fn handle_request(
     rpcenv.set_client_ip(Some(*peer));
 
     let user_info = CachedUserInfo::new()?;
+    let auth = &api.api_auth;
 
     let delay_unauth_time = std::time::Instant::now() + std::time::Duration::from_millis(3000);
     let access_forbidden_time = std::time::Instant::now() + std::time::Duration::from_millis(500);
@@ -639,13 +638,15 @@ async fn handle_request(
             }
 
             if auth_required {
-                let auth_result = match extract_auth_data(&parts.headers) {
-                    Some(auth_data) => check_auth(&method, &auth_data, &user_info),
-                    None => Err(format_err!("no authentication credentials provided.")),
-                };
-                match auth_result {
+                match auth.check_auth(&parts.headers, &method, &user_info) {
                     Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())),
-                    Err(err) => {
+                    Err(auth_err) => {
+                        let err = match auth_err {
+                            AuthError::Generic(err) => err,
+                            AuthError::NoData => {
+                                format_err!("no authentication credentials provided.")
+                            }
+                        };
                         let peer = peer.ip();
                         auth_logger()?.log(format!(
                             "authentication failure; rhost={} msg={}",
@@ -708,9 +709,9 @@ async fn handle_request(
 
         if comp_len == 0 {
             let language = extract_lang_header(&parts.headers);
-            if let Some(auth_data) = extract_auth_data(&parts.headers) {
-                match check_auth(&method, &auth_data, &user_info) {
-                    Ok(auth_id) if !auth_id.is_token() => {
+            match auth.check_auth(&parts.headers, &method, &user_info) {
+                Ok(auth_id) => {
+                    if !auth_id.is_token() {
                         let userid = auth_id.user();
                         let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), userid);
                         return Ok(get_index(
@@ -721,14 +722,13 @@ async fn handle_request(
                             parts,
                         ));
                     }
-                    _ => {
-                        tokio::time::sleep_until(Instant::from_std(delay_unauth_time)).await;
-                        return Ok(get_index(None, None, language, &api, parts));
-                    }
                 }
-            } else {
-                return Ok(get_index(None, None, language, &api, parts));
+                Err(AuthError::Generic(_)) => {
+                    tokio::time::sleep_until(Instant::from_std(delay_unauth_time)).await;
+                }
+                Err(AuthError::NoData) => {}
             }
+            return Ok(get_index(None, None, language, &api, parts));
         } else {
             let filename = api.find_alias(&components);
             return handle_static_file_download(filename).await;
-- 
2.20.1





^ permalink raw reply	[flat|nested] 32+ messages in thread

* [pbs-devel] [PATCH v3 proxmox-backup 10/20] file-restore-daemon: add binary with virtio-vsock API server
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (8 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 09/20] server/rest: add ApiAuth trait to make user auth generic Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 11/20] file-restore-daemon: add watchdog module Stefan Reiter
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Implements the base of a small daemon to run within a file-restore VM.

The binary spawns an API server on a virtio-vsock socket, listening for
connections from the host. This happens mostly manually via the standard
Unix socket API, since tokio/hyper do not have support for vsock built
in. Once we have the accept'ed file descriptor, we can create a
UnixStream and use our tower service implementation for that.
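The fd-wrapping step can be sketched in isolation. A minimal standalone version (using a socketpair in place of a real vsock socket, which needs a VM environment; the `wrap_fd` helper name is made up for illustration):

```rust
use std::io::{Read, Write};
use std::os::unix::io::{FromRawFd, IntoRawFd, RawFd};
use std::os::unix::net::UnixStream;

/// Wrap an already-accepted raw file descriptor into a std UnixStream,
/// the same trick the daemon uses for fds returned by accept(2) on the
/// vsock socket (tokio's UnixListener can't be used directly, so the
/// accept has to happen manually).
fn wrap_fd(fd: RawFd) -> UnixStream {
    // Safety: the caller must pass an open fd it exclusively owns;
    // the returned UnixStream takes over closing it.
    unsafe { UnixStream::from_raw_fd(fd) }
}
```

In the patch the std stream is then switched to non-blocking mode and converted with tokio::net::UnixStream::from_std before being handed to hyper.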

The binary is deliberately not installed in the usual $PATH location,
since it shouldn't be executed on the host by a user anyway.

For now, only the API calls 'status' and 'stop' are implemented, to
demonstrate and test proxmox::api functionality.

Authorization is provided via a custom ApiAuth implementation that only
checks a header value against a static /ticket file.
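The header check itself boils down to a plain string comparison. A sketch under those assumptions (the `check_ticket` helper is hypothetical; the patch returns Authid::root_auth_id() on success and an AuthError on failure rather than the plain Result used here):

```rust
/// Compare the raw Authorization header value against the static ticket
/// read from /ticket at startup.
fn check_ticket(header: Option<&str>, ticket: &str) -> Result<(), String> {
    match header {
        // only an exact match of the full header value is accepted
        Some(h) if h == ticket => Ok(()),
        _ => Err("invalid file restore ticket provided".to_string()),
    }
}
```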

Since the REST server implementation uses the log!() macro, we can
redirect its output to stdout by registering env_logger as the logging
target. env_logger is already in our dependency tree via zstd/bindgen.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v2:
* implement custom static ticket auth with ApiAuth impl

 Cargo.toml                             |   1 +
 Makefile                               |   9 ++-
 debian/control                         |   1 +
 debian/proxmox-file-restore.install    |   1 +
 src/api2/types/file_restore.rs         |  12 +++
 src/api2/types/mod.rs                  |   3 +
 src/bin/proxmox-restore-daemon.rs      | 108 +++++++++++++++++++++++++
 src/bin/proxmox_restore_daemon/api.rs  |  62 ++++++++++++++
 src/bin/proxmox_restore_daemon/auth.rs |  45 +++++++++++
 src/bin/proxmox_restore_daemon/mod.rs  |   5 ++
 10 files changed, 246 insertions(+), 1 deletion(-)
 create mode 100644 src/api2/types/file_restore.rs
 create mode 100644 src/bin/proxmox-restore-daemon.rs
 create mode 100644 src/bin/proxmox_restore_daemon/api.rs
 create mode 100644 src/bin/proxmox_restore_daemon/auth.rs
 create mode 100644 src/bin/proxmox_restore_daemon/mod.rs

diff --git a/Cargo.toml b/Cargo.toml
index 4e4d18b4..6b880384 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -29,6 +29,7 @@ bitflags = "1.2.1"
 bytes = "1.0"
 crc32fast = "1"
 endian_trait = { version = "0.6", features = ["arrays"] }
+env_logger = "0.7"
 anyhow = "1.0"
 futures = "0.3"
 h2 = { version = "0.3", features = [ "stream" ] }
diff --git a/Makefile b/Makefile
index ec52d88f..269bb80c 100644
--- a/Makefile
+++ b/Makefile
@@ -26,6 +26,10 @@ SERVICE_BIN := \
 	proxmox-backup-proxy \
 	proxmox-daily-update
 
+# Single file restore daemon
+RESTORE_BIN := \
+	proxmox-restore-daemon
+
 ifeq ($(BUILD_MODE), release)
 CARGO_BUILD_ARGS += --release
 COMPILEDIR := target/release
@@ -40,7 +44,7 @@ endif
 CARGO ?= cargo
 
 COMPILED_BINS := \
-	$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN))
+	$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN) $(RESTORE_BIN))
 
 export DEB_VERSION DEB_VERSION_UPSTREAM
 
@@ -148,6 +152,9 @@ install: $(COMPILED_BINS)
 	    install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(SBINDIR)/ ; \
 	    install -m644 zsh-completions/_$(i) $(DESTDIR)$(ZSH_COMPL_DEST)/ ;)
 	install -dm755 $(DESTDIR)$(LIBEXECDIR)/proxmox-backup
+	install -dm755 $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/file-restore
+	$(foreach i,$(RESTORE_BIN), \
+	    install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/file-restore/ ;)
 	# install sg-tape-cmd as setuid binary
 	install -m4755 -o root -g root $(COMPILEDIR)/sg-tape-cmd $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/sg-tape-cmd
 	$(foreach i,$(SERVICE_BIN), \
diff --git a/debian/control b/debian/control
index e4e64372..0e12accb 100644
--- a/debian/control
+++ b/debian/control
@@ -15,6 +15,7 @@ Build-Depends: debhelper (>= 11),
  librust-crossbeam-channel-0.5+default-dev,
  librust-endian-trait-0.6+arrays-dev,
  librust-endian-trait-0.6+default-dev,
+ librust-env-logger-0.7+default-dev,
  librust-futures-0.3+default-dev,
  librust-h2-0.3+default-dev,
  librust-h2-0.3+stream-dev,
diff --git a/debian/proxmox-file-restore.install b/debian/proxmox-file-restore.install
index 2082e46b..d952836e 100644
--- a/debian/proxmox-file-restore.install
+++ b/debian/proxmox-file-restore.install
@@ -1,3 +1,4 @@
 usr/bin/proxmox-file-restore
 usr/share/man/man1/proxmox-file-restore.1
 usr/share/zsh/vendor-completions/_proxmox-file-restore
+usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/proxmox-restore-daemon
diff --git a/src/api2/types/file_restore.rs b/src/api2/types/file_restore.rs
new file mode 100644
index 00000000..cd8df16a
--- /dev/null
+++ b/src/api2/types/file_restore.rs
@@ -0,0 +1,12 @@
+use serde::{Deserialize, Serialize};
+use proxmox::api::api;
+
+#[api()]
+#[derive(Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+/// General status information about a running VM file-restore daemon
+pub struct RestoreDaemonStatus {
+    /// VM uptime in seconds
+    pub uptime: i64,
+}
+
diff --git a/src/api2/types/mod.rs b/src/api2/types/mod.rs
index 1bd4f92a..19186ea2 100644
--- a/src/api2/types/mod.rs
+++ b/src/api2/types/mod.rs
@@ -34,6 +34,9 @@ pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GRO
 mod tape;
 pub use tape::*;
 
+mod file_restore;
+pub use file_restore::*;
+
 // File names: may not contain slashes, may not start with "."
 pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
     if name.starts_with('.') {
diff --git a/src/bin/proxmox-restore-daemon.rs b/src/bin/proxmox-restore-daemon.rs
new file mode 100644
index 00000000..e803238a
--- /dev/null
+++ b/src/bin/proxmox-restore-daemon.rs
@@ -0,0 +1,108 @@
+//! Daemon binary to run inside a micro-VM for secure single file restore of disk images
+use anyhow::{bail, format_err, Error};
+use log::error;
+
+use std::os::unix::{
+    io::{FromRawFd, RawFd},
+    net,
+};
+use std::path::Path;
+use std::sync::Arc;
+
+use tokio::sync::mpsc;
+use tokio_stream::wrappers::ReceiverStream;
+
+use proxmox::api::RpcEnvironmentType;
+use proxmox_backup::client::DEFAULT_VSOCK_PORT;
+use proxmox_backup::server::{rest::*, ApiConfig};
+
+mod proxmox_restore_daemon;
+use proxmox_restore_daemon::*;
+
+/// Maximum number of pending requests. If saturated, virtio-vsock returns ETIMEDOUT immediately.
+/// We should never have more than a few requests in queue, so use a low number.
+pub const MAX_PENDING: usize = 32;
+
+/// Will be present in base initramfs
+pub const VM_DETECT_FILE: &str = "/restore-vm-marker";
+
+/// This is expected to be run by 'proxmox-file-restore' within a mini-VM
+fn main() -> Result<(), Error> {
+    if !Path::new(VM_DETECT_FILE).exists() {
+        bail!(concat!(
+            "This binary is not supposed to be run manually. ",
+            "Please use 'proxmox-file-restore' instead."
+        ));
+    }
+
+    // don't have a real syslog (and no persistence), so use env_logger to print to a log file (via
+    // stdout to a serial terminal attached by QEMU)
+    env_logger::from_env(env_logger::Env::default().default_filter_or("info"))
+        .write_style(env_logger::WriteStyle::Never)
+        .init();
+
+    proxmox_backup::tools::runtime::main(run())
+}
+
+async fn run() -> Result<(), Error> {
+    let auth_config = Arc::new(
+        auth::ticket_auth().map_err(|err| format_err!("reading ticket file failed: {}", err))?,
+    );
+    let config = ApiConfig::new("", &ROUTER, RpcEnvironmentType::PUBLIC, auth_config)?;
+    let rest_server = RestServer::new(config);
+
+    let vsock_fd = get_vsock_fd()?;
+    let connections = accept_vsock_connections(vsock_fd);
+    let receiver_stream = ReceiverStream::new(connections);
+    let acceptor = hyper::server::accept::from_stream(receiver_stream);
+
+    hyper::Server::builder(acceptor).serve(rest_server).await?;
+
+    bail!("hyper server exited");
+}
+
+fn accept_vsock_connections(
+    vsock_fd: RawFd,
+) -> mpsc::Receiver<Result<tokio::net::UnixStream, Error>> {
+    use nix::sys::socket::*;
+    let (sender, receiver) = mpsc::channel(MAX_PENDING);
+
+    tokio::spawn(async move {
+        loop {
+            let stream: Result<tokio::net::UnixStream, Error> = tokio::task::block_in_place(|| {
+                // we need to accept manually, as UnixListener aborts if socket type != AF_UNIX ...
+                let client_fd = accept(vsock_fd)?;
+                let stream = unsafe { net::UnixStream::from_raw_fd(client_fd) };
+                stream.set_nonblocking(true)?;
+                tokio::net::UnixStream::from_std(stream).map_err(|err| err.into())
+            });
+
+            match stream {
+                Ok(stream) => {
+                    if sender.send(Ok(stream)).await.is_err() {
+                        error!("connection accept channel was closed");
+                    }
+                }
+                Err(err) => {
+                    error!("error accepting vsock connection: {}", err);
+                }
+            }
+        }
+    });
+
+    receiver
+}
+
+fn get_vsock_fd() -> Result<RawFd, Error> {
+    use nix::sys::socket::*;
+    let sock_fd = socket(
+        AddressFamily::Vsock,
+        SockType::Stream,
+        SockFlag::empty(),
+        None,
+    )?;
+    let sock_addr = VsockAddr::new(libc::VMADDR_CID_ANY, DEFAULT_VSOCK_PORT as u32);
+    bind(sock_fd, &SockAddr::Vsock(sock_addr))?;
+    listen(sock_fd, MAX_PENDING)?;
+    Ok(sock_fd)
+}
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
new file mode 100644
index 00000000..2dec11fe
--- /dev/null
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -0,0 +1,62 @@
+//! File-restore API running inside the restore VM
+use anyhow::Error;
+use serde_json::Value;
+use std::fs;
+
+use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment, SubdirMap};
+use proxmox::list_subdirs_api_method;
+
+use proxmox_backup::api2::types::*;
+
+// NOTE: All API endpoints must have Permission::Superuser, as the configs for authentication do
+// not exist within the restore VM. Safety is guaranteed by checking a ticket via a custom ApiAuth.
+
+const SUBDIRS: SubdirMap = &[
+    ("status", &Router::new().get(&API_METHOD_STATUS)),
+    ("stop", &Router::new().get(&API_METHOD_STOP)),
+];
+
+pub const ROUTER: Router = Router::new()
+    .get(&list_subdirs_api_method!(SUBDIRS))
+    .subdirs(SUBDIRS);
+
+fn read_uptime() -> Result<f32, Error> {
+    let uptime = fs::read_to_string("/proc/uptime")?;
+    // unwrap the Option, if /proc/uptime is empty we have bigger problems
+    Ok(uptime.split_ascii_whitespace().next().unwrap().parse()?)
+}
+
+#[api(
+    access: {
+        description: "Permissions are handled outside restore VM.",
+        permission: &Permission::Superuser,
+    },
+    returns: {
+        type: RestoreDaemonStatus,
+    }
+)]
+/// General status information
+fn status(
+    _param: Value,
+    _info: &ApiMethod,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<RestoreDaemonStatus, Error> {
+    Ok(RestoreDaemonStatus {
+        uptime: read_uptime()? as i64,
+    })
+}
+
+#[api(
+    access: {
+        description: "Permissions are handled outside restore VM.",
+        permission: &Permission::Superuser,
+    },
+)]
+/// Stop the restore VM immediately, this will never return if successful
+fn stop() {
+    use nix::sys::reboot;
+    println!("/stop called, shutting down");
+    let err = reboot::reboot(reboot::RebootMode::RB_POWER_OFF).unwrap_err();
+    println!("'reboot' syscall failed: {}", err);
+    std::process::exit(1);
+}
diff --git a/src/bin/proxmox_restore_daemon/auth.rs b/src/bin/proxmox_restore_daemon/auth.rs
new file mode 100644
index 00000000..0973849e
--- /dev/null
+++ b/src/bin/proxmox_restore_daemon/auth.rs
@@ -0,0 +1,45 @@
+//! Authentication via a static ticket file
+use anyhow::{bail, format_err, Error};
+
+use std::fs::File;
+use std::io::prelude::*;
+
+use proxmox_backup::api2::types::Authid;
+use proxmox_backup::config::cached_user_info::CachedUserInfo;
+use proxmox_backup::server::auth::{ApiAuth, AuthError};
+
+const TICKET_FILE: &str = "/ticket";
+
+pub struct StaticAuth {
+    ticket: String,
+}
+
+impl ApiAuth for StaticAuth {
+    fn check_auth(
+        &self,
+        headers: &http::HeaderMap,
+        _method: &hyper::Method,
+        _user_info: &CachedUserInfo,
+    ) -> Result<Authid, AuthError> {
+        match headers.get(hyper::header::AUTHORIZATION) {
+            Some(header) if header.to_str().unwrap_or("") == &self.ticket => {
+                Ok(Authid::root_auth_id().to_owned())
+            }
+            _ => {
+                return Err(AuthError::Generic(format_err!(
+                    "invalid file restore ticket provided"
+                )));
+            }
+        }
+    }
+}
+
+pub fn ticket_auth() -> Result<StaticAuth, Error> {
+    let mut ticket_file = File::open(TICKET_FILE)?;
+    let mut ticket = String::new();
+    let len = ticket_file.read_to_string(&mut ticket)?;
+    if len == 0 {
+        bail!("invalid ticket: cannot be empty");
+    }
+    Ok(StaticAuth { ticket })
+}
diff --git a/src/bin/proxmox_restore_daemon/mod.rs b/src/bin/proxmox_restore_daemon/mod.rs
new file mode 100644
index 00000000..8396ebc5
--- /dev/null
+++ b/src/bin/proxmox_restore_daemon/mod.rs
@@ -0,0 +1,5 @@
+//! File restore VM related functionality
+mod api;
+pub use api::*;
+
+pub mod auth;
-- 
2.20.1






* [pbs-devel] [PATCH v3 proxmox-backup 11/20] file-restore-daemon: add watchdog module
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (9 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 10/20] file-restore-daemon: add binary with virtio-vsock API server Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 12/20] file-restore-daemon: add disk module Stefan Reiter
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Add a watchdog that will automatically shut down the VM after 10
minutes if no API call is received.
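The mechanism can be sketched with std atomics alone (this is an illustrative standalone version: it uses SystemTime in place of proxmox::tools::time::epoch_i64, and the function names mirror the patch but are redefined here):

```rust
use std::sync::atomic::{AtomicI64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

const TIMEOUT: i64 = 600; // seconds, same value as the patch

static TRIGGERED: AtomicI64 = AtomicI64::new(0);

fn epoch_i64() -> i64 {
    // stand-in for proxmox::tools::time::epoch_i64
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs() as i64
}

/// Reset the countdown. fetch_max keeps the stored timestamp monotonic,
/// so a racing older ping can never move the deadline backwards.
fn watchdog_ping() {
    TRIGGERED.fetch_max(epoch_i64(), Ordering::AcqRel);
}

/// Seconds left until expiry; a value <= 0 means the VM should power off.
fn watchdog_remaining() -> i64 {
    TIMEOUT - (epoch_i64() - TRIGGERED.load(Ordering::Acquire))
}
```

A tokio task then sleeps for the remaining time in a loop and powers the VM off once the value reaches zero.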

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v3:
* use fetch_max and better Ordering

v2:
* use tokio instead of alarm()

 src/api2/types/file_restore.rs             |  3 ++
 src/bin/proxmox-restore-daemon.rs          |  2 ++
 src/bin/proxmox_restore_daemon/api.rs      | 26 ++++++++++----
 src/bin/proxmox_restore_daemon/mod.rs      |  3 ++
 src/bin/proxmox_restore_daemon/watchdog.rs | 41 ++++++++++++++++++++++
 5 files changed, 68 insertions(+), 7 deletions(-)
 create mode 100644 src/bin/proxmox_restore_daemon/watchdog.rs

diff --git a/src/api2/types/file_restore.rs b/src/api2/types/file_restore.rs
index cd8df16a..29085c31 100644
--- a/src/api2/types/file_restore.rs
+++ b/src/api2/types/file_restore.rs
@@ -8,5 +8,8 @@ use proxmox::api::api;
 pub struct RestoreDaemonStatus {
     /// VM uptime in seconds
     pub uptime: i64,
+    /// time left until auto-shutdown; note that this is only meaningful when 'keep-timeout' is
+    /// set, as otherwise the status call itself will have reset the timer before returning
+    pub timeout: i64,
 }
 
diff --git a/src/bin/proxmox-restore-daemon.rs b/src/bin/proxmox-restore-daemon.rs
index e803238a..6b453ad3 100644
--- a/src/bin/proxmox-restore-daemon.rs
+++ b/src/bin/proxmox-restore-daemon.rs
@@ -45,6 +45,8 @@ fn main() -> Result<(), Error> {
 }
 
 async fn run() -> Result<(), Error> {
+    watchdog_init();
+
     let auth_config = Arc::new(
         auth::ticket_auth().map_err(|err| format_err!("reading ticket file failed: {}", err))?,
     );
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
index 2dec11fe..4c78a0e8 100644
--- a/src/bin/proxmox_restore_daemon/api.rs
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -8,6 +8,8 @@ use proxmox::list_subdirs_api_method;
 
 use proxmox_backup::api2::types::*;
 
+use super::{watchdog_remaining, watchdog_ping};
+
 // NOTE: All API endpoints must have Permission::Superuser, as the configs for authentication do
 // not exist within the restore VM. Safety is guaranteed by checking a ticket via a custom ApiAuth.
 
@@ -27,22 +29,32 @@ fn read_uptime() -> Result<f32, Error> {
 }
 
 #[api(
+    input: {
+        properties: {
+            "keep-timeout": {
+                type: bool,
+                description: "If true, do not reset the watchdog timer on this API call.",
+                default: false,
+                optional: true,
+            },
+        },
+    },
     access: {
-        description: "Permissions are handled outside restore VM.",
-        permission: &Permission::Superuser,
+        description: "Permissions are handled outside restore VM. This call can be made without a ticket, but keep-timeout is always assumed 'true' then.",
+        permission: &Permission::World,
     },
     returns: {
         type: RestoreDaemonStatus,
     }
 )]
 /// General status information
-fn status(
-    _param: Value,
-    _info: &ApiMethod,
-    _rpcenv: &mut dyn RpcEnvironment,
-) -> Result<RestoreDaemonStatus, Error> {
+fn status(rpcenv: &mut dyn RpcEnvironment, keep_timeout: bool) -> Result<RestoreDaemonStatus, Error> {
+    if !keep_timeout && rpcenv.get_auth_id().is_some() {
+        watchdog_ping();
+    }
     Ok(RestoreDaemonStatus {
         uptime: read_uptime()? as i64,
+        timeout: watchdog_remaining(),
     })
 }
 
diff --git a/src/bin/proxmox_restore_daemon/mod.rs b/src/bin/proxmox_restore_daemon/mod.rs
index 8396ebc5..3b52cf06 100644
--- a/src/bin/proxmox_restore_daemon/mod.rs
+++ b/src/bin/proxmox_restore_daemon/mod.rs
@@ -3,3 +3,6 @@ mod api;
 pub use api::*;
 
 pub mod auth;
+
+mod watchdog;
+pub use watchdog::*;
diff --git a/src/bin/proxmox_restore_daemon/watchdog.rs b/src/bin/proxmox_restore_daemon/watchdog.rs
new file mode 100644
index 00000000..399f99a7
--- /dev/null
+++ b/src/bin/proxmox_restore_daemon/watchdog.rs
@@ -0,0 +1,41 @@
+//! Tokio-based watchdog that shuts down the VM if not pinged for TIMEOUT
+use std::sync::atomic::{AtomicI64, Ordering};
+use proxmox::tools::time::epoch_i64;
+
+const TIMEOUT: i64 = 600; // seconds
+static TRIGGERED: AtomicI64 = AtomicI64::new(0);
+
+fn handle_expired() -> ! {
+    use nix::sys::reboot;
+    println!("watchdog expired, shutting down");
+    let err = reboot::reboot(reboot::RebootMode::RB_POWER_OFF).unwrap_err();
+    println!("'reboot' syscall failed: {}", err);
+    std::process::exit(1);
+}
+
+async fn watchdog_loop() {
+    use tokio::time::{sleep, Duration};
+    loop {
+        let remaining = watchdog_remaining();
+        if remaining <= 0 {
+            handle_expired();
+        }
+        sleep(Duration::from_secs(remaining as u64)).await;
+    }
+}
+
+/// Initialize watchdog
+pub fn watchdog_init() {
+    watchdog_ping();
+    tokio::spawn(watchdog_loop());
+}
+
+/// Trigger watchdog keepalive
+pub fn watchdog_ping() {
+    TRIGGERED.fetch_max(epoch_i64(), Ordering::AcqRel);
+}
+
+/// Returns the remaining time before watchdog expiry in seconds
+pub fn watchdog_remaining() -> i64 {
+    TIMEOUT - (epoch_i64() - TRIGGERED.load(Ordering::Acquire))
+}
-- 
2.20.1






* [pbs-devel] [PATCH v3 proxmox-backup 12/20] file-restore-daemon: add disk module
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (10 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 11/20] file-restore-daemon: add watchdog module Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 13/20] add tools/cpio encoding module Stefan Reiter
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Includes functionality for scanning and referring to partitions on
attached disks (i.e. snapshot images).

Fairly modular structure, so adding ZFS/LVM/etc... support in the future
should be easy.

The path is encoded as "/disk/bucket/component/path/to/file", e.g.
"/drive-scsi0/part/0/etc/passwd". See the comments for further
explanations on the design.
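The four-component scheme can be illustrated with a small standalone parser (the `split_restore_path` helper is hypothetical, shown only to make the layout concrete; it is not code from the patch):

```rust
/// Split "/disk/bucket/component/path/to/file" into its four parts,
/// e.g. "/drive-scsi0/part/0/etc/passwd" ->
/// ("drive-scsi0", "part", "0", "etc/passwd").
fn split_restore_path(p: &str) -> Option<(&str, &str, &str, &str)> {
    let mut it = p.trim_start_matches('/').splitn(4, '/');
    let disk = it.next()?;      // fidx file name
    let bucket = it.next()?;    // bucket type, e.g. "part"
    let component = it.next()?; // identifier within the bucket
    // the file path may be empty when only the bucket itself is addressed
    let path = it.next().unwrap_or("");
    Some((disk, bucket, component, path))
}
```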

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v3:
* fix ZFS exclusion in scan(), style cleanups

 src/bin/proxmox-restore-daemon.rs      |  16 +-
 src/bin/proxmox_restore_daemon/disk.rs | 329 +++++++++++++++++++++++++
 src/bin/proxmox_restore_daemon/mod.rs  |   3 +
 3 files changed, 347 insertions(+), 1 deletion(-)
 create mode 100644 src/bin/proxmox_restore_daemon/disk.rs

diff --git a/src/bin/proxmox-restore-daemon.rs b/src/bin/proxmox-restore-daemon.rs
index 6b453ad3..a2701b7c 100644
--- a/src/bin/proxmox-restore-daemon.rs
+++ b/src/bin/proxmox-restore-daemon.rs
@@ -1,13 +1,14 @@
 //! Daemon binary to run inside a micro-VM for secure single file restore of disk images
 use anyhow::{bail, format_err, Error};
 use log::error;
+use lazy_static::lazy_static;
 
 use std::os::unix::{
     io::{FromRawFd, RawFd},
     net,
 };
 use std::path::Path;
-use std::sync::Arc;
+use std::sync::{Arc, Mutex};
 
 use tokio::sync::mpsc;
 use tokio_stream::wrappers::ReceiverStream;
@@ -26,6 +27,13 @@ pub const MAX_PENDING: usize = 32;
 /// Will be present in base initramfs
 pub const VM_DETECT_FILE: &str = "/restore-vm-marker";
 
+lazy_static! {
+    /// The current disks state. Use for accessing data on the attached snapshots.
+    pub static ref DISK_STATE: Arc<Mutex<DiskState>> = {
+        Arc::new(Mutex::new(DiskState::scan().unwrap()))
+    };
+}
+
 /// This is expected to be run by 'proxmox-file-restore' within a mini-VM
 fn main() -> Result<(), Error> {
     if !Path::new(VM_DETECT_FILE).exists() {
@@ -41,6 +49,12 @@ fn main() -> Result<(), Error> {
         .write_style(env_logger::WriteStyle::Never)
         .init();
 
+    // scan all attached disks now, before starting the API
+    // this will panic and stop the VM if anything goes wrong
+    {
+        let _disk_state = DISK_STATE.lock().unwrap();
+    }
+
     proxmox_backup::tools::runtime::main(run())
 }
 
diff --git a/src/bin/proxmox_restore_daemon/disk.rs b/src/bin/proxmox_restore_daemon/disk.rs
new file mode 100644
index 00000000..f9d7c8aa
--- /dev/null
+++ b/src/bin/proxmox_restore_daemon/disk.rs
@@ -0,0 +1,329 @@
+//! Low-level disk (image) access functions for file restore VMs.
+use anyhow::{bail, format_err, Error};
+use lazy_static::lazy_static;
+use log::{info, warn};
+
+use std::collections::HashMap;
+use std::fs::{create_dir_all, File};
+use std::io::{BufRead, BufReader};
+use std::path::{Component, Path, PathBuf};
+
+use proxmox::const_regex;
+use proxmox::tools::fs;
+use proxmox_backup::api2::types::BLOCKDEVICE_NAME_REGEX;
+
+const_regex! {
+    VIRTIO_PART_REGEX = r"^vd[a-z]+(\d+)$";
+}
+
+lazy_static! {
+    static ref FS_OPT_MAP: HashMap<&'static str, &'static str> = {
+        let mut m = HashMap::new();
+
+        // otherwise ext complains about mounting read-only
+        m.insert("ext2", "noload");
+        m.insert("ext3", "noload");
+        m.insert("ext4", "noload");
+
+        // ufs2 is used as default since FreeBSD 5.0 released in 2003, so let's assume that
+        // whatever the user is trying to restore is not using anything older...
+        m.insert("ufs", "ufstype=ufs2");
+
+        m
+    };
+}
+
+pub enum ResolveResult {
+    Path(PathBuf),
+    BucketTypes(Vec<&'static str>),
+    BucketComponents(Vec<String>),
+}
+
+struct PartitionBucketData {
+    dev_node: String,
+    number: i32,
+    mountpoint: Option<PathBuf>,
+}
+
+/// A "Bucket" represents a mapping found on a disk, e.g. a partition, a zfs dataset or an LV. A
+/// uniquely identifying path to a file then consists of four components:
+/// "/disk/bucket/component/path"
+/// where
+///   disk: fidx file name
+///   bucket: bucket type
+///   component: identifier of the specific bucket
+///   path: relative path of the file on the filesystem indicated by the other parts, may contain
+///         more subdirectories
+/// e.g.: "/drive-scsi0/part/0/etc/passwd"
+enum Bucket {
+    Partition(PartitionBucketData),
+}
+
+impl Bucket {
+    fn filter_mut<'a, A: AsRef<str>, B: AsRef<str>>(
+        haystack: &'a mut Vec<Bucket>,
+        ty: A,
+        comp: B,
+    ) -> Option<&'a mut Bucket> {
+        let ty = ty.as_ref();
+        let comp = comp.as_ref();
+        haystack.iter_mut().find(|b| match b {
+            Bucket::Partition(data) => ty == "part" && comp.parse::<i32>().unwrap() == data.number,
+        })
+    }
+
+    fn type_string(&self) -> &'static str {
+        match self {
+            Bucket::Partition(_) => "part",
+        }
+    }
+
+    fn component_string(&self) -> String {
+        match self {
+            Bucket::Partition(data) => data.number.to_string(),
+        }
+    }
+}
+
+/// Functions related to the local filesystem. This mostly exists so we can use 'supported_fs' in
+/// try_mount while a Bucket is still mutably borrowed from DiskState.
+struct Filesystems {
+    supported_fs: Vec<String>,
+}
+
+impl Filesystems {
+    fn scan() -> Result<Self, Error> {
+        // detect kernel supported filesystems
+        let mut supported_fs = Vec::new();
+        for f in BufReader::new(File::open("/proc/filesystems")?)
+            .lines()
+            .filter_map(Result::ok)
+        {
+            // ZFS is treated specially, don't attempt to do a regular mount with it
+            let f = f.trim();
+            if !f.starts_with("nodev") && f != "zfs" {
+                supported_fs.push(f.to_owned());
+            }
+        }
+
+        Ok(Self { supported_fs })
+    }
+
+    fn ensure_mounted(&self, bucket: &mut Bucket) -> Result<PathBuf, Error> {
+        match bucket {
+            Bucket::Partition(data) => {
+                // regular data partition à la "/dev/vdxN"
+                if let Some(mp) = &data.mountpoint {
+                    return Ok(mp.clone());
+                }
+
+                let mp = format!("/mnt{}/", data.dev_node);
+                self.try_mount(&data.dev_node, &mp)?;
+                let mp = PathBuf::from(mp);
+                data.mountpoint = Some(mp.clone());
+                Ok(mp)
+            }
+        }
+    }
+
+    fn try_mount(&self, source: &str, target: &str) -> Result<(), Error> {
+        use nix::mount::*;
+
+        create_dir_all(target)?;
+
+        // try all supported fs until one works - this is the way Busybox's 'mount' does it too:
+        // https://git.busybox.net/busybox/tree/util-linux/mount.c?id=808d93c0eca49e0b22056e23d965f0d967433fbb#n2152
+        // note that ZFS is intentionally left out (see scan())
+        let flags =
+            MsFlags::MS_RDONLY | MsFlags::MS_NOEXEC | MsFlags::MS_NOSUID | MsFlags::MS_NODEV;
+        for fs in &self.supported_fs {
+            let fs: &str = fs.as_ref();
+            let opts = FS_OPT_MAP.get(fs).copied();
+            match mount(Some(source), target, Some(fs), flags, opts) {
+                Ok(()) => {
+                    info!("mounting '{}' succeeded, fstype: '{}'", source, fs);
+                    return Ok(());
+                }
+                Err(err) => {
+                    warn!("mount error on '{}' ({}) - {}", source, fs, err);
+                }
+            }
+        }
+
+        bail!("all mounts failed or no supported file system")
+    }
+}
+
+pub struct DiskState {
+    filesystems: Filesystems,
+    disk_map: HashMap<String, Vec<Bucket>>,
+}
+
+impl DiskState {
+    /// Scan all disks for supported buckets.
+    pub fn scan() -> Result<Self, Error> {
+        // create mapping for virtio drives and .fidx files (via serial description)
+        // note: disks::DiskManager relies on udev, which we don't have
+        let mut disk_map = HashMap::new();
+        for entry in proxmox_backup::tools::fs::scan_subdir(
+            libc::AT_FDCWD,
+            "/sys/block",
+            &BLOCKDEVICE_NAME_REGEX,
+        )?
+        .filter_map(Result::ok)
+        {
+            let name = unsafe { entry.file_name_utf8_unchecked() };
+            if !name.starts_with("vd") {
+                continue;
+            }
+
+            let sys_path: &str = &format!("/sys/block/{}", name);
+
+            let serial = fs::file_read_string(&format!("{}/serial", sys_path));
+            let fidx = match serial {
+                Ok(serial) => serial,
+                Err(err) => {
+                    warn!("disk '{}': could not read serial file - {}", name, err);
+                    continue;
+                }
+            };
+
+            let mut parts = Vec::new();
+            for entry in proxmox_backup::tools::fs::scan_subdir(
+                libc::AT_FDCWD,
+                sys_path,
+                &VIRTIO_PART_REGEX,
+            )?
+            .filter_map(Result::ok)
+            {
+                let part_name = unsafe { entry.file_name_utf8_unchecked() };
+                let devnode = format!("/dev/{}", part_name);
+                let part_path = format!("/sys/block/{}/{}", name, part_name);
+
+                // create partition device node for further use
+                let dev_num_str = fs::file_read_firstline(&format!("{}/dev", part_path))?;
+                let (major, minor) = dev_num_str.split_at(dev_num_str.find(':').unwrap());
+                Self::mknod_blk(&devnode, major.parse()?, minor[1..].trim_end().parse()?)?;
+
+                let number = fs::file_read_firstline(&format!("{}/partition", part_path))?
+                    .trim()
+                    .parse::<i32>()?;
+
+                info!(
+                    "drive '{}' ('{}'): found partition '{}' ({})",
+                    name, fidx, devnode, number
+                );
+
+                let bucket = Bucket::Partition(PartitionBucketData {
+                    dev_node: devnode,
+                    mountpoint: None,
+                    number,
+                });
+
+                parts.push(bucket);
+            }
+
+            disk_map.insert(fidx.to_owned(), parts);
+        }
+
+        Ok(Self {
+            filesystems: Filesystems::scan()?,
+            disk_map,
+        })
+    }
+
+    /// Given a path like "/drive-scsi0.img.fidx/part/0/etc/passwd", this will mount the first
+    /// partition of 'drive-scsi0' on-demand (i.e. if not already mounted) and return a path
+    /// pointing to the requested file locally, e.g. "/mnt/vda1/etc/passwd", which can be used to
+    /// read the file.  Given a partial path, i.e. only "/drive-scsi0.img.fidx" or
+    /// "/drive-scsi0.img.fidx/part", it will return a list of available bucket types or bucket
+    /// components, respectively.
+    pub fn resolve(&mut self, path: &Path) -> Result<ResolveResult, Error> {
+        let mut cmp = path.components().peekable();
+        match cmp.peek() {
+            Some(Component::RootDir) | Some(Component::CurDir) => {
+                cmp.next();
+            }
+            None => bail!("empty path cannot be resolved to file location"),
+            _ => {}
+        }
+
+        let req_fidx = match cmp.next() {
+            Some(Component::Normal(x)) => x.to_string_lossy(),
+            _ => bail!("no or invalid image in path"),
+        };
+
+        let buckets = match self.disk_map.get_mut(req_fidx.as_ref()) {
+            Some(x) => x,
+            None => bail!("given image '{}' not found", req_fidx),
+        };
+
+        let bucket_type = match cmp.next() {
+            Some(Component::Normal(x)) => x.to_string_lossy(),
+            Some(c) => bail!("invalid bucket in path: {:?}", c),
+            None => {
+                // list bucket types available
+                let mut types = buckets
+                    .iter()
+                    .map(|b| b.type_string())
+                    .collect::<Vec<&'static str>>();
+                // dedup requires duplicates to be consecutive, which is the case - see scan()
+                types.dedup();
+                return Ok(ResolveResult::BucketTypes(types));
+            }
+        };
+
+        let component = match cmp.next() {
+            Some(Component::Normal(x)) => x.to_string_lossy(),
+            Some(c) => bail!("invalid bucket component in path: {:?}", c),
+            None => {
+                // list bucket components available
+                let comps = buckets
+                    .iter()
+                    .filter(|b| b.type_string() == bucket_type)
+                    .map(Bucket::component_string)
+                    .collect();
+                return Ok(ResolveResult::BucketComponents(comps));
+            }
+        };
+
+        let mut bucket = match Bucket::filter_mut(buckets, &bucket_type, &component) {
+            Some(bucket) => bucket,
+            None => bail!(
+                "bucket/component path not found: {}/{}/{}",
+                req_fidx,
+                bucket_type,
+                component
+            ),
+        };
+
+        // bucket found, check mount
+        let mountpoint = self
+            .filesystems
+            .ensure_mounted(&mut bucket)
+            .map_err(|err| {
+                format_err!(
+                    "mounting '{}/{}/{}' failed: {}",
+                    req_fidx,
+                    bucket_type,
+                    component,
+                    err
+                )
+            })?;
+
+        let mut local_path = PathBuf::new();
+        local_path.push(mountpoint);
+        for rem in cmp {
+            local_path.push(rem);
+        }
+
+        Ok(ResolveResult::Path(local_path))
+    }
+
+    fn mknod_blk(path: &str, maj: u64, min: u64) -> Result<(), Error> {
+        use nix::sys::stat;
+        let dev = stat::makedev(maj, min);
+        stat::mknod(path, stat::SFlag::S_IFBLK, stat::Mode::S_IRWXU, dev)?;
+        Ok(())
+    }
+}
diff --git a/src/bin/proxmox_restore_daemon/mod.rs b/src/bin/proxmox_restore_daemon/mod.rs
index 3b52cf06..58e2bb6e 100644
--- a/src/bin/proxmox_restore_daemon/mod.rs
+++ b/src/bin/proxmox_restore_daemon/mod.rs
@@ -6,3 +6,6 @@ pub mod auth;
 
 mod watchdog;
 pub use watchdog::*;
+
+mod disk;
+pub use disk::*;
-- 
2.20.1





^ permalink raw reply	[flat|nested] 32+ messages in thread

* [pbs-devel] [PATCH v3 proxmox-backup 13/20] add tools/cpio encoding module
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (11 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 12/20] file-restore-daemon: add disk module Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 14/20] file-restore: add qemu-helper setuid binary Stefan Reiter
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/tools.rs      |  1 +
 src/tools/cpio.rs | 73 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 74 insertions(+)
 create mode 100644 src/tools/cpio.rs

diff --git a/src/tools.rs b/src/tools.rs
index 7e3bff7b..43fd070e 100644
--- a/src/tools.rs
+++ b/src/tools.rs
@@ -22,6 +22,7 @@ pub mod apt;
 pub mod async_io;
 pub mod borrow;
 pub mod cert;
+pub mod cpio;
 pub mod daemon;
 pub mod disks;
 pub mod format;
diff --git a/src/tools/cpio.rs b/src/tools/cpio.rs
new file mode 100644
index 00000000..8800e3ad
--- /dev/null
+++ b/src/tools/cpio.rs
@@ -0,0 +1,73 @@
+//! Provides a very basic "newc" format cpio encoder.
+//! See 'man 5 cpio' for format details, as well as:
+//! https://www.kernel.org/doc/html/latest/driver-api/early-userspace/buffer-format.html
+//! This does not provide full support for the format, only what is needed to include files in an
+//! initramfs intended for a Linux kernel.
+use anyhow::{bail, Error};
+use std::ffi::{CString, CStr};
+use tokio::io::{copy, AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
+
+/// Write a cpio file entry to an AsyncWrite.
+pub async fn append_file<W: AsyncWrite + Unpin, R: AsyncRead + Unpin>(
+    mut target: W,
+    content: R,
+    name: &CStr,
+    inode: u16,
+    mode: u16,
+    uid: u16,
+    gid: u16,
+    // negative mtimes are generally valid, but cpio defines all fields as unsigned
+    mtime: u64,
+    // c_filesize is an 8-digit hex field (i.e. 32 bit), so 4 GB is the maximum size - use u32 to be safe
+    size: u32,
+) -> Result<(), Error> {
+    let name = name.to_bytes_with_nul();
+
+    target.write_all(b"070701").await?; // c_magic
+    print_cpio_hex(&mut target, inode as u64).await?; // c_ino
+    print_cpio_hex(&mut target, mode as u64).await?; // c_mode
+    print_cpio_hex(&mut target, uid as u64).await?; // c_uid
+    print_cpio_hex(&mut target, gid as u64).await?; // c_gid
+    print_cpio_hex(&mut target, 0).await?; // c_nlink
+    print_cpio_hex(&mut target, mtime as u64).await?; // c_mtime
+    print_cpio_hex(&mut target, size as u64).await?; // c_filesize
+    print_cpio_hex(&mut target, 0).await?; // c_devmajor
+    print_cpio_hex(&mut target, 0).await?; // c_devminor
+    print_cpio_hex(&mut target, 0).await?; // c_rdevmajor
+    print_cpio_hex(&mut target, 0).await?; // c_rdevminor
+    print_cpio_hex(&mut target, name.len() as u64).await?; // c_namesize
+    print_cpio_hex(&mut target, 0).await?; // c_check (ignored for newc)
+
+    target.write_all(name).await?;
+    let header_size = 6 + 8*13 + name.len();
+    let mut name_pad = header_size;
+    while name_pad & 3 != 0 {
+        target.write_u8(0).await?;
+        name_pad += 1;
+    }
+
+    let mut content = content.take(size as u64);
+    let copied = copy(&mut content, &mut target).await?;
+    if copied < size as u64 {
+        bail!("cpio: not enough data, or size too big - encoding invalid");
+    }
+    let mut data_pad = copied;
+    while data_pad & 3 != 0 {
+        target.write_u8(0).await?;
+        data_pad += 1;
+    }
+
+    Ok(())
+}
+
+/// Write the TRAILER!!! file to an AsyncWrite, signifying the end of a cpio archive. Note that you
+/// can immediately append more files afterwards to create a concatenated archive; the kernel, for
+/// example, will merge these upon loading an initramfs.
+pub async fn append_trailer<W: AsyncWrite + Unpin>(target: W) -> Result<(), Error> {
+    let name = CString::new("TRAILER!!!").unwrap();
+    append_file(target, tokio::io::empty(), &name, 0, 0, 0, 0, 0, 0).await
+}
+
+async fn print_cpio_hex<W: AsyncWrite + Unpin>(target: &mut W, value: u64) -> Result<(), Error> {
+    target.write_all(format!("{:08x}", value).as_bytes()).await.map_err(|e| e.into())
+}
-- 
2.20.1






* [pbs-devel] [PATCH v3 proxmox-backup 14/20] file-restore: add qemu-helper setuid binary
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (12 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 13/20] add tools/cpio encoding module Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 14:15   ` Oguz Bektas
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 15/20] file-restore: add basic VM/block device support Stefan Reiter
                   ` (6 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Starting a VM requires root (for /dev/kvm and /dev/vhost-vsock), but we
want a regular user to use this functionality. Implement a setuid binary
that allows one to very specifically only start a restore VM, and
nothing else.

Keeps the log files of the last 16 VM starts (log output generated by
the daemon binary via QEMU's serial-to-logfile interface). Also put them
into a separate /var/log/proxmox-backup/file-restore directory.
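
As an aside on the privilege model: with a setuid-root binary the effective
UID is root while the real UID stays that of the invoking user. A minimal,
Linux-only sketch of inspecting this via /proc (illustrative only - the
helper itself uses nix::unistd::Uid::effective() instead):

```rust
use std::fs;

/// Read the (real, effective) UID of the current process from
/// /proc/self/status. In a setuid-root binary the effective UID is 0 while
/// the real UID remains that of the invoking user.
fn uids() -> Option<(u32, u32)> {
    let status = fs::read_to_string("/proc/self/status").ok()?;
    // the "Uid:" line lists real, effective, saved and fs UID in that order
    let line = status.lines().find(|l| l.starts_with("Uid:"))?;
    let mut fields = line.split_whitespace().skip(1);
    let real = fields.next()?.parse().ok()?;
    let effective = fields.next()?.parse().ok()?;
    Some((real, effective))
}

fn main() {
    match uids() {
        Some((real, 0)) => println!("effective root, invoked by uid {}", real),
        Some((_, eff)) => println!("effective uid {} - would refuse to start VM", eff),
        None => println!("could not determine uids"),
    }
}
```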

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v2:
* split this off from proxmox-file-restore binary

 Makefile                               |   4 +-
 debian/proxmox-file-restore.install    |   1 +
 debian/rules                           |   2 +-
 src/bin/proxmox-restore-qemu-helper.rs | 372 +++++++++++++++++++++++++
 src/buildcfg.rs                        |  21 ++
 5 files changed, 398 insertions(+), 2 deletions(-)
 create mode 100644 src/bin/proxmox-restore-qemu-helper.rs

diff --git a/Makefile b/Makefile
index 269bb80c..fbbf88a2 100644
--- a/Makefile
+++ b/Makefile
@@ -155,8 +155,10 @@ install: $(COMPILED_BINS)
 	install -dm755 $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/file-restore
 	$(foreach i,$(RESTORE_BIN), \
 	    install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/file-restore/ ;)
-	# install sg-tape-cmd as setuid binary
+	# install sg-tape-cmd and proxmox-restore-qemu-helper as setuid binary
 	install -m4755 -o root -g root $(COMPILEDIR)/sg-tape-cmd $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/sg-tape-cmd
+	install -m4755 -o root -g root $(COMPILEDIR)/proxmox-restore-qemu-helper \
+	    $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/file-restore/proxmox-restore-qemu-helper
 	$(foreach i,$(SERVICE_BIN), \
 	    install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/ ;)
 	$(MAKE) -C www install
diff --git a/debian/proxmox-file-restore.install b/debian/proxmox-file-restore.install
index d952836e..0f0e9d56 100644
--- a/debian/proxmox-file-restore.install
+++ b/debian/proxmox-file-restore.install
@@ -2,3 +2,4 @@ usr/bin/proxmox-file-restore
 usr/share/man/man1/proxmox-file-restore.1
 usr/share/zsh/vendor-completions/_proxmox-file-restore
 usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/proxmox-restore-daemon
+usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/proxmox-restore-qemu-helper
diff --git a/debian/rules b/debian/rules
index ce2db72e..ac9de7fe 100755
--- a/debian/rules
+++ b/debian/rules
@@ -43,7 +43,7 @@ override_dh_installsystemd:
 	dh_installsystemd --no-start --no-restart-after-upgrade
 
 override_dh_fixperms:
-	dh_fixperms --exclude sg-tape-cmd
+	dh_fixperms --exclude sg-tape-cmd --exclude proxmox-restore-qemu-helper
 
 # workaround https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933541
 # TODO: remove once available (Debian 11 ?)
diff --git a/src/bin/proxmox-restore-qemu-helper.rs b/src/bin/proxmox-restore-qemu-helper.rs
new file mode 100644
index 00000000..f56a6607
--- /dev/null
+++ b/src/bin/proxmox-restore-qemu-helper.rs
@@ -0,0 +1,372 @@
+//! Starts a QEMU VM for single file restore.
+//! Needs to be setuid, or otherwise able to access /dev/kvm and /dev/vhost-vsock.
+use std::fs::{File, OpenOptions};
+use std::io::prelude::*;
+use std::os::unix::io::{AsRawFd, FromRawFd};
+use std::path::PathBuf;
+use std::time::Duration;
+
+use anyhow::{bail, format_err, Error};
+use serde_json::{json, Value};
+use tokio::time;
+
+use nix::sys::signal::{kill, Signal};
+use nix::unistd::Pid;
+
+use proxmox::{
+    api::{api, cli::*, RpcEnvironment},
+    tools::{
+        fd::Fd,
+        fs::{create_path, file_read_string, make_tmp_file, CreateOptions},
+    },
+};
+
+use proxmox_backup::backup::backup_user;
+use proxmox_backup::client::{VsockClient, DEFAULT_VSOCK_PORT};
+use proxmox_backup::{buildcfg, tools};
+
+pub mod proxmox_client_tools;
+use proxmox_client_tools::REPO_URL_SCHEMA;
+
+const PBS_VM_NAME: &str = "pbs-restore-vm";
+const MAX_CID_TRIES: u64 = 32;
+
+fn create_restore_log_dir() -> Result<String, Error> {
+    let logpath = format!("{}/file-restore", buildcfg::PROXMOX_BACKUP_LOG_DIR);
+
+    proxmox::try_block!({
+        let backup_user = backup_user()?;
+        let opts = CreateOptions::new()
+            .owner(backup_user.uid)
+            .group(backup_user.gid);
+
+        let opts_root = CreateOptions::new()
+            .owner(nix::unistd::ROOT)
+            .group(nix::unistd::Gid::from_raw(0));
+
+        create_path(buildcfg::PROXMOX_BACKUP_LOG_DIR, None, Some(opts))?;
+        create_path(&logpath, None, Some(opts_root))?;
+        Ok(())
+    })
+    .map_err(|err: Error| format_err!("unable to create file-restore log dir - {}", err))?;
+
+    Ok(logpath)
+}
+
+fn validate_img_existance() -> Result<(), Error> {
+    let kernel = PathBuf::from(buildcfg::PROXMOX_BACKUP_KERNEL_FN);
+    let initramfs = PathBuf::from(buildcfg::PROXMOX_BACKUP_INITRAMFS_FN);
+    if !kernel.exists() || !initramfs.exists() {
+        bail!("cannot run file-restore VM: package 'proxmox-file-restore' is not (correctly) installed");
+    }
+    Ok(())
+}
+
+fn try_kill_vm(pid: i32) -> Result<(), Error> {
+    let pid = Pid::from_raw(pid);
+    if let Ok(()) = kill(pid, None) {
+        // process is running (and we could kill it), check if it is actually ours
+        // (if it errors assume we raced with the process's death and ignore it)
+        if let Ok(cmdline) = file_read_string(format!("/proc/{}/cmdline", pid)) {
+            if cmdline.split('\0').any(|a| a == PBS_VM_NAME) {
+                // yes, it's ours, kill it brutally with SIGKILL, no reason to take
+                // any chances - in this state it's most likely broken anyway
+                if let Err(err) = kill(pid, Signal::SIGKILL) {
+                    bail!(
+                        "reaping broken VM (pid {}) with SIGKILL failed: {}",
+                        pid,
+                        err
+                    );
+                }
+            }
+        }
+    }
+
+    Ok(())
+}
+
+async fn create_temp_initramfs(ticket: &str) -> Result<(Fd, String), Error> {
+    use std::ffi::CString;
+    use tokio::fs::File;
+
+    let (tmp_fd, tmp_path) =
+        make_tmp_file("/tmp/file-restore-qemu.initramfs.tmp", CreateOptions::new())?;
+    nix::unistd::unlink(&tmp_path)?;
+    tools::fd_change_cloexec(tmp_fd.0, false)?;
+
+    let mut f = File::from_std(unsafe { std::fs::File::from_raw_fd(tmp_fd.0) });
+    let mut base = File::open(buildcfg::PROXMOX_BACKUP_INITRAMFS_FN).await?;
+
+    tokio::io::copy(&mut base, &mut f).await?;
+
+    let name = CString::new("ticket").unwrap();
+    tools::cpio::append_file(
+        &mut f,
+        ticket.as_bytes(),
+        &name,
+        0,
+        (libc::S_IFREG | 0o400) as u16,
+        0,
+        0,
+        0,
+        ticket.len() as u32,
+    )
+    .await?;
+    tools::cpio::append_trailer(&mut f).await?;
+
+    // forget the tokio file, we close the file descriptor via the returned Fd
+    std::mem::forget(f);
+
+    let path = format!("/dev/fd/{}", &tmp_fd.0);
+    Ok((tmp_fd, path))
+}
+
+async fn start_vm(
+    // u16 so we can do wrapping_add without going too high
+    mut cid: u16,
+    repo: &str,
+    snapshot: &str,
+    files: impl Iterator<Item = &str>,
+    ticket: &str,
+) -> Result<(i32, i32), Error> {
+    validate_img_existance()?;
+
+    if std::env::var("PBS_PASSWORD").is_err() {
+        bail!("environment variable PBS_PASSWORD has to be set for QEMU VM restore");
+    }
+    if std::env::var("PBS_FINGERPRINT").is_err() {
+        bail!("environment variable PBS_FINGERPRINT has to be set for QEMU VM restore");
+    }
+
+    let pid;
+    let (pid_fd, pid_path) = make_tmp_file("/tmp/file-restore-qemu.pid.tmp", CreateOptions::new())?;
+    nix::unistd::unlink(&pid_path)?;
+    tools::fd_change_cloexec(pid_fd.0, false)?;
+
+    let (_ramfs_fd, ramfs_path) = create_temp_initramfs(ticket).await?;
+
+    let logpath = create_restore_log_dir()?;
+    let logfile = &format!("{}/qemu.log", logpath);
+    let mut logrotate = tools::logrotate::LogRotate::new(logfile, false)
+        .ok_or_else(|| format_err!("could not get QEMU log file names"))?;
+
+    if let Err(err) = logrotate.do_rotate(CreateOptions::default(), Some(16)) {
+        eprintln!("warning: logrotate for QEMU log file failed - {}", err);
+    }
+
+    let mut logfd = OpenOptions::new()
+        .append(true)
+        .create_new(true)
+        .open(logfile)?;
+    tools::fd_change_cloexec(logfd.as_raw_fd(), false)?;
+
+    // preface log file with start timestamp so one can see how long QEMU took to start
+    writeln!(logfd, "[{}] PBS file restore VM log", {
+        let now = proxmox::tools::time::epoch_i64();
+        proxmox::tools::time::epoch_to_rfc3339(now)?
+    },)?;
+
+    let base_args = [
+        "-chardev",
+        &format!(
+            "file,id=log,path=/dev/null,logfile=/dev/fd/{},logappend=on",
+            logfd.as_raw_fd()
+        ),
+        "-serial",
+        "chardev:log",
+        "-vnc",
+        "none",
+        "-enable-kvm",
+        "-m",
+        "512",
+        "-kernel",
+        buildcfg::PROXMOX_BACKUP_KERNEL_FN,
+        "-initrd",
+        &ramfs_path,
+        "-append",
+        "quiet",
+        "-daemonize",
+        "-pidfile",
+        &format!("/dev/fd/{}", pid_fd.as_raw_fd()),
+        "-name",
+        PBS_VM_NAME,
+    ];
+
+    // Generate drive arguments for all fidx files in backup snapshot
+    let mut drives = Vec::new();
+    let mut id = 0;
+    for file in files {
+        if !file.ends_with(".img.fidx") {
+            continue;
+        }
+        drives.push("-drive".to_owned());
+        drives.push(format!(
+            "file=pbs:repository={},,snapshot={},,archive={},read-only=on,if=none,id=drive{}",
+            repo, snapshot, file, id
+        ));
+        drives.push("-device".to_owned());
+        // drive serial is used by VM to map .fidx files to /dev paths
+        drives.push(format!("virtio-blk-pci,drive=drive{},serial={}", id, file));
+        id += 1;
+    }
+
+    // Try starting QEMU in a loop to retry if we fail because of a bad 'cid' value
+    let mut attempts = 0;
+    loop {
+        let mut qemu_cmd = std::process::Command::new("qemu-system-x86_64");
+        qemu_cmd.args(base_args.iter());
+        qemu_cmd.args(&drives);
+        qemu_cmd.arg("-device");
+        qemu_cmd.arg(format!(
+            "vhost-vsock-pci,guest-cid={},disable-legacy=on",
+            cid
+        ));
+
+        qemu_cmd.stdout(std::process::Stdio::null());
+        qemu_cmd.stderr(std::process::Stdio::piped());
+
+        let res = tokio::task::block_in_place(|| qemu_cmd.spawn()?.wait_with_output())?;
+
+        if res.status.success() {
+            // at this point QEMU is already daemonized and running, so if anything fails we
+            // technically leave behind a zombie-VM... this shouldn't matter, as it will stop
+            // itself soon enough (timer), and the following operations are unlikely to fail
+            let mut pid_file = unsafe { File::from_raw_fd(pid_fd.as_raw_fd()) };
+            std::mem::forget(pid_fd); // FD ownership is now in pid_fd/File
+            let mut pidstr = String::new();
+            pid_file.read_to_string(&mut pidstr)?;
+            pid = pidstr.trim_end().parse().map_err(|err| {
+                format_err!("cannot parse PID returned by QEMU ('{}'): {}", &pidstr, err)
+            })?;
+            break;
+        } else {
+            let out = String::from_utf8_lossy(&res.stderr);
+            if out.contains("unable to set guest cid: Address already in use") {
+                attempts += 1;
+                if attempts >= MAX_CID_TRIES {
+                    bail!("CID '{}' in use, but max attempts reached, aborting", cid);
+                }
+                // CID in use, try next higher one
+                eprintln!("CID '{}' in use by other VM, attempting next one", cid);
+                // skip special-meaning low values
+                cid = cid.wrapping_add(1).max(10);
+            } else {
+                eprint!("{}", out);
+                bail!("Starting VM failed. See output above for more information.");
+            }
+        }
+    }
+
+    // QEMU has started successfully, now wait for virtio socket to become ready
+    let pid_t = Pid::from_raw(pid);
+    for _ in 0..60 {
+        let client = VsockClient::new(cid as i32, DEFAULT_VSOCK_PORT, Some(ticket.to_owned()));
+        if let Ok(Ok(_)) =
+            time::timeout(Duration::from_secs(2), client.get("api2/json/status", None)).await
+        {
+            return Ok((pid, cid as i32));
+        }
+        if kill(pid_t, None).is_err() {
+            // QEMU exited
+            bail!("VM exited before connection could be established");
+        }
+        time::sleep(Duration::from_millis(200)).await;
+    }
+
+    // start failed
+    if let Err(err) = try_kill_vm(pid) {
+        eprintln!("killing failed VM failed: {}", err);
+    }
+    bail!("starting VM timed out");
+}
+
+#[api(
+   input: {
+       properties: {
+           repository: {
+               schema: REPO_URL_SCHEMA,
+           },
+           snapshot: {
+               type: String,
+               description: "Group/Snapshot path",
+           },
+           ticket: {
+               description: "A unique key acting as a password for communicating with the VM.",
+               type: String,
+           },
+           cid: {
+               description: "Request a specific CID; if it is unavailable, the next free one will be used",
+               type: i32,
+               optional: true,
+           },
+           "files": {
+               description: "Files in snapshot to map to VM",
+               type: Array,
+               items: {
+                   description: "A .img.fidx file in the given snapshot",
+                   type: String,
+               },
+           },
+       },
+   },
+   returns: {
+       description: "Information about the started VM",
+       type: Object,
+       properties: {
+           cid: {
+               description: "The vsock CID of the started VM",
+               type: i32,
+           },
+           pid: {
+               description: "The process ID of the started VM",
+               type: i32,
+           },
+       },
+   }
+)]
+/// Start a VM with the given parameters and return its PID and CID
+async fn start(param: Value) -> Result<Value, Error> {
+    let repo = tools::required_string_param(&param, "repository")?;
+    let snapshot = tools::required_string_param(&param, "snapshot")?;
+    let files = tools::required_array_param(&param, "files")?;
+    let ticket = tools::required_string_param(&param, "ticket")?;
+
+    let running_uid = nix::unistd::Uid::current();
+    let cid = (param["cid"].as_i64().unwrap_or(running_uid.as_raw() as i64) & 0xFFFF).max(10);
+
+    let (pid, cid) = start_vm(
+        cid as u16,
+        repo,
+        snapshot,
+        files.iter().map(|f| f.as_str().unwrap()),
+        ticket,
+    )
+    .await?;
+
+    // always print json, this is not supposed to be called manually anyway
+    print!("{}", json!({ "pid": pid, "cid": cid }));
+    Ok(Value::Null)
+}
+
+fn main() -> Result<(), Error> {
+    let effective_uid = nix::unistd::Uid::effective();
+    if !effective_uid.is_root() {
+        bail!("this program needs to be run with setuid root");
+    }
+
+    let cmd_def = CliCommandMap::new().insert(
+        "start",
+        CliCommand::new(&API_METHOD_START).arg_param(&["repository", "snapshot", "ticket", "cid"]),
+    );
+
+    let mut rpcenv = CliEnvironment::new();
+    rpcenv.set_auth_id(Some(String::from("root@pam")));
+
+    run_cli_command(
+        cmd_def,
+        rpcenv,
+        Some(|future| proxmox_backup::tools::runtime::main(future)),
+    );
+
+    Ok(())
+}
diff --git a/src/buildcfg.rs b/src/buildcfg.rs
index 4f333288..d80c5a12 100644
--- a/src/buildcfg.rs
+++ b/src/buildcfg.rs
@@ -10,6 +10,14 @@ macro_rules! PROXMOX_BACKUP_RUN_DIR_M { () => ("/run/proxmox-backup") }
 #[macro_export]
 macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
 
+#[macro_export]
+macro_rules! PROXMOX_BACKUP_CACHE_DIR_M { () => ("/var/cache/proxmox-backup") }
+
+#[macro_export]
+macro_rules! PROXMOX_BACKUP_FILE_RESTORE_BIN_DIR_M {
+    () => ("/usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore")
+}
+
 /// namespaced directory for in-memory (tmpfs) run state
 pub const PROXMOX_BACKUP_RUN_DIR: &str = PROXMOX_BACKUP_RUN_DIR_M!();
 
@@ -30,6 +38,19 @@ pub const PROXMOX_BACKUP_PROXY_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(
 /// the PID filename for the privileged api daemon
 pub const PROXMOX_BACKUP_API_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/api.pid");
 
+/// filename of the cached initramfs to use for booting single file restore VMs, this file is
+/// automatically created by APT hooks
+pub const PROXMOX_BACKUP_INITRAMFS_FN: &str =
+    concat!(PROXMOX_BACKUP_CACHE_DIR_M!(), "/file-restore-initramfs.img");
+
+/// filename of the kernel to use for booting single file restore VMs
+pub const PROXMOX_BACKUP_KERNEL_FN: &str =
+    concat!(PROXMOX_BACKUP_FILE_RESTORE_BIN_DIR_M!(), "/bzImage");
+
+/// setuid binary location for starting restore VMs
+pub const PROXMOX_RESTORE_QEMU_HELPER_FN: &str =
+    concat!(PROXMOX_BACKUP_FILE_RESTORE_BIN_DIR_M!(), "/proxmox-restore-qemu-helper");
+
 /// Prepend configuration directory to a file name
 ///
 /// This is a simply way to get the full path for configuration files.
-- 
2.20.1






* [pbs-devel] [PATCH v3 proxmox-backup 15/20] file-restore: add basic VM/block device support
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (13 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 14/20] file-restore: add qemu-helper setuid binary Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-04-01 15:43   ` [pbs-devel] [PATCH v4 " Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 16/20] debian/client: add postinst hook to rebuild file-restore initramfs Stefan Reiter
                   ` (5 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Includes methods to start, stop and list QEMU file-restore VMs, as well
as CLI commands for the latter two (start is implicit).

The implementation is abstracted behind the concept of a
"BlockRestoreDriver", so other methods can be implemented later (e.g.
mapping directly to loop devices on the host, using other hypervisors
than QEMU, etc...).
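
The abstraction could be pictured roughly like this (an illustrative sketch
only - the trait and method names in the actual block_driver.rs may differ):

```rust
// Illustrative driver abstraction; stands in for the real BlockRestoreDriver
// trait, which may have different method names and signatures.
trait BlockRestoreDriver {
    /// List names of currently running restore instances this driver manages.
    fn status(&self) -> Vec<String>;
    /// Stop the restore instance identified by `name`.
    fn stop(&self, name: &str) -> Result<(), String>;
}

/// Dummy driver standing in for the QEMU implementation.
struct DummyDriver {
    running: Vec<String>,
}

impl BlockRestoreDriver for DummyDriver {
    fn status(&self) -> Vec<String> {
        self.running.clone()
    }
    fn stop(&self, name: &str) -> Result<(), String> {
        if self.running.iter().any(|n| n == name) {
            Ok(())
        } else {
            Err(format!("no such restore VM: {}", name))
        }
    }
}

fn main() {
    let drv = DummyDriver { running: vec!["qemu:repo:snap".to_owned()] };
    assert_eq!(drv.status().len(), 1);
    assert!(drv.stop("qemu:repo:snap").is_ok());
    assert!(drv.stop("missing").is_err());
    println!("driver sketch ok");
}
```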

Starting VMs is currently unused but will be needed for further changes.

The design for the QEMU driver uses a locked 'map' file
(/run/user/$UID/restore-vm-map.json) containing a JSON encoding of
currently running VMs. VMs are addressed by a 'name', which is a
systemd-unit encoded combination of repository and snapshot string, thus
uniquely identifying it.
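
The systemd-unit encoding mentioned here maps '/' to '-' and other unsafe
bytes to \xXX escapes. A rough, std-only approximation (not the exact helper
the patch uses, which may differ in detail):

```rust
/// Rough approximation of systemd unit-name escaping, as could be used to
/// derive a VM 'name' from a repository + snapshot string. Illustrative only.
fn systemd_escape(s: &str) -> String {
    let mut out = String::new();
    for (i, b) in s.bytes().enumerate() {
        match b {
            b'/' => out.push('-'),
            b'0'..=b'9' | b'a'..=b'z' | b'A'..=b'Z' | b'_' => out.push(b as char),
            // a leading '.' must be escaped, any other '.' passes through
            b'.' if i != 0 => out.push('.'),
            _ => out.push_str(&format!("\\x{:02x}", b)),
        }
    }
    out
}

fn main() {
    // e.g. a repository and snapshot combined into one identifier
    let name = systemd_escape("user@pbs@host:store/vm/100/2021-03-31");
    println!("{}", name);
}
```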

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v3:
* fix crash on 'status' with '--output-format json(-pretty)'

v2:
* this now works per-user, utilizing the setuid helper binary to call QEMU

 src/bin/proxmox-file-restore.rs               |  16 +-
 src/bin/proxmox_client_tools/mod.rs           |  17 +
 src/bin/proxmox_file_restore/block_driver.rs  | 163 +++++++++
 .../proxmox_file_restore/block_driver_qemu.rs | 309 ++++++++++++++++++
 src/bin/proxmox_file_restore/mod.rs           |   5 +
 5 files changed, 507 insertions(+), 3 deletions(-)
 create mode 100644 src/bin/proxmox_file_restore/block_driver.rs
 create mode 100644 src/bin/proxmox_file_restore/block_driver_qemu.rs
 create mode 100644 src/bin/proxmox_file_restore/mod.rs

diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index f8affc03..0c2050f2 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -35,6 +35,9 @@ use proxmox_client_tools::{
     REPO_URL_SCHEMA,
 };
 
+mod proxmox_file_restore;
+use proxmox_file_restore::*;
+
 enum ExtractPath {
     ListArchives,
     Pxar(String, Vec<u8>),
@@ -51,7 +54,7 @@ fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
         return Ok(ExtractPath::ListArchives);
     }
 
-    while bytes.len() > 0 && bytes[0] == b'/' {
+    while !bytes.is_empty() && bytes[0] == b'/' {
         bytes.remove(0);
     }
 
@@ -327,7 +330,7 @@ async fn extract(
             let file = root
                 .lookup(OsStr::from_bytes(&path))
                 .await?
-                .ok_or(format_err!("error opening '{:?}'", path))?;
+                .ok_or_else(|| format_err!("error opening '{:?}'", path))?;
 
             if let Some(target) = target {
                 extract_sub_dir(target, decoder, OsStr::from_bytes(&path), verbose).await?;
@@ -369,9 +372,16 @@ fn main() {
         .completion_cb("snapshot", complete_group_or_snapshot)
         .completion_cb("target", tools::complete_file_name);
 
+    let status_cmd_def = CliCommand::new(&API_METHOD_STATUS);
+    let stop_cmd_def = CliCommand::new(&API_METHOD_STOP)
+        .arg_param(&["name"])
+        .completion_cb("name", complete_block_driver_ids);
+
     let cmd_def = CliCommandMap::new()
         .insert("list", list_cmd_def)
-        .insert("extract", restore_cmd_def);
+        .insert("extract", restore_cmd_def)
+        .insert("status", status_cmd_def)
+        .insert("stop", stop_cmd_def);
 
     let rpcenv = CliEnvironment::new();
     run_cli_command(
diff --git a/src/bin/proxmox_client_tools/mod.rs b/src/bin/proxmox_client_tools/mod.rs
index 73744ba2..03276993 100644
--- a/src/bin/proxmox_client_tools/mod.rs
+++ b/src/bin/proxmox_client_tools/mod.rs
@@ -372,3 +372,20 @@ pub fn place_xdg_file(
         .and_then(|base| base.place_config_file(file_name).map_err(Error::from))
         .with_context(|| format!("failed to place {} in xdg home", description))
 }
+
+/// Returns a runtime dir owned by the current user
+pub fn get_user_run_dir() -> Result<std::path::PathBuf, Error> {
+    if let Ok(xdg) = base_directories() {
+        if let Ok(path) = xdg.create_runtime_directory("proxmox-backup") {
+            return Ok(path);
+        }
+    }
+    let uid = nix::unistd::Uid::current();
+    let mut path: std::path::PathBuf = format!("/run/user/{}/", uid).into();
+    if !path.exists() {
+        bail!("XDG_RUNTIME_DIR is unavailable, and '{}' doesn't exist", path.display());
+    }
+    path.push("proxmox-backup");
+    std::fs::create_dir_all(&path)?;
+    Ok(path)
+}
diff --git a/src/bin/proxmox_file_restore/block_driver.rs b/src/bin/proxmox_file_restore/block_driver.rs
new file mode 100644
index 00000000..9c6fc5ac
--- /dev/null
+++ b/src/bin/proxmox_file_restore/block_driver.rs
@@ -0,0 +1,163 @@
+//! Abstraction layer over different methods of accessing a block backup
+use anyhow::{bail, Error};
+use serde::{Deserialize, Serialize};
+use serde_json::{json, Value};
+
+use std::collections::HashMap;
+use std::future::Future;
+use std::hash::BuildHasher;
+use std::pin::Pin;
+
+use proxmox_backup::backup::{BackupDir, BackupManifest};
+use proxmox_backup::client::BackupRepository;
+
+use proxmox::api::{api, cli::*};
+
+use super::block_driver_qemu::QemuBlockDriver;
+
+/// Contains details about a snapshot that is to be accessed by block file restore
+pub struct SnapRestoreDetails {
+    pub repo: BackupRepository,
+    pub snapshot: BackupDir,
+    pub manifest: BackupManifest,
+}
+
+/// Return value of a BlockRestoreDriver.status() call, 'id' must be valid for .stop(id)
+pub struct DriverStatus {
+    pub id: String,
+    pub data: Value,
+}
+
+pub type Async<R> = Pin<Box<dyn Future<Output = R> + Send>>;
+
+/// An abstract implementation for retrieving data out of a block file backup
+pub trait BlockRestoreDriver {
+    /// Return status of all running/mapped images, result value is (id, extra data), where id must
+    /// match the ones returned from list()
+    fn status(&self) -> Async<Result<Vec<DriverStatus>, Error>>;
+    /// Stop/Close a running restore method
+    fn stop(&self, id: String) -> Async<Result<(), Error>>;
+    /// Returned ids must be prefixed with driver type so that they cannot collide between drivers,
+    /// the returned values must be passable to stop()
+    fn list(&self) -> Vec<String>;
+}
+
+#[api()]
+#[derive(Debug, Serialize, Deserialize, PartialEq, Clone, Copy)]
+pub enum BlockDriverType {
+    /// Uses a small QEMU/KVM virtual machine to map images securely. Requires PVE-patched QEMU.
+    Qemu,
+}
+
+impl BlockDriverType {
+    fn resolve(&self) -> impl BlockRestoreDriver {
+        match self {
+            BlockDriverType::Qemu => QemuBlockDriver {},
+        }
+    }
+}
+
+const DEFAULT_DRIVER: BlockDriverType = BlockDriverType::Qemu;
+const ALL_DRIVERS: &[BlockDriverType] = &[BlockDriverType::Qemu];
+
+#[api(
+   input: {
+       properties: {
+            "driver": {
+                type: BlockDriverType,
+                optional: true,
+            },
+            "output-format": {
+                schema: OUTPUT_FORMAT,
+                optional: true,
+            },
+        },
+   },
+)]
+/// Retrieve status information about currently running/mapped restore images
+pub async fn status(driver: Option<BlockDriverType>, param: Value) -> Result<(), Error> {
+    let output_format = get_output_format(&param);
+    let text = output_format == "text";
+
+    let mut ret = json!({});
+
+    for dt in ALL_DRIVERS {
+        if driver.is_some() && &driver.unwrap() != dt {
+            continue;
+        }
+
+        let drv_name = format!("{:?}", dt);
+        let drv = dt.resolve();
+        match drv.status().await {
+            Ok(data) if data.is_empty() => {
+                if text {
+                    println!("{}: no mappings", drv_name);
+                } else {
+                    ret[drv_name] = json!({});
+                }
+            }
+            Ok(data) => {
+                if text {
+                    println!("{}:", &drv_name);
+                }
+
+                ret[&drv_name]["ids"] = json!({});
+                for status in data {
+                    if text {
+                        println!("{} \t({})", status.id, status.data);
+                    } else {
+                        ret[&drv_name]["ids"][status.id] = status.data;
+                    }
+                }
+            }
+            Err(err) => {
+                if text {
+                    eprintln!("error getting status from driver '{}' - {}", drv_name, err);
+                } else {
+                    ret[drv_name] = json!({ "error": format!("{}", err) });
+                }
+            }
+        }
+    }
+
+    if !text {
+        format_and_print_result(&ret, &output_format);
+    }
+
+    Ok(())
+}
+
+#[api(
+   input: {
+       properties: {
+            "name": {
+                type: String,
+                description: "The name of the VM to stop.",
+            },
+        },
+   },
+)]
+/// Immediately stop/unmap a given image. Not typically necessary, as VMs will stop themselves
+/// after a timer anyway.
+pub async fn stop(name: String) -> Result<(), Error> {
+    for drv in ALL_DRIVERS.iter().map(BlockDriverType::resolve) {
+        if drv.list().contains(&name) {
+            return drv.stop(name).await;
+        }
+    }
+
+    bail!("no mapping with name '{}' found", name);
+}
+
+/// Autocompletion handler for block mappings
+pub fn complete_block_driver_ids<S: BuildHasher>(
+    _arg: &str,
+    _param: &HashMap<String, String, S>,
+) -> Vec<String> {
+    ALL_DRIVERS
+        .iter()
+        .map(BlockDriverType::resolve)
+        .map(|d| d.list())
+        .flatten()
+        .collect()
+}
diff --git a/src/bin/proxmox_file_restore/block_driver_qemu.rs b/src/bin/proxmox_file_restore/block_driver_qemu.rs
new file mode 100644
index 00000000..5fda5d6f
--- /dev/null
+++ b/src/bin/proxmox_file_restore/block_driver_qemu.rs
@@ -0,0 +1,309 @@
+//! Block file access via a small QEMU restore VM using the PBS block driver in QEMU
+use anyhow::{bail, Error};
+use futures::FutureExt;
+use serde::{Deserialize, Serialize};
+use serde_json::{json, Value};
+
+use std::collections::HashMap;
+use std::fs::{File, OpenOptions};
+use std::io::{prelude::*, SeekFrom};
+
+use proxmox::tools::fs::lock_file;
+use proxmox_backup::backup::BackupDir;
+use proxmox_backup::buildcfg;
+use proxmox_backup::client::*;
+use proxmox_backup::tools;
+
+use super::block_driver::*;
+use crate::proxmox_client_tools::get_user_run_dir;
+
+const RESTORE_VM_MAP: &str = "restore-vm-map.json";
+
+pub struct QemuBlockDriver {}
+
+#[derive(Clone, Hash, Serialize, Deserialize)]
+struct VMState {
+    pid: i32,
+    cid: i32,
+    ticket: String,
+}
+
+struct VMStateMap {
+    map: HashMap<String, VMState>,
+    file: File,
+}
+
+impl VMStateMap {
+    fn open_file_raw(write: bool) -> Result<File, Error> {
+        let mut path = get_user_run_dir()?;
+        std::fs::create_dir_all(&path)?;
+        path.push(RESTORE_VM_MAP);
+        OpenOptions::new()
+            .read(true)
+            .write(write)
+            .create(write)
+            .open(path)
+            .map_err(Error::from)
+    }
+
+    /// Acquire a lock on the state map and retrieve a deserialized version
+    fn load() -> Result<Self, Error> {
+        let mut file = Self::open_file_raw(true)?;
+        lock_file(&mut file, true, Some(std::time::Duration::from_secs(5)))?;
+        let map = serde_json::from_reader(&file).unwrap_or_default();
+        Ok(Self { map, file })
+    }
+
+    /// Load a read-only copy of the current VM map. Only use for informational purposes, like
+    /// shell auto-completion; for anything requiring consistency, use load()!
+    fn load_read_only() -> Result<HashMap<String, VMState>, Error> {
+        let file = Self::open_file_raw(false)?;
+        Ok(serde_json::from_reader(&file).unwrap_or_default())
+    }
+
+    /// Write back a potentially modified state map, consuming the held lock
+    fn write(mut self) -> Result<(), Error> {
+        self.file.seek(SeekFrom::Start(0))?;
+        self.file.set_len(0)?;
+        serde_json::to_writer(self.file, &self.map)?;
+
+        // drop ourselves including file lock
+        Ok(())
+    }
+
+    /// Return the map, but drop the lock immediately
+    fn read_only(self) -> HashMap<String, VMState> {
+        self.map
+    }
+}
+
+fn make_name(repo: &BackupRepository, snap: &BackupDir) -> String {
+    let full = format!("qemu_{}/{}", repo, snap);
+    tools::systemd::escape_unit(&full, false)
+}
+
+/// remove non-responsive VMs from given map, returns 'true' if map was modified
+async fn cleanup_map(map: &mut HashMap<String, VMState>) -> bool {
+    let mut to_remove = Vec::new();
+    for (name, state) in map.iter() {
+        let client = VsockClient::new(state.cid, DEFAULT_VSOCK_PORT, Some(state.ticket.clone()));
+        let res = client
+            .get("api2/json/status", Some(json!({"keep-timeout": true})))
+            .await;
+        if res.is_err() {
+            // VM is not reachable, remove from map and inform user
+            to_remove.push(name.clone());
+            println!(
+                "VM '{}' (pid: {}, cid: {}) was not reachable, removing from map",
+                name, state.pid, state.cid
+            );
+        }
+    }
+
+    for tr in &to_remove {
+        map.remove(tr);
+    }
+
+    !to_remove.is_empty()
+}
+
+fn new_ticket() -> String {
+    proxmox::tools::Uuid::generate().to_string()
+}
+
+async fn ensure_running(details: &SnapRestoreDetails) -> Result<VsockClient, Error> {
+    let name = make_name(&details.repo, &details.snapshot);
+    let mut state = VMStateMap::load()?;
+
+    cleanup_map(&mut state.map).await;
+
+    let new_cid;
+    let vms = match state.map.get(&name) {
+        Some(vm) => {
+            let client = VsockClient::new(vm.cid, DEFAULT_VSOCK_PORT, Some(vm.ticket.clone()));
+            let res = client.get("api2/json/status", None).await;
+            match res {
+                Ok(_) => {
+                    // VM is running and we just reset its timeout, nothing to do
+                    return Ok(client);
+                }
+                Err(err) => {
+                    println!("stale VM detected, restarting ({})", err);
+                    // VM is dead, restart
+                    let vms = start_vm(vm.cid, details).await?;
+                    new_cid = vms.cid;
+                    state.map.insert(name, vms.clone());
+                    vms
+                }
+            }
+        }
+        None => {
+            let mut cid = state
+                .map
+                .iter()
+                .map(|v| v.1.cid)
+                .max()
+                .unwrap_or(0)
+                .wrapping_add(1);
+
+            // offset cid by user id, to avoid unnecessary retries
+            let running_uid = nix::unistd::Uid::current();
+            cid = cid.wrapping_add(running_uid.as_raw() as i32);
+
+            // some low CIDs have special meaning, start at 10 to avoid them
+            cid = cid.max(10);
+
+            let vms = start_vm(cid, details).await?;
+            new_cid = vms.cid;
+            state.map.insert(name, vms.clone());
+            vms
+        }
+    };
+
+    state.write()?;
+    Ok(VsockClient::new(
+        new_cid,
+        DEFAULT_VSOCK_PORT,
+        Some(vms.ticket.clone()),
+    ))
+}
+
+async fn start_vm(cid: i32, details: &SnapRestoreDetails) -> Result<VMState, Error> {
+    let ticket = new_ticket();
+    let mut cmd = std::process::Command::new(buildcfg::PROXMOX_RESTORE_QEMU_HELPER_FN);
+    cmd.arg("start");
+    cmd.arg(details.repo.to_string());
+    cmd.arg(details.snapshot.to_string());
+    cmd.arg(&ticket);
+    cmd.arg(cid.to_string());
+
+    for file in details.manifest.files() {
+        if !file.filename.ends_with(".img.fidx") {
+            continue;
+        }
+        cmd.arg("--files");
+        cmd.arg(&file.filename);
+    }
+
+    // allow the setuid binary to print error messages
+    cmd.stderr(std::process::Stdio::inherit());
+    cmd.stdout(std::process::Stdio::piped());
+
+    let res = tokio::task::block_in_place(|| cmd.spawn()?.wait_with_output())?;
+
+    if res.status.success() {
+        let out = String::from_utf8_lossy(&res.stdout);
+        let val: Value = serde_json::from_str(&out)?;
+        let cid = if let Some(cid) = val["cid"].as_i64() {
+            cid as i32
+        } else {
+            bail!("invalid return from proxmox-restore-qemu-helper: no cid")
+        };
+        let pid = if let Some(pid) = val["pid"].as_i64() {
+            pid as i32
+        } else {
+            bail!("invalid return from proxmox-restore-qemu-helper: no pid")
+        };
+        Ok(VMState {
+            cid,
+            pid,
+            ticket,
+        })
+    } else {
+        bail!("starting VM failed");
+    }
+}
+
+impl BlockRestoreDriver for QemuBlockDriver {
+    fn status(&self) -> Async<Result<Vec<DriverStatus>, Error>> {
+        async move {
+            let mut state_map = VMStateMap::load()?;
+            let modified = cleanup_map(&mut state_map.map).await;
+            let map = if modified {
+                let m = state_map.map.clone();
+                state_map.write()?;
+                m
+            } else {
+                state_map.read_only()
+            };
+            let mut result = Vec::new();
+
+            for (n, s) in map.iter() {
+                let client = VsockClient::new(s.cid, DEFAULT_VSOCK_PORT, Some(s.ticket.clone()));
+                let resp = client
+                    .get("api2/json/status", Some(json!({"keep-timeout": true})))
+                    .await;
+                let name = tools::systemd::unescape_unit(n)
+                    .unwrap_or_else(|_| "<invalid name>".to_owned());
+                let mut extra = json!({"pid": s.pid, "cid": s.cid});
+
+                match resp {
+                    Ok(status) => match status["data"].as_object() {
+                        Some(map) => {
+                            for (k, v) in map.iter() {
+                                extra[k] = v.clone();
+                            }
+                        }
+                        None => {
+                            let err = format!(
+                                "invalid JSON received from /status call: {}",
+                                status.to_string()
+                            );
+                            extra["error"] = json!(err);
+                        }
+                    },
+                    Err(err) => {
+                        let err = format!("error during /status API call: {}", err);
+                        extra["error"] = json!(err);
+                    }
+                }
+
+                result.push(DriverStatus {
+                    id: name,
+                    data: extra,
+                });
+            }
+
+            Ok(result)
+        }
+        .boxed()
+    }
+
+    fn stop(&self, id: String) -> Async<Result<(), Error>> {
+        async move {
+            let name = tools::systemd::escape_unit(&id, false);
+            let mut map = VMStateMap::load()?;
+            let map_mod = cleanup_map(&mut map.map).await;
+            match map.map.get(&name) {
+                Some(state) => {
+                    let client =
+                        VsockClient::new(state.cid, DEFAULT_VSOCK_PORT, Some(state.ticket.clone()));
+                    // ignore errors, this either fails because:
+                    // * the VM is unreachable/dead, in which case we don't want it in the map
+                    // * the call was successful and the connection reset when the VM stopped
+                    let _ = client.get("api2/json/stop", None).await;
+                    map.map.remove(&name);
+                    map.write()?;
+                }
+                None => {
+                    if map_mod {
+                        map.write()?;
+                    }
+                    bail!("VM with name '{}' not found", name);
+                }
+            }
+            Ok(())
+        }
+        .boxed()
+    }
+
+    fn list(&self) -> Vec<String> {
+        match VMStateMap::load_read_only() {
+            Ok(state) => state
+                .iter()
+                .filter_map(|(name, _)| tools::systemd::unescape_unit(&name).ok())
+                .collect(),
+            Err(_) => Vec::new(),
+        }
+    }
+}
diff --git a/src/bin/proxmox_file_restore/mod.rs b/src/bin/proxmox_file_restore/mod.rs
new file mode 100644
index 00000000..52a1259e
--- /dev/null
+++ b/src/bin/proxmox_file_restore/mod.rs
@@ -0,0 +1,5 @@
+//! Block device drivers and tools for single file restore
+pub mod block_driver;
+pub use block_driver::*;
+
+mod block_driver_qemu;
-- 
2.20.1
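The CID selection in ensure_running above (wrap the current maximum in the
map, offset by the calling user's UID, and skip the reserved low CIDs) can be
sketched in isolation roughly as follows; the function name and signature are
invented for illustration and are not part of the patch:

```rust
/// Sketch of the vsock CID selection used when no VM exists yet for a
/// snapshot: take the highest CID currently in the map plus one, offset it
/// by the user's UID to reduce collisions between users, and avoid the
/// low CIDs (< 10), some of which have special meaning for vsock.
fn pick_new_cid(existing: &[i32], uid: u32) -> i32 {
    let mut cid = existing.iter().copied().max().unwrap_or(0).wrapping_add(1);
    cid = cid.wrapping_add(uid as i32);
    cid.max(10)
}
```

Note that collisions are still possible (the offset only makes them less
likely); the actual driver relies on retrying with another CID if QEMU fails
to bind the chosen one.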





^ permalink raw reply	[flat|nested] 32+ messages in thread

* [pbs-devel] [PATCH v3 proxmox-backup 16/20] debian/client: add postinst hook to rebuild file-restore initramfs
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (14 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 15/20] file-restore: add basic VM/block device support Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 17/20] file-restore(-daemon): implement list API Stefan Reiter
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

This will be triggered on updating proxmox-file-restore (via configure,
necessary since the daemon binary might change) and
proxmox-backup-restore-image (via 'activate-noawait', necessary since
the base image might change).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v2:
* update 'proxmox-backup-restore-image' and trigger naming

 debian/proxmox-file-restore.postinst | 63 ++++++++++++++++++++++++++++
 debian/proxmox-file-restore.triggers |  1 +
 2 files changed, 64 insertions(+)
 create mode 100755 debian/proxmox-file-restore.postinst
 create mode 100644 debian/proxmox-file-restore.triggers

diff --git a/debian/proxmox-file-restore.postinst b/debian/proxmox-file-restore.postinst
new file mode 100755
index 00000000..bb039fae
--- /dev/null
+++ b/debian/proxmox-file-restore.postinst
@@ -0,0 +1,63 @@
+#!/bin/sh
+
+set -e
+
+update_initramfs() {
+    # regenerate initramfs for single file restore VM
+    INST_PATH="/usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore"
+    CACHE_PATH="/var/cache/proxmox-backup/file-restore-initramfs.img"
+
+    # clean up first, in case proxmox-file-restore was uninstalled, since we
+    # do not want an unusable image lying around
+    rm -f "$CACHE_PATH"
+
+    [ -f "$INST_PATH/initramfs.img" ] || \
+        { echo "proxmox-backup-restore-image is not installed correctly" >&2; \
+        exit 1; }
+
+    echo "Updating file-restore initramfs..."
+
+    # avoid leftover temp file
+    cleanup() {
+        rm -f "$CACHE_PATH.tmp"
+    }
+    trap cleanup EXIT
+
+    mkdir -p "/var/cache/proxmox-backup"
+    cp "$INST_PATH/initramfs.img" "$CACHE_PATH.tmp"
+
+    # cpio uses the passed-in path as the path inside the archive as well, so
+    # we need to be in the same dir as the daemon binary to ensure it's placed in /
+    ( cd "$INST_PATH"; \
+        printf "./proxmox-restore-daemon" \
+        | cpio -o --format=newc -A -F "$CACHE_PATH.tmp" )
+    mv -f "$CACHE_PATH.tmp" "$CACHE_PATH"
+
+    trap - EXIT
+}
+
+case "$1" in
+    configure)
+        # in case restore daemon was updated
+        update_initramfs
+    ;;
+
+    triggered)
+        if [ "$2" = "proxmox-backup-restore-image-update" ]; then
+            # in case base-image was updated
+            update_initramfs
+        else
+            echo "postinst called with unknown trigger name: \`$2'" >&2
+        fi
+    ;;
+
+    abort-upgrade|abort-remove|abort-deconfigure)
+    ;;
+
+    *)
+        echo "postinst called with unknown argument \`$1'" >&2
+        exit 1
+    ;;
+esac
+
+exit 0
diff --git a/debian/proxmox-file-restore.triggers b/debian/proxmox-file-restore.triggers
new file mode 100644
index 00000000..c316dc34
--- /dev/null
+++ b/debian/proxmox-file-restore.triggers
@@ -0,0 +1 @@
+interest-noawait proxmox-backup-restore-image-update
-- 
2.20.1





^ permalink raw reply	[flat|nested] 32+ messages in thread

* [pbs-devel] [PATCH v3 proxmox-backup 17/20] file-restore(-daemon): implement list API
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (15 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 16/20] debian/client: add postinst hook to rebuild file-restore initramfs Stefan Reiter
@ 2021-03-31 10:21 ` Stefan Reiter
  2021-03-31 10:22 ` [pbs-devel] [PATCH v3 proxmox-backup 18/20] pxar/extract: add sequential variant of extract_sub_dir Stefan Reiter
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:21 UTC (permalink / raw)
  To: pbs-devel

Allows listing files and directories on a block device snapshot.
Hierarchy displayed is:

/archive.img.fidx/bucket/component/<path>
e.g.
/drive-scsi0.img.fidx/part/2/etc/passwd
(corresponding to /etc/passwd on the second partition of drive-scsi0)
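The hierarchy above can be illustrated with a small stand-alone sketch that
splits such a restore path into its components; the function and its return
shape are invented for illustration and are not part of the patch (the daemon
additionally base64-encodes the whole path for transport):

```rust
/// Split a restore path of the form
/// "/<archive>.img.fidx/<bucket>/<component>/<path...>" into its parts,
/// e.g. "/drive-scsi0.img.fidx/part/2/etc/passwd" ->
/// ("drive-scsi0.img.fidx", "part", "2", "etc/passwd").
fn split_restore_path(p: &str) -> Option<(&str, &str, &str, &str)> {
    let mut it = p.trim_start_matches('/').splitn(4, '/');
    let archive = it.next()?;           // image archive, e.g. "drive-scsi0.img.fidx"
    let bucket = it.next()?;            // bucket type, e.g. "part"
    let component = it.next()?;         // bucket component, e.g. partition "2"
    let rest = it.next().unwrap_or(""); // path inside the mounted filesystem
    Some((archive, bucket, component, rest))
}
```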

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/bin/proxmox-file-restore.rs               |  19 +++
 src/bin/proxmox_file_restore/block_driver.rs  |  19 +++
 .../proxmox_file_restore/block_driver_qemu.rs |  21 +++
 src/bin/proxmox_restore_daemon/api.rs         | 131 +++++++++++++++++-
 4 files changed, 187 insertions(+), 3 deletions(-)

diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index 0c2050f2..d45c12af 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -41,6 +41,7 @@ use proxmox_file_restore::*;
 enum ExtractPath {
     ListArchives,
     Pxar(String, Vec<u8>),
+    VM(String, Vec<u8>),
 }
 
 fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
@@ -67,6 +68,8 @@ fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
 
     if file.ends_with(".pxar.didx") {
         Ok(ExtractPath::Pxar(file, path))
+    } else if file.ends_with(".img.fidx") {
+        Ok(ExtractPath::VM(file, path))
     } else {
         bail!("'{}' is not supported for file-restore", file);
     }
@@ -105,6 +108,10 @@ fn parse_path(path: String, base64: bool) -> Result<ExtractPath, Error> {
                type: CryptMode,
                optional: true,
            },
+           "driver": {
+               type: BlockDriverType,
+               optional: true,
+           },
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
@@ -194,6 +201,18 @@ async fn list(
 
             helpers::list_dir_content(&mut catalog_reader, &fullpath)
         }
+        ExtractPath::VM(file, path) => {
+            let details = SnapRestoreDetails {
+                manifest,
+                repo,
+                snapshot,
+            };
+            let driver: Option<BlockDriverType> = match param.get("driver") {
+                Some(drv) => Some(serde_json::from_value(drv.clone())?),
+                None => None,
+            };
+            data_list(driver, details, file, path).await
+        }
     }?;
 
     let options = default_table_format_options()
diff --git a/src/bin/proxmox_file_restore/block_driver.rs b/src/bin/proxmox_file_restore/block_driver.rs
index 9c6fc5ac..63872f04 100644
--- a/src/bin/proxmox_file_restore/block_driver.rs
+++ b/src/bin/proxmox_file_restore/block_driver.rs
@@ -9,6 +9,7 @@ use std::hash::BuildHasher;
 use std::pin::Pin;
 
 use proxmox_backup::backup::{BackupDir, BackupManifest};
+use proxmox_backup::api2::types::ArchiveEntry;
 use proxmox_backup::client::BackupRepository;
 
 use proxmox::api::{api, cli::*};
@@ -32,6 +33,14 @@ pub type Async<R> = Pin<Box<dyn Future<Output = R> + Send>>;
 
 /// An abstract implementation for retrieving data out of a block file backup
 pub trait BlockRestoreDriver {
+    /// List ArchiveEntry objects for the given image file and path
+    fn data_list(
+        &self,
+        details: SnapRestoreDetails,
+        img_file: String,
+        path: Vec<u8>,
+    ) -> Async<Result<Vec<ArchiveEntry>, Error>>;
+
     /// Return status of all running/mapped images, result value is (id, extra data), where id must
     /// match with the ones returned from list()
     fn status(&self) -> Async<Result<Vec<DriverStatus>, Error>>;
@@ -60,6 +69,16 @@ impl BlockDriverType {
 const DEFAULT_DRIVER: BlockDriverType = BlockDriverType::Qemu;
 const ALL_DRIVERS: &[BlockDriverType] = &[BlockDriverType::Qemu];
 
+pub async fn data_list(
+    driver: Option<BlockDriverType>,
+    details: SnapRestoreDetails,
+    img_file: String,
+    path: Vec<u8>,
+) -> Result<Vec<ArchiveEntry>, Error> {
+    let driver = driver.unwrap_or(DEFAULT_DRIVER).resolve();
+    driver.data_list(details, img_file, path).await
+}
+
 #[api(
    input: {
        properties: {
diff --git a/src/bin/proxmox_file_restore/block_driver_qemu.rs b/src/bin/proxmox_file_restore/block_driver_qemu.rs
index 5fda5d6f..1a96ef10 100644
--- a/src/bin/proxmox_file_restore/block_driver_qemu.rs
+++ b/src/bin/proxmox_file_restore/block_driver_qemu.rs
@@ -9,6 +9,7 @@ use std::fs::{File, OpenOptions};
 use std::io::{prelude::*, SeekFrom};
 
 use proxmox::tools::fs::lock_file;
+use proxmox_backup::api2::types::ArchiveEntry;
 use proxmox_backup::backup::BackupDir;
 use proxmox_backup::buildcfg;
 use proxmox_backup::client::*;
@@ -215,6 +216,26 @@ async fn start_vm(cid: i32, details: &SnapRestoreDetails) -> Result<VMState, Err
 }
 
 impl BlockRestoreDriver for QemuBlockDriver {
+    fn data_list(
+        &self,
+        details: SnapRestoreDetails,
+        img_file: String,
+        mut path: Vec<u8>,
+    ) -> Async<Result<Vec<ArchiveEntry>, Error>> {
+        async move {
+            let client = ensure_running(&details).await?;
+            if !path.is_empty() && path[0] != b'/' {
+                path.insert(0, b'/');
+            }
+            let path = base64::encode(img_file.bytes().chain(path).collect::<Vec<u8>>());
+            let mut result = client
+                .get("api2/json/list", Some(json!({ "path": path })))
+                .await?;
+            serde_json::from_value(result["data"].take()).map_err(|err| err.into())
+        }
+        .boxed()
+    }
+
     fn status(&self) -> Async<Result<Vec<DriverStatus>, Error>> {
         async move {
             let mut state_map = VMStateMap::load()?;
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
index 4c78a0e8..2f990f36 100644
--- a/src/bin/proxmox_restore_daemon/api.rs
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -1,19 +1,24 @@
 ///! File-restore API running inside the restore VM
-use anyhow::Error;
-use serde_json::Value;
+use anyhow::{bail, Error};
+use std::ffi::OsStr;
 use std::fs;
+use std::os::unix::ffi::OsStrExt;
+use std::path::{Path, PathBuf};
 
 use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment, SubdirMap};
 use proxmox::list_subdirs_api_method;
 
 use proxmox_backup::api2::types::*;
+use proxmox_backup::backup::DirEntryAttribute;
+use proxmox_backup::tools::fs::read_subdir;
 
-use super::{watchdog_remaining, watchdog_ping};
+use super::{disk::ResolveResult, watchdog_remaining, watchdog_ping};
 
 // NOTE: All API endpoints must have Permission::Superuser, as the configs for authentication do
 // not exist within the restore VM. Safety is guaranteed by checking a ticket via a custom ApiAuth.
 
 const SUBDIRS: SubdirMap = &[
+    ("list", &Router::new().get(&API_METHOD_LIST)),
     ("status", &Router::new().get(&API_METHOD_STATUS)),
     ("stop", &Router::new().get(&API_METHOD_STOP)),
 ];
@@ -72,3 +77,123 @@ fn stop() {
     println!("'reboot' syscall failed: {}", err);
     std::process::exit(1);
 }
+
+fn get_dir_entry(path: &Path) -> Result<DirEntryAttribute, Error> {
+    use nix::sys::stat;
+
+    let stat = stat::stat(path)?;
+    Ok(match stat.st_mode & libc::S_IFMT {
+        libc::S_IFREG => DirEntryAttribute::File {
+            size: stat.st_size as u64,
+            mtime: stat.st_mtime,
+        },
+        libc::S_IFDIR => DirEntryAttribute::Directory { start: 0 },
+        _ => bail!("unsupported file type: {}", stat.st_mode),
+    })
+}
+
+#[api(
+    input: {
+        properties: {
+            "path": {
+                type: String,
+                description: "base64-encoded path to list files and directories under",
+            },
+        },
+    },
+    access: {
+        description: "Permissions are handled outside restore VM.",
+        permission: &Permission::Superuser,
+    },
+)]
+/// List file details for the given file, or a list of files and directories under the
+/// given path if it points to a directory.
+fn list(
+    path: String,
+    _info: &ApiMethod,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<ArchiveEntry>, Error> {
+    watchdog_ping();
+
+    let mut res = Vec::new();
+
+    let param_path = base64::decode(path)?;
+    let mut path = param_path.clone();
+    if let Some(b'/') = path.last() {
+        path.pop();
+    }
+    let path_str = OsStr::from_bytes(&path[..]);
+    let param_path_buf = Path::new(path_str);
+
+    let mut disk_state = crate::DISK_STATE.lock().unwrap();
+    let query_result = disk_state.resolve(&param_path_buf)?;
+
+    match query_result {
+        ResolveResult::Path(vm_path) => {
+            let root_entry = get_dir_entry(&vm_path)?;
+            match root_entry {
+                DirEntryAttribute::File { .. } => {
+                    // list on file, return details
+                    res.push(ArchiveEntry::new(&param_path, &root_entry));
+                }
+                DirEntryAttribute::Directory { .. } => {
+                    // list on directory, return all contained files/dirs
+                    for f in read_subdir(libc::AT_FDCWD, &vm_path)? {
+                        if let Ok(f) = f {
+                            let name = f.file_name().to_bytes();
+                            let path = &Path::new(OsStr::from_bytes(name));
+                            if path.components().count() == 1 {
+                                // ignore '.' and '..'
+                                match path.components().next().unwrap() {
+                                    std::path::Component::CurDir
+                                    | std::path::Component::ParentDir => continue,
+                                    _ => {}
+                                }
+                            }
+
+                            let mut full_vm_path = PathBuf::new();
+                            full_vm_path.push(&vm_path);
+                            full_vm_path.push(path);
+                            let mut full_path = PathBuf::new();
+                            full_path.push(param_path_buf);
+                            full_path.push(path);
+
+                            let entry = get_dir_entry(&full_vm_path);
+                            if let Ok(entry) = entry {
+                                res.push(ArchiveEntry::new(
+                                    full_path.as_os_str().as_bytes(),
+                                    &entry,
+                                ));
+                            }
+                        }
+                    }
+                }
+                _ => unreachable!(),
+            }
+        }
+        ResolveResult::BucketTypes(types) => {
+            for t in types {
+                let mut t_path = path.clone();
+                t_path.push(b'/');
+                t_path.extend(t.as_bytes());
+                res.push(ArchiveEntry::new(
+                    &t_path[..],
+                    &DirEntryAttribute::Directory { start: 0 },
+                ));
+            }
+        }
+        ResolveResult::BucketComponents(comps) => {
+            for c in comps {
+                let mut c_path = path.clone();
+                c_path.push(b'/');
+                c_path.extend(c.as_bytes());
+                res.push(ArchiveEntry::new(
+                    &c_path[..],
+                    &DirEntryAttribute::Directory { start: 0 },
+                ));
+            }
+        }
+    }
+
+    Ok(res)
+}
-- 
2.20.1

* [pbs-devel] [PATCH v3 proxmox-backup 18/20] pxar/extract: add sequential variant of extract_sub_dir
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (16 preceding siblings ...)
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 17/20] file-restore(-daemon): implement list API Stefan Reiter
@ 2021-03-31 10:22 ` Stefan Reiter
  2021-03-31 10:22 ` [pbs-devel] [PATCH v3 proxmox-backup 19/20] tools/zip: add zip_directory helper Stefan Reiter
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:22 UTC (permalink / raw)
  To: pbs-devel

extract_sub_dir_seq, together with seq_files_extractor, allows extracting
files from a pxar Decoder, complementing the existing Accessor-based
variant. To facilitate code re-use, some helper functions are factored out
in the process.
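
The sequential extractor cannot seek, so it tracks directory nesting with a
counter and stops once it has seen one goodbye-table entry more than directory
entries (it has then left the directory it started in). A minimal,
self-contained sketch of that control flow — the `Entry` enum below is a
stand-in for illustration, not pxar's actual `EntryKind`:

```rust
// Stand-in for pxar's entry kinds, reduced to what the nesting logic needs.
enum Entry {
    Directory(&'static str),
    File(&'static str),
    GoodbyeTable, // closes the most recently entered directory
}

// Consume a flat entry stream, returning the paths that would be extracted,
// and stop as soon as we leave the directory we started in (dir_level < 0).
fn extract_paths(entries: &[Entry]) -> Vec<String> {
    let mut stack: Vec<&str> = Vec::new();
    let mut out = Vec::new();
    let mut dir_level: i32 = 0;

    for entry in entries {
        match entry {
            Entry::Directory(name) => {
                dir_level += 1;
                stack.push(name);
            }
            Entry::File(name) => {
                let mut path = stack.join("/");
                if !path.is_empty() {
                    path.push('/');
                }
                path.push_str(name);
                out.push(path);
            }
            Entry::GoodbyeTable => {
                dir_level -= 1;
                stack.pop();
            }
        }
        if dir_level < 0 {
            // one goodbye more than directories: we've left the starting dir
            break;
        }
    }
    out
}

fn main() {
    let stream = [
        Entry::Directory("dir1"),
        Entry::File("file2"),
        Entry::GoodbyeTable,
        Entry::File("file1"),
        Entry::GoodbyeTable, // closes the starting directory -> stop
        Entry::File("ignored"),
    ];
    let paths = extract_paths(&stream);
    assert_eq!(paths, ["dir1/file2", "file1"]);
    println!("{:?}", paths);
}
```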

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v3:
* basically a do-over, no more bogus types

 src/pxar/extract.rs | 316 ++++++++++++++++++++++++++++++--------------
 src/pxar/mod.rs     |   5 +-
 2 files changed, 224 insertions(+), 97 deletions(-)

diff --git a/src/pxar/extract.rs b/src/pxar/extract.rs
index 952e2d20..8f85c441 100644
--- a/src/pxar/extract.rs
+++ b/src/pxar/extract.rs
@@ -16,9 +16,10 @@ use nix::fcntl::OFlag;
 use nix::sys::stat::Mode;
 
 use pathpatterns::{MatchEntry, MatchList, MatchType};
-use pxar::format::Device;
-use pxar::Metadata;
 use pxar::accessor::aio::{Accessor, FileContents, FileEntry};
+use pxar::decoder::aio::Decoder;
+use pxar::format::Device;
+use pxar::{Entry, EntryKind, Metadata};
 
 use proxmox::c_result;
 use proxmox::tools::{
@@ -93,8 +94,6 @@ where
     let mut err_path_stack = vec![OsString::from("/")];
     let mut current_match = options.extract_match_default;
     while let Some(entry) = decoder.next() {
-        use pxar::EntryKind;
-
         let entry = entry.map_err(|err| format_err!("error reading pxar archive: {}", err))?;
 
         let file_name_os = entry.file_name();
@@ -552,7 +551,6 @@ where
     T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
     W: tokio::io::AsyncWrite + Unpin + Send + 'static,
 {
-    use pxar::EntryKind;
     Box::pin(async move {
         let metadata = file.entry().metadata();
         let path = file.entry().path().strip_prefix(&prefix)?.to_path_buf();
@@ -612,10 +610,42 @@ where
     })
 }
 
+fn get_extractor<DEST>(destination: DEST, metadata: Metadata) -> Result<Extractor, Error>
+where
+    DEST: AsRef<Path>,
+{
+    create_path(
+        &destination,
+        None,
+        Some(CreateOptions::new().perm(Mode::from_bits_truncate(0o700))),
+    )
+    .map_err(|err| {
+        format_err!(
+            "error creating directory {:?}: {}",
+            destination.as_ref(),
+            err
+        )
+    })?;
+
+    let dir = Dir::open(
+        destination.as_ref(),
+        OFlag::O_DIRECTORY | OFlag::O_CLOEXEC,
+        Mode::empty(),
+    )
+    .map_err(|err| {
+        format_err!(
+            "unable to open target directory {:?}: {}",
+            destination.as_ref(),
+            err,
+        )
+    })?;
+
+    Ok(Extractor::new(dir, metadata, false, Flags::DEFAULT))
+}
 
 pub async fn extract_sub_dir<T, DEST, PATH>(
     destination: DEST,
-    mut decoder: Accessor<T>,
+    decoder: Accessor<T>,
     path: PATH,
     verbose: bool,
 ) -> Result<(), Error>
@@ -626,111 +656,205 @@ where
 {
     let root = decoder.open_root().await?;
 
-    create_path(
-        &destination,
-        None,
-        Some(CreateOptions::new().perm(Mode::from_bits_truncate(0o700))),
-    )
-    .map_err(|err| format_err!("error creating directory {:?}: {}", destination.as_ref(), err))?;
-
-    let dir = Dir::open(
-        destination.as_ref(),
-        OFlag::O_DIRECTORY | OFlag::O_CLOEXEC,
-        Mode::empty(),
-    )
-    .map_err(|err| format_err!("unable to open target directory {:?}: {}", destination.as_ref(), err,))?;
-
-    let mut extractor =  Extractor::new(
-        dir,
+    let mut extractor = get_extractor(
+        destination,
         root.lookup_self().await?.entry().metadata().clone(),
-        false,
-        Flags::DEFAULT,
-    );
+    )?;
 
     let file = root
-        .lookup(&path).await?
+        .lookup(&path)
+        .await?
         .ok_or(format_err!("error opening '{:?}'", path.as_ref()))?;
 
-    recurse_files_extractor(&mut extractor, &mut decoder, file, verbose).await
+    recurse_files_extractor(&mut extractor, file, verbose).await
 }
 
-fn recurse_files_extractor<'a, T>(
+pub async fn extract_sub_dir_seq<S, DEST>(
+    destination: DEST,
+    mut decoder: Decoder<S>,
+    verbose: bool,
+) -> Result<(), Error>
+where
+    S: pxar::decoder::SeqRead + Unpin + Send + 'static,
+    DEST: AsRef<Path>,
+{
+    decoder.enable_goodbye_entries(true);
+    let root = match decoder.next().await {
+        Some(Ok(root)) => root,
+        Some(Err(err)) => bail!("error getting root entry from pxar: {}", err),
+        None => bail!("cannot extract empty archive"),
+    };
+
+    let mut extractor = get_extractor(destination, root.metadata().clone())?;
+
+    if let Err(err) = seq_files_extractor(&mut extractor, decoder, verbose).await {
+        eprintln!("error extracting pxar archive: {}", err);
+    }
+
+    Ok(())
+}
+
+fn extract_special(
+    extractor: &mut Extractor,
+    entry: &Entry,
+    file_name: &CStr,
+) -> Result<(), Error> {
+    let metadata = entry.metadata();
+    match entry.kind() {
+        EntryKind::Symlink(link) => {
+            extractor.extract_symlink(file_name, metadata, link.as_ref())?;
+        }
+        EntryKind::Hardlink(link) => {
+            extractor.extract_hardlink(file_name, link.as_os_str())?;
+        }
+        EntryKind::Device(dev) => {
+            if extractor.contains_flags(Flags::WITH_DEVICE_NODES) {
+                extractor.extract_device(file_name, metadata, dev)?;
+            }
+        }
+        EntryKind::Fifo => {
+            if extractor.contains_flags(Flags::WITH_FIFOS) {
+                extractor.extract_special(file_name, metadata, 0)?;
+            }
+        }
+        EntryKind::Socket => {
+            if extractor.contains_flags(Flags::WITH_SOCKETS) {
+                extractor.extract_special(file_name, metadata, 0)?;
+            }
+        }
+        _ => bail!("extract_special used with unsupported entry kind"),
+    }
+    Ok(())
+}
+
+fn get_filename(entry: &Entry) -> Result<(OsString, CString), Error> {
+    let file_name_os = entry.file_name().to_owned();
+
+    // safety check: a file entry in an archive must never contain slashes:
+    if file_name_os.as_bytes().contains(&b'/') {
+        bail!("archive file entry contains slashes, which is invalid and a security concern");
+    }
+
+    let file_name = CString::new(file_name_os.as_bytes())
+        .map_err(|_| format_err!("encountered file name with null-bytes"))?;
+
+    Ok((file_name_os, file_name))
+}
+
+async fn recurse_files_extractor<'a, T>(
     extractor: &'a mut Extractor,
-    decoder: &'a mut Accessor<T>,
     file: FileEntry<T>,
     verbose: bool,
-) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'a>>
+) -> Result<(), Error>
 where
     T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
 {
-    use pxar::EntryKind;
-    Box::pin(async move {
-        let metadata = file.entry().metadata();
-        let file_name_os = file.file_name();
+    let entry = file.entry();
+    let metadata = entry.metadata();
+    let (file_name_os, file_name) = get_filename(entry)?;
 
-        // safety check: a file entry in an archive must never contain slashes:
-        if file_name_os.as_bytes().contains(&b'/') {
-            bail!("archive file entry contains slashes, which is invalid and a security concern");
+    if verbose {
+        eprintln!("extracting: {}", file.path().display());
+    }
+
+    match file.kind() {
+        EntryKind::Directory => {
+            extractor
+                .enter_directory(file_name_os.to_owned(), metadata.clone(), true)
+                .map_err(|err| format_err!("error at entry {:?}: {}", file_name_os, err))?;
+
+            let dir = file.enter_directory().await?;
+            let mut seq_decoder = dir.decode_full().await?;
+            seq_decoder.enable_goodbye_entries(true);
+            seq_files_extractor(extractor, seq_decoder, verbose).await?;
+            extractor.leave_directory()?;
         }
-
-        let file_name = CString::new(file_name_os.as_bytes())
-            .map_err(|_| format_err!("encountered file name with null-bytes"))?;
-
-        if verbose {
-            eprintln!("extracting: {}", file.path().display());
+        EntryKind::File { size, .. } => {
+            extractor
+                .async_extract_file(
+                    &file_name,
+                    metadata,
+                    *size,
+                    &mut file.contents().await.map_err(|_| {
+                        format_err!("found regular file entry without contents in archive")
+                    })?,
+                )
+                .await?
         }
-
-        match file.kind() {
-            EntryKind::Directory => {
-                extractor
-                    .enter_directory(file_name_os.to_owned(), metadata.clone(), true)
-                    .map_err(|err| format_err!("error at entry {:?}: {}", file_name_os, err))?;
-
-                let dir = file.enter_directory().await?;
-                let mut readdir = dir.read_dir();
-                while let Some(entry) = readdir.next().await {
-                    let entry = entry?.decode_entry().await?;
-                    let filename = entry.path().to_path_buf();
-
-                    // log errors and continue
-                    if let Err(err) = recurse_files_extractor(extractor, decoder, entry, verbose).await {
-                        eprintln!("error extracting {:?}: {}", filename.display(), err);
-                    }
-                }
-                extractor.leave_directory()?;
-            }
-            EntryKind::Symlink(link) => {
-                extractor.extract_symlink(&file_name, metadata, link.as_ref())?;
-            }
-            EntryKind::Hardlink(link) => {
-                extractor.extract_hardlink(&file_name, link.as_os_str())?;
-            }
-            EntryKind::Device(dev) => {
-                if extractor.contains_flags(Flags::WITH_DEVICE_NODES) {
-                    extractor.extract_device(&file_name, metadata, dev)?;
-                }
-            }
-            EntryKind::Fifo => {
-                if extractor.contains_flags(Flags::WITH_FIFOS) {
-                    extractor.extract_special(&file_name, metadata, 0)?;
-                }
-            }
-            EntryKind::Socket => {
-                if extractor.contains_flags(Flags::WITH_SOCKETS) {
-                    extractor.extract_special(&file_name, metadata, 0)?;
-                }
-            }
-            EntryKind::File { size, .. } => extractor.async_extract_file(
-                &file_name,
-                metadata,
-                *size,
-                &mut file.contents().await.map_err(|_| {
-                    format_err!("found regular file entry without contents in archive")
-                })?,
-            ).await?,
-            EntryKind::GoodbyeTable => {}, // ignore
-        }
-        Ok(())
-    })
+        EntryKind::GoodbyeTable => {} // ignore
+        _ => extract_special(extractor, entry, &file_name)?,
+    }
+    Ok(())
 }
 
+async fn seq_files_extractor<'a, T>(
+    extractor: &'a mut Extractor,
+    mut decoder: pxar::decoder::aio::Decoder<T>,
+    verbose: bool,
+) -> Result<(), Error>
+where
+    T: pxar::decoder::SeqRead,
+{
+    let mut dir_level = 0;
+    loop {
+        let entry = match decoder.next().await {
+            Some(entry) => entry?,
+            None => return Ok(()),
+        };
+
+        let metadata = entry.metadata();
+        let (file_name_os, file_name) = get_filename(&entry)?;
+
+        if verbose && !matches!(entry.kind(), EntryKind::GoodbyeTable) {
+            eprintln!("extracting: {}", entry.path().display());
+        }
+
+        if let Err(err) = async {
+            match entry.kind() {
+                EntryKind::Directory => {
+                    dir_level += 1;
+                    extractor
+                        .enter_directory(file_name_os.to_owned(), metadata.clone(), true)
+                        .map_err(|err| format_err!("error at entry {:?}: {}", file_name_os, err))?;
+                }
+                EntryKind::File { size, .. } => {
+                    extractor
+                        .async_extract_file(
+                            &file_name,
+                            metadata,
+                            *size,
+                            &mut decoder.contents().ok_or_else(|| {
+                                format_err!("found regular file entry without contents in archive")
+                            })?,
+                        )
+                        .await?
+                }
+                EntryKind::GoodbyeTable => {
+                    dir_level -= 1;
+                    extractor.leave_directory()?;
+                }
+                _ => extract_special(extractor, &entry, &file_name)?,
+            }
+            Ok(()) as Result<(), Error>
+        }
+        .await
+        {
+            let display = entry.path().display().to_string();
+            eprintln!(
+                "error extracting {}: {}",
+                if matches!(entry.kind(), EntryKind::GoodbyeTable) {
+                    "<directory>"
+                } else {
+                    &display
+                },
+                err
+            );
+        }
+
+        if dir_level < 0 {
+            // we've encountered one Goodbye more than Directory, meaning we've left the dir we
+            // started in - exit early, otherwise the extractor might panic
+            return Ok(());
+        }
+    }
+}
diff --git a/src/pxar/mod.rs b/src/pxar/mod.rs
index d1302962..13eb9bd4 100644
--- a/src/pxar/mod.rs
+++ b/src/pxar/mod.rs
@@ -59,7 +59,10 @@ mod flags;
 pub use flags::Flags;
 
 pub use create::{create_archive, PxarCreateOptions};
-pub use extract::{create_zip, extract_archive, extract_sub_dir, ErrorHandler, PxarExtractOptions};
+pub use extract::{
+    create_zip, extract_archive, extract_sub_dir, extract_sub_dir_seq, ErrorHandler,
+    PxarExtractOptions,
+};
 
 /// The format requires to build sorted directory lookup tables in
 /// memory, so we restrict the number of allowed entries to limit
-- 
2.20.1

* [pbs-devel] [PATCH v3 proxmox-backup 19/20] tools/zip: add zip_directory helper
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (17 preceding siblings ...)
  2021-03-31 10:22 ` [pbs-devel] [PATCH v3 proxmox-backup 18/20] pxar/extract: add sequential variant of extract_sub_dir Stefan Reiter
@ 2021-03-31 10:22 ` Stefan Reiter
  2021-03-31 10:22 ` [pbs-devel] [PATCH v3 proxmox-backup 20/20] file-restore: add 'extract' command for VM file restore Stefan Reiter
  2021-04-08 14:44 ` [pbs-devel] applied: [PATCH v3 00/20] Single file restore for VM images Thomas Lamprecht
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:22 UTC (permalink / raw)
  To: pbs-devel

Encodes an entire local directory into an AsyncWrite recursively.
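
The helper strips `source.parent()` from each walked entry so the source
directory's own name becomes the zip root. A std-only sketch of that path
mapping — it uses a hand-rolled recursive walk instead of the walkdir crate
and only computes the in-archive paths, without any zip encoding:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Collect every file and directory under `source`, mapped to its in-archive
// path: the prefix up to (but not including) the source directory name is
// stripped, so "/foo/bar" yields "bar", "bar/file1", ...
fn archive_paths(source: &Path) -> io::Result<Vec<PathBuf>> {
    let base = source.parent().unwrap_or_else(|| Path::new("/"));
    let mut out = Vec::new();
    walk(source, base, &mut out)?;
    Ok(out)
}

fn walk(dir: &Path, base: &Path, out: &mut Vec<PathBuf>) -> io::Result<()> {
    out.push(dir.strip_prefix(base).unwrap().to_path_buf());
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            walk(&path, base, out)?;
        } else if path.is_file() {
            out.push(path.strip_prefix(base).unwrap().to_path_buf());
        }
        // other file types (fifos, sockets, ...) are skipped, as in zip_directory
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let tmp = std::env::temp_dir().join("zip_walk_demo");
    let _ = fs::remove_dir_all(&tmp); // start from a clean slate
    fs::create_dir_all(tmp.join("bar/dir1"))?;
    fs::write(tmp.join("bar/file1"), b"x")?;
    fs::write(tmp.join("bar/dir1/file2"), b"y")?;

    let mut paths = archive_paths(&tmp.join("bar"))?;
    paths.sort();
    // all entries live under "bar/", mirroring the commit message above
    assert!(paths.iter().all(|p| p.starts_with("bar")));
    assert_eq!(paths.len(), 4);
    println!("{:?}", paths);
    fs::remove_dir_all(&tmp)?;
    Ok(())
}
```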

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/tools/zip.rs | 77 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)

diff --git a/src/tools/zip.rs b/src/tools/zip.rs
index 55f2a24a..d1a98485 100644
--- a/src/tools/zip.rs
+++ b/src/tools/zip.rs
@@ -10,6 +10,7 @@ use std::io;
 use std::mem::size_of;
 use std::os::unix::ffi::OsStrExt;
 use std::path::{Component, Path, PathBuf};
+use std::time::SystemTime;
 
 use anyhow::{Error, Result};
 use endian_trait::Endian;
@@ -537,3 +538,79 @@ impl<W: AsyncWrite + Unpin> ZipEncoder<W> {
         Ok(())
     }
 }
+
+/// Zip a local directory and write encoded data to target. "source" has to point to a valid
+/// directory; its name becomes the root of the zip file - e.g.:
+/// source:
+///         /foo/bar
+/// zip file:
+///         /bar/file1
+///         /bar/dir1
+///         /bar/dir1/file2
+///         ...
+/// ...except if "source" is the root directory
+pub async fn zip_directory<W>(target: W, source: &Path) -> Result<(), Error>
+where
+    W: AsyncWrite + Unpin + Send,
+{
+    use walkdir::WalkDir;
+    use std::os::unix::fs::MetadataExt;
+
+    let base_path = source.parent().unwrap_or_else(|| Path::new("/"));
+    let mut encoder = ZipEncoder::new(target);
+
+    for entry in WalkDir::new(&source).into_iter() {
+        match entry {
+            Ok(entry) => {
+                let entry_path = entry.path().to_owned();
+                let encoder = &mut encoder;
+
+                if let Err(err) = async move {
+                    let entry_path_no_base = entry.path().strip_prefix(base_path)?;
+                    let metadata = entry.metadata()?;
+                    let mtime = match metadata.modified().unwrap_or_else(|_| SystemTime::now()).duration_since(SystemTime::UNIX_EPOCH) {
+                        Ok(dur) => dur.as_secs() as i64,
+                        Err(time_error) => -(time_error.duration().as_secs() as i64)
+                    };
+                    let mode = metadata.mode() as u16;
+
+                    if entry.file_type().is_file() {
+                        let file = tokio::fs::File::open(entry.path()).await?;
+                        let ze = ZipEntry::new(
+                            &entry_path_no_base,
+                            mtime,
+                            mode,
+                            true,
+                        );
+                        encoder.add_entry(ze, Some(file)).await?;
+                    } else if entry.file_type().is_dir() {
+                        let ze = ZipEntry::new(
+                            &entry_path_no_base,
+                            mtime,
+                            mode,
+                            false,
+                        );
+                        let content: Option<tokio::fs::File> = None;
+                        encoder.add_entry(ze, content).await?;
+                    }
+                    // ignore other file types
+                    let ok: Result<(), Error> = Ok(());
+                    ok
+                }
+                .await
+                {
+                    eprintln!(
+                        "zip: error encoding file or directory '{}': {}",
+                        entry_path.display(),
+                        err
+                    );
+                }
+            }
+            Err(err) => {
+                eprintln!("zip: error reading directory entry: {}", err);
+            }
+        }
+    }
+
+    encoder.finish().await
+}
-- 
2.20.1

* [pbs-devel] [PATCH v3 proxmox-backup 20/20] file-restore: add 'extract' command for VM file restore
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (18 preceding siblings ...)
  2021-03-31 10:22 ` [pbs-devel] [PATCH v3 proxmox-backup 19/20] tools/zip: add zip_directory helper Stefan Reiter
@ 2021-03-31 10:22 ` Stefan Reiter
  2021-04-08 14:44 ` [pbs-devel] applied: [PATCH v3 00/20] Single file restore for VM images Thomas Lamprecht
  20 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-03-31 10:22 UTC (permalink / raw)
  To: pbs-devel

The data on the restore daemon is either encoded into a pxar archive, to
provide the most accurate data for local restore, or encoded directly
into a zip file (or written out unprocessed for files), depending on the
'pxar' argument to the 'extract' API call.
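
On the client side, the QEMU block driver joins the image file name and the
path inside it into one byte string before sending it to the restore daemon
(the real code then base64-encodes the result for transport). A small sketch
of that path construction — the image file name shown is hypothetical:

```rust
// Build the raw (pre-base64) path parameter for the restore daemon: the
// image file name followed by an absolute path inside that image.
fn build_param(img_file: &str, mut path: Vec<u8>) -> Vec<u8> {
    if !path.is_empty() && path[0] != b'/' {
        path.insert(0, b'/'); // normalize to an absolute path inside the image
    }
    img_file.bytes().chain(path).collect()
}

fn main() {
    let p = build_param("drive-scsi0.img.fidx", b"etc/hostname".to_vec());
    assert_eq!(&p[..], &b"drive-scsi0.img.fidx/etc/hostname"[..]);
    println!("{}", String::from_utf8_lossy(&p));
}
```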

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

v3:
* minor adaptions to new extract_sub_dir_seq, auto-remove generated
  .pxarexclude-cli

v2:
* add 'pxar' property to VM API call to allow encoding files as zip/raw directly
  in the VM - this avoids the re-encoding in proxmox-file-restore

 Cargo.toml                                    |   2 +-
 debian/control                                |   1 +
 src/bin/proxmox-file-restore.rs               | 117 ++++++++----
 src/bin/proxmox_file_restore/block_driver.rs  |  24 +++
 .../proxmox_file_restore/block_driver_qemu.rs |  32 ++++
 src/bin/proxmox_restore_daemon/api.rs         | 176 +++++++++++++++++-
 6 files changed, 312 insertions(+), 40 deletions(-)

diff --git a/Cargo.toml b/Cargo.toml
index 6b880384..4aa678e4 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -64,7 +64,7 @@ syslog = "4.0"
 tokio = { version = "1.0", features = [ "fs", "io-util", "io-std", "macros", "net", "parking_lot", "process", "rt", "rt-multi-thread", "signal", "time" ] }
 tokio-openssl = "0.6.1"
 tokio-stream = "0.1.0"
-tokio-util = { version = "0.6", features = [ "codec" ] }
+tokio-util = { version = "0.6", features = [ "codec", "io" ] }
 tower-service = "0.3.0"
 udev = ">= 0.3, <0.5"
 url = "2.1"
diff --git a/debian/control b/debian/control
index 0e12accb..dec95a71 100644
--- a/debian/control
+++ b/debian/control
@@ -67,6 +67,7 @@ Build-Depends: debhelper (>= 11),
  librust-tokio-stream-0.1+default-dev,
  librust-tokio-util-0.6+codec-dev,
  librust-tokio-util-0.6+default-dev,
+ librust-tokio-util-0.6+io-dev,
  librust-tower-service-0.3+default-dev,
  librust-udev-0.4+default-dev | librust-udev-0.3+default-dev,
  librust-url-2+default-dev (>= 2.1-~~),
diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index d45c12af..5a982108 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -14,6 +14,7 @@ use proxmox::api::{
     },
 };
 use pxar::accessor::aio::Accessor;
+use pxar::decoder::aio::Decoder;
 
 use proxmox_backup::api2::{helpers, types::ArchiveEntry};
 use proxmox_backup::backup::{
@@ -21,7 +22,7 @@ use proxmox_backup::backup::{
     DirEntryAttribute, IndexFile, LocalDynamicReadAt, CATALOG_NAME,
 };
 use proxmox_backup::client::{BackupReader, RemoteChunkReader};
-use proxmox_backup::pxar::{create_zip, extract_sub_dir};
+use proxmox_backup::pxar::{create_zip, extract_sub_dir, extract_sub_dir_seq};
 use proxmox_backup::tools;
 
 // use "pub" so rust doesn't complain about "unused" functions in the module
@@ -277,7 +278,11 @@ async fn list(
                description: "Print verbose information",
                optional: true,
                default: false,
-           }
+           },
+           "driver": {
+               type: BlockDriverType,
+               optional: true,
+           },
        }
    }
 )]
@@ -314,20 +319,21 @@ async fn extract(
         }
     };
 
+    let client = connect(&repo)?;
+    let client = BackupReader::start(
+        client,
+        crypt_config.clone(),
+        repo.store(),
+        &snapshot.group().backup_type(),
+        &snapshot.group().backup_id(),
+        snapshot.backup_time(),
+        true,
+    )
+    .await?;
+    let (manifest, _) = client.download_manifest().await?;
+
     match path {
         ExtractPath::Pxar(archive_name, path) => {
-            let client = connect(&repo)?;
-            let client = BackupReader::start(
-                client,
-                crypt_config.clone(),
-                repo.store(),
-                &snapshot.group().backup_type(),
-                &snapshot.group().backup_id(),
-                snapshot.backup_time(),
-                true,
-            )
-            .await?;
-            let (manifest, _) = client.download_manifest().await?;
             let file_info = manifest.lookup_file_info(&archive_name)?;
             let index = client
                 .download_dynamic_index(&manifest, &archive_name)
@@ -344,31 +350,33 @@ async fn extract(
             let archive_size = reader.archive_size();
             let reader = LocalDynamicReadAt::new(reader);
             let decoder = Accessor::new(reader, archive_size).await?;
+            extract_to_target(decoder, &path, target, verbose).await?;
+        }
+        ExtractPath::VM(file, path) => {
+            let details = SnapRestoreDetails {
+                manifest,
+                repo,
+                snapshot,
+            };
+            let driver: Option<BlockDriverType> = match param.get("driver") {
+                Some(drv) => Some(serde_json::from_value(drv.clone())?),
+                None => None,
+            };
 
-            let root = decoder.open_root().await?;
-            let file = root
-                .lookup(OsStr::from_bytes(&path))
-                .await?
-                .ok_or_else(|| format_err!("error opening '{:?}'", path))?;
+            if let Some(mut target) = target {
+                let reader = data_extract(driver, details, file, path.clone(), true).await?;
+                let decoder = Decoder::from_tokio(reader).await?;
+                extract_sub_dir_seq(&target, decoder, verbose).await?;
 
-            if let Some(target) = target {
-                extract_sub_dir(target, decoder, OsStr::from_bytes(&path), verbose).await?;
+                // we extracted a .pxarexclude-cli file auto-generated by the VM when encoding the
+                // archive; it is of no use to the user, so try to remove it
+                target.push(".pxarexclude-cli");
+                std::fs::remove_file(target).map_err(|e| {
+                    format_err!("unable to remove temporary .pxarexclude-cli file - {}", e)
+                })?;
             } else {
-                match file.kind() {
-                    pxar::EntryKind::File { .. } => {
-                        tokio::io::copy(&mut file.contents().await?, &mut tokio::io::stdout())
-                            .await?;
-                    }
-                    _ => {
-                        create_zip(
-                            tokio::io::stdout(),
-                            decoder,
-                            OsStr::from_bytes(&path),
-                            verbose,
-                        )
-                        .await?;
-                    }
-                }
+                let mut reader = data_extract(driver, details, file, path.clone(), false).await?;
+                tokio::io::copy(&mut reader, &mut tokio::io::stdout()).await?;
             }
         }
         _ => {
@@ -379,6 +387,43 @@ async fn extract(
     Ok(())
 }
 
+async fn extract_to_target<T>(
+    decoder: Accessor<T>,
+    path: &[u8],
+    target: Option<PathBuf>,
+    verbose: bool,
+) -> Result<(), Error>
+where
+    T: pxar::accessor::ReadAt + Clone + Send + Sync + Unpin + 'static,
+{
+    let root = decoder.open_root().await?;
+    let file = root
+        .lookup(OsStr::from_bytes(&path))
+        .await?
+        .ok_or_else(|| format_err!("error opening '{:?}'", path))?;
+
+    if let Some(target) = target {
+        extract_sub_dir(target, decoder, OsStr::from_bytes(&path), verbose).await?;
+    } else {
+        match file.kind() {
+            pxar::EntryKind::File { .. } => {
+                tokio::io::copy(&mut file.contents().await?, &mut tokio::io::stdout()).await?;
+            }
+            _ => {
+                create_zip(
+                    tokio::io::stdout(),
+                    decoder,
+                    OsStr::from_bytes(&path),
+                    verbose,
+                )
+                .await?;
+            }
+        }
+    }
+
+    Ok(())
+}
+
 fn main() {
     let list_cmd_def = CliCommand::new(&API_METHOD_LIST)
         .arg_param(&["snapshot", "path"])
diff --git a/src/bin/proxmox_file_restore/block_driver.rs b/src/bin/proxmox_file_restore/block_driver.rs
index 63872f04..924503a7 100644
--- a/src/bin/proxmox_file_restore/block_driver.rs
+++ b/src/bin/proxmox_file_restore/block_driver.rs
@@ -41,6 +41,19 @@ pub trait BlockRestoreDriver {
         path: Vec<u8>,
     ) -> Async<Result<Vec<ArchiveEntry>, Error>>;
 
+    /// pxar=true:
+    /// Attempt to create a pxar archive of the given file path and return a reader instance for it
+    /// pxar=false:
+    /// Attempt to read the file or folder at the given path and return the file content or a zip
+    /// file as a stream
+    fn data_extract(
+        &self,
+        details: SnapRestoreDetails,
+        img_file: String,
+        path: Vec<u8>,
+        pxar: bool,
+    ) -> Async<Result<Box<dyn tokio::io::AsyncRead + Unpin + Send>, Error>>;
+
     /// Return status of all running/mapped images, result value is (id, extra data), where id must
     /// match with the ones returned from list()
     fn status(&self) -> Async<Result<Vec<DriverStatus>, Error>>;
@@ -79,6 +92,17 @@ pub async fn data_list(
     driver.data_list(details, img_file, path).await
 }
 
+pub async fn data_extract(
+    driver: Option<BlockDriverType>,
+    details: SnapRestoreDetails,
+    img_file: String,
+    path: Vec<u8>,
+    pxar: bool,
+) -> Result<Box<dyn tokio::io::AsyncRead + Send + Unpin>, Error> {
+    let driver = driver.unwrap_or(DEFAULT_DRIVER).resolve();
+    driver.data_extract(details, img_file, path, pxar).await
+}
+
 #[api(
    input: {
        properties: {
diff --git a/src/bin/proxmox_file_restore/block_driver_qemu.rs b/src/bin/proxmox_file_restore/block_driver_qemu.rs
index 1a96ef10..bb312747 100644
--- a/src/bin/proxmox_file_restore/block_driver_qemu.rs
+++ b/src/bin/proxmox_file_restore/block_driver_qemu.rs
@@ -236,6 +236,38 @@ impl BlockRestoreDriver for QemuBlockDriver {
         .boxed()
     }
 
+    fn data_extract(
+        &self,
+        details: SnapRestoreDetails,
+        img_file: String,
+        mut path: Vec<u8>,
+        pxar: bool,
+    ) -> Async<Result<Box<dyn tokio::io::AsyncRead + Unpin + Send>, Error>> {
+        async move {
+            let client = ensure_running(&details).await?;
+            if !path.is_empty() && path[0] != b'/' {
+                path.insert(0, b'/');
+            }
+            let path = base64::encode(img_file.bytes().chain(path).collect::<Vec<u8>>());
+            let (mut tx, rx) = tokio::io::duplex(1024 * 4096);
+            tokio::spawn(async move {
+                if let Err(err) = client
+                    .download(
+                        "api2/json/extract",
+                        Some(json!({ "path": path, "pxar": pxar })),
+                        &mut tx,
+                    )
+                    .await
+                {
+                    eprintln!("reading file extraction stream failed - {}", err);
+                }
+            });
+
+            Ok(Box::new(rx) as Box<dyn tokio::io::AsyncRead + Unpin + Send>)
+        }
+        .boxed()
+    }
+
     fn status(&self) -> Async<Result<Vec<DriverStatus>, Error>> {
         async move {
             let mut state_map = VMStateMap::load()?;
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
index 2f990f36..7ac70278 100644
--- a/src/bin/proxmox_restore_daemon/api.rs
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -1,16 +1,29 @@
 ///! File-restore API running inside the restore VM
 use anyhow::{bail, Error};
+use futures::FutureExt;
+use hyper::http::request::Parts;
+use hyper::{header, Body, Response, StatusCode};
+use log::error;
+use pathpatterns::{MatchEntry, MatchPattern, MatchType, Pattern};
+use serde_json::Value;
+
 use std::ffi::OsStr;
 use std::fs;
 use std::os::unix::ffi::OsStrExt;
 use std::path::{Path, PathBuf};
 
-use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment, SubdirMap};
-use proxmox::list_subdirs_api_method;
+use proxmox::api::{
+    api, schema::*, ApiHandler, ApiMethod, ApiResponseFuture, Permission, Router, RpcEnvironment,
+    SubdirMap,
+};
+use proxmox::{identity, list_subdirs_api_method, sortable};
 
 use proxmox_backup::api2::types::*;
 use proxmox_backup::backup::DirEntryAttribute;
-use proxmox_backup::tools::fs::read_subdir;
+use proxmox_backup::pxar::{create_archive, Flags, PxarCreateOptions, ENCODER_MAX_ENTRIES};
+use proxmox_backup::tools::{self, fs::read_subdir, zip::zip_directory};
+
+use pxar::encoder::aio::TokioWriter;
 
 use super::{disk::ResolveResult, watchdog_remaining, watchdog_ping};
 
@@ -18,6 +31,7 @@ use super::{disk::ResolveResult, watchdog_remaining, watchdog_ping};
 // not exist within the restore VM. Safety is guaranteed by checking a ticket via a custom ApiAuth.
 
 const SUBDIRS: SubdirMap = &[
+    ("extract", &Router::new().get(&API_METHOD_EXTRACT)),
     ("list", &Router::new().get(&API_METHOD_LIST)),
     ("status", &Router::new().get(&API_METHOD_STATUS)),
     ("stop", &Router::new().get(&API_METHOD_STOP)),
@@ -197,3 +211,159 @@ fn list(
 
     Ok(res)
 }
+
+#[sortable]
+pub const API_METHOD_EXTRACT: ApiMethod = ApiMethod::new(
+    &ApiHandler::AsyncHttp(&extract),
+    &ObjectSchema::new(
+        "Extract a file or directory from the VM as a pxar archive, zip archive, or raw file content.",
+        &sorted!([
+            (
+                "path",
+                false,
+                &StringSchema::new("base64-encoded path of the file or directory to extract")
+                    .schema()
+            ),
+            (
+                "pxar",
+                true,
+                &BooleanSchema::new(concat!(
+                    "if true, return a pxar archive, otherwise either the ",
+                    "file content or the directory as a zip file"
+                ))
+                .default(true)
+                .schema()
+            )
+        ]),
+    ),
+)
+.access(None, &Permission::Superuser);
+
+fn extract(
+    _parts: Parts,
+    _req_body: Body,
+    param: Value,
+    _info: &ApiMethod,
+    _rpcenv: Box<dyn RpcEnvironment>,
+) -> ApiResponseFuture {
+    watchdog_ping();
+    async move {
+        let path = tools::required_string_param(&param, "path")?;
+        let mut path = base64::decode(path)?;
+        if let Some(b'/') = path.last() {
+            path.pop();
+        }
+        let path = Path::new(OsStr::from_bytes(&path[..]));
+
+        let pxar = param["pxar"].as_bool().unwrap_or(true);
+
+        let query_result = {
+            let mut disk_state = crate::DISK_STATE.lock().unwrap();
+            disk_state.resolve(&path)?
+        };
+
+        let vm_path = match query_result {
+            ResolveResult::Path(vm_path) => vm_path,
+            _ => bail!("invalid path, cannot restore meta-directory: {:?}", path),
+        };
+
+        // check here so we can return a real error message; failing in the async task would stop
+        // the transfer, but not return a useful message
+        if !vm_path.exists() {
+            bail!("file or directory {:?} does not exist", path);
+        }
+
+        let (mut writer, reader) = tokio::io::duplex(1024 * 64);
+
+        if pxar {
+            tokio::spawn(async move {
+                let result = async move {
+                    // pxar always expects a directory as its root, so to accommodate files as
+                    // well we encode the parent dir with a filter only matching the target instead
+                    let mut patterns = vec![MatchEntry::new(
+                        MatchPattern::Pattern(Pattern::path(b"*").unwrap()),
+                        MatchType::Exclude,
+                    )];
+
+                    let name = match vm_path.file_name() {
+                        Some(name) => name,
+                        None => bail!("no file name found for path: {:?}", vm_path),
+                    };
+
+                    if vm_path.is_dir() {
+                        let mut pat = name.as_bytes().to_vec();
+                        patterns.push(MatchEntry::new(
+                            MatchPattern::Pattern(Pattern::path(pat.clone())?),
+                            MatchType::Include,
+                        ));
+                        pat.extend(b"/**/*".iter());
+                        patterns.push(MatchEntry::new(
+                            MatchPattern::Pattern(Pattern::path(pat)?),
+                            MatchType::Include,
+                        ));
+                    } else {
+                        patterns.push(MatchEntry::new(
+                            MatchPattern::Literal(name.as_bytes().to_vec()),
+                            MatchType::Include,
+                        ));
+                    }
+
+                    let dir_path = vm_path.parent().unwrap_or_else(|| Path::new("/"));
+                    let dir = nix::dir::Dir::open(
+                        dir_path,
+                        nix::fcntl::OFlag::O_NOFOLLOW,
+                        nix::sys::stat::Mode::empty(),
+                    )?;
+
+                    let options = PxarCreateOptions {
+                        entries_max: ENCODER_MAX_ENTRIES,
+                        device_set: None,
+                        patterns,
+                        verbose: false,
+                        skip_lost_and_found: false,
+                    };
+
+                    let pxar_writer = TokioWriter::new(writer);
+                    create_archive(dir, pxar_writer, Flags::DEFAULT, |_| Ok(()), None, options)
+                        .await
+                }
+                .await;
+                if let Err(err) = result {
+                    error!("pxar streaming task failed - {}", err);
+                }
+            });
+        } else {
+            tokio::spawn(async move {
+                let result = async move {
+                    if vm_path.is_dir() {
+                        zip_directory(&mut writer, &vm_path).await?;
+                        Ok(())
+                    } else if vm_path.is_file() {
+                        let mut file = tokio::fs::OpenOptions::new()
+                            .read(true)
+                            .open(vm_path)
+                            .await?;
+                        tokio::io::copy(&mut file, &mut writer).await?;
+                        Ok(())
+                    } else {
+                        bail!("invalid entry type for path: {:?}", vm_path);
+                    }
+                }
+                .await;
+                if let Err(err) = result {
+                    error!("file or dir streaming task failed - {}", err);
+                }
+            });
+        }
+
+        let stream = tokio_util::io::ReaderStream::new(reader);
+
+        let body = Body::wrap_stream(stream);
+        Ok(Response::builder()
+            .status(StatusCode::OK)
+            .header(header::CONTENT_TYPE, "application/octet-stream")
+            .body(body)
+            .unwrap())
+    }
+    .boxed()
+}
-- 
2.20.1

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [pbs-devel] applied: [PATCH v3 pxar 01/20] decoder/aio: add contents() and content_size() calls
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 pxar 01/20] decoder/aio: add contents() and content_size() calls Stefan Reiter
@ 2021-03-31 11:54   ` Wolfgang Bumiller
  0 siblings, 0 replies; 32+ messages in thread
From: Wolfgang Bumiller @ 2021-03-31 11:54 UTC (permalink / raw)
  To: Stefan Reiter; +Cc: pbs-devel

applied

On Wed, Mar 31, 2021 at 12:21:43PM +0200, Stefan Reiter wrote:
> Returns a decoder::Contents without a wrapper type, since in this case
> we don't want to hide the SeqRead implementation (as done in
> decoder::sync). For convenience also implement AsyncRead if "tokio-io"
> is enabled.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> 
> proxmox-backup requires a dependency bump on this!
> 
> v3:
> * assume_init takes just 'n', already calculates offset correctly
> 
> v2:
> * make contents() call available without tokio-io feature
> * drop peek() implementation
> 
>  src/decoder/aio.rs | 36 +++++++++++++++++++++++++++++++++++-
>  1 file changed, 35 insertions(+), 1 deletion(-)
> 
> diff --git a/src/decoder/aio.rs b/src/decoder/aio.rs
> index 82030b0..55e6464 100644
> --- a/src/decoder/aio.rs
> +++ b/src/decoder/aio.rs
> @@ -5,7 +5,7 @@ use std::io;
>  #[cfg(feature = "tokio-fs")]
>  use std::path::Path;
>  
> -use crate::decoder::{self, SeqRead};
> +use crate::decoder::{self, Contents, SeqRead};
>  use crate::Entry;
>  
>  /// Asynchronous `pxar` decoder.
> @@ -56,6 +56,16 @@ impl<T: SeqRead> Decoder<T> {
>          self.inner.next_do().await.transpose()
>      }
>  
> +    /// Get a reader for the contents of the current entry, if the entry has contents.
> +    pub fn contents(&mut self) -> Option<Contents<T>> {
> +        self.inner.content_reader()
> +    }
> +
> +    /// Get the size of the current contents, if the entry has contents.
> +    pub fn content_size(&self) -> Option<u64> {
> +        self.inner.content_size()
> +    }
> +
>      /// Include goodbye tables in iteration.
>      pub fn enable_goodbye_entries(&mut self, on: bool) {
>          self.inner.with_goodbye_tables = on;
> @@ -67,6 +77,7 @@ mod tok {
>      use std::io;
>      use std::pin::Pin;
>      use std::task::{Context, Poll};
> +    use crate::decoder::{Contents, SeqRead};
>  
>      /// Read adapter for `futures::io::AsyncRead`
>      pub struct TokioReader<T> {
> @@ -93,6 +104,29 @@ mod tok {
>              }
>          }
>      }
> +
> +    impl<'a, T: crate::decoder::SeqRead> tokio::io::AsyncRead for Contents<'a, T> {
> +        fn poll_read(
> +            self: Pin<&mut Self>,
> +            cx: &mut Context<'_>,
> +            buf: &mut tokio::io::ReadBuf<'_>,
> +        ) -> Poll<io::Result<()>> {
> +            unsafe {
> +                // Safety: poll_seq_read will *probably* only write to the buffer, so we don't
> +                // initialize it first, instead we treat it as a &[u8] immediately and uphold the
> +                // ReadBuf invariants in the conditional below.
> +                let write_buf =
> +                    &mut *(buf.unfilled_mut() as *mut [std::mem::MaybeUninit<u8>] as *mut [u8]);
> +                let result = self.poll_seq_read(cx, write_buf);
> +                if let Poll::Ready(Ok(n)) = result {
> +                    // if we've written data, advance both initialized and filled bytes cursor
> +                    buf.assume_init(n);
> +                    buf.advance(n);
> +                }
> +                result.map(|_| Ok(()))
> +            }
> +        }
> +    }
>  }
>  
>  #[cfg(feature = "tokio-io")]
> -- 
> 2.20.1


* Re: [pbs-devel] [PATCH v3 proxmox-backup 09/20] server/rest: add ApiAuth trait to make user auth generic
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 09/20] server/rest: add ApiAuth trait to make user auth generic Stefan Reiter
@ 2021-03-31 12:55   ` Wolfgang Bumiller
  2021-03-31 14:07     ` Thomas Lamprecht
  0 siblings, 1 reply; 32+ messages in thread
From: Wolfgang Bumiller @ 2021-03-31 12:55 UTC (permalink / raw)
  To: Stefan Reiter; +Cc: pbs-devel

LGTM but I'll suggest some quality-of-life improvements:

On Wed, Mar 31, 2021 at 12:21:51PM +0200, Stefan Reiter wrote:
> This allows switching the base user identification/authentication method
> in the rest server. Will initially be used for single file restore VMs,
> where authentication is based on a ticket file, not the PBS user
> backend (PAM/local).
> 
> To avoid putting generic types into the RestServer type for this, we
> merge the two calls "extract_auth_data" and "check_auth" into a single
> one, which can use whatever type it wants internally.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> 
> v3:
> * merge both calls into one trait, that way it doesn't have to be generic
> 
>  src/bin/proxmox-backup-api.rs   |  13 ++-
>  src/bin/proxmox-backup-proxy.rs |   7 +-
>  src/server/auth.rs              | 192 +++++++++++++++++++-------------
>  src/server/config.rs            |  13 ++-
>  src/server/rest.rs              |  36 +++---
>  5 files changed, 159 insertions(+), 102 deletions(-)
> 
> diff --git a/src/bin/proxmox-backup-api.rs b/src/bin/proxmox-backup-api.rs
> index 7d800259..e514a801 100644
> --- a/src/bin/proxmox-backup-api.rs
> +++ b/src/bin/proxmox-backup-api.rs
> @@ -6,8 +6,11 @@ use proxmox::api::RpcEnvironmentType;
>  
>  //use proxmox_backup::tools;
>  //use proxmox_backup::api_schema::config::*;
> -use proxmox_backup::server::rest::*;
> -use proxmox_backup::server;
> +use proxmox_backup::server::{
> +    self,
> +    auth::default_api_auth,
> +    rest::*,
> +};
>  use proxmox_backup::tools::daemon;
>  use proxmox_backup::auth_helpers::*;
>  use proxmox_backup::config;
> @@ -53,7 +56,11 @@ async fn run() -> Result<(), Error> {
>      let _ = csrf_secret(); // load with lazy_static
>  
>      let mut config = server::ApiConfig::new(
> -        buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PRIVILEGED)?;
> +        buildcfg::JS_DIR,
> +        &proxmox_backup::api2::ROUTER,
> +        RpcEnvironmentType::PRIVILEGED,
> +        default_api_auth(),
> +    )?;
>  
>      let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
>  
> diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
> index 541d34b5..7e026455 100644
> --- a/src/bin/proxmox-backup-proxy.rs
> +++ b/src/bin/proxmox-backup-proxy.rs
> @@ -14,6 +14,7 @@ use proxmox::api::RpcEnvironmentType;
>  use proxmox_backup::{
>      backup::DataStore,
>      server::{
> +        auth::default_api_auth,
>          WorkerTask,
>          ApiConfig,
>          rest::*,
> @@ -84,7 +85,11 @@ async fn run() -> Result<(), Error> {
>      let _ = csrf_secret(); // load with lazy_static
>  
>      let mut config = ApiConfig::new(
> -        buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PUBLIC)?;
> +        buildcfg::JS_DIR,
> +        &proxmox_backup::api2::ROUTER,
> +        RpcEnvironmentType::PUBLIC,
> +        default_api_auth(),
> +    )?;
>  
>      // Enable experimental tape UI if tape.cfg exists
>      if Path::new("/etc/proxmox-backup/tape.cfg").exists() {
> diff --git a/src/server/auth.rs b/src/server/auth.rs
> index 24151886..0a9a740c 100644
> --- a/src/server/auth.rs
> +++ b/src/server/auth.rs
> @@ -1,102 +1,140 @@
>  //! Provides authentication primitives for the HTTP server
> -use anyhow::{bail, format_err, Error};
> +use anyhow::{format_err, Error};
> +
> +use std::sync::Arc;
>  
> -use crate::tools::ticket::Ticket;
> -use crate::auth_helpers::*;
> -use crate::tools;
> -use crate::config::cached_user_info::CachedUserInfo;
>  use crate::api2::types::{Authid, Userid};
> +use crate::auth_helpers::*;
> +use crate::config::cached_user_info::CachedUserInfo;
> +use crate::tools;
> +use crate::tools::ticket::Ticket;
>  
>  use hyper::header;
>  use percent_encoding::percent_decode_str;
>  
> -pub struct UserAuthData {
> +pub enum AuthError {
> +    Generic(Error),
> +    NoData,
> +}
> +
> +impl From<Error> for AuthError {
> +    fn from(err: Error) -> Self {
> +        AuthError::Generic(err)
> +    }
> +}

^ When you define an Error type you should immediately also derive Debug
and implement `std::fmt::Display`. While you only "display" it once, I
don't think the error message should be left up to the caller (see
further down below).

In order to make this shorter and easier, I'd propose including the
`thiserror` helper crate; then we'd need as little code as:

    #[derive(thiserror::Error, Debug)]
    pub enum AuthError {
        #[error(transparent)]
        Generic(#[from] Error),

        #[error("no authentication credentials provided")]
        NoData,
    }

And the manual `From<anyhow::Error>` impl can simply be dropped ;-)

That is *unless* you explicitly want this to *not* be convertible to
`anyhow::Error` automagically. Then you don't want this to impl
`std::error::Error`, but then you should add a comment for this ;-)
Which would be a valid way to go given that you're already wrapping
`anyhow::Error` in there... OTOH all the token parsing errors could be
combined into a single `AuthError::BadToken` instead of multiple
"slightly different" `format_err!` messages where the minor difference
in the error message doesn't really provide much value anyway (IMO).

Anyway, this can easily be left as is and updated in a later series.
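
For reference, the manual equivalent (roughly what `thiserror` would generate)
looks like the sketch below. A plain `String` stands in for `anyhow::Error`
here purely so the snippet stays self-contained; the real type would wrap
`anyhow::Error` as in the patch:

```rust
use std::fmt;

// Sketch: `String` stands in for `anyhow::Error` to keep this self-contained.
#[derive(Debug)]
pub enum AuthError {
    Generic(String),
    NoData,
}

impl fmt::Display for AuthError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // delegate to the wrapped error's own message
            AuthError::Generic(err) => write!(f, "{}", err),
            AuthError::NoData => write!(f, "no authentication credentials provided"),
        }
    }
}

impl From<String> for AuthError {
    fn from(err: String) -> Self {
        AuthError::Generic(err)
    }
}

fn main() {
    assert_eq!(
        AuthError::NoData.to_string(),
        "no authentication credentials provided"
    );
    assert_eq!(AuthError::from(String::from("bad token")).to_string(), "bad token");
}
```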

> +
> +pub trait ApiAuth {
> +    fn check_auth(
> +        &self,
> +        headers: &http::HeaderMap,
> +        method: &hyper::Method,
> +        user_info: &CachedUserInfo,
> +    ) -> Result<Authid, AuthError>;
> +}
> +
> +struct UserAuthData {
>      ticket: String,
>      csrf_token: Option<String>,
>  }
>  
> -pub enum AuthData {
> +enum AuthData {
>      User(UserAuthData),
>      ApiToken(String),
>  }
>  
> -pub fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
> -    if let Some(raw_cookie) = headers.get(header::COOKIE) {
> -        if let Ok(cookie) = raw_cookie.to_str() {
> -            if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
> -                let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
> -                    Some(Ok(v)) => Some(v.to_owned()),
> -                    _ => None,
> -                };
> -                return Some(AuthData::User(UserAuthData {
> -                    ticket,
> -                    csrf_token,
> -                }));
> -            }
> -        }
> -    }
> -
> -    match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
> -        Some(Ok(v)) => {
> -            if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
> -                Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
> -            } else {
> -                None
> -            }
> -        },
> -        _ => None,
> -    }
> +pub struct UserApiAuth {}
> +pub fn default_api_auth() -> Arc<UserApiAuth> {
> +    Arc::new(UserApiAuth {})
>  }
>  
> -pub fn check_auth(
> -    method: &hyper::Method,
> -    auth_data: &AuthData,
> -    user_info: &CachedUserInfo,
> -) -> Result<Authid, Error> {
> -    match auth_data {
> -        AuthData::User(user_auth_data) => {
> -            let ticket = user_auth_data.ticket.clone();
> -            let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
> -
> -            let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
> -                .verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
> -                .require_full()?;
> -
> -            let auth_id = Authid::from(userid.clone());
> -            if !user_info.is_active_auth_id(&auth_id) {
> -                bail!("user account disabled or expired.");
> -            }
> -
> -            if method != hyper::Method::GET {
> -                if let Some(csrf_token) = &user_auth_data.csrf_token {
> -                    verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
> -                } else {
> -                    bail!("missing CSRF prevention token");
> +impl UserApiAuth {
> +    fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
> +        if let Some(raw_cookie) = headers.get(header::COOKIE) {
> +            if let Ok(cookie) = raw_cookie.to_str() {
> +                if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
> +                    let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
> +                        Some(Ok(v)) => Some(v.to_owned()),
> +                        _ => None,
> +                    };
> +                    return Some(AuthData::User(UserAuthData { ticket, csrf_token }));
>                  }
>              }
> +        }
>  
> -            Ok(auth_id)
> -        },
> -        AuthData::ApiToken(api_token) => {
> -            let mut parts = api_token.splitn(2, ':');
> -            let tokenid = parts.next()
> -                .ok_or_else(|| format_err!("failed to split API token header"))?;
> -            let tokenid: Authid = tokenid.parse()?;
> -
> -            if !user_info.is_active_auth_id(&tokenid) {
> -                bail!("user account or token disabled or expired.");
> +        match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
> +            Some(Ok(v)) => {
> +                if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
> +                    Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
> +                } else {
> +                    None
> +                }
>              }
> -
> -            let tokensecret = parts.next()
> -                .ok_or_else(|| format_err!("failed to split API token header"))?;
> -            let tokensecret = percent_decode_str(tokensecret)
> -                .decode_utf8()
> -                .map_err(|_| format_err!("failed to decode API token header"))?;
> -
> -            crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
> -
> -            Ok(tokenid)
> +            _ => None,
>          }
>      }
>  }
>  
> +impl ApiAuth for UserApiAuth {
> +    fn check_auth(
> +        &self,
> +        headers: &http::HeaderMap,
> +        method: &hyper::Method,
> +        user_info: &CachedUserInfo,
> +    ) -> Result<Authid, AuthError> {
> +        let auth_data = Self::extract_auth_data(headers);
> +        match auth_data {
> +            Some(AuthData::User(user_auth_data)) => {
> +                let ticket = user_auth_data.ticket.clone();
> +                let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
> +
> +                let userid: Userid = Ticket::<super::ticket::ApiTicket>::parse(&ticket)?
> +                    .verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?
> +                    .require_full()?;
> +
> +                let auth_id = Authid::from(userid.clone());
> +                if !user_info.is_active_auth_id(&auth_id) {
> +                    return Err(format_err!("user account disabled or expired.").into());
> +                }
> +
> +                if method != hyper::Method::GET {
> +                    if let Some(csrf_token) = &user_auth_data.csrf_token {
> +                        verify_csrf_prevention_token(
> +                            csrf_secret(),
> +                            &userid,
> +                            &csrf_token,
> +                            -300,
> +                            ticket_lifetime,
> +                        )?;
> +                    } else {
> +                        return Err(format_err!("missing CSRF prevention token").into());
> +                    }
> +                }
> +
> +                Ok(auth_id)
> +            }
> +            Some(AuthData::ApiToken(api_token)) => {
> +                let mut parts = api_token.splitn(2, ':');
> +                let tokenid = parts
> +                    .next()
> +                    .ok_or_else(|| format_err!("failed to split API token header"))?;
> +                let tokenid: Authid = tokenid.parse()?;
> +
> +                if !user_info.is_active_auth_id(&tokenid) {
> +                    return Err(format_err!("user account or token disabled or expired.").into());
> +                }
> +
> +                let tokensecret = parts
> +                    .next()
> +                    .ok_or_else(|| format_err!("failed to split API token header"))?;
> +                let tokensecret = percent_decode_str(tokensecret)
> +                    .decode_utf8()
> +                    .map_err(|_| format_err!("failed to decode API token header"))?;
> +
> +                crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
> +
> +                Ok(tokenid)
> +            }
> +            None => Err(AuthError::NoData),
> +        }
> +    }
> +}
> diff --git a/src/server/config.rs b/src/server/config.rs
> index 9094fa80..ad378b0a 100644
> --- a/src/server/config.rs
> +++ b/src/server/config.rs
> @@ -13,6 +13,7 @@ use proxmox::api::{ApiMethod, Router, RpcEnvironmentType};
>  use proxmox::tools::fs::{create_path, CreateOptions};
>  
>  use crate::tools::{FileLogger, FileLogOptions};
> +use super::auth::ApiAuth;
>  
>  pub struct ApiConfig {
>      basedir: PathBuf,
> @@ -23,11 +24,16 @@ pub struct ApiConfig {
>      template_files: RwLock<HashMap<String, (SystemTime, PathBuf)>>,
>      request_log: Option<Arc<Mutex<FileLogger>>>,
>      pub enable_tape_ui: bool,
> +    pub api_auth: Arc<dyn ApiAuth + Send + Sync>,
>  }
>  
>  impl ApiConfig {
> -
> -    pub fn new<B: Into<PathBuf>>(basedir: B, router: &'static Router, env_type: RpcEnvironmentType) -> Result<Self, Error> {
> +    pub fn new<B: Into<PathBuf>>(
> +        basedir: B,
> +        router: &'static Router,
> +        env_type: RpcEnvironmentType,
> +        api_auth: Arc<dyn ApiAuth + Send + Sync>,
> +    ) -> Result<Self, Error> {
>          Ok(Self {
>              basedir: basedir.into(),
>              router,
> @@ -37,7 +43,8 @@ impl ApiConfig {
>              template_files: RwLock::new(HashMap::new()),
>              request_log: None,
>              enable_tape_ui: false,
> -       })
> +            api_auth,
> +        })
>      }
>  
>      pub fn find_method(
> diff --git a/src/server/rest.rs b/src/server/rest.rs
> index 9a971890..2d033510 100644
> --- a/src/server/rest.rs
> +++ b/src/server/rest.rs
> @@ -14,7 +14,6 @@ use hyper::header::{self, HeaderMap};
>  use hyper::http::request::Parts;
>  use hyper::{Body, Request, Response, StatusCode};
>  use lazy_static::lazy_static;
> -use percent_encoding::percent_decode_str;
>  use regex::Regex;
>  use serde_json::{json, Value};
>  use tokio::fs::File;
> @@ -31,16 +30,15 @@ use proxmox::api::{
>  };
>  use proxmox::http_err;
>  
> +use super::auth::AuthError;
>  use super::environment::RestEnvironment;
>  use super::formatter::*;
>  use super::ApiConfig;
> -use super::auth::{check_auth, extract_auth_data};
>  
>  use crate::api2::types::{Authid, Userid};
>  use crate::auth_helpers::*;
>  use crate::config::cached_user_info::CachedUserInfo;
>  use crate::tools;
> -use crate::tools::ticket::Ticket;
>  use crate::tools::FileLogger;
>  
>  extern "C" {
> @@ -614,6 +612,7 @@ async fn handle_request(
>      rpcenv.set_client_ip(Some(*peer));
>  
>      let user_info = CachedUserInfo::new()?;
> +    let auth = &api.api_auth;
>  
>      let delay_unauth_time = std::time::Instant::now() + std::time::Duration::from_millis(3000);
>      let access_forbidden_time = std::time::Instant::now() + std::time::Duration::from_millis(500);
> @@ -639,13 +638,15 @@ async fn handle_request(
>              }
>  
>              if auth_required {
> -                let auth_result = match extract_auth_data(&parts.headers) {
> -                    Some(auth_data) => check_auth(&method, &auth_data, &user_info),
> -                    None => Err(format_err!("no authentication credentials provided.")),
> -                };
> -                match auth_result {
> +                match auth.check_auth(&parts.headers, &method, &user_info) {
>                      Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())),
> -                    Err(err) => {
> +                    Err(auth_err) => {
> +                        let err = match auth_err {
> +                            AuthError::Generic(err) => err,
> +                            AuthError::NoData => {
> +                                format_err!("no authentication credentials provided.")
> +                            }
> +                        };

With the `fmt::Display` impl for `AuthError` the above match on
`auth_err` can be dropped as well and the `Err(auth_err)` case can
stay `Err(err)` with the code below still working fine as is.
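
To illustrate, with `Display` in place the call site can format any
`AuthError` uniformly, without destructuring variants. A sketch — `String`
again stands in for `anyhow::Error`, and the hypothetical `auth_failure_line`
helper stands in for the real `auth_logger()?.log(...)` call:

```rust
use std::fmt;

#[derive(Debug)]
enum AuthError {
    Generic(String), // `String` stands in for `anyhow::Error` in this sketch
    NoData,
}

impl fmt::Display for AuthError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AuthError::Generic(err) => write!(f, "{}", err),
            AuthError::NoData => write!(f, "no authentication credentials provided"),
        }
    }
}

// hypothetical stand-in for the logger call: builds the line the handler
// would log, with no per-variant match needed at the call site
fn auth_failure_line(rhost: &str, err: &AuthError) -> String {
    format!("authentication failure; rhost={} msg={}", rhost, err)
}

fn main() {
    let line = auth_failure_line("192.0.2.1", &AuthError::NoData);
    assert_eq!(
        line,
        "authentication failure; rhost=192.0.2.1 msg=no authentication credentials provided"
    );
}
```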

>                          let peer = peer.ip();
>                          auth_logger()?.log(format!(
>                              "authentication failure; rhost={} msg={}",
> @@ -708,9 +709,9 @@ async fn handle_request(
>  
>          if comp_len == 0 {
>              let language = extract_lang_header(&parts.headers);
> -            if let Some(auth_data) = extract_auth_data(&parts.headers) {
> -                match check_auth(&method, &auth_data, &user_info) {
> -                    Ok(auth_id) if !auth_id.is_token() => {
> +            match auth.check_auth(&parts.headers, &method, &user_info) {
> +                Ok(auth_id) => {
> +                    if !auth_id.is_token() {
>                          let userid = auth_id.user();
>                          let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), userid);
>                          return Ok(get_index(
> @@ -721,14 +722,13 @@ async fn handle_request(
>                              parts,
>                          ));
>                      }
> -                    _ => {
> -                        tokio::time::sleep_until(Instant::from_std(delay_unauth_time)).await;
> -                        return Ok(get_index(None, None, language, &api, parts));
> -                    }
>                  }
> -            } else {
> -                return Ok(get_index(None, None, language, &api, parts));
> +                Err(AuthError::Generic(_)) => {
> +                    tokio::time::sleep_until(Instant::from_std(delay_unauth_time)).await;
> +                }
> +                Err(AuthError::NoData) => {}
>              }
> +            return Ok(get_index(None, None, language, &api, parts));
>          } else {
>              let filename = api.find_alias(&components);
>              return handle_static_file_download(filename).await;
> -- 
> 2.20.1




^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [pbs-devel] [PATCH v3 proxmox-backup 09/20] server/rest: add ApiAuth trait to make user auth generic
  2021-03-31 12:55   ` Wolfgang Bumiller
@ 2021-03-31 14:07     ` Thomas Lamprecht
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Lamprecht @ 2021-03-31 14:07 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Wolfgang Bumiller,
	Stefan Reiter

On 31.03.21 14:55, Wolfgang Bumiller wrote:
>> -pub struct UserAuthData {
>> +pub enum AuthError {
>> +    Generic(Error),
>> +    NoData,
>> +}
>> +
>> +impl From<Error> for AuthError {
>> +    fn from(err: Error) -> Self {
>> +        AuthError::Generic(err)
>> +    }
>> +}
> ^ When you define an Error type you should immediately also derive Debug
> and implement `std::fmt::Display`. While you only "display" it once, I
> don't think the error message should be left up to the caller (see
> further down below).
> 
> In order to make this shorter and easier, I'd propose the inclusion of
> the `thiserror` helper crate, then we'd need only as little code as:
> 
>     #[derive(thiserror::Error, Debug)]
>     pub enum AuthError {
>         #[error(transparent)]
>         Generic(#[from] Error),
> 
>         #[error("no authentication credentials provided")]
>         NoData,
>     }
> 
> And the manual `From<anyhow::Error>` impl can simply be dropped ;-)

+1 for using thiserror in general from my side...

> 
> That is *unless* you explicitly want this to *not* be convertible to
> `anyhow::Error` automagically. Then you don't want this to impl
> `std::error::Error`, but then you should add a comment for this ;-)
> Which would be a valid way to go given that you're already wrapping
> `anyhow::Error` in there... OTOH all the token parsing errors could be
> combined into a single `AuthError::BadToken` instead of multiple
> "slightly different" `format_err!` messages where the minor difference
> in the error message doesn't really provide much value anyway (IMO).
> 
> Anyway, this can easily be left as is and updated up in a later series.
> 






* Re: [pbs-devel] [PATCH v3 proxmox-backup 14/20] file-restore: add qemu-helper setuid binary
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 14/20] file-restore: add qemu-helper setuid binary Stefan Reiter
@ 2021-03-31 14:15   ` Oguz Bektas
  0 siblings, 0 replies; 32+ messages in thread
From: Oguz Bektas @ 2021-03-31 14:15 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion

hi,

On Wed, Mar 31, 2021 at 12:21:56PM +0200, Stefan Reiter wrote:
> +    // Try starting QEMU in a loop to retry if we fail because of a bad 'cid' value
> +    let mut attempts = 0;
> +    loop {
> +        let mut qemu_cmd = std::process::Command::new("qemu-system-x86_64");
> +        qemu_cmd.args(base_args.iter());

is vulnerable to path confusion, since the setuid helper can be called
directly by an unprivileged user.


please add this:

==================================================

diff --git a/src/bin/proxmox-restore-qemu-helper.rs b/src/bin/proxmox-restore-qemu-helper.rs
index f56a6607..ad707d69 100644
--- a/src/bin/proxmox-restore-qemu-helper.rs
+++ b/src/bin/proxmox-restore-qemu-helper.rs
@@ -212,6 +212,7 @@ async fn start_vm(
 
     // Try starting QEMU in a loop to retry if we fail because of a bad 'cid' value
     let mut attempts = 0;
+
     loop {
         let mut qemu_cmd = std::process::Command::new("qemu-system-x86_64");
         qemu_cmd.args(base_args.iter());
@@ -349,6 +350,7 @@ async fn start(param: Value) -> Result<Value, Error> {
 }
 
 fn main() -> Result<(), Error> {
+    proxmox_backup::tools::setup_safe_path_env();
     let effective_uid = nix::unistd::Uid::effective();
     if !effective_uid.is_root() {
         bail!("this program needs to be run with setuid root");

==================================================

and then it should be alright from some quick tests...

maybe it also makes sense to add this to the backup client too.
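For context, a minimal sketch of the idea behind such a helper (the exact trusted PATH value used here is an assumption, not taken from the patch):

```rust
use std::env;

// Sketch: a setuid binary must not honor a caller-supplied PATH when
// spawning qemu-system-x86_64, or an unprivileged user can get their
// own binary executed as root. Resetting PATH to a fixed, trusted
// value before any Command::new() lookup closes that hole.
fn setup_safe_path_env() {
    env::set_var("PATH", "/sbin:/bin:/usr/sbin:/usr/bin");
}

fn main() {
    env::set_var("PATH", "/home/attacker/bin"); // attacker-controlled value
    setup_safe_path_env();
    // From here on, Command::new("qemu-system-x86_64") resolves the
    // binary only from the fixed, trusted directories.
    assert_eq!(env::var("PATH").unwrap(), "/sbin:/bin:/usr/sbin:/usr/bin");
}
```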





* [pbs-devel] applied: [PATCH v3 proxmox-backup 02/20] vsock_client: remove wrong comment
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 02/20] vsock_client: remove wrong comment Stefan Reiter
@ 2021-04-01  9:53   ` Thomas Lamprecht
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Lamprecht @ 2021-04-01  9:53 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 31.03.21 12:21, Stefan Reiter wrote:
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> 
> new in v3
> 
>  src/client/vsock_client.rs | 1 -
>  1 file changed, 1 deletion(-)
> 
>

applied, thanks!





* [pbs-devel] applied: [PATCH v3 proxmox-backup 03/20] vsock_client: remove some &mut restrictions and rustfmt
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 03/20] vsock_client: remove some &mut restrictions and rustfmt Stefan Reiter
@ 2021-04-01  9:54   ` Thomas Lamprecht
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Lamprecht @ 2021-04-01  9:54 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 31.03.21 12:21, Stefan Reiter wrote:
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> 
> new in v3
> 
>  src/client/vsock_client.rs | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
> 
>

applied, thanks!





* [pbs-devel] applied: [PATCH v3 proxmox-backup 04/20] vsock_client: support authorization header
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 04/20] vsock_client: support authorization header Stefan Reiter
@ 2021-04-01  9:54   ` Thomas Lamprecht
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Lamprecht @ 2021-04-01  9:54 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 31.03.21 12:21, Stefan Reiter wrote:
> Pass in an optional auth tag, which will be passed as an Authorization
> header on every subsequent call.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> 
> new in v3
> 
>  src/client/vsock_client.rs | 64 ++++++++++++++++++++------------------
>  1 file changed, 33 insertions(+), 31 deletions(-)
> 
>

applied, thanks!





* [pbs-devel] applied: [PATCH v3 proxmox-backup 05/20] proxmox_client_tools: move common key related functions to key_source.rs
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 05/20] proxmox_client_tools: move common key related functions to key_source.rs Stefan Reiter
@ 2021-04-01  9:54   ` Thomas Lamprecht
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Lamprecht @ 2021-04-01  9:54 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 31.03.21 12:21, Stefan Reiter wrote:
> Add a new module containing key-related functions and schemata from all
> over; moved code is left unchanged as far as possible.
> 
> Requires adapting some 'use' statements across proxmox-backup-client and
> putting the XDG helpers quite cozily into proxmox_client_tools/mod.rs
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> 
> v2:
> * don't move entire key.rs, just what is necessary
> 
>  src/bin/proxmox-backup-client.rs           | 453 +---------------
>  src/bin/proxmox_backup_client/benchmark.rs |   4 +-
>  src/bin/proxmox_backup_client/catalog.rs   |   3 +-
>  src/bin/proxmox_backup_client/key.rs       | 112 +---
>  src/bin/proxmox_backup_client/mod.rs       |  28 -
>  src/bin/proxmox_backup_client/mount.rs     |   4 +-
>  src/bin/proxmox_backup_client/snapshot.rs  |   4 +-
>  src/bin/proxmox_client_tools/key_source.rs | 573 +++++++++++++++++++++
>  src/bin/proxmox_client_tools/mod.rs        |  48 +-
>  9 files changed, 631 insertions(+), 598 deletions(-)
>  create mode 100644 src/bin/proxmox_client_tools/key_source.rs
> 
>

applied, thanks!





* [pbs-devel] applied: [PATCH v3 proxmox-backup 08/20] server/rest: extract auth to separate module
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 08/20] server/rest: extract auth to separate module Stefan Reiter
@ 2021-04-01  9:55   ` Thomas Lamprecht
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Lamprecht @ 2021-04-01  9:55 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 31.03.21 12:21, Stefan Reiter wrote:
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/server.rs      |   2 +
>  src/server/auth.rs | 102 +++++++++++++++++++++++++++++++++++++++++++++
>  src/server/rest.rs |  96 +-----------------------------------------
>  3 files changed, 105 insertions(+), 95 deletions(-)
>  create mode 100644 src/server/auth.rs
> 
>

applied, thanks!

FYI: I dropped the imports of server/rest which got unused in this patch
in a follow-up.





* [pbs-devel] [PATCH v4 proxmox-backup 15/20] file-restore: add basic VM/block device support
  2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 15/20] file-restore: add basic VM/block device support Stefan Reiter
@ 2021-04-01 15:43   ` Stefan Reiter
  0 siblings, 0 replies; 32+ messages in thread
From: Stefan Reiter @ 2021-04-01 15:43 UTC (permalink / raw)
  To: pbs-devel

Includes methods to start, stop and list QEMU file-restore VMs, as well
as CLI commands for the latter two (start is implicit).

The implementation is abstracted behind the concept of a
"BlockRestoreDriver", so other methods can be implemented later (e.g.
mapping directly to loop devices on the host, using hypervisors other
than QEMU, etc.).

Starting VMs is currently unused but will be needed for further changes.

The design for the QEMU driver uses a locked 'map' file
(/run/proxmox-backup/$UID/restore-vm-map.json) containing a JSON
encoding of currently running VMs. VMs are addressed by a 'name', which
is a systemd-unit encoded combination of repository and snapshot string,
thus uniquely identifying it.
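As an illustration of that naming scheme, a simplified escaper (the real code uses tools::systemd::escape_unit, whose exact rules may differ in detail):

```rust
// Illustrative only: a simplified systemd-style unit-name escape.
// '/' becomes '-', safe bytes pass through, everything else becomes
// a \xNN hex escape, so the combined "qemu_<repo>/<snapshot>" string
// maps to a unique, filesystem- and unit-safe map key.
fn escape_unit_simplified(s: &str) -> String {
    let mut out = String::new();
    for b in s.bytes() {
        match b {
            b'/' => out.push('-'),
            b'a'..=b'z' | b'A'..=b'Z' | b'0'..=b'9' | b'_' | b'.' | b':' => {
                out.push(b as char)
            }
            _ => out.push_str(&format!("\\x{:02x}", b)),
        }
    }
    out
}

fn main() {
    println!("{}", escape_unit_simplified("qemu_store/vm-100"));
}
```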

Note that currently you need to run proxmox-file-restore as root to use
this method of restoring.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---

!!! NOTE:
This replaces BOTH patches 14/20 and 15/20 of v3!


v4:
* change state directory to /run/proxmox-backup/$UID since XDG_RUNTIME_DIR is
  not available for root if not logged in on a tty
* create statefile with correct 0600 permissions
* remove setuid binary, start VM directly again - reuses most of the code, but
  without the separate binary

The last change was made after realizing that we cannot call the binary as
unprivileged www-data anyway, since the PBS password file is in /etc/pve/priv...
So let's run this as root again, avoid the setuid binary, and instead focus on
optimizing away the pveproxy<->pvedaemon communication differently - e.g. by
passing file descriptors or similar.


 src/bin/proxmox-file-restore.rs               |  12 +-
 src/bin/proxmox_client_tools/mod.rs           |  13 +
 src/bin/proxmox_file_restore/block_driver.rs  | 163 +++++++++++
 .../proxmox_file_restore/block_driver_qemu.rs | 277 ++++++++++++++++++
 src/bin/proxmox_file_restore/mod.rs           |   6 +
 src/bin/proxmox_file_restore/qemu_helper.rs   | 274 +++++++++++++++++
 src/buildcfg.rs                               |  17 ++
 7 files changed, 761 insertions(+), 1 deletion(-)
 create mode 100644 src/bin/proxmox_file_restore/block_driver.rs
 create mode 100644 src/bin/proxmox_file_restore/block_driver_qemu.rs
 create mode 100644 src/bin/proxmox_file_restore/mod.rs
 create mode 100644 src/bin/proxmox_file_restore/qemu_helper.rs

diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index f8affc03..de2cb971 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -35,6 +35,9 @@ use proxmox_client_tools::{
     REPO_URL_SCHEMA,
 };
 
+mod proxmox_file_restore;
+use proxmox_file_restore::*;
+
 enum ExtractPath {
     ListArchives,
     Pxar(String, Vec<u8>),
@@ -369,9 +372,16 @@ fn main() {
         .completion_cb("snapshot", complete_group_or_snapshot)
         .completion_cb("target", tools::complete_file_name);
 
+    let status_cmd_def = CliCommand::new(&API_METHOD_STATUS);
+    let stop_cmd_def = CliCommand::new(&API_METHOD_STOP)
+        .arg_param(&["name"])
+        .completion_cb("name", complete_block_driver_ids);
+
     let cmd_def = CliCommandMap::new()
         .insert("list", list_cmd_def)
-        .insert("extract", restore_cmd_def);
+        .insert("extract", restore_cmd_def)
+        .insert("status", status_cmd_def)
+        .insert("stop", stop_cmd_def);
 
     let rpcenv = CliEnvironment::new();
     run_cli_command(
diff --git a/src/bin/proxmox_client_tools/mod.rs b/src/bin/proxmox_client_tools/mod.rs
index 73744ba2..1cdcf0df 100644
--- a/src/bin/proxmox_client_tools/mod.rs
+++ b/src/bin/proxmox_client_tools/mod.rs
@@ -13,6 +13,7 @@ use proxmox::{
 use proxmox_backup::api2::access::user::UserWithTokens;
 use proxmox_backup::api2::types::*;
 use proxmox_backup::backup::BackupDir;
+use proxmox_backup::buildcfg;
 use proxmox_backup::client::*;
 use proxmox_backup::tools;
 
@@ -372,3 +373,15 @@ pub fn place_xdg_file(
         .and_then(|base| base.place_config_file(file_name).map_err(Error::from))
         .with_context(|| format!("failed to place {} in xdg home", description))
 }
+
+/// Returns a runtime dir owned by the current user.
+/// Note that XDG_RUNTIME_DIR is not always available, especially for non-login users like
+/// "www-data", so we use a custom one in /run/proxmox-backup/<uid> instead.
+pub fn get_user_run_dir() -> Result<std::path::PathBuf, Error> {
+    let uid = nix::unistd::Uid::current();
+    let mut path: std::path::PathBuf = buildcfg::PROXMOX_BACKUP_RUN_DIR.into();
+    path.push(uid.to_string());
+    tools::create_run_dir()?;
+    std::fs::create_dir_all(&path)?;
+    Ok(path)
+}
diff --git a/src/bin/proxmox_file_restore/block_driver.rs b/src/bin/proxmox_file_restore/block_driver.rs
new file mode 100644
index 00000000..9c6fc5ac
--- /dev/null
+++ b/src/bin/proxmox_file_restore/block_driver.rs
@@ -0,0 +1,163 @@
+//! Abstraction layer over different methods of accessing a block backup
+use anyhow::{bail, Error};
+use serde::{Deserialize, Serialize};
+use serde_json::{json, Value};
+
+use std::collections::HashMap;
+use std::future::Future;
+use std::hash::BuildHasher;
+use std::pin::Pin;
+
+use proxmox_backup::backup::{BackupDir, BackupManifest};
+use proxmox_backup::client::BackupRepository;
+
+use proxmox::api::{api, cli::*};
+
+use super::block_driver_qemu::QemuBlockDriver;
+
+/// Contains details about a snapshot that is to be accessed by block file restore
+pub struct SnapRestoreDetails {
+    pub repo: BackupRepository,
+    pub snapshot: BackupDir,
+    pub manifest: BackupManifest,
+}
+
+/// Return value of a BlockRestoreDriver.status() call, 'id' must be valid for .stop(id)
+pub struct DriverStatus {
+    pub id: String,
+    pub data: Value,
+}
+
+pub type Async<R> = Pin<Box<dyn Future<Output = R> + Send>>;
+
+/// An abstract implementation for retrieving data out of a block file backup
+pub trait BlockRestoreDriver {
+    /// Return status of all running/mapped images, result value is (id, extra data), where id must
+    /// match with the ones returned from list()
+    fn status(&self) -> Async<Result<Vec<DriverStatus>, Error>>;
+    /// Stop/Close a running restore method
+    fn stop(&self, id: String) -> Async<Result<(), Error>>;
+    /// Returned ids must be prefixed with driver type so that they cannot collide between drivers,
+    /// the returned values must be passable to stop()
+    fn list(&self) -> Vec<String>;
+}
+
+#[api()]
+#[derive(Debug, Serialize, Deserialize, PartialEq, Clone, Copy)]
+pub enum BlockDriverType {
+    /// Uses a small QEMU/KVM virtual machine to map images securely. Requires PVE-patched QEMU.
+    Qemu,
+}
+
+impl BlockDriverType {
+    fn resolve(&self) -> impl BlockRestoreDriver {
+        match self {
+            BlockDriverType::Qemu => QemuBlockDriver {},
+        }
+    }
+}
+
+const DEFAULT_DRIVER: BlockDriverType = BlockDriverType::Qemu;
+const ALL_DRIVERS: &[BlockDriverType] = &[BlockDriverType::Qemu];
+
+#[api(
+   input: {
+       properties: {
+            "driver": {
+                type: BlockDriverType,
+                optional: true,
+            },
+            "output-format": {
+                schema: OUTPUT_FORMAT,
+                optional: true,
+            },
+        },
+   },
+)]
+/// Retrieve status information about currently running/mapped restore images
+pub async fn status(driver: Option<BlockDriverType>, param: Value) -> Result<(), Error> {
+    let output_format = get_output_format(&param);
+    let text = output_format == "text";
+
+    let mut ret = json!({});
+
+    for dt in ALL_DRIVERS {
+        if driver.is_some() && &driver.unwrap() != dt {
+            continue;
+        }
+
+        let drv_name = format!("{:?}", dt);
+        let drv = dt.resolve();
+        match drv.status().await {
+            Ok(data) if data.is_empty() => {
+                if text {
+                    println!("{}: no mappings", drv_name);
+                } else {
+                    ret[drv_name] = json!({});
+                }
+            }
+            Ok(data) => {
+                if text {
+                    println!("{}:", &drv_name);
+                }
+
+                ret[&drv_name]["ids"] = json!({});
+                for status in data {
+                    if text {
+                        println!("{} \t({})", status.id, status.data);
+                    } else {
+                        ret[&drv_name]["ids"][status.id] = status.data;
+                    }
+                }
+            }
+            Err(err) => {
+                if text {
+                    eprintln!("error getting status from driver '{}' - {}", drv_name, err);
+                } else {
+                    ret[drv_name] = json!({ "error": format!("{}", err) });
+                }
+            }
+        }
+    }
+
+    if !text {
+        format_and_print_result(&ret, &output_format);
+    }
+
+    Ok(())
+}
+
+#[api(
+   input: {
+       properties: {
+            "name": {
+                type: String,
+                description: "The name of the VM to stop.",
+            },
+        },
+   },
+)]
+/// Immediately stop/unmap a given image. Not typically necessary, as VMs will stop themselves
+/// after a timer anyway.
+pub async fn stop(name: String) -> Result<(), Error> {
+    for drv in ALL_DRIVERS.iter().map(BlockDriverType::resolve) {
+        if drv.list().contains(&name) {
+            return drv.stop(name).await;
+        }
+    }
+
+    bail!("no mapping with name '{}' found", name);
+}
+
+/// Autocompletion handler for block mappings
+pub fn complete_block_driver_ids<S: BuildHasher>(
+    _arg: &str,
+    _param: &HashMap<String, String, S>,
+) -> Vec<String> {
+    ALL_DRIVERS
+        .iter()
+        .map(BlockDriverType::resolve)
+        .map(|d| d.list())
+        .flatten()
+        .collect()
+}
diff --git a/src/bin/proxmox_file_restore/block_driver_qemu.rs b/src/bin/proxmox_file_restore/block_driver_qemu.rs
new file mode 100644
index 00000000..f66d7738
--- /dev/null
+++ b/src/bin/proxmox_file_restore/block_driver_qemu.rs
@@ -0,0 +1,277 @@
+//! Block file access via a small QEMU restore VM using the PBS block driver in QEMU
+use anyhow::{bail, Error};
+use futures::FutureExt;
+use serde::{Deserialize, Serialize};
+use serde_json::json;
+
+use std::collections::HashMap;
+use std::fs::{File, OpenOptions};
+use std::io::{prelude::*, SeekFrom};
+
+use proxmox::tools::fs::lock_file;
+use proxmox_backup::backup::BackupDir;
+use proxmox_backup::client::*;
+use proxmox_backup::tools;
+
+use super::block_driver::*;
+use crate::proxmox_client_tools::get_user_run_dir;
+
+const RESTORE_VM_MAP: &str = "restore-vm-map.json";
+
+pub struct QemuBlockDriver {}
+
+#[derive(Clone, Hash, Serialize, Deserialize)]
+struct VMState {
+    pid: i32,
+    cid: i32,
+    ticket: String,
+}
+
+struct VMStateMap {
+    map: HashMap<String, VMState>,
+    file: File,
+}
+
+impl VMStateMap {
+    fn open_file_raw(write: bool) -> Result<File, Error> {
+        use std::os::unix::fs::OpenOptionsExt;
+        let mut path = get_user_run_dir()?;
+        path.push(RESTORE_VM_MAP);
+        OpenOptions::new()
+            .read(true)
+            .write(write)
+            .create(write)
+            .mode(0o600)
+            .open(path)
+            .map_err(Error::from)
+    }
+
+    /// Acquire a lock on the state map and retrieve a deserialized version
+    fn load() -> Result<Self, Error> {
+        let mut file = Self::open_file_raw(true)?;
+        lock_file(&mut file, true, Some(std::time::Duration::from_secs(5)))?;
+        let map = serde_json::from_reader(&file).unwrap_or_default();
+        Ok(Self { map, file })
+    }
+
+    /// Load a read-only copy of the current VM map. Only use for informational purposes, like
+    /// shell auto-completion, for anything requiring consistency use load() !
+    fn load_read_only() -> Result<HashMap<String, VMState>, Error> {
+        let file = Self::open_file_raw(false)?;
+        Ok(serde_json::from_reader(&file).unwrap_or_default())
+    }
+
+    /// Write back a potentially modified state map, consuming the held lock
+    fn write(mut self) -> Result<(), Error> {
+        self.file.seek(SeekFrom::Start(0))?;
+        self.file.set_len(0)?;
+        serde_json::to_writer(self.file, &self.map)?;
+
+        // drop ourselves including file lock
+        Ok(())
+    }
+
+    /// Return the map, but drop the lock immediately
+    fn read_only(self) -> HashMap<String, VMState> {
+        self.map
+    }
+}
+
+fn make_name(repo: &BackupRepository, snap: &BackupDir) -> String {
+    let full = format!("qemu_{}/{}", repo, snap);
+    tools::systemd::escape_unit(&full, false)
+}
+
+/// remove non-responsive VMs from given map, returns 'true' if map was modified
+async fn cleanup_map(map: &mut HashMap<String, VMState>) -> bool {
+    let mut to_remove = Vec::new();
+    for (name, state) in map.iter() {
+        let client = VsockClient::new(state.cid, DEFAULT_VSOCK_PORT, Some(state.ticket.clone()));
+        let res = client
+            .get("api2/json/status", Some(json!({"keep-timeout": true})))
+            .await;
+        if res.is_err() {
+            // VM is not reachable, remove from map and inform user
+            to_remove.push(name.clone());
+            println!(
+                "VM '{}' (pid: {}, cid: {}) was not reachable, removing from map",
+                name, state.pid, state.cid
+            );
+        }
+    }
+
+    for tr in &to_remove {
+        map.remove(tr);
+    }
+
+    !to_remove.is_empty()
+}
+
+fn new_ticket() -> String {
+    proxmox::tools::Uuid::generate().to_string()
+}
+
+async fn ensure_running(details: &SnapRestoreDetails) -> Result<VsockClient, Error> {
+    let name = make_name(&details.repo, &details.snapshot);
+    let mut state = VMStateMap::load()?;
+
+    cleanup_map(&mut state.map).await;
+
+    let new_cid;
+    let vms = match state.map.get(&name) {
+        Some(vm) => {
+            let client = VsockClient::new(vm.cid, DEFAULT_VSOCK_PORT, Some(vm.ticket.clone()));
+            let res = client.get("api2/json/status", None).await;
+            match res {
+                Ok(_) => {
+                    // VM is running and we just reset its timeout, nothing to do
+                    return Ok(client);
+                }
+                Err(err) => {
+                    println!("stale VM detected, restarting ({})", err);
+                    // VM is dead, restart
+                    let vms = start_vm(vm.cid, details).await?;
+                    new_cid = vms.cid;
+                    state.map.insert(name, vms.clone());
+                    vms
+                }
+            }
+        }
+        None => {
+            let mut cid = state
+                .map
+                .iter()
+                .map(|v| v.1.cid)
+                .max()
+                .unwrap_or(0)
+                .wrapping_add(1);
+
+            // offset cid by user id, to avoid unnecessary retries
+            let running_uid = nix::unistd::Uid::current();
+            cid = cid.wrapping_add(running_uid.as_raw() as i32);
+
+            // some low CIDs have special meaning, start at 10 to avoid them
+            cid = cid.max(10);
+
+            let vms = start_vm(cid, details).await?;
+            new_cid = vms.cid;
+            state.map.insert(name, vms.clone());
+            vms
+        }
+    };
+
+    state.write()?;
+    Ok(VsockClient::new(
+        new_cid,
+        DEFAULT_VSOCK_PORT,
+        Some(vms.ticket.clone()),
+    ))
+}
+
+async fn start_vm(cid_request: i32, details: &SnapRestoreDetails) -> Result<VMState, Error> {
+    let ticket = new_ticket();
+    let files = details
+        .manifest
+        .files()
+        .iter()
+        .map(|file| file.filename.clone())
+        .filter(|name| name.ends_with(".img.fidx"));
+    let (pid, cid) =
+        super::qemu_helper::start_vm((cid_request.abs() & 0xFFFF) as u16, details, files, &ticket)
+            .await?;
+    Ok(VMState { pid, cid, ticket })
+}
+
+impl BlockRestoreDriver for QemuBlockDriver {
+    fn status(&self) -> Async<Result<Vec<DriverStatus>, Error>> {
+        async move {
+            let mut state_map = VMStateMap::load()?;
+            let modified = cleanup_map(&mut state_map.map).await;
+            let map = if modified {
+                let m = state_map.map.clone();
+                state_map.write()?;
+                m
+            } else {
+                state_map.read_only()
+            };
+            let mut result = Vec::new();
+
+            for (n, s) in map.iter() {
+                let client = VsockClient::new(s.cid, DEFAULT_VSOCK_PORT, Some(s.ticket.clone()));
+                let resp = client
+                    .get("api2/json/status", Some(json!({"keep-timeout": true})))
+                    .await;
+                let name = tools::systemd::unescape_unit(n)
+                    .unwrap_or_else(|_| "<invalid name>".to_owned());
+                let mut extra = json!({"pid": s.pid, "cid": s.cid});
+
+                match resp {
+                    Ok(status) => match status["data"].as_object() {
+                        Some(map) => {
+                            for (k, v) in map.iter() {
+                                extra[k] = v.clone();
+                            }
+                        }
+                        None => {
+                            let err = format!(
+                                "invalid JSON received from /status call: {}",
+                                status.to_string()
+                            );
+                            extra["error"] = json!(err);
+                        }
+                    },
+                    Err(err) => {
+                        let err = format!("error during /status API call: {}", err);
+                        extra["error"] = json!(err);
+                    }
+                }
+
+                result.push(DriverStatus {
+                    id: name,
+                    data: extra,
+                });
+            }
+
+            Ok(result)
+        }
+        .boxed()
+    }
+
+    fn stop(&self, id: String) -> Async<Result<(), Error>> {
+        async move {
+            let name = tools::systemd::escape_unit(&id, false);
+            let mut map = VMStateMap::load()?;
+            let map_mod = cleanup_map(&mut map.map).await;
+            match map.map.get(&name) {
+                Some(state) => {
+                    let client =
+                        VsockClient::new(state.cid, DEFAULT_VSOCK_PORT, Some(state.ticket.clone()));
+                    // ignore errors, this either fails because:
+                    // * the VM is unreachable/dead, in which case we don't want it in the map
+                    // * the call was successful and the connection reset when the VM stopped
+                    let _ = client.get("api2/json/stop", None).await;
+                    map.map.remove(&name);
+                    map.write()?;
+                }
+                None => {
+                    if map_mod {
+                        map.write()?;
+                    }
+                    bail!("VM with name '{}' not found", name);
+                }
+            }
+            Ok(())
+        }
+        .boxed()
+    }
+
+    fn list(&self) -> Vec<String> {
+        match VMStateMap::load_read_only() {
+            Ok(state) => state
+                .iter()
+                .filter_map(|(name, _)| tools::systemd::unescape_unit(&name).ok())
+                .collect(),
+            Err(_) => Vec::new(),
+        }
+    }
+}
diff --git a/src/bin/proxmox_file_restore/mod.rs b/src/bin/proxmox_file_restore/mod.rs
new file mode 100644
index 00000000..aa65b664
--- /dev/null
+++ b/src/bin/proxmox_file_restore/mod.rs
@@ -0,0 +1,6 @@
+//! Block device drivers and tools for single file restore
+pub mod block_driver;
+pub use block_driver::*;
+
+mod qemu_helper;
+mod block_driver_qemu;
diff --git a/src/bin/proxmox_file_restore/qemu_helper.rs b/src/bin/proxmox_file_restore/qemu_helper.rs
new file mode 100644
index 00000000..22563263
--- /dev/null
+++ b/src/bin/proxmox_file_restore/qemu_helper.rs
@@ -0,0 +1,274 @@
+//! Helper to start a QEMU VM for single file restore.
+use std::fs::{File, OpenOptions};
+use std::io::prelude::*;
+use std::os::unix::io::{AsRawFd, FromRawFd};
+use std::path::PathBuf;
+use std::time::Duration;
+
+use anyhow::{bail, format_err, Error};
+use tokio::time;
+
+use nix::sys::signal::{kill, Signal};
+use nix::unistd::Pid;
+
+use proxmox::tools::{
+    fd::Fd,
+    fs::{create_path, file_read_string, make_tmp_file, CreateOptions},
+};
+
+use proxmox_backup::backup::backup_user;
+use proxmox_backup::client::{VsockClient, DEFAULT_VSOCK_PORT};
+use proxmox_backup::{buildcfg, tools};
+
+use super::SnapRestoreDetails;
+
+const PBS_VM_NAME: &str = "pbs-restore-vm";
+const MAX_CID_TRIES: u64 = 32;
+
+fn create_restore_log_dir() -> Result<String, Error> {
+    let logpath = format!("{}/file-restore", buildcfg::PROXMOX_BACKUP_LOG_DIR);
+
+    proxmox::try_block!({
+        let backup_user = backup_user()?;
+        let opts = CreateOptions::new()
+            .owner(backup_user.uid)
+            .group(backup_user.gid);
+
+        let opts_root = CreateOptions::new()
+            .owner(nix::unistd::ROOT)
+            .group(nix::unistd::Gid::from_raw(0));
+
+        create_path(buildcfg::PROXMOX_BACKUP_LOG_DIR, None, Some(opts))?;
+        create_path(&logpath, None, Some(opts_root))?;
+        Ok(())
+    })
+    .map_err(|err: Error| format_err!("unable to create file-restore log dir - {}", err))?;
+
+    Ok(logpath)
+}
+
+fn validate_img_existance() -> Result<(), Error> {
+    let kernel = PathBuf::from(buildcfg::PROXMOX_BACKUP_KERNEL_FN);
+    let initramfs = PathBuf::from(buildcfg::PROXMOX_BACKUP_INITRAMFS_FN);
+    if !kernel.exists() || !initramfs.exists() {
+        bail!("cannot run file-restore VM: package 'proxmox-file-restore' is not (correctly) installed");
+    }
+    Ok(())
+}
+
+fn try_kill_vm(pid: i32) -> Result<(), Error> {
+    let pid = Pid::from_raw(pid);
+    if let Ok(()) = kill(pid, None) {
+        // process is running (and we could kill it), check if it is actually ours
+        // (if it errors assume we raced with the process's death and ignore it)
+        if let Ok(cmdline) = file_read_string(format!("/proc/{}/cmdline", pid)) {
+            if cmdline.split('\0').any(|a| a == PBS_VM_NAME) {
+                // yes, it's ours, kill it brutally with SIGKILL, no reason to take
+                // any chances - in this state it's most likely broken anyway
+                if let Err(err) = kill(pid, Signal::SIGKILL) {
+                    bail!(
+                        "reaping broken VM (pid {}) with SIGKILL failed: {}",
+                        pid,
+                        err
+                    );
+                }
+            }
+        }
+    }
+
+    Ok(())
+}
+
+async fn create_temp_initramfs(ticket: &str) -> Result<(Fd, String), Error> {
+    use std::ffi::CString;
+    use tokio::fs::File;
+
+    let (tmp_fd, tmp_path) =
+        make_tmp_file("/tmp/file-restore-qemu.initramfs.tmp", CreateOptions::new())?;
+    nix::unistd::unlink(&tmp_path)?;
+    tools::fd_change_cloexec(tmp_fd.0, false)?;
+
+    let mut f = File::from_std(unsafe { std::fs::File::from_raw_fd(tmp_fd.0) });
+    let mut base = File::open(buildcfg::PROXMOX_BACKUP_INITRAMFS_FN).await?;
+
+    tokio::io::copy(&mut base, &mut f).await?;
+
+    let name = CString::new("ticket").unwrap();
+    tools::cpio::append_file(
+        &mut f,
+        ticket.as_bytes(),
+        &name,
+        0,
+        (libc::S_IFREG | 0o400) as u16,
+        0,
+        0,
+        0,
+        ticket.len() as u32,
+    )
+    .await?;
+    tools::cpio::append_trailer(&mut f).await?;
+
+    // forget the tokio file, we close the file descriptor via the returned Fd
+    std::mem::forget(f);
+
+    let path = format!("/dev/fd/{}", &tmp_fd.0);
+    Ok((tmp_fd, path))
+}
+
+pub async fn start_vm(
+    // u16 so we can do wrapping_add without going too high
+    mut cid: u16,
+    details: &SnapRestoreDetails,
+    files: impl Iterator<Item = String>,
+    ticket: &str,
+) -> Result<(i32, i32), Error> {
+    validate_img_existance()?;
+
+    if std::env::var("PBS_PASSWORD").is_err() {
+        bail!("environment variable PBS_PASSWORD has to be set for QEMU VM restore");
+    }
+    if std::env::var("PBS_FINGERPRINT").is_err() {
+        bail!("environment variable PBS_FINGERPRINT has to be set for QEMU VM restore");
+    }
+
+    let pid;
+    let (pid_fd, pid_path) = make_tmp_file("/tmp/file-restore-qemu.pid.tmp", CreateOptions::new())?;
+    nix::unistd::unlink(&pid_path)?;
+    tools::fd_change_cloexec(pid_fd.0, false)?;
+
+    let (_ramfs_fd, ramfs_path) = create_temp_initramfs(ticket).await?;
+
+    let logpath = create_restore_log_dir()?;
+    let logfile = &format!("{}/qemu.log", logpath);
+    let mut logrotate = tools::logrotate::LogRotate::new(logfile, false)
+        .ok_or_else(|| format_err!("could not get QEMU log file names"))?;
+
+    if let Err(err) = logrotate.do_rotate(CreateOptions::default(), Some(16)) {
+        eprintln!("warning: logrotate for QEMU log file failed - {}", err);
+    }
+
+    let mut logfd = OpenOptions::new()
+        .append(true)
+        .create_new(true)
+        .open(logfile)?;
+    tools::fd_change_cloexec(logfd.as_raw_fd(), false)?;
+
+    // preface log file with start timestamp so one can see how long QEMU took to start
+    writeln!(logfd, "[{}] PBS file restore VM log", {
+        let now = proxmox::tools::time::epoch_i64();
+        proxmox::tools::time::epoch_to_rfc3339(now)?
+    },)?;
+
+    let base_args = [
+        "-chardev",
+        &format!(
+            "file,id=log,path=/dev/null,logfile=/dev/fd/{},logappend=on",
+            logfd.as_raw_fd()
+        ),
+        "-serial",
+        "chardev:log",
+        "-vnc",
+        "none",
+        "-enable-kvm",
+        "-m",
+        "512",
+        "-kernel",
+        buildcfg::PROXMOX_BACKUP_KERNEL_FN,
+        "-initrd",
+        &ramfs_path,
+        "-append",
+        "quiet",
+        "-daemonize",
+        "-pidfile",
+        &format!("/dev/fd/{}", pid_fd.as_raw_fd()),
+        "-name",
+        PBS_VM_NAME,
+    ];
+
+    // Generate drive arguments for all fidx files in backup snapshot
+    let mut drives = Vec::new();
+    let mut id = 0;
+    for file in files {
+        if !file.ends_with(".img.fidx") {
+            continue;
+        }
+        drives.push("-drive".to_owned());
+        drives.push(format!(
+            "file=pbs:repository={},,snapshot={},,archive={},read-only=on,if=none,id=drive{}",
+            details.repo, details.snapshot, file, id
+        ));
+        drives.push("-device".to_owned());
+        // drive serial is used by VM to map .fidx files to /dev paths
+        drives.push(format!("virtio-blk-pci,drive=drive{},serial={}", id, file));
+        id += 1;
+    }
+
+    // Try starting QEMU in a loop to retry if we fail because of a bad 'cid' value
+    let mut attempts = 0;
+    loop {
+        let mut qemu_cmd = std::process::Command::new("qemu-system-x86_64");
+        qemu_cmd.args(base_args.iter());
+        qemu_cmd.args(&drives);
+        qemu_cmd.arg("-device");
+        qemu_cmd.arg(format!(
+            "vhost-vsock-pci,guest-cid={},disable-legacy=on",
+            cid
+        ));
+
+        qemu_cmd.stdout(std::process::Stdio::null());
+        qemu_cmd.stderr(std::process::Stdio::piped());
+
+        let res = tokio::task::block_in_place(|| qemu_cmd.spawn()?.wait_with_output())?;
+
+        if res.status.success() {
+            // at this point QEMU is already daemonized and running, so if anything fails we
+            // technically leave behind a zombie-VM... this shouldn't matter, as it will stop
+            // itself soon enough (timer), and the following operations are unlikely to fail
+            let mut pid_file = unsafe { File::from_raw_fd(pid_fd.as_raw_fd()) };
+            std::mem::forget(pid_fd); // ownership of the FD was transferred to pid_file
+            let mut pidstr = String::new();
+            pid_file.read_to_string(&mut pidstr)?;
+            pid = pidstr.trim_end().parse().map_err(|err| {
+                format_err!("cannot parse PID returned by QEMU ('{}'): {}", &pidstr, err)
+            })?;
+            break;
+        } else {
+            let out = String::from_utf8_lossy(&res.stderr);
+            if out.contains("unable to set guest cid: Address already in use") {
+                attempts += 1;
+                if attempts >= MAX_CID_TRIES {
+                    bail!("CID '{}' in use, but max attempts reached, aborting", cid);
+                }
+                // CID in use, try next higher one
+                eprintln!("CID '{}' in use by other VM, attempting next one", cid);
+                // skip special-meaning low values
+                cid = cid.wrapping_add(1).max(10);
+            } else {
+                eprint!("{}", out);
+                bail!("Starting VM failed. See output above for more information.");
+            }
+        }
+    }
+
+    // QEMU has started successfully, now wait for virtio socket to become ready
+    let pid_t = Pid::from_raw(pid);
+    for _ in 0..60 {
+        let client = VsockClient::new(cid as i32, DEFAULT_VSOCK_PORT, Some(ticket.to_owned()));
+        if let Ok(Ok(_)) =
+            time::timeout(Duration::from_secs(2), client.get("api2/json/status", None)).await
+        {
+            return Ok((pid, cid as i32));
+        }
+        if kill(pid_t, None).is_err() {
+            // QEMU exited
+            bail!("VM exited before connection could be established");
+        }
+        time::sleep(Duration::from_millis(200)).await;
+    }
+
+    // start failed
+    if let Err(err) = try_kill_vm(pid) {
+        eprintln!("killing failed VM failed: {}", err);
+    }
+    bail!("starting VM timed out");
+}
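A note on the CID-retry loop in start_vm() above: when QEMU fails with "unable to set guest cid: Address already in use", the next candidate is derived via `cid.wrapping_add(1).max(10)`, which both survives u16 wraparound and skips the special-meaning low vsock CIDs (0-2 are reserved for hypervisor/local/host). A standalone sketch of just that selection step, using a hypothetical helper name:

```rust
/// Hypothetical helper mirroring the patch's CID-retry step: increment the
/// candidate guest CID, wrapping on u16 overflow, and clamp to 10 so the
/// reserved/special-meaning low vsock CIDs are never attempted.
fn next_cid_candidate(cid: u16) -> u16 {
    cid.wrapping_add(1).max(10)
}

fn main() {
    // normal increment
    assert_eq!(next_cid_candidate(42), 43);
    // u16 wraparound falls back to the clamped minimum instead of 0
    assert_eq!(next_cid_candidate(u16::MAX), 10);
    // low/reserved CIDs are skipped entirely
    assert_eq!(next_cid_candidate(3), 10);
    println!("ok");
}
```

Combined with MAX_CID_TRIES this bounds the retry loop at 32 attempts before giving up.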
diff --git a/src/buildcfg.rs b/src/buildcfg.rs
index 4f333288..b0f61efb 100644
--- a/src/buildcfg.rs
+++ b/src/buildcfg.rs
@@ -10,6 +10,14 @@ macro_rules! PROXMOX_BACKUP_RUN_DIR_M { () => ("/run/proxmox-backup") }
 #[macro_export]
 macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
 
+#[macro_export]
+macro_rules! PROXMOX_BACKUP_CACHE_DIR_M { () => ("/var/cache/proxmox-backup") }
+
+#[macro_export]
+macro_rules! PROXMOX_BACKUP_FILE_RESTORE_BIN_DIR_M {
+    () => ("/usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore")
+}
+
 /// namespaced directory for in-memory (tmpfs) run state
 pub const PROXMOX_BACKUP_RUN_DIR: &str = PROXMOX_BACKUP_RUN_DIR_M!();
 
@@ -30,6 +38,15 @@ pub const PROXMOX_BACKUP_PROXY_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(
 /// the PID filename for the privileged api daemon
 pub const PROXMOX_BACKUP_API_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/api.pid");
 
+/// filename of the cached initramfs to use for booting single file restore VMs, this file is
+/// automatically created by APT hooks
+pub const PROXMOX_BACKUP_INITRAMFS_FN: &str =
+    concat!(PROXMOX_BACKUP_CACHE_DIR_M!(), "/file-restore-initramfs.img");
+
+/// filename of the kernel to use for booting single file restore VMs
+pub const PROXMOX_BACKUP_KERNEL_FN: &str =
+    concat!(PROXMOX_BACKUP_FILE_RESTORE_BIN_DIR_M!(), "/bzImage");
+
 /// Prepend configuration directory to a file name
 ///
 /// This is a simply way to get the full path for configuration files.
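The buildcfg hunk above relies on declaring directory components as zero-argument macros rather than plain `const` strings, because `concat!` only joins literals (and macro invocations that expand to literals) at compile time. A minimal self-contained sketch of that pattern, with names shortened for illustration:

```rust
// Sketch of the buildcfg pattern: a path component declared as a
// zero-argument macro so `concat!` can join it with a filename into a
// full compile-time string constant (a plain `const` would not work here,
// since `concat!` cannot read the value of a const).
macro_rules! cache_dir_m {
    () => ("/var/cache/proxmox-backup")
}

const INITRAMFS_FN: &str = concat!(cache_dir_m!(), "/file-restore-initramfs.img");

fn main() {
    assert_eq!(
        INITRAMFS_FN,
        "/var/cache/proxmox-backup/file-restore-initramfs.img"
    );
    println!("ok");
}
```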
-- 
2.20.1

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [pbs-devel] applied: [PATCH v3 00/20] Single file restore for VM images
  2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
                   ` (19 preceding siblings ...)
  2021-03-31 10:22 ` [pbs-devel] [PATCH v3 proxmox-backup 20/20] file-restore: add 'extract' command for VM file restore Stefan Reiter
@ 2021-04-08 14:44 ` Thomas Lamprecht
  20 siblings, 0 replies; 32+ messages in thread
From: Thomas Lamprecht @ 2021-04-08 14:44 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Stefan Reiter

On 31.03.21 12:21, Stefan Reiter wrote:
> Implements CLI-based single file and directory restore for both pxar.didx
> archives (containers, hosts) and img.fidx (VMs, raw block devices). The design
> for VM restore uses a small virtual machine that the host communicates with via
> virtio-vsock.
> 
> This is encapsulated into a new package called "proxmox-file-restore", providing a
> binary of the same name. A second package is provided in a new git repository[0]
> called "proxmox-backup-restore-image", providing a minimal kernel image and a
> base initramfs (without the daemon, which is included in proxmox-file-restore).
> 
> Dependency bump in proxmox-backup for pxar is required.
> 
> Tested with ext4 and NTFS VMs, but theoretically includes support for many more
> filesystems.
> 
> Known issues/Missing features:
> * GUI/PVE support
> * PBS_PASSWORD/PBS_FINGERPRINT currently have to be set manually for VM restore
> * ZFS/LVM/md/... support
> * shell auto-complete for "proxmox-file-restore" doesn't seem to work (and I
>   don't know why...)
> 
> [0] now already public at:
>     https://git.proxmox.com/?p=proxmox-backup-restore-image.git;a=summary
> 


applied the remaining patches now, thanks!

PVE support with GUI integration would be the most important thing to implement
next.



Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-31 10:21 [pbs-devel] [PATCH v3 00/20] Single file restore for VM images Stefan Reiter
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 pxar 01/20] decoder/aio: add contents() and content_size() calls Stefan Reiter
2021-03-31 11:54   ` [pbs-devel] applied: " Wolfgang Bumiller
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 02/20] vsock_client: remove wrong comment Stefan Reiter
2021-04-01  9:53   ` [pbs-devel] applied: " Thomas Lamprecht
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 03/20] vsock_client: remove some &mut restrictions and rustfmt Stefan Reiter
2021-04-01  9:54   ` [pbs-devel] applied: " Thomas Lamprecht
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 04/20] vsock_client: support authorization header Stefan Reiter
2021-04-01  9:54   ` [pbs-devel] applied: " Thomas Lamprecht
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 05/20] proxmox_client_tools: move common key related functions to key_source.rs Stefan Reiter
2021-04-01  9:54   ` [pbs-devel] applied: " Thomas Lamprecht
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 06/20] file-restore: add binary and basic commands Stefan Reiter
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 07/20] file-restore: allow specifying output-format Stefan Reiter
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 08/20] server/rest: extract auth to seperate module Stefan Reiter
2021-04-01  9:55   ` [pbs-devel] applied: " Thomas Lamprecht
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 09/20] server/rest: add ApiAuth trait to make user auth generic Stefan Reiter
2021-03-31 12:55   ` Wolfgang Bumiller
2021-03-31 14:07     ` Thomas Lamprecht
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 10/20] file-restore-daemon: add binary with virtio-vsock API server Stefan Reiter
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 11/20] file-restore-daemon: add watchdog module Stefan Reiter
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 12/20] file-restore-daemon: add disk module Stefan Reiter
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 13/20] add tools/cpio encoding module Stefan Reiter
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 14/20] file-restore: add qemu-helper setuid binary Stefan Reiter
2021-03-31 14:15   ` Oguz Bektas
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 15/20] file-restore: add basic VM/block device support Stefan Reiter
2021-04-01 15:43   ` [pbs-devel] [PATCH v4 " Stefan Reiter
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 16/20] debian/client: add postinst hook to rebuild file-restore initramfs Stefan Reiter
2021-03-31 10:21 ` [pbs-devel] [PATCH v3 proxmox-backup 17/20] file-restore(-daemon): implement list API Stefan Reiter
2021-03-31 10:22 ` [pbs-devel] [PATCH v3 proxmox-backup 18/20] pxar/extract: add sequential variant of extract_sub_dir Stefan Reiter
2021-03-31 10:22 ` [pbs-devel] [PATCH v3 proxmox-backup 19/20] tools/zip: add zip_directory helper Stefan Reiter
2021-03-31 10:22 ` [pbs-devel] [PATCH v3 proxmox-backup 20/20] file-restore: add 'extract' command for VM file restore Stefan Reiter
2021-04-08 14:44 ` [pbs-devel] applied: [PATCH v3 00/20] Single file restore for VM images Thomas Lamprecht
