public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore
@ 2021-05-05 10:09 Dominik Csapak
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 1/8] tape/drive: add 'move_to_file' to TapeDriver trait Dominik Csapak
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 10:09 UTC (permalink / raw)
  To: pbs-devel

v2 of the series; some small parts are still in progress/unfinished:
* the api path is (imho) not optimal, but I did not find anything better
  (integration into the existing restore call gets ugly fast...)
* the schema for the snapshot list is not done yet (but not hard..)
* the gui for multi-selection is not done yet (I still have to think
  about how to do that with a good ux)

questions still to answer:
 do we really want to have the ability to restore multiple 'single' snapshots
 in one go? if not, it would drastically simplify the code

changes from v1:
* use parallel handler for chunk restore
* rebase on master
* add patch to return manifest from try_restore_snapshot_archive
* use Arc<WorkerTask> like we now do in the rest of the file

@Dietmar, could you test on real tape hardware whether it works correctly as-is?

Dominik Csapak (8):
  tape/drive: add 'move_to_file' to TapeDriver trait
  tape/media_catalog: add helpers to look for snapshot/chunk files
  api2/tape/restore: factor out check_datastore_privs
  api2/tape/restore: remove unnecessary params from
    (try_)restore_snapshot_archive
  api2/tape/restore: return backup manifest in
    try_restore_snapshot_archive
  api2/tape/restore: add 'restore-single' api path
  bin/proxmox-tape: add restore-single command to proxmox-tape
  ui: tape: add single snapshot restore

 src/api2/tape/mod.rs           |   1 +
 src/api2/tape/restore.rs       | 668 +++++++++++++++++++++++++++++++--
 src/bin/proxmox-tape.rs        |  62 +++
 src/tape/drive/lto/mod.rs      |   4 +
 src/tape/drive/mod.rs          |   3 +
 src/tape/drive/virtual_tape.rs |  22 ++
 src/tape/media_catalog.rs      |  20 +
 www/tape/BackupOverview.js     |  41 ++
 www/tape/window/TapeRestore.js |  26 ++
 9 files changed, 821 insertions(+), 26 deletions(-)

-- 
2.20.1

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v2 1/8] tape/drive: add 'move_to_file' to TapeDriver trait
  2021-05-05 10:09 [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore Dominik Csapak
@ 2021-05-05 10:09 ` Dominik Csapak
  2021-05-06  5:56   ` [pbs-devel] applied: " Dietmar Maurer
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 2/8] tape/media_catalog: add helpers to look for snapshot/chunk files Dominik Csapak
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 10:09 UTC (permalink / raw)
  To: pbs-devel

so that we can directly move to a specified file on the tape
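
As a rough illustration of the contract (toy names and types below, not the real proxmox-backup TapeDriver trait): the driver validates the requested file number against the number of files on the loaded tape, then updates its position.

```rust
// Toy stand-in for the new trait method. `SeekableTape` and `ToyTape` are
// made-up names for illustration; the real implementations are
// LtoTapeHandle (which delegates to locate_file) and VirtualTapeHandle.
trait SeekableTape {
    fn move_to_file(&mut self, file: u64) -> Result<(), String>;
}

struct ToyTape {
    files: usize, // number of files on the loaded tape
    pos: usize,   // current file position
}

impl SeekableTape for ToyTape {
    fn move_to_file(&mut self, file: u64) -> Result<(), String> {
        // reject positions past the end of the tape index, as the
        // virtual-tape implementation does
        if file as usize > self.files {
            return Err("invalid file nr".into());
        }
        self.pos = file as usize;
        Ok(())
    }
}
```

With a tape of five files, `move_to_file(3)` succeeds and `move_to_file(9)` fails without moving the position.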

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/tape/drive/lto/mod.rs      |  4 ++++
 src/tape/drive/mod.rs          |  3 +++
 src/tape/drive/virtual_tape.rs | 22 ++++++++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/src/tape/drive/lto/mod.rs b/src/tape/drive/lto/mod.rs
index 642c3cc7..3894b034 100644
--- a/src/tape/drive/lto/mod.rs
+++ b/src/tape/drive/lto/mod.rs
@@ -309,6 +309,10 @@ impl TapeDriver for LtoTapeHandle {
         Ok(())
     }
 
+    fn move_to_file(&mut self, file: u64) -> Result<(), Error> {
+        self.locate_file(file)
+    }
+
     fn rewind(&mut self) -> Result<(), Error> {
         self.sg_tape.rewind()
     }
diff --git a/src/tape/drive/mod.rs b/src/tape/drive/mod.rs
index fd8f503d..f72e0b51 100644
--- a/src/tape/drive/mod.rs
+++ b/src/tape/drive/mod.rs
@@ -80,6 +80,9 @@ pub trait TapeDriver {
     /// Move to last file
     fn move_to_last_file(&mut self) -> Result<(), Error>;
 
+    /// Move to given file nr
+    fn move_to_file(&mut self, file: u64) -> Result<(), Error>;
+
     /// Current file number
     fn current_file_number(&mut self) -> Result<u64, Error>;
 
diff --git a/src/tape/drive/virtual_tape.rs b/src/tape/drive/virtual_tape.rs
index f7ebc0bb..3c5f9502 100644
--- a/src/tape/drive/virtual_tape.rs
+++ b/src/tape/drive/virtual_tape.rs
@@ -261,6 +261,28 @@ impl TapeDriver for VirtualTapeHandle {
         Ok(())
     }
 
+    fn move_to_file(&mut self, file: u64) -> Result<(), Error> {
+        let mut status = self.load_status()?;
+        match status.current_tape {
+            Some(VirtualTapeStatus { ref name, ref mut pos }) => {
+
+                let index = self.load_tape_index(name)
+                    .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
+
+                if file as usize > index.files {
+                    bail!("invalid file nr");
+                }
+
+                *pos = file as usize;
+
+                self.store_status(&status)
+                    .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
+
+                Ok(())
+            }
+            None => bail!("drive is empty (no tape loaded)."),
+        }
+    }
 
     fn read_next_file(&mut self) -> Result<Box<dyn TapeRead>, BlockReadError> {
         let mut status = self.load_status()
-- 
2.20.1


* [pbs-devel] [PATCH proxmox-backup v2 2/8] tape/media_catalog: add helpers to look for snapshot/chunk files
  2021-05-05 10:09 [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore Dominik Csapak
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 1/8] tape/drive: add 'move_to_file' to TapeDriver trait Dominik Csapak
@ 2021-05-05 10:09 ` Dominik Csapak
  2021-05-06  5:59   ` [pbs-devel] applied: " Dietmar Maurer
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 3/8] api2/tape/restore: factor out check_datastore_privs Dominik Csapak
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 10:09 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/tape/media_catalog.rs | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/src/tape/media_catalog.rs b/src/tape/media_catalog.rs
index aff91c43..8be97a36 100644
--- a/src/tape/media_catalog.rs
+++ b/src/tape/media_catalog.rs
@@ -924,6 +924,16 @@ impl MediaSetCatalog {
         false
     }
 
+    /// Returns the media uuid and snapshot archive file number
+    pub fn lookup_snapshot(&self, store: &str, snapshot: &str) -> Option<(&Uuid, u64)> {
+        for (uuid, catalog) in self.catalog_list.iter() {
+            if let Some(nr) = catalog.lookup_snapshot(store, snapshot) {
+                return Some((uuid, nr));
+            }
+        }
+        None
+    }
+
     /// Test if the catalog already contain a chunk
     pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
         for catalog in self.catalog_list.values() {
@@ -933,6 +943,16 @@ impl MediaSetCatalog {
         }
         false
     }
+
+    /// Returns the media uuid and chunk archive file number
+    pub fn lookup_chunk(&self, store: &str, digest: &[u8;32]) -> Option<(&Uuid, u64)> {
+        for (uuid, catalog) in self.catalog_list.iter() {
+            if let Some(nr) = catalog.lookup_chunk(store, digest) {
+                return Some((uuid, nr));
+            }
+        }
+        None
+    }
 }
 
 // Type definitions for internal binary catalog encoding
-- 
2.20.1


* [pbs-devel] [PATCH proxmox-backup v2 3/8] api2/tape/restore: factor out check_datastore_privs
  2021-05-05 10:09 [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore Dominik Csapak
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 1/8] tape/drive: add 'move_to_file' to TapeDriver trait Dominik Csapak
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 2/8] tape/media_catalog: add helpers to look for snapshot/chunk files Dominik Csapak
@ 2021-05-05 10:09 ` Dominik Csapak
  2021-05-06  6:01   ` [pbs-devel] applied: " Dietmar Maurer
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 4/8] api2/tape/restore: remove unnecessary params from (try_)restore_snapshot_archive Dominik Csapak
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 10:09 UTC (permalink / raw)
  To: pbs-devel

so that we can reuse it
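
A simplified sketch of what the factored-out helper checks (placeholder bit values and plain string ids; the real code uses CachedUserInfo, Authid, and the PRIV_* constants):

```rust
// Illustrative only: bit positions are placeholders, not the real
// proxmox-backup privilege values.
const PRIV_DATASTORE_BACKUP: u64 = 1 << 0;
const PRIV_DATASTORE_MODIFY: u64 = 1 << 1;

fn check_datastore_privs(
    privs: u64,
    auth_id: &str,
    owner: Option<&str>,
) -> Result<(), String> {
    if privs & PRIV_DATASTORE_BACKUP == 0 {
        return Err("no permissions on datastore".to_string());
    }
    if let Some(owner) = owner {
        // the real check also lets a user restore as one of their own API
        // tokens; this sketch only compares the ids directly
        if owner != auth_id && privs & PRIV_DATASTORE_MODIFY == 0 {
            return Err(format!("no permission to restore as '{}'", owner));
        }
    }
    Ok(())
}
```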

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/api2/tape/restore.rs | 39 +++++++++++++++++++++++++--------------
 1 file changed, 25 insertions(+), 14 deletions(-)

diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 1dd6ba11..b7bf6670 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -157,6 +157,30 @@ impl DataStoreMap {
     }
 }
 
+fn check_datastore_privs(
+    user_info: &CachedUserInfo,
+    store: &str,
+    auth_id: &Authid,
+    owner: &Option<Authid>,
+) -> Result<(), Error> {
+    let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
+    if (privs & PRIV_DATASTORE_BACKUP) == 0 {
+        bail!("no permissions on /datastore/{}", store);
+    }
+
+    if let Some(ref owner) = owner {
+        let correct_owner = owner == auth_id
+            || (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user());
+
+        // same permission as changing ownership after syncing
+        if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
+            bail!("no permission to restore as '{}'", owner);
+        }
+    }
+
+    Ok(())
+}
+
 pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
 
 #[api(
@@ -212,20 +236,7 @@ pub fn restore(
     }
 
     for store in used_datastores.iter() {
-        let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
-        if (privs & PRIV_DATASTORE_BACKUP) == 0 {
-            bail!("no permissions on /datastore/{}", store);
-        }
-
-        if let Some(ref owner) = owner {
-            let correct_owner = owner == &auth_id
-                || (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user());
-
-            // same permission as changing ownership after syncing
-            if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
-                bail!("no permission to restore as '{}'", owner);
-            }
-        }
+        check_datastore_privs(&user_info, &store, &auth_id, &owner)?;
     }
 
     let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]);
-- 
2.20.1


* [pbs-devel] [PATCH proxmox-backup v2 4/8] api2/tape/restore: remove unnecessary params from (try_)restore_snapshot_archive
  2021-05-05 10:09 [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore Dominik Csapak
                   ` (2 preceding siblings ...)
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 3/8] api2/tape/restore: factor out check_datastore_privs Dominik Csapak
@ 2021-05-05 10:09 ` Dominik Csapak
  2021-05-06  6:02   ` [pbs-devel] applied: " Dietmar Maurer
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 5/8] api2/tape/restore: return backup manifest in try_restore_snapshot_archive Dominik Csapak
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 10:09 UTC (permalink / raw)
  To: pbs-devel

we do not need them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/api2/tape/restore.rs | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index b7bf6670..f3452364 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -503,8 +503,6 @@ fn restore_archive<'a>(
             let datastore_name = archive_header.store;
             let snapshot = archive_header.snapshot;
 
-            let checked_chunks = checked_chunks_map.entry(datastore_name.clone()).or_insert(HashSet::new());
-
             task_log!(worker, "File {}: snapshot archive {}:{}", current_file_number, datastore_name, snapshot);
 
             let backup_dir: BackupDir = snapshot.parse()?;
@@ -531,7 +529,7 @@ fn restore_archive<'a>(
                     if is_new {
                         task_log!(worker, "restore snapshot {}", backup_dir);
 
-                        match restore_snapshot_archive(worker.clone(), reader, &path, &datastore, checked_chunks) {
+                        match restore_snapshot_archive(worker.clone(), reader, &path) {
                             Err(err) => {
                                 std::fs::remove_dir_all(&path)?;
                                 bail!("restore snapshot {} failed - {}", backup_dir, err);
@@ -774,13 +772,11 @@ fn restore_snapshot_archive<'a>(
     worker: Arc<WorkerTask>,
     reader: Box<dyn 'a + TapeRead>,
     snapshot_path: &Path,
-    datastore: &DataStore,
-    checked_chunks: &mut HashSet<[u8;32]>,
 ) -> Result<bool, Error> {
 
     let mut decoder = pxar::decoder::sync::Decoder::from_std(reader)?;
-    match try_restore_snapshot_archive(worker, &mut decoder, snapshot_path, datastore, checked_chunks) {
-        Ok(()) => Ok(true),
+    match try_restore_snapshot_archive(worker, &mut decoder, snapshot_path) {
+        Ok(_) => Ok(true),
         Err(err) => {
             let reader = decoder.input();
 
@@ -804,8 +800,6 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
     worker: Arc<WorkerTask>,
     decoder: &mut pxar::decoder::sync::Decoder<R>,
     snapshot_path: &Path,
-    _datastore: &DataStore,
-    _checked_chunks: &mut HashSet<[u8;32]>,
 ) -> Result<(), Error> {
 
     let _root = match decoder.next() {
-- 
2.20.1


* [pbs-devel] [PATCH proxmox-backup v2 5/8] api2/tape/restore: return backup manifest in try_restore_snapshot_archive
  2021-05-05 10:09 [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore Dominik Csapak
                   ` (3 preceding siblings ...)
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 4/8] api2/tape/restore: remove unnecessary params from (try_)restore_snapshot_archive Dominik Csapak
@ 2021-05-05 10:09 ` Dominik Csapak
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 6/8] api2/tape/restore: add 'restore-single' api path Dominik Csapak
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 10:09 UTC (permalink / raw)
  To: pbs-devel

we'll use that for partial snapshot restore
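
The "unwrap the manifest or fail" step this patch introduces could equally be written with Option::ok_or_else; a minimal standalone sketch (error type simplified to String instead of anyhow::Error):

```rust
// Equivalent of the match-on-None pattern in the patch: return the
// manifest if present, otherwise fail with "missing manifest".
fn require_manifest<T>(manifest: Option<T>) -> Result<T, String> {
    manifest.ok_or_else(|| "missing manifest".to_string())
}
```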

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/api2/tape/restore.rs | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index f3452364..9884b379 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -800,7 +800,7 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
     worker: Arc<WorkerTask>,
     decoder: &mut pxar::decoder::sync::Decoder<R>,
     snapshot_path: &Path,
-) -> Result<(), Error> {
+) -> Result<BackupManifest, Error> {
 
     let _root = match decoder.next() {
         None => bail!("missing root entry"),
@@ -886,9 +886,10 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
         }
     }
 
-    if manifest.is_none() {
-        bail!("missing manifest");
-    }
+    let manifest = match manifest {
+        None => bail!("missing manifest"),
+        Some(manifest) => manifest,
+    };
 
     // Do not verify anything here, because this would be to slow (causes tape stops).
 
@@ -902,7 +903,7 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
         bail!("Atomic rename manifest {:?} failed - {}", manifest_path, err);
     }
 
-    Ok(())
+    Ok(manifest)
 }
 
 /// Try to restore media catalogs (form catalog_archives)
-- 
2.20.1


* [pbs-devel] [PATCH proxmox-backup v2 6/8] api2/tape/restore: add 'restore-single' api path
  2021-05-05 10:09 [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore Dominik Csapak
                   ` (4 preceding siblings ...)
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 5/8] api2/tape/restore: return backup manifest in try_restore_snapshot_archive Dominik Csapak
@ 2021-05-05 10:09 ` Dominik Csapak
  2021-05-05 10:53   ` Thomas Lamprecht
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 7/8] bin/proxmox-tape: add restore-single command to proxmox-tape Dominik Csapak
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 8/8] ui: tape: add single snapshot restore Dominik Csapak
  7 siblings, 1 reply; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 10:09 UTC (permalink / raw)
  To: pbs-devel

this makes it possible to only restore some snapshots from a tape media-set
instead of the whole set. If the user selects only a small part, this will
probably be faster (and it definitely uses less space on the target
datastores).

the user has to provide a datastore map (like on a normal restore) and
a list of snapshots to restore in the form 'store:type/group/id',
e.g. 'mystore:ct/100/2021-01-01T00:00:00Z'
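
For illustration, the spec splitting described above can be sketched standalone (error type simplified to String; not the exact code from the patch, which uses the same splitn(2, ':') idiom):

```rust
// Split a 'store:type/group/id' spec into its datastore and snapshot parts.
fn parse_snapshot_spec(spec: &str) -> Result<(&str, &str), String> {
    let mut split = spec.splitn(2, ':');
    // splitn always yields at least one item, so only the second can be missing
    let store = split
        .next()
        .ok_or_else(|| format!("invalid snapshot: {}", spec))?;
    let snapshot = split
        .next()
        .ok_or_else(|| format!("invalid snapshot: {}", spec))?;
    Ok((store, snapshot))
}
```

Because only the first ':' splits, timestamps containing ':' stay intact in the snapshot part.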

we achieve this by first restoring the index to a temp dir, retrieving
the list of chunks, and then, using the catalog, generating a list of
media/files that we need to (partially) restore.
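
A rough standalone sketch of that planning step, with media uuids simplified to Strings and chunk digests to u64 (the real code uses Uuid and [u8; 32], and checks existence via cond_touch_chunk on the datastore):

```rust
use std::collections::{HashMap, HashSet};

// For every needed chunk the target datastore does not already have, look up
// which media/file holds it and group the digests by media uuid and file nr.
fn plan_chunk_restore(
    needed: &HashSet<u64>,
    existing: &HashSet<u64>,
    catalog: &HashMap<u64, (String, u64)>, // digest -> (media uuid, file nr)
) -> HashMap<String, HashMap<u64, HashSet<u64>>> {
    let mut media_list: HashMap<String, HashMap<u64, HashSet<u64>>> = HashMap::new();
    for digest in needed {
        if existing.contains(digest) {
            continue; // chunk is already on the target datastore
        }
        if let Some((uuid, nr)) = catalog.get(digest) {
            media_list
                .entry(uuid.clone())
                .or_insert_with(HashMap::new)
                .entry(*nr)
                .or_insert_with(HashSet::new)
                .insert(*digest);
        }
    }
    media_list
}
```

Grouping by media first and sorting the file numbers afterwards lets phase 2 visit each tape once, seeking forward only.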

finally, we copy the snapshots to the correct dir in the datastore
and clean up the temp dir.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/api2/tape/mod.rs     |   1 +
 src/api2/tape/restore.rs | 612 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 612 insertions(+), 1 deletion(-)

diff --git a/src/api2/tape/mod.rs b/src/api2/tape/mod.rs
index 219a721b..4bf72d18 100644
--- a/src/api2/tape/mod.rs
+++ b/src/api2/tape/mod.rs
@@ -72,6 +72,7 @@ const SUBDIRS: SubdirMap = &[
     ("drive", &drive::ROUTER),
     ("media", &media::ROUTER),
     ("restore", &restore::ROUTER),
+    ("restore-single", &restore::ROUTER_SINGLE),
     (
         "scan-changers",
         &Router::new()
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 9884b379..f6b2157e 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -1,4 +1,4 @@
-use std::path::Path;
+use std::path::{Path, PathBuf};
 use std::ffi::OsStr;
 use std::collections::{HashMap, HashSet};
 use std::convert::TryFrom;
@@ -51,9 +51,14 @@ use crate::{
         },
     },
     backup::{
+        ArchiveType,
+        archive_type,
+        IndexFile,
         MANIFEST_BLOB_NAME,
         CryptMode,
         DataStore,
+        DynamicIndexReader,
+        FixedIndexReader,
         BackupDir,
         DataBlob,
         BackupManifest,
@@ -69,6 +74,7 @@ use crate::{
         MediaId,
         MediaSet,
         MediaCatalog,
+        MediaSetCatalog,
         Inventory,
         lock_media_set,
         file_formats::{
@@ -95,6 +101,8 @@ use crate::{
     },
 };
 
+const RESTORE_TMP_DIR: &str = "/var/tmp/proxmox-backup";
+
 pub struct DataStoreMap {
     map: HashMap<String, Arc<DataStore>>,
     default: Option<Arc<DataStore>>,
@@ -182,6 +190,608 @@ fn check_datastore_privs(
 }
 
 pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
+pub const ROUTER_SINGLE: Router = Router::new().post(&API_METHOD_RESTORE_SINGLE);
+
+#[api(
+   input: {
+        properties: {
+            store: {
+                schema: DATASTORE_MAP_LIST_SCHEMA,
+            },
+            drive: {
+                schema: DRIVE_NAME_SCHEMA,
+            },
+            "media-set": {
+                description: "Media set UUID.",
+                type: String,
+            },
+            "snapshots": {
+                description: "List of snapshots.",
+                type: Array,
+                items: {
+                    type: String,
+                    description: "A single snapshot in format: 'store:type/group/id'."
+                },
+            },
+            "notify-user": {
+                type: Userid,
+                optional: true,
+            },
+            owner: {
+                type: Authid,
+                optional: true,
+            },
+        },
+    },
+    returns: {
+        schema: UPID_SCHEMA,
+    },
+    access: {
+        // Note: parameters are not URI parameters, so we need to check them inside the function body
+        description: "The user needs Tape.Read privilege on /tape/pool/{pool} \
+                      and /tape/drive/{drive}, Datastore.Backup privilege on /datastore/{store}.",
+        permission: &Permission::Anybody,
+    },
+)]
+/// Restore single snapshots from a media-set
+pub fn restore_single(
+    store: String,
+    drive: String,
+    media_set: String,
+    snapshots: Vec<String>,
+    notify_user: Option<Userid>,
+    owner: Option<Authid>,
+    rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Value, Error> {
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+    let user_info = CachedUserInfo::new()?;
+
+    let store_map = DataStoreMap::try_from(store)
+        .map_err(|err| format_err!("cannot parse store mapping: {}", err))?;
+    let used_datastores = store_map.used_datastores();
+    if used_datastores.len() == 0 {
+        bail!("no datastores given");
+    }
+
+    for store in used_datastores.iter() {
+        check_datastore_privs(&user_info, &store, &auth_id, &owner)?;
+    }
+
+    let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]);
+    if (privs & PRIV_TAPE_READ) == 0 {
+        bail!("no permissions on /tape/drive/{}", drive);
+    }
+
+    let media_set_uuid = media_set.parse()?;
+
+    let status_path = Path::new(TAPE_STATUS_DIR);
+
+    // test what catalog files we have
+    let _catalogs = MediaCatalog::media_with_catalogs(status_path)?;
+
+    let _lock = lock_media_set(status_path, &media_set_uuid, None)?;
+
+    let inventory = Inventory::load(status_path)?;
+
+    let pool = inventory.lookup_media_set_pool(&media_set_uuid)?;
+
+    let privs = user_info.lookup_privs(&auth_id, &["tape", "pool", &pool]);
+    if (privs & PRIV_TAPE_READ) == 0 {
+        bail!("no permissions on /tape/pool/{}", pool);
+    }
+
+    let (drive_config, _digest) = config::drive::config()?;
+
+    // early check/lock before starting worker
+    let drive_lock = lock_tape_device(&drive_config, &drive)?;
+
+    let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
+
+    let upid_str = WorkerTask::new_thread(
+        "tape-restore-single",
+        None,
+        auth_id.clone(),
+        to_stdout,
+        move |worker| {
+            let _drive_lock = drive_lock; // keep lock guard
+
+            set_tape_device_state(&drive, &worker.upid().to_string())?;
+
+
+            let restore_owner = owner.as_ref().unwrap_or(&auth_id);
+
+            let email = notify_user
+                .as_ref()
+                .and_then(|userid| lookup_user_email(userid))
+                .or_else(|| lookup_user_email(&auth_id.clone().into()));
+
+            let res = restore_single_worker(
+                worker.clone(),
+                snapshots,
+                inventory,
+                media_set_uuid,
+                drive_config,
+                &drive,
+                store_map,
+                restore_owner,
+                email,
+            );
+
+            if let Err(err) = set_tape_device_state(&drive, "") {
+                task_log!(worker, "could not unset drive state for {}: {}", drive, err);
+            }
+
+            res
+        },
+    )?;
+
+    Ok(upid_str.into())
+}
+
+fn restore_single_worker(
+    worker: Arc<WorkerTask>,
+    snapshots: Vec<String>,
+    inventory: Inventory,
+    media_set_uuid: Uuid,
+    drive_config: SectionConfigData,
+    drive_name: &str,
+    store_map: DataStoreMap,
+    restore_owner: &Authid,
+    email: Option<String>,
+) -> Result<(), Error> {
+    let base_path: PathBuf = format!("{}/{}", RESTORE_TMP_DIR, media_set_uuid).into();
+    std::fs::create_dir_all(&base_path)?;
+
+    let catalog = get_media_set_catalog(&inventory, &media_set_uuid)?;
+
+    let mut datastore_locks = Vec::new();
+    let mut snapshot_file_list: HashMap<Uuid, Vec<u64>> = HashMap::new();
+    let mut snapshot_locks = HashMap::new();
+
+    let res = proxmox::try_block!({
+        // assemble snapshot files/locks
+        for i in 0..snapshots.len() {
+            let store_snapshot = &snapshots[i];
+            let mut split = snapshots[i].splitn(2, ':');
+            let source_datastore = split
+                .next()
+                .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+            let snapshot = split
+                .next()
+                .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+            let backup_dir: BackupDir = snapshot.parse()?;
+
+            let datastore = store_map.get_datastore(source_datastore).ok_or_else(|| {
+                format_err!(
+                    "could not find mapping for source datastore: {}",
+                    source_datastore
+                )
+            })?;
+
+            let (owner, _group_lock) =
+                datastore.create_locked_backup_group(backup_dir.group(), &restore_owner)?;
+            if restore_owner != &owner {
+                // only the owner is allowed to create additional snapshots
+                bail!(
+                    "restore '{}' failed - owner check failed ({} != {})",
+                    snapshot,
+                    restore_owner,
+                    owner
+                );
+            }
+
+            let (media_id, file_num) = if let Some((media_uuid, nr)) =
+                catalog.lookup_snapshot(&source_datastore, &snapshot)
+            {
+                let media_id = inventory.lookup_media(media_uuid).unwrap();
+                (media_id, nr)
+            } else {
+                task_warn!(
+                    worker,
+                    "did not find snapshot '{}' in media set {}",
+                    snapshot,
+                    media_set_uuid
+                );
+                continue;
+            };
+
+            let (_rel_path, is_new, snap_lock) = datastore.create_locked_backup_dir(&backup_dir)?;
+
+            if !is_new {
+                task_log!(
+                    worker,
+                    "found snapshot {} on target datastore, skipping...",
+                    snapshot
+                );
+                continue;
+            }
+
+            snapshot_locks.insert(store_snapshot.to_string(), snap_lock);
+
+            let shared_store_lock = datastore.try_shared_chunk_store_lock()?;
+            datastore_locks.push(shared_store_lock);
+
+            let file_list = snapshot_file_list
+                .entry(media_id.label.uuid.clone())
+                .or_insert_with(Vec::new);
+            file_list.push(file_num);
+
+            task_log!(
+                worker,
+                "found snapshot {} on {}: file {}",
+                snapshot,
+                media_id.label.label_text,
+                file_num
+            );
+        }
+
+        if snapshot_file_list.is_empty() {
+            task_log!(worker, "nothing to restore, skipping remaining phases...");
+            return Ok(());
+        }
+
+        task_log!(worker, "Phase 1: temporarily restore snapshots to temp dir");
+        let mut chunks_list: HashMap<String, HashSet<[u8; 32]>> = HashMap::new();
+        for (media_uuid, file_list) in snapshot_file_list.iter_mut() {
+            let media_id = inventory.lookup_media(media_uuid).unwrap();
+            let (drive, info) = request_and_load_media(
+                &worker,
+                &drive_config,
+                &drive_name,
+                &media_id.label,
+                &email,
+            )?;
+            file_list.sort_unstable();
+            restore_snapshots_to_tmpdir(
+                worker.clone(),
+                &base_path,
+                file_list,
+                drive,
+                &info,
+                &media_set_uuid,
+                &mut chunks_list,
+            )?;
+        }
+
+        let mut media_list: HashMap<Uuid, HashMap<u64, HashSet<[u8; 32]>>> = HashMap::new();
+
+        for (source_datastore, chunks) in chunks_list.into_iter() {
+            let datastore = store_map.get_datastore(&source_datastore).ok_or_else(|| {
+                format_err!(
+                    "could not find mapping for source datastore: {}",
+                    source_datastore
+                )
+            })?;
+            for digest in chunks.into_iter() {
+                // we only want to restore chunks that we do not have yet
+                if !datastore.cond_touch_chunk(&digest, false)? {
+                    if let Some((uuid, nr)) = catalog.lookup_chunk(&source_datastore, &digest) {
+                        let file = media_list.entry(uuid.clone()).or_insert_with(HashMap::new);
+                        let chunks = file.entry(nr).or_insert_with(HashSet::new);
+                        chunks.insert(digest);
+                    }
+                }
+            }
+        }
+
+        // we do not need it anymore, saves memory
+        drop(catalog);
+
+        if !media_list.is_empty() {
+            task_log!(worker, "Phase 2: restore chunks to datastores");
+        } else {
+            task_log!(worker, "all chunks exist already, skipping phase 2...");
+        }
+
+        for (media_uuid, file_list) in media_list.iter_mut() {
+            let media_id = inventory.lookup_media(media_uuid).unwrap();
+            let (mut drive, _info) = request_and_load_media(
+                &worker,
+                &drive_config,
+                &drive_name,
+                &media_id.label,
+                &email,
+            )?;
+            let mut files: Vec<u64> = file_list.keys().map(|v| *v).collect();
+            files.sort();
+            restore_chunk_file_list(worker.clone(), &mut drive, &files[..], &store_map, file_list)?;
+        }
+
+        task_log!(
+            worker,
+            "Phase 3: copy snapshots from temp dir to datastores"
+        );
+        for (store_snapshot, _lock) in snapshot_locks.into_iter() {
+            let mut split = store_snapshot.splitn(2, ':');
+            let source_datastore = split
+                .next()
+                .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+            let snapshot = split
+                .next()
+                .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+            let backup_dir: BackupDir = snapshot.parse()?;
+
+            let datastore = store_map
+                .get_datastore(&source_datastore)
+                .ok_or_else(|| format_err!("unexpected source datastore: {}", source_datastore))?;
+
+            let mut tmp_path = base_path.clone();
+            tmp_path.push(&source_datastore);
+            tmp_path.push(snapshot);
+
+            let path = datastore.snapshot_path(&backup_dir);
+
+            for entry in std::fs::read_dir(tmp_path)? {
+                let entry = entry?;
+                let mut new_path = path.clone();
+                new_path.push(entry.file_name());
+                std::fs::copy(entry.path(), new_path)?;
+            }
+            task_log!(worker, "Restore snapshot '{}' done", snapshot);
+        }
+        Ok(())
+    });
+
+    match std::fs::remove_dir_all(&base_path) {
+        Ok(()) => {}
+        Err(err) => task_warn!(worker, "error cleaning up: {}", err),
+    }
+
+    res
+}
+
+fn get_media_set_catalog(
+    inventory: &Inventory,
+    media_set_uuid: &Uuid,
+) -> Result<MediaSetCatalog, Error> {
+    let status_path = Path::new(TAPE_STATUS_DIR);
+
+    let members = inventory.compute_media_set_members(media_set_uuid)?;
+    let media_list = members.media_list();
+    let mut catalog = MediaSetCatalog::new();
+
+    for (seq_nr, media_uuid) in media_list.iter().enumerate() {
+        match media_uuid {
+            None => {
+                bail!(
+                    "media set {} is incomplete (missing member {}).",
+                    media_set_uuid,
+                    seq_nr
+                );
+            }
+            Some(media_uuid) => {
+                let media_id = inventory.lookup_media(media_uuid).unwrap();
+                let media_catalog = MediaCatalog::open(status_path, &media_id, false, false)?;
+                catalog.append_catalog(media_catalog)?;
+            }
+        }
+    }
+
+    Ok(catalog)
+}
+
+fn restore_snapshots_to_tmpdir(
+    worker: Arc<WorkerTask>,
+    path: &PathBuf,
+    file_list: &[u64],
+    mut drive: Box<dyn TapeDriver>,
+    media_id: &MediaId,
+    media_set_uuid: &Uuid,
+    chunks_list: &mut HashMap<String, HashSet<[u8; 32]>>,
+) -> Result<(), Error> {
+    match media_id.media_set_label {
+        None => {
+            bail!(
+                "missing media set label on media {} ({})",
+                media_id.label.label_text,
+                media_id.label.uuid
+            );
+        }
+        Some(ref set) => {
+            if set.uuid != *media_set_uuid {
+                bail!(
+                    "wrong media set label on media {} ({} != {})",
+                    media_id.label.label_text,
+                    media_id.label.uuid,
+                    media_set_uuid
+                );
+            }
+            let encrypt_fingerprint = set.encryption_key_fingerprint.clone().map(|fp| {
+                task_log!(worker, "Encryption key fingerprint: {}", fp);
+                (fp, set.uuid.clone())
+            });
+
+            drive.set_encryption(encrypt_fingerprint)?;
+        }
+    }
+
+    for file_num in file_list {
+        drive.move_to_file(*file_num)?;
+        let mut reader = drive.read_next_file()?;
+
+        let header: MediaContentHeader = unsafe { reader.read_le_value()? };
+        if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
+            bail!("missing MediaContentHeader");
+        }
+
+        match header.content_magic {
+            PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1 => {
+                let header_data = reader.read_exact_allocated(header.size as usize)?;
+
+                let archive_header: SnapshotArchiveHeader = serde_json::from_slice(&header_data)
+                    .map_err(|err| {
+                        format_err!("unable to parse snapshot archive header - {}", err)
+                    })?;
+
+                let source_datastore = archive_header.store;
+                let snapshot = archive_header.snapshot;
+
+                task_log!(
+                    worker,
+                    "File {}: snapshot archive {}:{}",
+                    file_num,
+                    source_datastore,
+                    snapshot
+                );
+
+                let mut decoder = pxar::decoder::sync::Decoder::from_std(reader)?;
+
+                let mut tmp_path = path.clone();
+                tmp_path.push(&source_datastore);
+                tmp_path.push(snapshot);
+                std::fs::create_dir_all(&tmp_path)?;
+
+                let chunks = chunks_list
+                    .entry(source_datastore)
+                    .or_insert_with(HashSet::new);
+                let manifest = try_restore_snapshot_archive(worker.clone(), &mut decoder, &tmp_path)?;
+                for item in manifest.files() {
+                    let mut archive_path = tmp_path.to_owned();
+                    archive_path.push(&item.filename);
+
+                    let index: Box<dyn IndexFile> = match archive_type(&item.filename)? {
+                        ArchiveType::DynamicIndex => {
+                            Box::new(DynamicIndexReader::open(&archive_path)?)
+                        }
+                        ArchiveType::FixedIndex => {
+                            Box::new(FixedIndexReader::open(&archive_path)?)
+                        }
+                        ArchiveType::Blob => continue,
+                    };
+                    for i in 0..index.index_count() {
+                        if let Some(digest) = index.index_digest(i) {
+                            chunks.insert(*digest);
+                        }
+                    }
+                }
+            }
+            _ => bail!("unexpected file type"),
+        }
+    }
+
+    Ok(())
+}
+
+fn restore_chunk_file_list(
+    worker: Arc<WorkerTask>,
+    drive: &mut Box<dyn TapeDriver>,
+    file_list: &[u64],
+    store_map: &DataStoreMap,
+    chunk_list: &mut HashMap<u64, HashSet<[u8; 32]>>,
+) -> Result<(), Error> {
+    for nr in file_list {
+        let current_file_number = drive.current_file_number()?;
+        if current_file_number != *nr {
+            drive.move_to_file(*nr)?;
+        }
+        let mut reader = drive.read_next_file()?;
+        let header: MediaContentHeader = unsafe { reader.read_le_value()? };
+        if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
+            bail!("missing MediaContentHeader");
+        }
+
+        match header.content_magic {
+            PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1 => {
+                let header_data = reader.read_exact_allocated(header.size as usize)?;
+
+                let archive_header: ChunkArchiveHeader = serde_json::from_slice(&header_data)
+                    .map_err(|err| format_err!("unable to parse chunk archive header - {}", err))?;
+
+                let source_datastore = archive_header.store;
+
+                task_log!(
+                    worker,
+                    "File {}: chunk archive for datastore '{}'",
+                    nr,
+                    source_datastore
+                );
+
+                let datastore = store_map.get_datastore(&source_datastore).ok_or_else(|| {
+                    format_err!("unexpected chunk archive for store: {}", source_datastore)
+                })?;
+
+                let chunks = chunk_list
+                    .get_mut(nr)
+                    .ok_or_else(|| format_err!("unexpected file nr: {}", nr))?;
+
+                let count = restore_partial_chunk_archive(worker.clone(), reader, datastore.clone(), chunks)?;
+                task_log!(worker, "restored {} chunks", count);
+            }
+            _ => bail!("unexpected content magic {:?}", header.content_magic),
+        }
+    }
+
+    Ok(())
+}
+
+fn restore_partial_chunk_archive<'a>(
+    worker: Arc<WorkerTask>,
+    reader: Box<dyn 'a + TapeRead>,
+    datastore: Arc<DataStore>,
+    chunk_list: &mut HashSet<[u8; 32]>,
+) -> Result<usize, Error> {
+    let mut decoder = ChunkArchiveDecoder::new(reader);
+
+    let mut count = 0;
+
+    let start_time = std::time::SystemTime::now();
+    let bytes = Arc::new(std::sync::atomic::AtomicU64::new(0));
+    let bytes2 = bytes.clone();
+
+    let writer_pool = ParallelHandler::new(
+        "tape restore chunk writer",
+        4,
+        move |(chunk, digest): (DataBlob, [u8; 32])| {
+            if !datastore.cond_touch_chunk(&digest, false)? {
+                bytes2.fetch_add(chunk.raw_size(), std::sync::atomic::Ordering::SeqCst);
+                chunk.verify_crc()?;
+                if chunk.crypt_mode()? == CryptMode::None {
+                    chunk.decode(None, Some(&digest))?; // verify digest
+                }
+
+                datastore.insert_chunk(&chunk, &digest)?;
+            }
+            Ok(())
+        },
+    );
+
+    let verify_and_write_channel = writer_pool.channel();
+
+    loop {
+        let (digest, blob) = match decoder.next_chunk()? {
+            Some((digest, blob)) => (digest, blob),
+            None => break,
+        };
+
+        worker.check_abort()?;
+
+        if chunk_list.remove(&digest) {
+            verify_and_write_channel.send((blob, digest.clone()))?;
+            count += 1;
+        }
+
+        if chunk_list.is_empty() {
+            break;
+        }
+    }
+
+    drop(verify_and_write_channel);
+
+    writer_pool.complete()?;
+
+    let elapsed = start_time.elapsed()?.as_secs_f64();
+
+    let bytes = bytes.load(std::sync::atomic::Ordering::SeqCst);
+
+    task_log!(
+        worker,
+        "restored {} bytes ({:.2} MB/s)",
+        bytes,
+        (bytes as f64) / (1_000_000.0 * elapsed)
+    );
+
+    Ok(count)
+}
 
 #[api(
    input: {
-- 
2.20.1

^ permalink raw reply	[flat|nested] 17+ messages in thread
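The Phase 3 copy loop in the patch above splits each `store:snapshot` entry on the first colon before mapping it to a target datastore. A standalone sketch of that parsing, using a hypothetical `parse_snapshot_spec` helper rather than the patch's inline code, could look like:

```rust
// Minimal sketch of the "store:snapshot" spec handling used in Phase 3 of the
// patch above. `parse_snapshot_spec` is a hypothetical helper name, not part
// of the actual patch.
fn parse_snapshot_spec(spec: &str) -> Result<(&str, &str), String> {
    // splitn(2, ':') splits only on the first colon, so colons inside the
    // snapshot part (e.g. the RFC 3339 timestamp) stay intact
    let mut split = spec.splitn(2, ':');
    let store = split
        .next()
        .filter(|s| !s.is_empty())
        .ok_or_else(|| format!("invalid snapshot: {}", spec))?;
    let snapshot = split
        .next()
        .filter(|s| !s.is_empty())
        .ok_or_else(|| format!("invalid snapshot: {}", spec))?;
    Ok((store, snapshot))
}

fn main() {
    let spec = "tank:vm/100/2021-05-05T10:09:00Z";
    let (store, snapshot) = parse_snapshot_spec(spec).unwrap();
    assert_eq!(store, "tank");
    assert_eq!(snapshot, "vm/100/2021-05-05T10:09:00Z");
    assert!(parse_snapshot_spec("missing-colon").is_err());
    println!("ok");
}
```

Note the empty-part filtering is an addition for the sketch; the patch itself only checks that both `splitn` parts exist.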

* [pbs-devel] [PATCH proxmox-backup v2 7/8] bin/proxmox-tape: add restore-single command to proxmox-tape
  2021-05-05 10:09 [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore Dominik Csapak
                   ` (5 preceding siblings ...)
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 6/8] api2/tape/restore: add 'restore-single' api path Dominik Csapak
@ 2021-05-05 10:09 ` Dominik Csapak
  2021-05-05 11:04   ` Thomas Lamprecht
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 8/8] ui: tape: add single snapshot restore Dominik Csapak
  7 siblings, 1 reply; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 10:09 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/bin/proxmox-tape.rs | 62 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/src/bin/proxmox-tape.rs b/src/bin/proxmox-tape.rs
index e18f334c..3d5d0cf3 100644
--- a/src/bin/proxmox-tape.rs
+++ b/src/bin/proxmox-tape.rs
@@ -868,6 +868,61 @@ async fn backup(mut param: Value) -> Result<(), Error> {
     Ok(())
 }
 
+#[api(
+   input: {
+        properties: {
+            store: {
+                schema: DATASTORE_MAP_LIST_SCHEMA,
+            },
+            drive: {
+                schema: DRIVE_NAME_SCHEMA,
+                optional: true,
+            },
+            "media-set": {
+                description: "Media set UUID.",
+                type: String,
+            },
+            "snapshots": {
+                description: "Comma-separated list of snapshots.",
+                type: Array,
+                items: {
+                    type: String,
+                    description: "A single snapshot.",
+                },
+            },
+            "notify-user": {
+                type: Userid,
+                optional: true,
+            },
+            owner: {
+                type: Authid,
+                optional: true,
+            },
+            "output-format": {
+                schema: OUTPUT_FORMAT,
+                optional: true,
+            },
+        },
+    },
+)]
+/// Restore single snapshots from a media-set
+async fn restore_single(mut param: Value) -> Result<(), Error> {
+
+    let output_format = get_output_format(&param);
+
+    let (config, _digest) = config::drive::config()?;
+
+    param["drive"] = extract_drive_name(&mut param, &config)?.into();
+
+    let mut client = connect_to_localhost()?;
+
+    let result = client.post("api2/json/tape/restore-single", Some(param)).await?;
+
+    view_task_result(&mut client, result, &output_format).await?;
+
+    Ok(())
+}
+
 #[api(
    input: {
         properties: {
@@ -981,6 +1036,13 @@ fn main() {
                 .completion_cb("store", complete_datastore_name)
                 .completion_cb("media-set", complete_media_set_uuid)
         )
+        .insert(
+            "restore-single",
+            CliCommand::new(&API_METHOD_RESTORE_SINGLE)
+                .arg_param(&["media-set", "store", "snapshots"])
+                .completion_cb("store", complete_datastore_name)
+                .completion_cb("media-set", complete_media_set_uuid)
+        )
         .insert(
             "barcode-label",
             CliCommand::new(&API_METHOD_BARCODE_LABEL_MEDIA)
-- 
2.20.1

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v2 8/8] ui: tape: add single snapshot restore
  2021-05-05 10:09 [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore Dominik Csapak
                   ` (6 preceding siblings ...)
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 7/8] bin/proxmox-tape: add restore-single command to proxmox-tape Dominik Csapak
@ 2021-05-05 10:09 ` Dominik Csapak
  7 siblings, 0 replies; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 10:09 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/tape/BackupOverview.js     | 41 ++++++++++++++++++++++++++++++++++
 www/tape/window/TapeRestore.js | 26 +++++++++++++++++++++
 2 files changed, 67 insertions(+)

diff --git a/www/tape/BackupOverview.js b/www/tape/BackupOverview.js
index 0cc0e18e..c028d58d 100644
--- a/www/tape/BackupOverview.js
+++ b/www/tape/BackupOverview.js
@@ -16,6 +16,38 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
 	    }).show();
 	},
 
+	restoreSingle: function(button, record) {
+	    let me = this;
+	    let view = me.getView();
+	    let selection = view.getSelection();
+	    if (!selection || selection.length < 1) {
+		return;
+	    }
+
+	    let node = selection[0];
+	    if (node.data.restoreid === undefined) {
+		return;
+	    }
+	    let restoreid = node.data.restoreid;
+	    let mediaset = node.data.text;
+	    let uuid = node.data['media-set-uuid'];
+	    let datastores = [node.data.store];
+
+	    Ext.create('PBS.TapeManagement.TapeRestoreWindow', {
+		mediaset,
+		uuid,
+		list: [
+		    restoreid,
+		],
+		datastores,
+		listeners: {
+		    destroy: function() {
+			me.reload();
+		    },
+		},
+	    }).show();
+	},
+
 	restore: function(button, record) {
 	    let me = this;
 	    let view = me.getView();
@@ -149,6 +181,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
 		    entry.text = entry.snapshot;
 		    entry.leaf = true;
 		    entry.children = [];
+		    entry.restoreid = `${entry.store}:${entry.snapshot}`;
 		    let iconCls = PBS.Utils.get_type_icon_cls(entry.snapshot);
 		    if (iconCls !== '') {
 			entry.iconCls = `fa ${iconCls}`;
@@ -262,6 +295,14 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
 	    parentXType: 'treepanel',
 	    enableFn: (rec) => !!rec.data['media-set-uuid'],
 	},
+	{
+	    xtype: 'proxmoxButton',
+	    disabled: true,
+	    text: gettext('Restore Snapshot'),
+	    handler: 'restoreSingle',
+	    parentXType: 'treepanel',
+	    enableFn: (rec) => !!rec.data.restoreid,
+	},
     ],
 
     columns: [
diff --git a/www/tape/window/TapeRestore.js b/www/tape/window/TapeRestore.js
index 85011cba..fcd93f9c 100644
--- a/www/tape/window/TapeRestore.js
+++ b/www/tape/window/TapeRestore.js
@@ -10,6 +10,19 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
     showTaskViewer: true,
     isCreate: true,
 
+    cbindData: function(config) {
+	let me = this;
+	me.isSingle = false;
+	me.listText = "";
+	if (me.list !== undefined) {
+	    me.url = '/api2/extjs/tape/restore-single';
+	    me.isSingle = true;
+	    me.listText = me.list.join('<br>');
+	    me.title = gettext('Restore Snapshot');
+	}
+	return {};
+    },
+
     defaults: {
 	labelWidth: 120,
     },
@@ -33,6 +46,10 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
 		}
 		delete values.mapping;
 
+		if (me.up('window').list !== undefined) {
+		    values.snapshots = me.up('window').list;
+		}
+
 		values.store = datastores.join(',');
 
 		return values;
@@ -55,6 +72,15 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
 			value: '{uuid}',
 		    },
 		},
+		{
+		    xtype: 'displayfield',
+		    fieldLabel: gettext('Snapshot(s)'),
+		    submitValue: false,
+		    cbind: {
+			hidden: '{!isSingle}',
+			value: '{listText}',
+		    },
+		},
 		{
 		    xtype: 'pbsDriveSelector',
 		    fieldLabel: gettext('Drive'),
-- 
2.20.1

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup v2 6/8] api2/tape/restore: add 'restore-single' api path
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 6/8] api2/tape/restore: add 'restore-single' api path Dominik Csapak
@ 2021-05-05 10:53   ` Thomas Lamprecht
  2021-05-05 12:48     ` Dominik Csapak
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Lamprecht @ 2021-05-05 10:53 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

On 05.05.21 12:09, Dominik Csapak wrote:
> @@ -182,6 +190,608 @@ fn check_datastore_privs(
>  }
>  
>  pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
> +pub const ROUTER_SINGLE: Router = Router::new().post(&API_METHOD_RESTORE_SINGLE);
> +
> +#[api(
> +   input: {
> +        properties: {
> +            store: {
> +                schema: DATASTORE_MAP_LIST_SCHEMA,
> +            },
> +            drive: {
> +                schema: DRIVE_NAME_SCHEMA,
> +            },
> +            "media-set": {
> +                description: "Media set UUID.",
> +                type: String,
> +            },
> +            "snapshots": {
> +                description: "List of snapshots.",
> +                type: Array,
> +                items: {
> +                    type: String,
> +                    description: "A single snapshot in format: 'store:type/group/id'."
> +                },
> +            },

restore-*single* which may restore a list of snapshots, rather weird...

Why is snapshots not just an optional parameter of the default restore path?

We already have all the other parameters there; it would make it less confusing from
an outside POV and allow reusing some code if done right..

> +            "notify-user": {
> +                type: Userid,
> +                optional: true,
> +            },
> +            owner: {
> +                type: Authid,
> +                optional: true,
> +            },
> +        },
> +    },
> +    returns: {
> +        schema: UPID_SCHEMA,
> +    },
> +    access: {
> +        // Note: parameters are no uri parameter, so we need to test inside function body
> +        description: "The user needs Tape.Read privilege on /tape/pool/{pool} \
> +                      and /tape/drive/{drive}, Datastore.Backup privilege on /datastore/{store}.",
> +        permission: &Permission::Anybody,
> +    },
> +)]

^ permalink raw reply	[flat|nested] 17+ messages in thread
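Thomas's suggestion above is to make `snapshots` an optional parameter of the existing restore call instead of a separate `restore-single` path. A sketch of that single-entry-point shape, with simplified stand-in types rather than the real proxmox-backup API, could look like:

```rust
// Sketch of the single-entry-point design suggested above: `snapshots`
// becomes an optional parameter of the existing restore call instead of a
// separate restore-single path. Types are simplified stand-ins, not the real
// proxmox-backup API.
fn plan_restore(media_set: &str, snapshots: Option<&[&str]>) -> String {
    match snapshots {
        // no list given: keep the old behaviour and restore the whole media set
        None => format!("restore complete media set {}", media_set),
        // list given: restore only the selected snapshots
        Some(list) => format!(
            "restore {} snapshot(s) from media set {}",
            list.len(),
            media_set
        ),
    }
}

fn main() {
    assert_eq!(
        plan_restore("9da37a02", None),
        "restore complete media set 9da37a02"
    );
    assert_eq!(
        plan_restore("9da37a02", Some(&["tank:vm/100/x"])),
        "restore 1 snapshot(s) from media set 9da37a02"
    );
    println!("ok");
}
```

The `Option` makes the API backward compatible: existing callers that omit the parameter keep the whole-media-set behaviour.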

* Re: [pbs-devel] [PATCH proxmox-backup v2 7/8] bin/proxmox-tape: add restore-single command to proxmox-tape
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 7/8] bin/proxmox-tape: add restore-single command to proxmox-tape Dominik Csapak
@ 2021-05-05 11:04   ` Thomas Lamprecht
  2021-05-05 12:50     ` Dominik Csapak
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Lamprecht @ 2021-05-05 11:04 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

On 05.05.21 12:09, Dominik Csapak wrote:
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/bin/proxmox-tape.rs | 62 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 62 insertions(+)
> 
> diff --git a/src/bin/proxmox-tape.rs b/src/bin/proxmox-tape.rs
> index e18f334c..3d5d0cf3 100644
> --- a/src/bin/proxmox-tape.rs
> +++ b/src/bin/proxmox-tape.rs
> @@ -868,6 +868,61 @@ async fn backup(mut param: Value) -> Result<(), Error> {
>      Ok(())
>  }
>  
> +#[api(
> +   input: {
> +        properties: {
> +            store: {
> +                schema: DATASTORE_MAP_LIST_SCHEMA,
> +            },
> +            drive: {
> +                schema: DRIVE_NAME_SCHEMA,
> +                optional: true,
> +            },
> +            "media-set": {
> +                description: "Media set UUID.",
> +                type: String,
> +            },
> +            "snapshots": {
> +                description: "Comma-separated list of snapshots.",
> +                type: Array,
> +                items: {
> +                    type: String,
> +                    description: "A single snapshot',"
> +                },
> +            },

same here, and if we really do want an extra command for restoring not all but
a list of snapshots, I'd at least make snapshots a fixed "take all" parameter, e.g.:

proxmox-tape restore-snapshots --store bar --media-set foo vm/100/... ct/101/...

Or allow passing it multiple times for accumulation, as those can be
tab-completed, which would be really helpful compared to typing some date in
(those pasting it in would not lose any benefit).

But as said, the above only applies if we really want a separate command;
IMO this would fit fine into the single restore command...

> +            "notify-user": {
> +                type: Userid,
> +                optional: true,
> +            },
> +            owner: {
> +                type: Authid,
> +                optional: true,
> +            },
> +            "output-format": {
> +                schema: OUTPUT_FORMAT,
> +                optional: true,
> +            },
> +        },
> +    },
> +)]
> +/// Restore data from media-set
> +async fn restore_single(mut param: Value) -> Result<(), Error> {
> +
> +    let output_format = get_output_format(&param);
> +
> +    let (config, _digest) = config::drive::config()?;
> +
> +    param["drive"] = extract_drive_name(&mut param, &config)?.into();
> +
> +    let mut client = connect_to_localhost()?;
> +
> +    let result = client.post("api2/json/tape/restore-single", Some(param)).await?;
> +
> +    view_task_result(&mut client, result, &output_format).await?;
> +
> +    Ok(())
> +}
> +
>  #[api(
>     input: {
>          properties: {
> @@ -981,6 +1036,13 @@ fn main() {
>                  .completion_cb("store", complete_datastore_name)
>                  .completion_cb("media-set", complete_media_set_uuid)
>          )
> +        .insert(
> +            "restore-single",
> +            CliCommand::new(&API_METHOD_RESTORE_SINGLE)
> +                .arg_param(&["media-set", "store", "snapshots"])
> +                .completion_cb("store", complete_datastore_name)
> +                .completion_cb("media-set", complete_media_set_uuid)
> +        )
>          .insert(
>              "barcode-label",
>              CliCommand::new(&API_METHOD_BARCODE_LABEL_MEDIA)
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread
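The "take all" CLI style suggested above means every bare argument after the recognised flags accumulates into the snapshot list. A sketch with plain slice handling, instead of the real proxmox `CliCommand` framework, could look like:

```rust
// Sketch of the "fixed take-all parameter" CLI style suggested above:
// bare arguments after the recognised --flag pairs accumulate into the
// snapshot list. This uses plain slice handling instead of the real proxmox
// CliCommand framework, so it is only an illustration.
fn split_args(args: &[&str]) -> (Vec<(String, String)>, Vec<String>) {
    let mut flags = Vec::new();
    let mut positionals = Vec::new();
    let mut iter = args.iter();
    while let Some(arg) = iter.next() {
        if let Some(name) = arg.strip_prefix("--") {
            // a flag consumes the following argument as its value
            if let Some(value) = iter.next() {
                flags.push((name.to_string(), value.to_string()));
            }
        } else {
            // bare arguments accumulate, e.g. vm/100/... ct/101/...
            positionals.push(arg.to_string());
        }
    }
    (flags, positionals)
}

fn main() {
    let args = ["--store", "bar", "--media-set", "foo", "vm/100/x", "ct/101/y"];
    let (flags, snapshots) = split_args(&args);
    assert_eq!(flags[0], ("store".to_string(), "bar".to_string()));
    assert_eq!(snapshots, vec!["vm/100/x", "ct/101/y"]);
    println!("ok");
}
```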

* Re: [pbs-devel] [PATCH proxmox-backup v2 6/8] api2/tape/restore: add 'restore-single' api path
  2021-05-05 10:53   ` Thomas Lamprecht
@ 2021-05-05 12:48     ` Dominik Csapak
  0 siblings, 0 replies; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 12:48 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox Backup Server development discussion

On 5/5/21 12:53, Thomas Lamprecht wrote:
> On 05.05.21 12:09, Dominik Csapak wrote:
>> @@ -182,6 +190,608 @@ fn check_datastore_privs(
>>   }
>>   
>>   pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
>> +pub const ROUTER_SINGLE: Router = Router::new().post(&API_METHOD_RESTORE_SINGLE);
>> +
>> +#[api(
>> +   input: {
>> +        properties: {
>> +            store: {
>> +                schema: DATASTORE_MAP_LIST_SCHEMA,
>> +            },
>> +            drive: {
>> +                schema: DRIVE_NAME_SCHEMA,
>> +            },
>> +            "media-set": {
>> +                description: "Media set UUID.",
>> +                type: String,
>> +            },
>> +            "snapshots": {
>> +                description: "List of snapshots.",
>> +                type: Array,
>> +                items: {
>> +                    type: String,
>> +                    description: "A single snapshot in format: 'store:type/group/id'."
>> +                },
>> +            },
> 
> restore-*single* which may restore a list of snapshots, rather weird...
> 
> Why is snapshots not just an optional parameter of the default restore path?
> 
> We have already all other parameters there, would make it less confusing from
> an outside POV and allow reusing some code if done right..
> 

thanks, yes that makes sense, i'll integrate it.
(i overestimated the non-overlapping parts of the api calls;
turns out they share a lot if i do a little refactoring...)


>> +            "notify-user": {
>> +                type: Userid,
>> +                optional: true,
>> +            },
>> +            owner: {
>> +                type: Authid,
>> +                optional: true,
>> +            },
>> +        },
>> +    },
>> +    returns: {
>> +        schema: UPID_SCHEMA,
>> +    },
>> +    access: {
>> +        // Note: parameters are no uri parameter, so we need to test inside function body
>> +        description: "The user needs Tape.Read privilege on /tape/pool/{pool} \
>> +                      and /tape/drive/{drive}, Datastore.Backup privilege on /datastore/{store}.",
>> +        permission: &Permission::Anybody,
>> +    },
>> +)]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup v2 7/8] bin/proxmox-tape: add restore-single command to proxmox-tape
  2021-05-05 11:04   ` Thomas Lamprecht
@ 2021-05-05 12:50     ` Dominik Csapak
  0 siblings, 0 replies; 17+ messages in thread
From: Dominik Csapak @ 2021-05-05 12:50 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox Backup Server development discussion

On 5/5/21 13:04, Thomas Lamprecht wrote:
> On 05.05.21 12:09, Dominik Csapak wrote:
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>>   src/bin/proxmox-tape.rs | 62 +++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 62 insertions(+)
>>
>> diff --git a/src/bin/proxmox-tape.rs b/src/bin/proxmox-tape.rs
>> index e18f334c..3d5d0cf3 100644
>> --- a/src/bin/proxmox-tape.rs
>> +++ b/src/bin/proxmox-tape.rs
>> @@ -868,6 +868,61 @@ async fn backup(mut param: Value) -> Result<(), Error> {
>>       Ok(())
>>   }
>>   
>> +#[api(
>> +   input: {
>> +        properties: {
>> +            store: {
>> +                schema: DATASTORE_MAP_LIST_SCHEMA,
>> +            },
>> +            drive: {
>> +                schema: DRIVE_NAME_SCHEMA,
>> +                optional: true,
>> +            },
>> +            "media-set": {
>> +                description: "Media set UUID.",
>> +                type: String,
>> +            },
>> +            "snapshots": {
>> +                description: "Comma-separated list of snapshots.",
>> +                type: Array,
>> +                items: {
>> +                    type: String,
>> +                    description: "A single snapshot',"
>> +                },
>> +            },
> 
> same here, and if we'd really like to add a extra command for restoring not all but
> a list of snapshots I'd at least make snapshots a fixed "take all" parameter, e.g.:
> 
> proxmox-tape restore-snapshots --store bar --media-set foo vm/100/... ct/101/...
> 
> Or allow passing it multiple times for accumulation, as those can be better
> tab-completed which would be really helpful compared to typing some date in.
> (those pasting it in would not loose any benefit).

this does that already (the description text is wrong), but yes, having it
that way and optional makes sense; i just have to write a completion helper
(since we need the source datastore per snapshot too)

> 
> But as said, above is only for the case where we really want a separate command,
> IMO this would fit fine into a single restore command...
> 
yes thanks

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pbs-devel] applied: [PATCH proxmox-backup v2 1/8] tape/drive: add 'move_to_file' to TapeDriver trait
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 1/8] tape/drive: add 'move_to_file' to TapeDriver trait Dominik Csapak
@ 2021-05-06  5:56   ` Dietmar Maurer
  0 siblings, 0 replies; 17+ messages in thread
From: Dietmar Maurer @ 2021-05-06  5:56 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

applied

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pbs-devel] applied: [PATCH proxmox-backup v2 2/8] tape/media_catalog: add helpers to look for snapshot/chunk files
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 2/8] tape/media_catalog: add helpers to look for snapshot/chunk files Dominik Csapak
@ 2021-05-06  5:59   ` Dietmar Maurer
  0 siblings, 0 replies; 17+ messages in thread
From: Dietmar Maurer @ 2021-05-06  5:59 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

applied

On 5/5/21 12:09 PM, Dominik Csapak wrote:
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>   src/tape/media_catalog.rs | 20 ++++++++++++++++++++
>   1 file changed, 20 insertions(+)
>
> diff --git a/src/tape/media_catalog.rs b/src/tape/media_catalog.rs
> index aff91c43..8be97a36 100644
> --- a/src/tape/media_catalog.rs
> +++ b/src/tape/media_catalog.rs
> @@ -924,6 +924,16 @@ impl MediaSetCatalog {
>           false
>       }
>   
> +    /// Returns the media uuid and snapshot archive file number
> +    pub fn lookup_snapshot(&self, store: &str, snapshot: &str) -> Option<(&Uuid, u64)> {
> +        for (uuid, catalog) in self.catalog_list.iter() {
> +            if let Some(nr) = catalog.lookup_snapshot(store, snapshot) {
> +                return Some((uuid, nr));
> +            }
> +        }
> +        None
> +    }
> +
>       /// Test if the catalog already contain a chunk
>       pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
>           for catalog in self.catalog_list.values() {
> @@ -933,6 +943,16 @@ impl MediaSetCatalog {
>           }
>           false
>       }
> +
> +    /// Returns the media uuid and chunk archive file number
> +    pub fn lookup_chunk(&self, store: &str, digest: &[u8;32]) -> Option<(&Uuid, u64)> {
> +        for (uuid, catalog) in self.catalog_list.iter() {
> +            if let Some(nr) = catalog.lookup_chunk(store, digest) {
> +                return Some((uuid, nr));
> +            }
> +        }
> +        None
> +    }
>   }
>   
>   // Type definitions for internal binary catalog encoding

^ permalink raw reply	[flat|nested] 17+ messages in thread
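The `MediaSetCatalog::lookup_snapshot` helper applied above iterates the member catalogs and returns the first one containing the entry, together with the tape file number. A simplified model of that first-match behaviour, keeping only the ordering logic and not the real uuid-keyed catalog list, could look like:

```rust
use std::collections::HashMap;

// Simplified model of the MediaSetCatalog::lookup_snapshot helper applied
// above: each media catalog maps a (store, snapshot) key to a tape file
// number, and the set-level lookup returns the first member catalog that
// contains it. This sketch keeps only the ordering behaviour.
struct MediaCatalog {
    media_uuid: String,
    snapshots: HashMap<String, u64>,
}

fn lookup_snapshot<'a>(
    catalogs: &'a [MediaCatalog],
    store: &str,
    snapshot: &str,
) -> Option<(&'a str, u64)> {
    let key = format!("{}:{}", store, snapshot);
    for catalog in catalogs {
        if let Some(&file_nr) = catalog.snapshots.get(&key) {
            return Some((catalog.media_uuid.as_str(), file_nr));
        }
    }
    None
}

fn main() {
    let tape_a = MediaCatalog {
        media_uuid: "uuid-a".into(),
        snapshots: HashMap::from([("tank:vm/100/x".to_string(), 3)]),
    };
    let tape_b = MediaCatalog {
        media_uuid: "uuid-b".into(),
        snapshots: HashMap::from([("tank:ct/101/y".to_string(), 7)]),
    };
    let set = [tape_a, tape_b];
    assert_eq!(lookup_snapshot(&set, "tank", "vm/100/x"), Some(("uuid-a", 3)));
    assert_eq!(lookup_snapshot(&set, "tank", "ct/101/y"), Some(("uuid-b", 7)));
    assert_eq!(lookup_snapshot(&set, "tank", "vm/999/z"), None);
    println!("ok");
}
```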

* [pbs-devel] applied: [PATCH proxmox-backup v2 3/8] api2/tape/restore: factor out check_datastore_privs
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 3/8] api2/tape/restore: factor out check_datastore_privs Dominik Csapak
@ 2021-05-06  6:01   ` Dietmar Maurer
  0 siblings, 0 replies; 17+ messages in thread
From: Dietmar Maurer @ 2021-05-06  6:01 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

applied

On 5/5/21 12:09 PM, Dominik Csapak wrote:
> so that we can reuse it
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>   src/api2/tape/restore.rs | 39 +++++++++++++++++++++++++--------------
>   1 file changed, 25 insertions(+), 14 deletions(-)
>
> diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
> index 1dd6ba11..b7bf6670 100644
> --- a/src/api2/tape/restore.rs
> +++ b/src/api2/tape/restore.rs
> @@ -157,6 +157,30 @@ impl DataStoreMap {
>       }
>   }
>   
> +fn check_datastore_privs(
> +    user_info: &CachedUserInfo,
> +    store: &str,
> +    auth_id: &Authid,
> +    owner: &Option<Authid>,
> +) -> Result<(), Error> {
> +    let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
> +    if (privs & PRIV_DATASTORE_BACKUP) == 0 {
> +        bail!("no permissions on /datastore/{}", store);
> +    }
> +
> +    if let Some(ref owner) = owner {
> +        let correct_owner = owner == auth_id
> +            || (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user());
> +
> +        // same permission as changing ownership after syncing
> +        if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
> +            bail!("no permission to restore as '{}'", owner);
> +        }
> +    }
> +
> +    Ok(())
> +}
> +
>   pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
>   
>   #[api(
> @@ -212,20 +236,7 @@ pub fn restore(
>       }
>   
>       for store in used_datastores.iter() {
> -        let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
> -        if (privs & PRIV_DATASTORE_BACKUP) == 0 {
> -            bail!("no permissions on /datastore/{}", store);
> -        }
> -
> -        if let Some(ref owner) = owner {
> -            let correct_owner = owner == &auth_id
> -                || (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user());
> -
> -            // same permission as changing ownership after syncing
> -            if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
> -                bail!("no permission to restore as '{}'", owner);
> -            }
> -        }
> +        check_datastore_privs(&user_info, &store, &auth_id, &owner)?;
>       }
>   
>       let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]);
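The factored-out helper boils down to a bitflag permission check plus an ownership comparison. A self-contained sketch of that logic follows; `Authid`, `CachedUserInfo` and the real `PRIV_*` constants from proxmox-backup are replaced here by plain strings and illustrative bit values, so this is an assumption-laden model of the check, not the actual API:

```rust
// Illustrative stand-ins for the real PRIV_* privilege bits.
const PRIV_DATASTORE_BACKUP: u64 = 1 << 0;
const PRIV_DATASTORE_MODIFY: u64 = 1 << 1;

// Simplified model of the factored-out check: `privs` are the caller's
// resolved privileges on /datastore/{store}, `owner` is the optional
// backup owner to restore as.
fn check_datastore_privs(
    privs: u64,
    store: &str,
    auth_id: &str,
    owner: Option<&str>,
) -> Result<(), String> {
    if privs & PRIV_DATASTORE_BACKUP == 0 {
        return Err(format!("no permissions on /datastore/{}", store));
    }

    if let Some(owner) = owner {
        // restoring as a different owner needs the same privilege as
        // changing ownership after a sync
        if owner != auth_id && privs & PRIV_DATASTORE_MODIFY == 0 {
            return Err(format!("no permission to restore as '{}'", owner));
        }
    }

    Ok(())
}

fn main() {
    // restoring as yourself only needs the BACKUP bit
    assert!(check_datastore_privs(PRIV_DATASTORE_BACKUP, "store1", "alice@pbs", None).is_ok());
    // no BACKUP bit at all -> rejected
    assert!(check_datastore_privs(0, "store1", "alice@pbs", None).is_err());
    // restoring as someone else without MODIFY -> rejected
    assert!(check_datastore_privs(PRIV_DATASTORE_BACKUP, "store1", "alice@pbs", Some("bob@pbs")).is_err());
    // MODIFY bit allows restoring as another owner
    assert!(check_datastore_privs(
        PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_MODIFY,
        "store1", "alice@pbs", Some("bob@pbs")
    ).is_ok());
}
```

The real helper additionally treats an API token owned by the same user as the correct owner; that detail is visible in the quoted diff above.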




^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pbs-devel] applied: [PATCH proxmox-backup v2 4/8] api2/tape/restore: remove unnecessary params from (try_)restore_snapshot_archive
  2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 4/8] api2/tape/restore: remove unnecessary params from (try_)restore_snapshot_archive Dominik Csapak
@ 2021-05-06  6:02   ` Dietmar Maurer
  0 siblings, 0 replies; 17+ messages in thread
From: Dietmar Maurer @ 2021-05-06  6:02 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

applied

On 5/5/21 12:09 PM, Dominik Csapak wrote:
> we do not need them
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>   src/api2/tape/restore.rs | 12 +++---------
>   1 file changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
> index b7bf6670..f3452364 100644
> --- a/src/api2/tape/restore.rs
> +++ b/src/api2/tape/restore.rs
> @@ -503,8 +503,6 @@ fn restore_archive<'a>(
>               let datastore_name = archive_header.store;
>               let snapshot = archive_header.snapshot;
>   
> -            let checked_chunks = checked_chunks_map.entry(datastore_name.clone()).or_insert(HashSet::new());
> -
>               task_log!(worker, "File {}: snapshot archive {}:{}", current_file_number, datastore_name, snapshot);
>   
>               let backup_dir: BackupDir = snapshot.parse()?;
> @@ -531,7 +529,7 @@ fn restore_archive<'a>(
>                       if is_new {
>                           task_log!(worker, "restore snapshot {}", backup_dir);
>   
> -                        match restore_snapshot_archive(worker.clone(), reader, &path, &datastore, checked_chunks) {
> +                        match restore_snapshot_archive(worker.clone(), reader, &path) {
>                               Err(err) => {
>                                   std::fs::remove_dir_all(&path)?;
>                                   bail!("restore snapshot {} failed - {}", backup_dir, err);
> @@ -774,13 +772,11 @@ fn restore_snapshot_archive<'a>(
>       worker: Arc<WorkerTask>,
>       reader: Box<dyn 'a + TapeRead>,
>       snapshot_path: &Path,
> -    datastore: &DataStore,
> -    checked_chunks: &mut HashSet<[u8;32]>,
>   ) -> Result<bool, Error> {
>   
>       let mut decoder = pxar::decoder::sync::Decoder::from_std(reader)?;
> -    match try_restore_snapshot_archive(worker, &mut decoder, snapshot_path, datastore, checked_chunks) {
> -        Ok(()) => Ok(true),
> +    match try_restore_snapshot_archive(worker, &mut decoder, snapshot_path) {
> +        Ok(_) => Ok(true),
>           Err(err) => {
>               let reader = decoder.input();
>   
> @@ -804,8 +800,6 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
>       worker: Arc<WorkerTask>,
>       decoder: &mut pxar::decoder::sync::Decoder<R>,
>       snapshot_path: &Path,
> -    _datastore: &DataStore,
> -    _checked_chunks: &mut HashSet<[u8;32]>,
>   ) -> Result<(), Error> {
>   
>       let _root = match decoder.next() {
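Note the `Ok(()) => Ok(true)` arm becoming `Ok(_) => Ok(true)`: by discarding the success value instead of pattern-matching on unit, the wrapper stays untouched when patch 5/8 later changes the inner function to return the backup manifest. A minimal sketch of that pattern, with stand-in types rather than the real proxmox-backup ones:

```rust
// Stand-in for the real BackupManifest; the field is illustrative only.
#[derive(Debug)]
struct BackupManifest {
    snapshot: String,
}

// Inner function: may later return richer data (here, a manifest).
fn try_restore_snapshot_archive(snapshot: &str) -> Result<BackupManifest, String> {
    if snapshot.is_empty() {
        return Err("unexpected end of archive".to_string());
    }
    Ok(BackupManifest { snapshot: snapshot.to_string() })
}

// Wrapper: only cares whether the restore succeeded, so it matches
// `Ok(_)` and keeps compiling if the inner return type changes.
fn restore_snapshot_archive(snapshot: &str) -> Result<bool, String> {
    match try_restore_snapshot_archive(snapshot) {
        Ok(_) => Ok(true),
        Err(err) => Err(format!("restore snapshot {} failed - {}", snapshot, err)),
    }
}

fn main() {
    assert_eq!(restore_snapshot_archive("vm/100/2021-05-05T10:09:00Z"), Ok(true));
    assert!(restore_snapshot_archive("").is_err());
}
```

The real wrapper also rewinds the reader and skips remaining data on error, as the surrounding code in `src/api2/tape/restore.rs` shows.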




^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2021-05-06  6:02 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-05 10:09 [pbs-devel] [PATCH proxmox-backup v2 0/8] tape: single snapshot restore Dominik Csapak
2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 1/8] tape/drive: add 'move_to_file' to TapeDriver trait Dominik Csapak
2021-05-06  5:56   ` [pbs-devel] applied: " Dietmar Maurer
2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 2/8] tape/media_catalog: add helpers to look for snapshot/chunk files Dominik Csapak
2021-05-06  5:59   ` [pbs-devel] applied: " Dietmar Maurer
2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 3/8] api2/tape/restore: factor out check_datastore_privs Dominik Csapak
2021-05-06  6:01   ` [pbs-devel] applied: " Dietmar Maurer
2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 4/8] api2/tape/restore: remove unnecessary params from (try_)restore_snapshot_archive Dominik Csapak
2021-05-06  6:02   ` [pbs-devel] applied: " Dietmar Maurer
2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 5/8] api2/tape/restore: return backup manifest in try_restore_snapshot_archive Dominik Csapak
2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 6/8] api2/tape/restore: add 'restore-single' api path Dominik Csapak
2021-05-05 10:53   ` Thomas Lamprecht
2021-05-05 12:48     ` Dominik Csapak
2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 7/8] bin/proxmox-tape: add restore-single command to proxmox-tape Dominik Csapak
2021-05-05 11:04   ` Thomas Lamprecht
2021-05-05 12:50     ` Dominik Csapak
2021-05-05 10:09 ` [pbs-devel] [PATCH proxmox-backup v2 8/8] ui: tape: add single snapshot restore Dominik Csapak
