* [pbs-devel] [RFC PATCH proxmox-backup] tape: single snapshot restore
From: Dominik Csapak @ 2021-05-03 8:37 UTC
To: pbs-devel
this is a first version of a possible single-snapshot restore from tape
some small parts are still in progress/unfinished:
* the API path is (imho) not optimal, but I did not find anything better
(integrating it into the existing restore call gets ugly fast...)
* the schema for the snapshot list is not done yet (but that is not hard)
* no parallelization (that is also not done yet for the normal restore)
* some things could probably be refactored better
* the GUI for multi-selection is not done yet (I have to think about how
to do that with a good UX)
question to answer:
do we really want the ability to restore multiple 'single' snapshots
in one go? if not, it would drastically simplify the code
Dominik Csapak (7):
tape/drive: add 'move_to_file' to TapeDriver trait
tape/media_catalog: add helpers to look for snapshot/chunk files
api2/tape/restore: factor out check_datastore_privs
api2/tape/restore: make datastore in restore_snapshot_archive optional
api2/tape/restore: add 'restore-single' api path
bin/proxmox-tape: add restore-single command to proxmox-tape
ui: tape: add single snapshot restore
src/api2/tape/mod.rs | 1 +
src/api2/tape/restore.rs | 617 ++++++++++++++++++++++++++++++++-
src/bin/proxmox-tape.rs | 62 ++++
src/tape/drive/lto/mod.rs | 4 +
src/tape/drive/mod.rs | 3 +
src/tape/drive/virtual_tape.rs | 22 ++
src/tape/media_catalog.rs | 20 ++
www/tape/BackupOverview.js | 41 +++
www/tape/window/TapeRestore.js | 26 ++
9 files changed, 777 insertions(+), 19 deletions(-)
--
2.20.1
* [pbs-devel] [RFC PATCH proxmox-backup 1/7] tape/drive: add 'move_to_file' to TapeDriver trait
From: Dominik Csapak @ 2021-05-03 8:38 UTC
To: pbs-devel
so that we can directly move to a specified file on the tape
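a minimal usage sketch (the file number is just an example):

    // seek directly to file 3 on the tape, then read that file
    drive.move_to_file(3)?;
    let reader = drive.read_next_file()?;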
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/tape/drive/lto/mod.rs | 4 ++++
src/tape/drive/mod.rs | 3 +++
src/tape/drive/virtual_tape.rs | 22 ++++++++++++++++++++++
3 files changed, 29 insertions(+)
diff --git a/src/tape/drive/lto/mod.rs b/src/tape/drive/lto/mod.rs
index 642c3cc7..3894b034 100644
--- a/src/tape/drive/lto/mod.rs
+++ b/src/tape/drive/lto/mod.rs
@@ -309,6 +309,10 @@ impl TapeDriver for LtoTapeHandle {
Ok(())
}
+ fn move_to_file(&mut self, file: u64) -> Result<(), Error> {
+ self.locate_file(file)
+ }
+
fn rewind(&mut self) -> Result<(), Error> {
self.sg_tape.rewind()
}
diff --git a/src/tape/drive/mod.rs b/src/tape/drive/mod.rs
index fd8f503d..f72e0b51 100644
--- a/src/tape/drive/mod.rs
+++ b/src/tape/drive/mod.rs
@@ -80,6 +80,9 @@ pub trait TapeDriver {
/// Move to last file
fn move_to_last_file(&mut self) -> Result<(), Error>;
+ /// Move to given file nr
+ fn move_to_file(&mut self, file: u64) -> Result<(), Error>;
+
/// Current file number
fn current_file_number(&mut self) -> Result<u64, Error>;
diff --git a/src/tape/drive/virtual_tape.rs b/src/tape/drive/virtual_tape.rs
index f7ebc0bb..3c5f9502 100644
--- a/src/tape/drive/virtual_tape.rs
+++ b/src/tape/drive/virtual_tape.rs
@@ -261,6 +261,28 @@ impl TapeDriver for VirtualTapeHandle {
Ok(())
}
+ fn move_to_file(&mut self, file: u64) -> Result<(), Error> {
+ let mut status = self.load_status()?;
+ match status.current_tape {
+ Some(VirtualTapeStatus { ref name, ref mut pos }) => {
+
+ let index = self.load_tape_index(name)
+ .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
+
+ if file as usize > index.files {
+ bail!("invalid file nr");
+ }
+
+ *pos = file as usize;
+
+ self.store_status(&status)
+ .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;
+
+ Ok(())
+ }
+ None => bail!("drive is empty (no tape loaded)."),
+ }
+ }
fn read_next_file(&mut self) -> Result<Box<dyn TapeRead>, BlockReadError> {
let mut status = self.load_status()
--
2.20.1
* [pbs-devel] [RFC PATCH proxmox-backup 2/7] tape/media_catalog: add helpers to look for snapshot/chunk files
From: Dominik Csapak @ 2021-05-03 8:38 UTC
To: pbs-devel
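the helpers return on which media (uuid) and at which file number on
tape a given snapshot or chunk archive is stored. a minimal usage
sketch (the catalog setup and the looked-up values are assumptions):

    // `catalog` is a MediaSetCatalog assembled via append_catalog()
    if let Some((media_uuid, file_nr)) =
        catalog.lookup_snapshot("mystore", "ct/100/2021-01-01T00:00:00Z")
    {
        // load the tape identified by `media_uuid` and seek to `file_nr`
    }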
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/tape/media_catalog.rs | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/src/tape/media_catalog.rs b/src/tape/media_catalog.rs
index aff91c43..8be97a36 100644
--- a/src/tape/media_catalog.rs
+++ b/src/tape/media_catalog.rs
@@ -924,6 +924,16 @@ impl MediaSetCatalog {
false
}
+ /// Returns the media uuid and snapshot archive file number
+ pub fn lookup_snapshot(&self, store: &str, snapshot: &str) -> Option<(&Uuid, u64)> {
+ for (uuid, catalog) in self.catalog_list.iter() {
+ if let Some(nr) = catalog.lookup_snapshot(store, snapshot) {
+ return Some((uuid, nr));
+ }
+ }
+ None
+ }
+
/// Test if the catalog already contain a chunk
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
for catalog in self.catalog_list.values() {
@@ -933,6 +943,16 @@ impl MediaSetCatalog {
}
false
}
+
+ /// Returns the media uuid and chunk archive file number
+ pub fn lookup_chunk(&self, store: &str, digest: &[u8;32]) -> Option<(&Uuid, u64)> {
+ for (uuid, catalog) in self.catalog_list.iter() {
+ if let Some(nr) = catalog.lookup_chunk(store, digest) {
+ return Some((uuid, nr));
+ }
+ }
+ None
+ }
}
// Type definitions for internal binary catalog encoding
--
2.20.1
* [pbs-devel] [RFC PATCH proxmox-backup 3/7] api2/tape/restore: factor out check_datastore_privs
From: Dominik Csapak @ 2021-05-03 8:38 UTC
To: pbs-devel
so that we can reuse it
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/tape/restore.rs | 39 +++++++++++++++++++++++++--------------
1 file changed, 25 insertions(+), 14 deletions(-)
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index b61e99a4..1cde7c63 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -162,6 +162,30 @@ impl DataStoreMap {
}
}
+fn check_datastore_privs(
+ user_info: &CachedUserInfo,
+ store: &str,
+ auth_id: &Authid,
+ owner: &Option<Authid>,
+) -> Result<(), Error> {
+ let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
+ if (privs & PRIV_DATASTORE_BACKUP) == 0 {
+ bail!("no permissions on /datastore/{}", store);
+ }
+
+ if let Some(ref owner) = owner {
+ let correct_owner = owner == auth_id
+ || (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user());
+
+ // same permission as changing ownership after syncing
+ if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
+ bail!("no permission to restore as '{}'", owner);
+ }
+ }
+
+ Ok(())
+}
+
pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
#[api(
@@ -217,20 +241,7 @@ pub fn restore(
}
for store in used_datastores.iter() {
- let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
- if (privs & PRIV_DATASTORE_BACKUP) == 0 {
- bail!("no permissions on /datastore/{}", store);
- }
-
- if let Some(ref owner) = owner {
- let correct_owner = owner == &auth_id
- || (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user());
-
- // same permission as changing ownership after syncing
- if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
- bail!("no permission to restore as '{}'", owner);
- }
- }
+ check_datastore_privs(&user_info, &store, &auth_id, &owner)?;
}
let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]);
--
2.20.1
* [pbs-devel] [RFC PATCH proxmox-backup 4/7] api2/tape/restore: make datastore in restore_snapshot_archive optional
From: Dominik Csapak @ 2021-05-03 8:38 UTC
To: pbs-devel
and add the chunks to the hashset directly, without going through the
datastore. we'll want this to get the list of chunks from the indexes
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/tape/restore.rs | 30 +++++++++++++++++++++++-------
1 file changed, 23 insertions(+), 7 deletions(-)
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 1cde7c63..aa51ce4b 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -536,7 +536,7 @@ fn restore_archive<'a>(
if is_new {
task_log!(worker, "restore snapshot {}", backup_dir);
- match restore_snapshot_archive(worker, reader, &path, &datastore, checked_chunks) {
+ match restore_snapshot_archive(worker, reader, &path, Some(&datastore), checked_chunks) {
Err(err) => {
std::fs::remove_dir_all(&path)?;
bail!("restore snapshot {} failed - {}", backup_dir, err);
@@ -693,8 +693,8 @@ fn restore_snapshot_archive<'a>(
worker: &WorkerTask,
reader: Box<dyn 'a + TapeRead>,
snapshot_path: &Path,
- datastore: &DataStore,
- checked_chunks: &mut HashSet<[u8;32]>,
+ datastore: Option<&DataStore>,
+ checked_chunks: &mut HashSet<[u8; 32]>,
) -> Result<bool, Error> {
let mut decoder = pxar::decoder::sync::Decoder::from_std(reader)?;
@@ -723,8 +723,8 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
worker: &WorkerTask,
decoder: &mut pxar::decoder::sync::Decoder<R>,
snapshot_path: &Path,
- datastore: &DataStore,
- checked_chunks: &mut HashSet<[u8;32]>,
+ datastore: Option<&DataStore>,
+ checked_chunks: &mut HashSet<[u8; 32]>,
) -> Result<(), Error> {
let _root = match decoder.next() {
@@ -815,13 +815,29 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
let index = DynamicIndexReader::open(&archive_path)?;
let (csum, size) = index.compute_csum();
manifest.verify_file(&item.filename, &csum, size)?;
- datastore.fast_index_verification(&index, checked_chunks)?;
+ if let Some(ref datastore) = datastore {
+ datastore.fast_index_verification(&index, checked_chunks)?;
+ } else {
+ for i in 0..index.index_count() {
+ if let Some(digest) = index.index_digest(i) {
+ checked_chunks.insert(*digest);
+ }
+ }
+ }
}
ArchiveType::FixedIndex => {
let index = FixedIndexReader::open(&archive_path)?;
let (csum, size) = index.compute_csum();
manifest.verify_file(&item.filename, &csum, size)?;
- datastore.fast_index_verification(&index, checked_chunks)?;
+ if let Some(ref datastore) = datastore {
+ datastore.fast_index_verification(&index, checked_chunks)?;
+ } else {
+ for i in 0..index.index_count() {
+ if let Some(digest) = index.index_digest(i) {
+ checked_chunks.insert(*digest);
+ }
+ }
+ }
}
ArchiveType::Blob => {
let mut tmpfile = std::fs::File::open(&archive_path)?;
--
2.20.1
* [pbs-devel] [RFC PATCH proxmox-backup 5/7] api2/tape/restore: add 'restore-single' api path
From: Dominik Csapak @ 2021-05-03 8:38 UTC
To: pbs-devel
this makes it possible to restore only some snapshots from a tape
media-set instead of the whole set. if the user selects only a small
part, this will probably be faster (and it definitely uses less space
on the target datastores).
the user has to provide a datastore map (as for a normal restore) and
a list of snapshots to restore, each in the form 'store:type/group/id',
e.g. 'mystore:ct/100/2021-01-01T00:00:00Z'
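for illustration, a request to the new endpoint could look like this
(the media-set UUID is made up, and the 'source=target' store-map
syntax is assumed to be the same as for the existing restore call):

    POST /api2/json/tape/restore-single
    {
        "store": "mystore=mystore",
        "drive": "drive0",
        "media-set": "02b8e48c-e261-4637-a177-9759bbd0b5d5",
        "snapshots": ["mystore:ct/100/2021-01-01T00:00:00Z"]
    }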
we achieve this by first restoring the indexes to a temp dir and
retrieving the list of referenced chunks; using the catalog, we then
generate the list of media/files that we need to (partially) restore.
finally, we copy the snapshots to the correct directory in the target
datastore and clean up the temp dir
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/tape/mod.rs | 1 +
src/api2/tape/restore.rs | 554 ++++++++++++++++++++++++++++++++++++++-
2 files changed, 554 insertions(+), 1 deletion(-)
diff --git a/src/api2/tape/mod.rs b/src/api2/tape/mod.rs
index 219a721b..4bf72d18 100644
--- a/src/api2/tape/mod.rs
+++ b/src/api2/tape/mod.rs
@@ -72,6 +72,7 @@ const SUBDIRS: SubdirMap = &[
("drive", &drive::ROUTER),
("media", &media::ROUTER),
("restore", &restore::ROUTER),
+ ("restore-single", &restore::ROUTER_SINGLE),
(
"scan-changers",
&Router::new()
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index aa51ce4b..2e8e1453 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -1,4 +1,4 @@
-use std::path::Path;
+use std::path::{Path, PathBuf};
use std::ffi::OsStr;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
@@ -74,6 +74,7 @@ use crate::{
MediaId,
MediaSet,
MediaCatalog,
+ MediaSetCatalog,
Inventory,
lock_media_set,
file_formats::{
@@ -100,6 +101,8 @@ use crate::{
},
};
+const RESTORE_TMP_DIR: &str = "/var/tmp/proxmox-backup";
+
pub struct DataStoreMap {
map: HashMap<String, Arc<DataStore>>,
default: Option<Arc<DataStore>>,
@@ -187,6 +190,555 @@ fn check_datastore_privs(
}
pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
+pub const ROUTER_SINGLE: Router = Router::new().post(&API_METHOD_RESTORE_SINGLE);
+
+#[api(
+ input: {
+ properties: {
+ store: {
+ schema: DATASTORE_MAP_LIST_SCHEMA,
+ },
+ drive: {
+ schema: DRIVE_NAME_SCHEMA,
+ },
+ "media-set": {
+ description: "Media set UUID.",
+ type: String,
+ },
+ "snapshots": {
+ description: "List of snapshots.",
+ type: Array,
+ items: {
+ type: String,
+ description: "A single snapshot in format: 'store:type/group/id'."
+ },
+ },
+ "notify-user": {
+ type: Userid,
+ optional: true,
+ },
+ owner: {
+ type: Authid,
+ optional: true,
+ },
+ },
+ },
+ returns: {
+ schema: UPID_SCHEMA,
+ },
+ access: {
+ // Note: parameters are not URI parameters, so we need to check them inside the function body
+ description: "The user needs Tape.Read privilege on /tape/pool/{pool} \
+ and /tape/drive/{drive}, Datastore.Backup privilege on /datastore/{store}.",
+ permission: &Permission::Anybody,
+ },
+)]
+/// Restore single snapshots from a media-set
+pub fn restore_single(
+ store: String,
+ drive: String,
+ media_set: String,
+ snapshots: Vec<String>,
+ notify_user: Option<Userid>,
+ owner: Option<Authid>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Value, Error> {
+ let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let store_map = DataStoreMap::try_from(store)
+ .map_err(|err| format_err!("cannot parse store mapping: {}", err))?;
+ let used_datastores = store_map.used_datastores();
+ if used_datastores.len() == 0 {
+ bail!("no datastores given");
+ }
+
+ for store in used_datastores.iter() {
+ check_datastore_privs(&user_info, &store, &auth_id, &owner)?;
+ }
+
+ let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]);
+ if (privs & PRIV_TAPE_READ) == 0 {
+ bail!("no permissions on /tape/drive/{}", drive);
+ }
+
+ let media_set_uuid = media_set.parse()?;
+
+ let status_path = Path::new(TAPE_STATUS_DIR);
+
+ // test what catalog files we have
+ let _catalogs = MediaCatalog::media_with_catalogs(status_path)?;
+
+ let _lock = lock_media_set(status_path, &media_set_uuid, None)?;
+
+ let inventory = Inventory::load(status_path)?;
+
+ let pool = inventory.lookup_media_set_pool(&media_set_uuid)?;
+
+ let privs = user_info.lookup_privs(&auth_id, &["tape", "pool", &pool]);
+ if (privs & PRIV_TAPE_READ) == 0 {
+ bail!("no permissions on /tape/pool/{}", pool);
+ }
+
+ let (drive_config, _digest) = config::drive::config()?;
+
+ // early check/lock before starting worker
+ let drive_lock = lock_tape_device(&drive_config, &drive)?;
+
+ let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
+
+ let upid_str = WorkerTask::new_thread(
+ "tape-restore-single",
+ None,
+ auth_id.clone(),
+ to_stdout,
+ move |worker| {
+ let _drive_lock = drive_lock; // keep lock guard
+
+ set_tape_device_state(&drive, &worker.upid().to_string())?;
+
+
+ let restore_owner = owner.as_ref().unwrap_or(&auth_id);
+
+ let email = notify_user
+ .as_ref()
+ .and_then(|userid| lookup_user_email(userid))
+ .or_else(|| lookup_user_email(&auth_id.clone().into()));
+
+ let res = restore_single_worker(
+ &worker,
+ snapshots,
+ inventory,
+ media_set_uuid,
+ drive_config,
+ &drive,
+ store_map,
+ restore_owner,
+ email,
+ );
+
+ if let Err(err) = set_tape_device_state(&drive, "") {
+ task_log!(worker, "could not unset drive state for {}: {}", drive, err);
+ }
+
+ res
+ },
+ )?;
+
+ Ok(upid_str.into())
+}
+
+fn restore_single_worker(
+ worker: &WorkerTask,
+ snapshots: Vec<String>,
+ inventory: Inventory,
+ media_set_uuid: Uuid,
+ drive_config: SectionConfigData,
+ drive_name: &str,
+ store_map: DataStoreMap,
+ restore_owner: &Authid,
+ email: Option<String>,
+) -> Result<(), Error> {
+ let base_path: PathBuf = format!("{}/{}", RESTORE_TMP_DIR, media_set_uuid).into();
+ std::fs::create_dir_all(&base_path)?;
+
+ let catalog = get_media_set_catalog(&inventory, &media_set_uuid)?;
+
+ let mut datastore_locks = Vec::new();
+ let mut snapshot_file_list: HashMap<Uuid, Vec<u64>> = HashMap::new();
+ let mut snapshot_locks = HashMap::new();
+
+ let res = proxmox::try_block!({
+ // assemble snapshot files/locks
+ for i in 0..snapshots.len() {
+ let store_snapshot = &snapshots[i];
+ let mut split = snapshots[i].splitn(2, ':');
+ let source_datastore = split
+ .next()
+ .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+ let snapshot = split
+ .next()
+ .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+ let backup_dir: BackupDir = snapshot.parse()?;
+
+ let datastore = store_map.get_datastore(source_datastore).ok_or_else(|| {
+ format_err!(
+ "could not find mapping for source datastore: {}",
+ source_datastore
+ )
+ })?;
+
+ let (owner, _group_lock) =
+ datastore.create_locked_backup_group(backup_dir.group(), &restore_owner)?;
+ if restore_owner != &owner {
+ // only the owner is allowed to create additional snapshots
+ bail!(
+ "restore '{}' failed - owner check failed ({} != {})",
+ snapshot,
+ restore_owner,
+ owner
+ );
+ }
+
+ let (media_id, file_num) = if let Some((media_uuid, nr)) =
+ catalog.lookup_snapshot(&source_datastore, &snapshot)
+ {
+ let media_id = inventory.lookup_media(media_uuid).unwrap();
+ (media_id, nr)
+ } else {
+ task_warn!(
+ worker,
+ "did not find snapshot '{}' in media set {}",
+ snapshot,
+ media_set_uuid
+ );
+ continue;
+ };
+
+ let (_rel_path, is_new, snap_lock) = datastore.create_locked_backup_dir(&backup_dir)?;
+
+ if !is_new {
+ task_log!(
+ worker,
+ "found snapshot {} on target datastore, skipping...",
+ snapshot
+ );
+ continue;
+ }
+
+ snapshot_locks.insert(store_snapshot.to_string(), snap_lock);
+
+ let shared_store_lock = datastore.try_shared_chunk_store_lock()?;
+ datastore_locks.push(shared_store_lock);
+
+ let file_list = snapshot_file_list
+ .entry(media_id.label.uuid.clone())
+ .or_insert_with(Vec::new);
+ file_list.push(file_num);
+
+ task_log!(
+ worker,
+ "found snapshot {} on {}: file {}",
+ snapshot,
+ media_id.label.label_text,
+ file_num
+ );
+ }
+
+ if snapshot_file_list.is_empty() {
+ task_log!(worker, "nothing to restore, skipping remaining phases...");
+ return Ok(());
+ }
+
+ task_log!(worker, "Phase 1: temporarily restore snapshots to temp dir");
+ let mut chunks_list: HashMap<String, HashSet<[u8; 32]>> = HashMap::new();
+ for (media_uuid, file_list) in snapshot_file_list.iter_mut() {
+ let media_id = inventory.lookup_media(media_uuid).unwrap();
+ let (drive, info) = request_and_load_media(
+ worker,
+ &drive_config,
+ &drive_name,
+ &media_id.label,
+ &email,
+ )?;
+ file_list.sort_unstable();
+ restore_snapshots_to_tmpdir(
+ &worker,
+ &base_path,
+ file_list,
+ drive,
+ &info,
+ &media_set_uuid,
+ &mut chunks_list,
+ )?;
+ }
+
+ let mut media_list: HashMap<Uuid, HashMap<u64, HashSet<[u8; 32]>>> = HashMap::new();
+
+ for (source_datastore, chunks) in chunks_list.into_iter() {
+ let datastore = store_map.get_datastore(&source_datastore).ok_or_else(|| {
+ format_err!(
+ "could not find mapping for source datastore: {}",
+ source_datastore
+ )
+ })?;
+ for digest in chunks.into_iter() {
+ // we only want to restore chunks that we do not have yet
+ if !datastore.cond_touch_chunk(&digest, false)? {
+ if let Some((uuid, nr)) = catalog.lookup_chunk(&source_datastore, &digest) {
+ let file = media_list.entry(uuid.clone()).or_insert_with(HashMap::new);
+ let chunks = file.entry(nr).or_insert_with(HashSet::new);
+ chunks.insert(digest);
+ }
+ }
+ }
+ }
+
+ // we do not need it anymore, saves memory
+ drop(catalog);
+
+ if !media_list.is_empty() {
+ task_log!(worker, "Phase 2: restore chunks to datastores");
+ } else {
+ task_log!(worker, "all chunks exist already, skipping phase 2...");
+ }
+
+ for (media_uuid, file_list) in media_list.iter_mut() {
+ let media_id = inventory.lookup_media(media_uuid).unwrap();
+ let (mut drive, _info) = request_and_load_media(
+ worker,
+ &drive_config,
+ &drive_name,
+ &media_id.label,
+ &email,
+ )?;
+ let mut files: Vec<u64> = file_list.keys().map(|v| *v).collect();
+ files.sort();
+ restore_chunk_file_list(worker, &mut drive, &files[..], &store_map, file_list)?;
+ }
+
+ task_log!(
+ worker,
+ "Phase 3: copy snapshots from temp dir to datastores"
+ );
+ for (store_snapshot, _lock) in snapshot_locks.into_iter() {
+ let mut split = store_snapshot.splitn(2, ':');
+ let source_datastore = split
+ .next()
+ .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+ let snapshot = split
+ .next()
+ .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+ let backup_dir: BackupDir = snapshot.parse()?;
+
+ let datastore = store_map
+ .get_datastore(&source_datastore)
+ .ok_or_else(|| format_err!("unexpected source datastore: {}", source_datastore))?;
+
+ let mut tmp_path = base_path.clone();
+ tmp_path.push(&source_datastore);
+ tmp_path.push(snapshot);
+
+ let path = datastore.snapshot_path(&backup_dir);
+
+ for entry in std::fs::read_dir(tmp_path)? {
+ let entry = entry?;
+ let mut new_path = path.clone();
+ new_path.push(entry.file_name());
+ std::fs::copy(entry.path(), new_path)?;
+ }
+ task_log!(worker, "Restore snapshot '{}' done", snapshot);
+ }
+ Ok(())
+ });
+
+ match std::fs::remove_dir_all(&base_path) {
+ Ok(()) => {}
+ Err(err) => task_warn!(worker, "error cleaning up: {}", err),
+ }
+
+ res
+}
+
+fn get_media_set_catalog(
+ inventory: &Inventory,
+ media_set_uuid: &Uuid,
+) -> Result<MediaSetCatalog, Error> {
+ let status_path = Path::new(TAPE_STATUS_DIR);
+
+ let members = inventory.compute_media_set_members(media_set_uuid)?;
+ let media_list = members.media_list();
+ let mut catalog = MediaSetCatalog::new();
+
+ for (seq_nr, media_uuid) in media_list.iter().enumerate() {
+ match media_uuid {
+ None => {
+ bail!(
+ "media set {} is incomplete (missing member {}).",
+ media_set_uuid,
+ seq_nr
+ );
+ }
+ Some(media_uuid) => {
+ let media_id = inventory.lookup_media(media_uuid).unwrap();
+ let media_catalog = MediaCatalog::open(status_path, &media_id, false, false)?;
+ catalog.append_catalog(media_catalog)?;
+ }
+ }
+ }
+
+ Ok(catalog)
+}
+
+fn restore_snapshots_to_tmpdir(
+ worker: &WorkerTask,
+ path: &PathBuf,
+ file_list: &[u64],
+ mut drive: Box<dyn TapeDriver>,
+ media_id: &MediaId,
+ media_set_uuid: &Uuid,
+ chunks_list: &mut HashMap<String, HashSet<[u8; 32]>>,
+) -> Result<(), Error> {
+ match media_id.media_set_label {
+ None => {
+ bail!(
+ "missing media set label on media {} ({})",
+ media_id.label.label_text,
+ media_id.label.uuid
+ );
+ }
+ Some(ref set) => {
+ if set.uuid != *media_set_uuid {
+ bail!(
+ "wrong media set label on media {} ({} != {})",
+ media_id.label.label_text,
+ media_id.label.uuid,
+ media_set_uuid
+ );
+ }
+ let encrypt_fingerprint = set.encryption_key_fingerprint.clone().map(|fp| {
+ task_log!(worker, "Encryption key fingerprint: {}", fp);
+ (fp, set.uuid.clone())
+ });
+
+ drive.set_encryption(encrypt_fingerprint)?;
+ }
+ }
+
+ for file_num in file_list {
+ drive.move_to_file(*file_num)?;
+ let mut reader = drive.read_next_file()?;
+
+ let header: MediaContentHeader = unsafe { reader.read_le_value()? };
+ if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
+ bail!("missing MediaContentHeader");
+ }
+
+ match header.content_magic {
+ PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1 => {
+ let header_data = reader.read_exact_allocated(header.size as usize)?;
+
+ let archive_header: SnapshotArchiveHeader = serde_json::from_slice(&header_data)
+ .map_err(|err| {
+ format_err!("unable to parse snapshot archive header - {}", err)
+ })?;
+
+ let source_datastore = archive_header.store;
+ let snapshot = archive_header.snapshot;
+
+ task_log!(
+ worker,
+ "File {}: snapshot archive {}:{}",
+ file_num,
+ source_datastore,
+ snapshot
+ );
+
+ let mut decoder = pxar::decoder::sync::Decoder::from_std(reader)?;
+
+ let mut tmp_path = path.clone();
+ tmp_path.push(&source_datastore);
+ tmp_path.push(snapshot);
+ std::fs::create_dir_all(&tmp_path)?;
+
+ let chunks = chunks_list
+ .entry(source_datastore)
+ .or_insert_with(HashSet::new);
+ try_restore_snapshot_archive(worker, &mut decoder, &tmp_path, None, chunks)?;
+ }
+ _ => bail!("unexpected file type"),
+ }
+ }
+
+ Ok(())
+}
+
+fn restore_chunk_file_list(
+ worker: &WorkerTask,
+ drive: &mut Box<dyn TapeDriver>,
+ file_list: &[u64],
+ store_map: &DataStoreMap,
+ chunk_list: &mut HashMap<u64, HashSet<[u8; 32]>>,
+) -> Result<(), Error> {
+ for nr in file_list {
+ let current_file_number = drive.current_file_number()?;
+ if current_file_number != *nr {
+ drive.move_to_file(*nr)?;
+ }
+ let mut reader = drive.read_next_file()?;
+ let header: MediaContentHeader = unsafe { reader.read_le_value()? };
+ if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
+ bail!("missing MediaContentHeader");
+ }
+
+ match header.content_magic {
+ PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1 => {
+ let header_data = reader.read_exact_allocated(header.size as usize)?;
+
+ let archive_header: ChunkArchiveHeader = serde_json::from_slice(&header_data)
+ .map_err(|err| format_err!("unable to parse chunk archive header - {}", err))?;
+
+ let source_datastore = archive_header.store;
+
+ task_log!(
+ worker,
+ "File {}: chunk archive for datastore '{}'",
+ nr,
+ source_datastore
+ );
+
+ let datastore = store_map.get_datastore(&source_datastore).ok_or_else(|| {
+ format_err!("unexpected chunk archive for store: {}", source_datastore)
+ })?;
+
+ let chunks = chunk_list
+ .get_mut(nr)
+ .ok_or_else(|| format_err!("undexpected file nr: {}", nr))?;
+
+ let count = restore_partial_chunk_archive(worker, reader, datastore, chunks)?;
+ task_log!(worker, "register {} chunks", count);
+ }
+ _ => bail!("unexpected content magic {:?}", header.content_magic),
+ }
+ }
+
+ Ok(())
+}
+
+fn restore_partial_chunk_archive<'a>(
+ worker: &WorkerTask,
+ reader: Box<dyn 'a + TapeRead>,
+ datastore: &DataStore,
+ chunk_list: &mut HashSet<[u8; 32]>,
+) -> Result<usize, Error> {
+ let mut decoder = ChunkArchiveDecoder::new(reader);
+
+ let mut count = 0;
+ loop {
+ let (digest, blob) = match decoder.next_chunk()? {
+ Some((digest, blob)) => (digest, blob),
+ None => break,
+ };
+
+ worker.check_abort()?;
+
+ if chunk_list.remove(&digest) && !datastore.cond_touch_chunk(&digest, false)? {
+ blob.verify_crc()?;
+
+ if blob.crypt_mode()? == CryptMode::None {
+ blob.decode(None, Some(&digest))?; // verify digest
+ }
+ datastore.insert_chunk(&blob, &digest)?;
+ count += 1;
+ }
+
+ if chunk_list.is_empty() {
+ break;
+ }
+ }
+
+ Ok(count)
+}
#[api(
input: {
--
2.20.1
* [pbs-devel] [RFC PATCH proxmox-backup 6/7] bin/proxmox-tape: add restore-single command to proxmox-tape
From: Dominik Csapak @ 2021-05-03 8:38 UTC
To: pbs-devel
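a possible invocation (the media-set UUID and datastore names are made
up; the argument order follows the arg_param registration below):

    proxmox-tape restore-single \
        02b8e48c-e261-4637-a177-9759bbd0b5d5 \
        mystore=mystore \
        mystore:ct/100/2021-01-01T00:00:00Z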
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/bin/proxmox-tape.rs | 62 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 62 insertions(+)
diff --git a/src/bin/proxmox-tape.rs b/src/bin/proxmox-tape.rs
index e18f334c..3d5d0cf3 100644
--- a/src/bin/proxmox-tape.rs
+++ b/src/bin/proxmox-tape.rs
@@ -868,6 +868,61 @@ async fn backup(mut param: Value) -> Result<(), Error> {
Ok(())
}
+#[api(
+ input: {
+ properties: {
+ store: {
+ schema: DATASTORE_MAP_LIST_SCHEMA,
+ },
+ drive: {
+ schema: DRIVE_NAME_SCHEMA,
+ optional: true,
+ },
+ "media-set": {
+ description: "Media set UUID.",
+ type: String,
+ },
+ "snapshots": {
+ description: "Comma-separated list of snapshots.",
+ type: Array,
+ items: {
+ type: String,
+ description: "A single snapshot',"
+ },
+ },
+ "notify-user": {
+ type: Userid,
+ optional: true,
+ },
+ owner: {
+ type: Authid,
+ optional: true,
+ },
+ "output-format": {
+ schema: OUTPUT_FORMAT,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Restore single snapshots from a media-set
+async fn restore_single(mut param: Value) -> Result<(), Error> {
+
+ let output_format = get_output_format(&param);
+
+ let (config, _digest) = config::drive::config()?;
+
+ param["drive"] = extract_drive_name(&mut param, &config)?.into();
+
+ let mut client = connect_to_localhost()?;
+
+ let result = client.post("api2/json/tape/restore-single", Some(param)).await?;
+
+ view_task_result(&mut client, result, &output_format).await?;
+
+ Ok(())
+}
+
#[api(
input: {
properties: {
@@ -981,6 +1036,13 @@ fn main() {
.completion_cb("store", complete_datastore_name)
.completion_cb("media-set", complete_media_set_uuid)
)
+ .insert(
+ "restore-single",
+ CliCommand::new(&API_METHOD_RESTORE_SINGLE)
+ .arg_param(&["media-set", "store", "snapshots"])
+ .completion_cb("store", complete_datastore_name)
+ .completion_cb("media-set", complete_media_set_uuid)
+ )
.insert(
"barcode-label",
CliCommand::new(&API_METHOD_BARCODE_LABEL_MEDIA)
--
2.20.1
* [pbs-devel] [RFC PATCH proxmox-backup 7/7] ui: tape: add single snapshot restore
From: Dominik Csapak @ 2021-05-03 8:38 UTC
To: pbs-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/tape/BackupOverview.js | 41 ++++++++++++++++++++++++++++++++++
www/tape/window/TapeRestore.js | 26 +++++++++++++++++++++
2 files changed, 67 insertions(+)
diff --git a/www/tape/BackupOverview.js b/www/tape/BackupOverview.js
index 0cc0e18e..c028d58d 100644
--- a/www/tape/BackupOverview.js
+++ b/www/tape/BackupOverview.js
@@ -16,6 +16,38 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
}).show();
},
+ restoreSingle: function(button, record) {
+ let me = this;
+ let view = me.getView();
+ let selection = view.getSelection();
+ if (!selection || selection.length < 1) {
+ return;
+ }
+
+ let node = selection[0];
+ if (node.data.restoreid === undefined) {
+ return;
+ }
+ let restoreid = node.data.restoreid;
+ let mediaset = node.data.text;
+ let uuid = node.data['media-set-uuid'];
+ let datastores = [node.data.store];
+
+ Ext.create('PBS.TapeManagement.TapeRestoreWindow', {
+ mediaset,
+ uuid,
+ list: [
+ restoreid,
+ ],
+ datastores,
+ listeners: {
+ destroy: function() {
+ me.reload();
+ },
+ },
+ }).show();
+ },
+
restore: function(button, record) {
let me = this;
let view = me.getView();
@@ -149,6 +181,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
entry.text = entry.snapshot;
entry.leaf = true;
entry.children = [];
+ entry.restoreid = `${entry.store}:${entry.snapshot}`;
let iconCls = PBS.Utils.get_type_icon_cls(entry.snapshot);
if (iconCls !== '') {
entry.iconCls = `fa ${iconCls}`;
@@ -262,6 +295,14 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
parentXType: 'treepanel',
enableFn: (rec) => !!rec.data['media-set-uuid'],
},
+ {
+ xtype: 'proxmoxButton',
+ disabled: true,
+ text: gettext('Restore Snapshot'),
+ handler: 'restoreSingle',
+ parentXType: 'treepanel',
+ enableFn: (rec) => !!rec.data.restoreid,
+ },
],
columns: [
diff --git a/www/tape/window/TapeRestore.js b/www/tape/window/TapeRestore.js
index 85011cba..fcd93f9c 100644
--- a/www/tape/window/TapeRestore.js
+++ b/www/tape/window/TapeRestore.js
@@ -10,6 +10,19 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
showTaskViewer: true,
isCreate: true,
+ cbindData: function(config) {
+ let me = this;
+ me.isSingle = false;
+ me.listText = "";
+ if (me.list !== undefined) {
+ me.url = '/api2/extjs/tape/restore-single';
+ me.isSingle = true;
+ me.listText = me.list.join('<br>');
+ me.title = gettext('Restore Snapshot');
+ }
+ return {};
+ },
+
defaults: {
labelWidth: 120,
},
@@ -33,6 +46,10 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
}
delete values.mapping;
+ if (me.up('window').list !== undefined) {
+ values.snapshots = me.up('window').list;
+ }
+
values.store = datastores.join(',');
return values;
@@ -55,6 +72,15 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
value: '{uuid}',
},
},
+ {
+ xtype: 'displayfield',
+ fieldLabel: gettext('Snapshot(s)'),
+ submitValue: false,
+ cbind: {
+ hidden: '{!isSingle}',
+ value: '{listText}',
+ },
+ },
{
xtype: 'pbsDriveSelector',
fieldLabel: gettext('Drive'),
--
2.20.1