* [pbs-devel] [PATCH proxmox-backup v3 0/7] tape: single snapshot restore
@ 2021-05-06 12:20 Dominik Csapak
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 1/7] api2/tape/restore: return backup manifest in try_restore_snapshot_archive Dominik Csapak
` (6 more replies)
0 siblings, 7 replies; 10+ messages in thread
From: Dominik Csapak @ 2021-05-06 12:20 UTC (permalink / raw)
To: pbs-devel
v3 of the series; it should be in an OK and working state, with nothing
obvious missing or broken, besides:
* GUI for multi-selection (I have some ideas, but we can allow
single snapshots for now and add multi-select later)
changes from v2:
* added schema for snapshot
* integrated with the normal restore api call and command
* added completion helper for proxmox-tape
* added small api-viewer patch to improve the '<array>' type text
* rebase on master
changes from v1:
* use parallel handler for chunk restore
* rebase on master
* add patch to return manifest from try_restore_snapshot_archive
* use Arc<WorkerTask> like we now do in the rest of the file
Dominik Csapak (7):
api2/tape/restore: return backup manifest in
try_restore_snapshot_archive
api2/types: add TAPE_RESTORE_SNAPSHOT_SCHEMA
api2/tape/restore: add optional snapshots to 'restore'
tape/inventory: add completion helper for tape snapshots
bin/proxmox-tape: add optional snapshots to restore command
ui: tape: add single snapshot restore
docs/api-viewer: improve rendering of array format
docs/api-viewer/PBSAPI.js | 31 +-
src/api2/tape/restore.rs | 702 ++++++++++++++++++++++++++++-----
src/api2/types/mod.rs | 11 +
src/backup.rs | 1 +
src/backup/backup_info.rs | 9 +-
src/bin/proxmox-tape.rs | 13 +-
src/tape/inventory.rs | 36 ++
www/tape/BackupOverview.js | 41 ++
www/tape/window/TapeRestore.js | 25 ++
9 files changed, 771 insertions(+), 98 deletions(-)
--
2.20.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [pbs-devel] [PATCH proxmox-backup v3 1/7] api2/tape/restore: return backup manifest in try_restore_snapshot_archive
2021-05-06 12:20 [pbs-devel] [PATCH proxmox-backup v3 0/7] tape: single snapshot restore Dominik Csapak
@ 2021-05-06 12:20 ` Dominik Csapak
2021-05-07 10:49 ` [pbs-devel] applied: " Dietmar Maurer
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 2/7] api2/types: add TAPE_RESTORE_SNAPSHOT_SCHEMA Dominik Csapak
` (5 subsequent siblings)
6 siblings, 1 reply; 10+ messages in thread
From: Dominik Csapak @ 2021-05-06 12:20 UTC (permalink / raw)
To: pbs-devel
we'll use that for partial snapshot restore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/tape/restore.rs | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index f3452364..9884b379 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -800,7 +800,7 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
worker: Arc<WorkerTask>,
decoder: &mut pxar::decoder::sync::Decoder<R>,
snapshot_path: &Path,
-) -> Result<(), Error> {
+) -> Result<BackupManifest, Error> {
let _root = match decoder.next() {
None => bail!("missing root entry"),
@@ -886,9 +886,10 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
}
}
- if manifest.is_none() {
- bail!("missing manifest");
- }
+ let manifest = match manifest {
+ None => bail!("missing manifest"),
+ Some(manifest) => manifest,
+ };
// Do not verify anything here, because this would be to slow (causes tape stops).
@@ -902,7 +903,7 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
bail!("Atomic rename manifest {:?} failed - {}", manifest_path, err);
}
- Ok(())
+ Ok(manifest)
}
/// Try to restore media catalogs (form catalog_archives)
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 2/7] api2/types: add TAPE_RESTORE_SNAPSHOT_SCHEMA
2021-05-06 12:20 [pbs-devel] [PATCH proxmox-backup v3 0/7] tape: single snapshot restore Dominik Csapak
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 1/7] api2/tape/restore: return backup manifest in try_restore_snapshot_archive Dominik Csapak
@ 2021-05-06 12:20 ` Dominik Csapak
2021-05-07 10:51 ` [pbs-devel] applied: " Dietmar Maurer
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 3/7] api2/tape/restore: add optional snapshots to 'restore' Dominik Csapak
` (4 subsequent siblings)
6 siblings, 1 reply; 10+ messages in thread
From: Dominik Csapak @ 2021-05-06 12:20 UTC (permalink / raw)
To: pbs-devel
which is 'store:type/id/time'
for this, SNAPSHOT_PATH_REGEX_STR needed to be refactored out of backup_info
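As a rough, standalone illustration of what this 'store:snapshot-path' format encodes: the restore code splits the spec once on the first ':' to separate the target datastore from the snapshot path. The helper below is hypothetical (not part of the patch), it only mirrors that splitn(2, ':') behavior:

```rust
// Hypothetical sketch: split a "store:type/id/time" spec into its
// datastore prefix and snapshot path, mirroring the splitn(2, ':')
// used by the restore code. Not a real proxmox-backup API.
fn parse_snapshot_spec(spec: &str) -> Option<(&str, &str)> {
    let mut parts = spec.splitn(2, ':');
    let store = parts.next()?;
    let snapshot = parts.next()?; // None if there is no ':' at all
    if store.is_empty() || snapshot.is_empty() {
        return None;
    }
    Some((store, snapshot))
}

fn main() {
    let (store, snapshot) =
        parse_snapshot_spec("mystore:ct/100/2021-01-01T00:00:00Z").unwrap();
    assert_eq!(store, "mystore");
    assert_eq!(snapshot, "ct/100/2021-01-01T00:00:00Z");
}
```

Note that only the first ':' splits, so the timestamp's own colons stay inside the snapshot path.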
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/types/mod.rs | 11 +++++++++++
src/backup.rs | 1 +
src/backup/backup_info.rs | 9 ++++++++-
3 files changed, 20 insertions(+), 1 deletion(-)
diff --git a/src/api2/types/mod.rs b/src/api2/types/mod.rs
index e829f207..21b5eade 100644
--- a/src/api2/types/mod.rs
+++ b/src/api2/types/mod.rs
@@ -114,6 +114,8 @@ const_regex!{
pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
pub DATASTORE_MAP_REGEX = concat!(r"(:?", PROXMOX_SAFE_ID_REGEX_STR!(), r"=)?", PROXMOX_SAFE_ID_REGEX_STR!());
+
+ pub TAPE_RESTORE_SNAPSHOT_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r":", SNAPSHOT_PATH_REGEX_STR!(), r"$");
}
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
@@ -185,6 +187,9 @@ pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
pub const DATASTORE_MAP_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&DATASTORE_MAP_REGEX);
+pub const TAPE_RESTORE_SNAPSHOT_FORMAT: ApiStringFormat =
+ ApiStringFormat::Pattern(&TAPE_RESTORE_SNAPSHOT_REGEX);
+
pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
.format(&PASSWORD_FORMAT)
.min_length(1)
@@ -396,6 +401,12 @@ pub const DATASTORE_MAP_LIST_SCHEMA: Schema = StringSchema::new(
.format(&ApiStringFormat::PropertyString(&DATASTORE_MAP_ARRAY_SCHEMA))
.schema();
+pub const TAPE_RESTORE_SNAPSHOT_SCHEMA: Schema = StringSchema::new(
+ "A snapshot in the format: 'store:type/id/time'")
+ .format(&TAPE_RESTORE_SNAPSHOT_FORMAT)
+ .type_text("store:type/id/time")
+ .schema();
+
pub const MEDIA_SET_UUID_SCHEMA: Schema =
StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).")
.format(&UUID_FORMAT)
diff --git a/src/backup.rs b/src/backup.rs
index cca43881..ae937be0 100644
--- a/src/backup.rs
+++ b/src/backup.rs
@@ -238,6 +238,7 @@ pub use fixed_index::*;
mod dynamic_index;
pub use dynamic_index::*;
+#[macro_use]
mod backup_info;
pub use backup_info::*;
diff --git a/src/backup/backup_info.rs b/src/backup/backup_info.rs
index b0f6e31c..f39f2ed4 100644
--- a/src/backup/backup_info.rs
+++ b/src/backup/backup_info.rs
@@ -25,6 +25,13 @@ macro_rules! BACKUP_TIME_RE {
};
}
+#[macro_export]
+macro_rules! SNAPSHOT_PATH_REGEX_STR {
+ () => (
+ concat!(r"(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")")
+ );
+}
+
const_regex! {
BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
@@ -37,7 +44,7 @@ const_regex! {
GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
SNAPSHOT_PATH_REGEX = concat!(
- r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")$");
+ r"^", SNAPSHOT_PATH_REGEX_STR!(), r"$");
}
/// BackupGroup is a directory containing a list of BackupDir
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 3/7] api2/tape/restore: add optional snapshots to 'restore'
2021-05-06 12:20 [pbs-devel] [PATCH proxmox-backup v3 0/7] tape: single snapshot restore Dominik Csapak
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 1/7] api2/tape/restore: return backup manifest in try_restore_snapshot_archive Dominik Csapak
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 2/7] api2/types: add TAPE_RESTORE_SNAPSHOT_SCHEMA Dominik Csapak
@ 2021-05-06 12:20 ` Dominik Csapak
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 4/7] tape/inventory: add completion helper for tape snapshots Dominik Csapak
` (3 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2021-05-06 12:20 UTC (permalink / raw)
To: pbs-devel
this makes it possible to restore only some snapshots from a tape media-set
instead of the whole one. If the user selects only a small part, this will
probably be faster (and definitely uses less space on the target
datastores).
the user has to provide a list of snapshots to restore, in the form of
'store:type/id/time'
e.g. 'mystore:ct/100/2021-01-01T00:00:00Z'
we achieve this by first restoring the indices to a temp dir, retrieving
a list of chunks, and then, using the catalog, generating a list of
media/files that we need to (partially) restore.
finally, we copy the snapshots to the correct dir in the datastore
and clean up the temp dir
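The phase-2 bookkeeping this describes (restore only chunks the target datastore does not have yet, grouped per source datastore) can be sketched standalone. Everything below is illustrative: `group_missing_chunks` and `have_chunk` are hypothetical stand-ins for the patch's catalog lookup and cond_touch_chunk() check, not real proxmox-backup APIs:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch of the phase-2 bookkeeping: group chunk digests per
// source datastore, keeping only those the target does not already have.
// `have_chunk` stands in for the patch's cond_touch_chunk() check.
fn group_missing_chunks(
    snapshot_chunks: &[(&str, [u8; 32])], // (source datastore, chunk digest)
    have_chunk: impl Fn(&str, &[u8; 32]) -> bool,
) -> HashMap<String, HashSet<[u8; 32]>> {
    let mut missing: HashMap<String, HashSet<[u8; 32]>> = HashMap::new();
    for &(store, digest) in snapshot_chunks {
        if !have_chunk(store, &digest) {
            missing
                .entry(store.to_string())
                .or_insert_with(HashSet::new)
                .insert(digest);
        }
    }
    missing
}

fn main() {
    // pretend every digest starting with byte 1 already exists on the target
    let chunks = [("mystore", [1u8; 32]), ("mystore", [2u8; 32])];
    let missing = group_missing_chunks(&chunks, |_, d| d[0] == 1);
    assert_eq!(missing["mystore"].len(), 1);
    assert!(missing["mystore"].contains(&[2u8; 32]));
}
```

Deduplicating via a HashSet per datastore is what lets phase 2 load each tape file at most once for the chunks it still needs.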
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/tape/restore.rs | 691 ++++++++++++++++++++++++++++++++++-----
1 file changed, 608 insertions(+), 83 deletions(-)
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 9884b379..f5c35e5a 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -1,4 +1,4 @@
-use std::path::Path;
+use std::path::{Path, PathBuf};
use std::ffi::OsStr;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
@@ -40,6 +40,7 @@ use crate::{
UPID_SCHEMA,
Authid,
Userid,
+ TAPE_RESTORE_SNAPSHOT_SCHEMA,
},
config::{
self,
@@ -51,9 +52,14 @@ use crate::{
},
},
backup::{
+ ArchiveType,
+ archive_type,
+ IndexFile,
MANIFEST_BLOB_NAME,
CryptMode,
DataStore,
+ DynamicIndexReader,
+ FixedIndexReader,
BackupDir,
DataBlob,
BackupManifest,
@@ -69,6 +75,7 @@ use crate::{
MediaId,
MediaSet,
MediaCatalog,
+ MediaSetCatalog,
Inventory,
lock_media_set,
file_formats::{
@@ -95,6 +102,8 @@ use crate::{
},
};
+const RESTORE_TMP_DIR: &str = "/var/tmp/proxmox-backup";
+
pub struct DataStoreMap {
map: HashMap<String, Arc<DataStore>>,
default: Option<Arc<DataStore>>,
@@ -183,6 +192,7 @@ fn check_datastore_privs(
pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
+
#[api(
input: {
properties: {
@@ -200,6 +210,14 @@ pub const ROUTER: Router = Router::new().post(&API_METHOD_RESTORE);
type: Userid,
optional: true,
},
+ "snapshots": {
+ description: "List of snapshots.",
+ type: Array,
+ optional: true,
+ items: {
+ schema: TAPE_RESTORE_SNAPSHOT_SCHEMA,
+ },
+ },
owner: {
type: Authid,
optional: true,
@@ -222,6 +240,7 @@ pub fn restore(
drive: String,
media_set: String,
notify_user: Option<Userid>,
+ snapshots: Option<Vec<String>>,
owner: Option<Authid>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
@@ -266,14 +285,20 @@ pub fn restore(
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
- let taskid = used_datastores
- .iter()
- .map(|s| s.to_string())
- .collect::<Vec<String>>()
- .join(", ");
+ let (worker_type, task_id) = if snapshots.is_some() {
+ ("tape-restore-single", None)
+ } else {
+ let task_id = used_datastores
+ .iter()
+ .map(|s| s.to_string())
+ .collect::<Vec<String>>()
+ .join(", ");
+ ("tape-restore", Some(task_id))
+ };
+
let upid_str = WorkerTask::new_thread(
- "tape-restore",
- Some(taskid),
+ worker_type,
+ task_id,
auth_id.clone(),
to_stdout,
move |worker| {
@@ -281,100 +306,608 @@ pub fn restore(
set_tape_device_state(&drive, &worker.upid().to_string())?;
- let members = inventory.compute_media_set_members(&media_set_uuid)?;
+ let restore_owner = owner.as_ref().unwrap_or(&auth_id);
- let media_list = members.media_list();
+ let email = notify_user
+ .as_ref()
+ .and_then(|userid| lookup_user_email(userid))
+ .or_else(|| lookup_user_email(&auth_id.clone().into()));
+
+ let res = if let Some(snapshots) = snapshots {
+ restore_single_worker(
+ worker.clone(),
+ snapshots,
+ inventory,
+ media_set_uuid,
+ drive_config,
+ &drive,
+ store_map,
+ restore_owner,
+ email,
+ )
+ } else {
+ task_log!(worker, "Restore mediaset '{}'", media_set);
+ task_log!(worker, "Pool: {}", pool);
+ let res = restore_worker(
+ worker.clone(),
+ inventory,
+ media_set_uuid,
+ drive_config,
+ &drive,
+ store_map,
+ restore_owner,
+ email,
+ );
+ task_log!(worker, "Restore mediaset '{}' done", media_set);
+ res
+ };
- let mut media_id_list = Vec::new();
+ if let Err(err) = set_tape_device_state(&drive, "") {
+ task_log!(
+ worker,
+ "could not unset drive state for {}: {}",
+ drive,
+ err
+ );
+ }
- let mut encryption_key_fingerprint = None;
+ res
+ }
+ )?;
- for (seq_nr, media_uuid) in media_list.iter().enumerate() {
- match media_uuid {
- None => {
- bail!("media set {} is incomplete (missing member {}).", media_set_uuid, seq_nr);
- }
- Some(media_uuid) => {
- let media_id = inventory.lookup_media(media_uuid).unwrap();
- if let Some(ref set) = media_id.media_set_label { // always true here
- if encryption_key_fingerprint.is_none() && set.encryption_key_fingerprint.is_some() {
- encryption_key_fingerprint = set.encryption_key_fingerprint.clone();
- }
- }
- media_id_list.push(media_id);
+ Ok(upid_str.into())
+}
+
+fn restore_worker(
+ worker: Arc<WorkerTask>,
+ inventory: Inventory,
+ media_set_uuid: Uuid,
+ drive_config: SectionConfigData,
+ drive_name: &str,
+ store_map: DataStoreMap,
+ restore_owner: &Authid,
+ email: Option<String>,
+) -> Result<(), Error> {
+ let members = inventory.compute_media_set_members(&media_set_uuid)?;
+
+ let media_list = members.media_list();
+
+ let mut media_id_list = Vec::new();
+
+ let mut encryption_key_fingerprint = None;
+
+ for (seq_nr, media_uuid) in media_list.iter().enumerate() {
+ match media_uuid {
+ None => {
+ bail!("media set {} is incomplete (missing member {}).", media_set_uuid, seq_nr);
+ }
+ Some(media_uuid) => {
+ let media_id = inventory.lookup_media(media_uuid).unwrap();
+ if let Some(ref set) = media_id.media_set_label { // always true here
+ if encryption_key_fingerprint.is_none() && set.encryption_key_fingerprint.is_some() {
+ encryption_key_fingerprint = set.encryption_key_fingerprint.clone();
}
}
+ media_id_list.push(media_id);
+ }
+ }
+ }
+
+ if let Some(fingerprint) = encryption_key_fingerprint {
+ task_log!(worker, "Encryption key fingerprint: {}", fingerprint);
+ }
+
+ task_log!(
+ worker,
+ "Datastore(s): {}",
+ store_map
+ .used_datastores()
+ .into_iter()
+ .map(String::from)
+ .collect::<Vec<String>>()
+ .join(", "),
+ );
+
+ task_log!(worker, "Drive: {}", drive_name);
+ task_log!(
+ worker,
+ "Required media list: {}",
+ media_id_list.iter()
+ .map(|media_id| media_id.label.label_text.as_str())
+ .collect::<Vec<&str>>()
+ .join(";")
+ );
+
+ let mut datastore_locks = Vec::new();
+ for store_name in store_map.used_datastores() {
+ // explicit create shared lock to prevent GC on newly created chunks
+ if let Some(store) = store_map.get_datastore(store_name) {
+ let shared_store_lock = store.try_shared_chunk_store_lock()?;
+ datastore_locks.push(shared_store_lock);
+ }
+ }
+
+ let mut checked_chunks_map = HashMap::new();
+
+ for media_id in media_id_list.iter() {
+ request_and_restore_media(
+ worker.clone(),
+ media_id,
+ &drive_config,
+ drive_name,
+ &store_map,
+ &mut checked_chunks_map,
+ restore_owner,
+ &email,
+ )?;
+ }
+
+ Ok(())
+}
+
+fn restore_single_worker(
+ worker: Arc<WorkerTask>,
+ snapshots: Vec<String>,
+ inventory: Inventory,
+ media_set_uuid: Uuid,
+ drive_config: SectionConfigData,
+ drive_name: &str,
+ store_map: DataStoreMap,
+ restore_owner: &Authid,
+ email: Option<String>,
+) -> Result<(), Error> {
+ let base_path: PathBuf = format!("{}/{}", RESTORE_TMP_DIR, media_set_uuid).into();
+ std::fs::create_dir_all(&base_path)?;
+
+ let catalog = get_media_set_catalog(&inventory, &media_set_uuid)?;
+
+ let mut datastore_locks = Vec::new();
+ let mut snapshot_file_list: HashMap<Uuid, Vec<u64>> = HashMap::new();
+ let mut snapshot_locks = HashMap::new();
+
+ let res = proxmox::try_block!({
+ // assemble snapshot files/locks
+ for i in 0..snapshots.len() {
+ let store_snapshot = &snapshots[i];
+ let mut split = snapshots[i].splitn(2, ':');
+ let source_datastore = split
+ .next()
+ .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+ let snapshot = split
+ .next()
+ .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+ let backup_dir: BackupDir = snapshot.parse()?;
+
+ let datastore = store_map.get_datastore(source_datastore).ok_or_else(|| {
+ format_err!(
+ "could not find mapping for source datastore: {}",
+ source_datastore
+ )
+ })?;
+
+ let (owner, _group_lock) =
+ datastore.create_locked_backup_group(backup_dir.group(), &restore_owner)?;
+ if restore_owner != &owner {
+ // only the owner is allowed to create additional snapshots
+ bail!(
+ "restore '{}' failed - owner check failed ({} != {})",
+ snapshot,
+ restore_owner,
+ owner
+ );
}
- task_log!(worker, "Restore mediaset '{}'", media_set);
- if let Some(fingerprint) = encryption_key_fingerprint {
- task_log!(worker, "Encryption key fingerprint: {}", fingerprint);
+ let (media_id, file_num) = if let Some((media_uuid, nr)) =
+ catalog.lookup_snapshot(&source_datastore, &snapshot)
+ {
+ let media_id = inventory.lookup_media(media_uuid).unwrap();
+ (media_id, nr)
+ } else {
+ task_warn!(
+ worker,
+ "did not find snapshot '{}' in media set {}",
+ snapshot,
+ media_set_uuid
+ );
+ continue;
+ };
+
+ let (_rel_path, is_new, snap_lock) = datastore.create_locked_backup_dir(&backup_dir)?;
+
+ if !is_new {
+ task_log!(
+ worker,
+ "found snapshot {} on target datastore, skipping...",
+ snapshot
+ );
+ continue;
}
- task_log!(worker, "Pool: {}", pool);
- task_log!(
- worker,
- "Datastore(s): {}",
- store_map
- .used_datastores()
- .into_iter()
- .map(String::from)
- .collect::<Vec<String>>()
- .join(", "),
- );
- task_log!(worker, "Drive: {}", drive);
+ snapshot_locks.insert(store_snapshot.to_string(), snap_lock);
+
+ let shared_store_lock = datastore.try_shared_chunk_store_lock()?;
+ datastore_locks.push(shared_store_lock);
+
+ let file_list = snapshot_file_list
+ .entry(media_id.label.uuid.clone())
+ .or_insert_with(Vec::new);
+ file_list.push(file_num);
+
task_log!(
worker,
- "Required media list: {}",
- media_id_list.iter()
- .map(|media_id| media_id.label.label_text.as_str())
- .collect::<Vec<&str>>()
- .join(";")
+ "found snapshot {} on {}: file {}",
+ snapshot,
+ media_id.label.label_text,
+ file_num
);
+ }
+
+ if snapshot_file_list.is_empty() {
+ task_log!(worker, "nothing to restore, skipping remaining phases...");
+ return Ok(());
+ }
- let mut datastore_locks = Vec::new();
- for store_name in store_map.used_datastores() {
- // explicit create shared lock to prevent GC on newly created chunks
- if let Some(store) = store_map.get_datastore(store_name) {
- let shared_store_lock = store.try_shared_chunk_store_lock()?;
- datastore_locks.push(shared_store_lock);
+ task_log!(worker, "Phase 1: temporarily restore snapshots to temp dir");
+ let mut chunks_list: HashMap<String, HashSet<[u8; 32]>> = HashMap::new();
+ for (media_uuid, file_list) in snapshot_file_list.iter_mut() {
+ let media_id = inventory.lookup_media(media_uuid).unwrap();
+ let (drive, info) = request_and_load_media(
+ &worker,
+ &drive_config,
+ &drive_name,
+ &media_id.label,
+ &email,
+ )?;
+ file_list.sort_unstable();
+ restore_snapshots_to_tmpdir(
+ worker.clone(),
+ &base_path,
+ file_list,
+ drive,
+ &info,
+ &media_set_uuid,
+ &mut chunks_list,
+ )?;
+ }
+
+ let mut media_list: HashMap<Uuid, HashMap<u64, HashSet<[u8; 32]>>> = HashMap::new();
+
+ for (source_datastore, chunks) in chunks_list.into_iter() {
+ let datastore = store_map.get_datastore(&source_datastore).ok_or_else(|| {
+ format_err!(
+ "could not find mapping for source datastore: {}",
+ source_datastore
+ )
+ })?;
+ for digest in chunks.into_iter() {
+ // we only want to restore chunks that we do not have yet
+ if !datastore.cond_touch_chunk(&digest, false)? {
+ if let Some((uuid, nr)) = catalog.lookup_chunk(&source_datastore, &digest) {
+ let file = media_list.entry(uuid.clone()).or_insert_with(HashMap::new);
+ let chunks = file.entry(nr).or_insert_with(HashSet::new);
+ chunks.insert(digest);
+ }
}
}
+ }
- let mut checked_chunks_map = HashMap::new();
+ // we do not need it anymore, saves memory
+ drop(catalog);
- for media_id in media_id_list.iter() {
- request_and_restore_media(
- worker.clone(),
- media_id,
- &drive_config,
- &drive,
- &store_map,
- &mut checked_chunks_map,
- &auth_id,
- ¬ify_user,
- &owner,
- )?;
+ if !media_list.is_empty() {
+ task_log!(worker, "Phase 2: restore chunks to datastores");
+ } else {
+ task_log!(worker, "all chunks exist already, skipping phase 2...");
+ }
+
+ for (media_uuid, file_list) in media_list.iter_mut() {
+ let media_id = inventory.lookup_media(media_uuid).unwrap();
+ let (mut drive, _info) = request_and_load_media(
+ &worker,
+ &drive_config,
+ &drive_name,
+ &media_id.label,
+ &email,
+ )?;
+ let mut files: Vec<u64> = file_list.keys().map(|v| *v).collect();
+ files.sort();
+ restore_chunk_file_list(worker.clone(), &mut drive, &files[..], &store_map, file_list)?;
+ }
+
+ task_log!(
+ worker,
+ "Phase 3: copy snapshots from temp dir to datastores"
+ );
+ for (store_snapshot, _lock) in snapshot_locks.into_iter() {
+ let mut split = store_snapshot.splitn(2, ':');
+ let source_datastore = split
+ .next()
+ .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+ let snapshot = split
+ .next()
+ .ok_or_else(|| format_err!("invalid snapshot: {}", store_snapshot))?;
+ let backup_dir: BackupDir = snapshot.parse()?;
+
+ let datastore = store_map
+ .get_datastore(&source_datastore)
+ .ok_or_else(|| format_err!("unexpected source datastore: {}", source_datastore))?;
+
+ let mut tmp_path = base_path.clone();
+ tmp_path.push(&source_datastore);
+ tmp_path.push(snapshot);
+
+ let path = datastore.snapshot_path(&backup_dir);
+
+ for entry in std::fs::read_dir(tmp_path)? {
+ let entry = entry?;
+ let mut new_path = path.clone();
+ new_path.push(entry.file_name());
+ std::fs::copy(entry.path(), new_path)?;
}
+ task_log!(worker, "Restore snapshot '{}' done", snapshot);
+ }
+ Ok(())
+ });
+
+ match std::fs::remove_dir_all(&base_path) {
+ Ok(()) => {}
+ Err(err) => task_warn!(worker, "error cleaning up: {}", err),
+ }
- drop(datastore_locks);
+ res
+}
- task_log!(worker, "Restore mediaset '{}' done", media_set);
+fn get_media_set_catalog(
+ inventory: &Inventory,
+ media_set_uuid: &Uuid,
+) -> Result<MediaSetCatalog, Error> {
+ let status_path = Path::new(TAPE_STATUS_DIR);
+
+ let members = inventory.compute_media_set_members(media_set_uuid)?;
+ let media_list = members.media_list();
+ let mut catalog = MediaSetCatalog::new();
+
+ for (seq_nr, media_uuid) in media_list.iter().enumerate() {
+ match media_uuid {
+ None => {
+ bail!(
+ "media set {} is incomplete (missing member {}).",
+ media_set_uuid,
+ seq_nr
+ );
+ }
+ Some(media_uuid) => {
+ let media_id = inventory.lookup_media(media_uuid).unwrap();
+ let media_catalog = MediaCatalog::open(status_path, &media_id, false, false)?;
+ catalog.append_catalog(media_catalog)?;
+ }
+ }
+ }
+
+ Ok(catalog)
+}
+
+fn restore_snapshots_to_tmpdir(
+ worker: Arc<WorkerTask>,
+ path: &PathBuf,
+ file_list: &[u64],
+ mut drive: Box<dyn TapeDriver>,
+ media_id: &MediaId,
+ media_set_uuid: &Uuid,
+ chunks_list: &mut HashMap<String, HashSet<[u8; 32]>>,
+) -> Result<(), Error> {
+ match media_id.media_set_label {
+ None => {
+ bail!(
+ "missing media set label on media {} ({})",
+ media_id.label.label_text,
+ media_id.label.uuid
+ );
+ }
+ Some(ref set) => {
+ if set.uuid != *media_set_uuid {
+ bail!(
+ "wrong media set label on media {} ({} != {})",
+ media_id.label.label_text,
+ media_id.label.uuid,
+ media_set_uuid
+ );
+ }
+ let encrypt_fingerprint = set.encryption_key_fingerprint.clone().map(|fp| {
+ task_log!(worker, "Encryption key fingerprint: {}", fp);
+ (fp, set.uuid.clone())
+ });
+
+ drive.set_encryption(encrypt_fingerprint)?;
+ }
+ }
+
+ for file_num in file_list {
+ drive.move_to_file(*file_num)?;
+ let mut reader = drive.read_next_file()?;
+
+ let header: MediaContentHeader = unsafe { reader.read_le_value()? };
+ if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
+ bail!("missing MediaContentHeader");
+ }
+
+ match header.content_magic {
+ PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1 => {
+ let header_data = reader.read_exact_allocated(header.size as usize)?;
+
+ let archive_header: SnapshotArchiveHeader = serde_json::from_slice(&header_data)
+ .map_err(|err| {
+ format_err!("unable to parse snapshot archive header - {}", err)
+ })?;
+
+ let source_datastore = archive_header.store;
+ let snapshot = archive_header.snapshot;
- if let Err(err) = set_tape_device_state(&drive, "") {
task_log!(
worker,
- "could not unset drive state for {}: {}",
- drive,
- err
+ "File {}: snapshot archive {}:{}",
+ file_num,
+ source_datastore,
+ snapshot
+ );
+
+ let mut decoder = pxar::decoder::sync::Decoder::from_std(reader)?;
+
+ let mut tmp_path = path.clone();
+ tmp_path.push(&source_datastore);
+ tmp_path.push(snapshot);
+ std::fs::create_dir_all(&tmp_path)?;
+
+ let chunks = chunks_list
+ .entry(source_datastore)
+ .or_insert_with(HashSet::new);
+ let manifest = try_restore_snapshot_archive(worker.clone(), &mut decoder, &tmp_path)?;
+ for item in manifest.files() {
+ let mut archive_path = tmp_path.to_owned();
+ archive_path.push(&item.filename);
+
+ let index: Box<dyn IndexFile> = match archive_type(&item.filename)? {
+ ArchiveType::DynamicIndex => {
+ Box::new(DynamicIndexReader::open(&archive_path)?)
+ }
+ ArchiveType::FixedIndex => {
+ Box::new(FixedIndexReader::open(&archive_path)?)
+ }
+ ArchiveType::Blob => continue,
+ };
+ for i in 0..index.index_count() {
+ if let Some(digest) = index.index_digest(i) {
+ chunks.insert(*digest);
+ }
+ }
+ }
+ }
+ _ => bail!("unexpected file type"),
+ }
+ }
+
+ Ok(())
+}
+
+fn restore_chunk_file_list(
+ worker: Arc<WorkerTask>,
+ drive: &mut Box<dyn TapeDriver>,
+ file_list: &[u64],
+ store_map: &DataStoreMap,
+ chunk_list: &mut HashMap<u64, HashSet<[u8; 32]>>,
+) -> Result<(), Error> {
+ for nr in file_list {
+ let current_file_number = drive.current_file_number()?;
+ if current_file_number != *nr {
+ drive.move_to_file(*nr)?;
+ }
+ let mut reader = drive.read_next_file()?;
+ let header: MediaContentHeader = unsafe { reader.read_le_value()? };
+ if header.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
+ bail!("missing MediaContentHeader");
+ }
+
+ match header.content_magic {
+ PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1 => {
+ let header_data = reader.read_exact_allocated(header.size as usize)?;
+
+ let archive_header: ChunkArchiveHeader = serde_json::from_slice(&header_data)
+ .map_err(|err| format_err!("unable to parse chunk archive header - {}", err))?;
+
+ let source_datastore = archive_header.store;
+
+ task_log!(
+ worker,
+ "File {}: chunk archive for datastore '{}'",
+ nr,
+ source_datastore
);
+
+ let datastore = store_map.get_datastore(&source_datastore).ok_or_else(|| {
+ format_err!("unexpected chunk archive for store: {}", source_datastore)
+ })?;
+
+ let chunks = chunk_list
+ .get_mut(nr)
+ .ok_or_else(|| format_err!("unexpected file nr: {}", nr))?;
+
+ let count = restore_partial_chunk_archive(worker.clone(), reader, datastore.clone(), chunks)?;
+ task_log!(worker, "restored {} chunks", count);
}
+ _ => bail!("unexpected content magic {:?}", header.content_magic),
+ }
+ }
+ Ok(())
+}
+
+fn restore_partial_chunk_archive<'a>(
+ worker: Arc<WorkerTask>,
+ reader: Box<dyn 'a + TapeRead>,
+ datastore: Arc<DataStore>,
+ chunk_list: &mut HashSet<[u8; 32]>,
+) -> Result<usize, Error> {
+ let mut decoder = ChunkArchiveDecoder::new(reader);
+
+ let mut count = 0;
+
+ let start_time = std::time::SystemTime::now();
+ let bytes = Arc::new(std::sync::atomic::AtomicU64::new(0));
+ let bytes2 = bytes.clone();
+
+ let writer_pool = ParallelHandler::new(
+ "tape restore chunk writer",
+ 4,
+ move |(chunk, digest): (DataBlob, [u8; 32])| {
+ if !datastore.cond_touch_chunk(&digest, false)? {
+ bytes2.fetch_add(chunk.raw_size(), std::sync::atomic::Ordering::SeqCst);
+ chunk.verify_crc()?;
+ if chunk.crypt_mode()? == CryptMode::None {
+ chunk.decode(None, Some(&digest))?; // verify digest
+ }
+
+ datastore.insert_chunk(&chunk, &digest)?;
+ }
Ok(())
+ },
+ );
+
+ let verify_and_write_channel = writer_pool.channel();
+
+ loop {
+ let (digest, blob) = match decoder.next_chunk()? {
+ Some((digest, blob)) => (digest, blob),
+ None => break,
+ };
+
+ worker.check_abort()?;
+
+ if chunk_list.remove(&digest) {
+ verify_and_write_channel.send((blob, digest.clone()))?;
+ count += 1;
}
- )?;
- Ok(upid_str.into())
+ if chunk_list.is_empty() {
+ break;
+ }
+ }
+
+ drop(verify_and_write_channel);
+
+ writer_pool.complete()?;
+
+ let elapsed = start_time.elapsed()?.as_secs_f64();
+
+ let bytes = bytes.load(std::sync::atomic::Ordering::SeqCst);
+
+ task_log!(
+ worker,
+ "restored {} bytes ({:.2} MB/s)",
+ bytes,
+ (bytes as f64) / (1_000_000.0 * elapsed)
+ );
+
+ Ok(count)
}
/// Request and restore complete media without using existing catalog (create catalog instead)
@@ -385,21 +918,15 @@ pub fn request_and_restore_media(
drive_name: &str,
store_map: &DataStoreMap,
checked_chunks_map: &mut HashMap<String, HashSet<[u8;32]>>,
- authid: &Authid,
- notify_user: &Option<Userid>,
- owner: &Option<Authid>,
+ restore_owner: &Authid,
+ email: &Option<String>,
) -> Result<(), Error> {
let media_set_uuid = match media_id.media_set_label {
None => bail!("restore_media: no media set - internal error"),
Some(ref set) => &set.uuid,
};
- let email = notify_user
- .as_ref()
- .and_then(|userid| lookup_user_email(userid))
- .or_else(|| lookup_user_email(&authid.clone().into()));
-
- let (mut drive, info) = request_and_load_media(&worker, &drive_config, &drive_name, &media_id.label, &email)?;
+ let (mut drive, info) = request_and_load_media(&worker, &drive_config, &drive_name, &media_id.label, email)?;
match info.media_set_label {
None => {
@@ -419,8 +946,6 @@ pub fn request_and_restore_media(
}
}
- let restore_owner = owner.as_ref().unwrap_or(authid);
-
restore_media(
worker,
&mut drive,
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 4/7] tape/inventory: add completion helper for tape snapshots
2021-05-06 12:20 [pbs-devel] [PATCH proxmox-backup v3 0/7] tape: single snapshot restore Dominik Csapak
` (2 preceding siblings ...)
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 3/7] api2/tape/restore: add optional snapshots to 'restore' Dominik Csapak
@ 2021-05-06 12:20 ` Dominik Csapak
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 5/7] bin/proxmox-tape: add optional snapshots to restore command Dominik Csapak
` (2 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2021-05-06 12:20 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/tape/inventory.rs | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/src/tape/inventory.rs b/src/tape/inventory.rs
index f9654538..4bb6d4f8 100644
--- a/src/tape/inventory.rs
+++ b/src/tape/inventory.rs
@@ -54,6 +54,7 @@ use crate::{
tape::{
TAPE_STATUS_DIR,
MediaSet,
+ MediaCatalog,
file_formats::{
MediaLabel,
MediaSetLabel,
@@ -850,3 +851,38 @@ pub fn complete_media_label_text(
inventory.map.values().map(|entry| entry.id.label.label_text.clone()).collect()
}
+
+pub fn complete_media_set_snapshots(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+ let media_set_uuid: Uuid = match param.get("media-set").and_then(|s| s.parse().ok()) {
+ Some(uuid) => uuid,
+ None => return Vec::new(),
+ };
+ let status_path = Path::new(TAPE_STATUS_DIR);
+ let inventory = match Inventory::load(&status_path) {
+ Ok(inventory) => inventory,
+ Err(_) => return Vec::new(),
+ };
+
+ let mut res = Vec::new();
+ let media_ids = inventory.list_used_media().into_iter().filter(|media| {
+ match &media.media_set_label {
+ Some(label) => label.uuid == media_set_uuid,
+ None => false,
+ }
+ });
+
+ for media_id in media_ids {
+ let catalog = match MediaCatalog::open(status_path, &media_id, false, false) {
+ Ok(catalog) => catalog,
+ Err(_) => continue,
+ };
+
+ for (store, content) in catalog.content() {
+ for snapshot in content.snapshot_index.keys() {
+ res.push(format!("{}:{}", store, snapshot));
+ }
+ }
+ }
+
+ res
+}
--
2.20.1
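For context on the shape of such helpers: completion callbacks like `complete_media_set_snapshots` take the partially typed argument plus the other CLI parameters and return candidate strings. A minimal, hypothetical stand-in (candidate values made up; the real helper reads them from the media catalogs, and prefix matching may happen in the CLI layer rather than in the helper itself):

```rust
use std::collections::HashMap;

// Stand-in with the same signature as complete_media_set_snapshots:
// return known 'store:snapshot' candidates, filtered by the text
// typed so far (inlined here for demonstration).
fn complete_snapshots_stub(arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
    let candidates = [
        "store1:vm/100/2021-05-06T12:20:05Z",
        "store1:ct/101/2021-05-06T12:25:00Z",
    ];
    candidates
        .iter()
        .filter(|c| c.starts_with(arg))
        .map(|c| c.to_string())
        .collect()
}

fn main() {
    let matches = complete_snapshots_stub("store1:vm", &HashMap::new());
    println!("{:?}", matches);
}
```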
* [pbs-devel] [PATCH proxmox-backup v3 5/7] bin/proxmox-tape: add optional snapshots to restore command
2021-05-06 12:20 [pbs-devel] [PATCH proxmox-backup v3 0/7] tape: single snapshot restore Dominik Csapak
` (3 preceding siblings ...)
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 4/7] tape/inventory: add completion helper for tape snapshots Dominik Csapak
@ 2021-05-06 12:20 ` Dominik Csapak
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 6/7] ui: tape: add single snapshot restore Dominik Csapak
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 7/7] docs/api-viewer: improve rendering of array format Dominik Csapak
6 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2021-05-06 12:20 UTC (permalink / raw)
To: pbs-devel
and add the appropriate completion helper
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/bin/proxmox-tape.rs | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/src/bin/proxmox-tape.rs b/src/bin/proxmox-tape.rs
index e18f334c..46bd4ecc 100644
--- a/src/bin/proxmox-tape.rs
+++ b/src/bin/proxmox-tape.rs
@@ -34,6 +34,7 @@ use proxmox_backup::{
MEDIA_LABEL_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
Userid,
+ TAPE_RESTORE_SNAPSHOT_SCHEMA,
},
},
config::{
@@ -51,6 +52,7 @@ use proxmox_backup::{
},
complete_media_label_text,
complete_media_set_uuid,
+ complete_media_set_snapshots,
file_formats::{
PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
MediaContentHeader,
@@ -886,6 +888,14 @@ async fn backup(mut param: Value) -> Result<(), Error> {
type: Userid,
optional: true,
},
+ "snapshots": {
+ description: "List of snapshots.",
+ type: Array,
+ optional: true,
+ items: {
+ schema: TAPE_RESTORE_SNAPSHOT_SCHEMA,
+ },
+ },
owner: {
type: Authid,
optional: true,
@@ -977,9 +987,10 @@ fn main() {
.insert(
"restore",
CliCommand::new(&API_METHOD_RESTORE)
- .arg_param(&["media-set", "store"])
+ .arg_param(&["media-set", "store", "snapshots"])
.completion_cb("store", complete_datastore_name)
.completion_cb("media-set", complete_media_set_uuid)
+ .completion_cb("snapshots", complete_media_set_snapshots)
)
.insert(
"barcode-label",
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 6/7] ui: tape: add single snapshot restore
2021-05-06 12:20 [pbs-devel] [PATCH proxmox-backup v3 0/7] tape: single snapshot restore Dominik Csapak
` (4 preceding siblings ...)
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 5/7] bin/proxmox-tape: add optional snapshots to restore command Dominik Csapak
@ 2021-05-06 12:20 ` Dominik Csapak
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 7/7] docs/api-viewer: improve rendering of array format Dominik Csapak
6 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2021-05-06 12:20 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/tape/BackupOverview.js | 41 ++++++++++++++++++++++++++++++++++
www/tape/window/TapeRestore.js | 25 +++++++++++++++++++++
2 files changed, 66 insertions(+)
diff --git a/www/tape/BackupOverview.js b/www/tape/BackupOverview.js
index 0cc0e18e..c028d58d 100644
--- a/www/tape/BackupOverview.js
+++ b/www/tape/BackupOverview.js
@@ -16,6 +16,38 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
}).show();
},
+ restoreSingle: function(button, record) {
+ let me = this;
+ let view = me.getView();
+ let selection = view.getSelection();
+ if (!selection || selection.length < 1) {
+ return;
+ }
+
+ let node = selection[0];
+ if (node.data.restoreid === undefined) {
+ return;
+ }
+ let restoreid = node.data.restoreid;
+ let mediaset = node.data.text;
+ let uuid = node.data['media-set-uuid'];
+ let datastores = [node.data.store];
+
+ Ext.create('PBS.TapeManagement.TapeRestoreWindow', {
+ mediaset,
+ uuid,
+ list: [
+ restoreid,
+ ],
+ datastores,
+ listeners: {
+ destroy: function() {
+ me.reload();
+ },
+ },
+ }).show();
+ },
+
restore: function(button, record) {
let me = this;
let view = me.getView();
@@ -149,6 +181,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
entry.text = entry.snapshot;
entry.leaf = true;
entry.children = [];
+ entry.restoreid = `${entry.store}:${entry.snapshot}`;
let iconCls = PBS.Utils.get_type_icon_cls(entry.snapshot);
if (iconCls !== '') {
entry.iconCls = `fa ${iconCls}`;
@@ -262,6 +295,14 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
parentXType: 'treepanel',
enableFn: (rec) => !!rec.data['media-set-uuid'],
},
+ {
+ xtype: 'proxmoxButton',
+ disabled: true,
+ text: gettext('Restore Snapshot'),
+ handler: 'restoreSingle',
+ parentXType: 'treepanel',
+ enableFn: (rec) => !!rec.data.restoreid,
+ },
],
columns: [
diff --git a/www/tape/window/TapeRestore.js b/www/tape/window/TapeRestore.js
index 85011cba..ed4a8b97 100644
--- a/www/tape/window/TapeRestore.js
+++ b/www/tape/window/TapeRestore.js
@@ -10,6 +10,18 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
showTaskViewer: true,
isCreate: true,
+ cbindData: function(config) {
+ let me = this;
+ me.isSingle = false;
+ me.listText = "";
+ if (me.list !== undefined) {
+ me.isSingle = true;
+ me.listText = me.list.join('<br>');
+ me.title = gettext('Restore Snapshot');
+ }
+ return {};
+ },
+
defaults: {
labelWidth: 120,
},
@@ -33,6 +45,10 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
}
delete values.mapping;
+ if (me.up('window').list !== undefined) {
+ values.snapshots = me.up('window').list;
+ }
+
values.store = datastores.join(',');
return values;
@@ -55,6 +71,15 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
value: '{uuid}',
},
},
+ {
+ xtype: 'displayfield',
+ fieldLabel: gettext('Snapshot(s)'),
+ submitValue: false,
+ cbind: {
+ hidden: '{!isSingle}',
+ value: '{listText}',
+ },
+ },
{
xtype: 'pbsDriveSelector',
fieldLabel: gettext('Drive'),
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 7/7] docs/api-viewer: improve rendering of array format
2021-05-06 12:20 [pbs-devel] [PATCH proxmox-backup v3 0/7] tape: single snapshot restore Dominik Csapak
` (5 preceding siblings ...)
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 6/7] ui: tape: add single snapshot restore Dominik Csapak
@ 2021-05-06 12:20 ` Dominik Csapak
6 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2021-05-06 12:20 UTC (permalink / raw)
To: pbs-devel
by showing
'[format, ...]'
where 'format' is the simple format from the type of the items
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
docs/api-viewer/PBSAPI.js | 31 +++++++++++++++++++++++--------
1 file changed, 23 insertions(+), 8 deletions(-)
diff --git a/docs/api-viewer/PBSAPI.js b/docs/api-viewer/PBSAPI.js
index 2417b4de..3f08418e 100644
--- a/docs/api-viewer/PBSAPI.js
+++ b/docs/api-viewer/PBSAPI.js
@@ -86,13 +86,9 @@ Ext.onReady(function() {
return pdef['enum'] ? 'enum' : (pdef.type || 'string');
};
- var render_format = function(value, metaData, record) {
- var pdef = record.data;
-
- metaData.style = 'white-space:normal;'
-
+ let render_simple_format = function(pdef, type_fallback) {
if (pdef.typetext)
- return Ext.htmlEncode(pdef.typetext);
+ return pdef.typetext;
if (pdef['enum'])
return pdef['enum'].join(' | ');
@@ -101,9 +97,28 @@ Ext.onReady(function() {
return pdef.format;
if (pdef.pattern)
- return Ext.htmlEncode(pdef.pattern);
+ return pdef.pattern;
+
+ if (pdef.type === 'boolean')
+ return `<true|false>`;
+
+ if (type_fallback && pdef.type)
+ return `<${pdef.type}>`;
+
+ return;
+ };
+
+ let render_format = function(value, metaData, record) {
+ let pdef = record.data;
+
+ metaData.style = 'white-space:normal;'
+
+ if (pdef.type === 'array' && pdef.items) {
+ let format = render_simple_format(pdef.items, true);
+ return `[${Ext.htmlEncode(format)}, ...]`;
+ }
- return '';
+ return Ext.htmlEncode(render_simple_format(pdef) || '');
};
var real_path = function(path) {
--
2.20.1
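To make the fallback chain in the patch above easier to follow, here is a hypothetical Rust transliteration of `render_simple_format`/the array branch of `render_format` (illustration only; HTML encoding omitted, field names mirror the schema properties the viewer inspects):

```rust
// Schema properties consulted by the api-viewer, most specific first.
#[derive(Default)]
struct ParamDef {
    typetext: Option<String>,
    enum_values: Option<Vec<String>>,
    format: Option<String>,
    pattern: Option<String>,
    type_name: Option<String>,
}

fn render_simple_format(pdef: &ParamDef, type_fallback: bool) -> Option<String> {
    if let Some(t) = &pdef.typetext { return Some(t.clone()); }
    if let Some(e) = &pdef.enum_values { return Some(e.join(" | ")); }
    if let Some(f) = &pdef.format { return Some(f.clone()); }
    if let Some(p) = &pdef.pattern { return Some(p.clone()); }
    if pdef.type_name.as_deref() == Some("boolean") {
        return Some("<true|false>".to_string());
    }
    if type_fallback {
        if let Some(t) = &pdef.type_name { return Some(format!("<{}>", t)); }
    }
    None
}

// An array parameter renders as '[<item format>, ...]'.
fn render_array_format(items: &ParamDef) -> String {
    format!("[{}, ...]", render_simple_format(items, true).unwrap_or_default())
}

fn main() {
    let items = ParamDef { type_name: Some("string".to_string()), ..Default::default() };
    println!("{}", render_array_format(&items));
}
```

So an array of plain strings with no typetext, enum, format, or pattern falls through to the `<type>` fallback and renders as `[<string>, ...]`.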
* [pbs-devel] applied: [PATCH proxmox-backup v3 1/7] api2/tape/restore: return backup manifest in try_restore_snapshot_archive
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 1/7] api2/tape/restore: return backup manifest in try_restore_snapshot_archive Dominik Csapak
@ 2021-05-07 10:49 ` Dietmar Maurer
0 siblings, 0 replies; 10+ messages in thread
From: Dietmar Maurer @ 2021-05-07 10:49 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
applied
On 5/6/21 2:20 PM, Dominik Csapak wrote:
> we'll use that for partial snapshot restore
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/api2/tape/restore.rs | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
> index f3452364..9884b379 100644
> --- a/src/api2/tape/restore.rs
> +++ b/src/api2/tape/restore.rs
> @@ -800,7 +800,7 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
> worker: Arc<WorkerTask>,
> decoder: &mut pxar::decoder::sync::Decoder<R>,
> snapshot_path: &Path,
> -) -> Result<(), Error> {
> +) -> Result<BackupManifest, Error> {
>
> let _root = match decoder.next() {
> None => bail!("missing root entry"),
> @@ -886,9 +886,10 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
> }
> }
>
> - if manifest.is_none() {
> - bail!("missing manifest");
> - }
> + let manifest = match manifest {
> + None => bail!("missing manifest"),
> + Some(manifest) => manifest,
> + };
>
> // Do not verify anything here, because this would be to slow (causes tape stops).
>
> @@ -902,7 +903,7 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
> bail!("Atomic rename manifest {:?} failed - {}", manifest_path, err);
> }
>
> - Ok(())
> + Ok(manifest)
> }
>
> /// Try to restore media catalogs (form catalog_archives)
* [pbs-devel] applied: [PATCH proxmox-backup v3 2/7] api2/types: add TAPE_RESTORE_SNAPSHOT_SCHEMA
2021-05-06 12:20 ` [pbs-devel] [PATCH proxmox-backup v3 2/7] api2/types: add TAPE_RESTORE_SNAPSHOT_SCHEMA Dominik Csapak
@ 2021-05-07 10:51 ` Dietmar Maurer
0 siblings, 0 replies; 10+ messages in thread
From: Dietmar Maurer @ 2021-05-07 10:51 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
applied with a cleanup on top:
I moved all API-related type/regex definitions from backup_info.rs to
src/api2/types/mod.rs
On 5/6/21 2:20 PM, Dominik Csapak wrote:
> which is 'store:type/id/time'
>
> needed to refactor SNAPSHOT_PATH_REGEX_STR from backup_info
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/api2/types/mod.rs | 11 +++++++++++
> src/backup.rs | 1 +
> src/backup/backup_info.rs | 9 ++++++++-
> 3 files changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/src/api2/types/mod.rs b/src/api2/types/mod.rs
> index e829f207..21b5eade 100644
> --- a/src/api2/types/mod.rs
> +++ b/src/api2/types/mod.rs
> @@ -114,6 +114,8 @@ const_regex!{
> pub UUID_REGEX = r"^[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}$";
>
> pub DATASTORE_MAP_REGEX = concat!(r"(:?", PROXMOX_SAFE_ID_REGEX_STR!(), r"=)?", PROXMOX_SAFE_ID_REGEX_STR!());
> +
> + pub TAPE_RESTORE_SNAPSHOT_REGEX = concat!(r"^", PROXMOX_SAFE_ID_REGEX_STR!(), r":", SNAPSHOT_PATH_REGEX_STR!(), r"$");
> }
>
> pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
> @@ -185,6 +187,9 @@ pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
> pub const DATASTORE_MAP_FORMAT: ApiStringFormat =
> ApiStringFormat::Pattern(&DATASTORE_MAP_REGEX);
>
> +pub const TAPE_RESTORE_SNAPSHOT_FORMAT: ApiStringFormat =
> + ApiStringFormat::Pattern(&TAPE_RESTORE_SNAPSHOT_REGEX);
> +
> pub const PASSWORD_SCHEMA: Schema = StringSchema::new("Password.")
> .format(&PASSWORD_FORMAT)
> .min_length(1)
> @@ -396,6 +401,12 @@ pub const DATASTORE_MAP_LIST_SCHEMA: Schema = StringSchema::new(
> .format(&ApiStringFormat::PropertyString(&DATASTORE_MAP_ARRAY_SCHEMA))
> .schema();
>
> +pub const TAPE_RESTORE_SNAPSHOT_SCHEMA: Schema = StringSchema::new(
> + "A snapshot in the format: 'store:type/id/time")
> + .format(&TAPE_RESTORE_SNAPSHOT_FORMAT)
> + .type_text("store:type/id/time")
> + .schema();
> +
> pub const MEDIA_SET_UUID_SCHEMA: Schema =
> StringSchema::new("MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).")
> .format(&UUID_FORMAT)
> diff --git a/src/backup.rs b/src/backup.rs
> index cca43881..ae937be0 100644
> --- a/src/backup.rs
> +++ b/src/backup.rs
> @@ -238,6 +238,7 @@ pub use fixed_index::*;
> mod dynamic_index;
> pub use dynamic_index::*;
>
> +#[macro_use]
> mod backup_info;
> pub use backup_info::*;
>
> diff --git a/src/backup/backup_info.rs b/src/backup/backup_info.rs
> index b0f6e31c..f39f2ed4 100644
> --- a/src/backup/backup_info.rs
> +++ b/src/backup/backup_info.rs
> @@ -25,6 +25,13 @@ macro_rules! BACKUP_TIME_RE {
> };
> }
>
> +#[macro_export]
> +macro_rules! SNAPSHOT_PATH_REGEX_STR {
> + () => (
> + concat!(r"(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")")
> + );
> +}
> +
> const_regex! {
> BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
>
> @@ -37,7 +44,7 @@ const_regex! {
> GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
>
> SNAPSHOT_PATH_REGEX = concat!(
> - r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")$");
> + r"^", SNAPSHOT_PATH_REGEX_STR!(), r"$");
> }
>
> /// BackupGroup is a directory containing a list of BackupDir
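As an aside on the `'store:type/id/time'` format the schema describes, a value matching it splits cleanly into its four components. A hypothetical sketch using only the standard library (the example spec values are made up; the real code validates via TAPE_RESTORE_SNAPSHOT_REGEX instead):

```rust
// Split a 'store:type/id/time' snapshot spec into its components.
// Returns None if the colon or any of the three path segments is missing.
fn parse_snapshot_spec(spec: &str) -> Option<(&str, &str, &str, &str)> {
    let (store, path) = spec.split_once(':')?;
    let mut parts = path.splitn(3, '/');
    match (parts.next(), parts.next(), parts.next()) {
        (Some(ty), Some(id), Some(time)) => Some((store, ty, id, time)),
        _ => None,
    }
}

fn main() {
    let spec = "store1:vm/100/2021-05-06T12:20:05Z";
    if let Some((store, ty, id, time)) = parse_snapshot_spec(spec) {
        println!("store={} type={} id={} time={}", store, ty, id, time);
    }
}
```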