* [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality
@ 2025-05-08 13:05 Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 01/21] datastore/api: mark snapshots as trash on destroy Christian Ebner
` (21 more replies)
0 siblings, 22 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
This patch series implements a trash bin functionality, marking
backup snapshots, groups and namespaces as trashed on prune and
forget instead of deleting them immediately. Cleanup is deferred to
the garbage collection job, allowing trashed items to be recovered if
they were removed by accident.
In contrast to the previous version 1 of the patch series [0], which
moved trashed items to a separate folder structure, this patch series
utilizes the proposed `.trashed` marker file approach to mark and
filter content items. These marker files have to be set and removed
consistently and are used by the iterators to filter content based on
its trash state.
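In essence, the marker file handling boils down to something like the
following sketch (helper names are illustrative only, not the actual
API introduced by this series):

    use std::path::Path;

    const TRASH_MARKER_FILENAME: &str = ".trashed";

    /// Mark a content item (snapshot, group or namespace dir) as trash.
    fn set_trashed(item_dir: &Path) -> std::io::Result<()> {
        std::fs::File::create(item_dir.join(TRASH_MARKER_FILENAME)).map(|_| ())
    }

    /// Check whether a content item is currently marked as trash.
    fn is_trashed(item_dir: &Path) -> bool {
        item_dir.join(TRASH_MARKER_FILENAME).exists()
    }

Iterators and list helpers then use such checks to include or exclude
trashed items.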
Sending this series as an RFC in order to gain some initial feedback
on the chosen implementation approach and possible
shortcomings/oversights. Further feedback, especially on the design
of the WebUI (which still has some rough edges), is very welcome!
[0] https://lore.proxmox.com/pbs-devel/7db77767-ed54-4d7f-8caa-24b7216b159c@proxmox.com/T/
Christian Ebner (21):
datastore/api: mark snapshots as trash on destroy
datastore: mark groups as trash on destroy
datastore: allow filtering of backups by their trash status
datastore: ignore trashed snapshots for last successful backup
sync: ignore trashed snapshots when reading from local source
api: tape: check trash marker when trying to write snapshot
sync: ignore trashed groups in local source reader
datastore: namespace: add filter for trash status
datastore: refactor recursive namespace removal
datastore: mark namespace as trash instead of deleting it
datastore: check for trash marker in namespace exists check
datastore: clear trashed snapshot dir if re-creation requested
datastore: recreate trashed backup groups if requested
datastore: GC: clean-up trashed snapshots, groups and namespaces
client: expose skip trash flags for cli commands
api: datastore: add flag to list trashed snapshots only
api: namespace: add option to list all namespaces, including trashed
api: admin: implement endpoints to restore trashed contents
ui: add recover for trashed items tab to datastore panel
ui: drop 'permanent' in group/snapshot forget, default is to trash
ui: allow to skip trash on namespace deletion
pbs-datastore/examples/ls-snapshots.rs | 8 +-
pbs-datastore/src/backup_info.rs | 130 +++-
pbs-datastore/src/datastore.rs | 320 ++++++++-
pbs-datastore/src/hierarchy.rs | 91 ++-
pbs-datastore/src/lib.rs | 1 +
proxmox-backup-client/src/group.rs | 14 +-
proxmox-backup-client/src/namespace.rs | 14 +-
proxmox-backup-client/src/snapshot.rs | 16 +-
src/api2/admin/datastore.rs | 225 +++++-
src/api2/admin/namespace.rs | 28 +-
src/api2/backup/environment.rs | 1 +
src/api2/tape/backup.rs | 20 +-
src/backup/hierarchy.rs | 26 +-
src/backup/verify.rs | 4 +-
src/bin/proxmox-backup-manager.rs | 12 +-
src/bin/proxmox_backup_manager/prune.rs | 2 +-
src/server/prune_job.rs | 7 +-
src/server/pull.rs | 25 +-
src/server/sync.rs | 7 +-
www/Makefile | 1 +
www/datastore/Content.js | 4 +-
www/datastore/Panel.js | 8 +
www/datastore/RecoverTrashed.js | 876 ++++++++++++++++++++++++
www/window/NamespaceEdit.js | 21 +-
24 files changed, 1737 insertions(+), 124 deletions(-)
create mode 100644 www/datastore/RecoverTrashed.js
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 01/21] datastore/api: mark snapshots as trash on destroy
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 02/21] datastore: mark groups " Christian Ebner
` (20 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
In order to implement the trash bin functionality, mark snapshots
as trash instead of removing them by default. However, provide a
`skip-trash` flag to opt out and destroy the snapshot including its
contents immediately.
Trashed snapshots are marked by creating a `.trashed` marker file
inside the snapshot folder. Actual removal of the snapshot will be
deferred to the garbage collection task.
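For callers this means choosing explicitly between trashing and
immediate removal, roughly (assuming a `snapshot: BackupDir` in
scope):

    // default: create the `.trashed` marker, defer removal to GC
    snapshot.destroy(false, false)?;
    // `skip-trash`: remove the snapshot directory and contents now
    snapshot.destroy(false, true)?;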
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/backup_info.rs | 66 ++++++++++++++++++--------------
pbs-datastore/src/datastore.rs | 2 +-
src/api2/admin/datastore.rs | 18 ++++++++-
3 files changed, 55 insertions(+), 31 deletions(-)
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index d4732fdd9..76bcd15f5 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -21,6 +21,7 @@ use crate::manifest::{BackupManifest, MANIFEST_LOCK_NAME};
use crate::{DataBlob, DataStore};
pub const DATASTORE_LOCKS_DIR: &str = "/run/proxmox-backup/locks";
+pub const TRASH_MARKER_FILENAME: &str = ".trashed";
// TODO: Remove with PBS 5
// Note: The `expect()` call here will only happen if we can neither confirm nor deny the existence
@@ -228,7 +229,7 @@ impl BackupGroup {
delete_stats.increment_protected_snapshots();
continue;
}
- snap.destroy(false)?;
+ snap.destroy(false, false)?;
delete_stats.increment_removed_snapshots();
}
@@ -575,7 +576,8 @@ impl BackupDir {
/// Destroy the whole snapshot, bails if it's protected
///
/// Setting `force` to true skips locking and thus ignores if the backup is currently in use.
- pub fn destroy(&self, force: bool) -> Result<(), Error> {
+ /// Setting `skip_trash` to true will remove the snapshot instead of marking it as trash.
+ pub fn destroy(&self, force: bool, skip_trash: bool) -> Result<(), Error> {
let (_guard, _manifest_guard);
if !force {
_guard = self
@@ -588,37 +590,45 @@ impl BackupDir {
bail!("cannot remove protected snapshot"); // use special error type?
}
- let full_path = self.full_path();
- log::info!("removing backup snapshot {:?}", full_path);
- std::fs::remove_dir_all(&full_path).map_err(|err| {
- format_err!("removing backup snapshot {:?} failed - {}", full_path, err,)
- })?;
+ let mut full_path = self.full_path();
+ log::info!("removing backup snapshot {full_path:?}");
+ if skip_trash {
+ std::fs::remove_dir_all(&full_path).map_err(|err| {
+ format_err!("removing backup snapshot {full_path:?} failed - {err}")
+ })?;
+ } else {
+ full_path.push(TRASH_MARKER_FILENAME);
+ let _trash_file =
+ std::fs::File::create(full_path).context("failed to set trash file")?;
+ }
// remove no longer needed lock files
let _ = std::fs::remove_file(self.manifest_lock_path()); // ignore errors
let _ = std::fs::remove_file(self.lock_path()); // ignore errors
- let group = BackupGroup::from(self);
- let guard = group.lock().with_context(|| {
- format!("while checking if group '{group:?}' is empty during snapshot destruction")
- });
-
- // Only remove the group if all of the following is true:
- //
- // - we can lock it: if we can't lock the group, it is still in use (either by another
- // backup process or a parent caller (who needs to take care that empty groups are
- // removed themselves).
- // - it is now empty: if the group isn't empty, removing it will fail (to avoid removing
- // backups that might still be used).
- // - the new locking mechanism is used: if the old mechanism is used, a group removal here
- // could lead to a race condition.
- //
- // Do not error out, as we have already removed the snapshot, there is nothing a user could
- // do to rectify the situation.
- if guard.is_ok() && group.list_backups()?.is_empty() && !*OLD_LOCKING {
- group.remove_group_dir()?;
- } else if let Err(err) = guard {
- log::debug!("{err:#}");
+ if skip_trash {
+ let group = BackupGroup::from(self);
+ let guard = group.lock().with_context(|| {
+ format!("while checking if group '{group:?}' is empty during snapshot destruction")
+ });
+
+ // Only remove the group if all of the following is true:
+ //
+ // - we can lock it: if we can't lock the group, it is still in use (either by another
+ // backup process or a parent caller (who needs to take care that empty groups are
+ // removed themselves).
+ // - it is now empty: if the group isn't empty, removing it will fail (to avoid removing
+ // backups that might still be used).
+ // - the new locking mechanism is used: if the old mechanism is used, a group removal here
+ // could lead to a race condition.
+ //
+ // Do not error out, as we have already removed the snapshot, there is nothing a user could
+ // do to rectify the situation.
+ if guard.is_ok() && group.list_backups()?.is_empty() && !*OLD_LOCKING {
+ group.remove_group_dir()?;
+ } else if let Err(err) = guard {
+ log::debug!("{err:#}");
+ }
}
Ok(())
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index cbf78ecb6..6df26e825 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -686,7 +686,7 @@ impl DataStore {
) -> Result<(), Error> {
let backup_dir = self.backup_dir(ns.clone(), backup_dir.clone())?;
- backup_dir.destroy(force)
+ backup_dir.destroy(force, true)
}
/// Returns the time of the last successful backup
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 392494488..aafd1bbd7 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -402,6 +402,12 @@ pub async fn list_snapshot_files(
type: pbs_api_types::BackupDir,
flatten: true,
},
+ "skip-trash": {
+ type: bool,
+ optional: true,
+ default: false,
+ description: "Immediately remove the snapshot, not marking it as trash.",
+ },
},
},
access: {
@@ -415,6 +421,7 @@ pub async fn delete_snapshot(
store: String,
ns: Option<BackupNamespace>,
backup_dir: pbs_api_types::BackupDir,
+ skip_trash: bool,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
@@ -435,7 +442,7 @@ pub async fn delete_snapshot(
let snapshot = datastore.backup_dir(ns, backup_dir)?;
- snapshot.destroy(false)?;
+ snapshot.destroy(false, skip_trash)?;
Ok(Value::Null)
})
@@ -979,6 +986,12 @@ pub fn verify(
optional: true,
description: "Spins up an asynchronous task that does the work.",
},
+ "skip-trash": {
+ type: bool,
+ optional: true,
+ default: false,
+ description: "Immediately remove the snapshot, not marking it as trash.",
+ },
},
},
returns: pbs_api_types::ADMIN_DATASTORE_PRUNE_RETURN_TYPE,
@@ -995,6 +1008,7 @@ pub fn prune(
keep_options: KeepOptions,
store: String,
ns: Option<BackupNamespace>,
+ skip_trash: bool,
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
@@ -1098,7 +1112,7 @@ pub fn prune(
});
if !keep {
- if let Err(err) = backup_dir.destroy(false) {
+ if let Err(err) = backup_dir.destroy(false, skip_trash) {
warn!(
"failed to remove dir {:?}: {}",
backup_dir.relative_path(),
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 02/21] datastore: mark groups as trash on destroy
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 01/21] datastore/api: mark snapshots as trash on destroy Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 03/21] datastore: allow filtering of backups by their trash status Christian Ebner
` (19 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
In order to implement the trash bin functionality, mark all the
snapshots of the group and the group itself as trash instead of
deleting them right away. Cleanup of the group is deferred to the
garbage collection.
Groups and snapshots are marked by the trash marker file. New backups
to this group will check for the marker file (see subsequent
commits) and clear the whole group, including all of its snapshots,
before creating a new snapshot within that group. Otherwise ownership
conflicts could arise. This implies that a new backup clears the
whole trashed group.
Snapshots already marked as trash within the same backup group will
be cleared as well when the group is destroyed with `skip-trash`.
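On the API level this looks roughly as follows (assuming a
`group: BackupGroup` in scope):

    // default: mark all snapshots and the group itself as trash
    let stats = group.destroy(false)?;
    // skip-trash: remove the snapshots (including already trashed
    // ones) and the group directory for good
    let stats = group.destroy(true)?;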
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/backup_info.rs | 19 ++++++++++++++++---
pbs-datastore/src/datastore.rs | 4 ++--
2 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index 76bcd15f5..9ce4cb0f8 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -215,7 +215,7 @@ impl BackupGroup {
///
/// Returns `BackupGroupDeleteStats`, containing the number of deleted snapshots
/// and number of protected snaphsots, which therefore were not removed.
- pub fn destroy(&self) -> Result<BackupGroupDeleteStats, Error> {
+ pub fn destroy(&self, skip_trash: bool) -> Result<BackupGroupDeleteStats, Error> {
let _guard = self
.lock()
.with_context(|| format!("while destroying group '{self:?}'"))?;
@@ -229,14 +229,20 @@ impl BackupGroup {
delete_stats.increment_protected_snapshots();
continue;
}
- snap.destroy(false, false)?;
+ snap.destroy(false, skip_trash)?;
delete_stats.increment_removed_snapshots();
}
// Note: make sure the old locking mechanism isn't used as `remove_dir_all` is not safe in
// that case
if delete_stats.all_removed() && !*OLD_LOCKING {
- self.remove_group_dir()?;
+ if skip_trash {
+ self.remove_group_dir()?;
+ } else {
+ let path = self.full_group_path().join(TRASH_MARKER_FILENAME);
+ let _trash_file =
+ std::fs::File::create(path).context("failed to set trash file")?;
+ }
delete_stats.increment_removed_groups();
}
@@ -245,6 +251,13 @@ impl BackupGroup {
/// Helper function, assumes that no more snapshots are present in the group.
fn remove_group_dir(&self) -> Result<(), Error> {
+ let trash_path = self.full_group_path().join(TRASH_MARKER_FILENAME);
+ if let Err(err) = std::fs::remove_file(&trash_path) {
+ if err.kind() != std::io::ErrorKind::NotFound {
+ bail!("removing the trash file '{trash_path:?}' failed - {err}")
+ }
+ }
+
let owner_path = self.store.owner_path(&self.ns, &self.group);
std::fs::remove_file(&owner_path).map_err(|err| {
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 6df26e825..e546bc532 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -581,7 +581,7 @@ impl DataStore {
let mut stats = BackupGroupDeleteStats::default();
for group in self.iter_backup_groups(ns.to_owned())? {
- let delete_stats = group?.destroy()?;
+ let delete_stats = group?.destroy(true)?;
stats.add(&delete_stats);
removed_all_groups = removed_all_groups && delete_stats.all_removed();
}
@@ -674,7 +674,7 @@ impl DataStore {
) -> Result<BackupGroupDeleteStats, Error> {
let backup_group = self.backup_group(ns.clone(), backup_group.clone());
- backup_group.destroy()
+ backup_group.destroy(true)
}
/// Remove a backup directory including all content
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 03/21] datastore: allow filtering of backups by their trash status
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 01/21] datastore/api: mark snapshots as trash on destroy Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 02/21] datastore: mark groups " Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 04/21] datastore: ignore trashed snapshots for last successful backup Christian Ebner
` (18 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Extends the BackupGroup::list_backups method with an enum parameter
to filter backup snapshots based on their trash status.
This allows reusing the same logic for listing active, trashed or
all snapshots.
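Call sites then select the view they need, e.g. (assuming a
`group: BackupGroup` in scope):

    use pbs_datastore::backup_info::ListBackupFilter;

    let active = group.list_backups(ListBackupFilter::Active)?; // hides trashed
    let trashed = group.list_backups(ListBackupFilter::Trashed)?; // trashed only
    let all = group.list_backups(ListBackupFilter::All)?; // both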
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/backup_info.rs | 33 +++++++++++++++++++++++++++++---
pbs-datastore/src/datastore.rs | 4 ++--
src/api2/admin/datastore.rs | 10 +++++-----
src/api2/tape/backup.rs | 4 ++--
src/backup/verify.rs | 4 ++--
src/server/prune_job.rs | 3 ++-
src/server/pull.rs | 3 ++-
7 files changed, 45 insertions(+), 16 deletions(-)
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index 9ce4cb0f8..a8c864ac8 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -52,6 +52,12 @@ impl fmt::Debug for BackupGroup {
}
}
+pub enum ListBackupFilter {
+ Active,
+ All,
+ Trashed,
+}
+
impl BackupGroup {
pub(crate) fn new(
store: Arc<DataStore>,
@@ -99,7 +105,7 @@ impl BackupGroup {
self.full_group_path().exists()
}
- pub fn list_backups(&self) -> Result<Vec<BackupInfo>, Error> {
+ pub fn list_backups(&self, filter: ListBackupFilter) -> Result<Vec<BackupInfo>, Error> {
let mut list = vec![];
let path = self.full_group_path();
@@ -117,6 +123,19 @@ impl BackupGroup {
let files = list_backup_files(l2_fd, backup_time)?;
let protected = backup_dir.is_protected();
+ match filter {
+ ListBackupFilter::All => (),
+ ListBackupFilter::Trashed => {
+ if !backup_dir.is_trashed() {
+ return Ok(());
+ }
+ }
+ ListBackupFilter::Active => {
+ if backup_dir.is_trashed() {
+ return Ok(());
+ }
+ }
+ }
list.push(BackupInfo {
backup_dir,
@@ -132,7 +151,7 @@ impl BackupGroup {
/// Finds the latest backup inside a backup group
pub fn last_backup(&self, only_finished: bool) -> Result<Option<BackupInfo>, Error> {
- let backups = self.list_backups()?;
+ let backups = self.list_backups(ListBackupFilter::Active)?;
Ok(backups
.into_iter()
.filter(|item| !only_finished || item.is_finished())
@@ -480,6 +499,11 @@ impl BackupDir {
path.exists()
}
+ pub fn is_trashed(&self) -> bool {
+ let path = self.full_path().join(TRASH_MARKER_FILENAME);
+ path.exists()
+ }
+
pub fn backup_time_to_string(backup_time: i64) -> Result<String, Error> {
// fixme: can this fail? (avoid unwrap)
proxmox_time::epoch_to_rfc3339_utc(backup_time)
@@ -637,7 +661,10 @@ impl BackupDir {
//
// Do not error out, as we have already removed the snapshot, there is nothing a user could
// do to rectify the situation.
- if guard.is_ok() && group.list_backups()?.is_empty() && !*OLD_LOCKING {
+ if guard.is_ok()
+ && group.list_backups(ListBackupFilter::All)?.is_empty()
+ && !*OLD_LOCKING
+ {
group.remove_group_dir()?;
} else if let Err(err) = guard {
log::debug!("{err:#}");
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index e546bc532..867324380 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -28,7 +28,7 @@ use pbs_api_types::{
};
use pbs_config::BackupLockGuard;
-use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, OLD_LOCKING};
+use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, ListBackupFilter, OLD_LOCKING};
use crate::chunk_store::ChunkStore;
use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
use crate::fixed_index::{FixedIndexReader, FixedIndexWriter};
@@ -1158,7 +1158,7 @@ impl DataStore {
_ => bail!("exhausted retries and unexpected counter overrun"),
};
- let mut snapshots = match group.list_backups() {
+ let mut snapshots = match group.list_backups(ListBackupFilter::All) {
Ok(snapshots) => snapshots,
Err(err) => {
if group.exists() {
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index aafd1bbd7..133a6d658 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -51,7 +51,7 @@ use pbs_api_types::{
};
use pbs_client::pxar::{create_tar, create_zip};
use pbs_config::CachedUserInfo;
-use pbs_datastore::backup_info::BackupInfo;
+use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter};
use pbs_datastore::cached_chunk_reader::CachedChunkReader;
use pbs_datastore::catalog::{ArchiveEntry, CatalogReader};
use pbs_datastore::data_blob::DataBlob;
@@ -223,7 +223,7 @@ pub fn list_groups(
return Ok(group_info);
}
- let snapshots = match group.list_backups() {
+ let snapshots = match group.list_backups(ListBackupFilter::Active) {
Ok(snapshots) => snapshots,
Err(_) => return Ok(group_info),
};
@@ -624,7 +624,7 @@ unsafe fn list_snapshots_blocking(
return Ok(snapshots);
}
- let group_backups = group.list_backups()?;
+ let group_backups = group.list_backups(ListBackupFilter::Active)?;
snapshots.extend(
group_backups
@@ -657,7 +657,7 @@ async fn get_snapshots_count(
Ok(group) => group,
Err(_) => return Ok(counts), // TODO: add this as error counts?
};
- let snapshot_count = group.list_backups()?.len() as u64;
+ let snapshot_count = group.list_backups(ListBackupFilter::Active)?.len() as u64;
// only include groups with snapshots, counting/displaying empty groups can confuse
if snapshot_count > 0 {
@@ -1042,7 +1042,7 @@ pub fn prune(
}
let mut prune_result: Vec<PruneResult> = Vec::new();
- let list = group.list_backups()?;
+ let list = group.list_backups(ListBackupFilter::Active)?;
let mut prune_info = compute_prune_info(list, &keep_options)?;
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 31293a9a9..923cb7834 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -17,7 +17,7 @@ use pbs_api_types::{
};
use pbs_config::CachedUserInfo;
-use pbs_datastore::backup_info::{BackupDir, BackupInfo};
+use pbs_datastore::backup_info::{BackupDir, BackupInfo, ListBackupFilter};
use pbs_datastore::{DataStore, StoreProgress};
use crate::tape::TapeNotificationMode;
@@ -433,7 +433,7 @@ fn backup_worker(
progress.done_snapshots = 0;
progress.group_snapshots = 0;
- let snapshot_list = group.list_backups()?;
+ let snapshot_list = group.list_backups(ListBackupFilter::Active)?;
// filter out unfinished backups
let mut snapshot_list: Vec<_> = snapshot_list
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index 3d2cba8ac..1b5e8564b 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -14,7 +14,7 @@ use pbs_api_types::{
CryptMode, SnapshotVerifyState, VerifyState, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_VERIFY,
UPID,
};
-use pbs_datastore::backup_info::{BackupDir, BackupGroup, BackupInfo};
+use pbs_datastore::backup_info::{BackupDir, BackupGroup, BackupInfo, ListBackupFilter};
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{BackupManifest, FileInfo};
use pbs_datastore::{DataBlob, DataStore, StoreProgress};
@@ -411,7 +411,7 @@ pub fn verify_backup_group(
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
- let mut list = match group.list_backups() {
+ let mut list = match group.list_backups(ListBackupFilter::Active) {
Ok(list) => list,
Err(err) => {
info!(
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index 1c86647a0..596afe086 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -1,6 +1,7 @@
use std::sync::Arc;
use anyhow::Error;
+use pbs_datastore::backup_info::ListBackupFilter;
use tracing::{info, warn};
use pbs_api_types::{
@@ -54,7 +55,7 @@ pub fn prune_datastore(
)? {
let group = group?;
let ns = group.backup_ns();
- let list = group.list_backups()?;
+ let list = group.list_backups(ListBackupFilter::Active)?;
let mut prune_info = compute_prune_info(list, &prune_options.keep)?;
prune_info.reverse(); // delete older snapshots first
diff --git a/src/server/pull.rs b/src/server/pull.rs
index b1724c142..50d7b0806 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -7,6 +7,7 @@ use std::sync::{Arc, Mutex};
use std::time::SystemTime;
use anyhow::{bail, format_err, Error};
+use pbs_datastore::backup_info::ListBackupFilter;
use proxmox_human_byte::HumanByte;
use tracing::info;
@@ -660,7 +661,7 @@ async fn pull_group(
.target
.store
.backup_group(target_ns.clone(), group.clone());
- let local_list = group.list_backups()?;
+ let local_list = group.list_backups(ListBackupFilter::Active)?;
for info in local_list {
let snapshot = info.backup_dir;
if source_snapshots.contains(&snapshot.backup_time()) {
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 04/21] datastore: ignore trashed snapshots for last successful backup
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (2 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 03/21] datastore: allow filtering of backups by their trash status Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 05/21] sync: ignore trashed snapshots when reading from local source Christian Ebner
` (17 subsequent siblings)
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Exclude trashed snapshots when looking up the last successful backup
of the group, as trashed items are marked for deletion by the next
garbage collection run and must be considered as no longer present.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/backup_info.rs | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index a8c864ac8..189ed28ad 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -172,8 +172,13 @@ impl BackupGroup {
return Ok(());
}
- let mut manifest_path = PathBuf::from(backup_time);
- manifest_path.push(MANIFEST_BLOB_NAME.as_ref());
+ let path = PathBuf::from(backup_time);
+ let trash_marker_path = path.join(TRASH_MARKER_FILENAME);
+ if trash_marker_path.exists() {
+ return Ok(());
+ }
+
+ let manifest_path = path.join(MANIFEST_BLOB_NAME.as_ref());
use nix::fcntl::{openat, OFlag};
match openat(
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 05/21] sync: ignore trashed snapshots when reading from local source
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (3 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 04/21] datastore: ignore trashed snapshots for last successful backup Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot Christian Ebner
` (16 subsequent siblings)
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Trashed snapshots should never be synced, so filter them out when
listing the snapshot backup directories to be synced.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/sync.rs | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/server/sync.rs b/src/server/sync.rs
index 09814ef0c..3de2ec9a4 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -461,6 +461,7 @@ impl SyncSource for LocalSource {
.backup_group(namespace.clone(), group.clone())
.iter_snapshots()?
.filter_map(Result::ok)
+ .filter(|snapshot| !snapshot.is_trashed())
.map(|snapshot| snapshot.dir().to_owned())
.collect::<Vec<BackupDir>>())
}
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (4 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 05/21] sync: ignore trashed snapshots when reading from local source Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 07/21] sync: ignore trashed groups in local source reader Christian Ebner
` (15 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Since snapshots might be marked as trash, the snapshot directory
can still be present until cleaned up by garbage collection.
Therefore, check for the presence of the trash marker after acquiring
the locked snapshot reader and skip over marked ones.
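Schematically, the check happens only once the lock is held, so a
concurrently set marker is not missed (error handling elided):

    let reader = snapshot.locked_reader()?; // acquire the lock first
    if snapshot.is_trashed() {
        // trashed (possibly concurrently), awaits garbage collection
        return Ok(SnapshotBackupResult::Ignored);
    }
    // ... continue writing the snapshot contents to tape ...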
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/tape/backup.rs | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 923cb7834..17c8bc605 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -574,7 +574,13 @@ fn backup_snapshot(
info!("backup snapshot {snapshot_path:?}");
let snapshot_reader = match snapshot.locked_reader() {
- Ok(reader) => reader,
+ Ok(reader) => {
+ if snapshot.is_trashed() {
+ info!("snapshot {snapshot_path:?} trashed, skipping");
+ return Ok(SnapshotBackupResult::Ignored);
+ }
+ reader
+ }
Err(err) => {
if !snapshot.full_path().exists() {
// we got an error and the dir does not exist,
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 07/21] sync: ignore trashed groups in local source reader
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (5 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 08/21] datastore: namespace: add filter for trash status Christian Ebner
` (14 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Check for backup groups which have been marked as trash and exclude
them from sync.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/backup_info.rs | 7 +++++++
src/server/sync.rs | 1 +
2 files changed, 8 insertions(+)
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index 189ed28ad..b4fabb2cc 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -105,6 +105,13 @@ impl BackupGroup {
self.full_group_path().exists()
}
+ /// Check if the group is currently marked as trash by checking the presence of the trash
+ /// marker file in the group's directory
+ pub fn is_trashed(&self) -> bool {
+ let path = self.full_group_path().join(TRASH_MARKER_FILENAME);
+ path.exists()
+ }
+
pub fn list_backups(&self, filter: ListBackupFilter) -> Result<Vec<BackupInfo>, Error> {
let mut list = vec![];
diff --git a/src/server/sync.rs b/src/server/sync.rs
index 3de2ec9a4..ce338fbbe 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -447,6 +447,7 @@ impl SyncSource for LocalSource {
Some(owner),
)?
.filter_map(Result::ok)
+ .filter(|backup_group| !backup_group.is_trashed())
.map(|backup_group| backup_group.group().clone())
.collect::<Vec<pbs_api_types::BackupGroup>>())
}
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 08/21] datastore: namespace: add filter for trash status
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (6 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 07/21] sync: ignore trashed groups in local source reader Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 09/21] datastore: refactor recursive namespace removal Christian Ebner
` (13 subsequent siblings)
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
As for snapshots, allow filtering namespaces based on their trash
status while iterating.
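Analogous to ListBackupFilter for snapshots, usage looks roughly like
this (mirroring the updated ls-snapshots example):

    use pbs_datastore::NamespaceListFilter;

    // iterate all namespaces that are not marked as trash
    for ns in store.recursive_iter_backup_ns_ok(
        Default::default(),
        None, // no max-depth limit
        NamespaceListFilter::Active,
    )? {
        println!("found namespace store:/{ns}");
    }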
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/examples/ls-snapshots.rs | 8 ++-
pbs-datastore/src/datastore.rs | 24 ++++---
pbs-datastore/src/hierarchy.rs | 91 ++++++++++++++++++++++++--
pbs-datastore/src/lib.rs | 1 +
src/api2/admin/namespace.rs | 16 +++--
src/api2/tape/backup.rs | 8 ++-
src/backup/hierarchy.rs | 26 +++++---
src/server/pull.rs | 8 ++-
src/server/sync.rs | 5 +-
9 files changed, 149 insertions(+), 38 deletions(-)
diff --git a/pbs-datastore/examples/ls-snapshots.rs b/pbs-datastore/examples/ls-snapshots.rs
index 2eeea4892..1d3707a17 100644
--- a/pbs-datastore/examples/ls-snapshots.rs
+++ b/pbs-datastore/examples/ls-snapshots.rs
@@ -2,7 +2,7 @@ use std::path::PathBuf;
use anyhow::{bail, Error};
-use pbs_datastore::DataStore;
+use pbs_datastore::{DataStore, NamespaceListFilter};
fn run() -> Result<(), Error> {
let base: PathBuf = match std::env::args().nth(1) {
@@ -20,7 +20,11 @@ fn run() -> Result<(), Error> {
let store = unsafe { DataStore::open_path("", base, None)? };
- for ns in store.recursive_iter_backup_ns_ok(Default::default(), max_depth)? {
+ for ns in store.recursive_iter_backup_ns_ok(
+ Default::default(),
+ max_depth,
+ NamespaceListFilter::Active,
+ )? {
println!("found namespace store:/{}", ns);
for group in store.iter_backup_groups(ns)? {
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 867324380..aee69768f 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -32,7 +32,9 @@ use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, ListBackupFilter, O
use crate::chunk_store::ChunkStore;
use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
use crate::fixed_index::{FixedIndexReader, FixedIndexWriter};
-use crate::hierarchy::{ListGroups, ListGroupsType, ListNamespaces, ListNamespacesRecursive};
+use crate::hierarchy::{
+ ListGroups, ListGroupsType, ListNamespaces, ListNamespacesRecursive, NamespaceListFilter,
+};
use crate::index::IndexFile;
use crate::task_tracking::{self, update_active_operations};
use crate::DataBlob;
@@ -617,7 +619,7 @@ impl DataStore {
let mut stats = BackupGroupDeleteStats::default();
if delete_groups {
log::info!("removing whole namespace recursively below {store}:/{ns}",);
- for ns in self.recursive_iter_backup_ns(ns.to_owned())? {
+ for ns in self.recursive_iter_backup_ns(ns.to_owned(), NamespaceListFilter::Active)? {
let (removed_ns_groups, delete_stats) = self.remove_namespace_groups(&ns?)?;
stats.add(&delete_stats);
removed_all_requested = removed_all_requested && removed_ns_groups;
@@ -629,7 +631,7 @@ impl DataStore {
// now try to delete the actual namespaces, bottom up so that we can use safe rmdir that
// will choke if a new backup/group appeared in the meantime (but not on an new empty NS)
let mut children = self
- .recursive_iter_backup_ns(ns.to_owned())?
+ .recursive_iter_backup_ns(ns.to_owned(), NamespaceListFilter::All)?
.collect::<Result<Vec<BackupNamespace>, Error>>()?;
children.sort_by_key(|b| std::cmp::Reverse(b.depth()));
@@ -851,8 +853,9 @@ impl DataStore {
pub fn iter_backup_ns(
self: &Arc<DataStore>,
ns: BackupNamespace,
+ filter: NamespaceListFilter,
) -> Result<ListNamespaces, Error> {
- ListNamespaces::new(Arc::clone(self), ns)
+ ListNamespaces::new(Arc::clone(self), ns, filter)
}
/// Get a streaming iter over single-level backup namespaces of a datatstore, filtered by Ok
@@ -862,10 +865,11 @@ impl DataStore {
pub fn iter_backup_ns_ok(
self: &Arc<DataStore>,
ns: BackupNamespace,
+ filter: NamespaceListFilter,
) -> Result<impl Iterator<Item = BackupNamespace> + 'static, Error> {
let this = Arc::clone(self);
Ok(
- ListNamespaces::new(Arc::clone(self), ns)?.filter_map(move |ns| match ns {
+ ListNamespaces::new(Arc::clone(self), ns, filter)?.filter_map(move |ns| match ns {
Ok(ns) => Some(ns),
Err(err) => {
log::error!("list groups error on datastore {} - {}", this.name(), err);
@@ -882,8 +886,9 @@ impl DataStore {
pub fn recursive_iter_backup_ns(
self: &Arc<DataStore>,
ns: BackupNamespace,
+ filter: NamespaceListFilter,
) -> Result<ListNamespacesRecursive, Error> {
- ListNamespacesRecursive::new(Arc::clone(self), ns)
+ ListNamespacesRecursive::new(Arc::clone(self), ns, filter)
}
/// Get a streaming iter over single-level backup namespaces of a datatstore, filtered by Ok
@@ -894,12 +899,13 @@ impl DataStore {
self: &Arc<DataStore>,
ns: BackupNamespace,
max_depth: Option<usize>,
+ filter: NamespaceListFilter,
) -> Result<impl Iterator<Item = BackupNamespace> + 'static, Error> {
let this = Arc::clone(self);
Ok(if let Some(depth) = max_depth {
- ListNamespacesRecursive::new_max_depth(Arc::clone(self), ns, depth)?
+ ListNamespacesRecursive::new_max_depth(Arc::clone(self), ns, depth, filter)?
} else {
- ListNamespacesRecursive::new(Arc::clone(self), ns)?
+ ListNamespacesRecursive::new(Arc::clone(self), ns, filter)?
}
.filter_map(move |ns| match ns {
Ok(ns) => Some(ns),
@@ -1136,7 +1142,7 @@ impl DataStore {
let arc_self = Arc::new(self.clone());
for namespace in arc_self
- .recursive_iter_backup_ns(BackupNamespace::root())
+ .recursive_iter_backup_ns(BackupNamespace::root(), NamespaceListFilter::All)
.context("creating namespace iterator failed")?
{
let namespace = namespace.context("iterating namespaces failed")?;
diff --git a/pbs-datastore/src/hierarchy.rs b/pbs-datastore/src/hierarchy.rs
index e0bf84419..f6385ba6a 100644
--- a/pbs-datastore/src/hierarchy.rs
+++ b/pbs-datastore/src/hierarchy.rs
@@ -8,7 +8,7 @@ use anyhow::{bail, format_err, Error};
use pbs_api_types::{BackupNamespace, BackupType, BACKUP_DATE_REGEX, BACKUP_ID_REGEX};
use proxmox_sys::fs::get_file_type;
-use crate::backup_info::{BackupDir, BackupGroup};
+use crate::backup_info::{BackupDir, BackupGroup, TRASH_MARKER_FILENAME};
use crate::DataStore;
/// A iterator for all BackupDir's (Snapshots) in a BackupGroup
@@ -268,31 +268,49 @@ where
}
}
+#[derive(Clone, Copy, Debug)]
+pub enum NamespaceListFilter {
+ Active,
+ All,
+ Trashed,
+}
+
/// A iterator for a (single) level of Namespaces
pub struct ListNamespaces {
ns: BackupNamespace,
base_path: PathBuf,
ns_state: Option<proxmox_sys::fs::ReadDir>,
+ filter: NamespaceListFilter,
}
impl ListNamespaces {
/// construct a new single-level namespace iterator on a datastore with an optional anchor ns
- pub fn new(store: Arc<DataStore>, ns: BackupNamespace) -> Result<Self, Error> {
+ pub fn new(
+ store: Arc<DataStore>,
+ ns: BackupNamespace,
+ filter: NamespaceListFilter,
+ ) -> Result<Self, Error> {
Ok(ListNamespaces {
ns,
base_path: store.base_path(),
ns_state: None,
+ filter,
})
}
/// to allow constructing the iter directly on a path, e.g., provided by section config
///
/// NOTE: it's recommended to use the datastore one constructor or go over the recursive iter
- pub fn new_from_path(path: PathBuf, ns: Option<BackupNamespace>) -> Result<Self, Error> {
+ pub fn new_from_path(
+ path: PathBuf,
+ ns: Option<BackupNamespace>,
+ filter: NamespaceListFilter,
+ ) -> Result<Self, Error> {
Ok(ListNamespaces {
ns: ns.unwrap_or_default(),
base_path: path,
ns_state: None,
+ filter,
})
}
}
@@ -328,6 +346,50 @@ impl Iterator for ListNamespaces {
};
if let Ok(name) = entry.file_name().to_str() {
if name != "." && name != ".." {
+ use nix::fcntl::OFlag;
+ use nix::sys::stat::Mode;
+
+ match self.filter {
+ NamespaceListFilter::All => (),
+ NamespaceListFilter::Active => {
+ let mut trash_path = self.base_path.to_owned();
+ if !self.ns.is_root() {
+ trash_path.push(self.ns.path());
+ }
+ trash_path.push("ns");
+ trash_path.push(name);
+ trash_path.push(TRASH_MARKER_FILENAME);
+ match proxmox_sys::fd::openat(
+ &libc::AT_FDCWD,
+ &trash_path,
+ OFlag::O_PATH | OFlag::O_CLOEXEC | OFlag::O_NOFOLLOW,
+ Mode::empty(),
+ ) {
+ Ok(_) => continue,
+ Err(nix::errno::Errno::ENOENT) => (),
+ Err(err) => return Some(Err(err.into())),
+ }
+ }
+ NamespaceListFilter::Trashed => {
+ let mut trash_path = self.base_path.to_owned();
+ if !self.ns.is_root() {
+ trash_path.push(self.ns.path());
+ }
+ trash_path.push("ns");
+ trash_path.push(name);
+ trash_path.push(TRASH_MARKER_FILENAME);
+ match proxmox_sys::fd::openat(
+ &libc::AT_FDCWD,
+ &trash_path,
+ OFlag::O_PATH | OFlag::O_CLOEXEC | OFlag::O_NOFOLLOW,
+ Mode::empty(),
+ ) {
+ Ok(_) => (),
+ Err(nix::errno::Errno::ENOENT) => continue,
+ Err(err) => return Some(Err(err.into())),
+ }
+ }
+ }
return Some(BackupNamespace::from_parent_ns(&self.ns, name.to_string()));
}
}
@@ -368,12 +430,17 @@ pub struct ListNamespacesRecursive {
/// the maximal recursion depth from the anchor start ns (depth == 0) downwards
max_depth: u8,
state: Option<Vec<ListNamespaces>>, // vector to avoid code recursion
+ filter: NamespaceListFilter,
}
impl ListNamespacesRecursive {
/// Creates an recursive namespace iterator.
- pub fn new(store: Arc<DataStore>, ns: BackupNamespace) -> Result<Self, Error> {
- Self::new_max_depth(store, ns, pbs_api_types::MAX_NAMESPACE_DEPTH)
+ pub fn new(
+ store: Arc<DataStore>,
+ ns: BackupNamespace,
+ filter: NamespaceListFilter,
+ ) -> Result<Self, Error> {
+ Self::new_max_depth(store, ns, pbs_api_types::MAX_NAMESPACE_DEPTH, filter)
}
/// Creates an recursive namespace iterator that iterates recursively until depth is reached.
@@ -386,6 +453,7 @@ impl ListNamespacesRecursive {
store: Arc<DataStore>,
ns: BackupNamespace,
max_depth: usize,
+ filter: NamespaceListFilter,
) -> Result<Self, Error> {
if max_depth > pbs_api_types::MAX_NAMESPACE_DEPTH {
let limit = pbs_api_types::MAX_NAMESPACE_DEPTH + 1;
@@ -399,6 +467,7 @@ impl ListNamespacesRecursive {
ns,
max_depth: max_depth as u8,
state: None,
+ filter,
})
}
}
@@ -418,7 +487,11 @@ impl Iterator for ListNamespacesRecursive {
match iter.next() {
Some(Ok(ns)) => {
if state.len() < self.max_depth as usize {
- match ListNamespaces::new(Arc::clone(&self.store), ns.to_owned()) {
+ match ListNamespaces::new(
+ Arc::clone(&self.store),
+ ns.to_owned(),
+ self.filter,
+ ) {
Ok(iter) => state.push(iter),
Err(err) => log::error!("failed to create child ns iter {err}"),
}
@@ -434,7 +507,11 @@ impl Iterator for ListNamespacesRecursive {
// first next call ever: initialize state vector and start iterating at our level
let mut state = Vec::with_capacity(pbs_api_types::MAX_NAMESPACE_DEPTH);
if self.max_depth as usize > 0 {
- match ListNamespaces::new(Arc::clone(&self.store), self.ns.to_owned()) {
+ match ListNamespaces::new(
+ Arc::clone(&self.store),
+ self.ns.to_owned(),
+ self.filter,
+ ) {
Ok(list_ns) => state.push(list_ns),
Err(err) => {
// yield the error but set the state to Some to avoid re-try, a future
diff --git a/pbs-datastore/src/lib.rs b/pbs-datastore/src/lib.rs
index 5014b6c09..7390c6df8 100644
--- a/pbs-datastore/src/lib.rs
+++ b/pbs-datastore/src/lib.rs
@@ -208,6 +208,7 @@ pub use datastore::{
mod hierarchy;
pub use hierarchy::{
ListGroups, ListGroupsType, ListNamespaces, ListNamespacesRecursive, ListSnapshots,
+ NamespaceListFilter,
};
mod snapshot_reader;
diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
index 6cf88d89e..e5463524a 100644
--- a/src/api2/admin/namespace.rs
+++ b/src/api2/admin/namespace.rs
@@ -9,7 +9,7 @@ use pbs_api_types::{
DATASTORE_SCHEMA, NS_MAX_DEPTH_SCHEMA, PROXMOX_SAFE_ID_FORMAT,
};
-use pbs_datastore::DataStore;
+use pbs_datastore::{DataStore, NamespaceListFilter};
use crate::backup::{check_ns_modification_privs, check_ns_privs, NS_PRIVS_OK};
@@ -99,12 +99,14 @@ pub fn list_namespaces(
let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
- let iter = match datastore.recursive_iter_backup_ns_ok(parent, max_depth) {
- Ok(iter) => iter,
- // parent NS doesn't exists and user has no privs on it, avoid info leakage.
- Err(_) if parent_access.is_err() => http_bail!(FORBIDDEN, "permission check failed"),
- Err(err) => return Err(err),
- };
+ let iter =
+ match datastore.recursive_iter_backup_ns_ok(parent, max_depth, NamespaceListFilter::Active)
+ {
+ Ok(iter) => iter,
+ // parent NS doesn't exists and user has no privs on it, avoid info leakage.
+ Err(_) if parent_access.is_err() => http_bail!(FORBIDDEN, "permission check failed"),
+ Err(err) => return Err(err),
+ };
let ns_to_item =
|ns: BackupNamespace| -> NamespaceListItem { NamespaceListItem { ns, comment: None } };
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 17c8bc605..53554c54b 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -18,7 +18,7 @@ use pbs_api_types::{
use pbs_config::CachedUserInfo;
use pbs_datastore::backup_info::{BackupDir, BackupInfo, ListBackupFilter};
-use pbs_datastore::{DataStore, StoreProgress};
+use pbs_datastore::{DataStore, NamespaceListFilter, StoreProgress};
use crate::tape::TapeNotificationMode;
use crate::{
@@ -392,7 +392,11 @@ fn backup_worker(
}
let mut group_list = Vec::new();
- let namespaces = datastore.recursive_iter_backup_ns_ok(root_namespace, setup.max_depth)?;
+ let namespaces = datastore.recursive_iter_backup_ns_ok(
+ root_namespace,
+ setup.max_depth,
+ NamespaceListFilter::Active,
+ )?;
for ns in namespaces {
group_list.extend(datastore.list_backup_groups(ns)?);
}
diff --git a/src/backup/hierarchy.rs b/src/backup/hierarchy.rs
index 8dd71fcf7..ec8d4b9c8 100644
--- a/src/backup/hierarchy.rs
+++ b/src/backup/hierarchy.rs
@@ -7,7 +7,9 @@ use pbs_api_types::{
PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_READ,
};
use pbs_config::CachedUserInfo;
-use pbs_datastore::{backup_info::BackupGroup, DataStore, ListGroups, ListNamespacesRecursive};
+use pbs_datastore::{
+ backup_info::BackupGroup, DataStore, ListGroups, ListNamespacesRecursive, NamespaceListFilter,
+};
/// Asserts that `privs` are fulfilled on datastore + (optional) namespace.
pub fn check_ns_privs(
@@ -75,12 +77,15 @@ pub fn can_access_any_namespace(
) -> bool {
// NOTE: traversing the datastore could be avoided if we had an "ACL tree: is there any priv
// below /datastore/{store}" helper
- let mut iter =
- if let Ok(iter) = store.recursive_iter_backup_ns_ok(BackupNamespace::root(), None) {
- iter
- } else {
- return false;
- };
+ let mut iter = if let Ok(iter) = store.recursive_iter_backup_ns_ok(
+ BackupNamespace::root(),
+ None,
+ NamespaceListFilter::Active,
+ ) {
+ iter
+ } else {
+ return false;
+ };
let wanted =
PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP;
let name = store.name();
@@ -129,7 +134,12 @@ impl<'a> ListAccessibleBackupGroups<'a> {
owner_and_priv: Option<u64>,
auth_id: Option<&'a Authid>,
) -> Result<Self, Error> {
- let ns_iter = ListNamespacesRecursive::new_max_depth(Arc::clone(store), ns, max_depth)?;
+ let ns_iter = ListNamespacesRecursive::new_max_depth(
+ Arc::clone(store),
+ ns,
+ max_depth,
+ NamespaceListFilter::Active,
+ )?;
Ok(ListAccessibleBackupGroups {
auth_id,
ns_iter,
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 50d7b0806..d3c6fcf6a 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -25,7 +25,7 @@ use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{BackupManifest, FileInfo};
use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{check_backup_owner, DataStore, StoreProgress};
+use pbs_datastore::{check_backup_owner, DataStore, NamespaceListFilter, StoreProgress};
use pbs_tools::sha::sha256;
use super::sync::{
@@ -750,7 +750,11 @@ fn check_and_remove_vanished_ns(
let mut local_ns_list: Vec<BackupNamespace> = params
.target
.store
- .recursive_iter_backup_ns_ok(params.target.ns.clone(), Some(max_depth))?
+ .recursive_iter_backup_ns_ok(
+ params.target.ns.clone(),
+ Some(max_depth),
+ NamespaceListFilter::Active,
+ )?
.filter(|ns| {
let user_privs =
user_info.lookup_privs(¶ms.owner, &ns.acl_path(params.target.store.name()));
diff --git a/src/server/sync.rs b/src/server/sync.rs
index ce338fbbe..308a03977 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -26,7 +26,9 @@ use pbs_api_types::{
use pbs_client::{BackupReader, BackupRepository, HttpClient, RemoteChunkReader};
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{BackupManifest, DataStore, ListNamespacesRecursive, LocalChunkReader};
+use pbs_datastore::{
+ BackupManifest, DataStore, ListNamespacesRecursive, LocalChunkReader, NamespaceListFilter,
+};
use crate::backup::ListAccessibleBackupGroups;
use crate::server::jobstate::Job;
@@ -425,6 +427,7 @@ impl SyncSource for LocalSource {
self.store.clone(),
self.ns.clone(),
max_depth.unwrap_or(MAX_NAMESPACE_DEPTH),
+ NamespaceListFilter::Active,
)?
.collect();
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 09/21] datastore: refactor recursive namespace removal
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (7 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 08/21] datastore: namespace: add filter for trash status Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 10/21] datastore: mark namespace as trash instead of deleting it Christian Ebner
` (12 subsequent siblings)
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Split off the recursive destruction logic for the namespace folder
hierarchy into its own helper function.
This will allow separating the marking of namespaces as trashed
from the actual destruction and namespace cleanup by garbage
collection or when bypassing the trash.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index aee69768f..023a6a12e 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -628,6 +628,18 @@ impl DataStore {
log::info!("pruning empty namespace recursively below {store}:/{ns}");
}
+ removed_all_requested =
+ removed_all_requested && self.destroy_namespace_recursive(ns, delete_groups)?;
+
+ Ok((removed_all_requested, stats))
+ }
+
+ fn destroy_namespace_recursive(
+ self: &Arc<Self>,
+ ns: &BackupNamespace,
+ delete_groups: bool,
+ ) -> Result<bool, Error> {
+ let mut removed_all_requested = true;
// now try to delete the actual namespaces, bottom up so that we can use safe rmdir that
// will choke if a new backup/group appeared in the meantime (but not on an new empty NS)
let mut children = self
@@ -662,7 +674,7 @@ impl DataStore {
}
}
- Ok((removed_all_requested, stats))
+ Ok(removed_all_requested)
}
/// Remove a complete backup group including all snapshots.
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 10/21] datastore: mark namespace as trash instead of deleting it
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (8 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 09/21] datastore: refactor recursive namespace removal Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 11/21] datastore: check for trash marker in namespace exists check Christian Ebner
` (11 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
As for backup snapshots and groups, mark the namespace as trash
instead of removing it and its contents right away, if the trash
should not be bypassed.
Actual removal of the hierarchical folder structure has to be taken
care of by the garbage collection.
In order to avoid races during removal, first mark the namespace as
trash pending, then mark the snapshots and groups as trash, and only
afterwards rename the pending marker file to the trash marker file.
This way, concurrent backups can remove the trash pending marker to
avoid the namespace being trashed.
On re-creation of a trashed namespace, remove the marker file on the
namespace itself and on any parent component, from deepest to
shallowest. As trashing a full namespace can also set the trash
pending state for recursive namespace cleanup, remove any encountered
trash pending marker files as well, to avoid the namespace or its
parents being trashed.
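The intended ordering is, schematically (assuming `ns_dir` is the
namespace directory path; `trash_namespace_groups` is an illustrative
helper name, error handling elided):

    // 1. claim the namespace for trashing
    std::fs::File::create(ns_dir.join(".pending"))?;
    // 2. mark contained groups and snapshots as trash; a concurrent
    //    new backup may delete `.pending` to veto the trashing
    trash_namespace_groups(&ns_dir)?;
    // 3. commit the claim by renaming the marker
    std::fs::rename(ns_dir.join(".pending"), ns_dir.join(".trashed"))?;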
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 135 +++++++++++++++++++++++++++++----
src/api2/admin/namespace.rs | 2 +-
src/server/pull.rs | 2 +-
3 files changed, 123 insertions(+), 16 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 023a6a12e..b1d81e199 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -28,7 +28,9 @@ use pbs_api_types::{
};
use pbs_config::BackupLockGuard;
-use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, ListBackupFilter, OLD_LOCKING};
+use crate::backup_info::{
+ BackupDir, BackupGroup, BackupInfo, ListBackupFilter, OLD_LOCKING, TRASH_MARKER_FILENAME,
+};
use crate::chunk_store::ChunkStore;
use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
use crate::fixed_index::{FixedIndexReader, FixedIndexWriter};
@@ -42,6 +44,8 @@ use crate::DataBlob;
static DATASTORE_MAP: LazyLock<Mutex<HashMap<String, Arc<DataStoreImpl>>>> =
LazyLock::new(|| Mutex::new(HashMap::new()));
+const TRASH_PENDING_MARKER_FILENAME: &str = ".pending";
+
/// checks if auth_id is owner, or, if owner is a token, if
/// auth_id is the user of the token
pub fn check_backup_owner(owner: &Authid, auth_id: &Authid) -> Result<(), Error> {
@@ -554,6 +558,7 @@ impl DataStore {
ns_full_path.push(ns.path());
std::fs::create_dir_all(ns_full_path)?;
+ self.untrash_namespace(&ns)?;
Ok(ns)
}
@@ -565,6 +570,34 @@ impl DataStore {
path.exists()
}
+ /// Remove the namespace and all it's parent components from the trash by removing the trash or
+ /// trash-pending marker file for each namespace level from deepest to shallowest. Missing files
+ /// are ignored.
+ pub fn untrash_namespace(&self, namespace: &BackupNamespace) -> Result<(), Error> {
+ let mut namespace = namespace.clone();
+ while namespace.depth() > 0 {
+ let mut trash_file_path = self.base_path();
+ trash_file_path.push(namespace.path());
+ let mut pending_file_path = trash_file_path.clone();
+ pending_file_path.push(TRASH_PENDING_MARKER_FILENAME);
+ if let Err(err) = std::fs::remove_file(&pending_file_path) {
+ // ignore not found, either not trashed or un-trashed by concurrent operation
+ if err.kind() != std::io::ErrorKind::NotFound {
+ bail!("failed to remove trash-pending file {trash_file_path:?}: {err}");
+ }
+ }
+ trash_file_path.push(TRASH_MARKER_FILENAME);
+ if let Err(err) = std::fs::remove_file(&trash_file_path) {
+ // ignore not found, either not trashed or un-trashed by concurrent operation
+ if err.kind() != std::io::ErrorKind::NotFound {
+ bail!("failed to remove trash file {trash_file_path:?}: {err}");
+ }
+ }
+ namespace.pop();
+ }
+ Ok(())
+ }
+
/// Remove all backup groups of a single namespace level but not the namespace itself.
///
/// Does *not* descend into child-namespaces and doesn't remove the namespace itself either.
@@ -574,6 +607,7 @@ impl DataStore {
pub fn remove_namespace_groups(
self: &Arc<Self>,
ns: &BackupNamespace,
+ skip_trash: bool,
) -> Result<(bool, BackupGroupDeleteStats), Error> {
// FIXME: locking? The single groups/snapshots are already protected, so may not be
// necessary (depends on what we all allow to do with namespaces)
@@ -583,20 +617,22 @@ impl DataStore {
let mut stats = BackupGroupDeleteStats::default();
for group in self.iter_backup_groups(ns.to_owned())? {
- let delete_stats = group?.destroy(true)?;
+ let delete_stats = group?.destroy(skip_trash)?;
stats.add(&delete_stats);
removed_all_groups = removed_all_groups && delete_stats.all_removed();
}
- let base_file = std::fs::File::open(self.base_path())?;
- let base_fd = base_file.as_raw_fd();
- for ty in BackupType::iter() {
- let mut ty_dir = ns.path();
- ty_dir.push(ty.to_string());
- // best effort only, but we probably should log the error
- if let Err(err) = unlinkat(Some(base_fd), &ty_dir, UnlinkatFlags::RemoveDir) {
- if err != nix::errno::Errno::ENOENT {
- log::error!("failed to remove backup type {ty} in {ns} - {err}");
+ if skip_trash {
+ let base_file = std::fs::File::open(self.base_path())?;
+ let base_fd = base_file.as_raw_fd();
+ for ty in BackupType::iter() {
+ let mut ty_dir = ns.path();
+ ty_dir.push(ty.to_string());
+ // best effort only, but we probably should log the error
+ if let Err(err) = unlinkat(Some(base_fd), &ty_dir, UnlinkatFlags::RemoveDir) {
+ if err != nix::errno::Errno::ENOENT {
+ log::error!("failed to remove backup type {ty} in {ns} - {err}");
+ }
}
}
}
@@ -613,6 +649,7 @@ impl DataStore {
self: &Arc<Self>,
ns: &BackupNamespace,
delete_groups: bool,
+ skip_trash: bool,
) -> Result<(bool, BackupGroupDeleteStats), Error> {
let store = self.name();
let mut removed_all_requested = true;
@@ -620,16 +657,68 @@ impl DataStore {
if delete_groups {
log::info!("removing whole namespace recursively below {store}:/{ns}",);
for ns in self.recursive_iter_backup_ns(ns.to_owned(), NamespaceListFilter::Active)? {
- let (removed_ns_groups, delete_stats) = self.remove_namespace_groups(&ns?)?;
+ let namespace = ns?;
+
+ if !skip_trash {
+ let mut path = self.base_path();
+ path.push(namespace.path());
+ path.push(TRASH_PENDING_MARKER_FILENAME);
+ if let Err(err) = std::fs::File::create(&path) {
+ if err.kind() != std::io::ErrorKind::AlreadyExists {
+ return Err(err).context("failed to set trash pending marker file");
+ }
+ }
+ }
+
+ let (removed_ns_groups, delete_stats) =
+ self.remove_namespace_groups(&namespace, skip_trash)?;
stats.add(&delete_stats);
removed_all_requested = removed_all_requested && removed_ns_groups;
+
+ if !skip_trash && !removed_ns_groups {
+ self.untrash_namespace(&namespace)?;
+ }
}
} else {
log::info!("pruning empty namespace recursively below {store}:/{ns}");
}
- removed_all_requested =
- removed_all_requested && self.destroy_namespace_recursive(ns, delete_groups)?;
+ if skip_trash {
+ removed_all_requested =
+ removed_all_requested && self.destroy_namespace_recursive(ns, delete_groups)?;
+ return Ok((removed_all_requested, stats));
+ }
+
+ let mut children = self
+ .recursive_iter_backup_ns(ns.to_owned(), NamespaceListFilter::Active)?
+ .collect::<Result<Vec<BackupNamespace>, Error>>()?;
+
+ children.sort_by_key(|b| std::cmp::Reverse(b.depth()));
+
+ let mut all_trashed = true;
+ for ns in children.iter() {
+ let mut ns_dir = ns.path();
+ ns_dir.push("ns");
+
+ if !ns.is_root() {
+ let mut path = self.base_path();
+ path.push(ns.path());
+
+ let pending_path = path.join(TRASH_PENDING_MARKER_FILENAME);
+ path.push(TRASH_MARKER_FILENAME);
+ if let Err(err) = std::fs::rename(pending_path, path) {
+ if err.kind() == std::io::ErrorKind::NotFound {
+ all_trashed = false;
+ } else {
+ return Err(err).context("Renaming pending marker to trash marker failed");
+ }
+ }
+ }
+ }
+
+ if !all_trashed {
+ bail!("failed to prune namespace, not empty");
+ }
Ok((removed_all_requested, stats))
}
@@ -657,6 +746,24 @@ impl DataStore {
let _ = unlinkat(Some(base_fd), &ns_dir, UnlinkatFlags::RemoveDir);
if !ns.is_root() {
+ let rel_trash_path = ns.path().join(TRASH_MARKER_FILENAME);
+ if let Err(err) =
+ unlinkat(Some(base_fd), &rel_trash_path, UnlinkatFlags::NoRemoveDir)
+ {
+ if err != nix::errno::Errno::ENOENT {
+ bail!("removing the trash file '{rel_trash_path:?}' failed - {err}")
+ }
+ }
+ let rel_pending_path = ns.path().join(TRASH_PENDING_MARKER_FILENAME);
+ if let Err(err) =
+ unlinkat(Some(base_fd), &rel_pending_path, UnlinkatFlags::NoRemoveDir)
+ {
+ if err != nix::errno::Errno::ENOENT {
+ bail!(
+ "removing the trash pending file '{rel_pending_path:?}' failed - {err}"
+ )
+ }
+ }
match unlinkat(Some(base_fd), &ns.path(), UnlinkatFlags::RemoveDir) {
Ok(()) => log::debug!("removed namespace {ns}"),
Err(nix::errno::Errno::ENOENT) => {
diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
index e5463524a..a12d97883 100644
--- a/src/api2/admin/namespace.rs
+++ b/src/api2/admin/namespace.rs
@@ -166,7 +166,7 @@ pub fn delete_namespace(
let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
- let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups)?;
+ let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups, true)?;
if !removed_all {
let err_msg = if delete_groups {
if datastore.old_locking() {
diff --git a/src/server/pull.rs b/src/server/pull.rs
index d3c6fcf6a..a60ccbf10 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -729,7 +729,7 @@ fn check_and_remove_ns(params: &PullParameters, local_ns: &BackupNamespace) -> R
let (removed_all, _delete_stats) = params
.target
.store
- .remove_namespace_recursive(local_ns, true)?;
+ .remove_namespace_recursive(local_ns, true, true)?;
Ok(removed_all)
}
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 11/21] datastore: check for trash marker in namespace exists check
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (9 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 10/21] datastore: mark namespace as trash instead of deleting it Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 12/21] datastore: clear trashed snapshot dir if re-creation requested Christian Ebner
` (10 subsequent siblings)
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Namespaces which have been marked as trash are not considered
existing. This makes sure that sync jobs or tape backup jobs try to
create the namespace anew, thereby clearing all pre-existing contents
in it.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index b1d81e199..ffc6a7039 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -567,7 +567,11 @@ impl DataStore {
pub fn namespace_exists(&self, ns: &BackupNamespace) -> bool {
let mut path = self.base_path();
path.push(ns.path());
- path.exists()
+ if !path.exists() {
+ return false;
+ }
+ path.push(TRASH_MARKER_FILENAME);
+ !path.exists()
}
/// Remove the namespace and all its parent components from the trash by removing the trash or
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 12/21] datastore: clear trashed snapshot dir if re-creation requested
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (10 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 11/21] datastore: check for trash marker in namespace exists check Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 13/21] datastore: recreate trashed backup groups if requested Christian Ebner
` (9 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
If a previously trashed snapshot has been requested for re-creation
(e.g. by a sync job in push direction), drop the contents of the
currently trashed snapshot.
The snapshot directory itself is already locked at that point, either
by the old locking mechanism acting on the directory itself or by the
new locking mechanism. Therefore, concurrent operations can be
excluded.
For the call site this acts as if the snapshot directory has been
newly created.
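Condensed, the creation path then behaves roughly as sketched below (a
sketch only; it assumes the existing-directory branch matches on
AlreadyExists, which lies outside the visible hunk, and `backup_dir`
follows the names used in the diff):

    match std::fs::create_dir(&full_path) {
        Ok(_) => { /* genuinely new snapshot directory, lock and return */ }
        Err(err) if err.kind() == std::io::ErrorKind::AlreadyExists => {
            let _guard = backup_dir.lock()?; // excludes concurrent operations
            if backup_dir.is_trashed() {
                // wipe the leftover file entries, drop the group trash
                // marker, un-trash the namespace and report the directory
                // to the caller as if it had been newly created
            }
        }
        Err(err) => return Err(err.into()),
    }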
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 29 ++++++++++++++++++++++++++++-
1 file changed, 28 insertions(+), 1 deletion(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index ffc6a7039..4f7766c36 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -951,8 +951,9 @@ impl DataStore {
) -> Result<(PathBuf, bool, BackupLockGuard), Error> {
let backup_dir = self.backup_dir(ns.clone(), backup_dir.clone())?;
let relative_path = backup_dir.relative_path();
+ let full_path = backup_dir.full_path();
- match std::fs::create_dir(backup_dir.full_path()) {
+ match std::fs::create_dir(&full_path) {
Ok(_) => {
let guard = backup_dir.lock().with_context(|| {
format!("while creating new locked snapshot '{backup_dir:?}'")
@@ -963,6 +964,32 @@ impl DataStore {
let guard = backup_dir
.lock()
.with_context(|| format!("while creating locked snapshot '{backup_dir:?}'"))?;
+
+ if backup_dir.is_trashed() {
+ info!("clear trashed backup snapshot {full_path:?}");
+ let dir_entries = std::fs::read_dir(&full_path).context(
+ "failed to read directory contents during cleanup of trashed snapshot",
+ )?;
+ for entry in dir_entries {
+ let entry = entry.context(
+ "failed to read directory entry during clenup of trashed snapshot",
+ )?;
+ // Only expect regular file entries
+ std::fs::remove_file(entry.path()).context(
+ "failed to remove directory entry during clenup of trashed snapshot",
+ )?;
+ }
+ let group = BackupGroup::from(backup_dir);
+ let group_trash_file = group.full_group_path().join(TRASH_MARKER_FILENAME);
+ if let Err(err) = std::fs::remove_file(&group_trash_file) {
+ if err.kind() != std::io::ErrorKind::NotFound {
+ bail!("failed to remove group trash file of trashed snapshot");
+ }
+ }
+ self.untrash_namespace(ns)?;
+ return Ok((relative_path, true, guard));
+ }
+
Ok((relative_path, false, guard))
}
Err(e) => Err(e.into()),
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 13/21] datastore: recreate trashed backup groups if requested
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (11 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 12/21] datastore: clear trashed snapshot dir if re-creation requested Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 14/21] datastore: GC: clean-up trashed snapshots, groups and namespaces Christian Ebner
` (8 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
A whole backup group might have been marked as trashed, including all
of the contained snapshots.
Since a new backup to that group (even as a different user/owner)
should still work, permanently clear the whole trashed group before
recreation. This shortens the effective trash lifetime: the previously
trashed contents of the group are no longer recoverable, even before
the next garbage collection has run.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 4f7766c36..ca05e1bea 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -934,6 +934,32 @@ impl DataStore {
let guard = backup_group.lock().with_context(|| {
format!("while creating locked backup group '{backup_group:?}'")
})?;
+ if backup_group.is_trashed() {
+ info!("clear trashed backup group {full_path:?}");
+ let dir_entries = std::fs::read_dir(&full_path).context(
+ "failed to read directory contents during cleanup of trashed group",
+ )?;
+ for entry in dir_entries {
+ let entry = entry.context(
+ "failed to read directory entry during clenup of trashed group",
+ )?;
+ let file_type = entry.file_type().context(
+ "failed to get entry file type during clenup of trashed group",
+ )?;
+ if file_type.is_dir() {
+ std::fs::remove_dir_all(entry.path())
+ .context("failed to remove directory entry during clenup of trashed snapshot")?;
+ } else {
+ std::fs::remove_file(entry.path())
+ .context("failed to remove directory entry during clenup of trashed snapshot")?;
+ }
+ }
+ self.set_owner(ns, backup_group.group(), auth_id, false)?;
+ let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
+ self.untrash_namespace(ns)?;
+ return Ok((owner, guard));
+ }
+
let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
Ok((owner, guard))
}
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 14/21] datastore: GC: clean-up trashed snapshots, groups and namespaces
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (12 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 13/21] datastore: recreate trashed backup groups if requested Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 15/21] client: expose skip trash flags for cli commands Christian Ebner
` (7 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Clean up trashed items during phase 1 of garbage collection. If
encountered, index files located within trashed snapshots are touched
as well, deferring chunk cleanup to the next run.
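The snapshot removal itself uses a check-lock-recheck pattern so that a
concurrent recovery wins over the GC (a sketch; `snapshot` stands in
for the `snapshot.backup_dir` handle used in the patch):

    if snapshot.is_trashed() {
        if let Ok(_lock) = snapshot.lock() {
            // re-check after taking the lock: a recovery may have
            // removed the trash marker in between
            if snapshot.is_trashed() {
                std::fs::remove_dir_all(snapshot.full_path())?;
            }
        }
        // a failed lock means the snapshot is most likely being
        // untrashed right now, so it is simply skipped
    }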
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 84 +++++++++++++++++++++++++++++++++-
1 file changed, 83 insertions(+), 1 deletion(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index ca05e1bea..d88af4c68 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -574,6 +574,18 @@ impl DataStore {
!path.exists()
}
+ /// Checks if the namespace trash marker file exists,
+ /// does not imply that the namespace itself exists.
+ pub fn namespace_is_trashed(&self, namespace: &BackupNamespace) -> bool {
+ if namespace.is_root() {
+ return false;
+ }
+ let mut path = self.base_path();
+ path.push(namespace.path());
+ path.push(TRASH_MARKER_FILENAME);
+ path.exists()
+ }
+
/// Remove the namespace and all its parent components from the trash by removing the trash or
/// trash-pending marker file for each namespace level from deepest to shallowest. Missing files
/// are ignored.
@@ -1322,7 +1334,7 @@ impl DataStore {
.context("creating namespace iterator failed")?
{
let namespace = namespace.context("iterating namespaces failed")?;
- for group in arc_self.iter_backup_groups(namespace)? {
+ for group in arc_self.iter_backup_groups(namespace.clone())? {
let group = group.context("iterating backup groups failed")?;
// Avoid race between listing/marking of snapshots by GC and pruning the last
@@ -1403,10 +1415,80 @@ impl DataStore {
}
processed_index_files += 1;
}
+
+ // Only *try* to lock trashed snapshots and continue if that is not
+ // possible, as then the snapshot is most likely in the process of
+ // being untrashed. Check the trash state before and after locking to
+ // avoid otherwise possible races.
+ if snapshot.backup_dir.is_trashed() {
+ if let Ok(_lock) = snapshot.backup_dir.lock() {
+ if snapshot.backup_dir.is_trashed() {
+ let path = snapshot.backup_dir.full_path();
+ log::info!("removing trashed backup snapshot {path:?}");
+ std::fs::remove_dir_all(&path).with_context(|| {
+ format!("removing trashed backup snapshot {path:?} failed")
+ })?;
+ }
+ } else {
+ let path = snapshot.backup_dir.full_path();
+ warn!("failed to lock trashed backup snapshot can {path:?}");
+ }
+ }
}
break;
}
+ if group.is_trashed() {
+ if let Ok(_lock) = group.lock() {
+ if group.is_trashed() {
+ let trash_path = group.full_group_path().join(TRASH_MARKER_FILENAME);
+ std::fs::remove_file(&trash_path).map_err(|err| {
+ format_err!(
+ "removing the trash file '{trash_path:?}' failed - {err}"
+ )
+ })?;
+
+ let owner_path = group.full_group_path().join("owner");
+ std::fs::remove_file(&owner_path).map_err(|err| {
+ format_err!(
+ "removing the owner file '{owner_path:?}' failed - {err}"
+ )
+ })?;
+
+ let path = group.full_group_path();
+
+ std::fs::remove_dir(&path).map_err(|err| {
+ format_err!("removing group directory {path:?} failed - {err}")
+ })?;
+
+ // Remove any now empty backup type directory
+ let base_file = std::fs::File::open(self.base_path())?;
+ let base_fd = base_file.as_raw_fd();
+ for ty in BackupType::iter() {
+ let mut ty_dir = namespace.path();
+ ty_dir.push(ty.to_string());
+ match unlinkat(Some(base_fd), &ty_dir, UnlinkatFlags::RemoveDir) {
+ Ok(_) => (),
+ Err(nix::errno::Errno::ENOENT | nix::errno::Errno::ENOTEMPTY) => (),
+ Err(err) => info!("failed to remove backup type directory for {namespace} - {err}"),
+ }
+ }
+ } else {
+ let path = group.full_group_path();
+ warn!("failed to lock trashed backup group {path:?}");
+ }
+ }
+ }
+ }
+ if self.namespace_is_trashed(&namespace) {
+ // Remove the namespace, but only if it was empty (as the GC already cleared child
+ // items and no new ones have been created since).
+ match arc_self.destroy_namespace_recursive(&namespace, false) {
+ Ok(true) => info!("removed trashed namespace {namespace}"),
+ Ok(false) => info!("failed to remove trashed namespace {namespace}, not empty"),
+ Err(err) => warn!("removing trashed namespace failed: {err:#}"),
+ }
}
}
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 15/21] client: expose skip trash flags for cli commands
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (13 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 14/21] datastore: GC: clean-up trashed snapshots, groups and namespaces Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 16/21] api: datastore: add flag to list trashed snapshots only Christian Ebner
` (6 subsequent siblings)
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Allows to explicitly set/clear the `skip-trash` flag when pruning
namespaces, groups or snapshots via the client cli commands.
Set the default for `skip-trash` to false in order to use the trash.
Further, never add the flag to the api call parameters in the client
if not explicitly set, in order to keep backwards compatibility with
older PBS instances.
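The compatibility handling boils down to the following pattern on the
client side (sketch based on the snapshot command changes below):

    // `skip_trash: Option<bool>`: None means the user did not pass the flag.
    let mut args = snapshot_args(&backup_ns, &snapshot)?;
    if let Some(skip_trash) = skip_trash {
        args["skip-trash"] = serde_json::Value::from(skip_trash);
    }
    // if unset, the parameter is omitted entirely, so requests stay
    // compatible with servers that do not know the flag yet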
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 6 ++++--
proxmox-backup-client/src/group.rs | 14 +++++++++++++-
proxmox-backup-client/src/namespace.rs | 14 +++++++++++++-
proxmox-backup-client/src/snapshot.rs | 16 ++++++++++++----
src/api2/admin/datastore.rs | 11 +++++++++--
src/api2/admin/namespace.rs | 12 ++++++++++--
src/api2/backup/environment.rs | 1 +
src/server/prune_job.rs | 4 +++-
src/server/pull.rs | 12 +++++++-----
9 files changed, 72 insertions(+), 18 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index d88af4c68..0f86754bb 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -808,10 +808,11 @@ impl DataStore {
self: &Arc<Self>,
ns: &BackupNamespace,
backup_group: &pbs_api_types::BackupGroup,
+ skip_trash: bool,
) -> Result<BackupGroupDeleteStats, Error> {
let backup_group = self.backup_group(ns.clone(), backup_group.clone());
- backup_group.destroy(true)
+ backup_group.destroy(skip_trash)
}
/// Remove a backup directory including all content
@@ -820,10 +821,11 @@ impl DataStore {
ns: &BackupNamespace,
backup_dir: &pbs_api_types::BackupDir,
force: bool,
+ skip_trash: bool,
) -> Result<(), Error> {
let backup_dir = self.backup_dir(ns.clone(), backup_dir.clone())?;
- backup_dir.destroy(force, true)
+ backup_dir.destroy(force, skip_trash)
}
/// Returns the time of the last successful backup
diff --git a/proxmox-backup-client/src/group.rs b/proxmox-backup-client/src/group.rs
index 67f26e261..42f8e1e61 100644
--- a/proxmox-backup-client/src/group.rs
+++ b/proxmox-backup-client/src/group.rs
@@ -37,11 +37,20 @@ pub fn group_mgmt_cli() -> CliCommandMap {
type: BackupNamespace,
optional: true,
},
+ "skip-trash": {
+ type: bool,
+ optional: true,
+ description: "Immediately remove the group, not marking contents as trash.",
+ },
}
}
)]
/// Forget (remove) backup snapshots.
-async fn forget_group(group: String, mut param: Value) -> Result<(), Error> {
+async fn forget_group(
+ group: String,
+ skip_trash: Option<bool>,
+ mut param: Value,
+) -> Result<(), Error> {
let backup_group: BackupGroup = group.parse()?;
let repo = remove_repository_from_value(&mut param)?;
let client = connect(&repo)?;
@@ -63,6 +72,9 @@ async fn forget_group(group: String, mut param: Value) -> Result<(), Error> {
)?;
if confirmation.is_yes() {
let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
+ if let Some(skip_trash) = skip_trash {
+ api_param["skip-trash"] = Value::from(skip_trash);
+ }
if let Err(err) = client.delete(&path, Some(api_param)).await {
// "ENOENT: No such file or directory" is part of the error returned when the group
// has not been found. The full error contains the full datastore path and we would
diff --git a/proxmox-backup-client/src/namespace.rs b/proxmox-backup-client/src/namespace.rs
index 2929e394b..204a11b1d 100644
--- a/proxmox-backup-client/src/namespace.rs
+++ b/proxmox-backup-client/src/namespace.rs
@@ -136,11 +136,20 @@ async fn create_namespace(param: Value) -> Result<(), Error> {
description: "Destroys all groups in the hierarchy.",
optional: true,
},
+ "skip-trash": {
+ type: bool,
+ optional: true,
+ description: "Immediately remove the namespace, not marking contents as trash.",
+ },
}
},
)]
/// Delete an existing namespace.
-async fn delete_namespace(param: Value, delete_groups: Option<bool>) -> Result<(), Error> {
+async fn delete_namespace(
+ param: Value,
+ delete_groups: Option<bool>,
+ skip_trash: Option<bool>,
+) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let backup_ns = optional_ns_param(&param)?;
@@ -150,6 +159,9 @@ async fn delete_namespace(param: Value, delete_groups: Option<bool>) -> Result<(
let path = format!("api2/json/admin/datastore/{}/namespace", repo.store());
let mut param = json!({ "ns": backup_ns });
+ if let Some(skip_trash) = skip_trash {
+ param["skip-trash"] = Value::from(skip_trash);
+ }
if let Some(value) = delete_groups {
param["delete-groups"] = serde_json::to_value(value)?;
diff --git a/proxmox-backup-client/src/snapshot.rs b/proxmox-backup-client/src/snapshot.rs
index f195c23b7..a9b46726a 100644
--- a/proxmox-backup-client/src/snapshot.rs
+++ b/proxmox-backup-client/src/snapshot.rs
@@ -173,11 +173,16 @@ async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
type: String,
description: "Snapshot path.",
},
+ "skip-trash": {
+ type: bool,
+ optional: true,
+ description: "Immediately remove the snapshot, not marking it as trash.",
+ },
}
}
)]
/// Forget (remove) backup snapshots.
-async fn forget_snapshots(param: Value) -> Result<(), Error> {
+async fn forget_snapshots(skip_trash: Option<bool>, param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let backup_ns = optional_ns_param(&param)?;
@@ -188,9 +193,12 @@ async fn forget_snapshots(param: Value) -> Result<(), Error> {
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
- client
- .delete(&path, Some(snapshot_args(&backup_ns, &snapshot)?))
- .await?;
+ let mut args = snapshot_args(&backup_ns, &snapshot)?;
+ if let Some(skip_trash) = skip_trash {
+ args["skip-trash"] = Value::from(skip_trash);
+ }
+
+ client.delete(&path, Some(args)).await?;
record_repository(&repo);
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 133a6d658..84c0bf5b4 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -277,7 +277,13 @@ pub fn list_groups(
optional: true,
default: true,
description: "Return error when group cannot be deleted because of protected snapshots",
- }
+ },
+ "skip-trash": {
+ type: bool,
+ optional: true,
+ default: false,
+ description: "Immediately remove the snapshot, not marking it as trash.",
+ },
},
},
returns: {
@@ -295,6 +301,7 @@ pub async fn delete_group(
ns: Option<BackupNamespace>,
error_on_protected: bool,
group: pbs_api_types::BackupGroup,
+ skip_trash: bool,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<BackupGroupDeleteStats, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -312,7 +319,7 @@ pub async fn delete_group(
&group,
)?;
- let delete_stats = datastore.remove_backup_group(&ns, &group)?;
+ let delete_stats = datastore.remove_backup_group(&ns, &group, skip_trash)?;
let error_msg = if datastore.old_locking() {
"could not remove empty groups directories due to old locking mechanism.\n\
diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
index a12d97883..6f70497a6 100644
--- a/src/api2/admin/namespace.rs
+++ b/src/api2/admin/namespace.rs
@@ -144,7 +144,13 @@ pub fn list_namespaces(
optional: true,
default: true,
description: "Return error when namespace cannot be deleted because of protected snapshots",
- }
+ },
+ "skip-trash": {
+ type: bool,
+ optional: true,
+ default: true,
+ description: "Remove and namespace immediately, skip moving to trash",
+ },
},
},
access: {
@@ -157,6 +163,7 @@ pub fn delete_namespace(
ns: BackupNamespace,
delete_groups: bool,
error_on_protected: bool,
+ skip_trash: bool,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<BackupGroupDeleteStats, Error> {
@@ -166,7 +173,8 @@ pub fn delete_namespace(
let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
- let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups, true)?;
+ let (removed_all, stats) =
+ datastore.remove_namespace_recursive(&ns, delete_groups, skip_trash)?;
if !removed_all {
let err_msg = if delete_groups {
if datastore.old_locking() {
diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index 3d541b461..b5619eb8c 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -730,6 +730,7 @@ impl BackupEnvironment {
self.backup_dir.backup_ns(),
self.backup_dir.as_ref(),
true,
+ true,
)?;
Ok(())
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index 596afe086..2794dc3ab 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -76,7 +76,9 @@ pub fn prune_datastore(
info.backup_dir.backup_time_string()
);
if !keep && !dry_run {
- if let Err(err) = datastore.remove_backup_dir(ns, info.backup_dir.as_ref(), false) {
+ if let Err(err) =
+ datastore.remove_backup_dir(ns, info.backup_dir.as_ref(), false, true)
+ {
let path = info.backup_dir.relative_path();
warn!("failed to remove dir {path:?}: {err}");
}
diff --git a/src/server/pull.rs b/src/server/pull.rs
index a60ccbf10..0d5845e85 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -504,6 +504,7 @@ async fn pull_snapshot_from<'a>(
snapshot.backup_ns(),
snapshot.as_ref(),
true,
+ true,
) {
info!("cleanup error - {cleanup_err}");
}
@@ -678,7 +679,7 @@ async fn pull_group(
params
.target
.store
- .remove_backup_dir(&target_ns, snapshot.as_ref(), false)?;
+ .remove_backup_dir(&target_ns, snapshot.as_ref(), false, true)?;
sync_stats.add(SyncStats::from(RemovedVanishedStats {
snapshots: 1,
groups: 0,
@@ -997,10 +998,11 @@ pub(crate) async fn pull_ns(
continue;
}
info!("delete vanished group '{local_group}'");
- let delete_stats_result = params
- .target
- .store
- .remove_backup_group(&target_ns, local_group);
+ let delete_stats_result =
+ params
+ .target
+ .store
+ .remove_backup_group(&target_ns, local_group, false);
match delete_stats_result {
Ok(stats) => {
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 16/21] api: datastore: add flag to list trashed snapshots only
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (14 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 15/21] client: expose skip trash flags for cli commands Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 17/21] api: namespace: add option to list all namespaces, including trashed Christian Ebner
` (5 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Allows to conditionally show either active or trashed backup
snapshots, the latter being used when displaying the contents of the
trash for a given datastore.
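For illustration, both views are then served by the same endpoint
(request shape as used by the UI patches later in this series):

    GET /api2/json/admin/datastore/{store}/snapshots           -> active snapshots
    GET /api2/json/admin/datastore/{store}/snapshots?trashed=1 -> trashed snapshots only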
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/admin/datastore.rs | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 84c0bf5b4..cbd24c729 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -473,6 +473,12 @@ pub async fn delete_snapshot(
optional: true,
schema: BACKUP_ID_SCHEMA,
},
+ "trashed": {
+ type: bool,
+ optional: true,
+ default: false,
+ description: "List trashed snapshots only."
+ },
},
},
returns: pbs_api_types::ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE,
@@ -488,6 +494,7 @@ pub async fn list_snapshots(
ns: Option<BackupNamespace>,
backup_type: Option<BackupType>,
backup_id: Option<String>,
+ trashed: bool,
_param: Value,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
@@ -495,7 +502,7 @@ pub async fn list_snapshots(
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
tokio::task::spawn_blocking(move || unsafe {
- list_snapshots_blocking(store, ns, backup_type, backup_id, auth_id)
+ list_snapshots_blocking(store, ns, backup_type, backup_id, auth_id, trashed)
})
.await
.map_err(|err| format_err!("failed to await blocking task: {err}"))?
@@ -508,6 +515,7 @@ unsafe fn list_snapshots_blocking(
backup_type: Option<BackupType>,
backup_id: Option<String>,
auth_id: Authid,
+ trashed: bool,
) -> Result<Vec<SnapshotListItem>, Error> {
let ns = ns.unwrap_or_default();
@@ -631,7 +639,12 @@ unsafe fn list_snapshots_blocking(
return Ok(snapshots);
}
- let group_backups = group.list_backups(ListBackupFilter::Active)?;
+ let filter = if trashed {
+ ListBackupFilter::Trashed
+ } else {
+ ListBackupFilter::Active
+ };
+ let group_backups = group.list_backups(filter)?;
snapshots.extend(
group_backups
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 17/21] api: namespace: add option to list all namespaces, including trashed
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (15 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 16/21] api: datastore: add flag to list trashed snapshots only Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 18/21] api: admin: implement endpoints to restore trashed contents Christian Ebner
` (4 subsequent siblings)
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Add an optional parameter so all namespaces can be listed, including
ones which are marked as trash. This allows to display all namespaces
in the UI, independent of their current state.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/admin/namespace.rs | 28 +++++++++++++++++--------
src/bin/proxmox-backup-manager.rs | 12 ++++++++---
src/bin/proxmox_backup_manager/prune.rs | 2 +-
3 files changed, 29 insertions(+), 13 deletions(-)
diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
index 6f70497a6..098e69ee8 100644
--- a/src/api2/admin/namespace.rs
+++ b/src/api2/admin/namespace.rs
@@ -75,6 +75,12 @@ pub fn create_namespace(
schema: NS_MAX_DEPTH_SCHEMA,
optional: true,
},
+ "include-trashed": {
+ type: bool,
+ optional: true,
+ default: false,
+ description: "List also namespaces marked as trash.",
+ }
},
},
returns: pbs_api_types::ADMIN_DATASTORE_LIST_NAMESPACE_RETURN_TYPE,
@@ -89,6 +95,7 @@ pub fn list_namespaces(
store: String,
parent: Option<BackupNamespace>,
max_depth: Option<usize>,
+ include_trashed: bool,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<NamespaceListItem>, Error> {
let parent = parent.unwrap_or_default();
@@ -99,14 +106,17 @@ pub fn list_namespaces(
let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
- let iter =
- match datastore.recursive_iter_backup_ns_ok(parent, max_depth, NamespaceListFilter::Active)
- {
- Ok(iter) => iter,
- // parent NS doesn't exists and user has no privs on it, avoid info leakage.
- Err(_) if parent_access.is_err() => http_bail!(FORBIDDEN, "permission check failed"),
- Err(err) => return Err(err),
- };
+ let filter = if include_trashed {
+ NamespaceListFilter::All
+ } else {
+ NamespaceListFilter::Active
+ };
+ let iter = match datastore.recursive_iter_backup_ns_ok(parent, max_depth, filter) {
+ Ok(iter) => iter,
+ // parent NS doesn't exist and user has no privs on it, avoid info leakage.
+ Err(_) if parent_access.is_err() => http_bail!(FORBIDDEN, "permission check failed"),
+ Err(err) => return Err(err),
+ };
let ns_to_item =
|ns: BackupNamespace| -> NamespaceListItem { NamespaceListItem { ns, comment: None } };
@@ -148,7 +158,7 @@ pub fn list_namespaces(
"skip-trash": {
type: bool,
optional: true,
- default: true,
+ default: false,
description: "Remove and namespace immediately, skip moving to trash",
},
},
diff --git a/src/bin/proxmox-backup-manager.rs b/src/bin/proxmox-backup-manager.rs
index d4363e717..0c4d70b41 100644
--- a/src/bin/proxmox-backup-manager.rs
+++ b/src/bin/proxmox-backup-manager.rs
@@ -857,8 +857,14 @@ pub fn complete_remote_datastore_namespace(
Some((None, source_store)) => {
let mut rpcenv = CliEnvironment::new();
rpcenv.set_auth_id(Some(String::from("root@pam")));
- crate::api2::admin::namespace::list_namespaces(source_store, None, None, &mut rpcenv)
- .ok()
+ crate::api2::admin::namespace::list_namespaces(
+ source_store,
+ None,
+ None,
+ false,
+ &mut rpcenv,
+ )
+ .ok()
}
_ => None,
} {
@@ -893,7 +899,7 @@ pub fn complete_sync_local_datastore_namespace(
if let Some(store) = store {
if let Ok(data) =
- crate::api2::admin::namespace::list_namespaces(store, None, None, &mut rpcenv)
+ crate::api2::admin::namespace::list_namespaces(store, None, None, false, &mut rpcenv)
{
for item in data {
list.push(item.ns.name());
diff --git a/src/bin/proxmox_backup_manager/prune.rs b/src/bin/proxmox_backup_manager/prune.rs
index 923eb6f51..b57c39bd7 100644
--- a/src/bin/proxmox_backup_manager/prune.rs
+++ b/src/bin/proxmox_backup_manager/prune.rs
@@ -164,7 +164,7 @@ fn complete_prune_local_datastore_namespace(
if let Some(store) = store {
if let Ok(data) =
- crate::api2::admin::namespace::list_namespaces(store, None, None, &mut rpcenv)
+ crate::api2::admin::namespace::list_namespaces(store, None, None, false, &mut rpcenv)
{
for item in data {
list.push(item.ns.name());
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 18/21] api: admin: implement endpoints to restore trashed contents
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (16 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 17/21] api: namespace: add option to list all namespaces, including trashed Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-09 12:27 ` Fabian Grünbichler
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 19/21] ui: add recover for trashed items tab to datastore panel Christian Ebner
` (3 subsequent siblings)
21 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Implements the api endpoints to restore trashed contents contained
within namespaces, backup groups or individual snapshots.
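The new endpoints are plain POST calls; recovering a single snapshot
could, for example, look like this (parameter names taken from the
endpoint definitions below, values purely illustrative):

    POST /api2/json/admin/datastore/{store}/recover-snapshot
        backup-type=vm
        backup-id=100
        backup-time=1746709500
        ns=my/namespace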
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/admin/datastore.rs | 173 +++++++++++++++++++++++++++++++++++-
1 file changed, 172 insertions(+), 1 deletion(-)
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index cbd24c729..eb033c3fc 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -51,7 +51,7 @@ use pbs_api_types::{
};
use pbs_client::pxar::{create_tar, create_zip};
use pbs_config::CachedUserInfo;
-use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter};
+use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter, TRASH_MARKER_FILENAME};
use pbs_datastore::cached_chunk_reader::CachedChunkReader;
use pbs_datastore::catalog::{ArchiveEntry, CatalogReader};
use pbs_datastore::data_blob::DataBlob;
@@ -2727,6 +2727,165 @@ pub async fn unmount(store: String, rpcenv: &mut dyn RpcEnvironment) -> Result<V
Ok(json!(upid))
}
+#[api(
+ input: {
+ properties: {
+ store: { schema: DATASTORE_SCHEMA },
+ ns: { type: BackupNamespace, },
+ },
+ },
+ access: {
+ permission: &Permission::Anybody,
+ description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
+ or DATASTORE_BACKUP and being the owner of the group",
+ },
+)]
+/// Recover trashed contents of a namespace.
+pub fn recover_namespace(
+ store: String,
+ ns: BackupNamespace,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+ let limited = check_ns_privs_full(
+ &store,
+ &ns,
+ &auth_id,
+ PRIV_DATASTORE_MODIFY,
+ PRIV_DATASTORE_BACKUP,
+ )?;
+
+ let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+
+ for backup_group in datastore.iter_backup_groups(ns.clone())? {
+ let backup_group = backup_group?;
+ if limited {
+ let owner = datastore.get_owner(&ns, backup_group.group())?;
+ if check_backup_owner(&owner, &auth_id).is_err() {
+ continue;
+ }
+ }
+ do_recover_group(&backup_group)?;
+ }
+
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ store: { schema: DATASTORE_SCHEMA },
+ group: {
+ type: pbs_api_types::BackupGroup,
+ flatten: true,
+ },
+ ns: {
+ type: BackupNamespace,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Anybody,
+ description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
+ or DATASTORE_BACKUP and being the owner of the group",
+ },
+)]
+/// Recover trashed contents of a backup group.
+pub fn recover_group(
+ store: String,
+ group: pbs_api_types::BackupGroup,
+ ns: Option<BackupNamespace>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+ let ns = ns.unwrap_or_default();
+ let datastore = check_privs_and_load_store(
+ &store,
+ &ns,
+ &auth_id,
+ PRIV_DATASTORE_MODIFY,
+ PRIV_DATASTORE_BACKUP,
+ Some(Operation::Write),
+ &group,
+ )?;
+
+ let backup_group = datastore.backup_group(ns, group);
+ do_recover_group(&backup_group)?;
+
+ Ok(())
+}
+
+fn do_recover_group(backup_group: &BackupGroup) -> Result<(), Error> {
+ let trashed_snapshots = backup_group.list_backups(ListBackupFilter::Trashed)?;
+ for snapshot in trashed_snapshots {
+ do_recover_snapshot(&snapshot.backup_dir)?;
+ }
+
+ let group_trash_path = backup_group.full_group_path().join(TRASH_MARKER_FILENAME);
+ if let Err(err) = std::fs::remove_file(&group_trash_path) {
+ if err.kind() != std::io::ErrorKind::NotFound {
+ bail!("failed to remove group trash file {group_trash_path:?} - {err}");
+ }
+ }
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ store: { schema: DATASTORE_SCHEMA },
+ backup_dir: {
+ type: pbs_api_types::BackupDir,
+ flatten: true,
+ },
+ ns: {
+ type: BackupNamespace,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Anybody,
+ description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
+ or DATASTORE_BACKUP and being the owner of the group",
+ },
+)]
+/// Recover trashed contents of a backup snapshot.
+pub fn recover_snapshot(
+ store: String,
+ backup_dir: pbs_api_types::BackupDir,
+ ns: Option<BackupNamespace>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+ let ns = ns.unwrap_or_default();
+ let datastore = check_privs_and_load_store(
+ &store,
+ &ns,
+ &auth_id,
+ PRIV_DATASTORE_MODIFY,
+ PRIV_DATASTORE_BACKUP,
+ Some(Operation::Write),
+ &backup_dir.group,
+ )?;
+
+ let snapshot = datastore.backup_dir(ns, backup_dir)?;
+ do_recover_snapshot(&snapshot)?;
+
+ Ok(())
+}
+
+fn do_recover_snapshot(snapshot_dir: &BackupDir) -> Result<(), Error> {
+ let trash_path = snapshot_dir.full_path().join(TRASH_MARKER_FILENAME);
+ if let Err(err) = std::fs::remove_file(&trash_path) {
+ if err.kind() != std::io::ErrorKind::NotFound {
+ bail!("failed to remove trash file {trash_path:?} - {err}");
+ }
+ }
+ Ok(())
+}
+
#[sortable]
const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
(
@@ -2792,6 +2951,18 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
"pxar-file-download",
&Router::new().download(&API_METHOD_PXAR_FILE_DOWNLOAD),
),
+ (
+ "recover-group",
+ &Router::new().post(&API_METHOD_RECOVER_GROUP),
+ ),
+ (
+ "recover-namespace",
+ &Router::new().post(&API_METHOD_RECOVER_NAMESPACE),
+ ),
+ (
+ "recover-snapshot",
+ &Router::new().post(&API_METHOD_RECOVER_SNAPSHOT),
+ ),
("rrd", &Router::new().get(&API_METHOD_GET_RRD_STATS)),
(
"snapshots",
--
2.39.5
* [pbs-devel] [RFC v2 proxmox-backup 19/21] ui: add recover for trashed items tab to datastore panel
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (17 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 18/21] api: admin: implement endpoints to restore trashed contents Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 20/21] ui: drop 'permanent' in group/snapshot forget, default is to trash Christian Ebner
` (2 subsequent siblings)
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Display a dedicated recover trashed tab which allows to inspect and
recover trashed items.
This is based on the pre-existing contents tab but drops any actions
which make no sense in the given context, such as editing of group
ownership, notes, verification, etc.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/Makefile | 1 +
www/datastore/Panel.js | 8 +
www/datastore/RecoverTrashed.js | 876 ++++++++++++++++++++++++++++++++
3 files changed, 885 insertions(+)
create mode 100644 www/datastore/RecoverTrashed.js
diff --git a/www/Makefile b/www/Makefile
index 44c5fa133..aa8955460 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -115,6 +115,7 @@ JSSRC= \
datastore/Panel.js \
datastore/DataStoreListSummary.js \
datastore/DataStoreList.js \
+ datastore/RecoverTrashed.js \
ServerStatus.js \
ServerAdministration.js \
NodeNotes.js \
diff --git a/www/datastore/Panel.js b/www/datastore/Panel.js
index ad9fc10fe..386b62284 100644
--- a/www/datastore/Panel.js
+++ b/www/datastore/Panel.js
@@ -99,6 +99,14 @@ Ext.define('PBS.DataStorePanel', {
datastore: '{datastore}',
},
},
+ {
+ xtype: 'pbsDataStoreRecoverTrashed',
+ itemId: 'trashed',
+ iconCls: 'fa fa-rotate-left',
+ cbind: {
+ datastore: '{datastore}',
+ },
+ },
],
initComponent: function() {
diff --git a/www/datastore/RecoverTrashed.js b/www/datastore/RecoverTrashed.js
new file mode 100644
index 000000000..2257a8cd3
--- /dev/null
+++ b/www/datastore/RecoverTrashed.js
@@ -0,0 +1,876 @@
+Ext.define('PBS.DataStoreRecoverTrashed', {
+ extend: 'Ext.tree.Panel',
+ alias: 'widget.pbsDataStoreRecoverTrashed',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ rootVisible: false,
+
+ title: gettext('Recover Trashed'),
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ init: function(view) {
+ if (!view.datastore) {
+ throw "no datastore specified";
+ }
+
+ this.store = Ext.create('Ext.data.Store', {
+ model: 'pbs-data-store-snapshots',
+ groupField: 'backup-group',
+ });
+ this.store.on('load', this.onLoad, this);
+
+ view.getStore().setSorters([
+ 'sortWeight',
+ 'text',
+ 'backup-time',
+ ]);
+ },
+
+ control: {
+ '#': { // view
+ rowdblclick: 'rowDoubleClicked',
+ },
+ 'pbsNamespaceSelector': {
+ change: 'nsChange',
+ },
+ },
+
+ rowDoubleClicked: function(table, rec, el, rowId, ev) {
+ if (rec?.data?.ty === 'ns' && !rec.data.root) {
+ this.nsChange(null, rec.data.ns);
+ }
+ },
+
+ nsChange: function(field, value) {
+ let view = this.getView();
+ if (field === null) {
+ field = view.down('pbsNamespaceSelector');
+ field.setValue(value);
+ return;
+ }
+ view.namespace = value;
+ this.reload();
+ },
+
+ reload: function() {
+ let view = this.getView();
+
+ if (!view.store || !this.store) {
+ console.warn('cannot reload, no store(s)');
+ return;
+ }
+
+ let url = `/api2/json/admin/datastore/${view.datastore}/snapshots?trashed=1`;
+ if (view.namespace && view.namespace !== '') {
+ url += `&ns=${encodeURIComponent(view.namespace)}`;
+ }
+ this.store.setProxy({
+ type: 'proxmox',
+ timeout: 300*1000, // 5 minutes, we should make that api call faster
+ url: url,
+ });
+
+ this.store.load();
+ },
+
+ getRecordGroups: function(records) {
+ let groups = {};
+
+ for (const item of records) {
+ var btype = item.data["backup-type"];
+ let group = btype + "/" + item.data["backup-id"];
+
+ if (groups[group] !== undefined) {
+ continue;
+ }
+
+ var cls = PBS.Utils.get_type_icon_cls(btype);
+ if (cls === "") {
+ console.warn(`got unknown backup-type '${btype}'`);
+ continue; // FIXME: auto render? what do?
+ }
+
+ groups[group] = {
+ text: group,
+ leaf: false,
+ iconCls: "fa " + cls,
+ expanded: false,
+ backup_type: item.data["backup-type"],
+ backup_id: item.data["backup-id"],
+ children: [],
+ };
+ }
+
+ return groups;
+ },
+
+ loadNamespaceFromSameLevel: async function() {
+ let view = this.getView();
+ try {
+ let url = `/api2/extjs/admin/datastore/${view.datastore}/namespace?max-depth=1`;
+ if (view.namespace && view.namespace !== '') {
+ url += `&parent=${encodeURIComponent(view.namespace)}`;
+ }
+ url += '&include-trashed=1';
+ let { result: { data: ns } } = await Proxmox.Async.api2({ url });
+ return ns;
+ } catch (err) {
+ console.debug(err);
+ }
+ return [];
+ },
+
+ onLoad: async function(store, records, success, operation) {
+ let me = this;
+ let view = this.getView();
+
+ let namespaces = await me.loadNamespaceFromSameLevel();
+
+ if (!success) {
+ // TODO also check error code for != 403 ?
+ if (namespaces.length === 0) {
+ let error = Proxmox.Utils.getResponseErrorMessage(operation.getError());
+ Proxmox.Utils.setErrorMask(view.down('treeview'), error);
+ return;
+ } else {
+ records = [];
+ }
+ } else {
+ Proxmox.Utils.setErrorMask(view.down('treeview'));
+ }
+
+ let groups = this.getRecordGroups(records);
+
+ let selected;
+ let expanded = {};
+
+ view.getSelection().some(function(item) {
+ let id = item.data.text;
+ if (item.data.leaf) {
+ id = item.parentNode.data.text + id;
+ }
+ selected = id;
+ return true;
+ });
+
+ view.getRootNode().cascadeBy({
+ before: item => {
+ if (item.isExpanded() && !item.data.leaf) {
+ let id = item.data.text;
+ expanded[id] = true;
+ return true;
+ }
+ return false;
+ },
+ after: Ext.emptyFn,
+ });
+
+ for (const item of records) {
+ let group = item.data["backup-type"] + "/" + item.data["backup-id"];
+ let children = groups[group].children;
+
+ let data = item.data;
+
+ data.text = group + '/' + PBS.Utils.render_datetime_utc(data["backup-time"]);
+ data.leaf = false;
+ data.cls = 'no-leaf-icons';
+ data.matchesFilter = true;
+ data.ty = 'dir';
+
+ data.expanded = !!expanded[data.text];
+
+ data.children = [];
+ for (const file of data.files) {
+ file.text = file.filename;
+ file['crypt-mode'] = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
+ file.fingerprint = data.fingerprint;
+ file.leaf = true;
+ file.matchesFilter = true;
+ file.ty = 'file';
+
+ data.children.push(file);
+ }
+
+ children.push(data);
+ }
+
+ let children = [];
+ for (const [name, group] of Object.entries(groups)) {
+ let last_backup = 0;
+ let crypt = {
+ none: 0,
+ mixed: 0,
+ 'sign-only': 0,
+ encrypt: 0,
+ };
+ for (let item of group.children) {
+ crypt[PBS.Utils.cryptmap[item['crypt-mode']]]++;
+ if (item["backup-time"] > last_backup && item.size !== null) {
+ last_backup = item["backup-time"];
+ group["backup-time"] = last_backup;
+ group["last-comment"] = item.comment;
+ group.files = item.files;
+ group.size = item.size;
+ group.owner = item.owner;
+ }
+ }
+ group.count = group.children.length;
+ group.matchesFilter = true;
+ crypt.count = group.count;
+ group['crypt-mode'] = PBS.Utils.calculateCryptMode(crypt);
+ group.expanded = !!expanded[name];
+ group.sortWeight = 0;
+ group.ty = 'group';
+ children.push(group);
+ }
+
+ for (const item of namespaces) {
+ if (item.ns === view.namespace || (!view.namespace && item.ns === '')) {
+ continue;
+ }
+ children.push({
+ text: item.ns,
+ iconCls: 'fa fa-object-group',
+ expanded: true,
+ expandable: false,
+ ns: (view.namespaces ?? '') !== '' ? `/${item.ns}` : item.ns,
+ ty: 'ns',
+ sortWeight: 10,
+ leaf: true,
+ });
+ }
+
+ let isRootNS = !view.namespace || view.namespace === '';
+ let rootText = isRootNS
+ ? gettext('Root Namespace')
+ : Ext.String.format(gettext("Namespace '{0}'"), view.namespace);
+
+ let topNodes = [];
+ if (!isRootNS) {
+ let parentNS = view.namespace.split('/').slice(0, -1).join('/');
+ topNodes.push({
+ text: `.. (${parentNS === '' ? gettext('Root') : parentNS})`,
+ iconCls: 'fa fa-level-up',
+ ty: 'ns',
+ ns: parentNS,
+ sortWeight: -10,
+ leaf: true,
+ });
+ }
+ topNodes.push({
+ text: rootText,
+ iconCls: "fa fa-" + (isRootNS ? 'database' : 'object-group'),
+ expanded: true,
+ expandable: false,
+ sortWeight: -5,
+ root: true, // fake root
+ isRootNS,
+ ty: 'ns',
+ children: children,
+ });
+
+ view.setRootNode({
+ expanded: true,
+ children: topNodes,
+ });
+
+ if (!children.length) {
+ view.setEmptyText(Ext.String.format(
+ gettext('No accessible snapshots found in namespace {0}'),
+ view.namespace && view.namespace !== '' ? `'${view.namespace}'`: gettext('Root'),
+ ));
+ }
+
+ if (selected !== undefined) {
+ let selection = view.getRootNode().findChildBy(function(item) {
+ let id = item.data.text;
+ if (item.data.leaf) {
+ id = item.parentNode.data.text + id;
+ }
+ return selected === id;
+ }, undefined, true);
+ if (selection) {
+ view.setSelection(selection);
+ view.getView().focusRow(selection);
+ }
+ }
+
+ Proxmox.Utils.setErrorMask(view, false);
+ if (view.getStore().getFilters().length > 0) {
+ let searchBox = me.lookup("searchbox");
+ let searchvalue = searchBox.getValue();
+ me.search(searchBox, searchvalue);
+ }
+ },
+
+ recoverNamespace: function(data) {
+ let me = this;
+ let view = me.getView();
+ if (!view.namespace || view.namespace === '') {
+ console.warn('recoverNamespace called with root NS!');
+ return;
+ }
+ Ext.Msg.show({
+ title: gettext('Confirm'),
+ icon: Ext.Msg.WARNING,
+ message: gettext('Are you sure you want to recover all namespace contents?'),
+ buttons: Ext.Msg.YESNO,
+ defaultFocus: 'yes',
+ callback: function(btn) {
+ if (btn !== 'yes') {
+ return;
+ }
+ let params = { "ns": view.namespace };
+
+ Proxmox.Utils.API2Request({
+ url: `/admin/datastore/${view.datastore}/recover-namespace`,
+ params,
+ method: 'POST',
+ waitMsgTarget: view,
+ failure: function(response, opts) {
+ Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+ },
+ callback: me.reload.bind(me),
+ });
+ },
+ });
+ },
+
+ forgetNamespace: function(data) {
+ let me = this;
+ let view = me.getView();
+ if (!view.namespace || view.namespace === '') {
+ console.warn('forgetNamespace called with root NS!');
+ return;
+ }
+ let nsParts = view.namespace.split('/');
+ let nsName = nsParts.pop();
+ let parentNS = nsParts.join('/');
+
+ Ext.create('PBS.window.NamespaceDelete', {
+ datastore: view.datastore,
+ namespace: view.namespace,
+ item: { id: nsName },
+ apiCallDone: success => {
+ if (success) {
+ view.namespace = parentNS; // move up before reload to avoid "ENOENT" error
+ me.reload();
+ }
+ },
+ });
+ },
+
+ recoverGroup: function(data) {
+ let me = this;
+ let view = me.getView();
+
+ let params = {
+ "backup-type": data.backup_type,
+ "backup-id": data.backup_id,
+ };
+ if (view.namespace && view.namespace !== '') {
+ params.ns = view.namespace;
+ }
+
+ Ext.Msg.show({
+ title: gettext('Confirm'),
+ icon: Ext.Msg.WARNING,
+ message: Ext.String.format(gettext('Are you sure you want to recover group {0}'), `'${data.text}'`),
+ buttons: Ext.Msg.YESNO,
+ defaultFocus: 'yes',
+ callback: function(btn) {
+ if (btn !== 'yes') {
+ return;
+ }
+
+ Proxmox.Utils.API2Request({
+ url: `/admin/datastore/${view.datastore}/recover-group`,
+ params,
+ method: 'POST',
+ waitMsgTarget: view,
+ failure: function(response, opts) {
+ Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+ },
+ callback: me.reload.bind(me),
+ });
+ },
+ });
+ },
+
+ forgetGroup: function(data) {
+ let me = this;
+ let view = me.getView();
+
+ let params = {
+ "backup-type": data.backup_type,
+ "backup-id": data.backup_id,
+ "skip-trash": true,
+ };
+ if (view.namespace && view.namespace !== '') {
+ params.ns = view.namespace;
+ }
+
+ Ext.create('Proxmox.window.SafeDestroy', {
+ url: `/admin/datastore/${view.datastore}/groups`,
+ params,
+ item: {
+ id: data.text,
+ },
+ autoShow: true,
+ taskName: 'forget-group',
+ listeners: {
+ destroy: () => me.reload(),
+ },
+ });
+ },
+
+ recoverSnapshot: function(data) {
+ let me = this;
+ let view = me.getView();
+
+ Ext.Msg.show({
+ title: gettext('Confirm'),
+ icon: Ext.Msg.WARNING,
+ message: Ext.String.format(gettext('Are you sure you want to recover snapshot {0}'), `'${data.text}'`),
+ buttons: Ext.Msg.YESNO,
+ defaultFocus: 'yes',
+ callback: function(btn) {
+ if (btn !== 'yes') {
+ return;
+ }
+ let params = {
+ "backup-type": data["backup-type"],
+ "backup-id": data["backup-id"],
+ "backup-time": (data['backup-time'].getTime()/1000).toFixed(0),
+ };
+ if (view.namespace && view.namespace !== '') {
+ params.ns = view.namespace;
+ }
+
+ //TODO adapt to recover api endpoint
+ Proxmox.Utils.API2Request({
+ url: `/admin/datastore/${view.datastore}/recover-snapshot`,
+ params,
+ method: 'POST',
+ waitMsgTarget: view,
+ failure: function(response, opts) {
+ Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+ },
+ callback: me.reload.bind(me),
+ });
+ },
+ });
+ },
+
+ forgetSnapshot: function(data) {
+ let me = this;
+ let view = me.getView();
+
+ Ext.Msg.show({
+ title: gettext('Confirm'),
+ icon: Ext.Msg.WARNING,
+ message: Ext.String.format(gettext('Are you sure you want to remove snapshot {0}?'), `'${data.text}'`),
+ buttons: Ext.Msg.YESNO,
+ defaultFocus: 'no',
+ callback: function(btn) {
+ if (btn !== 'yes') {
+ return;
+ }
+ let params = {
+ "backup-type": data["backup-type"],
+ "backup-id": data["backup-id"],
+ "backup-time": (data['backup-time'].getTime()/1000).toFixed(0),
+ "skip-trash": true,
+ };
+ if (view.namespace && view.namespace !== '') {
+ params.ns = view.namespace;
+ }
+
+ Proxmox.Utils.API2Request({
+ url: `/admin/datastore/${view.datastore}/snapshots`,
+ params,
+ method: 'DELETE',
+ waitMsgTarget: view,
+ failure: function(response, opts) {
+ Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+ },
+ callback: me.reload.bind(me),
+ });
+ },
+ });
+ },
+
+ onRecover: function(table, rI, cI, item, e, { data }) {
+ let me = this;
+ let view = this.getView();
+ if ((data.ty !== 'group' && data.ty !== 'dir' && data.ty !== 'ns') || !view.datastore) {
+ return;
+ }
+
+ if (data.ty === 'ns') {
+ me.recoverNamespace(data);
+ } else if (data.ty === 'dir') {
+ me.recoverSnapshot(data);
+ } else {
+ me.recoverGroup(data);
+ }
+ },
+
+ onForget: function(table, rI, cI, item, e, { data }) {
+ let me = this;
+ let view = this.getView();
+ if ((data.ty !== 'group' && data.ty !== 'dir' && data.ty !== 'ns') || !view.datastore) {
+ return;
+ }
+
+ if (data.ty === 'ns') {
+ me.forgetNamespace(data);
+ } else if (data.ty === 'dir') {
+ me.forgetSnapshot(data);
+ } else {
+ me.forgetGroup(data);
+ }
+ },
+
+ // opens a namespace browser
+ openBrowser: function(tv, rI, Ci, item, e, rec) {
+ let me = this;
+ if (rec.data.ty === 'ns') {
+ me.nsChange(null, rec.data.ns);
+ }
+ },
+
+ filter: function(item, value) {
+ if (item.data.text.indexOf(value) !== -1) {
+ return true;
+ }
+
+ if (item.data.owner && item.data.owner.indexOf(value) !== -1) {
+ return true;
+ }
+
+ return false;
+ },
+
+ search: function(tf, value) {
+ let me = this;
+ let view = me.getView();
+ let store = view.getStore();
+ if (!value && value !== 0) {
+ store.clearFilter();
+ // only collapse the children below our toplevel namespace "root"
+ store.getRoot().lastChild.collapseChildren(true);
+ tf.triggers.clear.setVisible(false);
+ return;
+ }
+ tf.triggers.clear.setVisible(true);
+ if (value.length < 2) return;
+ Proxmox.Utils.setErrorMask(view, true);
+ // we do it a little bit later for the error mask to work
+ setTimeout(function() {
+ store.clearFilter();
+ store.getRoot().collapseChildren(true);
+
+ store.beginUpdate();
+ store.getRoot().cascadeBy({
+ before: function(item) {
+ if (me.filter(item, value)) {
+ item.set('matchesFilter', true);
+ if (item.parentNode && item.parentNode.id !== 'root') {
+ item.parentNode.childmatches = true;
+ }
+ return false;
+ }
+ return true;
+ },
+ after: function(item) {
+ if (me.filter(item, value) || item.id === 'root' || item.childmatches) {
+ item.set('matchesFilter', true);
+ if (item.parentNode && item.parentNode.id !== 'root') {
+ item.parentNode.childmatches = true;
+ }
+ if (item.childmatches) {
+ item.expand();
+ }
+ } else {
+ item.set('matchesFilter', false);
+ }
+ delete item.childmatches;
+ },
+ });
+ store.endUpdate();
+
+ store.filter((item) => !!item.get('matchesFilter'));
+ Proxmox.Utils.setErrorMask(view, false);
+ }, 10);
+ },
+ },
+
+ listeners: {
+ activate: function() {
+ let me = this;
+ me.getController().reload();
+ },
+ itemcontextmenu: function(panel, record, item, index, event) {
+ event.stopEvent();
+ let title;
+ let view = panel.up('pbsDataStoreRecoverTrashed');
+ let controller = view.getController();
+ let createControllerCallback = function(name) {
+ return function() {
+ controller[name](view, undefined, undefined, undefined, undefined, record);
+ };
+ };
+ if (record.data.ty === 'group') {
+ title = gettext('Group');
+ } else if (record.data.ty === 'dir') {
+ title = gettext('Snapshot');
+ } else if (record.data.ty === 'ns') {
+ title = gettext('Namespace');
+ }
+ if (title) {
+ let menu = Ext.create('PBS.datastore.RecoverTrashedContextMenu', {
+ title: title,
+ onRecover: createControllerCallback('onRecover'),
+ onForget: createControllerCallback('onForget'),
+ });
+ menu.showAt(event.getXY());
+ }
+ },
+ },
+
+ columns: [
+ {
+ xtype: 'treecolumn',
+ header: gettext("Backup Group"),
+ dataIndex: 'text',
+ renderer: (value, meta, record) => {
+ if (record.data.protected) {
+ return `${value} (${gettext('protected')})`;
+ }
+ return value;
+ },
+ flex: 1,
+ },
+ {
+ text: gettext('Comment'),
+ dataIndex: 'comment',
+ flex: 1,
+ renderer: (v, meta, record) => {
+ let data = record.data;
+ if (!data || data.leaf || data.root) {
+ return '';
+ }
+
+ let additionalClasses = "";
+ if (!v) {
+ if (!data.expanded) {
+ v = data['last-comment'] ?? '';
+ additionalClasses = 'pmx-opacity-75';
+ } else {
+ v = '';
+ }
+ }
+ v = Ext.String.htmlEncode(v);
+ return `<span class="snapshot-comment-column ${additionalClasses}">${v}</span>`;
+ },
+ },
+ {
+ header: gettext('Actions'),
+ xtype: 'actioncolumn',
+ dataIndex: 'text',
+ width: 80,
+ items: [
+ {
+ handler: 'onRecover',
+ getTip: (v, m, { data }) => {
+ let tip = '{0}';
+ if (data.ty === 'ns') {
+ tip = gettext("Recover all namespace contents");
+ } else if (data.ty === 'dir') {
+ tip = gettext("Recover snapshot '{0}'");
+ } else if (data.ty === 'group') {
+ tip = gettext("Recover group '{0}'");
+ }
+ return Ext.String.format(tip, v);
+ },
+ getClass: (v, m, { data }) =>
+ (data.ty === 'ns' && !data.isRootNS && data.ns === undefined) ||
+ data.ty === 'group' || data.ty === 'dir'
+ ? 'fa fa-rotate-left'
+ : 'pmx-hidden',
+ isActionDisabled: (v, r, c, i, { data }) => false,
+ },
+ '->',
+ {
+ handler: 'onForget',
+ getTip: (v, m, { data }) => {
+ let tip = '{0}';
+ if (data.ty === 'ns') {
+ tip = gettext("Permanently forget namespace contents '{0}'");
+ } else if (data.ty === 'dir') {
+ tip = gettext("Permanently forget snapshot '{0}'");
+ } else if (data.ty === 'group') {
+ tip = gettext("Permanently forget group '{0}'");
+ }
+ return Ext.String.format(tip, v);
+ },
+ getClass: (v, m, { data }) =>
+ (data.ty === 'ns' && !data.isRootNS && data.ns === undefined) ||
+ data.ty === 'group' || data.ty === 'dir'
+ ? 'fa critical fa-trash-o'
+ : 'pmx-hidden',
+ isActionDisabled: (v, r, c, i, { data }) => false,
+ },
+ {
+ handler: 'openBrowser',
+ tooltip: gettext('Browse'),
+ getClass: (v, m, { data }) => data.ty === 'ns' && !data.root
+ ? 'fa fa-folder-open-o'
+ : 'pmx-hidden',
+ isActionDisabled: (v, r, c, i, { data }) => data.ty !== 'ns',
+ },
+ ],
+ },
+ {
+ xtype: 'datecolumn',
+ header: gettext('Backup Time'),
+ sortable: true,
+ dataIndex: 'backup-time',
+ format: 'Y-m-d H:i:s',
+ width: 150,
+ },
+ {
+ header: gettext("Size"),
+ sortable: true,
+ dataIndex: 'size',
+ renderer: (v, meta, { data }) => {
+ if ((data.text === 'client.log.blob' && v === undefined) || (data.ty !== 'dir' && data.ty !== 'file')) {
+ return '';
+ }
+ if (v === undefined || v === null) {
+ meta.tdCls = "x-grid-row-loading";
+ return '';
+ }
+ return Proxmox.Utils.format_size(v);
+ },
+ },
+ {
+ xtype: 'numbercolumn',
+ format: '0',
+ header: gettext("Count"),
+ sortable: true,
+ width: 75,
+ align: 'right',
+ dataIndex: 'count',
+ },
+ {
+ header: gettext("Owner"),
+ sortable: true,
+ dataIndex: 'owner',
+ },
+ {
+ header: gettext('Encrypted'),
+ dataIndex: 'crypt-mode',
+ renderer: (v, meta, record) => {
+ if (record.data.size === undefined || record.data.size === null) {
+ return '';
+ }
+ if (v === -1) {
+ return '';
+ }
+ let iconCls = PBS.Utils.cryptIconCls[v] || '';
+ let iconTxt = "";
+ if (iconCls) {
+ iconTxt = `<i class="fa fa-fw fa-${iconCls}"></i> `;
+ }
+ let tip;
+ if (v !== PBS.Utils.cryptmap.indexOf('none') && record.data.fingerprint !== undefined) {
+ tip = "Key: " + PBS.Utils.renderKeyID(record.data.fingerprint);
+ }
+ let txt = (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText;
+ if (record.data.ty === 'group' || tip === undefined) {
+ return txt;
+ } else {
+ return `<span data-qtip="${tip}">${txt}</span>`;
+ }
+ },
+ },
+ ],
+
+ tbar: [
+ {
+ text: gettext('Reload'),
+ iconCls: 'fa fa-refresh',
+ handler: 'reload',
+ },
+ '->',
+ {
+ xtype: 'tbtext',
+ html: gettext('Namespace') + ':',
+ },
+ {
+ xtype: 'pbsNamespaceSelector',
+ width: 200,
+ cbind: {
+ datastore: '{datastore}',
+ },
+ },
+ '-',
+ {
+ xtype: 'tbtext',
+ html: gettext('Search'),
+ },
+ {
+ xtype: 'textfield',
+ reference: 'searchbox',
+ emptyText: gettext('group, date or owner'),
+ triggers: {
+ clear: {
+ cls: 'pmx-clear-trigger',
+ weight: -1,
+ hidden: true,
+ handler: function() {
+ this.triggers.clear.setVisible(false);
+ this.setValue('');
+ },
+ },
+ },
+ listeners: {
+ change: {
+ fn: 'search',
+ buffer: 500,
+ },
+ },
+ },
+ ],
+});
+
+Ext.define('PBS.datastore.RecoverTrashedContextMenu', {
+ extend: 'Ext.menu.Menu',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ onRecover: undefined,
+ onForget: undefined,
+
+ items: [
+ {
+ text: gettext('Recover'),
+ iconCls: 'fa critical fa-rotate-left',
+ handler: function() { this.up('menu').onRecover(); },
+ cbind: {
+ hidden: '{!onRecover}',
+ },
+ },
+ {
+ text: gettext('Remove'),
+ iconCls: 'fa critical fa-trash-o',
+ handler: function() { this.up('menu').onForget(); },
+ cbind: {
+ hidden: '{!onForget}',
+ },
+ },
+ ],
+});
--
2.39.5
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* [pbs-devel] [RFC v2 proxmox-backup 20/21] ui: drop 'permanent' in group/snapshot forget, default is to trash
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (18 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 19/21] ui: add recover for trashed items tab to datastore panel Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 21/21] ui: allow to skip trash on namespace deletion Christian Ebner
2025-05-13 13:54 ` [pbs-devel] superseded: [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
Soften the message, as snapshots and groups are no longer deleted
immediately but rather moved to the trash, from where they can still
be restored until the next garbage collection run.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/datastore/Content.js | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/www/datastore/Content.js b/www/datastore/Content.js
index fffd8c160..a6e28a773 100644
--- a/www/datastore/Content.js
+++ b/www/datastore/Content.js
@@ -1029,9 +1029,9 @@ Ext.define('PBS.DataStoreContent', {
if (data.ty === 'ns') {
tip = gettext("Remove namespace '{0}'");
} else if (data.ty === 'dir') {
- tip = gettext("Permanently forget snapshot '{0}'");
+ tip = gettext("Forget snapshot '{0}'");
} else if (data.ty === 'group') {
- tip = gettext("Permanently forget group '{0}'");
+ tip = gettext("Forget group '{0}'");
}
return Ext.String.format(tip, v);
},
--
2.39.5
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* [pbs-devel] [RFC v2 proxmox-backup 21/21] ui: allow to skip trash on namespace deletion
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (19 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 20/21] ui: drop 'permanent' in group/snapshot forget, default is to trash Christian Ebner
@ 2025-05-08 13:05 ` Christian Ebner
2025-05-13 13:54 ` [pbs-devel] superseded: [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-08 13:05 UTC (permalink / raw)
To: pbs-devel
In order to bypass the trash, add a checkbox to the delete namespace
dialog and set the `skip-trash` api call parameter accordingly.
Also, extend the warning to mention that when the trash is skipped,
previously trashed items in the namespace are also removed permanently.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/window/NamespaceEdit.js | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/www/window/NamespaceEdit.js b/www/window/NamespaceEdit.js
index a9a440bbf..79a045823 100644
--- a/www/window/NamespaceEdit.js
+++ b/www/window/NamespaceEdit.js
@@ -83,11 +83,30 @@ Ext.define('PBS.window.NamespaceDelete', {
},
},
},
+ {
+ xtype: 'proxmoxcheckbox',
+ name: 'skip-trash',
+ boxLabel: gettext('Skip Trash (delete content immediately)'),
+ value: false,
+ listeners: {
+ change: function(field, value) {
+ let win = field.up('proxmoxSafeDestroy');
+ if (value) {
+ win.params['skip-trash'] = value;
+ } else {
+ delete win.params['skip-trash'];
+ }
+ },
+ },
+ bind: {
+ disabled: '{!rmGroups.checked}',
+ },
+ },
{
xtype: 'box',
padding: '5 0 0 0',
html: `<span class="pmx-hint">${gettext('Note')}</span>: `
- + gettext('This will permanently remove all backups from the current namespace and all namespaces below it!'),
+ + gettext('This will remove all backups from the current namespace and all namespaces below it! If the trash is skipped, previously trashed items are also removed permanently!'),
bind: {
hidden: '{!rmGroups.checked}',
},
--
2.39.5
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 10/21] datastore: mark namespace as trash instead of deleting it
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 10/21] datastore: mark namespace as trash instead of deleting it Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
2025-05-12 7:47 ` Christian Ebner
0 siblings, 1 reply; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> As for backup snapshots and groups, mark the namespace as trash
> instead of removing it and the contents right away, if the trash
> should not be bypassed.
>
> Actual removal of the hierarchical folder structure has to be taken
> care of by the garbage collection.
>
> In order to avoid races during removal, first mark the namespaces as
> trash pending, mark the snapshots and groups as trash, and only
> afterwards rename the pending marker file to the trash marker file.
> This way, concurrent backups can remove the trash pending marker to
> avoid the namespace being trashed.
>
> On re-creation of a trashed namespace, remove the marker file on the
> namespace itself and on any parent component, from deepest to
> shallowest. As trashing a full namespace can also set the trash
> pending state for recursive namespace cleanup, remove any encountered
> pending marker files as well to avoid the namespace or its parents
> being trashed.
this is fairly involved since we don't have locks on namespaces..
should we have them for creation/removal/trashing/untrashing?
I assume those are fairly rare occurrences, I haven't yet analyzed the
interactions here to see whether the two-marker approach is actually
race-free..
OTOH, do we really need to (be able to) trash namespaces?
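For reference, a minimal standalone sketch of the two-marker sequence as
described in the commit message (paths and error handling simplified,
not the actual patch code):

use std::fs::{self, File};
use std::io;
use std::path::Path;

const TRASH_PENDING_MARKER_FILENAME: &str = ".pending";
const TRASH_MARKER_FILENAME: &str = ".trashed";

// step 1: flag the namespace as trash-pending, so concurrent backups can veto
fn mark_trash_pending(ns_dir: &Path) -> io::Result<()> {
    File::create(ns_dir.join(TRASH_PENDING_MARKER_FILENAME)).map(|_| ())
}

// step 2: after all contained groups and snapshots were trashed, promote the
// pending marker to the final trash marker; if a concurrent backup removed
// the pending marker in the meantime, the rename fails with NotFound and the
// namespace is not trashed
fn finalize_trash(ns_dir: &Path) -> io::Result<()> {
    fs::rename(
        ns_dir.join(TRASH_PENDING_MARKER_FILENAME),
        ns_dir.join(TRASH_MARKER_FILENAME),
    )
}

// veto path taken by a concurrent backup: dropping the pending marker keeps
// the namespace alive
fn veto_trash(ns_dir: &Path) -> io::Result<()> {
    match fs::remove_file(ns_dir.join(TRASH_PENDING_MARKER_FILENAME)) {
        Err(err) if err.kind() != io::ErrorKind::NotFound => Err(err),
        _ => Ok(()),
    }
}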
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-datastore/src/datastore.rs | 135 +++++++++++++++++++++++++++++----
> src/api2/admin/namespace.rs | 2 +-
> src/server/pull.rs | 2 +-
> 3 files changed, 123 insertions(+), 16 deletions(-)
>
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index 023a6a12e..b1d81e199 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -28,7 +28,9 @@ use pbs_api_types::{
> };
> use pbs_config::BackupLockGuard;
>
> -use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, ListBackupFilter, OLD_LOCKING};
> +use crate::backup_info::{
> + BackupDir, BackupGroup, BackupInfo, ListBackupFilter, OLD_LOCKING, TRASH_MARKER_FILENAME,
> +};
> use crate::chunk_store::ChunkStore;
> use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
> use crate::fixed_index::{FixedIndexReader, FixedIndexWriter};
> @@ -42,6 +44,8 @@ use crate::DataBlob;
> static DATASTORE_MAP: LazyLock<Mutex<HashMap<String, Arc<DataStoreImpl>>>> =
> LazyLock::new(|| Mutex::new(HashMap::new()));
>
> +const TRASH_PENDING_MARKER_FILENAME: &str = ".pending";
> +
> /// checks if auth_id is owner, or, if owner is a token, if
> /// auth_id is the user of the token
> pub fn check_backup_owner(owner: &Authid, auth_id: &Authid) -> Result<(), Error> {
> @@ -554,6 +558,7 @@ impl DataStore {
> ns_full_path.push(ns.path());
>
> std::fs::create_dir_all(ns_full_path)?;
> + self.untrash_namespace(&ns)?;
>
> Ok(ns)
> }
> @@ -565,6 +570,34 @@ impl DataStore {
> path.exists()
> }
>
> + /// Remove the namespace and all its parent components from the trash by removing the trash or
> + /// trash-pending marker file for each namespace level from deepest to shallowest. Missing files
> + /// are ignored.
> + pub fn untrash_namespace(&self, namespace: &BackupNamespace) -> Result<(), Error> {
> + let mut namespace = namespace.clone();
> + while namespace.depth() > 0 {
> + let mut trash_file_path = self.base_path();
> + trash_file_path.push(namespace.path());
> + let mut pending_file_path = trash_file_path.clone();
> + pending_file_path.push(TRASH_PENDING_MARKER_FILENAME);
> + if let Err(err) = std::fs::remove_file(&pending_file_path) {
> + // ignore not found, either not trashed or un-trashed by concurrent operation
> + if err.kind() != std::io::ErrorKind::NotFound {
> + bail!("failed to remove trash-pending file {trash_file_path:?}: {err}");
> + }
> + }
> + trash_file_path.push(TRASH_MARKER_FILENAME);
> + if let Err(err) = std::fs::remove_file(&trash_file_path) {
> + // ignore not found, either not trashed or un-trashed by concurrent operation
> + if err.kind() != std::io::ErrorKind::NotFound {
> + bail!("failed to remove trash file {trash_file_path:?}: {err}");
> + }
> + }
> + namespace.pop();
> + }
> + Ok(())
> + }
> +
> /// Remove all backup groups of a single namespace level but not the namespace itself.
> ///
> /// Does *not* descend into child-namespaces and doesn't remove the namespace itself either.
> @@ -574,6 +607,7 @@ impl DataStore {
> pub fn remove_namespace_groups(
> self: &Arc<Self>,
> ns: &BackupNamespace,
> + skip_trash: bool,
> ) -> Result<(bool, BackupGroupDeleteStats), Error> {
> // FIXME: locking? The single groups/snapshots are already protected, so may not be
> // necessary (depends on what we all allow to do with namespaces)
> @@ -583,20 +617,22 @@ impl DataStore {
> let mut stats = BackupGroupDeleteStats::default();
>
> for group in self.iter_backup_groups(ns.to_owned())? {
> - let delete_stats = group?.destroy(true)?;
> + let delete_stats = group?.destroy(skip_trash)?;
> stats.add(&delete_stats);
> removed_all_groups = removed_all_groups && delete_stats.all_removed();
> }
>
> - let base_file = std::fs::File::open(self.base_path())?;
> - let base_fd = base_file.as_raw_fd();
> - for ty in BackupType::iter() {
> - let mut ty_dir = ns.path();
> - ty_dir.push(ty.to_string());
> - // best effort only, but we probably should log the error
> - if let Err(err) = unlinkat(Some(base_fd), &ty_dir, UnlinkatFlags::RemoveDir) {
> - if err != nix::errno::Errno::ENOENT {
> - log::error!("failed to remove backup type {ty} in {ns} - {err}");
> + if skip_trash {
> + let base_file = std::fs::File::open(self.base_path())?;
> + let base_fd = base_file.as_raw_fd();
> + for ty in BackupType::iter() {
> + let mut ty_dir = ns.path();
> + ty_dir.push(ty.to_string());
> + // best effort only, but we probably should log the error
> + if let Err(err) = unlinkat(Some(base_fd), &ty_dir, UnlinkatFlags::RemoveDir) {
> + if err != nix::errno::Errno::ENOENT {
> + log::error!("failed to remove backup type {ty} in {ns} - {err}");
> + }
> }
> }
> }
> @@ -613,6 +649,7 @@ impl DataStore {
> self: &Arc<Self>,
> ns: &BackupNamespace,
> delete_groups: bool,
> + skip_trash: bool,
> ) -> Result<(bool, BackupGroupDeleteStats), Error> {
> let store = self.name();
> let mut removed_all_requested = true;
> @@ -620,16 +657,68 @@ impl DataStore {
> if delete_groups {
> log::info!("removing whole namespace recursively below {store}:/{ns}",);
> for ns in self.recursive_iter_backup_ns(ns.to_owned(), NamespaceListFilter::Active)? {
> - let (removed_ns_groups, delete_stats) = self.remove_namespace_groups(&ns?)?;
> + let namespace = ns?;
> +
> + if !skip_trash {
> + let mut path = self.base_path();
> + path.push(namespace.path());
> + path.push(TRASH_PENDING_MARKER_FILENAME);
> + if let Err(err) = std::fs::File::create(&path) {
pending marker created here iff delete_groups && !skip_trash
> + if err.kind() != std::io::ErrorKind::AlreadyExists {
> + return Err(err).context("failed to set trash pending marker file");
> + }
> + }
> + }
> +
> + let (removed_ns_groups, delete_stats) =
> + self.remove_namespace_groups(&namespace, skip_trash)?;
> stats.add(&delete_stats);
> removed_all_requested = removed_all_requested && removed_ns_groups;
> +
> + if !skip_trash && !removed_ns_groups {
> + self.untrash_namespace(&namespace)?;
> + }
> }
> } else {
> log::info!("pruning empty namespace recursively below {store}:/{ns}");
> }
>
> - removed_all_requested =
> - removed_all_requested && self.destroy_namespace_recursive(ns, delete_groups)?;
> + if skip_trash {
> + removed_all_requested =
> + removed_all_requested && self.destroy_namespace_recursive(ns, delete_groups)?;
> + return Ok((removed_all_requested, stats));
early return here iff skip_trash
> + }
> +
> + let mut children = self
> + .recursive_iter_backup_ns(ns.to_owned(), NamespaceListFilter::Active)?
> + .collect::<Result<Vec<BackupNamespace>, Error>>()?;
> +
> + children.sort_by_key(|b| std::cmp::Reverse(b.depth()));
> +
> + let mut all_trashed = true;
> + for ns in children.iter() {
> + let mut ns_dir = ns.path();
> + ns_dir.push("ns");
> +
> + if !ns.is_root() {
> + let mut path = self.base_path();
> + path.push(ns.path());
> +
> + let pending_path = path.join(TRASH_PENDING_MARKER_FILENAME);
> + path.push(TRASH_MARKER_FILENAME);
> + if let Err(err) = std::fs::rename(pending_path, path) {
pending marker assumed to exist here iff !skip_trash
> + if err.kind() == std::io::ErrorKind::NotFound {
> + all_trashed = false;
> + } else {
> + return Err(err).context("Renaming pending marker to trash marker failed");
> + }
> + }
> + }
> + }
> +
> + if !all_trashed {
> + bail!("failed to prune namespace, not empty");
wrong error returned as a result..
> + }
>
> Ok((removed_all_requested, stats))
> }
> @@ -657,6 +746,24 @@ impl DataStore {
> let _ = unlinkat(Some(base_fd), &ns_dir, UnlinkatFlags::RemoveDir);
>
> if !ns.is_root() {
> + let rel_trash_path = ns.path().join(TRASH_MARKER_FILENAME);
> + if let Err(err) =
> + unlinkat(Some(base_fd), &rel_trash_path, UnlinkatFlags::NoRemoveDir)
> + {
> + if err != nix::errno::Errno::ENOENT {
> + bail!("removing the trash file '{rel_trash_path:?}' failed - {err}")
> + }
> + }
> + let rel_pending_path = ns.path().join(TRASH_PENDING_MARKER_FILENAME);
> + if let Err(err) =
> + unlinkat(Some(base_fd), &rel_pending_path, UnlinkatFlags::NoRemoveDir)
> + {
> + if err != nix::errno::Errno::ENOENT {
> + bail!(
> + "removing the trash pending file '{rel_pending_path:?}' failed - {err}"
> + )
> + }
> + }
> match unlinkat(Some(base_fd), &ns.path(), UnlinkatFlags::RemoveDir) {
> Ok(()) => log::debug!("removed namespace {ns}"),
> Err(nix::errno::Errno::ENOENT) => {
> diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
> index e5463524a..a12d97883 100644
> --- a/src/api2/admin/namespace.rs
> +++ b/src/api2/admin/namespace.rs
> @@ -166,7 +166,7 @@ pub fn delete_namespace(
>
> let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
>
> - let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups)?;
> + let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups, true)?;
> if !removed_all {
> let err_msg = if delete_groups {
> if datastore.old_locking() {
> diff --git a/src/server/pull.rs b/src/server/pull.rs
> index d3c6fcf6a..a60ccbf10 100644
> --- a/src/server/pull.rs
> +++ b/src/server/pull.rs
> @@ -729,7 +729,7 @@ fn check_and_remove_ns(params: &PullParameters, local_ns: &BackupNamespace) -> R
> let (removed_all, _delete_stats) = params
> .target
> .store
> - .remove_namespace_recursive(local_ns, true)?;
> + .remove_namespace_recursive(local_ns, true, true)?;
>
> Ok(removed_all)
> }
> --
> 2.39.5
>
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>
>
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 18/21] api: admin: implement endpoints to restore trashed contents
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 18/21] api: admin: implement endpoints to restore trashed contents Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
2025-05-09 12:59 ` Christian Ebner
0 siblings, 1 reply; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> Implements the api endpoints to restore trashed contents contained
> within namespaces, backup groups or individual snapshots.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> src/api2/admin/datastore.rs | 173 +++++++++++++++++++++++++++++++++++-
> 1 file changed, 172 insertions(+), 1 deletion(-)
>
> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
> index cbd24c729..eb033c3fc 100644
> --- a/src/api2/admin/datastore.rs
> +++ b/src/api2/admin/datastore.rs
> @@ -51,7 +51,7 @@ use pbs_api_types::{
> };
> use pbs_client::pxar::{create_tar, create_zip};
> use pbs_config::CachedUserInfo;
> -use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter};
> +use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter, TRASH_MARKER_FILENAME};
> use pbs_datastore::cached_chunk_reader::CachedChunkReader;
> use pbs_datastore::catalog::{ArchiveEntry, CatalogReader};
> use pbs_datastore::data_blob::DataBlob;
> @@ -2727,6 +2727,165 @@ pub async fn unmount(store: String, rpcenv: &mut dyn RpcEnvironment) -> Result<V
> Ok(json!(upid))
> }
>
> +#[api(
> + input: {
> + properties: {
> + store: { schema: DATASTORE_SCHEMA },
> + ns: { type: BackupNamespace, },
> + },
> + },
> + access: {
> + permission: &Permission::Anybody,
> + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
> + or DATASTORE_BACKUP and being the owner of the group",
> + },
> +)]
> +/// Recover trashed contents of a namespace.
> +pub fn recover_namespace(
> + store: String,
> + ns: BackupNamespace,
> + rpcenv: &mut dyn RpcEnvironment,
> +) -> Result<(), Error> {
> + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
> + let limited = check_ns_privs_full(
> + &store,
> + &ns,
> + &auth_id,
> + PRIV_DATASTORE_MODIFY,
> + PRIV_DATASTORE_BACKUP,
> + )?;
> +
> + let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
> +
> + for backup_group in datastore.iter_backup_groups(ns.clone())? {
> + let backup_group = backup_group?;
> + if limited {
> + let owner = datastore.get_owner(&ns, backup_group.group())?;
> + if check_backup_owner(&owner, &auth_id).is_err() {
> + continue;
> + }
> + }
> + do_recover_group(&backup_group)?;
> + }
> +
> + Ok(())
> +}
> +
> +#[api(
> + input: {
> + properties: {
> + store: { schema: DATASTORE_SCHEMA },
> + group: {
> + type: pbs_api_types::BackupGroup,
> + flatten: true,
> + },
> + ns: {
> + type: BackupNamespace,
> + optional: true,
> + },
> + },
> + },
> + access: {
> + permission: &Permission::Anybody,
> + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
> + or DATASTORE_BACKUP and being the owner of the group",
> + },
> +)]
> +/// Recover trashed contents of a backup group.
> +pub fn recover_group(
> + store: String,
> + group: pbs_api_types::BackupGroup,
> + ns: Option<BackupNamespace>,
> + rpcenv: &mut dyn RpcEnvironment,
> +) -> Result<(), Error> {
> + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
> + let ns = ns.unwrap_or_default();
> + let datastore = check_privs_and_load_store(
> + &store,
> + &ns,
> + &auth_id,
> + PRIV_DATASTORE_MODIFY,
> + PRIV_DATASTORE_BACKUP,
> + Some(Operation::Write),
> + &group,
> + )?;
> +
> + let backup_group = datastore.backup_group(ns, group);
> + do_recover_group(&backup_group)?;
> +
> + Ok(())
> +}
> +
> +fn do_recover_group(backup_group: &BackupGroup) -> Result<(), Error> {
missing locking for the group?
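for illustration, roughly the shape I would have expected here - just a
sketch reusing the lock() helpers visible elsewhere in the series, not a
tested implementation (the same concern applies to do_recover_snapshot()
below):

fn do_recover_group(backup_group: &BackupGroup) -> Result<(), Error> {
    // hold the group lock while clearing trash markers, so that GC cannot
    // clean up the trashed group concurrently
    let _guard = backup_group
        .lock()
        .context("locking backup group for recovery failed")?;

    for snapshot in backup_group.list_backups(ListBackupFilter::Trashed)? {
        // likewise lock each snapshot before dropping its trash marker
        let _snap_guard = snapshot.backup_dir.lock()?;
        do_recover_snapshot(&snapshot.backup_dir)?;
    }

    // removal of the group trash marker as in the patch above
    Ok(())
}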
> + let trashed_snapshots = backup_group.list_backups(ListBackupFilter::Trashed)?;
> + for snapshot in trashed_snapshots {
> + do_recover_snapshot(&snapshot.backup_dir)?;
> + }
> +
> + let group_trash_path = backup_group.full_group_path().join(TRASH_MARKER_FILENAME);
> + if let Err(err) = std::fs::remove_file(&group_trash_path) {
> + if err.kind() != std::io::ErrorKind::NotFound {
> + bail!("failed to remove group trash file {group_trash_path:?} - {err}");
> + }
> + }
> + Ok(())
> +}
> +
> +#[api(
> + input: {
> + properties: {
> + store: { schema: DATASTORE_SCHEMA },
> + backup_dir: {
> + type: pbs_api_types::BackupDir,
> + flatten: true,
> + },
> + ns: {
> + type: BackupNamespace,
> + optional: true,
> + },
> + },
> + },
> + access: {
> + permission: &Permission::Anybody,
> + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
> + or DATASTORE_BACKUP and being the owner of the group",
> + },
> +)]
> +/// Recover trashed contents of a backup snapshot.
> +pub fn recover_snapshot(
> + store: String,
> + backup_dir: pbs_api_types::BackupDir,
> + ns: Option<BackupNamespace>,
> + rpcenv: &mut dyn RpcEnvironment,
> +) -> Result<(), Error> {
> + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
> + let ns = ns.unwrap_or_default();
> + let datastore = check_privs_and_load_store(
> + &store,
> + &ns,
> + &auth_id,
> + PRIV_DATASTORE_MODIFY,
> + PRIV_DATASTORE_BACKUP,
> + Some(Operation::Write),
> + &backup_dir.group,
> + )?;
> +
> + let snapshot = datastore.backup_dir(ns, backup_dir)?;
> + do_recover_snapshot(&snapshot)?;
> +
> + Ok(())
> +}
> +
> +fn do_recover_snapshot(snapshot_dir: &BackupDir) -> Result<(), Error> {
missing locking for the snapshot?
> + let trash_path = snapshot_dir.full_path().join(TRASH_MARKER_FILENAME);
> + if let Err(err) = std::fs::remove_file(&trash_path) {
> + if err.kind() != std::io::ErrorKind::NotFound {
> + bail!("failed to remove trash file {trash_path:?} - {err}");
> + }
> + }
> + Ok(())
> +}
> +
> #[sortable]
> const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
> (
> @@ -2792,6 +2951,18 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
> "pxar-file-download",
> &Router::new().download(&API_METHOD_PXAR_FILE_DOWNLOAD),
> ),
> + (
> + "recover-group",
> + &Router::new().post(&API_METHOD_RECOVER_GROUP),
I am not sure whether those should be POST or PUT, they are modifying an
existing (trashed) group/snapshot/.. after all?
> + ),
> + (
> + "recover-namespace",
> + &Router::new().post(&API_METHOD_RECOVER_NAMESPACE),
> + ),
> + (
> + "recover-snapshot",
> + &Router::new().post(&API_METHOD_RECOVER_SNAPSHOT),
> + ),
> ("rrd", &Router::new().get(&API_METHOD_GET_RRD_STATS)),
> (
> "snapshots",
> --
> 2.39.5
>
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>
>
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 16/21] api: datastore: add flag to list trashed snapshots only
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 16/21] api: datastore: add flag to list trashed snapshots only Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
2025-05-12 7:57 ` Christian Ebner
0 siblings, 1 reply; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> Allows conditionally showing either active or trashed backup
> snapshots, the latter being used when displaying the contents of the
> trash for a given datastore.
and what if I want to list both/all?
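a tri-state parameter instead of the bool would keep that option open,
e.g. (hypothetical names, not part of this series):

use serde::Deserialize;

// hypothetical replacement for the boolean `trashed` flag, mapping
// directly onto the ListBackupFilter variants from the earlier patch
#[derive(Clone, Copy, Debug, Default, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum TrashStateFilter {
    // default keeps the current behaviour of listing non-trashed snapshots
    #[default]
    Active,
    Trashed,
    All,
}

the endpoint could then forward that straight to group.list_backups(),
with `active` as the backwards-compatible default.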
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> src/api2/admin/datastore.rs | 17 +++++++++++++++--
> 1 file changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
> index 84c0bf5b4..cbd24c729 100644
> --- a/src/api2/admin/datastore.rs
> +++ b/src/api2/admin/datastore.rs
> @@ -473,6 +473,12 @@ pub async fn delete_snapshot(
> optional: true,
> schema: BACKUP_ID_SCHEMA,
> },
> + "trashed": {
> + type: bool,
> + optional: true,
> + default: false,
> + description: "List trashed snapshots only."
> + },
> },
> },
> returns: pbs_api_types::ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE,
> @@ -488,6 +494,7 @@ pub async fn list_snapshots(
> ns: Option<BackupNamespace>,
> backup_type: Option<BackupType>,
> backup_id: Option<String>,
> + trashed: bool,
> _param: Value,
> _info: &ApiMethod,
> rpcenv: &mut dyn RpcEnvironment,
> @@ -495,7 +502,7 @@ pub async fn list_snapshots(
> let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
>
> tokio::task::spawn_blocking(move || unsafe {
> - list_snapshots_blocking(store, ns, backup_type, backup_id, auth_id)
> + list_snapshots_blocking(store, ns, backup_type, backup_id, auth_id, trashed)
> })
> .await
> .map_err(|err| format_err!("failed to await blocking task: {err}"))?
> @@ -508,6 +515,7 @@ unsafe fn list_snapshots_blocking(
> backup_type: Option<BackupType>,
> backup_id: Option<String>,
> auth_id: Authid,
> + trashed: bool,
> ) -> Result<Vec<SnapshotListItem>, Error> {
> let ns = ns.unwrap_or_default();
>
> @@ -631,7 +639,12 @@ unsafe fn list_snapshots_blocking(
> return Ok(snapshots);
> }
>
> - let group_backups = group.list_backups(ListBackupFilter::Active)?;
> + let filter = if trashed {
> + ListBackupFilter::Trashed
> + } else {
> + ListBackupFilter::Active
> + };
> + let group_backups = group.list_backups(filter)?;
>
> snapshots.extend(
> group_backups
> --
> 2.39.5
>
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>
>
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 14/21] datastore: GC: clean-up trashed snapshots, groups and namespaces
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 14/21] datastore: GC: clean-up trashed snapshots, groups and namespaces Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
0 siblings, 0 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> Clean up trashed items during phase 1 of garbage collection. If
> encountered, index files located within trashed snapshots are touched
> as well, deferring chunk cleanup to the next run.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-datastore/src/datastore.rs | 84 +++++++++++++++++++++++++++++++++-
> 1 file changed, 83 insertions(+), 1 deletion(-)
>
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index ca05e1bea..d88af4c68 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -574,6 +574,18 @@ impl DataStore {
> !path.exists()
> }
>
> + /// Checks if the namespace trash marker file exists,
> + /// does not imply that the namespace itself exists.
> + pub fn namespace_is_trashed(&self, namespace: &BackupNamespace) -> bool {
> + if namespace.is_root() {
> + return false;
> + }
> + let mut path = self.base_path();
> + path.push(namespace.path());
> + path.push(TRASH_MARKER_FILENAME);
> + path.exists()
> + }
> +
> + /// Remove the namespace and all its parent components from the trash by removing the trash or
> /// trash-pending marker file for each namespace level from deepest to shallowest. Missing files
> /// are ignored.
> @@ -1322,7 +1334,7 @@ impl DataStore {
> .context("creating namespace iterator failed")?
> {
> let namespace = namespace.context("iterating namespaces failed")?;
> - for group in arc_self.iter_backup_groups(namespace)? {
> + for group in arc_self.iter_backup_groups(namespace.clone())? {
> let group = group.context("iterating backup groups failed")?;
>
> // Avoid race between listing/marking of snapshots by GC and pruning the last
> @@ -1403,10 +1415,80 @@ impl DataStore {
> }
> processed_index_files += 1;
> }
> +
> + // Only try to lock trashed snapshots and continue if that is not
> + // possible, as then most likely this is in the process of being untrashed.
> + // Check trash state before and after locking to avoid otherwise possible
> + // races.
> + if snapshot.backup_dir.is_trashed() {
> + if let Ok(_lock) = snapshot.backup_dir.lock() {
> + if snapshot.backup_dir.is_trashed() {
> + let path = snapshot.backup_dir.full_path();
> + log::info!("removing trashed backup snapshot {path:?}");
> + std::fs::remove_dir_all(&path).with_context(|| {
> + format!("removing trashed backup snapshot {path:?} failed")
> + })?;
> + }
> + } else {
> + let path = snapshot.backup_dir.full_path();
> + warn!("failed to lock trashed backup snapshot can {path:?}");
> + }
> + }
> }
>
> break;
> }
> + if group.is_trashed() {
> + if let Ok(_lock) = group.lock() {
> + if group.is_trashed() {
shouldn't this use some helper to reduce code duplication?
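something along these lines could cover both the snapshot and the group
branch - a sketch with invented names, only to illustrate the pattern:

use anyhow::Result;

// hypothetical helper: re-check the trash state under the lock before
// cleaning up, mirroring the check-lock-recheck pattern from the patch
fn cleanup_if_trashed<G>(
    try_lock: impl FnOnce() -> Result<G>,
    is_trashed: impl Fn() -> bool,
    cleanup: impl FnOnce() -> Result<()>,
) -> Result<()> {
    if !is_trashed() {
        return Ok(());
    }
    let _guard = try_lock()?; // keep the lock alive for the whole cleanup
    if is_trashed() {
        cleanup()?;
    }
    Ok(())
}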
> + let trash_path = group.full_group_path().join(".trashed");
> + std::fs::remove_file(&trash_path).map_err(|err| {
> + format_err!(
> + "removing the trash file '{trash_path:?}' failed - {err}"
> + )
> + })?;
> +
> + let owner_path = group.full_group_path().join("owner");
> + std::fs::remove_file(&owner_path).map_err(|err| {
> + format_err!(
> + "removing the owner file '{owner_path:?}' failed - {err}"
> + )
> + })?;
> +
> + let path = group.full_group_path();
> +
> + std::fs::remove_dir(&path).map_err(|err| {
> + format_err!("removing group directory {path:?} failed - {err}")
> + })?;
> +
> + // Remove any now empty backup type directory
is this needed here? if we remove the whole namespace below, it would be
done anyway..
> + let base_file = std::fs::File::open(self.base_path())?;
> + let base_fd = base_file.as_raw_fd();
> + for ty in BackupType::iter() {
> + let mut ty_dir = namespace.path();
> + ty_dir.push(ty.to_string());
> + match unlinkat(Some(base_fd), &ty_dir, UnlinkatFlags::RemoveDir) {
> + Ok(_) => (),
> + Err(nix::errno::Errno::ENOENT) |
> + Err(nix::errno::Errno::ENOTEMPTY) => (),
> + Err(err) => info!("failed to remove backup type directory for {namespace} - {err}"),
> + }
> + }
> + } else {
> + let path = group.full_group_path();
> + warn!("failed to lock trashed backup group {path:?}");
> + }
> + }
> + }
> + }
> + if self.namespace_is_trashed(&namespace) {
> + // Remove the namespace, but only if it was empty (as the GC already cleared child
> + // items and no new ones have been created since).
> + match arc_self.destroy_namespace_recursive(&namespace, false) {
> + Ok(true) => info!("removed trashed namespace {namespace}"),
> + Ok(false) => info!("failed to remove trashed namespace {namespace}, not empty"),
> + Err(err) => warn!("removing trashed namespace failed: {err:#}"),
> + }
> }
> }
>
> --
> 2.39.5
>
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>
>
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 13/21] datastore: recreate trashed backup groups if requested
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 13/21] datastore: recreate trashed backup groups if requested Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
2025-05-12 8:05 ` Christian Ebner
0 siblings, 1 reply; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> A whole backup group might have been marked as trashed, including all
> of the contained snapshots.
>
> Since a new backup to that group (even by a different user/owner)
> should still work, permanently clear the whole trashed group before
> recreation. This limits the trash lifetime, as the group is then no
> longer recoverable even before the next garbage collection run.
IMHO this is phrased in a way that makes it hard to parse, and in any
case, such things should go into the docs..
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-datastore/src/datastore.rs | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index 4f7766c36..ca05e1bea 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -934,6 +934,32 @@ impl DataStore {
> let guard = backup_group.lock().with_context(|| {
> format!("while creating locked backup group '{backup_group:?}'")
> })?;
> + if backup_group.is_trashed() {
> + info!("clear trashed backup group {full_path:?}");
I think we should only do this if the new and old owner are not
identical..
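i.e. something like the following (simplified signatures, the real code
would compare the Authid returned by get_owner() with the requesting
auth_id):

use anyhow::Result;

// sketch: keep the trashed contents recoverable when the same owner
// re-creates the group, and only wipe them on an ownership change
fn reuse_or_clear_trashed_group(
    old_owner: &str,
    new_owner: &str,
    clear_group: impl FnOnce() -> Result<()>,
    untrash_group: impl FnOnce() -> Result<()>,
) -> Result<()> {
    if old_owner == new_owner {
        untrash_group()
    } else {
        clear_group()
    }
}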
> + let dir_entries = std::fs::read_dir(&full_path).context(
> + "failed to read directory contents during cleanup of trashed group",
> + )?;
> + for entry in dir_entries {
> + let entry = entry.context(
> + "failed to read directory entry during clenup of trashed group",
> + )?;
> + let file_type = entry.file_type().context(
> + "failed to get entry file type during clenup of trashed group",
> + )?;
> + if file_type.is_dir() {
> + std::fs::remove_dir_all(entry.path())
> + .context("failed to remove directory entry during clenup of trashed snapshot")?;
> + } else {
> + std::fs::remove_file(entry.path())
> + .context("failed to remove directory entry during clenup of trashed snapshot")?;
> + }
> + }
> + self.set_owner(ns, backup_group.group(), auth_id, false)?;
> + let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
sure about that? we are holding a lock here, nobody is allowed to change
the owner but us..
> + self.untrash_namespace(ns)?;
> + return Ok((owner, guard));
> + }
> +
> let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
> Ok((owner, guard))
> }
> --
> 2.39.5
>
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>
>
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 12/21] datastore: clear trashed snapshot dir if re-creation requested
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 12/21] datastore: clear trashed snapshot dir if re-creation requested Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
2025-05-12 8:31 ` Christian Ebner
0 siblings, 1 reply; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> If a previously trashed snapshot has been requested for re-creation
> (e.g. by a sync job in push direction), drop the contents of the
> currently trashed snapshot.
> The snapshot directory itself is already locked at that point, either
> by the old locking mechanism acting on the directory itself or by the
> new locking mechanism. Therefore, concurrent operations can be
> excluded.
>
> For the call site this acts as if the snapshot directory has been
> newly created.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-datastore/src/datastore.rs | 29 ++++++++++++++++++++++++++++-
> 1 file changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index ffc6a7039..4f7766c36 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -951,8 +951,9 @@ impl DataStore {
> ) -> Result<(PathBuf, bool, BackupLockGuard), Error> {
> let backup_dir = self.backup_dir(ns.clone(), backup_dir.clone())?;
> let relative_path = backup_dir.relative_path();
> + let full_path = backup_dir.full_path();
>
> - match std::fs::create_dir(backup_dir.full_path()) {
> + match std::fs::create_dir(&full_path) {
> Ok(_) => {
> let guard = backup_dir.lock().with_context(|| {
> format!("while creating new locked snapshot '{backup_dir:?}'")
> @@ -963,6 +964,32 @@ impl DataStore {
> let guard = backup_dir
> .lock()
> .with_context(|| format!("while creating locked snapshot '{backup_dir:?}'"))?;
> +
> + if backup_dir.is_trashed() {
> + info!("clear trashed backup snapshot {full_path:?}");
> + let dir_entries = std::fs::read_dir(&full_path).context(
> + "failed to read directory contents during cleanup of trashed snapshot",
> + )?;
> + for entry in dir_entries {
> + let entry = entry.context(
> + "failed to read directory entry during clenup of trashed snapshot",
> + )?;
> + // Only expect regular file entries
> + std::fs::remove_file(entry.path()).context(
> + "failed to remove directory entry during clenup of trashed snapshot",
> + )?;
> + }
> + let group = BackupGroup::from(backup_dir);
> + let group_trash_file = group.full_group_path().join(TRASH_MARKER_FILENAME);
> + if let Err(err) = std::fs::remove_file(&group_trash_file) {
> + if err.kind() != std::io::ErrorKind::NotFound {
> + bail!("failed to remove group trash file of trashed snapshot");
> + }
> + }
this shouldn't be possible to hit, right? as creating a backup dir
entails first creating the backup group (guarded by the group lock), and
that would already clear the group's trash marker..
> + self.untrash_namespace(ns)?;
> + return Ok((relative_path, true, guard));
> + }
> +
> Ok((relative_path, false, guard))
> }
> Err(e) => Err(e.into()),
> --
> 2.39.5
>
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>
>
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 07/21] sync: ignore trashed groups in local source reader
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 07/21] sync: ignore trashed groups in local source reader Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
0 siblings, 0 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> Check for backup groups which have been marked as trash and exclude
> them from sync.
could be grouped with patch 5?
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-datastore/src/backup_info.rs | 7 +++++++
> src/server/sync.rs | 1 +
> 2 files changed, 8 insertions(+)
>
> diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
> index 189ed28ad..b4fabb2cc 100644
> --- a/pbs-datastore/src/backup_info.rs
> +++ b/pbs-datastore/src/backup_info.rs
> @@ -105,6 +105,13 @@ impl BackupGroup {
> self.full_group_path().exists()
> }
>
> + /// Check if the group is currently marked as trash by checking the presence of the trash
> + /// marker file in the group's directory
> + pub fn is_trashed(&self) -> bool {
> + let path = self.full_group_path().join(TRASH_MARKER_FILENAME);
> + path.exists()
> + }
and this hunk moved to patch 2, or the is_trashed helpers extracted into
their own patch?
> +
> pub fn list_backups(&self, filter: ListBackupFilter) -> Result<Vec<BackupInfo>, Error> {
> let mut list = vec![];
>
> diff --git a/src/server/sync.rs b/src/server/sync.rs
> index 3de2ec9a4..ce338fbbe 100644
> --- a/src/server/sync.rs
> +++ b/src/server/sync.rs
> @@ -447,6 +447,7 @@ impl SyncSource for LocalSource {
> Some(owner),
> )?
> .filter_map(Result::ok)
> + .filter(|backup_group| !backup_group.is_trashed())
> .map(|backup_group| backup_group.group().clone())
> .collect::<Vec<pbs_api_types::BackupGroup>>())
> }
> --
> 2.39.5
>
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>
>
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
2025-05-12 9:19 ` Christian Ebner
0 siblings, 1 reply; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> Since snapshots might be marked as trash, the snapshot directory
> can still be present until cleaned up by garbage collection.
>
> Therefore, check for the presence of the trash marker after acquiring
> the locked snapshot reader and skip over marked ones.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> src/api2/tape/backup.rs | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
> index 923cb7834..17c8bc605 100644
> --- a/src/api2/tape/backup.rs
> +++ b/src/api2/tape/backup.rs
> @@ -574,7 +574,13 @@ fn backup_snapshot(
> info!("backup snapshot {snapshot_path:?}");
>
> let snapshot_reader = match snapshot.locked_reader() {
> - Ok(reader) => reader,
> + Ok(reader) => {
> + if snapshot.is_trashed() {
> + info!("snapshot {snapshot_path:?} trashed, skipping");
not sure why we log this, but don't log this in other places?
> + return Ok(SnapshotBackupResult::Ignored);
> + }
> + reader
> + }
> Err(err) => {
> if !snapshot.full_path().exists() {
> // we got an error and the dir does not exist,
> --
> 2.39.5
>
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>
>
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 03/21] datastore: allow filtering of backups by their trash status
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 03/21] datastore: allow filtering of backups by their trash status Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
2025-05-12 9:32 ` Christian Ebner
0 siblings, 1 reply; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> Extends the BackupGroup::list_backups method by an enum parameter to
> filter backup snapshots based on their trash status.
>
> This allows reusing the same logic for listing active, trashed or
> all snapshots.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-datastore/src/backup_info.rs | 33 +++++++++++++++++++++++++++++---
> pbs-datastore/src/datastore.rs | 4 ++--
> src/api2/admin/datastore.rs | 10 +++++-----
> src/api2/tape/backup.rs | 4 ++--
> src/backup/verify.rs | 4 ++--
> src/server/prune_job.rs | 3 ++-
> src/server/pull.rs | 3 ++-
> 7 files changed, 45 insertions(+), 16 deletions(-)
>
> diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
> index 9ce4cb0f8..a8c864ac8 100644
> --- a/pbs-datastore/src/backup_info.rs
> +++ b/pbs-datastore/src/backup_info.rs
> @@ -52,6 +52,12 @@ impl fmt::Debug for BackupGroup {
> }
> }
>
> +pub enum ListBackupFilter {
> + Active,
active sounds like there's currently a backup going on..
> + All,
> + Trashed,
> +}
> +
> impl BackupGroup {
> pub(crate) fn new(
> store: Arc<DataStore>,
> @@ -99,7 +105,7 @@ impl BackupGroup {
> self.full_group_path().exists()
> }
>
> - pub fn list_backups(&self) -> Result<Vec<BackupInfo>, Error> {
> + pub fn list_backups(&self, filter: ListBackupFilter) -> Result<Vec<BackupInfo>, Error> {
> let mut list = vec![];
>
> let path = self.full_group_path();
> @@ -117,6 +123,19 @@ impl BackupGroup {
> let files = list_backup_files(l2_fd, backup_time)?;
>
> let protected = backup_dir.is_protected();
> + match filter {
> + ListBackupFilter::All => (),
> + ListBackupFilter::Trashed => {
> + if !backup_dir.is_trashed() {
> + return Ok(());
> + }
> + }
> + ListBackupFilter::Active => {
> + if backup_dir.is_trashed() {
> + return Ok(());
> + }
> + }
> + }
>
> list.push(BackupInfo {
> backup_dir,
> @@ -132,7 +151,7 @@ impl BackupGroup {
>
> /// Finds the latest backup inside a backup group
> pub fn last_backup(&self, only_finished: bool) -> Result<Option<BackupInfo>, Error> {
> - let backups = self.list_backups()?;
> + let backups = self.list_backups(ListBackupFilter::Active)?;
> Ok(backups
> .into_iter()
> .filter(|item| !only_finished || item.is_finished())
> @@ -480,6 +499,11 @@ impl BackupDir {
> path.exists()
> }
>
> + pub fn is_trashed(&self) -> bool {
> + let path = self.full_path().join(TRASH_MARKER_FILENAME);
> + path.exists()
> + }
> +
> pub fn backup_time_to_string(backup_time: i64) -> Result<String, Error> {
> // fixme: can this fail? (avoid unwrap)
> proxmox_time::epoch_to_rfc3339_utc(backup_time)
> @@ -637,7 +661,10 @@ impl BackupDir {
> //
> // Do not error out, as we have already removed the snapshot, there is nothing a user could
> // do to rectify the situation.
> - if guard.is_ok() && group.list_backups()?.is_empty() && !*OLD_LOCKING {
> + if guard.is_ok()
> + && group.list_backups(ListBackupFilter::All)?.is_empty()
> + && !*OLD_LOCKING
> + {
> group.remove_group_dir()?;
> } else if let Err(err) = guard {
> log::debug!("{err:#}");
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index e546bc532..867324380 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -28,7 +28,7 @@ use pbs_api_types::{
> };
> use pbs_config::BackupLockGuard;
>
> -use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, OLD_LOCKING};
> +use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, ListBackupFilter, OLD_LOCKING};
> use crate::chunk_store::ChunkStore;
> use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
> use crate::fixed_index::{FixedIndexReader, FixedIndexWriter};
> @@ -1158,7 +1158,7 @@ impl DataStore {
> _ => bail!("exhausted retries and unexpected counter overrun"),
> };
>
> - let mut snapshots = match group.list_backups() {
> + let mut snapshots = match group.list_backups(ListBackupFilter::All) {
> Ok(snapshots) => snapshots,
> Err(err) => {
> if group.exists() {
> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
> index aafd1bbd7..133a6d658 100644
> --- a/src/api2/admin/datastore.rs
> +++ b/src/api2/admin/datastore.rs
> @@ -51,7 +51,7 @@ use pbs_api_types::{
> };
> use pbs_client::pxar::{create_tar, create_zip};
> use pbs_config::CachedUserInfo;
> -use pbs_datastore::backup_info::BackupInfo;
> +use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter};
> use pbs_datastore::cached_chunk_reader::CachedChunkReader;
> use pbs_datastore::catalog::{ArchiveEntry, CatalogReader};
> use pbs_datastore::data_blob::DataBlob;
> @@ -223,7 +223,7 @@ pub fn list_groups(
> return Ok(group_info);
> }
>
> - let snapshots = match group.list_backups() {
> + let snapshots = match group.list_backups(ListBackupFilter::Active) {
> Ok(snapshots) => snapshots,
> Err(_) => return Ok(group_info),
> };
> @@ -624,7 +624,7 @@ unsafe fn list_snapshots_blocking(
> return Ok(snapshots);
> }
>
> - let group_backups = group.list_backups()?;
> + let group_backups = group.list_backups(ListBackupFilter::Active)?;
>
> snapshots.extend(
> group_backups
> @@ -657,7 +657,7 @@ async fn get_snapshots_count(
> Ok(group) => group,
> Err(_) => return Ok(counts), // TODO: add this as error counts?
> };
> - let snapshot_count = group.list_backups()?.len() as u64;
> + let snapshot_count = group.list_backups(ListBackupFilter::Active)?.len() as u64;
>
> // only include groups with snapshots, counting/displaying empty groups can confuse
> if snapshot_count > 0 {
> @@ -1042,7 +1042,7 @@ pub fn prune(
> }
> let mut prune_result: Vec<PruneResult> = Vec::new();
>
> - let list = group.list_backups()?;
> + let list = group.list_backups(ListBackupFilter::Active)?;
>
> let mut prune_info = compute_prune_info(list, &keep_options)?;
>
> diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
> index 31293a9a9..923cb7834 100644
> --- a/src/api2/tape/backup.rs
> +++ b/src/api2/tape/backup.rs
> @@ -17,7 +17,7 @@ use pbs_api_types::{
> };
>
> use pbs_config::CachedUserInfo;
> -use pbs_datastore::backup_info::{BackupDir, BackupInfo};
> +use pbs_datastore::backup_info::{BackupDir, BackupInfo, ListBackupFilter};
> use pbs_datastore::{DataStore, StoreProgress};
>
> use crate::tape::TapeNotificationMode;
> @@ -433,7 +433,7 @@ fn backup_worker(
> progress.done_snapshots = 0;
> progress.group_snapshots = 0;
>
> - let snapshot_list = group.list_backups()?;
> + let snapshot_list = group.list_backups(ListBackupFilter::Active)?;
>
> // filter out unfinished backups
> let mut snapshot_list: Vec<_> = snapshot_list
> diff --git a/src/backup/verify.rs b/src/backup/verify.rs
> index 3d2cba8ac..1b5e8564b 100644
> --- a/src/backup/verify.rs
> +++ b/src/backup/verify.rs
> @@ -14,7 +14,7 @@ use pbs_api_types::{
> CryptMode, SnapshotVerifyState, VerifyState, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_VERIFY,
> UPID,
> };
> -use pbs_datastore::backup_info::{BackupDir, BackupGroup, BackupInfo};
> +use pbs_datastore::backup_info::{BackupDir, BackupGroup, BackupInfo, ListBackupFilter};
> use pbs_datastore::index::IndexFile;
> use pbs_datastore::manifest::{BackupManifest, FileInfo};
> use pbs_datastore::{DataBlob, DataStore, StoreProgress};
> @@ -411,7 +411,7 @@ pub fn verify_backup_group(
> filter: Option<&dyn Fn(&BackupManifest) -> bool>,
> ) -> Result<Vec<String>, Error> {
> let mut errors = Vec::new();
> - let mut list = match group.list_backups() {
> + let mut list = match group.list_backups(ListBackupFilter::Active) {
> Ok(list) => list,
> Err(err) => {
> info!(
> diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
> index 1c86647a0..596afe086 100644
> --- a/src/server/prune_job.rs
> +++ b/src/server/prune_job.rs
> @@ -1,6 +1,7 @@
> use std::sync::Arc;
>
> use anyhow::Error;
> +use pbs_datastore::backup_info::ListBackupFilter;
> use tracing::{info, warn};
>
> use pbs_api_types::{
> @@ -54,7 +55,7 @@ pub fn prune_datastore(
> )? {
> let group = group?;
> let ns = group.backup_ns();
> - let list = group.list_backups()?;
> + let list = group.list_backups(ListBackupFilter::Active)?;
>
> let mut prune_info = compute_prune_info(list, &prune_options.keep)?;
> prune_info.reverse(); // delete older snapshots first
> diff --git a/src/server/pull.rs b/src/server/pull.rs
> index b1724c142..50d7b0806 100644
> --- a/src/server/pull.rs
> +++ b/src/server/pull.rs
> @@ -7,6 +7,7 @@ use std::sync::{Arc, Mutex};
> use std::time::SystemTime;
>
> use anyhow::{bail, format_err, Error};
> +use pbs_datastore::backup_info::ListBackupFilter;
> use proxmox_human_byte::HumanByte;
> use tracing::info;
>
> @@ -660,7 +661,7 @@ async fn pull_group(
> .target
> .store
> .backup_group(target_ns.clone(), group.clone());
> - let local_list = group.list_backups()?;
> + let local_list = group.list_backups(ListBackupFilter::Active)?;
> for info in local_list {
> let snapshot = info.backup_dir;
> if source_snapshots.contains(&snapshot.backup_time()) {
> --
> 2.39.5
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 02/21] datastore: mark groups as trash on destroy
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 02/21] datastore: mark groups " Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
0 siblings, 0 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> In order to implement the trash can functionality, mark all the
> snapshots of the group and the group itself as trash instead of
> deleting them right away. Cleanup of the group is deferred to the
> garbage collection.
>
> Groups and snapshots are marked by the trash marker file. New backups
> to this group will check for the marker file (see subsequent
> commits) and clear the whole group and all of its snapshots before
> creating a new snapshot within that group, as otherwise ownership
> conflicts could arise. This implies that a new backup clears the
> whole trashed group.
this seems a bit surprising.. couldn't we check the new and old owner,
and abort if there's a mismatch, but proceed otherwise?
the implementation of this also doesn't happen in this patch though, so
maybe drop it here in any case - this just implements trashing groups..
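a minimal sketch of that idea, with names like `get_owner` borrowed
from the surrounding patches and the exact policy of course left open
for discussion:

    // sketch: decide what to do when backing up into a trashed group;
    // returns Ok(()) if the backup may proceed without clearing it
    fn check_trashed_group_owner(
        store: &DataStore,
        ns: &BackupNamespace,
        group: &pbs_api_types::BackupGroup,
        new_owner: &Authid,
    ) -> Result<(), Error> {
        let old_owner = store.get_owner(ns, group)?;
        if &old_owner == new_owner {
            // same owner: no conflict, keep the trashed snapshots around
            Ok(())
        } else {
            // mismatch: abort instead of silently dropping another
            // owner's (trashed, but still recoverable) snapshots
            bail!("backup group is trashed and owned by {old_owner}");
        }
    }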
> Snapshots already marked as trash within the same backup group will
> be cleared as well when the group is requested to be destroyed with
> skip trash.
this makes sense
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-datastore/src/backup_info.rs | 19 ++++++++++++++++---
> pbs-datastore/src/datastore.rs | 4 ++--
> 2 files changed, 18 insertions(+), 5 deletions(-)
>
> diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
> index 76bcd15f5..9ce4cb0f8 100644
> --- a/pbs-datastore/src/backup_info.rs
> +++ b/pbs-datastore/src/backup_info.rs
> @@ -215,7 +215,7 @@ impl BackupGroup {
> ///
> /// Returns `BackupGroupDeleteStats`, containing the number of deleted snapshots
> /// and number of protected snapshots, which therefore were not removed.
> - pub fn destroy(&self) -> Result<BackupGroupDeleteStats, Error> {
> + pub fn destroy(&self, skip_trash: bool) -> Result<BackupGroupDeleteStats, Error> {
> let _guard = self
> .lock()
> .with_context(|| format!("while destroying group '{self:?}'"))?;
> @@ -229,14 +229,20 @@ impl BackupGroup {
> delete_stats.increment_protected_snapshots();
> continue;
> }
> - snap.destroy(false, false)?;
> + snap.destroy(false, skip_trash)?;
> delete_stats.increment_removed_snapshots();
> }
>
> // Note: make sure the old locking mechanism isn't used as `remove_dir_all` is not safe in
> // that case
> if delete_stats.all_removed() && !*OLD_LOCKING {
> - self.remove_group_dir()?;
> + if skip_trash {
> + self.remove_group_dir()?;
> + } else {
> + let path = self.full_group_path().join(TRASH_MARKER_FILENAME);
> + let _trash_file =
> + std::fs::File::create(path).context("failed to set trash file")?;
> + }
> delete_stats.increment_removed_groups();
> }
>
> @@ -245,6 +251,13 @@ impl BackupGroup {
>
> /// Helper function, assumes that no more snapshots are present in the group.
> fn remove_group_dir(&self) -> Result<(), Error> {
> + let trash_path = self.full_group_path().join(TRASH_MARKER_FILENAME);
> + if let Err(err) = std::fs::remove_file(&trash_path) {
> + if err.kind() != std::io::ErrorKind::NotFound {
> + bail!("removing the trash file '{trash_path:?}' failed - {err}")
> + }
> + }
> +
> let owner_path = self.store.owner_path(&self.ns, &self.group);
>
> std::fs::remove_file(&owner_path).map_err(|err| {
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index 6df26e825..e546bc532 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -581,7 +581,7 @@ impl DataStore {
> let mut stats = BackupGroupDeleteStats::default();
>
> for group in self.iter_backup_groups(ns.to_owned())? {
> - let delete_stats = group?.destroy()?;
> + let delete_stats = group?.destroy(true)?;
> stats.add(&delete_stats);
> removed_all_groups = removed_all_groups && delete_stats.all_removed();
> }
> @@ -674,7 +674,7 @@ impl DataStore {
> ) -> Result<BackupGroupDeleteStats, Error> {
> let backup_group = self.backup_group(ns.clone(), backup_group.clone());
>
> - backup_group.destroy()
> + backup_group.destroy(true)
> }
>
> /// Remove a backup directory including all content
> --
> 2.39.5
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 01/21] datastore/api: mark snapshots as trash on destroy
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 01/21] datastore/api: mark snapshots as trash on destroy Christian Ebner
@ 2025-05-09 12:27 ` Fabian Grünbichler
0 siblings, 0 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-09 12:27 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
On May 8, 2025 3:05 pm, Christian Ebner wrote:
> In order to implement the trash can functionality, mark snapshots
> as trash instead of removing them by default. However, provide a
> `skip-trash` flag to opt-out and destroy the snapshot including its
> contents immediately.
>
> Trashed snapshots are marked by creating a `.trashed` marker file
> inside the snapshot folder. Actual removal of the snapshot will be
> deferred to the garbage collection task.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-datastore/src/backup_info.rs | 66 ++++++++++++++++++--------------
> pbs-datastore/src/datastore.rs | 2 +-
> src/api2/admin/datastore.rs | 18 ++++++++-
> 3 files changed, 55 insertions(+), 31 deletions(-)
>
> diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
> index d4732fdd9..76bcd15f5 100644
> --- a/pbs-datastore/src/backup_info.rs
> +++ b/pbs-datastore/src/backup_info.rs
> @@ -21,6 +21,7 @@ use crate::manifest::{BackupManifest, MANIFEST_LOCK_NAME};
> use crate::{DataBlob, DataStore};
>
> pub const DATASTORE_LOCKS_DIR: &str = "/run/proxmox-backup/locks";
> +pub const TRASH_MARKER_FILENAME: &str = ".trashed";
>
> // TODO: Remove with PBS 5
> // Note: The `expect()` call here will only happen if we can neither confirm nor deny the existence
> @@ -228,7 +229,7 @@ impl BackupGroup {
> delete_stats.increment_protected_snapshots();
> continue;
> }
> - snap.destroy(false)?;
> + snap.destroy(false, false)?;
> delete_stats.increment_removed_snapshots();
> }
>
> @@ -575,7 +576,8 @@ impl BackupDir {
> /// Destroy the whole snapshot, bails if it's protected
> ///
> /// Setting `force` to true skips locking and thus ignores if the backup is currently in use.
> - pub fn destroy(&self, force: bool) -> Result<(), Error> {
> + /// Setting `skip_trash` to true will remove the snapshot instead of marking it as trash.
> + pub fn destroy(&self, force: bool, skip_trash: bool) -> Result<(), Error> {
> let (_guard, _manifest_guard);
> if !force {
> _guard = self
> @@ -588,37 +590,45 @@ impl BackupDir {
> bail!("cannot remove protected snapshot"); // use special error type?
> }
>
> - let full_path = self.full_path();
> - log::info!("removing backup snapshot {:?}", full_path);
> - std::fs::remove_dir_all(&full_path).map_err(|err| {
> - format_err!("removing backup snapshot {:?} failed - {}", full_path, err,)
> - })?;
> + let mut full_path = self.full_path();
> + log::info!("removing backup snapshot {full_path:?}");
> + if skip_trash {
> + std::fs::remove_dir_all(&full_path).map_err(|err| {
> + format_err!("removing backup snapshot {full_path:?} failed - {err}")
> + })?;
> + } else {
> + full_path.push(TRASH_MARKER_FILENAME);
> + let _trash_file =
> + std::fs::File::create(full_path).context("failed to set trash file")?;
> + }
>
> // remove no longer needed lock files
> let _ = std::fs::remove_file(self.manifest_lock_path()); // ignore errors
> let _ = std::fs::remove_file(self.lock_path()); // ignore errors
>
> - let group = BackupGroup::from(self);
> - let guard = group.lock().with_context(|| {
> - format!("while checking if group '{group:?}' is empty during snapshot destruction")
> - });
> -
> - // Only remove the group if all of the following is true:
> - //
> - // - we can lock it: if we can't lock the group, it is still in use (either by another
> - // backup process or a parent caller (who needs to take care that empty groups are
> - // removed themselves).
> - // - it is now empty: if the group isn't empty, removing it will fail (to avoid removing
> - // backups that might still be used).
> - // - the new locking mechanism is used: if the old mechanism is used, a group removal here
> - // could lead to a race condition.
> - //
> - // Do not error out, as we have already removed the snapshot, there is nothing a user could
> - // do to rectify the situation.
> - if guard.is_ok() && group.list_backups()?.is_empty() && !*OLD_LOCKING {
> - group.remove_group_dir()?;
> - } else if let Err(err) = guard {
> - log::debug!("{err:#}");
> + if skip_trash {
> + let group = BackupGroup::from(self);
> + let guard = group.lock().with_context(|| {
> + format!("while checking if group '{group:?}' is empty during snapshot destruction")
> + });
> +
> + // Only remove the group if all of the following is true:
> + //
> + // - we can lock it: if we can't lock the group, it is still in use (either by another
> + // backup process or a parent caller (who needs to take care that empty groups are
> + // removed themselves).
> + // - it is now empty: if the group isn't empty, removing it will fail (to avoid removing
> + // backups that might still be used).
> + // - the new locking mechanism is used: if the old mechanism is used, a group removal here
> + // could lead to a race condition.
> + //
> + // Do not error out, as we have already removed the snapshot, there is nothing a user could
> + // do to rectify the situation.
> + if guard.is_ok() && group.list_backups()?.is_empty() && !*OLD_LOCKING {
> + group.remove_group_dir()?;
> + } else if let Err(err) = guard {
> + log::debug!("{err:#}");
> + }
> }
>
> Ok(())
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index cbf78ecb6..6df26e825 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -686,7 +686,7 @@ impl DataStore {
> ) -> Result<(), Error> {
> let backup_dir = self.backup_dir(ns.clone(), backup_dir.clone())?;
>
> - backup_dir.destroy(force)
> + backup_dir.destroy(force, true)
> }
>
> /// Returns the time of the last successful backup
> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
> index 392494488..aafd1bbd7 100644
> --- a/src/api2/admin/datastore.rs
> +++ b/src/api2/admin/datastore.rs
> @@ -402,6 +402,12 @@ pub async fn list_snapshot_files(
> type: pbs_api_types::BackupDir,
> flatten: true,
> },
> + "skip-trash": {
> + type: bool,
> + optional: true,
> + default: false,
should this default to false in the backend? wouldn't that be a bit
surprising for scripted access? or is this 4.0 material anyway? ;)
> + description: "Immediately remove the snapshot, not marking it as trash.",
> + },
> },
> },
> access: {
> @@ -415,6 +421,7 @@ pub async fn delete_snapshot(
> store: String,
> ns: Option<BackupNamespace>,
> backup_dir: pbs_api_types::BackupDir,
> + skip_trash: bool,
> _info: &ApiMethod,
> rpcenv: &mut dyn RpcEnvironment,
> ) -> Result<Value, Error> {
> @@ -435,7 +442,7 @@ pub async fn delete_snapshot(
>
> let snapshot = datastore.backup_dir(ns, backup_dir)?;
>
> - snapshot.destroy(false)?;
> + snapshot.destroy(false, skip_trash)?;
>
> Ok(Value::Null)
> })
> @@ -979,6 +986,12 @@ pub fn verify(
> optional: true,
> description: "Spins up an asynchronous task that does the work.",
> },
> + "skip-trash": {
> + type: bool,
> + optional: true,
> + default: false,
> + description: "Immediately remove the snapshot, not marking it as trash.",
> + },
> },
> },
> returns: pbs_api_types::ADMIN_DATASTORE_PRUNE_RETURN_TYPE,
> @@ -995,6 +1008,7 @@ pub fn prune(
> keep_options: KeepOptions,
> store: String,
> ns: Option<BackupNamespace>,
> + skip_trash: bool,
> param: Value,
> rpcenv: &mut dyn RpcEnvironment,
> ) -> Result<Value, Error> {
> @@ -1098,7 +1112,7 @@ pub fn prune(
> });
>
> if !keep {
> - if let Err(err) = backup_dir.destroy(false) {
> + if let Err(err) = backup_dir.destroy(false, skip_trash) {
> warn!(
> "failed to remove dir {:?}: {}",
> backup_dir.relative_path(),
> --
> 2.39.5
>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 18/21] api: admin: implement endpoints to restore trashed contents
2025-05-09 12:27 ` Fabian Grünbichler
@ 2025-05-09 12:59 ` Christian Ebner
2025-05-12 10:03 ` Fabian Grünbichler
0 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-09 12:59 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
Thanks for the feedback, I will have a closer look next week.
Allow me two quick questions inline, though...
On 5/9/25 14:27, Fabian Grünbichler wrote:
> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>> Implements the api endpoints to restore trashed contents contained
>> within namespaces, backup groups or individual snapshots.
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> src/api2/admin/datastore.rs | 173 +++++++++++++++++++++++++++++++++++-
>> 1 file changed, 172 insertions(+), 1 deletion(-)
>>
>> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
>> index cbd24c729..eb033c3fc 100644
>> --- a/src/api2/admin/datastore.rs
>> +++ b/src/api2/admin/datastore.rs
>> @@ -51,7 +51,7 @@ use pbs_api_types::{
>> };
>> use pbs_client::pxar::{create_tar, create_zip};
>> use pbs_config::CachedUserInfo;
>> -use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter};
>> +use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter, TRASH_MARKER_FILENAME};
>> use pbs_datastore::cached_chunk_reader::CachedChunkReader;
>> use pbs_datastore::catalog::{ArchiveEntry, CatalogReader};
>> use pbs_datastore::data_blob::DataBlob;
>> @@ -2727,6 +2727,165 @@ pub async fn unmount(store: String, rpcenv: &mut dyn RpcEnvironment) -> Result<V
>> Ok(json!(upid))
>> }
>>
>> +#[api(
>> + input: {
>> + properties: {
>> + store: { schema: DATASTORE_SCHEMA },
>> + ns: { type: BackupNamespace, },
>> + },
>> + },
>> + access: {
>> + permission: &Permission::Anybody,
>> + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
>> + or DATASTORE_BACKUP and being the owner of the group",
>> + },
>> +)]
>> +/// Recover trashed contents of a namespace.
>> +pub fn recover_namespace(
>> + store: String,
>> + ns: BackupNamespace,
>> + rpcenv: &mut dyn RpcEnvironment,
>> +) -> Result<(), Error> {
>> + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
>> + let limited = check_ns_privs_full(
>> + &store,
>> + &ns,
>> + &auth_id,
>> + PRIV_DATASTORE_MODIFY,
>> + PRIV_DATASTORE_BACKUP,
>> + )?;
>> +
>> + let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
>> +
>> + for backup_group in datastore.iter_backup_groups(ns.clone())? {
>> + let backup_group = backup_group?;
>> + if limited {
>> + let owner = datastore.get_owner(&ns, backup_group.group())?;
>> + if check_backup_owner(&owner, &auth_id).is_err() {
>> + continue;
>> + }
>> + }
>> + do_recover_group(&backup_group)?;
>> + }
>> +
>> + Ok(())
>> +}
>> +
>> +#[api(
>> + input: {
>> + properties: {
>> + store: { schema: DATASTORE_SCHEMA },
>> + group: {
>> + type: pbs_api_types::BackupGroup,
>> + flatten: true,
>> + },
>> + ns: {
>> + type: BackupNamespace,
>> + optional: true,
>> + },
>> + },
>> + },
>> + access: {
>> + permission: &Permission::Anybody,
>> + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
>> + or DATASTORE_BACKUP and being the owner of the group",
>> + },
>> +)]
>> +/// Recover trashed contents of a backup group.
>> +pub fn recover_group(
>> + store: String,
>> + group: pbs_api_types::BackupGroup,
>> + ns: Option<BackupNamespace>,
>> + rpcenv: &mut dyn RpcEnvironment,
>> +) -> Result<(), Error> {
>> + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
>> + let ns = ns.unwrap_or_default();
>> + let datastore = check_privs_and_load_store(
>> + &store,
>> + &ns,
>> + &auth_id,
>> + PRIV_DATASTORE_MODIFY,
>> + PRIV_DATASTORE_BACKUP,
>> + Some(Operation::Write),
>> + &group,
>> + )?;
>> +
>> + let backup_group = datastore.backup_group(ns, group);
>> + do_recover_group(&backup_group)?;
>> +
>> + Ok(())
>> +}
>> +
>> +fn do_recover_group(backup_group: &BackupGroup) -> Result<(), Error> {
>
> missing locking for the group?
Not sure about that one. After all, the group is trashed at least as
long as all the snapshots are trashed. And GC will only ever clean up
the group folder if the trash marker is still set. So I do not see a
reason why this should be locked.
>
>> + let trashed_snapshots = backup_group.list_backups(ListBackupFilter::Trashed)?;
>> + for snapshot in trashed_snapshots {
>> + do_recover_snapshot(&snapshot.backup_dir)?;
>> + }
>> +
>> + let group_trash_path = backup_group.full_group_path().join(TRASH_MARKER_FILENAME);
>> + if let Err(err) = std::fs::remove_file(&group_trash_path) {
>> + if err.kind() != std::io::ErrorKind::NotFound {
>> + bail!("failed to remove group trash file {group_trash_path:?} - {err}");
>> + }
>> + }
>> + Ok(())
>> +}
>> +
>> +#[api(
>> + input: {
>> + properties: {
>> + store: { schema: DATASTORE_SCHEMA },
>> + backup_dir: {
>> + type: pbs_api_types::BackupDir,
>> + flatten: true,
>> + },
>> + ns: {
>> + type: BackupNamespace,
>> + optional: true,
>> + },
>> + },
>> + },
>> + access: {
>> + permission: &Permission::Anybody,
>> + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
>> + or DATASTORE_BACKUP and being the owner of the group",
>> + },
>> +)]
>> +/// Recover trashed contents of a backup snapshot.
>> +pub fn recover_snapshot(
>> + store: String,
>> + backup_dir: pbs_api_types::BackupDir,
>> + ns: Option<BackupNamespace>,
>> + rpcenv: &mut dyn RpcEnvironment,
>> +) -> Result<(), Error> {
>> + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
>> + let ns = ns.unwrap_or_default();
>> + let datastore = check_privs_and_load_store(
>> + &store,
>> + &ns,
>> + &auth_id,
>> + PRIV_DATASTORE_MODIFY,
>> + PRIV_DATASTORE_BACKUP,
>> + Some(Operation::Write),
>> + &backup_dir.group,
>> + )?;
>> +
>> + let snapshot = datastore.backup_dir(ns, backup_dir)?;
>> + do_recover_snapshot(&snapshot)?;
>> +
>> + Ok(())
>> +}
>> +
>> +fn do_recover_snapshot(snapshot_dir: &BackupDir) -> Result<(), Error> {
>
> missing locking for the snapshot?
Why? remove_file() should be atomic?
>
>> + let trash_path = snapshot_dir.full_path().join(TRASH_MARKER_FILENAME);
>> + if let Err(err) = std::fs::remove_file(&trash_path) {
>> + if err.kind() != std::io::ErrorKind::NotFound {
>> + bail!("failed to remove trash file {trash_path:?} - {err}");
>> + }
>> + }
>> + Ok(())
>> +}
>> +
>> #[sortable]
>> const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
>> (
>> @@ -2792,6 +2951,18 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
>> "pxar-file-download",
>> &Router::new().download(&API_METHOD_PXAR_FILE_DOWNLOAD),
>> ),
>> + (
>> + "recover-group",
>> + &Router::new().post(&API_METHOD_RECOVER_GROUP),
>
> I am not sure whether those should be POST or PUT, they are modifying an
> existing (trashed) group/snapshot/.. after all?
>
>> + ),
>> + (
>> + "recover-namespace",
>> + &Router::new().post(&API_METHOD_RECOVER_NAMESPACE),
>> + ),
>> + (
>> + "recover-snapshot",
>> + &Router::new().post(&API_METHOD_RECOVER_SNAPSHOT),
>> + ),
>> ("rrd", &Router::new().get(&API_METHOD_GET_RRD_STATS)),
>> (
>> "snapshots",
>> --
>> 2.39.5
>>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 10/21] datastore: mark namespace as trash instead of deleting it
2025-05-09 12:27 ` Fabian Grünbichler
@ 2025-05-12 7:47 ` Christian Ebner
2025-05-12 9:46 ` Fabian Grünbichler
0 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-12 7:47 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 5/9/25 14:27, Fabian Grünbichler wrote:
> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>> As for backup snapshots and groups, mark the namespace as trash
>> instead of removing it and the contents right away, if the trash
>> should not be bypassed.
>>
>> Actual removal of the hierarchical folder structure has to be taken
>> care of by garbage collection.
>>
>> In order to avoid races during removal, first mark the namespaces as
>> trash pending, mark the snapshots and groups as trash and only after
>> rename the pending marker file to the trash marker file. By this,
>> concurrent backups can remove the trash pending marker to avoid the
>> namespace being trashed.
>>
>> On re-creation of a trashed namespace remove the marker file on itself
>> and any parent component from deepest to shallowest. As trashing a full
>> namespace can also set the trash pending state for recursive namespace
>> cleanup, remove any encountered instances of that marker file as
>> well to avoid the namespace or its parent being trashed.
>
> this is fairly involved since we don't have locks on namespaces..
>
> should we have them for creation/removal/trashing/untrashing?
That is what I tried to avoid at all costs with the two-marker
approach, as locking the namespace might be rather invasive. But if
that does not work out as intended, I see no other way than to add
exclusive locking for namespaces as well, yes.
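For reference, the two-marker sequence from the patch boils down to
roughly this (sketch only, marker names from the patch, most error
handling elided):

    let pending = ns_path.join(".pending");
    let trashed = ns_path.join(".trashed");
    // 1) announce the pending trash operation
    std::fs::File::create(&pending)?;
    // 2) trash all contained groups and snapshots; a concurrent backup
    //    may remove `.pending` here to veto trashing the namespace
    // 3) atomically promote the marker; NotFound here means a
    //    concurrent backup vetoed in the meantime
    match std::fs::rename(&pending, &trashed) {
        Ok(()) => {}
        Err(err) if err.kind() == std::io::ErrorKind::NotFound => {
            // namespace was re-used concurrently, leave it alive
        }
        Err(err) => return Err(err.into()),
    }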
>
> I assume those are fairly rare occurrences, I haven't yet analyzed the
> interactions here to see whether the two-marker approach is actually
> race-free..
>
> OTOH, do we really need to (be able to) trash namespaces?
Yes, I think we do need that as well, since the datastore's hierarchy
should remain in place, and the namespace iterator requires a way to
distinguish between a namespace which has been trashed/deleted and a
namespace which has not, but might contain trashed items. Otherwise a
user requesting to forget a namespace still sees the (empty, as it only
contains trashed contents) namespace tree after the operation, which
would be rather unexpected.
>
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> pbs-datastore/src/datastore.rs | 135 +++++++++++++++++++++++++++++----
>> src/api2/admin/namespace.rs | 2 +-
>> src/server/pull.rs | 2 +-
>> 3 files changed, 123 insertions(+), 16 deletions(-)
>>
>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>> index 023a6a12e..b1d81e199 100644
>> --- a/pbs-datastore/src/datastore.rs
>> +++ b/pbs-datastore/src/datastore.rs
>> @@ -28,7 +28,9 @@ use pbs_api_types::{
>> };
>> use pbs_config::BackupLockGuard;
>>
>> -use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, ListBackupFilter, OLD_LOCKING};
>> +use crate::backup_info::{
>> + BackupDir, BackupGroup, BackupInfo, ListBackupFilter, OLD_LOCKING, TRASH_MARKER_FILENAME,
>> +};
>> use crate::chunk_store::ChunkStore;
>> use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
>> use crate::fixed_index::{FixedIndexReader, FixedIndexWriter};
>> @@ -42,6 +44,8 @@ use crate::DataBlob;
>> static DATASTORE_MAP: LazyLock<Mutex<HashMap<String, Arc<DataStoreImpl>>>> =
>> LazyLock::new(|| Mutex::new(HashMap::new()));
>>
>> +const TRASH_PENDING_MARKER_FILENAME: &str = ".pending";
>> +
>> /// checks if auth_id is owner, or, if owner is a token, if
>> /// auth_id is the user of the token
>> pub fn check_backup_owner(owner: &Authid, auth_id: &Authid) -> Result<(), Error> {
>> @@ -554,6 +558,7 @@ impl DataStore {
>> ns_full_path.push(ns.path());
>>
>> std::fs::create_dir_all(ns_full_path)?;
>> + self.untrash_namespace(&ns)?;
>>
>> Ok(ns)
>> }
>> @@ -565,6 +570,34 @@ impl DataStore {
>> path.exists()
>> }
>>
>> + /// Remove the namespace and all its parent components from the trash by removing the trash or
>> + /// trash-pending marker file for each namespace level from deepest to shallowest. Missing files
>> + /// are ignored.
>> + pub fn untrash_namespace(&self, namespace: &BackupNamespace) -> Result<(), Error> {
>> + let mut namespace = namespace.clone();
>> + while namespace.depth() > 0 {
>> + let mut trash_file_path = self.base_path();
>> + trash_file_path.push(namespace.path());
>> + let mut pending_file_path = trash_file_path.clone();
>> + pending_file_path.push(TRASH_PENDING_MARKER_FILENAME);
>> + if let Err(err) = std::fs::remove_file(&pending_file_path) {
>> + // ignore not found, either not trashed or un-trashed by concurrent operation
>> + if err.kind() != std::io::ErrorKind::NotFound {
>> + bail!("failed to remove trash-pending file {trash_file_path:?}: {err}");
>> + }
>> + }
>> + trash_file_path.push(TRASH_MARKER_FILENAME);
>> + if let Err(err) = std::fs::remove_file(&trash_file_path) {
>> + // ignore not found, either not trashed or un-trashed by concurrent operation
>> + if err.kind() != std::io::ErrorKind::NotFound {
>> + bail!("failed to remove trash file {trash_file_path:?}: {err}");
>> + }
>> + }
>> + namespace.pop();
>> + }
>> + Ok(())
>> + }
>> +
>> /// Remove all backup groups of a single namespace level but not the namespace itself.
>> ///
>> /// Does *not* descend into child namespaces and doesn't remove the namespace itself either.
>> @@ -574,6 +607,7 @@ impl DataStore {
>> pub fn remove_namespace_groups(
>> self: &Arc<Self>,
>> ns: &BackupNamespace,
>> + skip_trash: bool,
>> ) -> Result<(bool, BackupGroupDeleteStats), Error> {
>> // FIXME: locking? The single groups/snapshots are already protected, so may not be
>> // necessary (depends on what we all allow to do with namespaces)
>> @@ -583,20 +617,22 @@ impl DataStore {
>> let mut stats = BackupGroupDeleteStats::default();
>>
>> for group in self.iter_backup_groups(ns.to_owned())? {
>> - let delete_stats = group?.destroy(true)?;
>> + let delete_stats = group?.destroy(skip_trash)?;
>> stats.add(&delete_stats);
>> removed_all_groups = removed_all_groups && delete_stats.all_removed();
>> }
>>
>> - let base_file = std::fs::File::open(self.base_path())?;
>> - let base_fd = base_file.as_raw_fd();
>> - for ty in BackupType::iter() {
>> - let mut ty_dir = ns.path();
>> - ty_dir.push(ty.to_string());
>> - // best effort only, but we probably should log the error
>> - if let Err(err) = unlinkat(Some(base_fd), &ty_dir, UnlinkatFlags::RemoveDir) {
>> - if err != nix::errno::Errno::ENOENT {
>> - log::error!("failed to remove backup type {ty} in {ns} - {err}");
>> + if skip_trash {
>> + let base_file = std::fs::File::open(self.base_path())?;
>> + let base_fd = base_file.as_raw_fd();
>> + for ty in BackupType::iter() {
>> + let mut ty_dir = ns.path();
>> + ty_dir.push(ty.to_string());
>> + // best effort only, but we probably should log the error
>> + if let Err(err) = unlinkat(Some(base_fd), &ty_dir, UnlinkatFlags::RemoveDir) {
>> + if err != nix::errno::Errno::ENOENT {
>> + log::error!("failed to remove backup type {ty} in {ns} - {err}");
>> + }
>> }
>> }
>> }
>> @@ -613,6 +649,7 @@ impl DataStore {
>> self: &Arc<Self>,
>> ns: &BackupNamespace,
>> delete_groups: bool,
>> + skip_trash: bool,
>> ) -> Result<(bool, BackupGroupDeleteStats), Error> {
>> let store = self.name();
>> let mut removed_all_requested = true;
>> @@ -620,16 +657,68 @@ impl DataStore {
>> if delete_groups {
>> log::info!("removing whole namespace recursively below {store}:/{ns}",);
>> for ns in self.recursive_iter_backup_ns(ns.to_owned(), NamespaceListFilter::Active)? {
>> - let (removed_ns_groups, delete_stats) = self.remove_namespace_groups(&ns?)?;
>> + let namespace = ns?;
>> +
>> + if !skip_trash {
>> + let mut path = self.base_path();
>> + path.push(namespace.path());
>> + path.push(TRASH_PENDING_MARKER_FILENAME);
>> + if let Err(err) = std::fs::File::create(&path) {
>
> pending marker created here iff delete_groups && !skip_trash
>
>> + if err.kind() != std::io::ErrorKind::AlreadyExists {
>> + return Err(err).context("failed to set trash pending marker file");
>> + }
>> + }
>> + }
>> +
>> + let (removed_ns_groups, delete_stats) =
>> + self.remove_namespace_groups(&namespace, skip_trash)?;
>> stats.add(&delete_stats);
>> removed_all_requested = removed_all_requested && removed_ns_groups;
>> +
>> + if !skip_trash && !removed_ns_groups {
>> + self.untrash_namespace(&namespace)?;
>> + }
>> }
>> } else {
>> log::info!("pruning empty namespace recursively below {store}:/{ns}");
>> }
>>
>> - removed_all_requested =
>> - removed_all_requested && self.destroy_namespace_recursive(ns, delete_groups)?;
>> + if skip_trash {
>> + removed_all_requested =
>> + removed_all_requested && self.destroy_namespace_recursive(ns, delete_groups)?;
>> + return Ok((removed_all_requested, stats));
>
> early return here iff skip_trash
>
>> + }
>> +
>> + let mut children = self
>> + .recursive_iter_backup_ns(ns.to_owned(), NamespaceListFilter::Active)?
>> + .collect::<Result<Vec<BackupNamespace>, Error>>()?;
>> +
>> + children.sort_by_key(|b| std::cmp::Reverse(b.depth()));
>> +
>> + let mut all_trashed = true;
>> + for ns in children.iter() {
>> + let mut ns_dir = ns.path();
>> + ns_dir.push("ns");
>> +
>> + if !ns.is_root() {
>> + let mut path = self.base_path();
>> + path.push(ns.path());
>> +
>> + let pending_path = path.join(TRASH_PENDING_MARKER_FILENAME);
>> + path.push(TRASH_MARKER_FILENAME);
>> + if let Err(err) = std::fs::rename(pending_path, path) {
>
> pending marker assumed to exist here iff !skip_trash
>
>> + if err.kind() == std::io::ErrorKind::NotFound {
>> + all_trashed = false;
>> + } else {
>> + return Err(err).context("Renaming pending marker to trash marker failed");
>> + }
>> + }
>> + }
>> + }
>> +
>> + if !all_trashed {
>> + bail!("failed to prune namespace, not empty");
>
> wrong error returned as a result..
Ah, good catch! Indeed the logic does not work when `delete_groups`
and `skip_trash` are false and the namespace to delete is empty. Will
fix this in an upcoming iteration of the patches.
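One possible shape for that fix, under the assumption that the pending
markers are only ever created in the `delete_groups` path (sketch only,
the actual iteration may look different):

    if delete_groups {
        // pending markers were created above, promote them to trash
        // markers (rename `.pending` -> `.trashed`), treating NotFound
        // as a concurrent veto
    } else {
        // empty-namespace prune: no pending markers exist, so don't
        // attempt the rename at all - just trash (or remove) the
        // namespaces that are verified to be empty
    }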
>
>> + }
>>
>> Ok((removed_all_requested, stats))
>> }
>> @@ -657,6 +746,24 @@ impl DataStore {
>> let _ = unlinkat(Some(base_fd), &ns_dir, UnlinkatFlags::RemoveDir);
>>
>> if !ns.is_root() {
>> + let rel_trash_path = ns.path().join(TRASH_MARKER_FILENAME);
>> + if let Err(err) =
>> + unlinkat(Some(base_fd), &rel_trash_path, UnlinkatFlags::NoRemoveDir)
>> + {
>> + if err != nix::errno::Errno::ENOENT {
>> + bail!("removing the trash file '{rel_trash_path:?}' failed - {err}")
>> + }
>> + }
>> + let rel_pending_path = ns.path().join(TRASH_PENDING_MARKER_FILENAME);
>> + if let Err(err) =
>> + unlinkat(Some(base_fd), &rel_pending_path, UnlinkatFlags::NoRemoveDir)
>> + {
>> + if err != nix::errno::Errno::ENOENT {
>> + bail!(
>> + "removing the trash pending file '{rel_pending_path:?}' failed - {err}"
>> + )
>> + }
>> + }
>> match unlinkat(Some(base_fd), &ns.path(), UnlinkatFlags::RemoveDir) {
>> Ok(()) => log::debug!("removed namespace {ns}"),
>> Err(nix::errno::Errno::ENOENT) => {
>> diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
>> index e5463524a..a12d97883 100644
>> --- a/src/api2/admin/namespace.rs
>> +++ b/src/api2/admin/namespace.rs
>> @@ -166,7 +166,7 @@ pub fn delete_namespace(
>>
>> let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
>>
>> - let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups)?;
>> + let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups, true)?;
>> if !removed_all {
>> let err_msg = if delete_groups {
>> if datastore.old_locking() {
>> diff --git a/src/server/pull.rs b/src/server/pull.rs
>> index d3c6fcf6a..a60ccbf10 100644
>> --- a/src/server/pull.rs
>> +++ b/src/server/pull.rs
>> @@ -729,7 +729,7 @@ fn check_and_remove_ns(params: &PullParameters, local_ns: &BackupNamespace) -> R
>> let (removed_all, _delete_stats) = params
>> .target
>> .store
>> - .remove_namespace_recursive(local_ns, true)?;
>> + .remove_namespace_recursive(local_ns, true, true)?;
>>
>> Ok(removed_all)
>> }
>> --
>> 2.39.5
>>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 16/21] api: datastore: add flag to list trashed snapshots only
2025-05-09 12:27 ` Fabian Grünbichler
@ 2025-05-12 7:57 ` Christian Ebner
2025-05-12 10:01 ` Fabian Grünbichler
0 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-12 7:57 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 5/9/25 14:27, Fabian Grünbichler wrote:
> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>> Allows to conditionally show either active or trashed backup
>> snapshots, the latter being used when displaying the contents of the
>> trash for given datastore.
>
> and what if I want to list both/all?
Did not include that option/variant as not required for the WebUI and
possible to achieve by 2 API calls instead of a single one with the
respective flags.
Further, listing both of them with the same call would require to add a
trash flag to the response item type, which I tried to avoid.
Do we require to be able to list both with the same api call?
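For reference, that avoided flag would look roughly like this
(hypothetical sketch of what I tried to avoid, not part of the series):

    pub struct SnapshotListItem {
        // ... existing fields ...
        /// only meaningful when listing active and trashed snapshots
        /// together (hypothetical addition)
        pub trashed: Option<bool>,
    }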
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 13/21] datastore: recreate trashed backup groups if requested
2025-05-09 12:27 ` Fabian Grünbichler
@ 2025-05-12 8:05 ` Christian Ebner
2025-05-12 10:02 ` Fabian Grünbichler
0 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-12 8:05 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 5/9/25 14:27, Fabian Grünbichler wrote:
> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>> A whole backup group might have been marked as trashed, including all
>> of the contained snapshots.
>>
>> Since a new backup to that group (even as different user/owner)
>> should still work, permanently clear the whole trashed group before
>> recreation. This will limit the trash lifetime as now the group is
>> not recoverable until next garbage collection.
>
> IMHO this is phrased in a way that makes it hard to parse, and in any
> case, such things should go into the docs..
Acked, will add a section to the docs for handling and implications of
trashed items.
>
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> pbs-datastore/src/datastore.rs | 26 ++++++++++++++++++++++++++
>> 1 file changed, 26 insertions(+)
>>
>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>> index 4f7766c36..ca05e1bea 100644
>> --- a/pbs-datastore/src/datastore.rs
>> +++ b/pbs-datastore/src/datastore.rs
>> @@ -934,6 +934,32 @@ impl DataStore {
>> let guard = backup_group.lock().with_context(|| {
>> format!("while creating locked backup group '{backup_group:?}'")
>> })?;
>> + if backup_group.is_trashed() {
>> + info!("clear trashed backup group {full_path:?}");
>
> I think we should only do this if the new and old owner are not
> identical..
Hmm, not sure whether that would not introduce other possible
issues/confusion?
E.g. a PVE host creates snapshots for a VM/CT with a given ID to the
corresponding backup group. The group gets pruned as no longer
required, the VM/CT destroyed. A new VM/CT is created on the PVE host
and backups created to the (trashed) group...
>
>> + let dir_entries = std::fs::read_dir(&full_path).context(
>> + "failed to read directory contents during cleanup of trashed group",
>> + )?;
>> + for entry in dir_entries {
>> + let entry = entry.context(
>> + "failed to read directory entry during clenup of trashed group",
>> + )?;
>> + let file_type = entry.file_type().context(
>> + "failed to get entry file type during clenup of trashed group",
>> + )?;
>> + if file_type.is_dir() {
>> + std::fs::remove_dir_all(entry.path())
>> + .context("failed to remove directory entry during clenup of trashed snapshot")?;
>> + } else {
>> + std::fs::remove_file(entry.path())
>> + .context("failed to remove directory entry during clenup of trashed snapshot")?;
>> + }
>> + }
>> + self.set_owner(ns, backup_group.group(), auth_id, false)?;
>> + let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
>
> sure about that? we are holding a lock here, nobody is allowed to change
> the owner but us..
Not really, opted for staying on the safe side here, because the
pre-existing code does it as well, but without mentioning why exactly.
>
>> + self.untrash_namespace(ns)?;
>> + return Ok((owner, guard));
>> + }
>> +
>> let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
>> Ok((owner, guard))
>> }
>> --
>> 2.39.5
>>
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 12/21] datastore: clear trashed snapshot dir if re-creation requested
2025-05-09 12:27 ` Fabian Grünbichler
@ 2025-05-12 8:31 ` Christian Ebner
0 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-12 8:31 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 5/9/25 14:27, Fabian Grünbichler wrote:
> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>> If a previously trashed snapshot has been requested for re-creation
>> (e.g. by a sync job in push direction), drop the contents of the
>> currently trashed snapshot.
>> The snapshot directory itself is already locked at that point, either
>> by the old locking mechanism acting on the directory itself or by the
>> new locking mechanism. Therefore, concurrent operations can be
>> excluded.
>>
>> For the call site this acts as if the snapshot directory has been
>> newly created.
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> pbs-datastore/src/datastore.rs | 29 ++++++++++++++++++++++++++++-
>> 1 file changed, 28 insertions(+), 1 deletion(-)
>>
>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>> index ffc6a7039..4f7766c36 100644
>> --- a/pbs-datastore/src/datastore.rs
>> +++ b/pbs-datastore/src/datastore.rs
>> @@ -951,8 +951,9 @@ impl DataStore {
>> ) -> Result<(PathBuf, bool, BackupLockGuard), Error> {
>> let backup_dir = self.backup_dir(ns.clone(), backup_dir.clone())?;
>> let relative_path = backup_dir.relative_path();
>> + let full_path = backup_dir.full_path();
>>
>> - match std::fs::create_dir(backup_dir.full_path()) {
>> + match std::fs::create_dir(&full_path) {
>> Ok(_) => {
>> let guard = backup_dir.lock().with_context(|| {
>> format!("while creating new locked snapshot '{backup_dir:?}'")
>> @@ -963,6 +964,32 @@ impl DataStore {
>> let guard = backup_dir
>> .lock()
>> .with_context(|| format!("while creating locked snapshot '{backup_dir:?}'"))?;
>> +
>> + if backup_dir.is_trashed() {
>> + info!("clear trashed backup snapshot {full_path:?}");
>> + let dir_entries = std::fs::read_dir(&full_path).context(
>> + "failed to read directory contents during cleanup of trashed snapshot",
>> + )?;
>> + for entry in dir_entries {
>> + let entry = entry.context(
>> + "failed to read directory entry during clenup of trashed snapshot",
>> + )?;
>> + // Only expect regular file entries
>> + std::fs::remove_file(entry.path()).context(
>> + "failed to remove directory entry during clenup of trashed snapshot",
>> + )?;
>> + }
>> + let group = BackupGroup::from(backup_dir);
>> + let group_trash_file = group.full_group_path().join(TRASH_MARKER_FILENAME);
>> + if let Err(err) = std::fs::remove_file(&group_trash_file) {
>> + if err.kind() != std::io::ErrorKind::NotFound {
>> + bail!("failed to remove group trash file of trashed snapshot");
>> + }
>> + }
>
> this shouldn't be possible to hit, right? as creating a backup dir
> entails first creating the backup group (guarded by the group lock), and
> that would already clear the group's trash marker..
Yes, you are right: The whole group and namespace un-trashing logic is
already performed by `create_locked_backup_group` and redundant at this
point. So I will drop this and add a comment mentioning this fact instead.
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot
2025-05-09 12:27 ` Fabian Grünbichler
@ 2025-05-12 9:19 ` Christian Ebner
2025-05-12 9:38 ` Fabian Grünbichler
0 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-12 9:19 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 5/9/25 14:27, Fabian Grünbichler wrote:
> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>> Since snapshots might be marked as trash, the snapshot directory
>> can still be present until cleaned up by garbage collection.
>>
>> Therefore, check for the presence of the trash marker after acquiring
>> the locked snapshot reader and skip over marked ones.
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> src/api2/tape/backup.rs | 8 +++++++-
>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
>> index 923cb7834..17c8bc605 100644
>> --- a/src/api2/tape/backup.rs
>> +++ b/src/api2/tape/backup.rs
>> @@ -574,7 +574,13 @@ fn backup_snapshot(
>> info!("backup snapshot {snapshot_path:?}");
>>
>> let snapshot_reader = match snapshot.locked_reader() {
>> - Ok(reader) => reader,
>> + Ok(reader) => {
>> + if snapshot.is_trashed() {
>> + info!("snapshot {snapshot_path:?} trashed, skipping");
>
> not sure why we log this, but don't log this in other places?
The intention was to keep the pre-existing logging, as for vanished
snapshots (pruned in the meantime), but make the 2 cases
distinguishable.
So I think that either the logging should be dropped for both cases, or
this should be logged as is. Opinions?
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 03/21] datastore: allow filtering of backups by their trash status
2025-05-09 12:27 ` Fabian Grünbichler
@ 2025-05-12 9:32 ` Christian Ebner
2025-05-12 10:08 ` Fabian Grünbichler
0 siblings, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-12 9:32 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 5/9/25 14:27, Fabian Grünbichler wrote:
> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>> Extends the BackupGroup::list_backups method by an enum parameter to
>> filter backup snapshots based on their trash status.
>>
>> This allows to reuse the same logic for listing active, trashed or
>> all snapshots.
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> pbs-datastore/src/backup_info.rs | 33 +++++++++++++++++++++++++++++---
>> pbs-datastore/src/datastore.rs | 4 ++--
>> src/api2/admin/datastore.rs | 10 +++++-----
>> src/api2/tape/backup.rs | 4 ++--
>> src/backup/verify.rs | 4 ++--
>> src/server/prune_job.rs | 3 ++-
>> src/server/pull.rs | 3 ++-
>> 7 files changed, 45 insertions(+), 16 deletions(-)
>>
>> diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
>> index 9ce4cb0f8..a8c864ac8 100644
>> --- a/pbs-datastore/src/backup_info.rs
>> +++ b/pbs-datastore/src/backup_info.rs
>> @@ -52,6 +52,12 @@ impl fmt::Debug for BackupGroup {
>> }
>> }
>>
>> +pub enum ListBackupFilter {
>> + Active,
>
> active sounds like there's currently a backup going on..
>
>> + All,
>> + Trashed,
>> +}
True, I might rename the enum and its variants to

    pub enum TrashStateFilter {
        All,
        NotTrashed,
        Trashed,
    }

and use that for both snapshot and namespace filtering.
Although a bit torn, I do dislike the `NotTrashed` variant but fail to
come up with a more fitting name...
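Call sites would then read e.g. (sketch of the renamed API):

    let active = group.list_backups(TrashStateFilter::NotTrashed)?;
    let trashed = group.list_backups(TrashStateFilter::Trashed)?;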
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot
2025-05-12 9:19 ` Christian Ebner
@ 2025-05-12 9:38 ` Fabian Grünbichler
2025-05-12 9:46 ` Christian Ebner
2025-05-12 9:55 ` Christian Ebner
0 siblings, 2 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-12 9:38 UTC (permalink / raw)
To: Christian Ebner, Proxmox Backup Server development discussion
On May 12, 2025 11:19 am, Christian Ebner wrote:
> On 5/9/25 14:27, Fabian Grünbichler wrote:
>> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>>> Since snapshots might be marked as trash, the snapshot directory
>>> can still be present until cleaned up by garbage collection.
>>>
>>> Therefore, check for the presence of the trash marker after acquiring
>>> the locked snapshot reader and skip over marked ones.
>>>
>>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>>> ---
>>> src/api2/tape/backup.rs | 8 +++++++-
>>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
>>> index 923cb7834..17c8bc605 100644
>>> --- a/src/api2/tape/backup.rs
>>> +++ b/src/api2/tape/backup.rs
>>> @@ -574,7 +574,13 @@ fn backup_snapshot(
>>> info!("backup snapshot {snapshot_path:?}");
>>>
>>> let snapshot_reader = match snapshot.locked_reader() {
>>> - Ok(reader) => reader,
>>> + Ok(reader) => {
>>> + if snapshot.is_trashed() {
>>> + info!("snapshot {snapshot_path:?} trashed, skipping");
>>
>> not sure why we log this, but don't log this in other places?
>
> The intention was to keep the pre-existing logging as for vanished
> snapshots (since pruned in the mean time), but make the 2 cases
> distinguishable.
>
> So I think that either the logging should be dropped for both cases, or
> this should be logged as is. Opinions?
iff we make trashing the default at some point, this would become very
noisy.. the vanished logging is different, since it "just" triggers
during the time window between listing snapshots and attempting to read
from them..
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [pbs-devel] [RFC v2 proxmox-backup 10/21] datastore: mark namespace as trash instead of deleting it
2025-05-12 7:47 ` Christian Ebner
@ 2025-05-12 9:46 ` Fabian Grünbichler
2025-05-12 10:35 ` Christian Ebner
0 siblings, 1 reply; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-12 9:46 UTC (permalink / raw)
To: Christian Ebner, Proxmox Backup Server development discussion
On May 12, 2025 9:47 am, Christian Ebner wrote:
> On 5/9/25 14:27, Fabian Grünbichler wrote:
>> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>>> As for backup snapshots and groups, mark the namespace as trash
>>> instead of removing it and the contents right away, if the trash
>>> should not be bypassed.
>>>
>>> Actual removal of the hirarchical folder structure has to be taken
>>> care of by the garbage collection.
>>>
>>> In order to avoid races during removal, first mark the namespaces as
>>> trash pending, mark the snapshots and groups as trash and only after
>>> rename the pending marker file to the trash marker file. By this,
>>> concurrent backups can remove the trash pending marker to avoid the
>>> namespace being trashed.
>>>
>>> On re-creation of a trashed namespace remove the marker file on itself
>>> and any parent component from deepest to shallowest. As trashing a full
>>> namespace can also set the trash pending state for recursive namespace
>>> cleanup, remove encounters of that marker file as well to avoid the
>>> namespace or its parent being trashed.
>>
>> this is fairly involved since we don't have locks on namespaces..
>>
>> should we have them for creation/removal/trashing/untrashing?
>
> That is what I did try to avoid at all cost with the two-marker
> approach, as locking the namespace might be rather invasive. But if that
> does not work out as intended, I see no other way as to add exclusive
> locking for namespaces as well, yes.
>
>>
>> I assume those are fairly rare occurrences, I haven't yet analyzed the
>> interactions here to see whether the two-marker approach is actually
>> race-free..
>>
>> OTOH, do we really need to (be able to) trash namespaces?
>
> Yes, I think we do need that as well since the datastore's hierarchy
> should remain in place, and the namespace iterator requires a way to
> distinguish between a namespace which has been trashed/deleted and a
> namespace which has not, but might contain trashed items. Otherwise a
> user requesting to forget a namespace still sees the (empty, as it only
> contains trashed items) namespace tree after the operation, which would
> be rather unexpected?
we could also require emptying the trash as part of forgetting a
namespace? we already have the `delete_groups` option and only remove a
non-empty namespace if it is set; we could re-use that or add a new
`empty_trash` option next to it, if we want double-opt-in ;)
else, we'd also need to support trashing whole datastores if
we follow this line of thinking..
like I said, I don't think forgetting namespaces is something that is
done very often, as namespaces are normally fairly static.. and if I
want to remove a namespace, I probably also want to remove all its
contents (trash or regular ;)).
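
a rough sketch of that double-opt-in (`delete_groups` exists in the
series, `empty_trash` and the helper shape are hypothetical):

    use anyhow::{bail, Error};

    /// Forgetting a namespace only proceeds when the caller explicitly
    /// opted in to removing both regular and trashed contents.
    fn check_namespace_removal(
        has_groups: bool,
        has_trash: bool,
        delete_groups: bool,
        empty_trash: bool, // hypothetical flag next to `delete_groups`
    ) -> Result<(), Error> {
        if has_groups && !delete_groups {
            bail!("namespace contains backup groups - not removing");
        }
        if has_trash && !empty_trash {
            bail!("namespace contains trashed contents - not removing");
        }
        Ok(())
    }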
* Re: [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot
2025-05-12 9:38 ` Fabian Grünbichler
@ 2025-05-12 9:46 ` Christian Ebner
2025-05-12 9:55 ` Christian Ebner
1 sibling, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-12 9:46 UTC (permalink / raw)
To: Fabian Grünbichler, Proxmox Backup Server development discussion
On 5/12/25 11:38, Fabian Grünbichler wrote:
> On May 12, 2025 11:19 am, Christian Ebner wrote:
>> On 5/9/25 14:27, Fabian Grünbichler wrote:
>>> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>>>> Since snapshots might be marked as trash, the snapshot directory
>>>> can still be present until cleaned up by garbage collection.
>>>>
>>>> Therefore, check for the presence of the trash marker after acquiring
>>>> the locked snapshot reader and skip over marked ones.
>>>>
>>>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>>>> ---
>>>> src/api2/tape/backup.rs | 8 +++++++-
>>>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
>>>> index 923cb7834..17c8bc605 100644
>>>> --- a/src/api2/tape/backup.rs
>>>> +++ b/src/api2/tape/backup.rs
>>>> @@ -574,7 +574,13 @@ fn backup_snapshot(
>>>> info!("backup snapshot {snapshot_path:?}");
>>>>
>>>> let snapshot_reader = match snapshot.locked_reader() {
>>>> - Ok(reader) => reader,
>>>> + Ok(reader) => {
>>>> + if snapshot.is_trashed() {
>>>> + info!("snapshot {snapshot_path:?} trashed, skipping");
>>>
>>> not sure why we log this, but don't log this in other places?
>>
>> The intention was to keep the pre-existing logging as for vanished
>> snapshots (since pruned in the meantime), but make the 2 cases
>> distinguishable.
>>
>> So I think that either the logging should be dropped for both cases, or
>> this should be logged as is. Opinions?
>
> if we make trashing the default at some point, this would become very
> noisy.. the vanished logging is different, since it "just" triggers on
> the time window between listing snapshots and attempting to read from
> them..
Good point, so will drop it then!
Although this opens up another thing to consider a bit more closely: the
current patches already favor the use of the trash over immediately
deleting contents. I feel it might be best to only use it by default for
manual operations via cli or ui, but not for actions taken by jobs...
and for the former provide the opt-out flags.
* Re: [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot
2025-05-12 9:38 ` Fabian Grünbichler
2025-05-12 9:46 ` Christian Ebner
@ 2025-05-12 9:55 ` Christian Ebner
2025-05-12 10:09 ` Fabian Grünbichler
1 sibling, 1 reply; 51+ messages in thread
From: Christian Ebner @ 2025-05-12 9:55 UTC (permalink / raw)
To: Fabian Grünbichler, Proxmox Backup Server development discussion
On 5/12/25 11:38, Fabian Grünbichler wrote:
> On May 12, 2025 11:19 am, Christian Ebner wrote:
>> On 5/9/25 14:27, Fabian Grünbichler wrote:
>>> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>>>> Since snapshots might be marked as trash, the snapshot directory
>>>> can still be present until cleaned up by garbage collection.
>>>>
>>>> Therefore, check for the presence of the trash marker after acquiring
>>>> the locked snapshot reader and skip over marked ones.
>>>>
>>>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>>>> ---
>>>> src/api2/tape/backup.rs | 8 +++++++-
>>>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
>>>> index 923cb7834..17c8bc605 100644
>>>> --- a/src/api2/tape/backup.rs
>>>> +++ b/src/api2/tape/backup.rs
>>>> @@ -574,7 +574,13 @@ fn backup_snapshot(
>>>> info!("backup snapshot {snapshot_path:?}");
>>>>
>>>> let snapshot_reader = match snapshot.locked_reader() {
>>>> - Ok(reader) => reader,
>>>> + Ok(reader) => {
>>>> + if snapshot.is_trashed() {
>>>> + info!("snapshot {snapshot_path:?} trashed, skipping");
>>>
>>> not sure why we log this, but don't log this in other places?
>>
>> The intention was to keep the pre-existing logging as for vanished
>> snapshots (since pruned in the meantime), but make the 2 cases
>> distinguishable.
>>
>> So I think that either the logging should be dropped for both cases, or
>> this should be logged as is. Opinions?
>
> if we make trashing the default at some point, this would become very
> noisy.. the vanished logging is different, since it "just" triggers on
> the time window between listing snapshots and attempting to read from
> them..
On second thought, the logging here would act just the same as for the
> vanished case, as the snapshot list generated at the start of the job is
> pre-filtered already, only considering non-trashed snapshots...
* Re: [pbs-devel] [RFC v2 proxmox-backup 16/21] api: datastore: add flag to list trashed snapshots only
2025-05-12 7:57 ` Christian Ebner
@ 2025-05-12 10:01 ` Fabian Grünbichler
0 siblings, 0 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-12 10:01 UTC (permalink / raw)
To: Christian Ebner, Proxmox Backup Server development discussion
On May 12, 2025 9:57 am, Christian Ebner wrote:
> On 5/9/25 14:27, Fabian Grünbichler wrote:
>> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>>> Allows to conditionally show either active or trashed backup
>>> snapshots, the latter being used when displaying the contents of the
>>> trash for a given datastore.
>>
>> and what if I want to list both/all?
>
> Did not include that option/variant, as it is not required for the WebUI
> and can be achieved by 2 API calls with the respective flags instead of
> a single one.
well, doing two API calls means I then have to check for changed state
and merge the results as a client..
> Further, listing both of them with the same call would require adding a
> trash flag to the response item type, which I tried to avoid.
we could only include the trash marker in the serialized result if it is
set?
> Do we need to be able to list both with the same api call?
I think it makes for a better interface..
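
for instance via serde's skip_serializing_if (a sketch, field names
illustrative rather than the actual response type):

    use serde::Serialize;

    #[derive(Serialize)]
    struct SnapshotListItem {
        // ... existing fields of the response item elided ...
        comment: Option<String>,

        /// only serialized when set, so clients that never request
        /// trashed snapshots observe no schema change
        #[serde(skip_serializing_if = "Option::is_none")]
        trash: Option<bool>,
    }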
* Re: [pbs-devel] [RFC v2 proxmox-backup 13/21] datastore: recreate trashed backup groups if requested
2025-05-12 8:05 ` Christian Ebner
@ 2025-05-12 10:02 ` Fabian Grünbichler
0 siblings, 0 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-12 10:02 UTC (permalink / raw)
To: Christian Ebner, Proxmox Backup Server development discussion
On May 12, 2025 10:05 am, Christian Ebner wrote:
> On 5/9/25 14:27, Fabian Grünbichler wrote:
>> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>>> A whole backup group might have been marked as trashed, including all
>>> of the contained snapshots.
>>>
>>> Since a new backup to that group (even as different user/owner)
>>> should still work, permanently clear the whole trashed group before
>>> recreation. This will limit the trash lifetime as now the group is
>>> not recoverable until next garbage collection.
>>
>> IMHO this is phrased in a way that makes it hard to parse, and in any
>> case, such things should go into the docs..
>
> Acked, will add a section to the docs for handling and implications of
> trashed items.
>
>>
>>>
>>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>>> ---
>>> pbs-datastore/src/datastore.rs | 26 ++++++++++++++++++++++++++
>>> 1 file changed, 26 insertions(+)
>>>
>>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>>> index 4f7766c36..ca05e1bea 100644
>>> --- a/pbs-datastore/src/datastore.rs
>>> +++ b/pbs-datastore/src/datastore.rs
>>> @@ -934,6 +934,32 @@ impl DataStore {
>>> let guard = backup_group.lock().with_context(|| {
>>> format!("while creating locked backup group '{backup_group:?}'")
>>> })?;
>>> + if backup_group.is_trashed() {
>>> + info!("clear trashed backup group {full_path:?}");
>>
>> I think we should only do this if the new and old owner are not
>> identical..
>
> Hmm, not sure whether that might introduce other issues/confusion?
> E.g. a PVE host creates snapshots for a VM/CT with a given ID in the
> corresponding backup group. The group gets pruned as no longer
> required, the VM/CT destroyed. A new VM/CT is created on the PVE host
> and backups created to the (trashed) group...
I think the downside of too aggressively clearing trashed snapshots
(which might still be valuable) is far bigger than the downside of this
potential footgun. especially if the gist of how trashing works is
"trash will be cleared on next GC run", then "trash group; scheduled
backup runs" clearing all trashed snapshots would be potentially
disastrous - e.g., if I don't have GC configured at the moment and use
"trash group" as a sort of bulk action before recovering a few
individual snapshots..
if I give a user access to a VMID on the PVE side, then I need to ensure
any traces of old usage of that VMID are gone if I don't want that user
to see those traces. this doesn't change with the trash feature at all
(there's also things like old task logs, RRD data etc that are hard to
clear, so you *should never reuse a VMID between users* anyway).
as long as the owner stays the same, a user with access that allows
creating a new snapshot could have already recovered and read all those
trashed snapshots before creating a new snapshot, so nothing is leaked
that is not already accessible..
I am not even 100% sure if clearing the trash on owner change is
sensible/expected behaviour (the alternative being to block new
snapshots until the trash is cleared).
>>
>>> + let dir_entries = std::fs::read_dir(&full_path).context(
>>> + "failed to read directory contents during cleanup of trashed group",
>>> + )?;
>>> + for entry in dir_entries {
>>> + let entry = entry.context(
>>> + "failed to read directory entry during clenup of trashed group",
>>> + )?;
>>> + let file_type = entry.file_type().context(
>>> + "failed to get entry file type during clenup of trashed group",
>>> + )?;
>>> + if file_type.is_dir() {
>>> + std::fs::remove_dir_all(entry.path())
>>> + .context("failed to remove directory entry during clenup of trashed snapshot")?;
>>> + } else {
>>> + std::fs::remove_file(entry.path())
>>> + .context("failed to remove directory entry during clenup of trashed snapshot")?;
>>> + }
>>> + }
>>> + self.set_owner(ns, backup_group.group(), auth_id, false)?;
>>> + let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
>>
>> sure about that? we are holding a lock here, nobody is allowed to change
>> the owner but us..
>
> Not really, opted for staying on the safe side here, because the
> pre-existing code does it as well, but without mentioning why exactly.
AFAICT that pre-dates the locking of groups and snapshots; since the
introduction of those locks I think it doesn't make sense there either -
all calls to set_owner are guarded by the group lock now..
>>> + self.untrash_namespace(ns)?;
>>> + return Ok((owner, guard));
>>> + }
>>> +
>>> let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
>>> Ok((owner, guard))
>>> }
>>> --
>>> 2.39.5
* Re: [pbs-devel] [RFC v2 proxmox-backup 18/21] api: admin: implement endpoints to restore trashed contents
2025-05-09 12:59 ` Christian Ebner
@ 2025-05-12 10:03 ` Fabian Grünbichler
0 siblings, 0 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-12 10:03 UTC (permalink / raw)
To: Christian Ebner, Proxmox Backup Server development discussion
On May 9, 2025 2:59 pm, Christian Ebner wrote:
> Thanks for the feedback, will have a closer look next week.
>
> Allow me two quick questions inline though...
>
> On 5/9/25 14:27, Fabian Grünbichler wrote:
>> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>>> Implements the api endpoints to restore trashed contents contained
>>> within namespaces, backup groups or individual snapshots.
>>>
>>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>>> ---
>>> src/api2/admin/datastore.rs | 173 +++++++++++++++++++++++++++++++++++-
>>> 1 file changed, 172 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
>>> index cbd24c729..eb033c3fc 100644
>>> --- a/src/api2/admin/datastore.rs
>>> +++ b/src/api2/admin/datastore.rs
>>> @@ -51,7 +51,7 @@ use pbs_api_types::{
>>> };
>>> use pbs_client::pxar::{create_tar, create_zip};
>>> use pbs_config::CachedUserInfo;
>>> -use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter};
>>> +use pbs_datastore::backup_info::{BackupInfo, ListBackupFilter, TRASH_MARKER_FILENAME};
>>> use pbs_datastore::cached_chunk_reader::CachedChunkReader;
>>> use pbs_datastore::catalog::{ArchiveEntry, CatalogReader};
>>> use pbs_datastore::data_blob::DataBlob;
>>> @@ -2727,6 +2727,165 @@ pub async fn unmount(store: String, rpcenv: &mut dyn RpcEnvironment) -> Result<V
>>> Ok(json!(upid))
>>> }
>>>
>>> +#[api(
>>> + input: {
>>> + properties: {
>>> + store: { schema: DATASTORE_SCHEMA },
>>> + ns: { type: BackupNamespace, },
>>> + },
>>> + },
>>> + access: {
>>> + permission: &Permission::Anybody,
>>> + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
>>> + or DATASTORE_BACKUP and being the owner of the group",
>>> + },
>>> +)]
>>> +/// Recover trashed contents of a namespace.
>>> +pub fn recover_namespace(
>>> + store: String,
>>> + ns: BackupNamespace,
>>> + rpcenv: &mut dyn RpcEnvironment,
>>> +) -> Result<(), Error> {
>>> + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
>>> + let limited = check_ns_privs_full(
>>> + &store,
>>> + &ns,
>>> + &auth_id,
>>> + PRIV_DATASTORE_MODIFY,
>>> + PRIV_DATASTORE_BACKUP,
>>> + )?;
>>> +
>>> + let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
>>> +
>>> + for backup_group in datastore.iter_backup_groups(ns.clone())? {
>>> + let backup_group = backup_group?;
>>> + if limited {
>>> + let owner = datastore.get_owner(&ns, backup_group.group())?;
>>> + if check_backup_owner(&owner, &auth_id).is_err() {
>>> + continue;
>>> + }
>>> + }
>>> + do_recover_group(&backup_group)?;
>>> + }
>>> +
>>> + Ok(())
>>> +}
>>> +
>>> +#[api(
>>> + input: {
>>> + properties: {
>>> + store: { schema: DATASTORE_SCHEMA },
>>> + group: {
>>> + type: pbs_api_types::BackupGroup,
>>> + flatten: true,
>>> + },
>>> + ns: {
>>> + type: BackupNamespace,
>>> + optional: true,
>>> + },
>>> + },
>>> + },
>>> + access: {
>>> + permission: &Permission::Anybody,
>>> + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
>>> + or DATASTORE_BACKUP and being the owner of the group",
>>> + },
>>> +)]
>>> +/// Recover trashed contents of a backup group.
>>> +pub fn recover_group(
>>> + store: String,
>>> + group: pbs_api_types::BackupGroup,
>>> + ns: Option<BackupNamespace>,
>>> + rpcenv: &mut dyn RpcEnvironment,
>>> +) -> Result<(), Error> {
>>> + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
>>> + let ns = ns.unwrap_or_default();
>>> + let datastore = check_privs_and_load_store(
>>> + &store,
>>> + &ns,
>>> + &auth_id,
>>> + PRIV_DATASTORE_MODIFY,
>>> + PRIV_DATASTORE_BACKUP,
>>> + Some(Operation::Write),
>>> + &group,
>>> + )?;
>>> +
>>> + let backup_group = datastore.backup_group(ns, group);
>>> + do_recover_group(&backup_group)?;
>>> +
>>> + Ok(())
>>> +}
>>> +
>>> +fn do_recover_group(backup_group: &BackupGroup) -> Result<(), Error> {
>>
>> missing locking for the group?
>
> Not sure about that one. After all the group is trashed at least as long
> as all the snapshots are trashed. And GC will only ever clean up the
> group folder if the trash marker is not set. So I do not see a reason
> why this should be locked.
because logically, this is the inverse of BackupGroup::destroy with
skip_trash=false, and that locks the group..
else you could have a recovery and a full deletion running concurrently
for the same group. also, while you are recovering the group you
probably don't want to start a new backup snapshot, which would also be
possible with the missing lock.
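
i.e. something along these lines at the top of the helper (a sketch
reusing the group locking as elsewhere in this series, not the actual
patch code):

    use anyhow::{Context, Error};

    fn do_recover_group(backup_group: &BackupGroup) -> Result<(), Error> {
        // hold the exclusive group lock, mirroring BackupGroup::destroy,
        // so recovery cannot race a skip_trash deletion or a new backup
        // into the same group
        let _guard = backup_group
            .lock()
            .with_context(|| format!("while recovering backup group '{backup_group:?}'"))?;

        // ... trash marker removal as in the patch ...
        Ok(())
    }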
>>> + let trashed_snapshots = backup_group.list_backups(ListBackupFilter::Trashed)?;
>>> + for snapshot in trashed_snapshots {
>>> + do_recover_snapshot(&snapshot.backup_dir)?;
>>> + }
>>> +
>>> + let group_trash_path = backup_group.full_group_path().join(TRASH_MARKER_FILENAME);
>>> + if let Err(err) = std::fs::remove_file(&group_trash_path) {
>>> + if err.kind() != std::io::ErrorKind::NotFound {
>>> + bail!("failed to remove group trash file {group_trash_path:?} - {err}");
>>> + }
>>> + }
>>> + Ok(())
>>> +}
>>> +
>>> +#[api(
>>> + input: {
>>> + properties: {
>>> + store: { schema: DATASTORE_SCHEMA },
>>> + backup_dir: {
>>> + type: pbs_api_types::BackupDir,
>>> + flatten: true,
>>> + },
>>> + ns: {
>>> + type: BackupNamespace,
>>> + optional: true,
>>> + },
>>> + },
>>> + },
>>> + access: {
>>> + permission: &Permission::Anybody,
>>> + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_MODIFY for any \
>>> + or DATASTORE_BACKUP and being the owner of the group",
>>> + },
>>> +)]
>>> +/// Recover trashed contents of a backup snapshot.
>>> +pub fn recover_snapshot(
>>> + store: String,
>>> + backup_dir: pbs_api_types::BackupDir,
>>> + ns: Option<BackupNamespace>,
>>> + rpcenv: &mut dyn RpcEnvironment,
>>> +) -> Result<(), Error> {
>>> + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
>>> + let ns = ns.unwrap_or_default();
>>> + let datastore = check_privs_and_load_store(
>>> + &store,
>>> + &ns,
>>> + &auth_id,
>>> + PRIV_DATASTORE_MODIFY,
>>> + PRIV_DATASTORE_BACKUP,
>>> + Some(Operation::Write),
>>> + &backup_dir.group,
>>> + )?;
>>> +
>>> + let snapshot = datastore.backup_dir(ns, backup_dir)?;
>>> + do_recover_snapshot(&snapshot)?;
>>> +
>>> + Ok(())
>>> +}
>>> +
>>> +fn do_recover_snapshot(snapshot_dir: &BackupDir) -> Result<(), Error> {
>>
>> missing locking for the snapshot?
>
> Why? remove_file() should be atomic?
but a skip_trash=true deletion might be going on already or some other
operation holding a lock on the snapshot that doesn't want the 'trash'
status being changed underneath it?
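
i.e. roughly (a sketch; an exclusive lock() on the snapshot is assumed
to be available as it is for groups elsewhere in the series):

    use anyhow::Error;

    fn do_recover_snapshot(snapshot_dir: &BackupDir) -> Result<(), Error> {
        // take the snapshot lock first, so a concurrent skip_trash
        // destroy (or any other lock holder) cannot see the trash
        // status flip underneath it
        let _guard = snapshot_dir.lock()?;

        // ... trash marker removal as in the patch ...
        Ok(())
    }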
>>
>>> + let trash_path = snapshot_dir.full_path().join(TRASH_MARKER_FILENAME);
>>> + if let Err(err) = std::fs::remove_file(&trash_path) {
>>> + if err.kind() != std::io::ErrorKind::NotFound {
>>> + bail!("failed to remove trash file {trash_path:?} - {err}");
>>> + }
>>> + }
>>> + Ok(())
>>> +}
>>> +
>>> #[sortable]
>>> const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
>>> (
>>> @@ -2792,6 +2951,18 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
>>> "pxar-file-download",
>>> &Router::new().download(&API_METHOD_PXAR_FILE_DOWNLOAD),
>>> ),
>>> + (
>>> + "recover-group",
>>> + &Router::new().post(&API_METHOD_RECOVER_GROUP),
>>
>> I am not sure whether those should be POST or PUT, they are modifying an
>> existing (trashed) group/snapshot/.. after all?
>>
>>> + ),
>>> + (
>>> + "recover-namespace",
>>> + &Router::new().post(&API_METHOD_RECOVER_NAMESPACE),
>>> + ),
>>> + (
>>> + "recover-snapshot",
>>> + &Router::new().post(&API_METHOD_RECOVER_SNAPSHOT),
>>> + ),
>>> ("rrd", &Router::new().get(&API_METHOD_GET_RRD_STATS)),
>>> (
>>> "snapshots",
>>> --
>>> 2.39.5
* Re: [pbs-devel] [RFC v2 proxmox-backup 03/21] datastore: allow filtering of backups by their trash status
2025-05-12 9:32 ` Christian Ebner
@ 2025-05-12 10:08 ` Fabian Grünbichler
0 siblings, 0 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-12 10:08 UTC (permalink / raw)
To: Christian Ebner, Proxmox Backup Server development discussion
> Christian Ebner <c.ebner@proxmox.com> wrote on 12.05.2025 11:32 CEST:
>
>
> On 5/9/25 14:27, Fabian Grünbichler wrote:
> > On May 8, 2025 3:05 pm, Christian Ebner wrote:
> >> Extends the BackupGroup::list_backups method by an enum parameter to
> >> filter backup snapshots based on their trash status.
> >>
> >> This allows to reuse the same logic for listing active, trashed or
> >> all snapshots.
> >>
> >> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> >> ---
> >> pbs-datastore/src/backup_info.rs | 33 +++++++++++++++++++++++++++++---
> >> pbs-datastore/src/datastore.rs | 4 ++--
> >> src/api2/admin/datastore.rs | 10 +++++-----
> >> src/api2/tape/backup.rs | 4 ++--
> >> src/backup/verify.rs | 4 ++--
> >> src/server/prune_job.rs | 3 ++-
> >> src/server/pull.rs | 3 ++-
> >> 7 files changed, 45 insertions(+), 16 deletions(-)
> >>
> >> diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
> >> index 9ce4cb0f8..a8c864ac8 100644
> >> --- a/pbs-datastore/src/backup_info.rs
> >> +++ b/pbs-datastore/src/backup_info.rs
> >> @@ -52,6 +52,12 @@ impl fmt::Debug for BackupGroup {
> >> }
> >> }
> >>
> >> +pub enum ListBackupFilter {
> >> + Active,
> >
> > active sounds like there's currently a backup going on..
> >
> >> + All,
> >> + Trashed,
> >> +}
>
> True, I might rename the enum and its variants to
>
> pub enum TrashStateFilter {
> All,
> NotTrashed,
> Trashed,
> }
>
> and use that for both, snapshot and namespace filtering.
>
> Although a bit torn, I do dislike the `NotTrashed` but fail to come up
> with a more fitting name...
IncludeTrash (list all snapshots, including trash)
ExcludeTrash (the backwards compat default?)
OnlyTrash (list only the trashed snapshots, for a 'trash can' view)
might work?
or else, `Regular` for non-trash snapshots? `Trash` might also be nicer
than `Trashed`.
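
spelled out, that suggestion could look like this sketch (naming not
final):

    /// filter variants as suggested above
    #[derive(Clone, Copy, Debug)]
    pub enum TrashStateFilter {
        /// list everything, trashed or not
        IncludeTrash,
        /// the backwards-compatible default: hide trashed items
        ExcludeTrash,
        /// only trashed items, e.g. for a "trash can" view
        OnlyTrash,
    }

    impl TrashStateFilter {
        pub fn matches(self, is_trashed: bool) -> bool {
            match self {
                TrashStateFilter::IncludeTrash => true,
                TrashStateFilter::ExcludeTrash => !is_trashed,
                TrashStateFilter::OnlyTrash => is_trashed,
            }
        }
    }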
* Re: [pbs-devel] [RFC v2 proxmox-backup 06/21] api: tape: check trash marker when trying to write snapshot
2025-05-12 9:55 ` Christian Ebner
@ 2025-05-12 10:09 ` Fabian Grünbichler
0 siblings, 0 replies; 51+ messages in thread
From: Fabian Grünbichler @ 2025-05-12 10:09 UTC (permalink / raw)
To: Christian Ebner, Proxmox Backup Server development discussion
> Christian Ebner <c.ebner@proxmox.com> wrote on 12.05.2025 11:55 CEST:
>
>
> On 5/12/25 11:38, Fabian Grünbichler wrote:
> > On May 12, 2025 11:19 am, Christian Ebner wrote:
> >> On 5/9/25 14:27, Fabian Grünbichler wrote:
> >>> On May 8, 2025 3:05 pm, Christian Ebner wrote:
> >>>> Since snapshots might be marked as trash, the snapshot directory
> >>>> can still be present until cleaned up by garbage collection.
> >>>>
> >>>> Therefore, check for the presence of the trash marker after acquiring
> >>>> the locked snapshot reader and skip over marked ones.
> >>>>
> >>>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> >>>> ---
> >>>> src/api2/tape/backup.rs | 8 +++++++-
> >>>> 1 file changed, 7 insertions(+), 1 deletion(-)
> >>>>
> >>>> diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
> >>>> index 923cb7834..17c8bc605 100644
> >>>> --- a/src/api2/tape/backup.rs
> >>>> +++ b/src/api2/tape/backup.rs
> >>>> @@ -574,7 +574,13 @@ fn backup_snapshot(
> >>>> info!("backup snapshot {snapshot_path:?}");
> >>>>
> >>>> let snapshot_reader = match snapshot.locked_reader() {
> >>>> - Ok(reader) => reader,
> >>>> + Ok(reader) => {
> >>>> + if snapshot.is_trashed() {
> >>>> + info!("snapshot {snapshot_path:?} trashed, skipping");
> >>>
> >>> not sure why we log this, but don't log this in other places?
> >>
> >> The intention was to keep the pre-existing logging as for vanished
> >> snapshots (since pruned in the meantime), but make the 2 cases
> >> distinguishable.
> >>
> >> So I think that either the logging should be dropped for both cases, or
> >> this should be logged as is. Opinions?
> >
> > if we make trashing the default at some point, this would become very
> > noisy.. the vanished logging is different, since it "just" triggers on
> > the time window between listing snapshots and attempting to read from
> > them..
>
> On second thought, the logging here would act just the same as for the
> vanished case, as the snapshot list generated at the start of the job is
> pre-filtered already, only considering non-trashed snapshots...
I guess then we can keep it..
* Re: [pbs-devel] [RFC v2 proxmox-backup 10/21] datastore: mark namespace as trash instead of deleting it
2025-05-12 9:46 ` Fabian Grünbichler
@ 2025-05-12 10:35 ` Christian Ebner
0 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-12 10:35 UTC (permalink / raw)
To: Fabian Grünbichler, Proxmox Backup Server development discussion
On 5/12/25 11:46, Fabian Grünbichler wrote:
> On May 12, 2025 9:47 am, Christian Ebner wrote:
>> On 5/9/25 14:27, Fabian Grünbichler wrote:
>>> On May 8, 2025 3:05 pm, Christian Ebner wrote:
>>>> As for backup snapshots and groups, mark the namespace as trash
>>>> instead of removing it and the contents right away, if the trash
>>>> should not be bypassed.
>>>>
>>>> Actual removal of the hierarchical folder structure has to be taken
>>>> care of by the garbage collection.
>>>>
>>>> In order to avoid races during removal, first mark the namespaces as
>>>> trash pending, mark the snapshots and groups as trash, and only
>>>> afterwards rename the pending marker file to the trash marker file.
>>>> This way, concurrent backups can remove the trash pending marker to
>>>> avoid the namespace being trashed.
>>>>
>>>> On re-creation of a trashed namespace, remove the marker file on the
>>>> namespace itself and on any parent component, from deepest to shallowest.
>>>> As trashing a full namespace can also set the trash pending state for
>>>> recursive namespace cleanup, remove occurrences of that marker file as
>>>> well to avoid the namespace or its parent being trashed.
>>>
>>> this is fairly involved since we don't have locks on namespaces..
>>>
>>> should we have them for creation/removal/trashing/untrashing?
>>
>> That is what I did try to avoid at all cost with the two-marker
>> approach, as locking the namespace might be rather invasive. But if that
>> does not work out as intended, I see no other way as to add exclusive
>> locking for namespaces as well, yes.
>>
>>>
>>> I assume those are fairly rare occurrences, I haven't yet analyzed the
>>> interactions here to see whether the two-marker approach is actually
>>> race-free..
>>>
>>> OTOH, do we really need to (be able to) trash namespaces?
>>
>> Yes, I think we do need that as well since the datastore's hierarchy
>> should remain in place, and the namespace iterator requires a way to
>> distinguish between a namespace which has been trashed/deleted and a
>> namespace which has not, but might contain trashed items. Otherwise a
>> user requesting to forget a namespace still sees the (empty, as it only
>> contains trashed items) namespace tree after the operation, which would
>> be rather unexpected?
>
> we could also require emptying the trash as part of forgetting a
> namespace? we already have the `delete_groups` option and only remove a
> non-empty namespace if it is set; we could re-use that or add a new
> `empty_trash` option next to it, if we want double-opt-in ;)
>
> else, we'd also need to support trashing whole datastores if
> we follow this line of thinking..
>
> like I said, I don't think forgetting namespaces is something that is
> done very often, as namespaces are normally fairly static.. and if I
> want to remove a namespace, I probably also want to remove all its
> contents (trash or regular ;)).
Ah okay, requiring trashed contents to be cleared as well when deleting a
namespace seems the better approach indeed. Will adapt the code
accordingly.
* [pbs-devel] superseded: [RFC v2 proxmox-backup 00/21] implement trash bin functionality
2025-05-08 13:05 [pbs-devel] [RFC v2 proxmox-backup 00/21] implement trash bin functionality Christian Ebner
` (20 preceding siblings ...)
2025-05-08 13:05 ` [pbs-devel] [RFC v2 proxmox-backup 21/21] ui: allow to skip trash on namespace deletion Christian Ebner
@ 2025-05-13 13:54 ` Christian Ebner
21 siblings, 0 replies; 51+ messages in thread
From: Christian Ebner @ 2025-05-13 13:54 UTC (permalink / raw)
To: pbs-devel
superseded-by version 3:
https://lore.proxmox.com/pbs-devel/20250513135247.644260-1-c.ebner@proxmox.com/T/