public inbox for pbs-devel@lists.proxmox.com
From: Hannes Laimer <h.laimer@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [PATCH proxmox-backup v8 04/13] datastore: split remove_namespace into flat and recursive variants
Date: Wed, 22 Apr 2026 15:39:42 +0200	[thread overview]
Message-ID: <20260422133951.192862-5-h.laimer@proxmox.com> (raw)
In-Reply-To: <20260422133951.192862-1-h.laimer@proxmox.com>

Removal previously always descended into child namespaces. Callers
that drive traversal themselves (pull's vanished-namespace loop, and
the upcoming move-namespace cleanup) only need the per-level work.

Factor the single-level work into its own public entry point and
keep the recursive variant as a bottom-up wrapper around it. Also
lift the empty-type-dir cleanup so it runs in both modes; without
it, the flat path could not prune an already-empty namespace whose
type dirs were left behind by earlier group moves or deletes.
Switch pull's vanished-namespace cleanup to the flat variant.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
 pbs-datastore/src/datastore.rs | 151 ++++++++++++++++++++-------------
 src/server/pull.rs             |   7 +-
 2 files changed, 93 insertions(+), 65 deletions(-)
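
Illustration only, not part of the patch: the deepest-first ordering the
recursive wrapper relies on can be sketched standalone. The `bottom_up`
helper and the string-based namespaces below are hypothetical stand-ins
for `BackupNamespace` and its `depth()`:

```rust
use std::cmp::Reverse;

// Hypothetical sketch: sorting children by Reverse(depth) guarantees
// every child namespace is removed before its parent's rmdir runs, so
// rmdir only ever sees an already-emptied directory.
fn bottom_up(mut namespaces: Vec<&'static str>) -> Vec<&'static str> {
    // depth() stand-in: number of non-empty path components; "" is the root
    let depth = |ns: &str| ns.split('/').filter(|c| !c.is_empty()).count();
    namespaces.sort_by_key(|ns| Reverse(depth(ns)));
    namespaces
}

fn main() {
    let order = bottom_up(vec!["a", "a/b/c", "", "a/b"]);
    assert_eq!(order, vec!["a/b/c", "a/b", "a", ""]);
}
```

Since `sort_by_key` is stable, siblings at the same depth keep their
iteration order; only the parent/child ordering matters here.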

diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index c1f2f8eb..bce0d846 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -1035,46 +1035,111 @@ impl DataStore {
             removed_all_groups = removed_all_groups && delete_stats.all_removed();
         }
 
+        Ok((removed_all_groups, stats))
+    }
+
+    /// Remove a single namespace level (no recursion into children).
+    ///
+    /// If `delete_groups` is true, destroys all groups in this namespace. Otherwise only
+    /// an empty namespace directory is pruned. Returns `(removed_all_requested, stats)`
+    /// where `removed_all_requested` is true if nothing was protected and the directory
+    /// was removed.
+    pub fn remove_namespace(
+        self: &Arc<Self>,
+        ns: &BackupNamespace,
+        delete_groups: bool,
+    ) -> Result<(bool, BackupGroupDeleteStats), Error> {
+        let mut removed_all_requested = true;
+        let mut stats = BackupGroupDeleteStats::default();
+        let backend = self.backend()?;
+
+        if delete_groups {
+            log::info!("removing groups in namespace {}:/{ns}", self.name());
+            let (removed_ns_groups, delete_stats) = self.remove_namespace_groups(ns)?;
+            stats.add(&delete_stats);
+            removed_all_requested = removed_all_requested && removed_ns_groups;
+        }
+
         let base_file = std::fs::File::open(self.base_path())?;
         let base_fd = base_file.as_raw_fd();
+
+        // Best-effort cleanup of empty type subdirectories (e.g. `host/`). These can be
+        // left behind after groups are destroyed (aside from protected snapshots) or after
+        // groups have been moved out, and would otherwise prevent the namespace directory
+        // below from being removed. ENOTEMPTY is expected (protected snapshots, or
+        // groups still present when `delete_groups` is false) and silently skipped.
         for ty in BackupType::iter() {
             let mut ty_dir = ns.path();
             ty_dir.push(ty.to_string());
-            // best effort only, but we probably should log the error
             if let Err(err) = unlinkat(Some(base_fd), &ty_dir, UnlinkatFlags::RemoveDir) {
-                if err != nix::errno::Errno::ENOENT {
-                    log::error!("failed to remove backup type {ty} in {ns} - {err}");
+                if err != nix::errno::Errno::ENOENT && err != nix::errno::Errno::ENOTEMPTY {
+                    log::warn!("failed to remove empty backup type dir {ty} in {ns} - {err}");
                 }
             }
         }
 
-        Ok((removed_all_groups, stats))
+        let mut ns_dir = ns.path();
+        ns_dir.push("ns");
+        let _ = unlinkat(Some(base_fd), &ns_dir, UnlinkatFlags::RemoveDir);
+
+        if !ns.is_root() {
+            match unlinkat(Some(base_fd), &ns.path(), UnlinkatFlags::RemoveDir) {
+                Ok(()) => log::debug!("removed namespace {ns}"),
+                Err(nix::errno::Errno::ENOENT) => {
+                    log::debug!("namespace {ns} already removed")
+                }
+                Err(nix::errno::Errno::ENOTEMPTY) if !delete_groups => {
+                    removed_all_requested = false;
+                    log::debug!("skip removal of non-empty namespace {ns}")
+                }
+                Err(err) => {
+                    removed_all_requested = false;
+                    log::warn!("failed to remove namespace {ns} - {err}")
+                }
+            }
+            if let DatastoreBackend::S3(s3_client) = &backend {
+                // Only remove the namespace marker; if the namespace was
+                // empty, this is equivalent to the namespace being removed.
+                let object_key =
+                    crate::s3::object_key_from_path(&ns.path(), NAMESPACE_MARKER_FILENAME)
+                        .context("invalid namespace marker object key")?;
+                proxmox_async::runtime::block_on(s3_client.delete_object(object_key))?;
+            }
+        }
+
+        Ok((removed_all_requested, stats))
     }
 
-    /// Remove a complete backup namespace optionally including all it's, and child namespaces',
-    /// groups. If  `removed_groups` is false this only prunes empty namespaces.
+    /// Remove a complete backup namespace subtree including all child namespaces.
     ///
-    /// Returns true if everything requested, and false if some groups were protected or if some
-    /// namespaces weren't empty even though all groups were deleted (race with new backup)
+    /// If `delete_groups` is false this only prunes empty namespaces. Returns true if
+    /// everything requested was removed, and false if some groups were protected or some
+    /// namespaces weren't empty even though all groups were deleted (race with new backup).
     pub fn remove_namespace_recursive(
         self: &Arc<Self>,
         ns: &BackupNamespace,
         delete_groups: bool,
     ) -> Result<(bool, BackupGroupDeleteStats), Error> {
         let store = self.name();
-        let mut removed_all_requested = true;
-        let mut stats = BackupGroupDeleteStats::default();
-        let backend = self.backend()?;
+        log::info!(
+            "{} namespace recursively below {store}:/{ns}",
+            if delete_groups {
+                "removing whole"
+            } else {
+                "pruning empty"
+            },
+        );
 
-        if delete_groups {
-            log::info!("removing whole namespace recursively below {store}:/{ns}",);
-            for ns in self.recursive_iter_backup_ns(ns.to_owned())? {
-                let (removed_ns_groups, delete_stats) = self.remove_namespace_groups(&ns?)?;
-                stats.add(&delete_stats);
-                removed_all_requested = removed_all_requested && removed_ns_groups;
-            }
+        let mut children = self
+            .recursive_iter_backup_ns(ns.to_owned())?
+            .collect::<Result<Vec<BackupNamespace>, Error>>()?;
+        // Process deepest first so a parent's rmdir sees an empty directory.
+        children.sort_by_key(|b| std::cmp::Reverse(b.depth()));
 
-            if let DatastoreBackend::S3(s3_client) = &backend {
+        // When destroying the full subtree, a bulk S3 delete under the root prefix catches
+        // any orphaned group-level metadata (owner/notes) before per-level cleanup iterates.
+        if delete_groups {
+            if let DatastoreBackend::S3(s3_client) = &self.backend()? {
                 let ns_dir = ns.path();
                 let ns_prefix = ns_dir
                     .to_str()
@@ -1092,50 +1157,14 @@ impl DataStore {
                     bail!("deleting objects failed");
                 }
             }
-        } else {
-            log::info!("pruning empty namespace recursively below {store}:/{ns}");
         }
 
-        // now try to delete the actual namespaces, bottom up so that we can use safe rmdir that
-        // will choke if a new backup/group appeared in the meantime (but not on an new empty NS)
-        let mut children = self
-            .recursive_iter_backup_ns(ns.to_owned())?
-            .collect::<Result<Vec<BackupNamespace>, Error>>()?;
-
-        children.sort_by_key(|b| std::cmp::Reverse(b.depth()));
-
-        let base_file = std::fs::File::open(self.base_path())?;
-        let base_fd = base_file.as_raw_fd();
-
-        for ns in children.iter() {
-            let mut ns_dir = ns.path();
-            ns_dir.push("ns");
-            let _ = unlinkat(Some(base_fd), &ns_dir, UnlinkatFlags::RemoveDir);
-
-            if !ns.is_root() {
-                match unlinkat(Some(base_fd), &ns.path(), UnlinkatFlags::RemoveDir) {
-                    Ok(()) => log::debug!("removed namespace {ns}"),
-                    Err(nix::errno::Errno::ENOENT) => {
-                        log::debug!("namespace {ns} already removed")
-                    }
-                    Err(nix::errno::Errno::ENOTEMPTY) if !delete_groups => {
-                        removed_all_requested = false;
-                        log::debug!("skip removal of non-empty namespace {ns}")
-                    }
-                    Err(err) => {
-                        removed_all_requested = false;
-                        log::warn!("failed to remove namespace {ns} - {err}")
-                    }
-                }
-                if let DatastoreBackend::S3(s3_client) = &backend {
-                    // Only remove the namespace marker, if it was empty,
-                    // than this is the same as the namespace being removed.
-                    let object_key =
-                        crate::s3::object_key_from_path(&ns.path(), NAMESPACE_MARKER_FILENAME)
-                            .context("invalid namespace marker object key")?;
-                    proxmox_async::runtime::block_on(s3_client.delete_object(object_key))?;
-                }
-            }
+        let mut removed_all_requested = true;
+        let mut stats = BackupGroupDeleteStats::default();
+        for child in &children {
+            let (removed, delete_stats) = self.remove_namespace(child, delete_groups)?;
+            stats.add(&delete_stats);
+            removed_all_requested = removed_all_requested && removed;
         }
 
         Ok((removed_all_requested, stats))
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 16a59fee..2c9d89a8 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -879,10 +879,9 @@ fn check_and_remove_ns(params: &PullParameters, local_ns: &BackupNamespace) -> R
     check_ns_modification_privs(params.target.store.name(), local_ns, &params.owner)
         .map_err(|err| format_err!("Removing {local_ns} not allowed - {err}"))?;
 
-    let (removed_all, _delete_stats) = params
-        .target
-        .store
-        .remove_namespace_recursive(local_ns, true)?;
+    // The outer loop (check_and_remove_vanished_ns) iterates children first, so we only need
+    // to act on this one level.
+    let (removed_all, _delete_stats) = params.target.store.remove_namespace(local_ns, true)?;
 
     Ok(removed_all)
 }
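
As an aside (illustration only, not part of the patch): the
tolerate-ENOTEMPTY behaviour of the lifted type-dir cleanup can be
reproduced with plain `std::fs` calls; the temp-dir layout below is a
made-up stand-in for a datastore namespace:

```rust
use std::fs;
use std::io::ErrorKind;

fn main() -> std::io::Result<()> {
    // Made-up layout: <tmp>/ns-prune-demo/host/vm-100 mimics a type dir
    // ("host") that still contains a group.
    let base = std::env::temp_dir().join("ns-prune-demo");
    let _ = fs::remove_dir_all(&base);
    fs::create_dir_all(base.join("host").join("vm-100"))?;

    // Non-empty type dir: remove_dir fails; like the patch, treat that
    // as "skip silently" rather than an error (only ENOENT-style
    // absence would be fine to propagate differently).
    match fs::remove_dir(base.join("host")) {
        Ok(()) => {}
        Err(e) if e.kind() != ErrorKind::NotFound => {
            // expected here: the directory still holds the group
        }
        Err(e) => return Err(e),
    }
    assert!(base.join("host").exists());

    // Once the group is gone, the very same call prunes the dir.
    fs::remove_dir(base.join("host").join("vm-100"))?;
    fs::remove_dir(base.join("host"))?;
    assert!(!base.join("host").exists());

    fs::remove_dir_all(&base).ok();
    Ok(())
}
```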
-- 
2.47.3


Thread overview: 21+ messages
2026-04-22 13:39 [PATCH proxmox-backup v8 00/13] fixes #6195: add support for moving groups and namespaces Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 01/13] ui: show empty groups Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 02/13] datastore: lift check_namespace_depth_limit to pbs-datastore Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 03/13] datastore: have BackupGroup::destroy consume the group lock Hannes Laimer
2026-04-22 13:39 ` Hannes Laimer [this message]
2026-04-22 13:39 ` [PATCH proxmox-backup v8 05/13] datastore: add move journal for coordinating with gc phase 1 Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 06/13] datastore: add move-group Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 07/13] datastore: add move-namespace Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 08/13] docs: add section on moving namespaces and groups Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 09/13] api: add POST endpoint for move-group Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 10/13] api: add POST endpoint for move-namespace Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 11/13] ui: add move group action Hannes Laimer
2026-04-23 13:35   ` Michael Köppl
2026-04-23 13:47     ` Hannes Laimer
2026-04-22 13:39 ` [PATCH proxmox-backup v8 12/13] ui: add move namespace action Hannes Laimer
2026-04-23 14:49   ` Michael Köppl
2026-04-22 13:39 ` [PATCH proxmox-backup v8 13/13] cli: add move-namespace and move-group commands Hannes Laimer
2026-04-23 16:29 ` [PATCH proxmox-backup v8 00/13] fixes #6195: add support for moving groups and namespaces Michael Köppl
2026-04-23 22:38 ` applied: " Thomas Lamprecht
2026-04-24  8:31   ` Fabian Grünbichler
2026-04-24  8:43     ` Hannes Laimer
