From: Hannes Laimer <h.laimer@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [PATCH proxmox-backup v4 3/7] datastore: add move_namespace
Date: Wed, 11 Mar 2026 16:13:11 +0100
Message-ID: <20260311151315.133637-4-h.laimer@proxmox.com>
In-Reply-To: <20260311151315.133637-1-h.laimer@proxmox.com>

move_namespace relocates an entire namespace subtree (the given
namespace, all child namespaces, and their groups) to a new location
within the same datastore.
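
Relocating the subtree means every descendant namespace gets the source prefix swapped for the target prefix. As a standalone illustration (not the patch's API; `map_prefix` here is a hypothetical helper mirroring the `BackupNamespace::map_prefix` call used below), the remapping can be sketched as:

```rust
// Hypothetical sketch: remap a namespace path from the source prefix to the
// target prefix, as must happen for every descendant during a namespace move.
fn map_prefix(ns: &[&str], source: &[&str], target: &[&str]) -> Option<Vec<String>> {
    // The namespace must actually lie under the source prefix.
    let rest = ns.strip_prefix(source)?;
    // The target prefix replaces the source prefix; the remainder is unchanged.
    Some(target.iter().chain(rest.iter()).map(|s| s.to_string()).collect())
}

fn main() {
    // Moving "a" -> "x/y": the child namespace "a/b" becomes "x/y/b".
    let mapped = map_prefix(&["a", "b"], &["a"], &["x", "y"]).unwrap();
    assert_eq!(mapped, vec!["x", "y", "b"]);
    println!("{}", mapped.join("/"));
}
```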

For the filesystem backend, the entire subtree is relocated with a single
atomic rename. For the S3 backend, groups are moved one at a time via
BackupGroup::move_to(). Groups that fail are left at the source and
reported as an error in the task log so they can be retried individually
with move_group. Source namespaces where all groups succeeded have their
S3 markers and local cache directories removed, deepest-first.
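
The deepest-first order matters because a directory can only be removed once its children are gone. A standalone sketch of that ordering (hypothetical `cleanup_order` helper, using an explicit depth sort rather than the patch's reliance on the recursive iterator yielding parents before children):

```rust
// Hypothetical sketch: order namespaces deepest-first so every child
// directory is removed before its parent, letting each rmdir see an empty dir.
fn cleanup_order(namespaces: &[&str]) -> Vec<String> {
    let mut order: Vec<String> = namespaces.iter().map(|s| s.to_string()).collect();
    // Sort by path depth descending: "a/b/c" before "a/b" before "a".
    order.sort_by_key(|ns| std::cmp::Reverse(ns.split('/').count()));
    order
}

fn main() {
    let order = cleanup_order(&["a", "a/b", "a/b/c", "a/d"]);
    // The deepest namespace comes first, the moved root last.
    assert_eq!(order[0], "a/b/c");
    assert_eq!(order.last().unwrap().as_str(), "a");
    println!("{:?}", order);
}
```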

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
 pbs-datastore/src/datastore.rs | 216 ++++++++++++++++++++++++++++++++-
 1 file changed, 215 insertions(+), 1 deletion(-)

diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 51813acb..81066faf 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -31,7 +31,7 @@ use pbs_api_types::{
     ArchiveType, Authid, BackupGroupDeleteStats, BackupNamespace, BackupType, ChunkOrder,
     DataStoreConfig, DatastoreBackendConfig, DatastoreBackendType, DatastoreFSyncLevel,
     DatastoreTuning, GarbageCollectionCacheStats, GarbageCollectionStatus, MaintenanceMode,
-    MaintenanceType, Operation, UPID,
+    MaintenanceType, Operation, MAX_NAMESPACE_DEPTH, UPID,
 };
 use pbs_config::s3::S3_CFG_TYPE_ID;
 use pbs_config::{BackupLockGuard, ConfigVersionCache};
@@ -1001,6 +1001,220 @@ impl DataStore {
         Ok((removed_all_requested, stats))
     }
 
+    /// Move a backup namespace (including all child namespaces and groups) to a new location.
+    ///
+    /// The entire subtree rooted at `source_ns` is relocated to `target_ns`. Exclusive namespace
+    /// locks are held on both source and target namespaces for the duration to block concurrent
+    /// readers and writers.
+    ///
+    /// For the filesystem backend the rename is atomic. For the S3 backend groups are moved
+    /// one at a time. A group that fails to copy is left at the source and can be moved
+    /// individually with `move_group`. The operation returns an error listing any such groups.
+    ///
+    /// Fails if:
+    /// - `source_ns` is the root namespace
+    /// - `source_ns` == `target_ns`
+    /// - `source_ns` does not exist
+    /// - `target_ns` already exists (to prevent silent merging)
+    /// - `target_ns`'s parent does not exist
+    /// - `source_ns` is an ancestor of `target_ns`
+    /// - the move would exceed the maximum namespace depth
+    pub fn move_namespace(
+        self: &Arc<Self>,
+        source_ns: &BackupNamespace,
+        target_ns: &BackupNamespace,
+    ) -> Result<(), Error> {
+        if source_ns.is_root() {
+            bail!("cannot move root namespace");
+        }
+        if source_ns == target_ns {
+            bail!("source and target namespace must be different");
+        }
+
+        // lock_namespace also acquires shared locks on all ancestors, so a concurrent
+        // move_namespace on a child namespace (which also walks up to ancestors) will be blocked
+        // by our exclusive lock, and we will be blocked by any in-progress move targeting one of
+        // our ancestors.
+        let _source_ns_guard = lock_namespace(self.name(), source_ns)
+            .with_context(|| format!("failed to lock source namespace '{source_ns}' for move"))?;
+        // Lock target_ns to prevent two concurrent moves from racing to create the same target.
+        let _target_ns_guard = lock_namespace(self.name(), target_ns)
+            .with_context(|| format!("failed to lock target namespace '{target_ns}' for move"))?;
+
+        if !self.namespace_exists(source_ns) {
+            bail!("source namespace '{source_ns}' does not exist");
+        }
+        if self.namespace_exists(target_ns) {
+            bail!("target namespace '{target_ns}' already exists");
+        }
+        let target_parent = target_ns.parent();
+        if !self.namespace_exists(&target_parent) {
+            bail!("target namespace parent '{target_parent}' does not exist");
+        }
+        if source_ns.contains(target_ns).is_some() {
+            bail!(
+                "cannot move namespace '{source_ns}' into its own subtree (target: '{target_ns}')"
+            );
+        }
+
+        let all_source_ns: Vec<BackupNamespace> = self
+            .recursive_iter_backup_ns(source_ns.clone())?
+            .collect::<Result<Vec<_>, Error>>()?;
+
+        let all_source_groups: Vec<BackupGroup> = all_source_ns
+            .iter()
+            .map(|ns| self.iter_backup_groups(ns.clone()))
+            .collect::<Result<Vec<_>, Error>>()?
+            .into_iter()
+            .flatten()
+            .collect::<Result<Vec<_>, Error>>()?;
+
+        let subtree_depth = all_source_ns
+            .iter()
+            .map(BackupNamespace::depth)
+            .max()
+            .map_or(0, |d| d - source_ns.depth());
+        if subtree_depth + target_ns.depth() > MAX_NAMESPACE_DEPTH {
+            bail!(
+                "move would exceed maximum namespace depth \
+                ({subtree_depth}+{} > {MAX_NAMESPACE_DEPTH})",
+                target_ns.depth(),
+            );
+        }
+
+        let backend = self.backend()?;
+
+        log::info!(
+            "moving namespace '{source_ns}' -> '{target_ns}': {} namespaces, {} groups",
+            all_source_ns.len(),
+            all_source_groups.len(),
+        );
+
+        match &backend {
+            DatastoreBackend::Filesystem => {
+                let src_path = self.namespace_path(source_ns);
+                let dst_path = self.namespace_path(target_ns);
+                if let Some(dst_parent) = dst_path.parent() {
+                    std::fs::create_dir_all(dst_parent).with_context(|| {
+                        format!("failed to create parent directory for namespace rename '{source_ns}' -> '{target_ns}'")
+                    })?;
+                }
+                log::debug!("renaming namespace directory '{src_path:?}' -> '{dst_path:?}'");
+                std::fs::rename(&src_path, &dst_path).with_context(|| {
+                    format!("failed to rename namespace directory '{source_ns}' -> '{target_ns}'")
+                })?;
+            }
+            DatastoreBackend::S3(s3_client) => {
+                // Create target local namespace directories upfront (covers empty namespaces).
+                for ns in &all_source_ns {
+                    let target_child = ns.map_prefix(source_ns, target_ns)?;
+                    std::fs::create_dir_all(self.namespace_path(&target_child)).with_context(
+                        || {
+                            format!(
+                                "failed to create local dir for target namespace '{target_child}'"
+                            )
+                        },
+                    )?;
+                }
+
+                // Create S3 namespace markers for all target namespaces.
+                for ns in &all_source_ns {
+                    let target_child = ns.map_prefix(source_ns, target_ns)?;
+                    let object_key = crate::s3::object_key_from_path(
+                        &target_child.path(),
+                        NAMESPACE_MARKER_FILENAME,
+                    )
+                    .context("invalid namespace marker object key")?;
+                    log::debug!(
+                        "creating S3 namespace marker for '{target_child}': {object_key:?}"
+                    );
+                    proxmox_async::runtime::block_on(
+                        s3_client.upload_no_replace_with_retry(object_key, Bytes::from("")),
+                    )
+                    .context("failed to create namespace marker on S3 backend")?;
+                }
+
+                // Move each group. Failed groups are skipped and remain at the source in
+                // both S3 and local cache. Collect the namespaces of any failed groups so we
+                // know which source namespaces still have content after the loop.
+                let mut failed_groups: Vec<String> = Vec::new();
+                let mut failed_ns: HashSet<BackupNamespace> = HashSet::new();
+
+                for group in &all_source_groups {
+                    let target_group_ns = group.backup_ns().map_prefix(source_ns, target_ns)?;
+
+                    // Ensure the target type directory exists before move_to renames into it.
+                    if let Err(err) =
+                        std::fs::create_dir_all(self.type_path(&target_group_ns, group.group().ty))
+                    {
+                        warn!(
+                            "move_namespace: failed to create type dir for '{}' in '{}': {err:#}",
+                            group.group(),
+                            target_group_ns
+                        );
+                        failed_groups.push(group.group().to_string());
+                        failed_ns.insert(group.backup_ns().clone());
+                        continue;
+                    }
+
+                    if let Err(err) = group.move_to(&target_group_ns, &backend) {
+                        warn!(
+                            "move_namespace: failed to move group '{}' from '{}' to '{}': {err:#}",
+                            group.group(),
+                            group.backup_ns(),
+                            target_group_ns
+                        );
+                        failed_groups.push(group.group().to_string());
+                        failed_ns.insert(group.backup_ns().clone());
+                    }
+                }
+
+                // Clean up source namespaces that are now fully empty (all groups moved).
+                // Process deepest-first so parent directories are already empty when reached.
+                for ns in all_source_ns.iter().rev() {
+                    // Skip if this namespace itself or any descendant still has groups.
+                    let has_remaining = failed_ns
+                        .iter()
+                        .any(|fns| fns == ns || ns.contains(fns).is_some());
+                    if has_remaining {
+                        continue;
+                    }
+
+                    // Delete the source S3 namespace marker.
+                    let object_key =
+                        crate::s3::object_key_from_path(&ns.path(), NAMESPACE_MARKER_FILENAME)
+                            .context("invalid namespace marker object key")?;
+                    log::debug!("deleting source S3 namespace marker for '{ns}': {object_key:?}");
+                    proxmox_async::runtime::block_on(s3_client.delete_object(object_key))
+                        .context("failed to delete source namespace marker on S3 backend")?;
+
+                    // Remove the source local cache directory. Try type subdirectories first
+                    // (they should be empty after the per-group renames), then the namespace dir.
+                    let ns_path = self.namespace_path(ns);
+                    if let Ok(entries) = std::fs::read_dir(&ns_path) {
+                        for entry in entries.flatten() {
+                            let _ = std::fs::remove_dir(entry.path());
+                        }
+                    }
+                    let _ = std::fs::remove_dir(&ns_path);
+                }
+
+                if !failed_groups.is_empty() {
+                    bail!(
+                        "namespace move partially completed; {} group(s) could not be moved \
+                        and remain at source '{}': {}. \
+                        Use move group to move them individually.",
+                        failed_groups.len(),
+                        source_ns,
+                        failed_groups.join(", ")
+                    );
+                }
+            }
+        }
+
+        Ok(())
+    }
+
     /// Remove a complete backup group including all snapshots.
     ///
     /// Returns `BackupGroupDeleteStats`, containing the number of deleted snapshots
-- 
2.47.3