From: Hannes Laimer <h.laimer@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [PATCH proxmox-backup v6 2/8] datastore: add move_group
Date: Tue, 31 Mar 2026 14:34:03 +0200
Message-ID: <20260331123409.198353-3-h.laimer@proxmox.com>
In-Reply-To: <20260331123409.198353-1-h.laimer@proxmox.com>

Add support for moving a single backup group to a different namespace
within the same datastore.

For the filesystem backend, the group directory is relocated with a
single rename(2). For S3, all objects under the source prefix are
copied to the target prefix and then deleted.

An exclusive group lock and exclusive locks on every snapshot are
acquired before the move, mirroring the locking strategy used when
destroying a group. This ensures no concurrent readers, writers, or
verify tasks are operating on any snapshot being moved. Existence
checks are performed under those locks to avoid TOCTOU races.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
 pbs-datastore/src/backup_info.rs | 148 ++++++++++++++++++++++++++++++-
 pbs-datastore/src/datastore.rs   |  56 ++++++++++++
 2 files changed, 203 insertions(+), 1 deletion(-)

diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index c33eb307..a8a56198 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -9,7 +9,7 @@ use std::time::Duration;
 use anyhow::{bail, format_err, Context, Error};
 use const_format::concatcp;
 
-use proxmox_s3_client::S3PathPrefix;
+use proxmox_s3_client::{S3ObjectKey, S3PathPrefix};
 use proxmox_sys::fs::{lock_dir_noblock, lock_dir_noblock_shared, replace_file, CreateOptions};
 use proxmox_systemd::escape_unit;
 
@@ -273,6 +273,152 @@ impl BackupGroup {
         Ok(delete_stats)
     }
 
+    /// Move this group to a new namespace.
+    ///
+    /// For the filesystem backend, uses `rename` to atomically relocate the group directory. For
+    /// the S3 backend, copies all objects to the destination prefix first, then renames the local
+    /// cache directory, then deletes the source objects. A copy failure returns an error with the
+    /// group intact at source. A delete failure is logged as a warning - any un-deleted source
+    /// objects are orphaned and must be removed manually.
+    ///
+    /// The caller must have created the target type directory
+    /// (e.g. `{target_ns}/{backup_type}/`) before calling this method.
+    ///
+    /// The caller must hold an exclusive group lock and exclusive locks on all snapshots in the
+    /// group. This prevents concurrent readers, writers, and verify tasks from operating on the
+    /// group and ensures no new objects are added between the S3 copy sweep and the subsequent
+    /// deletes.
+    pub(crate) fn move_to(
+        &self,
+        target_ns: &BackupNamespace,
+        backend: &DatastoreBackend,
+    ) -> Result<(), Error> {
+        let src_path = self.full_group_path();
+        let target_path = self.store.group_path(target_ns, &self.group);
+
+        log::info!("moving backup group {src_path:?} to {target_path:?}");
+
+        match backend {
+            DatastoreBackend::Filesystem => {
+                std::fs::rename(&src_path, &target_path).with_context(|| {
+                    format!("failed to move group {src_path:?} to {target_path:?}")
+                })?;
+                // Remove the now-stale source lock file. The caller's lock guard
+                // still holds the flock via the open FD until it is dropped.
+                let _ = std::fs::remove_file(self.lock_path());
+            }
+            DatastoreBackend::S3(s3_client) => {
+                // Build S3 key prefixes for source and target groups, e.g.:
+                //   src: ".cnt/a/b/vm/100/"
+                //   tgt: ".cnt/a/c/vm/100/"
+                let src_rel = self.relative_group_path();
+                let src_rel_str = src_rel
+                    .to_str()
+                    .ok_or_else(|| format_err!("invalid source group path"))?;
+                let src_prefix_str = format!("{S3_CONTENT_PREFIX}/{src_rel_str}/");
+
+                let mut tgt_rel = target_ns.path();
+                tgt_rel.push(self.group.ty.as_str());
+                tgt_rel.push(&self.group.id);
+                let tgt_rel_str = tgt_rel
+                    .to_str()
+                    .ok_or_else(|| format_err!("invalid target group path"))?;
+                let tgt_prefix_str = format!("{S3_CONTENT_PREFIX}/{tgt_rel_str}/");
+
+                // S3 list_objects returns full keys with the store name as the leading component,
+                // e.g. "mystore/.cnt/a/b/vm/100/2026-01-01T00:00:00Z/drive-scsi0.img.fidx".
+                // Strip "mystore/" to get the relative key used by copy_object.
+                let store_prefix = format!("{}/", self.store.name());
+
+                log::debug!(
+                    "S3 move: listing prefix '{src_prefix_str}', store_prefix='{store_prefix}', tgt_prefix='{tgt_prefix_str}'"
+                );
+
+                // Copy all objects to the target prefix first. No source objects are deleted
+                // until all copies succeed, so a copy failure leaves the group intact at source.
+                let prefix = S3PathPrefix::Some(src_prefix_str.clone());
+                let mut token: Option<String> = None;
+                let mut src_keys = Vec::new();
+
+                loop {
+                    let result = proxmox_async::runtime::block_on(
+                        s3_client.list_objects_v2(&prefix, token.as_deref()),
+                    )
+                    .context("failed to list group objects on S3 backend")?;
+
+                    log::debug!(
+                        "S3 move: listed {} objects (truncated={})",
+                        result.contents.len(),
+                        result.is_truncated
+                    );
+
+                    for item in result.contents {
+                        let full_key_str: &str = &item.key;
+                        log::debug!("S3 move: processing key '{full_key_str}'");
+                        let rel_key =
+                            full_key_str.strip_prefix(&store_prefix).ok_or_else(|| {
+                                format_err!("unexpected key prefix in '{full_key_str}'")
+                            })?;
+                        let src_key = S3ObjectKey::try_from(rel_key)?;
+
+                        // Replace the source group prefix with the target prefix,
+                        // keeping the snapshot dir and filename (suffix) intact.
+                        let suffix = rel_key
+                            .strip_prefix(&src_prefix_str)
+                            .ok_or_else(|| format_err!("unexpected key format '{rel_key}'"))?;
+                        let dst_key_str = format!("{tgt_prefix_str}{suffix}");
+                        let dst_key = S3ObjectKey::try_from(dst_key_str.as_str())?;
+
+                        log::debug!("S3 move: copy '{rel_key}' -> '{dst_key_str}'");
+                        proxmox_async::runtime::block_on(
+                            s3_client.copy_object(src_key.clone(), dst_key),
+                        )
+                        .with_context(|| format!("failed to copy S3 object '{rel_key}'"))?;
+                        src_keys.push(src_key);
+                    }
+
+                    if result.is_truncated {
+                        token = result.next_continuation_token;
+                    } else {
+                        break;
+                    }
+                }
+
+                // All copies succeeded. Remove any pre-created target directory (the
+                // caller may have created one) before renaming - rename(2) requires
+                // the target to be empty if it exists.
+                if target_path.exists() {
+                    let _ = std::fs::remove_dir_all(&target_path);
+                }
+                // Rename the local cache directory before deleting source objects so that
+                // the local cache reflects the target state as soon as possible.
+                std::fs::rename(&src_path, &target_path).with_context(|| {
+                    format!("failed to move group {src_path:?} to {target_path:?}")
+                })?;
+                let _ = std::fs::remove_file(self.lock_path());
+
+                // Delete source objects. In case of a delete failure the group is already at the
+                // target (S3 copies + local cache). Treat delete failures as warnings so the
+                // caller does not misreport the group as "failed to move".
+                // Un-deleted sources must be removed manually.
+                log::debug!("S3 move: deleting {} source objects", src_keys.len());
+                for src_key in src_keys {
+                    log::debug!("S3 move: delete '{src_key:?}'");
+                    if let Err(err) =
+                        proxmox_async::runtime::block_on(s3_client.delete_object(src_key.clone()))
+                    {
+                        log::warn!(
+                            "S3 move: failed to delete source object '{src_key:?}' \
+                            (group already at target, orphaned object requires manual removal): {err:#}"
+                        );
+                    }
+                }
+            }
+        }
+
+        Ok(())
+    }
+
     /// Helper function, assumes that no more snapshots are present in the group.
     fn remove_group_dir(&self) -> Result<(), Error> {
         let note_path = self.store.group_notes_path(&self.ns, &self.group);
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index ef378c69..d74e7e42 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -1014,6 +1014,62 @@ impl DataStore {
         backup_group.destroy(&self.backend()?)
     }
 
+    /// Move a single backup group to a different namespace within the same datastore.
+    ///
+    /// Acquires an exclusive group lock and exclusive locks on every snapshot in the group to
+    /// ensure no concurrent readers, writers, or verify tasks are operating on any snapshot.
+    /// This mirrors the locking strategy of `BackupGroup::destroy()`.
+    pub fn move_group(
+        self: &Arc<Self>,
+        source_ns: &BackupNamespace,
+        group: &pbs_api_types::BackupGroup,
+        target_ns: &BackupNamespace,
+    ) -> Result<(), Error> {
+        if source_ns == target_ns {
+            bail!("source and target namespace must be different");
+        }
+
+        let source_group = self.backup_group(source_ns.clone(), group.clone());
+        let target_group = self.backup_group(target_ns.clone(), group.clone());
+
+        // Acquire exclusive group lock - prevents new snapshot additions/removals.
+        let _group_lock = source_group
+            .lock()
+            .with_context(|| format!("failed to lock group '{group}' for move"))?;
+
+        // Acquire exclusive lock on each snapshot - ensures no concurrent readers, verify
+        // tasks, or other operations are active on any snapshot in this group.
+        let mut _snap_locks = Vec::new();
+        for snap in source_group.iter_snapshots()? {
+            let snap = snap?;
+            _snap_locks.push(snap.lock().with_context(|| {
+                format!(
+                    "cannot move group '{group}': snapshot '{}' is in use",
+                    snap.dir(),
+                )
+            })?);
+        }
+
+        // Check existence under locks to avoid TOCTOU races.
+        if !self.namespace_exists(target_ns) {
+            bail!("target namespace '{target_ns}' does not exist");
+        }
+        if !source_group.exists() {
+            bail!("group '{group}' does not exist in namespace '{source_ns}'");
+        }
+        if target_group.exists() {
+            bail!("group '{group}' already exists in target namespace '{target_ns}'");
+        }
+
+        let backend = self.backend()?;
+
+        std::fs::create_dir_all(self.type_path(target_ns, group.ty)).with_context(|| {
+            format!("failed to create type directory in '{target_ns}' for move")
+        })?;
+
+        source_group.move_to(target_ns, &backend)
+    }
+
     /// Remove a backup directory including all content
     pub fn remove_backup_dir(
         self: &Arc<Self>,
-- 
2.47.3




