public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup v3 0/3] fixes #6195: add support for moving namespaces
@ 2025-09-11  9:49 Hannes Laimer
  2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint " Hannes Laimer
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Hannes Laimer @ 2025-09-11  9:49 UTC (permalink / raw)
  To: pbs-devel

Adds support for moving namespaces within a datastore.
- we don't create non-existing target namespaces
- we don't allow moving if a namespace with the same "name" already
  exists. So given:
   /a/b/c/x
   /1/2/3/4/x
  moving /1/2/3/4/x to /a/b/c/ is not allowed.
These are the only real restrictions. Of course, moving a namespace into
its own sub-namespaces, exceeding the depth limit, or moving the root are
also not allowed, but those restrictions aren't really artificial.
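As a rough illustration of the collision rule, using plain strings instead
of the real BackupNamespace type (all names here are made up for the
example, this is not the actual implementation):

```rust
// The move appends the last component of the source namespace to the
// target and rejects the move if that combined path already exists.
// `existing` stands in for the datastore's namespace_exists() check.
fn move_allowed(existing: &[&str], source: &str, target: &str) -> Result<String, String> {
    // last path component of the source, e.g. "x" for "1/2/3/4/x"
    let base = source.rsplit('/').next().ok_or("source can't be root")?;
    let dest = if target.is_empty() {
        base.to_string()
    } else {
        format!("{target}/{base}")
    };
    if existing.contains(&dest.as_str()) {
        return Err(format!("'{dest}' already exists"));
    }
    Ok(dest)
}

fn main() {
    let existing = ["a/b/c/x", "1/2/3/4/x"];
    // moving 1/2/3/4/x under a/b/c collides with the existing a/b/c/x
    assert!(move_allowed(&existing, "1/2/3/4/x", "a/b/c").is_err());
    // moving it under a/b is fine
    assert_eq!(move_allowed(&existing, "1/2/3/4/x", "a/b").unwrap(), "a/b/x");
}
```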

v3, thanks @Thomas:
 - add commit message outlining how moving is done
 - add some comments containing a simple example that show how paths are
   manipulated throughout the moving process
 - add short section to docs

v2, thanks @Shannon:
 - only allow moving if no tasks are running, some more info on the
   patch itself 

Hannes Laimer (3):
  fix #6195: api: datastore: add endpoint for moving namespaces
  fix #6195: ui: add button/window for moving a namespace
  docs: add section for moving datastores

 docs/storage.rst               |  20 ++++-
 pbs-datastore/src/datastore.rs | 129 +++++++++++++++++++++++++++++++++
 src/api2/admin/namespace.rs    |  50 ++++++++++++-
 www/Makefile                   |   1 +
 www/datastore/Content.js       |  38 ++++++++++
 www/window/NamespaceMove.js    |  47 ++++++++++++
 6 files changed, 282 insertions(+), 3 deletions(-)
 create mode 100644 www/window/NamespaceMove.js

-- 
2.47.3



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint for moving namespaces
  2025-09-11  9:49 [pbs-devel] [PATCH proxmox-backup v3 0/3] fixes #6195: add support for moving namespaces Hannes Laimer
@ 2025-09-11  9:49 ` Hannes Laimer
  2025-09-15  8:16   ` Christian Ebner
  2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 2/3] fix #6195: ui: add button/window for moving a namespace Hannes Laimer
  2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 3/3] docs: add section for moving datastores Hannes Laimer
  2 siblings, 1 reply; 10+ messages in thread
From: Hannes Laimer @ 2025-09-11  9:49 UTC (permalink / raw)
  To: pbs-devel

For normal FS-backed datastores we:
 1. create the target dir; this only creates something if the target
    namespace does not have any sub-namespaces yet, i.e. only if `ns/`
    would be missing (this does assume we run on the proxy, otherwise
    we'd get the wrong permissions on the created directories)
 2. move the source to the target, `fs::rename(source, target)`

For S3-backed datastores we:
 1. iterate through all objects under the source namespace prefix
 2. copy over each object, since the S3 API does not support move;
    this is one API call per object (each file in a snapshot
    directory is an object)
 3. delete all objects under the source prefix;
    this takes at least two API calls, one for getting the list of
    all objects and at least one more for deleting them
 4. do the same for the local cache datastore, i.e. like for every
    other normal datastore as described above

So the total number of API calls needed to move a namespace in an S3
datastore is roughly
 `# of objects` + `at least 2` (depends on pagination/batched deletion)
For local datastores, moving only involves an `fs::rename` and potentially
the creation of a `ns/` directory.
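The per-object key manipulation on the S3 path can be sketched with plain
string handling (names are illustrative only, not the actual proxmox-backup
types, which wrap keys in S3ObjectKey values):

```rust
// Strip the store prefix from an object key, swap the source namespace
// prefix for the target one, and keep the rest of the object path.
fn rewrite_key(key: &str, store_prefix: &str, src_ns: &str, dst_ns: &str) -> Option<String> {
    // "store/.cnt/a/b/vm/100/…" -> "a/b/vm/100/…"
    let object_path = key.strip_prefix(store_prefix)?;
    // "a/b/vm/100/…" -> "vm/100/…"
    let rest = object_path.strip_prefix(&format!("{src_ns}/"))?;
    // ".cnt/x/b/vm/100/…"
    Some(format!(".cnt/{dst_ns}/{rest}"))
}

fn main() {
    let key = "store/.cnt/a/b/vm/100/2025-01-01T00:00:00Z/index.json.blob";
    assert_eq!(
        rewrite_key(key, "store/.cnt/", "a/b", "x/b").unwrap(),
        ".cnt/x/b/vm/100/2025-01-01T00:00:00Z/index.json.blob"
    );
}
```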

Validation before doing either of these includes:
 - source can't be root
 - source has to exist
 - target ns has to exist
 - target can't already contain a namespace with the same name as the source
 - source can't be moved into its own sub-namespaces
 - max_depth can't be exceeded by the move
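A simplified sketch of two of these checks over plain path strings (the
real code works on BackupNamespace values and checks existence against the
datastore; the depth check here is simplified and ignores sub-namespaces
of the source):

```rust
// Reject moving a namespace into its own subtree and moves that would
// exceed a maximum depth. Namespaces are '/'-separated strings, ""
// meaning root.
fn validate_move(source: &str, target: &str, max_depth: usize) -> Result<(), String> {
    if source.is_empty() {
        return Err("source can't be root".into());
    }
    // moving a namespace into its own subtree: "a/b" -> "a/b/c"
    if target == source || target.starts_with(&format!("{source}/")) {
        return Err("can't move into own sub-namespace".into());
    }
    let depth = |ns: &str| if ns.is_empty() { 0 } else { ns.split('/').count() };
    // simplification: only accounts for the moved namespace itself
    if depth(target) + 1 > max_depth {
        return Err("max depth exceeded".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_move("a/b", "a/b/c", 8).is_err()); // own subtree
    assert!(validate_move("", "x", 8).is_err());        // root source
    assert!(validate_move("a/b", "x", 8).is_ok());
}
```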

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
changes since v2:
 - commit message
 - add short comments with example

 pbs-datastore/src/datastore.rs | 129 +++++++++++++++++++++++++++++++++
 src/api2/admin/namespace.rs    |  50 ++++++++++++-
 2 files changed, 177 insertions(+), 2 deletions(-)

diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 7cf020fc..e45b3c3d 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -706,6 +706,135 @@ impl DataStore {
         Ok(ns)
     }
 
+    pub fn namespace_move(
+        self: &Arc<Self>,
+        ns: BackupNamespace,
+        mut target: BackupNamespace,
+    ) -> Result<(), Error> {
+        log::info!(
+            "moving ns '{}' to ns '{}' in datastore {}",
+            ns,
+            target,
+            self.name()
+        );
+        if ns.is_root() {
+            bail!("can't move root");
+        }
+        if !self.namespace_exists(&ns) {
+            bail!("cannot move a namespace that does not exist");
+        }
+        if !self.namespace_exists(&target) {
+            bail!("cannot move to a namespace that does not exist");
+        }
+        let Some(base_name) = ns.clone().pop() else {
+            bail!("source namespace can't be root");
+        };
+        target.push(base_name.clone())?;
+        if self.namespace_exists(&target) {
+            bail!("'{}' already exists", target);
+        }
+
+        let source_path = self.namespace_path(&ns);
+        let target_path = self.namespace_path(&target);
+        if target_path.starts_with(&source_path) {
+            bail!("cannot move namespace into one of its sub-namespaces");
+        }
+
+        let iter = self.recursive_iter_backup_ns_ok(ns.to_owned(), None)?;
+        let source_depth = ns.depth();
+        let source_height = iter.map(|ns| ns.depth() - source_depth).max().unwrap_or(1);
+        target.check_max_depth(source_height - 1)?;
+
+        if let DatastoreBackend::S3(s3_client) = self.backend()? {
+            let src_ns_rel = ns.path();
+            let src_ns_rel_str = src_ns_rel
+                .to_str()
+                .ok_or_else(|| format_err!("invalid source namespace path"))?;
+
+            let dst_ns_rel = target.path();
+            let dst_ns_rel_str = dst_ns_rel
+                .to_str()
+                .ok_or_else(|| format_err!("invalid destination namespace path"))?;
+            // e.g. src_ns_rel_str: "a/b"; dst_ns_rel_str: "x/b"
+            
+            let list_prefix = S3PathPrefix::Some(format!("{S3_CONTENT_PREFIX}/{src_ns_rel_str}"));
+            let mut next_continuation_token: Option<String> = None;
+            // e.g. store_prefix: "{store}/.cnt/"
+            let store_prefix = format!("{}/{S3_CONTENT_PREFIX}/", self.name());
+
+            loop {
+                let list_objects_result = proxmox_async::runtime::block_on(
+                    s3_client.list_objects_v2(&list_prefix, next_continuation_token.as_deref()),
+                )?;
+
+                let objects_to_move: Vec<S3ObjectKey> = list_objects_result
+                    .contents
+                    .iter()
+                    .map(|c| c.key.clone())
+                    .collect();
+
+                for object_key in objects_to_move {
+                    let object_key_str = object_key.to_string();
+                    // e.g. object_key_str: "store/.cnt/a/b/vm/100/2025-01-01T00:00:00Z/index.json.blob"
+                    let object_path =
+                        object_key_str.strip_prefix(&store_prefix).ok_or_else(|| {
+                            format_err!(
+                                "failed to strip store prefix '{}' from object key '{}'",
+                                store_prefix,
+                                object_key_str
+                            )
+                        })?;
+                    // e.g. object_path: "a/b/vm/100/2025-01-01T00:00:00Z/index.json.blob"
+
+                    let src_key_rel = S3ObjectKey::try_from(
+                        format!("{S3_CONTENT_PREFIX}/{}", object_path).as_str(),
+                    )?;
+                    // e.g. src_key_rel: ".cnt/a/b/vm/100/2025-01-01T00:00:00Z/index.json.blob"
+
+                    let src_rel_with_sep = format!("{}/", src_ns_rel_str);
+                    let rest = object_path
+                        .strip_prefix(&src_rel_with_sep)
+                        .ok_or_else(|| format_err!("unexpected object path: {}", object_path))?;
+                    // e.g. src_rel_with_sep: "a/b/"; rest: "vm/100/2025-01-01T00:00:00Z/index.json.blob"
+
+                    let dest_rel_path = format!("{}/{}", dst_ns_rel_str, rest);
+                    let dest_rel_path = std::path::Path::new(&dest_rel_path);
+                    let file_name = dest_rel_path
+                        .file_name()
+                        .and_then(|n| n.to_str())
+                        .ok_or_else(|| format_err!("invalid destination object file name"))?;
+                    let parent = dest_rel_path
+                        .parent()
+                        .unwrap_or_else(|| std::path::Path::new(""));
+                    let dest_key = crate::s3::object_key_from_path(parent, file_name)?;
+                    // e.g. dest_key: ".cnt/x/b/vm/100/2025-01-01T00:00:00Z/index.json.blob"
+
+                    proxmox_async::runtime::block_on(
+                        s3_client.copy_object(src_key_rel.clone(), dest_key),
+                    )?;
+                }
+
+                if list_objects_result.is_truncated {
+                    next_continuation_token = list_objects_result.next_continuation_token;
+                } else {
+                    break;
+                }
+            }
+
+            // delete objects under the old namespace prefix
+            let delete_error =
+                proxmox_async::runtime::block_on(s3_client.delete_objects_by_prefix(&list_prefix))?;
+            // e.g. delete prefix: ".cnt/a/b"
+            if delete_error {
+                bail!("deleting objects under old namespace prefix failed");
+            }
+        }
+
+        std::fs::create_dir_all(&target_path)?;
+        std::fs::rename(source_path, target_path)?;
+        Ok(())
+    }
+
     /// Returns if the given namespace exists on the datastore
     pub fn namespace_exists(&self, ns: &BackupNamespace) -> bool {
         let mut path = self.base_path();
diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
index 6cf88d89..cf741ead 100644
--- a/src/api2/admin/namespace.rs
+++ b/src/api2/admin/namespace.rs
@@ -6,7 +6,7 @@ use proxmox_schema::*;
 
 use pbs_api_types::{
     Authid, BackupGroupDeleteStats, BackupNamespace, NamespaceListItem, Operation,
-    DATASTORE_SCHEMA, NS_MAX_DEPTH_SCHEMA, PROXMOX_SAFE_ID_FORMAT,
+    DATASTORE_SCHEMA, NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_MODIFY, PROXMOX_SAFE_ID_FORMAT,
 };
 
 use pbs_datastore::DataStore;
@@ -123,6 +123,51 @@ pub fn list_namespaces(
     Ok(namespace_list)
 }
 
+#[api(
+    input: {
+        properties: {
+            store: { schema: DATASTORE_SCHEMA },
+            ns: {
+                type: BackupNamespace,
+            },
+            "target-ns": {
+                type: BackupNamespace,
+                optional: true,
+            },
+        },
+    },
+    access: {
+        permission: &Permission::Anybody,
+        description: "Requires DATASTORE_MODIFY on both the source /datastore/{store}/{ns} and \
+            the target /datastore/{store}/[{target-ns}]",
+    },
+)]
+/// Move a namespace into a different one. No target means the root namespace is the target.
+pub fn move_namespace(
+    store: String,
+    ns: BackupNamespace,
+    target_ns: Option<BackupNamespace>,
+    _info: &ApiMethod,
+    rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+    let target = target_ns.unwrap_or(BackupNamespace::root());
+
+    check_ns_modification_privs(&store, &ns, &auth_id)?;
+    check_ns_privs(&store, &target, &auth_id, PRIV_DATASTORE_MODIFY)?;
+
+    let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+
+    // This lock on the active operations tracking file prevents updates on it, so no new
+    // read/write tasks can be started while we hold this. We are the one writing operation we
+    // expect here.
+    let (operations, _lock) = pbs_datastore::task_tracking::get_active_operations_locked(&store)?;
+    if operations.read > 0 || operations.write > 1 {
+        bail!("can't move namespace while tasks are running");
+    };
+    datastore.namespace_move(ns, target)
+}
+
 #[api(
     input: {
         properties: {
@@ -192,4 +237,5 @@ pub fn delete_namespace(
 pub const ROUTER: Router = Router::new()
     .get(&API_METHOD_LIST_NAMESPACES)
     .post(&API_METHOD_CREATE_NAMESPACE)
-    .delete(&API_METHOD_DELETE_NAMESPACE);
+    .delete(&API_METHOD_DELETE_NAMESPACE)
+    .subdirs(&[("move", &Router::new().post(&API_METHOD_MOVE_NAMESPACE))]);
-- 
2.47.3






* [pbs-devel] [PATCH proxmox-backup v3 2/3] fix #6195: ui: add button/window for moving a namespace
  2025-09-11  9:49 [pbs-devel] [PATCH proxmox-backup v3 0/3] fixes #6195: add support for moving namespaces Hannes Laimer
  2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint " Hannes Laimer
@ 2025-09-11  9:49 ` Hannes Laimer
  2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 3/3] docs: add section for moving datastores Hannes Laimer
  2 siblings, 0 replies; 10+ messages in thread
From: Hannes Laimer @ 2025-09-11  9:49 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
no changes since v1.

 www/Makefile                |  1 +
 www/datastore/Content.js    | 38 ++++++++++++++++++++++++++++++
 www/window/NamespaceMove.js | 47 +++++++++++++++++++++++++++++++++++++
 3 files changed, 86 insertions(+)
 create mode 100644 www/window/NamespaceMove.js

diff --git a/www/Makefile b/www/Makefile
index 9ebf0445..d0b4592c 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -79,6 +79,7 @@ JSSRC=							\
 	window/CreateDirectory.js			\
 	window/DataStoreEdit.js				\
 	window/NamespaceEdit.js				\
+	window/NamespaceMove.js				\
 	window/MaintenanceOptions.js			\
 	window/NotesEdit.js				\
 	window/RemoteEdit.js				\
diff --git a/www/datastore/Content.js b/www/datastore/Content.js
index 075022e9..ad9733c1 100644
--- a/www/datastore/Content.js
+++ b/www/datastore/Content.js
@@ -447,6 +447,22 @@ Ext.define('PBS.DataStoreContent', {
             win.on('destroy', this.reload, this);
         },
 
+        onMove: function (table, rI, cI, item, e, rec) {
+            let me = this;
+            let view = me.getView();
+            if (!rec.data.ns || rec.data.ty !== 'ns' || !view.datastore) {
+                return;
+            }
+            Ext.create('PBS.NamespaceMove', {
+                autoShow: true,
+                datastore: view.datastore,
+                ns: rec.data.ns,
+                listeners: {
+                    destroy: () => me.reload(),
+                },
+            });
+        },
+
         onPrune: function (table, rI, cI, item, e, rec) {
             let me = this;
             let view = me.getView();
@@ -1058,6 +1074,18 @@ Ext.define('PBS.DataStoreContent', {
                         data.ty === 'group' ? 'fa fa-scissors' : 'pmx-hidden',
                     isActionDisabled: (v, r, c, i, { data }) => data.ty !== 'group',
                 },
+                {
+                    handler: 'onMove',
+                    getTip: (v, m, rec) => Ext.String.format(gettext("Move '{0}'"), v),
+                    getClass: (v, m, { data }) => {
+                        if (data.parentId !== 'root' && !!data.ns && data.ty === 'ns' && !data.isRootNS) {
+                            return 'fa fa-arrow-right';
+                        } else {
+                            return 'pmx-hidden';
+                        }
+                    },
+                    isActionDisabled: (v, r, c, i, { data }) => data.ty !== 'ns',
+                },
                 {
                     handler: 'onProtectionChange',
                     getTip: (v, m, rec) =>
@@ -1439,6 +1467,16 @@ Ext.define('PBS.datastore.GroupCmdMenu', {
                 hidden: '{!onPrune}',
             },
         },
+        {
+            text: gettext('Move'),
+            iconCls: 'fa fa-arrow-right',
+            handler: function () {
+                this.up('menu').onMove();
+            },
+            cbind: {
+                hidden: '{!onMove}',
+            },
+        },
         { xtype: 'menuseparator' },
         {
             text: gettext('Remove'),
diff --git a/www/window/NamespaceMove.js b/www/window/NamespaceMove.js
new file mode 100644
index 00000000..b294bf6f
--- /dev/null
+++ b/www/window/NamespaceMove.js
@@ -0,0 +1,47 @@
+Ext.define('PBS.NamespaceMove', {
+    extend: 'Proxmox.window.Edit',
+    alias: 'widget.pbsNamespaceMove',
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    onlineHelp: 'changing-backup-owner',
+
+    submitText: gettext('Move Namespace'),
+    width: 350,
+
+    initComponent: function () {
+        let me = this;
+
+        if (!me.datastore) {
+            throw 'no datastore specified';
+        }
+        if (!me.ns) {
+            throw 'no namespace specified';
+        }
+
+        Ext.apply(me, {
+            url: `/api2/extjs/admin/datastore/${me.datastore}/namespace/move`,
+            method: 'POST',
+            subject: gettext('Move Namespace') + ` - ${me.ns}`,
+            items: {
+                xtype: 'inputpanel',
+                onGetValues: function (values) {
+                    values.ns = me.ns;
+                    return values;
+                },
+
+                items: [
+                    {
+                        xtype: 'pbsNamespaceSelector',
+                        datastore: me.datastore,
+                        name: 'target-ns',
+                        value: me.ns,
+                        fieldLabel: gettext('Target'),
+                        allowBlank: true,
+                    },
+                ],
+            },
+        });
+
+        me.callParent();
+    },
+});
-- 
2.47.3






* [pbs-devel] [PATCH proxmox-backup v3 3/3] docs: add section for moving datastores
  2025-09-11  9:49 [pbs-devel] [PATCH proxmox-backup v3 0/3] fixes #6195: add support for moving namespaces Hannes Laimer
  2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint " Hannes Laimer
  2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 2/3] fix #6195: ui: add button/window for moving a namespace Hannes Laimer
@ 2025-09-11  9:49 ` Hannes Laimer
  2 siblings, 0 replies; 10+ messages in thread
From: Hannes Laimer @ 2025-09-11  9:49 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
new in v3.

 docs/storage.rst | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/docs/storage.rst b/docs/storage.rst
index 54540219..00e45ce1 100644
--- a/docs/storage.rst
+++ b/docs/storage.rst
@@ -527,7 +527,25 @@ For backup groups, the existing privilege rules still apply. You either need a
 privileged enough permission or to be the owner of the backup group; nothing
 changed here.
 
-.. todo:: continue
+Moving Namespaces
+^^^^^^^^^^^^^^^^^
+
+You can move a namespace within the same datastore to reorganize the hierarchy. Moving the root
+namespace is not allowed, and cross-datastore moves are not supported.
+
+Requirements and limits:
+
+- Source namespace must exist and must not be the root
+- Target parent namespace must exist and the final destination must not already exist
+- You cannot move a namespace into one of its own sub-namespaces
+- The resulting depth cannot exceed the maximum depth
+
+Notes:
+
+- On local filesystems, moves are typically fast.
+- On S3-backed datastores (see :ref:`datastore_s3_backend`), moves may take longer and incur API
+  costs, as objects are re-written under a new prefix. For example, ``.cnt/a/b/...`` moved to
+  target namespace ``x/b`` becomes ``.cnt/x/b/...``.
 
 
 Options
-- 
2.47.3






* Re: [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint for moving namespaces
  2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint " Hannes Laimer
@ 2025-09-15  8:16   ` Christian Ebner
  2025-09-15  8:27     ` Hannes Laimer
  0 siblings, 1 reply; 10+ messages in thread
From: Christian Ebner @ 2025-09-15  8:16 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Hannes Laimer

Thanks for having a go at this issue. I did not yet have an in-depth
look at this, but unfortunately I'm afraid the current implementation
approach will not work for the S3 backend (and might also have issues
for local datastores).

Copying the S3 objects is not an atomic operation and will take some
time, which leaves you open to race conditions. E.g. while you copy
contents, a new backup snapshot might be created in one of the already
copied backup groups, which will then however be deleted afterwards.
The same is true for pruning and other metadata-editing operations,
such as adding notes, backup task logs, etc.

So IMO this must be tackled on a group level, making sure to get an
exclusive lock for each group (on the source as well as the target of
the move operation) before doing any manipulation. Only then is it okay
to do any non-atomic operations.

The moving of the namespace must then be implemented as batch operations
on the groups and sub-namespaces.

This should be handled the same way for regular datastores too, to avoid
any races there.
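Roughly, the proposed batch handling could look like the following sketch
(purely illustrative, all names made up; the actual exclusive group locks
and the per-group move itself are elided here):

```rust
use std::collections::HashSet;

// Try to handle each backup group individually: groups whose exclusive
// lock cannot be taken are skipped and reported, instead of failing the
// whole namespace move. `locked_elsewhere` stands in for lock contention.
fn move_groups(groups: &[&str], locked_elsewhere: &HashSet<&str>) -> (Vec<String>, Vec<String>) {
    let mut moved = Vec::new();
    let mut skipped = Vec::new();
    for group in groups {
        if locked_elsewhere.contains(group) {
            // couldn't get the exclusive lock: skip, don't abort the batch
            skipped.push(group.to_string());
        } else {
            moved.push(group.to_string());
        }
    }
    (moved, skipped)
}

fn main() {
    let busy: HashSet<&str> = ["vm/100"].into_iter().collect();
    let (moved, skipped) = move_groups(&["vm/100", "vm/101"], &busy);
    assert_eq!(moved, vec!["vm/101"]);
    assert_eq!(skipped, vec!["vm/100"]);
}
```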





* Re: [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint for moving namespaces
  2025-09-15  8:16   ` Christian Ebner
@ 2025-09-15  8:27     ` Hannes Laimer
  2025-09-15  8:56       ` Christian Ebner
  0 siblings, 1 reply; 10+ messages in thread
From: Hannes Laimer @ 2025-09-15  8:27 UTC (permalink / raw)
  To: Christian Ebner, Proxmox Backup Server development discussion

On 15.09.25 10:15, Christian Ebner wrote:
> Thanks for having a go at this issue, I did not yet have an in depth 
> look at this but unfortunately I'm afraid the current implementation 
> approach will not work for the S3 backend (and might also have issues 
> for local datastores).
> 
> Copying the S3 objects is not an atomic operation and  will take some 
> time, so leaves you open for race conditions. E.g. while you copy 
> contents, a new backup snapshot might be created in one of the already 
> copied backup groups, which will then however be deleted afterwards. 
> Same is true for pruning, and other metadata editing operations such as
> adding notes, backup task logs, ecc.
> 

Yes, but not really. We lock the `active_operations` tracking file, so
no new read/write operations can be started after we start the moving
process. There's a short comment in the API endpoint function.
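As a rough sketch, the gate boils down to the following (the struct fields
and threshold mirror the patch; the locking of the tracking file itself is
elided, and the names are simplified):

```rust
// With the active-operations tracking file locked, the move may proceed
// only if there are no readers and the single writer is the move
// operation itself.
struct ActiveOperations {
    read: u64,
    write: u64,
}

fn may_move(ops: &ActiveOperations) -> bool {
    // we are the one expected write operation, so write == 1 is fine
    ops.read == 0 && ops.write <= 1
}

fn main() {
    assert!(may_move(&ActiveOperations { read: 0, write: 1 }));
    assert!(!may_move(&ActiveOperations { read: 1, write: 1 }));
    assert!(!may_move(&ActiveOperations { read: 0, write: 2 }));
}
```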

I'm not sure there is much value in more granular locking. I mean, is
half a successful move worth much? Unless we add some kind of rollback,
but tbh, I feel like that would not be worth the effort.

> So IMO this must be tackled on a group level, making sure to get an 
> exclusive lock for each group (on the source as well as target of the 
> move operation) before doing any manipulation. Only then it is okay to 
> do any non-atomic operations.
> 
> The moving of the namespace must then be implemented as batch operations 
> on the groups and sub-namespaces.
> 
> This should be handled the same also for regular datastores, to avoid 
> any races there to.





* Re: [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint for moving namespaces
  2025-09-15  8:27     ` Hannes Laimer
@ 2025-09-15  8:56       ` Christian Ebner
  2025-09-15  9:19         ` Hannes Laimer
  0 siblings, 1 reply; 10+ messages in thread
From: Christian Ebner @ 2025-09-15  8:56 UTC (permalink / raw)
  To: Hannes Laimer, Proxmox Backup Server development discussion

On 9/15/25 10:27 AM, Hannes Laimer wrote:
> On 15.09.25 10:15, Christian Ebner wrote:
>> Thanks for having a go at this issue, I did not yet have an in depth 
>> look at this but unfortunately I'm afraid the current implementation 
>> approach will not work for the S3 backend (and might also have issues 
>> for local datastores).
>>
>> Copying the S3 objects is not an atomic operation and  will take some 
>> time, so leaves you open for race conditions. E.g. while you copy 
>> contents, a new backup snapshot might be created in one of the already 
>> copied backup groups, which will then however be deleted afterwards. 
>> Same is true for pruning, and other metadata editing operations such as
>> adding notes, backup task logs, ecc.
>>
> 
> Yes, but not really. We lock the `active_operations` tracking file, so
> no new read/write operations can be started after we start the moving
> process. There's a short comment in the API endpoint function.

Ah yes, I did miss that part. But by doing that you basically block
every datastore operation, not just the ones on the source or target
namespace. This is not ideal IMO. Furthermore, you cannot move a NS if
any other operation is ongoing on the datastore, even one completely
unrelated to the source and target namespace, e.g. a backup to another
namespace?

> I'm not sure there is much value in more granular locking, I mean, is
> half a successful move worth much? Unless we add some kind of rollback,
> but tbh, I feel like that would not be worth the effort I think.

Well, it could be just like we do for the sync jobs, skipping the move 
for the ones where the backup group could not be locked or fails for 
some other reason?

I think having the more granular backup group as the unit instead of
the namespace makes this more flexible: what if I only want to move one
backup group from one namespace to another, as in the initial request
in the bug report?

For example, say I had a VM which was backed up to a given namespace
but has since been destroyed, and I want to keep the backups by moving
the group with all its snapshots to a different namespace, freeing the
backup type and ID in the current namespace?

> 
>> So IMO this must be tackled on a group level, making sure to get an 
>> exclusive lock for each group (on the source as well as target of the 
>> move operation) before doing any manipulation. Only then it is okay to 
>> do any non-atomic operations.
>>
>> The moving of the namespace must then be implemented as batch 
>> operations on the groups and sub-namespaces.
>>
>> This should be handled the same also for regular datastores, to avoid 
>> any races there to.
> 





* Re: [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint for moving namespaces
  2025-09-15  8:56       ` Christian Ebner
@ 2025-09-15  9:19         ` Hannes Laimer
  2025-09-15 10:01           ` Christian Ebner
  0 siblings, 1 reply; 10+ messages in thread
From: Hannes Laimer @ 2025-09-15  9:19 UTC (permalink / raw)
  To: Christian Ebner, Proxmox Backup Server development discussion

On 15.09.25 10:56, Christian Ebner wrote:
> On 9/15/25 10:27 AM, Hannes Laimer wrote:
>> On 15.09.25 10:15, Christian Ebner wrote:
>>> Thanks for having a go at this issue, I did not yet have an in depth 
>>> look at this but unfortunately I'm afraid the current implementation 
>>> approach will not work for the S3 backend (and might also have issues 
>>> for local datastores).
>>>
>>> Copying the S3 objects is not an atomic operation and  will take some 
>>> time, so leaves you open for race conditions. E.g. while you copy 
>>> contents, a new backup snapshot might be created in one of the 
>>> already copied backup groups, which will then however be deleted 
>>> afterwards. Same is true for pruning, and other metadata editing 
>>> operations such as
>>> adding notes, backup task logs, ecc.
>>>
>>
>> Yes, but not really. We lock the `active_operations` tracking file, so
>> no new read/write operations can be started after we start the moving
>> process. There's a short comment in the API endpoint function.
> 
> Ah yes, I did miss that part. But by doing that you will basically block 
> any datastore operation, not just the ones to the source or target 
> namespace. This is not ideal IMO. Further you cannot move a NS if any 
> other operation is ongoing on the datastore, which might be completely 
> unrelated to the source and target namespace, e.g. a backup to another 
> namespace?

Yes. But I don't think this is something we can (easily) check for;
maybe there is a good way, but I can't think of a feasible one.
We could lock all affected groups in advance, but I'm not super sure we
can just move a locked dir, at least with the old locking.

Given that for both local and S3 datastores this is, I'd argue, a rather
fast operation, just saying 'nobody does anything while we move stuff'
seems reasonable.

What we could think about adding is maybe a checkbox for updating jobs
referencing the NS, but I'm not sure we want that.

> 
>> I'm not sure there is much value in more granular locking, I mean, is
>> half a successful move worth much? Unless we add some kind of rollback,
>> but tbh, I feel like that would not be worth the effort I think.
> 
> Well, it could be just like we do for the sync jobs, skipping the move 
> for the ones where the backup group could not be locked or fails for 
> some other reason?
> 

Hmm, but then we'd have it in two places, and moving again later won't
work because we can't distinguish between a same-named ns already
existing and a new attempt to complete an earlier move. And we also
can't allow that in general, because what happens if there's the same
VMID twice?

> I think having a more granular backup group unit instead of namespace 
> makes this more flexible: what if I only want to move one backup group 
> from one namespace to another one, as the initial request in the bug 
> report?
> 

That is not possible currently. And, at least with this series, not
intended. We could support that eventually, but that should be rather
orthogonal to this one, I think.

> For example, I had a VM which has been backed up to a given namespace, 
> has however since been destroyed, but I want to keep the backups by 
> moving the group with all the snapshots to a different namespace, 
> freeing the backup type and ID for the current namespace?
> 

I see the use-case for this, but I think these are two things. Moving a 
NS and moving a single group.

>>
>>> So IMO this must be tackled on a group level, making sure to get an 
>>> exclusive lock for each group (on the source as well as target of the 
>>> move operation) before doing any manipulation. Only then it is okay 
>>> to do any non-atomic operations.
>>>
>>> The moving of the namespace must then be implemented as batch 
>>> operations on the groups and sub-namespaces.
>>>
>>> This should be handled the same way for regular datastores, to avoid
>>> any races there too.
>>
> 



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint for moving namespaces
  2025-09-15  9:19         ` Hannes Laimer
@ 2025-09-15 10:01           ` Christian Ebner
  2025-09-15 10:32             ` Hannes Laimer
  0 siblings, 1 reply; 10+ messages in thread
From: Christian Ebner @ 2025-09-15 10:01 UTC (permalink / raw)
  To: Hannes Laimer, Proxmox Backup Server development discussion
  Cc: Thomas Lamprecht

On 9/15/25 11:19 AM, Hannes Laimer wrote:
> On 15.09.25 10:56, Christian Ebner wrote:
>> On 9/15/25 10:27 AM, Hannes Laimer wrote:
>>> On 15.09.25 10:15, Christian Ebner wrote:
>>>> Thanks for having a go at this issue, I did not yet have an in depth 
>>>> look at this but unfortunately I'm afraid the current implementation 
>>>> approach will not work for the S3 backend (and might also have 
>>>> issues for local datastores).
>>>>
>>>> Copying the S3 objects is not an atomic operation and will take
>>>> some time, which leaves you open to race conditions. E.g. while you
>>>> copy contents, a new backup snapshot might be created in one of the
>>>> already copied backup groups, which will then however be deleted
>>>> afterwards. The same is true for pruning and other metadata-editing
>>>> operations such as adding notes, backup task logs, etc.
>>>>
>>>
>>> Yes, but not really. We lock the `active_operations` tracking file, so
>>> no new read/write operations can be started after we start the moving
>>> process. There's a short comment in the API endpoint function.
>>
>> Ah yes, I did miss that part. But by doing that you will basically 
>> block any datastore operation, not just the ones to the source or 
>> target namespace. This is not ideal IMO. Further you cannot move a NS 
>> if any other operation is ongoing on the datastore, which might be 
>> completely unrelated to the source and target namespace, e.g. a backup 
>> to another namespace?
> 
> Yes. But I don't think this is something we can (easily) check for, 
> maybe there is a good way, but I can't think of a feasible one.
> We could lock all affected groups in advance, but I'm not super sure we 
> can just move a locked dir, at least with the old locking.

No, not lock all in advance, but we can lock on a per-backup-group
basis (source and target) and consider that the basic operation, so
this is mostly a local sync job on the same datastore from one namespace
to another one. That is why I suggested considering the moving of a
namespace as a batch operation of moving backup groups. While not as
performant, this should eliminate possible races and make error
handling/rollback much easier.
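
A rough in-memory sketch of that batch approach (all names here are
illustrative, not the actual proxmox-backup API; the real thing would
lock each source/target group before touching it):

```rust
use std::collections::BTreeMap;

type Group = String; // e.g. "vm/100"

/// Move all groups of `source` into `target`, one group at a time.
/// Groups whose name already exists in the target are skipped and stay
/// behind in the source namespace -- similar to how sync jobs skip
/// groups they cannot process.
fn move_namespace(
    store: &mut BTreeMap<String, Vec<Group>>, // namespace path -> groups
    source: &str,
    target: &str,
) -> (Vec<Group>, Vec<Group>) {
    let src_groups = store.remove(source).unwrap_or_default();
    let mut moved = Vec::new();
    let mut skipped = Vec::new();
    let tgt = store.entry(target.to_string()).or_default();
    for group in src_groups {
        if tgt.contains(&group) {
            // name collision: skip this group instead of failing the
            // whole namespace move
            skipped.push(group);
        } else {
            tgt.push(group.clone());
            moved.push(group);
        }
    }
    if !skipped.is_empty() {
        // skipped groups remain in the source namespace
        store.insert(source.to_string(), skipped.clone());
    }
    (moved, skipped)
}

fn main() {
    let mut store = BTreeMap::new();
    store.insert("ns1".to_string(), vec!["vm/100".to_string(), "vm/101".to_string()]);
    store.insert("ns2".to_string(), vec!["vm/100".to_string()]);
    let (moved, skipped) = move_namespace(&mut store, "ns1", "ns2");
    // prints: moved: ["vm/101"], skipped: ["vm/100"]
    println!("moved: {moved:?}, skipped: {skipped:?}");
}
```

Error handling/rollback then falls out naturally: a failed group is
just a skipped group, and the source namespace stays consistent.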

> Given that, for both local and S3 datastores, this is, I'd argue, a rather
> fast operation, just saying 'nobody does anything while we move stuff' is
> reasonable.

Well, for an S3 object store with several sub-namespaces, containing 
hundreds of backup groups and thousands of snapshots (with notes, backup 
task logs and other metadata) this might take some time. After all there 
is a copy request for each of the objects involved. Do you have some 
hard numbers on this?

> 
> What we could think about adding is maybe a checkbox for updating jobs
> referencing the NS, but I'm not sure if we want that.
> 
>>
>>> I'm not sure there is much value in more granular locking, I mean, is
>>> half a successful move worth much? Unless we add some kind of rollback,
>>> but tbh, I feel like that would not be worth the effort.
>>
>> Well, it could be just like we do for the sync jobs, skipping the move 
>> for the ones where the backup group could not be locked or fails for 
>> some other reason?
>>
> 
> Hmm, but then we'd have it in two places, and moving again later won't
> work because we can't distinguish between a same-named NS already
> existing and a new attempt to complete an earlier move. And we also can't
> allow that in general, because what happens if there's the same VMID
> twice?

Not if the failed/skipped group is cleaned up correctly, if not 
preexisting? And skip if it is preexisting... disallowing any group name 
collisions.

> 
>> I think having a more granular backup group unit instead of namespace 
>> makes this more flexible: what if I only want to move one backup group 
>> from one namespace to another one, as the initial request in the bug 
>> report?
>>
> 
> That is not possible currently. And, at least with this series, not
> intended. We could support that eventually, but that should be rather
> orthogonal to this one I think.

But then this does not really fix the issue ;)

> 
>> For example, I had a VM which has been backed up to a given namespace, 
>> has however since been destroyed, but I want to keep the backups by 
>> moving the group with all the snapshots to a different namespace, 
>> freeing the backup type and ID for the current namespace?
>>
> 
> I see the use-case for this, but I think these are two things. Moving a 
> NS and moving a single group.

This is a design decision we should make now.
To me it seems to make more sense to see the namespace moving as a batch
operation of moving groups.

Alternatively, IMO we must implement locking for namespaces analogous to
the locking for backup groups to be able to keep a consistent state,
especially for the S3 backend where there are a lot of failure modes.
Locking all operations on the datastore and requiring that none (even
unrelated ones) be active before trying the move is not ideal.
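
If namespace-level locking is the route taken, the simplest shape would
be a per-namespace lock file, analogous to the per-group locks (a sketch
under invented names and paths; a real implementation would go through
the existing locking helpers and handle stale locks after a crash):

```rust
use std::fs::{File, OpenOptions};
use std::io::{Error, ErrorKind};
use std::path::Path;

/// Try to take an exclusive lock on a namespace by atomically creating
/// a lock file. Path layout and naming are purely illustrative.
fn try_lock_namespace(base: &Path, ns: &str) -> std::io::Result<File> {
    let lock_path = base.join(format!("{}.lock", ns.replace('/', "_")));
    match OpenOptions::new().write(true).create_new(true).open(&lock_path) {
        Ok(file) => Ok(file), // hold the file handle as the lock guard
        Err(e) if e.kind() == ErrorKind::AlreadyExists => Err(Error::new(
            ErrorKind::WouldBlock,
            format!("namespace '{ns}' is already locked"),
        )),
        Err(other) => Err(other),
    }
}

fn main() -> std::io::Result<()> {
    let base = std::env::temp_dir().join("ns-lock-demo");
    std::fs::create_dir_all(&base)?;
    let _ = std::fs::remove_file(base.join("a_b.lock")); // clean slate
    let _guard = try_lock_namespace(&base, "a/b")?; // first lock succeeds
    assert!(try_lock_namespace(&base, "a/b").is_err()); // second attempt fails
    Ok(())
}
```

The move would then take the source and target namespace locks up
front, leaving all other namespaces on the datastore untouched.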

Other opinions here? CC'ing Thomas and Fabian...

> 
>>>
>>>> So IMO this must be tackled on a group level, making sure to get an 
>>>> exclusive lock for each group (on the source as well as target of 
>>>> the move operation) before doing any manipulation. Only then it is 
>>>> okay to do any non-atomic operations.
>>>>
>>>> The moving of the namespace must then be implemented as batch 
>>>> operations on the groups and sub-namespaces.
>>>>
>>>> This should be handled the same way for regular datastores, to
>>>> avoid any races there too.
>>>
>>
> 



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint for moving namespaces
  2025-09-15 10:01           ` Christian Ebner
@ 2025-09-15 10:32             ` Hannes Laimer
  0 siblings, 0 replies; 10+ messages in thread
From: Hannes Laimer @ 2025-09-15 10:32 UTC (permalink / raw)
  To: Christian Ebner, Proxmox Backup Server development discussion
  Cc: Thomas Lamprecht



On 15.09.25 12:01, Christian Ebner wrote:
> On 9/15/25 11:19 AM, Hannes Laimer wrote:
>> On 15.09.25 10:56, Christian Ebner wrote:
>>> On 9/15/25 10:27 AM, Hannes Laimer wrote:
>>>> On 15.09.25 10:15, Christian Ebner wrote:
>>>>> Thanks for having a go at this issue, I did not yet have an in 
>>>>> depth look at this but unfortunately I'm afraid the current 
>>>>> implementation approach will not work for the S3 backend (and might 
>>>>> also have issues for local datastores).
>>>>>
>>>>> Copying the S3 objects is not an atomic operation and will take
>>>>> some time, which leaves you open to race conditions. E.g. while you
>>>>> copy contents, a new backup snapshot might be created in one of the
>>>>> already copied backup groups, which will then however be deleted
>>>>> afterwards. The same is true for pruning and other metadata-editing
>>>>> operations such as adding notes, backup task logs, etc.
>>>>>
>>>>
>>>> Yes, but not really. We lock the `active_operations` tracking file, so
>>>> no new read/write operations can be started after we start the moving
>>>> process. There's a short comment in the API endpoint function.
>>>
>>> Ah yes, I did miss that part. But by doing that you will basically 
>>> block any datastore operation, not just the ones to the source or 
>>> target namespace. This is not ideal IMO. Further you cannot move a NS 
>>> if any other operation is ongoing on the datastore, which might be 
>>> completely unrelated to the source and target namespace, e.g. a 
>>> backup to another namespace?
>>
>> Yes. But I don't think this is something we can (easily) check for, 
>> maybe there is a good way, but I can't think of a feasible one.
>> We could lock all affected groups in advance, but I'm not super sure 
>> we can just move a locked dir, at least with the old locking.
> 
> No, not lock all in advance, but we can lock on a per-backup-group
> basis (source and target) and consider that the basic operation, so
> this is mostly a local sync job on the same datastore from one namespace
> to another one. That is why I suggested considering the moving of a
> namespace as a batch operation of moving backup groups. While not as
> performant, this should eliminate possible races and make error
> handling/rollback much easier.
> 
>> Given that, for both local and S3 datastores, this is, I'd argue, a rather
>> fast operation, just saying 'nobody does anything while we move stuff' is
>> reasonable.
> 
> Well, for an S3 object store with several sub-namespaces, containing 
> hundreds of backup groups and thousands of snapshots (with notes, backup 
> task logs and other metadata) this might take some time. After all there 
> is a copy request for each of the objects involved. Do you have some 
> hard numbers on this?
> 
>>
>> What we could think about adding is maybe a checkbox for updating jobs
>> referencing the NS, but I'm not sure if we want that.
>>
>>>
>>>> I'm not sure there is much value in more granular locking, I mean, is
>>>> half a successful move worth much? Unless we add some kind of rollback,
>>>> but tbh, I feel like that would not be worth the effort.
>>>
>>> Well, it could be just like we do for the sync jobs, skipping the 
>>> move for the ones where the backup group could not be locked or fails 
>>> for some other reason?
>>>
>>
>> Hmm, but then we'd have it in two places, and moving again later won't
>> work because we can't distinguish between a same-named NS already
>> existing and a new attempt to complete an earlier move. And we also can't
>> allow that in general, because what happens if there's the same VMID
>> twice?
> 
> Not if the failed/skipped group is cleaned up correctly, if not 
> preexisting? And skip if it is preexisting... disallowing any group name 
> collisions.
> 
>>
>>> I think having a more granular backup group unit instead of namespace 
>>> makes this more flexible: what if I only want to move one backup 
>>> group from one namespace to another one, as the initial request in 
>>> the bug report?
>>>
>>
>> That is not possible currently. And, at least with this series, not
>> intended. We could support that eventually, but that should be rather
>> orthogonal to this one I think.
> 
> But then this does not really fix the issue ;)
> 
>>
>>> For example, I had a VM which has been backed up to a given 
>>> namespace, has however since been destroyed, but I want to keep the 
>>> backups by moving the group with all the snapshots to a different 
>>> namespace, freeing the backup type and ID for the current namespace?
>>>
>>
>> I see the use-case for this, but I think these are two things. Moving 
>> a NS and moving a single group.
> 
> This is a design decision we should make now.
> To me it seems to make more sense to see the namespace moving as a batch
> operation of moving groups.
> 

hmm, I think you are right. I tried to keep this really simple given
we're just moving a directory, buuut this is probably not how we want
to do this. I'll rethink the approach. Still, some other opinions would
be good before I start with that.

> Alternatively, IMO we must implement locking for namespaces analogous to
> the locking for backup groups to be able to keep a consistent state,
> especially for the S3 backend where there are a lot of failure modes.
> Locking all operations on the datastore and requiring that none (even
> unrelated ones) be active before trying the move is not ideal.
> 
> Other opinions here? CC'ing Thomas and Fabian...
> 
>>
>>>>
>>>>> So IMO this must be tackled on a group level, making sure to get an 
>>>>> exclusive lock for each group (on the source as well as target of 
>>>>> the move operation) before doing any manipulation. Only then it is 
>>>>> okay to do any non-atomic operations.
>>>>>
>>>>> The moving of the namespace must then be implemented as batch 
>>>>> operations on the groups and sub-namespaces.
>>>>>
>>>>> This should be handled the same way for regular datastores, to
>>>>> avoid any races there too.
>>>>
>>>
>>
> 



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2025-09-15 10:32 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-11  9:49 [pbs-devel] [PATCH proxmox-backup v3 0/3] fixes #6195: add support for moving namespaces Hannes Laimer
2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint " Hannes Laimer
2025-09-15  8:16   ` Christian Ebner
2025-09-15  8:27     ` Hannes Laimer
2025-09-15  8:56       ` Christian Ebner
2025-09-15  9:19         ` Hannes Laimer
2025-09-15 10:01           ` Christian Ebner
2025-09-15 10:32             ` Hannes Laimer
2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 2/3] fix #6195: ui: add button/window for moving a namespace Hannes Laimer
2025-09-11  9:49 ` [pbs-devel] [PATCH proxmox-backup v3 3/3] docs: add section for moving datastores Hannes Laimer

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal