From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 24 Mar 2026 14:00:18 +0100
From: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Subject: Re: [PATCH proxmox-backup v5 4/9] datastore: add move_namespace
To: Hannes Laimer, pbs-devel@lists.proxmox.com
References: <20260319161325.206846-1-h.laimer@proxmox.com> <20260319161325.206846-5-h.laimer@proxmox.com>
In-Reply-To: <20260319161325.206846-5-h.laimer@proxmox.com>
Message-Id: <1774356676.j651zha210.astroid@yuna.none>
List-Id: Proxmox Backup Server development discussion
On March 19, 2026 5:13 pm, Hannes Laimer wrote:
> move_namespace relocates an entire namespace subtree (the given
> namespace, all child namespaces, and their groups) to a new location
> within the same datastore.
>
> For the filesystem backend the entire subtree is relocated with a single
> atomic rename. For the S3 backend groups are moved one at a time via
> BackupGroup::move_to(). Groups that fail are left at the source and
> listed as an error in the task log so they can be retried with
> move_group individually. Source namespaces where all groups succeeded
> have their S3 markers and local cache directories removed,
> deepest-first.
>
> Signed-off-by: Hannes Laimer
> ---
>  pbs-datastore/src/datastore.rs | 249 ++++++++++++++++++++++++++++++++-
>  1 file changed, 248 insertions(+), 1 deletion(-)
>
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index d4c88452..3187cbc3 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -31,7 +31,7 @@ use pbs_api_types::{
>      ArchiveType, Authid, BackupGroupDeleteStats, BackupNamespace, BackupType, ChunkOrder,
>      DataStoreConfig, DatastoreBackendConfig, DatastoreBackendType, DatastoreFSyncLevel,
>      DatastoreTuning, GarbageCollectionCacheStats, GarbageCollectionStatus, MaintenanceMode,
> -    MaintenanceType, Operation, UPID,
> +    MaintenanceType, Operation, MAX_NAMESPACE_DEPTH, UPID,
>  };
>  use pbs_config::s3::S3_CFG_TYPE_ID;
>  use pbs_config::{BackupLockGuard, ConfigVersionCache};
> @@ -1003,6 +1003,253 @@ impl DataStore {
>          Ok((removed_all_requested, stats))
>      }
>
> +    /// Move a backup namespace (including all child namespaces and groups) to a new location.
> +    ///
> +    /// The entire subtree rooted at `source_ns` is relocated to `target_ns`. Exclusive namespace
> +    /// locks are held on both source and target namespaces for the duration to block concurrent
> +    /// readers and writers.
> +    ///
> +    /// For the filesystem backend the rename is atomic. For the S3 backend groups are moved
> +    /// one at a time. A group that fails to copy is left at the source and can be moved
> +    /// individually with `move_group`. The operation returns an error listing any such groups.

I think we want these (per-group) semantics in both cases: a move is only
"atomic" if you ignore the invariants you would have to violate to make it
so, even in the filesystem case.

And if we have such semantics and handle the invariants correctly, we only
need the namespace locks if we basically want to turn the namespace lock
into a "per-NS maintenance mode" in disguise.. which I don't think is what
users actually want - they want an option to bulk move groups from A to B
without interrupting the world..

I hope my comments on the other patches make it clear what I mean - I am
open to arguments for the current approach with namespace locks, but at
the moment I am not yet convinced it is the better one..

skipping the rest of the implementation below (for now)

> +    ///
> +    /// Fails if:
> +    /// - `source_ns` is the root namespace
> +    /// - `source_ns` == `target_ns`
> +    /// - `source_ns` does not exist
> +    /// - `target_ns` already exists (to prevent silent merging)
> +    /// - `target_ns`'s parent does not exist
> +    /// - `source_ns` is an ancestor of `target_ns`
> +    /// - the move would exceed the maximum namespace depth
> +    pub fn move_namespace(

other than noting that this particular fn is *very* long ;)

> [..]
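P.S.: to illustrate the per-group semantics I mean, here is a rough
self-contained sketch. All names in it (`BackupGroup`, `MoveError`,
`move_groups`) are made up for illustration and are not the actual PBS
API: each group is moved on its own, failures stay at the source and are
collected for the task log so they can be retried individually, and no
namespace-wide lock is required.

```rust
// Illustrative stand-ins, not the real pbs-datastore types.
#[derive(Debug, Clone)]
struct BackupGroup {
    name: String,
}

#[derive(Debug)]
struct MoveError {
    group: String,
    reason: String,
}

/// Move each group individually. Groups whose move fails are left at the
/// source and reported, so the caller can log them and retry per group.
fn move_groups(
    groups: Vec<BackupGroup>,
    mut move_one: impl FnMut(&BackupGroup) -> Result<(), String>,
) -> (usize, Vec<MoveError>) {
    let mut moved = 0;
    let mut failed = Vec::new();
    for group in &groups {
        match move_one(group) {
            Ok(()) => moved += 1,
            Err(reason) => failed.push(MoveError {
                group: group.name.clone(),
                reason,
            }),
        }
    }
    (moved, failed)
}

fn main() {
    let groups = vec![
        BackupGroup { name: "vm/100".into() },
        BackupGroup { name: "vm/101".into() },
        BackupGroup { name: "ct/200".into() },
    ];
    // Simulate one group failing mid-move; the others still succeed.
    let (moved, failed) = move_groups(groups, |g| {
        if g.name == "vm/101" {
            Err("connection reset".into())
        } else {
            Ok(())
        }
    });
    println!("moved={} failed={}", moved, failed.len());
}
```

The point being: with this shape the "namespace move" is just iteration
plus bookkeeping, and the error report replaces the need for holding an
exclusive lock over the whole subtree.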