public inbox for pbs-devel@lists.proxmox.com
From: Christian Ebner <c.ebner@proxmox.com>
To: Hannes Laimer <h.laimer@proxmox.com>, pbs-devel@lists.proxmox.com
Subject: Re: [PATCH proxmox-backup v4 1/7] datastore: add namespace-level locking
Date: Tue, 17 Mar 2026 14:03:09 +0100	[thread overview]
Message-ID: <c3bcbb12-fb37-489c-9542-1ac0c523c550@proxmox.com> (raw)
In-Reply-To: <02a6d817-c785-4a03-a945-441397a11d0f@proxmox.com>

On 3/12/26 4:43 PM, Christian Ebner wrote:
> One high-level comment: I think it would make sense to allow setting an 
> overall timeout when acquiring the namespace locks. Most move operations 
> are probably rather fast anyway, so waiting a few seconds in e.g. a 
> prune job is preferable to parts of the prune job being skipped.
> 
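For illustration, a minimal sketch of such an overall timeout (hypothetical helper using plain `std::sync::Mutex`, not the PBS lock API): retry a non-blocking acquisition until a deadline instead of failing immediately.

```rust
use std::sync::{Mutex, MutexGuard, TryLockError};
use std::thread;
use std::time::{Duration, Instant};

// Retry a non-blocking lock attempt until an overall deadline expires.
// Returns None if the lock could not be taken within `timeout`.
fn lock_with_timeout<T>(m: &Mutex<T>, timeout: Duration) -> Option<MutexGuard<'_, T>> {
    let deadline = Instant::now() + timeout;
    loop {
        match m.try_lock() {
            Ok(guard) => return Some(guard),
            Err(TryLockError::WouldBlock) => {
                if Instant::now() >= deadline {
                    return None;
                }
                thread::sleep(Duration::from_millis(10));
            }
            // A poisoned mutex still holds valid data; recover the guard.
            Err(TryLockError::Poisoned(e)) => return Some(e.into_inner()),
        }
    }
}

fn main() {
    let m = Mutex::new(());
    // Free lock: acquired immediately.
    let guard = lock_with_timeout(&m, Duration::from_secs(1));
    assert!(guard.is_some());
    // Held lock: times out instead of blocking forever.
    assert!(lock_with_timeout(&m, Duration::from_millis(50)).is_none());
    println!("ok");
}
```

The same retry-until-deadline pattern would apply to the file-based namespace locks; a prune job could then wait briefly for a concurrent move to finish rather than skipping the group.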
> Another comment inline.
> 
> On 3/11/26 4:13 PM, Hannes Laimer wrote:
>> Add exclusive/shared namespace locking keyed at
>> /run/proxmox-backup/locks/{store}/{ns}/.ns-lock.
>>
>> Operations that read from or write into a namespace hold a shared lock
>> for their duration. Structural operations (move, delete) hold an
>> exclusive lock. The shared lock is hierarchical: locking a/b/c also
>> locks a/b and a, so an exclusive lock on any ancestor blocks all
>> active operations below it. Walking up the ancestor chain costs
>> O(depth), which is bounded by the maximum namespace depth of 8,
>> whereas locking all descendants would be arbitrarily expensive.
>>
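The ancestor-chain walk described above can be sketched as plain path math (illustrative only, not the patch's implementation): a shared lock on `a/b/c` also touches `a/b` and `a`, which is O(depth) with depth capped at 8.

```rust
// For a namespace like "a/b/c", shared locking touches the node itself
// plus every ancestor prefix, longest first: ["a/b/c", "a/b", "a"].
// An exclusive lock on any one of these blocks all shared holders below.
fn ancestor_chain(ns: &str) -> Vec<String> {
    let comps: Vec<&str> = ns.split('/').filter(|c| !c.is_empty()).collect();
    (1..=comps.len()).rev().map(|n| comps[..n].join("/")).collect()
}

fn main() {
    assert_eq!(ancestor_chain("a/b/c"), vec!["a/b/c", "a/b", "a"]);
    // The root namespace has no components and thus no chain to walk.
    assert!(ancestor_chain("").is_empty());
    println!("ok");
}
```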
>> Backup jobs and pull/push sync acquire the shared lock via
>> create_locked_backup_group and pull_ns/push_namespace respectively.
>> Verify and prune acquire it per snapshot/group and skip gracefully if
>> the lock cannot be taken, since a concurrent move is a transient
>> condition.
>>
>> Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
>> ---
>>   pbs-datastore/src/backup_info.rs | 92 ++++++++++++++++++++++++++++++++
>>   pbs-datastore/src/datastore.rs   | 45 +++++++++++++---
>>   src/api2/admin/namespace.rs      |  6 ++-
>>   src/api2/backup/environment.rs   |  4 ++
>>   src/api2/backup/mod.rs           | 14 +++--
>>   src/api2/tape/restore.rs         |  9 ++--
>>   src/backup/verify.rs             | 19 ++++++-
>>   src/server/prune_job.rs          | 11 ++++
>>   src/server/pull.rs               |  8 ++-
>>   src/server/push.rs               |  6 +++
>>   10 files changed, 193 insertions(+), 21 deletions(-)
>>
>> diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
>> index c33eb307..476daa61 100644
>> --- a/pbs-datastore/src/backup_info.rs
>> +++ b/pbs-datastore/src/backup_info.rs
>> @@ -937,6 +937,98 @@ fn lock_file_path_helper(ns: &BackupNamespace, path: PathBuf) -> PathBuf {
>>       to_return.join(format!("{first_eigthy}...{last_eighty}-{hash}"))
>>   }
>> +/// Returns the lock file path for a backup namespace.
>> +///
>> +/// The lock file will be located at:
>> +/// `${DATASTORE_LOCKS_DIR}/${store_name}/${ns_colon_encoded}/.ns-lock`
>> +pub(crate) fn ns_lock_path(store_name: &str, ns: &BackupNamespace) -> PathBuf {
>> +    let ns_part = ns
>> +        .components()
>> +        .map(String::from)
>> +        .reduce(|acc, n| format!("{acc}:{n}"))
>> +        .unwrap_or_default();
>> +    Path::new(DATASTORE_LOCKS_DIR)
>> +        .join(store_name)
>> +        .join(ns_part)

As discussed off-list, the namespace directory itself might be used for 
locking instead of a dedicated file, in order to reduce the number of 
inodes on this tmpfs.

>> +        .join(".ns-lock")
>> +}
>> +
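To make the colon encoding concrete, a self-contained sketch of the path construction (hypothetical stand-in taking the components as plain strings rather than a `BackupNamespace`): nested namespaces collapse into a single colon-joined directory name, so the lock tree stays one level deep per store.

```rust
use std::path::{Path, PathBuf};

// Sketch of the patch's ns_lock_path: namespace components are
// colon-joined into one directory name under the store's lock dir.
fn ns_lock_path(locks_dir: &str, store: &str, components: &[&str]) -> PathBuf {
    // ["a", "b", "c"] -> "a:b:c"; the root namespace yields "".
    let ns_part = components.join(":");
    Path::new(locks_dir)
        .join(store)
        .join(ns_part)
        .join(".ns-lock")
}

fn main() {
    let p = ns_lock_path("/run/proxmox-backup/locks", "store1", &["a", "b", "c"]);
    assert_eq!(p, Path::new("/run/proxmox-backup/locks/store1/a:b:c/.ns-lock"));
    println!("ok");
}
```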




Thread overview: 21+ messages
2026-03-11 15:13 [PATCH proxmox-backup v4 0/7] fixes #6195: add support for moving groups and namespaces Hannes Laimer
2026-03-11 15:13 ` [PATCH proxmox-backup v4 1/7] datastore: add namespace-level locking Hannes Laimer
2026-03-12 15:43   ` Christian Ebner
2026-03-13  7:40     ` Hannes Laimer
2026-03-13  7:56       ` Christian Ebner
2026-03-17 13:03     ` Christian Ebner [this message]
2026-03-11 15:13 ` [PATCH proxmox-backup v4 2/7] datastore: add move_group Hannes Laimer
2026-03-12 16:08   ` Christian Ebner
2026-03-13  7:28     ` Hannes Laimer
2026-03-13  7:52       ` Christian Ebner
2026-03-11 15:13 ` [PATCH proxmox-backup v4 3/7] datastore: add move_namespace Hannes Laimer
2026-03-11 15:13 ` [PATCH proxmox-backup v4 4/7] api: add PUT endpoint for move_group Hannes Laimer
2026-03-12 16:17   ` Christian Ebner
2026-03-11 15:13 ` [PATCH proxmox-backup v4 5/7] api: add PUT endpoint for move_namespace Hannes Laimer
2026-03-12 16:19   ` Christian Ebner
2026-03-11 15:13 ` [PATCH proxmox-backup v4 6/7] ui: add move group action Hannes Laimer
2026-03-17 11:43   ` Christian Ebner
2026-03-17 11:48     ` Hannes Laimer
2026-03-11 15:13 ` [PATCH proxmox-backup v4 7/7] ui: add move namespace action Hannes Laimer
2026-03-12 16:21 ` [PATCH proxmox-backup v4 0/7] fixes #6195: add support for moving groups and namespaces Christian Ebner
2026-03-17 13:47 ` Christian Ebner
