public inbox for pbs-devel@lists.proxmox.com
From: Hannes Laimer <h.laimer@proxmox.com>
To: Christian Ebner <c.ebner@proxmox.com>, pbs-devel@lists.proxmox.com
Subject: Re: [PATCH proxmox-backup v4 1/7] datastore: add namespace-level locking
Date: Fri, 13 Mar 2026 08:40:29 +0100	[thread overview]
Message-ID: <349fd1ce-d2be-4f6d-97e8-cb8d6075d9b2@proxmox.com> (raw)
In-Reply-To: <02a6d817-c785-4a03-a945-441397a11d0f@proxmox.com>

On 2026-03-12 16:43, Christian Ebner wrote:
> One high level comment: I think it would make sense to allow setting an
> overall timeout when acquiring the namespace locks. Most move operations
> are probably rather fast anyway, so waiting a few seconds in e.g. a
> prune job will probably be preferable over parts of the prune job being
> skipped.
> 

Set as in having it configurable? Or just defining one for the locks?
At least with move in mind, I thought that waiting for a lock could mean
that the things might be gone by the time we get the lock, a 50/50
chance of that happening, actually. So I figured it'd be better not to
have that. Technically it shouldn't be a problem, since I think we do
all existence checks after locking, but still, waiting for something
that is currently being moved (away, 1 in 2 times) seemed odd.

Not super opposed to a few seconds of hard-coded waiting, though.

> Another comment inline.
> 
> On 3/11/26 4:13 PM, Hannes Laimer wrote:
>> Add exclusive/shared namespace locking keyed at
>> /run/proxmox-backup/locks/{store}/{ns}/.ns-lock.
>>
>> Operations that read from or write into a namespace hold a shared lock
>> for their duration. Structural operations (move, delete) hold an
>> exclusive lock. The shared lock is hierarchical: locking a/b/c also
>> locks a/b and a, so an exclusive lock on any ancestor blocks all
>> active operations below it. Walking up the ancestor chain costs
>> O(depth), which is bounded by the maximum namespace depth of 8,
>> whereas locking all descendants would be arbitrarily expensive.
>>
>> Backup jobs and pull/push sync acquire the shared lock via
>> create_locked_backup_group and pull_ns/push_namespace respectively.
>> Verify and prune acquire it per snapshot/group and skip gracefully if
>> the lock cannot be taken, since a concurrent move is a transient
>> condition.
>>
>> Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
>> ---
>>   pbs-datastore/src/backup_info.rs | 92 ++++++++++++++++++++++++++++++++
>>   pbs-datastore/src/datastore.rs   | 45 +++++++++++++---
>>   src/api2/admin/namespace.rs      |  6 ++-
>>   src/api2/backup/environment.rs   |  4 ++
>>   src/api2/backup/mod.rs           | 14 +++--
>>   src/api2/tape/restore.rs         |  9 ++--
>>   src/backup/verify.rs             | 19 ++++++-
>>   src/server/prune_job.rs          | 11 ++++
>>   src/server/pull.rs               |  8 ++-
>>   src/server/push.rs               |  6 +++
>>   10 files changed, 193 insertions(+), 21 deletions(-)
>>
>> diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
>> index c33eb307..476daa61 100644
>> --- a/pbs-datastore/src/backup_info.rs
>> +++ b/pbs-datastore/src/backup_info.rs
>> @@ -937,6 +937,98 @@ fn lock_file_path_helper(ns: &BackupNamespace, path: PathBuf) -> PathBuf {
>>       to_return.join(format!("{first_eigthy}...{last_eighty}-{hash}"))
>>   }
>>
>> +/// Returns the lock file path for a backup namespace.
>> +///
>> +/// The lock file will be located at:
>> +/// `${DATASTORE_LOCKS_DIR}/${store_name}/${ns_colon_encoded}/.ns-lock`
>> +pub(crate) fn ns_lock_path(store_name: &str, ns: &BackupNamespace) -> PathBuf {
>> +    let ns_part = ns
>> +        .components()
>> +        .map(String::from)
>> +        .reduce(|acc, n| format!("{acc}:{n}"))
>> +        .unwrap_or_default();
>> +    Path::new(DATASTORE_LOCKS_DIR)
>> +        .join(store_name)
>> +        .join(ns_part)
>> +        .join(".ns-lock")
>> +}
>> +
>> +/// Holds namespace locks acquired for a structural operation or a read/write operation.
>> +///
>> +/// For exclusive locks the first guard is exclusive (on the namespace itself), and any remaining
>> +/// guards are shared (on ancestor namespaces). For shared locks all guards are shared.
>> +pub struct NamespaceLockGuard {
>> +    _guards: Vec<BackupLockGuard>,
>> +}
> 
> comment: While it might not be that critical in practice, it probably
> makes sense to also control the order in which the lock guards are
> dropped by implementing the Drop trait and consuming the vector in
> order, to avoid possible inconsistencies with the assumed order.
> 

makes sense, will add in v5, thanks!

>> +
>> +fn lock_namespace_exclusive_single(
>> +    store_name: &str,
>> +    ns: &BackupNamespace,
>> +) -> Result<BackupLockGuard, Error> {
>> +    let path = ns_lock_path(store_name, ns);
>> +    lock_helper(store_name, &path, |p| {
>> +        open_backup_lockfile(p, Some(Duration::from_secs(0)), true)
>> +            .with_context(|| format!("unable to acquire exclusive namespace lock for '{ns}'"))
>> +    })
>> +}
>> +
>> +fn lock_namespace_shared_single(
>> +    store_name: &str,
>> +    ns: &BackupNamespace,
>> +) -> Result<BackupLockGuard, Error> {
>> +    let path = ns_lock_path(store_name, ns);
>> +    lock_helper(store_name, &path, |p| {
>> +        open_backup_lockfile(p, Some(Duration::from_secs(0)), false)
>> +            .with_context(|| format!("unable to acquire shared namespace lock for '{ns}'"))
>> +    })
>> +}
>> +
>> +/// Acquires an exclusive lock on `ns` and shared locks on all its non-root ancestors.
>> +///
>> +/// Used by operations that structurally modify a namespace (e.g. move). The shared ancestor locks
>> +/// ensure that a concurrent structural operation on any ancestor namespace blocks until this one
>> +/// completes, and vice versa - mirroring the hierarchical behavior of `lock_namespace_shared`.
>> +pub(crate) fn lock_namespace(
>> +    store_name: &str,
>> +    ns: &BackupNamespace,
>> +) -> Result<NamespaceLockGuard, Error> {
>> +    // Acquire the exclusive lock on ns first, then shared locks on ancestors.
>> +    // Order matters: taking exclusive before shared avoids a scenario where we hold shared on
>> +    // an ancestor and then block waiting for exclusive on ns (which another holder of that
>> +    // ancestor's shared lock might be waiting to promote - not currently possible but fragile).
>> +    let exclusive = lock_namespace_exclusive_single(store_name, ns)?;
>> +    let mut guards = vec![exclusive];
>> +    let mut current = ns.clone();
>> +    while !current.is_root() {
>> +        current = current.parent();
>> +        if !current.is_root() {
>> +            guards.push(lock_namespace_shared_single(store_name, &current)?);
>> +        }
>> +    }
>> +    Ok(NamespaceLockGuard { _guards: guards })
>> +}
>> +
>> +/// Acquires shared locks on a namespace and all its non-root ancestors.
>> +///
>> +/// Held by operations that read from or write into a namespace (backup, sync, verify, prune),
>> +/// preventing concurrent exclusive operations (e.g. move) on the namespace itself or any ancestor
>> +/// from proceeding while this guard is alive.
>> +///
>> +/// Locking up the ancestor chain (bounded by max namespace depth) rather than down the subtree
>> +/// keeps the cost O(depth) regardless of how wide the namespace tree is.
>> +pub(crate) fn lock_namespace_shared(
>> +    store_name: &str,
>> +    ns: &BackupNamespace,
>> +) -> Result<NamespaceLockGuard, Error> {
>> +    let mut guards = Vec::new();
>> +    let mut current = ns.clone();
>> +    while !current.is_root() {
>> +        guards.push(lock_namespace_shared_single(store_name, &current)?);
>> +        current = current.parent();
>> +    }
>> +    Ok(NamespaceLockGuard { _guards: guards })
>> +}
>> +
>>   /// Helps implement the double stat'ing procedure. It avoids certain race conditions upon lock
>>   /// deletion.
>>   ///
>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>> index ef378c69..564c44a1 100644
>> --- a/pbs-datastore/src/datastore.rs
>> +++ b/pbs-datastore/src/datastore.rs
>> @@ -38,7 +38,8 @@ use pbs_config::{BackupLockGuard, ConfigVersionCache};
>>   use proxmox_section_config::SectionConfigData;
>>
>>   use crate::backup_info::{
>> -    BackupDir, BackupGroup, BackupInfo, OLD_LOCKING, PROTECTED_MARKER_FILENAME,
>> +    lock_namespace, lock_namespace_shared, BackupDir, BackupGroup, BackupInfo, NamespaceLockGuard,
>> +    OLD_LOCKING, PROTECTED_MARKER_FILENAME,
>>   };
>>   use crate::chunk_store::ChunkStore;
>>   use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
>> @@ -1123,18 +1124,46 @@ impl DataStore {
>>           Ok(())
>>       }
>>
>> -    /// Create (if it does not already exists) and lock a backup group
>> +    /// Acquires an exclusive lock on a backup namespace and shared locks on all its ancestors.
>>       ///
>> -    /// And set the owner to 'userid'. If the group already exists, it returns the
>> -    /// current owner (instead of setting the owner).
>> +    /// Operations that structurally modify a namespace (move, delete) must hold this for their
>> +    /// duration to prevent concurrent readers or writers from accessing the namespace while it is
>> +    /// being relocated or destroyed.
>> +    pub fn lock_namespace(
>> +        self: &Arc<Self>,
>> +        ns: &BackupNamespace,
>> +    ) -> Result<NamespaceLockGuard, Error> {
>> +        lock_namespace(self.name(), ns)
>> +    }
>> +
>> +    /// Acquires shared locks on a backup namespace and all its non-root ancestors.
>>       ///
>> -    /// This also acquires an exclusive lock on the directory and returns the lock guard.
>> +    /// Operations that read from or write into a namespace (backup, sync, verify, prune) must hold
>> +    /// this for their duration to prevent a concurrent `move_namespace` or `delete_namespace` from
>> +    /// relocating or destroying the namespace under them.
>> +    pub fn lock_namespace_shared(
>> +        self: &Arc<Self>,
>> +        ns: &BackupNamespace,
>> +    ) -> Result<NamespaceLockGuard, Error> {
>> +        lock_namespace_shared(self.name(), ns)
>> +    }
>> +
>> +    /// Create (if it does not already exist) and lock a backup group.
>> +    ///
>> +    /// Sets the owner to `auth_id`. If the group already exists, returns the current owner.
>> +    ///
>> +    /// Returns `(owner, ns_lock, group_lock)`. Both locks must be kept alive for the duration of
>> +    /// the backup session: the shared namespace lock prevents concurrent namespace moves or
>> +    /// deletions, and the exclusive group lock prevents concurrent backups to the same group.
>>       pub fn create_locked_backup_group(
>>           self: &Arc<Self>,
>>           ns: &BackupNamespace,
>>           backup_group: &pbs_api_types::BackupGroup,
>>           auth_id: &Authid,
>> -    ) -> Result<(Authid, BackupLockGuard), Error> {
>> +    ) -> Result<(Authid, NamespaceLockGuard, BackupLockGuard), Error> {
>> +        let ns_guard = lock_namespace_shared(self.name(), ns)
>> +            .with_context(|| format!("failed to acquire shared namespace lock for '{ns}'"))?;
>> +
>>           let backup_group = self.backup_group(ns.clone(), backup_group.clone());
>>
>>           // create intermediate path first
>> @@ -1155,14 +1184,14 @@ impl DataStore {
>>                       return Err(err);
>>                   }
>>                   let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
>> -                Ok((owner, guard))
>> +                Ok((owner, ns_guard, guard))
>>               }
>>               Err(ref err) if err.kind() == io::ErrorKind::AlreadyExists => {
>>                   let guard = backup_group.lock().with_context(|| {
>>                       format!("while creating locked backup group '{backup_group:?}'")
>>                   })?;
>>                   let owner = self.get_owner(ns, backup_group.group())?; // just to be sure
>> -                Ok((owner, guard))
>> +                Ok((owner, ns_guard, guard))
>>               }
>>               Err(err) => bail!("unable to create backup group {:?} - {}", full_path, err),
>>           }
>> diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
>> index 30e24d8d..ec913001 100644
>> --- a/src/api2/admin/namespace.rs
>> +++ b/src/api2/admin/namespace.rs
>> @@ -1,4 +1,4 @@
>> -use anyhow::{bail, Error};
>> +use anyhow::{bail, Context, Error};
>>
>>   use pbs_config::CachedUserInfo;
>>   use proxmox_router::{http_bail, ApiMethod, Permission, Router, RpcEnvironment};
>> @@ -164,6 +164,10 @@ pub fn delete_namespace(
>>       let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
>>
>> +    let _ns_lock = datastore
>> +        .lock_namespace(&ns)
>> +        .with_context(|| format!("failed to lock namespace '{ns}' for deletion"))?;
>> +
>>       let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups)?;
>>       if !removed_all {
>>           let err_msg = if delete_groups {
>> diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
>> index ab623f1f..44483add 100644
>> --- a/src/api2/backup/environment.rs
>> +++ b/src/api2/backup/environment.rs
>> @@ -1,5 +1,6 @@
>>   use anyhow::{bail, format_err, Context, Error};
>>   use pbs_config::BackupLockGuard;
>> +use pbs_datastore::backup_info::NamespaceLockGuard;
>>   use proxmox_sys::process_locker::ProcessLockSharedGuard;
>>
>>   use std::collections::HashMap;
>> @@ -100,6 +101,7 @@ struct SharedBackupState {
>>
>>   pub struct BackupLockGuards {
>>       previous_snapshot: Option<BackupLockGuard>,
>> +    _namespace: Option<NamespaceLockGuard>,
>>       group: Option<BackupLockGuard>,
>>       snapshot: Option<BackupLockGuard>,
>>       chunk_store: Option<ProcessLockSharedGuard>,
>> @@ -108,12 +110,14 @@ pub struct BackupLockGuards {
>>   impl BackupLockGuards {
>>       pub(crate) fn new(
>>           previous_snapshot: Option<BackupLockGuard>,
>> +        namespace: NamespaceLockGuard,
>>           group: BackupLockGuard,
>>           snapshot: BackupLockGuard,
>>           chunk_store: ProcessLockSharedGuard,
>>       ) -> Self {
>>           Self {
>>               previous_snapshot,
>> +            _namespace: Some(namespace),
>>               group: Some(group),
>>               snapshot: Some(snapshot),
>>               chunk_store: Some(chunk_store),
>> diff --git a/src/api2/backup/mod.rs b/src/api2/backup/mod.rs
>> index 6df0d34b..80e70dac 100644
>> --- a/src/api2/backup/mod.rs
>> +++ b/src/api2/backup/mod.rs
>> @@ -143,8 +143,9 @@ fn upgrade_to_backup_protocol(
>>               "backup"
>>           };
>>
>> -        // lock backup group to only allow one backup per group at a time
>> -        let (owner, group_guard) = datastore.create_locked_backup_group(
>> +        // lock backup group to only allow one backup per group at a time,
>> +        // also acquires a shared namespace lock to prevent concurrent namespace moves
>> +        let (owner, ns_guard, group_guard) = datastore.create_locked_backup_group(
>>               backup_group.backup_ns(),
>>               backup_group.as_ref(),
>>               &auth_id,
>> @@ -215,8 +216,13 @@ fn upgrade_to_backup_protocol(
>>                   // case of errors. The former is required for immediate subsequent backups (e.g.
>>                   // during a push sync) to be able to lock the group and snapshots.
>>                 let chunk_store_guard = datastore.try_shared_chunk_store_lock()?;
>>
>> -                let backup_lock_guards =
>> -                    BackupLockGuards::new(last_guard, group_guard, snap_guard, chunk_store_guard);
>> +                let backup_lock_guards = BackupLockGuards::new(
>> +                    last_guard,
>> +                    ns_guard,
>> +                    group_guard,
>> +                    snap_guard,
>> +                    chunk_store_guard,
>> +                );
>>
>>                 let mut env = BackupEnvironment::new(
>>                       env_type,
>> diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
>> index 92529a76..1cb85f2d 100644
>> --- a/src/api2/tape/restore.rs
>> +++ b/src/api2/tape/restore.rs
>> @@ -848,11 +848,8 @@ fn restore_list_worker(
>>                               Some(restore_owner),
>>                           )?;
>>
>> -                        let (owner, _group_lock) = datastore.create_locked_backup_group(
>> -                            &ns,
>> -                            backup_dir.as_ref(),
>> -                            restore_owner,
>> -                        )?;
>> +                        let (owner, _ns_guard, _group_lock) = datastore
>> +                            .create_locked_backup_group(&ns, backup_dir.as_ref(), restore_owner)?;
>>                           if restore_owner != &owner {
>>                               bail!(
>>                                   "cannot restore snapshot '{snapshot}' into group '{}', owner check \
>> @@ -1354,7 +1351,7 @@ fn restore_archive<'a>(
>>                           auth_id,
>>                           Some(restore_owner),
>>                       )?;
>> -                    let (owner, _group_lock) = datastore.create_locked_backup_group(
>> +                    let (owner, _ns_guard, _group_lock) = datastore.create_locked_backup_group(
>>                           &backup_ns,
>>                           backup_dir.as_ref(),
>>                           restore_owner,
>> diff --git a/src/backup/verify.rs b/src/backup/verify.rs
>> index f52d7781..ce5d6f69 100644
>> --- a/src/backup/verify.rs
>> +++ b/src/backup/verify.rs
>> @@ -393,10 +393,27 @@ impl VerifyWorker {
>>               return Ok(true);
>>           }
>>
>> +        let ns_lock = match self.datastore.lock_namespace_shared(backup_dir.backup_ns()) {
>> +            Ok(lock) => lock,
>> +            Err(err) => {
>> +                info!(
>> +                    "SKIPPED: verify {}:{} - could not acquire namespace lock: {}",
>> +                    self.datastore.name(),
>> +                    backup_dir.dir(),
>> +                    err,
>> +                );
>> +                return Ok(true);
>> +            }
>> +        };
>> +
>>           let snap_lock = backup_dir.lock_shared();
>>
>>           match snap_lock {
>> -            Ok(snap_lock) => self.verify_backup_dir_with_lock(backup_dir, upid, filter, snap_lock),
>> +            Ok(snap_lock) => {
>> +                let result = self.verify_backup_dir_with_lock(backup_dir, upid, filter, snap_lock);
>> +                drop(ns_lock);
>> +                result
>> +            }
>>               Err(err) => {
>>                   info!(
>>                       "SKIPPED: verify {}:{} - could not acquire snapshot lock: {}",
>> diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
>> index bb86a323..8103f59c 100644
>> --- a/src/server/prune_job.rs
>> +++ b/src/server/prune_job.rs
>> @@ -54,6 +54,17 @@ pub fn prune_datastore(
>>       )? {
>>           let group = group?;
>>           let ns = group.backup_ns();
>> +        let _ns_lock = match datastore.lock_namespace_shared(ns) {
>> +            Ok(lock) => lock,
>> +            Err(err) => {
>> +                warn!(
>> +                    "skipping prune of group '{}:{}' - could not acquire namespace lock: {err}",
>> +                    ns,
>> +                    group.group(),
>> +                );
>> +                continue;
>> +            }
>> +        };
>>           let list = group.list_backups()?;
>>
>>           let mut prune_info = compute_prune_info(list, &prune_options.keep)?;
>> diff --git a/src/server/pull.rs b/src/server/pull.rs
>> index 0ac6b5b8..40173324 100644
>> --- a/src/server/pull.rs
>> +++ b/src/server/pull.rs
>> @@ -1061,6 +1061,12 @@ async fn pull_ns(
>>       params: &mut PullParameters,
>>       encountered_chunks: Arc<Mutex<EncounteredChunks>>,
>>   ) -> Result<(StoreProgress, SyncStats, bool), Error> {
>> +    let _ns_lock = params
>> +        .target
>> +        .store
>> +        .lock_namespace_shared(namespace)
>> +        .with_context(|| format!("failed to acquire shared namespace lock for '{namespace}'"))?;
>> +
>>       let list: Vec<BackupGroup> = params.source.list_groups(namespace, &params.owner).await?;
>>
>>       let unfiltered_count = list.len();
>> @@ -1093,7 +1099,7 @@ async fn pull_ns(
>>           progress.done_snapshots = 0;
>>           progress.group_snapshots = 0;
>>
>> -        let (owner, _lock_guard) =
>> +        let (owner, _ns_guard, _lock_guard) =
>>               match params
>>                   .target
>>                   .store
>> diff --git a/src/server/push.rs b/src/server/push.rs
>> index 27c5b22d..47067d66 100644
>> --- a/src/server/push.rs
>> +++ b/src/server/push.rs
>> @@ -525,6 +525,12 @@ pub(crate) async fn push_namespace(
>>       check_ns_remote_datastore_privs(params, &target_namespace, PRIV_REMOTE_DATASTORE_BACKUP)
>>           .context("Pushing to remote namespace not allowed")?;
>>
>> +    let _ns_lock = params
>> +        .source
>> +        .store
>> +        .lock_namespace_shared(namespace)
>> +        .with_context(|| format!("failed to acquire shared namespace lock for '{namespace}'"))?;
>> +
>>       let mut list: Vec<BackupGroup> = params
>>           .source
>>           .list_groups(namespace, &params.local_user)
> 

Thread overview: 21+ messages
2026-03-11 15:13 [PATCH proxmox-backup v4 0/7] fixes #6195: add support for moving groups and namespaces Hannes Laimer
2026-03-11 15:13 ` [PATCH proxmox-backup v4 1/7] datastore: add namespace-level locking Hannes Laimer
2026-03-12 15:43   ` Christian Ebner
2026-03-13  7:40     ` Hannes Laimer [this message]
2026-03-13  7:56       ` Christian Ebner
2026-03-17 13:03     ` Christian Ebner
2026-03-11 15:13 ` [PATCH proxmox-backup v4 2/7] datastore: add move_group Hannes Laimer
2026-03-12 16:08   ` Christian Ebner
2026-03-13  7:28     ` Hannes Laimer
2026-03-13  7:52       ` Christian Ebner
2026-03-11 15:13 ` [PATCH proxmox-backup v4 3/7] datastore: add move_namespace Hannes Laimer
2026-03-11 15:13 ` [PATCH proxmox-backup v4 4/7] api: add PUT endpoint for move_group Hannes Laimer
2026-03-12 16:17   ` Christian Ebner
2026-03-11 15:13 ` [PATCH proxmox-backup v4 5/7] api: add PUT endpoint for move_namespace Hannes Laimer
2026-03-12 16:19   ` Christian Ebner
2026-03-11 15:13 ` [PATCH proxmox-backup v4 6/7] ui: add move group action Hannes Laimer
2026-03-17 11:43   ` Christian Ebner
2026-03-17 11:48     ` Hannes Laimer
2026-03-11 15:13 ` [PATCH proxmox-backup v4 7/7] ui: add move namespace action Hannes Laimer
2026-03-12 16:21 ` [PATCH proxmox-backup v4 0/7] fixes #6195: add support for moving groups and namespaces Christian Ebner
2026-03-17 13:47 ` Christian Ebner
