From: Christian Ebner <c.ebner@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup v5 19/19] datastore: document s3 backend specific locking restrictions
Date: Tue, 11 Nov 2025 15:30:02 +0100	[thread overview]
Message-ID: <20251111143002.759901-20-c.ebner@proxmox.com> (raw)
In-Reply-To: <20251111143002.759901-1-c.ebner@proxmox.com>

The requirements are stricter here: not only must holding std::sync::Mutex
guards across calls into async contexts be avoided, but consistency must also
be maintained between the s3 object store, the local datastore cache and the
in-memory LRU cache.
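
As a minimal sketch of the first point (all names below, such as LocalCache
and upload_to_s3, are hypothetical stand-ins and not the actual
proxmox-backup code), the difference between holding a std::sync::Mutex guard
across an .await point and scoping it to the synchronous section looks
roughly like this:

use std::sync::Mutex;

// Hypothetical stand-in types and helpers for this illustration only.
struct LocalCache {
    state: Mutex<Vec<[u8; 32]>>,
}

async fn upload_to_s3(_digest: &[u8; 32], _data: &[u8]) {
    // placeholder for the actual async S3 upload
}

// BAD: the std::sync::Mutex guard is held across the .await point. If the
// task is suspended here and another task on the same executor thread tries
// to lock the mutex, the executor can deadlock.
async fn insert_holding_guard(cache: &LocalCache, digest: [u8; 32], data: Vec<u8>) {
    let mut guard = cache.state.lock().unwrap();
    upload_to_s3(&digest, &data).await; // guard still held here
    guard.push(digest);
}

// BETTER: do the async work first, then take the mutex only for the short,
// synchronous update, so no guard ever lives across an .await.
async fn insert_scoped_guard(cache: &LocalCache, digest: [u8; 32], data: Vec<u8>) {
    upload_to_s3(&digest, &data).await;
    let mut guard = cache.state.lock().unwrap();
    guard.push(digest);
}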

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-datastore/src/lib.rs | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/pbs-datastore/src/lib.rs b/pbs-datastore/src/lib.rs
index 849078a8f..1f7c54ae8 100644
--- a/pbs-datastore/src/lib.rs
+++ b/pbs-datastore/src/lib.rs
@@ -81,6 +81,19 @@
 //! because running these operations concurrently is treated as a feature
 //! on its own.
 //!
+//! For datastores with an S3 backend there are further restrictions, since
+//! three locking mechanisms are involved:
+//! - per-chunk file lock
+//! - chunk store mutex lock
+//! - LRU cache mutex lock
+//!
+//! Locks must always be acquired in this specific order to avoid deadlocks.
+//! The per-chunk file lock is used to avoid holding a mutex lock across calls
+//! into async contexts, which could otherwise deadlock. It must be held from
+//! the start of an operation on the chunk until the chunk has been persisted
+//! to the s3 backend, the local datastore cache and the in-memory LRU cache,
+//! where required.
+//!
 //! ## Inter-process Locking
 //!
 //! We need to be able to restart the proxmox-backup service daemons, so
-- 
2.47.3
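
To illustrate the lock order documented in the hunk above (a sketch only:
ChunkStore, LruCache, acquire_per_chunk_file_lock, upload_to_s3 and
persist_to_local_cache are hypothetical stand-ins, not the proxmox-backup
API), a chunk insert on an s3-backed datastore could be structured along
these lines:

use std::fs::{File, OpenOptions};
use std::sync::Mutex;

// Hypothetical stand-in types for this illustration only.
struct ChunkStore {
    mutex: Mutex<()>, // the chunk store mutex lock
}

struct LruCache {
    entries: Mutex<Vec<[u8; 32]>>, // in-memory LRU cache, guarded by its own mutex
}

// Stand-in for the per-chunk file lock; a real implementation would take an
// exclusive flock() on a per-chunk lock file.
fn acquire_per_chunk_file_lock(digest: &[u8; 32]) -> std::io::Result<File> {
    let name: String = digest.iter().map(|b| format!("{b:02x}")).collect();
    OpenOptions::new()
        .create(true)
        .write(true)
        .open(format!("/tmp/{name}.lock"))
}

async fn upload_to_s3(_digest: &[u8; 32], _data: &[u8]) -> std::io::Result<()> {
    Ok(()) // placeholder for the actual async S3 upload
}

fn persist_to_local_cache(_digest: &[u8; 32], _data: &[u8]) -> std::io::Result<()> {
    Ok(()) // placeholder for writing the chunk into the local datastore cache
}

// Insert a chunk while respecting the documented lock order:
// per-chunk file lock -> chunk store mutex -> LRU cache mutex.
async fn insert_chunk(
    store: &ChunkStore,
    cache: &LruCache,
    digest: [u8; 32],
    data: Vec<u8>,
) -> std::io::Result<()> {
    // 1) per-chunk file lock, held until the end of the function so that the
    //    s3 object, the local cache file and the LRU entry stay consistent
    let _chunk_file_lock = acquire_per_chunk_file_lock(&digest)?;

    // async work happens while only the file lock is held, never a mutex guard
    upload_to_s3(&digest, &data).await?;

    // 2) chunk store mutex, scoped to the synchronous local cache update
    {
        let _guard = store.mutex.lock().unwrap();
        persist_to_local_cache(&digest, &data)?;
    }

    // 3) LRU cache mutex, taken last
    cache.entries.lock().unwrap().push(digest);
    Ok(())
}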






Thread overview: 21+ messages
2025-11-11 14:29 [pbs-devel] [PATCH proxmox-backup v5 00/19] fix chunk upload/insert, rename corrupt chunks and GC race conditions for s3 backend Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 01/19] datastore: GC: drop overly verbose info message during s3 chunk sweep Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 02/19] chunk store: implement per-chunk file locking helper for s3 backend Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 03/19] datastore: acquire chunk store mutex lock when renaming corrupt chunk Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 04/19] datastore: get per-chunk file lock for chunk rename on s3 backend Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 05/19] fix #6961: datastore: verify: evict corrupt chunks from in-memory LRU cache Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 06/19] datastore: add locking to protect against races on chunk insert for s3 Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 07/19] GC: fix race with chunk upload/insert on s3 backends Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 08/19] chunk store: reduce exposure of clear_chunk() to crate only Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 09/19] chunk store: make chunk removal a helper method of the chunk store Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 10/19] store: split insert_chunk into wrapper + unsafe locked implementation Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 11/19] store: cache: move Mutex acquire to cache insertion Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 12/19] chunk store: rename cache-specific helpers Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 13/19] GC: cleanup chunk markers from cache in phase 3 on s3 backends Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 14/19] GC: touch bad chunk files independent of backend type Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 15/19] GC: guard missing marker file insertion for s3 backed stores Christian Ebner
2025-11-11 14:29 ` [pbs-devel] [PATCH proxmox-backup v5 16/19] GC: s3: track if a chunk marker file is missing since a bad chunk Christian Ebner
2025-11-11 14:30 ` [pbs-devel] [PATCH proxmox-backup v5 17/19] chunk store: add helpers marking missing local chunk markers as expected Christian Ebner
2025-11-11 14:30 ` [pbs-devel] [PATCH proxmox-backup v5 18/19] GC: assure chunk exists on s3 store when creating missing chunk marker Christian Ebner
2025-11-11 14:30 ` Christian Ebner [this message]
2025-11-14 13:21 ` [pbs-devel] superseded: [PATCH proxmox-backup v5 00/19] fix chunk upload/insert, rename corrupt chunks and GC race conditions for s3 backend Christian Ebner
