From: Christian Ebner <c.ebner@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup v2 0/2] fix #6750: fix possible deadlock for s3 backed datastore backups
Date: Fri, 26 Sep 2025 10:42:19 +0200	[thread overview]
Message-ID: <20250926084221.201116-1-c.ebner@proxmox.com> (raw)

These patches fix a deadlock which can occur during backup
jobs to datastores backed by an S3 backend. The deadlock is most
likely caused by the mutex guard for the shared backup state being
held while entering the tokio::task::block_in_place context and
executing async code, which can lead to deadlocks as described in [0].

Therefore, these patches avoid holding the mutex guard for the shared
backup state while performing the S3 backend operations, by dropping
it early. To avoid inconsistencies, introduce flags to keep track of
the index writers' closing state and add a transient `Finishing`
state to be entered during manifest updates.

Changes since version 1 (thanks @Fabian):
- Use the shared backup state's writers in combination with a closed
  flag instead of counting active backend operations.
- Replace the finished flag with a BackupState enum to introduce the
  new, transient `Finishing` state to be entered during manifest
  updates.
- Add missing checks and refactor code to use the now mutable
  reference when accessing the shared backup state in the respective
  close calls.


[0] https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use

Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=6750

Another report in the community forum:
https://forum.proxmox.com/threads/171422/

proxmox-backup:

Christian Ebner (2):
  fix #6750: api: avoid possible deadlock on datastores with s3 backend
  api: backup: never hold mutex guard when doing manifest update

 src/api2/backup/environment.rs | 169 +++++++++++++++++++++++----------
 1 file changed, 120 insertions(+), 49 deletions(-)


Summary over all repositories:
  1 files changed, 120 insertions(+), 49 deletions(-)

-- 
Generated by git-murpp 0.8.1


_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel


Thread overview: 6+ messages
2025-09-26  8:42 Christian Ebner [this message]
2025-09-26  8:42 ` [pbs-devel] [PATCH proxmox-backup v2 1/2] fix #6750: api: avoid possible deadlock on datastores with s3 backend Christian Ebner
2025-09-26  8:42 ` [pbs-devel] [PATCH proxmox-backup v2 2/2] api: backup: never hold mutex guard when doing manifest update Christian Ebner
2025-09-26 10:26 ` [pbs-devel] [PATCH proxmox-backup v2 0/2] fix #6750: fix possible deadlock for s3 backed datastore backups Fabian Grünbichler
2025-09-26 10:35   ` Christian Ebner
2025-09-26 10:45     ` Fabian Grünbichler
