* [pbs-devel] [PATCH proxmox-backup v3 0/3] fix #6750: fix possible deadlock for s3 backed datastore backups
@ 2025-09-29  8:04 Christian Ebner
  2025-09-29  8:04 ` [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6750: api: avoid possible deadlock on datastores with s3 backend Christian Ebner
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Christian Ebner @ 2025-09-29  8:04 UTC (permalink / raw)
  To: pbs-devel

These patches aim to fix a deadlock which can occur during backup
jobs to datastores backed by an S3 backend. The deadlock is most
likely caused by the mutex guard for the shared backup state being
held while entering the tokio::task::block_in_place context and
executing async code, which can lead to deadlocks as described in
[0].

Therefore, these patches avoid holding the mutex guard for the shared
backup state while performing the S3 backend operations, by dropping
it early. To avoid inconsistencies, flags are introduced to keep
track of the index writers' closing state, and a transient
`Finishing` state is added that is entered during manifest updates.
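
The core pattern applied throughout the series can be illustrated with
a minimal, self-contained sketch (illustrative types and names, not the
actual PBS code; it assumes a multi-threaded tokio runtime):
synchronous bookkeeping happens under the std::sync::Mutex guard, the
guard is dropped before any block_in_place/async section, and the lock
is only reacquired afterwards.

    // Minimal sketch of the guard-dropping pattern (illustrative only).
    use std::sync::Mutex;

    struct Shared {
        finished: bool,
    }

    fn finish(shared: &Mutex<Shared>) {
        {
            let _state = shared.lock().unwrap();
            // synchronous bookkeeping only; the guard is dropped when
            // this scope ends
        }

        // Blocking on async code while a std::sync::Mutex guard is still
        // alive can deadlock if another task scheduled on the same worker
        // thread needs the same lock.
        tokio::task::block_in_place(|| {
            tokio::runtime::Handle::current().block_on(async {
                // placeholder for the S3 backend operation
            })
        });

        // Reacquire the lock only after the async section has completed.
        shared.lock().unwrap().finished = true;
    }

    // block_in_place requires the multi-threaded runtime flavor.
    #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
    async fn main() {
        let shared = Mutex::new(Shared { finished: false });
        finish(&shared);
        assert!(shared.lock().unwrap().finished);
    }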

Changes since version 2 (thanks @Fabian):
- Avoid unneeded mutex guard during backup removal

Changes since version 1 (thanks @Fabian):
- Use the shared backup state's writers together with a closed flag
  instead of counting active backend operations.
- Replace the finished flag with a BackupState enum to introduce the
  new, transient `Finishing` state entered during manifest updates.
- Add missing checks and adapt the code to the now mutable reference
  used when accessing the shared backup state in the respective close
  calls.


[0] https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use

Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=6750

Another report in the community forum:
https://forum.proxmox.com/threads/171422/

proxmox-backup:

Christian Ebner (3):
  fix #6750: api: avoid possible deadlock on datastores with s3 backend
  api: backup: never hold mutex guard when doing manifest update
  api: backup: avoid holding mutex and inline backup cleanup method

 src/api2/backup/environment.rs | 181 ++++++++++++++++++++++-----------
 src/api2/backup/mod.rs         |  24 ++++-
 2 files changed, 140 insertions(+), 65 deletions(-)


Summary over all repositories:
  2 files changed, 140 insertions(+), 65 deletions(-)

-- 
Generated by git-murpp 0.8.1


_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel



* [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6750: api: avoid possible deadlock on datastores with s3 backend
  2025-09-29  8:04 [pbs-devel] [PATCH proxmox-backup v3 0/3] fix #6750: fix possible deadlock for s3 backed datastore backups Christian Ebner
@ 2025-09-29  8:04 ` Christian Ebner
  2025-09-29  8:04 ` [pbs-devel] [PATCH proxmox-backup v3 2/3] api: backup: never hold mutex guard when doing manifest update Christian Ebner
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Christian Ebner @ 2025-09-29  8:04 UTC (permalink / raw)
  To: pbs-devel

Closing the fixed or dynamic index files with an s3 backend calls
async code, which must not run while the shared state's mutex guard
is held because of possible deadlocks [0]. Therefore, perform all
changes on the shared backup state first and drop the guard before
uploading the index file to the s3 backend.

In order to keep track of the index writer state, add a closed flag
to signal that the writer has already been closed successfully.
By taking only a mutable reference during the initial accounting
and removing the writer from the shared backup state's writers list
only after the upload, the pre-existing logic for the finished flag
can still be used.

[0] https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=6750
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/api2/backup/environment.rs | 121 ++++++++++++++++++++++++---------
 1 file changed, 87 insertions(+), 34 deletions(-)

diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index d5e6869cd..f997c86a1 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -62,6 +62,7 @@ struct DynamicWriterState {
     offset: u64,
     chunk_count: u64,
     upload_stat: UploadStatistic,
+    closed: bool,
 }
 
 struct FixedWriterState {
@@ -73,6 +74,7 @@ struct FixedWriterState {
     small_chunk_count: usize, // allow 0..1 small chunks (last chunk may be smaller)
     upload_stat: UploadStatistic,
     incremental: bool,
+    closed: bool,
 }
 
 // key=digest, value=length
@@ -194,6 +196,13 @@ impl BackupEnvironment {
             None => bail!("fixed writer '{}' not registered", wid),
         };
 
+        if data.closed {
+            bail!(
+                "fixed writer '{}' register chunk failed - already closed",
+                data.name
+            );
+        }
+
         if size > data.chunk_size {
             bail!(
                 "fixed writer '{}' - got large chunk ({} > {}",
@@ -248,6 +257,13 @@ impl BackupEnvironment {
             None => bail!("dynamic writer '{}' not registered", wid),
         };
 
+        if data.closed {
+            bail!(
+                "dynamic writer '{}' register chunk failed - already closed",
+                data.name
+            );
+        }
+
         // record statistics
         data.upload_stat.count += 1;
         data.upload_stat.size += size as u64;
@@ -288,6 +304,7 @@ impl BackupEnvironment {
                 offset: 0,
                 chunk_count: 0,
                 upload_stat: UploadStatistic::new(),
+                closed: false,
             },
         );
 
@@ -320,6 +337,7 @@ impl BackupEnvironment {
                 small_chunk_count: 0,
                 upload_stat: UploadStatistic::new(),
                 incremental,
+                closed: false,
             },
         );
 
@@ -343,6 +361,13 @@ impl BackupEnvironment {
             None => bail!("dynamic writer '{}' not registered", wid),
         };
 
+        if data.closed {
+            bail!(
+                "dynamic writer '{}' append chunk failed - already closed",
+                data.name
+            );
+        }
+
         if data.offset != offset {
             bail!(
                 "dynamic writer '{}' append chunk failed - got strange chunk offset ({} != {})",
@@ -377,6 +402,13 @@ impl BackupEnvironment {
             None => bail!("fixed writer '{}' not registered", wid),
         };
 
+        if data.closed {
+            bail!(
+                "fixed writer '{}' append chunk failed - already closed",
+                data.name
+            );
+        }
+
         let end = (offset as usize) + (size as usize);
         let idx = data.index.check_chunk_alignment(end, size as usize)?;
 
@@ -449,10 +481,17 @@ impl BackupEnvironment {
 
         state.ensure_unfinished()?;
 
-        let mut data = match state.dynamic_writers.remove(&wid) {
+        let data = match state.dynamic_writers.get_mut(&wid) {
             Some(data) => data,
             None => bail!("dynamic writer '{}' not registered", wid),
         };
+        let writer_name = data.name.clone();
+        let uuid = data.index.uuid;
+        let upload_stat = data.upload_stat;
+
+        if data.closed {
+            bail!("dynamic writer '{writer_name}' close failed - already closed");
+        }
 
         if data.chunk_count != chunk_count {
             bail!(
@@ -472,9 +511,8 @@ impl BackupEnvironment {
             );
         }
 
-        let uuid = data.index.uuid;
-
         let expected_csum = data.index.close()?;
+        data.closed = true;
 
         if csum != expected_csum {
             bail!(
@@ -483,28 +521,28 @@ impl BackupEnvironment {
             );
         }
 
+        state.file_counter += 1;
+        state.backup_size += size;
+        state.backup_stat = state.backup_stat + upload_stat;
+
+        self.log_upload_stat(&writer_name, &csum, &uuid, size, chunk_count, &upload_stat);
+
+        // never hold mutex guard during s3 upload due to possible deadlocks
+        drop(state);
+
         // For S3 backends, upload the index file to the object store after closing
         if let DatastoreBackend::S3(s3_client) = &self.backend {
-            self.s3_upload_index(s3_client, &data.name)
+            self.s3_upload_index(s3_client, &writer_name)
                 .context("failed to upload dynamic index to s3 backend")?;
             self.log(format!(
-                "Uploaded dynamic index file to s3 backend: {}",
-                data.name
+                "Uploaded dynamic index file to s3 backend: {writer_name}"
             ))
         }
 
-        self.log_upload_stat(
-            &data.name,
-            &csum,
-            &uuid,
-            size,
-            chunk_count,
-            &data.upload_stat,
-        );
-
-        state.file_counter += 1;
-        state.backup_size += size;
-        state.backup_stat = state.backup_stat + data.upload_stat;
+        let mut state = self.state.lock().unwrap();
+        if state.dynamic_writers.remove(&wid).is_none() {
+            bail!("dynamic writer '{wid}' no longer registered");
+        }
 
         Ok(())
     }
@@ -521,11 +559,19 @@ impl BackupEnvironment {
 
         state.ensure_unfinished()?;
 
-        let mut data = match state.fixed_writers.remove(&wid) {
+        let data = match state.fixed_writers.get_mut(&wid) {
             Some(data) => data,
             None => bail!("fixed writer '{}' not registered", wid),
         };
 
+        let writer_name = data.name.clone();
+        let uuid = data.index.uuid;
+        let upload_stat = data.upload_stat;
+
+        if data.closed {
+            bail!("fixed writer '{writer_name}' close failed - already closed");
+        }
+
         if data.chunk_count != chunk_count {
             bail!(
                 "fixed writer '{}' close failed - received wrong number of chunk ({} != {})",
@@ -557,8 +603,8 @@ impl BackupEnvironment {
             }
         }
 
-        let uuid = data.index.uuid;
         let expected_csum = data.index.close()?;
+        data.closed = true;
 
         if csum != expected_csum {
             bail!(
@@ -567,28 +613,35 @@ impl BackupEnvironment {
             );
         }
 
-        // For S3 backends, upload the index file to the object store after closing
-        if let DatastoreBackend::S3(s3_client) = &self.backend {
-            self.s3_upload_index(s3_client, &data.name)
-                .context("failed to upload fixed index to s3 backend")?;
-            self.log(format!(
-                "Uploaded fixed index file to object store: {}",
-                data.name
-            ))
-        }
+        state.file_counter += 1;
+        state.backup_size += size;
+        state.backup_stat = state.backup_stat + upload_stat;
 
         self.log_upload_stat(
-            &data.name,
+            &writer_name,
             &expected_csum,
             &uuid,
             size,
             chunk_count,
-            &data.upload_stat,
+            &upload_stat,
         );
 
-        state.file_counter += 1;
-        state.backup_size += size;
-        state.backup_stat = state.backup_stat + data.upload_stat;
+        // never hold mutex guard during s3 upload due to possible deadlocks
+        drop(state);
+
+        // For S3 backends, upload the index file to the object store after closing
+        if let DatastoreBackend::S3(s3_client) = &self.backend {
+            self.s3_upload_index(s3_client, &writer_name)
+                .context("failed to upload fixed index to s3 backend")?;
+            self.log(format!(
+                "Uploaded fixed index file to object store: {writer_name}"
+            ))
+        }
+
+        let mut state = self.state.lock().unwrap();
+        if state.fixed_writers.remove(&wid).is_none() {
+            bail!("fixed writer '{wid}' no longer registered");
+        }
 
         Ok(())
     }
-- 
2.47.3



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel



* [pbs-devel] [PATCH proxmox-backup v3 2/3] api: backup: never hold mutex guard when doing manifest update
  2025-09-29  8:04 [pbs-devel] [PATCH proxmox-backup v3 0/3] fix #6750: fix possible deadlock for s3 backed datastore backups Christian Ebner
  2025-09-29  8:04 ` [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6750: api: avoid possible deadlock on datastores with s3 backend Christian Ebner
@ 2025-09-29  8:04 ` Christian Ebner
  2025-09-29  8:04 ` [pbs-devel] [PATCH proxmox-backup v3 3/3] api: backup: avoid holding mutex and inline backup cleanup method Christian Ebner
  2025-09-29  9:41 ` [pbs-devel] applied-series: [PATCH proxmox-backup v3 0/3] fix #6750: fix possible deadlock for s3 backed datastore backups Fabian Grünbichler
  3 siblings, 0 replies; 5+ messages in thread
From: Christian Ebner @ 2025-09-29  8:04 UTC (permalink / raw)
  To: pbs-devel

A manifest update with an s3 backend calls async code, which must
not run while the shared state's mutex guard is held because of
possible deadlocks [0]. Therefore, perform all changes on the shared
backup state and drop the guard before updating the manifest, which
performs the backend-specific update, reacquiring the guard
afterwards to ensure the fs sync level.

To still guarantee consistency, replace the finished flag with an
enum that adds a new transient `Finishing` state, allowing the three
different backup states to be distinguished.
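
The three states can be sketched roughly as follows (illustrative,
simplified names, not the exact implementation in this patch): the
transient `Finishing` state rejects further modifications while the
guard is released for the manifest update, and is verified again after
reacquiring the lock before marking the backup as finished.

    // Rough sketch of the transient state idea (illustrative only).
    use std::sync::Mutex;

    #[derive(PartialEq)]
    enum BackupState {
        Active,
        Finishing,
        Finished,
    }

    struct Shared {
        state: BackupState,
    }

    impl Shared {
        fn ensure_unfinished(&self) -> Result<(), String> {
            match self.state {
                BackupState::Active => Ok(()),
                BackupState::Finishing => Err("backup is already finishing".into()),
                BackupState::Finished => Err("backup already finished".into()),
            }
        }
    }

    fn finish(shared: &Mutex<Shared>) -> Result<(), String> {
        let mut guard = shared.lock().unwrap();
        guard.ensure_unfinished()?;
        // Entering `Finishing` blocks all other writer/chunk API calls ...
        guard.state = BackupState::Finishing;
        drop(guard); // ... so the guard can be released during the update

        // placeholder for the backend specific manifest update

        let mut guard = shared.lock().unwrap();
        if guard.state != BackupState::Finishing {
            return Err("state changed during manifest update".into());
        }
        guard.state = BackupState::Finished;
        Ok(())
    }

    fn main() {
        let shared = Mutex::new(Shared { state: BackupState::Active });
        finish(&shared).unwrap();
        assert!(shared.lock().unwrap().state == BackupState::Finished);
    }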

[0] https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/api2/backup/environment.rs | 48 +++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 15 deletions(-)

diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index f997c86a1..de6ce3c89 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -80,8 +80,15 @@ struct FixedWriterState {
 // key=digest, value=length
 type KnownChunksMap = HashMap<[u8; 32], u32>;
 
+#[derive(PartialEq)]
+enum BackupState {
+    Active,
+    Finishing,
+    Finished,
+}
+
 struct SharedBackupState {
-    finished: bool,
+    finished: BackupState,
     uid_counter: usize,
     file_counter: usize, // successfully uploaded files
     dynamic_writers: HashMap<usize, DynamicWriterState>,
@@ -92,12 +99,13 @@ struct SharedBackupState {
 }
 
 impl SharedBackupState {
-    // Raise error if finished flag is set
+    // Raise error if the backup is no longer in an active state.
     fn ensure_unfinished(&self) -> Result<(), Error> {
-        if self.finished {
-            bail!("backup already marked as finished.");
+        match self.finished {
+            BackupState::Active => Ok(()),
+            BackupState::Finishing => bail!("backup is already in the process of finishing."),
+            BackupState::Finished => bail!("backup already marked as finished."),
         }
-        Ok(())
     }
 
     // Get an unique integer ID
@@ -134,7 +142,7 @@ impl BackupEnvironment {
         no_cache: bool,
     ) -> Result<Self, Error> {
         let state = SharedBackupState {
-            finished: false,
+            finished: BackupState::Active,
             uid_counter: 0,
             file_counter: 0,
             dynamic_writers: HashMap::new(),
@@ -712,18 +720,29 @@ impl BackupEnvironment {
             }
         }
 
-        // check for valid manifest and store stats
         let stats = serde_json::to_value(state.backup_stat)?;
+
+        // make sure no other api calls can modify the backup state anymore
+        state.finished = BackupState::Finishing;
+
+        // never hold mutex guard during s3 upload due to possible deadlocks
+        drop(state);
+
+        // check for valid manifest and store stats
         self.backup_dir
             .update_manifest(&self.backend, |manifest| {
                 manifest.unprotected["chunk_upload_stats"] = stats;
             })
             .map_err(|err| format_err!("unable to update manifest blob - {err}"))?;
 
+        let mut state = self.state.lock().unwrap();
+        if state.finished != BackupState::Finishing {
+            bail!("backup not in finishing state after manifest update");
+        }
         self.datastore.try_ensure_sync_level()?;
 
         // marks the backup as successful
-        state.finished = true;
+        state.finished = BackupState::Finished;
 
         Ok(())
     }
@@ -800,25 +819,24 @@ impl BackupEnvironment {
         self.formatter.format_result(result, self)
     }
 
-    /// Raise error if finished flag is not set
+    /// Raise error if finished state is not set
     pub fn ensure_finished(&self) -> Result<(), Error> {
-        let state = self.state.lock().unwrap();
-        if !state.finished {
-            bail!("backup ended but finished flag is not set.");
+        if !self.finished() {
+            bail!("backup ended but finished state is not set.");
         }
         Ok(())
     }
 
-    /// Return true if the finished flag is set
+    /// Return true if the finished state is set
     pub fn finished(&self) -> bool {
         let state = self.state.lock().unwrap();
-        state.finished
+        state.finished == BackupState::Finished
     }
 
     /// Remove complete backup
     pub fn remove_backup(&self) -> Result<(), Error> {
         let mut state = self.state.lock().unwrap();
-        state.finished = true;
+        state.finished = BackupState::Finished;
 
         self.datastore.remove_backup_dir(
             self.backup_dir.backup_ns(),
-- 
2.47.3



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel



* [pbs-devel] [PATCH proxmox-backup v3 3/3] api: backup: avoid holding mutex and inline backup cleanup method
  2025-09-29  8:04 [pbs-devel] [PATCH proxmox-backup v3 0/3] fix #6750: fix possible deadlock for s3 backed datastore backups Christian Ebner
  2025-09-29  8:04 ` [pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6750: api: avoid possible deadlock on datastores with s3 backend Christian Ebner
  2025-09-29  8:04 ` [pbs-devel] [PATCH proxmox-backup v3 2/3] api: backup: never hold mutex guard when doing manifest update Christian Ebner
@ 2025-09-29  8:04 ` Christian Ebner
  2025-09-29  9:41 ` [pbs-devel] applied-series: [PATCH proxmox-backup v3 0/3] fix #6750: fix possible deadlock for s3 backed datastore backups Fabian Grünbichler
  3 siblings, 0 replies; 5+ messages in thread
From: Christian Ebner @ 2025-09-29  8:04 UTC (permalink / raw)
  To: pbs-devel

The method to clean up backups after a failure (or because of a
benchmark run) currently acquires a mutex guard before removing the
backup directory.

Since the S3 backend needs to execute code in an async context, it is
not deadlock-safe to hold the guard. However, the guard was only
acquired to set the state to finished, which is not necessary since
the request handling future has already run to completion, so this
has no further effect.

Therefore, drop the mutex locking altogether and inline the now
trivial function call at the respective call sites.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- not present in previous version

 src/api2/backup/environment.rs | 14 --------------
 src/api2/backup/mod.rs         | 24 +++++++++++++++++++++---
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index de6ce3c89..ace305d7e 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -833,20 +833,6 @@ impl BackupEnvironment {
         state.finished == BackupState::Finished
     }
 
-    /// Remove complete backup
-    pub fn remove_backup(&self) -> Result<(), Error> {
-        let mut state = self.state.lock().unwrap();
-        state.finished = BackupState::Finished;
-
-        self.datastore.remove_backup_dir(
-            self.backup_dir.backup_ns(),
-            self.backup_dir.as_ref(),
-            true,
-        )?;
-
-        Ok(())
-    }
-
     fn s3_upload_index(&self, s3_client: &S3Client, name: &str) -> Result<(), Error> {
         let object_key =
             pbs_datastore::s3::object_key_from_path(&self.backup_dir.relative_path(), name)
diff --git a/src/api2/backup/mod.rs b/src/api2/backup/mod.rs
index ae61ff697..8a076a2b0 100644
--- a/src/api2/backup/mod.rs
+++ b/src/api2/backup/mod.rs
@@ -282,7 +282,13 @@ fn upgrade_to_backup_protocol(
                 };
                 if benchmark {
                     env.log("benchmark finished successfully");
-                    proxmox_async::runtime::block_in_place(|| env.remove_backup())?;
+                    proxmox_async::runtime::block_in_place(|| {
+                        env.datastore.remove_backup_dir(
+                            env.backup_dir.backup_ns(),
+                            env.backup_dir.as_ref(),
+                            true,
+                        )
+                    })?;
                     return Ok(());
                 }
 
@@ -310,13 +316,25 @@ fn upgrade_to_backup_protocol(
                     (Ok(_), Err(err)) => {
                         env.log(format!("backup ended and finish failed: {}", err));
                         env.log("removing unfinished backup");
-                        proxmox_async::runtime::block_in_place(|| env.remove_backup())?;
+                        proxmox_async::runtime::block_in_place(|| {
+                            env.datastore.remove_backup_dir(
+                                env.backup_dir.backup_ns(),
+                                env.backup_dir.as_ref(),
+                                true,
+                            )
+                        })?;
                         Err(err)
                     }
                     (Err(err), Err(_)) => {
                         env.log(format!("backup failed: {}", err));
                         env.log("removing failed backup");
-                        proxmox_async::runtime::block_in_place(|| env.remove_backup())?;
+                        proxmox_async::runtime::block_in_place(|| {
+                            env.datastore.remove_backup_dir(
+                                env.backup_dir.backup_ns(),
+                                env.backup_dir.as_ref(),
+                                true,
+                            )
+                        })?;
                         Err(err)
                     }
                 }
-- 
2.47.3



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel



* [pbs-devel] applied-series: [PATCH proxmox-backup v3 0/3] fix #6750: fix possible deadlock for s3 backed datastore backups
  2025-09-29  8:04 [pbs-devel] [PATCH proxmox-backup v3 0/3] fix #6750: fix possible deadlock for s3 backed datastore backups Christian Ebner
                   ` (2 preceding siblings ...)
  2025-09-29  8:04 ` [pbs-devel] [PATCH proxmox-backup v3 3/3] api: backup: avoid holding mutex and inline backup cleanup method Christian Ebner
@ 2025-09-29  9:41 ` Fabian Grünbichler
  3 siblings, 0 replies; 5+ messages in thread
From: Fabian Grünbichler @ 2025-09-29  9:41 UTC (permalink / raw)
  To: Christian Ebner, pbs-devel

with some follow-ups sent:
https://lore.proxmox.com/pbs-devel/20250929093228.205510-1-f.gruenbichler@proxmox.com/T/#u
https://lore.proxmox.com/pbs-devel/20250929092143.190162-1-f.gruenbichler@proxmox.com/T/#u

AFAICT those are the only other instances that might be problematic..

Quoting Christian Ebner (2025-09-29 10:04:04)
> These patches aim to fix a deadlock which can occur during backup
> jobs to datastores backed by an S3 backend. The deadlock is most
> likely caused by the mutex guard for the shared backup state being
> held while entering the tokio::task::block_in_place context and
> executing async code, which can lead to deadlocks as described in
> [0].
> 
> Therefore, these patches avoid holding the mutex guard for the shared
> backup state while performing the S3 backend operations, by dropping
> it early. To avoid inconsistencies, flags are introduced to keep
> track of the index writers' closing state, and a transient
> `Finishing` state is added that is entered during manifest updates.
> 
> Changes since version 2 (thanks @Fabian):
> - Avoid unneeded mutex guard during backup removal
> 
> Changes since version 1 (thanks @Fabian):
> - Use the shared backup state's writers together with a closed flag
>   instead of counting active backend operations.
> - Replace the finished flag with a BackupState enum to introduce the
>   new, transient `Finishing` state entered during manifest updates.
> - Add missing checks and adapt the code to the now mutable reference
>   used when accessing the shared backup state in the respective close
>   calls.
> 
> 
> [0] https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use
> 
> Link to the bugtracker issue:
> https://bugzilla.proxmox.com/show_bug.cgi?id=6750
> 
> Another report in the community forum:
> https://forum.proxmox.com/threads/171422/
> 
> proxmox-backup:
> 
> Christian Ebner (3):
>   fix #6750: api: avoid possible deadlock on datastores with s3 backend
>   api: backup: never hold mutex guard when doing manifest update
>   api: backup: avoid holding mutex and inline backup cleanup method
> 
>  src/api2/backup/environment.rs | 181 ++++++++++++++++++++++-----------
>  src/api2/backup/mod.rs         |  24 ++++-
>  2 files changed, 140 insertions(+), 65 deletions(-)
> 
> 
> Summary over all repositories:
>   2 files changed, 140 insertions(+), 65 deletions(-)
> 
> -- 
> Generated by git-murpp 0.8.1
> 
> 
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
> 
>


_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel



