* [pbs-devel] [PATCH proxmox-backup v2 0/2] fix #6750: fix possible deadlock for s3 backed datastore backups

From: Christian Ebner @ 2025-09-26  8:42 UTC
To: pbs-devel

These patches aim to fix a deadlock which can occur during backup jobs
to datastores backed by an S3 backend. The deadlock is most likely
caused by holding the mutex guard for the shared backup state while
entering a tokio::task::block_in_place context and executing async
code, which can lead to deadlocks as described in [0].

Therefore, these patches avoid holding the mutex guard for the shared
backup state while performing the s3 backend operations, by dropping it
early. To avoid inconsistencies, introduce flags to keep track of the
index writers' closing state and add a transient `Finishing` state to
be entered during manifest updates.

Changes since version 1 (thanks @Fabian):
- Use the shared backup state's writers together with a closed flag
  instead of counting active backend operations.
- Replace the finished flag with a BackupState enum to introduce the new,
  transient `Finishing` state to be entered during manifest updates.
- Add missing checks and refactor code to use the now mutable reference
  when accessing the shared backup state in the respective close calls.

[0] https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use

Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=6750

Another report in the community forum:
https://forum.proxmox.com/threads/171422/

proxmox-backup:

Christian Ebner (2):
  fix #6750: api: avoid possible deadlock on datastores with s3 backend
  api: backup: never hold mutex guard when doing manifest update

 src/api2/backup/environment.rs | 169 +++++++++++++++++++++++----------
 1 file changed, 120 insertions(+), 49 deletions(-)

Summary over all repositories:
  1 files changed, 120 insertions(+), 49 deletions(-)

-- 
Generated by git-murpp 0.8.1
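As an illustration of the pattern described in the cover letter above, a minimal
sketch (not taken from the proxmox-backup sources; `SharedState` and
`upload_to_backend` are hypothetical stand-ins, and a multi-threaded tokio
runtime is assumed):

use std::sync::Mutex;

struct SharedState {
    file_counter: usize,
}

// Stand-in for the async S3 call (e.g. the index upload).
async fn upload_to_backend() {}

// Problematic shape: the std::sync::Mutex guard stays alive across the
// blocking bridge into async code. Other tasks that need `state` are parked,
// and the runtime can run out of threads that make progress.
fn close_writer_deadlock_prone(state: &Mutex<SharedState>) {
    let mut guard = state.lock().unwrap();
    guard.file_counter += 1;
    tokio::task::block_in_place(|| {
        tokio::runtime::Handle::current().block_on(upload_to_backend());
    });
    // guard is only dropped here
}

// Shape taken by this series: finish the state changes, drop the guard,
// then perform the backend operation, and relock afterwards if needed.
fn close_writer_fixed(state: &Mutex<SharedState>) {
    {
        let mut guard = state.lock().unwrap();
        guard.file_counter += 1;
    } // guard dropped before any blocking or async work
    tokio::task::block_in_place(|| {
        tokio::runtime::Handle::current().block_on(upload_to_backend());
    });
    let _guard = state.lock().unwrap(); // reacquired for follow-up bookkeeping
}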
* [pbs-devel] [PATCH proxmox-backup v2 1/2] fix #6750: api: avoid possible deadlock on datastores with s3 backend

From: Christian Ebner @ 2025-09-26  8:42 UTC
To: pbs-devel

Closing the fixed or dynamic index files with the s3 backend calls
async code, which must not happen while holding the shared state's
mutex guard because of possible deadlocks [0]. Therefore, perform all
changes on the shared backup state and drop the guard before uploading
the index file to the s3 backend.

In order to keep track of the index writer state, add a closed flag to
signal that the writer has already been closed successfully. By only
taking a mutable reference during the initial accounting and removing
the writer from the shared backup state's writers list only after the
upload, the pre-existing logic for the finished flag can be used.

[0] https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=6750
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/api2/backup/environment.rs | 121 ++++++++++++++++++++++++---------
 1 file changed, 87 insertions(+), 34 deletions(-)

diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index d5e6869cd..f997c86a1 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -62,6 +62,7 @@ struct DynamicWriterState {
     offset: u64,
     chunk_count: u64,
     upload_stat: UploadStatistic,
+    closed: bool,
 }
 
 struct FixedWriterState {
@@ -73,6 +74,7 @@ struct FixedWriterState {
     small_chunk_count: usize, // allow 0..1 small chunks (last chunk may be smaller)
     upload_stat: UploadStatistic,
     incremental: bool,
+    closed: bool,
 }
 
 // key=digest, value=length
@@ -194,6 +196,13 @@ impl BackupEnvironment {
             None => bail!("fixed writer '{}' not registered", wid),
         };
 
+        if data.closed {
+            bail!(
+                "fixed writer '{}' register chunk failed - already closed",
+                data.name
+            );
+        }
+
         if size > data.chunk_size {
             bail!(
                 "fixed writer '{}' - got large chunk ({} > {}",
@@ -248,6 +257,13 @@ impl BackupEnvironment {
             None => bail!("dynamic writer '{}' not registered", wid),
         };
 
+        if data.closed {
+            bail!(
+                "dynamic writer '{}' register chunk failed - already closed",
+                data.name
+            );
+        }
+
         // record statistics
         data.upload_stat.count += 1;
         data.upload_stat.size += size as u64;
@@ -288,6 +304,7 @@ impl BackupEnvironment {
                 offset: 0,
                 chunk_count: 0,
                 upload_stat: UploadStatistic::new(),
+                closed: false,
             },
         );
 
@@ -320,6 +337,7 @@ impl BackupEnvironment {
                 small_chunk_count: 0,
                 upload_stat: UploadStatistic::new(),
                 incremental,
+                closed: false,
             },
         );
 
@@ -343,6 +361,13 @@ impl BackupEnvironment {
             None => bail!("dynamic writer '{}' not registered", wid),
         };
 
+        if data.closed {
+            bail!(
+                "dynamic writer '{}' append chunk failed - already closed",
+                data.name
+            );
+        }
+
         if data.offset != offset {
             bail!(
                 "dynamic writer '{}' append chunk failed - got strange chunk offset ({} != {})",
@@ -377,6 +402,13 @@ impl BackupEnvironment {
             None => bail!("fixed writer '{}' not registered", wid),
         };
 
+        if data.closed {
+            bail!(
+                "fixed writer '{}' append chunk failed - already closed",
+                data.name
+            );
+        }
+
         let end = (offset as usize) + (size as usize);
         let idx = data.index.check_chunk_alignment(end, size as usize)?;
 
@@ -449,10 +481,17 @@ impl BackupEnvironment {
 
         state.ensure_unfinished()?;
 
-        let mut data = match state.dynamic_writers.remove(&wid) {
+        let data = match state.dynamic_writers.get_mut(&wid) {
             Some(data) => data,
             None => bail!("dynamic writer '{}' not registered", wid),
         };
+        let writer_name = data.name.clone();
+        let uuid = data.index.uuid;
+        let upload_stat = data.upload_stat;
+
+        if data.closed {
+            bail!("dynamic writer '{writer_name}' close failed - already closed");
+        }
 
         if data.chunk_count != chunk_count {
             bail!(
@@ -472,9 +511,8 @@ impl BackupEnvironment {
             );
         }
 
-        let uuid = data.index.uuid;
-
         let expected_csum = data.index.close()?;
+        data.closed = true;
 
         if csum != expected_csum {
             bail!(
@@ -483,28 +521,28 @@ impl BackupEnvironment {
             );
         }
 
+        state.file_counter += 1;
+        state.backup_size += size;
+        state.backup_stat = state.backup_stat + upload_stat;
+
+        self.log_upload_stat(&writer_name, &csum, &uuid, size, chunk_count, &upload_stat);
+
+        // never hold mutex guard during s3 upload due to possible deadlocks
+        drop(state);
+
         // For S3 backends, upload the index file to the object store after closing
         if let DatastoreBackend::S3(s3_client) = &self.backend {
-            self.s3_upload_index(s3_client, &data.name)
+            self.s3_upload_index(s3_client, &writer_name)
                 .context("failed to upload dynamic index to s3 backend")?;
             self.log(format!(
-                "Uploaded dynamic index file to s3 backend: {}",
-                data.name
+                "Uploaded dynamic index file to s3 backend: {writer_name}"
             ))
         }
 
-        self.log_upload_stat(
-            &data.name,
-            &csum,
-            &uuid,
-            size,
-            chunk_count,
-            &data.upload_stat,
-        );
-
-        state.file_counter += 1;
-        state.backup_size += size;
-        state.backup_stat = state.backup_stat + data.upload_stat;
+        let mut state = self.state.lock().unwrap();
+        if state.dynamic_writers.remove(&wid).is_none() {
+            bail!("dynamic writer '{wid}' no longer registered");
+        }
 
         Ok(())
     }
@@ -521,11 +559,19 @@ impl BackupEnvironment {
 
         state.ensure_unfinished()?;
 
-        let mut data = match state.fixed_writers.remove(&wid) {
+        let data = match state.fixed_writers.get_mut(&wid) {
             Some(data) => data,
             None => bail!("fixed writer '{}' not registered", wid),
        };
+        let writer_name = data.name.clone();
+        let uuid = data.index.uuid;
+        let upload_stat = data.upload_stat;
+
+        if data.closed {
+            bail!("fixed writer '{writer_name}' close failed - already closed");
+        }
+
         if data.chunk_count != chunk_count {
             bail!(
                 "fixed writer '{}' close failed - received wrong number of chunk ({} != {})",
@@ -557,8 +603,8 @@ impl BackupEnvironment {
             }
         }
 
-        let uuid = data.index.uuid;
         let expected_csum = data.index.close()?;
+        data.closed = true;
 
         if csum != expected_csum {
             bail!(
@@ -567,28 +613,35 @@ impl BackupEnvironment {
             );
         }
 
-        // For S3 backends, upload the index file to the object store after closing
-        if let DatastoreBackend::S3(s3_client) = &self.backend {
-            self.s3_upload_index(s3_client, &data.name)
-                .context("failed to upload fixed index to s3 backend")?;
-            self.log(format!(
-                "Uploaded fixed index file to object store: {}",
-                data.name
-            ))
-        }
+        state.file_counter += 1;
+        state.backup_size += size;
+        state.backup_stat = state.backup_stat + upload_stat;
 
         self.log_upload_stat(
-            &data.name,
+            &writer_name,
             &expected_csum,
             &uuid,
             size,
             chunk_count,
-            &data.upload_stat,
+            &upload_stat,
         );
 
-        state.file_counter += 1;
-        state.backup_size += size;
-        state.backup_stat = state.backup_stat + data.upload_stat;
+        // never hold mutex guard during s3 upload due to possible deadlocks
+        drop(state);
+
+        // For S3 backends, upload the index file to the object store after closing
+        if let DatastoreBackend::S3(s3_client) = &self.backend {
+            self.s3_upload_index(s3_client, &writer_name)
+                .context("failed to upload fixed index to s3 backend")?;
+            self.log(format!(
+                "Uploaded fixed index file to object store: {writer_name}"
+            ))
+        }
+
+        let mut state = self.state.lock().unwrap();
+        if state.fixed_writers.remove(&wid).is_none() {
+            bail!("fixed writer '{wid}' no longer registered");
+        }
 
         Ok(())
     }
-- 
2.47.3
* [pbs-devel] [PATCH proxmox-backup v2 2/2] api: backup: never hold mutex guard when doing manifest update

From: Christian Ebner @ 2025-09-26  8:42 UTC
To: pbs-devel

A manifest update with the s3 backend calls async code, which must not
happen while holding the mutex guard because of possible deadlocks [0].
Therefore, perform all changes on the shared backup state and drop the
guard before updating the manifest (which performs the backend specific
update), reacquiring it again afterwards to ensure the fs sync level.

To still guarantee consistency, replace the finished flag with an enum
that has a new transient `Finishing` state, which makes it possible to
distinguish the three different backup states.

[0] https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/api2/backup/environment.rs | 48 +++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 15 deletions(-)

diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index f997c86a1..de6ce3c89 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -80,8 +80,15 @@ struct FixedWriterState {
 // key=digest, value=length
 type KnownChunksMap = HashMap<[u8; 32], u32>;
 
+#[derive(PartialEq)]
+enum BackupState {
+    Active,
+    Finishing,
+    Finished,
+}
+
 struct SharedBackupState {
-    finished: bool,
+    finished: BackupState,
     uid_counter: usize,
     file_counter: usize, // successfully uploaded files
     dynamic_writers: HashMap<usize, DynamicWriterState>,
@@ -92,12 +99,13 @@
 }
 
 impl SharedBackupState {
-    // Raise error if finished flag is set
+    // Raise error if the backup is no longer in an active state.
     fn ensure_unfinished(&self) -> Result<(), Error> {
-        if self.finished {
-            bail!("backup already marked as finished.");
+        match self.finished {
+            BackupState::Active => Ok(()),
+            BackupState::Finishing => bail!("backup is already in the process of finishing."),
+            BackupState::Finished => bail!("backup already marked as finished."),
         }
-        Ok(())
     }
 
     // Get an unique integer ID
@@ -134,7 +142,7 @@ impl BackupEnvironment {
         no_cache: bool,
     ) -> Result<Self, Error> {
         let state = SharedBackupState {
-            finished: false,
+            finished: BackupState::Active,
             uid_counter: 0,
             file_counter: 0,
             dynamic_writers: HashMap::new(),
@@ -712,18 +720,29 @@ impl BackupEnvironment {
             }
         }
 
-        // check for valid manifest and store stats
         let stats = serde_json::to_value(state.backup_stat)?;
+
+        // make sure no other api calls can modify the backup state anymore
+        state.finished = BackupState::Finishing;
+
+        // never hold mutex guard during s3 upload due to possible deadlocks
+        drop(state);
+
+        // check for valid manifest and store stats
         self.backup_dir
             .update_manifest(&self.backend, |manifest| {
                 manifest.unprotected["chunk_upload_stats"] = stats;
             })
             .map_err(|err| format_err!("unable to update manifest blob - {err}"))?;
 
+        let mut state = self.state.lock().unwrap();
+        if state.finished != BackupState::Finishing {
+            bail!("backup not in finishing state after manifest update");
+        }
         self.datastore.try_ensure_sync_level()?;
 
         // marks the backup as successful
-        state.finished = true;
+        state.finished = BackupState::Finished;
 
         Ok(())
     }
@@ -800,25 +819,24 @@ impl BackupEnvironment {
         self.formatter.format_result(result, self)
     }
 
-    /// Raise error if finished flag is not set
+    /// Raise error if finished state is not set
     pub fn ensure_finished(&self) -> Result<(), Error> {
-        let state = self.state.lock().unwrap();
-        if !state.finished {
-            bail!("backup ended but finished flag is not set.");
+        if !self.finished() {
+            bail!("backup ended but finished state is not set.");
         }
         Ok(())
     }
 
-    /// Return true if the finished flag is set
+    /// Return true if the finished state is set
     pub fn finished(&self) -> bool {
         let state = self.state.lock().unwrap();
-        state.finished
+        state.finished == BackupState::Finished
     }
 
     /// Remove complete backup
     pub fn remove_backup(&self) -> Result<(), Error> {
         let mut state = self.state.lock().unwrap();
-        state.finished = true;
+        state.finished = BackupState::Finished;
 
         self.datastore.remove_backup_dir(
             self.backup_dir.backup_ns(),
-- 
2.47.3
* Re: [pbs-devel] [PATCH proxmox-backup v2 0/2] fix #6750: fix possible deadlock for s3 backed datastore backups

From: Fabian Grünbichler @ 2025-09-26 10:26 UTC
To: Proxmox Backup Server development discussion

On September 26, 2025 10:42 am, Christian Ebner wrote:
> These patches aim to fix a deadlock which can occur during backup
> jobs to datastores backed by S3 backend. The deadlock most likely is
> caused by the mutex guard for the backup shared state being held
> while entering the tokio::task::block_in_place context and executing
> async code, which however can lead to deadlocks as described in [0].
>
> Therefore, these patches avoid holding the mutex guard for the shared
> backup state while performing the s3 backend operations, by
> prematurely dropping it. To avoid inconsistencies, introduce flags
> to keep track of the index writers closing state and add a transient
> `Finishing` state to be entered during manifest updates.
>
> Changes since version 1 (thanks @Fabian):
> - Use the shared backup state's writers in addition with a closed flag
>   instead of counting active backend operations.
> - Replace finished flag with BackupState enum to introduce the new,
>   transient `Finishing` state to be entered during manifest updates.
> - Add missing checks and refactor code to the now mutable reference when
>   accessing the shared backup state in the respective close calls.

this looks a lot better!

but I think we both missed one more problematic code path:

- env.remove_backup() (sync)
-- locks state
-- calls pbs_datastore::datastore::remove_backup() (sync)
--- calls pbs_datastore::backup_info::BackupDir::destroy (sync)
---- calls proxmox_async_runtime::block_on(s3_client.delete_objects_by_prefix)

this one is only called in mod.rs *after* the backup session processing
is completed, I am not even sure why we call into the env there (all we
do with it is set the state to finished, but that has no effect at that
point anymore AFAICT?)

maybe we should just move the remove_backup fn from the env to mod.rs
and drop the state update from it?

> [0] https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use
>
> Link to the bugtracker issue:
> https://bugzilla.proxmox.com/show_bug.cgi?id=6750
>
> Another report in the community forum:
> https://forum.proxmox.com/threads/171422/
>
> proxmox-backup:
>
> Christian Ebner (2):
>   fix #6750: api: avoid possible deadlock on datastores with s3 backend
>   api: backup: never hold mutex guard when doing manifest update
>
>  src/api2/backup/environment.rs | 169 +++++++++++++++++++++++----------
>  1 file changed, 120 insertions(+), 49 deletions(-)
>
> Summary over all repositories:
>   1 files changed, 120 insertions(+), 49 deletions(-)
>
> --
> Generated by git-murpp 0.8.1
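A rough sketch of the call chain outlined above and of the suggested move
(hypothetical stand-in names rather than the actual proxmox-backup functions;
a multi-threaded tokio runtime is assumed):

use std::sync::Mutex;

struct SharedBackupState {
    finished: bool,
}

// Stand-in for s3_client.delete_objects_by_prefix().
async fn delete_objects_by_prefix(_prefix: &str) {}

// Stand-in for the sync removal path (BackupDir::destroy) that internally
// blocks on the async S3 delete.
fn destroy_backup_dir(prefix: &str) {
    tokio::task::block_in_place(|| {
        tokio::runtime::Handle::current().block_on(delete_objects_by_prefix(prefix));
    });
}

// Problematic shape: env.remove_backup() keeps the state lock across that call.
fn remove_backup_via_env(state: &Mutex<SharedBackupState>) {
    let mut guard = state.lock().unwrap();
    guard.finished = true;
    destroy_backup_dir("ns/vm/100/"); // guard still held here
}

// Suggested direction: call the removal from the request handler without
// touching the shared state, so no guard is held during the S3 call.
fn remove_backup_via_handler() {
    destroy_backup_dir("ns/vm/100/");
}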
* Re: [pbs-devel] [PATCH proxmox-backup v2 0/2] fix #6750: fix possible deadlock for s3 backed datastore backups

From: Christian Ebner @ 2025-09-26 10:35 UTC
To: Fabian Grünbichler, Proxmox Backup Server development discussion

On 9/26/25 12:26 PM, Fabian Grünbichler wrote:
> On September 26, 2025 10:42 am, Christian Ebner wrote:
>> These patches aim to fix a deadlock which can occur during backup
>> jobs to datastores backed by S3 backend. The deadlock most likely is
>> caused by the mutex guard for the backup shared state being held
>> while entering the tokio::task::block_in_place context and executing
>> async code, which however can lead to deadlocks as described in [0].
>>
>> Therefore, these patches avoid holding the mutex guard for the shared
>> backup state while performing the s3 backend operations, by
>> prematurely dropping it. To avoid inconsistencies, introduce flags
>> to keep track of the index writers closing state and add a transient
>> `Finishing` state to be entered during manifest updates.
>>
>> Changes since version 1 (thanks @Fabian):
>> - Use the shared backup state's writers in addition with a closed flag
>>   instead of counting active backend operations.
>> - Replace finished flag with BackupState enum to introduce the new,
>>   transient `Finishing` state to be entered during manifest updates.
>> - Add missing checks and refactor code to the now mutable reference when
>>   accessing the shared backup state in the respective close calls.
>
> this looks a lot better!
>
> but I think we both missed one more problematic code path:
>
> - env.remove_backup() (sync)
> -- locks state
> -- calls pbs_datastore::datastore::remove_backup() (sync)
> --- calls pbs_datastore::backup_info::BackupDir::destroy (sync)
> ---- calls proxmox_async_runtime::block_on(s3_client.delete_objects_by_prefix)

Good catch!

> this one is only called in mod.rs *after* the backup session processing
> is completed, I am not even sure why we call into the env there (all we
> do with it is set the state to finished, but that has no effect at that
> point anymore AFAICT?)

Must double check, but that might be related to allowing the client
connection to disappear without further error?

> maybe we should just move the remove_backup fn from the env to mod.rs
> and drop the state update from it?

Okay, will check what the further implications of that are, thanks!
* Re: [pbs-devel] [PATCH proxmox-backup v2 0/2] fix #6750: fix possible deadlock for s3 backed datastore backups

From: Fabian Grünbichler @ 2025-09-26 10:45 UTC
To: Christian Ebner, Proxmox Backup Server development discussion

On September 26, 2025 12:35 pm, Christian Ebner wrote:
> On 9/26/25 12:26 PM, Fabian Grünbichler wrote:
>> On September 26, 2025 10:42 am, Christian Ebner wrote:
>>> These patches aim to fix a deadlock which can occur during backup
>>> jobs to datastores backed by S3 backend. The deadlock most likely is
>>> caused by the mutex guard for the backup shared state being held
>>> while entering the tokio::task::block_in_place context and executing
>>> async code, which however can lead to deadlocks as described in [0].
>>>
>>> Therefore, these patches avoid holding the mutex guard for the shared
>>> backup state while performing the s3 backend operations, by
>>> prematurely dropping it. To avoid inconsistencies, introduce flags
>>> to keep track of the index writers closing state and add a transient
>>> `Finishing` state to be entered during manifest updates.
>>>
>>> Changes since version 1 (thanks @Fabian):
>>> - Use the shared backup state's writers in addition with a closed flag
>>>   instead of counting active backend operations.
>>> - Replace finished flag with BackupState enum to introduce the new,
>>>   transient `Finishing` state to be entered during manifest updates.
>>> - Add missing checks and refactor code to the now mutable reference when
>>>   accessing the shared backup state in the respective close calls.
>>
>> this looks a lot better!
>>
>> but I think we both missed one more problematic code path:
>>
>> - env.remove_backup() (sync)
>> -- locks state
>> -- calls pbs_datastore::datastore::remove_backup() (sync)
>> --- calls pbs_datastore::backup_info::BackupDir::destroy (sync)
>> ---- calls proxmox_async_runtime::block_on(s3_client.delete_objects_by_prefix)
>
> Good catch!
>
>> this one is only called in mod.rs *after* the backup session processing
>> is completed, I am not even sure why we call into the env there (all we
>> do with it is set the state to finished, but that has no effect at that
>> point anymore AFAICT?)
>
> Must double check, but that might be related to allowing the client
> connection to disappear without further error?

I don't think so, that (ugly hack) happens as part of processing
requests, the removal happens afterwards *based on the result* of that
processing..

>> maybe we should just move the remove_backup fn from the env to mod.rs
>> and drop the state update from it?
>
> Okay, will check what the further implications of that are, thanks!