* [pbs-devel] [PATCH v4 proxmox-backup] client: pxar: fix race in pxar backup stream
@ 2024-12-07 11:07 Christian Ebner
  2025-01-24  8:22 ` Fabian Grünbichler
  2025-01-24 12:48 ` Christian Ebner
  0 siblings, 2 replies; 5+ messages in thread

From: Christian Ebner @ 2024-12-07 11:07 UTC (permalink / raw)
To: pbs-devel

Fixes a race condition where the backup upload stream can miss an
error returned by pxar::create_archive, because the error state is
only set after the backup stream was already polled.

On instantiation, `PxarBackupStream` spawns a future handling the
pxar archive creation, which sends the encoded pxar archive stream
(or streams in case of split archives) through a channel, read
by the pxar backup stream on polling.

In case this channel is closed, as signaled by the receiver returning
an error, the poll logic will propagate any error that occurred during
pxar creation by taking it from the `PxarBackupStream`.

As this error might not have been set just yet, this can lead to
incorrectly terminating a backup snapshot with success, even though an
error occurred.

To fix this, introduce `ArchiverState` to hold a finished flag as well
as the error, and add a notification channel, allowing the archiver
future to signal the waiting stream. As the notification waiter will
block on subsequent polls even if it has already been notified about
the archive creation finish, or it might not have been registered
just yet when the notification was sent out, only block and wait for
notifications if the finished flag in the `ArchiverState` is not set.
If it is set, there is no need to wait for a notification, as the
archiver is certainly finished.

In case of premature termination of the pxar backup stream, no
additional measures have to be taken, as the abort handle already
terminates the archive creation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 3:
- fix a possible deadlock encountered during further testing by
  strictly limiting the archiver state's mutex lock scope.

 pbs-client/src/pxar_backup_stream.rs | 61 +++++++++++++++++++++-------
 1 file changed, 47 insertions(+), 14 deletions(-)

diff --git a/pbs-client/src/pxar_backup_stream.rs b/pbs-client/src/pxar_backup_stream.rs
index 2bfb5cf29..3fb1927d0 100644
--- a/pbs-client/src/pxar_backup_stream.rs
+++ b/pbs-client/src/pxar_backup_stream.rs
@@ -11,6 +11,7 @@ use futures::stream::Stream;
 use nix::dir::Dir;
 use nix::fcntl::OFlag;
 use nix::sys::stat::Mode;
+use tokio::sync::Notify;
 
 use proxmox_async::blocking::TokioWriterAdapter;
 use proxmox_io::StdChannelWriter;
@@ -30,7 +31,13 @@ pub struct PxarBackupStream {
     rx: Option<std::sync::mpsc::Receiver<Result<Vec<u8>, Error>>>,
     pub suggested_boundaries: Option<std::sync::mpsc::Receiver<u64>>,
     handle: Option<AbortHandle>,
-    error: Arc<Mutex<Option<Error>>>,
+    archiver_state: Arc<Mutex<ArchiverState>>,
+    archiver_finished_notification: Arc<Notify>,
+}
+
+struct ArchiverState {
+    finished: bool,
+    error: Option<Error>,
 }
 
 impl Drop for PxarBackupStream {
@@ -78,10 +85,16 @@ impl PxarBackupStream {
             (pxar::PxarVariant::Unified(writer), None, None, None)
         };
 
-        let error = Arc::new(Mutex::new(None));
-        let error2 = Arc::clone(&error);
+        let archiver_state = ArchiverState {
+            finished: false,
+            error: None,
+        };
+        let archiver_state = Arc::new(Mutex::new(archiver_state));
+        let archiver_state2 = Arc::clone(&archiver_state);
+        let pxar_backup_stream_notifier = Arc::new(Notify::new());
+        let archiver_finished_notification = pxar_backup_stream_notifier.clone();
         let handler = async move {
-            if let Err(err) = crate::pxar::create_archive(
+            let result = crate::pxar::create_archive(
                 dir,
                 PxarWriters::new(
                     writer,
@@ -96,10 +109,19 @@ impl PxarBackupStream {
                 boundaries,
                 suggested_boundaries_tx,
             )
-            .await
-            {
-                let mut error = error2.lock().unwrap();
-                *error = Some(err);
+            .await;
+
+            let mut state = archiver_state2.lock().unwrap();
+            state.finished = true;
+            if let Err(err) = result {
+                state.error = Some(err);
+            }
+            drop(state);
+
+            // Notify upload streams that archiver is finished (with or without error)
+            pxar_backup_stream_notifier.notify_one();
+            if separate_payload_stream {
+                pxar_backup_stream_notifier.notify_one();
             }
         };
 
@@ -111,14 +133,16 @@ impl PxarBackupStream {
             rx: Some(rx),
             suggested_boundaries: None,
             handle: Some(handle.clone()),
-            error: Arc::clone(&error),
+            archiver_state: archiver_state.clone(),
+            archiver_finished_notification: archiver_finished_notification.clone(),
         };
 
         let backup_payload_stream = payload_rx.map(|rx| Self {
             rx: Some(rx),
             suggested_boundaries: suggested_boundaries_rx,
             handle: Some(handle),
-            error,
+            archiver_state,
+            archiver_finished_notification,
         });
 
         Ok((backup_stream, backup_payload_stream))
@@ -143,8 +167,8 @@ impl Stream for PxarBackupStream {
     fn poll_next(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Option<Self::Item>> {
         {
             // limit lock scope
-            let mut error = self.error.lock().unwrap();
-            if let Some(err) = error.take() {
+            let mut state = self.archiver_state.lock().unwrap();
+            if let Some(err) = state.error.take() {
                 return Poll::Ready(Some(Err(err)));
             }
         }
@@ -152,8 +176,17 @@ impl Stream for PxarBackupStream {
         match proxmox_async::runtime::block_in_place(|| self.rx.as_ref().unwrap().recv()) {
             Ok(data) => Poll::Ready(Some(data)),
             Err(_) => {
-                let mut error = self.error.lock().unwrap();
-                if let Some(err) = error.take() {
+                // If the archiver did not signal it is finished, wait for finished completion
+                // to avoid potentially miss errors
+                let finished = { self.archiver_state.lock().unwrap().finished };
+                if !finished {
+                    proxmox_async::runtime::block_on(
+                        self.archiver_finished_notification.notified(),
+                    );
+                }
+
+                let error = { self.archiver_state.lock().unwrap().error.take() };
+                if let Some(err) = error {
                     return Poll::Ready(Some(Err(err)));
                 }
                 Poll::Ready(None) // channel closed, no error
-- 
2.39.5


_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
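[Editor's note] The flow the commit message describes — record the error and the finished flag first, then notify; on the stream side, only block if the archiver has not yet signalled completion — can be sketched with std primitives. This is an illustrative stand-in, not the patch's actual code: it uses `std::sync::Condvar` instead of `tokio::sync::Notify` and a `String` in place of the real error type.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// Stand-in for the patch's `ArchiverState` (the real code stores an `Error`).
struct ArchiverState {
    finished: bool,
    error: Option<String>,
}

/// Spawn a fake "archiver" that fails, then poll like the stream does:
/// only block while `finished` is not yet set.
fn wait_for_archiver() -> Option<String> {
    let state = Arc::new((
        Mutex::new(ArchiverState { finished: false, error: None }),
        Condvar::new(),
    ));
    let archiver_state = Arc::clone(&state);

    // Archiver side: record the error and set `finished` *before* notifying,
    // mirroring the ordering in the patch.
    let archiver = thread::spawn(move || {
        let (lock, cvar) = &*archiver_state;
        let mut s = lock.lock().unwrap();
        s.finished = true;
        s.error = Some("create_archive failed".to_string());
        drop(s);
        cvar.notify_one();
    });

    // Stream side: the predicate loop means a notification sent before we
    // started waiting cannot be lost.
    let (lock, cvar) = &*state;
    let mut s = lock.lock().unwrap();
    while !s.finished {
        s = cvar.wait(s).unwrap();
    }
    let err = s.error.take();
    drop(s);
    archiver.join().unwrap();
    err
}

fn main() {
    assert_eq!(wait_for_archiver().as_deref(), Some("create_archive failed"));
    println!("error propagated correctly");
}
```

With a `Condvar`, checking the predicate while holding the mutex makes the finished check and the wait atomic; `Notify::notified()` registers independently of the `Mutex` guarding the state, which is why the patch needs the explicit `finished` flag to avoid a lost wakeup.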
* Re: [pbs-devel] [PATCH v4 proxmox-backup] client: pxar: fix race in pxar backup stream
  2024-12-07 11:07 [pbs-devel] [PATCH v4 proxmox-backup] client: pxar: fix race in pxar backup stream Christian Ebner
@ 2025-01-24  8:22 ` Fabian Grünbichler
  2025-01-24  9:20   ` Christian Ebner
  2025-01-24 12:48 ` Christian Ebner
  1 sibling, 1 reply; 5+ messages in thread

From: Fabian Grünbichler @ 2025-01-24 8:22 UTC (permalink / raw)
To: Proxmox Backup Server development discussion

On December 7, 2024 12:07 pm, Christian Ebner wrote:
> Fixes a race condition where the backup upload stream can miss an
> error returned by pxar::create_archive, because the error state is
> only set after the backup stream was already polled.
>
> On instantiation, `PxarBackupStream` spawns a future handling the
> pxar archive creation, which sends the encoded pxar archive stream
> (or streams in case of split archives) through a channel, read
> by the pxar backup stream on polling.
>
> In case this channel is closed, as signaled by the receiver returning
> an error, the poll logic will propagate any error that occurred during
> pxar creation by taking it from the `PxarBackupStream`.
>
> As this error might not have been set just yet, this can lead to
> incorrectly terminating a backup snapshot with success, even though an
> error occurred.
>
> To fix this, introduce `ArchiverState` to hold a finished flag as well
> as the error, and add a notification channel, allowing the archiver
> future to signal the waiting stream. As the notification waiter will
> block on subsequent polls even if it has already been notified about
> the archive creation finish, or it might not have been registered
> just yet when the notification was sent out, only block and wait for
> notifications if the finished flag in the `ArchiverState` is not set.
> If it is set, there is no need to wait for a notification, as the
> archiver is certainly finished.
>
> In case of premature termination of the pxar backup stream, no
> additional measures have to be taken, as the abort handle already
> terminates the archive creation.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> changes since version 3:
> - fix a possible deadlock encountered during further testing by
>   strictly limiting the archiver state's mutex lock scope.
>
>  pbs-client/src/pxar_backup_stream.rs | 61 +++++++++++++++++++++-------
>  1 file changed, 47 insertions(+), 14 deletions(-)
>
> diff --git a/pbs-client/src/pxar_backup_stream.rs b/pbs-client/src/pxar_backup_stream.rs
> index 2bfb5cf29..3fb1927d0 100644
> --- a/pbs-client/src/pxar_backup_stream.rs
> +++ b/pbs-client/src/pxar_backup_stream.rs
> @@ -11,6 +11,7 @@ use futures::stream::Stream;
>  use nix::dir::Dir;
>  use nix::fcntl::OFlag;
>  use nix::sys::stat::Mode;
> +use tokio::sync::Notify;
>  
>  use proxmox_async::blocking::TokioWriterAdapter;
>  use proxmox_io::StdChannelWriter;
> @@ -30,7 +31,13 @@ pub struct PxarBackupStream {
>      rx: Option<std::sync::mpsc::Receiver<Result<Vec<u8>, Error>>>,
>      pub suggested_boundaries: Option<std::sync::mpsc::Receiver<u64>>,
>      handle: Option<AbortHandle>,
> -    error: Arc<Mutex<Option<Error>>>,
> +    archiver_state: Arc<Mutex<ArchiverState>>,
> +    archiver_finished_notification: Arc<Notify>,

I am not sure I follow this change.. wouldn't just having the error and
the notification be enough?

if we encounter an error during stream processing, we can immediately
abort. if the stream is finished, we check for errors, wait for the
notification, check for errors again?

if we have one Notify per stream, then every stream must either see an
error, or get the notification. no more race (provided any encountered
error is always set before notifying) and no risk for waiting forever
either ;)

> +}
> +
> +struct ArchiverState {
> +    finished: bool,
> +    error: Option<Error>,
>  }
>  
>  impl Drop for PxarBackupStream {
> @@ -78,10 +85,16 @@ impl PxarBackupStream {
>              (pxar::PxarVariant::Unified(writer), None, None, None)
>          };
>  
> -        let error = Arc::new(Mutex::new(None));
> -        let error2 = Arc::clone(&error);
> +        let archiver_state = ArchiverState {
> +            finished: false,
> +            error: None,
> +        };
> +        let archiver_state = Arc::new(Mutex::new(archiver_state));
> +        let archiver_state2 = Arc::clone(&archiver_state);
> +        let pxar_backup_stream_notifier = Arc::new(Notify::new());
> +        let archiver_finished_notification = pxar_backup_stream_notifier.clone();
>          let handler = async move {
> -            if let Err(err) = crate::pxar::create_archive(
> +            let result = crate::pxar::create_archive(
>                  dir,
>                  PxarWriters::new(
>                      writer,
> @@ -96,10 +109,19 @@ impl PxarBackupStream {
>                  boundaries,
>                  suggested_boundaries_tx,
>              )
> -            .await
> -            {
> -                let mut error = error2.lock().unwrap();
> -                *error = Some(err);
> +            .await;
> +
> +            let mut state = archiver_state2.lock().unwrap();
> +            state.finished = true;
> +            if let Err(err) = result {
> +                state.error = Some(err);
> +            }
> +            drop(state);
> +
> +            // Notify upload streams that archiver is finished (with or without error)
> +            pxar_backup_stream_notifier.notify_one();
> +            if separate_payload_stream {
> +                pxar_backup_stream_notifier.notify_one();

this uses the same Notify, but that only holds a single permit, so isn't
this still racy? (see below)

>              }
>          };
>  
> @@ -111,14 +133,16 @@ impl PxarBackupStream {
>              rx: Some(rx),
>              suggested_boundaries: None,
>              handle: Some(handle.clone()),
> -            error: Arc::clone(&error),
> +            archiver_state: archiver_state.clone(),
> +            archiver_finished_notification: archiver_finished_notification.clone(),
>          };
>  
>          let backup_payload_stream = payload_rx.map(|rx| Self {
>              rx: Some(rx),
>              suggested_boundaries: suggested_boundaries_rx,
>              handle: Some(handle),
> -            error,
> +            archiver_state,
> +            archiver_finished_notification,
>          });
>  
>          Ok((backup_stream, backup_payload_stream))
> @@ -143,8 +167,8 @@ impl Stream for PxarBackupStream {
>      fn poll_next(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Option<Self::Item>> {
>          {
>              // limit lock scope
> -            let mut error = self.error.lock().unwrap();
> -            if let Some(err) = error.take() {
> +            let mut state = self.archiver_state.lock().unwrap();
> +            if let Some(err) = state.error.take() {
>                  return Poll::Ready(Some(Err(err)));
>              }
>          }
> @@ -152,8 +176,17 @@ impl Stream for PxarBackupStream {
>          match proxmox_async::runtime::block_in_place(|| self.rx.as_ref().unwrap().recv()) {
>              Ok(data) => Poll::Ready(Some(data)),
>              Err(_) => {
> -                let mut error = self.error.lock().unwrap();
> -                if let Some(err) = error.take() {
> +                // If the archiver did not signal it is finished, wait for finished completion
> +                // to avoid potentially miss errors
> +                let finished = { self.archiver_state.lock().unwrap().finished };
> +                if !finished {
> +                    proxmox_async::runtime::block_on(
> +                        self.archiver_finished_notification.notified(),

if you are unlucky, you end up here but the execution pattern is like this

A = archiver
S1 = stream one
S2 = stream two

S1 sees not finished
S2 sees not finished
A sets finished
A notifies
A notifies again
S1 sees notification (consuming both notifications, as there is only one
stored in Notify)
S2 waits forever

?

might not happen in practice because it "always" ends up doing this:

S1 sees not finished
S2 sees not finished
S1 blocks waiting for notifications
S2 blocks waiting for notifications
A sets finished
A notifies
S1 and S2 get notified and proceed
A notifies again (has no effect)
..

but that is just luck ;)

> +                    );
> +                }
> +
> +                let error = { self.archiver_state.lock().unwrap().error.take() };
> +                if let Some(err) = error {
>                      return Poll::Ready(Some(Err(err)));
>                  }
>                  Poll::Ready(None) // channel closed, no error
> --
> 2.39.5
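[Editor's note] The race sketched above can be made concrete with a toy notifier that, like `tokio::sync::Notify::notify_one`, stores at most one permit no matter how often it is called before anyone waits. This is a simplified stdlib model for illustration, not tokio's actual implementation:

```rust
use std::sync::{Condvar, Mutex};

/// Toy stand-in for `tokio::sync::Notify`: `notify_one` stores at most ONE
/// permit, however many times it is called before a waiter registers.
struct ToyNotify {
    permit: Mutex<bool>,
    cvar: Condvar,
}

impl ToyNotify {
    fn new() -> Self {
        ToyNotify { permit: Mutex::new(false), cvar: Condvar::new() }
    }

    fn notify_one(&self) {
        *self.permit.lock().unwrap() = true; // a second call is absorbed
        self.cvar.notify_one();
    }

    /// Consume a stored permit, or block until one arrives.
    fn notified(&self) {
        let mut permit = self.permit.lock().unwrap();
        while !*permit {
            permit = self.cvar.wait(permit).unwrap();
        }
        *permit = false;
    }

    /// Non-blocking check, used below to show the second waiter would hang.
    fn try_notified(&self) -> bool {
        let mut permit = self.permit.lock().unwrap();
        if *permit { *permit = false; true } else { false }
    }
}

fn main() {
    let n = ToyNotify::new();
    // Archiver notifies twice before either stream starts waiting:
    n.notify_one();
    n.notify_one(); // coalesced into the single stored permit
    // Stream 1 consumes the only permit...
    n.notified();
    // ...so stream 2 finds none and, in the real code, would block forever.
    assert!(!n.try_notified());
    println!("second waiter would wait forever");
}
```

This is exactly the "A notifies, A notifies again, S1 consumes both" schedule: two `notify_one` calls on one notifier do not guarantee two wakeups.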
* Re: [pbs-devel] [PATCH v4 proxmox-backup] client: pxar: fix race in pxar backup stream
  2025-01-24  8:22 ` Fabian Grünbichler
@ 2025-01-24  9:20   ` Christian Ebner
  2025-01-24 10:21     ` Christian Ebner
  0 siblings, 1 reply; 5+ messages in thread

From: Christian Ebner @ 2025-01-24 9:20 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler

On 1/24/25 09:22, Fabian Grünbichler wrote:
> On December 7, 2024 12:07 pm, Christian Ebner wrote:
>> Fixes a race condition where the backup upload stream can miss an
>> error returned by pxar::create_archive, because the error state is
>> only set after the backup stream was already polled.
>>
>> On instantiation, `PxarBackupStream` spawns a future handling the
>> pxar archive creation, which sends the encoded pxar archive stream
>> (or streams in case of split archives) through a channel, read
>> by the pxar backup stream on polling.
>>
>> In case this channel is closed, as signaled by the receiver returning
>> an error, the poll logic will propagate any error that occurred during
>> pxar creation by taking it from the `PxarBackupStream`.
>>
>> As this error might not have been set just yet, this can lead to
>> incorrectly terminating a backup snapshot with success, even though an
>> error occurred.
>>
>> To fix this, introduce `ArchiverState` to hold a finished flag as well
>> as the error, and add a notification channel, allowing the archiver
>> future to signal the waiting stream. As the notification waiter will
>> block on subsequent polls even if it has already been notified about
>> the archive creation finish, or it might not have been registered
>> just yet when the notification was sent out, only block and wait for
>> notifications if the finished flag in the `ArchiverState` is not set.
>> If it is set, there is no need to wait for a notification, as the
>> archiver is certainly finished.
>>
>> In case of premature termination of the pxar backup stream, no
>> additional measures have to be taken, as the abort handle already
>> terminates the archive creation.
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> changes since version 3:
>> - fix a possible deadlock encountered during further testing by
>>   strictly limiting the archiver state's mutex lock scope.
>>
>>  pbs-client/src/pxar_backup_stream.rs | 61 +++++++++++++++++++++-------
>>  1 file changed, 47 insertions(+), 14 deletions(-)
>>
>> diff --git a/pbs-client/src/pxar_backup_stream.rs b/pbs-client/src/pxar_backup_stream.rs
>> index 2bfb5cf29..3fb1927d0 100644
>> --- a/pbs-client/src/pxar_backup_stream.rs
>> +++ b/pbs-client/src/pxar_backup_stream.rs
>> @@ -11,6 +11,7 @@ use futures::stream::Stream;
>>  use nix::dir::Dir;
>>  use nix::fcntl::OFlag;
>>  use nix::sys::stat::Mode;
>> +use tokio::sync::Notify;
>>  
>>  use proxmox_async::blocking::TokioWriterAdapter;
>>  use proxmox_io::StdChannelWriter;
>> @@ -30,7 +31,13 @@ pub struct PxarBackupStream {
>>      rx: Option<std::sync::mpsc::Receiver<Result<Vec<u8>, Error>>>,
>>      pub suggested_boundaries: Option<std::sync::mpsc::Receiver<u64>>,
>>      handle: Option<AbortHandle>,
>> -    error: Arc<Mutex<Option<Error>>>,
>> +    archiver_state: Arc<Mutex<ArchiverState>>,
>> +    archiver_finished_notification: Arc<Notify>,
>
> I am not sure I follow this change.. wouldn't just having the error and
> the notification be enough?

If I recall correctly, the issue here was that one stream can block
forever without this in the case of split pxar archives.
The reason being that it will not see notifications already sent out
by the archiver before the stream registered to receive notifications.
So by setting the finished flag in the state, one can avoid even
registering and blocking forever.

>
> if we encounter an error during stream processing, we can immediately
> abort. if the stream is finished, we check for errors, wait for the
> notification, check for errors again?
>
> if we have one Notify per stream, then every stream must either see an
> error, or get the notification. no more race (provided any encountered
> error is always set before notifying) and no risk for waiting forever
> either ;)

As stated above, the archiver might send out the finished notification
before the stream registers to be notified, never getting any
notification and blocking forever.

>
>> +}
>> +
>> +struct ArchiverState {
>> +    finished: bool,
>> +    error: Option<Error>,
>>  }
>>  
>>  impl Drop for PxarBackupStream {
>> @@ -78,10 +85,16 @@ impl PxarBackupStream {
>>              (pxar::PxarVariant::Unified(writer), None, None, None)
>>          };
>>  
>> -        let error = Arc::new(Mutex::new(None));
>> -        let error2 = Arc::clone(&error);
>> +        let archiver_state = ArchiverState {
>> +            finished: false,
>> +            error: None,
>> +        };
>> +        let archiver_state = Arc::new(Mutex::new(archiver_state));
>> +        let archiver_state2 = Arc::clone(&archiver_state);
>> +        let pxar_backup_stream_notifier = Arc::new(Notify::new());
>> +        let archiver_finished_notification = pxar_backup_stream_notifier.clone();
>>          let handler = async move {
>> -            if let Err(err) = crate::pxar::create_archive(
>> +            let result = crate::pxar::create_archive(
>>                  dir,
>>                  PxarWriters::new(
>>                      writer,
>> @@ -96,10 +109,19 @@ impl PxarBackupStream {
>>                  boundaries,
>>                  suggested_boundaries_tx,
>>              )
>> -            .await
>> -            {
>> -                let mut error = error2.lock().unwrap();
>> -                *error = Some(err);
>> +            .await;
>> +
>> +            let mut state = archiver_state2.lock().unwrap();
>> +            state.finished = true;
>> +            if let Err(err) = result {
>> +                state.error = Some(err);
>> +            }
>> +            drop(state);
>> +
>> +            // Notify upload streams that archiver is finished (with or without error)
>> +            pxar_backup_stream_notifier.notify_one();
>> +            if separate_payload_stream {
>> +                pxar_backup_stream_notifier.notify_one();
>
> this uses the same Notify, but that only holds a single permit, so isn't
> this still racy? (see below)

Not sure on this one, must rethink this. But if I recall, this has once
again to do with the fact that the receiver might not yet be blocking to
receive, so it must get the notification. Otherwise it will block, as
the permit is only for `next` calls on already waiting notification
receivers.

>
>>              }
>>          };
>>  
>> @@ -111,14 +133,16 @@ impl PxarBackupStream {
>>              rx: Some(rx),
>>              suggested_boundaries: None,
>>              handle: Some(handle.clone()),
>> -            error: Arc::clone(&error),
>> +            archiver_state: archiver_state.clone(),
>> +            archiver_finished_notification: archiver_finished_notification.clone(),
>>          };
>>  
>>          let backup_payload_stream = payload_rx.map(|rx| Self {
>>              rx: Some(rx),
>>              suggested_boundaries: suggested_boundaries_rx,
>>              handle: Some(handle),
>> -            error,
>> +            archiver_state,
>> +            archiver_finished_notification,
>>          });
>>  
>>          Ok((backup_stream, backup_payload_stream))
>> @@ -143,8 +167,8 @@ impl Stream for PxarBackupStream {
>>      fn poll_next(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Option<Self::Item>> {
>>          {
>>              // limit lock scope
>> -            let mut error = self.error.lock().unwrap();
>> -            if let Some(err) = error.take() {
>> +            let mut state = self.archiver_state.lock().unwrap();
>> +            if let Some(err) = state.error.take() {
>>                  return Poll::Ready(Some(Err(err)));
>>              }
>>          }
>> @@ -152,8 +176,17 @@ impl Stream for PxarBackupStream {
>>          match proxmox_async::runtime::block_in_place(|| self.rx.as_ref().unwrap().recv()) {
>>              Ok(data) => Poll::Ready(Some(data)),
>>              Err(_) => {
>> -                let mut error = self.error.lock().unwrap();
>> -                if let Some(err) = error.take() {
>> +                // If the archiver did not signal it is finished, wait for finished completion
>> +                // to avoid potentially miss errors
>> +                let finished = { self.archiver_state.lock().unwrap().finished };
>> +                if !finished {
>> +                    proxmox_async::runtime::block_on(
>> +                        self.archiver_finished_notification.notified(),
>
> if you are unlucky, you end up here but the execution pattern is like this
>
> A = archiver
> S1 = stream one
> S2 = stream two
>
> S1 sees not finished
> S2 sees not finished
> A sets finished
> A notifies
> A notifies again
> S1 sees notification (consuming both notifications, as there is only one
> stored in Notify)
> S2 waits forever

Why, it should get the permit on the `next` call in that case? Or am I
wrong about that?

>
> ?
>
> might not happen in practice because it "always" ends up doing this:
>
> S1 sees not finished
> S2 sees not finished
> S1 blocks waiting for notifications
> S2 blocks waiting for notifications
> A sets finished
> A notifies
> S1 and S2 get notified and proceed
> A notifies again (has no effect)
> ..
>
> but that is just luck ;)
>
>> +                    );
>> +                }
>> +
>> +                let error = { self.archiver_state.lock().unwrap().error.take() };
>> +                if let Some(err) = error {
>>                      return Poll::Ready(Some(Err(err)));
>>                  }
>>                  Poll::Ready(None) // channel closed, no error
>> --
>> 2.39.5
* Re: [pbs-devel] [PATCH v4 proxmox-backup] client: pxar: fix race in pxar backup stream
  2025-01-24  9:20 ` Christian Ebner
@ 2025-01-24 10:21   ` Christian Ebner
  0 siblings, 0 replies; 5+ messages in thread

From: Christian Ebner @ 2025-01-24 10:21 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler

On 1/24/25 10:20, Christian Ebner wrote:
> On 1/24/25 09:22, Fabian Grünbichler wrote:
>> On December 7, 2024 12:07 pm, Christian Ebner wrote:
>>> Fixes a race condition where the backup upload stream can miss an
>>> error returned by pxar::create_archive, because the error state is
>>> only set after the backup stream was already polled.
>>>
>>> On instantiation, `PxarBackupStream` spawns a future handling the
>>> pxar archive creation, which sends the encoded pxar archive stream
>>> (or streams in case of split archives) through a channel, read
>>> by the pxar backup stream on polling.
>>>
>>> In case this channel is closed, as signaled by the receiver returning
>>> an error, the poll logic will propagate any error that occurred during
>>> pxar creation by taking it from the `PxarBackupStream`.
>>>
>>> As this error might not have been set just yet, this can lead to
>>> incorrectly terminating a backup snapshot with success, even though an
>>> error occurred.
>>>
>>> To fix this, introduce `ArchiverState` to hold a finished flag as well
>>> as the error, and add a notification channel, allowing the archiver
>>> future to signal the waiting stream. As the notification waiter will
>>> block on subsequent polls even if it has already been notified about
>>> the archive creation finish, or it might not have been registered
>>> just yet when the notification was sent out, only block and wait for
>>> notifications if the finished flag in the `ArchiverState` is not set.
>>> If it is set, there is no need to wait for a notification, as the
>>> archiver is certainly finished.
>>>
>>> In case of premature termination of the pxar backup stream, no
>>> additional measures have to be taken, as the abort handle already
>>> terminates the archive creation.
>>>
>>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>>> ---
>>> changes since version 3:
>>> - fix a possible deadlock encountered during further testing by
>>>   strictly limiting the archiver state's mutex lock scope.
>>>
>>>  pbs-client/src/pxar_backup_stream.rs | 61 +++++++++++++++++++++-------
>>>  1 file changed, 47 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/pbs-client/src/pxar_backup_stream.rs b/pbs-client/src/pxar_backup_stream.rs
>>> index 2bfb5cf29..3fb1927d0 100644
>>> --- a/pbs-client/src/pxar_backup_stream.rs
>>> +++ b/pbs-client/src/pxar_backup_stream.rs
>>> @@ -11,6 +11,7 @@ use futures::stream::Stream;
>>>  use nix::dir::Dir;
>>>  use nix::fcntl::OFlag;
>>>  use nix::sys::stat::Mode;
>>> +use tokio::sync::Notify;
>>>  
>>>  use proxmox_async::blocking::TokioWriterAdapter;
>>>  use proxmox_io::StdChannelWriter;
>>> @@ -30,7 +31,13 @@ pub struct PxarBackupStream {
>>>      rx: Option<std::sync::mpsc::Receiver<Result<Vec<u8>, Error>>>,
>>>      pub suggested_boundaries: Option<std::sync::mpsc::Receiver<u64>>,
>>>      handle: Option<AbortHandle>,
>>> -    error: Arc<Mutex<Option<Error>>>,
>>> +    archiver_state: Arc<Mutex<ArchiverState>>,
>>> +    archiver_finished_notification: Arc<Notify>,
>>
>> I am not sure I follow this change.. wouldn't just having the error and
>> the notification be enough?
>
> If I recall correctly, the issue here was that one stream can block
> forever without this in the case of split pxar archives.
> The reason being that it will not see notifications already sent out
> by the archiver before the stream registered to receive notifications.
> So by setting the finished flag in the state, one can avoid even
> registering and blocking forever.
>
>>
>> if we encounter an error during stream processing, we can immediately
>> abort. if the stream is finished, we check for errors, wait for the
>> notification, check for errors again?
>>
>> if we have one Notify per stream, then every stream must either see an
>> error, or get the notification. no more race (provided any encountered
>> error is always set before notifying) and no risk for waiting forever
>> either ;)
>
> As stated above, the archiver might send out the finished notification
> before the stream registers to be notified, never getting any
> notification and blocking forever.

Ah no, I see you are right. My statements above are only true for
`notify_waiters`
https://docs.rs/tokio/latest/tokio/sync/struct.Notify.html#method.notify_waiters,
which only ever notifies already waiting tasks... `notify_one` indeed
stores the permit, and the next task asking to be notified will either
get the permit or wait for the notification to be sent.

So this indeed needs 2 different `Notify` instances for the 2 different
streams to be notified, and does not need the `finished` flag at all, as
you correctly stated.

Will send a new version for this, thanks!
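[Editor's note] The conclusion reached here — one `Notify` per stream, relying on `notify_one` storing a single permit — can be sketched with a bounded stdlib channel of capacity 1 standing in for each `Notify`: a send before the receiver waits is stored as the single permit, and a duplicate notification is simply absorbed. The names below are illustrative, not from the patch.

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TrySendError};

/// One "Notify" per stream, modelled as a bounded channel of capacity 1:
/// a send before the receiver waits is kept as the single stored permit.
fn make_notify() -> (SyncSender<()>, Receiver<()>) {
    sync_channel(1)
}

fn notify(tx: &SyncSender<()>) {
    // A duplicate notification is absorbed, matching `Notify::notify_one`,
    // which never stores more than one permit.
    match tx.try_send(()) {
        Ok(()) | Err(TrySendError::Full(())) => {}
        Err(e) => panic!("receiver gone: {e}"),
    }
}

fn main() {
    let (tx1, rx1) = make_notify();
    let (tx2, rx2) = make_notify();

    // Archiver finishes before either stream starts waiting, and notifies
    // each stream through its own notifier:
    notify(&tx1);
    notify(&tx2);

    // Each stream later finds its own stored permit; neither can starve:
    rx1.recv().unwrap();
    rx2.recv().unwrap();
    println!("both streams woke up");
}
```

With a dedicated notifier per stream there is no permit for one waiter to steal from the other, which is why the `finished` flag becomes unnecessary.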
* Re: [pbs-devel] [PATCH v4 proxmox-backup] client: pxar: fix race in pxar backup stream
  2024-12-07 11:07 [pbs-devel] [PATCH v4 proxmox-backup] client: pxar: fix race in pxar backup stream Christian Ebner
  2025-01-24  8:22 ` Fabian Grünbichler
@ 2025-01-24 12:48 ` Christian Ebner
  1 sibling, 0 replies; 5+ messages in thread

From: Christian Ebner @ 2025-01-24 12:48 UTC (permalink / raw)
To: pbs-devel

superseded-by version 5:
https://lore.proxmox.com/pbs-devel/20250124124635.291858-1-c.ebner@proxmox.com/T/
end of thread, other threads: [~2025-01-24 12:48 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-12-07 11:07 [pbs-devel] [PATCH v4 proxmox-backup] client: pxar: fix race in pxar backup stream Christian Ebner
2025-01-24  8:22 ` Fabian Grünbichler
2025-01-24  9:20   ` Christian Ebner
2025-01-24 10:21     ` Christian Ebner
2025-01-24 12:48 ` Christian Ebner