From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox Backup Server development discussion
<pbs-devel@lists.proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox v3 2/2] s3-client: add helper method to force final unconditional upload on
Date: Mon, 27 Oct 2025 15:10:32 +0100
Message-ID: <1761574129.ygaxndq4ku.astroid@yuna.none>
In-Reply-To: <20251015164008.975591-3-c.ebner@proxmox.com>
On October 15, 2025 6:40 pm, Christian Ebner wrote:
> Extend the currently implemented conditional/unconditional upload
> helpers with an additional variant which performs conditional upload
> requests up until the last one. The last one is sent unconditionally,
> without setting the If-None-Match header. The use case for this is to
> not fail in PBS during chunk upload if a concurrent upload of the same
> chunk is in progress and does not finish within the upload retries and
> backoff time.
>
> Which PUT request results in the final object is however not clearly
> specified in that case; the AWS docs mention contradicting behaviour
> [0]. Quotes from different parts of the docs:
>
>> If two PUT requests are simultaneously made to the same key, the
>> request with the latest timestamp wins.
>> [...]
>> Amazon S3 internally uses last-writer-wins semantics to determine
>> which write takes precedence.
>
> [0] https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html#ConsistencyModel
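(context for readers: "conditional" here means the client sets the
`If-None-Match: *` header, so S3 rejects the PUT with 412 if the key
already exists, or with 409 Conflict while a concurrent conditional
write to the same key is still in flight. roughly, as a sketch and not
the actual proxmox-s3-client internals:

    let mut request = http::Request::put(uri).body(body)?;
    if !replace {
        // create-if-absent: every attempt except the (optional) final
        // unconditional one sets this precondition header
        request.headers_mut().insert(
            http::header::IF_NONE_MATCH,
            http::HeaderValue::from_static("*"),
        );
    }

dropping the header turns the upload into a plain last-writer-wins PUT)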
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> proxmox-s3-client/src/client.rs | 32 ++++++++++++++++++++++++++++----
> 1 file changed, 28 insertions(+), 4 deletions(-)
>
> diff --git a/proxmox-s3-client/src/client.rs b/proxmox-s3-client/src/client.rs
> index 4ebd8c4b..fae8a56f 100644
> --- a/proxmox-s3-client/src/client.rs
> +++ b/proxmox-s3-client/src/client.rs
> @@ -684,7 +684,26 @@ impl S3Client {
> object_data: Bytes,
> ) -> Result<bool, Error> {
> let replace = false;
> - self.do_upload_with_retry(object_key, object_data, replace)
> + let finally_replace = false;
> + self.do_upload_with_retry(object_key, object_data, replace, finally_replace)
> + .await
> + }
> +
> + /// Upload the given object via the S3 api, not replacing it if already present in the object
> + /// store. If a conditional upload leads to repeated failures with status code 409, do not set
> + /// the `If-None-Match` header for the final retry.
> + /// Retrying up to 3 times in case of error.
> + ///
> + /// Note: Which object results in the final version is not clearly specified.
> + #[inline(always)]
> + pub async fn upload_replace_on_final_retry(
> + &self,
> + object_key: S3ObjectKey,
> + object_data: Bytes,
> + ) -> Result<bool, Error> {
> + let replace = false;
> + let finally_replace = true;
> + self.do_upload_with_retry(object_key, object_data, replace, finally_replace)
> .await
> }
>
> @@ -697,17 +716,19 @@ impl S3Client {
> object_data: Bytes,
> ) -> Result<bool, Error> {
> let replace = true;
> - self.do_upload_with_retry(object_key, object_data, replace)
> + let finally_replace = false;
> + self.do_upload_with_retry(object_key, object_data, replace, finally_replace)
> .await
> }
>
> /// Helper to perform the object upload and retry, wrapped by the corresponding methods
> - /// to mask the `replace` flag.
> + /// to mask the `replace` and `finally_replace` flag.
> async fn do_upload_with_retry(
> &self,
> object_key: S3ObjectKey,
> object_data: Bytes,
> - replace: bool,
> + mut replace: bool,
> + finally_replace: bool,
> ) -> Result<bool, Error> {
> let content_size = object_data.len() as u64;
> let timeout_secs = content_size
> @@ -719,6 +740,9 @@ impl S3Client {
> let backoff_secs = S3_HTTP_REQUEST_RETRY_BACKOFF_DEFAULT * 3_u32.pow(retry as u32);
> tokio::time::sleep(backoff_secs).await;
> }
> + if retry == MAX_S3_UPLOAD_RETRY - 1 {
> + replace = finally_replace;
> + }
same question here as with the previous patch - the description above
mentions that the finally-replace logic only triggers if all earlier
attempts returned 409.. but here it now happens unconditionally, even
if the retries were caused by other errors?
so either the replace fallback should move into the NeedsRetry handling
below, or the documentation above needs to be adapted to match the code
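
if the former, roughly something like this (just a sketch - the exact
shape and variant names need to match what put_object actually returns,
and whether NeedsRetry is really limited to the 409 case needs checking):

    match self
        .put_object(object_key.clone(), body, timeout, replace)
        .await?
    {
        // arm the final unconditional attempt only when the retry was
        // caused by the failed precondition, not by other errors. set on
        // the second-to-last attempt so the next (final) one runs
        // without If-None-Match
        PutObjectResponse::NeedsRetry if retry == MAX_S3_UPLOAD_RETRY - 2 => {
            replace = finally_replace;
        }
        // all other outcomes keep the existing success/error/backoff
        // handling unchanged
        _ => { /* ... */ }
    }

that would also keep the doc comment above accurate as written.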
> let body = Body::from(object_data.clone());
> match self
> .put_object(object_key.clone(), body, timeout, replace)
> --
> 2.47.3