From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
	by lore.proxmox.com (Postfix) with ESMTPS id D3EF51FF179
	for ; Wed, 15 Oct 2025 18:40:36 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
	by firstgate.proxmox.com (Proxmox) with ESMTP id 233D0B0F;
	Wed, 15 Oct 2025 18:40:55 +0200 (CEST)
From: Christian Ebner
To: pbs-devel@lists.proxmox.com
Date: Wed, 15 Oct 2025 18:40:01 +0200
Message-ID: <20251015164008.975591-4-c.ebner@proxmox.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20251015164008.975591-1-c.ebner@proxmox.com>
References: <20251015164008.975591-1-c.ebner@proxmox.com>
MIME-Version: 1.0
Subject: [pbs-devel] [PATCH proxmox-backup v3 1/8] api/pull: avoid failing on concurrent conditional chunk uploads
X-BeenThere: pbs-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox Backup Server development discussion
Reply-To: Proxmox Backup Server development discussion
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: pbs-devel-bounces@lists.proxmox.com
Sender: "pbs-devel"

Chunks are currently uploaded conditionally by setting the
`If-None-Match` header on the put request (unless disabled by the
provider quirks). In that case, an upload to the s3 backend while a
concurrent upload to the same object is ongoing causes the request to
fail with http status code 409 [0]. While a retry logic with backoff
time is used, the concurrent upload might still not be finished once
the retries are exhausted.

Therefore, use the `upload_replace_on_final_retry` method instead,
which does not set the `If-None-Match` header on the last retry,
effectively re-uploading the object in that case. While it is not
specified which of the concurrent uploads will then become the
resulting object version, this is still fine, as chunks with the same
digest encode the same data (modulo compression).
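As a rough sketch of the retry strategy described above (this is not the actual proxmox S3 client API; the function, the `PutOutcome` type, and the closure-based `put` parameter are invented here for illustration), the idea is to keep the `If-None-Match` condition on all but the last attempt, and drop it on the final attempt so a lingering concurrent upload cannot exhaust the retries:

```rust
use std::time::Duration;

// Outcome of a single simulated PUT. In the real client the server
// reports whether the object already existed; we model that here.
#[derive(Debug, PartialEq)]
enum PutOutcome {
    Uploaded,
    Duplicate, // conditional put skipped: object already present
}

// Hypothetical helper mirroring the described behavior: `put(conditional)`
// models one PUT request, where `conditional == true` means the
// `If-None-Match` header is set. A 409 on a non-final attempt triggers a
// backoff and retry; the final attempt is unconditional.
fn upload_replace_on_final_retry<F>(mut put: F, retries: u32) -> Result<PutOutcome, u16>
where
    F: FnMut(bool) -> Result<PutOutcome, u16>,
{
    assert!(retries > 0, "at least one attempt is required");
    for attempt in 0..retries {
        let last = attempt + 1 == retries;
        // Drop the If-None-Match condition on the last retry, effectively
        // re-uploading (replacing) the object in that case.
        match put(!last) {
            Ok(outcome) => return Ok(outcome),
            Err(409) if !last => {
                // Backoff before the next attempt (kept short for the sketch).
                std::thread::sleep(Duration::from_millis(10 * (attempt as u64 + 1)));
            }
            Err(code) => return Err(code),
        }
    }
    unreachable!("loop returns on the final attempt")
}

fn main() {
    // Simulate a concurrent upload that holds the object for the whole
    // retry window: every conditional PUT fails with 409, but the final,
    // unconditional PUT succeeds.
    let mut calls = 0;
    let result = upload_replace_on_final_retry(
        |conditional| {
            calls += 1;
            if conditional {
                Err(409)
            } else {
                Ok(PutOutcome::Uploaded)
            }
        },
        3,
    );
    assert_eq!(result, Ok(PutOutcome::Uploaded));
    assert_eq!(calls, 3);
    println!("upload succeeded after {calls} attempts");
}
```

Re-uploading on the final attempt is safe for this workload because chunk objects are content-addressed: any two uploads to the same digest carry the same data, so whichever version wins is equivalent.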
[0] https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_RequestSyntax

Signed-off-by: Christian Ebner
---
 src/api2/backup/upload_chunk.rs | 4 ++--
 src/server/pull.rs              | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/api2/backup/upload_chunk.rs b/src/api2/backup/upload_chunk.rs
index 8dd7e4d52..64e8d6e63 100644
--- a/src/api2/backup/upload_chunk.rs
+++ b/src/api2/backup/upload_chunk.rs
@@ -263,7 +263,7 @@ async fn upload_to_backend(
     if env.no_cache {
         let object_key = pbs_datastore::s3::object_key_from_digest(&digest)?;
         let is_duplicate = s3_client
-            .upload_no_replace_with_retry(object_key, data)
+            .upload_replace_on_final_retry(object_key, data)
             .await
             .map_err(|err| format_err!("failed to upload chunk to s3 backend - {err:#}"))?;
         return Ok((digest, size, encoded_size, is_duplicate));
@@ -287,7 +287,7 @@ async fn upload_to_backend(
             tracing::info!("Upload of new chunk {}", hex::encode(digest));
             let object_key = pbs_datastore::s3::object_key_from_digest(&digest)?;
             let is_duplicate = s3_client
-                .upload_no_replace_with_retry(object_key, data.clone())
+                .upload_replace_on_final_retry(object_key, data.clone())
                 .await
                 .map_err(|err| format_err!("failed to upload chunk to s3 backend - {err:#}"))?;

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 817b57ac5..c0b6fef7c 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -181,7 +181,7 @@ async fn pull_index_chunks(
     let upload_data = hyper::body::Bytes::from(data);
     let object_key = pbs_datastore::s3::object_key_from_digest(&digest)?;
     let _is_duplicate = proxmox_async::runtime::block_on(
-        s3_client.upload_no_replace_with_retry(object_key, upload_data),
+        s3_client.upload_replace_on_final_retry(object_key, upload_data),
     )
     .context("failed to upload chunk to s3 backend")?;

-- 
2.47.3

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel