From: Christian Ebner <c.ebner@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup v2 07/19] api/datastore: move s3 index upload helper to datastore backend
Date: Tue, 4 Nov 2025 14:06:47 +0100 [thread overview]
Message-ID: <20251104130659.435139-8-c.ebner@proxmox.com> (raw)
In-Reply-To: <20251104130659.435139-1-c.ebner@proxmox.com>
Move the S3 index file upload helper into the datastore backend, in an
effort to decouple the API implementation from the backend
implementation and to deduplicate code.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
Changes since version 1:
- not present in previous version
pbs-datastore/src/datastore.rs | 26 ++++++++++++++++++++++++++
src/api2/backup/environment.rs | 32 ++++++++++----------------------
src/server/pull.rs | 14 ++------------
3 files changed, 38 insertions(+), 34 deletions(-)
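For context, the dispatch pattern this patch introduces can be sketched
stand-alone: the upload helper lives on the backend enum and errors out
for variants that do not support the operation. All types and names
below are illustrative stand-ins, not the real proxmox-backup ones.

```rust
// Minimal sketch of the backend-dispatch pattern behind s3_upload_index.
// `DatastoreBackend` and the String "client" are hypothetical stand-ins
// (the real code holds an Arc<S3Client> and does an async upload).

#[derive(Debug)]
enum DatastoreBackend {
    Filesystem,
    S3(String), // stand-in for Arc<S3Client>
}

impl DatastoreBackend {
    /// Uploads the named index file; errors if the backend is not S3.
    fn s3_upload_index(&self, name: &str) -> Result<(), String> {
        match self {
            Self::Filesystem => Err("datastore backend not of type S3".into()),
            Self::S3(bucket) => {
                // The real helper reads the index file from disk and calls
                // s3_client.upload_replace_with_retry(key, contents) here.
                println!("uploading {name} to {bucket}");
                Ok(())
            }
        }
    }
}

fn main() {
    let fs = DatastoreBackend::Filesystem;
    assert!(fs.s3_upload_index("drive-scsi0.img.fidx").is_err());

    let s3 = DatastoreBackend::S3("backup-bucket".into());
    assert!(s3.s3_upload_index("drive-scsi0.img.fidx").is_ok());
}
```

Callers that do have an S3 backend in hand can invoke the method
directly; callers that may hold a filesystem backend need to guard the
call, as the helper deliberately fails rather than silently no-op.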
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 0d738f0ac..343f49f36 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -223,6 +223,32 @@ pub enum DatastoreBackend {
S3(Arc<S3Client>),
}
+impl DatastoreBackend {
+ /// Reads the index file and uploads it to the S3 backend.
+ ///
+ /// Returns an error if the backend variant is not S3.
+ pub async fn s3_upload_index(&self, backup_dir: &BackupDir, name: &str) -> Result<(), Error> {
+ match self {
+ Self::Filesystem => bail!("datastore backend not of type S3"),
+ Self::S3(s3_client) => {
+ let object_key = crate::s3::object_key_from_path(&backup_dir.relative_path(), name)
+ .context("invalid index file object key")?;
+
+ let mut full_path = backup_dir.full_path();
+ full_path.push(name);
+ let data = tokio::fs::read(&full_path)
+ .await
+ .context("failed to read index contents")?;
+ let contents = hyper::body::Bytes::from(data);
+ let _is_duplicate = s3_client
+ .upload_replace_with_retry(object_key, contents)
+ .await?;
+ Ok(())
+ }
+ }
+ }
+}
+
impl DataStore {
// This one just panics on everything
#[doc(hidden)]
diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index 0faf6c8e0..f87d5a89e 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -18,7 +18,6 @@ use pbs_datastore::dynamic_index::DynamicIndexWriter;
use pbs_datastore::fixed_index::FixedIndexWriter;
use pbs_datastore::{DataBlob, DataStore, DatastoreBackend};
use proxmox_rest_server::{formatter::*, WorkerTask};
-use proxmox_s3_client::S3Client;
use crate::backup::VerifyWorker;
@@ -560,9 +559,11 @@ impl BackupEnvironment {
drop(state);
// For S3 backends, upload the index file to the object store after closing
- if let DatastoreBackend::S3(s3_client) = &self.backend {
- self.s3_upload_index(s3_client, &writer_name)
- .context("failed to upload dynamic index to s3 backend")?;
+ if let DatastoreBackend::S3(_) = &self.backend {
+ proxmox_async::runtime::block_on(
+ self.backend.s3_upload_index(&self.backup_dir, &writer_name),
+ )
+ .context("failed to upload dynamic index to s3 backend")?;
self.log(format!(
"Uploaded dynamic index file to s3 backend: {writer_name}"
))
@@ -659,9 +660,11 @@ impl BackupEnvironment {
drop(state);
// For S3 backends, upload the index file to the object store after closing
- if let DatastoreBackend::S3(s3_client) = &self.backend {
- self.s3_upload_index(s3_client, &writer_name)
- .context("failed to upload fixed index to s3 backend")?;
+ if let DatastoreBackend::S3(_) = &self.backend {
+ proxmox_async::runtime::block_on(
+ self.backend.s3_upload_index(&self.backup_dir, &writer_name),
+ )
+ .context("failed to upload fixed index to s3 backend")?;
self.log(format!(
"Uploaded fixed index file to object store: {writer_name}"
))
@@ -842,21 +845,6 @@ impl BackupEnvironment {
let state = self.state.lock().unwrap();
state.finished == BackupState::Finished
}
-
- fn s3_upload_index(&self, s3_client: &S3Client, name: &str) -> Result<(), Error> {
- let object_key =
- pbs_datastore::s3::object_key_from_path(&self.backup_dir.relative_path(), name)
- .context("invalid index file object key")?;
-
- let mut full_path = self.backup_dir.full_path();
- full_path.push(name);
- let data = std::fs::read(&full_path).context("failed to read index contents")?;
- let contents = hyper::body::Bytes::from(data);
- proxmox_async::runtime::block_on(
- s3_client.upload_replace_with_retry(object_key, contents),
- )?;
- Ok(())
- }
}
impl RpcEnvironment for BackupEnvironment {
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 2dcadf972..94b2fbf55 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -359,19 +359,9 @@ async fn pull_single_archive<'a>(
if let Err(err) = std::fs::rename(&tmp_path, &path) {
bail!("Atomic rename file {:?} failed - {}", path, err);
}
- if let DatastoreBackend::S3(s3_client) = backend {
- let object_key =
- pbs_datastore::s3::object_key_from_path(&snapshot.relative_path(), archive_name)
- .context("invalid archive object key")?;
- let data = tokio::fs::read(&path)
- .await
- .context("failed to read archive contents")?;
- let contents = hyper::body::Bytes::from(data);
- let _is_duplicate = s3_client
- .upload_replace_with_retry(object_key, contents)
- .await?;
- }
+ if let DatastoreBackend::S3(_) = backend {
+ backend.s3_upload_index(snapshot, archive_name).await?;
+ }
+
Ok(sync_stats)
}
--
2.47.3