From: Christian Ebner <c.ebner@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup v2 04/19] api/datastore: move backup log upload by implementing datastore helper
Date: Tue, 4 Nov 2025 14:06:44 +0100
Message-ID: <20251104130659.435139-5-c.ebner@proxmox.com>
In-Reply-To: <20251104130659.435139-1-c.ebner@proxmox.com>
In an effort to decouple the API from the datastore backend, move the
backup task log upload to a new add_blob() helper method on the
datastore. The method takes the backend as a parameter for the cases
where the backend outlives the call, e.g. during a backup session or
sync session.
The new helper is fully sync and is called on a blocking task, thereby
also fixing the blocking replace_file() call that was previously,
and incorrectly, executed in an async context.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
Changes since version 1:
- allow passing in the backend instance so it can be reused where
  possible (see the usage sketch below)
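For illustration, a minimal usage sketch of how a caller can hold on to one
backend instance and feed several blobs through the new helper; the
surrounding function, the blob list and the snapshot cloning are
hypothetical, only DataStore::backend() and add_blob() come from this series:

use std::sync::Arc;

use anyhow::{Context, Error};
use pbs_datastore::{BackupDir, DataBlob, DataStore};

// Hypothetical helper: upload several blobs for one snapshot while creating
// the backend handle only once. Meant to run in a blocking context, e.g.
// inside tokio::task::spawn_blocking().
fn store_blobs(
    datastore: Arc<DataStore>,
    snapshot: BackupDir,
    blobs: Vec<(String, DataBlob)>,
) -> Result<(), Error> {
    // one backend instance per session instead of one per uploaded blob
    let backend = datastore
        .backend()
        .context("failed to get datastore backend")?;

    for (filename, blob) in blobs {
        // add_blob() takes the snapshot by value, so each call gets its own
        // handle here (assumes the snapshot handle is cheap to clone)
        datastore.add_blob(&filename, snapshot.clone(), blob, &backend)?;
    }
    Ok(())
}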
pbs-datastore/src/datastore.rs | 23 +++++++++++++++++++++++
src/api2/admin/datastore.rs | 28 ++++++++++------------------
2 files changed, 33 insertions(+), 18 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 46600a88c..277489f5f 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -2453,4 +2453,27 @@ impl DataStore {
         snapshot.destroy(false, &backend)?;
         Ok(())
     }
+
+    /// Adds the blob to the given snapshot.
+    /// Requires the caller to hold the exclusive lock.
+    pub fn add_blob(
+        self: &Arc<Self>,
+        filename: &str,
+        snapshot: BackupDir,
+        blob: DataBlob,
+        backend: &DatastoreBackend,
+    ) -> Result<(), Error> {
+        if let DatastoreBackend::S3(s3_client) = backend {
+            let object_key = crate::s3::object_key_from_path(&snapshot.relative_path(), filename)
+                .context("invalid blob object key")?;
+            let data = hyper::body::Bytes::copy_from_slice(blob.raw_data());
+            proxmox_async::runtime::block_on(s3_client.upload_replace_with_retry(object_key, data))
+                .context("failed to upload blob to s3 backend")?;
+        };
+
+        let mut path = snapshot.full_path();
+        path.push(filename);
+        replace_file(&path, blob.raw_data(), CreateOptions::new(), false)?;
+        Ok(())
+    }
 }
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 763440df9..b54ea9a04 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -28,9 +28,7 @@ use proxmox_router::{
 use proxmox_rrd_api_types::{RrdMode, RrdTimeframe};
 use proxmox_schema::*;
 use proxmox_sortable_macro::sortable;
-use proxmox_sys::fs::{
-    file_read_firstline, file_read_optional_string, replace_file, CreateOptions,
-};
+use proxmox_sys::fs::{file_read_firstline, file_read_optional_string, CreateOptions};
 use proxmox_time::CalendarEvent;
 use proxmox_worker_task::WorkerTaskContext;
@@ -63,7 +61,7 @@ use pbs_datastore::manifest::BackupManifest;
 use pbs_datastore::prune::compute_prune_info;
 use pbs_datastore::{
     check_backup_owner, ensure_datastore_is_mounted, task_tracking, BackupDir, DataStore,
-    DatastoreBackend, LocalChunkReader, StoreProgress,
+    LocalChunkReader, StoreProgress,
 };
 use pbs_tools::json::required_string_param;
 use proxmox_rest_server::{formatter, worker_is_active, WorkerTask};
@@ -1536,20 +1534,14 @@ pub fn upload_backup_log(
     // always verify blob/CRC at server side
     let blob = DataBlob::load_from_reader(&mut &data[..])?;
-    if let DatastoreBackend::S3(s3_client) = datastore.backend()? {
-        let object_key = pbs_datastore::s3::object_key_from_path(
-            &backup_dir.relative_path(),
-            file_name.as_ref(),
-        )
-        .context("invalid client log object key")?;
-        let data = hyper::body::Bytes::copy_from_slice(blob.raw_data());
-        s3_client
-            .upload_replace_with_retry(object_key, data)
-            .await
-            .context("failed to upload client log to s3 backend")?;
-    };
-
-    replace_file(&path, blob.raw_data(), CreateOptions::new(), false)?;
+    tokio::task::spawn_blocking(move || {
+        let backend = datastore
+            .backend()
+            .context("failed to get datastore backend")?;
+        datastore.add_blob(file_name.as_ref(), backup_dir, blob, &backend)
+    })
+    .await
+    .map_err(|err| format_err!("{err:#?}"))??;
     // fixme: use correct formatter
     Ok(formatter::JSON_FORMATTER.format_data(Value::Null, &*rpcenv))
--
2.47.3
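A side note on the error handling in the spawn_blocking hunk above: awaiting
tokio::task::spawn_blocking() yields a Result<T, JoinError>, and T is itself
a Result here, hence the double `?`. A small, self-contained sketch of the
same pattern, with a placeholder closure body:

use anyhow::{format_err, Error};

async fn run_sync_step() -> Result<(), Error> {
    // After .await, spawn_blocking() gives Result<T, JoinError>. The closure
    // returns a Result itself, so two `?` are needed: the first (after the
    // map_err) covers a panicked or cancelled task, the second the error
    // returned from inside the closure.
    tokio::task::spawn_blocking(|| -> Result<(), Error> {
        // placeholder for synchronous, potentially blocking work such as
        // replace_file() or a block_on()'d S3 upload
        Ok(())
    })
    .await
    .map_err(|err| format_err!("blocking task failed: {err:#?}"))??;

    Ok(())
}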