From: Christian Ebner <c.ebner@proxmox.com>
To: "Proxmox Backup Server development discussion"
<pbs-devel@lists.proxmox.com>,
"Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup 04/17] api/datastore: move backup log upload by implementing datastore helper
Date: Tue, 4 Nov 2025 09:47:33 +0100 [thread overview]
Message-ID: <6a81e1c3-c2d8-4179-8f5a-5b932c95adf0@proxmox.com> (raw)
In-Reply-To: <1762175257.ofm4ecgmdi.astroid@yuna.none>
On 11/3/25 3:51 PM, Fabian Grünbichler wrote:
> On November 3, 2025 12:31 pm, Christian Ebner wrote:
>> In an effort to decouple the api from the datastore backend, move the
>> backup task log upload to use a new add blob helper method of the
>> datastore.
>>
>> The new helper is fully sync and called on a blocking task, thereby
>> now also solving the previously incorrectly blocking rename_file() in
>> async context.
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> pbs-datastore/src/datastore.rs | 22 ++++++++++++++++++++++
>> src/api2/admin/datastore.rs | 25 +++++++------------------
>> 2 files changed, 29 insertions(+), 18 deletions(-)
>>
>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>> index 46600a88c..cc1267d78 100644
>> --- a/pbs-datastore/src/datastore.rs
>> +++ b/pbs-datastore/src/datastore.rs
>> @@ -2453,4 +2453,26 @@ impl DataStore {
>> snapshot.destroy(false, &backend)?;
>> Ok(())
>> }
>> +
>> + /// Adds the blob to the given snapshot.
>> + /// Requires the caller to hold the exclusive lock.
>> + pub fn add_blob(
>> + self: &Arc<Self>,
>> + filename: &str,
>> + snapshot: BackupDir,
>> + blob: DataBlob,
>
> should this get a backend parameter to not require instantiating a new
> one in contexts where lots of blobs might be added (backup env, sync
> job)?
Yes, good point! Will adapt this to take the backend as a parameter as
well, so the same client and its connection can be reused.
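
Roughly what I have in mind (untested sketch, parameter name and exact
signature still open for debate):

    /// Adds the blob to the given snapshot.
    /// Requires the caller to hold the exclusive lock.
    pub fn add_blob(
        self: &Arc<Self>,
        filename: &str,
        snapshot: BackupDir,
        blob: DataBlob,
        backend: &DatastoreBackend,
    ) -> Result<(), Error> {
        // reuse the backend (and its s3 client connection) handed in by
        // the caller instead of instantiating a new one per blob
        if let DatastoreBackend::S3(s3_client) = backend {
            let object_key =
                crate::s3::object_key_from_path(&snapshot.relative_path(), filename)
                    .context("invalid blob object key")?;
            let data = hyper::body::Bytes::copy_from_slice(blob.raw_data());
            proxmox_async::runtime::block_on(
                s3_client.upload_replace_with_retry(object_key, data),
            )
            .context("failed to upload blob to s3 backend")?;
        }

        let mut path = snapshot.full_path();
        path.push(filename);
        replace_file(&path, blob.raw_data(), CreateOptions::new(), false)?;
        Ok(())
    }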
>
>> + ) -> Result<(), Error> {
>> + if let DatastoreBackend::S3(s3_client) = self.backend()? {
>> + let object_key = crate::s3::object_key_from_path(&snapshot.relative_path(), filename)
>> + .context("invalid client log object key")?;
>> + let data = hyper::body::Bytes::copy_from_slice(blob.raw_data());
>> + proxmox_async::runtime::block_on(s3_client.upload_replace_with_retry(object_key, data))
>> + .context("failed to upload client log to s3 backend")?;
>> + };
>> +
>> + let mut path = snapshot.full_path();
>> + path.push(filename);
>> + replace_file(&path, blob.raw_data(), CreateOptions::new(), false)?;
>> + Ok(())
>> + }
>
> the backup env also has this, and should switch to this new helper:
>
> // always verify blob/CRC at server side
> let blob = DataBlob::load_from_reader(&mut &data[..])?;
>
> let raw_data = blob.raw_data();
> if let DatastoreBackend::S3(s3_client) = &self.backend {
> let object_key = pbs_datastore::s3::object_key_from_path(
> &self.backup_dir.relative_path(),
> file_name,
> )
> .context("invalid blob object key")?;
> let data = hyper::body::Bytes::copy_from_slice(raw_data);
> proxmox_async::runtime::block_on(
> s3_client.upload_replace_with_retry(object_key.clone(), data),
> )
> .context("failed to upload blob to s3 backend")?;
> self.log(format!("Uploaded blob to object store: {object_key}"))
> }
>
> replace_file(&path, raw_data, CreateOptions::new(), false)?;
True, will adapt this as well, thanks!
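
The call site in the backup env could then shrink to something along
these lines (untested sketch, assuming the env keeps its datastore
handle, the snapshot dir is cheap to clone, and the backend parameter
from above is passed through):

    // always verify blob/CRC at server side
    let blob = DataBlob::load_from_reader(&mut &data[..])?;
    self.datastore
        .add_blob(file_name, self.backup_dir.clone(), blob, &self.backend)
        .context("failed to add blob")?;

The "Uploaded blob to object store" log line would then either have to
move into the helper or be dropped, will see what fits better.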
>
>> }
>> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
>> index 763440df9..6881b4093 100644
>> --- a/src/api2/admin/datastore.rs
>> +++ b/src/api2/admin/datastore.rs
>> @@ -28,9 +28,7 @@ use proxmox_router::{
>> use proxmox_rrd_api_types::{RrdMode, RrdTimeframe};
>> use proxmox_schema::*;
>> use proxmox_sortable_macro::sortable;
>> -use proxmox_sys::fs::{
>> - file_read_firstline, file_read_optional_string, replace_file, CreateOptions,
>> -};
>> +use proxmox_sys::fs::{file_read_firstline, file_read_optional_string, CreateOptions};
>> use proxmox_time::CalendarEvent;
>> use proxmox_worker_task::WorkerTaskContext;
>>
>> @@ -63,7 +61,7 @@ use pbs_datastore::manifest::BackupManifest;
>> use pbs_datastore::prune::compute_prune_info;
>> use pbs_datastore::{
>> check_backup_owner, ensure_datastore_is_mounted, task_tracking, BackupDir, DataStore,
>> - DatastoreBackend, LocalChunkReader, StoreProgress,
>> + LocalChunkReader, StoreProgress,
>> };
>> use pbs_tools::json::required_string_param;
>> use proxmox_rest_server::{formatter, worker_is_active, WorkerTask};
>> @@ -1536,20 +1534,11 @@ pub fn upload_backup_log(
>> // always verify blob/CRC at server side
>> let blob = DataBlob::load_from_reader(&mut &data[..])?;
>>
>> - if let DatastoreBackend::S3(s3_client) = datastore.backend()? {
>> - let object_key = pbs_datastore::s3::object_key_from_path(
>> - &backup_dir.relative_path(),
>> - file_name.as_ref(),
>> - )
>> - .context("invalid client log object key")?;
>> - let data = hyper::body::Bytes::copy_from_slice(blob.raw_data());
>> - s3_client
>> - .upload_replace_with_retry(object_key, data)
>> - .await
>> - .context("failed to upload client log to s3 backend")?;
>> - };
>> -
>> - replace_file(&path, blob.raw_data(), CreateOptions::new(), false)?;
>> + tokio::task::spawn_blocking(move || {
>> + datastore.add_blob(file_name.as_ref(), backup_dir, blob)
>> + })
>> + .await
>> + .map_err(|err| format_err!("{err:#?}"))??;
>>
>> // fixme: use correct formatter
>> Ok(formatter::JSON_FORMATTER.format_data(Value::Null, &*rpcenv))
>> --
>> 2.47.3
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel