* [PATCH proxmox-backup 0/3] fixup client log fetching and decryption
@ 2026-04-25 14:09 Christian Ebner
2026-04-25 14:09 ` [PATCH proxmox-backup 1/3] sync: fix client log fetching for local sync job Christian Ebner
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Christian Ebner @ 2026-04-25 14:09 UTC (permalink / raw)
To: pbs-devel
These patches fix a few issues with client log fetching for pull
sync jobs:
1. Local pull syncs did not fetch the client log at all; add the
missing implementation.
2. While fetching the client log, some messages still bypassed the
log line sender, which has been required since the introduction of
the parallel group sync feature to correctly buffer and prefix log
lines.
3. The client log was fetched without decryption, even when a
matching decryption key was available. Fix this by factoring out the
data blob decryption logic into a reusable helper function and using
it together with the crypt config to decrypt the blob while fetching
the client log.
Christian Ebner (3):
sync: fix client log fetching for local sync job
sync: use log sender for logging when fetching client log
sync: decrypt client log on pull with matching decryption key
src/server/pull.rs | 46 +++++++----------
src/server/sync.rs | 125 +++++++++++++++++++++++++++++++++++++--------
2 files changed, 122 insertions(+), 49 deletions(-)
--
2.47.3
^ permalink raw reply [flat|nested] 4+ messages in thread
* [PATCH proxmox-backup 1/3] sync: fix client log fetching for local sync job
2026-04-25 14:09 [PATCH proxmox-backup 0/3] fixup client log fetching and decryption Christian Ebner
@ 2026-04-25 14:09 ` Christian Ebner
2026-04-25 14:09 ` [PATCH proxmox-backup 2/3] sync: use log sender for logging when fetching client log Christian Ebner
2026-04-25 14:09 ` [PATCH proxmox-backup 3/3] sync: decrypt client log on pull with matching decryption key Christian Ebner
2 siblings, 0 replies; 4+ messages in thread
From: Christian Ebner @ 2026-04-25 14:09 UTC (permalink / raw)
To: pbs-devel
Pulling with a local sync job currently does not fetch the client
log at all; the implementation is missing entirely.
Fix this by adding the missing implementation on the local source
reader.
Since the local case does not actually download anything, also
rename the method from try_download_client_log() to the better
fitting try_fetch_client_log().
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 2 +-
src/server/sync.rs | 10 ++++++----
2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 67ab70348..6bb5995cc 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -679,7 +679,7 @@ async fn pull_snapshot<'a>(
let fetch_log = async || {
if !client_log_name.exists() {
reader
- .try_download_client_log(&client_log_name)
+ .try_fetch_client_log(&client_log_name)
.await
.with_context(|| prefix.clone())?;
if client_log_name.exists() {
diff --git a/src/server/sync.rs b/src/server/sync.rs
index d25180341..ad537129b 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -103,8 +103,8 @@ pub(crate) trait SyncSourceReader: Send + Sync {
/// `into` is the path of the local file to load the source file into.
async fn load_file_into(&self, filename: &str, into: &Path) -> Result<Option<DataBlob>, Error>;
- /// Tries to download the client log from the source and save it into a local file.
- async fn try_download_client_log(&self, to_path: &Path) -> Result<(), Error>;
+ /// Tries to fetch the client log from the source and save it into a local file.
+ async fn try_fetch_client_log(&self, to_path: &Path) -> Result<(), Error>;
fn skip_chunk_sync(&self, target_store_name: &str) -> bool;
}
@@ -168,7 +168,7 @@ impl SyncSourceReader for RemoteSourceReader {
Ok(DataBlob::load_from_reader(&mut tmp_file).ok())
}
- async fn try_download_client_log(&self, to_path: &Path) -> Result<(), Error> {
+ async fn try_fetch_client_log(&self, to_path: &Path) -> Result<(), Error> {
let mut tmp_path = to_path.to_owned();
tmp_path.set_extension("tmp");
@@ -229,7 +229,9 @@ impl SyncSourceReader for LocalSourceReader {
Ok(DataBlob::load_from_reader(&mut tmp_file).ok())
}
- async fn try_download_client_log(&self, _to_path: &Path) -> Result<(), Error> {
+ async fn try_fetch_client_log(&self, to_path: &Path) -> Result<(), Error> {
+ self.load_file_into(CLIENT_LOG_BLOB_NAME.as_ref(), to_path)
+ .await?;
Ok(())
}
--
2.47.3
* [PATCH proxmox-backup 2/3] sync: use log sender for logging when fetching client log
2026-04-25 14:09 [PATCH proxmox-backup 0/3] fixup client log fetching and decryption Christian Ebner
2026-04-25 14:09 ` [PATCH proxmox-backup 1/3] sync: fix client log fetching for local sync job Christian Ebner
@ 2026-04-25 14:09 ` Christian Ebner
2026-04-25 14:09 ` [PATCH proxmox-backup 3/3] sync: decrypt client log on pull with matching decryption key Christian Ebner
2 siblings, 0 replies; 4+ messages in thread
From: Christian Ebner @ 2026-04-25 14:09 UTC (permalink / raw)
To: pbs-devel
For log messages to be correctly logged and prefixed, extend the
trait method for fetching the client log with the log sender and use
it for logging. Since the local source reader does not log this
message yet, store a full pbs_datastore::BackupDir instead of the
pbs_api_types::BackupDir; the former is a superset of the latter and
already contains a reference to the datastore required by the reader.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 4 +++-
src/server/sync.rs | 58 +++++++++++++++++++++++++++++++++-------------
2 files changed, 45 insertions(+), 17 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 6bb5995cc..d37998f56 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -679,9 +679,10 @@ async fn pull_snapshot<'a>(
let fetch_log = async || {
if !client_log_name.exists() {
reader
- .try_fetch_client_log(&client_log_name)
+ .try_fetch_client_log(&client_log_name, Arc::clone(&log_sender))
.await
.with_context(|| prefix.clone())?;
+
if client_log_name.exists() {
if let DatastoreBackend::S3(s3_client) = backend {
let object_key = pbs_datastore::s3::object_key_from_path(
@@ -704,6 +705,7 @@ async fn pull_snapshot<'a>(
}
}
}
+
Ok::<(), Error>(())
};
let cleanup = async || {
diff --git a/src/server/sync.rs b/src/server/sync.rs
index ad537129b..d8eec844e 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -3,7 +3,7 @@
use std::collections::HashMap;
use std::io::{Seek, Write};
use std::ops::Deref;
-use std::path::{Path, PathBuf};
+use std::path::Path;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::time::Duration;
@@ -104,7 +104,11 @@ pub(crate) trait SyncSourceReader: Send + Sync {
async fn load_file_into(&self, filename: &str, into: &Path) -> Result<Option<DataBlob>, Error>;
/// Tries to fetch the client log from the source and save it into a local file.
- async fn try_fetch_client_log(&self, to_path: &Path) -> Result<(), Error>;
+ async fn try_fetch_client_log(
+ &self,
+ to_path: &Path,
+ log_sender: Arc<LogLineSender>,
+ ) -> Result<(), Error>;
fn skip_chunk_sync(&self, target_store_name: &str) -> bool;
}
@@ -117,8 +121,7 @@ pub(crate) struct RemoteSourceReader {
pub(crate) struct LocalSourceReader {
// must not be accessed/made pub, this is just a hack for Send+Sync
_dir_lock: Arc<Mutex<BackupLockGuard>>,
- pub(crate) path: PathBuf,
- pub(crate) datastore: Arc<DataStore>,
+ pub(crate) dir: pbs_datastore::BackupDir,
}
#[async_trait::async_trait]
@@ -168,7 +171,11 @@ impl SyncSourceReader for RemoteSourceReader {
Ok(DataBlob::load_from_reader(&mut tmp_file).ok())
}
- async fn try_fetch_client_log(&self, to_path: &Path) -> Result<(), Error> {
+ async fn try_fetch_client_log(
+ &self,
+ to_path: &Path,
+ log_sender: Arc<LogLineSender>,
+ ) -> Result<(), Error> {
let mut tmp_path = to_path.to_owned();
tmp_path.set_extension("tmp");
@@ -189,11 +196,16 @@ impl SyncSourceReader for RemoteSourceReader {
if let Err(err) = std::fs::rename(&tmp_path, to_path) {
bail!("Atomic rename file {to_path:?} failed - {err}");
}
- info!(
- "Snapshot {snapshot}: got backup log file {client_log_name}",
- snapshot = &self.dir,
- client_log_name = client_log_name.deref()
- );
+ log_sender
+ .log(
+ Level::INFO,
+ format!(
+ "Snapshot {}: got backup log file {}",
+ self.dir,
+ client_log_name.deref()
+ ),
+ )
+ .await?;
}
Ok(())
@@ -211,7 +223,8 @@ impl SyncSourceReader for LocalSourceReader {
crypt_config: Option<Arc<CryptConfig>>,
crypt_mode: CryptMode,
) -> Result<Arc<dyn AsyncReadChunk>, Error> {
- let chunk_reader = LocalChunkReader::new(self.datastore.clone(), crypt_config, crypt_mode)?;
+ let chunk_reader =
+ LocalChunkReader::new(self.dir.datastore().clone(), crypt_config, crypt_mode)?;
Ok(Arc::new(chunk_reader))
}
@@ -222,21 +235,35 @@ impl SyncSourceReader for LocalSourceReader {
.truncate(true)
.read(true)
.open(into)?;
- let mut from_path = self.path.clone();
+ let mut from_path = self.dir.full_path();
from_path.push(filename);
tmp_file.write_all(std::fs::read(from_path)?.as_slice())?;
tmp_file.rewind()?;
Ok(DataBlob::load_from_reader(&mut tmp_file).ok())
}
- async fn try_fetch_client_log(&self, to_path: &Path) -> Result<(), Error> {
+ async fn try_fetch_client_log(
+ &self,
+ to_path: &Path,
+ log_sender: Arc<LogLineSender>,
+ ) -> Result<(), Error> {
self.load_file_into(CLIENT_LOG_BLOB_NAME.as_ref(), to_path)
.await?;
+ log_sender
+ .log(
+ Level::INFO,
+ format!(
+ "Snapshot {}: got backup log file {}",
+ self.dir.dir(),
+ CLIENT_LOG_BLOB_NAME.as_ref(),
+ ),
+ )
+ .await?;
Ok(())
}
fn skip_chunk_sync(&self, target_store_name: &str) -> bool {
- self.datastore.name() == target_store_name
+ self.dir.datastore().name() == target_store_name
}
}
@@ -496,8 +523,7 @@ impl SyncSource for LocalSource {
.with_context(|| format!("while reading snapshot '{dir:?}' for a sync job"))?;
Ok(Arc::new(LocalSourceReader {
_dir_lock: Arc::new(Mutex::new(guard)),
- path: dir.full_path(),
- datastore: dir.datastore().clone(),
+ dir,
}))
}
}
--
2.47.3
* [PATCH proxmox-backup 3/3] sync: decrypt client log on pull with matching decryption key
2026-04-25 14:09 [PATCH proxmox-backup 0/3] fixup client log fetching and decryption Christian Ebner
2026-04-25 14:09 ` [PATCH proxmox-backup 1/3] sync: fix client log fetching for local sync job Christian Ebner
2026-04-25 14:09 ` [PATCH proxmox-backup 2/3] sync: use log sender for logging when fetching client log Christian Ebner
@ 2026-04-25 14:09 ` Christian Ebner
2 siblings, 0 replies; 4+ messages in thread
From: Christian Ebner @ 2026-04-25 14:09 UTC (permalink / raw)
To: pbs-devel
Client logs are currently fetched as-is on pull, without decryption
even when a matching decryption key is available.
Fix this by:
- factoring out the DataBlob decryption into a reusable helper
- passing the crypt config as a conditional parameter to the
fetch_log closure
- extending the try_fetch_client_log() implementations to decrypt
the source blob on the fly if a crypt config is given
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 44 ++++++++++++-------------------
src/server/sync.rs | 65 ++++++++++++++++++++++++++++++++++++++++++----
2 files changed, 76 insertions(+), 33 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index d37998f56..42c34732f 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -2,7 +2,7 @@
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
-use std::io::{Read, Seek};
+use std::io::Seek;
use std::os::fd::AsRawFd;
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
@@ -34,8 +34,7 @@ use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{BackupManifest, FileInfo};
use pbs_datastore::read_chunk::AsyncReadChunk;
use pbs_datastore::{
- check_backup_owner, check_namespace_depth_limit, DataBlobReader, DataStore, DatastoreBackend,
- StoreProgress,
+ check_backup_owner, check_namespace_depth_limit, DataStore, DatastoreBackend, StoreProgress,
};
use pbs_tools::bounded_join_set::BoundedJoinSet;
use pbs_tools::buffered_logger::{BufferedLogger, LogLineSender};
@@ -43,7 +42,6 @@ use pbs_tools::crypt_config::CryptConfig;
use pbs_tools::sha::sha256;
use proxmox_human_byte::HumanByte;
use proxmox_parallel_handler::ParallelHandler;
-use proxmox_sys::fs::{replace_file, CreateOptions};
use tracing::{info, Level};
pub(crate) struct PullTarget {
@@ -574,26 +572,15 @@ async fn pull_single_archive<'a>(
})
.with_context(|| archive_prefix.clone())?;
- if crypt_config.is_some() {
- let crypt_config = crypt_config.clone();
-
+ if let Some(crypt_config) = &crypt_config {
let tmp_dec_path = tmp_dec_path.clone();
- let (csum, size) = tokio::task::spawn_blocking(move || {
- // must rewind again since after verifying cursor is at the end of the file
- tmpfile.rewind()?;
- let mut reader = DataBlobReader::new(tmpfile, crypt_config)?;
- let mut dec_raw_data = Vec::new();
- reader.read_to_end(&mut dec_raw_data)?;
- reader.finish()?;
-
- let blob = DataBlob::encode(&dec_raw_data, None, true)?;
-
- let (csum, size) = sha256(&mut blob.raw_data())?;
- replace_file(tmp_dec_path, blob.raw_data(), CreateOptions::new(), true)?;
- Ok((csum, size))
- })
- .await?
+ let (csum, size) = super::sync::decrypt_encrypted_data_blob(
+ tmpfile,
+ Arc::clone(crypt_config),
+ tmp_dec_path,
+ )
+ .await
.map_err(|err: Error| format_err!("Failed when decrypting blob {path:?}: {err}"))
.with_context(|| archive_prefix.clone())?;
@@ -676,10 +663,10 @@ async fn pull_snapshot<'a>(
let backend = ¶ms.target.backend;
- let fetch_log = async || {
+ let fetch_log = async |crypt_config: Option<Arc<CryptConfig>>| {
if !client_log_name.exists() {
reader
- .try_fetch_client_log(&client_log_name, Arc::clone(&log_sender))
+ .try_fetch_client_log(&client_log_name, crypt_config, Arc::clone(&log_sender))
.await
.with_context(|| prefix.clone())?;
@@ -731,7 +718,8 @@ async fn pull_snapshot<'a>(
})?;
if manifest_blob.raw_data() == tmp_manifest_blob.raw_data() {
- fetch_log().await?;
+ // requires no decryption since either none or both, source and target are encrypted
+ fetch_log(None).await?;
cleanup().await?;
return Ok(sync_stats); // nothing changed
}
@@ -775,9 +763,9 @@ async fn pull_snapshot<'a>(
let new_manifest = Arc::new(Mutex::new(BackupManifest::new(snapshot.into())));
(Some(crypt_config), Some(new_manifest))
}
- (_, true) => {
+ (crypt_config, true) => {
// nothing changed
- fetch_log().await?;
+ fetch_log(crypt_config).await?;
cleanup().await?;
return Ok(sync_stats);
}
@@ -908,7 +896,7 @@ async fn pull_snapshot<'a>(
.with_context(|| prefix.clone())?;
}
- fetch_log().await?;
+ fetch_log(crypt_config).await?;
snapshot
.cleanup_unreferenced_files(&manifest)
diff --git a/src/server/sync.rs b/src/server/sync.rs
index d8eec844e..c5fab0e04 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -1,7 +1,8 @@
//! Sync datastore contents from source to target, either in push or pull direction
use std::collections::HashMap;
-use std::io::{Seek, Write};
+use std::fs::File;
+use std::io::{Read, Seek, Write};
use std::ops::Deref;
use std::path::Path;
use std::sync::atomic::{AtomicUsize, Ordering};
@@ -18,6 +19,7 @@ use tracing::{info, warn, Level};
use proxmox_human_byte::HumanByte;
use proxmox_rest_server::WorkerTask;
use proxmox_router::HttpError;
+use proxmox_sys::fs::{replace_file, CreateOptions};
use pbs_api_types::{
Authid, BackupDir, BackupGroup, BackupNamespace, CryptMode, GroupListItem, SnapshotListItem,
@@ -28,9 +30,12 @@ use pbs_client::{BackupReader, BackupRepository, HttpClient, RemoteChunkReader};
use pbs_config::CachedUserInfo;
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{BackupManifest, DataStore, ListNamespacesRecursive, LocalChunkReader};
+use pbs_datastore::{
+ BackupManifest, DataBlobReader, DataStore, ListNamespacesRecursive, LocalChunkReader,
+};
use pbs_tools::buffered_logger::LogLineSender;
use pbs_tools::crypt_config::CryptConfig;
+use pbs_tools::sha::sha256;
use crate::backup::ListAccessibleBackupGroups;
use crate::server::jobstate::Job;
@@ -107,6 +112,7 @@ pub(crate) trait SyncSourceReader: Send + Sync {
async fn try_fetch_client_log(
&self,
to_path: &Path,
+ crypt_config: Option<Arc<CryptConfig>>,
log_sender: Arc<LogLineSender>,
) -> Result<(), Error>;
@@ -174,6 +180,7 @@ impl SyncSourceReader for RemoteSourceReader {
async fn try_fetch_client_log(
&self,
to_path: &Path,
+ crypt_config: Option<Arc<CryptConfig>>,
log_sender: Arc<LogLineSender>,
) -> Result<(), Error> {
let mut tmp_path = to_path.to_owned();
@@ -190,10 +197,17 @@ impl SyncSourceReader for RemoteSourceReader {
let client_log_name = &CLIENT_LOG_BLOB_NAME;
if let Ok(()) = self
.backup_reader
- .download(client_log_name.as_ref(), tmpfile)
+ .download(client_log_name.as_ref(), &tmpfile)
.await
{
- if let Err(err) = std::fs::rename(&tmp_path, to_path) {
+ if let Some(crypt_config) = &crypt_config {
+ let (_csum, _size) = decrypt_encrypted_data_blob(
+ tmpfile,
+ Arc::clone(crypt_config),
+ to_path.to_path_buf(),
+ )
+ .await?;
+ } else if let Err(err) = std::fs::rename(&tmp_path, to_path) {
bail!("Atomic rename file {to_path:?} failed - {err}");
}
log_sender
@@ -245,10 +259,24 @@ impl SyncSourceReader for LocalSourceReader {
async fn try_fetch_client_log(
&self,
to_path: &Path,
+ crypt_config: Option<Arc<CryptConfig>>,
log_sender: Arc<LogLineSender>,
) -> Result<(), Error> {
- self.load_file_into(CLIENT_LOG_BLOB_NAME.as_ref(), to_path)
+ if let Some(crypt_config) = &crypt_config {
+ let mut from_path = self.dir.full_path();
+ from_path.push(CLIENT_LOG_BLOB_NAME.as_ref());
+ let blob_file = tokio::fs::File::open(from_path).await?;
+ let blob_file = blob_file.into_std().await;
+ let (_csum, _size) = decrypt_encrypted_data_blob(
+ blob_file,
+ Arc::clone(crypt_config),
+ to_path.to_path_buf(),
+ )
.await?;
+ } else {
+ self.load_file_into(CLIENT_LOG_BLOB_NAME.as_ref(), to_path)
+ .await?;
+ }
log_sender
.log(
Level::INFO,
@@ -834,6 +862,33 @@ pub(super) fn exclude_not_verified_or_encrypted(
false
}
+/// Decrypt data blob stored in given file and store it to target path.
+///
+/// Returns the checksum and size of the resulting decrypted blob file.
+pub(super) async fn decrypt_encrypted_data_blob<P: AsRef<Path> + Send + 'static>(
+ mut blob_file: File,
+ crypt_config: Arc<CryptConfig>,
+ target_path: P,
+) -> Result<([u8; 32], u64), Error> {
+ let (csum, size) = tokio::task::spawn_blocking(move || {
+ // assure to start at the beginning of the file
+ blob_file.rewind()?;
+ let mut reader = DataBlobReader::new(blob_file, Some(crypt_config))?;
+ let mut dec_raw_data = Vec::new();
+ reader.read_to_end(&mut dec_raw_data)?;
+ reader.finish()?;
+
+ let blob = DataBlob::encode(&dec_raw_data, None, true)?;
+
+ let (csum, size) = sha256(&mut blob.raw_data())?;
+ replace_file(target_path, blob.raw_data(), CreateOptions::new(), true)?;
+ Ok::<([u8; 32], u64), Error>((csum, size))
+ })
+ .await??;
+
+ Ok((csum, size))
+}
+
/// Helper to check if user has access to given encryption key and load it from config.
pub(crate) fn check_privs_and_load_key_config(
key_id: &str,
--
2.47.3