* [pbs-devel] [PATCH proxmox{, -backup} 0/6] fix #6716: Add support for http proxy configuration for S3 endpoints
@ 2026-01-22 15:11 Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox 1/2] pbs-api-types: make operation non-optional for maintenance-mode check Christian Ebner
` (5 more replies)
0 siblings, 6 replies; 7+ messages in thread
From: Christian Ebner @ 2026-01-22 15:11 UTC (permalink / raw)
To: pbs-devel
This patch series exposes the node's proxy configuration to the S3
clients of datastores backed by an S3 object store.

To honor crate boundaries, the proxy configuration is not read directly
when instantiating the S3 client via the datastore backend, but rather
provided by a callback function passed in on datastore instantiation.
This allows the current node proxy configuration to be picked up
whenever a new S3 client connection is created.

To achieve this, the first patches of the series refactor the code so
the callback can be passed in, wrapped in a convenience helper.
Testing was performed by proxying traffic using https://www.mitmproxy.org/
Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=6716
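For readers unfamiliar with the callback approach described above, here
is a minimal, self-contained sketch of the pattern. The names and types
are illustrative stand-ins, not the actual types from this series:

```rust
// Sketch of the callback pattern: the datastore crate cannot read the
// node config itself (crate boundary), so the caller passes a closure
// that yields the current proxy setting whenever a new S3 client
// connection is created.

type ProxyLookup = Box<dyn Fn() -> Option<String> + Send + Sync>;

struct DataStore {
    proxy_lookup: Option<ProxyLookup>,
}

impl DataStore {
    fn new(proxy_lookup: Option<ProxyLookup>) -> Self {
        Self { proxy_lookup }
    }

    // Called for each new S3 client connection: the proxy config is
    // re-read, so a changed node setting takes effect without having
    // to recreate the datastore.
    fn current_proxy(&self) -> Option<String> {
        self.proxy_lookup.as_ref().and_then(|f| f())
    }
}

fn main() {
    let store = DataStore::new(Some(Box::new(|| {
        Some("http://proxy.example:3128".to_string())
    })));
    assert_eq!(
        store.current_proxy().as_deref(),
        Some("http://proxy.example:3128")
    );
    assert!(DataStore::new(None).current_proxy().is_none());
}
```

Because the closure is invoked on every new connection, the lookup
always reflects the node's current setting rather than a value captured
at datastore instantiation.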
proxmox:
Christian Ebner (2):
pbs-api-types: make operation non-optional for maintenance-mode check
s3-client: add proxy configuration as optional client option
pbs-api-types/src/maintenance.rs | 6 +++---
proxmox-s3-client/examples/s3_client.rs | 1 +
proxmox-s3-client/src/client.rs | 10 +++++++++-
3 files changed, 13 insertions(+), 4 deletions(-)
proxmox-backup:
Christian Ebner (4):
datastore: make operation non-optional in lookups
tools: factor out node proxy config read helper
datastore: refactor datastore lookup parameters into dedicated type
fix #6716: pass node http proxy config to s3 backend
pbs-datastore/src/datastore.rs | 88 +++++++++++++++------
pbs-datastore/src/lib.rs | 2 +-
pbs-datastore/src/snapshot_reader.rs | 20 +++--
src/api2/admin/datastore.rs | 58 +++++++-------
src/api2/admin/namespace.rs | 9 ++-
src/api2/admin/s3.rs | 2 +
src/api2/backup/mod.rs | 3 +-
src/api2/config/datastore.rs | 16 +++-
src/api2/config/s3.rs | 11 ++-
src/api2/node/apt.rs | 8 +-
src/api2/reader/mod.rs | 3 +-
src/api2/status/mod.rs | 6 +-
src/api2/tape/backup.rs | 6 +-
src/api2/tape/restore.rs | 6 +-
src/bin/proxmox-backup-proxy.rs | 8 +-
src/server/metric_collection/mod.rs | 2 +-
src/server/prune_job.rs | 3 +-
src/server/pull.rs | 7 +-
src/server/push.rs | 3 +-
src/server/verify_job.rs | 3 +-
src/tape/pool_writer/new_chunks_iterator.rs | 15 ++--
src/tools/mod.rs | 22 +++++-
22 files changed, 203 insertions(+), 98 deletions(-)
Summary over all repositories:
25 files changed, 216 insertions(+), 102 deletions(-)
--
Generated by git-murpp 0.8.1
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
* [pbs-devel] [PATCH proxmox 1/2] pbs-api-types: make operation non-optional for maintenance-mode check
2026-01-22 15:11 [pbs-devel] [PATCH proxmox{, -backup} 0/6] fix #6716: Add support for http proxy configuration for S3 endpoints Christian Ebner
@ 2026-01-22 15:11 ` Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox 2/2] s3-client: add proxy configuration as optional client option Christian Ebner
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2026-01-22 15:11 UTC (permalink / raw)
To: pbs-devel
The maintenance-mode check verifies whether the datastore is available
under the currently set maintenance mode, based on the operation the
caller would like to perform.

To force callers to explicitly state the operation when performing
datastore operations, make the operation parameter non-optional for the
maintenance-mode check.
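As a rough illustration of the resulting API, a compilable sketch with
simplified stand-ins for the pbs-api-types definitions (not the real
enums, which carry more variants and error context):

```rust
// After this patch, `check` takes `Operation` directly instead of
// `Option<Operation>`, so every caller must state its operation.

#[derive(PartialEq, Clone, Copy)]
enum Operation { Lookup, Read, Write }

#[derive(PartialEq)]
enum MaintenanceType { ReadOnly, Offline }

struct MaintenanceMode { ty: MaintenanceType }

impl MaintenanceMode {
    fn check(&self, operation: Operation) -> Result<(), String> {
        // Lookup is always fine, regardless of maintenance type.
        if operation == Operation::Lookup {
            return Ok(());
        }
        if self.ty == MaintenanceType::Offline {
            return Err("offline maintenance mode".into());
        }
        if self.ty == MaintenanceType::ReadOnly && operation == Operation::Write {
            return Err("read-only maintenance mode".into());
        }
        Ok(())
    }
}

fn main() {
    let mode = MaintenanceMode { ty: MaintenanceType::ReadOnly };
    assert!(mode.check(Operation::Lookup).is_ok());
    assert!(mode.check(Operation::Read).is_ok());
    assert!(mode.check(Operation::Write).is_err());
}
```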
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-api-types/src/maintenance.rs | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/pbs-api-types/src/maintenance.rs b/pbs-api-types/src/maintenance.rs
index 6b97ff10..5af34291 100644
--- a/pbs-api-types/src/maintenance.rs
+++ b/pbs-api-types/src/maintenance.rs
@@ -93,7 +93,7 @@ impl MaintenanceMode {
|| self.ty == MaintenanceType::Unmount
}
- pub fn check(&self, operation: Option<Operation>) -> Result<(), Error> {
+ pub fn check(&self, operation: Operation) -> Result<(), Error> {
if self.ty == MaintenanceType::Delete {
bail!("datastore is being deleted");
}
@@ -102,7 +102,7 @@ impl MaintenanceMode {
.decode_utf8()
.unwrap_or(Cow::Borrowed(""));
- if let Some(Operation::Lookup) = operation {
+ if Operation::Lookup == operation {
return Ok(());
} else if self.ty == MaintenanceType::Unmount {
bail!("datastore is being unmounted");
@@ -111,7 +111,7 @@ impl MaintenanceMode {
} else if self.ty == MaintenanceType::S3Refresh {
bail!("S3 refresh maintenance mode: {}", message);
} else if self.ty == MaintenanceType::ReadOnly {
- if let Some(Operation::Write) = operation {
+ if Operation::Write == operation {
bail!("read-only maintenance mode: {}", message);
}
}
--
2.47.3
* [pbs-devel] [PATCH proxmox 2/2] s3-client: add proxy configuration as optional client option
2026-01-22 15:11 [pbs-devel] [PATCH proxmox{, -backup} 0/6] fix #6716: Add support for http proxy configuration for S3 endpoints Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox 1/2] pbs-api-types: make operation non-optional for maintenance-mode check Christian Ebner
@ 2026-01-22 15:11 ` Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox-backup 1/4] datastore: make operation non-optional in lookups Christian Ebner
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2026-01-22 15:11 UTC (permalink / raw)
To: pbs-devel
Allow setting an HTTP proxy configuration for the client, to be used
for the traffic from and to the S3 API endpoint.
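A minimal sketch of how such an optional field is typically threaded
through to the connector, using a stand-in `ProxyConfig` rather than
the real `proxmox_http::ProxyConfig`:

```rust
// Callers that do not need a proxy pass `None`; existing behavior is
// unchanged. Only when a proxy is configured does the connector get it.

#[derive(Clone, PartialEq)]
struct ProxyConfig { host: String, port: u16 }

#[derive(Default)]
struct S3ClientOptions {
    proxy_config: Option<ProxyConfig>,
}

struct Connector { proxy: Option<ProxyConfig> }

impl Connector {
    fn new(options: &S3ClientOptions) -> Self {
        let mut connector = Connector { proxy: None };
        // Mirrors the patch: only set a proxy when one is configured.
        if let Some(proxy_config) = &options.proxy_config {
            connector.set_proxy(proxy_config.clone());
        }
        connector
    }

    fn set_proxy(&mut self, proxy: ProxyConfig) {
        self.proxy = Some(proxy);
    }
}

fn main() {
    let opts = S3ClientOptions {
        proxy_config: Some(ProxyConfig { host: "proxy.example".into(), port: 3128 }),
    };
    assert!(Connector::new(&opts).proxy.is_some());
    assert!(Connector::new(&S3ClientOptions::default()).proxy.is_none());
}
```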
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
proxmox-s3-client/examples/s3_client.rs | 1 +
proxmox-s3-client/src/client.rs | 10 +++++++++-
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/proxmox-s3-client/examples/s3_client.rs b/proxmox-s3-client/examples/s3_client.rs
index ca69971c..df523c75 100644
--- a/proxmox-s3-client/examples/s3_client.rs
+++ b/proxmox-s3-client/examples/s3_client.rs
@@ -40,6 +40,7 @@ async fn run() -> Result<(), anyhow::Error> {
put_rate_limit: None,
provider_quirks: Vec::new(),
rate_limiter_config: None,
+ proxy_config: None,
};
// Creating a client instance and connect to api endpoint
diff --git a/proxmox-s3-client/src/client.rs b/proxmox-s3-client/src/client.rs
index 83176b39..25cc77c0 100644
--- a/proxmox-s3-client/src/client.rs
+++ b/proxmox-s3-client/src/client.rs
@@ -20,7 +20,7 @@ use openssl::x509::X509StoreContextRef;
use tracing::error;
use proxmox_http::client::HttpsConnector;
-use proxmox_http::Body;
+use proxmox_http::{Body, ProxyConfig};
use proxmox_rate_limiter::{RateLimit, RateLimiter, SharedRateLimiter};
use proxmox_schema::api_types::CERT_FINGERPRINT_SHA256_SCHEMA;
@@ -100,6 +100,8 @@ pub struct S3ClientOptions {
pub provider_quirks: Vec<ProviderQuirks>,
/// Configuration options for the shared rate limiter.
pub rate_limiter_config: Option<S3RateLimiterConfig>,
+ /// Proxy configuration to be used by the client.
+ pub proxy_config: Option<ProxyConfig>,
}
impl S3ClientOptions {
@@ -110,6 +112,7 @@ impl S3ClientOptions {
bucket: Option<String>,
common_prefix: String,
rate_limiter_options: Option<S3RateLimiterOptions>,
+ proxy_config: Option<ProxyConfig>,
) -> Self {
let rate_limiter_config = rate_limiter_options.map(|options| S3RateLimiterConfig {
options,
@@ -131,6 +134,7 @@ impl S3ClientOptions {
put_rate_limit: config.put_rate_limit,
provider_quirks: config.provider_quirks.unwrap_or_default(),
rate_limiter_config,
+ proxy_config,
}
}
}
@@ -213,6 +217,10 @@ impl S3Client {
}
}
+ if let Some(proxy_config) = &options.proxy_config {
+ https_connector.set_proxy(proxy_config.clone());
+ }
+
let client = Client::builder(TokioExecutor::new()).build::<_, Body>(https_connector);
let authority_template = if let Some(port) = options.port {
--
2.47.3
* [pbs-devel] [PATCH proxmox-backup 1/4] datastore: make operation non-optional in lookups
2026-01-22 15:11 [pbs-devel] [PATCH proxmox{, -backup} 0/6] fix #6716: Add support for http proxy configuration for S3 endpoints Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox 1/2] pbs-api-types: make operation non-optional for maintenance-mode check Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox 2/2] s3-client: add proxy configuration as optional client option Christian Ebner
@ 2026-01-22 15:11 ` Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox-backup 2/4] tools: factor out node proxy config read helper Christian Ebner
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2026-01-22 15:11 UTC (permalink / raw)
To: pbs-devel
Depending on the requested operation, the datastore might not be
available: e.g. it can be in a maintenance mode that disallows read or
write access while a lookup is still fine.

All call sites should explicitly state the operation, so make the
parameter non-optional.

The only remaining public exception is DataStore::open_path(), as it is
used for an example and opens the datastore via the raw directory path
instead of relying on the PBS instance.

On the datastore itself the operation is kept as an optional internal
field, both because of the raw-access limitation above and because
datastore cloning must not fail on active-operation update errors.
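For context, a simplified sketch of the active-operation bookkeeping
that every successful lookup now performs (an in-memory map here; the
real implementation persists the counts and is shared across
processes):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Operation { Lookup, Read, Write }

// Per-(store, operation) counter: incremented on lookup, decremented
// by the datastore's drop handler, so a zero count means no task is
// actively using the store.
struct ActiveOps(HashMap<(String, Operation), i64>);

impl ActiveOps {
    fn update(&mut self, name: &str, op: Operation, count: i64) {
        *self.0.entry((name.to_string(), op)).or_insert(0) += count;
    }

    fn get(&self, name: &str, op: Operation) -> i64 {
        *self.0.get(&(name.to_string(), op)).unwrap_or(&0)
    }
}

fn main() {
    let mut ops = ActiveOps(HashMap::new());
    ops.update("store1", Operation::Read, 1);  // lookup_datastore()
    ops.update("store1", Operation::Read, -1); // Drop for DataStore
    assert_eq!(ops.get("store1", Operation::Read), 0);
}
```

With the operation always set, the increment happens unconditionally on
lookup, which is what makes the counter reliable for deciding whether a
datastore may be deleted.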
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/datastore.rs | 19 ++++------
pbs-datastore/src/snapshot_reader.rs | 2 +-
src/api2/admin/datastore.rs | 52 ++++++++++++++--------------
src/api2/admin/namespace.rs | 6 ++--
src/api2/backup/mod.rs | 2 +-
src/api2/reader/mod.rs | 2 +-
src/api2/status/mod.rs | 4 +--
src/api2/tape/backup.rs | 4 +--
src/api2/tape/restore.rs | 4 +--
src/bin/proxmox-backup-proxy.rs | 4 +--
src/server/metric_collection/mod.rs | 2 +-
src/server/prune_job.rs | 2 +-
src/server/pull.rs | 4 +--
src/server/push.rs | 2 +-
src/server/verify_job.rs | 2 +-
15 files changed, 52 insertions(+), 59 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 7ad3d917d..b77567e51 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -464,10 +464,7 @@ impl DataStore {
Ok(())
}
- pub fn lookup_datastore(
- name: &str,
- operation: Option<Operation>,
- ) -> Result<Arc<DataStore>, Error> {
+ pub fn lookup_datastore(name: &str, operation: Operation) -> Result<Arc<DataStore>, Error> {
// Avoid TOCTOU between checking maintenance mode and updating active operation counter, as
// we use it to decide whether it is okay to delete the datastore.
let _config_lock = pbs_config::datastore::lock_config()?;
@@ -495,12 +492,10 @@ impl DataStore {
let chunk_store = if let Some(datastore) = &entry {
// Re-use DataStoreImpl
if datastore.config_generation == gen_num && gen_num.is_some() {
- if let Some(operation) = operation {
- update_active_operations(name, operation, 1)?;
- }
+ update_active_operations(name, operation, 1)?;
return Ok(Arc::new(Self {
inner: Arc::clone(datastore),
- operation,
+ operation: Some(operation),
}));
}
Arc::clone(&datastore.chunk_store)
@@ -521,13 +516,11 @@ impl DataStore {
let datastore = Arc::new(datastore);
datastore_cache.insert(name.to_string(), datastore.clone());
- if let Some(operation) = operation {
- update_active_operations(name, operation, 1)?;
- }
+ update_active_operations(name, operation, 1)?;
Ok(Arc::new(Self {
inner: datastore,
- operation,
+ operation: Some(operation),
}))
}
@@ -553,7 +546,7 @@ impl DataStore {
{
// the datastore drop handler does the checking if tasks are running and clears the
// cache entry, so we just have to trigger it here
- let _ = DataStore::lookup_datastore(name, Some(Operation::Lookup));
+ let _ = DataStore::lookup_datastore(name, Operation::Lookup);
}
Ok(())
diff --git a/pbs-datastore/src/snapshot_reader.rs b/pbs-datastore/src/snapshot_reader.rs
index e4608ea56..231b1f493 100644
--- a/pbs-datastore/src/snapshot_reader.rs
+++ b/pbs-datastore/src/snapshot_reader.rs
@@ -164,7 +164,7 @@ impl<F: Fn(&[u8; 32]) -> bool> Iterator for SnapshotChunkIterator<'_, F> {
let datastore = DataStore::lookup_datastore(
self.snapshot_reader.datastore_name(),
- Some(Operation::Read),
+ Operation::Read,
)?;
let order =
datastore.get_chunks_in_order(&*index, &self.skip_fn, |_| Ok(()))?;
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 88ad5d53b..a307e1488 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -83,7 +83,7 @@ fn check_privs_and_load_store(
auth_id: &Authid,
full_access_privs: u64,
partial_access_privs: u64,
- operation: Option<Operation>,
+ operation: Operation,
backup_group: &pbs_api_types::BackupGroup,
) -> Result<Arc<DataStore>, Error> {
let limited = check_ns_privs_full(store, ns, auth_id, full_access_privs, partial_access_privs)?;
@@ -134,7 +134,7 @@ pub fn list_groups(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
datastore
.iter_backup_groups(ns.clone())? // FIXME: Namespaces and recursion parameters!
@@ -251,7 +251,7 @@ pub async fn delete_group(
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_PRUNE,
- Some(Operation::Write),
+ Operation::Write,
&group,
)?;
@@ -318,7 +318,7 @@ pub async fn list_snapshot_files(
&auth_id,
PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
+ Operation::Read,
&backup_dir.group,
)?;
@@ -372,7 +372,7 @@ pub async fn delete_snapshot(
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_PRUNE,
- Some(Operation::Write),
+ Operation::Write,
&backup_dir.group,
)?;
@@ -467,7 +467,7 @@ unsafe fn list_snapshots_blocking(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
// FIXME: filter also owner before collecting, for doing that nicely the owner should move into
// backup group and provide an error free (Err -> None) accessor
@@ -601,7 +601,7 @@ pub async fn status(
}
};
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
let (counts, gc_status) = if verbose {
let filter_owner = if store_privs & PRIV_DATASTORE_AUDIT != 0 {
@@ -724,7 +724,7 @@ pub fn verify(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
let ignore_verified = ignore_verified.unwrap_or(true);
let worker_id;
@@ -904,7 +904,7 @@ pub fn prune(
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_PRUNE,
- Some(Operation::Write),
+ Operation::Write,
&group,
)?;
@@ -1076,7 +1076,7 @@ pub fn prune_datastore(
true,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
let ns = prune_options.ns.clone().unwrap_or_default();
let worker_id = format!("{store}:{ns}");
@@ -1114,7 +1114,7 @@ pub fn start_garbage_collection(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let job = Job::new("garbage_collection", &store)
@@ -1161,7 +1161,7 @@ pub fn garbage_collection_status(
..Default::default()
};
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
let status_in_memory = datastore.last_gc_status();
let state_file = JobState::load("garbage_collection", &store)
.map_err(|err| log::error!("could not open GC statefile for {store}: {err}"))
@@ -1307,7 +1307,7 @@ pub fn download_file(
&auth_id,
PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
+ Operation::Read,
&backup_dir.group,
)?;
@@ -1392,7 +1392,7 @@ pub fn download_file_decoded(
&auth_id,
PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
+ Operation::Read,
&backup_dir_api.group,
)?;
@@ -1521,7 +1521,7 @@ pub fn upload_backup_log(
&auth_id,
0,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Write),
+ Operation::Write,
&backup_dir_api.group,
)?;
let backup_dir = datastore.backup_dir(backup_ns.clone(), backup_dir_api.clone())?;
@@ -1619,7 +1619,7 @@ pub async fn catalog(
&auth_id,
PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
+ Operation::Read,
&backup_dir.group,
)?;
@@ -1741,7 +1741,7 @@ pub fn pxar_file_download(
&auth_id,
PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
+ Operation::Read,
&backup_dir.group,
)?;
@@ -1873,7 +1873,7 @@ pub fn get_rrd_stats(
cf: RrdMode,
_param: Value,
) -> Result<Value, Error> {
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
let disk_manager = crate::tools::disks::DiskManage::new();
let mut rrd_fields = vec![
@@ -1952,7 +1952,7 @@ pub fn get_group_notes(
&auth_id,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
+ Operation::Read,
&backup_group,
)?;
@@ -2000,7 +2000,7 @@ pub fn set_group_notes(
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Write),
+ Operation::Write,
&backup_group,
)?;
@@ -2047,7 +2047,7 @@ pub fn get_notes(
&auth_id,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
+ Operation::Read,
&backup_dir.group,
)?;
@@ -2100,7 +2100,7 @@ pub fn set_notes(
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Write),
+ Operation::Write,
&backup_dir.group,
)?;
@@ -2145,7 +2145,7 @@ pub fn get_protection(
&auth_id,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
+ Operation::Read,
&backup_dir.group,
)?;
@@ -2195,7 +2195,7 @@ pub async fn set_protection(
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Write),
+ Operation::Write,
&backup_dir.group,
)?;
@@ -2249,7 +2249,7 @@ pub async fn set_backup_owner(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
let backup_group = datastore.backup_group(ns, backup_group);
let owner = backup_group.get_owner()?;
@@ -2734,7 +2734,7 @@ pub fn s3_refresh(store: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Valu
/// Performs an s3 refresh for given datastore. Expects the store to already be in maintenance mode
/// s3-refresh.
pub(crate) fn do_s3_refresh(store: &str, worker: &dyn WorkerTaskContext) -> Result<(), Error> {
- let datastore = DataStore::lookup_datastore(store, Some(Operation::Lookup))?;
+ let datastore = DataStore::lookup_datastore(store, Operation::Lookup)?;
run_maintenance_locked(store, MaintenanceType::S3Refresh, worker, || {
proxmox_async::runtime::block_on(datastore.s3_refresh())
})
diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
index 6cf88d89e..30e24d8db 100644
--- a/src/api2/admin/namespace.rs
+++ b/src/api2/admin/namespace.rs
@@ -54,7 +54,7 @@ pub fn create_namespace(
check_ns_modification_privs(&store, &ns, &auth_id)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
datastore.create_namespace(&parent, name)
}
@@ -97,7 +97,7 @@ pub fn list_namespaces(
// get result up-front to avoid cloning NS, it's relatively cheap anyway (no IO normally)
let parent_access = check_ns_privs(&store, &parent, &auth_id, NS_PRIVS_OK);
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
let iter = match datastore.recursive_iter_backup_ns_ok(parent, max_depth) {
Ok(iter) => iter,
@@ -162,7 +162,7 @@ pub fn delete_namespace(
check_ns_modification_privs(&store, &ns, &auth_id)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups)?;
if !removed_all {
diff --git a/src/api2/backup/mod.rs b/src/api2/backup/mod.rs
index 3e6b7a950..946510e85 100644
--- a/src/api2/backup/mod.rs
+++ b/src/api2/backup/mod.rs
@@ -99,7 +99,7 @@ fn upgrade_to_backup_protocol(
)
.map_err(|err| http_err!(FORBIDDEN, "{err}"))?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
let protocols = parts
.headers
diff --git a/src/api2/reader/mod.rs b/src/api2/reader/mod.rs
index f7adc366f..9262eb6cb 100644
--- a/src/api2/reader/mod.rs
+++ b/src/api2/reader/mod.rs
@@ -96,7 +96,7 @@ fn upgrade_to_backup_reader_protocol(
bail!("no permissions on /{}", acl_path.join("/"));
}
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
let backup_dir = pbs_api_types::BackupDir::deserialize(¶m)?;
diff --git a/src/api2/status/mod.rs b/src/api2/status/mod.rs
index 605072d60..885fdb0cc 100644
--- a/src/api2/status/mod.rs
+++ b/src/api2/status/mod.rs
@@ -69,7 +69,7 @@ pub async fn datastore_status(
};
if !allowed {
- if let Ok(datastore) = DataStore::lookup_datastore(store, Some(Operation::Lookup)) {
+ if let Ok(datastore) = DataStore::lookup_datastore(store, Operation::Lookup) {
if can_access_any_namespace(datastore, &auth_id, &user_info) {
list.push(DataStoreStatusListItem::empty(store, None, mount_status));
}
@@ -77,7 +77,7 @@ pub async fn datastore_status(
continue;
}
- let datastore = match DataStore::lookup_datastore(store, Some(Operation::Read)) {
+ let datastore = match DataStore::lookup_datastore(store, Operation::Read) {
Ok(datastore) => datastore,
Err(err) => {
list.push(DataStoreStatusListItem::empty(
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 16a26b83e..47e8d0209 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -152,7 +152,7 @@ pub fn do_tape_backup_job(
let worker_type = job.jobtype().to_string();
- let datastore = DataStore::lookup_datastore(&setup.store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&setup.store, Operation::Read)?;
let (config, _digest) = pbs_config::media_pool::config()?;
let pool_config: MediaPoolConfig = config.lookup("pool", &setup.pool)?;
@@ -310,7 +310,7 @@ pub fn backup(
check_backup_permission(&auth_id, &setup.store, &setup.pool, &setup.drive)?;
- let datastore = DataStore::lookup_datastore(&setup.store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&setup.store, Operation::Read)?;
let (config, _digest) = pbs_config::media_pool::config()?;
let pool_config: MediaPoolConfig = config.lookup("pool", &setup.pool)?;
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 4f2ee3db6..92529a76d 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -144,10 +144,10 @@ impl TryFrom<String> for DataStoreMap {
if let Some(index) = store.find('=') {
let mut target = store.split_off(index);
target.remove(0); // remove '='
- let datastore = DataStore::lookup_datastore(&target, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore(&target, Operation::Write)?;
map.insert(store, datastore);
} else if default.is_none() {
- default = Some(DataStore::lookup_datastore(&store, Some(Operation::Write))?);
+ default = Some(DataStore::lookup_datastore(&store, Operation::Write)?);
} else {
bail!("multiple default stores given");
}
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 870208fee..3be8e8dcf 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -530,7 +530,7 @@ async fn schedule_datastore_garbage_collection() {
{
// limit datastore scope due to Op::Lookup
- let datastore = match DataStore::lookup_datastore(&store, Some(Operation::Lookup)) {
+ let datastore = match DataStore::lookup_datastore(&store, Operation::Lookup) {
Ok(datastore) => datastore,
Err(err) => {
eprintln!("lookup_datastore failed - {err}");
@@ -573,7 +573,7 @@ async fn schedule_datastore_garbage_collection() {
Err(_) => continue, // could not get lock
};
- let datastore = match DataStore::lookup_datastore(&store, Some(Operation::Write)) {
+ let datastore = match DataStore::lookup_datastore(&store, Operation::Write) {
Ok(datastore) => datastore,
Err(err) => {
log::warn!("skipping scheduled GC on {store}, could look it up - {err}");
diff --git a/src/server/metric_collection/mod.rs b/src/server/metric_collection/mod.rs
index 9b62cbb42..7979b7632 100644
--- a/src/server/metric_collection/mod.rs
+++ b/src/server/metric_collection/mod.rs
@@ -234,7 +234,7 @@ fn collect_disk_stats_sync() -> (DiskStat, Vec<DiskStat>) {
for config in datastore_list {
if config
.get_maintenance_mode()
- .is_some_and(|mode| mode.check(Some(Operation::Read)).is_err())
+ .is_some_and(|mode| mode.check(Operation::Read).is_err())
{
continue;
}
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index 9d07558d3..bb86a323e 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -133,7 +133,7 @@ pub fn do_prune_job(
auth_id: &Authid,
schedule: Option<String>,
) -> Result<String, Error> {
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
let worker_type = job.jobtype().to_string();
let auth_id = auth_id.clone();
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 15d8b9deb..412a59e66 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -113,11 +113,11 @@ impl PullParameters {
})
} else {
Arc::new(LocalSource {
- store: DataStore::lookup_datastore(remote_store, Some(Operation::Read))?,
+ store: DataStore::lookup_datastore(remote_store, Operation::Read)?,
ns: remote_ns,
})
};
- let store = DataStore::lookup_datastore(store, Some(Operation::Write))?;
+ let store = DataStore::lookup_datastore(store, Operation::Write)?;
let backend = store.backend()?;
let target = PullTarget { store, ns, backend };
diff --git a/src/server/push.rs b/src/server/push.rs
index d7884fce2..92bbbb9fc 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -109,7 +109,7 @@ impl PushParameters {
let remove_vanished = remove_vanished.unwrap_or(false);
let encrypted_only = encrypted_only.unwrap_or(false);
let verified_only = verified_only.unwrap_or(false);
- let store = DataStore::lookup_datastore(store, Some(Operation::Read))?;
+ let store = DataStore::lookup_datastore(store, Operation::Read)?;
if !store.namespace_exists(&ns) {
bail!(
diff --git a/src/server/verify_job.rs b/src/server/verify_job.rs
index e0b03155c..2ec8c5138 100644
--- a/src/server/verify_job.rs
+++ b/src/server/verify_job.rs
@@ -15,7 +15,7 @@ pub fn do_verification_job(
schedule: Option<String>,
to_stdout: bool,
) -> Result<String, Error> {
- let datastore = DataStore::lookup_datastore(&verification_job.store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore(&verification_job.store, Operation::Read)?;
let outdated_after = verification_job.outdated_after;
let ignore_verified_snapshots = verification_job.ignore_verified.unwrap_or(true);
--
2.47.3
* [pbs-devel] [PATCH proxmox-backup 2/4] tools: factor out node proxy config read helper
2026-01-22 15:11 [pbs-devel] [PATCH proxmox{, -backup} 0/6] fix #6716: Add support for http proxy configuration for S3 endpoints Christian Ebner
` (2 preceding siblings ...)
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox-backup 1/4] datastore: make operation non-optional in lookups Christian Ebner
@ 2026-01-22 15:11 ` Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox-backup 3/4] datastore: refactor datastore lookup parameters into dedicated type Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox-backup 4/4] fix #6716: pass node http proxy config to s3 backend Christian Ebner
5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2026-01-22 15:11 UTC (permalink / raw)
To: pbs-devel
This helper will be reused for reading the node proxy config when
setting it for the s3-client as well.

No functional changes.
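The factored-out helper boils down to the following shape; the
`node_config()` below is a stand-in for `crate::config::node::config()`
and the proxy is simplified to a plain string:

```rust
// Reading the node config may fail (e.g. no config file written yet),
// in which case no proxy is used rather than propagating an error.

struct NodeConfig { http_proxy: Option<String> }

fn node_config() -> Result<(NodeConfig, [u8; 32]), String> {
    // Stand-in: pretend the node config sets an HTTP proxy.
    let config = NodeConfig {
        http_proxy: Some("http://proxy.example:3128".into()),
    };
    Ok((config, [0u8; 32]))
}

/// Read the node's HTTP proxy setting, falling back to `None` on error.
fn node_proxy_config() -> Option<String> {
    if let Ok((node_config, _digest)) = node_config() {
        node_config.http_proxy
    } else {
        None
    }
}

fn main() {
    assert_eq!(
        node_proxy_config().as_deref(),
        Some("http://proxy.example:3128")
    );
}
```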
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/node/apt.rs | 8 +-------
src/tools/mod.rs | 11 +++++++++++
2 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/src/api2/node/apt.rs b/src/api2/node/apt.rs
index c696adb1e..dd92e9f48 100644
--- a/src/api2/node/apt.rs
+++ b/src/api2/node/apt.rs
@@ -15,8 +15,6 @@ use proxmox_sys::fs::{replace_file, CreateOptions};
use pbs_api_types::{NODE_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY, UPID_SCHEMA};
-use crate::config::node;
-
#[api(
input: {
properties: {
@@ -59,11 +57,7 @@ pub fn update_apt_proxy_config(proxy_config: Option<&ProxyConfig>) -> Result<(),
}
fn read_and_update_proxy_config() -> Result<Option<ProxyConfig>, Error> {
- let proxy_config = if let Ok((node_config, _digest)) = node::config() {
- node_config.http_proxy()
- } else {
- None
- };
+ let proxy_config = crate::tools::node_proxy_config();
update_apt_proxy_config(proxy_config.as_ref())?;
Ok(proxy_config)
diff --git a/src/tools/mod.rs b/src/tools/mod.rs
index 6a975bde2..93b4d8ea4 100644
--- a/src/tools/mod.rs
+++ b/src/tools/mod.rs
@@ -13,6 +13,8 @@ use proxmox_http::{client::Client, HttpOptions, ProxyConfig};
use pbs_datastore::backup_info::{BackupDir, BackupInfo};
use pbs_datastore::manifest::BackupManifest;
+use crate::config::node;
+
pub mod config;
pub mod disks;
pub mod fs;
@@ -186,3 +188,12 @@ pub(crate) fn backup_info_to_snapshot_list_item(
}
}
}
+
+/// Read the nodes http proxy config from the node config.
+pub(super) fn node_proxy_config() -> Option<proxmox_http::ProxyConfig> {
+ if let Ok((node_config, _digest)) = node::config() {
+ node_config.http_proxy()
+ } else {
+ None
+ }
+}
--
2.47.3
* [pbs-devel] [PATCH proxmox-backup 3/4] datastore: refactor datastore lookup parameters into dedicated type
2026-01-22 15:11 [pbs-devel] [PATCH proxmox{, -backup} 0/6] fix #6716: Add support for http proxy configuration for S3 endpoints Christian Ebner
` (3 preceding siblings ...)
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox-backup 2/4] tools: factor out node proxy config read helper Christian Ebner
@ 2026-01-22 15:11 ` Christian Ebner
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox-backup 4/4] fix #6716: pass node http proxy config to s3 backend Christian Ebner
5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2026-01-22 15:11 UTC (permalink / raw)
To: pbs-devel
This allows the lookup to be easily extended with a callback method
for reading the node's proxy config whenever the backend implementation
requires it.

Move the parameters into a central helper type so individual
DataStore::lookup_datastore() calls do not need to set common future
parameters individually.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
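The refactor boils down to a parameter-object pattern. A minimal, standalone sketch of the idea (type and helper names mirror the patch, but this is an illustration, not the exact tree):

```rust
// Parameter object bundling everything a datastore lookup needs; new
// common parameters only have to be added here and in the helper below.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Operation {
    Read,
    Write,
    Lookup,
}

struct DataStoreLookup<'a> {
    name: &'a str,
    operation: Operation,
}

impl<'a> DataStoreLookup<'a> {
    fn with(name: &'a str, operation: Operation) -> Self {
        Self { name, operation }
    }
}

// Central convenience helper, so individual call sites stay one-liners.
fn lookup_with(name: &str, operation: Operation) -> DataStoreLookup<'_> {
    DataStoreLookup::with(name, operation)
}

fn main() {
    let lookup = lookup_with("store1", Operation::Read);
    println!("lookup '{}' ({:?})", lookup.name, lookup.operation);
}
```

Call sites like `DataStore::lookup_datastore(lookup_with(&store, Operation::Read))` then remain a single expression.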
pbs-datastore/src/datastore.rs | 36 ++++++++++++++++++----------
pbs-datastore/src/lib.rs | 2 +-
pbs-datastore/src/snapshot_reader.rs | 6 +++--
src/api2/admin/datastore.rs | 26 ++++++++++----------
src/api2/admin/namespace.rs | 9 ++++---
src/api2/backup/mod.rs | 3 ++-
src/api2/reader/mod.rs | 3 ++-
src/api2/status/mod.rs | 6 +++--
src/api2/tape/backup.rs | 6 +++--
src/api2/tape/restore.rs | 6 +++--
src/bin/proxmox-backup-proxy.rs | 8 ++++---
src/server/prune_job.rs | 3 ++-
src/server/pull.rs | 7 ++++--
src/server/push.rs | 3 ++-
src/server/verify_job.rs | 3 ++-
src/tools/mod.rs | 10 +++++++-
16 files changed, 90 insertions(+), 47 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index b77567e51..6fa533e2f 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -195,6 +195,17 @@ impl DataStoreImpl {
}
}
+pub struct DataStoreLookup<'a> {
+ name: &'a str,
+ operation: Operation,
+}
+
+impl<'a> DataStoreLookup<'a> {
+ pub fn with(name: &'a str, operation: Operation) -> Self {
+ Self { name, operation }
+ }
+}
+
pub struct DataStore {
inner: Arc<DataStoreImpl>,
operation: Option<Operation>,
@@ -464,18 +475,18 @@ impl DataStore {
Ok(())
}
- pub fn lookup_datastore(name: &str, operation: Operation) -> Result<Arc<DataStore>, Error> {
+ pub fn lookup_datastore(lookup: DataStoreLookup) -> Result<Arc<DataStore>, Error> {
// Avoid TOCTOU between checking maintenance mode and updating active operation counter, as
// we use it to decide whether it is okay to delete the datastore.
let _config_lock = pbs_config::datastore::lock_config()?;
// Get the current datastore.cfg generation number and cached config
let (section_config, gen_num) = datastore_section_config_cached(true)?;
- let config: DataStoreConfig = section_config.lookup("datastore", name)?;
+ let config: DataStoreConfig = section_config.lookup("datastore", lookup.name)?;
if let Some(maintenance_mode) = config.get_maintenance_mode() {
- if let Err(error) = maintenance_mode.check(operation) {
- bail!("datastore '{name}' is unavailable: {error}");
+ if let Err(error) = maintenance_mode.check(lookup.operation) {
+ bail!("datastore '{}' is unavailable: {error}", lookup.name);
}
}
@@ -486,16 +497,16 @@ impl DataStore {
bail!("datastore '{}' is not mounted", config.name);
}
- let entry = datastore_cache.get(name);
+ let entry = datastore_cache.get(lookup.name);
// reuse chunk store so that we keep using the same process locker instance!
let chunk_store = if let Some(datastore) = &entry {
// Re-use DataStoreImpl
if datastore.config_generation == gen_num && gen_num.is_some() {
- update_active_operations(name, operation, 1)?;
+ update_active_operations(lookup.name, lookup.operation, 1)?;
return Ok(Arc::new(Self {
inner: Arc::clone(datastore),
- operation: Some(operation),
+ operation: Some(lookup.operation),
}));
}
Arc::clone(&datastore.chunk_store)
@@ -505,7 +516,7 @@ impl DataStore {
.parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
)?;
Arc::new(ChunkStore::open(
- name,
+ lookup.name,
config.absolute_path(),
tuning.sync_level.unwrap_or_default(),
)?)
@@ -514,13 +525,13 @@ impl DataStore {
let datastore = DataStore::with_store_and_config(chunk_store, config, gen_num)?;
let datastore = Arc::new(datastore);
- datastore_cache.insert(name.to_string(), datastore.clone());
+ datastore_cache.insert(lookup.name.to_string(), datastore.clone());
- update_active_operations(name, operation, 1)?;
+ update_active_operations(lookup.name, lookup.operation, 1)?;
Ok(Arc::new(Self {
inner: datastore,
- operation: Some(operation),
+ operation: Some(lookup.operation),
}))
}
@@ -546,7 +557,8 @@ impl DataStore {
{
// the datastore drop handler does the checking if tasks are running and clears the
// cache entry, so we just have to trigger it here
- let _ = DataStore::lookup_datastore(name, Operation::Lookup);
+ let lookup = DataStoreLookup::with(name, Operation::Lookup);
+ let _ = DataStore::lookup_datastore(lookup);
}
Ok(())
diff --git a/pbs-datastore/src/lib.rs b/pbs-datastore/src/lib.rs
index 1f7c54ae8..8770a09ca 100644
--- a/pbs-datastore/src/lib.rs
+++ b/pbs-datastore/src/lib.rs
@@ -217,7 +217,7 @@ pub use store_progress::StoreProgress;
mod datastore;
pub use datastore::{
check_backup_owner, ensure_datastore_is_mounted, get_datastore_mount_status, DataStore,
- DatastoreBackend, S3_DATASTORE_IN_USE_MARKER,
+ DataStoreLookup, DatastoreBackend, S3_DATASTORE_IN_USE_MARKER,
};
mod hierarchy;
diff --git a/pbs-datastore/src/snapshot_reader.rs b/pbs-datastore/src/snapshot_reader.rs
index 231b1f493..d522a02d7 100644
--- a/pbs-datastore/src/snapshot_reader.rs
+++ b/pbs-datastore/src/snapshot_reader.rs
@@ -16,6 +16,7 @@ use pbs_api_types::{
};
use crate::backup_info::BackupDir;
+use crate::datastore::DataStoreLookup;
use crate::dynamic_index::DynamicIndexReader;
use crate::fixed_index::FixedIndexReader;
use crate::index::IndexFile;
@@ -162,10 +163,11 @@ impl<F: Fn(&[u8; 32]) -> bool> Iterator for SnapshotChunkIterator<'_, F> {
),
};
- let datastore = DataStore::lookup_datastore(
+ let lookup = DataStoreLookup::with(
self.snapshot_reader.datastore_name(),
Operation::Read,
- )?;
+ );
+ let datastore = DataStore::lookup_datastore(lookup)?;
let order =
datastore.get_chunks_in_order(&*index, &self.skip_fn, |_| Ok(()))?;
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index a307e1488..f5bd3f0f4 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -71,7 +71,9 @@ use crate::api2::backup::optional_ns_param;
use crate::api2::node::rrd::create_value_from_rrd;
use crate::backup::{check_ns_privs_full, ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK};
use crate::server::jobstate::{compute_schedule_status, Job, JobState};
-use crate::tools::{backup_info_to_snapshot_list_item, get_all_snapshot_files, read_backup_index};
+use crate::tools::{
+ backup_info_to_snapshot_list_item, get_all_snapshot_files, lookup_with, read_backup_index,
+};
// helper to unify common sequence of checks:
// 1. check privs on NS (full or limited access)
@@ -88,7 +90,7 @@ fn check_privs_and_load_store(
) -> Result<Arc<DataStore>, Error> {
let limited = check_ns_privs_full(store, ns, auth_id, full_access_privs, partial_access_privs)?;
- let datastore = DataStore::lookup_datastore(store, operation)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(store, operation))?;
if limited {
let owner = datastore.get_owner(ns, backup_group)?;
@@ -134,7 +136,7 @@ pub fn list_groups(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Read))?;
datastore
.iter_backup_groups(ns.clone())? // FIXME: Namespaces and recursion parameters!
@@ -467,7 +469,7 @@ unsafe fn list_snapshots_blocking(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Read))?;
// FIXME: filter also owner before collecting, for doing that nicely the owner should move into
// backup group and provide an error free (Err -> None) accessor
@@ -601,7 +603,7 @@ pub async fn status(
}
};
- let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Read))?;
let (counts, gc_status) = if verbose {
let filter_owner = if store_privs & PRIV_DATASTORE_AUDIT != 0 {
@@ -724,7 +726,7 @@ pub fn verify(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Read))?;
let ignore_verified = ignore_verified.unwrap_or(true);
let worker_id;
@@ -1076,7 +1078,7 @@ pub fn prune_datastore(
true,
)?;
- let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Write))?;
let ns = prune_options.ns.clone().unwrap_or_default();
let worker_id = format!("{store}:{ns}");
@@ -1114,7 +1116,7 @@ pub fn start_garbage_collection(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
- let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Write))?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let job = Job::new("garbage_collection", &store)
@@ -1161,7 +1163,7 @@ pub fn garbage_collection_status(
..Default::default()
};
- let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Read))?;
let status_in_memory = datastore.last_gc_status();
let state_file = JobState::load("garbage_collection", &store)
.map_err(|err| log::error!("could not open GC statefile for {store}: {err}"))
@@ -1873,7 +1875,7 @@ pub fn get_rrd_stats(
cf: RrdMode,
_param: Value,
) -> Result<Value, Error> {
- let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Read))?;
let disk_manager = crate::tools::disks::DiskManage::new();
let mut rrd_fields = vec![
@@ -2249,7 +2251,7 @@ pub async fn set_backup_owner(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(&store, Operation::Write))?;
let backup_group = datastore.backup_group(ns, backup_group);
let owner = backup_group.get_owner()?;
@@ -2734,7 +2736,7 @@ pub fn s3_refresh(store: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Valu
/// Performs an s3 refresh for given datastore. Expects the store to already be in maintenance mode
/// s3-refresh.
pub(crate) fn do_s3_refresh(store: &str, worker: &dyn WorkerTaskContext) -> Result<(), Error> {
- let datastore = DataStore::lookup_datastore(store, Operation::Lookup)?;
+ let datastore = DataStore::lookup_datastore(lookup_with(store, Operation::Lookup))?;
run_maintenance_locked(store, MaintenanceType::S3Refresh, worker, || {
proxmox_async::runtime::block_on(datastore.s3_refresh())
})
diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
index 30e24d8db..c885ab540 100644
--- a/src/api2/admin/namespace.rs
+++ b/src/api2/admin/namespace.rs
@@ -54,7 +54,8 @@ pub fn create_namespace(
check_ns_modification_privs(&store, &ns, &auth_id)?;
- let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
+ let lookup = crate::tools::lookup_with(&store, Operation::Write);
+ let datastore = DataStore::lookup_datastore(lookup)?;
datastore.create_namespace(&parent, name)
}
@@ -97,7 +98,8 @@ pub fn list_namespaces(
// get result up-front to avoid cloning NS, it's relatively cheap anyway (no IO normally)
let parent_access = check_ns_privs(&store, &parent, &auth_id, NS_PRIVS_OK);
- let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
+ let lookup = crate::tools::lookup_with(&store, Operation::Read);
+ let datastore = DataStore::lookup_datastore(lookup)?;
let iter = match datastore.recursive_iter_backup_ns_ok(parent, max_depth) {
Ok(iter) => iter,
@@ -162,7 +164,8 @@ pub fn delete_namespace(
check_ns_modification_privs(&store, &ns, &auth_id)?;
- let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
+ let lookup = crate::tools::lookup_with(&store, Operation::Write);
+ let datastore = DataStore::lookup_datastore(lookup)?;
let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups)?;
if !removed_all {
diff --git a/src/api2/backup/mod.rs b/src/api2/backup/mod.rs
index 946510e85..6708f3da3 100644
--- a/src/api2/backup/mod.rs
+++ b/src/api2/backup/mod.rs
@@ -99,7 +99,8 @@ fn upgrade_to_backup_protocol(
)
.map_err(|err| http_err!(FORBIDDEN, "{err}"))?;
- let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
+ let lookup = crate::tools::lookup_with(&store, Operation::Write);
+ let datastore = DataStore::lookup_datastore(lookup)?;
let protocols = parts
.headers
diff --git a/src/api2/reader/mod.rs b/src/api2/reader/mod.rs
index 9262eb6cb..a814ba5f7 100644
--- a/src/api2/reader/mod.rs
+++ b/src/api2/reader/mod.rs
@@ -96,7 +96,8 @@ fn upgrade_to_backup_reader_protocol(
bail!("no permissions on /{}", acl_path.join("/"));
}
- let datastore = DataStore::lookup_datastore(&store, Operation::Read)?;
+ let lookup = crate::tools::lookup_with(&store, Operation::Read);
+ let datastore = DataStore::lookup_datastore(lookup)?;
let backup_dir = pbs_api_types::BackupDir::deserialize(¶m)?;
diff --git a/src/api2/status/mod.rs b/src/api2/status/mod.rs
index 885fdb0cc..43bb95d19 100644
--- a/src/api2/status/mod.rs
+++ b/src/api2/status/mod.rs
@@ -69,7 +69,8 @@ pub async fn datastore_status(
};
if !allowed {
- if let Ok(datastore) = DataStore::lookup_datastore(store, Operation::Lookup) {
+ let lookup = crate::tools::lookup_with(store, Operation::Lookup);
+ if let Ok(datastore) = DataStore::lookup_datastore(lookup) {
if can_access_any_namespace(datastore, &auth_id, &user_info) {
list.push(DataStoreStatusListItem::empty(store, None, mount_status));
}
@@ -77,7 +78,8 @@ pub async fn datastore_status(
continue;
}
- let datastore = match DataStore::lookup_datastore(store, Operation::Read) {
+ let lookup = crate::tools::lookup_with(store, Operation::Read);
+ let datastore = match DataStore::lookup_datastore(lookup) {
Ok(datastore) => datastore,
Err(err) => {
list.push(DataStoreStatusListItem::empty(
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 47e8d0209..c254c6d8b 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -152,7 +152,8 @@ pub fn do_tape_backup_job(
let worker_type = job.jobtype().to_string();
- let datastore = DataStore::lookup_datastore(&setup.store, Operation::Read)?;
+ let lookup = crate::tools::lookup_with(&setup.store, Operation::Read);
+ let datastore = DataStore::lookup_datastore(lookup)?;
let (config, _digest) = pbs_config::media_pool::config()?;
let pool_config: MediaPoolConfig = config.lookup("pool", &setup.pool)?;
@@ -310,7 +311,8 @@ pub fn backup(
check_backup_permission(&auth_id, &setup.store, &setup.pool, &setup.drive)?;
- let datastore = DataStore::lookup_datastore(&setup.store, Operation::Read)?;
+ let lookup = crate::tools::lookup_with(&setup.store, Operation::Read);
+ let datastore = DataStore::lookup_datastore(lookup)?;
let (config, _digest) = pbs_config::media_pool::config()?;
let pool_config: MediaPoolConfig = config.lookup("pool", &setup.pool)?;
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 92529a76d..4356cf748 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -144,10 +144,12 @@ impl TryFrom<String> for DataStoreMap {
if let Some(index) = store.find('=') {
let mut target = store.split_off(index);
target.remove(0); // remove '='
- let datastore = DataStore::lookup_datastore(&target, Operation::Write)?;
+ let lookup = crate::tools::lookup_with(&target, Operation::Write);
+ let datastore = DataStore::lookup_datastore(lookup)?;
map.insert(store, datastore);
} else if default.is_none() {
- default = Some(DataStore::lookup_datastore(&store, Operation::Write)?);
+ let lookup = crate::tools::lookup_with(&store, Operation::Write);
+ default = Some(DataStore::lookup_datastore(lookup)?);
} else {
bail!("multiple default stores given");
}
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 3be8e8dcf..3014d3092 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -47,7 +47,7 @@ use pbs_api_types::{
use proxmox_backup::auth_helpers::*;
use proxmox_backup::config;
use proxmox_backup::server::{self, metric_collection};
-use proxmox_backup::tools::PROXMOX_BACKUP_TCP_KEEPALIVE_TIME;
+use proxmox_backup::tools::{lookup_with, PROXMOX_BACKUP_TCP_KEEPALIVE_TIME};
use proxmox_backup::api2::tape::backup::do_tape_backup_job;
use proxmox_backup::server::do_prune_job;
@@ -530,7 +530,8 @@ async fn schedule_datastore_garbage_collection() {
{
// limit datastore scope due to Op::Lookup
- let datastore = match DataStore::lookup_datastore(&store, Operation::Lookup) {
+ let lookup = lookup_with(&store, Operation::Lookup);
+ let datastore = match DataStore::lookup_datastore(lookup) {
Ok(datastore) => datastore,
Err(err) => {
eprintln!("lookup_datastore failed - {err}");
@@ -573,7 +574,8 @@ async fn schedule_datastore_garbage_collection() {
Err(_) => continue, // could not get lock
};
- let datastore = match DataStore::lookup_datastore(&store, Operation::Write) {
+ let lookup = lookup_with(&store, Operation::Write);
+ let datastore = match DataStore::lookup_datastore(lookup) {
Ok(datastore) => datastore,
Err(err) => {
log::warn!("skipping scheduled GC on {store}, could look it up - {err}");
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index bb86a323e..ca5c67541 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -133,7 +133,8 @@ pub fn do_prune_job(
auth_id: &Authid,
schedule: Option<String>,
) -> Result<String, Error> {
- let datastore = DataStore::lookup_datastore(&store, Operation::Write)?;
+ let lookup = crate::tools::lookup_with(&store, Operation::Write);
+ let datastore = DataStore::lookup_datastore(lookup)?;
let worker_type = job.jobtype().to_string();
let auth_id = auth_id.clone();
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 412a59e66..dece52f34 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -112,12 +112,15 @@ impl PullParameters {
client,
})
} else {
+ let lookup = crate::tools::lookup_with(remote_store, Operation::Read);
+ let store = DataStore::lookup_datastore(lookup)?;
Arc::new(LocalSource {
- store: DataStore::lookup_datastore(remote_store, Operation::Read)?,
+ store,
ns: remote_ns,
})
};
- let store = DataStore::lookup_datastore(store, Operation::Write)?;
+ let lookup = crate::tools::lookup_with(store, Operation::Write);
+ let store = DataStore::lookup_datastore(lookup)?;
let backend = store.backend()?;
let target = PullTarget { store, ns, backend };
diff --git a/src/server/push.rs b/src/server/push.rs
index 92bbbb9fc..2d335f559 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -109,7 +109,8 @@ impl PushParameters {
let remove_vanished = remove_vanished.unwrap_or(false);
let encrypted_only = encrypted_only.unwrap_or(false);
let verified_only = verified_only.unwrap_or(false);
- let store = DataStore::lookup_datastore(store, Operation::Read)?;
+ let lookup = crate::tools::lookup_with(store, Operation::Read);
+ let store = DataStore::lookup_datastore(lookup)?;
if !store.namespace_exists(&ns) {
bail!(
diff --git a/src/server/verify_job.rs b/src/server/verify_job.rs
index 2ec8c5138..ab14c7389 100644
--- a/src/server/verify_job.rs
+++ b/src/server/verify_job.rs
@@ -15,7 +15,8 @@ pub fn do_verification_job(
schedule: Option<String>,
to_stdout: bool,
) -> Result<String, Error> {
- let datastore = DataStore::lookup_datastore(&verification_job.store, Operation::Read)?;
+ let lookup = crate::tools::lookup_with(&verification_job.store, Operation::Read);
+ let datastore = DataStore::lookup_datastore(lookup)?;
let outdated_after = verification_job.outdated_after;
let ignore_verified_snapshots = verification_job.ignore_verified.unwrap_or(true);
diff --git a/src/tools/mod.rs b/src/tools/mod.rs
index 93b4d8ea4..4e9f9928c 100644
--- a/src/tools/mod.rs
+++ b/src/tools/mod.rs
@@ -6,12 +6,14 @@ use anyhow::{bail, Error};
use std::collections::HashSet;
use pbs_api_types::{
- Authid, BackupContent, CryptMode, SnapshotListItem, SnapshotVerifyState, MANIFEST_BLOB_NAME,
+ Authid, BackupContent, CryptMode, Operation, SnapshotListItem, SnapshotVerifyState,
+ MANIFEST_BLOB_NAME,
};
use proxmox_http::{client::Client, HttpOptions, ProxyConfig};
use pbs_datastore::backup_info::{BackupDir, BackupInfo};
use pbs_datastore::manifest::BackupManifest;
+use pbs_datastore::DataStoreLookup;
use crate::config::node;
@@ -197,3 +199,9 @@ pub(super) fn node_proxy_config() -> Option<proxmox_http::ProxyConfig> {
None
}
}
+
+/// Construct a `DataStoreLookup` for the given datastore name and operation.
+#[inline(always)]
+pub fn lookup_with<'a>(name: &'a str, operation: Operation) -> DataStoreLookup<'a> {
+ DataStoreLookup::with(name, operation)
+}
--
2.47.3
* [pbs-devel] [PATCH proxmox-backup 4/4] fix #6716: pass node http proxy config to s3 backend
2026-01-22 15:11 [pbs-devel] [PATCH proxmox{, -backup} 0/6] fix #6716: Add support for http proxy configuration for S3 endpoints Christian Ebner
` (4 preceding siblings ...)
2026-01-22 15:11 ` [pbs-devel] [PATCH proxmox-backup 3/4] datastore: refactor datastore lookup parameters into dedicated type Christian Ebner
@ 2026-01-22 15:11 ` Christian Ebner
5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2026-01-22 15:11 UTC (permalink / raw)
To: pbs-devel
To avoid crossing crate boundaries, reading the http proxy settings
from the node config directly is not an option. Passing them in
unconditionally must also be avoided, so the node config is not read on
every datastore lookup, in particular for non-s3 datastores which never
need it anyway.
Instead, pass a callback method to the datastore on instantiation,
allowing its backend implementation to fetch the node's proxy config
only when it is actually needed to instantiate the s3-client.
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=6716
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
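The callback indirection can be sketched in isolation: a plain fn pointer behind an `Arc<Option<...>>`, so the lower-level crate can fetch the proxy config lazily without depending on the node config crate (illustrative stand-in types and names, not the exact code):

```rust
use std::sync::Arc;

// Stand-in for proxmox_http::ProxyConfig.
#[derive(Clone, Debug, PartialEq)]
struct ProxyConfig {
    url: String,
}

// Plain fn pointer, so it is Copy and needs no boxing.
type ProxyConfigCallback = fn() -> Option<ProxyConfig>;

struct Backend {
    // None when no callback was injected (e.g. internal lookups).
    proxy_config_callback: Arc<Option<ProxyConfigCallback>>,
}

impl Backend {
    // The callback only runs when a client is actually created, so the
    // provider side is never consulted for backends that don't need it.
    fn proxy_config(&self) -> Option<ProxyConfig> {
        (*self.proxy_config_callback).and_then(|cb| cb())
    }
}

// Hypothetical provider living in the higher-level crate.
fn node_proxy_config() -> Option<ProxyConfig> {
    Some(ProxyConfig {
        url: "http://proxy.example:3128".to_string(),
    })
}

fn main() {
    let with_cb = Backend {
        proxy_config_callback: Arc::new(Some(node_proxy_config)),
    };
    let without_cb = Backend {
        proxy_config_callback: Arc::new(None),
    };
    println!("{:?} / {:?}", with_cb.proxy_config(), without_cb.proxy_config());
}
```

Since `Option<fn() -> _>` is `Copy`, dereferencing through the `Arc` is cheap and no cloning of the callback itself is involved.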
pbs-datastore/src/datastore.rs | 51 +++++++++++++++++----
pbs-datastore/src/snapshot_reader.rs | 14 ++++--
src/api2/admin/s3.rs | 2 +
src/api2/config/datastore.rs | 16 +++++--
src/api2/config/s3.rs | 11 ++++-
src/tape/pool_writer/new_chunks_iterator.rs | 15 +++---
src/tools/mod.rs | 3 +-
7 files changed, 88 insertions(+), 24 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 6fa533e2f..efd747367 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -11,6 +11,7 @@ use http_body_util::BodyExt;
use hyper::body::Bytes;
use nix::unistd::{unlinkat, UnlinkatFlags};
use pbs_tools::lru_cache::LruCache;
+use proxmox_http::ProxyConfig;
use tokio::io::AsyncWriteExt;
use tracing::{info, warn};
@@ -195,20 +196,32 @@ impl DataStoreImpl {
}
}
+pub type ProxyConfigCallback = fn() -> Option<ProxyConfig>;
+
pub struct DataStoreLookup<'a> {
name: &'a str,
operation: Operation,
+ proxy_config_callback: Arc<Option<ProxyConfigCallback>>,
}
impl<'a> DataStoreLookup<'a> {
- pub fn with(name: &'a str, operation: Operation) -> Self {
- Self { name, operation }
+ pub fn with(
+ name: &'a str,
+ operation: Operation,
+ proxy_config_callback: Arc<Option<ProxyConfigCallback>>,
+ ) -> Self {
+ Self {
+ name,
+ operation,
+ proxy_config_callback,
+ }
}
}
pub struct DataStore {
inner: Arc<DataStoreImpl>,
operation: Option<Operation>,
+ proxy_config_callback: Arc<Option<ProxyConfigCallback>>,
}
impl Clone for DataStore {
@@ -224,6 +237,7 @@ impl Clone for DataStore {
DataStore {
inner: self.inner.clone(),
operation: new_operation,
+ proxy_config_callback: Arc::clone(&self.proxy_config_callback),
}
}
}
@@ -409,6 +423,7 @@ impl DataStore {
Arc::new(Self {
inner: unsafe { DataStoreImpl::new_test() },
operation: None,
+ proxy_config_callback: Arc::new(None),
})
}
@@ -437,6 +452,7 @@ impl DataStore {
user: pbs_config::backup_user()?,
base_path: S3_CLIENT_RATE_LIMITER_BASE_PATH.into(),
};
+ let proxy_config = self.proxy_config_callback.and_then(|cb| cb());
let options = S3ClientOptions::from_config(
config.config,
@@ -444,6 +460,7 @@ impl DataStore {
Some(bucket),
self.name().to_owned(),
Some(rate_limiter_options),
+ proxy_config,
);
let s3_client = S3Client::new(options)?;
DatastoreBackend::S3(Arc::new(s3_client))
@@ -507,6 +524,7 @@ impl DataStore {
return Ok(Arc::new(Self {
inner: Arc::clone(datastore),
operation: Some(lookup.operation),
+ proxy_config_callback: Arc::clone(&lookup.proxy_config_callback),
}));
}
Arc::clone(&datastore.chunk_store)
@@ -532,6 +550,7 @@ impl DataStore {
Ok(Arc::new(Self {
inner: datastore,
operation: Some(lookup.operation),
+ proxy_config_callback: Arc::clone(&lookup.proxy_config_callback),
}))
}
@@ -557,7 +576,7 @@ impl DataStore {
{
// the datastore drop handler does the checking if tasks are running and clears the
// cache entry, so we just have to trigger it here
- let lookup = DataStoreLookup::with(name, Operation::Lookup);
+ let lookup = DataStoreLookup::with(name, Operation::Lookup, Arc::new(None));
let _ = DataStore::lookup_datastore(lookup);
}
@@ -617,7 +636,11 @@ impl DataStore {
update_active_operations(&name, operation, 1)?;
}
- Ok(Arc::new(Self { inner, operation }))
+ Ok(Arc::new(Self {
+ inner,
+ operation,
+ proxy_config_callback: Arc::new(None),
+ }))
}
fn with_store_and_config(
@@ -2352,7 +2375,11 @@ impl DataStore {
/// Destroy a datastore. This requires that there are no active operations on the datastore.
///
/// This is a synchronous operation and should be run in a worker-thread.
- pub fn destroy(name: &str, destroy_data: bool) -> Result<(), Error> {
+ pub fn destroy(
+ name: &str,
+ destroy_data: bool,
+ proxy_config_callback: ProxyConfigCallback,
+ ) -> Result<(), Error> {
let config_lock = pbs_config::datastore::lock_config()?;
let (mut config, _digest) = pbs_config::datastore::config()?;
@@ -2401,9 +2428,10 @@ impl DataStore {
}
}
- if let (_backend, Some(s3_client)) =
- Self::s3_client_and_backend_from_datastore_config(&datastore_config)?
- {
+ if let (_backend, Some(s3_client)) = Self::s3_client_and_backend_from_datastore_config(
+ &datastore_config,
+ proxy_config_callback,
+ )? {
// Delete all objects within the datastore prefix
let prefix = S3PathPrefix::Some(String::default());
let delete_objects_error =
@@ -2418,7 +2446,10 @@ impl DataStore {
remove(".chunks", &mut ok);
}
} else if let (_backend, Some(s3_client)) =
- Self::s3_client_and_backend_from_datastore_config(&datastore_config)?
+ Self::s3_client_and_backend_from_datastore_config(
+ &datastore_config,
+ proxy_config_callback,
+ )?
{
// Only delete in-use marker so datastore can be re-imported
let object_key = S3ObjectKey::try_from(S3_DATASTORE_IN_USE_MARKER)
@@ -2637,6 +2668,7 @@ impl DataStore {
pub fn s3_client_and_backend_from_datastore_config(
datastore_config: &DataStoreConfig,
+ proxy_config_callback: ProxyConfigCallback,
) -> Result<(DatastoreBackendType, Option<S3Client>), Error> {
let backend_config: DatastoreBackendConfig =
datastore_config.backend.as_deref().unwrap_or("").parse()?;
@@ -2671,6 +2703,7 @@ impl DataStore {
Some(bucket),
datastore_config.name.to_owned(),
Some(rate_limiter_options),
+ proxy_config_callback(),
);
let s3_client = S3Client::new(options)
.context("failed to create s3 client")
diff --git a/pbs-datastore/src/snapshot_reader.rs b/pbs-datastore/src/snapshot_reader.rs
index d522a02d7..ddae632cc 100644
--- a/pbs-datastore/src/snapshot_reader.rs
+++ b/pbs-datastore/src/snapshot_reader.rs
@@ -16,7 +16,7 @@ use pbs_api_types::{
};
use crate::backup_info::BackupDir;
-use crate::datastore::DataStoreLookup;
+use crate::datastore::{DataStoreLookup, ProxyConfigCallback};
use crate::dynamic_index::DynamicIndexReader;
use crate::fixed_index::FixedIndexReader;
use crate::index::IndexFile;
@@ -125,8 +125,9 @@ impl SnapshotReader {
pub fn chunk_iterator<F: Fn(&[u8; 32]) -> bool>(
&'_ self,
skip_fn: F,
+ proxy_config_callback: Arc<Option<ProxyConfigCallback>>,
) -> Result<SnapshotChunkIterator<'_, F>, Error> {
- SnapshotChunkIterator::new(self, skip_fn)
+ SnapshotChunkIterator::new(self, skip_fn, proxy_config_callback)
}
}
@@ -139,6 +140,7 @@ pub struct SnapshotChunkIterator<'a, F: Fn(&[u8; 32]) -> bool> {
snapshot_reader: &'a SnapshotReader,
todo_list: Vec<String>,
skip_fn: F,
+ proxy_config_callback: Arc<Option<ProxyConfigCallback>>,
#[allow(clippy::type_complexity)]
current_index: Option<(Rc<Box<dyn IndexFile + Send>>, usize, Vec<(usize, u64)>)>,
}
@@ -166,6 +168,7 @@ impl<F: Fn(&[u8; 32]) -> bool> Iterator for SnapshotChunkIterator<'_, F> {
let lookup = DataStoreLookup::with(
self.snapshot_reader.datastore_name(),
Operation::Read,
+ Arc::clone(&self.proxy_config_callback),
);
let datastore = DataStore::lookup_datastore(lookup)?;
let order =
@@ -192,7 +195,11 @@ impl<F: Fn(&[u8; 32]) -> bool> Iterator for SnapshotChunkIterator<'_, F> {
}
impl<'a, F: Fn(&[u8; 32]) -> bool> SnapshotChunkIterator<'a, F> {
- pub fn new(snapshot_reader: &'a SnapshotReader, skip_fn: F) -> Result<Self, Error> {
+ pub fn new(
+ snapshot_reader: &'a SnapshotReader,
+ skip_fn: F,
+ proxy_config_callback: Arc<Option<ProxyConfigCallback>>,
+ ) -> Result<Self, Error> {
let mut todo_list = Vec::new();
for filename in snapshot_reader.file_list() {
@@ -209,6 +216,7 @@ impl<'a, F: Fn(&[u8; 32]) -> bool> SnapshotChunkIterator<'a, F> {
todo_list,
current_index: None,
skip_fn,
+ proxy_config_callback,
})
}
}
diff --git a/src/api2/admin/s3.rs b/src/api2/admin/s3.rs
index 73388281b..ea3b15979 100644
--- a/src/api2/admin/s3.rs
+++ b/src/api2/admin/s3.rs
@@ -47,6 +47,7 @@ pub async fn check(
let config: S3ClientConf = config
.lookup(S3_CFG_TYPE_ID, &s3_client_id)
.context("config lookup failed")?;
+ let http_proxy = crate::tools::node_proxy_config();
let store_prefix = store_prefix.unwrap_or_default();
let options = S3ClientOptions::from_config(
@@ -55,6 +56,7 @@ pub async fn check(
Some(bucket),
store_prefix,
None,
+ http_proxy,
);
let test_object_key =
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index f845fe2d0..45a251851 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -128,7 +128,10 @@ pub(crate) fn do_create_datastore(
)?;
let (backend_type, backend_s3_client) =
- match DataStore::s3_client_and_backend_from_datastore_config(&datastore)? {
+ match DataStore::s3_client_and_backend_from_datastore_config(
+ &datastore,
+ crate::tools::node_proxy_config,
+ )? {
(backend_type, Some(s3_client)) => {
if !overwrite_in_use {
let object_key = S3ObjectKey::try_from(S3_DATASTORE_IN_USE_MARKER)
@@ -342,7 +345,10 @@ pub fn create_datastore(
let store_name = config.name.to_string();
- let (backend, s3_client) = DataStore::s3_client_and_backend_from_datastore_config(&config)?;
+ let (backend, s3_client) = DataStore::s3_client_and_backend_from_datastore_config(
+ &config,
+ crate::tools::node_proxy_config,
+ )?;
if let Some(s3_client) = s3_client {
proxmox_async::runtime::block_on(s3_client.head_bucket())
.context("failed to access bucket")
@@ -770,7 +776,11 @@ pub async fn delete_datastore(
auth_id.to_string(),
to_stdout,
move |_worker| {
- pbs_datastore::DataStore::destroy(&name, destroy_data)?;
+ pbs_datastore::DataStore::destroy(
+ &name,
+ destroy_data,
+ crate::tools::node_proxy_config,
+ )?;
// ignore errors
let _ = jobstate::remove_state_file("prune", &name);
diff --git a/src/api2/config/s3.rs b/src/api2/config/s3.rs
index 27b3c4cc2..20508fe33 100644
--- a/src/api2/config/s3.rs
+++ b/src/api2/config/s3.rs
@@ -348,10 +348,17 @@ pub async fn list_buckets(
let config: S3ClientConf = config
.lookup(S3_CFG_TYPE_ID, &id)
.context("config lookup failed")?;
+ let http_proxy = crate::tools::node_proxy_config();
let empty_prefix = String::new();
- let options =
- S3ClientOptions::from_config(config.config, config.secret_key, None, empty_prefix, None);
+ let options = S3ClientOptions::from_config(
+ config.config,
+ config.secret_key,
+ None,
+ empty_prefix,
+ None,
+ http_proxy,
+ );
let client = S3Client::new(options).context("client creation failed")?;
let list_buckets_response = client
.list_buckets()
diff --git a/src/tape/pool_writer/new_chunks_iterator.rs b/src/tape/pool_writer/new_chunks_iterator.rs
index 0e29516f8..f8040056a 100644
--- a/src/tape/pool_writer/new_chunks_iterator.rs
+++ b/src/tape/pool_writer/new_chunks_iterator.rs
@@ -39,12 +39,15 @@ impl NewChunksIterator {
let datastore_name = snapshot_reader.datastore_name().to_string();
let result: Result<(), Error> = proxmox_lang::try_block!({
- let chunk_iter = snapshot_reader.chunk_iterator(move |digest| {
- catalog_set
- .lock()
- .unwrap()
- .contains_chunk(&datastore_name, digest)
- })?;
+ let chunk_iter = snapshot_reader.chunk_iterator(
+ move |digest| {
+ catalog_set
+ .lock()
+ .unwrap()
+ .contains_chunk(&datastore_name, digest)
+ },
+ Arc::new(None), // FIXME: required once S3 <-> tape is implemented
+ )?;
let reader_pool =
ParallelHandler::new("tape backup chunk reader pool", read_threads, {
diff --git a/src/tools/mod.rs b/src/tools/mod.rs
index 4e9f9928c..5f3505417 100644
--- a/src/tools/mod.rs
+++ b/src/tools/mod.rs
@@ -4,6 +4,7 @@
use anyhow::{bail, Error};
use std::collections::HashSet;
+use std::sync::Arc;
use pbs_api_types::{
Authid, BackupContent, CryptMode, Operation, SnapshotListItem, SnapshotVerifyState,
@@ -203,5 +204,5 @@ pub(super) fn node_proxy_config() -> Option<proxmox_http::ProxyConfig> {
/// Read the node's http proxy config from the node config.
#[inline(always)]
pub fn lookup_with<'a>(name: &'a str, operation: Operation) -> DataStoreLookup<'a> {
- DataStoreLookup::with(name, operation)
+ DataStoreLookup::with(name, operation, Arc::new(Some(node_proxy_config)))
}
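
The pattern threaded through the hunks above (an `Arc`-wrapped optional callback that the datastore layer stores and only invokes when a new S3 client connection is created, so the current node proxy setting is honored at that moment) can be sketched as follows. This is a minimal stand-alone illustration: `ProxyConfig`, `DataStoreLookup`, and `node_proxy_config` here are simplified stand-ins, not the real `pbs-datastore` / `proxmox-http` types or signatures.

```rust
use std::sync::Arc;

// Stand-in for proxmox_http::ProxyConfig (simplified for illustration).
#[derive(Debug, Clone, PartialEq)]
struct ProxyConfig {
    url: String,
}

// The callback type passed through the lookup. Resolving the proxy via a
// callback (instead of reading it eagerly) keeps the node-config read out
// of the datastore crate and picks up config changes per connection.
type ProxyCallback = fn() -> Option<ProxyConfig>;

struct DataStoreLookup<'a> {
    name: &'a str,
    proxy_cb: Arc<Option<ProxyCallback>>,
}

impl<'a> DataStoreLookup<'a> {
    fn with(name: &'a str, proxy_cb: Arc<Option<ProxyCallback>>) -> Self {
        Self { name, proxy_cb }
    }

    // Invoked at S3-client creation time, not at lookup time.
    fn current_proxy(&self) -> Option<ProxyConfig> {
        self.proxy_cb.as_ref().as_ref().and_then(|cb| cb())
    }
}

// Stand-in for the node proxy config read helper.
fn node_proxy_config() -> Option<ProxyConfig> {
    Some(ProxyConfig {
        url: "http://proxy.example:3128".into(),
    })
}

fn main() {
    // Normal lookup path: callback supplied, proxy resolved on demand.
    let lookup =
        DataStoreLookup::with("store1", Arc::new(Some(node_proxy_config as ProxyCallback)));
    assert_eq!(
        lookup.current_proxy().map(|p| p.url),
        Some("http://proxy.example:3128".to_string())
    );

    // Tape path: no callback yet, mirroring the Arc::new(None) placeholder
    // in new_chunks_iterator.rs above.
    let no_proxy = DataStoreLookup::with("tape", Arc::new(None));
    assert!(no_proxy.current_proxy().is_none());
    println!("proxy resolved for {}", lookup.name);
}
```

Passing `Arc::new(None)` where no S3 backend is involved (as in the tape writer) keeps the signature uniform while making the missing-proxy case explicit.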
--
2.47.3
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel