From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [PATCH v2 2/5] client: repository: add individual component parameters
Date: Mon, 30 Mar 2026 20:20:38 +0200 [thread overview]
Message-ID: <20260330182352.2346420-3-t.lamprecht@proxmox.com> (raw)
In-Reply-To: <20260330182352.2346420-1-t.lamprecht@proxmox.com>
The compact repository URL format ([[auth-id@]server[:port]:]datastore)
can be cumbersome to work with when changing a single aspect of the
connection or when using API tokens.
Add --server, --port, --datastore, --auth-id, and --ns as separate
CLI parameters alongside the existing compound --repository URL.
A conversion resolves either form into a BackupRepository, enforcing
mutual exclusion between the two.
The CLI atom options merge per-field with the corresponding PBS_SERVER,
PBS_PORT, PBS_DATASTORE and PBS_AUTH_ID environment variables, with the
CLI value winning, following the common convention that command-line
flags override environment defaults. If no CLI arguments are given at
all, PBS_REPOSITORY takes precedence over the atom env vars.
No command-level changes yet; the struct and extraction logic are
introduced here so that the command migration can be a separate
mechanical change.
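The resolution order described above can be sketched as standalone Rust.
This is an illustrative model only: the `Args` struct, `resolve` helper and
the returned URL string are made up for the sketch and are not the patch's
actual API, but the precedence rules mirror the ones implemented here.

```rust
// Illustrative model of the repository resolution precedence; the real
// code works on BackupRepositoryArgs/BackupRepository instead of strings.
#[derive(Default)]
struct Args {
    repository: Option<String>, // compound --repository URL
    server: Option<String>,
    port: Option<u16>,
    datastore: Option<String>,
}

impl Args {
    fn has_atoms(&self) -> bool {
        self.server.is_some() || self.port.is_some() || self.datastore.is_some()
    }

    // Per-field merge: `self` (CLI) wins, `fallback` (env) fills the gaps.
    fn merge_from(self, fallback: Args) -> Args {
        Args {
            repository: self.repository.or(fallback.repository),
            server: self.server.or(fallback.server),
            port: self.port.or(fallback.port),
            datastore: self.datastore.or(fallback.datastore),
        }
    }
}

fn resolve(cli: Args, env_repo: Option<String>, env: Args) -> Result<String, String> {
    if cli.repository.is_some() && cli.has_atoms() {
        return Err("--repository and atom options are mutually exclusive".into());
    }
    if let Some(url) = cli.repository {
        return Ok(url); // compound URL used as-is, env ignored
    }
    if cli.has_atoms() {
        // CLI atoms present: merge with env atoms, CLI wins per field.
        let merged = cli.merge_from(env);
        let store = merged.datastore.ok_or("--datastore is required")?;
        return Ok(format!(
            "{}:{}:{}",
            merged.server.unwrap_or_else(|| "localhost".into()),
            merged.port.unwrap_or(8007),
            store,
        ));
    }
    // No CLI args at all: PBS_REPOSITORY wins over the atom env vars.
    if let Some(url) = env_repo {
        return Ok(url);
    }
    if env.has_atoms() {
        let store = env.datastore.ok_or("--datastore is required")?;
        return Ok(format!(
            "{}:{}:{}",
            env.server.unwrap_or_else(|| "localhost".into()),
            env.port.unwrap_or(8007),
            store,
        ));
    }
    Err("no repository specified".into())
}

fn main() {
    // A CLI atom overrides the same env atom; the others fall through.
    let cli = Args { server: Some("clihost".into()), ..Default::default() };
    let env = Args {
        server: Some("envhost".into()),
        datastore: Some("envstore".into()),
        ..Default::default()
    };
    println!("{}", resolve(cli, None, env).unwrap()); // clihost:8007:envstore
}
```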
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
changes v1 -> v2:
- split out from the single v1 commit; now contains only the
pbs-client library code, no command-level changes
- includes ns in BackupRepositoryArgs (was a separate commit in v1)
- per-field merge of CLI atoms with PBS_* env vars instead of
treating them as mutually exclusive layers
- fixed error swallowing: mutual-exclusion errors now propagate
instead of falling through to env fallback
pbs-client/src/backup_repo.rs | 226 +++++++++++++++++++++++-
pbs-client/src/tools/mod.rs | 314 ++++++++++++++++++++++++++++++----
2 files changed, 504 insertions(+), 36 deletions(-)
diff --git a/pbs-client/src/backup_repo.rs b/pbs-client/src/backup_repo.rs
index 45c859d67..fff63054e 100644
--- a/pbs-client/src/backup_repo.rs
+++ b/pbs-client/src/backup_repo.rs
@@ -1,8 +1,140 @@
use std::fmt;
-use anyhow::{format_err, Error};
+use anyhow::{bail, format_err, Error};
+use serde::{Deserialize, Serialize};
-use pbs_api_types::{Authid, Userid, BACKUP_REPO_URL_REGEX, IP_V6_REGEX};
+use proxmox_schema::*;
+
+use pbs_api_types::{
+ Authid, BackupNamespace, Userid, BACKUP_REPO_URL, BACKUP_REPO_URL_REGEX, DATASTORE_SCHEMA,
+ IP_V6_REGEX,
+};
+
+pub const REPO_URL_SCHEMA: Schema =
+ StringSchema::new("Repository URL: [[auth-id@]server[:port]:]datastore")
+ .format(&BACKUP_REPO_URL)
+ .max_length(256)
+ .schema();
+
+pub const BACKUP_REPO_SERVER_SCHEMA: Schema =
+ StringSchema::new("Backup server address (hostname or IP). Default: localhost")
+ .format(&api_types::DNS_NAME_OR_IP_FORMAT)
+ .max_length(256)
+ .schema();
+
+pub const BACKUP_REPO_PORT_SCHEMA: Schema = IntegerSchema::new("Backup server port. Default: 8007")
+ .minimum(1)
+ .maximum(65535)
+ .default(8007)
+ .schema();
+
+#[api(
+ properties: {
+ repository: {
+ schema: REPO_URL_SCHEMA,
+ optional: true,
+ },
+ server: {
+ schema: BACKUP_REPO_SERVER_SCHEMA,
+ optional: true,
+ },
+ port: {
+ schema: BACKUP_REPO_PORT_SCHEMA,
+ optional: true,
+ },
+ datastore: {
+ schema: DATASTORE_SCHEMA,
+ optional: true,
+ },
+ "auth-id": {
+ type: Authid,
+ optional: true,
+ },
+ ns: {
+ type: BackupNamespace,
+ optional: true,
+ },
+ },
+)]
+#[derive(Default, Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+/// Backup repository location, specified either as a repository URL or as individual components
+/// (server, port, datastore, auth-id), plus an optional backup namespace.
+pub struct BackupRepositoryArgs {
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub repository: Option<String>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub server: Option<String>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub port: Option<u16>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub datastore: Option<String>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub auth_id: Option<Authid>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub ns: Option<BackupNamespace>,
+}
+
+impl BackupRepositoryArgs {
+ /// Returns `true` if any atom parameter (server, port, datastore, or auth-id) is set.
+ pub fn has_atoms(&self) -> bool {
+ self.server.is_some()
+ || self.port.is_some()
+ || self.datastore.is_some()
+ || self.auth_id.is_some()
+ }
+
+ /// Merge `self` with `fallback`, using values from `self` where present
+ /// and filling in from `fallback` for fields that are `None`.
+ pub fn merge_from(self, fallback: BackupRepositoryArgs) -> Self {
+ Self {
+ repository: self.repository.or(fallback.repository),
+ server: self.server.or(fallback.server),
+ port: self.port.or(fallback.port),
+ datastore: self.datastore.or(fallback.datastore),
+ auth_id: self.auth_id.or(fallback.auth_id),
+ ns: self.ns.or(fallback.ns),
+ }
+ }
+}
+
+impl TryFrom<BackupRepositoryArgs> for BackupRepository {
+ type Error = anyhow::Error;
+
+ /// Convert explicit CLI arguments into a [`BackupRepository`].
+ ///
+ /// * If `repository` and any atom are both set, returns an error.
+ /// * If atoms are present, builds the repository from them (requires `datastore`).
+ /// * If only `repository` is set, parses the repo URL.
+ /// * If nothing is set, returns an error - callers must fall back to environment variables /
+ /// credentials themselves.
+ fn try_from(args: BackupRepositoryArgs) -> Result<Self, Self::Error> {
+ let has_url = args.repository.is_some();
+ let has_atoms = args.has_atoms();
+
+ if has_url && has_atoms {
+ bail!("--repository and --server/--port/--datastore/--auth-id are mutually exclusive");
+ }
+
+ if has_atoms {
+ let store = args.datastore.ok_or_else(|| {
+ format_err!("--datastore is required when not using --repository")
+ })?;
+ return Ok(BackupRepository::new(
+ args.auth_id,
+ args.server,
+ args.port,
+ store,
+ ));
+ }
+
+ if let Some(url) = args.repository {
+ return url.parse();
+ }
+
+ bail!("no repository specified")
+ }
+}
/// Reference remote backup locations
///
@@ -193,4 +325,94 @@ mod tests {
let repo = BackupRepository::new(None, Some("[ff80::1]".into()), None, "s".into());
assert_eq!(repo.host(), "[ff80::1]");
}
+
+ #[test]
+ fn has_atoms() {
+ assert!(!BackupRepositoryArgs::default().has_atoms());
+
+ let with_server = BackupRepositoryArgs {
+ server: Some("host".into()),
+ ..Default::default()
+ };
+ assert!(with_server.has_atoms());
+
+ let repo_only = BackupRepositoryArgs {
+ repository: Some("myhost:mystore".into()),
+ ..Default::default()
+ };
+ assert!(!repo_only.has_atoms());
+ }
+
+ #[test]
+ fn try_from_atoms_only() {
+ let args = BackupRepositoryArgs {
+ server: Some("pbs.local".into()),
+ port: Some(9000),
+ datastore: Some("tank".into()),
+ auth_id: Some("backup@pam".parse().unwrap()),
+ ..Default::default()
+ };
+ let repo = BackupRepository::try_from(args).unwrap();
+ assert_eq!(repo.host(), "pbs.local");
+ assert_eq!(repo.port(), 9000);
+ assert_eq!(repo.store(), "tank");
+ assert_eq!(repo.auth_id().to_string(), "backup@pam");
+ }
+
+ #[test]
+ fn try_from_atoms_datastore_only() {
+ let args = BackupRepositoryArgs {
+ datastore: Some("local".into()),
+ ..Default::default()
+ };
+ let repo = BackupRepository::try_from(args).unwrap();
+ assert_eq!(repo.store(), "local");
+ assert_eq!(repo.host(), "localhost");
+ assert_eq!(repo.port(), 8007);
+ }
+
+ #[test]
+ fn try_from_url_only() {
+ let args = BackupRepositoryArgs {
+ repository: Some("admin@pam@backuphost:8008:mystore".into()),
+ ..Default::default()
+ };
+ let repo = BackupRepository::try_from(args).unwrap();
+ assert_eq!(repo.host(), "backuphost");
+ assert_eq!(repo.port(), 8008);
+ assert_eq!(repo.store(), "mystore");
+ }
+
+ #[test]
+ fn try_from_mutual_exclusion_error() {
+ let args = BackupRepositoryArgs {
+ repository: Some("somehost:mystore".into()),
+ server: Some("otherhost".into()),
+ ..Default::default()
+ };
+ let err = BackupRepository::try_from(args).unwrap_err();
+ assert!(err.to_string().contains("mutually exclusive"), "got: {err}");
+ }
+
+ #[test]
+ fn try_from_nothing_set_error() {
+ let err = BackupRepository::try_from(BackupRepositoryArgs::default()).unwrap_err();
+ assert!(
+ err.to_string().contains("no repository specified"),
+ "got: {err}"
+ );
+ }
+
+ #[test]
+ fn try_from_atoms_without_datastore_error() {
+ let args = BackupRepositoryArgs {
+ server: Some("pbs.local".into()),
+ ..Default::default()
+ };
+ let err = BackupRepository::try_from(args).unwrap_err();
+ assert!(
+ err.to_string().contains("--datastore is required"),
+ "got: {err}"
+ );
+ }
}
diff --git a/pbs-client/src/tools/mod.rs b/pbs-client/src/tools/mod.rs
index 7a496d14c..859946079 100644
--- a/pbs-client/src/tools/mod.rs
+++ b/pbs-client/src/tools/mod.rs
@@ -17,12 +17,14 @@ use proxmox_router::cli::{complete_file_name, shellword_split};
use proxmox_schema::*;
use proxmox_sys::fs::file_get_json;
-use pbs_api_types::{
- Authid, BackupArchiveName, BackupNamespace, RateLimitConfig, UserWithTokens, BACKUP_REPO_URL,
-};
+use pbs_api_types::{Authid, BackupArchiveName, BackupNamespace, RateLimitConfig, UserWithTokens};
use pbs_datastore::BackupManifest;
-use crate::{BackupRepository, HttpClient, HttpClientOptions};
+use crate::{BackupRepository, BackupRepositoryArgs, HttpClient, HttpClientOptions};
+
+// Re-export for backward compatibility; the canonical definition is now in backup_repo alongside
+// BackupRepositoryArgs.
+pub use crate::REPO_URL_SCHEMA;
pub mod key_source;
@@ -30,6 +32,10 @@ const ENV_VAR_PBS_FINGERPRINT: &str = "PBS_FINGERPRINT";
const ENV_VAR_PBS_PASSWORD: &str = "PBS_PASSWORD";
const ENV_VAR_PBS_ENCRYPTION_PASSWORD: &str = "PBS_ENCRYPTION_PASSWORD";
const ENV_VAR_PBS_REPOSITORY: &str = "PBS_REPOSITORY";
+const ENV_VAR_PBS_SERVER: &str = "PBS_SERVER";
+const ENV_VAR_PBS_PORT: &str = "PBS_PORT";
+const ENV_VAR_PBS_DATASTORE: &str = "PBS_DATASTORE";
+const ENV_VAR_PBS_AUTH_ID: &str = "PBS_AUTH_ID";
/// Directory with system [credential]s. See systemd-creds(1).
///
@@ -44,11 +50,6 @@ const CRED_PBS_REPOSITORY: &str = "proxmox-backup-client.repository";
/// Credential name of the the fingerprint.
const CRED_PBS_FINGERPRINT: &str = "proxmox-backup-client.fingerprint";
-pub const REPO_URL_SCHEMA: Schema = StringSchema::new("Repository URL.")
- .format(&BACKUP_REPO_URL)
- .max_length(256)
- .schema();
-
pub const CHUNK_SIZE_SCHEMA: Schema = IntegerSchema::new("Chunk size in KB. Must be a power of 2.")
.minimum(64)
.maximum(4096)
@@ -233,41 +234,149 @@ pub fn get_fingerprint() -> Option<String> {
.unwrap_or_default()
}
+/// Build [`BackupRepositoryArgs`] from the fields in a JSON Value.
+fn args_from_value(param: &Value) -> BackupRepositoryArgs {
+ BackupRepositoryArgs {
+ repository: param["repository"].as_str().map(String::from),
+ server: param["server"].as_str().map(String::from),
+ port: param["port"].as_u64().map(|p| p as u16),
+ datastore: param["datastore"].as_str().map(String::from),
+ auth_id: param["auth-id"]
+ .as_str()
+ .and_then(|s| s.parse::<Authid>().ok()),
+ ns: None, // namespace is not part of repository resolution
+ }
+}
+
+/// Build [`BackupRepositoryArgs`] from `PBS_*` environment variables.
+fn args_from_env() -> BackupRepositoryArgs {
+ BackupRepositoryArgs {
+ repository: None,
+ server: std::env::var(ENV_VAR_PBS_SERVER).ok(),
+ port: std::env::var(ENV_VAR_PBS_PORT)
+ .ok()
+ .and_then(|p| p.parse::<u16>().ok()),
+ datastore: std::env::var(ENV_VAR_PBS_DATASTORE).ok(),
+ auth_id: std::env::var(ENV_VAR_PBS_AUTH_ID)
+ .ok()
+ .and_then(|s| s.parse::<Authid>().ok()),
+ ns: None,
+ }
+}
+
+/// Remove repository-related keys from a JSON Value and return the parsed [`BackupRepository`].
+///
+/// This is used by commands that forward the remaining parameters to the server API after stripping
+/// the repository fields.
pub fn remove_repository_from_value(param: &mut Value) -> Result<BackupRepository, Error> {
- if let Some(url) = param
+ let map = param
.as_object_mut()
- .ok_or_else(|| format_err!("unable to get repository (parameter is not an object)"))?
- .remove("repository")
- {
- return url
- .as_str()
- .ok_or_else(|| format_err!("invalid repository value (must be a string)"))?
- .parse();
- }
-
- get_default_repository()
- .ok_or_else(|| format_err!("unable to get default repository"))?
- .parse()
+ .ok_or_else(|| format_err!("unable to get repository (parameter is not an object)"))?;
+
+ let to_string = |v: Value| v.as_str().map(String::from);
+
+ let args = BackupRepositoryArgs {
+ repository: map.remove("repository").and_then(to_string),
+ server: map.remove("server").and_then(to_string),
+ port: map
+ .remove("port")
+ .and_then(|v| v.as_u64())
+ .map(|p| p as u16),
+ datastore: map.remove("datastore").and_then(to_string),
+ auth_id: map
+ .remove("auth-id")
+ .and_then(to_string)
+ .map(|s| s.parse::<Authid>())
+ .transpose()?,
+ ns: None, // keep ns in the Value for the API call
+ };
+
+ if args.repository.is_some() && args.has_atoms() {
+ bail!("--repository and --server/--port/--datastore/--auth-id are mutually exclusive");
+ }
+ if args.repository.is_some() {
+ return BackupRepository::try_from(args);
+ }
+ if args.has_atoms() {
+ let env = args_from_env();
+ return BackupRepository::try_from(args.merge_from(env));
+ }
+
+ if let Some(url) = get_default_repository() {
+ return url.parse();
+ }
+ let env = args_from_env();
+ if env.has_atoms() {
+ return BackupRepository::try_from(env);
+ }
+ bail!("repository not passed via CLI options and unable to get (default) repository from environment");
}
+/// Extract a [`BackupRepository`] from CLI parameters.
+///
+/// Resolution:
+/// - `--repository` and CLI atoms are mutually exclusive.
+/// - `--repository` alone is used as-is (env vars ignored).
+/// - CLI atoms are merged with `PBS_*` env atom vars per-field (CLI wins).
+/// - If no CLI args are given, falls back to `PBS_REPOSITORY`, then to `PBS_*` atom env vars, then
+/// errors.
pub fn extract_repository_from_value(param: &Value) -> Result<BackupRepository, Error> {
- let repo_url = param["repository"]
- .as_str()
- .map(String::from)
- .or_else(get_default_repository)
- .ok_or_else(|| format_err!("unable to get (default) repository"))?;
+ let cli = args_from_value(param);
- let repo: BackupRepository = repo_url.parse()?;
+ if cli.repository.is_some() && cli.has_atoms() {
+ bail!("--repository and --server/--port/--datastore/--auth-id are mutually exclusive");
+ }
+ if cli.repository.is_some() {
+ return BackupRepository::try_from(cli);
+ }
+ if cli.has_atoms() {
+ let env = args_from_env();
+ return BackupRepository::try_from(cli.merge_from(env));
+ }
- Ok(repo)
+ // No CLI args at all, try environment.
+ if let Some(url) = get_default_repository() {
+ return url.parse();
+ }
+ let env = args_from_env();
+ if env.has_atoms() {
+ return BackupRepository::try_from(env);
+ }
+ bail!("unable to get (default) repository");
}
+/// Extract a [`BackupRepository`] from a parameter map (used for shell completion callbacks).
pub fn extract_repository_from_map(param: &HashMap<String, String>) -> Option<BackupRepository> {
- param
- .get("repository")
- .map(String::from)
- .or_else(get_default_repository)
- .and_then(|repo_url| repo_url.parse::<BackupRepository>().ok())
+ let cli = BackupRepositoryArgs {
+ repository: param.get("repository").cloned(),
+ server: param.get("server").cloned(),
+ port: param.get("port").and_then(|p| p.parse().ok()),
+ datastore: param.get("datastore").cloned(),
+ auth_id: param.get("auth-id").and_then(|s| s.parse().ok()),
+ ns: None,
+ };
+
+ if cli.repository.is_some() {
+ return BackupRepository::try_from(cli).ok();
+ }
+ if cli.has_atoms() {
+ let env = args_from_env();
+ return BackupRepository::try_from(cli.merge_from(env)).ok();
+ }
+
+ // Fall back to environment: compound URL, then atoms.
+ if let Some(url) = get_default_repository() {
+ if let Ok(repo) = url.parse() {
+ return Some(repo);
+ }
+ }
+
+ let env = args_from_env();
+ if env.has_atoms() {
+ return BackupRepository::try_from(env).ok();
+ }
+
+ None
}
pub fn connect(repo: &BackupRepository) -> Result<HttpClient, Error> {
@@ -757,3 +866,140 @@ pub fn create_tmp_file() -> std::io::Result<std::fs::File> {
}
})
}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+ use serde_json::json;
+
+ static ENV_MUTEX: std::sync::Mutex<()> = std::sync::Mutex::new(());
+
+ const REPO_ENV_VARS: &[&str] = &[
+ ENV_VAR_PBS_REPOSITORY,
+ ENV_VAR_PBS_SERVER,
+ ENV_VAR_PBS_PORT,
+ ENV_VAR_PBS_DATASTORE,
+ ENV_VAR_PBS_AUTH_ID,
+ ENV_VAR_CREDENTIALS_DIRECTORY,
+ ];
+
+ fn with_cleared_repo_env(f: impl FnOnce()) {
+ let _guard = ENV_MUTEX.lock().unwrap();
+ for k in REPO_ENV_VARS {
+ std::env::remove_var(k);
+ }
+ f();
+ for k in REPO_ENV_VARS {
+ std::env::remove_var(k);
+ }
+ }
+
+ #[test]
+ fn extract_repo_from_atoms() {
+ with_cleared_repo_env(|| {
+ let param = json!({"server": "myhost", "datastore": "mystore"});
+ let repo = extract_repository_from_value(&param).unwrap();
+ assert_eq!(repo.host(), "myhost");
+ assert_eq!(repo.store(), "mystore");
+ assert_eq!(repo.port(), 8007);
+ });
+ }
+
+ #[test]
+ fn extract_repo_from_url() {
+ with_cleared_repo_env(|| {
+ let param = json!({"repository": "myhost:mystore"});
+ let repo = extract_repository_from_value(&param).unwrap();
+ assert_eq!(repo.host(), "myhost");
+ assert_eq!(repo.store(), "mystore");
+ });
+ }
+
+ #[test]
+ fn extract_repo_mutual_exclusion_error() {
+ with_cleared_repo_env(|| {
+ let param = json!({"repository": "myhost:mystore", "auth-id": "user@pam"});
+ let err = extract_repository_from_value(&param).unwrap_err();
+ assert!(err.to_string().contains("mutually exclusive"), "got: {err}");
+ });
+ }
+
+ #[test]
+ fn extract_repo_atoms_without_datastore_error() {
+ with_cleared_repo_env(|| {
+ let param = json!({"server": "myhost"});
+ let err = extract_repository_from_value(&param).unwrap_err();
+ assert!(
+ err.to_string().contains("--datastore is required"),
+ "got: {err}"
+ );
+ });
+ }
+
+ #[test]
+ fn extract_repo_nothing_provided_error() {
+ with_cleared_repo_env(|| {
+ let err = extract_repository_from_value(&json!({})).unwrap_err();
+ assert!(err.to_string().contains("unable to get"), "got: {err}");
+ });
+ }
+
+ #[test]
+ fn extract_repo_env_fallback() {
+ with_cleared_repo_env(|| {
+ std::env::set_var(ENV_VAR_PBS_SERVER, "envhost");
+ std::env::set_var(ENV_VAR_PBS_DATASTORE, "envstore");
+ let repo = extract_repository_from_value(&json!({})).unwrap();
+ assert_eq!(repo.host(), "envhost");
+ assert_eq!(repo.store(), "envstore");
+ });
+ }
+
+ #[test]
+ fn extract_repo_pbs_repository_env_takes_precedence() {
+ with_cleared_repo_env(|| {
+ std::env::set_var(ENV_VAR_PBS_REPOSITORY, "repohost:repostore");
+ std::env::set_var(ENV_VAR_PBS_SERVER, "envhost");
+ std::env::set_var(ENV_VAR_PBS_DATASTORE, "envstore");
+ let repo = extract_repository_from_value(&json!({})).unwrap();
+ assert_eq!(repo.host(), "repohost");
+ assert_eq!(repo.store(), "repostore");
+ });
+ }
+
+ #[test]
+ fn extract_repo_cli_overrides_env() {
+ with_cleared_repo_env(|| {
+ std::env::set_var(ENV_VAR_PBS_REPOSITORY, "envhost:envstore");
+ let param = json!({"server": "clihost", "datastore": "clistore"});
+ let repo = extract_repository_from_value(&param).unwrap();
+ assert_eq!(repo.host(), "clihost");
+ assert_eq!(repo.store(), "clistore");
+ });
+ }
+
+ #[test]
+ fn extract_repo_cli_atoms_merge_with_env_atoms() {
+ with_cleared_repo_env(|| {
+ std::env::set_var(ENV_VAR_PBS_SERVER, "envhost");
+ std::env::set_var(ENV_VAR_PBS_DATASTORE, "envstore");
+ let param = json!({"auth-id": "backup@pbs"});
+ let repo = extract_repository_from_value(&param).unwrap();
+ assert_eq!(repo.host(), "envhost");
+ assert_eq!(repo.store(), "envstore");
+ assert_eq!(repo.auth_id().to_string(), "backup@pbs");
+ });
+ }
+
+ #[test]
+ fn extract_repo_cli_atom_overrides_same_env_atom() {
+ with_cleared_repo_env(|| {
+ std::env::set_var(ENV_VAR_PBS_SERVER, "envhost");
+ std::env::set_var(ENV_VAR_PBS_DATASTORE, "envstore");
+ let param = json!({"server": "clihost"});
+ let repo = extract_repository_from_value(&param).unwrap();
+ assert_eq!(repo.host(), "clihost");
+ assert_eq!(repo.store(), "envstore");
+ });
+ }
+}
--
2.47.3
Thread overview: 8+ messages
2026-03-30 18:20 [PATCH v2 0/5] " Thomas Lamprecht
2026-03-30 18:20 ` [PATCH v2 1/5] client: repository: add tests for BackupRepository parsing Thomas Lamprecht
2026-03-30 18:20 ` Thomas Lamprecht [this message]
2026-03-31 8:55 ` [PATCH v2 2/5] client: repository: add individual component parameters Thomas Lamprecht
2026-03-30 18:20 ` [PATCH v2 3/5] client: migrate commands to flattened repository args Thomas Lamprecht
2026-03-30 18:20 ` [PATCH v2 4/5] docs: document repository component options and env vars Thomas Lamprecht
2026-03-30 18:20 ` [PATCH v2 5/5] fix #5340: client: repository: add PBS_NAMESPACE environment variable Thomas Lamprecht
2026-04-01 22:56 ` superseded: [PATCH v2 0/5] client: repository: add individual component parameters Thomas Lamprecht