* [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
@ 2026-04-01 7:55 Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox 01/20] pbs-api-types: define encryption key type and schema Christian Ebner
` (23 more replies)
0 siblings, 24 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
This patch series implements support for encrypting backup snapshots
when pushing from a source PBS instance to an untrusted remote target
PBS instance. Further, it adds support for decrypting snapshots that
are stored encrypted on the remote source PBS when pulling their
contents to the local target PBS instance. This allows full server
side encryption/decryption to be performed when syncing with a less
trusted remote PBS.
In order to encrypt/decrypt snapshots, a new encryption key entity
is introduced, created as a global instance on the PBS and managed
via its own dedicated config. Keys, including their secret material,
are stored in dedicated files, so a key only needs to be loaded when
it is accessed, not when listing the configuration.
Sync jobs in push and pull direction are extended by an additional
encryption key parameter, allowing the given key to be used for
encryption or decryption of snapshots, depending on the sync
direction. In order to encrypt/decrypt the contents, chunks, index
files, blobs and the manifest are additionally processed and
rewritten when required.
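The direction-dependent behavior described above can be sketched as
follows. This is an illustrative model only; `CryptOp` and the
`crypt_op` helper are hypothetical names, not part of the actual
implementation:

```rust
// Illustrative sketch of direction-dependent crypt handling in a sync
// job: with a key configured, push jobs encrypt and pull jobs decrypt.

#[derive(Clone, Copy, Debug, PartialEq)]
enum SyncDirection {
    Push, // local source -> untrusted remote target
    Pull, // remote source -> local target
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum CryptOp {
    Encrypt, // encrypt plaintext snapshots before sending (push)
    Decrypt, // decrypt snapshots with matching key fingerprint (pull)
}

// Pick the crypt operation for a job; no key configured means no crypt op.
fn crypt_op(direction: SyncDirection, encryption_key: Option<&str>) -> Option<CryptOp> {
    encryption_key.map(|_| match direction {
        SyncDirection::Push => CryptOp::Encrypt,
        SyncDirection::Pull => CryptOp::Decrypt,
    })
}

fn main() {
    assert_eq!(crypt_op(SyncDirection::Push, Some("offsite-key")), Some(CryptOp::Encrypt));
    assert_eq!(crypt_op(SyncDirection::Pull, Some("offsite-key")), Some(CryptOp::Decrypt));
    assert_eq!(crypt_op(SyncDirection::Push, None), None);
}
```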
Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=7251
proxmox:
Christian Ebner (2):
pbs-api-types: define encryption key type and schema
pbs-api-types: sync job: add optional encryption key to config
pbs-api-types/src/jobs.rs | 11 ++++++++--
pbs-api-types/src/key_derivation.rs | 34 ++++++++++++++++++++++++++---
pbs-api-types/src/lib.rs | 2 +-
3 files changed, 41 insertions(+), 6 deletions(-)
proxmox-backup:
Christian Ebner (18):
pbs-key-config: introduce store_with() for KeyConfig
pbs-config: implement encryption key config handling
pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
ui: expose 'encryption-keys' as acl subpath for 'system'
api: config: add endpoints for encryption key manipulation
api: config: allow encryption key manipulation for sync job
sync: push: rewrite manifest instead of pushing pre-existing one
sync: add helper to check encryption key acls and load key
fix #7251: api: push: encrypt snapshots using configured encryption
key
ui: define and expose encryption key management menu item and windows
ui: expose assigning encryption key to sync jobs
sync: pull: load encryption key if given in job config
sync: expand source chunk reader trait by crypt config
sync: pull: introduce and use decrypt index writer if crypt config
sync: pull: extend encountered chunk by optional decrypted digest
sync: pull: decrypt blob files on pull if encryption key is configured
sync: pull: decrypt chunks and rewrite index file for matching key
sync: pull: decrypt snapshots with matching encryption key fingerprint
pbs-config/Cargo.toml | 1 +
pbs-config/src/acl.rs | 4 +-
pbs-config/src/encryption_keys.rs | 159 +++++++++++
pbs-config/src/lib.rs | 1 +
pbs-key-config/src/lib.rs | 36 ++-
src/api2/config/encryption_keys.rs | 115 ++++++++
src/api2/config/mod.rs | 2 +
src/api2/config/sync.rs | 10 +
src/api2/pull.rs | 15 +-
src/api2/push.rs | 14 +-
src/server/pull.rs | 416 ++++++++++++++++++++++++-----
src/server/push.rs | 222 +++++++++++----
src/server/sync.rs | 57 +++-
www/Makefile | 3 +
www/NavigationTree.js | 6 +
www/Utils.js | 1 +
www/config/EncryptionKeysView.js | 143 ++++++++++
www/form/EncryptionKeySelector.js | 59 ++++
www/form/PermissionPathSelector.js | 1 +
www/window/EncryptionKeysEdit.js | 382 ++++++++++++++++++++++++++
www/window/SyncJobEdit.js | 11 +
21 files changed, 1512 insertions(+), 146 deletions(-)
create mode 100644 pbs-config/src/encryption_keys.rs
create mode 100644 src/api2/config/encryption_keys.rs
create mode 100644 www/config/EncryptionKeysView.js
create mode 100644 www/form/EncryptionKeySelector.js
create mode 100644 www/window/EncryptionKeysEdit.js
Summary over all repositories:
24 files changed, 1553 insertions(+), 152 deletions(-)
--
Generated by murpp 0.11.0
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH proxmox 01/20] pbs-api-types: define encryption key type and schema
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox 02/20] pbs-api-types: sync job: add optional encryption key to config Christian Ebner
` (22 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Will be used to store and uniquely identify encryption keys in the
config. Contains the KeyInfo extended by the unique key identifier.
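A rough model of the constraints encoded in ENCRYPTION_KEY_ID_SCHEMA
(SAFE_ID format, 3 to 32 characters). The exact character set of
SAFE_ID_FORMAT lives in proxmox-schema; alphanumerics plus `_`, `-`
and `.` is an approximation used here for illustration only:

```rust
// Sketch of the ENCRYPTION_KEY_ID_SCHEMA constraints: a safe-id string
// between 3 and 32 characters. The allowed character set is an assumed
// approximation of SAFE_ID_FORMAT, not the authoritative definition.
fn is_valid_key_id(id: &str) -> bool {
    let len_ok = (3..=32).contains(&id.len());
    let chars_ok = id
        .chars()
        .all(|c| c.is_ascii_alphanumeric() || matches!(c, '_' | '-' | '.'))
        && id.chars().next().map_or(false, |c| c.is_ascii_alphanumeric());
    len_ok && chars_ok
}

fn main() {
    assert!(is_valid_key_id("offsite-key"));
    assert!(!is_valid_key_id("ab")); // too short (min_length 3)
    assert!(!is_valid_key_id(&"x".repeat(33))); // too long (max_length 32)
    assert!(!is_valid_key_id("bad id")); // whitespace not allowed
}
```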
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-api-types/src/key_derivation.rs | 34 ++++++++++++++++++++++++++---
pbs-api-types/src/lib.rs | 2 +-
2 files changed, 32 insertions(+), 4 deletions(-)
diff --git a/pbs-api-types/src/key_derivation.rs b/pbs-api-types/src/key_derivation.rs
index 57ae353a..2a986c21 100644
--- a/pbs-api-types/src/key_derivation.rs
+++ b/pbs-api-types/src/key_derivation.rs
@@ -3,12 +3,13 @@ use serde::{Deserialize, Serialize};
#[cfg(feature = "enum-fallback")]
use proxmox_fixed_string::FixedString;
-use proxmox_schema::api;
+use proxmox_schema::api_types::SAFE_ID_FORMAT;
+use proxmox_schema::{api, Schema, StringSchema, Updater};
use crate::CERT_FINGERPRINT_SHA256_SCHEMA;
#[api(default: "scrypt")]
-#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
+#[derive(Clone, Copy, Debug, Deserialize, Serialize, PartialEq)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
@@ -41,7 +42,7 @@ impl Default for Kdf {
},
},
)]
-#[derive(Deserialize, Serialize)]
+#[derive(Clone, Default, Deserialize, Serialize, Updater, PartialEq)]
/// Encryption Key Information
pub struct KeyInfo {
/// Path to key (if stored in a file)
@@ -59,3 +60,30 @@ pub struct KeyInfo {
#[serde(skip_serializing_if = "Option::is_none")]
pub hint: Option<String>,
}
+
+/// ID to uniquely identify an encryption key.
+pub const ENCRYPTION_KEY_ID_SCHEMA: Schema =
+ StringSchema::new("ID to uniquely identify encryption key")
+ .format(&SAFE_ID_FORMAT)
+ .min_length(3)
+ .max_length(32)
+ .schema();
+
+#[api(
+ properties: {
+ id: {
+ schema: ENCRYPTION_KEY_ID_SCHEMA,
+ },
+ info: {
+ type: KeyInfo,
+ },
+ },
+)]
+#[derive(Clone, Default, Deserialize, Serialize, Updater, PartialEq)]
+/// Encryption Key Information
+pub struct EncryptionKey {
+ #[updater(skip)]
+ pub id: String,
+ #[serde(flatten)]
+ pub info: KeyInfo,
+}
diff --git a/pbs-api-types/src/lib.rs b/pbs-api-types/src/lib.rs
index 54547291..ddd5840e 100644
--- a/pbs-api-types/src/lib.rs
+++ b/pbs-api-types/src/lib.rs
@@ -104,7 +104,7 @@ mod jobs;
pub use jobs::*;
mod key_derivation;
-pub use key_derivation::{Kdf, KeyInfo};
+pub use key_derivation::{EncryptionKey, Kdf, KeyInfo, ENCRYPTION_KEY_ID_SCHEMA};
mod maintenance;
pub use maintenance::*;
--
2.47.3
* [PATCH proxmox 02/20] pbs-api-types: sync job: add optional encryption key to config
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox 01/20] pbs-api-types: define encryption key type and schema Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 03/20] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
` (21 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Allows configuring the encryption key used to encrypt backups when
performing sync jobs in push direction, and to decrypt snapshots
with a matching key fingerprint on pull sync jobs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-api-types/src/jobs.rs | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/pbs-api-types/src/jobs.rs b/pbs-api-types/src/jobs.rs
index 7e6dfb94..15fe2ca2 100644
--- a/pbs-api-types/src/jobs.rs
+++ b/pbs-api-types/src/jobs.rs
@@ -13,8 +13,9 @@ use proxmox_schema::*;
use crate::{
Authid, BackupNamespace, BackupType, NotificationMode, RateLimitConfig, Userid,
BACKUP_GROUP_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_NS_RE, DATASTORE_SCHEMA,
- DRIVE_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PROXMOX_SAFE_ID_FORMAT,
- PROXMOX_SAFE_ID_REGEX_STR, REMOTE_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
+ DRIVE_NAME_SCHEMA, ENCRYPTION_KEY_ID_SCHEMA, MEDIA_POOL_NAME_SCHEMA,
+ NS_MAX_DEPTH_REDUCED_SCHEMA, PROXMOX_SAFE_ID_FORMAT, PROXMOX_SAFE_ID_REGEX_STR,
+ REMOTE_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
};
const_regex! {
@@ -664,6 +665,10 @@ pub const UNMOUNT_ON_SYNC_DONE_SCHEMA: Schema =
type: SyncDirection,
optional: true,
},
+ "encryption-key": {
+ schema: ENCRYPTION_KEY_ID_SCHEMA,
+ optional: true,
+ },
}
)]
#[derive(Serialize, Deserialize, Clone, Updater, PartialEq)]
@@ -709,6 +714,8 @@ pub struct SyncJobConfig {
pub unmount_on_done: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub sync_direction: Option<SyncDirection>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub encryption_key: Option<String>,
}
impl SyncJobConfig {
--
2.47.3
* [PATCH proxmox-backup 03/20] pbs-key-config: introduce store_with() for KeyConfig
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox 01/20] pbs-api-types: define encryption key type and schema Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox 02/20] pbs-api-types: sync job: add optional encryption key to config Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 04/20] pbs-config: implement encryption key config handling Christian Ebner
` (20 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Extends the behavior of KeyConfig::store() to allow optionally
specifying the mode and ownership of the file the key is stored in.
If none of the optional parameters are set, the behavior defaults to
that of KeyConfig::store(), so the same implementation is reused for
it as well.
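The fallback logic can be sketched as below; this is an illustrative
model with plain integers standing in for nix's `Mode`, not the
actual signature:

```rust
// Sketch of the store_with() fallback: an optional mode parameter
// defaults to the previous hard-coded store() behavior,
// S_IRUSR | S_IWUSR == 0o600. Ownership is handled analogously with
// Option<Uid>/Option<Gid> left unchanged when None.
fn effective_mode(mode: Option<u32>) -> u32 {
    mode.unwrap_or(0o600)
}

fn main() {
    assert_eq!(effective_mode(None), 0o600); // store()-compatible default
    assert_eq!(effective_mode(Some(0o640)), 0o640); // e.g. group-readable key file
}
```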
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-key-config/src/lib.rs | 36 ++++++++++++++++++++++++++++++++----
1 file changed, 32 insertions(+), 4 deletions(-)
diff --git a/pbs-key-config/src/lib.rs b/pbs-key-config/src/lib.rs
index 0bcd5338c..258fb197b 100644
--- a/pbs-key-config/src/lib.rs
+++ b/pbs-key-config/src/lib.rs
@@ -1,7 +1,10 @@
use std::io::Write;
+use std::os::fd::AsRawFd;
use std::path::Path;
use anyhow::{bail, format_err, Context, Error};
+use nix::sys::stat::Mode;
+use nix::unistd::{Gid, Uid};
use serde::{Deserialize, Serialize};
use proxmox_lang::try_block;
@@ -236,24 +239,49 @@ impl KeyConfig {
/// Store a KeyConfig to path
pub fn store<P: AsRef<Path>>(&self, path: P, replace: bool) -> Result<(), Error> {
+ self.store_with(path, replace, None, None, None)
+ }
+
+ /// Store a KeyConfig to path with given ownership and mode.
+ /// Requires the process to run with permissions to do so.
+ pub fn store_with<P: AsRef<Path>>(
+ &self,
+ path: P,
+ replace: bool,
+ mode: Option<Mode>,
+ owner: Option<Uid>,
+ group: Option<Gid>,
+ ) -> Result<(), Error> {
let path: &Path = path.as_ref();
let data = serde_json::to_string(self)?;
try_block!({
if replace {
- let mode = nix::sys::stat::Mode::S_IRUSR | nix::sys::stat::Mode::S_IWUSR;
- replace_file(path, data.as_bytes(), CreateOptions::new().perm(mode), true)?;
+ let mode =
+ mode.unwrap_or(nix::sys::stat::Mode::S_IRUSR | nix::sys::stat::Mode::S_IWUSR);
+ let mut create_options = CreateOptions::new().perm(mode);
+ if let Some(owner) = owner {
+ create_options = create_options.owner(owner);
+ }
+ if let Some(group) = group {
+ create_options = create_options.group(group);
+ }
+ replace_file(path, data.as_bytes(), create_options, true)?;
} else {
use std::os::unix::fs::OpenOptionsExt;
-
+ let mode = mode.map(|m| m.bits()).unwrap_or(0o0600);
let mut file = std::fs::OpenOptions::new()
.write(true)
- .mode(0o0600)
+ .mode(mode)
.create_new(true)
.open(path)?;
file.write_all(data.as_bytes())?;
+
+ let fd = file.as_raw_fd();
+ nix::unistd::fchown(fd, owner, group)?;
+ nix::unistd::fsync(fd)?;
}
Ok(())
--
2.47.3
* [PATCH proxmox-backup 04/20] pbs-config: implement encryption key config handling
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (2 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 03/20] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 23:27 ` Thomas Lamprecht
2026-04-01 7:55 ` [PATCH proxmox-backup 05/20] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
` (19 subsequent siblings)
23 siblings, 1 reply; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Implements the handling of the encryption key configuration and key
files. Each encryption key, including its secret key material, is
stored in its own file, while the config stores a duplicate of the
key info, so the actual key only needs to be loaded when accessed,
not for listing. The key fingerprint is compared when loading the
key in order to detect possible mismatches.
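The fingerprint cross-check can be sketched with plain maps standing
in for the section config and the key files; `StoredKey` and
`load_key` are hypothetical stand-ins for the actual types:

```rust
use std::collections::HashMap;

// Sketch of the cross-check performed by load_key_config(): the config
// stores a copy of the key info (including the fingerprint) and the key
// file stores the secret; on load, both fingerprints must match,
// otherwise a config/key-file mismatch is reported.
struct StoredKey {
    fingerprint: String, // fingerprint recomputed from the key file
    secret: Vec<u8>,
}

fn load_key(
    config: &HashMap<String, String>,   // id -> fingerprint from the config
    files: &HashMap<String, StoredKey>, // id -> key file contents
    id: &str,
) -> Result<Vec<u8>, String> {
    let expected = config
        .get(id)
        .ok_or_else(|| format!("no config entry for key {id}"))?;
    let stored = files
        .get(id)
        .ok_or_else(|| format!("missing key file for key {id}"))?;
    if &stored.fingerprint != expected {
        return Err(format!("loaded key does not match the config for key {id}"));
    }
    Ok(stored.secret.clone())
}

fn main() {
    let mut config = HashMap::new();
    let mut files = HashMap::new();
    config.insert("k1".to_string(), "aa:bb".to_string());
    files.insert(
        "k1".to_string(),
        StoredKey { fingerprint: "aa:bb".into(), secret: vec![1, 2, 3] },
    );
    assert_eq!(load_key(&config, &files, "k1").unwrap(), vec![1, 2, 3]);

    // a tampered or swapped key file is detected via the fingerprint
    files.get_mut("k1").unwrap().fingerprint = "cc:dd".into();
    assert!(load_key(&config, &files, "k1").is_err());
}
```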
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-config/Cargo.toml | 1 +
pbs-config/src/encryption_keys.rs | 159 ++++++++++++++++++++++++++++++
pbs-config/src/lib.rs | 1 +
3 files changed, 161 insertions(+)
create mode 100644 pbs-config/src/encryption_keys.rs
diff --git a/pbs-config/Cargo.toml b/pbs-config/Cargo.toml
index eb81ce004..a27964cfd 100644
--- a/pbs-config/Cargo.toml
+++ b/pbs-config/Cargo.toml
@@ -30,3 +30,4 @@ proxmox-uuid.workspace = true
pbs-api-types.workspace = true
pbs-buildcfg.workspace = true
+pbs-key-config.workspace = true
diff --git a/pbs-config/src/encryption_keys.rs b/pbs-config/src/encryption_keys.rs
new file mode 100644
index 000000000..afe16eb1c
--- /dev/null
+++ b/pbs-config/src/encryption_keys.rs
@@ -0,0 +1,159 @@
+use std::collections::HashMap;
+use std::sync::LazyLock;
+
+use anyhow::{bail, format_err, Error};
+use nix::{sys::stat::Mode, unistd::Uid};
+use serde::Deserialize;
+
+use pbs_api_types::{EncryptionKey, KeyInfo, ENCRYPTION_KEY_ID_SCHEMA};
+use proxmox_schema::ApiType;
+use proxmox_section_config::{SectionConfig, SectionConfigData, SectionConfigPlugin};
+use proxmox_sys::fs::CreateOptions;
+
+use pbs_buildcfg::configdir;
+use pbs_key_config::KeyConfig;
+
+use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
+
+pub static CONFIG: LazyLock<SectionConfig> = LazyLock::new(init);
+
+fn init() -> SectionConfig {
+ let obj_schema = EncryptionKey::API_SCHEMA.unwrap_all_of_schema();
+ let plugin = SectionConfigPlugin::new(
+ ENCRYPTION_KEYS_CFG_TYPE_ID.to_string(),
+ Some(String::from("id")),
+ obj_schema,
+ );
+ let mut config = SectionConfig::new(&ENCRYPTION_KEY_ID_SCHEMA);
+ config.register_plugin(plugin);
+
+ config
+}
+
+/// Configuration file location for encryption keys.
+pub const ENCRYPTION_KEYS_CFG_FILENAME: &str = configdir!("/encryption-keys.cfg");
+/// Configuration lock file used to prevent concurrent configuration update operations.
+pub const ENCRYPTION_KEYS_CFG_LOCKFILE: &str = configdir!("/.encryption-keys.lck");
+/// Directory where to store the actual encryption keys
+pub const ENCRYPTION_KEYS_DIR: &str = configdir!("/encryption-keys/");
+
+/// Config type for encryption key config entries
+pub const ENCRYPTION_KEYS_CFG_TYPE_ID: &str = "encryption-key";
+
+/// Get exclusive lock for encryption key configuration update.
+pub fn lock_config() -> Result<BackupLockGuard, Error> {
+ open_backup_lockfile(ENCRYPTION_KEYS_CFG_LOCKFILE, None, true)
+}
+
+/// Load encryption key configuration from file.
+pub fn config() -> Result<(SectionConfigData, [u8; 32]), Error> {
+ let content = proxmox_sys::fs::file_read_optional_string(ENCRYPTION_KEYS_CFG_FILENAME)?;
+ let content = content.unwrap_or_default();
+ let digest = openssl::sha::sha256(content.as_bytes());
+ let data = CONFIG.parse(ENCRYPTION_KEYS_CFG_FILENAME, &content)?;
+ Ok((data, digest))
+}
+
+/// Shell completion helper to complete encryption key ids as found in the config.
+pub fn complete_encryption_key_id(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+ match config() {
+ Ok((data, _digest)) => data.sections.keys().map(|id| id.to_string()).collect(),
+ Err(_) => Vec::new(),
+ }
+}
+
+/// Load the encryption key from file.
+///
+/// Looks up the key in the config and tries to load it from the given file.
+/// Upon loading, the config key fingerprint is compared to the one stored in the key
+/// file.
+pub fn load_key_config(id: &str) -> Result<KeyConfig, Error> {
+ let _lock = lock_config()?;
+ let (config, _digest) = config()?;
+
+ let key: EncryptionKey = config.lookup(ENCRYPTION_KEYS_CFG_TYPE_ID, id)?;
+ let key_config = match &key.info.path {
+ Some(path) => KeyConfig::load(path)?,
+ None => bail!("missing path for encryption key {id}"),
+ };
+
+ let stored_key_info = KeyInfo::from(&key_config);
+
+ if key.info.fingerprint != stored_key_info.fingerprint {
+ bail!("loaded key does not match the config for key {id}");
+ }
+
+ Ok(key_config)
+}
+
+/// Store the encryption key to file.
+///
+/// Inserts the key in the config and stores it to the given file.
+pub fn store_key(id: &str, key: &KeyConfig) -> Result<(), Error> {
+ let _lock = lock_config()?;
+ let (mut config, _digest) = config()?;
+
+ let backup_user = crate::backup_user()?;
+ let keyfile_mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
+ let dir_options = CreateOptions::new()
+ .perm(Mode::from_bits_truncate(0o0750))
+ .owner(Uid::from_raw(0))
+ .group(backup_user.gid);
+
+ proxmox_sys::fs::ensure_dir_exists(ENCRYPTION_KEYS_DIR, &dir_options, true)?;
+
+ // bail if a key with the given id already exists
+ if config.sections.contains_key(id) {
+ bail!("key with id '{id}' already exists.");
+ }
+
+ let key_path = format!("{ENCRYPTION_KEYS_DIR}{id}.enc");
+ // do not replace existing files
+ key.store_with(
+ &key_path,
+ false,
+ Some(keyfile_mode),
+ Some(Uid::from_raw(0)),
+ Some(backup_user.gid),
+ )?;
+
+ let mut info = KeyInfo::from(key);
+ info.path = Some(key_path);
+
+ let encryption_key = EncryptionKey {
+ id: id.to_string(),
+ info,
+ };
+
+ config.set_data(id, ENCRYPTION_KEYS_CFG_TYPE_ID, encryption_key)?;
+
+ let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
+ replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())
+}
+
+/// Delete the encryption key from config.
+///
+/// Deletes the key from the config but keeps a backup of the key file.
+pub fn delete_key(id: &str) -> Result<(), Error> {
+ let _lock = lock_config()?;
+ let (mut config, _digest) = config()?;
+
+ // if a key with the given id exists in the config, also try to remove the key file at its path
+ if let Some((section_type, key)) = config.sections.get(id) {
+ if section_type == ENCRYPTION_KEYS_CFG_TYPE_ID {
+ let key = EncryptionKey::deserialize(key)
+ .map_err(|_err| format_err!("failed to parse pre-existing key"))?;
+
+ if let Some(path) = &key.info.path {
+ std::fs::remove_file(path)?;
+ }
+ }
+
+ config.sections.remove(id);
+
+ let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
+ replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())
+ } else {
+ bail!("key {id} not found in config");
+ }
+}
diff --git a/pbs-config/src/lib.rs b/pbs-config/src/lib.rs
index 1ed472385..7f7c8c3e1 100644
--- a/pbs-config/src/lib.rs
+++ b/pbs-config/src/lib.rs
@@ -4,6 +4,7 @@ pub use cached_user_info::CachedUserInfo;
pub mod datastore;
pub mod domains;
pub mod drive;
+pub mod encryption_keys;
pub mod media_pool;
pub mod metrics;
pub mod notifications;
--
2.47.3
* [PATCH proxmox-backup 05/20] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (3 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 04/20] pbs-config: implement encryption key config handling Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 06/20] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
` (18 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Adds a dedicated subpath for permission checks on encryption key
configurations to the acl path component check. This allows setting
permissions either on the whole encryption keys config or on
individual encryption key ids.
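The accepted path shapes can be sketched as a small predicate; a
simplified model of the component check, not the actual function:

```rust
// Sketch of the acl path check for /system/encryption-keys[/{id}]:
// both the bare subpath and one trailing id component are accepted,
// mirroring the existing handling for the s3-endpoint subpath.
fn is_valid_acl_path(path: &str) -> bool {
    let components: Vec<&str> = path.split('/').filter(|c| !c.is_empty()).collect();
    match components.as_slice() {
        ["system"] => true,
        ["system", "encryption-keys"] => true,
        ["system", "encryption-keys", _id] => true,
        _ => false, // deeper nesting below the key id is rejected here
    }
}

fn main() {
    assert!(is_valid_acl_path("/system/encryption-keys"));
    assert!(is_valid_acl_path("/system/encryption-keys/offsite-key"));
    assert!(!is_valid_acl_path("/system/encryption-keys/offsite-key/extra"));
}
```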
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-config/src/acl.rs | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/pbs-config/src/acl.rs b/pbs-config/src/acl.rs
index 2abbf5802..d18a346ff 100644
--- a/pbs-config/src/acl.rs
+++ b/pbs-config/src/acl.rs
@@ -127,8 +127,8 @@ pub fn check_acl_path(path: &str) -> Result<(), Error> {
_ => {}
}
}
- "s3-endpoint" => {
- // /system/s3-endpoint/{id}
+ "s3-endpoint" | "encryption-keys" => {
+ // /system/<matched-component>/{id}
if components_len <= 3 {
return Ok(());
}
--
2.47.3
* [PATCH proxmox-backup 06/20] ui: expose 'encryption-keys' as acl subpath for 'system'
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (4 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 05/20] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 07/20] api: config: add endpoints for encryption key manipulation Christian Ebner
` (17 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Allows selecting the 'encryption-keys' subpath to restrict
permissions to either the full encryption keys configuration or a
matching key id.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/form/PermissionPathSelector.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/form/PermissionPathSelector.js b/www/form/PermissionPathSelector.js
index e5f2aec46..64de42888 100644
--- a/www/form/PermissionPathSelector.js
+++ b/www/form/PermissionPathSelector.js
@@ -15,6 +15,7 @@ Ext.define('PBS.data.PermissionPathsStore', {
{ value: '/system' },
{ value: '/system/certificates' },
{ value: '/system/disks' },
+ { value: '/system/encryption-keys' },
{ value: '/system/log' },
{ value: '/system/network' },
{ value: '/system/network/dns' },
--
2.47.3
* [PATCH proxmox-backup 07/20] api: config: add endpoints for encryption key manipulation
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (5 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 06/20] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 08/20] api: config: allow encryption key manipulation for sync job Christian Ebner
` (16 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Defines the api endpoints for listing existing keys as defined in
the config and for creating new keys.
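The per-key privilege filtering done by the list endpoint can be
sketched as follows. `PRIV_SYS_AUDIT` is modeled as a plain bit
flag and the closure stands in for CachedUserInfo::lookup_privs();
both are simplifications for illustration:

```rust
// Sketch of list_keys() filtering: a key is only returned when the
// caller holds Sys.Audit on /system/encryption-keys/{id}.
const PRIV_SYS_AUDIT: u64 = 1 << 0;

fn filter_keys<'a>(keys: &'a [&'a str], lookup_privs: impl Fn(&str) -> u64) -> Vec<&'a str> {
    keys.iter()
        .copied()
        .filter(|id| lookup_privs(id) & PRIV_SYS_AUDIT != 0)
        .collect()
}

fn main() {
    let keys = ["key-a", "key-b"];
    // this caller may audit key-a only
    let visible = filter_keys(&keys, |id| if id == "key-a" { PRIV_SYS_AUDIT } else { 0 });
    assert_eq!(visible, vec!["key-a"]);
}
```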
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/config/encryption_keys.rs | 115 +++++++++++++++++++++++++++++
src/api2/config/mod.rs | 2 +
2 files changed, 117 insertions(+)
create mode 100644 src/api2/config/encryption_keys.rs
diff --git a/src/api2/config/encryption_keys.rs b/src/api2/config/encryption_keys.rs
new file mode 100644
index 000000000..bc3ee2908
--- /dev/null
+++ b/src/api2/config/encryption_keys.rs
@@ -0,0 +1,115 @@
+use anyhow::{format_err, Error};
+use serde_json::Value;
+
+use proxmox_router::{Permission, Router, RpcEnvironment};
+use proxmox_schema::api;
+
+use pbs_api_types::{
+ Authid, EncryptionKey, ENCRYPTION_KEY_ID_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY,
+};
+
+use pbs_config::encryption_keys::{self, ENCRYPTION_KEYS_CFG_TYPE_ID};
+use pbs_config::CachedUserInfo;
+
+use pbs_key_config::KeyConfig;
+
+#[api(
+ input: {
+ properties: {},
+ },
+ returns: {
+ description: "List of configured encryption keys.",
+ type: Array,
+ items: { type: EncryptionKey },
+ },
+ access: {
+ permission: &Permission::Anybody,
+ description: "List configured encryption keys filtered by Sys.Audit privileges",
+ },
+)]
+/// List configured encryption keys.
+pub fn list_keys(
+ _param: Value,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<EncryptionKey>, Error> {
+ let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let (config, digest) = encryption_keys::config()?;
+
+ let list: Vec<EncryptionKey> = config.convert_to_typed_array(ENCRYPTION_KEYS_CFG_TYPE_ID)?;
+ let list = list
+ .into_iter()
+ .filter(|key| {
+ let privs = user_info.lookup_privs(&auth_id, &["system", "encryption-keys", &key.id]);
+ privs & PRIV_SYS_AUDIT != 0
+ })
+ .collect();
+
+ rpcenv["digest"] = hex::encode(digest).into();
+
+ Ok(list)
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ id: {
+ schema: ENCRYPTION_KEY_ID_SCHEMA,
+ },
+ key: {
+ description: "Use provided key instead of creating new one.",
+ type: String,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system", "encryption-keys"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Create new encryption key instance or use the provided one.
+pub fn create_key(
+ id: String,
+ key: Option<String>,
+ _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<KeyConfig, Error> {
+ let key_config = if let Some(key) = &key {
+ serde_json::from_str(key)
+ .map_err(|err| format_err!("failed to parse provided key: {err}"))?
+ } else {
+ let mut raw_key = [0u8; 32];
+ proxmox_sys::linux::fill_with_random_data(&mut raw_key)?;
+ KeyConfig::without_password(raw_key)?
+ };
+
+ encryption_keys::store_key(&id, &key_config)?;
+
+ Ok(key_config)
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ id: {
+ schema: ENCRYPTION_KEY_ID_SCHEMA,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system", "encryption-keys", "{id}"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Remove encryption key (makes the key unusable, but keeps a backup).
+pub fn delete_key(id: String, _rpcenv: &mut dyn RpcEnvironment) -> Result<(), Error> {
+ encryption_keys::delete_key(&id).map_err(|err| format_err!("failed to delete key: {err}"))
+}
+
+const ITEM_ROUTER: Router = Router::new().delete(&API_METHOD_DELETE_KEY);
+
+pub const ROUTER: Router = Router::new()
+ .get(&API_METHOD_LIST_KEYS)
+ .post(&API_METHOD_CREATE_KEY)
+ .match_all("id", &ITEM_ROUTER);
diff --git a/src/api2/config/mod.rs b/src/api2/config/mod.rs
index 1cd9ead76..0281bcfae 100644
--- a/src/api2/config/mod.rs
+++ b/src/api2/config/mod.rs
@@ -9,6 +9,7 @@ pub mod acme;
pub mod changer;
pub mod datastore;
pub mod drive;
+pub mod encryption_keys;
pub mod media_pool;
pub mod metrics;
pub mod notifications;
@@ -28,6 +29,7 @@ const SUBDIRS: SubdirMap = &sorted!([
("changer", &changer::ROUTER),
("datastore", &datastore::ROUTER),
("drive", &drive::ROUTER),
+ ("encryption-keys", &encryption_keys::ROUTER),
("media-pool", &media_pool::ROUTER),
("metrics", &metrics::ROUTER),
("notifications", ¬ifications::ROUTER),
--
2.47.3
* [PATCH proxmox-backup 08/20] api: config: allow encryption key manipulation for sync job
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (6 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 07/20] api: config: add endpoints for encryption key manipulation Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 09/20] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
` (15 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Since the SyncJobConfig got extended to include an optional
encryption key, set its default to none. Extend the api config
update handler to also set or delete the encryption key based on
the provided parameters.
The encryption key will be used to encrypt unencrypted backup
snapshots (push) or decrypt snapshots with a matching key
fingerprint (pull) during the sync.
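The set/delete semantics of the update handler can be sketched like
this; a simplified model of the handler, with deletions applied
before any provided update value:

```rust
// Sketch of the optional encryption_key property on a sync job config:
// a requested deletion clears the field first, then a provided update
// value (if any) overrides it.
#[derive(Default, Debug, PartialEq)]
struct SyncJobConfig {
    encryption_key: Option<String>,
}

fn update_job(data: &mut SyncJobConfig, delete_encryption_key: bool, update: Option<String>) {
    if delete_encryption_key {
        data.encryption_key = None;
    }
    if let Some(key) = update {
        data.encryption_key = Some(key);
    }
}

fn main() {
    let mut job = SyncJobConfig { encryption_key: Some("old-key".into()) };
    update_job(&mut job, true, None);
    assert_eq!(job.encryption_key, None); // property deleted

    update_job(&mut job, false, Some("new-key".into()));
    assert_eq!(job.encryption_key.as_deref(), Some("new-key"));
}
```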
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/config/sync.rs | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index dff447cb6..e69b0a1ae 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -345,6 +345,8 @@ pub enum DeletableProperty {
UnmountOnDone,
/// Delete the sync_direction property,
SyncDirection,
+ /// Delete the encryption_key property,
+ EncryptionKey,
}
#[api(
@@ -471,6 +473,9 @@ pub fn update_sync_job(
DeletableProperty::SyncDirection => {
data.sync_direction = None;
}
+ DeletableProperty::EncryptionKey => {
+ data.encryption_key = None;
+ }
}
}
}
@@ -530,6 +535,10 @@ pub fn update_sync_job(
data.sync_direction = Some(sync_direction);
}
+ if let Some(encryption_key) = update.encryption_key {
+ data.encryption_key = Some(encryption_key);
+ }
+
if update.limit.rate_in.is_some() {
data.limit.rate_in = update.limit.rate_in;
}
@@ -705,6 +714,7 @@ acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
run_on_mount: None,
unmount_on_done: None,
sync_direction: None, // use default
+ encryption_key: None,
};
// should work without ACLs
--
2.47.3
* [PATCH proxmox-backup 09/20] sync: push: rewrite manifest instead of pushing pre-existing one
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (7 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 08/20] api: config: allow encryption key manipulation for sync job Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 10/20] sync: add helper to check encryption key acls and load key Christian Ebner
` (14 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
In preparation for being able to encrypt unencypted backup snapshots
during push sync jobs.
Previously the pre-existing manifest file was pushed to the remote
target since it did not require modifications and contained all the
files with the correct metadata. When encrypting, however, each file
must be marked as encrypted by setting its crypt mode individually, the
manifest must be signed, and the encryption key fingerprint added to
the unprotected part of the manifest.
Therefore, recreate the manifest and update it accordingly. To do so,
pushing an index must return the full BackupStats, not just the sync
stats used for accounting.
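The rebuild described above can be sketched as follows. This is an illustrative stand-in, not the real pbs-datastore types: `BackupStats`, `FileEntry` and `Manifest` here are simplified hypothetical versions of the structures the patch touches, showing how per-file upload stats feed the recreated target manifest.

```rust
use std::time::Duration;

/// Simplified stand-in for the stats returned by an upload.
#[derive(Clone, Copy)]
struct BackupStats {
    size: u64,
    csum: [u8; 32],
    chunk_count: u64,
    duration: Duration,
}

#[derive(Debug)]
struct FileEntry {
    name: String,
    size: u64,
    csum: [u8; 32],
}

#[derive(Default)]
struct Manifest {
    files: Vec<FileEntry>,
}

impl Manifest {
    /// Record one uploaded file with the stats its upload returned,
    /// mirroring how push_snapshot now calls target_manifest.add_file().
    fn add_file(&mut self, name: &str, stats: BackupStats) {
        self.files.push(FileEntry {
            name: name.into(),
            size: stats.size,
            csum: stats.csum,
        });
    }
}

fn main() {
    let mut target_manifest = Manifest::default();
    let stats = BackupStats {
        size: 1024,
        csum: [0u8; 32],
        chunk_count: 4,
        duration: Duration::from_secs(2),
    };
    // Each pushed blob/index adds its entry to the recreated manifest.
    target_manifest.add_file("root.pxar.didx", stats);
    assert_eq!(target_manifest.files.len(), 1);
    println!("rebuilt manifest with {} file(s)", target_manifest.files.len());
}
```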
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/push.rs | 59 +++++++++++++++++++++++++++++++++-------------
1 file changed, 43 insertions(+), 16 deletions(-)
diff --git a/src/server/push.rs b/src/server/push.rs
index 27c5b22d4..269a4c386 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -17,8 +17,8 @@ use pbs_api_types::{
PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
};
use pbs_client::{
- BackupRepository, BackupWriter, BackupWriterOptions, HttpClient, IndexType, MergedChunkInfo,
- UploadOptions,
+ BackupRepository, BackupStats, BackupWriter, BackupWriterOptions, HttpClient, IndexType,
+ MergedChunkInfo, UploadOptions,
};
use pbs_config::CachedUserInfo;
use pbs_datastore::data_blob::ChunkInfo;
@@ -26,7 +26,7 @@ use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{DataStore, StoreProgress};
+use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
use super::sync::{
check_namespace_depth_limit, exclude_not_verified_or_encrypted,
@@ -879,6 +879,7 @@ pub(crate) async fn push_snapshot(
// Avoid double upload penalty by remembering already seen chunks
let known_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64 * 1024)));
+ let mut target_manifest = BackupManifest::new(snapshot.clone());
for entry in source_manifest.files() {
let mut path = backup_dir.full_path();
@@ -891,6 +892,12 @@ pub(crate) async fn push_snapshot(
let backup_stats = backup_writer
.upload_blob(file, archive_name.as_ref())
.await?;
+ target_manifest.add_file(
+ &archive_name,
+ backup_stats.size,
+ backup_stats.csum,
+ entry.chunk_crypt_mode(),
+ )?;
stats.add(SyncStats {
chunk_count: backup_stats.chunk_count as usize,
bytes: backup_stats.size as usize,
@@ -913,7 +920,7 @@ pub(crate) async fn push_snapshot(
let chunk_reader = reader
.chunk_reader(entry.chunk_crypt_mode())
.context("failed to get chunk reader")?;
- let sync_stats = push_index(
+ let upload_stats = push_index(
&archive_name,
index,
chunk_reader,
@@ -922,7 +929,18 @@ pub(crate) async fn push_snapshot(
known_chunks.clone(),
)
.await?;
- stats.add(sync_stats);
+ target_manifest.add_file(
+ &archive_name,
+ upload_stats.size,
+ upload_stats.csum,
+ entry.chunk_crypt_mode(),
+ )?;
+ stats.add(SyncStats {
+ chunk_count: upload_stats.chunk_count as usize,
+ bytes: upload_stats.size as usize,
+ elapsed: upload_stats.duration,
+ removed: None,
+ });
}
ArchiveType::FixedIndex => {
if let Some(manifest) = upload_options.previous_manifest.as_ref() {
@@ -940,7 +958,7 @@ pub(crate) async fn push_snapshot(
.chunk_reader(entry.chunk_crypt_mode())
.context("failed to get chunk reader")?;
let size = index.index_bytes();
- let sync_stats = push_index(
+ let upload_stats = push_index(
&archive_name,
index,
chunk_reader,
@@ -949,7 +967,18 @@ pub(crate) async fn push_snapshot(
known_chunks.clone(),
)
.await?;
- stats.add(sync_stats);
+ target_manifest.add_file(
+ &archive_name,
+ upload_stats.size,
+ upload_stats.csum,
+ entry.chunk_crypt_mode(),
+ )?;
+ stats.add(SyncStats {
+ chunk_count: upload_stats.chunk_count as usize,
+ bytes: upload_stats.size as usize,
+ elapsed: upload_stats.duration,
+ removed: None,
+ });
}
}
} else {
@@ -972,8 +1001,11 @@ pub(crate) async fn push_snapshot(
.await?;
}
- // Rewrite manifest for pushed snapshot, recreating manifest from source on target
- let manifest_json = serde_json::to_value(source_manifest)?;
+ // Rewrite manifest for pushed snapshot, recreating manifest from source on target,
+ // needs to update all relevant info for new manifest.
+ target_manifest.unprotected = source_manifest.unprotected;
+ target_manifest.signature = source_manifest.signature;
+ let manifest_json = serde_json::to_value(target_manifest)?;
let manifest_string = serde_json::to_string_pretty(&manifest_json)?;
let backup_stats = backup_writer
.upload_blob_from_data(
@@ -1005,7 +1037,7 @@ async fn push_index(
backup_writer: &BackupWriter,
index_type: IndexType,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
-) -> Result<SyncStats, Error> {
+) -> Result<BackupStats, Error> {
let (upload_channel_tx, upload_channel_rx) = mpsc::channel(20);
let mut chunk_infos =
stream::iter(0..index.index_count()).map(move |pos| index.chunk_info(pos).unwrap());
@@ -1057,10 +1089,5 @@ async fn push_index(
.upload_index_chunk_info(filename, merged_chunk_info_stream, upload_options)
.await?;
- Ok(SyncStats {
- chunk_count: upload_stats.chunk_count as usize,
- bytes: upload_stats.size as usize,
- elapsed: upload_stats.duration,
- removed: None,
- })
+ Ok(upload_stats)
}
--
2.47.3
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH proxmox-backup 10/20] sync: add helper to check encryption key acls and load key
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (8 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 09/20] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 11/20] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
` (13 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Introduces a common helper function used when loading an encryption
key for a sync job in either push or pull direction.
For the given user, access to the key referenced by its id is checked,
and the key config containing the secret is loaded from its file via
the encryption key configuration.
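The check-then-load flow can be sketched like this. The ACL path layout (`/system/encryption-keys/<id>`) follows the patch; the `UserInfo` type and the `PRIV_SYS_AUDIT` bit value are simplified stand-ins for `pbs_config::CachedUserInfo` and the real constant in pbs-api-types, not their actual implementations.

```rust
use std::collections::HashMap;

// Illustrative privilege bit; the real PRIV_SYS_AUDIT value lives in
// pbs-api-types.
const PRIV_SYS_AUDIT: u64 = 1 << 4;

/// Stand-in for a cached user info: maps (userid, acl path) to a
/// privilege bitmask.
struct UserInfo {
    privs: HashMap<(String, String), u64>,
}

impl UserInfo {
    /// Fail unless the user holds all required privilege bits on the path.
    fn check_privs(&self, user: &str, path: &[&str], required: u64) -> Result<(), String> {
        let acl_path = format!("/{}", path.join("/"));
        let held = self
            .privs
            .get(&(user.to_string(), acl_path.clone()))
            .copied()
            .unwrap_or(0);
        if held & required == required {
            Ok(())
        } else {
            Err(format!("no access to {acl_path} for {user}"))
        }
    }
}

fn main() {
    let mut privs = HashMap::new();
    privs.insert(
        ("sync@pbs".to_string(), "/system/encryption-keys/key1".to_string()),
        PRIV_SYS_AUDIT,
    );
    let user_info = UserInfo { privs };

    // Only after this check succeeds would the key config be loaded.
    assert!(user_info
        .check_privs("sync@pbs", &["system", "encryption-keys", "key1"], PRIV_SYS_AUDIT)
        .is_ok());
    assert!(user_info
        .check_privs("other@pbs", &["system", "encryption-keys", "key1"], PRIV_SYS_AUDIT)
        .is_err());
    println!("privilege checks behaved as expected");
}
```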
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/sync.rs | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/src/server/sync.rs b/src/server/sync.rs
index aedf4a271..2c1d5dc61 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -21,12 +21,14 @@ use proxmox_router::HttpError;
use pbs_api_types::{
Authid, BackupDir, BackupGroup, BackupNamespace, CryptMode, GroupListItem, SnapshotListItem,
SyncDirection, SyncJobConfig, VerifyState, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME,
- MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+ MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_SYS_AUDIT,
};
use pbs_client::{BackupReader, BackupRepository, HttpClient, RemoteChunkReader};
+use pbs_config::CachedUserInfo;
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::read_chunk::AsyncReadChunk;
use pbs_datastore::{BackupManifest, DataStore, ListNamespacesRecursive, LocalChunkReader};
+use pbs_tools::crypt_config::CryptConfig;
use crate::backup::ListAccessibleBackupGroups;
use crate::server::jobstate::Job;
@@ -791,3 +793,27 @@ pub(super) fn exclude_not_verified_or_encrypted(
false
}
+
+/// Helper to check that the user has access to the given encryption key
+/// and to load it using the passphrase from the config.
+pub(super) fn check_privs_and_load_key_config(
+ key_id: &str,
+ user: &Authid,
+) -> Result<Option<Arc<CryptConfig>>, Error> {
+ let user_info = CachedUserInfo::new()?;
+ user_info.check_privs(
+ user,
+ &["system", "encryption-keys", key_id],
+ PRIV_SYS_AUDIT,
+ true,
+ )?;
+
+ let key_config = pbs_config::encryption_keys::load_key_config(key_id)?;
+ // pass empty passphrase to get raw key material of unprotected key
+ let (enc_key, _created, fingerprint) = key_config.decrypt(&|| Ok(Vec::new()))?;
+
+ log::info!("Loaded encryption key {key_id} with fingerprint '{fingerprint}'");
+
+ let crypt_config = Arc::new(CryptConfig::new(enc_key)?);
+ Ok(Some(crypt_config))
+}
--
2.47.3
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH proxmox-backup 11/20] fix #7251: api: push: encrypt snapshots using configured encryption key
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (9 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 10/20] sync: add helper to check encryption key acls and load key Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows Christian Ebner
` (12 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
If an encryption key id is provided in the push parameters, the key
is loaded at the start of the push sync job and passed along via the
crypt config.
Backup snapshots which are already encrypted, or only partially
encrypted, are skipped to avoid mixing contents. Pre-existing snapshots
on the remote are however not checked to match the key.
Special care has to be taken when tracking already encountered chunks.
For regular push sync jobs, chunk upload is optimized to skip re-upload
of chunks from the previous snapshot (if any) and of new, but already
encountered chunks for the current group sync. Since the chunks now
have to be re-processed anyway, do not load the chunks from the
previous snapshot into memory if they need re-encryption, and keep
track of the unencrypted -> encrypted digest mapping in a hashmap to
avoid re-processing. This might be optimized in the future by e.g.
moving the tracking to an LRU cache, which however requires a more
careful evaluation of memory consumption.
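The digest-mapping idea can be sketched as below. The "re-encryption" here is a placeholder transform (the real code decodes the chunk and rebuilds it via `DataChunkBuilder` with the job's `CryptConfig`); `target_digest` is a hypothetical helper illustrating how the hashmap avoids re-processing chunks seen earlier in the sync.

```rust
use std::collections::HashMap;

type Digest = [u8; 32];

// Stand-in for decode + re-encrypt + rebuild: a fixed XOR marks the
// digest as "re-encrypted" for illustration only.
fn reencrypt(digest: &Digest) -> Digest {
    let mut out = *digest;
    for b in out.iter_mut() {
        *b ^= 0xAA;
    }
    out
}

/// Return the digest to upload, re-encrypting only on first encounter and
/// remembering the unencrypted -> encrypted mapping afterwards.
/// The bool signals whether the chunk actually had to be re-processed.
fn target_digest(map: &mut HashMap<Digest, Digest>, plain: Digest) -> (Digest, bool) {
    if let Some(enc) = map.get(&plain) {
        // Already encountered: reuse the encrypted digest, skip re-work.
        (*enc, false)
    } else {
        let enc = reencrypt(&plain);
        map.insert(plain, enc);
        (enc, true)
    }
}

fn main() {
    let mut mapping = HashMap::new();
    let (first, processed) = target_digest(&mut mapping, [7u8; 32]);
    let (second, reprocessed) = target_digest(&mut mapping, [7u8; 32]);
    assert!(processed && !reprocessed);
    assert_eq!(first, second);
    println!("chunk re-encrypted once, then served from the mapping");
}
```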
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=7251
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/push.rs | 14 ++--
src/server/push.rs | 167 ++++++++++++++++++++++++++++++++++-----------
src/server/sync.rs | 1 +
3 files changed, 137 insertions(+), 45 deletions(-)
diff --git a/src/api2/push.rs b/src/api2/push.rs
index e5edc13e0..79f220afd 100644
--- a/src/api2/push.rs
+++ b/src/api2/push.rs
@@ -3,10 +3,10 @@ use futures::{future::FutureExt, select};
use pbs_api_types::{
Authid, BackupNamespace, GroupFilter, RateLimitConfig, DATASTORE_SCHEMA,
- GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
- PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP, PRIV_REMOTE_DATASTORE_PRUNE,
- REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA, SYNC_ENCRYPTED_ONLY_SCHEMA,
- SYNC_VERIFIED_ONLY_SCHEMA, TRANSFER_LAST_SCHEMA,
+ ENCRYPTION_KEY_ID_SCHEMA, GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA,
+ PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP,
+ PRIV_REMOTE_DATASTORE_PRUNE, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA,
+ SYNC_ENCRYPTED_ONLY_SCHEMA, SYNC_VERIFIED_ONLY_SCHEMA, TRANSFER_LAST_SCHEMA,
};
use proxmox_rest_server::WorkerTask;
use proxmox_router::{Permission, Router, RpcEnvironment};
@@ -108,6 +108,10 @@ fn check_push_privs(
schema: TRANSFER_LAST_SCHEMA,
optional: true,
},
+ "encryption-key": {
+ schema: ENCRYPTION_KEY_ID_SCHEMA,
+ optional: true,
+ },
},
},
access: {
@@ -133,6 +137,7 @@ async fn push(
verified_only: Option<bool>,
limit: RateLimitConfig,
transfer_last: Option<usize>,
+ encryption_key: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -164,6 +169,7 @@ async fn push(
verified_only,
limit,
transfer_last,
+ encryption_key,
)
.await?;
diff --git a/src/server/push.rs b/src/server/push.rs
index 269a4c386..beacc0819 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1,6 +1,6 @@
//! Sync datastore by pushing contents to remote server
-use std::collections::HashSet;
+use std::collections::{HashMap, HashSet};
use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Context, Error};
@@ -11,22 +11,23 @@ use tracing::{info, warn};
use pbs_api_types::{
print_store_and_ns, ApiVersion, ApiVersionInfo, ArchiveType, Authid, BackupArchiveName,
- BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, GroupFilter, GroupListItem,
- NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem, CLIENT_LOG_BLOB_NAME,
- MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP,
- PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
+ BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, CryptMode, GroupFilter,
+ GroupListItem, NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem,
+ CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+ PRIV_REMOTE_DATASTORE_BACKUP, PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
};
use pbs_client::{
BackupRepository, BackupStats, BackupWriter, BackupWriterOptions, HttpClient, IndexType,
MergedChunkInfo, UploadOptions,
};
use pbs_config::CachedUserInfo;
-use pbs_datastore::data_blob::ChunkInfo;
+use pbs_datastore::data_blob::{ChunkInfo, DataChunkBuilder};
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::read_chunk::AsyncReadChunk;
use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
+use pbs_tools::crypt_config::CryptConfig;
use super::sync::{
check_namespace_depth_limit, exclude_not_verified_or_encrypted,
@@ -83,6 +84,9 @@ pub(crate) struct PushParameters {
verified_only: bool,
/// How many snapshots should be transferred at most (taking the newest N snapshots)
transfer_last: Option<usize>,
+ /// Encryption key to use for pushing unencrypted backup snapshots. Does not affect
+ /// already encrypted snapshots.
+ crypt_config: Option<Arc<CryptConfig>>,
}
impl PushParameters {
@@ -102,6 +106,7 @@ impl PushParameters {
verified_only: Option<bool>,
limit: RateLimitConfig,
transfer_last: Option<usize>,
+ encryption_key: Option<String>,
) -> Result<Self, Error> {
if let Some(max_depth) = max_depth {
ns.check_max_depth(max_depth)?;
@@ -154,6 +159,12 @@ impl PushParameters {
};
let group_filter = group_filter.unwrap_or_default();
+ let crypt_config = if let Some(key_id) = &encryption_key {
+ crate::server::sync::check_privs_and_load_key_config(key_id, &local_user)?
+ } else {
+ None
+ };
+
Ok(Self {
source,
target,
@@ -164,6 +175,7 @@ impl PushParameters {
encrypted_only,
verified_only,
transfer_last,
+ crypt_config,
})
}
@@ -794,6 +806,29 @@ pub(crate) async fn push_group(
Ok(stats)
}
+async fn load_previous_snapshot_known_chunks(
+ params: &PushParameters,
+ manifest: &BackupManifest,
+ backup_writer: &BackupWriter,
+ archive_name: &BackupArchiveName,
+ known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+) {
+ if let Some(crypt_config) = ¶ms.crypt_config {
+ if let Ok(Some(fingerprint)) = manifest.fingerprint() {
+ if *fingerprint.bytes() == crypt_config.fingerprint() {
+ // Add known chunks only if the fingerprint is not the
+ // same and therefore needs no re-encryption.
+ return;
+ }
+ }
+ }
+
+ // Add known chunks, ignore errors since archive might not be present
+ let _res = backup_writer
+ .download_previous_fixed_index(archive_name, manifest, known_chunks)
+ .await;
+}
+
/// Push snapshot to target
///
/// Creates a new snapshot on the target and pushes the content of the source snapshot to the
@@ -836,6 +871,19 @@ pub(crate) async fn push_snapshot(
return Ok(stats);
}
+ if params.crypt_config.is_some() {
+ // Check that the manifest contains only unencrypted files, refuse to proceed
+ // otherwise to not double encrypt or upload partially unencrypted contents.
+ if !source_manifest
+ .files()
+ .iter()
+ .all(|file| file.chunk_crypt_mode() == CryptMode::None)
+ {
+ warn!("Encountered partially encrypted snapshot, refuse to re-encrypt and skip");
+ return Ok(stats);
+ }
+ }
+
// Writer instance locks the snapshot on the remote side
let backup_writer = BackupWriter::start(
¶ms.target.client,
@@ -843,7 +891,7 @@ pub(crate) async fn push_snapshot(
datastore: params.target.repo.store(),
ns: &target_ns,
backup: snapshot,
- crypt_config: None,
+ crypt_config: params.crypt_config.clone(),
debug: false,
benchmark: false,
no_cache: false,
@@ -860,19 +908,20 @@ pub(crate) async fn push_snapshot(
}
};
- // Dummy upload options: the actual compression and/or encryption already happened while
- // the chunks were generated during creation of the backup snapshot, therefore pre-existing
- // chunks (already compressed and/or encrypted) can be pushed to the target.
+ // Dummy upload options: The actual compression already happened while
+ // the chunks were generated during creation of the backup snapshot,
+ // therefore pre-existing chunks (already compressed) can be pushed to
+ // the target.
+ //
// Further, these steps are skipped in the backup writer upload stream.
//
// Therefore, these values do not need to fit the values given in the manifest.
// The original manifest is uploaded in the end anyways.
//
// Compression is set to true so that the uploaded manifest will be compressed.
- // Encrypt is set to assure that above files are not encrypted.
let upload_options = UploadOptions {
compress: true,
- encrypt: false,
+ encrypt: params.crypt_config.is_some(),
previous_manifest,
..UploadOptions::default()
};
@@ -886,6 +935,10 @@ pub(crate) async fn push_snapshot(
path.push(&entry.filename);
if path.try_exists()? {
let archive_name = BackupArchiveName::from_path(&entry.filename)?;
+ let crypt_mode = match ¶ms.crypt_config {
+ Some(_) => CryptMode::Encrypt,
+ None => entry.chunk_crypt_mode(),
+ };
match archive_name.archive_type() {
ArchiveType::Blob => {
let file = std::fs::File::open(&path)?;
@@ -896,7 +949,7 @@ pub(crate) async fn push_snapshot(
&archive_name,
backup_stats.size,
backup_stats.csum,
- entry.chunk_crypt_mode(),
+ crypt_mode,
)?;
stats.add(SyncStats {
chunk_count: backup_stats.chunk_count as usize,
@@ -907,14 +960,14 @@ pub(crate) async fn push_snapshot(
}
ArchiveType::DynamicIndex => {
if let Some(manifest) = upload_options.previous_manifest.as_ref() {
- // Add known chunks, ignore errors since archive might not be present
- let _res = backup_writer
- .download_previous_dynamic_index(
- &archive_name,
- manifest,
- known_chunks.clone(),
- )
- .await;
+ load_previous_snapshot_known_chunks(
+ params,
+ manifest,
+ &backup_writer,
+ &archive_name,
+ known_chunks.clone(),
+ )
+ .await;
}
let index = DynamicIndexReader::open(&path)?;
let chunk_reader = reader
@@ -927,13 +980,14 @@ pub(crate) async fn push_snapshot(
&backup_writer,
IndexType::Dynamic,
known_chunks.clone(),
+ params.crypt_config.clone(),
)
.await?;
target_manifest.add_file(
&archive_name,
upload_stats.size,
upload_stats.csum,
- entry.chunk_crypt_mode(),
+ crypt_mode,
)?;
stats.add(SyncStats {
chunk_count: upload_stats.chunk_count as usize,
@@ -944,14 +998,14 @@ pub(crate) async fn push_snapshot(
}
ArchiveType::FixedIndex => {
if let Some(manifest) = upload_options.previous_manifest.as_ref() {
- // Add known chunks, ignore errors since archive might not be present
- let _res = backup_writer
- .download_previous_fixed_index(
- &archive_name,
- manifest,
- known_chunks.clone(),
- )
- .await;
+ load_previous_snapshot_known_chunks(
+ params,
+ manifest,
+ &backup_writer,
+ &archive_name,
+ known_chunks.clone(),
+ )
+ .await;
}
let index = FixedIndexReader::open(&path)?;
let chunk_reader = reader
@@ -965,13 +1019,14 @@ pub(crate) async fn push_snapshot(
&backup_writer,
IndexType::Fixed(Some(size)),
known_chunks.clone(),
+ params.crypt_config.clone(),
)
.await?;
target_manifest.add_file(
&archive_name,
upload_stats.size,
upload_stats.csum,
- entry.chunk_crypt_mode(),
+ crypt_mode,
)?;
stats.add(SyncStats {
chunk_count: upload_stats.chunk_count as usize,
@@ -1005,13 +1060,21 @@ pub(crate) async fn push_snapshot(
// needs to update all relevant info for new manifest.
target_manifest.unprotected = source_manifest.unprotected;
target_manifest.signature = source_manifest.signature;
- let manifest_json = serde_json::to_value(target_manifest)?;
- let manifest_string = serde_json::to_string_pretty(&manifest_json)?;
+ let manifest_string = if params.crypt_config.is_some() {
+ target_manifest.to_string(params.crypt_config.as_ref().map(Arc::as_ref))?
+ } else {
+ let manifest_json = serde_json::to_value(target_manifest)?;
+ serde_json::to_string_pretty(&manifest_json)?
+ };
let backup_stats = backup_writer
.upload_blob_from_data(
manifest_string.into_bytes(),
MANIFEST_BLOB_NAME.as_ref(),
- upload_options,
+ UploadOptions {
+ compress: true,
+ encrypt: false,
+ ..UploadOptions::default()
+ },
)
.await?;
backup_writer.finish().await?;
@@ -1037,12 +1100,15 @@ async fn push_index(
backup_writer: &BackupWriter,
index_type: IndexType,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+ crypt_config: Option<Arc<CryptConfig>>,
) -> Result<BackupStats, Error> {
let (upload_channel_tx, upload_channel_rx) = mpsc::channel(20);
let mut chunk_infos =
stream::iter(0..index.index_count()).map(move |pos| index.chunk_info(pos).unwrap());
+ let crypt_config_cloned = crypt_config.clone();
tokio::spawn(async move {
+ let mut encrypted_mapping = HashMap::new();
while let Some(chunk_info) = chunk_infos.next().await {
// Avoid reading known chunks, as they are not uploaded by the backup writer anyways
let needs_upload = {
@@ -1056,20 +1122,39 @@ async fn push_index(
chunk_reader
.read_raw_chunk(&chunk_info.digest)
.await
- .map(|chunk| {
- MergedChunkInfo::New(ChunkInfo {
+ .and_then(|chunk| {
+ let (chunk, digest, chunk_len) = match crypt_config_cloned.as_ref() {
+ Some(crypt_config) => {
+ let data = chunk.decode(None, Some(&chunk_info.digest))?;
+ let (chunk, digest) = DataChunkBuilder::new(&data)
+ .compress(true)
+ .crypt_config(crypt_config)
+ .build()?;
+ encrypted_mapping.insert(chunk_info.digest, digest);
+ (chunk, digest, data.len() as u64)
+ }
+ None => (chunk, chunk_info.digest, chunk_info.size()),
+ };
+
+ Ok(MergedChunkInfo::New(ChunkInfo {
chunk,
- digest: chunk_info.digest,
- chunk_len: chunk_info.size(),
+ digest,
+ chunk_len,
offset: chunk_info.range.start,
- })
+ }))
})
} else {
+ let digest =
+ if let Some(encrypted_digest) = encrypted_mapping.get(&chunk_info.digest) {
+ *encrypted_digest
+ } else {
+ chunk_info.digest
+ };
Ok(MergedChunkInfo::Known(vec![(
// Pass size instead of offset, will be replaced with offset by the backup
// writer
chunk_info.size(),
- chunk_info.digest,
+ digest,
)]))
};
let _ = upload_channel_tx.send(merged_chunk_info).await;
@@ -1080,7 +1165,7 @@ async fn push_index(
let upload_options = UploadOptions {
compress: true,
- encrypt: false,
+ encrypt: crypt_config.is_some(),
index_type,
..UploadOptions::default()
};
diff --git a/src/server/sync.rs b/src/server/sync.rs
index 2c1d5dc61..d52175a13 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -677,6 +677,7 @@ pub fn do_sync_job(
sync_job.verified_only,
sync_job.limit.clone(),
sync_job.transfer_last,
+ sync_job.encryption_key,
)
.await?;
push_store(push_params).await?
--
2.47.3
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (10 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 11/20] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 23:09 ` Thomas Lamprecht
` (2 more replies)
2026-04-01 7:55 ` [PATCH proxmox-backup 13/20] ui: expose assigning encryption key to sync jobs Christian Ebner
` (11 subsequent siblings)
23 siblings, 3 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Allows creating or removing encryption keys via the WebUI. A new key
entity can be added either by having the server generate a new key
itself or by uploading a pre-existing key via a key file, similar to
what Proxmox VE currently allows when setting up a PBS storage.
After creation, the key is shown in a dialog which allows exporting it.
This reuses the same logic as PVE, with slight adaptations to include
the key id and use a different API endpoint.
On removal, the user is informed about the risk of no longer being able
to decrypt snapshots encrypted with this key.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/Makefile | 2 +
www/NavigationTree.js | 6 +
www/Utils.js | 1 +
www/config/EncryptionKeysView.js | 143 ++++++++++++
www/window/EncryptionKeysEdit.js | 382 +++++++++++++++++++++++++++++++
5 files changed, 534 insertions(+)
create mode 100644 www/config/EncryptionKeysView.js
create mode 100644 www/window/EncryptionKeysEdit.js
diff --git a/www/Makefile b/www/Makefile
index 9ebf0445f..dbede8a5a 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -70,6 +70,7 @@ JSSRC= \
config/GCView.js \
config/WebauthnView.js \
config/CertificateView.js \
+ config/EncryptionKeysView.js \
config/NodeOptionView.js \
config/MetricServerView.js \
config/NotificationConfigView.js \
@@ -78,6 +79,7 @@ JSSRC= \
window/BackupGroupChangeOwner.js \
window/CreateDirectory.js \
window/DataStoreEdit.js \
+ window/EncryptionKeysEdit.js \
window/NamespaceEdit.js \
window/MaintenanceOptions.js \
window/NotesEdit.js \
diff --git a/www/NavigationTree.js b/www/NavigationTree.js
index 649692c83..58543bdc3 100644
--- a/www/NavigationTree.js
+++ b/www/NavigationTree.js
@@ -74,6 +74,12 @@ Ext.define('PBS.store.NavigationStore', {
path: 'pbsCertificateConfiguration',
leaf: true,
},
+ {
+ text: gettext('Encryption Keys'),
+ iconCls: 'fa fa-lock',
+ path: 'pbsEncryptionKeysView',
+ leaf: true,
+ },
{
text: gettext('Notifications'),
iconCls: 'fa fa-bell-o',
diff --git a/www/Utils.js b/www/Utils.js
index 25fba16f9..a10251c2f 100644
--- a/www/Utils.js
+++ b/www/Utils.js
@@ -450,6 +450,7 @@ Ext.define('PBS.Utils', {
prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
prunejob: (type, id) => PBS.Utils.render_prune_job_worker_id(id, gettext('Prune Job')),
reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read Objects')),
+ 'remove-encryption-key': [gettext('Encryption Key'), gettext('Remove Encryption Key')],
'rewind-media': [gettext('Drive'), gettext('Rewind Media')],
's3-refresh': [gettext('Datastore'), gettext('S3 Refresh')],
sync: ['Datastore', gettext('Remote Sync')],
diff --git a/www/config/EncryptionKeysView.js b/www/config/EncryptionKeysView.js
new file mode 100644
index 000000000..965dec47c
--- /dev/null
+++ b/www/config/EncryptionKeysView.js
@@ -0,0 +1,143 @@
+Ext.define('pbs-encryption-keys', {
+ extend: 'Ext.data.Model',
+ fields: ['id', 'fingerprint', 'created'],
+ idProperty: 'id',
+ proxy: {
+ type: 'proxmox',
+ url: '/api2/json/config/encryption-keys',
+ },
+});
+
+Ext.define('PBS.config.EncryptionKeysView', {
+ extend: 'Ext.grid.GridPanel',
+ alias: 'widget.pbsEncryptionKeysView',
+
+ title: gettext('Encryption Keys'),
+
+ stateful: true,
+ stateId: 'grid-encryption-keys',
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ addEncryptionKey: function () {
+ let me = this;
+ Ext.create('PBS.window.EncryptionKeysEdit', {
+ listeners: {
+ destroy: function () {
+ me.reload();
+ },
+ },
+ }).show();
+ },
+
+ removeEncryptionKey: function () {
+ let me = this;
+ let view = me.getView();
+ let selection = view.getSelection();
+
+ if (!selection || selection.length < 1) {
+ return;
+ }
+
+ let keyID = selection[0].data.id;
+
+ Ext.create('Proxmox.window.SafeDestroy', {
+ url: `/api2/json/config/encryption-keys/${keyID}`,
+ item: {
+ id: keyID,
+ },
+ autoShow: true,
+ showProgress: false,
+ taskName: 'remove-encryption-key',
+ listeners: {
+ destroy: () => me.reload(),
+ },
+ additionalItems: [
+ {
+ xtype: 'box',
+ userCls: 'pmx-hint',
+ style: {
+ 'inline-size': '375px',
+ 'overflow-wrap': 'break-word',
+ },
+ padding: '5',
+ html: gettext(
+ 'Make sure you have a backup of the encryption key!<br><br>You will not be able to decrypt backup snapshots encrypted with this key once removed.',
+ ),
+ },
+ ],
+ }).show();
+ },
+
+ reload: function () {
+ this.getView().getStore().rstore.load();
+ },
+
+ init: function (view) {
+ Proxmox.Utils.monStoreErrors(view, view.getStore().rstore);
+ },
+ },
+
+ listeners: {
+ activate: 'reload',
+ itemdblclick: 'editEncryptionKeys',
+ },
+
+ store: {
+ type: 'diff',
+ autoDestroy: true,
+ autoDestroyRstore: true,
+ sorters: 'id',
+ rstore: {
+ type: 'update',
+ storeid: 'pbs-encryption-keys',
+ model: 'pbs-encryption-keys',
+ autoStart: true,
+ interval: 5000,
+ },
+ },
+
+ tbar: [
+ {
+ xtype: 'proxmoxButton',
+ text: gettext('Add'),
+ handler: 'addEncryptionKey',
+ selModel: false,
+ },
+ {
+ xtype: 'proxmoxButton',
+ text: gettext('Remove'),
+ handler: 'removeEncryptionKey',
+ disabled: true,
+ },
+ ],
+
+ viewConfig: {
+ trackOver: false,
+ },
+
+ columns: [
+ {
+ dataIndex: 'id',
+ header: gettext('Encryption Key ID'),
+ renderer: Ext.String.htmlEncode,
+ sortable: true,
+ width: 200,
+ },
+ {
+ dataIndex: 'fingerprint',
+ header: gettext('Fingerprint'),
+ renderer: Ext.String.htmlEncode,
+ sortable: false,
+ width: 600,
+ },
+ {
+ dataIndex: 'created',
+ header: gettext('Created'),
+ renderer: Proxmox.Utils.render_timestamp,
+ sortable: false,
+ flex: 1,
+ },
+ ],
+});
diff --git a/www/window/EncryptionKeysEdit.js b/www/window/EncryptionKeysEdit.js
new file mode 100644
index 000000000..42a14cb20
--- /dev/null
+++ b/www/window/EncryptionKeysEdit.js
@@ -0,0 +1,382 @@
+Ext.define('PBS.ShowEncryptionKey', {
+ extend: 'Ext.window.Window',
+ xtype: 'pbsShowEncryptionKey',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ width: 600,
+ modal: true,
+ resizable: false,
+ title: gettext('Important: Save your Encryption Key'),
+
+ // avoid close via ESC key, force the user to take a more deliberate action
+ onEsc: Ext.emptyFn,
+ closable: false,
+
+ items: [
+ {
+ xtype: 'form',
+ layout: {
+ type: 'vbox',
+ align: 'stretch',
+ },
+ bodyPadding: 10,
+ border: false,
+ defaults: {
+ anchor: '100%',
+ border: false,
+ padding: '10 0 0 0',
+ },
+ items: [
+ {
+ xtype: 'textfield',
+ fieldLabel: gettext('Key ID'),
+ labelWidth: 80,
+ inputId: 'keyID',
+ cbind: {
+ value: '{keyID}',
+ },
+ editable: false,
+ },
+ {
+ xtype: 'textfield',
+ fieldLabel: gettext('Key'),
+ labelWidth: 80,
+ inputId: 'encryption-key',
+ cbind: {
+ value: '{key}',
+ },
+ editable: false,
+ },
+ {
+ xtype: 'component',
+ html:
+ gettext(
+ 'Keep your encryption key safe, but easily accessible for disaster recovery.',
+ ) +
+ '<br>' +
+ gettext('We recommend the following safe-keeping strategy:'),
+ },
+ {
+ xtype: 'container',
+ layout: 'hbox',
+ items: [
+ {
+ xtype: 'component',
+ html: '1. ' + gettext('Save the key in your password manager.'),
+ flex: 1,
+ },
+ {
+ xtype: 'button',
+ text: gettext('Copy Key'),
+ iconCls: 'fa fa-clipboard x-btn-icon-el-default-toolbar-small',
+ cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+ width: 110,
+ handler: function (b) {
+ document.getElementById('encryption-key').select();
+ document.execCommand('copy');
+ },
+ },
+ ],
+ },
+ {
+ xtype: 'container',
+ layout: 'hbox',
+ items: [
+ {
+ xtype: 'component',
+ html:
+ '2. ' +
+ gettext(
+ 'Download the key to a USB (pen) drive, placed in secure vault.',
+ ),
+ flex: 1,
+ },
+ {
+ xtype: 'button',
+ text: gettext('Download'),
+ iconCls: 'fa fa-download x-btn-icon-el-default-toolbar-small',
+ cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+ width: 110,
+ handler: function (b) {
+ let showWindow = this.up('window');
+
+ let filename = `${showWindow.keyID}.enc`;
+
+ let hiddenElement = document.createElement('a');
+ hiddenElement.href =
+ 'data:attachment/text,' + encodeURI(showWindow.key);
+ hiddenElement.target = '_blank';
+ hiddenElement.download = filename;
+ hiddenElement.click();
+ },
+ },
+ ],
+ },
+ {
+ xtype: 'container',
+ layout: 'hbox',
+ items: [
+ {
+ xtype: 'component',
+ html:
+ '3. ' +
+ gettext('Print as paperkey, laminated and placed in secure vault.'),
+ flex: 1,
+ },
+ {
+ xtype: 'button',
+ text: gettext('Print Key'),
+ iconCls: 'fa fa-print x-btn-icon-el-default-toolbar-small',
+ cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+ width: 110,
+ handler: function (b) {
+ let showWindow = this.up('window');
+ showWindow.paperkey(showWindow.key);
+ },
+ },
+ ],
+ },
+ ],
+ },
+ {
+ xtype: 'component',
+ border: false,
+ padding: '10 10 10 10',
+ userCls: 'pmx-hint',
+ html: gettext(
+ 'Please save the encryption key - losing it will render any backup created with it unusable',
+ ),
+ },
+ ],
+ buttons: [
+ {
+ text: gettext('Close'),
+ handler: function (b) {
+ let showWindow = this.up('window');
+ showWindow.close();
+ },
+ },
+ ],
+ paperkey: function (keyString) {
+ let me = this;
+
+ const key = JSON.parse(keyString);
+
+ const qrwidth = 500;
+ let qrdiv = document.createElement('div');
+ let qrcode = new QRCode(qrdiv, {
+ width: qrwidth,
+ height: qrwidth,
+ correctLevel: QRCode.CorrectLevel.H,
+ });
+ qrcode.makeCode(keyString);
+
+ let shortKeyFP = '';
+ if (key.fingerprint) {
+ shortKeyFP = PBS.Utils.renderKeyID(key.fingerprint);
+ }
+
+ let printFrame = document.createElement('iframe');
+ Object.assign(printFrame.style, {
+ position: 'fixed',
+ right: '0',
+ bottom: '0',
+ width: '0',
+ height: '0',
+ border: '0',
+ });
+ const prettifiedKey = JSON.stringify(key, null, 2);
+ const keyQrBase64 = qrdiv.children[0].toDataURL('image/png');
+ const html = `<html><head><script>
+ window.addEventListener('DOMContentLoaded', (ev) => window.print());
+ </script><style>@media print and (max-height: 150mm) {
+ h4, p { margin: 0; font-size: 1em; }
+ }</style></head><body style="padding: 5px;">
+ <h4>Encryption Key '${me.keyID}' (${shortKeyFP})</h4>
+<p style="font-size:1.2em;font-family:monospace;white-space:pre-wrap;overflow-wrap:break-word;">
+-----BEGIN PROXMOX BACKUP KEY-----
+${prettifiedKey}
+-----END PROXMOX BACKUP KEY-----</p>
+ <center><img style="width: 100%; max-width: ${qrwidth}px;" src="${keyQrBase64}"></center>
+ </body></html>`;
+
+ printFrame.src = 'data:text/html;base64,' + btoa(html);
+ document.body.appendChild(printFrame);
+ me.on('destroy', () => document.body.removeChild(printFrame));
+ },
+});
+
+Ext.define('PBS.window.EncryptionKeysEdit', {
+ extend: 'Proxmox.window.Edit',
+ xtype: 'widget.pbsEncryptionKeysEdit',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ width: 400,
+
+ fieldDefaults: { labelWidth: 120 },
+
+ subject: gettext('Encryption Key'),
+
+ cbindData: function (initialConfig) {
+ let me = this;
+
+ me.url = '/api2/extjs/config/encryption-keys';
+ me.method = 'POST';
+ me.autoLoad = false;
+
+ return {};
+ },
+
+ apiCallDone: function (success, response, options) {
+ let me = this;
+
+ if (!me.rendered) {
+ return;
+ }
+
+ let res = response.result.data;
+ if (!res) {
+ return;
+ }
+
+ let keyIdField = me.down('field[name=id]');
+ Ext.create('PBS.ShowEncryptionKey', {
+ autoShow: true,
+ keyID: keyIdField.getValue(),
+ key: JSON.stringify(res),
+ });
+ },
+
+ viewModel: {
+ data: {
+ keepCryptVisible: false,
+ },
+ },
+
+ items: [
+ {
+ xtype: 'pmxDisplayEditField',
+ name: 'id',
+ fieldLabel: gettext('Encryption Key ID'),
+ renderer: Ext.htmlEncode,
+ allowBlank: false,
+ minLength: 4,
+ editable: true,
+ },
+ {
+ xtype: 'displayfield',
+ name: 'crypt-key-fp',
+ fieldLabel: gettext('Key Source'),
+ padding: '2 0',
+ },
+ {
+ xtype: 'radiofield',
+ name: 'keysource',
+ value: true,
+ inputValue: 'new',
+ submitValue: false,
+ boxLabel: gettext('Auto-generate a new encryption key'),
+ padding: '0 0 0 25',
+ },
+ {
+ xtype: 'radiofield',
+ name: 'keysource',
+ inputValue: 'upload',
+ submitValue: false,
+ boxLabel: gettext('Upload an existing encryption key'),
+ padding: '0 0 0 25',
+ listeners: {
+ change: function (f, value) {
+ let editWindow = this.up('window');
+ if (!editWindow.rendered) {
+ return;
+ }
+ let uploadKeyField = editWindow.down('field[name=key]');
+ uploadKeyField.setDisabled(!value);
+ uploadKeyField.setHidden(!value);
+
+ let uploadKeyButton = editWindow.down('filebutton[name=upload-button]');
+ uploadKeyButton.setDisabled(!value);
+ uploadKeyButton.setHidden(!value);
+
+ if (value) {
+ uploadKeyField.validate();
+ } else {
+ uploadKeyField.reset();
+ }
+ },
+ },
+ },
+ {
+ xtype: 'fieldcontainer',
+ layout: 'hbox',
+ items: [
+ {
+ xtype: 'proxmoxtextfield',
+ name: 'key',
+ fieldLabel: gettext('Upload From File'),
+ value: '',
+ disabled: true,
+ hidden: true,
+ allowBlank: false,
+ labelAlign: 'right',
+ flex: 1,
+ emptyText: gettext('Drag-and-drop key file here.'),
+ validator: function (value) {
+ if (value.length) {
+ let key;
+ try {
+ key = JSON.parse(value);
+ } catch (e) {
+ return 'Failed to parse key - ' + e;
+ }
+ if (key.data === undefined) {
+ return 'Does not seem to be a valid Proxmox Backup key!';
+ }
+ }
+ return true;
+ },
+ afterRender: function () {
+ if (!window.FileReader) {
+ // No FileReader support in this browser
+ return;
+ }
+ let cancel = function (ev) {
+ ev = ev.event;
+ if (ev.preventDefault) {
+ ev.preventDefault();
+ }
+ };
+ this.inputEl.on('dragover', cancel);
+ this.inputEl.on('dragenter', cancel);
+ this.inputEl.on('drop', (ev) => {
+ cancel(ev);
+ let reader = new FileReader();
+ reader.onload = (loadEv) => this.setValue(loadEv.target.result);
+ reader.readAsText(ev.event.dataTransfer.files[0]);
+ });
+ },
+ },
+ {
+ xtype: 'filebutton',
+ name: 'upload-button',
+ iconCls: 'fa fa-fw fa-folder-open-o x-btn-icon-el-default-toolbar-small',
+ cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+ margin: '0 0 0 4',
+ disabled: true,
+ hidden: true,
+ listeners: {
+ change: function (btn, e, value) {
+ let ev = e.event;
+ let field = btn.up().down('proxmoxtextfield[name=key]');
+ let reader = new FileReader();
+ reader.onload = (ev) => field.setValue(ev.target.result);
+ reader.readAsText(ev.target.files[0]);
+ btn.reset();
+ },
+ },
+ },
+ ],
+ },
+ ],
+});
--
2.47.3
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH proxmox-backup 13/20] ui: expose assigning encryption key to sync jobs
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (11 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 14/20] sync: pull: load encryption key if given in job config Christian Ebner
` (10 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
This allows selecting pre-defined encryption keys and assigning them
to the sync job configuration.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/Makefile | 1 +
www/form/EncryptionKeySelector.js | 59 +++++++++++++++++++++++++++++++
www/window/SyncJobEdit.js | 11 ++++++
3 files changed, 71 insertions(+)
create mode 100644 www/form/EncryptionKeySelector.js
diff --git a/www/Makefile b/www/Makefile
index dbede8a5a..76f8e2dd7 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -55,6 +55,7 @@ JSSRC= \
form/GroupSelector.js \
form/GroupFilter.js \
form/VerifyOutdatedAfter.js \
+ form/EncryptionKeySelector.js \
data/RunningTasksStore.js \
button/TaskButton.js \
panel/PrunePanel.js \
diff --git a/www/form/EncryptionKeySelector.js b/www/form/EncryptionKeySelector.js
new file mode 100644
index 000000000..831055a82
--- /dev/null
+++ b/www/form/EncryptionKeySelector.js
@@ -0,0 +1,59 @@
+Ext.define('PBS.form.EncryptionKeySelector', {
+ extend: 'Ext.form.field.ComboBox',
+ alias: 'widget.pbsEncryptionKeySelector',
+
+ allowBlank: true,
+ autoSelect: true,
+ submitEmpty: false,
+ valueField: 'id',
+
+ displayField: 'id',
+ emptyText: gettext('None (disabled)'),
+
+ editable: true,
+ anyMatch: true,
+ forceSelection: true,
+ queryMode: 'local',
+
+ matchFieldWidth: false,
+ listConfig: {
+ minWidth: 170,
+ maxWidth: 500,
+ emptyText: `<div class="x-grid-empty">${gettext('No key accessible.')}</div>`,
+ },
+
+ triggers: {
+ clear: {
+ cls: 'pmx-clear-trigger',
+ weight: -1,
+ hidden: true,
+ handler: function () {
+ this.triggers.clear.setVisible(false);
+ this.setValue('');
+ },
+ },
+ },
+
+ listeners: {
+ change: function (field, value) {
+ let canClear = value !== '';
+ field.triggers.clear.setVisible(canClear);
+ },
+ },
+
+ initComponent: function () {
+ let me = this;
+
+ me.store = Ext.create('Ext.data.Store', {
+ model: 'pbs-encryption-keys',
+ autoLoad: true,
+ proxy: {
+ type: 'proxmox',
+ timeout: 30 * 1000,
+ url: `/api2/json/config/encryption-keys`,
+ },
+ });
+
+ me.callParent();
+ },
+});
diff --git a/www/window/SyncJobEdit.js b/www/window/SyncJobEdit.js
index 074c7855a..f6838c631 100644
--- a/www/window/SyncJobEdit.js
+++ b/www/window/SyncJobEdit.js
@@ -560,6 +560,17 @@ Ext.define('PBS.window.SyncJobEdit', {
},
],
},
+ {
+ xtype: 'inputpanel',
+ title: gettext('Encryption'),
+ column1: [
+ {
+ xtype: 'pbsEncryptionKeySelector',
+ name: 'encryption-key',
+ fieldLabel: gettext('Encryption Key'),
+ },
+ ],
+ },
],
},
});
--
2.47.3
* [PATCH proxmox-backup 14/20] sync: pull: load encryption key if given in job config
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (12 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 13/20] ui: expose assigning encryption key to sync jobs Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 15/20] sync: expand source chunk reader trait by crypt config Christian Ebner
` (9 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
If configured and passed in on PullParameters construction, check
access and load the encryption key. Any snapshot matching this key's
fingerprint should be decrypted during the pull sync.
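The fingerprint check described above can be sketched as follows (a minimal, self-contained example; the function name and string-typed fingerprints are illustrative stand-ins, not the actual pbs types):

```rust
/// Decide whether a snapshot should be decrypted during pull:
/// only when its manifest records a key fingerprint and that
/// fingerprint matches the one of the configured encryption key.
fn should_decrypt(manifest_fp: Option<&str>, configured_key_fp: Option<&str>) -> bool {
    match (manifest_fp, configured_key_fp) {
        (Some(manifest), Some(key)) => manifest == key,
        // Unencrypted snapshots or jobs without a key are left untouched.
        _ => false,
    }
}
```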
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/pull.rs | 15 +++++++++++----
src/server/pull.rs | 11 +++++++++++
2 files changed, 22 insertions(+), 4 deletions(-)
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 4b1fd5e60..fb797a882 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -8,10 +8,10 @@ use proxmox_schema::api;
use pbs_api_types::{
Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, DATASTORE_SCHEMA,
- GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
- PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA,
- RESYNC_CORRUPT_SCHEMA, SYNC_ENCRYPTED_ONLY_SCHEMA, SYNC_VERIFIED_ONLY_SCHEMA,
- TRANSFER_LAST_SCHEMA,
+ ENCRYPTION_KEY_ID_SCHEMA, GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA,
+ PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ, REMOTE_ID_SCHEMA,
+ REMOVE_VANISHED_BACKUPS_SCHEMA, RESYNC_CORRUPT_SCHEMA, SYNC_ENCRYPTED_ONLY_SCHEMA,
+ SYNC_VERIFIED_ONLY_SCHEMA, TRANSFER_LAST_SCHEMA,
};
use pbs_config::CachedUserInfo;
use proxmox_rest_server::WorkerTask;
@@ -91,6 +91,7 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
sync_job.encrypted_only,
sync_job.verified_only,
sync_job.resync_corrupt,
+ sync_job.encryption_key.clone(),
)
}
}
@@ -148,6 +149,10 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
schema: RESYNC_CORRUPT_SCHEMA,
optional: true,
},
+ "encryption-key": {
+ schema: ENCRYPTION_KEY_ID_SCHEMA,
+ optional: true,
+ },
},
},
access: {
@@ -175,6 +180,7 @@ async fn pull(
encrypted_only: Option<bool>,
verified_only: Option<bool>,
resync_corrupt: Option<bool>,
+ encryption_key: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -215,6 +221,7 @@ async fn pull(
encrypted_only,
verified_only,
resync_corrupt,
+ encryption_key,
)?;
// fixme: set to_stdout to false?
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 0ac6b5b8e..5374b4faf 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -8,6 +8,7 @@ use std::sync::{Arc, Mutex};
use std::time::SystemTime;
use anyhow::{bail, format_err, Context, Error};
+use pbs_tools::crypt_config::CryptConfig;
use proxmox_human_byte::HumanByte;
use tracing::{info, warn};
@@ -65,6 +66,8 @@ pub(crate) struct PullParameters {
verified_only: bool,
/// Whether to re-sync corrupted snapshots
resync_corrupt: bool,
+ /// Encryption key config to decrypt snapshots with matching key fingerprint
+ crypt_config: Option<Arc<CryptConfig>>,
}
impl PullParameters {
@@ -85,6 +88,7 @@ impl PullParameters {
encrypted_only: Option<bool>,
verified_only: Option<bool>,
resync_corrupt: Option<bool>,
+ encryption_key: Option<String>,
) -> Result<Self, Error> {
if let Some(max_depth) = max_depth {
ns.check_max_depth(max_depth)?;
@@ -123,6 +127,12 @@ impl PullParameters {
let group_filter = group_filter.unwrap_or_default();
+ let crypt_config = if let Some(key_id) = &encryption_key {
+ crate::server::sync::check_privs_and_load_key_config(key_id, &owner)?
+ } else {
+ None
+ };
+
Ok(Self {
source,
target,
@@ -134,6 +144,7 @@ impl PullParameters {
encrypted_only,
verified_only,
resync_corrupt,
+ crypt_config,
})
}
}
--
2.47.3
* [PATCH proxmox-backup 15/20] sync: expand source chunk reader trait by crypt config
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (13 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 14/20] sync: pull: load encryption key if given in job config Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 16/20] sync: pull: introduce and use decrypt index writer if " Christian Ebner
` (8 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Allows passing in the crypt config for the source chunk reader,
making it possible to decrypt chunks when fetching.
This will be used by the pull sync job to decrypt snapshot chunks
which have been encrypted with an encryption key matching the
one in the pull job configuration.
This remains disarmed by not setting the crypt config until the rest
of the logic to correctly decrypt snapshots on pull, including the
manifest, index files and chunks, is put in place in subsequent code
changes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 8 ++++++--
src/server/push.rs | 4 ++--
src/server/sync.rs | 28 ++++++++++++++++++++++------
3 files changed, 30 insertions(+), 10 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 5374b4faf..a5d1b3079 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -293,6 +293,7 @@ async fn pull_single_archive<'a>(
snapshot: &'a pbs_datastore::BackupDir,
archive_info: &'a FileInfo,
encountered_chunks: Arc<Mutex<EncounteredChunks>>,
+ crypt_config: Option<Arc<CryptConfig>>,
backend: &DatastoreBackend,
) -> Result<SyncStats, Error> {
let archive_name = &archive_info.filename;
@@ -323,7 +324,7 @@ async fn pull_single_archive<'a>(
} else {
let stats = pull_index_chunks(
reader
- .chunk_reader(archive_info.crypt_mode)
+ .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
.context("failed to get chunk reader")?,
snapshot.datastore().clone(),
index,
@@ -346,7 +347,7 @@ async fn pull_single_archive<'a>(
} else {
let stats = pull_index_chunks(
reader
- .chunk_reader(archive_info.crypt_mode)
+ .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
.context("failed to get chunk reader")?,
snapshot.datastore().clone(),
index,
@@ -460,6 +461,8 @@ async fn pull_snapshot<'a>(
return Ok(sync_stats);
}
+ let mut crypt_config = None;
+
let backend = ¶ms.target.backend;
for item in manifest.files() {
let mut path = snapshot.full_path();
@@ -506,6 +509,7 @@ async fn pull_snapshot<'a>(
snapshot,
item,
encountered_chunks.clone(),
+ crypt_config.clone(),
backend,
)
.await?;
diff --git a/src/server/push.rs b/src/server/push.rs
index beacc0819..7b34992b0 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -971,7 +971,7 @@ pub(crate) async fn push_snapshot(
}
let index = DynamicIndexReader::open(&path)?;
let chunk_reader = reader
- .chunk_reader(entry.chunk_crypt_mode())
+ .chunk_reader(None, entry.chunk_crypt_mode())
.context("failed to get chunk reader")?;
let upload_stats = push_index(
&archive_name,
@@ -1009,7 +1009,7 @@ pub(crate) async fn push_snapshot(
}
let index = FixedIndexReader::open(&path)?;
let chunk_reader = reader
- .chunk_reader(entry.chunk_crypt_mode())
+ .chunk_reader(None, entry.chunk_crypt_mode())
.context("failed to get chunk reader")?;
let size = index.index_bytes();
let upload_stats = push_index(
diff --git a/src/server/sync.rs b/src/server/sync.rs
index d52175a13..5dd069ba3 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -90,7 +90,11 @@ impl SyncStats {
/// and checking whether chunk sync should be skipped.
pub(crate) trait SyncSourceReader: Send + Sync {
/// Returns a chunk reader with the specified encryption mode.
- fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error>;
+ fn chunk_reader(
+ &self,
+ crypt_config: Option<Arc<CryptConfig>>,
+ crypt_mode: CryptMode,
+ ) -> Result<Arc<dyn AsyncReadChunk>, Error>;
/// Asynchronously loads a file from the source into a local file.
/// `filename` is the name of the file to load from the source.
@@ -117,9 +121,17 @@ pub(crate) struct LocalSourceReader {
#[async_trait::async_trait]
impl SyncSourceReader for RemoteSourceReader {
- fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error> {
- let chunk_reader =
- RemoteChunkReader::new(self.backup_reader.clone(), None, crypt_mode, HashMap::new());
+ fn chunk_reader(
+ &self,
+ crypt_config: Option<Arc<CryptConfig>>,
+ crypt_mode: CryptMode,
+ ) -> Result<Arc<dyn AsyncReadChunk>, Error> {
+ let chunk_reader = RemoteChunkReader::new(
+ self.backup_reader.clone(),
+ crypt_config,
+ crypt_mode,
+ HashMap::new(),
+ );
Ok(Arc::new(chunk_reader))
}
@@ -191,8 +203,12 @@ impl SyncSourceReader for RemoteSourceReader {
#[async_trait::async_trait]
impl SyncSourceReader for LocalSourceReader {
- fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error> {
- let chunk_reader = LocalChunkReader::new(self.datastore.clone(), None, crypt_mode)?;
+ fn chunk_reader(
+ &self,
+ crypt_config: Option<Arc<CryptConfig>>,
+ crypt_mode: CryptMode,
+ ) -> Result<Arc<dyn AsyncReadChunk>, Error> {
+ let chunk_reader = LocalChunkReader::new(self.datastore.clone(), crypt_config, crypt_mode)?;
Ok(Arc::new(chunk_reader))
}
--
2.47.3
* [PATCH proxmox-backup 16/20] sync: pull: introduce and use decrypt index writer if crypt config
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (14 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 15/20] sync: expand source chunk reader trait by crypt config Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 17/20] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
` (7 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
In order to decrypt an encrypted index file during a pull sync job
when a matching decryption key is configured, the index has to be
rewritten, as the chunks have to be decrypted and the new digests
calculated based on the decrypted chunks. The newly written index
file finally needs to replace the original one, achieved by replacing
the original tempfile after pulling the chunks.
In order to be able to do so, provide a DecryptedIndexWriter instance
to the chunk pulling logic. The DecryptedIndexWriter provides variants
for fixed and dynamic index writers, or none if no rewriting should
happen.
This remains disarmed for the time being by never passing the crypt
config until the logic to decrypt the chunk and re-calculate the
digests is in place, done in subsequent code changes.
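The enum described above can be modeled stand-alone roughly like this (the writer structs are simplified stand-ins for the pbs-datastore types, and `append` is a hypothetical helper illustrating match-based dispatch on the shared handle):

```rust
use std::sync::{Arc, Mutex};

// Simplified stand-ins for pbs-datastore's index writers.
#[derive(Default)]
struct FixedIndexWriter { digests: Vec<[u8; 32]> }
#[derive(Default)]
struct DynamicIndexWriter { digests: Vec<[u8; 32]> }

// Shared, cloneable handle passed into the chunk pulling logic.
#[derive(Clone)]
enum DecryptedIndexWriter {
    Fixed(Arc<Mutex<FixedIndexWriter>>),
    Dynamic(Arc<Mutex<DynamicIndexWriter>>),
    None,
}

impl DecryptedIndexWriter {
    // Record the digest of a decrypted chunk in the rewritten index.
    fn append(&self, digest: [u8; 32]) {
        match self {
            DecryptedIndexWriter::Fixed(w) => w.lock().unwrap().digests.push(digest),
            DecryptedIndexWriter::Dynamic(w) => w.lock().unwrap().digests.push(digest),
            // No crypt config configured: nothing to rewrite.
            DecryptedIndexWriter::None => {}
        }
    }
}
```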
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 135 ++++++++++++++++++++++++++++++---------------
1 file changed, 89 insertions(+), 46 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index a5d1b3079..8002bbf87 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -21,8 +21,8 @@ use pbs_api_types::{
use pbs_client::BackupRepository;
use pbs_config::CachedUserInfo;
use pbs_datastore::data_blob::DataBlob;
-use pbs_datastore::dynamic_index::DynamicIndexReader;
-use pbs_datastore::fixed_index::FixedIndexReader;
+use pbs_datastore::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
+use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{BackupManifest, FileInfo};
use pbs_datastore::read_chunk::AsyncReadChunk;
@@ -155,6 +155,7 @@ async fn pull_index_chunks<I: IndexFile>(
index: I,
encountered_chunks: Arc<Mutex<EncounteredChunks>>,
backend: &DatastoreBackend,
+ decrypted_index_writer: DecryptedIndexWriter,
) -> Result<SyncStats, Error> {
use futures::stream::{self, StreamExt, TryStreamExt};
@@ -190,55 +191,61 @@ async fn pull_index_chunks<I: IndexFile>(
let bytes = Arc::new(AtomicUsize::new(0));
let chunk_count = Arc::new(AtomicUsize::new(0));
- stream
- .map(|info| {
- let target = Arc::clone(&target);
- let chunk_reader = chunk_reader.clone();
- let bytes = Arc::clone(&bytes);
- let chunk_count = Arc::clone(&chunk_count);
- let verify_and_write_channel = verify_and_write_channel.clone();
- let encountered_chunks = Arc::clone(&encountered_chunks);
-
- Ok::<_, Error>(async move {
- {
- // limit guard scope
- let mut guard = encountered_chunks.lock().unwrap();
- if let Some(touched) = guard.check_reusable(&info.digest) {
- if touched {
- return Ok::<_, Error>(());
- }
- let chunk_exists = proxmox_async::runtime::block_in_place(|| {
- target.cond_touch_chunk(&info.digest, false)
- })?;
- if chunk_exists {
- guard.mark_touched(&info.digest);
- //info!("chunk {} exists {}", pos, hex::encode(digest));
- return Ok::<_, Error>(());
- }
+ let stream = stream.map(|info| {
+ let target = Arc::clone(&target);
+ let chunk_reader = chunk_reader.clone();
+ let bytes = Arc::clone(&bytes);
+ let chunk_count = Arc::clone(&chunk_count);
+ let verify_and_write_channel = verify_and_write_channel.clone();
+ let encountered_chunks = Arc::clone(&encountered_chunks);
+
+ Ok::<_, Error>(async move {
+ {
+ // limit guard scope
+ let mut guard = encountered_chunks.lock().unwrap();
+ if let Some(touched) = guard.check_reusable(&info.digest) {
+ if touched {
+ return Ok::<_, Error>(());
+ }
+ let chunk_exists = proxmox_async::runtime::block_in_place(|| {
+ target.cond_touch_chunk(&info.digest, false)
+ })?;
+ if chunk_exists {
+ guard.mark_touched(&info.digest);
+ //info!("chunk {} exists {}", pos, hex::encode(digest));
+ return Ok::<_, Error>(());
}
- // mark before actually downloading the chunk, so this happens only once
- guard.mark_reusable(&info.digest);
- guard.mark_touched(&info.digest);
}
+ // mark before actually downloading the chunk, so this happens only once
+ guard.mark_reusable(&info.digest);
+ guard.mark_touched(&info.digest);
+ }
- //info!("sync {} chunk {}", pos, hex::encode(digest));
- let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
- let raw_size = chunk.raw_size() as usize;
+ //info!("sync {} chunk {}", pos, hex::encode(digest));
+ let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+ let raw_size = chunk.raw_size() as usize;
- // decode, verify and write in a separate threads to maximize throughput
- proxmox_async::runtime::block_in_place(|| {
- verify_and_write_channel.send((chunk, info.digest, info.size()))
- })?;
+ // decode, verify and write in a separate threads to maximize throughput
+ proxmox_async::runtime::block_in_place(|| {
+ verify_and_write_channel.send((chunk, info.digest, info.size()))
+ })?;
- bytes.fetch_add(raw_size, Ordering::SeqCst);
- chunk_count.fetch_add(1, Ordering::SeqCst);
+ bytes.fetch_add(raw_size, Ordering::SeqCst);
+ chunk_count.fetch_add(1, Ordering::SeqCst);
- Ok(())
- })
+ Ok(())
})
- .try_buffer_unordered(20)
- .try_for_each(|_res| futures::future::ok(()))
- .await?;
+ });
+
+ if let DecryptedIndexWriter::None = decrypted_index_writer {
+ stream
+ .try_buffer_unordered(20)
+ .try_for_each(|_res| futures::future::ok(()))
+ .await?;
+ } else {
+ // must keep chunk order to correctly rewrite index file
+ stream.try_for_each(|item| item).await?;
+ }
drop(verify_and_write_channel);
@@ -319,9 +326,15 @@ async fn pull_single_archive<'a>(
let (csum, size) = index.compute_csum();
verify_archive(archive_info, &csum, size)?;
- if reader.skip_chunk_sync(snapshot.datastore().name()) {
+ if crypt_config.is_none() && reader.skip_chunk_sync(snapshot.datastore().name()) {
info!("skipping chunk sync for same datastore");
} else {
+ let new_index_writer = if crypt_config.is_some() {
+ let writer = DynamicIndexWriter::create(&path)?;
+ DecryptedIndexWriter::Dynamic(Arc::new(Mutex::new(writer)))
+ } else {
+ DecryptedIndexWriter::None
+ };
let stats = pull_index_chunks(
reader
.chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
@@ -330,8 +343,16 @@ async fn pull_single_archive<'a>(
index,
encountered_chunks,
backend,
+ new_index_writer.clone(),
)
.await?;
+ if let DecryptedIndexWriter::Dynamic(index) = &new_index_writer {
+ let csum = index.lock().unwrap().close()?;
+
+ // Overwrite current tmp file so it will be persisted instead
+ std::fs::rename(&path, &tmp_path)?;
+ }
+
sync_stats.add(stats);
}
}
@@ -342,9 +363,16 @@ async fn pull_single_archive<'a>(
let (csum, size) = index.compute_csum();
verify_archive(archive_info, &csum, size)?;
- if reader.skip_chunk_sync(snapshot.datastore().name()) {
+ if crypt_config.is_none() && reader.skip_chunk_sync(snapshot.datastore().name()) {
info!("skipping chunk sync for same datastore");
} else {
+ let new_index_writer = if crypt_config.is_some() {
+ let writer =
+ FixedIndexWriter::create(&path, Some(size), index.chunk_size as u32)?;
+ DecryptedIndexWriter::Fixed(Arc::new(Mutex::new(writer)))
+ } else {
+ DecryptedIndexWriter::None
+ };
let stats = pull_index_chunks(
reader
.chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
@@ -353,8 +381,16 @@ async fn pull_single_archive<'a>(
index,
encountered_chunks,
backend,
+ new_index_writer.clone(),
)
.await?;
+ if let DecryptedIndexWriter::Fixed(index) = &new_index_writer {
+ let csum = index.lock().unwrap().close()?;
+
+ // Overwrite current tmp file so it will be persisted instead
+ std::fs::rename(&path, &tmp_path)?;
+ }
+
sync_stats.add(stats);
}
}
@@ -1269,3 +1305,10 @@ impl EncounteredChunks {
self.chunk_set.clear();
}
}
+
+#[derive(Clone)]
+enum DecryptedIndexWriter {
+ Fixed(Arc<Mutex<FixedIndexWriter>>),
+ Dynamic(Arc<Mutex<DynamicIndexWriter>>),
+ None,
+}
--
2.47.3
* [PATCH proxmox-backup 17/20] sync: pull: extend encountered chunk by optional decrypted digest
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (15 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 16/20] sync: pull: introduce and use decrypt index writer if " Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 18/20] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
` (6 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
For index files being decrypted during the pull, it is not enough to
keep track of the processed source chunks; the decrypted digest has
to be known as well in order to rewrite the index file.
Extend the encountered chunks such that this can be tracked as well.
To not introduce clippy warnings and to keep the code readable,
introduce the EncounteredChunksInfo struct as internal type for the
hash map values.
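The resulting bookkeeping can be modeled as a self-contained sketch (method names follow the patch; the struct bodies are simplified):

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

struct EncounteredChunkInfo {
    reusable: bool,
    touched: bool,
    decrypted_digest: Option<[u8; 32]>,
}

struct EncounteredChunks {
    chunk_set: HashMap<[u8; 32], EncounteredChunkInfo>,
}

impl EncounteredChunks {
    fn new() -> Self {
        Self { chunk_set: HashMap::new() }
    }

    /// Reusable chunks yield their touched state and, if known,
    /// the digest of the decrypted chunk contents.
    fn check_reusable(&self, digest: &[u8; 32]) -> Option<(bool, Option<&[u8; 32]>)> {
        let info = self.chunk_set.get(digest)?;
        if info.reusable {
            Some((info.touched, info.decrypted_digest.as_ref()))
        } else {
            None
        }
    }

    fn mark_reusable(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
        match self.chunk_set.entry(*digest) {
            Entry::Occupied(mut occupied) => occupied.get_mut().reusable = true,
            Entry::Vacant(vacant) => {
                vacant.insert(EncounteredChunkInfo { reusable: true, touched: false, decrypted_digest });
            }
        }
    }

    fn mark_touched(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
        match self.chunk_set.entry(*digest) {
            Entry::Occupied(mut occupied) => occupied.get_mut().touched = true,
            Entry::Vacant(vacant) => {
                vacant.insert(EncounteredChunkInfo { reusable: false, touched: true, decrypted_digest });
            }
        }
    }
}
```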
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 53 +++++++++++++++++++++++++++++-----------------
1 file changed, 33 insertions(+), 20 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 8002bbf87..bc2a89f88 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -167,7 +167,7 @@ async fn pull_index_chunks<I: IndexFile>(
.filter(|info| {
let guard = encountered_chunks.lock().unwrap();
match guard.check_reusable(&info.digest) {
- Some(touched) => !touched, // reusable and already touched, can always skip
+ Some((touched, _decrypted_chunk)) => !touched, // reusable and already touched, can always skip
None => true,
}
}),
@@ -203,7 +203,7 @@ async fn pull_index_chunks<I: IndexFile>(
{
// limit guard scope
let mut guard = encountered_chunks.lock().unwrap();
- if let Some(touched) = guard.check_reusable(&info.digest) {
+ if let Some((touched, _decrypted_digest)) = guard.check_reusable(&info.digest) {
if touched {
return Ok::<_, Error>(());
}
@@ -211,14 +211,14 @@ async fn pull_index_chunks<I: IndexFile>(
target.cond_touch_chunk(&info.digest, false)
})?;
if chunk_exists {
- guard.mark_touched(&info.digest);
+ guard.mark_touched(&info.digest, None);
//info!("chunk {} exists {}", pos, hex::encode(digest));
return Ok::<_, Error>(());
}
}
// mark before actually downloading the chunk, so this happens only once
- guard.mark_reusable(&info.digest);
- guard.mark_touched(&info.digest);
+ guard.mark_reusable(&info.digest, None);
+ guard.mark_touched(&info.digest, None);
}
//info!("sync {} chunk {}", pos, hex::encode(digest));
@@ -813,7 +813,7 @@ async fn pull_group(
for pos in 0..index.index_count() {
let chunk_info = index.chunk_info(pos).unwrap();
- reusable_chunks.mark_reusable(&chunk_info.digest);
+ reusable_chunks.mark_reusable(&chunk_info.digest, None);
}
}
}
@@ -1243,12 +1243,17 @@ async fn pull_ns(
Ok((progress, sync_stats, errors))
}
+struct EncounteredChunkInfo {
+ reusable: bool,
+ touched: bool,
+ decrypted_digest: Option<[u8; 32]>,
+}
+
/// Store the state of encountered chunks, tracking if they can be reused for the
/// index file currently being pulled and if the chunk has already been touched
/// during this sync.
struct EncounteredChunks {
- // key: digest, value: (reusable, touched)
- chunk_set: HashMap<[u8; 32], (bool, bool)>,
+ chunk_set: HashMap<[u8; 32], EncounteredChunkInfo>,
}
impl EncounteredChunks {
@@ -1261,12 +1266,12 @@ impl EncounteredChunks {
/// Check if the current state allows to reuse this chunk and if so,
/// if the chunk has already been touched.
- fn check_reusable(&self, digest: &[u8; 32]) -> Option<bool> {
- if let Some((reusable, touched)) = self.chunk_set.get(digest) {
- if !reusable {
+ fn check_reusable(&self, digest: &[u8; 32]) -> Option<(bool, Option<&[u8; 32]>)> {
+ if let Some(chunk_info) = self.chunk_set.get(digest) {
+ if !chunk_info.reusable {
None
} else {
- Some(*touched)
+ Some((chunk_info.touched, chunk_info.decrypted_digest.as_ref()))
}
} else {
None
@@ -1274,28 +1279,36 @@ impl EncounteredChunks {
}
/// Mark chunk as reusable, inserting it as un-touched if not present
- fn mark_reusable(&mut self, digest: &[u8; 32]) {
+ fn mark_reusable(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
match self.chunk_set.entry(*digest) {
Entry::Occupied(mut occupied) => {
- let (reusable, _touched) = occupied.get_mut();
- *reusable = true;
+ let chunk_info = occupied.get_mut();
+ chunk_info.reusable = true;
}
Entry::Vacant(vacant) => {
- vacant.insert((true, false));
+ vacant.insert(EncounteredChunkInfo {
+ reusable: true,
+ touched: false,
+ decrypted_digest,
+ });
}
}
}
/// Mark chunk as touched during this sync, inserting it as not reusable
/// but touched if not present.
- fn mark_touched(&mut self, digest: &[u8; 32]) {
+ fn mark_touched(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
match self.chunk_set.entry(*digest) {
Entry::Occupied(mut occupied) => {
- let (_reusable, touched) = occupied.get_mut();
- *touched = true;
+ let chunk_info = occupied.get_mut();
+ chunk_info.touched = true;
}
Entry::Vacant(vacant) => {
- vacant.insert((false, true));
+ vacant.insert(EncounteredChunkInfo {
+ reusable: false,
+ touched: true,
+ decrypted_digest,
+ });
}
}
}
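For readers following along, the reusable/touched/decrypted-digest state machine this hunk introduces can be sketched standalone with std-only types. This is a simplified stand-in mirroring the patched `EncounteredChunks`, not the PBS code itself:

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Per-chunk state tracked during a pull (field names mirror the patch).
struct ChunkInfo {
    reusable: bool,
    touched: bool,
    decrypted_digest: Option<[u8; 32]>,
}

#[derive(Default)]
struct EncounteredChunks {
    chunk_set: HashMap<[u8; 32], ChunkInfo>,
}

impl EncounteredChunks {
    /// None if the chunk is unknown or not reusable; otherwise the touched
    /// flag plus the mapped (decrypted) digest, if one was recorded.
    fn check_reusable(&self, digest: &[u8; 32]) -> Option<(bool, Option<&[u8; 32]>)> {
        let info = self.chunk_set.get(digest)?;
        if info.reusable {
            Some((info.touched, info.decrypted_digest.as_ref()))
        } else {
            None
        }
    }

    /// Mark chunk as reusable, inserting it as un-touched if not present.
    fn mark_reusable(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
        match self.chunk_set.entry(*digest) {
            Entry::Occupied(mut occupied) => occupied.get_mut().reusable = true,
            Entry::Vacant(vacant) => {
                vacant.insert(ChunkInfo { reusable: true, touched: false, decrypted_digest });
            }
        }
    }

    /// Mark chunk as touched, inserting it as not reusable if not present.
    fn mark_touched(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
        match self.chunk_set.entry(*digest) {
            Entry::Occupied(mut occupied) => occupied.get_mut().touched = true,
            Entry::Vacant(vacant) => {
                vacant.insert(ChunkInfo { reusable: false, touched: true, decrypted_digest });
            }
        }
    }
}
```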
--
2.47.3
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH proxmox-backup 18/20] sync: pull: decrypt blob files on pull if encryption key is configured
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (16 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 17/20] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 19/20] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
` (5 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
During pull, blob files are stored in a temporary file before being
renamed to the actual blob filename as stored in the manifest.
If a decryption key is configured in the pull parameters, use it after
downloading the temporary blob file from the remote to decrypt the blob
and re-encode it as a new compressed but unencrypted blob file. The
decrypted tempfile is then renamed over the original tmpfile, which is
finally moved into place.
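The tempfile dance described above can be sketched with std-only I/O. The `decrypt` closure here stands in for the actual `DataBlobReader` + `DataBlob::encode` round trip, and the `.dectmp` naming mirrors the patch:

```rust
use std::fs;
use std::io::{Read, Seek, SeekFrom, Write};
use std::path::Path;

// Read the downloaded blob, transform it, write to a `.dectmp` sibling and
// rename it over the original tmpfile so the existing move-into-place code
// path stays unchanged.
fn decrypt_tmpfile(tmp_path: &Path, decrypt: impl Fn(&[u8]) -> Vec<u8>) -> std::io::Result<()> {
    let mut tmpfile = fs::File::open(tmp_path)?;
    // rewind defensively; in the patch the cursor sits at EOF after checksumming
    tmpfile.seek(SeekFrom::Start(0))?;
    let mut raw = Vec::new();
    tmpfile.read_to_end(&mut raw)?;
    let plain = decrypt(&raw);

    let mut dec_path = tmp_path.to_path_buf();
    dec_path.set_extension("dectmp");
    {
        let mut dec = fs::OpenOptions::new()
            .write(true)
            .create_new(true) // NB: a leftover .dectmp from a failed run makes this fail
            .open(&dec_path)?;
        dec.write_all(&plain)?;
        dec.flush()?;
    }
    // the decrypted file replaces the downloaded tmpfile
    fs::rename(&dec_path, tmp_path)
}
```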
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 42 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 40 insertions(+), 2 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index bc2a89f88..ccf349c92 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -2,7 +2,7 @@
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
-use std::io::Seek;
+use std::io::{BufReader, Read, Seek, Write};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::time::SystemTime;
@@ -26,7 +26,9 @@ use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{BackupManifest, FileInfo};
use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{check_backup_owner, DataStore, DatastoreBackend, StoreProgress};
+use pbs_datastore::{
+ check_backup_owner, DataBlobReader, DataStore, DatastoreBackend, StoreProgress,
+};
use pbs_tools::sha::sha256;
use super::sync::{
@@ -398,6 +400,42 @@ async fn pull_single_archive<'a>(
tmpfile.rewind()?;
let (csum, size) = sha256(&mut tmpfile)?;
verify_archive(archive_info, &csum, size)?;
+
+ if crypt_config.is_some() {
+ let crypt_config = crypt_config.clone();
+ let tmp_path = tmp_path.clone();
+ let archive_name = archive_name.clone();
+
+ tokio::task::spawn_blocking(move || {
+ // must rewind again since after verifying cursor is at the end of the file
+ tmpfile.rewind()?;
+ let reader = DataBlobReader::new(tmpfile, crypt_config)?;
+ let mut reader = BufReader::new(reader);
+ let mut raw_data = Vec::new();
+ reader.read_to_end(&mut raw_data)?;
+
+ let blob = DataBlob::encode(&raw_data, None, true)?;
+ let raw_blob = blob.into_inner();
+
+ let mut decrypted_tmp_path = tmp_path.clone();
+ decrypted_tmp_path.set_extension("dectmp");
+ let mut decrypted_tmpfile = std::fs::OpenOptions::new()
+ .read(true)
+ .write(true)
+ .create_new(true)
+ .open(&decrypted_tmp_path)?;
+ decrypted_tmpfile.write_all(&raw_blob)?;
+ decrypted_tmpfile.flush()?;
+ decrypted_tmpfile.rewind()?;
+ let (csum, size) = sha256(&mut decrypted_tmpfile)?;
+
+ std::fs::rename(&decrypted_tmp_path, &tmp_path)?;
+
+ Ok::<(), Error>(())
+ })
+ .await?
+ .map_err(|err| format_err!("Failed when decrypting blob {path:?}: {err}"))?;
+ }
}
}
if let Err(err) = std::fs::rename(&tmp_path, &path) {
--
2.47.3
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH proxmox-backup 19/20] sync: pull: decrypt chunks and rewrite index file for matching key
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (17 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 18/20] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 20/20] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
` (4 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
If a matching decryption key is provided, use it to decrypt the chunks
on pull and rewrite the index file based on the decrypted chunk digests
and offsets.
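The offset bookkeeping the patch adds for the rewritten index boils down to a shared atomic counter; a minimal sketch (names simplified, not the PBS types):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Each chunk task claims its byte range in the rewritten index via
/// fetch_add, so the ranges are contiguous and non-overlapping even when
/// tasks run concurrently. Fixed indexes record the start offset, dynamic
/// indexes the end offset.
fn claim_range(offset: &AtomicU64, size: u64) -> (u64, u64) {
    let start = offset.fetch_add(size, Ordering::SeqCst);
    (start, start + size)
}
```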
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 135 ++++++++++++++++++++++++++++++++++++++-------
1 file changed, 114 insertions(+), 21 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index ccf349c92..05152d0dd 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -3,7 +3,7 @@
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::io::{BufReader, Read, Seek, Write};
-use std::sync::atomic::{AtomicUsize, Ordering};
+use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::time::SystemTime;
@@ -20,7 +20,7 @@ use pbs_api_types::{
};
use pbs_client::BackupRepository;
use pbs_config::CachedUserInfo;
-use pbs_datastore::data_blob::DataBlob;
+use pbs_datastore::data_blob::{DataBlob, DataChunkBuilder};
use pbs_datastore::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
use pbs_datastore::index::IndexFile;
@@ -169,7 +169,16 @@ async fn pull_index_chunks<I: IndexFile>(
.filter(|info| {
let guard = encountered_chunks.lock().unwrap();
match guard.check_reusable(&info.digest) {
- Some((touched, _decrypted_chunk)) => !touched, // reusable and already touched, can always skip
+ Some((touched, mapped_digest)) => {
+ if mapped_digest.is_some() {
+ // if there is a mapping, then the chunk digest must be rewritten to
+ // the index, cannot skip here but optimized when processing the stream
+ true
+ } else {
+ // reusable and already touched, can always skip
+ !touched
+ }
+ }
None => true,
}
}),
@@ -191,6 +200,7 @@ async fn pull_index_chunks<I: IndexFile>(
let verify_and_write_channel = verify_pool.channel();
let bytes = Arc::new(AtomicUsize::new(0));
+ let offset = Arc::new(AtomicU64::new(0));
let chunk_count = Arc::new(AtomicUsize::new(0));
let stream = stream.map(|info| {
@@ -200,36 +210,119 @@ async fn pull_index_chunks<I: IndexFile>(
let chunk_count = Arc::clone(&chunk_count);
let verify_and_write_channel = verify_and_write_channel.clone();
let encountered_chunks = Arc::clone(&encountered_chunks);
+ let offset = Arc::clone(&offset);
+ let decrypted_index_writer = decrypted_index_writer.clone();
Ok::<_, Error>(async move {
- {
- // limit guard scope
- let mut guard = encountered_chunks.lock().unwrap();
- if let Some((touched, _decrypted_digest)) = guard.check_reusable(&info.digest) {
- if touched {
+ //info!("sync {} chunk {}", pos, hex::encode(digest));
+ let (chunk, digest, size) = match decrypted_index_writer {
+ DecryptedIndexWriter::Fixed(index) => {
+ if let Some((_touched, Some(decrypted_digest))) = encountered_chunks
+ .lock()
+ .unwrap()
+ .check_reusable(&info.digest)
+ {
+ // already got the decrypted digest and chunk has been written,
+ // no need to process again
+ let size = info.size();
+ let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+
+ index.lock().unwrap().add_chunk(
+ start_offset,
+ size as u32,
+ decrypted_digest,
+ )?;
+
return Ok::<_, Error>(());
}
- let chunk_exists = proxmox_async::runtime::block_in_place(|| {
- target.cond_touch_chunk(&info.digest, false)
- })?;
- if chunk_exists {
- guard.mark_touched(&info.digest, None);
- //info!("chunk {} exists {}", pos, hex::encode(digest));
+
+ let chunk_data = chunk_reader.read_chunk(&info.digest).await?;
+ let (chunk, digest) =
+ DataChunkBuilder::new(&chunk_data).compress(true).build()?;
+
+ let size = chunk_data.len() as u64;
+ let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+
+ index
+ .lock()
+ .unwrap()
+ .add_chunk(start_offset, size as u32, &digest)?;
+
+ encountered_chunks
+ .lock()
+ .unwrap()
+ .mark_reusable(&info.digest, Some(digest));
+
+ (chunk, digest, size)
+ }
+ DecryptedIndexWriter::Dynamic(index) => {
+ if let Some((_touched, Some(decrypted_digest))) = encountered_chunks
+ .lock()
+ .unwrap()
+ .check_reusable(&info.digest)
+ {
+ // already got the decrypted digest and chunk has been written,
+ // no need to process again
+ let size = info.size();
+ let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+ let end_offset = start_offset + size;
+
+ index
+ .lock()
+ .unwrap()
+ .add_chunk(end_offset, decrypted_digest)?;
+
return Ok::<_, Error>(());
}
+
+ let chunk_data = chunk_reader.read_chunk(&info.digest).await?;
+ let (chunk, digest) =
+ DataChunkBuilder::new(&chunk_data).compress(true).build()?;
+
+ let size = chunk_data.len() as u64;
+ let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+ let end_offset = start_offset + size;
+
+ index.lock().unwrap().add_chunk(end_offset, &digest)?;
+
+ encountered_chunks
+ .lock()
+ .unwrap()
+ .mark_reusable(&info.digest, Some(digest));
+
+ (chunk, digest, size)
}
- // mark before actually downloading the chunk, so this happens only once
- guard.mark_reusable(&info.digest, None);
- guard.mark_touched(&info.digest, None);
- }
+ DecryptedIndexWriter::None => {
+ {
+ // limit guard scope
+ let mut guard = encountered_chunks.lock().unwrap();
+ if let Some((touched, _mapped)) = guard.check_reusable(&info.digest) {
+ if touched {
+ return Ok::<_, Error>(());
+ }
+ let chunk_exists = proxmox_async::runtime::block_in_place(|| {
+ target.cond_touch_chunk(&info.digest, false)
+ })?;
+ if chunk_exists {
+ guard.mark_touched(&info.digest, None);
+ //info!("chunk {} exists {}", pos, hex::encode(digest));
+ return Ok::<_, Error>(());
+ }
+ }
+ // mark before actually downloading the chunk, so this happens only once
+ guard.mark_reusable(&info.digest, None);
+ guard.mark_touched(&info.digest, None);
+ }
- //info!("sync {} chunk {}", pos, hex::encode(digest));
- let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+ let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+ (chunk, info.digest, info.size())
+ }
+ };
let raw_size = chunk.raw_size() as usize;
// decode, verify and write in a separate threads to maximize throughput
proxmox_async::runtime::block_in_place(|| {
- verify_and_write_channel.send((chunk, info.digest, info.size()))
+ verify_and_write_channel.send((chunk, digest, size))
})?;
bytes.fetch_add(raw_size, Ordering::SeqCst);
--
2.47.3
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH proxmox-backup 20/20] sync: pull: decrypt snapshots with matching encryption key fingerprint
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (18 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 19/20] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
@ 2026-04-01 7:55 ` Christian Ebner
2026-04-02 0:25 ` [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Thomas Lamprecht
` (3 subsequent siblings)
23 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-01 7:55 UTC (permalink / raw)
To: pbs-devel
Decrypt any backup snapshot during pull which was encrypted with a
matching encryption key. Keys are matched by comparing the fingerprint
stored in the source manifest with the fingerprint of the key
configured for the pull sync job.
If they match, pass the key's crypt config along to the index and chunk
readers and write the local files unencrypted instead of simply
downloading them. A new manifest file is written in place of the
original one and the files are registered accordingly.
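The fingerprint gate described above is a plain comparison; sketched standalone (the `[u8; 32]` arguments stand in for the PBS `Fingerprint`/`CryptConfig` types):

```rust
/// Decryption only kicks in when the manifest's key fingerprint matches the
/// key configured on the sync job; otherwise the snapshot is synced verbatim.
fn should_decrypt(manifest_fp: Option<&[u8; 32]>, job_key_fp: Option<&[u8; 32]>) -> bool {
    match (manifest_fp, job_key_fp) {
        (Some(source), Some(configured)) => source == configured,
        // unencrypted snapshot or no key configured: nothing to decrypt
        _ => false,
    }
}
```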
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 78 ++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 76 insertions(+), 2 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 05152d0dd..22b058056 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -3,6 +3,7 @@
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::io::{BufReader, Read, Seek, Write};
+use std::os::fd::AsRawFd;
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::time::SystemTime;
@@ -10,11 +11,14 @@ use std::time::SystemTime;
use anyhow::{bail, format_err, Context, Error};
use pbs_tools::crypt_config::CryptConfig;
use proxmox_human_byte::HumanByte;
+use serde_json::Value;
+use tokio::fs::OpenOptions;
+use tokio::io::AsyncWriteExt;
use tracing::{info, warn};
use pbs_api_types::{
print_store_and_ns, ArchiveType, Authid, BackupArchiveName, BackupDir, BackupGroup,
- BackupNamespace, GroupFilter, Operation, RateLimitConfig, Remote, SnapshotListItem,
+ BackupNamespace, CryptMode, GroupFilter, Operation, RateLimitConfig, Remote, SnapshotListItem,
VerifyState, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, MAX_NAMESPACE_DEPTH,
PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
};
@@ -397,6 +401,7 @@ async fn pull_single_archive<'a>(
encountered_chunks: Arc<Mutex<EncounteredChunks>>,
crypt_config: Option<Arc<CryptConfig>>,
backend: &DatastoreBackend,
+ new_manifest: Option<Arc<Mutex<BackupManifest>>>,
) -> Result<SyncStats, Error> {
let archive_name = &archive_info.filename;
let mut path = snapshot.full_path();
@@ -446,6 +451,17 @@ async fn pull_single_archive<'a>(
// Overwrite current tmp file so it will be persisted instead
std::fs::rename(&path, &tmp_path)?;
+
+ if let Some(new_manifest) = new_manifest {
+ let name = archive_name.as_str().try_into()?;
+ // size is indetical to original, encrypted index
+ new_manifest.lock().unwrap().add_file(
+ &name,
+ size,
+ csum,
+ CryptMode::None,
+ )?;
+ }
}
sync_stats.add(stats);
@@ -484,6 +500,17 @@ async fn pull_single_archive<'a>(
// Overwrite current tmp file so it will be persisted instead
std::fs::rename(&path, &tmp_path)?;
+
+ if let Some(new_manifest) = new_manifest {
+ let name = archive_name.as_str().try_into()?;
+ // size is indetical to original, encrypted index
+ new_manifest.lock().unwrap().add_file(
+ &name,
+ size,
+ csum,
+ CryptMode::None,
+ )?;
+ }
}
sync_stats.add(stats);
@@ -522,6 +549,14 @@ async fn pull_single_archive<'a>(
decrypted_tmpfile.rewind()?;
let (csum, size) = sha256(&mut decrypted_tmpfile)?;
+ if let Some(new_manifest) = new_manifest {
+ let mut new_manifest = new_manifest.lock().unwrap();
+ let name = archive_name.as_str().try_into()?;
+ new_manifest.add_file(&name, size, csum, CryptMode::None)?;
+ }
+
+ nix::unistd::fsync(decrypted_tmpfile.as_raw_fd())?;
+
std::fs::rename(&decrypted_tmp_path, &tmp_path)?;
Ok::<(), Error>(())
@@ -607,9 +642,11 @@ async fn pull_snapshot<'a>(
let _ = std::fs::remove_file(&tmp_manifest_name);
return Ok(sync_stats); // nothing changed
}
+ // redownload also in case of encrypted, even if key would match as cannot
+ // fully verify otherwise due to file checksum mismatches.
}
- let manifest_data = tmp_manifest_blob.raw_data().to_vec();
+ let mut manifest_data = tmp_manifest_blob.raw_data().to_vec();
let manifest = BackupManifest::try_from(tmp_manifest_blob)?;
if ignore_not_verified_or_encrypted(
@@ -629,6 +666,16 @@ async fn pull_snapshot<'a>(
}
let mut crypt_config = None;
+ let mut new_manifest = None;
+ if let Ok(Some(source_fingerprint)) = manifest.fingerprint() {
+ if let Some(config) = ¶ms.crypt_config {
+ if config.fingerprint() == *source_fingerprint.bytes() {
+ crypt_config = Some(Arc::clone(config));
+ new_manifest = Some(Arc::new(Mutex::new(BackupManifest::new(snapshot.into()))));
+ info!("Found matching key fingerprint {source_fingerprint}, decrypt on pull");
+ }
+ }
+ }
let backend = ¶ms.target.backend;
for item in manifest.files() {
@@ -678,11 +725,38 @@ async fn pull_snapshot<'a>(
encountered_chunks.clone(),
crypt_config.clone(),
backend,
+ new_manifest.clone(),
)
.await?;
sync_stats.add(stats);
}
+ if let Some(new_manifest) = new_manifest {
+ let mut new_manifest = Arc::try_unwrap(new_manifest)
+ .map_err(|_arc| {
+ format_err!("failed to take ownership of still referenced new manifest")
+ })?
+ .into_inner()
+ .unwrap();
+
+ // copy over notes ecc, but drop encryption key fingerprint
+ new_manifest.unprotected = manifest.unprotected.clone();
+ new_manifest.unprotected["key-fingerprint"] = Value::Null;
+
+ let manifest_string = new_manifest.to_string(None)?;
+ let manifest_blob = DataBlob::encode(manifest_string.as_bytes(), None, true)?;
+ // update contents to be uploaded to backend
+ manifest_data = manifest_blob.raw_data().to_vec();
+
+ let mut tmp_manifest_file = OpenOptions::new()
+ .write(true)
+ .truncate(true) // clear pre-existing manifest content
+ .open(&tmp_manifest_name)
+ .await?;
+ tmp_manifest_file.write_all(&manifest_data).await?;
+ tmp_manifest_file.flush().await?;
+ }
+
if let Err(err) = std::fs::rename(&tmp_manifest_name, &manifest_name) {
bail!("Atomic rename file {:?} failed - {}", manifest_name, err);
}
--
2.47.3
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows
2026-04-01 7:55 ` [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows Christian Ebner
@ 2026-04-01 23:09 ` Thomas Lamprecht
2026-04-03 8:35 ` Dominik Csapak
2026-04-01 23:10 ` Thomas Lamprecht
2026-04-03 12:16 ` Dominik Csapak
2 siblings, 1 reply; 41+ messages in thread
From: Thomas Lamprecht @ 2026-04-01 23:09 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
Am 01.04.26 um 09:55 schrieb Christian Ebner:
> diff --git a/www/config/EncryptionKeysView.js b/www/config/EncryptionKeysView.js
> + listeners: {
> + activate: 'reload',
> + itemdblclick: 'editEncryptionKeys',
This should be pbsEncryptionKeysEdit
> + },
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows
2026-04-01 7:55 ` [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows Christian Ebner
2026-04-01 23:09 ` Thomas Lamprecht
@ 2026-04-01 23:10 ` Thomas Lamprecht
2026-04-03 12:16 ` Dominik Csapak
2 siblings, 0 replies; 41+ messages in thread
From: Thomas Lamprecht @ 2026-04-01 23:10 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
Am 01.04.26 um 09:55 schrieb Christian Ebner:
> + {
> + xtype: 'component',
> + html:
> + gettext(
> + 'Keep your encryption key safe, but easily accessible for disaster recovery.',
> + ) +
> + '<br>' +
> + gettext('We recommend the following safe-keeping strategy:'),
> + },
> + {
> + xtyp: 'container',
s/xtyp/xtype/
> + layout: 'hbox',
> + items: [
> + {
> + xtype: 'component',
> + html: '1. ' + gettext('Save the key in your password manager.'),
> + flex: 1,
> + },
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox-backup 04/20] pbs-config: implement encryption key config handling
2026-04-01 7:55 ` [PATCH proxmox-backup 04/20] pbs-config: implement encryption key config handling Christian Ebner
@ 2026-04-01 23:27 ` Thomas Lamprecht
2026-04-02 7:09 ` Christian Ebner
0 siblings, 1 reply; 41+ messages in thread
From: Thomas Lamprecht @ 2026-04-01 23:27 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
Am 01.04.26 um 09:55 schrieb Christian Ebner:
> Implements the handling for encryption key configuration and files.
>
> Individual encryption keys with the secret key material are stored in
> individual files, while the config stores duplicate key info, so the
> actual key only needs to be loaded when accessed, not for listing.
>
> Key fingerprint is compared when loading the key in order to detect
> possible mismatches.
>
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
> pbs-config/Cargo.toml | 1 +
> pbs-config/src/encryption_keys.rs | 159 ++++++++++++++++++++++++++++++
> pbs-config/src/lib.rs | 1 +
> 3 files changed, 161 insertions(+)
> create mode 100644 pbs-config/src/encryption_keys.rs
>
> diff --git a/pbs-config/Cargo.toml b/pbs-config/Cargo.toml
> index eb81ce004..a27964cfd 100644
> --- a/pbs-config/Cargo.toml
> +++ b/pbs-config/Cargo.toml
> @@ -30,3 +30,4 @@ proxmox-uuid.workspace = true
>
> pbs-api-types.workspace = true
> pbs-buildcfg.workspace = true
> +pbs-key-config.workspace = true
> diff --git a/pbs-config/src/encryption_keys.rs b/pbs-config/src/encryption_keys.rs
> +/// Delete the encryption key from config.
> +///
> +/// Deletes the key from the config but keeps a backup of the key file.
The implementation just calls `std::fs::remove_file`, so is there really a
backup of the key kept?
> +pub fn delete_key(id: &str) -> Result<(), Error> {
> + let _lock = lock_config()?;
> + let (mut config, _digest) = config()?;
> +
> + // if the key with given id exists in config, try to remove also file on path.
> + if let Some((section_type, key)) = config.sections.get(id) {
> + if section_type == ENCRYPTION_KEYS_CFG_TYPE_ID {
> + let key = EncryptionKey::deserialize(key)
> + .map_err(|_err| format_err!("failed to parse pre-existing key"))?;
> +
> + if let Some(path) = &key.info.path {
> + std::fs::remove_file(path)?;
> + }
This ordering seems slightly risky: the key file is deleted *before* the
config is rewritten. If `replace_backup_config` fails after `remove_file`
succeeds, the key material is gone but the config still references it.
Would be probably safer to write the config first (removing the section),
then delete the file. Or was the idea here to avoid leaving a stale key
file on disk?
> + }
> +
> + config.sections.remove(id);
> +
> + let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
> + replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())
> + } else {
> + bail!("key {id} not found in config");
> + }
> +}
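The safer ordering suggested above - persist the updated config first, delete the key file second - can be sketched like this (the closure stands in for the actual config rewrite; this is an illustration of the ordering, not the PBS API):

```rust
use std::fs;
use std::path::Path;

/// Write the config (with the key section removed) before deleting the key
/// file. If the config write fails, the key material is still on disk; at
/// worst a failed removal leaves a stale key file rather than a config
/// entry pointing at lost key material.
fn delete_key_safely(
    write_config_without_key: impl Fn() -> std::io::Result<()>,
    key_path: &Path,
) -> std::io::Result<()> {
    write_config_without_key()?; // config no longer references the key
    fs::remove_file(key_path) // stale file on failure, never lost material
}
```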
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (19 preceding siblings ...)
2026-04-01 7:55 ` [PATCH proxmox-backup 20/20] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
@ 2026-04-02 0:25 ` Thomas Lamprecht
2026-04-02 7:37 ` Christian Ebner
2026-04-03 8:39 ` Dominik Csapak
` (2 subsequent siblings)
23 siblings, 1 reply; 41+ messages in thread
From: Thomas Lamprecht @ 2026-04-02 0:25 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
Am 01.04.26 um 09:55 schrieb Christian Ebner:
> This patch series implements support for encrypting backup snapshots
> when pushing from a source PBS instance to an untrusted remote target
> PBS instance. Further, it adds support to decrypt snapshots being
> encrypted on the remote source PBS when pulling the contents to the
> local target PBS instance.
Thanks, the overall direction makes sense. A few design-level concerns
below, but note that this series touches various places all over, and I
did not yet recheck them with a fresh head, so do not take them at face
value, rather as input/questions for your own surely much more mature
mental model.
Pull resync after decryption is efficiency-wise not ideal. The local
decrypted manifest will never byte-match the remote encrypted one, so the
`raw_data()` fast path in pull_snapshot can never fire for already-decrypted
snapshots. The per-file verification then also fails (different checksums),
causing a full re-download. The `last_sync_time` filtering limits this to
the latest snapshot per group, so it's not that bad, but probably still
worth improving - especially for large snapshots that rarely change.
On the push side, `upload_blob` is a raw byte passthrough, so an unencrypted
source backup will then e.g. have `.conf.blob` or `.pxar.catalog.blob` get
stored unencrypted on the untrusted remote. Might be nicer to also encrypt
those here from security and an integrity standpoint?
Also, `upload_blob_from_file` for the client log goes through
`upload_blob_from_data` which calls `DataBlob::encode()` on already-encoded
on-disk bytes, so you'd end up with a double-wrapped blob on the remote.
Minor: `check_privs_and_load_key_config` uses `PRIV_SYS_AUDIT` to gate key
access. It's not a real security issue since creating/running sync jobs
already requires much heavier privileges (Datastore.Backup, Remote.Read,
etc.), but Audit is semantically a read-only privilege. Might be worth using
something like Sys.Modify instead to better express the intent. We could
also create a few new privs for this, if it helps.
When a server-side key is configured on push, all snapshots with any
encrypted content are silently skipped. That includes fully client-side
encrypted ones that could just be synced as-is - they are already encrypted
after all. Might be better to push those unchanged and only skip genuinely
partially-encrypted snapshots? Or if unsure, allow to user to chose what
they want. Wrapping encryption is probably never a good idea though,
maybe just any parts that are not yet encrypted?
Nit: The same `encryption_key` field means "encrypt" for push and
"decrypt" for pull. Maybe be literal here and name it decryption_key for
the latter. Or something generic that works for both.
A few key lifecycle things:
Deleting a key does not check whether sync jobs reference it; the next run
just fails. Might want to refuse deletion while references exist, or at
least warn.
Password-protected keys can be uploaded but in the method
`check_privs_and_load_key_config` you always pass an empty passphrase
to its `key_config.decrypt()` call, failing with "Passphrase is too short!"
in such a case.
Verify state gets copied from the encrypted source manifest into the
decrypted target. That verification was against encrypted data though, so
might want to clear it on decryption. That said, the decryption itself
basically acts as a verification, so it's not really problematic.
Some other things:
`load_previous_snapshot_known_chunks` always calls
`download_previous_fixed_index` even for DynamicIndex archives, breaking
cross-snapshot chunk deduplication for .didx archives (the error is
silently swallowed via `let _res`) AFAICT.
The `.dectmp` temp file from blob decryption on pull is not cleaned up on
error, and `create_new(true)` means a retry fails then too.
`log::info!` vs `tracing::info!` in check_privs_and_load_key_config;
"indetical" typo (x2) in the index rewrite comments.
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox-backup 04/20] pbs-config: implement encryption key config handling
2026-04-01 23:27 ` Thomas Lamprecht
@ 2026-04-02 7:09 ` Christian Ebner
0 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-02 7:09 UTC (permalink / raw)
To: Thomas Lamprecht, pbs-devel
On 4/2/26 1:26 AM, Thomas Lamprecht wrote:
> Am 01.04.26 um 09:55 schrieb Christian Ebner:
>> Implements the handling for encryption key configuration and files.
>>
>> Individual encryption keys with the secret key material are stored in
>> individual files, while the config stores duplicate key info, so the
>> actual key only needs to be loaded when accessed, not for listing.
>>
>> Key fingerprint is compared when loading the key in order to detect
>> possible mismatches.
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> pbs-config/Cargo.toml | 1 +
>> pbs-config/src/encryption_keys.rs | 159 ++++++++++++++++++++++++++++++
>> pbs-config/src/lib.rs | 1 +
>> 3 files changed, 161 insertions(+)
>> create mode 100644 pbs-config/src/encryption_keys.rs
>>
>> diff --git a/pbs-config/Cargo.toml b/pbs-config/Cargo.toml
>> index eb81ce004..a27964cfd 100644
>> --- a/pbs-config/Cargo.toml
>> +++ b/pbs-config/Cargo.toml
>> @@ -30,3 +30,4 @@ proxmox-uuid.workspace = true
>>
>> pbs-api-types.workspace = true
>> pbs-buildcfg.workspace = true
>> +pbs-key-config.workspace = true
>> diff --git a/pbs-config/src/encryption_keys.rs b/pbs-config/src/encryption_keys.rs
>
>> +/// Delete the encryption key from config.
>> +///
>> +/// Deletes the key from the config but keeps a backup of the key file.
>
> The implementation just calls `std::fs::remove_file`, so is there really a
> backup of the key kept?
Ah, thanks for noticing! This is indeed outdated. An initial draft
version kept a copy of the keyfile around; this was however removed at a
later stage after some off-list discussion with Fabian.
>
>> +pub fn delete_key(id: &str) -> Result<(), Error> {
>> + let _lock = lock_config()?;
>> + let (mut config, _digest) = config()?;
>> +
>> + // if the key with given id exists in config, try to remove also file on path.
>> + if let Some((section_type, key)) = config.sections.get(id) {
>> + if section_type == ENCRYPTION_KEYS_CFG_TYPE_ID {
>> + let key = EncryptionKey::deserialize(key)
>> + .map_err(|_err| format_err!("failed to parse pre-existing key"))?;
>> +
>> + if let Some(path) = &key.info.path {
>> + std::fs::remove_file(path)?;
>> + }
>
> This ordering seems slightly risky: the key file is deleted *before* the
> config is rewritten. If `replace_backup_config` fails after `remove_file`
> succeeds, the key material is gone but the config still references it.
>
> Would be probably safer to write the config first (removing the section),
> then delete the file. Or was the idea here to avoid leaving a stale key
> file on disk?
No, this was again due to the changes mentioned above; will adapt. I
also see there is still a check missing for whether the key is in use by
any sync job (which has to be performed in the api handler
implementation though, not here).
>
>> + }
>> +
>> + config.sections.remove(id);
>> +
>> + let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
>> + replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())
>> + } else {
>> + bail!("key {id} not found in config");
>> + }
>> +}
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-02 0:25 ` [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Thomas Lamprecht
@ 2026-04-02 7:37 ` Christian Ebner
2026-04-08 7:50 ` Fabian Grünbichler
0 siblings, 1 reply; 41+ messages in thread
From: Christian Ebner @ 2026-04-02 7:37 UTC (permalink / raw)
To: Thomas Lamprecht, pbs-devel
On 4/2/26 2:24 AM, Thomas Lamprecht wrote:
> Am 01.04.26 um 09:55 schrieb Christian Ebner:
>> This patch series implements support for encrypting backup snapshots
>> when pushing from a source PBS instance to an untrusted remote target
>> PBS instance. Further, it adds support to decrypt snapshots being
>> encrypted on the remote source PBS when pulling the contents to the
>> local target PBS instance.
>
> Thanks, the overall direction makes sense, a few design-level concerns
> below, but note that this is includes various stuff all over the place, I
> did not yet rechecked them with a fresh head, so do not take them for face
> value, rather input/questions to your own surely much more mature mental
> model.
>
> Pull resync after decryption is efficiency-wise not ideal. The local
> decrypted manifest will never byte-match the remote encrypted one, so the
> `raw_data()` fast path in pull_snapshot can never fire for already-decrypted
> snapshots. The per-file verification then also fails (different checksums),
> causing a full re-download. The `last_sync_time` filtering limits this to
> the latest snapshot per group, so it's not that bad, but probably still
> worth improving - especially for large snapshots that rarely change.
Yes, you are right, this still needs optimization and I'm aware of it.
What could work is to store the previous manifest checksum for
comparison. I didn't want to spend too much time on optimization just
yet; this is still something I wanted to explore further if the overall
approach turns out to be fine.
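One possible shape for that optimization, as a rough sketch: persist the checksum of the encrypted remote manifest alongside the locally decrypted snapshot, so a later pull can detect "nothing changed upstream" without byte-comparing manifests. The `SyncState` type and its field below are illustrative only, not part of the current series:

```rust
// Hypothetical per-snapshot sync state, stored next to the decrypted
// snapshot on the pull target. The field name is a placeholder.
#[derive(Debug, Clone, PartialEq)]
struct SyncState {
    /// checksum of the remote (still encrypted) manifest seen at last pull
    remote_manifest_checksum: Option<String>,
}

fn needs_resync(local: &SyncState, remote_manifest_checksum: &str) -> bool {
    match &local.remote_manifest_checksum {
        // fast path: the remote manifest did not change since the last pull
        Some(stored) => stored.as_str() != remote_manifest_checksum,
        // no recorded state -> must re-download to be safe
        None => true,
    }
}
```

This would restore a cheap fast path even though the local, decrypted manifest can never byte-match the remote encrypted one.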
> On the push side, `upload_blob` is a raw byte passthrough, so an unencrypted
> source backup will then e.g. have `.conf.blob` or `.pxar.catalog.blob` get
> stored unencrypted on the untrusted remote. Might be nicer to also encrypt
> those here, from a security and integrity standpoint?
That is not intentional and will be fixed, thanks!
> Also, `upload_blob_from_file` for the client log goes through
> `upload_blob_from_data` which calls `DataBlob::encode()` on already-encoded
> on-disk bytes, so you'd end up with a double-wrapped blob on the remote.
Will fix this as well!
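One way to avoid the double wrapping would be to inspect the leading bytes of the on-disk data before deciding whether to encode it again. A minimal sketch with a magic-number check; the constants here are placeholders, not the real `DataBlob` magics:

```rust
// Sketch: decide whether on-disk bytes already carry a blob header before
// wrapping them in another blob layer. The magic values are placeholders;
// the real DataBlob format defines its own magic constants.
const BLOB_MAGIC_LEN: usize = 8;

fn is_encoded_blob(data: &[u8], known_magics: &[[u8; BLOB_MAGIC_LEN]]) -> bool {
    data.len() >= BLOB_MAGIC_LEN
        && known_magics
            .iter()
            .any(|magic| data[..BLOB_MAGIC_LEN] == magic[..])
}
```

The upload path could then pass already-encoded bytes through unchanged and only encode raw payloads.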
> Minor: `check_privs_and_load_key_config` uses `PRIV_SYS_AUDIT` to gate key
> access. It's not a real security issue since creating/running sync jobs
> already requires much heavier privileges (Datastore.Backup, Remote.Read,
> etc.), but Audit is semantically a read-only privilege. Might be worth using
> something like Sys.Modify instead to better express the intent. We could
> also create a few new privs for this, if it helps.
Okay, will escalate to Sys.Modify on the key ACL, also for loading the key.
> When a server-side key is configured on push, all snapshots with any
> encrypted content are silently skipped. That includes fully client-side
> encrypted ones that could just be synced as-is - they are already encrypted
> after all. Might be better to push those unchanged and only skip genuinely
> partially-encrypted snapshots? Or if unsure, allow the user to choose what
> they want. Wrapping encryption is probably never a good idea though,
> maybe just encrypt any parts that are not yet encrypted?
Pushing fully encrypted ones as-is makes sense; only log and skip
partially encrypted snapshots.
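The skip decision above boils down to classifying a snapshot by the crypt modes of its manifest entries. A minimal sketch; the `CryptMode` variants mirror the existing pbs-api-types enum, while `SnapshotCrypt` is a hypothetical helper:

```rust
// Mirrors the pbs-api-types CryptMode variants for illustration.
#[derive(Debug, Clone, Copy, PartialEq)]
enum CryptMode {
    None,
    Encrypt,
    SignOnly,
}

// Hypothetical classification of a snapshot for the push decision.
#[derive(Debug, PartialEq)]
enum SnapshotCrypt {
    /// no encrypted entries: encrypt everything with the job key
    Plain,
    /// all entries already encrypted: push unchanged
    FullyEncrypted,
    /// partially encrypted: log and skip
    Mixed,
}

fn classify(modes: &[CryptMode]) -> SnapshotCrypt {
    let encrypted = modes.iter().filter(|m| **m == CryptMode::Encrypt).count();
    match encrypted {
        0 => SnapshotCrypt::Plain,
        n if n == modes.len() => SnapshotCrypt::FullyEncrypted,
        _ => SnapshotCrypt::Mixed,
    }
}
```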
> Nit: The same `encryption_key` field means "encrypt" for push and
> "decrypt" for pull. Maybe be literal here and name it decryption_key for
> the later. Or something generic that works for both.
Yes, that makes sense... I was primed by the fact that the entities
themselves are referred to as encryption keys, but renaming this to
decryption_key for the pull direction makes the code more readable.
Will adapt that.
> A few key lifecycle things:
>
> Deleting a key does not check whether sync jobs reference it; the next run
> just fails. Might want to refuse deletion while references exist, or at
> least warn.
Yes, I also realized this just now; will add a check for this in the API
handler and not allow removing the key in that case.
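Such an API-handler check might, in rough outline, scan the sync job configuration for references before allowing removal. The types and field names below are simplified stand-ins for the real config structures:

```rust
// Simplified stand-in for the sync job config entries.
struct SyncJobConfig {
    id: String,
    encryption_key: Option<String>,
}

// Refuse key deletion while any sync job still references the key.
fn check_key_unused(key_id: &str, jobs: &[SyncJobConfig]) -> Result<(), String> {
    let users: Vec<&str> = jobs
        .iter()
        .filter(|job| job.encryption_key.as_deref() == Some(key_id))
        .map(|job| job.id.as_str())
        .collect();

    if users.is_empty() {
        Ok(())
    } else {
        Err(format!(
            "key '{key_id}' still referenced by sync job(s): {}",
            users.join(", ")
        ))
    }
}
```

Including the referencing job IDs in the error message makes it easy for the user to see what to detach first.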
> Password-protected keys can be uploaded but in the method
> `check_privs_and_load_key_config` you always pass an empty passphrase
> to its `key_config.decrypt()` call, failing with "Passphrase is too short!"
> in such a case.
Good catch, password-protected keys should not be uploadable for the
time being. It was discussed off-list that we might extend this to allow
for password-protected keys in the future. An initial draft version
simply stored the key in the config as well, but since that does not add
anything security-wise and there are ideas to improve secret handling in
general, supporting encrypted keys was dropped for now.
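Rejecting password-protected keys at upload time could be as simple as checking for a KDF marker in the parsed key before storing it. Sketched here with a stub type; `KeyConfigStub` stands in for the real `KeyConfig`, which carries an optional KDF entry for protected keys:

```rust
// Stand-in for the parsed key config; only the KDF marker matters here.
struct KeyConfigStub {
    kdf: Option<String>,
}

// Fail early at upload instead of later in the sync job with
// "Passphrase is too short!".
fn reject_protected(key: &KeyConfigStub) -> Result<(), String> {
    if key.kdf.is_some() {
        return Err(
            "password-protected keys are not supported for sync job encryption".to_string(),
        );
    }
    Ok(())
}
```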
> Verify state gets copied from the encrypted source manifest into the
> decrypted target. That verification was against encrypted data though, so
> might want to clear it on decryption. Albeit, the decryption itself
> basically was a verification, so not really problematic.
Will strip this when decrypting, thanks!
> Some other things:
>
> `load_previous_snapshot_known_chunks` always calls
> `download_previous_fixed_index` even for DynamicIndex archives, breaking
> cross-snapshot chunk deduplication for .didx (the error is silently
> swallowed via `let _res`) AFAICT.
>
> The `.dectmp` temp file from blob decryption on pull is not cleaned up on
> error, and `create_new(true)` means a retry fails then too.
>
> `log::info!` vs `tracing::info!` in check_privs_and_load_key_config;
> "indetical" typo (x2) in the index rewrite comments.
Will look into these as well, thanks a lot for comments and review!
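One way to address the `.dectmp` issue is an RAII guard that removes the temp file on any early return, and clears stale leftovers before `create_new(true)` so retries succeed. A rough, self-contained sketch (names are illustrative):

```rust
use std::fs::{self, File, OpenOptions};
use std::path::{Path, PathBuf};

/// RAII guard for a temporary decryption output file: the file is removed
/// on drop unless explicitly persisted, so an error path never leaves a
/// stale `.dectmp` behind that would make create_new(true) fail on retry.
struct TmpFileGuard {
    path: PathBuf,
    keep: bool,
}

impl TmpFileGuard {
    fn create(path: &Path) -> std::io::Result<(File, Self)> {
        // clean up leftovers from a previously interrupted run, so the
        // create_new(true) below cannot fail on a retry
        let _ = fs::remove_file(path);
        let file = OpenOptions::new().write(true).create_new(true).open(path)?;
        Ok((file, TmpFileGuard { path: path.to_owned(), keep: false }))
    }

    /// mark the file as final (e.g. after a successful flush/rename)
    fn persist(mut self) {
        self.keep = true;
    }
}

impl Drop for TmpFileGuard {
    fn drop(&mut self) {
        if !self.keep {
            let _ = fs::remove_file(&self.path);
        }
    }
}
```

Dropping the guard on any error path (including `?` early returns) cleans the file up automatically.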
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows
2026-04-01 23:09 ` Thomas Lamprecht
@ 2026-04-03 8:35 ` Dominik Csapak
0 siblings, 0 replies; 41+ messages in thread
From: Dominik Csapak @ 2026-04-03 8:35 UTC (permalink / raw)
To: Thomas Lamprecht, Christian Ebner, pbs-devel
On 4/2/26 1:08 AM, Thomas Lamprecht wrote:
> Am 01.04.26 um 09:55 schrieb Christian Ebner:
>> diff --git a/www/config/EncryptionKeysView.js b/www/config/EncryptionKeysView.js
>
>> + listeners: {
>> + activate: 'reload',
>> + itemdblclick: 'editEncryptionKeys',
>
> This should be pbsEncryptionKeysEdit
>
>> + },
>
>
>
>
Actually I think the name is already correct, but it is simply not
implemented; there'd have to be some method 'editEncryptionKeys' that
spawns an edit window. But looking at the code there, it seems it's not
intended for editing the keys (what should be edited, exactly?), so IMHO
this line should just be removed.
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (20 preceding siblings ...)
2026-04-02 0:25 ` [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Thomas Lamprecht
@ 2026-04-03 8:39 ` Dominik Csapak
2026-04-03 8:50 ` Christian Ebner
2026-04-07 15:12 ` Manuel Federanko
2026-04-08 7:29 ` David Riley
23 siblings, 1 reply; 41+ messages in thread
From: Dominik Csapak @ 2026-04-03 8:39 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
High-level question before I dive deeper into the UI part:
do the encryption keys have any overlap with tape encryption keys?
We already have a UI for those. Maybe we could unify that?
Personally it's not a blocker for me, but we should think
about merging them in the long run. I think it would be better
to only have one place to create/manage encryption keys (per product).
If it's technically not compatible/viable to merge these in the backend,
we could still write the GUI in a way to only have one place but
manage two types of keys (with different API paths, etc.).
Sorry if that is already answered in one of your patches, but I
just saw the encryption key GUI before going in and wanted to write
that out.
On 4/1/26 9:55 AM, Christian Ebner wrote:
> This patch series implements support for encrypting backup snapshots
> when pushing from a source PBS instance to an untrusted remote target
> PBS instance. Further, it adds support to decrypt snapshots being
> encrypted on the remote source PBS when pulling the contents to the
> local target PBS instance. This allows to perform full server side
> encryption/decryption when syncing with a less trusted remote PBS.
>
> In order to encrypt/decrypt snapshots, a new encryption key entity
> is introduced, to be created as global instance on the PBS, placed and
> managed by it's own dedicated config. Keys with secret are stored
> in dedicated files so they only need to be loaded when accessing the
> key, not for listing of configuration.
>
> The sync jobs in push and pull direction are extended to receive an
> additional encryption key parameter, allowing the given key to be
> used for encryption/decription of snapshots, depending on the sync
> direction. In order to encrypt/decrypt the contents, chunks, index
> files, blobs and manifest are additionally processed, rewritten when
> required.
>
> Link to the bugtracker issue:
> https://bugzilla.proxmox.com/show_bug.cgi?id=7251
>
>
> proxmox:
>
> Christian Ebner (2):
> pbs-api-types: define encryption key type and schema
> pbs-api-types: sync job: add optional encryption key to config
>
> pbs-api-types/src/jobs.rs | 11 ++++++++--
> pbs-api-types/src/key_derivation.rs | 34 ++++++++++++++++++++++++++---
> pbs-api-types/src/lib.rs | 2 +-
> 3 files changed, 41 insertions(+), 6 deletions(-)
>
>
> proxmox-backup:
>
> Christian Ebner (18):
> pbs-key-config: introduce store_with() for KeyConfig
> pbs-config: implement encryption key config handling
> pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
> ui: expose 'encryption-keys' as acl subpath for 'system'
> api: config: add endpoints for encryption key manipulation
> api: config: allow encryption key manipulation for sync job
> sync: push: rewrite manifest instead of pushing pre-existing one
> sync: add helper to check encryption key acls and load key
> fix #7251: api: push: encrypt snapshots using configured encryption
> key
> ui: define and expose encryption key management menu item and windows
> ui: expose assigning encryption key to sync jobs
> sync: pull: load encryption key if given in job config
> sync: expand source chunk reader trait by crypt config
> sync: pull: introduce and use decrypt index writer if crypt config
> sync: pull: extend encountered chunk by optional decrypted digest
> sync: pull: decrypt blob files on pull if encryption key is configured
> sync: pull: decrypt chunks and rewrite index file for matching key
> sync: pull: decrypt snapshots with matching encryption key fingerprint
>
> pbs-config/Cargo.toml | 1 +
> pbs-config/src/acl.rs | 4 +-
> pbs-config/src/encryption_keys.rs | 159 +++++++++++
> pbs-config/src/lib.rs | 1 +
> pbs-key-config/src/lib.rs | 36 ++-
> src/api2/config/encryption_keys.rs | 115 ++++++++
> src/api2/config/mod.rs | 2 +
> src/api2/config/sync.rs | 10 +
> src/api2/pull.rs | 15 +-
> src/api2/push.rs | 14 +-
> src/server/pull.rs | 416 ++++++++++++++++++++++++-----
> src/server/push.rs | 222 +++++++++++----
> src/server/sync.rs | 57 +++-
> www/Makefile | 3 +
> www/NavigationTree.js | 6 +
> www/Utils.js | 1 +
> www/config/EncryptionKeysView.js | 143 ++++++++++
> www/form/EncryptionKeySelector.js | 59 ++++
> www/form/PermissionPathSelector.js | 1 +
> www/window/EncryptionKeysEdit.js | 382 ++++++++++++++++++++++++++
> www/window/SyncJobEdit.js | 11 +
> 21 files changed, 1512 insertions(+), 146 deletions(-)
> create mode 100644 pbs-config/src/encryption_keys.rs
> create mode 100644 src/api2/config/encryption_keys.rs
> create mode 100644 www/config/EncryptionKeysView.js
> create mode 100644 www/form/EncryptionKeySelector.js
> create mode 100644 www/window/EncryptionKeysEdit.js
>
>
> Summary over all repositories:
> 24 files changed, 1553 insertions(+), 152 deletions(-)
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-03 8:39 ` Dominik Csapak
@ 2026-04-03 8:50 ` Christian Ebner
2026-04-03 9:00 ` Dominik Csapak
0 siblings, 1 reply; 41+ messages in thread
From: Christian Ebner @ 2026-04-03 8:50 UTC (permalink / raw)
To: Dominik Csapak, pbs-devel
On 4/3/26 10:38 AM, Dominik Csapak wrote:
> high level question before i dive deeper into the ui part:
>
> do the encryption keys have any overlap with tape encryption keys?
>
> we already have a ui for that there. maybe we could unify that?
Not really matching the encryption key handling for the tape, no. We did
not want to allow for password-protected encryption keys (discussed
off-list with Fabian) and therefore I opted to follow more along the
lines of what we do for PVE, at least with respect to the UI.
> Personally it's not a blocker for me, but we should think
> of merging them in the long run. I think it would be better
> to only have one place to create/manage encryption keys (per product)
The encryption keys for tapes are already strongly intertwined with the
code there; I did not feel confident moving the keys and configs there.
Rather, I would have this as a possible breaking change for PBS 5.0,
where we could merge the configs. Also, it was discussed to have some
better mechanism for secret handling in the future (e.g. for the
password-protected keys).
> If it's technically not compatible/viable to merge these in the backend,
> we could still write the gui in a way to only have one place but
> manage two types of keys (with different api paths, etc.)
The key handling there is slightly different though, so I'm not sure if
this will be compatible.
> sorry if that is already answered in one of your patches,but
> just saw the encryption key gui before going in and wanted to write
> that out.
>
> On 4/1/26 9:55 AM, Christian Ebner wrote:
>> This patch series implements support for encrypting backup snapshots
>> when pushing from a source PBS instance to an untrusted remote target
>> PBS instance. Further, it adds support to decrypt snapshots being
>> encrypted on the remote source PBS when pulling the contents to the
>> local target PBS instance. This allows to perform full server side
>> encryption/decryption when syncing with a less trusted remote PBS.
>>
>> In order to encrypt/decrypt snapshots, a new encryption key entity
>> is introduced, to be created as global instance on the PBS, placed and
>> managed by it's own dedicated config. Keys with secret are stored
>> in dedicated files so they only need to be loaded when accessing the
>> key, not for listing of configuration.
>>
>> The sync jobs in push and pull direction are extended to receive an
>> additional encryption key parameter, allowing the given key to be
>> used for encryption/decription of snapshots, depending on the sync
>> direction. In order to encrypt/decrypt the contents, chunks, index
>> files, blobs and manifest are additionally processed, rewritten when
>> required.
>>
>> Link to the bugtracker issue:
>> https://bugzilla.proxmox.com/show_bug.cgi?id=7251
>>
>>
>> proxmox:
>>
>> Christian Ebner (2):
>> pbs-api-types: define encryption key type and schema
>> pbs-api-types: sync job: add optional encryption key to config
>>
>> pbs-api-types/src/jobs.rs | 11 ++++++++--
>> pbs-api-types/src/key_derivation.rs | 34 ++++++++++++++++++++++++++---
>> pbs-api-types/src/lib.rs | 2 +-
>> 3 files changed, 41 insertions(+), 6 deletions(-)
>>
>>
>> proxmox-backup:
>>
>> Christian Ebner (18):
>> pbs-key-config: introduce store_with() for KeyConfig
>> pbs-config: implement encryption key config handling
>> pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
>> ui: expose 'encryption-keys' as acl subpath for 'system'
>> api: config: add endpoints for encryption key manipulation
>> api: config: allow encryption key manipulation for sync job
>> sync: push: rewrite manifest instead of pushing pre-existing one
>> sync: add helper to check encryption key acls and load key
>> fix #7251: api: push: encrypt snapshots using configured encryption
>> key
>> ui: define and expose encryption key management menu item and windows
>> ui: expose assigning encryption key to sync jobs
>> sync: pull: load encryption key if given in job config
>> sync: expand source chunk reader trait by crypt config
>> sync: pull: introduce and use decrypt index writer if crypt config
>> sync: pull: extend encountered chunk by optional decrypted digest
>> sync: pull: decrypt blob files on pull if encryption key is configured
>> sync: pull: decrypt chunks and rewrite index file for matching key
>> sync: pull: decrypt snapshots with matching encryption key fingerprint
>>
>> pbs-config/Cargo.toml | 1 +
>> pbs-config/src/acl.rs | 4 +-
>> pbs-config/src/encryption_keys.rs | 159 +++++++++++
>> pbs-config/src/lib.rs | 1 +
>> pbs-key-config/src/lib.rs | 36 ++-
>> src/api2/config/encryption_keys.rs | 115 ++++++++
>> src/api2/config/mod.rs | 2 +
>> src/api2/config/sync.rs | 10 +
>> src/api2/pull.rs | 15 +-
>> src/api2/push.rs | 14 +-
>> src/server/pull.rs | 416 ++++++++++++++++++++++++-----
>> src/server/push.rs | 222 +++++++++++----
>> src/server/sync.rs | 57 +++-
>> www/Makefile | 3 +
>> www/NavigationTree.js | 6 +
>> www/Utils.js | 1 +
>> www/config/EncryptionKeysView.js | 143 ++++++++++
>> www/form/EncryptionKeySelector.js | 59 ++++
>> www/form/PermissionPathSelector.js | 1 +
>> www/window/EncryptionKeysEdit.js | 382 ++++++++++++++++++++++++++
>> www/window/SyncJobEdit.js | 11 +
>> 21 files changed, 1512 insertions(+), 146 deletions(-)
>> create mode 100644 pbs-config/src/encryption_keys.rs
>> create mode 100644 src/api2/config/encryption_keys.rs
>> create mode 100644 www/config/EncryptionKeysView.js
>> create mode 100644 www/form/EncryptionKeySelector.js
>> create mode 100644 www/window/EncryptionKeysEdit.js
>>
>>
>> Summary over all repositories:
>> 24 files changed, 1553 insertions(+), 152 deletions(-)
>>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-03 8:50 ` Christian Ebner
@ 2026-04-03 9:00 ` Dominik Csapak
0 siblings, 0 replies; 41+ messages in thread
From: Dominik Csapak @ 2026-04-03 9:00 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
On 4/3/26 10:49 AM, Christian Ebner wrote:
> On 4/3/26 10:38 AM, Dominik Csapak wrote:
>> high level question before i dive deeper into the ui part:
>>
>> do the encryption keys have any overlap with tape encryption keys?
>>
>> we already have a ui for that there. maybe we could unify that?
>
> Not really matching the encryption key handling for the tape, no. We did
> not want to allow for password protected encryption keys (discussed of
> list with Fabian) and therefore I opted to follow more along the line of
> what we do for PVE, at least with respect to the UI.
>
>> Personally it's not a blocker for me, but we should think
>> of merging them in the long run. I think it would be better
>> to only have one place to create/manage encryption keys (per product)
>
> The encryption keys for tapes are already strongly intertwined with the
> code there, I did not feel confident to move keys and configs there.
> Rather I would have this as possible breaking change for PBS 5.0, where
> we could possibly merge config. Also, it was discussed to have some
> better mechanism for secret handling in the future (e.g. for the
> password protected keys).
>
>> If it's technically not compatible/viable to merge these in the backend,
>> we could still write the gui in a way to only have one place but
>> manage two types of keys (with different api paths, etc.)
>
> The key handling there is slightly different though, so not sure if this
> will be compatible.
>
I mean, simply from a GUI perspective we could still have a single panel
with two 'types' of encryption keys and use one or the other edit window.
The 'restore key' button would of course only work for tape keys,
but that shouldn't be a problem?
The lists are nearly identical (id/hint, fingerprint, created). It would
just be a matter of wiring the buttons/actions to the correct
edit components/API calls, I think.
>> sorry if that is already answered in one of your patches,but
>> just saw the encryption key gui before going in and wanted to write
>> that out.
>>
>> On 4/1/26 9:55 AM, Christian Ebner wrote:
>>> This patch series implements support for encrypting backup snapshots
>>> when pushing from a source PBS instance to an untrusted remote target
>>> PBS instance. Further, it adds support to decrypt snapshots being
>>> encrypted on the remote source PBS when pulling the contents to the
>>> local target PBS instance. This allows to perform full server side
>>> encryption/decryption when syncing with a less trusted remote PBS.
>>>
>>> In order to encrypt/decrypt snapshots, a new encryption key entity
>>> is introduced, to be created as global instance on the PBS, placed and
>>> managed by it's own dedicated config. Keys with secret are stored
>>> in dedicated files so they only need to be loaded when accessing the
>>> key, not for listing of configuration.
>>>
>>> The sync jobs in push and pull direction are extended to receive an
>>> additional encryption key parameter, allowing the given key to be
>>> used for encryption/decription of snapshots, depending on the sync
>>> direction. In order to encrypt/decrypt the contents, chunks, index
>>> files, blobs and manifest are additionally processed, rewritten when
>>> required.
>>>
>>> Link to the bugtracker issue:
>>> https://bugzilla.proxmox.com/show_bug.cgi?id=7251
>>>
>>>
>>> proxmox:
>>>
>>> Christian Ebner (2):
>>> pbs-api-types: define encryption key type and schema
>>> pbs-api-types: sync job: add optional encryption key to config
>>>
>>> pbs-api-types/src/jobs.rs | 11 ++++++++--
>>> pbs-api-types/src/key_derivation.rs | 34 ++++++++++++++++++++++++++---
>>> pbs-api-types/src/lib.rs | 2 +-
>>> 3 files changed, 41 insertions(+), 6 deletions(-)
>>>
>>>
>>> proxmox-backup:
>>>
>>> Christian Ebner (18):
>>> pbs-key-config: introduce store_with() for KeyConfig
>>> pbs-config: implement encryption key config handling
>>> pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
>>> ui: expose 'encryption-keys' as acl subpath for 'system'
>>> api: config: add endpoints for encryption key manipulation
>>> api: config: allow encryption key manipulation for sync job
>>> sync: push: rewrite manifest instead of pushing pre-existing one
>>> sync: add helper to check encryption key acls and load key
>>> fix #7251: api: push: encrypt snapshots using configured encryption
>>> key
>>> ui: define and expose encryption key management menu item and windows
>>> ui: expose assigning encryption key to sync jobs
>>> sync: pull: load encryption key if given in job config
>>> sync: expand source chunk reader trait by crypt config
>>> sync: pull: introduce and use decrypt index writer if crypt config
>>> sync: pull: extend encountered chunk by optional decrypted digest
>>> sync: pull: decrypt blob files on pull if encryption key is
>>> configured
>>> sync: pull: decrypt chunks and rewrite index file for matching key
>>> sync: pull: decrypt snapshots with matching encryption key
>>> fingerprint
>>>
>>> pbs-config/Cargo.toml | 1 +
>>> pbs-config/src/acl.rs | 4 +-
>>> pbs-config/src/encryption_keys.rs | 159 +++++++++++
>>> pbs-config/src/lib.rs | 1 +
>>> pbs-key-config/src/lib.rs | 36 ++-
>>> src/api2/config/encryption_keys.rs | 115 ++++++++
>>> src/api2/config/mod.rs | 2 +
>>> src/api2/config/sync.rs | 10 +
>>> src/api2/pull.rs | 15 +-
>>> src/api2/push.rs | 14 +-
>>> src/server/pull.rs | 416 ++++++++++++++++++++++++-----
>>> src/server/push.rs | 222 +++++++++++----
>>> src/server/sync.rs | 57 +++-
>>> www/Makefile | 3 +
>>> www/NavigationTree.js | 6 +
>>> www/Utils.js | 1 +
>>> www/config/EncryptionKeysView.js | 143 ++++++++++
>>> www/form/EncryptionKeySelector.js | 59 ++++
>>> www/form/PermissionPathSelector.js | 1 +
>>> www/window/EncryptionKeysEdit.js | 382 ++++++++++++++++++++++++++
>>> www/window/SyncJobEdit.js | 11 +
>>> 21 files changed, 1512 insertions(+), 146 deletions(-)
>>> create mode 100644 pbs-config/src/encryption_keys.rs
>>> create mode 100644 src/api2/config/encryption_keys.rs
>>> create mode 100644 www/config/EncryptionKeysView.js
>>> create mode 100644 www/form/EncryptionKeySelector.js
>>> create mode 100644 www/window/EncryptionKeysEdit.js
>>>
>>>
>>> Summary over all repositories:
>>> 24 files changed, 1553 insertions(+), 152 deletions(-)
>>>
>>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows
2026-04-01 7:55 ` [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows Christian Ebner
2026-04-01 23:09 ` Thomas Lamprecht
2026-04-01 23:10 ` Thomas Lamprecht
@ 2026-04-03 12:16 ` Dominik Csapak
2 siblings, 0 replies; 41+ messages in thread
From: Dominik Csapak @ 2026-04-03 12:16 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
Aside from what Thomas found, this is IMO mostly OK.
One comment inline.
On 4/1/26 9:55 AM, Christian Ebner wrote:
[snip]
> diff --git a/www/config/EncryptionKeysView.js b/www/config/EncryptionKeysView.js
> new file mode 100644
> index 000000000..965dec47c
> --- /dev/null
> +++ b/www/config/EncryptionKeysView.js
> @@ -0,0 +1,143 @@
> +Ext.define('pbs-encryption-keys', {
> + extend: 'Ext.data.Model',
> + fields: ['id', 'fingerprint', 'created'],
> + idProperty: 'id',
> + proxy: {
> + type: 'proxmox',
> + url: '/api2/json/config/encryption-keys',
> + },
> +});
> +
> +Ext.define('PBS.config.EncryptionKeysView', {
> + extend: 'Ext.grid.GridPanel',
> + alias: 'widget.pbsEncryptionKeysView',
> +
> + title: gettext('Encryption Keys'),
> +
> + stateful: true,
> + stateId: 'grid-encryption-keys',
> +
> + controller: {
> + xclass: 'Ext.app.ViewController',
> +
> + addEncryptionKey: function () {
> + let me = this;
> + Ext.create('PBS.window.EncryptionKeysEdit', {
> + listeners: {
> + destroy: function () {
> + me.reload();
> + },
> + },
> + }).show();
> + },
> +
> + removeEncryptionKey: function () {
> + let me = this;
> + let view = me.getView();
> + let selection = view.getSelection();
> +
> + if (!selection || selection.length < 1) {
> + return;
> + }
> +
> + let keyID = selection[0].data.id;
> +
> + Ext.create('Proxmox.window.SafeDestroy', {
> + url: `/api2/json/config/encryption-keys/${keyID}`,
we should probably use '/api2/extjs' here; at least in my setup this
leads to an 'unknown error' popup when deleting a key
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (21 preceding siblings ...)
2026-04-03 8:39 ` Dominik Csapak
@ 2026-04-07 15:12 ` Manuel Federanko
2026-04-07 16:17 ` Christian Ebner
2026-04-08 7:29 ` David Riley
23 siblings, 1 reply; 41+ messages in thread
From: Manuel Federanko @ 2026-04-07 15:12 UTC (permalink / raw)
To: pbs-devel
On 2026-04-01 9:55 AM, Christian Ebner wrote:
> This patch series implements support for encrypting backup snapshots
> when pushing from a source PBS instance to an untrusted remote target
> PBS instance. Further, it adds support to decrypt snapshots being
> encrypted on the remote source PBS when pulling the contents to the
> local target PBS instance. This allows to perform full server side
> encryption/decryption when syncing with a less trusted remote PBS.
>
> In order to encrypt/decrypt snapshots, a new encryption key entity
> is introduced, to be created as global instance on the PBS, placed and
> managed by it's own dedicated config. Keys with secret are stored
> in dedicated files so they only need to be loaded when accessing the
> key, not for listing of configuration.
>
> The sync jobs in push and pull direction are extended to receive an
> additional encryption key parameter, allowing the given key to be
> used for encryption/decription of snapshots, depending on the sync
> direction. In order to encrypt/decrypt the contents, chunks, index
> files, blobs and manifest are additionally processed, rewritten when
> required.
>
> Link to the bugtracker issue:
> https://bugzilla.proxmox.com/show_bug.cgi?id=7251
>
I just had a go at this; it looks good overall, but I hit a couple of
issues:
it's not possible to omit the encryption key during sync job
creation; the dialog fails with an error:
> "encryption-key: value must be at least 3 characters long"
it still works via the CLI
creating a sync job with an unknown encryption key silently succeeds,
leaving the key unset (via the CLI)
nit: the encryption key name length check is inconsistent; I can only
create keys where the name is 4 characters or longer, while the check
for a sync job is 3 characters
when removing an encryption key an "unknown error" is displayed to
the user
adding a key which is password protected works; we could already check
against that, to prevent later failures in sync jobs
a pull sync which had a correct key set will decrypt a backup;
switching the key on that pull job (or starting another pull
job with a different key) will mark the backup as corrupt and
re-sync it, replacing the decrypted backup with an encrypted one:
> re-sync snapshot host/tali/2026-04-07T14:35:30Z
> detected changed file "/mnt/datastore/zfs0/host/tali/2026-04-07T14:35:30Z/config.pxar.didx" - wrong checksum for file 'config.pxar.didx'
I'm not sure we even want to allow that, since that setup is
inherently at odds with itself.
apologies if some of these issues are already touched upon by the
others.
Other than that it lgtm and works as expected.
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-07 15:12 ` Manuel Federanko
@ 2026-04-07 16:17 ` Christian Ebner
0 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-07 16:17 UTC (permalink / raw)
To: Manuel Federanko, pbs-devel
On 4/7/26 5:11 PM, Manuel Federanko wrote:
> On 2026-04-01 9:55 AM, Christian Ebner wrote:
>> This patch series implements support for encrypting backup snapshots
>> when pushing from a source PBS instance to an untrusted remote target
>> PBS instance. Further, it adds support to decrypt snapshots being
>> encrypted on the remote source PBS when pulling the contents to the
>> local target PBS instance. This allows to perform full server side
>> encryption/decryption when syncing with a less trusted remote PBS.
>>
>> In order to encrypt/decrypt snapshots, a new encryption key entity
>> is introduced, to be created as global instance on the PBS, placed and
>> managed by it's own dedicated config. Keys with secret are stored
>> in dedicated files so they only need to be loaded when accessing the
>> key, not for listing of configuration.
>>
>> The sync jobs in push and pull direction are extended to receive an
>> additional encryption key parameter, allowing the given key to be
>> used for encryption/decription of snapshots, depending on the sync
>> direction. In order to encrypt/decrypt the contents, chunks, index
>> files, blobs and manifest are additionally processed, rewritten when
>> required.
>>
>> Link to the bugtracker issue:
>> https://bugzilla.proxmox.com/show_bug.cgi?id=7251
>>
>
> I just had a go at this, looks good overall, a couple of issues:
Thanks for testing!
>
> it's not possible to not specify a encryption key during sync job
> creation, the dialog fails with an error:
>> "encryption-key: value must be at least 3 characters long"
> it still works via the cli
>
> creating a sync job with a unknown encryption key silently succeeds,
> leaving the key unset (via the cli)
Will double check these and the length checks below, thanks for reporting!
>
> nit: the encryption key name length check is inconsistent, I can only
> create keys where the name is 4 characters or longer, the check
> for a sync job is 3 characters
>
> when removing a encryption key a "unknown error" is displayed to
> the user
Thanks, this has been reported by Dominik and is fixed already for the
upcoming version of the patches.
>
> adding a key which is password protected works, which we could
> already check against, to prevent later failures in sync jobs.
Yes, this was also reported by Thomas; it will be checked against and
will not be possible in v2.
>
> a pull sync which had a correct key set will decrypt a backup
> switching a key from that pull job (or starting another pull
> job with a different key) will mark the backup as corrupt and
> re-sync it, replacing the decrypted backup with a encrypted one.
>> re-sync snapshot host/tali/2026-04-07T14:35:30Z
>> detected changed file "/mnt/datastore/zfs0/host/tali/2026-04-07T14:35:30Z/config.pxar.didx" - wrong checksum for file 'config.pxar.didx'
> I'm not sure we even want to allow that since that setup is
> inherently at odds.
Thanks for reporting, decryption during pull is the intended behavior;
it will be made clearer by switching the labels (and adding the still
missing docs). It should however not lead to a re-sync as a corrupted
backup, that is indeed unwanted. Will have a closer look, thanks!
>
>
> apologies if some of these issues are already touched upon by the
> others.
>
> Other than that it lgtm and works as expected.
>
>
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (22 preceding siblings ...)
2026-04-07 15:12 ` Manuel Federanko
@ 2026-04-08 7:29 ` David Riley
2026-04-08 15:11 ` Christian Ebner
23 siblings, 1 reply; 41+ messages in thread
From: David Riley @ 2026-04-08 7:29 UTC (permalink / raw)
To: Christian Ebner, pbs-devel
Thanks for your work on this.
I tested your patches for the proxmox and proxmox-backup
repositories and encountered three issues:
1. Push/Pull Job: Form validation error
When the encryption key selection is left untouched or disabled,
the form fails validation.
Reproduction Steps:
Use two PBS instances (PBS1 with patches, PBS2 without).
Create a datastore and an encryption key on PBS1.
Create a datastore on PBS2.
On PBS1, configure a Push Job for the datastore.
Leave the Encryption Key field disabled/untouched and submit the
form.
The following error appears:
parameter verification errors (400)
encryption-key: value must be at least 3 characters long
2. Deleting Encryption Key: "Unknown error" pop-up
Deleting an encryption key appears to succeed (the key is removed),
but the UI shows an error pop-up.
Error message: "Unknown error"
Observations: No errors are logged in the browser console or the
network tab.
3. Drag and Drop of Encryption Key File
Dragging a file from the file explorer into the "Upload from File"
form field results in a parsing error.
Reproduction Steps:
Create an encryption key and download it.
Add a new key.
Drag and drop the just-downloaded key file into the form field.
The field displays "[object ProgressEvent]" instead of the file
content/name. Hovering over it shows:
"Failed to parse key - SyntaxError: JSON parse unexpected character
at line 1 column 2 of the JSON data"
Aside from these three points, the rest of the functionality works as
expected.
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-02 7:37 ` Christian Ebner
@ 2026-04-08 7:50 ` Fabian Grünbichler
2026-04-08 8:13 ` Christian Ebner
0 siblings, 1 reply; 41+ messages in thread
From: Fabian Grünbichler @ 2026-04-08 7:50 UTC (permalink / raw)
To: Christian Ebner, pbs-devel, Thomas Lamprecht
On April 2, 2026 9:37 am, Christian Ebner wrote:
> On 4/2/26 2:24 AM, Thomas Lamprecht wrote:
>> Am 01.04.26 um 09:55 schrieb Christian Ebner:
>>> This patch series implements support for encrypting backup snapshots
>>> when pushing from a source PBS instance to an untrusted remote target
>>> PBS instance. Further, it adds support to decrypt snapshots being
>>> encrypted on the remote source PBS when pulling the contents to the
>>> local target PBS instance.
>>
[..]
>
>> When a server-side key is configured on push, all snapshots with any
>> encrypted content are silently skipped. That includes fully client-side
>> encrypted ones that could just be synced as-is - they are already encrypted
>> after all. Might be better to push those unchanged and only skip genuinely
>> partially-encrypted snapshots? Or if unsure, allow the user to choose what
>> they want. Wrapping encryption is probably never a good idea though,
>> maybe just any parts that are not yet encrypted?
>
> Pushing fully encrypted ones as is makes sense, only log and skip
> partially encrypted snapshots.
pushing fully encrypted snapshots *if they use the same key*, or all
fully encrypted snapshots? I think the former is okay, the latter might
be confusing/unexpected and lead to "I have all my backups on the remote
but can't decrypt them because of the wrong key"?
e.g.:
client A uses key A to backup to PBS X
client B uses plaintext to backup to PBS X
PBS X uses key B to sync to PBS Y
user thinks key B is enough to restore backups from Y (after all, it's
the encryption key used for syncing), and destroys clients A and B and
PBS X (all in the same location maybe?). now they can't recover client
A..
I know this is technically on the user (they didn't pay attention to
their key management), but seems like a potential foot gun. if we skip
the groups of client A because of a key mismatch (and warn about that?)
then it is at least obvious that those backups *are not on Y*. we could
have a flag for allowing key mismatches to support such scenarios
opt-in?
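The per-snapshot decision being discussed could be sketched roughly like this (all names invented for illustration, not the actual push code); the opt-in flag Fabian suggests corresponds to the `allow_mismatch` parameter:

```rust
// Hypothetical sketch of the push decision per snapshot: plain
// snapshots get encrypted, partially encrypted ones are skipped with
// a warning, fully encrypted ones pass through unchanged only when
// their key fingerprint matches the sync job's key, unless an
// explicit opt-in allows mismatches.
enum SnapshotCrypt<'a> {
    Plain,
    Partial,
    Full { fingerprint: &'a str },
}

#[derive(Debug, PartialEq)]
enum PushAction {
    Encrypt,
    PassThrough,
    SkipWithWarning,
}

fn push_action(state: &SnapshotCrypt, job_fp: &str, allow_mismatch: bool) -> PushAction {
    match state {
        SnapshotCrypt::Plain => PushAction::Encrypt,
        SnapshotCrypt::Partial => PushAction::SkipWithWarning,
        SnapshotCrypt::Full { fingerprint } => {
            if *fingerprint == job_fp || allow_mismatch {
                PushAction::PassThrough
            } else {
                PushAction::SkipWithWarning
            }
        }
    }
}
```

With `allow_mismatch` defaulting to false, skipped mismatches remain visible in the task log rather than silently landing on the remote under a key the user may not associate with them.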
>> Nit: The same `encryption_key` field means "encrypt" for push and
>> "decrypt" for pull. Maybe be literal here and name it decryption_key for
>> the latter. Or something generic that works for both.
>
> Yes, that makes sense... I was primed by the fact that the entities
> themselves are referred to as encryption keys, but renaming this to
> decryption_key for the pull makes more sense and makes the code more
> readable. Will adapt that.
>
>> A few key lifecycle things:
>>
>> Deleting a key does not check whether sync jobs reference it; the next run
>> just fails. Might want to refuse deletion while references exist, or at
>> least warn.
>
> Yes, also realized this just now, will add a check for this in the api
> handler and not allow to remove it in that case.
key rotation/lifecycle management would be a question in any case - we
could have one "active" key per sync job, and a list of archived ones
that are only used for decryption? a key rotation would obviously break
deduplication chains.. such a scheme would also allow re-encryption on
the fly of client-encrypted backups, if a matching key is provided -
though the use case for that seems limited to me (why encrypt in the
first place if you trust this PBS with the plaintext data?).
or alternatively it would need to be documented that for rotating a key,
you want to create a new key and target namespace, then switch the sync
job over (syncing everything from scratch), and delete the old copies
encrypted with the previous key once the re-sync is done?
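The active/archived scheme could look roughly like this (hypothetical types, sketched only to illustrate the lifecycle, not a proposed API):

```rust
// Hypothetical sketch of a per-job key set: one active key used to
// encrypt new pushes, plus archived keys that remain usable for
// decryption only. Rotation archives the old active key instead of
// deleting it.
struct JobKeys {
    active: String,        // key id used to encrypt new pushes
    archived: Vec<String>, // key ids still usable for decryption
}

impl JobKeys {
    fn rotate(&mut self, new_active: String) {
        let old = std::mem::replace(&mut self.active, new_active);
        self.archived.push(old);
    }

    fn can_decrypt(&self, key_id: &str) -> bool {
        self.active == key_id || self.archived.iter().any(|k| k == key_id)
    }

    fn can_encrypt(&self, key_id: &str) -> bool {
        self.active == key_id
    }
}
```

As noted above, rotation would still break deduplication chains; the sketch only captures which key may be used for which operation after a rotation.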
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-08 7:50 ` Fabian Grünbichler
@ 2026-04-08 8:13 ` Christian Ebner
2026-04-08 8:29 ` Thomas Lamprecht
0 siblings, 1 reply; 41+ messages in thread
From: Christian Ebner @ 2026-04-08 8:13 UTC (permalink / raw)
To: Fabian Grünbichler, pbs-devel, Thomas Lamprecht
On 4/8/26 9:49 AM, Fabian Grünbichler wrote:
> On April 2, 2026 9:37 am, Christian Ebner wrote:
>> On 4/2/26 2:24 AM, Thomas Lamprecht wrote:
>>> Am 01.04.26 um 09:55 schrieb Christian Ebner:
>>>> This patch series implements support for encrypting backup snapshots
>>>> when pushing from a source PBS instance to an untrusted remote target
>>>> PBS instance. Further, it adds support to decrypt snapshots being
>>>> encrypted on the remote source PBS when pulling the contents to the
>>>> local target PBS instance.
>>>
>
> [..]
>
>>
>>> When a server-side key is configured on push, all snapshots with any
>>> encrypted content are silently skipped. That includes fully client-side
>>> encrypted ones that could just be synced as-is - they are already encrypted
>>> after all. Might be better to push those unchanged and only skip genuinely
>>> partially-encrypted snapshots? Or if unsure, allow the user to choose what
>>> they want. Wrapping encryption is probably never a good idea though,
>>> maybe just any parts that are not yet encrypted?
>>
>> Pushing fully encrypted ones as is makes sense, only log and skip
>> partially encrypted snapshots.
>
> pushing fully encrypted snapshots *if they use the same key*, or all
> fully encrypted snapshots? I think the former is okay, the latter might
> be confusing/unexpected and lead to "I have all my backups on the remote
> but can't decrypt them because of the wrong key"?
After pondering this a bit I must agree, mixing content
encrypted by different keys during sync will cause headaches for users
not expecting it.
Given this, I would rather keep the handling as is, meaning already
encrypted backups are not pushed, and maybe even add a check to detect
whether the previous manifest of the backup group (if present) was
encrypted using the same key, refusing to sync at all if not?
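Such a group-level guard could be sketched as follows (hypothetical names; the real check would read the fingerprint from the previous manifest of the target group):

```rust
// Hypothetical sketch of the group-level guard: compare the key
// fingerprint recorded in the group's previous manifest with the
// sync job's key fingerprint and refuse the sync on a mismatch.
fn check_group_key(
    previous_manifest_fp: Option<&str>,
    job_fp: &str,
) -> Result<(), String> {
    match previous_manifest_fp {
        // No previous snapshot in the group: nothing to conflict with.
        None => Ok(()),
        Some(fp) if fp == job_fp => Ok(()),
        Some(fp) => Err(format!(
            "group previously encrypted with key fingerprint {fp}, \
             refusing to sync with a different key"
        )),
    }
}
```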
>
> e.g.:
>
> client A uses key A to backup to PBS X
> client B uses plaintext to backup to PBS X
> PBS X uses key B to sync to PBS Y
>
> user thinks key B is enough to restore backups from Y (after all, it's
> the encryption key used for syncing), and destroys clients A and B and
> PBS X (all in the same location maybe?). now they can't recover client
> A..
>
> I know this is technically on the user (they didn't pay attention to
> their key management), but seems like a potential foot gun. if we skip
> the groups of client A because of a key mismatch (and warn about that?)
> then it is at least obvious that those backups *are not on Y*. we could
> have a flag for allowing key mismatches to support such scenarios
> opt-in?
I would rather not add a flag for this, but make the user explicitly
sync such snapshots if truly wanted. After all, we already have the
encrypted-only option in place, which allows syncing encrypted snapshots
without touching the others. It seems to me that making the mixing of
contents the undesired, hard route is the better way, given also the
pull decrypt mismatch issues with two jobs using different keys as
reported by Manuel.
>
>>> Nit: The same `encryption_key` field means "encrypt" for push and
>>> "decrypt" for pull. Maybe be literal here and name it decryption_key for
>>> the latter. Or something generic that works for both.
>>
>> Yes, that makes sense... I was primed by the fact that the entities
>> themselves are referred to as encryption keys, but renaming this to
>> decryption_key for the pull makes more sense and makes the code more
>> readable. Will adapt that.
>>
>>> A few key lifecycle things:
>>>
>>> Deleting a key does not check whether sync jobs reference it; the next run
>>> just fails. Might want to refuse deletion while references exist, or at
>>> least warn.
>>
>> Yes, also realized this just now, will add a check for this in the api
>> handler and not allow to remove it in that case.
>
> key rotation/lifecycle management would be a question in any case - we
> could have one "active" key per sync job, and a list of archived ones
> that are only used for decryption? a key rotation would obviously break
> deduplication chains.. such a scheme would also allow re-encryption on
> the fly of client-encrypted backups, if a matching key is provided -
> though the use case for that seems limited to me (why encrypt in the
> first place if you trust this PBS with the plaintext data?).
>
> or alternatively it would need to be documented that for rotating a key,
> you want to create a new key and target namespace, then switch the sync
> job over (syncing everything from scratch), and delete the old copies
> encrypted with the previous key once the re-sync is done?
Given the above, I think the alternative option might be the better way.
Mixing snapshots in a group with different keys should not be encouraged
due to possible key loss as described above. But making it possible to
archive a key, so it is still kept around for decryption but not
usable to push new encrypted backups, is definitely something I should
implement, yes. And it should probably not be possible to remove a key
which has not been marked as archived first, even if currently not in
use by any sync jobs?
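A removal guard along these lines might look like this (illustrative sketch only, names invented):

```rust
use std::collections::HashSet;

// Hypothetical sketch of a deletion guard: a key may only be removed
// once it has been archived and no sync job references it anymore.
struct KeyEntry {
    archived: bool,
}

fn check_removal(
    key_id: &str,
    key: &KeyEntry,
    referenced_by_jobs: &HashSet<&str>,
) -> Result<(), String> {
    if referenced_by_jobs.contains(key_id) {
        return Err(format!("key '{key_id}' is still used by a sync job"));
    }
    if !key.archived {
        return Err(format!("key '{key_id}' must be archived before removal"));
    }
    Ok(())
}
```

Requiring the archive step first makes removal a deliberate two-step action, which fits Thomas's point below about extra steps and warnings.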
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-08 8:13 ` Christian Ebner
@ 2026-04-08 8:29 ` Thomas Lamprecht
2026-04-08 8:56 ` Christian Ebner
2026-04-08 9:03 ` Fabian Grünbichler
0 siblings, 2 replies; 41+ messages in thread
From: Thomas Lamprecht @ 2026-04-08 8:29 UTC (permalink / raw)
To: Christian Ebner, Fabian Grünbichler, pbs-devel
On 08/04/2026 10:12, Christian Ebner wrote:
> On 4/8/26 9:49 AM, Fabian Grünbichler wrote:
>> On April 2, 2026 9:37 am, Christian Ebner wrote:
>>> On 4/2/26 2:24 AM, Thomas Lamprecht wrote:
>>>> Am 01.04.26 um 09:55 schrieb Christian Ebner:
>>>>> This patch series implements support for encrypting backup snapshots
>>>>> when pushing from a source PBS instance to an untrusted remote target
>>>>> PBS instance. Further, it adds support to decrypt snapshots being
>>>>> encrypted on the remote source PBS when pulling the contents to the
>>>>> local target PBS instance.
>>>>
>>
>> [..]
>>
>>>
>>>> When a server-side key is configured on push, all snapshots with any
>>>> encrypted content are silently skipped. That includes fully client-side
>>>> encrypted ones that could just be synced as-is - they are already encrypted
>>>> after all. Might be better to push those unchanged and only skip genuinely
>>>> partially-encrypted snapshots? Or if unsure, allow the user to choose what
>>>> they want. Wrapping encryption is probably never a good idea though,
>>>> maybe just any parts that are not yet encrypted?
>>>
>>> Pushing fully encrypted ones as is makes sense, only log and skip
>>> partially encrypted snapshots.
>>
>> pushing fully encrypted snapshots *if they use the same key*, or all
>> fully encrypted snapshots? I think the former is okay, the latter might
>> be confusing/unexpected and lead to "I have all my backups on the remote
>> but can't decrypt them because of the wrong key"?
>
> after pondering this a bit I must agree, mixing content encrypted by different keys during sync will cause headaches for users not expecting it.
Well, can they really lose? Not having backups synced might be
unexpected too, and while the user could create another sync job, having
to do so can be a bit of a hassle.
Also, the contract of "everything gets encrypted, if it already is, it
gets just sent along as is" doesn't sound that complex to me.
> Given this, I would rather keep the handling as is, meaning already encrypted backups are not pushed and maybe even add a check to detect if the previous manifest of the backup group (if present) was encrypted using the same key, refusing to sync at all if not?
Confusion might be better resolved through docs and task log warnings,
depending on gravity, over refusing to sync something, which can be
fatal if the primary data source breaks and one searches for backups
only to find that they did not get synced out of confusion protection.
Don't get me wrong, I agree that this is not ideal and Fabian has a
point, but with syncing or not syncing data one needs to be very careful
for a backup server appliance.
>> e.g.:
>>
>> client A uses key A to backup to PBS X
>> client B uses plaintext to backup to PBS X
>> PBS X uses key B to sync to PBS Y
>>
>> user thinks key B is enough to restore backups from Y (after all, it's
>> the encryption key used for syncing), and destroys clients A and B and
>> PBS X (all in the same location maybe?). now they can't recover client
>> A..
>>
>> I know this is technically on the user (they didn't pay attention to
>> their key management), but seems like a potential foot gun. if we skip
>> the groups of client A because of a key mismatch (and warn about that?)
>> then it is at least obvious that those backups *are not on Y*. we could
>> have a flag for allowing key mismatches to support such scenarios
>> opt-in?
>
> I would rather not add a flag for this, but make the user explicitly sync such snapshots if truly wanted. After all, we already have the encrypted-only option in place, which allows syncing encrypted snapshots without touching the others. It seems to me that making the mixing of contents the undesired, hard route is the better way, given also the pull decrypt mismatch issues with two jobs using different keys as reported by Manuel.
An extra option naturally adds complexity, but it would leave the
decision to the users, i.e. the ones that can actually know what the
right choice for them is.
But IMO we could default to syncing everything while encrypting only
the unencrypted parts, given that datastores and namespaces often are
organised such that most backups in there are alike and that we already
have lots of possibilities to filter out stuff. If user demand for
skipping these comes up, we could add a filter flag to exclude/include
encrypted (and for another use case also unencrypted) backup snapshots
from the job selection, which would come relatively naturally and not be
just another flag tacked onto the job options.
>>
>>>> Nit: The same `encryption_key` field means "encrypt" for push and
>>>> "decrypt" for pull. Maybe be literal here and name it decryption_key for
>>>> the latter. Or something generic that works for both.
>>>
>>> Yes, that makes sense... I was primed by the fact that the entities
>>> themselves are referred to as encryption keys, but renaming this to
>>> decryption_key for the pull makes more sense and makes the code more
>>> readable. Will adapt that.
>>>
>>>> A few key lifecycle things:
>>>>
>>>> Deleting a key does not check whether sync jobs reference it; the next run
>>>> just fails. Might want to refuse deletion while references exist, or at
>>>> least warn.
>>>
>>> Yes, also realized this just now, will add a check for this in the api
>>> handler and not allow to remove it in that case.
>>
>> key rotation/lifecycle management would be a question in any case - we
>> could have one "active" key per sync job, and a list of archived ones
>> that are only used for decryption? a key rotation would obviously break
>> deduplication chains.. such a scheme would also allow re-encryption on
>> the fly of client-encrypted backups, if a matching key is provided -
>> though the use case for that seems limited to me (why encrypt in the
>> first place if you trust this PBS with the plaintext data?).
>>
>> or alternatively it would need to be documented that for rotating a key,
>> you want to create a new key and target namespace, then switch the sync
>> job over (syncing everything from scratch), and delete the old copies
>> encrypted with the previous key once the re-sync is done?
>
> Given the above, I think the alternative option might be the better way. Mixing snapshots in a group with different keys should not be encouraged due to possible key loss as described above. But making it possible to archive a key, so it is still kept around for decryption but not usable to push new encrypted backups, is definitely something I should implement, yes. And it should probably not be possible to remove a key which has not been marked as archived first, even if currently not in use by any sync jobs?
>
Yeah, key removal might well be something that benefits from some extra
steps and/or warnings.
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-08 8:29 ` Thomas Lamprecht
@ 2026-04-08 8:56 ` Christian Ebner
2026-04-08 9:03 ` Fabian Grünbichler
1 sibling, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-08 8:56 UTC (permalink / raw)
To: Thomas Lamprecht, Fabian Grünbichler, pbs-devel
On 4/8/26 10:28 AM, Thomas Lamprecht wrote:
> On 08/04/2026 10:12, Christian Ebner wrote:
>> On 4/8/26 9:49 AM, Fabian Grünbichler wrote:
>>> On April 2, 2026 9:37 am, Christian Ebner wrote:
>>>> On 4/2/26 2:24 AM, Thomas Lamprecht wrote:
>>>>> Am 01.04.26 um 09:55 schrieb Christian Ebner:
>>>>>> This patch series implements support for encrypting backup snapshots
>>>>>> when pushing from a source PBS instance to an untrusted remote target
>>>>>> PBS instance. Further, it adds support to decrypt snapshots being
>>>>>> encrypted on the remote source PBS when pulling the contents to the
>>>>>> local target PBS instance.
>>>>>
>>>
>>> [..]
>>>
>>>>
>>>>> When a server-side key is configured on push, all snapshots with any
>>>>> encrypted content are silently skipped. That includes fully client-side
>>>>> encrypted ones that could just be synced as-is - they are already encrypted
>>>>> after all. Might be better to push those unchanged and only skip genuinely
>>>>> partially-encrypted snapshots? Or if unsure, allow the user to choose what
>>>>> they want. Wrapping encryption is probably never a good idea though,
>>>>> maybe just any parts that are not yet encrypted?
>>>>
>>>> Pushing fully encrypted ones as is makes sense, only log and skip
>>>> partially encrypted snapshots.
>>>
>>> pushing fully encrypted snapshots *if they use the same key*, or all
>>> fully encrypted snapshots? I think the former is okay, the latter might
>>> be confusing/unexpected and lead to "I have all my backups on the remote
>>> but can't decrypt them because of the wrong key"?
>>
>> after pondering this a bit I must agree, mixing content encrypted by different keys during sync will cause headaches for users not expecting it.
>
> Well, can they really lose? Not having backups synced might be
> unexpected too, and while the user could create another sync job, having
> to do so can be a bit of a hassle.
> Also, the contract of "everything gets encrypted, if it already is, it
> gets just sent along as is" doesn't sound that complex to me.
>> Given this, I would rather keep the handling as is, meaning already encrypted backups are not pushed and maybe even add a check to detect if the previous manifest of the backup group (if present) was encrypted using the same key, refusing to sync at all if not?
>
> Confusion might be better resolved through docs and task log warnings,
> depending on gravity, over refusing to sync something, which can be
> fatal if the primary data source breaks and one searches for backups
> only to find that they did not get synced out of confusion protection.
>
> Don't get me wrong, I agree that this is not ideal and Fabian has a
> point, but with syncing or not syncing data one needs to be very careful
> for a backup server appliance.
Okay, so let's go this route then: push already encrypted backup
snapshots as they are and add a big fat warning below the key field
that it is the user's responsibility to check that they possess all the
keys needed for a restore.
And add documentation that the recommended way is not to mix contents,
but rather to use namespaces to split them.
>>> e.g.:
>>>
>>> client A uses key A to backup to PBS X
>>> client B uses plaintext to backup to PBS X
>>> PBS X uses key B to sync to PBS Y
>>>
>>> user thinks key B is enough to restore backups from Y (after all, it's
>>> the encryption key used for syncing), and destroys clients A and B and
>>> PBS X (all in the same location maybe?). now they can't recover client
>>> A..
>>>
>>> I know this is technically on the user (they didn't pay attention to
>>> their key management), but seems like a potential foot gun. if we skip
>>> the groups of client A because of a key mismatch (and warn about that?)
>>> then it is at least obvious that those backups *are not on Y*. we could
>>> have a flag for allowing key mismatches to support such scenarios
>>> opt-in?
>>
>> I would rather not add a flag for this, but make the user explicitly sync such snapshots if truly wanted. After all, we already have the encrypted-only option in place, which allows syncing encrypted snapshots without touching the others. It seems to me that making the mixing of contents the undesired, hard route is the better way, given also the pull decrypt mismatch issues with two jobs using different keys as reported by Manuel.
>
> An extra option naturally adds complexity, but it would leave the
> decision to the users, i.e. the ones that can actually know what the
> right choice for them is.
>
> But IMO we could default to syncing everything while encrypting only
> the unencrypted parts, given that datastores and namespaces often are
> organised such that most backups in there are alike and that we already
> have lots of possibilities to filter out stuff. If user demand for
> skipping these comes up, we could add a filter flag to exclude/include
> encrypted (and for another use case also unencrypted) backup snapshots
> from the job selection, which would come relatively naturally and not be
> just another flag tacked onto the job options.
Yes, adding more user-friendly filtering options at the snapshot level
would indeed be warranted for sync jobs. But that is something for the
PBS 5.0 release, as it would be a breaking change in the config.
>>>
>>>>> Nit: The same `encryption_key` field means "encrypt" for push and
>>>>> "decrypt" for pull. Maybe be literal here and name it decryption_key for
>>>>> the latter. Or something generic that works for both.
>>>>
>>>> Yes, that makes sense... I was primed by the fact that the entities
>>>> themselves are referred to as encryption keys, but renaming this to
>>>> decryption_key for the pull makes more sense and makes the code more
>>>> readable. Will adapt that.
>>>>
>>>>> A few key lifecycle things:
>>>>>
>>>>> Deleting a key does not check whether sync jobs reference it; the next run
>>>>> just fails. Might want to refuse deletion while references exist, or at
>>>>> least warn.
>>>>
>>>> Yes, also realized this just now, will add a check for this in the api
>>>> handler and not allow to remove it in that case.
>>>
>>> key rotation/lifecycle management would be a question in any case - we
>>> could have one "active" key per sync job, and a list of archived ones
>>> that are only used for decryption? a key rotation would obviously break
>>> deduplication chains.. such a scheme would also allow re-encryption on
>>> the fly of client-encrypted backups, if a matching key is provided -
>>> though the use case for that seems limited to me (why encrypt in the
>>> first place if you trust this PBS with the plaintext data?).
>>>
>>> or alternatively it would need to be documented that for rotating a key,
>>> you want to create a new key and target namespace, then switch the sync
>>> job over (syncing everything from scratch), and delete the old copies
>>> encrypted with the previous key once the re-sync is done?
>>
>> Given the above, I think the alternative option might be the better way. Mixing snapshots in a group with different keys should not be encouraged due to possible key loss as described above. But making it possible to archive a key, so it is still kept around for decryption but not usable to push new encrypted backups, is definitely something I should implement, yes. And it should probably not be possible to remove a key which has not been marked as archived first, even if currently not in use by any sync jobs?
>>
>
>
> Yeah, key removal might well be something that benefits from some extra
> steps and/or warnings.
Acked, so will add the archiving as well.
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-08 8:29 ` Thomas Lamprecht
2026-04-08 8:56 ` Christian Ebner
@ 2026-04-08 9:03 ` Fabian Grünbichler
1 sibling, 0 replies; 41+ messages in thread
From: Fabian Grünbichler @ 2026-04-08 9:03 UTC (permalink / raw)
To: Christian Ebner, pbs-devel, Thomas Lamprecht
On April 8, 2026 10:29 am, Thomas Lamprecht wrote:
> On 08/04/2026 10:12, Christian Ebner wrote:
>> On 4/8/26 9:49 AM, Fabian Grünbichler wrote:
>>> On April 2, 2026 9:37 am, Christian Ebner wrote:
>>>> On 4/2/26 2:24 AM, Thomas Lamprecht wrote:
>>>>> Am 01.04.26 um 09:55 schrieb Christian Ebner:
>>>>>> This patch series implements support for encrypting backup snapshots
>>>>>> when pushing from a source PBS instance to an untrusted remote target
>>>>>> PBS instance. Further, it adds support to decrypt snapshots being
>>>>>> encrypted on the remote source PBS when pulling the contents to the
>>>>>> local target PBS instance.
>>>>>
>>>
>>> [..]
>>>
>>>>
>>>>> When a server-side key is configured on push, all snapshots with any
>>>>> encrypted content are silently skipped. That includes fully client-side
>>>>> encrypted ones that could just be synced as-is - they are already encrypted
>>>>> after all. Might be better to push those unchanged and only skip genuinely
>>>>> partially-encrypted snapshots? Or if unsure, allow the user to choose what
>>>>> they want. Wrapping encryption is probably never a good idea though,
>>>>> maybe just any parts that are not yet encrypted?
>>>>
>>>> Pushing fully encrypted ones as is makes sense, only log and skip
>>>> partially encrypted snapshots.
>>>
>>> pushing fully encrypted snapshots *if they use the same key*, or all
>>> fully encrypted snapshots? I think the former is okay, the latter might
>>> be confusing/unexpected and lead to "I have all my backups on the remote
>>> but can't decrypt them because of the wrong key"?
>>
>> after pondering this a bit I must agree, mixing content encrypted by different keys during sync will cause headaches for users not expecting it.
>
> Well, can they really lose? Not having backups synced might be
> unexpected too, and while the user could create another sync job, having
> to do so can be a bit of a hassle.
> Also, the contract of "everything gets encrypted, if it already is, it
> gets just sent along as is" doesn't sound that complex to me.
>> Given this, I would rather keep the handling as is, meaning already encrypted backups are not pushed and maybe even add a check to detect if the previous manifest of the backup group (if present) was encrypted using the same key, refusing to sync at all if not?
>
> Confusion might be better resolved through docs and task log warnings,
> depending on gravity, over refusing to sync something, which can be
> fatal if the primary data source breaks and one searches for backups
> only to find that they did not get synced out of confusion protection.
>
> Don't get me wrong, I agree that this is not ideal and Fabian has a
> point, but with syncing or not syncing data one needs to be very careful
> for a backup server appliance.
>>> e.g.:
>>>
>>> client A uses key A to backup to PBS X
>>> client B uses plaintext to backup to PBS X
>>> PBS X uses key B to sync to PBS Y
>>>
>>> user thinks key B is enough to restore backups from Y (after all, it's
>>> the encryption key used for syncing), and destroys clients A and B and
>>> PBS X (all in the same location maybe?). now they can't recover client
>>> A..
>>>
>>> I know this is technically on the user (they didn't pay attention to
>>> their key management), but seems like a potential foot gun. if we skip
>>> the groups of client A because of a key mismatch (and warn about that?)
>>> then it is at least obvious that those backups *are not on Y*. we could
>>> have a flag for allowing key mismatches to support such scenarios
>>> opt-in?
>>
>> I would rather not add a flag for this, but make the user explicitly sync such snapshots if truly wanted. After all, we already have the encrypted-only option in place, which allows syncing encrypted snapshots without touching the others. It seems to me that making the mixing of contents the undesired, hard route is the better way, given also the pull decrypt mismatch issues with two jobs using different keys as reported by Manuel.
>
> An extra option naturally adds complexity, but it would leave the
> decision to the users, i.e. the ones that can actually know what the
> right choice for them is.
>
> But IMO we could default to syncing everything while encrypting only the
> unencrypted part for now, given that datastores and namespaces are often
> organised such that most backups in there are the same, and that we
> already have lots of possibilities to filter out stuff. If user demand
> for skipping these comes up, we could then add a filter flag to
> exclude/include encrypted (and, for another use case, also unencrypted)
> backup snapshots from the job selection, which would come relatively
> naturally and not be just another flag tacked onto the job options.
in any case, we should probably include a summary at the end of the sync
job?
transferred X snapshots, Y newly encrypted, Z already encrypted at
source
or, for pull:
transferred X snapshots, Y decrypted, Z already unencrypted at source
that makes it easy to spot (and hopefully prevent mistakes) without
being noisy like warnings would be?
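A minimal sketch of how such per-job summary counters could look; `SyncSummary` and its method names are purely illustrative here, not actual PBS code:

```rust
/// Hypothetical per-job counters for the proposed end-of-sync summary
/// line (push direction); illustrative only, not PBS API.
#[derive(Default)]
pub struct SyncSummary {
    transferred: usize,
    newly_encrypted: usize,
    already_encrypted: usize,
}

impl SyncSummary {
    /// Record one synced snapshot; `source_encrypted` reflects whether
    /// the snapshot was already encrypted on the source side.
    pub fn note_snapshot(&mut self, source_encrypted: bool) {
        self.transferred += 1;
        if source_encrypted {
            self.already_encrypted += 1;
        } else {
            self.newly_encrypted += 1;
        }
    }

    /// Render the summary line suggested above.
    pub fn to_log_line(&self) -> String {
        format!(
            "transferred {} snapshots, {} newly encrypted, {} already encrypted at source",
            self.transferred, self.newly_encrypted, self.already_encrypted
        )
    }
}
```

The pull variant would only differ in the counter labels (decrypted / already unencrypted at source).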
I can see the arguments for both sides, and I agree that the most
important part is getting the docs and UX right so that it is obvious to
the users what is going on - supporting the "other" variant can always
be done later..
snapshot filters (only verified, only encrypted, only plain) have been
requested in the past by users as well..
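If such filters were added, the selection logic could be as simple as the following sketch; the `CryptFilter` enum and function name are hypothetical, not an existing PBS option:

```rust
/// Hypothetical snapshot filter as discussed above; just an
/// illustration of the selection logic, not actual PBS code.
#[derive(Clone, Copy)]
pub enum CryptFilter {
    /// sync everything (current behaviour)
    All,
    /// only sync snapshots encrypted on the source
    EncryptedOnly,
    /// only sync plaintext snapshots
    PlainOnly,
}

/// Decide whether a snapshot passes the filter, given its source-side
/// encryption state.
pub fn snapshot_matches(filter: CryptFilter, source_encrypted: bool) -> bool {
    match filter {
        CryptFilter::All => true,
        CryptFilter::EncryptedOnly => source_encrypted,
        CryptFilter::PlainOnly => !source_encrypted,
    }
}
```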
>>>
>>>>> Nit: The same `encryption_key` field means "encrypt" for push and
>>>>> "decrypt" for pull. Maybe be literal here and name it decryption_key for
>>>>> the later. Or something generic that works for both.
>>>>
>>>> Yes, that makes sense... I was primed by the fact that the entities
>>>> themselves are referred to as encryption keys, but renaming this to
>>>> decryption_key for the pull direction makes the code more readable.
>>>> Will adapt that.
>>>>
>>>>> A few key lifecycle things:
>>>>>
>>>>> Deleting a key does not check whether sync jobs reference it; the next run
>>>>> just fails. Might want to refuse deletion while references exist, or at
>>>>> least warn.
>>>>
>>>> Yes, also realized this just now; will add a check for this in the
>>>> API handler and not allow removing the key in that case.
>>>
>>> key rotation/lifecycle management would be a question in any case - we
>>> could have one "active" key per sync job, and a list of archived ones
>>> that are only used for decryption? a key rotation would obviously break
>>> deduplication chains.. such a scheme would also allow re-encryption on
>>> the fly of client-encrypted backups, if a matching key is provided -
>>> though the use case for that seems limited to me (why encrypt in the
>>> first place if you trust this PBS with the plaintext data?).
>>>
>>> or alternatively it would need to be documented that for rotating a key,
>>> you want to create a new key and target namespace, then switch the sync
>>> job over (syncing everything from scratch), and delete the old copies
>>> encrypted with the previous key once the re-sync is done?
>>
>> Given the above, I think the alternative option might be the better way. Mixing snapshots in a group with different keys should not be encouraged, due to the possible key loss described above. But making it possible to archive a key, so it is still kept around for decryption but no longer usable for pushing new encrypted backups, is definitely something I should implement, yes. And probably it should not be possible to remove a key that has not been marked as archived first, even if it is currently not in use by any sync jobs?
>>
>
>
> Yeah, key removal might be well something that benefits from some extra
> steps and/or warnings needed.
we currently don't support "duplicating" or inverting sync jobs easily.
that would be a requirement for doing key rotation like this though,
because at the moment I can only edit in-place, which will lead users
naturally to just change the encryption key. then the key is no longer
referenced and can be removed without warnings, even though plenty of
snapshots on the target may still be encrypted with it. a similar
"problem" also exists for PVE->PBS, and whatever we decide on here should
probably also be implemented there for consistency.
IMHO we either want to forbid changing the encryption key used by a sync
job (and add a duplicate button that allows creating a new one with same
settings but different key easily), or automatically append the key to a
list of archival keys for that sync job when changing (which would
nicely mesh with a "switch direction" button, since you'd only need
those old keys when pulling later). although the notion of an active key
only makes sense for pushing anyway, for pulling you could always have a
list of keys and match via the fingerprints..
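The active-vs-archived scheme sketched above could look roughly like this (all names are hypothetical, not PBS code): only the single non-archived key may be used for encryption on push, while any configured key whose fingerprint matches the manifest may be used for decryption on pull.

```rust
/// Hypothetical key entry for a sync job: one active key plus a list
/// of archived ones, as discussed above. Not actual PBS code.
pub struct KeyEntry {
    pub fingerprint: String,
    pub archived: bool,
}

/// For pull/decryption: match any configured key (active or archived)
/// against the fingerprint recorded in the snapshot manifest.
pub fn decryption_key<'a>(keys: &'a [KeyEntry], manifest_fp: &str) -> Option<&'a KeyEntry> {
    keys.iter().find(|k| k.fingerprint == manifest_fp)
}

/// For push/encryption: only the non-archived (active) key may be used.
pub fn encryption_key(keys: &[KeyEntry]) -> Option<&KeyEntry> {
    keys.iter().find(|k| !k.archived)
}
```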
* Re: [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs
2026-04-08 7:29 ` David Riley
@ 2026-04-08 15:11 ` Christian Ebner
0 siblings, 0 replies; 41+ messages in thread
From: Christian Ebner @ 2026-04-08 15:11 UTC (permalink / raw)
To: David Riley, pbs-devel
On 4/8/26 9:28 AM, David Riley wrote:
> Thanks for your work on this.
> I tested your patches for the proxmox and proxmox-backup
> repositories and encountered three issues:
>
> 1. Push/Pull Job: Form validation error
> When the encryption key selection is left untouched or disabled,
> the form fails validation.
>
> Reproduction Steps:
> Use two PBS instances (PBS1 with patches, PBS2 without).
> Create a datastore and an encryption key on PBS1.
> Create a datastore on PBS2.
> On PBS1, configure a Push Job for the datastore.
> Leave the Encryption Key field disabled/untouched and submit the
> form.
>
> The following error appears:
> parameter verification errors (400)
> encryption-key: value must be at least 3 characters long
>
> 2. Deleting Encryption Key: "Unknown error" pop-up
>
> Deleting an encryption key appears to succeed (the key is removed),
> but the UI shows an error pop-up.
>
> Error message: "Unknown error"
> Observations: No errors are logged in the browser console or the
> network tab.
>
> 3. Drag and Drop of Encryption Key File
> Dragging a file from the file explorer into the "Upload from File"
> form field results in a parsing error.
>
> Reproduction Steps:
> Create Encryption Key and download it.
> Add new key
> Drag and drop the just downloaded key file into the form field.
>
> The field displays "[object ProgressEvent]" instead of the file
> content/name. Hovering over it shows:
> "Failed to parse key - SyntaxError: JSON parse unexpected character
> at line 1 column 2 of the JSON data"
>
> Aside from these three points, the rest of the functionality works as
> expected.
>
Thanks a lot for testing and reporting these issues: 1. and 2. have
already been reported by others and are fixed for the upcoming version 2.
Looking into 3. now, thanks for catching that!
Thread overview: 41+ messages
2026-04-01 7:55 [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox 01/20] pbs-api-types: define encryption key type and schema Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox 02/20] pbs-api-types: sync job: add optional encryption key to config Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 03/20] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 04/20] pbs-config: implement encryption key config handling Christian Ebner
2026-04-01 23:27 ` Thomas Lamprecht
2026-04-02 7:09 ` Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 05/20] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 06/20] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 07/20] api: config: add endpoints for encryption key manipulation Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 08/20] api: config: allow encryption key manipulation for sync job Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 09/20] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 10/20] sync: add helper to check encryption key acls and load key Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 11/20] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 12/20] ui: define and expose encryption key management menu item and windows Christian Ebner
2026-04-01 23:09 ` Thomas Lamprecht
2026-04-03 8:35 ` Dominik Csapak
2026-04-01 23:10 ` Thomas Lamprecht
2026-04-03 12:16 ` Dominik Csapak
2026-04-01 7:55 ` [PATCH proxmox-backup 13/20] ui: expose assigning encryption key to sync jobs Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 14/20] sync: pull: load encryption key if given in job config Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 15/20] sync: expand source chunk reader trait by crypt config Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 16/20] sync: pull: introduce and use decrypt index writer if " Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 17/20] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 18/20] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 19/20] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
2026-04-01 7:55 ` [PATCH proxmox-backup 20/20] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
2026-04-02 0:25 ` [PATCH proxmox{,-backup} 00/20] fix #7251: implement server side encryption support for push sync jobs Thomas Lamprecht
2026-04-02 7:37 ` Christian Ebner
2026-04-08 7:50 ` Fabian Grünbichler
2026-04-08 8:13 ` Christian Ebner
2026-04-08 8:29 ` Thomas Lamprecht
2026-04-08 8:56 ` Christian Ebner
2026-04-08 9:03 ` Fabian Grünbichler
2026-04-03 8:39 ` Dominik Csapak
2026-04-03 8:50 ` Christian Ebner
2026-04-03 9:00 ` Dominik Csapak
2026-04-07 15:12 ` Manuel Federanko
2026-04-07 16:17 ` Christian Ebner
2026-04-08 7:29 ` David Riley
2026-04-08 15:11 ` Christian Ebner