public inbox for pbs-devel@lists.proxmox.com
 help / color / mirror / Atom feed
* [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs
@ 2026-04-10 16:54 Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox v2 01/27] pbs-api-types: define en-/decryption key type and schema Christian Ebner
                   ` (26 more replies)
  0 siblings, 27 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

This patch series implements support for encrypting backup snapshots
when pushing from a source PBS instance to an untrusted remote target
PBS instance. Further, it adds support for decrypting snapshots that
were encrypted on the remote source PBS when pulling their contents
to the local target PBS instance. This allows performing full server
side encryption/decryption when syncing with a less trusted remote PBS.

In order to encrypt/decrypt snapshots, a new encryption key entity
is introduced. It is created as a global instance on the PBS and
managed by its own dedicated config. Keys with their secret material
are stored in dedicated files, so the secret only needs to be loaded
when accessing the key, not when listing the configuration. Sync
encryption keys can be archived, rendering them no longer usable for
encrypting new contents while still allowing decryption. In order to
remove a sync encryption key, it must be archived first and must no
longer be associated with any sync job config, a constraint added as
a safety net to avoid accidental key removal.
The same centralized key management is also used for tape encryption
keys, so both are on par UI-wise; their configs, however, remain
separate for the time being.

The sync jobs in push direction are extended to accept an additional
active encryption key parameter, which is used to encrypt unencrypted
snapshots when pushing to the remote target.
A list of associated keys is kept; the previous encryption key of the
push sync job is added to it when the key is rotated.
For pull sync jobs, the active encryption key parameter is not
considered; instead, all associated keys are loaded and used to
decrypt snapshots whose fingerprint matches the one found in the
source manifest. In order to encrypt/decrypt the contents, chunks,
index files, blobs and the manifest are additionally processed and
rewritten when required.

Changes since version 1 (thanks a lot to @all reviewers/testers!):
- Implement encryption key archiving and key rotation logic, allowing
  the active encryption key for push syncs to be specified, together
  with a list of previously used ones. For pull, multiple decryption
  keys can now be configured.
- Rework the UI to support key archiving, manage key associations in
  sync jobs, and also manage tape encryption keys in the same
  centralized grid.
- Check whether a key is still in use by a sync job before removing it
- Fully encrypted snapshots are now pushed as-is if an encryption key
  is configured.
- Fix inefficient resync of pre-existing target snapshots on pull by
  detecting file changes in the manifest via fingerprinting.
- Avoid overwriting pre-existing decrypted local snapshot by encrypted
  snapshot when no (or mismatching) decryption key is passed for pull
  job.
- Rename EncryptionKey to CryptKey, as the key is also used for
  decryption.
- Remove key from config before removing keyfile
- Add locking mechanism to avoid races in key config writing
- Fix gathering of known chunks from previous snapshot in push for
  dynamic index files
- Detect config changes by checking for digest mismatch
- Guard key loading by PRIV_SYS_MODIFY
- Use tracing::info! instead of log::info!
- Fix clearing of encryption/decryption key via sync job config window
- Fix creating new sync job without crypt key configured
- Check key exists and can be accessed when set in sync job
- Fix min key id length for key edit window
- Fix drag-and-drop for key file upload
- Fix outdated comments, typos, etc.

Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=7251


proxmox:

Christian Ebner (2):
  pbs-api-types: define en-/decryption key type and schema
  pbs-api-types: sync job: add optional cryptographic keys to config

 pbs-api-types/src/jobs.rs           | 21 ++++++++++++++--
 pbs-api-types/src/key_derivation.rs | 38 ++++++++++++++++++++++++++---
 pbs-api-types/src/lib.rs            |  2 +-
 3 files changed, 55 insertions(+), 6 deletions(-)


proxmox-backup:

Christian Ebner (25):
  datastore: blob: implement async reader for data blobs
  datastore: manifest: add helper for change detection fingerprint
  pbs-key-config: introduce store_with() for KeyConfig
  pbs-config: implement encryption key config handling
  pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
  ui: expose 'encryption-keys' as acl subpath for 'system'
  sync: add helper to check encryption key acls and load key
  api: config: add endpoints for encryption key manipulation
  api: config: check sync owner has access to en-/decryption keys
  api: config: allow encryption key manipulation for sync job
  sync: push: rewrite manifest instead of pushing pre-existing one
  api: push sync: expose optional encryption key for push sync
  sync: push: optionally encrypt data blob on upload
  sync: push: optionally encrypt client log on upload if key is given
  sync: push: add helper for loading known chunks from previous snapshot
  fix #7251: api: push: encrypt snapshots using configured encryption
    key
  ui: define and expose encryption key management menu item and windows
  ui: expose assigning encryption key to sync jobs
  sync: pull: load encryption key if given in job config
  sync: expand source chunk reader trait by crypt config
  sync: pull: introduce and use decrypt index writer if crypt config
  sync: pull: extend encountered chunk by optional decrypted digest
  sync: pull: decrypt blob files on pull if encryption key is configured
  sync: pull: decrypt chunks and rewrite index file for matching key
  sync: pull: decrypt snapshots with matching encryption key fingerprint

 pbs-config/Cargo.toml              |   2 +
 pbs-config/src/acl.rs              |   4 +-
 pbs-config/src/encryption_keys.rs  | 210 +++++++++++++
 pbs-config/src/lib.rs              |   1 +
 pbs-datastore/src/data_blob.rs     |  18 +-
 pbs-datastore/src/manifest.rs      |  20 ++
 pbs-key-config/src/lib.rs          |  36 ++-
 src/api2/config/encryption_keys.rs | 203 +++++++++++++
 src/api2/config/mod.rs             |   2 +
 src/api2/config/sync.rs            |  78 ++++-
 src/api2/pull.rs                   |  15 +-
 src/api2/push.rs                   |   8 +-
 src/server/pull.rs                 | 455 ++++++++++++++++++++++++-----
 src/server/push.rs                 | 297 ++++++++++++++-----
 src/server/sync.rs                 |  58 +++-
 www/Makefile                       |   3 +
 www/NavigationTree.js              |   6 +
 www/Utils.js                       |   1 +
 www/config/EncryptionKeysView.js   | 324 ++++++++++++++++++++
 www/form/EncryptionKeySelector.js  |  96 ++++++
 www/form/PermissionPathSelector.js |   1 +
 www/window/EncryptionKeysEdit.js   | 383 ++++++++++++++++++++++++
 www/window/SyncJobEdit.js          |  30 ++
 23 files changed, 2092 insertions(+), 159 deletions(-)
 create mode 100644 pbs-config/src/encryption_keys.rs
 create mode 100644 src/api2/config/encryption_keys.rs
 create mode 100644 www/config/EncryptionKeysView.js
 create mode 100644 www/form/EncryptionKeySelector.js
 create mode 100644 www/window/EncryptionKeysEdit.js


Summary over all repositories:
  26 files changed, 2147 insertions(+), 165 deletions(-)

-- 
Generated by murpp 0.11.0




^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox v2 01/27] pbs-api-types: define en-/decryption key type and schema
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox v2 02/27] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
                   ` (25 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Will be used to store and uniquely identify en-/decryption keys in
their respective config. Contains the KeyInfo extended by the unique
key identifier and an optional `archived-at` timestamp for keys which
are marked as no longer usable for encrypting new content, only for
decrypting.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-api-types/src/key_derivation.rs | 38 ++++++++++++++++++++++++++---
 pbs-api-types/src/lib.rs            |  2 +-
 2 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/pbs-api-types/src/key_derivation.rs b/pbs-api-types/src/key_derivation.rs
index 57ae353a..0815a1f4 100644
--- a/pbs-api-types/src/key_derivation.rs
+++ b/pbs-api-types/src/key_derivation.rs
@@ -3,12 +3,13 @@ use serde::{Deserialize, Serialize};
 #[cfg(feature = "enum-fallback")]
 use proxmox_fixed_string::FixedString;
 
-use proxmox_schema::api;
+use proxmox_schema::api_types::SAFE_ID_FORMAT;
+use proxmox_schema::{api, Schema, StringSchema, Updater};
 
 use crate::CERT_FINGERPRINT_SHA256_SCHEMA;
 
 #[api(default: "scrypt")]
-#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
+#[derive(Clone, Copy, Debug, Deserialize, Serialize, PartialEq)]
 #[serde(rename_all = "lowercase")]
 /// Key derivation function for password protected encryption keys.
 pub enum Kdf {
@@ -41,7 +42,7 @@ impl Default for Kdf {
         },
     },
 )]
-#[derive(Deserialize, Serialize)]
+#[derive(Clone, Default, Deserialize, Serialize, Updater, PartialEq)]
 /// Encryption Key Information
 pub struct KeyInfo {
     /// Path to key (if stored in a file)
@@ -59,3 +60,34 @@ pub struct KeyInfo {
     #[serde(skip_serializing_if = "Option::is_none")]
     pub hint: Option<String>,
 }
+
+/// ID to uniquely identify an encryption/decryption key.
+pub const CRYPT_KEY_ID_SCHEMA: Schema =
+    StringSchema::new("ID to uniquely identify encryption/decryption key")
+        .format(&SAFE_ID_FORMAT)
+        .min_length(3)
+        .max_length(32)
+        .schema();
+
+#[api(
+    properties: {
+        id: {
+            schema: CRYPT_KEY_ID_SCHEMA,
+        },
+        info: {
+            type: KeyInfo,
+        },
+    },
+)]
+#[derive(Clone, Default, Deserialize, Serialize, Updater, PartialEq)]
+#[serde(rename_all = "kebab-case")]
+/// Encryption/Decryption Key Info with ID.
+pub struct CryptKey {
+    #[updater(skip)]
+    pub id: String,
+    #[serde(flatten)]
+    pub info: KeyInfo,
+    /// Timestamp when key was archived (not set if key is active).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub archived_at: Option<i64>,
+}
diff --git a/pbs-api-types/src/lib.rs b/pbs-api-types/src/lib.rs
index 54547291..2f5dfea6 100644
--- a/pbs-api-types/src/lib.rs
+++ b/pbs-api-types/src/lib.rs
@@ -104,7 +104,7 @@ mod jobs;
 pub use jobs::*;
 
 mod key_derivation;
-pub use key_derivation::{Kdf, KeyInfo};
+pub use key_derivation::{CryptKey, Kdf, KeyInfo, CRYPT_KEY_ID_SCHEMA};
 
 mod maintenance;
 pub use maintenance::*;
-- 
2.47.3





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox v2 02/27] pbs-api-types: sync job: add optional cryptographic keys to config
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox v2 01/27] pbs-api-types: define en-/decryption key type and schema Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 03/27] datastore: blob: implement async reader for data blobs Christian Ebner
                   ` (24 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Allows configuring the active encryption key, used to encrypt
backups when performing sync jobs in push direction.

Further, allows associating keys with a sync job, used as decryption
keys for pull sync jobs for snapshots with matching key fingerprint.
For push sync jobs the associated keys are used to keep track of
previously in-use encryption keys, ensuring that they are only
removable if the user actually removed the association. This is
used as a safety net against accidental key deletion.

The field name `associated-key` was chosen since the sync job config
stores the list items on separate lines with property name, so the
resulting config will be structured as shown:
```
sync: encrypt-sync
	active-encryption-key key2
	associated-key key1
	associated-key key0
	...
```

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-api-types/src/jobs.rs | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/pbs-api-types/src/jobs.rs b/pbs-api-types/src/jobs.rs
index 7e6dfb94..59f2820f 100644
--- a/pbs-api-types/src/jobs.rs
+++ b/pbs-api-types/src/jobs.rs
@@ -13,8 +13,9 @@ use proxmox_schema::*;
 use crate::{
     Authid, BackupNamespace, BackupType, NotificationMode, RateLimitConfig, Userid,
     BACKUP_GROUP_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_NS_RE, DATASTORE_SCHEMA,
-    DRIVE_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PROXMOX_SAFE_ID_FORMAT,
-    PROXMOX_SAFE_ID_REGEX_STR, REMOTE_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
+    DRIVE_NAME_SCHEMA, CRYPT_KEY_ID_SCHEMA, MEDIA_POOL_NAME_SCHEMA,
+    NS_MAX_DEPTH_REDUCED_SCHEMA, PROXMOX_SAFE_ID_FORMAT, PROXMOX_SAFE_ID_REGEX_STR,
+    REMOTE_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
 };
 
 const_regex! {
@@ -664,6 +665,18 @@ pub const UNMOUNT_ON_SYNC_DONE_SCHEMA: Schema =
             type: SyncDirection,
             optional: true,
         },
+        "active-encryption-key": {
+            schema: CRYPT_KEY_ID_SCHEMA,
+            optional: true,
+        },
+        "associated-key": {
+            type: Array,
+            description: "List of cryptographic keys associated with sync job.",
+            items: {
+                schema: CRYPT_KEY_ID_SCHEMA,
+            },
+            optional: true,
+        },
     }
 )]
 #[derive(Serialize, Deserialize, Clone, Updater, PartialEq)]
@@ -709,6 +722,10 @@ pub struct SyncJobConfig {
     pub unmount_on_done: Option<bool>,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub sync_direction: Option<SyncDirection>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub active_encryption_key: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub associated_key: Option<Vec<String>>,
 }
 
 impl SyncJobConfig {
-- 
2.47.3





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 03/27] datastore: blob: implement async reader for data blobs
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox v2 01/27] pbs-api-types: define en-/decryption key type and schema Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox v2 02/27] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 04/27] datastore: manifest: add helper for change detection fingerprint Christian Ebner
                   ` (23 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

So it can be used to load the blob file when server side encryption
is required during push sync jobs, which run in an async context.

Factor out the DataBlob creation and CRC check, which is identical
for the sync and async reader implementations.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-datastore/src/data_blob.rs | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/pbs-datastore/src/data_blob.rs b/pbs-datastore/src/data_blob.rs
index 0c05c5a40..22b5dfaf9 100644
--- a/pbs-datastore/src/data_blob.rs
+++ b/pbs-datastore/src/data_blob.rs
@@ -2,6 +2,7 @@ use std::io::Write;
 
 use anyhow::{bail, Error};
 use openssl::symm::{decrypt_aead, Mode};
+use tokio::io::{AsyncRead, AsyncReadExt};
 
 use proxmox_io::{ReadExt, WriteExt};
 
@@ -238,15 +239,26 @@ impl DataBlob {
         }
     }
 
-    /// Load blob from ``reader``, verify CRC
+    /// Load data blob via given sync ``reader`` and verify its CRC
     pub fn load_from_reader(reader: &mut dyn std::io::Read) -> Result<Self, Error> {
         let mut data = Vec::with_capacity(1024 * 1024);
         reader.read_to_end(&mut data)?;
+        Self::from_raw_with_crc_check(data)
+    }
 
-        let blob = Self::from_raw(data)?;
+    /// Load data blob via given async ``reader`` and verify its CRC
+    pub async fn load_from_async_reader(
+        reader: &mut (dyn AsyncRead + Unpin + Send),
+    ) -> Result<Self, Error> {
+        let mut data = Vec::with_capacity(1024 * 1024);
+        reader.read_to_end(&mut data).await?;
+        Self::from_raw_with_crc_check(data)
+    }
 
+    /// Generates a data blob from raw input data and checks for matching CRC in header
+    fn from_raw_with_crc_check(raw_data: Vec<u8>) -> Result<Self, Error> {
+        let blob = Self::from_raw(raw_data)?;
         blob.verify_crc()?;
-
         Ok(blob)
     }
 
-- 
2.47.3





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 04/27] datastore: manifest: add helper for change detection fingerprint
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (2 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 03/27] datastore: blob: implement async reader for data blobs Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 05/27] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
                   ` (22 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Generates a checksum over the file names and checksums of the
manifest, to be stored in the encrypted snapshot's manifest when doing
server side sync push encryption. The fingerprint is then used on pull
to detect whether a manifest's file contents did not change, in which
case the snapshot can be skipped (no resync required). The usual
byte-wise comparison is not feasible for this.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-datastore/src/manifest.rs | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/pbs-datastore/src/manifest.rs b/pbs-datastore/src/manifest.rs
index fb734a674..5f7d3efcc 100644
--- a/pbs-datastore/src/manifest.rs
+++ b/pbs-datastore/src/manifest.rs
@@ -236,6 +236,26 @@ impl BackupManifest {
         }
         Ok(Some(serde_json::from_value::<SnapshotVerifyState>(verify)?))
     }
+
+    /// Set the fingerprint used to detect changes for encrypted -> decrypted syncs
+    pub fn set_change_detection_fingerprint(
+        &mut self,
+        fingerprint: &[u8; 32],
+    ) -> Result<(), Error> {
+        let fp_str = hex::encode(fingerprint);
+        self.unprotected["change-detection-fingerprint"] = serde_json::to_value(fp_str)?;
+        Ok(())
+    }
+
+    /// Generate the fingerprint used to detect changes for encrypted -> decrypted syncs
+    pub fn change_detection_fingerprint(&self) -> Result<[u8; 32], Error> {
+        let mut csum = openssl::sha::Sha256::new();
+        for file_info in self.files() {
+            csum.update(file_info.filename.as_bytes());
+            csum.update(&file_info.csum);
+        }
+        Ok(csum.finish())
+    }
 }
 
 impl TryFrom<super::DataBlob> for BackupManifest {
-- 
2.47.3





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 05/27] pbs-key-config: introduce store_with() for KeyConfig
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (3 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 04/27] datastore: manifest: add helper for change detection fingerprint Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 06/27] pbs-config: implement encryption key config handling Christian Ebner
                   ` (21 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Extends the behavior of KeyConfig::store() to allow optionally
specifying the mode and ownership of the file the key is stored with.
If none of the optional parameters are set, the behavior defaults to
that of KeyConfig::store(), which is therefore reimplemented on top
of store_with().

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-key-config/src/lib.rs | 36 ++++++++++++++++++++++++++++++++----
 1 file changed, 32 insertions(+), 4 deletions(-)

diff --git a/pbs-key-config/src/lib.rs b/pbs-key-config/src/lib.rs
index 0bcd5338c..258fb197b 100644
--- a/pbs-key-config/src/lib.rs
+++ b/pbs-key-config/src/lib.rs
@@ -1,7 +1,10 @@
 use std::io::Write;
+use std::os::fd::AsRawFd;
 use std::path::Path;
 
 use anyhow::{bail, format_err, Context, Error};
+use nix::sys::stat::Mode;
+use nix::unistd::{Gid, Uid};
 use serde::{Deserialize, Serialize};
 
 use proxmox_lang::try_block;
@@ -236,24 +239,49 @@ impl KeyConfig {
 
     /// Store a KeyConfig to path
     pub fn store<P: AsRef<Path>>(&self, path: P, replace: bool) -> Result<(), Error> {
+        self.store_with(path, replace, None, None, None)
+    }
+
+    /// Store a KeyConfig to path with given ownership and mode.
+    /// Requires the process to run with permissions to do so.
+    pub fn store_with<P: AsRef<Path>>(
+        &self,
+        path: P,
+        replace: bool,
+        mode: Option<Mode>,
+        owner: Option<Uid>,
+        group: Option<Gid>,
+    ) -> Result<(), Error> {
         let path: &Path = path.as_ref();
 
         let data = serde_json::to_string(self)?;
 
         try_block!({
             if replace {
-                let mode = nix::sys::stat::Mode::S_IRUSR | nix::sys::stat::Mode::S_IWUSR;
-                replace_file(path, data.as_bytes(), CreateOptions::new().perm(mode), true)?;
+                let mode =
+                    mode.unwrap_or(nix::sys::stat::Mode::S_IRUSR | nix::sys::stat::Mode::S_IWUSR);
+                let mut create_options = CreateOptions::new().perm(mode);
+                if let Some(owner) = owner {
+                    create_options = create_options.owner(owner);
+                }
+                if let Some(group) = group {
+                    create_options = create_options.group(group);
+                }
+                replace_file(path, data.as_bytes(), create_options, true)?;
             } else {
                 use std::os::unix::fs::OpenOptionsExt;
-
+                let mode = mode.map(|m| m.bits()).unwrap_or(0o0600);
                 let mut file = std::fs::OpenOptions::new()
                     .write(true)
-                    .mode(0o0600)
+                    .mode(mode)
                     .create_new(true)
                     .open(path)?;
 
                 file.write_all(data.as_bytes())?;
+
+                let fd = file.as_raw_fd();
+                nix::unistd::fchown(fd, owner, group)?;
+                nix::unistd::fsync(fd)?;
             }
 
             Ok(())
-- 
2.47.3





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 06/27] pbs-config: implement encryption key config handling
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (4 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 05/27] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 07/27] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
                   ` (20 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Implements the handling of the encryption key configuration and key
files.

Encryption keys with their secret key material are stored in
individual files, while the config stores a copy of the key info, so
the actual key only needs to be loaded when accessed, not for
listing.

The key's fingerprint is compared to the one stored in the config
when loading the key, in order to detect possible mismatches.

Races between key creation and deletion are avoided by locking both
the config and the individual key file.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-config/Cargo.toml             |   2 +
 pbs-config/src/encryption_keys.rs | 210 ++++++++++++++++++++++++++++++
 pbs-config/src/lib.rs             |   1 +
 3 files changed, 213 insertions(+)
 create mode 100644 pbs-config/src/encryption_keys.rs

diff --git a/pbs-config/Cargo.toml b/pbs-config/Cargo.toml
index ea2496843..04687cb59 100644
--- a/pbs-config/Cargo.toml
+++ b/pbs-config/Cargo.toml
@@ -20,6 +20,7 @@ serde.workspace = true
 serde_json.workspace = true
 
 proxmox-http.workspace = true
+proxmox-lang.workspace = true
 proxmox-notify.workspace = true
 proxmox-router = { workspace = true, default-features = false }
 proxmox-s3-client.workspace = true
@@ -32,3 +33,4 @@ proxmox-uuid.workspace = true
 
 pbs-api-types.workspace = true
 pbs-buildcfg.workspace = true
+pbs-key-config.workspace = true
diff --git a/pbs-config/src/encryption_keys.rs b/pbs-config/src/encryption_keys.rs
new file mode 100644
index 000000000..e8e6c8a20
--- /dev/null
+++ b/pbs-config/src/encryption_keys.rs
@@ -0,0 +1,210 @@
+use std::collections::HashMap;
+use std::sync::LazyLock;
+
+use anyhow::{bail, format_err, Error};
+use nix::{sys::stat::Mode, unistd::Uid};
+use serde::Deserialize;
+
+use pbs_api_types::{CryptKey, KeyInfo, CRYPT_KEY_ID_SCHEMA};
+use proxmox_schema::ApiType;
+use proxmox_section_config::{SectionConfig, SectionConfigData, SectionConfigPlugin};
+use proxmox_sys::fs::CreateOptions;
+
+use pbs_buildcfg::configdir;
+use pbs_key_config::KeyConfig;
+
+use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
+
+pub static CONFIG: LazyLock<SectionConfig> = LazyLock::new(init);
+
+fn init() -> SectionConfig {
+    let obj_schema = CryptKey::API_SCHEMA.unwrap_all_of_schema();
+    let plugin = SectionConfigPlugin::new(
+        ENCRYPTION_KEYS_CFG_TYPE_ID.to_string(),
+        Some(String::from("id")),
+        obj_schema,
+    );
+    let mut config = SectionConfig::new(&CRYPT_KEY_ID_SCHEMA);
+    config.register_plugin(plugin);
+
+    config
+}
+
+/// Configuration file location for encryption keys.
+pub const ENCRYPTION_KEYS_CFG_FILENAME: &str = configdir!("/encryption-keys.cfg");
+/// Configuration lock file used to prevent concurrent configuration update operations.
+pub const ENCRYPTION_KEYS_CFG_LOCKFILE: &str = configdir!("/.encryption-keys.lck");
+/// Directory where to store the actual encryption keys
+pub const ENCRYPTION_KEYS_DIR: &str = configdir!("/encryption-keys/");
+
+/// Config type for encryption key config entries
+pub const ENCRYPTION_KEYS_CFG_TYPE_ID: &str = "sync-key";
+
+/// Get exclusive lock for encryption key configuration update.
+pub fn lock_config() -> Result<BackupLockGuard, Error> {
+    open_backup_lockfile(ENCRYPTION_KEYS_CFG_LOCKFILE, None, true)
+}
+
+/// Load encryption key configuration from file.
+pub fn config() -> Result<(SectionConfigData, [u8; 32]), Error> {
+    let content = proxmox_sys::fs::file_read_optional_string(ENCRYPTION_KEYS_CFG_FILENAME)?;
+    let content = content.unwrap_or_default();
+    let digest = openssl::sha::sha256(content.as_bytes());
+    let data = CONFIG.parse(ENCRYPTION_KEYS_CFG_FILENAME, &content)?;
+    Ok((data, digest))
+}
+
+/// Shell completion helper to complete encryption key id's as found in the config.
+pub fn complete_encryption_key_id(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+    match config() {
+        Ok((data, _digest)) => data.sections.keys().map(|id| id.to_string()).collect(),
+        Err(_) => Vec::new(),
+    }
+}
+
+/// Load the encryption key from file.
+///
+/// Looks up the key in the config and tries to load it from the given file.
+/// Upon loading, the config key fingerprint is compared to the one stored in the key
+/// file. Fail to load archived keys if flag is set.
+pub fn load_key_config(id: &str, fail_on_archived: bool) -> Result<KeyConfig, Error> {
+    let _lock = lock_config()?;
+    let (config, _digest) = config()?;
+
+    let key: CryptKey = config.lookup(ENCRYPTION_KEYS_CFG_TYPE_ID, id)?;
+    if fail_on_archived && key.archived_at.is_some() {
+        bail!("cannot load archived encryption key {id}");
+    }
+    let key_config = match &key.info.path {
+        Some(path) => KeyConfig::load(path)?,
+        None => bail!("missing path for encryption key {id}"),
+    };
+
+    let stored_key_info = KeyInfo::from(&key_config);
+
+    if key.info.fingerprint != stored_key_info.fingerprint {
+        bail!("loaded key does not match the config for key {id}");
+    }
+
+    Ok(key_config)
+}
+
+/// Store the encryption key to file.
+///
+/// Inserts the key in the config and stores it to the given file.
+pub fn store_key(id: &str, key: &KeyConfig) -> Result<(), Error> {
+    let _lock = lock_config()?;
+    let (mut config, _digest) = config()?;
+
+    if config.sections.contains_key(id) {
+        bail!("key with id '{id}' already exists.");
+    }
+
+    let backup_user = crate::backup_user()?;
+    let dir_options = CreateOptions::new()
+        .perm(Mode::from_bits_truncate(0o0750))
+        .owner(Uid::from_raw(0))
+        .group(backup_user.gid);
+
+    proxmox_sys::fs::ensure_dir_exists(ENCRYPTION_KEYS_DIR, &dir_options, true)?;
+
+    let key_path = format!("{ENCRYPTION_KEYS_DIR}{id}.enc");
+
+    // lock to avoid race with key deletion, file ownership and permissions will
+    // be adapted by replacing file on key store below.
+    open_backup_lockfile(&key_path, None, true)?;
+
+    // assert the key file is empty (new)
+    let metadata = std::fs::metadata(&key_path)?;
+    if metadata.len() > 0 {
+        bail!("detected pre-existing key file, refusing to overwrite.");
+    }
+
+    let keyfile_mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
+
+    // replaces file and therefore drops lock on keyfile
+    key.store_with(
+        &key_path,
+        true,
+        Some(keyfile_mode),
+        Some(Uid::from_raw(0)),
+        Some(backup_user.gid),
+    )?;
+
+    let mut info = KeyInfo::from(key);
+    info.path = Some(key_path.clone());
+
+    let crypt_key = CryptKey {
+        id: id.to_string(),
+        info,
+        archived_at: None,
+    };
+
+    let result = proxmox_lang::try_block!({
+        config.set_data(id, ENCRYPTION_KEYS_CFG_TYPE_ID, crypt_key)?;
+
+        let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
+        replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())
+    });
+
+    if result.is_err() {
+        let _ = std::fs::remove_file(key_path);
+    }
+
+    result
+}
+
+/// Delete the encryption key from config.
+///
+/// Returns true if the key was removed successfully, false if there was no matching key.
+pub fn delete_key(id: &str, mut config: SectionConfigData) -> Result<bool, Error> {
+    if let Some((_, key)) = config.sections.remove(id) {
+        let key =
+            CryptKey::deserialize(key).map_err(|_err| format_err!("failed to parse key config"))?;
+
+        if key.archived_at.is_none() {
+            bail!("key still active, deleting is only possible for archived keys");
+        }
+
+        if let Some(key_path) = &key.info.path {
+            // Avoid races with key insertion
+            let _lock = open_backup_lockfile(key_path, None, true)?;
+
+            let key_config = KeyConfig::load(key_path)?;
+            let stored_key_info = KeyInfo::from(&key_config);
+            // Check the key is the expected one
+            if key.info.fingerprint != stored_key_info.fingerprint {
+                bail!("unexpected key detected in key file, refusing to delete");
+            }
+
+            let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
+            // drops config lock
+            replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())?;
+
+            // drop key file lock
+            std::fs::remove_file(key_path)?;
+            return Ok(true);
+        }
+
+        bail!("missing key file path for key '{id}'");
+    }
+    Ok(false)
+}
+
+/// Mark the key as archived by setting the `archived-at` timestamp.
+pub fn archive_key(id: &str, mut config: SectionConfigData) -> Result<(), Error> {
+    let mut key: CryptKey = config.lookup(ENCRYPTION_KEYS_CFG_TYPE_ID, id)?;
+
+    if key.archived_at.is_some() {
+        bail!("key already marked as archived");
+    }
+
+    key.archived_at = Some(proxmox_time::epoch_i64());
+
+    config.set_data(id, ENCRYPTION_KEYS_CFG_TYPE_ID, &key)?;
+    let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
+    // drops config lock
+    replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())?;
+
+    Ok(())
+}
diff --git a/pbs-config/src/lib.rs b/pbs-config/src/lib.rs
index 18b27d23a..3bdaa8fec 100644
--- a/pbs-config/src/lib.rs
+++ b/pbs-config/src/lib.rs
@@ -4,6 +4,7 @@ pub use cached_user_info::CachedUserInfo;
 pub mod datastore;
 pub mod domains;
 pub mod drive;
+pub mod encryption_keys;
 pub mod key_value;
 pub mod media_pool;
 pub mod metrics;
-- 
2.47.3





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 07/27] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (5 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 06/27] pbs-config: implement encryption key config handling Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 08/27] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
                   ` (19 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Adds a dedicated subpath for permission checks on encryption key
configurations in the acl path components check. This allows setting
permissions either on the whole encryption keys config or on
individual encryption key ids.
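The accepted path shapes can be sketched as follows; this is an
illustrative stand-in for the component matching done in
check_acl_path, not the actual implementation:

```rust
// Simplified sketch: accept the encryption-keys collection path and
// per-key subpaths, reject anything deeper. Real PBS validates many
// more path prefixes; only the shape relevant here is modeled.
fn check_system_acl_path(path: &str) -> bool {
    let components: Vec<&str> = path.split('/').filter(|c| !c.is_empty()).collect();
    matches!(
        components.as_slice(),
        ["system"] | ["system", "encryption-keys"] | ["system", "encryption-keys", _]
    )
}

fn main() {
    assert!(check_system_acl_path("/system/encryption-keys"));
    assert!(check_system_acl_path("/system/encryption-keys/my-key"));
    assert!(!check_system_acl_path("/system/encryption-keys/my-key/deeper"));
}
```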

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 pbs-config/src/acl.rs | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pbs-config/src/acl.rs b/pbs-config/src/acl.rs
index 2abbf5802..d18a346ff 100644
--- a/pbs-config/src/acl.rs
+++ b/pbs-config/src/acl.rs
@@ -127,8 +127,8 @@ pub fn check_acl_path(path: &str) -> Result<(), Error> {
                         _ => {}
                     }
                 }
-                "s3-endpoint" => {
-                    // /system/s3-endpoint/{id}
+                "s3-endpoint" | "encryption-keys" => {
+                    // /system/<matched-component>/{id}
                     if components_len <= 3 {
                         return Ok(());
                     }
-- 
2.47.3






* [PATCH proxmox-backup v2 08/27] ui: expose 'encryption-keys' as acl subpath for 'system'
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (6 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 07/27] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 09/27] sync: add helper to check encryption key acls and load key Christian Ebner
                   ` (18 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Allows selecting the 'encryption-keys' subpath to restrict
permissions to either the full encryption keys configuration or the
matching key id.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 www/form/PermissionPathSelector.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/form/PermissionPathSelector.js b/www/form/PermissionPathSelector.js
index e5f2aec46..64de42888 100644
--- a/www/form/PermissionPathSelector.js
+++ b/www/form/PermissionPathSelector.js
@@ -15,6 +15,7 @@ Ext.define('PBS.data.PermissionPathsStore', {
         { value: '/system' },
         { value: '/system/certificates' },
         { value: '/system/disks' },
+        { value: '/system/encryption-keys' },
         { value: '/system/log' },
         { value: '/system/network' },
         { value: '/system/network/dns' },
-- 
2.47.3






* [PATCH proxmox-backup v2 09/27] sync: add helper to check encryption key acls and load key
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (7 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 08/27] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 10/27] api: config: add endpoints for encryption key manipulation Christian Ebner
                   ` (17 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Introduces a common helper function to be used when loading an
encryption key in sync jobs for either push or pull direction.

For the given user, access to the key with the provided id is checked
and the key config containing the secret is loaded from the file
referenced by the config.
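The privilege part of the helper boils down to a bitmask check; a
minimal sketch, using a placeholder bit value instead of PBS's actual
PRIV_SYS_MODIFY constant:

```rust
// Hypothetical privilege bits for illustration; PBS defines its own
// constants in pbs-api-types.
const PRIV_SYS_MODIFY: u64 = 1 << 4;
const PRIV_SYS_AUDIT: u64 = 1 << 5;

// A user may access the key only if all required bits are set on the
// ACL path /system/encryption-keys/<id>.
fn has_priv(user_privs: u64, required: u64) -> bool {
    user_privs & required == required
}

fn main() {
    let privs = PRIV_SYS_MODIFY | PRIV_SYS_AUDIT;
    assert!(has_priv(privs, PRIV_SYS_MODIFY));
    assert!(!has_priv(PRIV_SYS_AUDIT, PRIV_SYS_MODIFY));
}
```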

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/sync.rs | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/src/server/sync.rs b/src/server/sync.rs
index aedf4a271..9c070cd9c 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -21,12 +21,14 @@ use proxmox_router::HttpError;
 use pbs_api_types::{
     Authid, BackupDir, BackupGroup, BackupNamespace, CryptMode, GroupListItem, SnapshotListItem,
     SyncDirection, SyncJobConfig, VerifyState, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME,
-    MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+    MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_SYS_MODIFY,
 };
 use pbs_client::{BackupReader, BackupRepository, HttpClient, RemoteChunkReader};
+use pbs_config::CachedUserInfo;
 use pbs_datastore::data_blob::DataBlob;
 use pbs_datastore::read_chunk::AsyncReadChunk;
 use pbs_datastore::{BackupManifest, DataStore, ListNamespacesRecursive, LocalChunkReader};
+use pbs_tools::crypt_config::CryptConfig;
 
 use crate::backup::ListAccessibleBackupGroups;
 use crate::server::jobstate::Job;
@@ -791,3 +793,28 @@ pub(super) fn exclude_not_verified_or_encrypted(
 
     false
 }
+
+/// Helper to check that the user has access to the given encryption key and
+/// to load it using the passphrase from the config.
+pub(crate) fn check_privs_and_load_key_config(
+    key_id: &str,
+    user: &Authid,
+    fail_on_archived: bool,
+) -> Result<Option<Arc<CryptConfig>>, Error> {
+    let user_info = CachedUserInfo::new()?;
+    user_info.check_privs(
+        user,
+        &["system", "encryption-keys", key_id],
+        PRIV_SYS_MODIFY,
+        true,
+    )?;
+
+    let key_config = pbs_config::encryption_keys::load_key_config(key_id, fail_on_archived)?;
+    // pass empty passphrase to get raw key material of unprotected key
+    let (enc_key, _created, fingerprint) = key_config.decrypt(&|| Ok(Vec::new()))?;
+
+    info!("Loaded encryption key '{key_id}' with fingerprint '{fingerprint}'");
+
+    let crypt_config = Arc::new(CryptConfig::new(enc_key)?);
+    Ok(Some(crypt_config))
+}
-- 
2.47.3






* [PATCH proxmox-backup v2 10/27] api: config: add endpoints for encryption key manipulation
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (8 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 09/27] sync: add helper to check encryption key acls and load key Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 11/27] api: config: check sync owner has access to en-/decryption keys Christian Ebner
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Defines the api endpoints for listing existing keys as defined in
the config, creating new keys, and archiving or removing keys.

New keys are either generated on the server side or uploaded as json
string. Password protected keys are currently not supported and will
be added at a later stage, once a general mechanism for secrets
handling is implemented for PBS.

Keys are archived by setting the `archived-at` timestamp, marking
them as no longer usable for encrypting new content.

Removing a key requires it to be archived first. Further, removal is
only possible when the key is no longer referenced by any sync job
config, protecting against accidental deletion of an in-use key.
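The resulting lifecycle constraint (archive first, delete only when
no sync job references the key) can be modeled in a small sketch; the
types are illustrative, not the actual PBS structs:

```rust
// Minimal model of the deletion preconditions enforced by the API:
// an active (non-archived) key and an in-use key are both refused.
struct Key {
    archived_at: Option<i64>,
}

fn try_delete(key: &Key, referencing_job: Option<&str>) -> Result<(), String> {
    if key.archived_at.is_none() {
        return Err("key still active, deleting is only possible for archived keys".into());
    }
    if let Some(job_id) = referencing_job {
        return Err(format!("encryption key in use by sync job {job_id}"));
    }
    Ok(())
}

fn main() {
    // active key: refused
    assert!(try_delete(&Key { archived_at: None }, None).is_err());
    // archived but still referenced: refused
    assert!(try_delete(&Key { archived_at: Some(1_700_000_000) }, Some("job-1")).is_err());
    // archived and unreferenced: allowed
    assert!(try_delete(&Key { archived_at: Some(1_700_000_000) }, None).is_ok());
}
```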

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/api2/config/encryption_keys.rs | 203 +++++++++++++++++++++++++++++
 src/api2/config/mod.rs             |   2 +
 2 files changed, 205 insertions(+)
 create mode 100644 src/api2/config/encryption_keys.rs

diff --git a/src/api2/config/encryption_keys.rs b/src/api2/config/encryption_keys.rs
new file mode 100644
index 000000000..f08f54f8e
--- /dev/null
+++ b/src/api2/config/encryption_keys.rs
@@ -0,0 +1,203 @@
+use anyhow::{bail, format_err, Error};
+use serde_json::Value;
+
+use proxmox_router::{Permission, Router, RpcEnvironment};
+use proxmox_schema::api;
+
+use pbs_api_types::{
+    Authid, CryptKey, SyncJobConfig, CRYPT_KEY_ID_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY,
+    PROXMOX_CONFIG_DIGEST_SCHEMA,
+};
+
+use pbs_config::encryption_keys::{self, ENCRYPTION_KEYS_CFG_TYPE_ID};
+use pbs_config::CachedUserInfo;
+
+use pbs_key_config::KeyConfig;
+
+#[api(
+    input: {
+        properties: {
+            "include-archived": {
+                type: bool,
+                description: "List also keys which have been archived.",
+                optional: true,
+                default: false,
+            },
+        },
+    },
+    returns: {
+        description: "List of configured encryption keys.",
+        type: Array,
+        items: { type: CryptKey },
+    },
+    access: {
+        permission: &Permission::Anybody,
+        description: "List configured encryption keys filtered by Sys.Audit privileges",
+    },
+)]
+/// List configured encryption keys.
+pub fn list_keys(
+    include_archived: bool,
+    _param: Value,
+    rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<CryptKey>, Error> {
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+    let user_info = CachedUserInfo::new()?;
+
+    let (config, digest) = encryption_keys::config()?;
+
+    let list: Vec<CryptKey> = config.convert_to_typed_array(ENCRYPTION_KEYS_CFG_TYPE_ID)?;
+    let list = list
+        .into_iter()
+        .filter(|key| {
+            if !include_archived && key.archived_at.is_some() {
+                return false;
+            }
+            let privs = user_info.lookup_privs(&auth_id, &["system", "encryption-keys", &key.id]);
+            privs & PRIV_SYS_AUDIT != 0
+        })
+        .collect();
+
+    rpcenv["digest"] = hex::encode(digest).into();
+
+    Ok(list)
+}
+
+#[api(
+    protected: true,
+    input: {
+        properties: {
+            id: {
+                schema: CRYPT_KEY_ID_SCHEMA,
+            },
+            key: {
+                description: "Use provided key instead of creating new one.",
+                type: String,
+                optional: true,
+            },
+        },
+    },
+    access: {
+        permission: &Permission::Privilege(&["system", "encryption-keys"], PRIV_SYS_MODIFY, false),
+    },
+)]
+/// Create new encryption key instance or use the provided one.
+pub fn create_key(
+    id: String,
+    key: Option<String>,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<KeyConfig, Error> {
+    let key_config = if let Some(key) = &key {
+        serde_json::from_str(key)
+            .map_err(|err| format_err!("failed to parse provided key: {err}"))?
+    } else {
+        let mut raw_key = [0u8; 32];
+        proxmox_sys::linux::fill_with_random_data(&mut raw_key)?;
+        KeyConfig::without_password(raw_key)?
+    };
+
+    if key_config.kdf.is_some() {
+        bail!("protected keys not supported");
+    }
+
+    encryption_keys::store_key(&id, &key_config)?;
+
+    Ok(key_config)
+}
+
+#[api(
+    protected: true,
+    input: {
+        properties: {
+            id: {
+                schema: CRYPT_KEY_ID_SCHEMA,
+            },
+            digest: {
+                optional: true,
+                schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+            },
+        },
+    },
+    access: {
+        permission: &Permission::Privilege(&["system", "encryption-keys", "{id}"], PRIV_SYS_MODIFY, false),
+    },
+)]
+/// Mark the key with the given id as archived, making it no longer usable to encrypt contents.
+pub fn archive_key(
+    id: String,
+    digest: Option<String>,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+    let _lock = encryption_keys::lock_config()?;
+    let (config, expected_digest) = encryption_keys::config()?;
+
+    pbs_config::detect_modified_configuration_file(digest, &expected_digest)?;
+
+    encryption_keys::archive_key(&id, config)?;
+
+    Ok(())
+}
+
+#[api(
+    protected: true,
+    input: {
+        properties: {
+            id: {
+                schema: CRYPT_KEY_ID_SCHEMA,
+            },
+            digest: {
+                optional: true,
+                schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+            },
+        },
+    },
+    access: {
+        permission: &Permission::Privilege(&["system", "encryption-keys", "{id}"], PRIV_SYS_MODIFY, false),
+    },
+)]
+/// Remove encryption key.
+pub fn delete_key(
+    id: String,
+    digest: Option<String>,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+    let _lock = encryption_keys::lock_config()?;
+    let (config, expected_digest) = encryption_keys::config()?;
+
+    pbs_config::detect_modified_configuration_file(digest, &expected_digest)?;
+
+    if let Some(job_id) = encryption_key_in_use(&id)
+        .map_err(|_err| format_err!("failed to check if encryption key is in-use"))?
+    {
+        bail!("encryption key in use by sync job {job_id}");
+    }
+
+    encryption_keys::delete_key(&id, config)?;
+
+    Ok(())
+}
+
+fn encryption_key_in_use(id: &str) -> Result<Option<String>, Error> {
+    let (config, _digest) = pbs_config::sync::config()?;
+
+    let list: Vec<SyncJobConfig> = config.convert_to_typed_array("sync")?;
+    let job_with_key = list.iter().find(|job| {
+        job.active_encryption_key.as_deref() == Some(id)
+            || job
+                .associated_key
+                .as_deref()
+                .unwrap_or(&[])
+                .contains(&id.to_string())
+    });
+    let job_id = job_with_key.map(|job| job.id.clone());
+    Ok(job_id)
+}
+
+const ITEM_ROUTER: Router = Router::new()
+    .post(&API_METHOD_ARCHIVE_KEY)
+    .delete(&API_METHOD_DELETE_KEY);
+
+pub const ROUTER: Router = Router::new()
+    .get(&API_METHOD_LIST_KEYS)
+    .post(&API_METHOD_CREATE_KEY)
+    .match_all("id", &ITEM_ROUTER);
diff --git a/src/api2/config/mod.rs b/src/api2/config/mod.rs
index 1cd9ead76..0281bcfae 100644
--- a/src/api2/config/mod.rs
+++ b/src/api2/config/mod.rs
@@ -9,6 +9,7 @@ pub mod acme;
 pub mod changer;
 pub mod datastore;
 pub mod drive;
+pub mod encryption_keys;
 pub mod media_pool;
 pub mod metrics;
 pub mod notifications;
@@ -28,6 +29,7 @@ const SUBDIRS: SubdirMap = &sorted!([
     ("changer", &changer::ROUTER),
     ("datastore", &datastore::ROUTER),
     ("drive", &drive::ROUTER),
+    ("encryption-keys", &encryption_keys::ROUTER),
     ("media-pool", &media_pool::ROUTER),
     ("metrics", &metrics::ROUTER),
     ("notifications", &notifications::ROUTER),
-- 
2.47.3






* [PATCH proxmox-backup v2 11/27] api: config: check sync owner has access to en-/decryption keys
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (9 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 10/27] api: config: add endpoints for encryption key manipulation Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 12/27] api: config: allow encryption key manipulation for sync job Christian Ebner
                   ` (15 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

So a sync job cannot be configured with a non-existing or
inaccessible key for the given sync owner/local-user.

Key access is checked by loading the key from the keyfile.
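A sketch of the validation this adds at job creation time, with
check_key_access standing in for the actual privilege and key-loading
check: for push jobs only the active encryption key is validated, for
pull jobs every associated key is.

```rust
enum SyncDirection {
    Push,
    Pull,
}

// Stand-in for the real check, which loads the key via the config and
// verifies ACLs; here a plain list of accessible ids suffices.
fn check_key_access(key_id: &str, accessible: &[&str]) -> Result<(), String> {
    if accessible.contains(&key_id) {
        Ok(())
    } else {
        Err(format!("no such key or cannot access key '{key_id}'"))
    }
}

fn validate_job_keys(
    direction: SyncDirection,
    active_key: Option<&str>,
    associated: &[&str],
    accessible: &[&str],
) -> Result<(), String> {
    match direction {
        SyncDirection::Push => {
            if let Some(key) = active_key {
                check_key_access(key, accessible)?;
            }
        }
        SyncDirection::Pull => {
            for &key in associated {
                check_key_access(key, accessible)?;
            }
        }
    }
    Ok(())
}

fn main() {
    let accessible = ["key-a"];
    assert!(validate_job_keys(SyncDirection::Push, Some("key-a"), &[], &accessible).is_ok());
    assert!(validate_job_keys(SyncDirection::Pull, None, &["key-b"], &accessible).is_err());
}
```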

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/api2/config/sync.rs | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index 67fa3182c..51be7e208 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -62,6 +62,16 @@ fn is_correct_owner(auth_id: &Authid, job: &SyncJobConfig) -> bool {
     }
 }
 
+// Check access and test key loading works as expected for sync job owner/user.
+fn sync_user_can_access_optional_key(key_id: Option<&str>, owner: &Authid) -> Result<(), Error> {
+    if let Some(key_id) = key_id {
+        if crate::server::sync::check_privs_and_load_key_config(key_id, owner, false).is_err() {
+            bail!("no such key or cannot access key '{key_id}'");
+        }
+    }
+    Ok(())
+}
+
 /// checks whether user can run the corresponding sync job, depending on sync direction
 ///
 /// namespace creation/deletion ACL and backup group ownership checks happen in the pull/push code
@@ -251,6 +261,19 @@ pub fn create_sync_job(
         }
     }
 
+    let owner = config
+        .owner
+        .as_ref()
+        .unwrap_or_else(|| Authid::root_auth_id());
+
+    if sync_direction == SyncDirection::Push {
+        sync_user_can_access_optional_key(config.active_encryption_key.as_deref(), owner)?;
+    } else {
+        for key in config.associated_key.as_deref().unwrap_or(&[]) {
+            sync_user_can_access_optional_key(Some(key), owner)?;
+        }
+    }
+
     let (mut section_config, _digest) = sync::config()?;
 
     if section_config.sections.contains_key(&config.id) {
-- 
2.47.3






* [PATCH proxmox-backup v2 12/27] api: config: allow encryption key manipulation for sync job
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (10 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 11/27] api: config: check sync owner has access to en-/decryption keys Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 13/27] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
                   ` (14 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

The SyncJobConfig was extended to include an optional active
encryption key, defaulting to none. Extend the api config update
handler to also set, update or delete the active encryption key
based on the provided parameters.

Associated keys will also be updated accordingly, however it is
ensured that the previously active key remains associated if it is
changed.

The active encryption key will be used to encrypt unencrypted backup
snapshots during push sync. Any of the associated keys will be used
to decrypt snapshots with a matching key fingerprint during pull sync.
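The rotation bookkeeping described above can be sketched as follows;
the field types are simplified compared to the actual SyncJobConfig:

```rust
// When the active key is replaced, the previous one is moved into the
// associated-keys list so snapshots encrypted with it stay decryptable.
fn rotate_key(active: &mut Option<String>, associated: &mut Vec<String>, new_key: String) {
    if let Some(prev) = active.take() {
        if !associated.contains(&prev) {
            associated.push(prev);
        }
    }
    *active = Some(new_key);
}

fn main() {
    let mut active = Some(String::from("key-a"));
    let mut associated: Vec<String> = Vec::new();

    rotate_key(&mut active, &mut associated, String::from("key-b"));

    assert_eq!(active.as_deref(), Some("key-b"));
    assert_eq!(associated, vec![String::from("key-a")]);
}
```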

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/api2/config/sync.rs | 55 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 52 insertions(+), 3 deletions(-)

diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index 51be7e208..3b92958c6 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -324,6 +324,22 @@ pub fn read_sync_job(id: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Sync
     Ok(sync_job)
 }
 
+fn keep_previous_key_as_associated(
+    current: &SyncJobConfig,
+    associated_keys: &mut Option<Vec<String>>,
+) {
+    if let Some(prev) = &current.active_encryption_key {
+        match associated_keys {
+            Some(ref mut keys) => {
+                if !keys.contains(prev) {
+                    keys.push(prev.clone());
+                }
+            }
+            None => *associated_keys = Some(vec![prev.clone()]),
+        }
+    }
+}
+
 #[api()]
 #[derive(Serialize, Deserialize)]
 #[serde(rename_all = "kebab-case")]
@@ -367,6 +383,10 @@ pub enum DeletableProperty {
     UnmountOnDone,
     /// Delete the sync_direction property,
     SyncDirection,
+    /// Delete the active encryption key property,
+    ActiveEncryptionKey,
+    /// Delete associated key property,
+    AssociatedKey,
 }
 
 #[api(
@@ -408,7 +428,7 @@ required sync job owned by user. Additionally, remove vanished requires RemoteDa
 #[allow(clippy::too_many_arguments)]
 pub fn update_sync_job(
     id: String,
-    update: SyncJobConfigUpdater,
+    mut update: SyncJobConfigUpdater,
     delete: Option<Vec<DeletableProperty>>,
     digest: Option<String>,
     rpcenv: &mut dyn RpcEnvironment,
@@ -431,6 +451,7 @@ pub fn update_sync_job(
     }
 
     if let Some(delete) = delete {
+        let mut allow_remove_associated = true;
         for delete_prop in delete {
             match delete_prop {
                 DeletableProperty::Remote => {
@@ -490,6 +511,16 @@ pub fn update_sync_job(
                 DeletableProperty::SyncDirection => {
                     data.sync_direction = None;
                 }
+                DeletableProperty::ActiveEncryptionKey => {
+                    allow_remove_associated = data.active_encryption_key.is_none();
+                    keep_previous_key_as_associated(&data, &mut update.associated_key);
+                    data.active_encryption_key = None;
+                }
+                DeletableProperty::AssociatedKey => {
+                    if allow_remove_associated {
+                        data.associated_key = None;
+                    }
+                }
             }
         }
     }
@@ -518,8 +549,8 @@ pub fn update_sync_job(
     if let Some(remote_ns) = update.remote_ns {
         data.remote_ns = Some(remote_ns);
     }
-    if let Some(owner) = update.owner {
-        data.owner = Some(owner);
+    if let Some(owner) = &update.owner {
+        data.owner = Some(owner.clone());
     }
     if let Some(group_filter) = update.group_filter {
         data.group_filter = Some(group_filter);
@@ -549,6 +580,22 @@ pub fn update_sync_job(
         data.sync_direction = Some(sync_direction);
     }
 
+    if let Some(active_encryption_key) = update.active_encryption_key {
+        let owner = update.owner.as_ref().unwrap_or_else(|| {
+            data.owner
+                .as_ref()
+                .unwrap_or_else(|| Authid::root_auth_id())
+        });
+        sync_user_can_access_optional_key(Some(&active_encryption_key), owner)?;
+
+        keep_previous_key_as_associated(&data, &mut update.associated_key);
+        data.active_encryption_key = Some(active_encryption_key);
+    }
+
+    if let Some(associated_key) = update.associated_key {
+        data.associated_key = Some(associated_key);
+    }
+
     if update.limit.rate_in.is_some() {
         data.limit.rate_in = update.limit.rate_in;
     }
@@ -721,6 +768,8 @@ acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
         run_on_mount: None,
         unmount_on_done: None,
         sync_direction: None, // use default
+        active_encryption_key: None,
+        associated_key: None,
     };
 
     // should work without ACLs
-- 
2.47.3






* [PATCH proxmox-backup v2 13/27] sync: push: rewrite manifest instead of pushing pre-existing one
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (11 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 12/27] api: config: allow encryption key manipulation for sync job Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 14/27] api: push sync: expose optional encryption key for push sync Christian Ebner
                   ` (13 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

In preparation for being able to encrypt unencrypted backup snapshots
during push sync jobs.

Previously the pre-existing manifest file was pushed to the remote
target since it did not require modifications and contained all the
files with the correct metadata. When encrypting, the files must
however be marked as encrypted by individually setting the crypt mode
and the manifest must be signed and the encryption key fingerprint
added to the unprotected part of the manifest.

Therefore, the manifest is now recreated and updated accordingly. To
do so, pushing of the index must return the full BackupStats, not
just the sync stats used for accounting.
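A minimal sketch of the new bookkeeping: the full per-archive upload
stats feed both the rebuilt manifest and the sync accounting. The
struct fields are simplified stand-ins for BackupStats and SyncStats:

```rust
// Per-archive result as returned by the (sketched) index upload.
#[derive(Clone, Copy)]
struct UploadStats {
    size: u64,
    chunk_count: u64,
    csum: [u8; 32],
}

// Aggregated accounting over the whole snapshot push.
#[derive(Default)]
struct SyncStats {
    chunk_count: usize,
    bytes: usize,
}

impl SyncStats {
    fn add(&mut self, s: UploadStats) {
        self.chunk_count += s.chunk_count as usize;
        self.bytes += s.size as usize;
    }
}

// Rebuilt manifest: (archive name, size, checksum) per file.
struct Manifest {
    files: Vec<(String, u64, [u8; 32])>,
}

fn main() {
    let mut manifest = Manifest { files: Vec::new() };
    let mut stats = SyncStats::default();

    let upload = UploadStats { size: 4096, chunk_count: 2, csum: [0u8; 32] };
    // one upload result updates both manifest and accounting
    manifest.files.push(("root.pxar.didx".into(), upload.size, upload.csum));
    stats.add(upload);

    assert_eq!(stats.bytes, 4096);
    assert_eq!(manifest.files.len(), 1);
}
```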

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/push.rs | 59 +++++++++++++++++++++++++++++++++-------------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index 697b94f2f..9e5a19560 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -17,8 +17,8 @@ use pbs_api_types::{
     PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
 };
 use pbs_client::{
-    BackupRepository, BackupWriter, BackupWriterOptions, HttpClient, IndexType, MergedChunkInfo,
-    UploadOptions,
+    BackupRepository, BackupStats, BackupWriter, BackupWriterOptions, HttpClient, IndexType,
+    MergedChunkInfo, UploadOptions,
 };
 use pbs_config::CachedUserInfo;
 use pbs_datastore::data_blob::ChunkInfo;
@@ -26,7 +26,7 @@ use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{DataStore, StoreProgress};
+use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
 
 use super::sync::{
     check_namespace_depth_limit, exclude_not_verified_or_encrypted,
@@ -880,6 +880,7 @@ pub(crate) async fn push_snapshot(
 
     // Avoid double upload penalty by remembering already seen chunks
     let known_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64 * 1024)));
+    let mut target_manifest = BackupManifest::new(snapshot.clone());
 
     for entry in source_manifest.files() {
         let mut path = backup_dir.full_path();
@@ -892,6 +893,12 @@ pub(crate) async fn push_snapshot(
                     let backup_stats = backup_writer
                         .upload_blob(file, archive_name.as_ref())
                         .await?;
+                    target_manifest.add_file(
+                        &archive_name,
+                        backup_stats.size,
+                        backup_stats.csum,
+                        entry.chunk_crypt_mode(),
+                    )?;
                     stats.add(SyncStats {
                         chunk_count: backup_stats.chunk_count as usize,
                         bytes: backup_stats.size as usize,
@@ -914,7 +921,7 @@ pub(crate) async fn push_snapshot(
                     let chunk_reader = reader
                         .chunk_reader(entry.chunk_crypt_mode())
                         .context("failed to get chunk reader")?;
-                    let sync_stats = push_index(
+                    let upload_stats = push_index(
                         &archive_name,
                         index,
                         chunk_reader,
@@ -923,7 +930,18 @@ pub(crate) async fn push_snapshot(
                         known_chunks.clone(),
                     )
                     .await?;
-                    stats.add(sync_stats);
+                    target_manifest.add_file(
+                        &archive_name,
+                        upload_stats.size,
+                        upload_stats.csum,
+                        entry.chunk_crypt_mode(),
+                    )?;
+                    stats.add(SyncStats {
+                        chunk_count: upload_stats.chunk_count as usize,
+                        bytes: upload_stats.size as usize,
+                        elapsed: upload_stats.duration,
+                        removed: None,
+                    });
                 }
                 ArchiveType::FixedIndex => {
                     if let Some(manifest) = upload_options.previous_manifest.as_ref() {
@@ -941,7 +959,7 @@ pub(crate) async fn push_snapshot(
                         .chunk_reader(entry.chunk_crypt_mode())
                         .context("failed to get chunk reader")?;
                     let size = index.index_bytes();
-                    let sync_stats = push_index(
+                    let upload_stats = push_index(
                         &archive_name,
                         index,
                         chunk_reader,
@@ -950,7 +968,18 @@ pub(crate) async fn push_snapshot(
                         known_chunks.clone(),
                     )
                     .await?;
-                    stats.add(sync_stats);
+                    target_manifest.add_file(
+                        &archive_name,
+                        upload_stats.size,
+                        upload_stats.csum,
+                        entry.chunk_crypt_mode(),
+                    )?;
+                    stats.add(SyncStats {
+                        chunk_count: upload_stats.chunk_count as usize,
+                        bytes: upload_stats.size as usize,
+                        elapsed: upload_stats.duration,
+                        removed: None,
+                    });
                 }
             }
         } else {
@@ -973,8 +1002,11 @@ pub(crate) async fn push_snapshot(
             .await?;
     }
 
-    // Rewrite manifest for pushed snapshot, recreating manifest from source on target
-    let manifest_json = serde_json::to_value(source_manifest)?;
+    // Rewrite manifest for pushed snapshot, recreating manifest from source on target,
+    // needs to update all relevant info for new manifest.
+    target_manifest.unprotected = source_manifest.unprotected;
+    target_manifest.signature = source_manifest.signature;
+    let manifest_json = serde_json::to_value(target_manifest)?;
     let manifest_string = serde_json::to_string_pretty(&manifest_json)?;
     let backup_stats = backup_writer
         .upload_blob_from_data(
@@ -1006,7 +1038,7 @@ async fn push_index(
     backup_writer: &BackupWriter,
     index_type: IndexType,
     known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
-) -> Result<SyncStats, Error> {
+) -> Result<BackupStats, Error> {
     let (upload_channel_tx, upload_channel_rx) = mpsc::channel(20);
     let mut chunk_infos =
         stream::iter(0..index.index_count()).map(move |pos| index.chunk_info(pos).unwrap());
@@ -1058,10 +1090,5 @@ async fn push_index(
         .upload_index_chunk_info(filename, merged_chunk_info_stream, upload_options)
         .await?;
 
-    Ok(SyncStats {
-        chunk_count: upload_stats.chunk_count as usize,
-        bytes: upload_stats.size as usize,
-        elapsed: upload_stats.duration,
-        removed: None,
-    })
+    Ok(upload_stats)
 }
-- 
2.47.3

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 14/27] api: push sync: expose optional encryption key for push sync
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (12 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 13/27] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 15/27] sync: push: optionally encrypt data blob on upload Christian Ebner
                   ` (12 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Expose the optional encryption key id to be used for server-side
encryption of contents during push sync jobs. For now, only expose the
parameter and load the key if given; the logic to actually use it will
be implemented in subsequent commits.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/api2/push.rs   |  8 +++++++-
 src/server/push.rs | 12 ++++++++++++
 src/server/sync.rs |  1 +
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/src/api2/push.rs b/src/api2/push.rs
index e5edc13e0..84d93621b 100644
--- a/src/api2/push.rs
+++ b/src/api2/push.rs
@@ -2,7 +2,7 @@ use anyhow::{format_err, Error};
 use futures::{future::FutureExt, select};
 
 use pbs_api_types::{
-    Authid, BackupNamespace, GroupFilter, RateLimitConfig, DATASTORE_SCHEMA,
+    Authid, BackupNamespace, GroupFilter, RateLimitConfig, CRYPT_KEY_ID_SCHEMA, DATASTORE_SCHEMA,
     GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
     PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP, PRIV_REMOTE_DATASTORE_PRUNE,
     REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA, SYNC_ENCRYPTED_ONLY_SCHEMA,
@@ -108,6 +108,10 @@ fn check_push_privs(
                 schema: TRANSFER_LAST_SCHEMA,
                 optional: true,
             },
+            "encryption-key": {
+                schema: CRYPT_KEY_ID_SCHEMA,
+                optional: true,
+            },
         },
     },
     access: {
@@ -133,6 +137,7 @@ async fn push(
     verified_only: Option<bool>,
     limit: RateLimitConfig,
     transfer_last: Option<usize>,
+    encryption_key: Option<String>,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -164,6 +169,7 @@ async fn push(
         verified_only,
         limit,
         transfer_last,
+        encryption_key,
     )
     .await?;
 
diff --git a/src/server/push.rs b/src/server/push.rs
index 9e5a19560..93434bedc 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -27,6 +27,7 @@ use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::read_chunk::AsyncReadChunk;
 use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
+use pbs_tools::crypt_config::CryptConfig;
 
 use super::sync::{
     check_namespace_depth_limit, exclude_not_verified_or_encrypted,
@@ -83,6 +84,9 @@ pub(crate) struct PushParameters {
     verified_only: bool,
     /// How many snapshots should be transferred at most (taking the newest N snapshots)
     transfer_last: Option<usize>,
+    /// Encryption key to use for pushing unencrypted backup snapshots. Does not affect
+    /// already encrypted snapshots.
+    crypt_config: Option<Arc<CryptConfig>>,
 }
 
 impl PushParameters {
@@ -102,6 +106,7 @@ impl PushParameters {
         verified_only: Option<bool>,
         limit: RateLimitConfig,
         transfer_last: Option<usize>,
+        active_encryption_key: Option<String>,
     ) -> Result<Self, Error> {
         if let Some(max_depth) = max_depth {
             ns.check_max_depth(max_depth)?;
@@ -155,6 +160,12 @@ impl PushParameters {
         };
         let group_filter = group_filter.unwrap_or_default();
 
+        let crypt_config = if let Some(key_id) = &active_encryption_key {
+            crate::server::sync::check_privs_and_load_key_config(key_id, &local_user, true)?
+        } else {
+            None
+        };
+
         Ok(Self {
             source,
             target,
@@ -165,6 +176,7 @@ impl PushParameters {
             encrypted_only,
             verified_only,
             transfer_last,
+            crypt_config,
         })
     }
 
diff --git a/src/server/sync.rs b/src/server/sync.rs
index 9c070cd9c..6b84ae6d7 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -677,6 +677,7 @@ pub fn do_sync_job(
                             sync_job.verified_only,
                             sync_job.limit.clone(),
                             sync_job.transfer_last,
+                            sync_job.active_encryption_key,
                         )
                         .await?;
                         push_store(push_params).await?
-- 
2.47.3

* [PATCH proxmox-backup v2 15/27] sync: push: optionally encrypt data blob on upload
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (13 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 14/27] api: push sync: expose optional encryption key for push sync Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 16/27] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Encrypt the data blob during syncs in push direction if an encryption
key is given.

Introduce a helper which reads the data blob from the source, decodes
it into its raw data and re-encrypts it, so the new blob is compressed
and encrypted with the correct header on upload. The same helper will
be reused for client log uploads in subsequent commits.
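
The decode-and-re-encode flow can be sketched with a toy blob format: a
single mode byte followed by the payload, with XOR as a stand-in cipher.
This is NOT the real DataBlob on-disk format, only an illustration of
"decode to raw data, then re-encode under the active key":

```rust
/// Toy stand-in for the re-encode step. The real code loads a DataBlob,
/// decodes it to raw data and re-uploads it so the backup writer
/// compresses and encrypts it with a fresh header.
const MODE_PLAIN: u8 = 0;
const MODE_ENCRYPTED: u8 = 1;

/// Decode a plain blob into its raw payload (the push path only
/// re-encodes blobs that are not yet encrypted).
fn decode(blob: &[u8]) -> Vec<u8> {
    assert_eq!(blob[0], MODE_PLAIN, "expected an unencrypted blob");
    blob[1..].to_vec()
}

/// Re-encode raw data as an "encrypted" blob (XOR placeholder cipher).
fn encode_encrypted(raw: &[u8], key: u8) -> Vec<u8> {
    let mut out = vec![MODE_ENCRYPTED];
    out.extend(raw.iter().map(|b| b ^ key));
    out
}

/// Full re-encode step: decode the source blob, re-encrypt the payload.
fn reencode(blob: &[u8], key: u8) -> Vec<u8> {
    encode_encrypted(&decode(blob), key)
}
```

In the actual helper the decode happens via the DataBlob API and the
raw data is handed to `upload_blob_from_data`, which performs the real
compression and encryption according to the upload options.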

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/push.rs | 39 +++++++++++++++++++++++++++++++++------
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index 93434bedc..7ce47e32e 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1,6 +1,7 @@
 //! Sync datastore by pushing contents to remote server
 
 use std::collections::HashSet;
+use std::path::Path;
 use std::sync::{Arc, Mutex};
 
 use anyhow::{bail, format_err, Context, Error};
@@ -26,7 +27,7 @@ use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
+use pbs_datastore::{BackupManifest, DataBlob, DataStore, StoreProgress};
 use pbs_tools::crypt_config::CryptConfig;
 
 use super::sync::{
@@ -849,6 +850,8 @@ pub(crate) async fn push_snapshot(
         return Ok(stats);
     }
 
+    let mut encrypt_using_key = None;
+
     // Writer instance locks the snapshot on the remote side
     let backup_writer = BackupWriter::start(
         &params.target.client,
@@ -856,7 +859,7 @@ pub(crate) async fn push_snapshot(
             datastore: params.target.repo.store(),
             ns: &target_ns,
             backup: snapshot,
-            crypt_config: None,
+            crypt_config: encrypt_using_key.clone(),
             debug: false,
             benchmark: false,
             no_cache: false,
@@ -901,10 +904,20 @@ pub(crate) async fn push_snapshot(
             let archive_name = BackupArchiveName::from_path(&entry.filename)?;
             match archive_name.archive_type() {
                 ArchiveType::Blob => {
-                    let file = std::fs::File::open(&path)?;
-                    let backup_stats = backup_writer
-                        .upload_blob(file, archive_name.as_ref())
-                        .await?;
+                    let backup_stats = if encrypt_using_key.is_some() {
+                        reencode_encrypted_and_upload_blob(
+                            path,
+                            &archive_name,
+                            &backup_writer,
+                            &upload_options,
+                        )
+                        .await?
+                    } else {
+                        let file = std::fs::File::open(&path)?;
+                        backup_writer
+                            .upload_blob(file, archive_name.as_ref())
+                            .await?
+                    };
                     target_manifest.add_file(
                         &archive_name,
                         backup_stats.size,
@@ -1039,6 +1052,20 @@ pub(crate) async fn push_snapshot(
     Ok(stats)
 }
 
+async fn reencode_encrypted_and_upload_blob<P: AsRef<Path>>(
+    path: P,
+    archive_name: &BackupArchiveName,
+    backup_writer: &BackupWriter,
+    upload_options: &UploadOptions,
+) -> Result<BackupStats, Error> {
+    let mut file = tokio::fs::File::open(&path).await?;
+    let data_blob = DataBlob::load_from_async_reader(&mut file).await?;
+    let raw_data = data_blob.decode(None, None)?;
+    backup_writer
+        .upload_blob_from_data(raw_data, archive_name.as_ref(), upload_options.clone())
+        .await
+}
+
 // Read fixed or dynamic index and push to target by uploading via the backup writer instance
 //
 // For fixed indexes, the size must be provided as given by the index reader.
-- 
2.47.3

* [PATCH proxmox-backup v2 16/27] sync: push: optionally encrypt client log on upload if key is given
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (14 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 15/27] sync: push: optionally encrypt data blob on upload Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 17/27] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Encrypt the client log blob during syncs in push direction if an
encryption key is given. The client log is not part of the manifest and
therefore needs to be handled separately.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/push.rs | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index 7ce47e32e..02718b7b6 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1018,13 +1018,23 @@ pub(crate) async fn push_snapshot(
     let client_log_name = &CLIENT_LOG_BLOB_NAME;
     client_log_path.push(client_log_name.as_ref());
     if client_log_path.is_file() {
-        backup_writer
-            .upload_blob_from_file(
-                &client_log_path,
-                client_log_name.as_ref(),
-                upload_options.clone(),
+        if encrypt_using_key.is_some() {
+            reencode_encrypted_and_upload_blob(
+                client_log_path,
+                client_log_name,
+                &backup_writer,
+                &upload_options,
             )
             .await?;
+        } else {
+            backup_writer
+                .upload_blob_from_file(
+                    &client_log_path,
+                    client_log_name.as_ref(),
+                    upload_options.clone(),
+                )
+                .await?;
+        }
     }
 
     // Rewrite manifest for pushed snapshot, recreating manifest from source on target,
-- 
2.47.3

* [PATCH proxmox-backup v2 17/27] sync: push: add helper for loading known chunks from previous snapshot
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (15 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 16/27] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 18/27] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Loading known chunks only makes sense for snapshots which do not need
encryption while pushing. To check this, move the known-chunk loading
into a common helper method and distinguish between dynamic and fixed
indexes based on the archive type.
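
The reuse decision can be sketched as a standalone predicate. The
fingerprint type is abbreviated to a plain 32-byte array here; the real
helper compares the previous manifest's fingerprint against the active
crypt config's fingerprint:

```rust
type Fingerprint = [u8; 32];

/// Decide whether chunks known from the previous snapshot may be
/// preloaded. Mirrors the check in this patch: when an encryption key
/// is active and the previous manifest carries that key's fingerprint,
/// the current (unencrypted) source chunks must be re-encrypted, so
/// their digests will not match the previously uploaded ones.
fn may_reuse_known_chunks(
    active_key_fp: Option<&Fingerprint>,
    previous_manifest_fp: Option<&Fingerprint>,
) -> bool {
    match (active_key_fp, previous_manifest_fp) {
        // snapshot will be re-encrypted: source digests cannot match
        (Some(key), Some(prev)) if key == prev => false,
        // no active key, or fingerprints differ: preloading is fine
        _ => true,
    }
}
```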

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/push.rs | 65 ++++++++++++++++++++++++++++++++--------------
 1 file changed, 45 insertions(+), 20 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index 02718b7b6..9b7a4adcb 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -808,6 +808,41 @@ pub(crate) async fn push_group(
     Ok(stats)
 }
 
+async fn load_previous_snapshot_known_chunks(
+    params: &PushParameters,
+    upload_options: &UploadOptions,
+    backup_writer: &BackupWriter,
+    archive_name: &BackupArchiveName,
+    known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+) {
+    if let Some(manifest) = upload_options.previous_manifest.as_ref() {
+        if let Some(crypt_config) = &params.crypt_config {
+            if let Ok(Some(fingerprint)) = manifest.fingerprint() {
+                if *fingerprint.bytes() == crypt_config.fingerprint() {
+                    // needs encryption during push, cannot reuse chunks from previous manifest
+                    return;
+                }
+            }
+        }
+
+        // Add known chunks, ignore errors since archive might not be present and it is better
+        // to proceed on unrelated errors than to fail here.
+        match archive_name.archive_type() {
+            ArchiveType::FixedIndex => {
+                let _res = backup_writer
+                    .download_previous_fixed_index(archive_name, manifest, known_chunks)
+                    .await;
+            }
+            ArchiveType::DynamicIndex => {
+                let _res = backup_writer
+                    .download_previous_dynamic_index(archive_name, manifest, known_chunks)
+                    .await;
+            }
+            ArchiveType::Blob => (),
+        };
+    }
+}
+
 /// Push snapshot to target
 ///
 /// Creates a new snapshot on the target and pushes the content of the source snapshot to the
@@ -902,6 +937,16 @@ pub(crate) async fn push_snapshot(
         path.push(&entry.filename);
         if path.try_exists()? {
             let archive_name = BackupArchiveName::from_path(&entry.filename)?;
+
+            load_previous_snapshot_known_chunks(
+                params,
+                &upload_options,
+                &backup_writer,
+                &archive_name,
+                known_chunks.clone(),
+            )
+            .await;
+
             match archive_name.archive_type() {
                 ArchiveType::Blob => {
                     let backup_stats = if encrypt_using_key.is_some() {
@@ -932,16 +977,6 @@ pub(crate) async fn push_snapshot(
                     });
                 }
                 ArchiveType::DynamicIndex => {
-                    if let Some(manifest) = upload_options.previous_manifest.as_ref() {
-                        // Add known chunks, ignore errors since archive might not be present
-                        let _res = backup_writer
-                            .download_previous_dynamic_index(
-                                &archive_name,
-                                manifest,
-                                known_chunks.clone(),
-                            )
-                            .await;
-                    }
                     let index = DynamicIndexReader::open(&path)?;
                     let chunk_reader = reader
                         .chunk_reader(entry.chunk_crypt_mode())
@@ -969,16 +1004,6 @@ pub(crate) async fn push_snapshot(
                     });
                 }
                 ArchiveType::FixedIndex => {
-                    if let Some(manifest) = upload_options.previous_manifest.as_ref() {
-                        // Add known chunks, ignore errors since archive might not be present
-                        let _res = backup_writer
-                            .download_previous_fixed_index(
-                                &archive_name,
-                                manifest,
-                                known_chunks.clone(),
-                            )
-                            .await;
-                    }
                     let index = FixedIndexReader::open(&path)?;
                     let chunk_reader = reader
                         .chunk_reader(entry.chunk_crypt_mode())
-- 
2.47.3

* [PATCH proxmox-backup v2 18/27] fix #7251: api: push: encrypt snapshots using configured encryption key
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (16 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 17/27] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 19/27] ui: define and expose encryption key management menu item and windows Christian Ebner
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

If an encryption key id is provided in the push parameters, the key
is loaded at the start of the push sync job and passed along via the
crypt config.

Backup snapshots which are already fully encrypted are pushed as-is,
while partially encrypted snapshots are skipped to avoid mixing
contents encrypted with different keys. Pre-existing snapshots on the
remote are, however, not checked to match the key.
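
The skip logic amounts to classifying the snapshot's files by their
chunk crypt mode. A simplified sketch, treating any non-`None` mode as
encrypted (which mirrors the `CryptMode::None` check this patch adds):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum CryptMode {
    None,
    Encrypt,
    SignOnly,
}

#[derive(PartialEq, Debug)]
enum SnapshotCryptState {
    /// No file is encrypted: the snapshot can be re-encrypted on push.
    Unencrypted,
    /// Every file is already encrypted: push as-is, no re-encryption.
    Encrypted,
    /// Mixed state: refuse the push to avoid mixing contents.
    Partial,
}

fn classify(modes: &[CryptMode]) -> SnapshotCryptState {
    let encrypted = modes.iter().filter(|m| **m != CryptMode::None).count();
    if encrypted == 0 {
        SnapshotCryptState::Unencrypted
    } else if encrypted == modes.len() {
        SnapshotCryptState::Encrypted
    } else {
        SnapshotCryptState::Partial
    }
}
```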

Special care has to be taken when tracking already encountered chunks.
For regular push sync jobs, chunk upload is optimized to skip
re-uploading chunks from the previous snapshot (if any) as well as new,
but already encountered chunks of the current group sync. Since the
chunks now have to be re-processed anyway, do not load the chunks from
the previous snapshot into memory if they need re-encryption, and keep
track of the unencrypted -> encrypted digest mapping in a hashmap to
avoid re-processing. This might be optimized in the future by e.g.
moving the tracking to an LRU cache, which however requires a more
careful evaluation of memory consumption.
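
A minimal sketch of the digest tracking; the digest type and cache
structure are illustrative, not the actual types used in `push_index`:

```rust
use std::collections::HashMap;

type Digest = [u8; 32];

/// Tracks the unencrypted -> encrypted digest mapping during a push, so
/// a chunk already re-encrypted once in this group sync is referenced
/// by its new digest instead of being decoded and encrypted again.
struct ReencryptionCache {
    mapping: HashMap<Digest, Digest>,
}

impl ReencryptionCache {
    fn new() -> Self {
        Self { mapping: HashMap::new() }
    }

    /// Digest to reference on the target: the cached encrypted digest
    /// if the chunk was re-encrypted before, the source digest otherwise.
    fn target_digest(&self, source: &Digest) -> Digest {
        *self.mapping.get(source).unwrap_or(source)
    }

    /// Record that `source` was re-encrypted into `encrypted`.
    fn record(&mut self, source: Digest, encrypted: Digest) {
        self.mapping.insert(source, encrypted);
    }
}
```

An LRU cache as mentioned above would bound this map's memory use at
the cost of occasionally re-encrypting an evicted chunk.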

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=7251
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/push.rs | 112 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 86 insertions(+), 26 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index 9b7a4adcb..f433ca50d 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1,6 +1,6 @@
 //! Sync datastore by pushing contents to remote server
 
-use std::collections::HashSet;
+use std::collections::{HashMap, HashSet};
 use std::path::Path;
 use std::sync::{Arc, Mutex};
 
@@ -12,17 +12,17 @@ use tracing::{info, warn};
 
 use pbs_api_types::{
     print_store_and_ns, ApiVersion, ApiVersionInfo, ArchiveType, Authid, BackupArchiveName,
-    BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, GroupFilter, GroupListItem,
-    NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem, CLIENT_LOG_BLOB_NAME,
-    MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP,
-    PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
+    BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, CryptMode, GroupFilter,
+    GroupListItem, NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem,
+    CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+    PRIV_REMOTE_DATASTORE_BACKUP, PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
 };
 use pbs_client::{
     BackupRepository, BackupStats, BackupWriter, BackupWriterOptions, HttpClient, IndexType,
     MergedChunkInfo, UploadOptions,
 };
 use pbs_config::CachedUserInfo;
-use pbs_datastore::data_blob::ChunkInfo;
+use pbs_datastore::data_blob::{ChunkInfo, DataChunkBuilder};
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
@@ -886,6 +886,27 @@ pub(crate) async fn push_snapshot(
     }
 
     let mut encrypt_using_key = None;
+    if params.crypt_config.is_some() {
+        let mut contains_unencrypted_file = false;
+        // Check if snapshot is fully encrypted or not encrypted at all:
+        // refuse progress otherwise to upload partially unencrypted contents or mix encryption key.
+        if source_manifest.files().iter().all(|file| {
+            if file.chunk_crypt_mode() == CryptMode::None {
+                contains_unencrypted_file = true;
+                true
+            } else {
+                false
+            }
+        }) {
+            encrypt_using_key = params.crypt_config.clone();
+            info!("Encrypt and push unencrypted snapshot '{snapshot}'");
+        } else if contains_unencrypted_file {
+            warn!("Encountered partially encrypted snapshot, refuse to re-encrypt and skip");
+            return Ok(stats);
+        } else {
+            info!("Pushing already encrypted snapshot '{snapshot}' without re-encryption");
+        }
+    }
 
     // Writer instance locks the snapshot on the remote side
     let backup_writer = BackupWriter::start(
@@ -911,19 +932,20 @@ pub(crate) async fn push_snapshot(
         }
     };
 
-    // Dummy upload options: the actual compression and/or encryption already happened while
-    // the chunks were generated during creation of the backup snapshot, therefore pre-existing
-    // chunks (already compressed and/or encrypted) can be pushed to the target.
+    // Dummy upload options: The actual compression already happened while
+    // the chunks were generated during creation of the backup snapshot,
+    // therefore pre-existing chunks (already compressed) can be pushed to
+    // the target.
+    //
     // Further, these steps are skipped in the backup writer upload stream.
     //
     // Therefore, these values do not need to fit the values given in the manifest.
     // The original manifest is uploaded in the end anyways.
     //
     // Compression is set to true so that the uploaded manifest will be compressed.
-    // Encrypt is set to assure that above files are not encrypted.
     let upload_options = UploadOptions {
         compress: true,
-        encrypt: false,
+        encrypt: encrypt_using_key.is_some(),
         previous_manifest,
         ..UploadOptions::default()
     };
@@ -937,6 +959,10 @@ pub(crate) async fn push_snapshot(
         path.push(&entry.filename);
         if path.try_exists()? {
             let archive_name = BackupArchiveName::from_path(&entry.filename)?;
+            let crypt_mode = match &encrypt_using_key {
+                Some(_) => CryptMode::Encrypt,
+                None => entry.chunk_crypt_mode(),
+            };
 
             load_previous_snapshot_known_chunks(
                 params,
@@ -967,7 +993,7 @@ pub(crate) async fn push_snapshot(
                         &archive_name,
                         backup_stats.size,
                         backup_stats.csum,
-                        entry.chunk_crypt_mode(),
+                        crypt_mode,
                     )?;
                     stats.add(SyncStats {
                         chunk_count: backup_stats.chunk_count as usize,
@@ -988,13 +1014,14 @@ pub(crate) async fn push_snapshot(
                         &backup_writer,
                         IndexType::Dynamic,
                         known_chunks.clone(),
+                        encrypt_using_key.clone(),
                     )
                     .await?;
                     target_manifest.add_file(
                         &archive_name,
                         upload_stats.size,
                         upload_stats.csum,
-                        entry.chunk_crypt_mode(),
+                        crypt_mode,
                     )?;
                     stats.add(SyncStats {
                         chunk_count: upload_stats.chunk_count as usize,
@@ -1016,13 +1043,14 @@ pub(crate) async fn push_snapshot(
                         &backup_writer,
                         IndexType::Fixed(Some(size)),
                         known_chunks.clone(),
+                        encrypt_using_key.clone(),
                     )
                     .await?;
                     target_manifest.add_file(
                         &archive_name,
                         upload_stats.size,
                         upload_stats.csum,
-                        entry.chunk_crypt_mode(),
+                        crypt_mode,
                     )?;
                     stats.add(SyncStats {
                         chunk_count: upload_stats.chunk_count as usize,
@@ -1064,15 +1092,25 @@ pub(crate) async fn push_snapshot(
 
     // Rewrite manifest for pushed snapshot, recreating manifest from source on target,
     // needs to update all relevant info for new manifest.
-    target_manifest.unprotected = source_manifest.unprotected;
-    target_manifest.signature = source_manifest.signature;
-    let manifest_json = serde_json::to_value(target_manifest)?;
-    let manifest_string = serde_json::to_string_pretty(&manifest_json)?;
+    target_manifest.unprotected = source_manifest.unprotected.clone();
+    target_manifest.signature = source_manifest.signature.clone();
+    let manifest_string = if encrypt_using_key.is_some() {
+        let fp = source_manifest.change_detection_fingerprint()?;
+        target_manifest.set_change_detection_fingerprint(&fp)?;
+        target_manifest.to_string(encrypt_using_key.as_ref().map(Arc::as_ref))?
+    } else {
+        let manifest_json = serde_json::to_value(target_manifest)?;
+        serde_json::to_string_pretty(&manifest_json)?
+    };
     let backup_stats = backup_writer
         .upload_blob_from_data(
             manifest_string.into_bytes(),
             MANIFEST_BLOB_NAME.as_ref(),
-            upload_options,
+            UploadOptions {
+                compress: true,
+                encrypt: false,
+                ..UploadOptions::default()
+            },
         )
         .await?;
     backup_writer.finish().await?;
@@ -1112,12 +1150,15 @@ async fn push_index(
     backup_writer: &BackupWriter,
     index_type: IndexType,
     known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+    crypt_config: Option<Arc<CryptConfig>>,
 ) -> Result<BackupStats, Error> {
     let (upload_channel_tx, upload_channel_rx) = mpsc::channel(20);
     let mut chunk_infos =
         stream::iter(0..index.index_count()).map(move |pos| index.chunk_info(pos).unwrap());
 
+    let crypt_config_cloned = crypt_config.clone();
     tokio::spawn(async move {
+        let mut encrypted_mapping = HashMap::new();
         while let Some(chunk_info) = chunk_infos.next().await {
             // Avoid reading known chunks, as they are not uploaded by the backup writer anyways
             let needs_upload = {
@@ -1131,20 +1172,39 @@ async fn push_index(
                 chunk_reader
                     .read_raw_chunk(&chunk_info.digest)
                     .await
-                    .map(|chunk| {
-                        MergedChunkInfo::New(ChunkInfo {
+                    .and_then(|chunk| {
+                        let (chunk, digest, chunk_len) = match crypt_config_cloned.as_ref() {
+                            Some(crypt_config) => {
+                                let data = chunk.decode(None, Some(&chunk_info.digest))?;
+                                let (chunk, digest) = DataChunkBuilder::new(&data)
+                                    .compress(true)
+                                    .crypt_config(crypt_config)
+                                    .build()?;
+                                encrypted_mapping.insert(chunk_info.digest, digest);
+                                (chunk, digest, data.len() as u64)
+                            }
+                            None => (chunk, chunk_info.digest, chunk_info.size()),
+                        };
+
+                        Ok(MergedChunkInfo::New(ChunkInfo {
                             chunk,
-                            digest: chunk_info.digest,
-                            chunk_len: chunk_info.size(),
+                            digest,
+                            chunk_len,
                             offset: chunk_info.range.start,
-                        })
+                        }))
                     })
             } else {
+                let digest =
+                    if let Some(encrypted_digest) = encrypted_mapping.get(&chunk_info.digest) {
+                        *encrypted_digest
+                    } else {
+                        chunk_info.digest
+                    };
                 Ok(MergedChunkInfo::Known(vec![(
                     // Pass size instead of offset, will be replaced with offset by the backup
                     // writer
                     chunk_info.size(),
-                    chunk_info.digest,
+                    digest,
                 )]))
             };
             let _ = upload_channel_tx.send(merged_chunk_info).await;
@@ -1155,7 +1215,7 @@ async fn push_index(
 
     let upload_options = UploadOptions {
         compress: true,
-        encrypt: false,
+        encrypt: crypt_config.is_some(),
         index_type,
         ..UploadOptions::default()
     };
-- 
2.47.3

* [PATCH proxmox-backup v2 19/27] ui: define and expose encryption key management menu item and windows
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (17 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 18/27] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 20/27] ui: expose assigning encryption key to sync jobs Christian Ebner
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Allows creating or removing encryption keys via the WebUI. A new key
entity can be added either by letting the server generate a new key
or by uploading a pre-existing key from a key file, similar to what
Proxmox VE currently allows when setting up a PBS storage.

After creation, the key is shown in a dialog which allows exporting
it. This reuses the same logic as PVE, with slight adaptations to
include the key id and use a different api endpoint.

On removal, the user is informed about the risk of no longer being
able to decrypt snapshots.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 www/Makefile                     |   2 +
 www/NavigationTree.js            |   6 +
 www/Utils.js                     |   1 +
 www/config/EncryptionKeysView.js | 324 ++++++++++++++++++++++++++
 www/window/EncryptionKeysEdit.js | 383 +++++++++++++++++++++++++++++++
 5 files changed, 716 insertions(+)
 create mode 100644 www/config/EncryptionKeysView.js
 create mode 100644 www/window/EncryptionKeysEdit.js

diff --git a/www/Makefile b/www/Makefile
index 5a60e47e1..08ad50846 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -70,6 +70,7 @@ JSSRC=							\
 	config/GCView.js				\
 	config/WebauthnView.js				\
 	config/CertificateView.js			\
+	config/EncryptionKeysView.js			\
 	config/NodeOptionView.js			\
 	config/MetricServerView.js			\
 	config/NotificationConfigView.js		\
@@ -78,6 +79,7 @@ JSSRC=							\
 	window/BackupGroupChangeOwner.js		\
 	window/CreateDirectory.js			\
 	window/DataStoreEdit.js				\
+	window/EncryptionKeysEdit.js			\
 	window/NamespaceEdit.js				\
 	window/MaintenanceOptions.js			\
 	window/NotesEdit.js				\
diff --git a/www/NavigationTree.js b/www/NavigationTree.js
index 35b8d693b..f596c7d1b 100644
--- a/www/NavigationTree.js
+++ b/www/NavigationTree.js
@@ -74,6 +74,12 @@ Ext.define('PBS.store.NavigationStore', {
                         path: 'pbsCertificateConfiguration',
                         leaf: true,
                     },
+                    {
+                        text: gettext('Encryption Keys'),
+                        iconCls: 'fa fa-lock',
+                        path: 'pbsEncryptionKeysView',
+                        leaf: true,
+                    },
                     {
                         text: gettext('Notifications'),
                         iconCls: 'fa fa-bell-o',
diff --git a/www/Utils.js b/www/Utils.js
index 350ab820b..bf4b025c7 100644
--- a/www/Utils.js
+++ b/www/Utils.js
@@ -451,6 +451,7 @@ Ext.define('PBS.Utils', {
             prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
             prunejob: (type, id) => PBS.Utils.render_prune_job_worker_id(id, gettext('Prune Job')),
             reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read Objects')),
+            'remove-encryption-key': [gettext('Key'), gettext('Remove Key')],
             'rewind-media': [gettext('Drive'), gettext('Rewind Media')],
             's3-refresh': [gettext('Datastore'), gettext('S3 Refresh')],
             sync: ['Datastore', gettext('Remote Sync')],
diff --git a/www/config/EncryptionKeysView.js b/www/config/EncryptionKeysView.js
new file mode 100644
index 000000000..ecf67cb6a
--- /dev/null
+++ b/www/config/EncryptionKeysView.js
@@ -0,0 +1,324 @@
+Ext.define('pbs-encryption-keys', {
+    extend: 'Ext.data.Model',
+    fields: ['id', 'type', 'hint', 'fingerprint', 'created', 'archived-at'],
+    idProperty: 'id',
+});
+
+Ext.define('PBS.config.EncryptionKeysView', {
+    extend: 'Ext.grid.GridPanel',
+    alias: 'widget.pbsEncryptionKeysView',
+
+    title: gettext('Encryption Keys'),
+
+    stateful: true,
+    stateId: 'grid-encryption-keys',
+
+    controller: {
+        xclass: 'Ext.app.ViewController',
+
+        addSyncEncryptionKey: function () {
+            let me = this;
+            Ext.create('PBS.window.EncryptionKeysEdit', {
+                listeners: {
+                    destroy: function () {
+                        me.reload();
+                    },
+                },
+            }).show();
+        },
+
+        addTapeEncryptionKey: function () {
+            let me = this;
+            Ext.create('PBS.TapeManagement.EncryptionEditWindow', {
+                listeners: {
+                    destroy: function () {
+                        me.reload();
+                    },
+                },
+            }).show();
+        },
+
+        archiveEncryptionKey: function () {
+            let me = this;
+            let view = me.getView();
+            let selection = view.getSelection();
+
+            if (!selection || selection.length < 1) {
+                return;
+            }
+
+            if (selection[0].data.type === 'tape') {
+                Ext.Msg.alert(gettext('Error'), gettext('cannot archive tape key'));
+                return;
+            }
+
+            let keyID = selection[0].data.id;
+            Proxmox.Utils.API2Request({
+                url: `/api2/extjs/config/encryption-keys/${keyID}`,
+                method: 'POST',
+                waitMsgTarget: view,
+                failure: function (response, opts) {
+                    Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+                },
+                success: function (response, opts) {
+                    view.getSelectionModel().deselectAll();
+                    me.reload();
+                },
+            });
+        },
+
+        removeEncryptionKey: function () {
+            let me = this;
+            let view = me.getView();
+            let selection = view.getSelection();
+
+            if (!selection || selection.length < 1) {
+                return;
+            }
+
+            let keyType = selection[0].data.type;
+            let keyID = selection[0].data.id;
+            let keyFp = selection[0].data.fingerprint;
+            let endpointUrl =
+                keyType === 'tape'
+                    ? `/api2/extjs/config/tape-encryption-keys/${keyFp}`
+                    : `/api2/extjs/config/encryption-keys/${keyID}`;
+
+            Ext.create('Proxmox.window.SafeDestroy', {
+                url: endpointUrl,
+                item: {
+                    id: `${keyType}/${keyID}`,
+                },
+                autoShow: true,
+                showProgress: false,
+                taskName: 'remove-encryption-key',
+                listeners: {
+                    destroy: () => me.reload(),
+                },
+                additionalItems: [
+                    {
+                        xtype: 'box',
+                        userCls: 'pmx-hint',
+                        style: {
+                            'inline-size': '375px',
+                            'overflow-wrap': 'break-word',
+                        },
+                        padding: '5',
+                        html: gettext(
+                            'Make sure you have a backup of the encryption key!<br><br>You will not be able to decrypt contents encrypted with this key once removed.',
+                        ),
+                    },
+                ],
+            }).show();
+        },
+
+        restoreEncryptionKey: function () {
+            Ext.create('Proxmox.window.Edit', {
+                title: gettext('Restore Key'),
+                isCreate: true,
+                submitText: gettext('Restore'),
+                method: 'POST',
+                url: `/api2/extjs/tape/drive`,
+                submitUrl: function (url, values) {
+                    let drive = values.drive;
+                    delete values.drive;
+                    return `${url}/${drive}/restore-key`;
+                },
+                items: [
+                    {
+                        xtype: 'pbsDriveSelector',
+                        fieldLabel: gettext('Drive'),
+                        name: 'drive',
+                    },
+                    {
+                        xtype: 'textfield',
+                        inputType: 'password',
+                        fieldLabel: gettext('Password'),
+                        name: 'password',
+                    },
+                ],
+            }).show();
+        },
+
+        reload: async function () {
+            let me = this;
+            let view = me.getView();
+
+            let syncKeysFuture = Proxmox.Async.api2({
+                url: '/api2/extjs/config/encryption-keys',
+                method: 'GET',
+                params: {
+                    'include-archived': true,
+                },
+            });
+
+            let tapeKeysFuture = Proxmox.Async.api2({
+                url: '/api2/extjs/config/tape-encryption-keys',
+                method: 'GET',
+            });
+
+            let [syncKeys, tapeKeys] = await Promise.all([syncKeysFuture, tapeKeysFuture]);
+
+            let combinedKeys = [];
+
+            syncKeys.result.data.forEach((key) => {
+                key.type = 'sync';
+                combinedKeys.push(key);
+            });
+
+            tapeKeys.result.data.forEach((key) => {
+                // generate a temp id for listing based on creation timestamp and fingerprint
+                key.id = `${key.created}-${key.fingerprint.substring(0, 9).replace(/:/g, '')}`;
+                key.type = 'tape';
+                combinedKeys.push(key);
+            });
+
+            let store = view.getStore().rstore;
+            store.loadData(combinedKeys);
+            store.fireEvent('load', store, combinedKeys, true);
+        },
+
+        init: function () {
+            let me = this;
+            me.reload();
+            me.updateTask = Ext.TaskManager.start({
+                run: () => me.reload(),
+                interval: 5000,
+            });
+        },
+
+        destroy: function () {
+            let me = this;
+            if (me.updateTask) {
+                Ext.TaskManager.stop(me.updateTask);
+            }
+        },
+    },
+
+    listeners: {
+        activate: 'reload',
+    },
+
+    store: {
+        type: 'diff',
+        autoDestroy: true,
+        autoDestroyRstore: true,
+        sorters: 'id',
+        rstore: {
+            type: 'store',
+            storeid: 'pbs-encryption-keys',
+            model: 'pbs-encryption-keys',
+            proxy: {
+                type: 'memory',
+            },
+        },
+    },
+
+    tbar: [
+        {
+            text: gettext('Add'),
+            menu: [
+                {
+                    text: gettext('Add Sync Encryption Key'),
+                    iconCls: 'fa fa-check-circle',
+                    handler: 'addSyncEncryptionKey',
+                    selModel: false,
+                },
+                {
+                    text: gettext('Add Tape Encryption Key'),
+                    iconCls: 'pbs-icon-tape',
+                    handler: 'addTapeEncryptionKey',
+                    selModel: false,
+                },
+            ],
+        },
+        '-',
+        {
+            xtype: 'proxmoxButton',
+            text: gettext('Archive'),
+            handler: 'archiveEncryptionKey',
+            dangerous: true,
+            confirmMsg: Ext.String.format(
+                gettext('Archiving will render the key unusable to encrypt new content, proceed?'),
+            ),
+            disabled: true,
+            enableFn: (item) => item.data.type === 'sync' && !item.data['archived-at'],
+        },
+        '-',
+        {
+            xtype: 'proxmoxButton',
+            text: gettext('Remove'),
+            handler: 'removeEncryptionKey',
+            disabled: true,
+            enableFn: (item) =>
+                (item.data.type === 'sync' && !!item.data['archived-at']) ||
+                item.data.type === 'tape',
+        },
+        '-',
+        {
+            text: gettext('Restore Key'),
+            xtype: 'proxmoxButton',
+            handler: 'restoreEncryptionKey',
+            disabled: true,
+            enableFn: (item) => item.data.type === 'tape',
+        },
+    ],
+
+    viewConfig: {
+        trackOver: false,
+    },
+
+    columns: [
+        {
+            dataIndex: 'id',
+            header: gettext('Key ID'),
+            renderer: Ext.String.htmlEncode,
+            sortable: true,
+            width: 200,
+        },
+        {
+            dataIndex: 'type',
+            header: gettext('Type'),
+            renderer: function (value) {
+                let iconCls, label;
+                if (value === 'sync') {
+                    iconCls = 'fa fa-check-circle';
+                    label = gettext('Sync');
+                } else if (value === 'tape') {
+                    iconCls = 'pbs-icon-tape';
+                    label = gettext('Tape');
+                } else {
+                    return value;
+                }
+                return `<i class="${iconCls}"></i> ${label}`;
+            },
+            sortable: true,
+            width: 50,
+        },
+        {
+            dataIndex: 'hint',
+            header: gettext('Hint'),
+            sortable: true,
+            width: 50,
+        },
+        {
+            dataIndex: 'fingerprint',
+            header: gettext('Fingerprint'),
+            sortable: false,
+            width: 600,
+        },
+        {
+            dataIndex: 'created',
+            header: gettext('Created'),
+            renderer: Proxmox.Utils.render_timestamp,
+            sortable: true,
+            flex: 1,
+        },
+        {
+            dataIndex: 'archived-at',
+            header: gettext('Archived'),
+            renderer: (val) => (val ? Proxmox.Utils.render_timestamp(val) : ''),
+            sortable: true,
+            flex: 1,
+        },
+    ],
+});
diff --git a/www/window/EncryptionKeysEdit.js b/www/window/EncryptionKeysEdit.js
new file mode 100644
index 000000000..f5edf5dc4
--- /dev/null
+++ b/www/window/EncryptionKeysEdit.js
@@ -0,0 +1,383 @@
+Ext.define('PBS.ShowEncryptionKey', {
+    extend: 'Ext.window.Window',
+    xtype: 'pbsShowEncryptionKey',
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    width: 600,
+    modal: true,
+    resizable: false,
+    title: gettext('Important: Save your Encryption Key'),
+
+    // avoid close by ESC key, force user to more manual action
+    onEsc: Ext.emptyFn,
+    closable: false,
+
+    items: [
+        {
+            xtype: 'form',
+            layout: {
+                type: 'vbox',
+                align: 'stretch',
+            },
+            bodyPadding: 10,
+            border: false,
+            defaults: {
+                anchor: '100%',
+                border: false,
+                padding: '10 0 0 0',
+            },
+            items: [
+                {
+                    xtype: 'textfield',
+                    fieldLabel: gettext('Key ID'),
+                    labelWidth: 80,
+                    inputId: 'keyID',
+                    cbind: {
+                        value: '{keyID}',
+                    },
+                    editable: false,
+                },
+                {
+                    xtype: 'textfield',
+                    fieldLabel: gettext('Key'),
+                    labelWidth: 80,
+                    inputId: 'encryption-key',
+                    cbind: {
+                        value: '{key}',
+                    },
+                    editable: false,
+                },
+                {
+                    xtype: 'component',
+                    html:
+                        gettext(
+                            'Keep your encryption key safe, but easily accessible for disaster recovery.',
+                        ) +
+                        '<br>' +
+                        gettext('We recommend the following safe-keeping strategy:'),
+                },
+                {
+                    xtype: 'container',
+                    layout: 'hbox',
+                    items: [
+                        {
+                            xtype: 'component',
+                            html: '1. ' + gettext('Save the key in your password manager.'),
+                            flex: 1,
+                        },
+                        {
+                            xtype: 'button',
+                            text: gettext('Copy Key'),
+                            iconCls: 'fa fa-clipboard x-btn-icon-el-default-toolbar-small',
+                            cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+                            width: 110,
+                            handler: function (b) {
+                                document.getElementById('encryption-key').select();
+                                document.execCommand('copy');
+                            },
+                        },
+                    ],
+                },
+                {
+                    xtype: 'container',
+                    layout: 'hbox',
+                    items: [
+                        {
+                            xtype: 'component',
+                            html:
+                                '2. ' +
+                                gettext(
+                                    'Download the key to a USB (pen) drive, placed in a secure vault.',
+                                ),
+                            flex: 1,
+                        },
+                        {
+                            xtype: 'button',
+                            text: gettext('Download'),
+                            iconCls: 'fa fa-download x-btn-icon-el-default-toolbar-small',
+                            cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+                            width: 110,
+                            handler: function (b) {
+                                let showWindow = this.up('window');
+
+                                let filename = `${showWindow.keyID}.enc`;
+
+                                let hiddenElement = document.createElement('a');
+                                hiddenElement.href =
+                                    'data:attachment/text,' + encodeURI(showWindow.key);
+                                hiddenElement.target = '_blank';
+                                hiddenElement.download = filename;
+                                hiddenElement.click();
+                            },
+                        },
+                    ],
+                },
+                {
+                    xtype: 'container',
+                    layout: 'hbox',
+                    items: [
+                        {
+                            xtype: 'component',
+                            html:
+                                '3. ' +
+                                gettext('Print as paperkey, laminated and placed in a secure vault.'),
+                            flex: 1,
+                        },
+                        {
+                            xtype: 'button',
+                            text: gettext('Print Key'),
+                            iconCls: 'fa fa-print x-btn-icon-el-default-toolbar-small',
+                            cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+                            width: 110,
+                            handler: function (b) {
+                                let showWindow = this.up('window');
+                                showWindow.paperkey(showWindow.key);
+                            },
+                        },
+                    ],
+                },
+            ],
+        },
+        {
+            xtype: 'component',
+            border: false,
+            padding: '10 10 10 10',
+            userCls: 'pmx-hint',
+            html: gettext(
+                'Please save the encryption key - losing it will render any backup created with it unusable',
+            ),
+        },
+    ],
+    buttons: [
+        {
+            text: gettext('Close'),
+            handler: function (b) {
+                let showWindow = this.up('window');
+                showWindow.close();
+            },
+        },
+    ],
+    paperkey: function (keyString) {
+        let me = this;
+
+        const key = JSON.parse(keyString);
+
+        const qrwidth = 500;
+        let qrdiv = document.createElement('div');
+        let qrcode = new QRCode(qrdiv, {
+            width: qrwidth,
+            height: qrwidth,
+            correctLevel: QRCode.CorrectLevel.H,
+        });
+        qrcode.makeCode(keyString);
+
+        let shortKeyFP = '';
+        if (key.fingerprint) {
+            shortKeyFP = PBS.Utils.renderKeyID(key.fingerprint);
+        }
+
+        let printFrame = document.createElement('iframe');
+        Object.assign(printFrame.style, {
+            position: 'fixed',
+            right: '0',
+            bottom: '0',
+            width: '0',
+            height: '0',
+            border: '0',
+        });
+        const prettifiedKey = JSON.stringify(key, null, 2);
+        const keyQrBase64 = qrdiv.children[0].toDataURL('image/png');
+        const html = `<html><head><script>
+	    window.addEventListener('DOMContentLoaded', (ev) => window.print());
+	</script><style>@media print and (max-height: 150mm) {
+	  h4, p { margin: 0; font-size: 1em; }
+	}</style></head><body style="padding: 5px;">
+	<h4>Encryption Key '${me.keyID}' (${shortKeyFP})</h4>
+<p style="font-size:1.2em;font-family:monospace;white-space:pre-wrap;overflow-wrap:break-word;">
+-----BEGIN PROXMOX BACKUP KEY-----
+${prettifiedKey}
+-----END PROXMOX BACKUP KEY-----</p>
+	<center><img style="width: 100%; max-width: ${qrwidth}px;" src="${keyQrBase64}"></center>
+	</body></html>`;
+
+        printFrame.src = 'data:text/html;base64,' + btoa(html);
+        document.body.appendChild(printFrame);
+        me.on('destroy', () => document.body.removeChild(printFrame));
+    },
+});
+
+Ext.define('PBS.window.EncryptionKeysEdit', {
+    extend: 'Proxmox.window.Edit',
+    xtype: 'widget.pbsEncryptionKeysEdit',
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    width: 400,
+
+    fieldDefaults: { labelWidth: 120 },
+
+    subject: gettext('Encryption Key'),
+
+    cbindData: function (initialConfig) {
+        let me = this;
+
+        me.url = '/api2/extjs/config/encryption-keys';
+        me.method = 'POST';
+        me.autoLoad = false;
+
+        return {};
+    },
+
+    apiCallDone: function (success, response, options) {
+        let me = this;
+
+        if (!me.rendered) {
+            return;
+        }
+
+        let res = response.result.data;
+        if (!res) {
+            return;
+        }
+
+        let keyIdField = me.down('field[name=id]');
+        Ext.create('PBS.ShowEncryptionKey', {
+            autoShow: true,
+            keyID: keyIdField.getValue(),
+            key: JSON.stringify(res),
+        });
+    },
+
+    viewModel: {
+        data: {
+            keepCryptVisible: false,
+        },
+    },
+
+    items: [
+        {
+            xtype: 'pmxDisplayEditField',
+            name: 'id',
+            fieldLabel: gettext('Encryption Key ID'),
+            renderer: Ext.htmlEncode,
+            allowBlank: false,
+            minLength: 3,
+            editable: true,
+        },
+        {
+            xtype: 'displayfield',
+            name: 'crypt-key-fp',
+            fieldLabel: gettext('Key Source'),
+            padding: '2 0',
+        },
+        {
+            xtype: 'radiofield',
+            name: 'keysource',
+            value: true,
+            inputValue: 'new',
+            submitValue: false,
+            boxLabel: gettext('Auto-generate a new encryption key'),
+            padding: '0 0 0 25',
+        },
+        {
+            xtype: 'radiofield',
+            name: 'keysource',
+            inputValue: 'upload',
+            submitValue: false,
+            boxLabel: gettext('Upload an existing encryption key'),
+            padding: '0 0 0 25',
+            listeners: {
+                change: function (f, value) {
+                    let editWindow = this.up('window');
+                    if (!editWindow.rendered) {
+                        return;
+                    }
+                    let uploadKeyField = editWindow.down('field[name=key]');
+                    uploadKeyField.setDisabled(!value);
+                    uploadKeyField.setHidden(!value);
+
+                    let uploadKeyButton = editWindow.down('filebutton[name=upload-button]');
+                    uploadKeyButton.setDisabled(!value);
+                    uploadKeyButton.setHidden(!value);
+
+                    if (value) {
+                        uploadKeyField.validate();
+                    } else {
+                        uploadKeyField.reset();
+                    }
+                },
+            },
+        },
+        {
+            xtype: 'fieldcontainer',
+            layout: 'hbox',
+            items: [
+                {
+                    xtype: 'proxmoxtextfield',
+                    name: 'key',
+                    fieldLabel: gettext('Upload From File'),
+                    value: '',
+                    disabled: true,
+                    hidden: true,
+                    allowBlank: false,
+                    labelAlign: 'right',
+                    flex: 1,
+                    emptyText: gettext('Drag-and-drop key file here.'),
+                    validator: function (value) {
+                        if (value.length) {
+                            let key;
+                            try {
+                                key = JSON.parse(value);
+                            } catch (e) {
+                                return 'Failed to parse key - ' + e;
+                            }
+                            if (key.data === undefined) {
+                                return 'Does not seem like a valid Proxmox Backup key!';
+                            }
+                        }
+                        return true;
+                    },
+                    afterRender: function () {
+                        let me = this;
+                        if (!window.FileReader) {
+                            // No FileReader support in this browser
+                            return;
+                        }
+                        let cancel = function (ev) {
+                            ev = ev.event;
+                            if (ev.preventDefault) {
+                                ev.preventDefault();
+                            }
+                        };
+                        this.inputEl.on('dragover', cancel);
+                        this.inputEl.on('dragenter', cancel);
+                        this.inputEl.on('drop', (ev) => {
+                            cancel(ev);
+                            let reader = new FileReader();
+                            reader.onload = (ev) => me.setValue(ev.target.result);
+                            reader.readAsText(ev.event.dataTransfer.files[0]);
+                        });
+                    },
+                },
+                {
+                    xtype: 'filebutton',
+                    name: 'upload-button',
+                    iconCls: 'fa fa-fw fa-folder-open-o x-btn-icon-el-default-toolbar-small',
+                    cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+                    margin: '0 0 0 4',
+                    disabled: true,
+                    hidden: true,
+                    listeners: {
+                        change: function (btn, e, value) {
+                            let ev = e.event;
+                            let field = btn.up().down('proxmoxtextfield[name=key]');
+                            let reader = new FileReader();
+                            reader.onload = (ev) => field.setValue(ev.target.result);
+                            reader.readAsText(ev.target.files[0]);
+                            btn.reset();
+                        },
+                    },
+                },
+            ],
+        },
+    ],
+});
-- 
2.47.3
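The `reload()` handler in `EncryptionKeysView.js` above merges two API results into one grid store: sync keys already carry a stable `id`, while tape keys get a synthetic list id built from the creation timestamp plus a shortened, colon-stripped fingerprint. A standalone sketch of that merge (input data is hypothetical):

```javascript
// Sketch of the reload() merge: tag each key with its type and give tape
// keys a synthetic id so both kinds can share one grid store/model.
function mergeKeyLists(syncKeys, tapeKeys) {
    const combined = [];
    for (const key of syncKeys) {
        combined.push({ ...key, type: 'sync' });
    }
    for (const key of tapeKeys) {
        combined.push({
            ...key,
            type: 'tape',
            // temp id: creation timestamp + first fingerprint bytes, colons removed
            id: `${key.created}-${key.fingerprint.substring(0, 9).replace(/:/g, '')}`,
        });
    }
    return combined;
}

const rows = mergeKeyLists(
    [{ id: 'push-key-1', created: 1712768040 }],
    [{ created: 1712768041, fingerprint: 'ab:cd:ef:01:23:45' }],
);
console.log(rows.map((r) => r.id)); // [ 'push-key-1', '1712768041-abcdef' ]
```

Since the synthetic id is only used for listing, a fingerprint prefix is enough to keep rows distinct; the delete endpoint for tape keys still uses the full fingerprint, as the `removeEncryptionKey` handler shows.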






* [PATCH proxmox-backup v2 20/27] ui: expose assigning encryption key to sync jobs
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (18 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 19/27] ui: define and expose encryption key management menu item and windows Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 21/27] sync: pull: load encryption key if given in job config Christian Ebner
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

This allows selecting pre-defined encryption keys and assigning them
to the sync job configuration.

Sync keys can either be assigned as the active encryption key to sync
jobs in push direction, to be used when pushing new contents, or
associated with a sync job in pull direction, to be used to decrypt
contents with a matching key fingerprint.

Only keys that are not archived can be used as the active encryption
key, while associations can also be made with archived keys, so that
contents can still be decrypted on pull and key deletion is prevented
while a key is associated with either push or pull sync jobs.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 www/Makefile                      |  1 +
 www/form/EncryptionKeySelector.js | 96 +++++++++++++++++++++++++++++++
 www/window/SyncJobEdit.js         | 30 ++++++++++
 3 files changed, 127 insertions(+)
 create mode 100644 www/form/EncryptionKeySelector.js

diff --git a/www/Makefile b/www/Makefile
index 08ad50846..51da9d74e 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -55,6 +55,7 @@ JSSRC=							\
 	form/GroupSelector.js				\
 	form/GroupFilter.js				\
 	form/VerifyOutdatedAfter.js			\
+	form/EncryptionKeySelector.js			\
 	data/RunningTasksStore.js			\
 	button/TaskButton.js				\
 	panel/PrunePanel.js				\
diff --git a/www/form/EncryptionKeySelector.js b/www/form/EncryptionKeySelector.js
new file mode 100644
index 000000000..e0390e56a
--- /dev/null
+++ b/www/form/EncryptionKeySelector.js
@@ -0,0 +1,96 @@
+Ext.define('PBS.form.EncryptionKeySelector', {
+    extend: 'Ext.form.field.ComboBox',
+    alias: 'widget.pbsEncryptionKeySelector',
+
+    queryMode: 'local',
+
+    valueField: 'id',
+    displayField: 'id',
+
+    emptyText: gettext('None'),
+
+    listConfig: {
+        columns: [
+            {
+                dataIndex: 'id',
+                header: gettext('Key ID'),
+                sortable: true,
+                flex: 1,
+            },
+            {
+                dataIndex: 'created',
+                header: gettext('Created'),
+                sortable: true,
+                renderer: Proxmox.Utils.render_timestamp,
+                flex: 1,
+            },
+            {
+                dataIndex: 'archived-at',
+                header: gettext('Archived'),
+                renderer: (val) => (val ? Proxmox.Utils.render_timestamp(val) : ''),
+                sortable: true,
+                flex: 1,
+            },
+        ],
+        emptyText: `<div class="x-grid-empty">${gettext('No key accessible.')}</div>`,
+    },
+
+    config: {
+        deleteEmpty: true,
+        extraRequestParams: {},
+    },
+    // override framework function to implement deleteEmpty behaviour
+    getSubmitData: function () {
+        let me = this;
+
+        let data = null;
+        if (!me.disabled && me.submitValue) {
+            let val = me.getSubmitValue();
+            if (val !== null && val !== '') {
+                data = {};
+                data[me.getName()] = val;
+            } else if (me.getDeleteEmpty()) {
+                data = {};
+                data.delete = me.getName();
+            }
+        }
+
+        return data;
+    },
+
+    triggers: {
+        clear: {
+            cls: 'pmx-clear-trigger',
+            weight: -1,
+            hidden: true,
+            handler: function () {
+                this.triggers.clear.setVisible(false);
+                this.setValue('');
+            },
+        },
+    },
+
+    listeners: {
+        change: function (field, value) {
+            let canClear = (value ?? '') !== '';
+            field.triggers.clear.setVisible(canClear);
+        },
+    },
+
+    initComponent: function () {
+        let me = this;
+
+        me.store = Ext.create('Ext.data.Store', {
+            model: 'pbs-encryption-keys',
+            autoLoad: true,
+            proxy: {
+                type: 'proxmox',
+                timeout: 30 * 1000,
+                url: `/api2/json/config/encryption-keys`,
+                extraParams: me.extraRequestParams,
+            },
+        });
+
+        me.callParent();
+    },
+});
diff --git a/www/window/SyncJobEdit.js b/www/window/SyncJobEdit.js
index 074c7855a..9994f14e8 100644
--- a/www/window/SyncJobEdit.js
+++ b/www/window/SyncJobEdit.js
@@ -34,6 +34,7 @@ Ext.define('PBS.window.SyncJobEdit', {
         if (me.syncDirection === 'push') {
             me.subject = gettext('Sync Job - Push Direction');
             me.syncDirectionPush = true;
+            me.syncCryptKeyMultiSelect = false;
             me.syncRemoteLabel = gettext('Target Remote');
             me.syncRemoteDatastore = gettext('Target Datastore');
             me.syncRemoteNamespace = gettext('Target Namespace');
@@ -560,6 +561,35 @@ Ext.define('PBS.window.SyncJobEdit', {
                     },
                 ],
             },
+            {
+                xtype: 'inputpanel',
+                title: gettext('Encryption'),
+                column1: [
+                    {
+                        xtype: 'pbsEncryptionKeySelector',
+                        name: 'active-encryption-key',
+                        fieldLabel: gettext('Active Encryption Key'),
+                        multiSelect: false,
+                        cbind: {
+                            deleteEmpty: '{!isCreate}',
+                            disabled: '{!syncDirectionPush}',
+                            hidden: '{!syncDirectionPush}',
+                        },
+                    },
+                    {
+                        xtype: 'pbsEncryptionKeySelector',
+                        name: 'associated-key',
+                        fieldLabel: gettext('Associated Keys'),
+                        multiSelect: true,
+                        cbind: {
+                            deleteEmpty: '{!isCreate}',
+                        },
+                        extraRequestParams: {
+                            'include-archived': true,
+                        },
+                    },
+                ],
+            },
         ],
     },
 });
-- 
2.47.3





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 21/27] sync: pull: load encryption key if given in job config
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (19 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 20/27] ui: expose assigning encryption key to sync jobs Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 22/27] sync: expand source chunk reader trait by crypt config Christian Ebner
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

If configured and passed in on PullParameters construction, check
access and load the encryption keys. Any snapshot matching one of
these key fingerprints should be decrypted during the pull sync.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/api2/pull.rs   | 15 +++++++++++++--
 src/server/pull.rs | 19 +++++++++++++++++++
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 4b1fd5e60..7ca12fe99 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -7,8 +7,8 @@ use proxmox_router::{Permission, Router, RpcEnvironment};
 use proxmox_schema::api;
 
 use pbs_api_types::{
-    Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, DATASTORE_SCHEMA,
-    GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
+    Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, CRYPT_KEY_ID_SCHEMA,
+    DATASTORE_SCHEMA, GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
     PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA,
     RESYNC_CORRUPT_SCHEMA, SYNC_ENCRYPTED_ONLY_SCHEMA, SYNC_VERIFIED_ONLY_SCHEMA,
     TRANSFER_LAST_SCHEMA,
@@ -91,6 +91,7 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
             sync_job.encrypted_only,
             sync_job.verified_only,
             sync_job.resync_corrupt,
+            sync_job.associated_key.clone(),
         )
     }
 }
@@ -148,6 +149,14 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
                 schema: RESYNC_CORRUPT_SCHEMA,
                 optional: true,
             },
+            "decryption-keys": {
+                type: Array,
+                description: "List of decryption keys.",
+                items: {
+                    schema: CRYPT_KEY_ID_SCHEMA,
+                },
+                optional: true,
+            },
         },
     },
     access: {
@@ -175,6 +184,7 @@ async fn pull(
     encrypted_only: Option<bool>,
     verified_only: Option<bool>,
     resync_corrupt: Option<bool>,
+    decryption_keys: Option<Vec<String>>,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -215,6 +225,7 @@ async fn pull(
         encrypted_only,
         verified_only,
         resync_corrupt,
+        decryption_keys,
     )?;
 
     // fixme: set to_stdout to false?
diff --git a/src/server/pull.rs b/src/server/pull.rs
index bd3e8bef4..87c71a9ab 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -8,6 +8,7 @@ use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
 
 use anyhow::{bail, format_err, Context, Error};
+use pbs_tools::crypt_config::CryptConfig;
 use proxmox_human_byte::HumanByte;
 use tracing::{info, warn};
 
@@ -65,6 +66,8 @@ pub(crate) struct PullParameters {
     verified_only: bool,
     /// Whether to re-sync corrupted snapshots
     resync_corrupt: bool,
+    /// Decryption key config to decrypt snapshots with matching key fingerprint
+    crypt_configs: Vec<Arc<CryptConfig>>,
 }
 
 impl PullParameters {
@@ -85,6 +88,7 @@ impl PullParameters {
         encrypted_only: Option<bool>,
         verified_only: Option<bool>,
         resync_corrupt: Option<bool>,
+        decryption_keys: Option<Vec<String>>,
     ) -> Result<Self, Error> {
         if let Some(max_depth) = max_depth {
             ns.check_max_depth(max_depth)?;
@@ -126,6 +130,20 @@ impl PullParameters {
 
         let group_filter = group_filter.unwrap_or_default();
 
+        let crypt_configs = if let Some(key_ids) = &decryption_keys {
+            let mut crypt_configs = Vec::with_capacity(key_ids.len());
+            for key_id in key_ids {
+                if let Some(crypt_config) =
+                    crate::server::sync::check_privs_and_load_key_config(key_id, &owner, false)?
+                {
+                    crypt_configs.push(crypt_config);
+                }
+            }
+            crypt_configs
+        } else {
+            Vec::new()
+        };
+
         Ok(Self {
             source,
             target,
@@ -137,6 +155,7 @@ impl PullParameters {
             encrypted_only,
             verified_only,
             resync_corrupt,
+            crypt_configs,
         })
     }
 }
-- 
2.47.3
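The key-loading loop added to `PullParameters::new` above can be condensed into a small sketch. Note this is a simplified model, not the actual PBS code: `CryptConfig` is a placeholder type and `load_key` is a hypothetical stand-in for `crate::server::sync::check_privs_and_load_key_config`, which may return `Ok(None)` when the privilege check filters a key out.

```rust
use std::sync::Arc;

// placeholder for pbs_tools::crypt_config::CryptConfig
struct CryptConfig {
    _key_id: String,
}

// hypothetical stand-in for check_privs_and_load_key_config: keys the
// owner may not access are filtered out (modeled by a "denied-" prefix)
fn load_key(key_id: &str) -> Result<Option<Arc<CryptConfig>>, String> {
    if key_id.starts_with("denied-") {
        return Ok(None);
    }
    Ok(Some(Arc::new(CryptConfig {
        _key_id: key_id.to_string(),
    })))
}

// mirrors the construction logic: no configured keys means an empty
// list, otherwise each accessible key yields one crypt config
fn build_crypt_configs(
    decryption_keys: Option<Vec<String>>,
) -> Result<Vec<Arc<CryptConfig>>, String> {
    match &decryption_keys {
        Some(key_ids) => {
            let mut configs = Vec::with_capacity(key_ids.len());
            for key_id in key_ids {
                if let Some(config) = load_key(key_id)? {
                    configs.push(config);
                }
            }
            Ok(configs)
        }
        None => Ok(Vec::new()),
    }
}
```

Inaccessible keys are silently skipped here rather than failing the whole job; whether that matches the intended error handling depends on `check_privs_and_load_key_config` itself.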





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 22/27] sync: expand source chunk reader trait by crypt config
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (20 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 21/27] sync: pull: load encryption key if given in job config Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 23/27] sync: pull: introduce and use decrypt index writer if " Christian Ebner
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Allow passing in the crypt config for the source chunk reader, making
it possible to decrypt chunks when fetching them.

This will be used by the pull sync job to decrypt snapshot chunks
which have been encrypted with an encryption key matching the one in
the pull job configuration.

This remains disarmed by not setting the crypt config until the rest
of the logic to correctly decrypt snapshots on pull, including
manifest, index files and chunks, is put in place in subsequent code
changes.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/pull.rs |  8 ++++++--
 src/server/push.rs |  4 ++--
 src/server/sync.rs | 28 ++++++++++++++++++++++------
 3 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 87c71a9ab..39f4b2d75 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -304,6 +304,7 @@ async fn pull_single_archive<'a>(
     snapshot: &'a pbs_datastore::BackupDir,
     archive_info: &'a FileInfo,
     encountered_chunks: Arc<Mutex<EncounteredChunks>>,
+    crypt_config: Option<Arc<CryptConfig>>,
     backend: &DatastoreBackend,
 ) -> Result<SyncStats, Error> {
     let archive_name = &archive_info.filename;
@@ -334,7 +335,7 @@ async fn pull_single_archive<'a>(
             } else {
                 let stats = pull_index_chunks(
                     reader
-                        .chunk_reader(archive_info.crypt_mode)
+                        .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
                         .context("failed to get chunk reader")?,
                     snapshot.datastore().clone(),
                     index,
@@ -357,7 +358,7 @@ async fn pull_single_archive<'a>(
             } else {
                 let stats = pull_index_chunks(
                     reader
-                        .chunk_reader(archive_info.crypt_mode)
+                        .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
                         .context("failed to get chunk reader")?,
                     snapshot.datastore().clone(),
                     index,
@@ -471,6 +472,8 @@ async fn pull_snapshot<'a>(
         return Ok(sync_stats);
     }
 
+    let mut crypt_config = None;
+
     let backend = &params.target.backend;
     for item in manifest.files() {
         let mut path = snapshot.full_path();
@@ -517,6 +520,7 @@ async fn pull_snapshot<'a>(
             snapshot,
             item,
             encountered_chunks.clone(),
+            crypt_config.clone(),
             backend,
         )
         .await?;
diff --git a/src/server/push.rs b/src/server/push.rs
index f433ca50d..1375958fe 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1005,7 +1005,7 @@ pub(crate) async fn push_snapshot(
                 ArchiveType::DynamicIndex => {
                     let index = DynamicIndexReader::open(&path)?;
                     let chunk_reader = reader
-                        .chunk_reader(entry.chunk_crypt_mode())
+                        .chunk_reader(None, entry.chunk_crypt_mode())
                         .context("failed to get chunk reader")?;
                     let upload_stats = push_index(
                         &archive_name,
@@ -1033,7 +1033,7 @@ pub(crate) async fn push_snapshot(
                 ArchiveType::FixedIndex => {
                     let index = FixedIndexReader::open(&path)?;
                     let chunk_reader = reader
-                        .chunk_reader(entry.chunk_crypt_mode())
+                        .chunk_reader(None, entry.chunk_crypt_mode())
                         .context("failed to get chunk reader")?;
                     let size = index.index_bytes();
                     let upload_stats = push_index(
diff --git a/src/server/sync.rs b/src/server/sync.rs
index 6b84ae6d7..dce9c99ee 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -90,7 +90,11 @@ impl SyncStats {
 /// and checking whether chunk sync should be skipped.
 pub(crate) trait SyncSourceReader: Send + Sync {
     /// Returns a chunk reader with the specified encryption mode.
-    fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error>;
+    fn chunk_reader(
+        &self,
+        crypt_config: Option<Arc<CryptConfig>>,
+        crypt_mode: CryptMode,
+    ) -> Result<Arc<dyn AsyncReadChunk>, Error>;
 
     /// Asynchronously loads a file from the source into a local file.
     /// `filename` is the name of the file to load from the source.
@@ -117,9 +121,17 @@ pub(crate) struct LocalSourceReader {
 
 #[async_trait::async_trait]
 impl SyncSourceReader for RemoteSourceReader {
-    fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error> {
-        let chunk_reader =
-            RemoteChunkReader::new(self.backup_reader.clone(), None, crypt_mode, HashMap::new());
+    fn chunk_reader(
+        &self,
+        crypt_config: Option<Arc<CryptConfig>>,
+        crypt_mode: CryptMode,
+    ) -> Result<Arc<dyn AsyncReadChunk>, Error> {
+        let chunk_reader = RemoteChunkReader::new(
+            self.backup_reader.clone(),
+            crypt_config,
+            crypt_mode,
+            HashMap::new(),
+        );
         Ok(Arc::new(chunk_reader))
     }
 
@@ -191,8 +203,12 @@ impl SyncSourceReader for RemoteSourceReader {
 
 #[async_trait::async_trait]
 impl SyncSourceReader for LocalSourceReader {
-    fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error> {
-        let chunk_reader = LocalChunkReader::new(self.datastore.clone(), None, crypt_mode)?;
+    fn chunk_reader(
+        &self,
+        crypt_config: Option<Arc<CryptConfig>>,
+        crypt_mode: CryptMode,
+    ) -> Result<Arc<dyn AsyncReadChunk>, Error> {
+        let chunk_reader = LocalChunkReader::new(self.datastore.clone(), crypt_config, crypt_mode)?;
         Ok(Arc::new(chunk_reader))
     }
 
-- 
2.47.3
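The trait change boils down to threading an `Option<Arc<CryptConfig>>` through to the reader constructors, so existing call sites stay backward compatible by passing `None`. A minimal sketch of that shape, with placeholder types instead of the real `CryptConfig`, `CryptMode` and `AsyncReadChunk`:

```rust
use std::sync::Arc;

// placeholders for the real pbs types
struct CryptConfig;

#[derive(Clone, Copy)]
enum CryptMode {
    None,
    Encrypt,
}

trait SyncSourceReader {
    // None keeps the previous behaviour (no decryption); Some(..) lets
    // the reader decrypt chunks while fetching
    fn chunk_reader(
        &self,
        crypt_config: Option<Arc<CryptConfig>>,
        crypt_mode: CryptMode,
    ) -> String;
}

struct RemoteSourceReader;

impl SyncSourceReader for RemoteSourceReader {
    fn chunk_reader(
        &self,
        crypt_config: Option<Arc<CryptConfig>>,
        _crypt_mode: CryptMode,
    ) -> String {
        // the real impl forwards crypt_config to RemoteChunkReader::new;
        // here we just report which kind of reader would be built
        match crypt_config {
            Some(_) => "decrypting reader".to_string(),
            None => "plain reader".to_string(),
        }
    }
}
```

This is why the push code paths in the patch simply gain a `None` argument: their behaviour is unchanged until pull starts supplying a config.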





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 23/27] sync: pull: introduce and use decrypt index writer if crypt config
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (21 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 22/27] sync: expand source chunk reader trait by crypt config Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 24/27] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

In order to decrypt an encrypted index file during a pull sync job
when a matching decryption key is configured, the index has to be
rewritten, as the chunks have to be decrypted and the new digests
calculated based on the decrypted chunk contents. The newly written
index file finally needs to replace the original one, achieved by
replacing the original tempfile after pulling the chunks.

In order to be able to do so, provide a DecryptedIndexWriter instance
to the chunk pulling logic. The DecryptedIndexWriter provides variants
for fixed and dynamic index writers, or none if no rewriting should
happen.

This remains disarmed for the time being by never passing the crypt
config until the logic to decrypt the chunks and re-calculate the
digests is in place, done in subsequent code changes.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/pull.rs | 133 ++++++++++++++++++++++++++++++---------------
 1 file changed, 88 insertions(+), 45 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 39f4b2d75..7f5c00ddb 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -21,8 +21,8 @@ use pbs_api_types::{
 use pbs_client::BackupRepository;
 use pbs_config::CachedUserInfo;
 use pbs_datastore::data_blob::DataBlob;
-use pbs_datastore::dynamic_index::DynamicIndexReader;
-use pbs_datastore::fixed_index::FixedIndexReader;
+use pbs_datastore::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
+use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::manifest::{BackupManifest, FileInfo};
 use pbs_datastore::read_chunk::AsyncReadChunk;
@@ -166,6 +166,7 @@ async fn pull_index_chunks<I: IndexFile>(
     index: I,
     encountered_chunks: Arc<Mutex<EncounteredChunks>>,
     backend: &DatastoreBackend,
+    decrypted_index_writer: DecryptedIndexWriter,
 ) -> Result<SyncStats, Error> {
     use futures::stream::{self, StreamExt, TryStreamExt};
 
@@ -201,55 +202,61 @@ async fn pull_index_chunks<I: IndexFile>(
     let bytes = Arc::new(AtomicUsize::new(0));
     let chunk_count = Arc::new(AtomicUsize::new(0));
 
-    stream
-        .map(|info| {
-            let target = Arc::clone(&target);
-            let chunk_reader = chunk_reader.clone();
-            let bytes = Arc::clone(&bytes);
-            let chunk_count = Arc::clone(&chunk_count);
-            let verify_and_write_channel = verify_and_write_channel.clone();
-            let encountered_chunks = Arc::clone(&encountered_chunks);
+    let stream = stream.map(|info| {
+        let target = Arc::clone(&target);
+        let chunk_reader = chunk_reader.clone();
+        let bytes = Arc::clone(&bytes);
+        let chunk_count = Arc::clone(&chunk_count);
+        let verify_and_write_channel = verify_and_write_channel.clone();
+        let encountered_chunks = Arc::clone(&encountered_chunks);
 
-            Ok::<_, Error>(async move {
-                {
-                    // limit guard scope
-                    let mut guard = encountered_chunks.lock().unwrap();
-                    if let Some(touched) = guard.check_reusable(&info.digest) {
-                        if touched {
-                            return Ok::<_, Error>(());
-                        }
-                        let chunk_exists = proxmox_async::runtime::block_in_place(|| {
-                            target.cond_touch_chunk(&info.digest, false)
-                        })?;
-                        if chunk_exists {
-                            guard.mark_touched(&info.digest);
-                            //info!("chunk {} exists {}", pos, hex::encode(digest));
-                            return Ok::<_, Error>(());
-                        }
+        Ok::<_, Error>(async move {
+            {
+                // limit guard scope
+                let mut guard = encountered_chunks.lock().unwrap();
+                if let Some(touched) = guard.check_reusable(&info.digest) {
+                    if touched {
+                        return Ok::<_, Error>(());
+                    }
+                    let chunk_exists = proxmox_async::runtime::block_in_place(|| {
+                        target.cond_touch_chunk(&info.digest, false)
+                    })?;
+                    if chunk_exists {
+                        guard.mark_touched(&info.digest);
+                        //info!("chunk {} exists {}", pos, hex::encode(digest));
+                        return Ok::<_, Error>(());
                     }
-                    // mark before actually downloading the chunk, so this happens only once
-                    guard.mark_reusable(&info.digest);
-                    guard.mark_touched(&info.digest);
                 }
+                // mark before actually downloading the chunk, so this happens only once
+                guard.mark_reusable(&info.digest);
+                guard.mark_touched(&info.digest);
+            }
 
-                //info!("sync {} chunk {}", pos, hex::encode(digest));
-                let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
-                let raw_size = chunk.raw_size() as usize;
+            //info!("sync {} chunk {}", pos, hex::encode(digest));
+            let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+            let raw_size = chunk.raw_size() as usize;
 
-                // decode, verify and write in a separate threads to maximize throughput
-                proxmox_async::runtime::block_in_place(|| {
-                    verify_and_write_channel.send((chunk, info.digest, info.size()))
-                })?;
+            // decode, verify and write in a separate threads to maximize throughput
+            proxmox_async::runtime::block_in_place(|| {
+                verify_and_write_channel.send((chunk, info.digest, info.size()))
+            })?;
 
-                bytes.fetch_add(raw_size, Ordering::SeqCst);
-                chunk_count.fetch_add(1, Ordering::SeqCst);
+            bytes.fetch_add(raw_size, Ordering::SeqCst);
+            chunk_count.fetch_add(1, Ordering::SeqCst);
 
-                Ok(())
-            })
+            Ok(())
         })
-        .try_buffer_unordered(20)
-        .try_for_each(|_res| futures::future::ok(()))
-        .await?;
+    });
+
+    if let DecryptedIndexWriter::None = decrypted_index_writer {
+        stream
+            .try_buffer_unordered(20)
+            .try_for_each(|_res| futures::future::ok(()))
+            .await?;
+    } else {
+        // must keep chunk order to correctly rewrite index file
+        stream.try_for_each(|item| item).await?;
+    }
 
     drop(verify_and_write_channel);
 
@@ -330,9 +337,15 @@ async fn pull_single_archive<'a>(
             let (csum, size) = index.compute_csum();
             verify_archive(archive_info, &csum, size)?;
 
-            if reader.skip_chunk_sync(snapshot.datastore().name()) {
+            if crypt_config.is_none() && reader.skip_chunk_sync(snapshot.datastore().name()) {
                 info!("skipping chunk sync for same datastore");
             } else {
+                let new_index_writer = if crypt_config.is_some() {
+                    let writer = DynamicIndexWriter::create(&path)?;
+                    DecryptedIndexWriter::Dynamic(Arc::new(Mutex::new(writer)))
+                } else {
+                    DecryptedIndexWriter::None
+                };
                 let stats = pull_index_chunks(
                     reader
                         .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
@@ -341,8 +354,16 @@ async fn pull_single_archive<'a>(
                     index,
                     encountered_chunks,
                     backend,
+                    new_index_writer.clone(),
                 )
                 .await?;
+                if let DecryptedIndexWriter::Dynamic(index) = &new_index_writer {
+                    let csum = index.lock().unwrap().close()?;
+
+                    // Overwrite current tmp file so it will be persisted instead
+                    std::fs::rename(&path, &tmp_path)?;
+                }
+
                 sync_stats.add(stats);
             }
         }
@@ -353,9 +374,16 @@ async fn pull_single_archive<'a>(
             let (csum, size) = index.compute_csum();
             verify_archive(archive_info, &csum, size)?;
 
-            if reader.skip_chunk_sync(snapshot.datastore().name()) {
+            if crypt_config.is_none() && reader.skip_chunk_sync(snapshot.datastore().name()) {
                 info!("skipping chunk sync for same datastore");
             } else {
+                let new_index_writer = if crypt_config.is_some() {
+                    let writer =
+                        FixedIndexWriter::create(&path, Some(size), index.chunk_size as u32)?;
+                    DecryptedIndexWriter::Fixed(Arc::new(Mutex::new(writer)))
+                } else {
+                    DecryptedIndexWriter::None
+                };
                 let stats = pull_index_chunks(
                     reader
                         .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
@@ -364,8 +392,16 @@ async fn pull_single_archive<'a>(
                     index,
                     encountered_chunks,
                     backend,
+                    new_index_writer.clone(),
                 )
                 .await?;
+                if let DecryptedIndexWriter::Fixed(index) = &new_index_writer {
+                    let csum = index.lock().unwrap().close()?;
+
+                    // Overwrite current tmp file so it will be persisted instead
+                    std::fs::rename(&path, &tmp_path)?;
+                }
+
                 sync_stats.add(stats);
             }
         }
@@ -1280,3 +1316,10 @@ impl EncounteredChunks {
         self.chunk_set.clear();
     }
 }
+
+#[derive(Clone)]
+enum DecryptedIndexWriter {
+    Fixed(Arc<Mutex<FixedIndexWriter>>),
+    Dynamic(Arc<Mutex<DynamicIndexWriter>>),
+    None,
+}
-- 
2.47.3
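The key behavioural switch in the patch is that an active index writer forces sequential chunk processing (`try_for_each`) instead of the usual `try_buffer_unordered(20)`, because a rewritten index must record the decrypted digests at the correct positions. A toy sketch of that decision, with simplified variants and `u32` values standing in for chunk work items:

```rust
// simplified stand-in for the enum added by the patch; the real
// variants wrap Arc<Mutex<FixedIndexWriter>> / DynamicIndexWriter
enum DecryptedIndexWriter {
    Fixed,
    Dynamic,
    None,
}

// Models the ordering decision in pull_index_chunks. With no writer,
// completion order is unspecified (modeled here by reversing, since
// try_buffer_unordered gives no ordering guarantee); with a writer,
// chunk order must be kept to correctly rewrite the index file.
fn process_chunks(chunks: &[u32], writer: &DecryptedIndexWriter) -> Vec<u32> {
    match writer {
        DecryptedIndexWriter::None => chunks.iter().rev().copied().collect(),
        DecryptedIndexWriter::Fixed | DecryptedIndexWriter::Dynamic => chunks.to_vec(),
    }
}
```

The cost of the sequential path is lost download concurrency, which only applies when a decryption rewrite is actually requested.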





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 24/27] sync: pull: extend encountered chunk by optional decrypted digest
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (22 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 23/27] sync: pull: introduce and use decrypt index writer if " Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 25/27] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
                   ` (2 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

For index files being decrypted during the pull, it is not enough to
keep track of the processed source chunks, the decrypted digest has to
be known as well in order to rewrite the index file.

Extend the encountered chunks state such that this can be tracked as
well. To not introduce clippy warnings and to keep the code readable,
introduce the EncounteredChunkInfo struct as the internal type for the
hash map values.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/pull.rs | 53 +++++++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 7f5c00ddb..1efe24d46 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -178,7 +178,7 @@ async fn pull_index_chunks<I: IndexFile>(
             .filter(|info| {
                 let guard = encountered_chunks.lock().unwrap();
                 match guard.check_reusable(&info.digest) {
-                    Some(touched) => !touched, // reusable and already touched, can always skip
+                    Some((touched, _decrypted_chunk)) => !touched, // reusable and already touched, can always skip
                     None => true,
                 }
             }),
@@ -214,7 +214,7 @@ async fn pull_index_chunks<I: IndexFile>(
             {
                 // limit guard scope
                 let mut guard = encountered_chunks.lock().unwrap();
-                if let Some(touched) = guard.check_reusable(&info.digest) {
+                if let Some((touched, _decrypted_digest)) = guard.check_reusable(&info.digest) {
                     if touched {
                         return Ok::<_, Error>(());
                     }
@@ -222,14 +222,14 @@ async fn pull_index_chunks<I: IndexFile>(
                         target.cond_touch_chunk(&info.digest, false)
                     })?;
                     if chunk_exists {
-                        guard.mark_touched(&info.digest);
+                        guard.mark_touched(&info.digest, None);
                         //info!("chunk {} exists {}", pos, hex::encode(digest));
                         return Ok::<_, Error>(());
                     }
                 }
                 // mark before actually downloading the chunk, so this happens only once
-                guard.mark_reusable(&info.digest);
-                guard.mark_touched(&info.digest);
+                guard.mark_reusable(&info.digest, None);
+                guard.mark_touched(&info.digest, None);
             }
 
             //info!("sync {} chunk {}", pos, hex::encode(digest));
@@ -824,7 +824,7 @@ async fn pull_group(
 
                         for pos in 0..index.index_count() {
                             let chunk_info = index.chunk_info(pos).unwrap();
-                            reusable_chunks.mark_reusable(&chunk_info.digest);
+                            reusable_chunks.mark_reusable(&chunk_info.digest, None);
                         }
                     }
                 }
@@ -1254,12 +1254,17 @@ async fn pull_ns(
     Ok((progress, sync_stats, errors))
 }
 
+struct EncounteredChunkInfo {
+    reusable: bool,
+    touched: bool,
+    decrypted_digest: Option<[u8; 32]>,
+}
+
 /// Store the state of encountered chunks, tracking if they can be reused for the
 /// index file currently being pulled and if the chunk has already been touched
 /// during this sync.
 struct EncounteredChunks {
-    // key: digest, value: (reusable, touched)
-    chunk_set: HashMap<[u8; 32], (bool, bool)>,
+    chunk_set: HashMap<[u8; 32], EncounteredChunkInfo>,
 }
 
 impl EncounteredChunks {
@@ -1272,12 +1277,12 @@ impl EncounteredChunks {
 
     /// Check if the current state allows to reuse this chunk and if so,
     /// if the chunk has already been touched.
-    fn check_reusable(&self, digest: &[u8; 32]) -> Option<bool> {
-        if let Some((reusable, touched)) = self.chunk_set.get(digest) {
-            if !reusable {
+    fn check_reusable(&self, digest: &[u8; 32]) -> Option<(bool, Option<&[u8; 32]>)> {
+        if let Some(chunk_info) = self.chunk_set.get(digest) {
+            if !chunk_info.reusable {
                 None
             } else {
-                Some(*touched)
+                Some((chunk_info.touched, chunk_info.decrypted_digest.as_ref()))
             }
         } else {
             None
@@ -1285,28 +1290,36 @@ impl EncounteredChunks {
     }
 
     /// Mark chunk as reusable, inserting it as un-touched if not present
-    fn mark_reusable(&mut self, digest: &[u8; 32]) {
+    fn mark_reusable(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
         match self.chunk_set.entry(*digest) {
             Entry::Occupied(mut occupied) => {
-                let (reusable, _touched) = occupied.get_mut();
-                *reusable = true;
+                let chunk_info = occupied.get_mut();
+                chunk_info.reusable = true;
             }
             Entry::Vacant(vacant) => {
-                vacant.insert((true, false));
+                vacant.insert(EncounteredChunkInfo {
+                    reusable: true,
+                    touched: false,
+                    decrypted_digest,
+                });
             }
         }
     }
 
     /// Mark chunk as touched during this sync, inserting it as not reusable
     /// but touched if not present.
-    fn mark_touched(&mut self, digest: &[u8; 32]) {
+    fn mark_touched(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
         match self.chunk_set.entry(*digest) {
             Entry::Occupied(mut occupied) => {
-                let (_reusable, touched) = occupied.get_mut();
-                *touched = true;
+                let chunk_info = occupied.get_mut();
+                chunk_info.touched = true;
             }
             Entry::Vacant(vacant) => {
-                vacant.insert((false, true));
+                vacant.insert(EncounteredChunkInfo {
+                    reusable: false,
+                    touched: true,
+                    decrypted_digest,
+                });
             }
         }
     }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH proxmox-backup v2 25/27] sync: pull: decrypt blob files on pull if encryption key is configured
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (23 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 24/27] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 26/27] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 27/27] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

During pull, blob files are stored in a temporary file before being
renamed to the actual blob filename as stored in the manifest.
If a decryption key is configured in the pull parameters, decrypt the
temporary blob file after downloading it from the remote and re-encode
it as a new, compressed but unencrypted blob file. The decrypted
tempfile is then renamed over the original tmpfile, which is finally
moved into place.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/pull.rs | 49 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 47 insertions(+), 2 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 1efe24d46..ce32afcd7 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -2,7 +2,7 @@
 
 use std::collections::hash_map::Entry;
 use std::collections::{HashMap, HashSet};
-use std::io::Seek;
+use std::io::{BufReader, Read, Seek, Write};
 use std::sync::atomic::{AtomicUsize, Ordering};
 use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
@@ -26,7 +26,9 @@ use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::manifest::{BackupManifest, FileInfo};
 use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{check_backup_owner, DataStore, DatastoreBackend, StoreProgress};
+use pbs_datastore::{
+    check_backup_owner, DataBlobReader, DataStore, DatastoreBackend, StoreProgress,
+};
 use pbs_tools::sha::sha256;
 
 use super::sync::{
@@ -409,6 +411,49 @@ async fn pull_single_archive<'a>(
             tmpfile.rewind()?;
             let (csum, size) = sha256(&mut tmpfile)?;
             verify_archive(archive_info, &csum, size)?;
+
+            if crypt_config.is_some() {
+                let crypt_config = crypt_config.clone();
+                let tmp_path = tmp_path.clone();
+                let archive_name = archive_name.clone();
+
+                tokio::task::spawn_blocking(move || {
+                    // must rewind again since after verifying cursor is at the end of the file
+                    tmpfile.rewind()?;
+                    let reader = DataBlobReader::new(tmpfile, crypt_config)?;
+                    let mut reader = BufReader::new(reader);
+                    let mut raw_data = Vec::new();
+                    reader.read_to_end(&mut raw_data)?;
+
+                    let blob = DataBlob::encode(&raw_data, None, true)?;
+                    let raw_blob = blob.into_inner();
+
+                    let mut decrypted_tmp_path = tmp_path.clone();
+                    decrypted_tmp_path.set_extension("dectmp");
+                    let result = proxmox_lang::try_block!({
+                        let mut decrypted_tmpfile = std::fs::OpenOptions::new()
+                            .read(true)
+                            .write(true)
+                            .create_new(true)
+                            .open(&decrypted_tmp_path)?;
+                        decrypted_tmpfile.write_all(&raw_blob)?;
+                        decrypted_tmpfile.flush()?;
+                        decrypted_tmpfile.rewind()?;
+                        let (csum, size) = sha256(&mut decrypted_tmpfile)?;
+
+                        std::fs::rename(&decrypted_tmp_path, &tmp_path)?;
+                        Ok(())
+                    });
+
+                    if result.is_err() {
+                        let _ = std::fs::remove_file(&decrypted_tmp_path);
+                    }
+
+                    result
+                })
+                .await?
+                .map_err(|err: Error| format_err!("Failed when decrypting blob {path:?}: {err}"))?;
+            }
         }
     }
     if let Err(err) = std::fs::rename(&tmp_path, &path) {
-- 
2.47.3






* [PATCH proxmox-backup v2 26/27] sync: pull: decrypt chunks and rewrite index file for matching key
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (24 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 25/27] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 27/27] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Once a matching decryption key is provided, use it to decrypt the
chunks on pull and rewrite the index file based on the decrypted
chunk digests and offsets.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/pull.rs | 135 ++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 114 insertions(+), 21 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index ce32afcd7..40e5353dd 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -3,7 +3,7 @@
 use std::collections::hash_map::Entry;
 use std::collections::{HashMap, HashSet};
 use std::io::{BufReader, Read, Seek, Write};
-use std::sync::atomic::{AtomicUsize, Ordering};
+use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
 use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
 
@@ -20,7 +20,7 @@ use pbs_api_types::{
 };
 use pbs_client::BackupRepository;
 use pbs_config::CachedUserInfo;
-use pbs_datastore::data_blob::DataBlob;
+use pbs_datastore::data_blob::{DataBlob, DataChunkBuilder};
 use pbs_datastore::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
 use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
 use pbs_datastore::index::IndexFile;
@@ -180,7 +180,16 @@ async fn pull_index_chunks<I: IndexFile>(
             .filter(|info| {
                 let guard = encountered_chunks.lock().unwrap();
                 match guard.check_reusable(&info.digest) {
-                    Some((touched, _decrypted_chunk)) => !touched, // reusable and already touched, can always skip
+                    Some((touched, mapped_digest)) => {
+                        if mapped_digest.is_some() {
+                            // if there is a mapping, then the chunk digest must be rewritten to
+                            // the index, cannot skip here but optimized when processing the stream
+                            true
+                        } else {
+                            // reusable and already touched, can always skip
+                            !touched
+                        }
+                    }
                     None => true,
                 }
             }),
@@ -202,6 +211,7 @@ async fn pull_index_chunks<I: IndexFile>(
     let verify_and_write_channel = verify_pool.channel();
 
     let bytes = Arc::new(AtomicUsize::new(0));
+    let offset = Arc::new(AtomicU64::new(0));
     let chunk_count = Arc::new(AtomicUsize::new(0));
 
     let stream = stream.map(|info| {
@@ -211,36 +221,119 @@ async fn pull_index_chunks<I: IndexFile>(
         let chunk_count = Arc::clone(&chunk_count);
         let verify_and_write_channel = verify_and_write_channel.clone();
         let encountered_chunks = Arc::clone(&encountered_chunks);
+        let offset = Arc::clone(&offset);
+        let decrypted_index_writer = decrypted_index_writer.clone();
 
         Ok::<_, Error>(async move {
-            {
-                // limit guard scope
-                let mut guard = encountered_chunks.lock().unwrap();
-                if let Some((touched, _decrypted_digest)) = guard.check_reusable(&info.digest) {
-                    if touched {
+            //info!("sync {} chunk {}", pos, hex::encode(digest));
+            let (chunk, digest, size) = match decrypted_index_writer {
+                DecryptedIndexWriter::Fixed(index) => {
+                    if let Some((_touched, Some(decrypted_digest))) = encountered_chunks
+                        .lock()
+                        .unwrap()
+                        .check_reusable(&info.digest)
+                    {
+                        // already got the decrypted digest and chunk has been written,
+                        // no need to process again
+                        let size = info.size();
+                        let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+
+                        index.lock().unwrap().add_chunk(
+                            start_offset,
+                            size as u32,
+                            decrypted_digest,
+                        )?;
+
                         return Ok::<_, Error>(());
                     }
-                    let chunk_exists = proxmox_async::runtime::block_in_place(|| {
-                        target.cond_touch_chunk(&info.digest, false)
-                    })?;
-                    if chunk_exists {
-                        guard.mark_touched(&info.digest, None);
-                        //info!("chunk {} exists {}", pos, hex::encode(digest));
+
+                    let chunk_data = chunk_reader.read_chunk(&info.digest).await?;
+                    let (chunk, digest) =
+                        DataChunkBuilder::new(&chunk_data).compress(true).build()?;
+
+                    let size = chunk_data.len() as u64;
+                    let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+
+                    index
+                        .lock()
+                        .unwrap()
+                        .add_chunk(start_offset, size as u32, &digest)?;
+
+                    encountered_chunks
+                        .lock()
+                        .unwrap()
+                        .mark_reusable(&info.digest, Some(digest));
+
+                    (chunk, digest, size)
+                }
+                DecryptedIndexWriter::Dynamic(index) => {
+                    if let Some((_touched, Some(decrypted_digest))) = encountered_chunks
+                        .lock()
+                        .unwrap()
+                        .check_reusable(&info.digest)
+                    {
+                        // already got the decrypted digest and chunk has been written,
+                        // no need to process again
+                        let size = info.size();
+                        let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+                        let end_offset = start_offset + size;
+
+                        index
+                            .lock()
+                            .unwrap()
+                            .add_chunk(end_offset, decrypted_digest)?;
+
                         return Ok::<_, Error>(());
                     }
+
+                    let chunk_data = chunk_reader.read_chunk(&info.digest).await?;
+                    let (chunk, digest) =
+                        DataChunkBuilder::new(&chunk_data).compress(true).build()?;
+
+                    let size = chunk_data.len() as u64;
+                    let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+                    let end_offset = start_offset + size;
+
+                    index.lock().unwrap().add_chunk(end_offset, &digest)?;
+
+                    encountered_chunks
+                        .lock()
+                        .unwrap()
+                        .mark_reusable(&info.digest, Some(digest));
+
+                    (chunk, digest, size)
                 }
-                // mark before actually downloading the chunk, so this happens only once
-                guard.mark_reusable(&info.digest, None);
-                guard.mark_touched(&info.digest, None);
-            }
+                DecryptedIndexWriter::None => {
+                    {
+                        // limit guard scope
+                        let mut guard = encountered_chunks.lock().unwrap();
+                        if let Some((touched, _mapped)) = guard.check_reusable(&info.digest) {
+                            if touched {
+                                return Ok::<_, Error>(());
+                            }
+                            let chunk_exists = proxmox_async::runtime::block_in_place(|| {
+                                target.cond_touch_chunk(&info.digest, false)
+                            })?;
+                            if chunk_exists {
+                                guard.mark_touched(&info.digest, None);
+                                //info!("chunk {} exists {}", pos, hex::encode(digest));
+                                return Ok::<_, Error>(());
+                            }
+                        }
+                        // mark before actually downloading the chunk, so this happens only once
+                        guard.mark_reusable(&info.digest, None);
+                        guard.mark_touched(&info.digest, None);
+                    }
 
-            //info!("sync {} chunk {}", pos, hex::encode(digest));
-            let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+                    let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+                    (chunk, info.digest, info.size())
+                }
+            };
             let raw_size = chunk.raw_size() as usize;
 
             // decode, verify and write in a separate threads to maximize throughput
             proxmox_async::runtime::block_in_place(|| {
-                verify_and_write_channel.send((chunk, info.digest, info.size()))
+                verify_and_write_channel.send((chunk, digest, size))
             })?;
 
             bytes.fetch_add(raw_size, Ordering::SeqCst);
-- 
2.47.3






* [PATCH proxmox-backup v2 27/27] sync: pull: decrypt snapshots with matching encryption key fingerprint
  2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (25 preceding siblings ...)
  2026-04-10 16:54 ` [PATCH proxmox-backup v2 26/27] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
@ 2026-04-10 16:54 ` Christian Ebner
  26 siblings, 0 replies; 28+ messages in thread
From: Christian Ebner @ 2026-04-10 16:54 UTC (permalink / raw)
  To: pbs-devel

Decrypt any backup snapshot during pull which was encrypted with a
matching encryption key. Keys are matched by comparing the fingerprint
of the key as stored in the source manifest with the fingerprint of
the key configured for the pull sync job.

If they match, pass the key's crypt config along to the index and
chunk readers and write the local files unencrypted instead of simply
storing the downloaded ones. A new manifest file is written in place
of the original one, with the files registered accordingly.

If the local snapshot already exists (resync), refuse to sync without
decryption if the target snapshot is unencrypted but the source is
encrypted.

To detect file changes during resync, compare against the change
detection fingerprint that was calculated on the decrypted files
before the encrypting push sync.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/pull.rs | 104 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 102 insertions(+), 2 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 40e5353dd..9e95a46c5 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -3,6 +3,7 @@
 use std::collections::hash_map::Entry;
 use std::collections::{HashMap, HashSet};
 use std::io::{BufReader, Read, Seek, Write};
+use std::os::fd::AsRawFd;
 use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
 use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
@@ -10,11 +11,14 @@ use std::time::SystemTime;
 use anyhow::{bail, format_err, Context, Error};
 use pbs_tools::crypt_config::CryptConfig;
 use proxmox_human_byte::HumanByte;
+use serde_json::Value;
+use tokio::fs::OpenOptions;
+use tokio::io::AsyncWriteExt;
 use tracing::{info, warn};
 
 use pbs_api_types::{
     print_store_and_ns, ArchiveType, Authid, BackupArchiveName, BackupDir, BackupGroup,
-    BackupNamespace, GroupFilter, Operation, RateLimitConfig, Remote, SnapshotListItem,
+    BackupNamespace, CryptMode, GroupFilter, Operation, RateLimitConfig, Remote, SnapshotListItem,
     VerifyState, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, MAX_NAMESPACE_DEPTH,
     PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
 };
@@ -408,6 +412,7 @@ async fn pull_single_archive<'a>(
     encountered_chunks: Arc<Mutex<EncounteredChunks>>,
     crypt_config: Option<Arc<CryptConfig>>,
     backend: &DatastoreBackend,
+    new_manifest: Option<Arc<Mutex<BackupManifest>>>,
 ) -> Result<SyncStats, Error> {
     let archive_name = &archive_info.filename;
     let mut path = snapshot.full_path();
@@ -457,6 +462,17 @@ async fn pull_single_archive<'a>(
 
                     // Overwrite current tmp file so it will be persisted instead
                     std::fs::rename(&path, &tmp_path)?;
+
+                    if let Some(new_manifest) = new_manifest {
+                        let name = archive_name.as_str().try_into()?;
+                        // size is identical to original, encrypted index
+                        new_manifest.lock().unwrap().add_file(
+                            &name,
+                            size,
+                            csum,
+                            CryptMode::None,
+                        )?;
+                    }
                 }
 
                 sync_stats.add(stats);
@@ -495,6 +511,17 @@ async fn pull_single_archive<'a>(
 
                     // Overwrite current tmp file so it will be persisted instead
                     std::fs::rename(&path, &tmp_path)?;
+
+                    if let Some(new_manifest) = new_manifest {
+                        let name = archive_name.as_str().try_into()?;
+                        // size is identical to original, encrypted index
+                        new_manifest.lock().unwrap().add_file(
+                            &name,
+                            size,
+                            csum,
+                            CryptMode::None,
+                        )?;
+                    }
                 }
 
                 sync_stats.add(stats);
@@ -534,6 +561,14 @@ async fn pull_single_archive<'a>(
                         decrypted_tmpfile.rewind()?;
                         let (csum, size) = sha256(&mut decrypted_tmpfile)?;
 
+                        if let Some(new_manifest) = new_manifest {
+                            let mut new_manifest = new_manifest.lock().unwrap();
+                            let name = archive_name.as_str().try_into()?;
+                            new_manifest.add_file(&name, size, csum, CryptMode::None)?;
+                        }
+
+                        nix::unistd::fsync(decrypted_tmpfile.as_raw_fd())?;
+
                         std::fs::rename(&decrypted_tmp_path, &tmp_path)?;
                         Ok(())
                     });
@@ -604,6 +639,8 @@ async fn pull_snapshot<'a>(
         return Ok(sync_stats);
     }
 
+    let mut local_manifest_file_fp = None;
+    let mut local_manifest_key_fp = None;
     if manifest_name.exists() && !corrupt {
         let manifest_blob = proxmox_lang::try_block!({
             let mut manifest_file = std::fs::File::open(&manifest_name).map_err(|err| {
@@ -624,12 +661,32 @@ async fn pull_snapshot<'a>(
             info!("no data changes");
             let _ = std::fs::remove_file(&tmp_manifest_name);
             return Ok(sync_stats); // nothing changed
+        } else {
+            let manifest = BackupManifest::try_from(manifest_blob)?;
+            local_manifest_key_fp = manifest.fingerprint()?;
+            if !params.crypt_configs.is_empty() {
+                let fp = manifest.change_detection_fingerprint()?;
+                local_manifest_file_fp = Some(hex::encode(fp));
+            }
         }
     }
 
-    let manifest_data = tmp_manifest_blob.raw_data().to_vec();
+    let mut manifest_data = tmp_manifest_blob.raw_data().to_vec();
     let manifest = BackupManifest::try_from(tmp_manifest_blob)?;
 
+    if let Value::String(fp) = &manifest.unprotected["change-detection-fingerprint"] {
+        if let Some(local) = local_manifest_file_fp {
+            if *fp == local {
+                if !client_log_name.exists() {
+                    reader.try_download_client_log(&client_log_name).await?;
+                };
+                info!("no data changes");
+                let _ = std::fs::remove_file(&tmp_manifest_name);
+                return Ok(sync_stats);
+            }
+        }
+    }
+
     if ignore_not_verified_or_encrypted(
         &manifest,
         snapshot.dir(),
@@ -647,6 +704,22 @@ async fn pull_snapshot<'a>(
     }
 
     let mut crypt_config = None;
+    let mut new_manifest = None;
+    if let Ok(Some(source_fingerprint)) = manifest.fingerprint() {
+        for config in &params.crypt_configs {
+            if config.fingerprint() == *source_fingerprint.bytes() {
+                crypt_config = Some(Arc::clone(config));
+                new_manifest = Some(Arc::new(Mutex::new(BackupManifest::new(snapshot.into()))));
+                info!("Found matching key fingerprint {source_fingerprint}, decrypt on pull");
+                break;
+            }
+        }
+    }
+
+    // pre-existing local manifest for unencrypted snapshot, never overwrite with encrypted
+    if local_manifest_key_fp.is_some() && crypt_config.is_none() {
+        bail!("local unencrypted snapshot detected, refuse to sync without source decryption");
+    }
 
     let backend = &params.target.backend;
     for item in manifest.files() {
@@ -696,11 +769,38 @@ async fn pull_snapshot<'a>(
             encountered_chunks.clone(),
             crypt_config.clone(),
             backend,
+            new_manifest.clone(),
         )
         .await?;
         sync_stats.add(stats);
     }
 
+    if let Some(new_manifest) = new_manifest {
+        let mut new_manifest = Arc::try_unwrap(new_manifest)
+            .map_err(|_arc| {
+                format_err!("failed to take ownership of still referenced new manifest")
+            })?
+            .into_inner()
+            .unwrap();
+
+        // copy over notes etc., but drop the encryption key fingerprint
+        new_manifest.unprotected = manifest.unprotected.clone();
+        new_manifest.unprotected["key-fingerprint"] = Value::Null;
+
+        let manifest_string = new_manifest.to_string(None)?;
+        let manifest_blob = DataBlob::encode(manifest_string.as_bytes(), None, true)?;
+        // update contents to be uploaded to backend
+        manifest_data = manifest_blob.raw_data().to_vec();
+
+        let mut tmp_manifest_file = OpenOptions::new()
+            .write(true)
+            .truncate(true) // clear pre-existing manifest content
+            .open(&tmp_manifest_name)
+            .await?;
+        tmp_manifest_file.write_all(&manifest_data).await?;
+        tmp_manifest_file.flush().await?;
+    }
+
     if let Err(err) = std::fs::rename(&tmp_manifest_name, &manifest_name) {
         bail!("Atomic rename file {:?} failed - {}", manifest_name, err);
     }
-- 
2.47.3






end of thread, other threads:[~2026-04-10 17:03 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox v2 01/27] pbs-api-types: define en-/decryption key type and schema Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox v2 02/27] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 03/27] datastore: blob: implement async reader for data blobs Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 04/27] datastore: manifest: add helper for change detection fingerprint Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 05/27] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 06/27] pbs-config: implement encryption key config handling Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 07/27] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 08/27] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 09/27] sync: add helper to check encryption key acls and load key Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 10/27] api: config: add endpoints for encryption key manipulation Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 11/27] api: config: check sync owner has access to en-/decryption keys Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 12/27] api: config: allow encryption key manipulation for sync job Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 13/27] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 14/27] api: push sync: expose optional encryption key for push sync Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 15/27] sync: push: optionally encrypt data blob on upload Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 16/27] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 17/27] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 18/27] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 19/27] ui: define and expose encryption key management menu item and windows Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 20/27] ui: expose assigning encryption key to sync jobs Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 21/27] sync: pull: load encryption key if given in job config Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 22/27] sync: expand source chunk reader trait by crypt config Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 23/27] sync: pull: introduce and use decrypt index writer if " Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 24/27] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 25/27] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 26/27] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 27/27] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
