public inbox for pbs-devel@lists.proxmox.com
* [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs
@ 2026-04-14 12:58 Christian Ebner
  2026-04-14 12:58 ` [PATCH proxmox v3 01/30] pbs-api-types: define en-/decryption key type and schema Christian Ebner
                   ` (29 more replies)
  0 siblings, 30 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:58 UTC (permalink / raw)
  To: pbs-devel

This patch series implements support for encrypting backup snapshots
when pushing from a source PBS instance to an untrusted remote target
PBS instance. Further, it adds support for decrypting snapshots that
are encrypted on the remote source PBS when pulling their contents to
the local target PBS instance. This allows performing full server side
encryption/decryption when syncing with a less trusted remote PBS.

In order to encrypt/decrypt snapshots, a new encryption key entity
is introduced. It is created as a global instance on the PBS and
managed by its own dedicated config. Keys with their secret material
are stored in dedicated files, so they only need to be loaded when
accessing the key, not when listing the configuration. Sync encryption
keys can be archived, rendering them no longer usable for encrypting
new contents while still allowing decryption. In order to remove a
sync encryption key, it must first be archived and no longer be
associated with any sync job config, a constraint added as a safety
net against accidental key removal.
The same centralized key management is also used for tape encryption
keys, so they are on par UI-wise; the configs, however, remain
separate for the time being.
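
The archive-then-remove constraint described above can be sketched
roughly as follows (a minimal illustration only; all names such as
`SyncJob`, `key_in_use` and `check_removable` are hypothetical and not
the actual PBS implementation):

```rust
// Hypothetical model of a sync job config entry referencing keys by id.
struct SyncJob {
    id: String,
    active_encryption_key: Option<String>,
    associated_keys: Vec<String>,
}

// Collect the ids of all sync jobs still referencing the given key,
// either as active encryption key or as associated key.
fn key_in_use(jobs: &[SyncJob], key_id: &str) -> Vec<String> {
    jobs.iter()
        .filter(|job| {
            job.active_encryption_key.as_deref() == Some(key_id)
                || job.associated_keys.iter().any(|k| k == key_id)
        })
        .map(|job| job.id.clone())
        .collect()
}

// A key is only removable if it was archived first and no sync job
// config still references it.
fn check_removable(archived: bool, jobs: &[SyncJob], key_id: &str) -> Result<(), String> {
    if !archived {
        return Err(format!("key '{key_id}' must be archived before removal"));
    }
    let users = key_in_use(jobs, key_id);
    if !users.is_empty() {
        return Err(format!("key '{key_id}' still used by sync jobs: {users:?}"));
    }
    Ok(())
}

fn main() {
    let jobs = vec![SyncJob {
        id: "encrypt-sync".into(),
        active_encryption_key: Some("key2".into()),
        associated_keys: vec!["key1".into(), "key0".into()],
    }];
    // key1 is archived but still associated, so removal must fail:
    println!("{:?}", check_removable(true, &jobs, "key1"));
    // an archived key with no remaining association is removable:
    println!("{:?}", check_removable(true, &jobs, "key9"));
}
```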

Sync jobs in push direction are extended to accept an additional
active encryption key parameter, which is used to encrypt unencrypted
snapshots when pushing to the remote target.
A list of associated keys is kept, to which the previous encryption
key of the push sync job is added when the key is rotated.
For pull sync jobs, the active encryption key parameter is not
considered; instead, all associated keys are loaded and used to
decrypt snapshots whose key fingerprint matches the one found in the
source manifest. In order to encrypt/decrypt the contents, chunks,
index files, blobs and the manifest are additionally processed and
rewritten when required.
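
The pull-side key selection described above can be sketched as follows
(an illustrative model only; `LoadedKey` and `select_decryption_key`
are hypothetical names, not the PBS API):

```rust
// Hypothetical representation of a decryption key loaded from the
// sync job's associated keys, carrying its config id and fingerprint.
struct LoadedKey {
    id: String,
    fingerprint: String, // hex-encoded key fingerprint
}

// Pick the associated key whose fingerprint matches the one recorded
// in the source snapshot's manifest; no match means the snapshot
// cannot be decrypted and has to be skipped.
fn select_decryption_key<'a>(
    associated: &'a [LoadedKey],
    manifest_fingerprint: &str,
) -> Option<&'a LoadedKey> {
    associated
        .iter()
        .find(|key| key.fingerprint == manifest_fingerprint)
}

fn main() {
    let keys = vec![
        LoadedKey { id: "key0".into(), fingerprint: "aa11".into() },
        LoadedKey { id: "key1".into(), fingerprint: "cc22".into() },
    ];
    let chosen = select_decryption_key(&keys, "cc22");
    println!("{}", chosen.map(|k| k.id.as_str()).unwrap_or("none")); // prints "key1"
}
```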

Changes since version 2 (thanks a lot to @Thomas for review):
- Add dedicated lock file for per-key file locks when creating/deleting sync
  keys.
- Add initial documentation for server side encryption/decryption during sync
  jobs.
- Adapt key archive endpoint to be able to toggle, kept as dedicated patch as
  unsure about impl details.
- Early detect unusable keys provided on key creation as upload via api.
- List all associated sync jobs when checking with encryption_key_in_use().
- Fix check for key access when setting active encryption key. It must fail for
  archived keys.
- Add flag to check for not allowing to set archived key as active encryption
  key.
- Drop associated keys also on active encryption key update, readd rotated one
  afterwards if required.
- Refactor check for un-/partially-/fully-encrypted backup snapshots.
- Include snapshot name in log message for skipped snapshots.
- Add missing return on error when requesting key archival for tape.
- Handle errors for api calls to load tape and sync keys in ui by wrapping into
  try-catch-block.
- Also drop verify state on pull, do not rely on inherent check to better
  protect against bugs and corruptions.
- Switch field label for associated keys based on sync direction.
- Add comment field explaining active encryption key and associated keys and
  their relation on key rotation.
- Also store key id together with key config when loading associated keys, so it
  can be logged later when key fingerprint matched.
- Squash new manifest registration into patch 26, keeping logic together
- Fix bogus check, must use change-detection-fingerprint, not key-fingerprint
  to detect changes on already existing manifest.
- Convert unprotected manifest part to json value to drop key-fingerprint.
- Log id of key used for decryption, not just fingerprint
- Switch all remaining `log` macros for sync to use `tracing`.
- Fix typos in commit message for async DataBlob reader patch.
- Double column width for `hint` field.
- Fix icons for type based menu buttons and type column
- Drop dead code `crypt-key-fp`.
- Fix error messages by s/seems/seem/ and wrap in gettext()
- Document config lock requirements for delete_key().
- Drop outdated comment on key file lock drop, it's a dedicated file now.

Changes since version 1 (thanks a lot to @all reviewers/testers!):
- Implement encryption key archiving and key rotation logic, allowing
  to specify active encryption key for push syncs, and a list of
  previously used ones. For pull multiple decryption keys can now be
  configured.
- Rework the UI to add support for key archiving, manage key association
  in sync jobs and to also manage tape encryption keys in the same
  centralized grid.
- Check for key still being in-use by sync job before removing it
- Fully encrypted snapshots are now pushed as-is if an encryption key
  is configured.
- Fixed inefficient resync of pre-existing target snapshot on pull,
  detect file changes in manifest via fingerprinting.
- Avoid overwriting pre-existing decrypted local snapshot by encrypted
  snapshot when no (or mismatching) decryption key is passed for pull
  job.
- Rename EncryptionKey to CryptKey, as the key is also used for
  decryption.
- Remove key from config before removing keyfile
- Add locking mechanism to avoid races in key config writing
- Fix gathering of known chunks from previous snapshot in push for
  dynamic index files
- Detect config changes by checking for digest mismatch
- Guard key loading by PRIV_SYS_MODIFY
- Use tracing::info! instead of log::info!
- Fix clearing of encryption/decryption key via sync job config window
- Fix creating new sync job without crypt key configured
- Check key exists and can be accessed when set in sync job
- Fix min key id length for key edit window
- Fixed drag-and-drop for key file upload
- Fix outdated comments, typos, etc.

Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=7251


proxmox:

Christian Ebner (2):
  pbs-api-types: define en-/decryption key type and schema
  pbs-api-types: sync job: add optional cryptographic keys to config

 pbs-api-types/src/jobs.rs           | 21 ++++++++++++++--
 pbs-api-types/src/key_derivation.rs | 38 ++++++++++++++++++++++++++---
 pbs-api-types/src/lib.rs            |  2 +-
 3 files changed, 55 insertions(+), 6 deletions(-)


proxmox-backup:

Christian Ebner (28):
  sync: push: use tracing macros instead of log
  datastore: blob: implement async reader for data blobs
  datastore: manifest: add helper for change detection fingerprint
  pbs-key-config: introduce store_with() for KeyConfig
  pbs-config: implement encryption key config handling
  pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
  ui: expose 'encryption-keys' as acl subpath for 'system'
  sync: add helper to check encryption key acls and load key
  api: config: add endpoints for encryption key manipulation
  api: config: check sync owner has access to en-/decryption keys
  api: config: allow encryption key manipulation for sync job
  sync: push: rewrite manifest instead of pushing pre-existing one
  api: push sync: expose optional encryption key for push sync
  sync: push: optionally encrypt data blob on upload
  sync: push: optionally encrypt client log on upload if key is given
  sync: push: add helper for loading known chunks from previous snapshot
  fix #7251: api: push: encrypt snapshots using configured encryption
    key
  ui: define and expose encryption key management menu item and windows
  ui: expose assigning encryption key to sync jobs
  sync: pull: load encryption key if given in job config
  sync: expand source chunk reader trait by crypt config
  sync: pull: introduce and use decrypt index writer if crypt config
  sync: pull: extend encountered chunk by optional decrypted digest
  sync: pull: decrypt blob files on pull if encryption key is configured
  sync: pull: decrypt chunks and rewrite index file for matching key
  sync: pull: decrypt snapshots with matching encryption key fingerprint
  api: encryption keys: allow to toggle the archived state for keys
  docs: add section describing server side encryption for sync jobs

 docs/managing-remotes.rst          |  49 +++
 pbs-config/Cargo.toml              |   2 +
 pbs-config/src/acl.rs              |   4 +-
 pbs-config/src/encryption_keys.rs  | 217 ++++++++++++++
 pbs-config/src/lib.rs              |   1 +
 pbs-datastore/src/data_blob.rs     |  18 +-
 pbs-datastore/src/manifest.rs      |  20 ++
 pbs-key-config/src/lib.rs          |  36 ++-
 src/api2/config/encryption_keys.rs | 219 ++++++++++++++
 src/api2/config/mod.rs             |   2 +
 src/api2/config/sync.rs            |  94 +++++-
 src/api2/pull.rs                   |  15 +-
 src/api2/push.rs                   |   8 +-
 src/server/pull.rs                 | 459 ++++++++++++++++++++++++-----
 src/server/push.rs                 | 311 ++++++++++++++-----
 src/server/sync.rs                 |  58 +++-
 www/Makefile                       |   3 +
 www/NavigationTree.js              |   6 +
 www/Utils.js                       |   1 +
 www/config/EncryptionKeysView.js   | 346 ++++++++++++++++++++++
 www/form/EncryptionKeySelector.js  |  96 ++++++
 www/form/PermissionPathSelector.js |   1 +
 www/window/EncryptionKeysEdit.js   | 382 ++++++++++++++++++++++++
 www/window/SyncJobEdit.js          |  62 ++++
 24 files changed, 2248 insertions(+), 162 deletions(-)
 create mode 100644 pbs-config/src/encryption_keys.rs
 create mode 100644 src/api2/config/encryption_keys.rs
 create mode 100644 www/config/EncryptionKeysView.js
 create mode 100644 www/form/EncryptionKeySelector.js
 create mode 100644 www/window/EncryptionKeysEdit.js


Summary over all repositories:
  27 files changed, 2303 insertions(+), 168 deletions(-)

-- 
Generated by murpp 0.11.0




^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox v3 01/30] pbs-api-types: define en-/decryption key type and schema
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
@ 2026-04-14 12:58 ` Christian Ebner
  2026-04-14 12:58 ` [PATCH proxmox v3 02/30] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
                   ` (28 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:58 UTC (permalink / raw)
  To: pbs-devel

Will be used to store and uniquely identify en-/decryption keys in
their respective config. Contains the KeyInfo extended by the unique
key identifier and an optional `archived-at` timestamp for keys which
are marked to no longer be used for encrypting new content, only for
decrypting.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 pbs-api-types/src/key_derivation.rs | 38 ++++++++++++++++++++++++++---
 pbs-api-types/src/lib.rs            |  2 +-
 2 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/pbs-api-types/src/key_derivation.rs b/pbs-api-types/src/key_derivation.rs
index 57ae353a..0815a1f4 100644
--- a/pbs-api-types/src/key_derivation.rs
+++ b/pbs-api-types/src/key_derivation.rs
@@ -3,12 +3,13 @@ use serde::{Deserialize, Serialize};
 #[cfg(feature = "enum-fallback")]
 use proxmox_fixed_string::FixedString;
 
-use proxmox_schema::api;
+use proxmox_schema::api_types::SAFE_ID_FORMAT;
+use proxmox_schema::{api, Schema, StringSchema, Updater};
 
 use crate::CERT_FINGERPRINT_SHA256_SCHEMA;
 
 #[api(default: "scrypt")]
-#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
+#[derive(Clone, Copy, Debug, Deserialize, Serialize, PartialEq)]
 #[serde(rename_all = "lowercase")]
 /// Key derivation function for password protected encryption keys.
 pub enum Kdf {
@@ -41,7 +42,7 @@ impl Default for Kdf {
         },
     },
 )]
-#[derive(Deserialize, Serialize)]
+#[derive(Clone, Default, Deserialize, Serialize, Updater, PartialEq)]
 /// Encryption Key Information
 pub struct KeyInfo {
     /// Path to key (if stored in a file)
@@ -59,3 +60,34 @@ pub struct KeyInfo {
     #[serde(skip_serializing_if = "Option::is_none")]
     pub hint: Option<String>,
 }
+
+/// ID to uniquely identify an encryption/decryption key.
+pub const CRYPT_KEY_ID_SCHEMA: Schema =
+    StringSchema::new("ID to uniquely identify encryption/decription key")
+        .format(&SAFE_ID_FORMAT)
+        .min_length(3)
+        .max_length(32)
+        .schema();
+
+#[api(
+    properties: {
+        id: {
+            schema: CRYPT_KEY_ID_SCHEMA,
+        },
+        info: {
+            type: KeyInfo,
+        },
+    },
+)]
+#[derive(Clone, Default, Deserialize, Serialize, Updater, PartialEq)]
+#[serde(rename_all = "kebab-case")]
+/// Encryption/Decryption Key Info with ID.
+pub struct CryptKey {
+    #[updater(skip)]
+    pub id: String,
+    #[serde(flatten)]
+    pub info: KeyInfo,
+    /// Timestamp when key was archived (not set if key is active).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub archived_at: Option<i64>,
+}
diff --git a/pbs-api-types/src/lib.rs b/pbs-api-types/src/lib.rs
index 54547291..2f5dfea6 100644
--- a/pbs-api-types/src/lib.rs
+++ b/pbs-api-types/src/lib.rs
@@ -104,7 +104,7 @@ mod jobs;
 pub use jobs::*;
 
 mod key_derivation;
-pub use key_derivation::{Kdf, KeyInfo};
+pub use key_derivation::{CryptKey, Kdf, KeyInfo, CRYPT_KEY_ID_SCHEMA};
 
 mod maintenance;
 pub use maintenance::*;
-- 
2.47.3






* [PATCH proxmox v3 02/30] pbs-api-types: sync job: add optional cryptographic keys to config
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
  2026-04-14 12:58 ` [PATCH proxmox v3 01/30] pbs-api-types: define en-/decryption key type and schema Christian Ebner
@ 2026-04-14 12:58 ` Christian Ebner
  2026-04-14 12:58 ` [PATCH proxmox-backup v3 03/30] sync: push: use tracing macros instead of log Christian Ebner
                   ` (27 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:58 UTC (permalink / raw)
  To: pbs-devel

Allows configuring the active encryption key, used to encrypt
backups when performing sync jobs in push direction.

Further, allows associating keys with a sync job, used as decryption
keys for pull sync jobs for snapshots with matching key fingerprint.
For push sync jobs the associated keys are used to keep track of
previously in-use encryption keys, assuring that they are only
removable once the user actually removed the association. This is
used as a safety net against involuntary key deletion.

The field name `associated-key` was chosen since the sync job config
stores the list items on separate lines with property name, so the
resulting config will be structured as shown:
```
sync: encrypt-sync
	active-encryption-key key2
        associated-key key1
        associated-key key0
        ...
```

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 pbs-api-types/src/jobs.rs | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/pbs-api-types/src/jobs.rs b/pbs-api-types/src/jobs.rs
index 7e6dfb94..59f2820f 100644
--- a/pbs-api-types/src/jobs.rs
+++ b/pbs-api-types/src/jobs.rs
@@ -13,8 +13,9 @@ use proxmox_schema::*;
 use crate::{
     Authid, BackupNamespace, BackupType, NotificationMode, RateLimitConfig, Userid,
     BACKUP_GROUP_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_NS_RE, DATASTORE_SCHEMA,
-    DRIVE_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PROXMOX_SAFE_ID_FORMAT,
-    PROXMOX_SAFE_ID_REGEX_STR, REMOTE_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
+    DRIVE_NAME_SCHEMA, CRYPT_KEY_ID_SCHEMA, MEDIA_POOL_NAME_SCHEMA,
+    NS_MAX_DEPTH_REDUCED_SCHEMA, PROXMOX_SAFE_ID_FORMAT, PROXMOX_SAFE_ID_REGEX_STR,
+    REMOTE_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
 };
 
 const_regex! {
@@ -664,6 +665,18 @@ pub const UNMOUNT_ON_SYNC_DONE_SCHEMA: Schema =
             type: SyncDirection,
             optional: true,
         },
+        "active-encryption-key": {
+            schema: CRYPT_KEY_ID_SCHEMA,
+            optional: true,
+        },
+        "associated-key": {
+            type: Array,
+            description: "List of cryptographic keys associated with sync job.",
+            items: {
+                schema: CRYPT_KEY_ID_SCHEMA,
+            },
+            optional: true,
+        },
     }
 )]
 #[derive(Serialize, Deserialize, Clone, Updater, PartialEq)]
@@ -709,6 +722,10 @@ pub struct SyncJobConfig {
     pub unmount_on_done: Option<bool>,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub sync_direction: Option<SyncDirection>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub active_encryption_key: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub associated_key: Option<Vec<String>>,
 }
 
 impl SyncJobConfig {
-- 
2.47.3






* [PATCH proxmox-backup v3 03/30] sync: push: use tracing macros instead of log
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
  2026-04-14 12:58 ` [PATCH proxmox v3 01/30] pbs-api-types: define en-/decryption key type and schema Christian Ebner
  2026-04-14 12:58 ` [PATCH proxmox v3 02/30] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
@ 2026-04-14 12:58 ` Christian Ebner
  2026-04-14 12:58 ` [PATCH proxmox-backup v3 04/30] datastore: blob: implement async reader for data blobs Christian Ebner
                   ` (26 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:58 UTC (permalink / raw)
  To: pbs-devel

In order to keep logging consistent, drop all occurrences of log::info!
and log::warn! and use the tracing macros instead.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- not present in previous version

 src/server/push.rs | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index 697b94f2f..a8f7c15f9 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -822,8 +822,8 @@ pub(crate) async fn push_snapshot(
         Ok((manifest, _raw_size)) => manifest,
         Err(err) => {
             // No manifest in snapshot or failed to read, warn and skip
-            log::warn!("Encountered errors: {err:#}");
-            log::warn!("Failed to load manifest for '{snapshot}'!");
+            warn!("Encountered errors: {err:#}");
+            warn!("Failed to load manifest for '{snapshot}'!");
             return Ok(stats);
         }
     };
@@ -857,7 +857,7 @@ pub(crate) async fn push_snapshot(
     if fetch_previous_manifest {
         match backup_writer.download_previous_manifest().await {
             Ok(manifest) => previous_manifest = Some(Arc::new(manifest)),
-            Err(err) => log::info!("Could not download previous manifest - {err}"),
+            Err(err) => info!("Could not download previous manifest - {err}"),
         }
     };
 
-- 
2.47.3






* [PATCH proxmox-backup v3 04/30] datastore: blob: implement async reader for data blobs
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (2 preceding siblings ...)
  2026-04-14 12:58 ` [PATCH proxmox-backup v3 03/30] sync: push: use tracing macros instead of log Christian Ebner
@ 2026-04-14 12:58 ` Christian Ebner
  2026-04-14 12:58 ` [PATCH proxmox-backup v3 05/30] datastore: manifest: add helper for change detection fingerprint Christian Ebner
                   ` (25 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:58 UTC (permalink / raw)
  To: pbs-devel

So it can be used to load the blob file when server side encryption
is required during push sync jobs, which run in an async context.

Factor out the DataBlob and CRC check, which is identical for sync
and async reader implementation.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- fix typos in commit message for async DataBlob reader patch.

 pbs-datastore/src/data_blob.rs | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/pbs-datastore/src/data_blob.rs b/pbs-datastore/src/data_blob.rs
index 0c05c5a40..b434243c0 100644
--- a/pbs-datastore/src/data_blob.rs
+++ b/pbs-datastore/src/data_blob.rs
@@ -2,6 +2,7 @@ use std::io::Write;
 
 use anyhow::{bail, Error};
 use openssl::symm::{decrypt_aead, Mode};
+use tokio::io::{AsyncRead, AsyncReadExt};
 
 use proxmox_io::{ReadExt, WriteExt};
 
@@ -238,15 +239,26 @@ impl DataBlob {
         }
     }
 
-    /// Load blob from ``reader``, verify CRC
+    /// Load data blob via given sync ``reader`` and verify its CRC
     pub fn load_from_reader(reader: &mut dyn std::io::Read) -> Result<Self, Error> {
         let mut data = Vec::with_capacity(1024 * 1024);
         reader.read_to_end(&mut data)?;
+        Self::from_raw_with_crc_check(data)
+    }
 
-        let blob = Self::from_raw(data)?;
+    /// Load data blob via given async ``reader`` and verify its CRC
+    pub async fn load_from_async_reader(
+        reader: &mut (dyn AsyncRead + Unpin + Send),
+    ) -> Result<Self, Error> {
+        let mut data = Vec::with_capacity(1024 * 1024);
+        reader.read_to_end(&mut data).await?;
+        Self::from_raw_with_crc_check(data)
+    }
 
+    /// Generates a data blob from raw input data and checks for matching CRC in header
+    fn from_raw_with_crc_check(raw_data: Vec<u8>) -> Result<Self, Error> {
+        let blob = Self::from_raw(raw_data)?;
         blob.verify_crc()?;
-
         Ok(blob)
     }
 
-- 
2.47.3






* [PATCH proxmox-backup v3 05/30] datastore: manifest: add helper for change detection fingerprint
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (3 preceding siblings ...)
  2026-04-14 12:58 ` [PATCH proxmox-backup v3 04/30] datastore: blob: implement async reader for data blobs Christian Ebner
@ 2026-04-14 12:58 ` Christian Ebner
  2026-04-14 12:58 ` [PATCH proxmox-backup v3 06/30] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
                   ` (24 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:58 UTC (permalink / raw)
  To: pbs-devel

Generates a checksum over the file names and checksums of the manifest,
to be stored in the encrypted snapshot's manifest when doing server side
encryption on push sync. The fingerprint will then be used on pull to
detect whether a manifest's file contents did not change and the
snapshot can therefore be skipped (no resync required). The usual
byte-wise comparison is not feasible for this.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 pbs-datastore/src/manifest.rs | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/pbs-datastore/src/manifest.rs b/pbs-datastore/src/manifest.rs
index fb734a674..5f7d3efcc 100644
--- a/pbs-datastore/src/manifest.rs
+++ b/pbs-datastore/src/manifest.rs
@@ -236,6 +236,26 @@ impl BackupManifest {
         }
         Ok(Some(serde_json::from_value::<SnapshotVerifyState>(verify)?))
     }
+
+    /// Set the fingerprint used to detect changes for encrypted -> decrypted syncs
+    pub fn set_change_detection_fingerprint(
+        &mut self,
+        fingerprint: &[u8; 32],
+    ) -> Result<(), Error> {
+        let fp_str = hex::encode(fingerprint);
+        self.unprotected["change-detection-fingerprint"] = serde_json::to_value(fp_str)?;
+        Ok(())
+    }
+
+    /// Generate the fingerprint used to detect changes for encrypted -> decrypted syncs
+    pub fn change_detection_fingerprint(&self) -> Result<[u8; 32], Error> {
+        let mut csum = openssl::sha::Sha256::new();
+        for file_info in self.files() {
+            csum.update(file_info.filename.as_bytes());
+            csum.update(&file_info.csum);
+        }
+        Ok(csum.finish())
+    }
 }
 
 impl TryFrom<super::DataBlob> for BackupManifest {
-- 
2.47.3






* [PATCH proxmox-backup v3 06/30] pbs-key-config: introduce store_with() for KeyConfig
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (4 preceding siblings ...)
  2026-04-14 12:58 ` [PATCH proxmox-backup v3 05/30] datastore: manifest: add helper for change detection fingerprint Christian Ebner
@ 2026-04-14 12:58 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling Christian Ebner
                   ` (23 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:58 UTC (permalink / raw)
  To: pbs-devel

Extends the behavior of KeyConfig::store() to allow optionally
specifying the mode and ownership of the file the key is stored with.
Defaults to the same behavior as KeyConfig::store() if none of the
optional parameters are set, so the same implementation is reused
for it as well.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 pbs-key-config/src/lib.rs | 36 ++++++++++++++++++++++++++++++++----
 1 file changed, 32 insertions(+), 4 deletions(-)

diff --git a/pbs-key-config/src/lib.rs b/pbs-key-config/src/lib.rs
index 0bcd5338c..258fb197b 100644
--- a/pbs-key-config/src/lib.rs
+++ b/pbs-key-config/src/lib.rs
@@ -1,7 +1,10 @@
 use std::io::Write;
+use std::os::fd::AsRawFd;
 use std::path::Path;
 
 use anyhow::{bail, format_err, Context, Error};
+use nix::sys::stat::Mode;
+use nix::unistd::{Gid, Uid};
 use serde::{Deserialize, Serialize};
 
 use proxmox_lang::try_block;
@@ -236,24 +239,49 @@ impl KeyConfig {
 
     /// Store a KeyConfig to path
     pub fn store<P: AsRef<Path>>(&self, path: P, replace: bool) -> Result<(), Error> {
+        self.store_with(path, replace, None, None, None)
+    }
+
+    /// Store a KeyConfig to path with given ownership and mode.
+    /// Requires the process to run with permissions to do so.
+    pub fn store_with<P: AsRef<Path>>(
+        &self,
+        path: P,
+        replace: bool,
+        mode: Option<Mode>,
+        owner: Option<Uid>,
+        group: Option<Gid>,
+    ) -> Result<(), Error> {
         let path: &Path = path.as_ref();
 
         let data = serde_json::to_string(self)?;
 
         try_block!({
             if replace {
-                let mode = nix::sys::stat::Mode::S_IRUSR | nix::sys::stat::Mode::S_IWUSR;
-                replace_file(path, data.as_bytes(), CreateOptions::new().perm(mode), true)?;
+                let mode =
+                    mode.unwrap_or(nix::sys::stat::Mode::S_IRUSR | nix::sys::stat::Mode::S_IWUSR);
+                let mut create_options = CreateOptions::new().perm(mode);
+                if let Some(owner) = owner {
+                    create_options = create_options.owner(owner);
+                }
+                if let Some(group) = group {
+                    create_options = create_options.group(group);
+                }
+                replace_file(path, data.as_bytes(), create_options, true)?;
             } else {
                 use std::os::unix::fs::OpenOptionsExt;
-
+                let mode = mode.map(|m| m.bits()).unwrap_or(0o0600);
                 let mut file = std::fs::OpenOptions::new()
                     .write(true)
-                    .mode(0o0600)
+                    .mode(mode)
                     .create_new(true)
                     .open(path)?;
 
                 file.write_all(data.as_bytes())?;
+
+                let fd = file.as_raw_fd();
+                nix::unistd::fchown(fd, owner, group)?;
+                nix::unistd::fsync(fd)?;
             }
 
             Ok(())
-- 
2.47.3






* [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (5 preceding siblings ...)
  2026-04-14 12:58 ` [PATCH proxmox-backup v3 06/30] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 14:32   ` Michael Köppl
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 08/30] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
                   ` (22 subsequent siblings)
  29 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Implements the handling for encryption key configuration and files.

Individual encryption keys with the secret key material are stored in
individual files, while the config stores a duplicate of the non-secret
key info, so the actual key only needs to be loaded when accessed, not
for listing.

The key's fingerprint is compared to the one stored in the config
when loading the key, in order to detect possible mismatches.

Races between key creation and deletion are avoided by locking both,
config and individual key file.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- add dedicated lock file for per-key file locks.
- document config lock requirements for delete_key().
- drop outdated comments.

 pbs-config/Cargo.toml             |   2 +
 pbs-config/src/encryption_keys.rs | 215 ++++++++++++++++++++++++++++++
 pbs-config/src/lib.rs             |   1 +
 3 files changed, 218 insertions(+)
 create mode 100644 pbs-config/src/encryption_keys.rs

diff --git a/pbs-config/Cargo.toml b/pbs-config/Cargo.toml
index ea2496843..04687cb59 100644
--- a/pbs-config/Cargo.toml
+++ b/pbs-config/Cargo.toml
@@ -20,6 +20,7 @@ serde.workspace = true
 serde_json.workspace = true
 
 proxmox-http.workspace = true
+proxmox-lang.workspace = true
 proxmox-notify.workspace = true
 proxmox-router = { workspace = true, default-features = false }
 proxmox-s3-client.workspace = true
@@ -32,3 +33,4 @@ proxmox-uuid.workspace = true
 
 pbs-api-types.workspace = true
 pbs-buildcfg.workspace = true
+pbs-key-config.workspace = true
diff --git a/pbs-config/src/encryption_keys.rs b/pbs-config/src/encryption_keys.rs
new file mode 100644
index 000000000..fd5989a98
--- /dev/null
+++ b/pbs-config/src/encryption_keys.rs
@@ -0,0 +1,215 @@
+use std::collections::HashMap;
+use std::sync::LazyLock;
+
+use anyhow::{bail, format_err, Error};
+use nix::{sys::stat::Mode, unistd::Uid};
+use serde::Deserialize;
+
+use pbs_api_types::{CryptKey, KeyInfo, CRYPT_KEY_ID_SCHEMA};
+use proxmox_schema::ApiType;
+use proxmox_section_config::{SectionConfig, SectionConfigData, SectionConfigPlugin};
+use proxmox_sys::fs::CreateOptions;
+
+use pbs_buildcfg::configdir;
+use pbs_key_config::KeyConfig;
+
+use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
+
+pub static CONFIG: LazyLock<SectionConfig> = LazyLock::new(init);
+
+fn init() -> SectionConfig {
+    let obj_schema = CryptKey::API_SCHEMA.unwrap_all_of_schema();
+    let plugin = SectionConfigPlugin::new(
+        ENCRYPTION_KEYS_CFG_TYPE_ID.to_string(),
+        Some(String::from("id")),
+        obj_schema,
+    );
+    let mut config = SectionConfig::new(&CRYPT_KEY_ID_SCHEMA);
+    config.register_plugin(plugin);
+
+    config
+}
+
+/// Configuration file location for encryption keys.
+pub const ENCRYPTION_KEYS_CFG_FILENAME: &str = configdir!("/encryption-keys.cfg");
+/// Configuration lock file used to prevent concurrent configuration update operations.
+pub const ENCRYPTION_KEYS_CFG_LOCKFILE: &str = configdir!("/.encryption-keys.lck");
+/// Directory where to store the actual encryption keys
+pub const ENCRYPTION_KEYS_DIR: &str = configdir!("/encryption-keys/");
+
+/// Config type for encryption key config entries
+pub const ENCRYPTION_KEYS_CFG_TYPE_ID: &str = "sync-key";
+
+/// Get exclusive lock for encryption key configuration update.
+pub fn lock_config() -> Result<BackupLockGuard, Error> {
+    open_backup_lockfile(ENCRYPTION_KEYS_CFG_LOCKFILE, None, true)
+}
+
+/// Load encryption key configuration from file.
+pub fn config() -> Result<(SectionConfigData, [u8; 32]), Error> {
+    let content = proxmox_sys::fs::file_read_optional_string(ENCRYPTION_KEYS_CFG_FILENAME)?;
+    let content = content.unwrap_or_default();
+    let digest = openssl::sha::sha256(content.as_bytes());
+    let data = CONFIG.parse(ENCRYPTION_KEYS_CFG_FILENAME, &content)?;
+    Ok((data, digest))
+}
+
+/// Shell completion helper to complete encryption key ids as found in the config.
+pub fn complete_encryption_key_id(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+    match config() {
+        Ok((data, _digest)) => data.sections.keys().map(|id| id.to_string()).collect(),
+        Err(_) => Vec::new(),
+    }
+}
+
+/// Load the encryption key from file.
+///
+/// Looks up the key in the config and tries to load it from the given file.
+/// Upon loading, the config key fingerprint is compared to the one stored in the key
+/// file. Loading archived keys fails if the `fail_on_archived` flag is set.
+pub fn load_key_config(id: &str, fail_on_archived: bool) -> Result<KeyConfig, Error> {
+    let _lock = lock_config()?;
+    let (config, _digest) = config()?;
+
+    let key: CryptKey = config.lookup(ENCRYPTION_KEYS_CFG_TYPE_ID, id)?;
+    if fail_on_archived && key.archived_at.is_some() {
+        bail!("cannot load archived encryption key {id}");
+    }
+    let key_config = match &key.info.path {
+        Some(path) => KeyConfig::load(path)?,
+        None => bail!("missing path for encryption key {id}"),
+    };
+
+    let stored_key_info = KeyInfo::from(&key_config);
+
+    if key.info.fingerprint != stored_key_info.fingerprint {
+        bail!("loaded key does not match the config for key {id}");
+    }
+
+    Ok(key_config)
+}
+
+/// Store the encryption key to file.
+///
+/// Inserts the key in the config and stores it to the given file.
+pub fn store_key(id: &str, key: &KeyConfig) -> Result<(), Error> {
+    let _lock = lock_config()?;
+    let (mut config, _digest) = config()?;
+
+    if config.sections.contains_key(id) {
+        bail!("key with id '{id}' already exists.");
+    }
+
+    let backup_user = crate::backup_user()?;
+    let dir_options = CreateOptions::new()
+        .perm(Mode::from_bits_truncate(0o0750))
+        .owner(Uid::from_raw(0))
+        .group(backup_user.gid);
+
+    proxmox_sys::fs::ensure_dir_exists(ENCRYPTION_KEYS_DIR, &dir_options, true)?;
+
+    let key_path = format!("{ENCRYPTION_KEYS_DIR}{id}.enc");
+    let key_lock_path = format!("{key_path}.lck");
+
+    // lock to avoid race with key deletion
+    open_backup_lockfile(&key_lock_path, None, true)?;
+
+    // assert the key file is empty or does not exist
+    match std::fs::metadata(&key_path) {
+        Ok(metadata) => {
+            if metadata.len() > 0 {
+                bail!("detected pre-existing key file, refusing to overwrite.");
+            }
+        }
+        Err(err) if err.kind() == std::io::ErrorKind::NotFound => (),
+        Err(err) => return Err(err.into()),
+    }
+
+    let keyfile_mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
+
+    key.store_with(
+        &key_path,
+        true,
+        Some(keyfile_mode),
+        Some(Uid::from_raw(0)),
+        Some(backup_user.gid),
+    )?;
+
+    let mut info = KeyInfo::from(key);
+    info.path = Some(key_path.clone());
+
+    let crypt_key = CryptKey {
+        id: id.to_string(),
+        info,
+        archived_at: None,
+    };
+
+    let result = proxmox_lang::try_block!({
+        config.set_data(id, ENCRYPTION_KEYS_CFG_TYPE_ID, crypt_key)?;
+
+        let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
+        replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())
+    });
+
+    if result.is_err() {
+        let _ = std::fs::remove_file(key_path);
+    }
+
+    result
+}
+
+/// Delete the encryption key from config.
+///
+/// Returns true if the key was removed successfully, false if there was no matching key.
+/// Safety: the caller must acquire and hold the config lock.
+pub fn delete_key(id: &str, mut config: SectionConfigData) -> Result<bool, Error> {
+    if let Some((_, key)) = config.sections.remove(id) {
+        let key =
+            CryptKey::deserialize(key).map_err(|_err| format_err!("failed to parse key config"))?;
+
+        if key.archived_at.is_none() {
+            bail!("key still active, deleting is only possible for archived keys");
+        }
+
+        if let Some(key_path) = &key.info.path {
+            let key_lock_path = format!("{key_path}.lck");
+            // Avoid races with key insertion
+            let _lock = open_backup_lockfile(key_lock_path, None, true)?;
+
+            let key_config = KeyConfig::load(key_path)?;
+            let stored_key_info = KeyInfo::from(&key_config);
+            // Check the key is the expected one
+            if key.info.fingerprint != stored_key_info.fingerprint {
+                bail!("unexpected key detected in key file, refuse to delete");
+            }
+
+            let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
+            // drops config lock
+            replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())?;
+
+            std::fs::remove_file(key_path)?;
+            return Ok(true);
+        }
+
+        bail!("missing key file path for key '{id}'");
+    }
+    Ok(false)
+}
+
+/// Mark the key as archived by setting the `archived-at` timestamp.
+pub fn archive_key(id: &str, mut config: SectionConfigData) -> Result<(), Error> {
+    let mut key: CryptKey = config.lookup(ENCRYPTION_KEYS_CFG_TYPE_ID, id)?;
+
+    if key.archived_at.is_some() {
+        bail!("key already marked as archived");
+    }
+
+    key.archived_at = Some(proxmox_time::epoch_i64());
+
+    config.set_data(id, ENCRYPTION_KEYS_CFG_TYPE_ID, &key)?;
+    let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
+    // drops config lock
+    replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())?;
+
+    Ok(())
+}
diff --git a/pbs-config/src/lib.rs b/pbs-config/src/lib.rs
index 18b27d23a..3bdaa8fec 100644
--- a/pbs-config/src/lib.rs
+++ b/pbs-config/src/lib.rs
@@ -4,6 +4,7 @@ pub use cached_user_info::CachedUserInfo;
 pub mod datastore;
 pub mod domains;
 pub mod drive;
+pub mod encryption_keys;
 pub mod key_value;
 pub mod media_pool;
 pub mod metrics;
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 08/30] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (6 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 09/30] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
                   ` (21 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Adds a dedicated subpath for permission checks on encryption key
configurations in the acl path component check. This allows setting
permissions on either the whole encryption keys config or on
individual encryption key ids.
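
The effect on accepted acl paths can be sketched as follows; this is a
simplified stand-in for the component matching in check_acl_path(), and
the helper name is hypothetical:

```rust
// Simplified model of the extended match arm: both '/system/encryption-keys'
// and '/system/encryption-keys/{id}' are now valid permission paths, in the
// same way as the pre-existing '/system/s3-endpoint' subpath.
fn is_valid_system_subpath(path: &str) -> bool {
    let components: Vec<&str> = path.split('/').filter(|c| !c.is_empty()).collect();
    matches!(
        components.as_slice(),
        ["system", "s3-endpoint" | "encryption-keys"]
            | ["system", "s3-endpoint" | "encryption-keys", _]
    )
}
```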

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 pbs-config/src/acl.rs | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pbs-config/src/acl.rs b/pbs-config/src/acl.rs
index 2abbf5802..d18a346ff 100644
--- a/pbs-config/src/acl.rs
+++ b/pbs-config/src/acl.rs
@@ -127,8 +127,8 @@ pub fn check_acl_path(path: &str) -> Result<(), Error> {
                         _ => {}
                     }
                 }
-                "s3-endpoint" => {
-                    // /system/s3-endpoint/{id}
+                "s3-endpoint" | "encryption-keys" => {
+                    // /system/<matched-component>/{id}
                     if components_len <= 3 {
                         return Ok(());
                     }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 09/30] ui: expose 'encryption-keys' as acl subpath for 'system'
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (7 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 08/30] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 10/30] sync: add helper to check encryption key acls and load key Christian Ebner
                   ` (20 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Allows selecting the 'encryption-keys' subpath to restrict
permissions to either the full encryption keys configuration or the
matching key id.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 www/form/PermissionPathSelector.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/form/PermissionPathSelector.js b/www/form/PermissionPathSelector.js
index e5f2aec46..64de42888 100644
--- a/www/form/PermissionPathSelector.js
+++ b/www/form/PermissionPathSelector.js
@@ -15,6 +15,7 @@ Ext.define('PBS.data.PermissionPathsStore', {
         { value: '/system' },
         { value: '/system/certificates' },
         { value: '/system/disks' },
+        { value: '/system/encryption-keys' },
         { value: '/system/log' },
         { value: '/system/network' },
         { value: '/system/network/dns' },
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 10/30] sync: add helper to check encryption key acls and load key
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (8 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 09/30] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 11/30] api: config: add endpoints for encryption key manipulation Christian Ebner
                   ` (19 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Introduces a common helper function to be used when loading an
encryption key in sync jobs, for either push or pull direction.

For the given user, access to the key referenced by id is checked and
the key config containing the secret is loaded from the file recorded
in the config.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/server/sync.rs | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/src/server/sync.rs b/src/server/sync.rs
index aedf4a271..9c070cd9c 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -21,12 +21,14 @@ use proxmox_router::HttpError;
 use pbs_api_types::{
     Authid, BackupDir, BackupGroup, BackupNamespace, CryptMode, GroupListItem, SnapshotListItem,
     SyncDirection, SyncJobConfig, VerifyState, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME,
-    MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+    MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_SYS_MODIFY,
 };
 use pbs_client::{BackupReader, BackupRepository, HttpClient, RemoteChunkReader};
+use pbs_config::CachedUserInfo;
 use pbs_datastore::data_blob::DataBlob;
 use pbs_datastore::read_chunk::AsyncReadChunk;
 use pbs_datastore::{BackupManifest, DataStore, ListNamespacesRecursive, LocalChunkReader};
+use pbs_tools::crypt_config::CryptConfig;
 
 use crate::backup::ListAccessibleBackupGroups;
 use crate::server::jobstate::Job;
@@ -791,3 +793,28 @@ pub(super) fn exclude_not_verified_or_encrypted(
 
     false
 }
+
+/// Helper to check that the user has access to the given encryption key and
+/// to load it using the passphrase from the config.
+pub(crate) fn check_privs_and_load_key_config(
+    key_id: &str,
+    user: &Authid,
+    fail_on_archived: bool,
+) -> Result<Option<Arc<CryptConfig>>, Error> {
+    let user_info = CachedUserInfo::new()?;
+    user_info.check_privs(
+        user,
+        &["system", "encryption-keys", key_id],
+        PRIV_SYS_MODIFY,
+        true,
+    )?;
+
+    let key_config = pbs_config::encryption_keys::load_key_config(key_id, fail_on_archived)?;
+    // pass empty passphrase to get raw key material of unprotected key
+    let (enc_key, _created, fingerprint) = key_config.decrypt(&|| Ok(Vec::new()))?;
+
+    info!("Loaded encryption key '{key_id}' with fingerprint '{fingerprint}'");
+
+    let crypt_config = Arc::new(CryptConfig::new(enc_key)?);
+    Ok(Some(crypt_config))
+}
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 11/30] api: config: add endpoints for encryption key manipulation
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (9 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 10/30] sync: add helper to check encryption key acls and load key Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 12/30] api: config: check sync owner has access to en-/decryption keys Christian Ebner
                   ` (18 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Defines the api endpoints for listing existing keys as defined in the
config, for creating new keys, and for archiving or removing keys.

New keys are either generated on the server side or uploaded as json
string. Password protected keys are currently not supported and will
be added at a later stage, once a general mechanism for secrets
handling is implemented for PBS.

Keys are archived by setting the `archived-at` timestamp, marking
them as no longer usable for encrypting new content.

Removing a key requires it to be archived first. Further, removal is
only possible when the key is no longer referenced by any sync job
config, protecting against accidental deletion of an in-use key.
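
The removal preconditions can be condensed into a small check; this is a
sketch only, and the struct, field, and function names are illustrative,
not the actual API:

```rust
struct Key {
    archived_at: Option<i64>,
}

// A key may only be deleted once it is archived and no longer referenced
// by any sync job config.
fn may_delete(key: &Key, referencing_jobs: &[&str]) -> Result<(), String> {
    if key.archived_at.is_none() {
        return Err("key still active, deleting is only possible for archived keys".into());
    }
    if !referencing_jobs.is_empty() {
        return Err(format!(
            "encryption key in use by sync job(s): '{}'",
            referencing_jobs.join(", ")
        ));
    }
    Ok(())
}
```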

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- early detect unusable keys provided on key creation as upload via api.
- list all associated sync jobs when checking with encryption_key_in_use().

 src/api2/config/encryption_keys.rs | 219 +++++++++++++++++++++++++++++
 src/api2/config/mod.rs             |   2 +
 2 files changed, 221 insertions(+)
 create mode 100644 src/api2/config/encryption_keys.rs

diff --git a/src/api2/config/encryption_keys.rs b/src/api2/config/encryption_keys.rs
new file mode 100644
index 000000000..d3097929d
--- /dev/null
+++ b/src/api2/config/encryption_keys.rs
@@ -0,0 +1,219 @@
+use anyhow::{bail, format_err, Error};
+use serde_json::Value;
+
+use proxmox_router::{Permission, Router, RpcEnvironment};
+use proxmox_schema::api;
+
+use pbs_api_types::{
+    Authid, CryptKey, SyncJobConfig, CRYPT_KEY_ID_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY,
+    PROXMOX_CONFIG_DIGEST_SCHEMA,
+};
+
+use pbs_config::encryption_keys::{self, ENCRYPTION_KEYS_CFG_TYPE_ID};
+use pbs_config::CachedUserInfo;
+
+use pbs_key_config::KeyConfig;
+
+#[api(
+    input: {
+        properties: {
+            "include-archived": {
+                type: bool,
+                description: "List also keys which have been archived.",
+                optional: true,
+                default: false,
+            },
+        },
+    },
+    returns: {
+        description: "List of configured encryption keys.",
+        type: Array,
+        items: { type: CryptKey },
+    },
+    access: {
+        permission: &Permission::Anybody,
+        description: "List configured encryption keys filtered by Sys.Audit privileges",
+    },
+)]
+/// List configured encryption keys.
+pub fn list_keys(
+    include_archived: bool,
+    _param: Value,
+    rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<CryptKey>, Error> {
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+    let user_info = CachedUserInfo::new()?;
+
+    let (config, digest) = encryption_keys::config()?;
+
+    let list: Vec<CryptKey> = config.convert_to_typed_array(ENCRYPTION_KEYS_CFG_TYPE_ID)?;
+    let list = list
+        .into_iter()
+        .filter(|key| {
+            if !include_archived && key.archived_at.is_some() {
+                return false;
+            }
+            let privs = user_info.lookup_privs(&auth_id, &["system", "encryption-keys", &key.id]);
+            privs & PRIV_SYS_AUDIT != 0
+        })
+        .collect();
+
+    rpcenv["digest"] = hex::encode(digest).into();
+
+    Ok(list)
+}
+
+#[api(
+    protected: true,
+    input: {
+        properties: {
+            id: {
+                schema: CRYPT_KEY_ID_SCHEMA,
+            },
+            key: {
+                description: "Use provided key instead of creating new one.",
+                type: String,
+                optional: true,
+            },
+        },
+    },
+    access: {
+        permission: &Permission::Privilege(&["system", "encryption-keys"], PRIV_SYS_MODIFY, false),
+    },
+)]
+/// Create new encryption key instance or use the provided one.
+pub fn create_key(
+    id: String,
+    key: Option<String>,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<KeyConfig, Error> {
+    let key_config = if let Some(key) = &key {
+        let key_config: KeyConfig = serde_json::from_str(key)
+            .map_err(|err| format_err!("failed to parse provided key: {err}"))?;
+        // early detect unusable keys
+        if key_config.kdf.is_some() {
+            bail!("protected keys not supported");
+        }
+        let _ = key_config
+            .decrypt(&|| Ok(Vec::new()))
+            .map_err(|err| format_err!("failed to load provided key: {err}"))?;
+        key_config
+    } else {
+        let mut raw_key = [0u8; 32];
+        proxmox_sys::linux::fill_with_random_data(&mut raw_key)?;
+        KeyConfig::without_password(raw_key)?
+    };
+
+    encryption_keys::store_key(&id, &key_config)?;
+
+    Ok(key_config)
+}
+
+#[api(
+    protected: true,
+    input: {
+        properties: {
+            id: {
+                schema: CRYPT_KEY_ID_SCHEMA,
+            },
+            digest: {
+                optional: true,
+                schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+            },
+        },
+    },
+    access: {
+        permission: &Permission::Privilege(&["system", "encryption-keys", "{id}"], PRIV_SYS_MODIFY, false),
+    },
+)]
+/// Mark the key by given id as archived, no longer usable to encrypt contents.
+pub fn archive_key(
+    id: String,
+    digest: Option<String>,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+    let _lock = encryption_keys::lock_config()?;
+    let (config, expected_digest) = encryption_keys::config()?;
+
+    pbs_config::detect_modified_configuration_file(digest, &expected_digest)?;
+
+    encryption_keys::archive_key(&id, config)?;
+
+    Ok(())
+}
+
+#[api(
+    protected: true,
+    input: {
+        properties: {
+            id: {
+                schema: CRYPT_KEY_ID_SCHEMA,
+            },
+            digest: {
+                optional: true,
+                schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+            },
+        },
+    },
+    access: {
+        permission: &Permission::Privilege(&["system", "encryption-keys", "{id}"], PRIV_SYS_MODIFY, false),
+    },
+)]
+/// Remove encryption key.
+pub fn delete_key(
+    id: String,
+    digest: Option<String>,
+    _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+    let _lock = encryption_keys::lock_config()?;
+    let (config, expected_digest) = encryption_keys::config()?;
+
+    pbs_config::detect_modified_configuration_file(digest, &expected_digest)?;
+
+    if let Some(job_ids) = encryption_key_in_use(&id)
+        .map_err(|_err| format_err!("failed to check if encryption key is in-use"))?
+    {
+        let plural = if job_ids.len() > 1 { "s" } else { "" };
+        let ids = job_ids.join(", ");
+        bail!("encryption key in use by sync job{plural}: '{ids}'");
+    }
+
+    encryption_keys::delete_key(&id, config)?;
+
+    Ok(())
+}
+
+// check which sync jobs are associated to given key id or hold it as active encryption key
+fn encryption_key_in_use(id: &str) -> Result<Option<Vec<String>>, Error> {
+    let (config, _digest) = pbs_config::sync::config()?;
+
+    let mut used_by_jobs = Vec::new();
+
+    let job_list: Vec<SyncJobConfig> = config.convert_to_typed_array("sync")?;
+    for job in job_list {
+        if job.active_encryption_key.as_deref() == Some(id)
+            || job
+                .associated_key
+                .as_deref()
+                .unwrap_or(&[])
+                .contains(&id.to_string())
+        {
+            used_by_jobs.push(job.id.clone());
+        }
+    }
+
+    if used_by_jobs.is_empty() {
+        Ok(None)
+    } else {
+        Ok(Some(used_by_jobs))
+    }
+}
+
+const ITEM_ROUTER: Router = Router::new()
+    .post(&API_METHOD_ARCHIVE_KEY)
+    .delete(&API_METHOD_DELETE_KEY);
+
+pub const ROUTER: Router = Router::new()
+    .get(&API_METHOD_LIST_KEYS)
+    .post(&API_METHOD_CREATE_KEY)
+    .match_all("id", &ITEM_ROUTER);
diff --git a/src/api2/config/mod.rs b/src/api2/config/mod.rs
index 1cd9ead76..0281bcfae 100644
--- a/src/api2/config/mod.rs
+++ b/src/api2/config/mod.rs
@@ -9,6 +9,7 @@ pub mod acme;
 pub mod changer;
 pub mod datastore;
 pub mod drive;
+pub mod encryption_keys;
 pub mod media_pool;
 pub mod metrics;
 pub mod notifications;
@@ -28,6 +29,7 @@ const SUBDIRS: SubdirMap = &sorted!([
     ("changer", &changer::ROUTER),
     ("datastore", &datastore::ROUTER),
     ("drive", &drive::ROUTER),
+    ("encryption-keys", &encryption_keys::ROUTER),
     ("media-pool", &media_pool::ROUTER),
     ("metrics", &metrics::ROUTER),
     ("notifications", &notifications::ROUTER),
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 12/30] api: config: check sync owner has access to en-/decryption keys
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (10 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 11/30] api: config: add endpoints for encryption key manipulation Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 13/30] api: config: allow encryption key manipulation for sync job Christian Ebner
                   ` (17 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Ensure a sync job cannot be configured with a non-existing or
inaccessible key for the given sync owner/local-user.

Key access is checked by loading the key from the keyfile.
When setting the active encryption key for push sync jobs it is
further assured that the key is not archived yet.
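
The archived-key rule can be sketched as follows: the same check runs
with fail_on_archived set depending on the sync direction. The names
below are illustrative, not the actual helper:

```rust
struct Key {
    archived_at: Option<i64>,
}

// Push passes fail_on_archived = true (archived keys must not encrypt new
// content); pull passes false (archived keys may still decrypt).
fn check_key_usable(key: &Key, fail_on_archived: bool) -> Result<(), &'static str> {
    if fail_on_archived && key.archived_at.is_some() {
        return Err("cannot load archived encryption key");
    }
    Ok(())
}
```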

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- fix check for key access when setting active encryption key. It must fail for archived keys.

 src/api2/config/sync.rs | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index 67fa3182c..75b99c2a7 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -62,6 +62,22 @@ fn is_correct_owner(auth_id: &Authid, job: &SyncJobConfig) -> bool {
     }
 }
 
+// Check access and test key loading works as expected for sync job owner/user.
+fn sync_user_can_access_optional_key(
+    key_id: Option<&str>,
+    owner: &Authid,
+    fail_on_archived: bool,
+) -> Result<(), Error> {
+    if let Some(key_id) = key_id {
+        if crate::server::sync::check_privs_and_load_key_config(key_id, owner, fail_on_archived)
+            .is_err()
+        {
+            bail!("no such key or cannot access key '{key_id}'");
+        }
+    }
+    Ok(())
+}
+
 /// checks whether user can run the corresponding sync job, depending on sync direction
 ///
 /// namespace creation/deletion ACL and backup group ownership checks happen in the pull/push code
@@ -251,6 +267,19 @@ pub fn create_sync_job(
         }
     }
 
+    let owner = config
+        .owner
+        .as_ref()
+        .unwrap_or_else(|| Authid::root_auth_id());
+
+    if sync_direction == SyncDirection::Push {
+        sync_user_can_access_optional_key(config.active_encryption_key.as_deref(), owner, true)?;
+    } else {
+        for key in config.associated_key.as_deref().unwrap_or(&[]) {
+            sync_user_can_access_optional_key(Some(key), owner, false)?;
+        }
+    }
+
     let (mut section_config, _digest) = sync::config()?;
 
     if section_config.sections.contains_key(&config.id) {
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 13/30] api: config: allow encryption key manipulation for sync job
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (11 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 12/30] api: config: check sync owner has access to en-/decryption keys Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 14/30] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
                   ` (16 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Since the SyncJobConfig was extended to include an optional active
encryption key, set the default to none.
Extend the api config update handler to also set, update or delete
the active encryption key based on the provided parameters.

Associated keys will also be updated accordingly; however, it is
ensured that the previously active key remains associated if it is
changed.

The encryption key will be used to encrypt unencrypted backup
snapshots during push sync. Any of the associated keys will be used
to decrypt snapshots with matching key fingerprint during pull sync.
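
The rotation behavior can be illustrated with the helper this patch
introduces, reproduced standalone here: a rotated-out active key is
retained in the associated key list, without duplicates.

```rust
// Helper as introduced by this patch: keep the previously active key as an
// associated key so rotated keys remain usable for decryption.
fn keep_previous_key_as_associated(
    current_active: Option<&String>,
    associated_keys: &mut Option<Vec<String>>,
) {
    if let Some(prev) = current_active {
        match associated_keys {
            Some(keys) => {
                if !keys.contains(prev) {
                    keys.push(prev.clone());
                }
            }
            None => *associated_keys = Some(vec![prev.clone()]),
        }
    }
}
```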

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- add flag to check for not allowing to set archived key as active encryption key.
- drop associated keys also on active encryption key update, readd rotated one afterwards if required.

 src/api2/config/sync.rs | 65 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 62 insertions(+), 3 deletions(-)

diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index 75b99c2a7..26ce29ed2 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -330,6 +330,22 @@ pub fn read_sync_job(id: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Sync
     Ok(sync_job)
 }
 
+fn keep_previous_key_as_associated(
+    current_active: Option<&String>,
+    associated_keys: &mut Option<Vec<String>>,
+) {
+    if let Some(prev) = current_active {
+        match associated_keys {
+            Some(ref mut keys) => {
+                if !keys.contains(prev) {
+                    keys.push(prev.clone());
+                }
+            }
+            None => *associated_keys = Some(vec![prev.clone()]),
+        }
+    }
+}
+
 #[api()]
 #[derive(Serialize, Deserialize)]
 #[serde(rename_all = "kebab-case")]
@@ -373,6 +389,10 @@ pub enum DeletableProperty {
     UnmountOnDone,
     /// Delete the sync_direction property,
     SyncDirection,
+    /// Delete the active encryption key property,
+    ActiveEncryptionKey,
+    /// Delete associated key property,
+    AssociatedKey,
 }
 
 #[api(
@@ -414,7 +434,7 @@ required sync job owned by user. Additionally, remove vanished requires RemoteDa
 #[allow(clippy::too_many_arguments)]
 pub fn update_sync_job(
     id: String,
-    update: SyncJobConfigUpdater,
+    mut update: SyncJobConfigUpdater,
     delete: Option<Vec<DeletableProperty>>,
     digest: Option<String>,
     rpcenv: &mut dyn RpcEnvironment,
@@ -437,6 +457,9 @@ pub fn update_sync_job(
     }
 
     if let Some(delete) = delete {
+        // temporarily hold the previously active key in case of updates,
+        // so it can be re-added as associated key afterwards.
+        let mut previous_active_encryption_key = None;
         for delete_prop in delete {
             match delete_prop {
                 DeletableProperty::Remote => {
@@ -496,8 +519,23 @@ pub fn update_sync_job(
                 DeletableProperty::SyncDirection => {
                     data.sync_direction = None;
                 }
+                DeletableProperty::ActiveEncryptionKey => {
+                    // Previously active encryption keys are always rotated to
+                    // become an associated key in order to hinder unintended
+                    // deletion (e.g. key got rotated for the push, but it still
+                    // is intended to be used for restore/pull of existing snapshots).
+                    previous_active_encryption_key = data.active_encryption_key.take();
+                }
+                DeletableProperty::AssociatedKey => {
+                    // Previous active encryption key might be added as associated below.
+                    data.associated_key = None;
+                }
             }
         }
+        keep_previous_key_as_associated(
+            previous_active_encryption_key.as_ref(),
+            &mut data.associated_key,
+        );
     }
 
     if let Some(comment) = update.comment {
@@ -524,8 +562,8 @@ pub fn update_sync_job(
     if let Some(remote_ns) = update.remote_ns {
         data.remote_ns = Some(remote_ns);
     }
-    if let Some(owner) = update.owner {
-        data.owner = Some(owner);
+    if let Some(owner) = &update.owner {
+        data.owner = Some(owner.clone());
     }
     if let Some(group_filter) = update.group_filter {
         data.group_filter = Some(group_filter);
@@ -555,6 +593,25 @@ pub fn update_sync_job(
         data.sync_direction = Some(sync_direction);
     }
 
+    if let Some(active_encryption_key) = update.active_encryption_key {
+        let owner = update.owner.as_ref().unwrap_or_else(|| {
+            data.owner
+                .as_ref()
+                .unwrap_or_else(|| Authid::root_auth_id())
+        });
+        sync_user_can_access_optional_key(Some(&active_encryption_key), owner, true)?;
+
+        keep_previous_key_as_associated(
+            data.active_encryption_key.as_ref(),
+            &mut update.associated_key,
+        );
+        data.active_encryption_key = Some(active_encryption_key);
+    }
+
+    if let Some(associated_key) = update.associated_key {
+        data.associated_key = Some(associated_key);
+    }
+
     if update.limit.rate_in.is_some() {
         data.limit.rate_in = update.limit.rate_in;
     }
@@ -727,6 +784,8 @@ acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
         run_on_mount: None,
         unmount_on_done: None,
         sync_direction: None, // use default
+        active_encryption_key: None,
+        associated_key: None,
     };
 
     // should work without ACLs
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 14/30] sync: push: rewrite manifest instead of pushing pre-existing one
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (12 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 13/30] api: config: allow encryption key manipulation for sync job Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 15/30] api: push sync: expose optional encryption key for push sync Christian Ebner
                   ` (15 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

In preparation for being able to encrypt unencrypted backup snapshots
during push sync jobs.

Previously, the pre-existing manifest file was pushed to the remote
target since it did not require modifications and contained all the
files with the correct metadata. When encrypting, however, the files
must be marked as encrypted by individually setting their crypt mode,
the manifest must be signed, and the encryption key fingerprint must
be added to the unprotected part of the manifest.

Therefore, recreate the manifest now and update it accordingly. To do
so, pushing an index must return the full BackupStats, not just the
sync stats used for accounting.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/server/push.rs | 59 +++++++++++++++++++++++++++++++++-------------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index a8f7c15f9..0d56d2e5d 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -17,8 +17,8 @@ use pbs_api_types::{
     PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
 };
 use pbs_client::{
-    BackupRepository, BackupWriter, BackupWriterOptions, HttpClient, IndexType, MergedChunkInfo,
-    UploadOptions,
+    BackupRepository, BackupStats, BackupWriter, BackupWriterOptions, HttpClient, IndexType,
+    MergedChunkInfo, UploadOptions,
 };
 use pbs_config::CachedUserInfo;
 use pbs_datastore::data_blob::ChunkInfo;
@@ -26,7 +26,7 @@ use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{DataStore, StoreProgress};
+use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
 
 use super::sync::{
     check_namespace_depth_limit, exclude_not_verified_or_encrypted,
@@ -880,6 +880,7 @@ pub(crate) async fn push_snapshot(
 
     // Avoid double upload penalty by remembering already seen chunks
     let known_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64 * 1024)));
+    let mut target_manifest = BackupManifest::new(snapshot.clone());
 
     for entry in source_manifest.files() {
         let mut path = backup_dir.full_path();
@@ -892,6 +893,12 @@ pub(crate) async fn push_snapshot(
                     let backup_stats = backup_writer
                         .upload_blob(file, archive_name.as_ref())
                         .await?;
+                    target_manifest.add_file(
+                        &archive_name,
+                        backup_stats.size,
+                        backup_stats.csum,
+                        entry.chunk_crypt_mode(),
+                    )?;
                     stats.add(SyncStats {
                         chunk_count: backup_stats.chunk_count as usize,
                         bytes: backup_stats.size as usize,
@@ -914,7 +921,7 @@ pub(crate) async fn push_snapshot(
                     let chunk_reader = reader
                         .chunk_reader(entry.chunk_crypt_mode())
                         .context("failed to get chunk reader")?;
-                    let sync_stats = push_index(
+                    let upload_stats = push_index(
                         &archive_name,
                         index,
                         chunk_reader,
@@ -923,7 +930,18 @@ pub(crate) async fn push_snapshot(
                         known_chunks.clone(),
                     )
                     .await?;
-                    stats.add(sync_stats);
+                    target_manifest.add_file(
+                        &archive_name,
+                        upload_stats.size,
+                        upload_stats.csum,
+                        entry.chunk_crypt_mode(),
+                    )?;
+                    stats.add(SyncStats {
+                        chunk_count: upload_stats.chunk_count as usize,
+                        bytes: upload_stats.size as usize,
+                        elapsed: upload_stats.duration,
+                        removed: None,
+                    });
                 }
                 ArchiveType::FixedIndex => {
                     if let Some(manifest) = upload_options.previous_manifest.as_ref() {
@@ -941,7 +959,7 @@ pub(crate) async fn push_snapshot(
                         .chunk_reader(entry.chunk_crypt_mode())
                         .context("failed to get chunk reader")?;
                     let size = index.index_bytes();
-                    let sync_stats = push_index(
+                    let upload_stats = push_index(
                         &archive_name,
                         index,
                         chunk_reader,
@@ -950,7 +968,18 @@ pub(crate) async fn push_snapshot(
                         known_chunks.clone(),
                     )
                     .await?;
-                    stats.add(sync_stats);
+                    target_manifest.add_file(
+                        &archive_name,
+                        upload_stats.size,
+                        upload_stats.csum,
+                        entry.chunk_crypt_mode(),
+                    )?;
+                    stats.add(SyncStats {
+                        chunk_count: upload_stats.chunk_count as usize,
+                        bytes: upload_stats.size as usize,
+                        elapsed: upload_stats.duration,
+                        removed: None,
+                    });
                 }
             }
         } else {
@@ -973,8 +1002,11 @@ pub(crate) async fn push_snapshot(
             .await?;
     }
 
-    // Rewrite manifest for pushed snapshot, recreating manifest from source on target
-    let manifest_json = serde_json::to_value(source_manifest)?;
+    // Rewrite manifest for pushed snapshot, recreating manifest from source on target,
+    // needs to update all relevant info for new manifest.
+    target_manifest.unprotected = source_manifest.unprotected;
+    target_manifest.signature = source_manifest.signature;
+    let manifest_json = serde_json::to_value(target_manifest)?;
     let manifest_string = serde_json::to_string_pretty(&manifest_json)?;
     let backup_stats = backup_writer
         .upload_blob_from_data(
@@ -1006,7 +1038,7 @@ async fn push_index(
     backup_writer: &BackupWriter,
     index_type: IndexType,
     known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
-) -> Result<SyncStats, Error> {
+) -> Result<BackupStats, Error> {
     let (upload_channel_tx, upload_channel_rx) = mpsc::channel(20);
     let mut chunk_infos =
         stream::iter(0..index.index_count()).map(move |pos| index.chunk_info(pos).unwrap());
@@ -1058,10 +1090,5 @@ async fn push_index(
         .upload_index_chunk_info(filename, merged_chunk_info_stream, upload_options)
         .await?;
 
-    Ok(SyncStats {
-        chunk_count: upload_stats.chunk_count as usize,
-        bytes: upload_stats.size as usize,
-        elapsed: upload_stats.duration,
-        removed: None,
-    })
+    Ok(upload_stats)
 }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 15/30] api: push sync: expose optional encryption key for push sync
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (13 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 14/30] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 16/30] sync: push: optionally encrypt data blob on upload Christian Ebner
                   ` (14 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Expose the optional encryption key id to be used for server-side
encryption of contents during push sync jobs. For now, only expose the
parameter and load the key if given; the logic to use it will be
implemented in subsequent code changes.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- also keep key id for logging

 src/api2/push.rs   |  8 +++++++-
 src/server/push.rs | 14 ++++++++++++++
 src/server/sync.rs |  1 +
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/src/api2/push.rs b/src/api2/push.rs
index e5edc13e0..84d93621b 100644
--- a/src/api2/push.rs
+++ b/src/api2/push.rs
@@ -2,7 +2,7 @@ use anyhow::{format_err, Error};
 use futures::{future::FutureExt, select};
 
 use pbs_api_types::{
-    Authid, BackupNamespace, GroupFilter, RateLimitConfig, DATASTORE_SCHEMA,
+    Authid, BackupNamespace, GroupFilter, RateLimitConfig, CRYPT_KEY_ID_SCHEMA, DATASTORE_SCHEMA,
     GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
     PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP, PRIV_REMOTE_DATASTORE_PRUNE,
     REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA, SYNC_ENCRYPTED_ONLY_SCHEMA,
@@ -108,6 +108,10 @@ fn check_push_privs(
                 schema: TRANSFER_LAST_SCHEMA,
                 optional: true,
             },
+            "encryption-key": {
+                schema: CRYPT_KEY_ID_SCHEMA,
+                optional: true,
+            },
         },
     },
     access: {
@@ -133,6 +137,7 @@ async fn push(
     verified_only: Option<bool>,
     limit: RateLimitConfig,
     transfer_last: Option<usize>,
+    encryption_key: Option<String>,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -164,6 +169,7 @@ async fn push(
         verified_only,
         limit,
         transfer_last,
+        encryption_key,
     )
     .await?;
 
diff --git a/src/server/push.rs b/src/server/push.rs
index 0d56d2e5d..f989fdf93 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -27,6 +27,7 @@ use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::read_chunk::AsyncReadChunk;
 use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
+use pbs_tools::crypt_config::CryptConfig;
 
 use super::sync::{
     check_namespace_depth_limit, exclude_not_verified_or_encrypted,
@@ -83,6 +84,9 @@ pub(crate) struct PushParameters {
     verified_only: bool,
     /// How many snapshots should be transferred at most (taking the newest N snapshots)
     transfer_last: Option<usize>,
+    /// Encryption key to use for pushing unencrypted backup snapshots. Does not affect
+    /// already encrypted snapshots.
+    crypt_config: Option<(String, Arc<CryptConfig>)>,
 }
 
 impl PushParameters {
@@ -102,6 +106,7 @@ impl PushParameters {
         verified_only: Option<bool>,
         limit: RateLimitConfig,
         transfer_last: Option<usize>,
+        active_encryption_key: Option<String>,
     ) -> Result<Self, Error> {
         if let Some(max_depth) = max_depth {
             ns.check_max_depth(max_depth)?;
@@ -155,6 +160,14 @@ impl PushParameters {
         };
         let group_filter = group_filter.unwrap_or_default();
 
+        let crypt_config = if let Some(key_id) = &active_encryption_key {
+            let crypt_config =
+                crate::server::sync::check_privs_and_load_key_config(key_id, &local_user, true)?;
+            crypt_config.map(|crypt_config| (key_id.to_string(), crypt_config))
+        } else {
+            None
+        };
+
         Ok(Self {
             source,
             target,
@@ -165,6 +178,7 @@ impl PushParameters {
             encrypted_only,
             verified_only,
             transfer_last,
+            crypt_config,
         })
     }
 
diff --git a/src/server/sync.rs b/src/server/sync.rs
index 9c070cd9c..6b84ae6d7 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -677,6 +677,7 @@ pub fn do_sync_job(
                             sync_job.verified_only,
                             sync_job.limit.clone(),
                             sync_job.transfer_last,
+                            sync_job.active_encryption_key,
                         )
                         .await?;
                         push_store(push_params).await?
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 16/30] sync: push: optionally encrypt data blob on upload
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (14 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 15/30] api: push sync: expose optional encryption key for push sync Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 17/30] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
                   ` (13 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Encrypt the data blob with the given encryption key during syncs in
push direction, if one is provided.

Introduce a helper that reads and decodes the data blob from the
source into raw data and re-encrypts it, so the new blob is compressed
and encrypted, including the correct header, when uploaded. The same
helper will be reused for client log uploads in subsequent code
changes.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/server/push.rs | 39 +++++++++++++++++++++++++++++++++------
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index f989fdf93..75f92b291 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1,6 +1,7 @@
 //! Sync datastore by pushing contents to remote server
 
 use std::collections::HashSet;
+use std::path::Path;
 use std::sync::{Arc, Mutex};
 
 use anyhow::{bail, format_err, Context, Error};
@@ -26,7 +27,7 @@ use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
+use pbs_datastore::{BackupManifest, DataBlob, DataStore, StoreProgress};
 use pbs_tools::crypt_config::CryptConfig;
 
 use super::sync::{
@@ -851,6 +852,8 @@ pub(crate) async fn push_snapshot(
         return Ok(stats);
     }
 
+    let mut encrypt_using_key = None;
+
     // Writer instance locks the snapshot on the remote side
     let backup_writer = BackupWriter::start(
         &params.target.client,
@@ -858,7 +861,7 @@ pub(crate) async fn push_snapshot(
             datastore: params.target.repo.store(),
             ns: &target_ns,
             backup: snapshot,
-            crypt_config: None,
+            crypt_config: encrypt_using_key.clone(),
             debug: false,
             benchmark: false,
             no_cache: false,
@@ -903,10 +906,20 @@ pub(crate) async fn push_snapshot(
             let archive_name = BackupArchiveName::from_path(&entry.filename)?;
             match archive_name.archive_type() {
                 ArchiveType::Blob => {
-                    let file = std::fs::File::open(&path)?;
-                    let backup_stats = backup_writer
-                        .upload_blob(file, archive_name.as_ref())
-                        .await?;
+                    let backup_stats = if encrypt_using_key.is_some() {
+                        reencode_encrypted_and_upload_blob(
+                            path,
+                            &archive_name,
+                            &backup_writer,
+                            &upload_options,
+                        )
+                        .await?
+                    } else {
+                        let file = std::fs::File::open(&path)?;
+                        backup_writer
+                            .upload_blob(file, archive_name.as_ref())
+                            .await?
+                    };
                     target_manifest.add_file(
                         &archive_name,
                         backup_stats.size,
@@ -1041,6 +1054,20 @@ pub(crate) async fn push_snapshot(
     Ok(stats)
 }
 
+async fn reencode_encrypted_and_upload_blob<P: AsRef<Path>>(
+    path: P,
+    archive_name: &BackupArchiveName,
+    backup_writer: &BackupWriter,
+    upload_options: &UploadOptions,
+) -> Result<BackupStats, Error> {
+    let mut file = tokio::fs::File::open(&path).await?;
+    let data_blob = DataBlob::load_from_async_reader(&mut file).await?;
+    let raw_data = data_blob.decode(None, None)?;
+    backup_writer
+        .upload_blob_from_data(raw_data, archive_name.as_ref(), upload_options.clone())
+        .await
+}
+
 // Read fixed or dynamic index and push to target by uploading via the backup writer instance
 //
 // For fixed indexes, the size must be provided as given by the index reader.
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 17/30] sync: push: optionally encrypt client log on upload if key is given
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (15 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 16/30] sync: push: optionally encrypt data blob on upload Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 18/30] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
                   ` (12 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Encrypt the client log blob with the given encryption key during syncs
in push direction, if one is provided. The client log is not part of
the manifest and therefore needs to be handled separately.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/server/push.rs | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index 75f92b291..a848c989c 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1020,13 +1020,23 @@ pub(crate) async fn push_snapshot(
     let client_log_name = &CLIENT_LOG_BLOB_NAME;
     client_log_path.push(client_log_name.as_ref());
     if client_log_path.is_file() {
-        backup_writer
-            .upload_blob_from_file(
-                &client_log_path,
-                client_log_name.as_ref(),
-                upload_options.clone(),
+        if encrypt_using_key.is_some() {
+            reencode_encrypted_and_upload_blob(
+                client_log_path,
+                client_log_name,
+                &backup_writer,
+                &upload_options,
             )
             .await?;
+        } else {
+            backup_writer
+                .upload_blob_from_file(
+                    &client_log_path,
+                    client_log_name.as_ref(),
+                    upload_options.clone(),
+                )
+                .await?;
+        }
     }
 
     // Rewrite manifest for pushed snapshot, recreating manifest from source on target,
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 18/30] sync: push: add helper for loading known chunks from previous snapshot
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (16 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 17/30] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
                   ` (11 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Loading known chunks only makes sense for snapshots which do not need
encryption while pushing. To check this, move the known chunk loading
into a common helper method and distinguish between dynamic and fixed
indexes based on the archive type.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- destructure crypt_config from the now also present key_id

 src/server/push.rs | 65 ++++++++++++++++++++++++++++++++--------------
 1 file changed, 45 insertions(+), 20 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index a848c989c..f9ef924e5 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -810,6 +810,41 @@ pub(crate) async fn push_group(
     Ok(stats)
 }
 
+async fn load_previous_snapshot_known_chunks(
+    params: &PushParameters,
+    upload_options: &UploadOptions,
+    backup_writer: &BackupWriter,
+    archive_name: &BackupArchiveName,
+    known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+) {
+    if let Some(manifest) = upload_options.previous_manifest.as_ref() {
+        if let Some((_id, crypt_config)) = &params.crypt_config {
+            if let Ok(Some(fingerprint)) = manifest.fingerprint() {
+                if *fingerprint.bytes() == crypt_config.fingerprint() {
+                    // needs encryption during push, cannot reuse chunks from previous manifest
+                    return;
+                }
+            }
+        }
+
+        // Add known chunks, ignore errors since archive might not be present and it is better
+        // to proceed on unrelated errors than to fail here.
+        match archive_name.archive_type() {
+            ArchiveType::FixedIndex => {
+                let _res = backup_writer
+                    .download_previous_fixed_index(archive_name, manifest, known_chunks)
+                    .await;
+            }
+            ArchiveType::DynamicIndex => {
+                let _res = backup_writer
+                    .download_previous_dynamic_index(archive_name, manifest, known_chunks)
+                    .await;
+            }
+            ArchiveType::Blob => (),
+        };
+    }
+}
+
 /// Push snapshot to target
 ///
 /// Creates a new snapshot on the target and pushes the content of the source snapshot to the
@@ -904,6 +939,16 @@ pub(crate) async fn push_snapshot(
         path.push(&entry.filename);
         if path.try_exists()? {
             let archive_name = BackupArchiveName::from_path(&entry.filename)?;
+
+            load_previous_snapshot_known_chunks(
+                params,
+                &upload_options,
+                &backup_writer,
+                &archive_name,
+                known_chunks.clone(),
+            )
+            .await;
+
             match archive_name.archive_type() {
                 ArchiveType::Blob => {
                     let backup_stats = if encrypt_using_key.is_some() {
@@ -934,16 +979,6 @@ pub(crate) async fn push_snapshot(
                     });
                 }
                 ArchiveType::DynamicIndex => {
-                    if let Some(manifest) = upload_options.previous_manifest.as_ref() {
-                        // Add known chunks, ignore errors since archive might not be present
-                        let _res = backup_writer
-                            .download_previous_dynamic_index(
-                                &archive_name,
-                                manifest,
-                                known_chunks.clone(),
-                            )
-                            .await;
-                    }
                     let index = DynamicIndexReader::open(&path)?;
                     let chunk_reader = reader
                         .chunk_reader(entry.chunk_crypt_mode())
@@ -971,16 +1006,6 @@ pub(crate) async fn push_snapshot(
                     });
                 }
                 ArchiveType::FixedIndex => {
-                    if let Some(manifest) = upload_options.previous_manifest.as_ref() {
-                        // Add known chunks, ignore errors since archive might not be present
-                        let _res = backup_writer
-                            .download_previous_fixed_index(
-                                &archive_name,
-                                manifest,
-                                known_chunks.clone(),
-                            )
-                            .await;
-                    }
                     let index = FixedIndexReader::open(&path)?;
                     let chunk_reader = reader
                         .chunk_reader(entry.chunk_crypt_mode())
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (17 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 18/30] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-15 14:49   ` Michael Köppl
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 20/30] ui: define and expose encryption key management menu item and windows Christian Ebner
                   ` (10 subsequent siblings)
  29 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

If an encryption key id is provided in the push parameters, the key
is loaded at the start of the push sync job and passed along via the
crypt config.

Backup snapshots which are already fully encrypted are pushed without
re-encryption, while partially encrypted snapshots are skipped entirely
to avoid mixing of contents. Pre-existing snapshots on the remote are,
however, not checked to match the key.

Special care has to be taken when tracking the already encountered
chunks. For regular push sync jobs, chunk upload is optimized to skip
re-upload of chunks from the previous snapshot (if any) and of new,
but already encountered chunks for the current group sync. Since the
chunks now have to be re-processed anyway, do not load the chunks from
the previous snapshot into memory if they need re-encryption, and keep
track of the unencrypted -> encrypted digest mapping in a hashmap to
avoid re-processing. This might be optimized in the future by e.g.
moving the tracking to an LRU cache, which however requires a more
careful evaluation of memory consumption.

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=7251
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- refactor check for un-/partially-/fully-encrypted backup snapshots.
- include snapshot name in log message for skipped snapshots.

 src/server/push.rs | 120 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 93 insertions(+), 27 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index f9ef924e5..87e74a9bd 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1,6 +1,6 @@
 //! Sync datastore by pushing contents to remote server
 
-use std::collections::HashSet;
+use std::collections::{HashMap, HashSet};
 use std::path::Path;
 use std::sync::{Arc, Mutex};
 
@@ -12,17 +12,17 @@ use tracing::{info, warn};
 
 use pbs_api_types::{
     print_store_and_ns, ApiVersion, ApiVersionInfo, ArchiveType, Authid, BackupArchiveName,
-    BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, GroupFilter, GroupListItem,
-    NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem, CLIENT_LOG_BLOB_NAME,
-    MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP,
-    PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
+    BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, CryptMode, GroupFilter,
+    GroupListItem, NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem,
+    CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+    PRIV_REMOTE_DATASTORE_BACKUP, PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
 };
 use pbs_client::{
     BackupRepository, BackupStats, BackupWriter, BackupWriterOptions, HttpClient, IndexType,
     MergedChunkInfo, UploadOptions,
 };
 use pbs_config::CachedUserInfo;
-use pbs_datastore::data_blob::ChunkInfo;
+use pbs_datastore::data_blob::{ChunkInfo, DataChunkBuilder};
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
@@ -888,6 +888,27 @@ pub(crate) async fn push_snapshot(
     }
 
     let mut encrypt_using_key = None;
+    if params.crypt_config.is_some() {
+        // Check if the snapshot is fully encrypted or not encrypted at all: refuse to
+        // proceed otherwise, to avoid uploading mixed contents or mixing encryption keys.
+        let files = source_manifest.files();
+        let all_unencrypted = files
+            .iter()
+            .all(|f| f.chunk_crypt_mode() == CryptMode::None);
+        let any_unencrypted = files
+            .iter()
+            .any(|f| f.chunk_crypt_mode() == CryptMode::None);
+
+        if all_unencrypted {
+            encrypt_using_key = params.crypt_config.clone();
+            info!("Encrypt and push unencrypted snapshot '{snapshot}'");
+        } else if any_unencrypted {
+            warn!("Encountered partially encrypted snapshot '{snapshot}', refuse to re-encrypt and skip");
+            return Ok(stats);
+        } else {
+            info!("Pushing already encrypted snapshot '{snapshot}' without re-encryption");
+        }
+    }
 
     // Writer instance locks the snapshot on the remote side
     let backup_writer = BackupWriter::start(
@@ -896,7 +917,9 @@ pub(crate) async fn push_snapshot(
             datastore: params.target.repo.store(),
             ns: &target_ns,
             backup: snapshot,
-            crypt_config: encrypt_using_key.clone(),
+            crypt_config: encrypt_using_key
+                .as_ref()
+                .map(|(_id, conf)| Arc::clone(conf)),
             debug: false,
             benchmark: false,
             no_cache: false,
@@ -913,19 +936,20 @@ pub(crate) async fn push_snapshot(
         }
     };
 
-    // Dummy upload options: the actual compression and/or encryption already happened while
-    // the chunks were generated during creation of the backup snapshot, therefore pre-existing
-    // chunks (already compressed and/or encrypted) can be pushed to the target.
+    // Dummy upload options: the actual compression already happened when the
+    // chunks were generated during creation of the backup snapshot, so
+    // pre-existing (already compressed) chunks can be pushed to the target
+    // unchanged.
+    //
     // Further, these steps are skipped in the backup writer upload stream.
     //
     // Therefore, these values do not need to fit the values given in the manifest.
     // The original manifest is uploaded in the end anyways.
     //
     // Compression is set to true so that the uploaded manifest will be compressed.
-    // Encrypt is set to assure that above files are not encrypted.
     let upload_options = UploadOptions {
         compress: true,
-        encrypt: false,
+        encrypt: encrypt_using_key.is_some(),
         previous_manifest,
         ..UploadOptions::default()
     };
@@ -939,6 +963,10 @@ pub(crate) async fn push_snapshot(
         path.push(&entry.filename);
         if path.try_exists()? {
             let archive_name = BackupArchiveName::from_path(&entry.filename)?;
+            let crypt_mode = match &encrypt_using_key {
+                Some(_) => CryptMode::Encrypt,
+                None => entry.chunk_crypt_mode(),
+            };
 
             load_previous_snapshot_known_chunks(
                 params,
@@ -969,7 +997,7 @@ pub(crate) async fn push_snapshot(
                         &archive_name,
                         backup_stats.size,
                         backup_stats.csum,
-                        entry.chunk_crypt_mode(),
+                        crypt_mode,
                     )?;
                     stats.add(SyncStats {
                         chunk_count: backup_stats.chunk_count as usize,
@@ -990,13 +1018,16 @@ pub(crate) async fn push_snapshot(
                         &backup_writer,
                         IndexType::Dynamic,
                         known_chunks.clone(),
+                        encrypt_using_key
+                            .as_ref()
+                            .map(|(_id, conf)| Arc::clone(conf)),
                     )
                     .await?;
                     target_manifest.add_file(
                         &archive_name,
                         upload_stats.size,
                         upload_stats.csum,
-                        entry.chunk_crypt_mode(),
+                        crypt_mode,
                     )?;
                     stats.add(SyncStats {
                         chunk_count: upload_stats.chunk_count as usize,
@@ -1018,13 +1049,16 @@ pub(crate) async fn push_snapshot(
                         &backup_writer,
                         IndexType::Fixed(Some(size)),
                         known_chunks.clone(),
+                        encrypt_using_key
+                            .as_ref()
+                            .map(|(_id, conf)| Arc::clone(conf)),
                     )
                     .await?;
                     target_manifest.add_file(
                         &archive_name,
                         upload_stats.size,
                         upload_stats.csum,
-                        entry.chunk_crypt_mode(),
+                        crypt_mode,
                     )?;
                     stats.add(SyncStats {
                         chunk_count: upload_stats.chunk_count as usize,
@@ -1066,15 +1100,25 @@ pub(crate) async fn push_snapshot(
 
     // Rewrite manifest for pushed snapshot, recreating manifest from source on target,
     // needs to update all relevant info for new manifest.
-    target_manifest.unprotected = source_manifest.unprotected;
-    target_manifest.signature = source_manifest.signature;
-    let manifest_json = serde_json::to_value(target_manifest)?;
-    let manifest_string = serde_json::to_string_pretty(&manifest_json)?;
+    target_manifest.unprotected = source_manifest.unprotected.clone();
+    target_manifest.signature = source_manifest.signature.clone();
+    let manifest_string = if encrypt_using_key.is_some() {
+        let fp = source_manifest.change_detection_fingerprint()?;
+        target_manifest.set_change_detection_fingerprint(&fp)?;
+        target_manifest.to_string(encrypt_using_key.as_ref().map(|(_id, conf)| conf.as_ref()))?
+    } else {
+        let manifest_json = serde_json::to_value(target_manifest)?;
+        serde_json::to_string_pretty(&manifest_json)?
+    };
     let backup_stats = backup_writer
         .upload_blob_from_data(
             manifest_string.into_bytes(),
             MANIFEST_BLOB_NAME.as_ref(),
-            upload_options,
+            UploadOptions {
+                compress: true,
+                encrypt: false,
+                ..UploadOptions::default()
+            },
         )
         .await?;
     backup_writer.finish().await?;
@@ -1114,12 +1158,15 @@ async fn push_index(
     backup_writer: &BackupWriter,
     index_type: IndexType,
     known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+    crypt_config: Option<Arc<CryptConfig>>,
 ) -> Result<BackupStats, Error> {
     let (upload_channel_tx, upload_channel_rx) = mpsc::channel(20);
     let mut chunk_infos =
         stream::iter(0..index.index_count()).map(move |pos| index.chunk_info(pos).unwrap());
 
+    let crypt_config_cloned = crypt_config.clone();
     tokio::spawn(async move {
+        let mut encrypted_mapping = HashMap::new();
         while let Some(chunk_info) = chunk_infos.next().await {
             // Avoid reading known chunks, as they are not uploaded by the backup writer anyways
             let needs_upload = {
@@ -1133,20 +1180,39 @@ async fn push_index(
                 chunk_reader
                     .read_raw_chunk(&chunk_info.digest)
                     .await
-                    .map(|chunk| {
-                        MergedChunkInfo::New(ChunkInfo {
+                    .and_then(|chunk| {
+                        let (chunk, digest, chunk_len) = match crypt_config_cloned.as_ref() {
+                            Some(crypt_config) => {
+                                let data = chunk.decode(None, Some(&chunk_info.digest))?;
+                                let (chunk, digest) = DataChunkBuilder::new(&data)
+                                    .compress(true)
+                                    .crypt_config(crypt_config)
+                                    .build()?;
+                                encrypted_mapping.insert(chunk_info.digest, digest);
+                                (chunk, digest, data.len() as u64)
+                            }
+                            None => (chunk, chunk_info.digest, chunk_info.size()),
+                        };
+
+                        Ok(MergedChunkInfo::New(ChunkInfo {
                             chunk,
-                            digest: chunk_info.digest,
-                            chunk_len: chunk_info.size(),
+                            digest,
+                            chunk_len,
                             offset: chunk_info.range.start,
-                        })
+                        }))
                     })
             } else {
+                let digest =
+                    if let Some(encrypted_digest) = encrypted_mapping.get(&chunk_info.digest) {
+                        *encrypted_digest
+                    } else {
+                        chunk_info.digest
+                    };
                 Ok(MergedChunkInfo::Known(vec![(
                     // Pass size instead of offset, will be replaced with offset by the backup
                     // writer
                     chunk_info.size(),
-                    chunk_info.digest,
+                    digest,
                 )]))
             };
             let _ = upload_channel_tx.send(merged_chunk_info).await;
@@ -1157,7 +1223,7 @@ async fn push_index(
 
     let upload_options = UploadOptions {
         compress: true,
-        encrypt: false,
+        encrypt: crypt_config.is_some(),
         index_type,
         ..UploadOptions::default()
     };
-- 
2.47.3
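As a rough, self-contained illustration of the digest remapping introduced in the patch above (the `encrypted_mapping` in `push_index`): re-encrypting a chunk changes its digest, so later "known chunk" references to the old plaintext digest must be rewritten to the new encrypted digest. The sketch below substitutes a toy XOR "cipher" and a truncated digest for the real `DataChunkBuilder`/`CryptConfig` and SHA-256 digests; the names and the scheme are illustrative only, not the patch's actual implementation.

```rust
use std::collections::HashMap;

// Toy stand-in for a SHA-256 digest (the real code uses [u8; 32]).
type Digest = [u8; 4];

fn toy_digest(data: &[u8]) -> Digest {
    let mut d = [0u8; 4];
    for (i, b) in data.iter().enumerate() {
        d[i % 4] = d[i % 4].wrapping_add(*b).rotate_left(3);
    }
    d
}

/// "Encrypt" a plaintext chunk (XOR with a key byte stands in for AES-GCM)
/// and record the plaintext digest -> encrypted digest mapping, so that
/// references to the chunk can later be rewritten to the encrypted digest.
fn reencrypt_chunk(
    plain: &[u8],
    key: u8,
    mapping: &mut HashMap<Digest, Digest>,
) -> (Vec<u8>, Digest) {
    let encrypted: Vec<u8> = plain.iter().map(|b| b ^ key).collect();
    let new_digest = toy_digest(&encrypted);
    mapping.insert(toy_digest(plain), new_digest);
    (encrypted, new_digest)
}
```

In the actual patch, known-chunk references consult this mapping before being forwarded to the backup writer, falling back to the original digest when no remapped entry exists.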


* [PATCH proxmox-backup v3 20/30] ui: define and expose encryption key management menu item and windows
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (18 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 21/30] ui: expose assigning encryption key to sync jobs Christian Ebner
                   ` (9 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Allows creating or removing encryption keys via the WebUI. A new key
entity can be added either by having the server generate a new key or
by uploading a pre-existing key from a key file, similar to what
Proxmox VE currently allows when setting up a PBS storage.

After creation, the key is shown in a dialog that allows exporting it.
This reuses the same logic as PVE, with slight adaptations to include
the key ID and use a different API endpoint.

On removal, the user is informed about the risk of no longer being
able to decrypt snapshots encrypted with this key.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- add missing return on error when requesting key archiving for tape.
- handle API calls by wrapping them in try-catch blocks.
- double the column width for the `hint` field.
- fix icons for type-based menu buttons and the type column.
- drop dead code `crypt-key-fp`.
- fix error messages via s/seems/seem/ and wrapping in gettext().
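One detail of the view's `reload()` handler worth noting: tape keys have no key ID of their own, so the grid synthesizes a row ID from the creation timestamp plus the first nine fingerprint characters with the `:` separators stripped. A minimal Rust sketch of that ID scheme (the function name is hypothetical; the patch does this inline in JavaScript):

```rust
// Build a synthetic, stable row ID for a tape key: creation timestamp,
// a dash, then the first 9 fingerprint characters minus ':' separators.
// Mirrors `${key.created}-${key.fingerprint.substring(0, 9).replace(/:/g, '')}`.
fn tape_key_row_id(created: i64, fingerprint: &str) -> String {
    let fp_part: String = fingerprint
        .chars()
        .take(9)
        .filter(|c| *c != ':')
        .collect();
    format!("{created}-{fp_part}")
}
```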

 www/Makefile                     |   2 +
 www/NavigationTree.js            |   6 +
 www/Utils.js                     |   1 +
 www/config/EncryptionKeysView.js | 336 +++++++++++++++++++++++++++
 www/window/EncryptionKeysEdit.js | 382 +++++++++++++++++++++++++++++++
 5 files changed, 727 insertions(+)
 create mode 100644 www/config/EncryptionKeysView.js
 create mode 100644 www/window/EncryptionKeysEdit.js

diff --git a/www/Makefile b/www/Makefile
index 5a60e47e1..08ad50846 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -70,6 +70,7 @@ JSSRC=							\
 	config/GCView.js				\
 	config/WebauthnView.js				\
 	config/CertificateView.js			\
+	config/EncryptionKeysView.js			\
 	config/NodeOptionView.js			\
 	config/MetricServerView.js			\
 	config/NotificationConfigView.js		\
@@ -78,6 +79,7 @@ JSSRC=							\
 	window/BackupGroupChangeOwner.js		\
 	window/CreateDirectory.js			\
 	window/DataStoreEdit.js				\
+	window/EncryptionKeysEdit.js			\
 	window/NamespaceEdit.js				\
 	window/MaintenanceOptions.js			\
 	window/NotesEdit.js				\
diff --git a/www/NavigationTree.js b/www/NavigationTree.js
index 35b8d693b..f596c7d1b 100644
--- a/www/NavigationTree.js
+++ b/www/NavigationTree.js
@@ -74,6 +74,12 @@ Ext.define('PBS.store.NavigationStore', {
                         path: 'pbsCertificateConfiguration',
                         leaf: true,
                     },
+                    {
+                        text: gettext('Encryption Keys'),
+                        iconCls: 'fa fa-lock',
+                        path: 'pbsEncryptionKeysView',
+                        leaf: true,
+                    },
                     {
                         text: gettext('Notifications'),
                         iconCls: 'fa fa-bell-o',
diff --git a/www/Utils.js b/www/Utils.js
index 350ab820b..bf4b025c7 100644
--- a/www/Utils.js
+++ b/www/Utils.js
@@ -451,6 +451,7 @@ Ext.define('PBS.Utils', {
             prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
             prunejob: (type, id) => PBS.Utils.render_prune_job_worker_id(id, gettext('Prune Job')),
             reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read Objects')),
+            'remove-encryption-key': [gettext('Key'), gettext('Remove Key')],
             'rewind-media': [gettext('Drive'), gettext('Rewind Media')],
             's3-refresh': [gettext('Datastore'), gettext('S3 Refresh')],
             sync: ['Datastore', gettext('Remote Sync')],
diff --git a/www/config/EncryptionKeysView.js b/www/config/EncryptionKeysView.js
new file mode 100644
index 000000000..35f147799
--- /dev/null
+++ b/www/config/EncryptionKeysView.js
@@ -0,0 +1,336 @@
+Ext.define('pbs-encryption-keys', {
+    extend: 'Ext.data.Model',
+    fields: ['id', 'type', 'hint', 'fingerprint', 'created', 'archived-at'],
+    idProperty: 'id',
+});
+
+Ext.define('PBS.config.EncryptionKeysView', {
+    extend: 'Ext.grid.GridPanel',
+    alias: 'widget.pbsEncryptionKeysView',
+
+    title: gettext('Encryption Keys'),
+
+    stateful: true,
+    stateId: 'grid-encryption-keys',
+
+    controller: {
+        xclass: 'Ext.app.ViewController',
+
+        addSyncEncryptionKey: function () {
+            let me = this;
+            Ext.create('PBS.window.EncryptionKeysEdit', {
+                listeners: {
+                    destroy: function () {
+                        me.reload();
+                    },
+                },
+            }).show();
+        },
+
+        addTapeEncryptionKey: function () {
+            let me = this;
+            Ext.create('PBS.TapeManagement.EncryptionEditWindow', {
+                listeners: {
+                    destroy: function () {
+                        me.reload();
+                    },
+                },
+            }).show();
+        },
+
+        archiveEncryptionKey: function () {
+            let me = this;
+            let view = me.getView();
+            let selection = view.getSelection();
+
+            if (!selection || selection.length < 1) {
+                return;
+            }
+
+            if (selection[0].data.type === 'tape') {
+                Ext.Msg.alert(gettext('Error'), gettext('Cannot archive tape key'));
+                return;
+            }
+
+            let keyID = selection[0].data.id;
+            Proxmox.Utils.API2Request({
+                url: `/api2/extjs/config/encryption-keys/${keyID}`,
+                method: 'POST',
+                waitMsgTarget: view,
+                failure: function (response, opts) {
+                    Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+                },
+                success: function (response, opts) {
+                    view.getSelectionModel().deselectAll();
+                    me.reload();
+                },
+            });
+        },
+
+        removeEncryptionKey: function () {
+            let me = this;
+            let view = me.getView();
+            let selection = view.getSelection();
+
+            if (!selection || selection.length < 1) {
+                return;
+            }
+
+            let keyType = selection[0].data.type;
+            let keyID = selection[0].data.id;
+            let keyFp = selection[0].data.fingerprint;
+            let endpointUrl =
+                keyType === 'tape'
+                    ? `/api2/extjs/config/tape-encryption-keys/${keyFp}`
+                    : `/api2/extjs/config/encryption-keys/${keyID}`;
+
+            Ext.create('Proxmox.window.SafeDestroy', {
+                url: endpointUrl,
+                item: {
+                    id: `${keyType}/${keyID}`,
+                },
+                autoShow: true,
+                showProgress: false,
+                taskName: 'remove-encryption-key',
+                listeners: {
+                    destroy: () => me.reload(),
+                },
+                additionalItems: [
+                    {
+                        xtype: 'box',
+                        userCls: 'pmx-hint',
+                        style: {
+                            'inline-size': '375px',
+                            'overflow-wrap': 'break-word',
+                        },
+                        padding: '5',
+                        html: gettext(
+                            'Make sure you have a backup of the encryption key!<br><br>You will not be able to decrypt contents encrypted with this key once it is removed.',
+                        ),
+                    },
+                ],
+            }).show();
+        },
+
+        restoreEncryptionKey: function () {
+            Ext.create('Proxmox.window.Edit', {
+                title: gettext('Restore Key'),
+                isCreate: true,
+                submitText: gettext('Restore'),
+                method: 'POST',
+                url: `/api2/extjs/tape/drive`,
+                submitUrl: function (url, values) {
+                    let drive = values.drive;
+                    delete values.drive;
+                    return `${url}/${drive}/restore-key`;
+                },
+                items: [
+                    {
+                        xtype: 'pbsDriveSelector',
+                        fieldLabel: gettext('Drive'),
+                        name: 'drive',
+                    },
+                    {
+                        xtype: 'textfield',
+                        inputType: 'password',
+                        fieldLabel: gettext('Password'),
+                        name: 'password',
+                    },
+                ],
+            }).show();
+        },
+
+        reload: async function () {
+            let me = this;
+            let view = me.getView();
+
+            let syncKeysFuture = Proxmox.Async.api2({
+                url: '/api2/extjs/config/encryption-keys',
+                method: 'GET',
+                params: {
+                    'include-archived': true,
+                },
+            });
+
+            let tapeKeysFuture = Proxmox.Async.api2({
+                url: '/api2/extjs/config/tape-encryption-keys',
+                method: 'GET',
+            });
+
+            let combinedKeys = [];
+
+            try {
+                let syncKeys = await syncKeysFuture;
+                if (syncKeys?.result?.data) {
+                    syncKeys.result.data.forEach((key) => {
+                        key.type = 'sync';
+                        combinedKeys.push(key);
+                    });
+                }
+            } catch (error) {
+                Ext.Msg.alert(gettext('Error'), error);
+            }
+
+            try {
+                let tapeKeys = await tapeKeysFuture;
+                if (tapeKeys?.result?.data) {
+                    tapeKeys.result.data.forEach((key) => {
+                        key.id = `${key.created}-${key.fingerprint.substring(0, 9).replace(/:/g, '')}`;
+                        key.type = 'tape';
+                        combinedKeys.push(key);
+                    });
+                }
+            } catch (error) {
+                Ext.Msg.alert(gettext('Error'), error);
+            }
+
+            let store = view.getStore().rstore;
+            store.loadData(combinedKeys);
+            store.fireEvent('load', store, combinedKeys, true);
+        },
+
+        init: function () {
+            let me = this;
+            me.reload();
+            me.updateTask = Ext.TaskManager.start({
+                run: () => me.reload(),
+                interval: 5000,
+            });
+        },
+
+        destroy: function () {
+            let me = this;
+            if (me.updateTask) {
+                Ext.TaskManager.stop(me.updateTask);
+            }
+        },
+    },
+
+    listeners: {
+        activate: 'reload',
+    },
+
+    store: {
+        type: 'diff',
+        autoDestroy: true,
+        autoDestroyRstore: true,
+        sorters: 'id',
+        rstore: {
+            type: 'store',
+            storeid: 'pbs-encryption-keys',
+            model: 'pbs-encryption-keys',
+            proxy: {
+                type: 'memory',
+            },
+        },
+    },
+
+    tbar: [
+        {
+            text: gettext('Add'),
+            menu: [
+                {
+                    text: gettext('Add Sync Encryption Key'),
+                    iconCls: 'fa fa-refresh',
+                    handler: 'addSyncEncryptionKey',
+                    selModel: false,
+                },
+                {
+                    text: gettext('Add Tape Encryption Key'),
+                    iconCls: 'pbs-icon-tape',
+                    handler: 'addTapeEncryptionKey',
+                    selModel: false,
+                },
+            ],
+        },
+        '-',
+        {
+            xtype: 'proxmoxButton',
+            text: gettext('Archive'),
+            handler: 'archiveEncryptionKey',
+            dangerous: true,
+            confirmMsg: Ext.String.format(
+                gettext('Archiving will render the key unusable for encrypting new content, proceed?'),
+            ),
+            disabled: true,
+            enableFn: (item) => item.data.type === 'sync' && !item.data['archived-at'],
+        },
+        '-',
+        {
+            xtype: 'proxmoxButton',
+            text: gettext('Remove'),
+            handler: 'removeEncryptionKey',
+            disabled: true,
+            enableFn: (item) =>
+                (item.data.type === 'sync' && !!item.data['archived-at']) ||
+                item.data.type === 'tape',
+        },
+        '-',
+        {
+            text: gettext('Restore Key'),
+            xtype: 'proxmoxButton',
+            handler: 'restoreEncryptionKey',
+            disabled: true,
+            enableFn: (item) => item.data.type === 'tape',
+        },
+    ],
+
+    viewConfig: {
+        trackOver: false,
+    },
+
+    columns: [
+        {
+            dataIndex: 'id',
+            header: gettext('Key ID'),
+            renderer: Ext.String.htmlEncode,
+            sortable: true,
+            width: 200,
+        },
+        {
+            dataIndex: 'type',
+            header: gettext('Type'),
+            renderer: function (value) {
+                let iconCls, label;
+                if (value === 'sync') {
+                    iconCls = 'fa fa-refresh';
+                    label = gettext('Sync');
+                } else if (value === 'tape') {
+                    iconCls = 'fa pbs-icon-tape';
+                    label = gettext('Tape');
+                } else {
+                    return value;
+                }
+                return `<i class="${iconCls}"></i> ${label}`;
+            },
+            sortable: true,
+            width: 50,
+        },
+        {
+            dataIndex: 'hint',
+            header: gettext('Hint'),
+            sortable: true,
+            width: 100,
+        },
+        {
+            dataIndex: 'fingerprint',
+            header: gettext('Fingerprint'),
+            sortable: false,
+            width: 600,
+        },
+        {
+            dataIndex: 'created',
+            header: gettext('Created'),
+            renderer: Proxmox.Utils.render_timestamp,
+            sortable: true,
+            flex: 1,
+        },
+        {
+            dataIndex: 'archived-at',
+            header: gettext('Archived'),
+            renderer: (val) => (val ? Proxmox.Utils.render_timestamp(val) : ''),
+            sortable: true,
+            flex: 1,
+        },
+    ],
+});
diff --git a/www/window/EncryptionKeysEdit.js b/www/window/EncryptionKeysEdit.js
new file mode 100644
index 000000000..dfabdd5ea
--- /dev/null
+++ b/www/window/EncryptionKeysEdit.js
@@ -0,0 +1,382 @@
+Ext.define('PBS.ShowEncryptionKey', {
+    extend: 'Ext.window.Window',
+    xtype: 'pbsShowEncryptionKey',
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    width: 600,
+    modal: true,
+    resizable: false,
+    title: gettext('Important: Save your Encryption Key'),
+
+    // avoid close by ESC key, force user to more manual action
+    onEsc: Ext.emptyFn,
+    closable: false,
+
+    items: [
+        {
+            xtype: 'form',
+            layout: {
+                type: 'vbox',
+                align: 'stretch',
+            },
+            bodyPadding: 10,
+            border: false,
+            defaults: {
+                anchor: '100%',
+                border: false,
+                padding: '10 0 0 0',
+            },
+            items: [
+                {
+                    xtype: 'textfield',
+                    fieldLabel: gettext('Key ID'),
+                    labelWidth: 80,
+                    inputId: 'keyID',
+                    cbind: {
+                        value: '{keyID}',
+                    },
+                    editable: false,
+                },
+                {
+                    xtype: 'textfield',
+                    fieldLabel: gettext('Key'),
+                    labelWidth: 80,
+                    inputId: 'encryption-key',
+                    cbind: {
+                        value: '{key}',
+                    },
+                    editable: false,
+                },
+                {
+                    xtype: 'component',
+                    html:
+                        gettext(
+                            'Keep your encryption key safe, but easily accessible for disaster recovery.',
+                        ) +
+                        '<br>' +
+                        gettext('We recommend the following safe-keeping strategy:'),
+                },
+                {
+                    xtype: 'container',
+                    layout: 'hbox',
+                    items: [
+                        {
+                            xtype: 'component',
+                            html: '1. ' + gettext('Save the key in your password manager.'),
+                            flex: 1,
+                        },
+                        {
+                            xtype: 'button',
+                            text: gettext('Copy Key'),
+                            iconCls: 'fa fa-clipboard x-btn-icon-el-default-toolbar-small',
+                            cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+                            width: 110,
+                            handler: function (b) {
+                                document.getElementById('encryption-key').select();
+                                document.execCommand('copy');
+                            },
+                        },
+                    ],
+                },
+                {
+                    xtype: 'container',
+                    layout: 'hbox',
+                    items: [
+                        {
+                            xtype: 'component',
+                            html:
+                                '2. ' +
+                                gettext(
+                                    'Download the key to a USB (pen) drive, placed in secure vault.',
+                                ),
+                            flex: 1,
+                        },
+                        {
+                            xtype: 'button',
+                            text: gettext('Download'),
+                            iconCls: 'fa fa-download x-btn-icon-el-default-toolbar-small',
+                            cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+                            width: 110,
+                            handler: function (b) {
+                                let showWindow = this.up('window');
+
+                                let filename = `${showWindow.keyID}.enc`;
+
+                                let hiddenElement = document.createElement('a');
+                                hiddenElement.href =
+                                    'data:attachment/text,' + encodeURI(showWindow.key);
+                                hiddenElement.target = '_blank';
+                                hiddenElement.download = filename;
+                                hiddenElement.click();
+                            },
+                        },
+                    ],
+                },
+                {
+                    xtype: 'container',
+                    layout: 'hbox',
+                    items: [
+                        {
+                            xtype: 'component',
+                            html:
+                                '3. ' +
+                                gettext('Print as paperkey, laminated and placed in secure vault.'),
+                            flex: 1,
+                        },
+                        {
+                            xtype: 'button',
+                            text: gettext('Print Key'),
+                            iconCls: 'fa fa-print x-btn-icon-el-default-toolbar-small',
+                            cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+                            width: 110,
+                            handler: function (b) {
+                                let showWindow = this.up('window');
+                                showWindow.paperkey(showWindow.key);
+                            },
+                        },
+                    ],
+                },
+            ],
+        },
+        {
+            xtype: 'component',
+            border: false,
+            padding: '10 10 10 10',
+            userCls: 'pmx-hint',
+            html: gettext(
+                'Please save the encryption key - losing it will render any backup created with it unusable',
+            ),
+        },
+    ],
+    buttons: [
+        {
+            text: gettext('Close'),
+            handler: function (b) {
+                let showWindow = this.up('window');
+                showWindow.close();
+            },
+        },
+    ],
+    paperkey: function (keyString) {
+        let me = this;
+
+        const key = JSON.parse(keyString);
+
+        const qrwidth = 500;
+        let qrdiv = document.createElement('div');
+        let qrcode = new QRCode(qrdiv, {
+            width: qrwidth,
+            height: qrwidth,
+            correctLevel: QRCode.CorrectLevel.H,
+        });
+        qrcode.makeCode(keyString);
+
+        let shortKeyFP = '';
+        if (key.fingerprint) {
+            shortKeyFP = PBS.Utils.renderKeyID(key.fingerprint);
+        }
+
+        let printFrame = document.createElement('iframe');
+        Object.assign(printFrame.style, {
+            position: 'fixed',
+            right: '0',
+            bottom: '0',
+            width: '0',
+            height: '0',
+            border: '0',
+        });
+        const prettifiedKey = JSON.stringify(key, null, 2);
+        const keyQrBase64 = qrdiv.children[0].toDataURL('image/png');
+        const html = `<html><head><script>
+	    window.addEventListener('DOMContentLoaded', (ev) => window.print());
+	</script><style>@media print and (max-height: 150mm) {
+	  h4, p { margin: 0; font-size: 1em; }
+	}</style></head><body style="padding: 5px;">
+	<h4>Encryption Key '${me.keyID}' (${shortKeyFP})</h4>
+<p style="font-size:1.2em;font-family:monospace;white-space:pre-wrap;overflow-wrap:break-word;">
+-----BEGIN PROXMOX BACKUP KEY-----
+${prettifiedKey}
+-----END PROXMOX BACKUP KEY-----</p>
+	<center><img style="width: 100%; max-width: ${qrwidth}px;" src="${keyQrBase64}"></center>
+	</body></html>`;
+
+        printFrame.src = 'data:text/html;base64,' + btoa(html);
+        document.body.appendChild(printFrame);
+        me.on('destroy', () => document.body.removeChild(printFrame));
+    },
+});
+
+Ext.define('PBS.window.EncryptionKeysEdit', {
+    extend: 'Proxmox.window.Edit',
+    xtype: 'widget.pbsEncryptionKeysEdit',
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    width: 400,
+
+    fieldDefaults: { labelWidth: 120 },
+
+    subject: gettext('Encryption Key'),
+
+    cbindData: function (initialConfig) {
+        let me = this;
+
+        me.url = '/api2/extjs/config/encryption-keys';
+        me.method = 'POST';
+        me.autoLoad = false;
+
+        return {};
+    },
+
+    apiCallDone: function (success, response, options) {
+        let me = this;
+
+        if (!me.rendered) {
+            return;
+        }
+
+        let res = response.result.data;
+        if (!res) {
+            return;
+        }
+
+        let keyIdField = me.down('field[name=id]');
+        Ext.create('PBS.ShowEncryptionKey', {
+            autoShow: true,
+            keyID: keyIdField.getValue(),
+            key: JSON.stringify(res),
+        });
+    },
+
+    viewModel: {
+        data: {
+            keepCryptVisible: false,
+        },
+    },
+
+    items: [
+        {
+            xtype: 'pmxDisplayEditField',
+            name: 'id',
+            fieldLabel: gettext('Encryption Key ID'),
+            renderer: Ext.htmlEncode,
+            allowBlank: false,
+            minLength: 3,
+            editable: true,
+        },
+        {
+            xtype: 'displayfield',
+            fieldLabel: gettext('Key Source'),
+            padding: '2 0',
+        },
+        {
+            xtype: 'radiofield',
+            name: 'keysource',
+            value: true,
+            inputValue: 'new',
+            submitValue: false,
+            boxLabel: gettext('Auto-generate a new encryption key'),
+            padding: '0 0 0 25',
+        },
+        {
+            xtype: 'radiofield',
+            name: 'keysource',
+            inputValue: 'upload',
+            submitValue: false,
+            boxLabel: gettext('Upload an existing encryption key'),
+            padding: '0 0 0 25',
+            listeners: {
+                change: function (f, value) {
+                    let editWindow = this.up('window');
+                    if (!editWindow.rendered) {
+                        return;
+                    }
+                    let uploadKeyField = editWindow.down('field[name=key]');
+                    uploadKeyField.setDisabled(!value);
+                    uploadKeyField.setHidden(!value);
+
+                    let uploadKeyButton = editWindow.down('filebutton[name=upload-button]');
+                    uploadKeyButton.setDisabled(!value);
+                    uploadKeyButton.setHidden(!value);
+
+                    if (value) {
+                        uploadKeyField.validate();
+                    } else {
+                        uploadKeyField.reset();
+                    }
+                },
+            },
+        },
+        {
+            xtype: 'fieldcontainer',
+            layout: 'hbox',
+            items: [
+                {
+                    xtype: 'proxmoxtextfield',
+                    name: 'key',
+                    fieldLabel: gettext('Upload From File'),
+                    value: '',
+                    disabled: true,
+                    hidden: true,
+                    allowBlank: false,
+                    labelAlign: 'right',
+                    flex: 1,
+                    emptyText: gettext('Drag-and-drop key file here.'),
+                    validator: function (value) {
+                        if (value.length) {
+                            let key;
+                            try {
+                                key = JSON.parse(value);
+                            } catch (e) {
+                                return Ext.String.format(gettext('Failed to parse key - {0}'), e);
+                            }
+                            if (key.data === undefined) {
+                                return gettext('Does not seem like a valid Proxmox Backup key!');
+                            }
+                        }
+                        return true;
+                    },
+                    afterRender: function () {
+                        let me = this;
+                        if (!window.FileReader) {
+                            // No FileReader support in this browser
+                            return;
+                        }
+                        let cancel = function (ev) {
+                            ev = ev.event;
+                            if (ev.preventDefault) {
+                                ev.preventDefault();
+                            }
+                        };
+                        this.inputEl.on('dragover', cancel);
+                        this.inputEl.on('dragenter', cancel);
+                        this.inputEl.on('drop', (ev) => {
+                            cancel(ev);
+                            let reader = new FileReader();
+                            reader.onload = (ev) => me.setValue(ev.target.result);
+                            reader.readAsText(ev.event.dataTransfer.files[0]);
+                        });
+                    },
+                },
+                {
+                    xtype: 'filebutton',
+                    name: 'upload-button',
+                    iconCls: 'fa fa-fw fa-folder-open-o x-btn-icon-el-default-toolbar-small',
+                    cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+                    margin: '0 0 0 4',
+                    disabled: true,
+                    hidden: true,
+                    listeners: {
+                        change: function (btn, e, value) {
+                            let ev = e.event;
+                            let field = btn.up().down('proxmoxtextfield[name=key]');
+                            let reader = new FileReader();
+                            reader.onload = (ev) => field.setValue(ev.target.result);
+                            reader.readAsText(ev.target.files[0]);
+                            btn.reset();
+                        },
+                    },
+                },
+            ],
+        },
+    ],
+});
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 21/30] ui: expose assigning encryption key to sync jobs
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (19 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 20/30] ui: define and expose encryption key management menu item and windows Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-15 14:49   ` Michael Köppl
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 22/30] sync: pull: load encryption key if given in job config Christian Ebner
                   ` (8 subsequent siblings)
  29 siblings, 1 reply; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

This allows selecting pre-defined encryption keys and assigning them
to the sync job configuration.

Sync keys can either be assigned as the active encryption key of a
sync job in push direction, to be used when pushing new contents, or
associated with a sync job in pull direction, to be used to decrypt
contents with a matching key fingerprint.

Only keys that are not archived can be used as active encryption key,
while associations can also be made with archived keys, so contents
can still be decrypted on pull and keys associated with either push or
pull sync jobs are protected from deletion.
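The key-rotation rule described above (the previous active key moves into the associated list so it stays available for decryption) can be sketched roughly as follows. `SyncJobKeys` and its fields are illustrative stand-ins, not the actual PBS config types:

```rust
// Hypothetical sketch of the rotation rule; field names are assumptions.
#[derive(Debug, Default)]
struct SyncJobKeys {
    active_encryption_key: Option<String>,
    associated_keys: Vec<String>,
}

impl SyncJobKeys {
    /// Setting a new active key moves the previous one into the associated
    /// list, protecting it from deletion while old contents may still
    /// require it for decryption.
    fn rotate_active_key(&mut self, new_key: String) {
        if let Some(previous) = self.active_encryption_key.take() {
            if previous != new_key && !self.associated_keys.contains(&previous) {
                self.associated_keys.push(previous);
            }
        }
        self.active_encryption_key = Some(new_key);
    }
}

fn main() {
    let mut keys = SyncJobKeys::default();
    keys.rotate_active_key("key-2025".to_string());
    keys.rotate_active_key("key-2026".to_string());
    // the old key ends up in the associated list, the new one is active
    println!("{:?} {:?}", keys.active_encryption_key, keys.associated_keys);
}
```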

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- switch field label for associated keys based on sync direction.
- add comment field explaining active encryption key and associated keys and their relation on key rotation.

 www/Makefile                      |  1 +
 www/form/EncryptionKeySelector.js | 96 +++++++++++++++++++++++++++++++
 www/window/SyncJobEdit.js         | 62 ++++++++++++++++++++
 3 files changed, 159 insertions(+)
 create mode 100644 www/form/EncryptionKeySelector.js

diff --git a/www/Makefile b/www/Makefile
index 08ad50846..51da9d74e 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -55,6 +55,7 @@ JSSRC=							\
 	form/GroupSelector.js				\
 	form/GroupFilter.js				\
 	form/VerifyOutdatedAfter.js			\
+	form/EncryptionKeySelector.js			\
 	data/RunningTasksStore.js			\
 	button/TaskButton.js				\
 	panel/PrunePanel.js				\
diff --git a/www/form/EncryptionKeySelector.js b/www/form/EncryptionKeySelector.js
new file mode 100644
index 000000000..e0390e56a
--- /dev/null
+++ b/www/form/EncryptionKeySelector.js
@@ -0,0 +1,96 @@
+Ext.define('PBS.form.EncryptionKeySelector', {
+    extend: 'Ext.form.field.ComboBox',
+    alias: 'widget.pbsEncryptionKeySelector',
+
+    queryMode: 'local',
+
+    valueField: 'id',
+    displayField: 'id',
+
+    emptyText: gettext('None'),
+
+    listConfig: {
+        columns: [
+            {
+                dataIndex: 'id',
+                header: gettext('Key ID'),
+                sortable: true,
+                flex: 1,
+            },
+            {
+                dataIndex: 'created',
+                header: gettext('Created'),
+                sortable: true,
+                renderer: Proxmox.Utils.render_timestamp,
+                flex: 1,
+            },
+            {
+                dataIndex: 'archived-at',
+                header: gettext('Archived'),
+                renderer: (val) => (val ? Proxmox.Utils.render_timestamp(val) : ''),
+                sortable: true,
+                flex: 1,
+            },
+        ],
+        emptyText: `<div class="x-grid-empty">${gettext('No key accessible.')}</div>`,
+    },
+
+    config: {
+        deleteEmpty: true,
+        extraRequestParams: {},
+    },
+    // override framework function to implement deleteEmpty behaviour
+    getSubmitData: function () {
+        let me = this;
+
+        let data = null;
+        if (!me.disabled && me.submitValue) {
+            let val = me.getSubmitValue();
+            if (val !== null && val !== '') {
+                data = {};
+                data[me.getName()] = val;
+            } else if (me.getDeleteEmpty()) {
+                data = {};
+                data.delete = me.getName();
+            }
+        }
+
+        return data;
+    },
+
+    triggers: {
+        clear: {
+            cls: 'pmx-clear-trigger',
+            weight: -1,
+            hidden: true,
+            handler: function () {
+                this.triggers.clear.setVisible(false);
+                this.setValue('');
+            },
+        },
+    },
+
+    listeners: {
+        change: function (field, value) {
+            let canClear = (value ?? '') !== '';
+            field.triggers.clear.setVisible(canClear);
+        },
+    },
+
+    initComponent: function () {
+        let me = this;
+
+        me.store = Ext.create('Ext.data.Store', {
+            model: 'pbs-encryption-keys',
+            autoLoad: true,
+            proxy: {
+                type: 'proxmox',
+                timeout: 30 * 1000,
+                url: `/api2/json/config/encryption-keys`,
+                extraParams: me.extraRequestParams,
+            },
+        });
+
+        me.callParent();
+    },
+});
diff --git a/www/window/SyncJobEdit.js b/www/window/SyncJobEdit.js
index 074c7855a..a944a6395 100644
--- a/www/window/SyncJobEdit.js
+++ b/www/window/SyncJobEdit.js
@@ -34,10 +34,12 @@ Ext.define('PBS.window.SyncJobEdit', {
         if (me.syncDirection === 'push') {
             me.subject = gettext('Sync Job - Push Direction');
             me.syncDirectionPush = true;
+            me.syncCryptKeyMultiSelect = false;
             me.syncRemoteLabel = gettext('Target Remote');
             me.syncRemoteDatastore = gettext('Target Datastore');
             me.syncRemoteNamespace = gettext('Target Namespace');
             me.syncLocalOwner = gettext('Local User');
+            me.associatedKeysLabel = gettext('Associated Keys');
             // Sync direction request parameter is only required for creating new jobs,
             // for edit and delete it is derived from the job config given by it's id.
             if (me.isCreate) {
@@ -52,6 +54,7 @@ Ext.define('PBS.window.SyncJobEdit', {
             me.syncRemoteDatastore = gettext('Source Datastore');
             me.syncRemoteNamespace = gettext('Source Namespace');
             me.syncLocalOwner = gettext('Local Owner');
+            me.associatedKeysLabel = gettext('Decryption Keys');
         }
 
         return {};
@@ -560,6 +563,65 @@ Ext.define('PBS.window.SyncJobEdit', {
                     },
                 ],
             },
+            {
+                xtype: 'inputpanel',
+                title: gettext('Encryption'),
+                column1: [
+                    {
+                        xtype: 'pbsEncryptionKeySelector',
+                        name: 'active-encryption-key',
+                        fieldLabel: gettext('Active Encryption Key'),
+                        multiSelect: false,
+                        cbind: {
+                            deleteEmpty: '{!isCreate}',
+                            disabled: '{!syncDirectionPush}',
+                            hidden: '{!syncDirectionPush}',
+                        },
+                    },
+                    {
+                        xtype: 'pbsEncryptionKeySelector',
+                        name: 'associated-key',
+                        multiSelect: true,
+                        cbind: {
+                            fieldLabel: '{associatedKeysLabel}',
+                            deleteEmpty: '{!isCreate}',
+                        },
+                        extraRequestParams: {
+                            'include-archived': true,
+                        },
+                    },
+                ],
+                column2: [
+                    {
+                        xtype: 'box',
+                        style: {
+                            'inline-size': '325px',
+                            'overflow-wrap': 'break-word',
+                        },
+                        padding: '5',
+                        html: gettext(
+                            'The active encryption key is used to encrypt snapshots which are not encrypted on the source during sync. Already encrypted contents are unaffected; partially encrypted contents are skipped.',
+                        ),
+                        cbind: {
+                            hidden: '{!syncDirectionPush}',
+                        },
+                    },
+                    {
+                        xtype: 'box',
+                        style: {
+                            'inline-size': '325px',
+                            'overflow-wrap': 'break-word',
+                        },
+                        padding: '5',
+                        html: gettext(
+                            'Associated keys keep a reference to keys in order to protect them from removal without prior disassociation. When changing the active encryption key, the previous key is added to the associated keys to protect it from accidental deletion in case it is still required to decrypt contents.',
+                        ),
+                        cbind: {
+                            hidden: '{!syncDirectionPush}',
+                        },
+                    },
+                ],
+            },
         ],
     },
 });
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 22/30] sync: pull: load encryption key if given in job config
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (20 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 21/30] ui: expose assigning encryption key to sync jobs Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 23/30] sync: expand source chunk reader trait by crypt config Christian Ebner
                   ` (7 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

If configured and passed in on PullParameters construction, check
access and load the encryption keys. Any snapshot matching one of
these key fingerprints should be decrypted during pull sync.
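The fingerprint-based selection this enables can be sketched as follows, with plain strings standing in for the loaded `CryptConfig` instances; all names here are illustrative assumptions, not the actual PBS API:

```rust
// Pick the key id whose fingerprint matches the one recorded in a
// snapshot manifest; None means the snapshot stays as-is.
fn select_config<'a>(
    // (key id, fingerprint) pairs standing in for (id, Arc<CryptConfig>)
    configs: &'a [(String, String)],
    manifest_fingerprint: &str,
) -> Option<&'a str> {
    configs
        .iter()
        .find(|(_, fp)| fp.as_str() == manifest_fingerprint)
        .map(|(id, _)| id.as_str())
}

fn main() {
    let configs = vec![
        ("old-key".to_string(), "aa:bb".to_string()),
        ("new-key".to_string(), "cc:dd".to_string()),
    ];
    println!("{:?}", select_config(&configs, "cc:dd"));
}
```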

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- also store key id together with key config when loading associated keys, so it can be logged later.

 src/api2/pull.rs   | 15 +++++++++++++--
 src/server/pull.rs | 19 +++++++++++++++++++
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 4b1fd5e60..7ca12fe99 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -7,8 +7,8 @@ use proxmox_router::{Permission, Router, RpcEnvironment};
 use proxmox_schema::api;
 
 use pbs_api_types::{
-    Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, DATASTORE_SCHEMA,
-    GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
+    Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, CRYPT_KEY_ID_SCHEMA,
+    DATASTORE_SCHEMA, GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
     PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA,
     RESYNC_CORRUPT_SCHEMA, SYNC_ENCRYPTED_ONLY_SCHEMA, SYNC_VERIFIED_ONLY_SCHEMA,
     TRANSFER_LAST_SCHEMA,
@@ -91,6 +91,7 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
             sync_job.encrypted_only,
             sync_job.verified_only,
             sync_job.resync_corrupt,
+            sync_job.associated_key.clone(),
         )
     }
 }
@@ -148,6 +149,14 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
                 schema: RESYNC_CORRUPT_SCHEMA,
                 optional: true,
             },
+            "decryption-keys": {
+                type: Array,
+                description: "List of decryption keys.",
+                items: {
+                    schema: CRYPT_KEY_ID_SCHEMA,
+                },
+                optional: true,
+            },
         },
     },
     access: {
@@ -175,6 +184,7 @@ async fn pull(
     encrypted_only: Option<bool>,
     verified_only: Option<bool>,
     resync_corrupt: Option<bool>,
+    decryption_keys: Option<Vec<String>>,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -215,6 +225,7 @@ async fn pull(
         encrypted_only,
         verified_only,
         resync_corrupt,
+        decryption_keys,
     )?;
 
     // fixme: set to_stdout to false?
diff --git a/src/server/pull.rs b/src/server/pull.rs
index bd3e8bef4..cc18cd973 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -8,6 +8,7 @@ use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
 
 use anyhow::{bail, format_err, Context, Error};
+use pbs_tools::crypt_config::CryptConfig;
 use proxmox_human_byte::HumanByte;
 use tracing::{info, warn};
 
@@ -65,6 +66,8 @@ pub(crate) struct PullParameters {
     verified_only: bool,
     /// Whether to re-sync corrupted snapshots
     resync_corrupt: bool,
+    /// Decryption key ids and configs to decrypt snapshots with matching key fingerprint
+    crypt_configs: Vec<(String, Arc<CryptConfig>)>,
 }
 
 impl PullParameters {
@@ -85,6 +88,7 @@ impl PullParameters {
         encrypted_only: Option<bool>,
         verified_only: Option<bool>,
         resync_corrupt: Option<bool>,
+        decryption_keys: Option<Vec<String>>,
     ) -> Result<Self, Error> {
         if let Some(max_depth) = max_depth {
             ns.check_max_depth(max_depth)?;
@@ -126,6 +130,20 @@ impl PullParameters {
 
         let group_filter = group_filter.unwrap_or_default();
 
+        let crypt_configs = if let Some(key_ids) = &decryption_keys {
+            let mut crypt_configs = Vec::with_capacity(key_ids.len());
+            for key_id in key_ids {
+                if let Some(crypt_config) =
+                    crate::server::sync::check_privs_and_load_key_config(key_id, &owner, false)?
+                {
+                    crypt_configs.push((key_id.to_string(), crypt_config));
+                }
+            }
+            crypt_configs
+        } else {
+            Vec::new()
+        };
+
         Ok(Self {
             source,
             target,
@@ -137,6 +155,7 @@ impl PullParameters {
             encrypted_only,
             verified_only,
             resync_corrupt,
+            crypt_configs,
         })
     }
 }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 23/30] sync: expand source chunk reader trait by crypt config
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (21 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 22/30] sync: pull: load encryption key if given in job config Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 24/30] sync: pull: introduce and use decrypt index writer if " Christian Ebner
                   ` (6 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Allows passing in the crypt config for the source chunk reader,
making it possible to decrypt chunks when fetching.

This will be used by the pull sync job to decrypt snapshot chunks
which have been encrypted with an encryption key matching the
one in the pull job configuration.

This remains disarmed by not setting the crypt config until the rest
of the logic to correctly decrypt snapshots on pull, including
manifest, index files and chunks, is put in place in subsequent code
changes.
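The shape of the extended trait can be sketched with stand-in types; the real trait returns `Arc<dyn AsyncReadChunk>` and takes the actual `CryptConfig`, so everything below is illustrative only:

```rust
use std::sync::Arc;

// Stand-ins for the real PBS types.
struct CryptConfig;
enum CryptMode {
    None,
    Encrypt,
}

trait SyncSourceReader {
    // The optional crypt config lets the reader decrypt chunks while
    // fetching; with None the previous pass-through behavior is kept.
    fn chunk_reader(&self, crypt_config: Option<Arc<CryptConfig>>, crypt_mode: CryptMode)
        -> String;
}

struct LocalReader;

impl SyncSourceReader for LocalReader {
    fn chunk_reader(&self, crypt_config: Option<Arc<CryptConfig>>, _mode: CryptMode) -> String {
        if crypt_config.is_some() {
            "decrypting reader".into()
        } else {
            "raw reader".into()
        }
    }
}

fn main() {
    let reader = LocalReader;
    println!("{}", reader.chunk_reader(None, CryptMode::None));
    println!("{}", reader.chunk_reader(Some(Arc::new(CryptConfig)), CryptMode::Encrypt));
}
```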

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/server/pull.rs |  8 ++++++--
 src/server/push.rs |  4 ++--
 src/server/sync.rs | 28 ++++++++++++++++++++++------
 3 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index cc18cd973..e1a6ca6d0 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -304,6 +304,7 @@ async fn pull_single_archive<'a>(
     snapshot: &'a pbs_datastore::BackupDir,
     archive_info: &'a FileInfo,
     encountered_chunks: Arc<Mutex<EncounteredChunks>>,
+    crypt_config: Option<Arc<CryptConfig>>,
     backend: &DatastoreBackend,
 ) -> Result<SyncStats, Error> {
     let archive_name = &archive_info.filename;
@@ -334,7 +335,7 @@ async fn pull_single_archive<'a>(
             } else {
                 let stats = pull_index_chunks(
                     reader
-                        .chunk_reader(archive_info.crypt_mode)
+                        .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
                         .context("failed to get chunk reader")?,
                     snapshot.datastore().clone(),
                     index,
@@ -357,7 +358,7 @@ async fn pull_single_archive<'a>(
             } else {
                 let stats = pull_index_chunks(
                     reader
-                        .chunk_reader(archive_info.crypt_mode)
+                        .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
                         .context("failed to get chunk reader")?,
                     snapshot.datastore().clone(),
                     index,
@@ -471,6 +472,8 @@ async fn pull_snapshot<'a>(
         return Ok(sync_stats);
     }
 
+    let mut crypt_config = None;
+
     let backend = &params.target.backend;
     for item in manifest.files() {
         let mut path = snapshot.full_path();
@@ -517,6 +520,7 @@ async fn pull_snapshot<'a>(
             snapshot,
             item,
             encountered_chunks.clone(),
+            crypt_config.clone(),
             backend,
         )
         .await?;
diff --git a/src/server/push.rs b/src/server/push.rs
index 87e74a9bd..258254950 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1009,7 +1009,7 @@ pub(crate) async fn push_snapshot(
                 ArchiveType::DynamicIndex => {
                     let index = DynamicIndexReader::open(&path)?;
                     let chunk_reader = reader
-                        .chunk_reader(entry.chunk_crypt_mode())
+                        .chunk_reader(None, entry.chunk_crypt_mode())
                         .context("failed to get chunk reader")?;
                     let upload_stats = push_index(
                         &archive_name,
@@ -1039,7 +1039,7 @@ pub(crate) async fn push_snapshot(
                 ArchiveType::FixedIndex => {
                     let index = FixedIndexReader::open(&path)?;
                     let chunk_reader = reader
-                        .chunk_reader(entry.chunk_crypt_mode())
+                        .chunk_reader(None, entry.chunk_crypt_mode())
                         .context("failed to get chunk reader")?;
                     let size = index.index_bytes();
                     let upload_stats = push_index(
diff --git a/src/server/sync.rs b/src/server/sync.rs
index 6b84ae6d7..dce9c99ee 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -90,7 +90,11 @@ impl SyncStats {
 /// and checking whether chunk sync should be skipped.
 pub(crate) trait SyncSourceReader: Send + Sync {
     /// Returns a chunk reader with the specified encryption mode.
-    fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error>;
+    fn chunk_reader(
+        &self,
+        crypt_config: Option<Arc<CryptConfig>>,
+        crypt_mode: CryptMode,
+    ) -> Result<Arc<dyn AsyncReadChunk>, Error>;
 
     /// Asynchronously loads a file from the source into a local file.
     /// `filename` is the name of the file to load from the source.
@@ -117,9 +121,17 @@ pub(crate) struct LocalSourceReader {
 
 #[async_trait::async_trait]
 impl SyncSourceReader for RemoteSourceReader {
-    fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error> {
-        let chunk_reader =
-            RemoteChunkReader::new(self.backup_reader.clone(), None, crypt_mode, HashMap::new());
+    fn chunk_reader(
+        &self,
+        crypt_config: Option<Arc<CryptConfig>>,
+        crypt_mode: CryptMode,
+    ) -> Result<Arc<dyn AsyncReadChunk>, Error> {
+        let chunk_reader = RemoteChunkReader::new(
+            self.backup_reader.clone(),
+            crypt_config,
+            crypt_mode,
+            HashMap::new(),
+        );
         Ok(Arc::new(chunk_reader))
     }
 
@@ -191,8 +203,12 @@ impl SyncSourceReader for RemoteSourceReader {
 
 #[async_trait::async_trait]
 impl SyncSourceReader for LocalSourceReader {
-    fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error> {
-        let chunk_reader = LocalChunkReader::new(self.datastore.clone(), None, crypt_mode)?;
+    fn chunk_reader(
+        &self,
+        crypt_config: Option<Arc<CryptConfig>>,
+        crypt_mode: CryptMode,
+    ) -> Result<Arc<dyn AsyncReadChunk>, Error> {
+        let chunk_reader = LocalChunkReader::new(self.datastore.clone(), crypt_config, crypt_mode)?;
         Ok(Arc::new(chunk_reader))
     }
 
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 24/30] sync: pull: introduce and use decrypt index writer if crypt config
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (22 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 23/30] sync: expand source chunk reader trait by crypt config Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 25/30] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
                   ` (5 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

In order to decrypt an encrypted index file during a pull sync job
when a matching decryption key is configured, the index has to be
rewritten, as the chunks have to be decrypted and the new digests
calculated based on the decrypted chunks. The newly written index file
finally needs to replace the original one, achieved by replacing the
original tempfile after pulling the chunks.

In order to be able to do so, provide a DecryptedIndexWriter instance
to the chunk pulling logic. The DecryptedIndexWriter provides variants
for fixed and dynamic index writers, or none if no rewriting should
happen.

This remains disarmed for the time being by never passing the crypt
config until the logic to decrypt the chunk and re-calculate the
digests is in place, done in subsequent code changes.
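The reason the index must be rewritten can be illustrated with a minimal sketch: decryption changes the chunk payload, so every stored digest changes too. A real implementation hashes with SHA-256; `DefaultHasher` is only a stdlib stand-in, and the fake "decryption" below is purely illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the SHA-256 chunk digest used by PBS.
fn digest(data: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // encrypted chunks and their digests as recorded in the original index
    let encrypted_chunks: Vec<Vec<u8>> = vec![b"enc-a".to_vec(), b"enc-b".to_vec()];
    let old_index: Vec<u64> = encrypted_chunks.iter().map(|c| digest(c)).collect();

    // after decryption the payload differs, so each digest must be
    // recomputed and a new index written to replace the original tempfile
    let decrypt = |c: &[u8]| c.strip_prefix(b"enc-").unwrap().to_vec();
    let new_index: Vec<u64> = encrypted_chunks
        .iter()
        .map(|c| digest(&decrypt(c)))
        .collect();

    assert_ne!(old_index, new_index);
    println!("rewrote {} index entries", new_index.len());
}
```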

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/server/pull.rs | 133 ++++++++++++++++++++++++++++++---------------
 1 file changed, 88 insertions(+), 45 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index e1a6ca6d0..9bac50d4f 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -21,8 +21,8 @@ use pbs_api_types::{
 use pbs_client::BackupRepository;
 use pbs_config::CachedUserInfo;
 use pbs_datastore::data_blob::DataBlob;
-use pbs_datastore::dynamic_index::DynamicIndexReader;
-use pbs_datastore::fixed_index::FixedIndexReader;
+use pbs_datastore::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
+use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::manifest::{BackupManifest, FileInfo};
 use pbs_datastore::read_chunk::AsyncReadChunk;
@@ -166,6 +166,7 @@ async fn pull_index_chunks<I: IndexFile>(
     index: I,
     encountered_chunks: Arc<Mutex<EncounteredChunks>>,
     backend: &DatastoreBackend,
+    decrypted_index_writer: DecryptedIndexWriter,
 ) -> Result<SyncStats, Error> {
     use futures::stream::{self, StreamExt, TryStreamExt};
 
@@ -201,55 +202,61 @@ async fn pull_index_chunks<I: IndexFile>(
     let bytes = Arc::new(AtomicUsize::new(0));
     let chunk_count = Arc::new(AtomicUsize::new(0));
 
-    stream
-        .map(|info| {
-            let target = Arc::clone(&target);
-            let chunk_reader = chunk_reader.clone();
-            let bytes = Arc::clone(&bytes);
-            let chunk_count = Arc::clone(&chunk_count);
-            let verify_and_write_channel = verify_and_write_channel.clone();
-            let encountered_chunks = Arc::clone(&encountered_chunks);
+    let stream = stream.map(|info| {
+        let target = Arc::clone(&target);
+        let chunk_reader = chunk_reader.clone();
+        let bytes = Arc::clone(&bytes);
+        let chunk_count = Arc::clone(&chunk_count);
+        let verify_and_write_channel = verify_and_write_channel.clone();
+        let encountered_chunks = Arc::clone(&encountered_chunks);
 
-            Ok::<_, Error>(async move {
-                {
-                    // limit guard scope
-                    let mut guard = encountered_chunks.lock().unwrap();
-                    if let Some(touched) = guard.check_reusable(&info.digest) {
-                        if touched {
-                            return Ok::<_, Error>(());
-                        }
-                        let chunk_exists = proxmox_async::runtime::block_in_place(|| {
-                            target.cond_touch_chunk(&info.digest, false)
-                        })?;
-                        if chunk_exists {
-                            guard.mark_touched(&info.digest);
-                            //info!("chunk {} exists {}", pos, hex::encode(digest));
-                            return Ok::<_, Error>(());
-                        }
+        Ok::<_, Error>(async move {
+            {
+                // limit guard scope
+                let mut guard = encountered_chunks.lock().unwrap();
+                if let Some(touched) = guard.check_reusable(&info.digest) {
+                    if touched {
+                        return Ok::<_, Error>(());
+                    }
+                    let chunk_exists = proxmox_async::runtime::block_in_place(|| {
+                        target.cond_touch_chunk(&info.digest, false)
+                    })?;
+                    if chunk_exists {
+                        guard.mark_touched(&info.digest);
+                        //info!("chunk {} exists {}", pos, hex::encode(digest));
+                        return Ok::<_, Error>(());
                     }
-                    // mark before actually downloading the chunk, so this happens only once
-                    guard.mark_reusable(&info.digest);
-                    guard.mark_touched(&info.digest);
                 }
+                // mark before actually downloading the chunk, so this happens only once
+                guard.mark_reusable(&info.digest);
+                guard.mark_touched(&info.digest);
+            }
 
-                //info!("sync {} chunk {}", pos, hex::encode(digest));
-                let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
-                let raw_size = chunk.raw_size() as usize;
+            //info!("sync {} chunk {}", pos, hex::encode(digest));
+            let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+            let raw_size = chunk.raw_size() as usize;
 
-                // decode, verify and write in a separate threads to maximize throughput
-                proxmox_async::runtime::block_in_place(|| {
-                    verify_and_write_channel.send((chunk, info.digest, info.size()))
-                })?;
+            // decode, verify and write in a separate threads to maximize throughput
+            proxmox_async::runtime::block_in_place(|| {
+                verify_and_write_channel.send((chunk, info.digest, info.size()))
+            })?;
 
-                bytes.fetch_add(raw_size, Ordering::SeqCst);
-                chunk_count.fetch_add(1, Ordering::SeqCst);
+            bytes.fetch_add(raw_size, Ordering::SeqCst);
+            chunk_count.fetch_add(1, Ordering::SeqCst);
 
-                Ok(())
-            })
+            Ok(())
         })
-        .try_buffer_unordered(20)
-        .try_for_each(|_res| futures::future::ok(()))
-        .await?;
+    });
+
+    if let DecryptedIndexWriter::None = decrypted_index_writer {
+        stream
+            .try_buffer_unordered(20)
+            .try_for_each(|_res| futures::future::ok(()))
+            .await?;
+    } else {
+        // must keep chunk order to correctly rewrite index file
+        stream.try_for_each(|item| item).await?;
+    }
 
     drop(verify_and_write_channel);
 
@@ -330,9 +337,15 @@ async fn pull_single_archive<'a>(
             let (csum, size) = index.compute_csum();
             verify_archive(archive_info, &csum, size)?;
 
-            if reader.skip_chunk_sync(snapshot.datastore().name()) {
+            if crypt_config.is_none() && reader.skip_chunk_sync(snapshot.datastore().name()) {
                 info!("skipping chunk sync for same datastore");
             } else {
+                let new_index_writer = if crypt_config.is_some() {
+                    let writer = DynamicIndexWriter::create(&path)?;
+                    DecryptedIndexWriter::Dynamic(Arc::new(Mutex::new(writer)))
+                } else {
+                    DecryptedIndexWriter::None
+                };
                 let stats = pull_index_chunks(
                     reader
                         .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
@@ -341,8 +354,16 @@ async fn pull_single_archive<'a>(
                     index,
                     encountered_chunks,
                     backend,
+                    new_index_writer.clone(),
                 )
                 .await?;
+                if let DecryptedIndexWriter::Dynamic(index) = &new_index_writer {
+                    let csum = index.lock().unwrap().close()?;
+
+                    // Overwrite current tmp file so it will be persisted instead
+                    std::fs::rename(&path, &tmp_path)?;
+                }
+
                 sync_stats.add(stats);
             }
         }
@@ -353,9 +374,16 @@ async fn pull_single_archive<'a>(
             let (csum, size) = index.compute_csum();
             verify_archive(archive_info, &csum, size)?;
 
-            if reader.skip_chunk_sync(snapshot.datastore().name()) {
+            if crypt_config.is_none() && reader.skip_chunk_sync(snapshot.datastore().name()) {
                 info!("skipping chunk sync for same datastore");
             } else {
+                let new_index_writer = if crypt_config.is_some() {
+                    let writer =
+                        FixedIndexWriter::create(&path, Some(size), index.chunk_size as u32)?;
+                    DecryptedIndexWriter::Fixed(Arc::new(Mutex::new(writer)))
+                } else {
+                    DecryptedIndexWriter::None
+                };
                 let stats = pull_index_chunks(
                     reader
                         .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
@@ -364,8 +392,16 @@ async fn pull_single_archive<'a>(
                     index,
                     encountered_chunks,
                     backend,
+                    new_index_writer.clone(),
                 )
                 .await?;
+                if let DecryptedIndexWriter::Fixed(index) = &new_index_writer {
+                    let csum = index.lock().unwrap().close()?;
+
+                    // Overwrite current tmp file so it will be persisted instead
+                    std::fs::rename(&path, &tmp_path)?;
+                }
+
                 sync_stats.add(stats);
             }
         }
@@ -1280,3 +1316,10 @@ impl EncounteredChunks {
         self.chunk_set.clear();
     }
 }
+
+#[derive(Clone)]
+enum DecryptedIndexWriter {
+    Fixed(Arc<Mutex<FixedIndexWriter>>),
+    Dynamic(Arc<Mutex<DynamicIndexWriter>>),
+    None,
+}
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 25/30] sync: pull: extend encountered chunk by optional decrypted digest
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (23 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 24/30] sync: pull: introduce and use decrypt index writer if " Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 26/30] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
                   ` (4 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

For index files being decrypted during the pull, it is not enough to
keep track of the processed source chunks; the decrypted digest has
to be known as well in order to rewrite the index file.

Extend the encountered chunks tracking so that this can be recorded too.
To avoid clippy warnings and keep the code readable, introduce the
EncounteredChunkInfo struct as internal type for the hash map values.
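
The map layout described above can be sketched in plain std Rust. This is an illustrative reimplementation for discussion, not the patch's code — names mirror the patch, but the real methods operate behind an `Arc<Mutex<..>>` and interact with the chunk store:

```rust
use std::collections::HashMap;

// Per-digest state: whether the chunk may be reused for the index currently
// being pulled, whether it was already touched during this sync, and the
// digest of the re-encoded (decrypted) chunk, if one exists.
struct EncounteredChunkInfo {
    reusable: bool,
    touched: bool,
    decrypted_digest: Option<[u8; 32]>,
}

struct EncounteredChunks {
    chunk_set: HashMap<[u8; 32], EncounteredChunkInfo>,
}

impl EncounteredChunks {
    fn new() -> Self {
        Self { chunk_set: HashMap::new() }
    }

    // Returns (touched, decrypted digest) only if the chunk is reusable.
    fn check_reusable(&self, digest: &[u8; 32]) -> Option<(bool, Option<&[u8; 32]>)> {
        let info = self.chunk_set.get(digest)?;
        if info.reusable {
            Some((info.touched, info.decrypted_digest.as_ref()))
        } else {
            None
        }
    }

    // Mark chunk as reusable, inserting it as un-touched if not present.
    fn mark_reusable(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
        let entry = self.chunk_set.entry(*digest).or_insert(EncounteredChunkInfo {
            reusable: false,
            touched: false,
            decrypted_digest,
        });
        entry.reusable = true;
    }

    // Mark chunk as touched, inserting it as not reusable if not present.
    fn mark_touched(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
        let entry = self.chunk_set.entry(*digest).or_insert(EncounteredChunkInfo {
            reusable: false,
            touched: false,
            decrypted_digest,
        });
        entry.touched = true;
    }
}
```

Note that an existing entry keeps its `decrypted_digest`; the optional digest argument only takes effect on first insertion, matching the entry-API handling in the patch.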

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/server/pull.rs | 53 +++++++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 9bac50d4f..61880183a 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -178,7 +178,7 @@ async fn pull_index_chunks<I: IndexFile>(
             .filter(|info| {
                 let guard = encountered_chunks.lock().unwrap();
                 match guard.check_reusable(&info.digest) {
-                    Some(touched) => !touched, // reusable and already touched, can always skip
+                    Some((touched, _decrypted_chunk)) => !touched, // reusable and already touched, can always skip
                     None => true,
                 }
             }),
@@ -214,7 +214,7 @@ async fn pull_index_chunks<I: IndexFile>(
             {
                 // limit guard scope
                 let mut guard = encountered_chunks.lock().unwrap();
-                if let Some(touched) = guard.check_reusable(&info.digest) {
+                if let Some((touched, _decrypted_digest)) = guard.check_reusable(&info.digest) {
                     if touched {
                         return Ok::<_, Error>(());
                     }
@@ -222,14 +222,14 @@ async fn pull_index_chunks<I: IndexFile>(
                         target.cond_touch_chunk(&info.digest, false)
                     })?;
                     if chunk_exists {
-                        guard.mark_touched(&info.digest);
+                        guard.mark_touched(&info.digest, None);
                         //info!("chunk {} exists {}", pos, hex::encode(digest));
                         return Ok::<_, Error>(());
                     }
                 }
                 // mark before actually downloading the chunk, so this happens only once
-                guard.mark_reusable(&info.digest);
-                guard.mark_touched(&info.digest);
+                guard.mark_reusable(&info.digest, None);
+                guard.mark_touched(&info.digest, None);
             }
 
             //info!("sync {} chunk {}", pos, hex::encode(digest));
@@ -824,7 +824,7 @@ async fn pull_group(
 
                         for pos in 0..index.index_count() {
                             let chunk_info = index.chunk_info(pos).unwrap();
-                            reusable_chunks.mark_reusable(&chunk_info.digest);
+                            reusable_chunks.mark_reusable(&chunk_info.digest, None);
                         }
                     }
                 }
@@ -1254,12 +1254,17 @@ async fn pull_ns(
     Ok((progress, sync_stats, errors))
 }
 
+struct EncounteredChunkInfo {
+    reusable: bool,
+    touched: bool,
+    decrypted_digest: Option<[u8; 32]>,
+}
+
 /// Store the state of encountered chunks, tracking if they can be reused for the
 /// index file currently being pulled and if the chunk has already been touched
 /// during this sync.
 struct EncounteredChunks {
-    // key: digest, value: (reusable, touched)
-    chunk_set: HashMap<[u8; 32], (bool, bool)>,
+    chunk_set: HashMap<[u8; 32], EncounteredChunkInfo>,
 }
 
 impl EncounteredChunks {
@@ -1272,12 +1277,12 @@ impl EncounteredChunks {
 
     /// Check if the current state allows to reuse this chunk and if so,
     /// if the chunk has already been touched.
-    fn check_reusable(&self, digest: &[u8; 32]) -> Option<bool> {
-        if let Some((reusable, touched)) = self.chunk_set.get(digest) {
-            if !reusable {
+    fn check_reusable(&self, digest: &[u8; 32]) -> Option<(bool, Option<&[u8; 32]>)> {
+        if let Some(chunk_info) = self.chunk_set.get(digest) {
+            if !chunk_info.reusable {
                 None
             } else {
-                Some(*touched)
+                Some((chunk_info.touched, chunk_info.decrypted_digest.as_ref()))
             }
         } else {
             None
@@ -1285,28 +1290,36 @@ impl EncounteredChunks {
     }
 
     /// Mark chunk as reusable, inserting it as un-touched if not present
-    fn mark_reusable(&mut self, digest: &[u8; 32]) {
+    fn mark_reusable(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
         match self.chunk_set.entry(*digest) {
             Entry::Occupied(mut occupied) => {
-                let (reusable, _touched) = occupied.get_mut();
-                *reusable = true;
+                let chunk_info = occupied.get_mut();
+                chunk_info.reusable = true;
             }
             Entry::Vacant(vacant) => {
-                vacant.insert((true, false));
+                vacant.insert(EncounteredChunkInfo {
+                    reusable: true,
+                    touched: false,
+                    decrypted_digest,
+                });
             }
         }
     }
 
     /// Mark chunk as touched during this sync, inserting it as not reusable
     /// but touched if not present.
-    fn mark_touched(&mut self, digest: &[u8; 32]) {
+    fn mark_touched(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
         match self.chunk_set.entry(*digest) {
             Entry::Occupied(mut occupied) => {
-                let (_reusable, touched) = occupied.get_mut();
-                *touched = true;
+                let chunk_info = occupied.get_mut();
+                chunk_info.touched = true;
             }
             Entry::Vacant(vacant) => {
-                vacant.insert((false, true));
+                vacant.insert(EncounteredChunkInfo {
+                    reusable: false,
+                    touched: true,
+                    decrypted_digest,
+                });
             }
         }
     }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 26/30] sync: pull: decrypt blob files on pull if encryption key is configured
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (24 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 25/30] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 27/30] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
                   ` (3 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

During pull, blob files are stored in a temporary file before being
renamed to the actual blob filename as stored in the manifest.
If a decryption key is configured in the pull parameters, use that
temporary blob file after downloading it from the remote to decrypt
its contents and re-encode them as a new, compressed but unencrypted
blob file. Rename the decrypted tempfile over the original tmpfile,
which is then finally moved into place.
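
The tempfile swap at the end can be sketched with std only (the function name is illustrative; the patch additionally decodes the blob via DataBlobReader and registers the new checksum in the manifest before the rename):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write the re-encoded blob to a sibling ".dectmp" file, flush it to disk,
// then rename it over the download tmpfile so the usual rename-into-place
// persists the decrypted variant. On error the partial file is cleaned up.
fn replace_with_decrypted(tmp_path: &Path, decrypted_blob: &[u8]) -> std::io::Result<()> {
    let mut dec_path = tmp_path.to_path_buf();
    dec_path.set_extension("dectmp");
    let result = (|| {
        let mut file = fs::OpenOptions::new()
            .write(true)
            .create_new(true) // fail instead of clobbering a leftover file
            .open(&dec_path)?;
        file.write_all(decrypted_blob)?;
        file.sync_all()?; // stands in for the fsync on the raw fd in the patch
        fs::rename(&dec_path, tmp_path)
    })();
    if result.is_err() {
        let _ = fs::remove_file(&dec_path);
    }
    result
}
```

Since rename within the same directory is atomic, a crash leaves either the encrypted download or the complete decrypted blob, never a half-written file under the final tmp name.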

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- squash new manifest registration into this patch

 src/server/pull.rs | 63 +++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 60 insertions(+), 3 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 61880183a..75958d625 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -2,7 +2,8 @@
 
 use std::collections::hash_map::Entry;
 use std::collections::{HashMap, HashSet};
-use std::io::Seek;
+use std::io::{BufReader, Read, Seek, Write};
+use std::os::fd::AsRawFd;
 use std::sync::atomic::{AtomicUsize, Ordering};
 use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
@@ -14,7 +15,7 @@ use tracing::{info, warn};
 
 use pbs_api_types::{
     print_store_and_ns, ArchiveType, Authid, BackupArchiveName, BackupDir, BackupGroup,
-    BackupNamespace, GroupFilter, Operation, RateLimitConfig, Remote, SnapshotListItem,
+    BackupNamespace, CryptMode, GroupFilter, Operation, RateLimitConfig, Remote, SnapshotListItem,
     VerifyState, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, MAX_NAMESPACE_DEPTH,
     PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
 };
@@ -26,7 +27,9 @@ use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
 use pbs_datastore::index::IndexFile;
 use pbs_datastore::manifest::{BackupManifest, FileInfo};
 use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{check_backup_owner, DataStore, DatastoreBackend, StoreProgress};
+use pbs_datastore::{
+    check_backup_owner, DataBlobReader, DataStore, DatastoreBackend, StoreProgress,
+};
 use pbs_tools::sha::sha256;
 
 use super::sync::{
@@ -313,6 +316,7 @@ async fn pull_single_archive<'a>(
     encountered_chunks: Arc<Mutex<EncounteredChunks>>,
     crypt_config: Option<Arc<CryptConfig>>,
     backend: &DatastoreBackend,
+    new_manifest: Option<Arc<Mutex<BackupManifest>>>,
 ) -> Result<SyncStats, Error> {
     let archive_name = &archive_info.filename;
     let mut path = snapshot.full_path();
@@ -409,6 +413,57 @@ async fn pull_single_archive<'a>(
             tmpfile.rewind()?;
             let (csum, size) = sha256(&mut tmpfile)?;
             verify_archive(archive_info, &csum, size)?;
+
+            if crypt_config.is_some() {
+                let crypt_config = crypt_config.clone();
+                let tmp_path = tmp_path.clone();
+                let archive_name = archive_name.clone();
+
+                tokio::task::spawn_blocking(move || {
+                    // must rewind again since after verifying cursor is at the end of the file
+                    tmpfile.rewind()?;
+                    let reader = DataBlobReader::new(tmpfile, crypt_config)?;
+                    let mut reader = BufReader::new(reader);
+                    let mut raw_data = Vec::new();
+                    reader.read_to_end(&mut raw_data)?;
+
+                    let blob = DataBlob::encode(&raw_data, None, true)?;
+                    let raw_blob = blob.into_inner();
+
+                    let mut decrypted_tmp_path = tmp_path.clone();
+                    decrypted_tmp_path.set_extension("dectmp");
+                    let result = proxmox_lang::try_block!({
+                        let mut decrypted_tmpfile = std::fs::OpenOptions::new()
+                            .read(true)
+                            .write(true)
+                            .create_new(true)
+                            .open(&decrypted_tmp_path)?;
+                        decrypted_tmpfile.write_all(&raw_blob)?;
+                        decrypted_tmpfile.flush()?;
+                        decrypted_tmpfile.rewind()?;
+                        let (csum, size) = sha256(&mut decrypted_tmpfile)?;
+
+                        if let Some(new_manifest) = new_manifest {
+                            let mut new_manifest = new_manifest.lock().unwrap();
+                            let name = archive_name.as_str().try_into()?;
+                            new_manifest.add_file(&name, size, csum, CryptMode::None)?;
+                        }
+
+                        nix::unistd::fsync(decrypted_tmpfile.as_raw_fd())?;
+
+                        std::fs::rename(&decrypted_tmp_path, &tmp_path)?;
+                        Ok(())
+                    });
+
+                    if result.is_err() {
+                        let _ = std::fs::remove_file(&decrypted_tmp_path);
+                    }
+
+                    result
+                })
+                .await?
+                .map_err(|err: Error| format_err!("Failed when decrypting blob {path:?}: {err}"))?;
+            }
         }
     }
     if let Err(err) = std::fs::rename(&tmp_path, &path) {
@@ -509,6 +564,7 @@ async fn pull_snapshot<'a>(
     }
 
     let mut crypt_config = None;
+    let mut new_manifest = None;
 
     let backend = &params.target.backend;
     for item in manifest.files() {
@@ -558,6 +614,7 @@ async fn pull_snapshot<'a>(
             encountered_chunks.clone(),
             crypt_config.clone(),
             backend,
+            new_manifest.clone(),
         )
         .await?;
         sync_stats.add(stats);
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 27/30] sync: pull: decrypt chunks and rewrite index file for matching key
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (25 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 26/30] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 28/30] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
                   ` (2 subsequent siblings)
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Once the matching decryption key is provided, use it to decrypt the
chunks on pull and rewrite the index file based on the decrypted
chunk digests and offsets.
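
The offset bookkeeping for the rewritten index can be sketched as follows (illustrative only; in the patch the counter lives in an `Arc<AtomicU64>` shared by the stream closures, and — as patch 24 notes — the stream is processed in order when rewriting, so fetch_add yields a contiguous layout):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Each processed chunk claims its start offset by atomically advancing the
// shared counter; the returned (start, end) pair is what add_chunk() needs
// for the fixed and dynamic index writers respectively.
fn claim_offset(offset: &AtomicU64, chunk_size: u64) -> (u64, u64) {
    let start = offset.fetch_add(chunk_size, Ordering::SeqCst);
    (start, start + chunk_size)
}
```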

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- no changes

 src/server/pull.rs | 135 ++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 114 insertions(+), 21 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 75958d625..90cb99777 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -4,7 +4,7 @@ use std::collections::hash_map::Entry;
 use std::collections::{HashMap, HashSet};
 use std::io::{BufReader, Read, Seek, Write};
 use std::os::fd::AsRawFd;
-use std::sync::atomic::{AtomicUsize, Ordering};
+use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
 use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
 
@@ -21,7 +21,7 @@ use pbs_api_types::{
 };
 use pbs_client::BackupRepository;
 use pbs_config::CachedUserInfo;
-use pbs_datastore::data_blob::DataBlob;
+use pbs_datastore::data_blob::{DataBlob, DataChunkBuilder};
 use pbs_datastore::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
 use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
 use pbs_datastore::index::IndexFile;
@@ -181,7 +181,16 @@ async fn pull_index_chunks<I: IndexFile>(
             .filter(|info| {
                 let guard = encountered_chunks.lock().unwrap();
                 match guard.check_reusable(&info.digest) {
-                    Some((touched, _decrypted_chunk)) => !touched, // reusable and already touched, can always skip
+                    Some((touched, mapped_digest)) => {
+                        if mapped_digest.is_some() {
+                            // if there is a mapping, then the chunk digest must be rewritten to
+                            // the index, cannot skip here but optimized when processing the stream
+                            true
+                        } else {
+                            // reusable and already touched, can always skip
+                            !touched
+                        }
+                    }
                     None => true,
                 }
             }),
@@ -203,6 +212,7 @@ async fn pull_index_chunks<I: IndexFile>(
     let verify_and_write_channel = verify_pool.channel();
 
     let bytes = Arc::new(AtomicUsize::new(0));
+    let offset = Arc::new(AtomicU64::new(0));
     let chunk_count = Arc::new(AtomicUsize::new(0));
 
     let stream = stream.map(|info| {
@@ -212,36 +222,119 @@ async fn pull_index_chunks<I: IndexFile>(
         let chunk_count = Arc::clone(&chunk_count);
         let verify_and_write_channel = verify_and_write_channel.clone();
         let encountered_chunks = Arc::clone(&encountered_chunks);
+        let offset = Arc::clone(&offset);
+        let decrypted_index_writer = decrypted_index_writer.clone();
 
         Ok::<_, Error>(async move {
-            {
-                // limit guard scope
-                let mut guard = encountered_chunks.lock().unwrap();
-                if let Some((touched, _decrypted_digest)) = guard.check_reusable(&info.digest) {
-                    if touched {
+            //info!("sync {} chunk {}", pos, hex::encode(digest));
+            let (chunk, digest, size) = match decrypted_index_writer {
+                DecryptedIndexWriter::Fixed(index) => {
+                    if let Some((_touched, Some(decrypted_digest))) = encountered_chunks
+                        .lock()
+                        .unwrap()
+                        .check_reusable(&info.digest)
+                    {
+                        // already got the decrypted digest and chunk has been written,
+                        // no need to process again
+                        let size = info.size();
+                        let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+
+                        index.lock().unwrap().add_chunk(
+                            start_offset,
+                            size as u32,
+                            decrypted_digest,
+                        )?;
+
                         return Ok::<_, Error>(());
                     }
-                    let chunk_exists = proxmox_async::runtime::block_in_place(|| {
-                        target.cond_touch_chunk(&info.digest, false)
-                    })?;
-                    if chunk_exists {
-                        guard.mark_touched(&info.digest, None);
-                        //info!("chunk {} exists {}", pos, hex::encode(digest));
+
+                    let chunk_data = chunk_reader.read_chunk(&info.digest).await?;
+                    let (chunk, digest) =
+                        DataChunkBuilder::new(&chunk_data).compress(true).build()?;
+
+                    let size = chunk_data.len() as u64;
+                    let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+
+                    index
+                        .lock()
+                        .unwrap()
+                        .add_chunk(start_offset, size as u32, &digest)?;
+
+                    encountered_chunks
+                        .lock()
+                        .unwrap()
+                        .mark_reusable(&info.digest, Some(digest));
+
+                    (chunk, digest, size)
+                }
+                DecryptedIndexWriter::Dynamic(index) => {
+                    if let Some((_touched, Some(decrypted_digest))) = encountered_chunks
+                        .lock()
+                        .unwrap()
+                        .check_reusable(&info.digest)
+                    {
+                        // already got the decrypted digest and chunk has been written,
+                        // no need to process again
+                        let size = info.size();
+                        let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+                        let end_offset = start_offset + size;
+
+                        index
+                            .lock()
+                            .unwrap()
+                            .add_chunk(end_offset, decrypted_digest)?;
+
                         return Ok::<_, Error>(());
                     }
+
+                    let chunk_data = chunk_reader.read_chunk(&info.digest).await?;
+                    let (chunk, digest) =
+                        DataChunkBuilder::new(&chunk_data).compress(true).build()?;
+
+                    let size = chunk_data.len() as u64;
+                    let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+                    let end_offset = start_offset + size;
+
+                    index.lock().unwrap().add_chunk(end_offset, &digest)?;
+
+                    encountered_chunks
+                        .lock()
+                        .unwrap()
+                        .mark_reusable(&info.digest, Some(digest));
+
+                    (chunk, digest, size)
                 }
-                // mark before actually downloading the chunk, so this happens only once
-                guard.mark_reusable(&info.digest, None);
-                guard.mark_touched(&info.digest, None);
-            }
+                DecryptedIndexWriter::None => {
+                    {
+                        // limit guard scope
+                        let mut guard = encountered_chunks.lock().unwrap();
+                        if let Some((touched, _mapped)) = guard.check_reusable(&info.digest) {
+                            if touched {
+                                return Ok::<_, Error>(());
+                            }
+                            let chunk_exists = proxmox_async::runtime::block_in_place(|| {
+                                target.cond_touch_chunk(&info.digest, false)
+                            })?;
+                            if chunk_exists {
+                                guard.mark_touched(&info.digest, None);
+                                //info!("chunk {} exists {}", pos, hex::encode(digest));
+                                return Ok::<_, Error>(());
+                            }
+                        }
+                        // mark before actually downloading the chunk, so this happens only once
+                        guard.mark_reusable(&info.digest, None);
+                        guard.mark_touched(&info.digest, None);
+                    }
 
-            //info!("sync {} chunk {}", pos, hex::encode(digest));
-            let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+                    let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+                    (chunk, info.digest, info.size())
+                }
+            };
             let raw_size = chunk.raw_size() as usize;
 
             // decode, verify and write in a separate threads to maximize throughput
             proxmox_async::runtime::block_in_place(|| {
-                verify_and_write_channel.send((chunk, info.digest, info.size()))
+                verify_and_write_channel.send((chunk, digest, size))
             })?;
 
             bytes.fetch_add(raw_size, Ordering::SeqCst);
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 28/30] sync: pull: decrypt snapshots with matching encryption key fingerprint
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (26 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 27/30] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 29/30] api: encryption keys: allow to toggle the archived state for keys Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 30/30] docs: add section describing server side encryption for sync jobs Christian Ebner
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Decrypt any backup snapshot during pull which was encrypted with a
matching encryption key. Matching is performed by comparing the
fingerprint of the key as stored in the source manifest with the
fingerprint of the key configured for the pull sync job.

If they match, pass the key's crypt config along to the index and
chunk readers and write the local files unencrypted instead of simply
downloading them. A new manifest file is written in place of the
original one and the files are registered accordingly.

If the local snapshot already exists (resync), refuse to sync without
decryption if the target snapshot is unencrypted but the source is
encrypted.

To detect file changes on resync, compare against the file change
fingerprint that was calculated on the decrypted files before the
encrypted push sync.
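
The key-selection step described above can be sketched like this (a hypothetical stand-in: the patch works with typed fingerprints and the encryption key config, not plain string pairs):

```rust
// Pick the configured sync key whose fingerprint equals the one recorded in
// the source manifest; only then is a crypt config handed to the readers,
// otherwise the snapshot is synced verbatim without decryption.
fn select_matching_key<'a>(
    manifest_fingerprint: Option<&str>,
    configured_keys: &'a [(String, String)], // (key id, fingerprint) pairs
) -> Option<&'a str> {
    let wanted = manifest_fingerprint?;
    configured_keys
        .iter()
        .find(|(_, fp)| fp.as_str() == wanted)
        .map(|(id, _)| id.as_str())
}
```

An unencrypted source snapshot (no fingerprint in the manifest) or an unknown fingerprint both result in `None`, i.e. a plain pull.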

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- fix bogus check, must use change-detection-fingerprint, not key-fingerprint to detect changes on an already existing manifest.
- convert unprotected manifest part to json value to drop key-fingerprint.
- Also drop verify state on pull, do not rely on inherent check to better protect against bugs and corruptions.
- Log id of key used for decryption, not just fingerprint

 src/server/pull.rs | 94 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 93 insertions(+), 1 deletion(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 90cb99777..df2746c11 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -11,6 +11,9 @@ use std::time::SystemTime;
 use anyhow::{bail, format_err, Context, Error};
 use pbs_tools::crypt_config::CryptConfig;
 use proxmox_human_byte::HumanByte;
+use serde_json::Value;
+use tokio::fs::OpenOptions;
+use tokio::io::AsyncWriteExt;
 use tracing::{info, warn};
 
 use pbs_api_types::{
@@ -459,6 +462,17 @@ async fn pull_single_archive<'a>(
 
                     // Overwrite current tmp file so it will be persisted instead
                     std::fs::rename(&path, &tmp_path)?;
+
+                    if let Some(new_manifest) = new_manifest {
+                        let name = archive_name.as_str().try_into()?;
+                        // size is identical to original, encrypted index
+                        new_manifest.lock().unwrap().add_file(
+                            &name,
+                            size,
+                            csum,
+                            CryptMode::None,
+                        )?;
+                    }
                 }
 
                 sync_stats.add(stats);
@@ -497,6 +511,17 @@ async fn pull_single_archive<'a>(
 
                     // Overwrite current tmp file so it will be persisted instead
                     std::fs::rename(&path, &tmp_path)?;
+
+                    if let Some(new_manifest) = new_manifest {
+                        let name = archive_name.as_str().try_into()?;
+                        // size is identical to original, encrypted index
+                        new_manifest.lock().unwrap().add_file(
+                            &name,
+                            size,
+                            csum,
+                            CryptMode::None,
+                        )?;
+                    }
                 }
 
                 sync_stats.add(stats);
@@ -614,6 +639,7 @@ async fn pull_snapshot<'a>(
         return Ok(sync_stats);
     }
 
+    let mut local_manifest_file_fp = None;
     if manifest_name.exists() && !corrupt {
         let manifest_blob = proxmox_lang::try_block!({
             let mut manifest_file = std::fs::File::open(&manifest_name).map_err(|err| {
@@ -634,12 +660,31 @@ async fn pull_snapshot<'a>(
             info!("no data changes");
             let _ = std::fs::remove_file(&tmp_manifest_name);
             return Ok(sync_stats); // nothing changed
+        } else {
+            let manifest = BackupManifest::try_from(manifest_blob)?;
+            if !params.crypt_configs.is_empty() {
+                let fp = manifest.change_detection_fingerprint()?;
+                local_manifest_file_fp = Some(hex::encode(fp));
+            }
         }
     }
 
-    let manifest_data = tmp_manifest_blob.raw_data().to_vec();
+    let mut manifest_data = tmp_manifest_blob.raw_data().to_vec();
     let manifest = BackupManifest::try_from(tmp_manifest_blob)?;
 
+    if let Value::String(fp) = &manifest.unprotected["change-detection-fingerprint"] {
+        if let Some(local) = &local_manifest_file_fp {
+            if fp == local {
+                if !client_log_name.exists() {
+                    reader.try_download_client_log(&client_log_name).await?;
+                };
+                info!("no data changes");
+                let _ = std::fs::remove_file(&tmp_manifest_name);
+                return Ok(sync_stats);
+            }
+        }
+    }
+
     if ignore_not_verified_or_encrypted(
         &manifest,
         snapshot.dir(),
@@ -658,6 +703,21 @@ async fn pull_snapshot<'a>(
 
     let mut crypt_config = None;
     let mut new_manifest = None;
+    if let Ok(Some(source_fingerprint)) = manifest.fingerprint() {
+        for (key_id, config) in &params.crypt_configs {
+            if config.fingerprint() == *source_fingerprint.bytes() {
+                crypt_config = Some(Arc::clone(config));
+                new_manifest = Some(Arc::new(Mutex::new(BackupManifest::new(snapshot.into()))));
+                info!("Found matching key '{key_id}' with fingerprint {source_fingerprint}, decrypt on pull");
+                break;
+            }
+        }
+    }
+
+    // pre-existing local manifest for unencrypted snapshot, never overwrite with encrypted
+    if local_manifest_file_fp.is_some() && crypt_config.is_none() {
+        bail!("local unencrypted snapshot detected, refuse to sync without source decryption");
+    }
 
     let backend = &params.target.backend;
     for item in manifest.files() {
@@ -713,6 +773,38 @@ async fn pull_snapshot<'a>(
         sync_stats.add(stats);
     }
 
+    if let Some(new_manifest) = new_manifest {
+        let mut new_manifest = Arc::try_unwrap(new_manifest)
+            .map_err(|_arc| {
+                format_err!("failed to take ownership of still referenced new manifest")
+            })?
+            .into_inner()
+            .unwrap();
+
+        // copy over notes etc., but drop the encryption key fingerprint and verify state, to be
+        // re-verified independently from the sync.
+        new_manifest.unprotected = manifest.unprotected.clone();
+        if let Some(unprotected) = new_manifest.unprotected.as_object_mut() {
+            unprotected.remove("key-fingerprint");
+            unprotected.remove("verify_state");
+        } else {
+            bail!("Encountered unexpected manifest without 'unprotected' section.");
+        }
+
+        let manifest_string = new_manifest.to_string(None)?;
+        let manifest_blob = DataBlob::encode(manifest_string.as_bytes(), None, true)?;
+        // update contents to be uploaded to backend
+        manifest_data = manifest_blob.raw_data().to_vec();
+
+        let mut tmp_manifest_file = OpenOptions::new()
+            .write(true)
+            .truncate(true) // clear pre-existing manifest content
+            .open(&tmp_manifest_name)
+            .await?;
+        tmp_manifest_file.write_all(&manifest_data).await?;
+        tmp_manifest_file.flush().await?;
+    }
+
     if let Err(err) = std::fs::rename(&tmp_manifest_name, &manifest_name) {
         bail!("Atomic rename file {:?} failed - {}", manifest_name, err);
     }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH proxmox-backup v3 29/30] api: encryption keys: allow to toggle the archived state for keys
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (27 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 28/30] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 30/30] docs: add section describing server side encryption for sync jobs Christian Ebner
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

Adapt the API endpoint to not only allow archiving a key, but to
toggle its archived state by setting or stripping the optional
`archived-at` timestamp in the config.

Expose this in the UI by adapting the corresponding button
accordingly.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- not present in previous version

note: kept this as a separate patch for now, as I am unsure whether it
should be split into 2 dedicated API endpoints instead.

 pbs-config/src/encryption_keys.rs  | 10 ++++++----
 src/api2/config/encryption_keys.rs |  6 +++---
 www/config/EncryptionKeysView.js   | 24 +++++++++++++++++-------
 3 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/pbs-config/src/encryption_keys.rs b/pbs-config/src/encryption_keys.rs
index fd5989a98..2120ae861 100644
--- a/pbs-config/src/encryption_keys.rs
+++ b/pbs-config/src/encryption_keys.rs
@@ -196,16 +196,18 @@ pub fn delete_key(id: &str, mut config: SectionConfigData) -> Result<bool, Error
     Ok(false)
 }
 
-/// Mark the key as archived by setting the `archived-at` timestamp.
+/// Toggle the key's archived state by setting or dropping the `archived-at` timestamp.
 pub fn archive_key(id: &str, mut config: SectionConfigData) -> Result<(), Error> {
     let mut key: CryptKey = config.lookup(ENCRYPTION_KEYS_CFG_TYPE_ID, id)?;
 
     if key.archived_at.is_some() {
-        bail!("key already marked as archived");
+        // was archived, mark as active again
+        key.archived_at = None;
+    } else {
+        // was not archived, mark as archived
+        key.archived_at = Some(proxmox_time::epoch_i64());
     }
 
-    key.archived_at = Some(proxmox_time::epoch_i64());
-
     config.set_data(id, ENCRYPTION_KEYS_CFG_TYPE_ID, &key)?;
     let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
     // drops config lock
diff --git a/src/api2/config/encryption_keys.rs b/src/api2/config/encryption_keys.rs
index d3097929d..a10430b25 100644
--- a/src/api2/config/encryption_keys.rs
+++ b/src/api2/config/encryption_keys.rs
@@ -126,8 +126,8 @@ pub fn create_key(
         permission: &Permission::Privilege(&["system", "encryption-keys", "{id}"], PRIV_SYS_MODIFY, false),
     },
 )]
-/// Mark the key by given id as archived, no longer usable to encrypt contents.
-pub fn archive_key(
+/// Toggle the archive state for the key with the given id; archived keys are no longer usable to encrypt contents.
+pub fn toggle_key_archive_state(
     id: String,
     digest: Option<String>,
     _rpcenv: &mut dyn RpcEnvironment,
@@ -210,7 +210,7 @@ fn encryption_key_in_use(id: &str) -> Result<Option<Vec<String>>, Error> {
 }
 
 const ITEM_ROUTER: Router = Router::new()
-    .post(&API_METHOD_ARCHIVE_KEY)
+    .post(&API_METHOD_TOGGLE_KEY_ARCHIVE_STATE)
     .delete(&API_METHOD_DELETE_KEY);
 
 pub const ROUTER: Router = Router::new()
diff --git a/www/config/EncryptionKeysView.js b/www/config/EncryptionKeysView.js
index 35f147799..77542932d 100644
--- a/www/config/EncryptionKeysView.js
+++ b/www/config/EncryptionKeysView.js
@@ -38,7 +38,7 @@ Ext.define('PBS.config.EncryptionKeysView', {
             }).show();
         },
 
-        archiveEncryptionKey: function () {
+        toggleEncryptionKeyArchiveState: function () {
             let me = this;
             let view = me.getView();
             let selection = view.getSelection();
@@ -246,14 +246,24 @@ Ext.define('PBS.config.EncryptionKeysView', {
         '-',
         {
             xtype: 'proxmoxButton',
-            text: gettext('Archive'),
-            handler: 'archiveEncryptionKey',
+            text: gettext('Toggle Archived'),
+            handler: 'toggleEncryptionKeyArchiveState',
             dangerous: true,
-            confirmMsg: Ext.String.format(
-                gettext('Archiving will render the key unusable to encrypt new content, proceed?'),
-            ),
+            confirmMsg: (item) => {
+                let msg;
+                if (item.data['archived-at']) {
+                    msg = gettext(
+                        'Are you sure you want to restore the archived key to be active again?',
+                    );
+                } else {
+                    msg = gettext(
+                        'Archiving will render the key unusable to encrypt new content, proceed?',
+                    );
+                }
+                return Ext.String.format(msg);
+            },
             disabled: true,
-            enableFn: (item) => item.data.type === 'sync' && !item.data['archived-at'],
+            enableFn: (item) => item.data.type === 'sync',
         },
         '-',
         {
-- 
2.47.3

* [PATCH proxmox-backup v3 30/30] docs: add section describing server side encryption for sync jobs
  2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
                   ` (28 preceding siblings ...)
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 29/30] api: encryption keys: allow to toggle the archived state for keys Christian Ebner
@ 2026-04-14 12:59 ` Christian Ebner
  29 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-14 12:59 UTC (permalink / raw)
  To: pbs-devel

In particular, clarify the terminology of active encryption keys,
associated keys, key archiving, and the requirements for key removal.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 2:
- not present in previous version

 docs/managing-remotes.rst | 49 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/docs/managing-remotes.rst b/docs/managing-remotes.rst
index 95ac4823c..182fbda09 100644
--- a/docs/managing-remotes.rst
+++ b/docs/managing-remotes.rst
@@ -302,3 +302,52 @@ The following permissions are required for a sync job in push direction:
 
 .. note:: Sync jobs in push direction require namespace support on the remote
    Proxmox Backup Server instance (minimum version 2.2).
+
+Server Side Encryption/Decryption During Sync
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Sync jobs in push direction allow encrypting unencrypted snapshots when
+syncing to a less trusted remote Proxmox Backup Server instance. For this, a
+server side encryption key can be assigned to the sync job. This key will then
+be used to encrypt the contents before pushing them to the remote, analogous
+to performing a backup with an encryption key. Already encrypted snapshots are
+not re-encrypted but rather pushed unmodified. Snapshots containing only
+partially encrypted contents are skipped for security reasons.
+
+Sync jobs in pull direction, on the other hand, allow assigning a number of
+associated keys, which will be used to decrypt snapshot contents if the key
+fingerprint of one of the listed keys matches the one used to encrypt the
+backup snapshot. The active encryption key has no effect for sync jobs in pull
+direction and should not be set.
+
+In order to configure the sync job, as well as for the sync job owner or local
+user to access the keys during sync, ``System.Modify`` permissions are required
+on the ``/system/encryption-keys/{key}`` path.
+
+.. note:: Encryption key handling comes with a few risks, especially with key
+   rotation. Therefore, only active keys can be used to encrypt new snapshot
+   contents during push sync. If the active encryption key is changed, the
+   previous key is kept as an associated key on the sync job, in order to
+   protect it from accidental removal. Further, any encryption key can be
+   archived, rendering it no longer usable for encryption, only for decrypting
+   pre-existing contents. An encryption key used by sync jobs must therefore
+   be marked as archived and disassociated from all sync jobs before it can be
+   removed.
+
+The following command can be used to assign the active encryption key for a
+sync job:
+
+.. code-block:: console
+
+    # proxmox-backup-manager sync-job update pbs2-push --active-encryption-key key0
+
+Setting the associated keys will drop any key not present in the given list,
+with the exception of the previously assigned active encryption key, if it is
+updated as well. The previously active encryption key is always pushed to the
+list of associated keys on rotation. For example, the command below would
+assign ``key1`` as the new active encryption key and ``key0,key2,key3`` as
+associated keys for the sync job.
+
+.. code-block:: console
+
+    # proxmox-backup-manager sync-job update pbs2-push --active-encryption-key key1 --associated-key key2 --associated-key key3
-- 
2.47.3

* Re: [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling Christian Ebner
@ 2026-04-14 14:32   ` Michael Köppl
  2026-04-15  6:48     ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Michael Köppl @ 2026-04-14 14:32 UTC (permalink / raw)
  To: Christian Ebner, pbs-devel

2 comments inline

On Tue Apr 14, 2026 at 2:59 PM CEST, Christian Ebner wrote:

[snip]

> +/// Store the encryption key to file.
> +///
> +/// Inserts the key in the config and stores it to the given file.
> +pub fn store_key(id: &str, key: &KeyConfig) -> Result<(), Error> {
> +    let _lock = lock_config()?;
> +    let (mut config, _digest) = config()?;
> +
> +    if config.sections.contains_key(id) {
> +        bail!("key with id '{id}' already exists.");
> +    }
> +
> +    let backup_user = crate::backup_user()?;
> +    let dir_options = CreateOptions::new()
> +        .perm(Mode::from_bits_truncate(0o0750))
> +        .owner(Uid::from_raw(0))
> +        .group(backup_user.gid);
> +
> +    proxmox_sys::fs::ensure_dir_exists(ENCRYPTION_KEYS_DIR, &dir_options, true)?;
> +
> +    let key_path = format!("{ENCRYPTION_KEYS_DIR}{id}.enc");
> +    let key_lock_path = format!("{key_path}.lck");
> +
> +    // lock to avoid race with key deletion
> +    open_backup_lockfile(&key_lock_path, None, true)?;

This needs to be assigned to a variable, no? Otherwise, the lock would
be immediately dropped.

In other places we have something like let _lock = lock_config()?;
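
To make the issue concrete, here is a minimal, self-contained illustration
(`Guard` is a stand-in for the `BackupLockGuard` returned by
`open_backup_lockfile()`, not the real type): a guard that is not bound to a
variable is a temporary and is dropped at the end of the statement, so the
lock is released immediately.

```rust
// Minimal RAII guard: "locks" on construction, "unlocks" on Drop.
use std::sync::atomic::{AtomicBool, Ordering};

static LOCKED: AtomicBool = AtomicBool::new(false);

struct Guard;

impl Guard {
    fn acquire() -> Guard {
        LOCKED.store(true, Ordering::SeqCst);
        Guard
    }
}

impl Drop for Guard {
    fn drop(&mut self) {
        LOCKED.store(false, Ordering::SeqCst);
    }
}

fn main() {
    Guard::acquire(); // temporary: dropped right here, lock already released
    assert!(!LOCKED.load(Ordering::SeqCst));

    let _lock = Guard::acquire(); // bound: held until end of scope
    assert!(LOCKED.load(Ordering::SeqCst));
}
```

Note that `let _ = Guard::acquire();` would also drop immediately; only a
named binding such as `_lock` keeps the guard alive for the scope.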

> +
> +    // assert the key file is empty or does not exist
> +    match std::fs::metadata(&key_path) {
> +        Ok(metadata) => {
> +            if metadata.len() > 0 {
> +                bail!("detected pre-existing key file, refusing to overwrite.");
> +            }
> +        }
> +        Err(err) if err.kind() == std::io::ErrorKind::NotFound => (),
> +        Err(err) => return Err(err.into()),
> +    }
> +

[snip]

> +/// Delete the encryption key from config.
> +///
> +/// Returns true if the key was removed successfully, false if there was no matching key.
> +/// Safety: caller must acquire and hold config lock.
> +pub fn delete_key(id: &str, mut config: SectionConfigData) -> Result<bool, Error> {
> +    if let Some((_, key)) = config.sections.remove(id) {
> +        let key =
> +            CryptKey::deserialize(key).map_err(|_err| format_err!("failed to parse key config"))?;
> +
> +        if key.archived_at.is_none() {
> +            bail!("key still active, deleting is only possible for archived keys");
> +        }
> +
> +        if let Some(key_path) = &key.info.path {
> +            let key_lock_path = format!("{key_path}.lck");
> +            // Avoid races with key insertion
> +            let _lock = open_backup_lockfile(key_lock_path, None, true)?;
> +
> +            let key_config = KeyConfig::load(key_path)?;
> +            let stored_key_info = KeyInfo::from(&key_config);
> +            // Check the key is the expected one
> +            if key.info.fingerprint != stored_key_info.fingerprint {
> +                bail!("unexpected key detected in key file, refuse to delete");
> +            }
> +
> +            let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
> +            // drops config lock
> +            replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())?;
> +
> +            std::fs::remove_file(key_path)?;

Wouldn't it make more sense to delete the key before writing the config?
If removing the key fails, an error would be returned, the key file
would (possibly) still be there, but the config would already have been
updated, leaving an orphaned key file.

> +            return Ok(true);
> +        }
> +
> +        bail!("missing key file path for key '{id}'");
> +    }
> +    Ok(false)
> +}
> +

> [snip]

* Re: [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling
  2026-04-14 14:32   ` Michael Köppl
@ 2026-04-15  6:48     ` Christian Ebner
  2026-04-15  8:03       ` Daniel Kral
  2026-04-15  8:06       ` Thomas Lamprecht
  0 siblings, 2 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-15  6:48 UTC (permalink / raw)
  To: Michael Köppl, pbs-devel

On 4/14/26 4:31 PM, Michael Köppl wrote:
> 2 comments inline
> 
> On Tue Apr 14, 2026 at 2:59 PM CEST, Christian Ebner wrote:
> 
> [snip]
> 
>> +/// Store the encryption key to file.
>> +///
>> +/// Inserts the key in the config and stores it to the given file.
>> +pub fn store_key(id: &str, key: &KeyConfig) -> Result<(), Error> {
>> +    let _lock = lock_config()?;
>> +    let (mut config, _digest) = config()?;
>> +
>> +    if config.sections.contains_key(id) {
>> +        bail!("key with id '{id}' already exists.");
>> +    }
>> +
>> +    let backup_user = crate::backup_user()?;
>> +    let dir_options = CreateOptions::new()
>> +        .perm(Mode::from_bits_truncate(0o0750))
>> +        .owner(Uid::from_raw(0))
>> +        .group(backup_user.gid);
>> +
>> +    proxmox_sys::fs::ensure_dir_exists(ENCRYPTION_KEYS_DIR, &dir_options, true)?;
>> +
>> +    let key_path = format!("{ENCRYPTION_KEYS_DIR}{id}.enc");
>> +    let key_lock_path = format!("{key_path}.lck");
>> +
>> +    // lock to avoid race with key deletion
>> +    open_backup_lockfile(&key_lock_path, None, true)?;
> 
> This needs to be assigned to a variable, no? Otherwise, the lock would
> be immediately dropped.

Oh, good catch! Indeed without this the lock would be immediately 
dropped, will be fixed. Thanks!

> 
> In other places we have something like let _lock = lock_config()?;
> 
>> +
>> +    // assert the key file is empty or does not exist
>> +    match std::fs::metadata(&key_path) {
>> +        Ok(metadata) => {
>> +            if metadata.len() > 0 {
>> +                bail!("detected pre-existing key file, refusing to overwrite.");
>> +            }
>> +        }
>> +        Err(err) if err.kind() == std::io::ErrorKind::NotFound => (),
>> +        Err(err) => return Err(err.into()),
>> +    }
>> +
> 
> [snip]
> 
>> +/// Delete the encryption key from config.
>> +///
>> +/// Returns true if the key was removed successfully, false if there was no matching key.
>> +/// Safety: caller must acquire and hold config lock.
>> +pub fn delete_key(id: &str, mut config: SectionConfigData) -> Result<bool, Error> {
>> +    if let Some((_, key)) = config.sections.remove(id) {
>> +        let key =
>> +            CryptKey::deserialize(key).map_err(|_err| format_err!("failed to parse key config"))?;
>> +
>> +        if key.archived_at.is_none() {
>> +            bail!("key still active, deleting is only possible for archived keys");
>> +        }
>> +
>> +        if let Some(key_path) = &key.info.path {
>> +            let key_lock_path = format!("{key_path}.lck");
>> +            // Avoid races with key insertion
>> +            let _lock = open_backup_lockfile(key_lock_path, None, true)?;
>> +
>> +            let key_config = KeyConfig::load(key_path)?;
>> +            let stored_key_info = KeyInfo::from(&key_config);
>> +            // Check the key is the expected one
>> +            if key.info.fingerprint != stored_key_info.fingerprint {
>> +                bail!("unexpected key detected in key file, refuse to delete");
>> +            }
>> +
>> +            let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
>> +            // drops config lock
>> +            replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())?;
>> +
>> +            std::fs::remove_file(key_path)?;
> 
> Wouldn't it make more sense to delete the key before writing the config?
> If removing the key fails, an error would be returned, the key file
> would (possibly) still be there, but the config would already have been
> updated, leaving an orphaned key file.

The ordering here was chosen as is on purpose, see 
https://lore.proxmox.com/pbs-devel/b7d0f730-a574-4454-bf18-933db202db8b@proxmox.com/

> 
>> +            return Ok(true);
>> +        }
>> +
>> +        bail!("missing key file path for key '{id}'");
>> +    }
>> +    Ok(false)
>> +}
>> +
> 
> [snip]

* Re: [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling
  2026-04-15  6:48     ` Christian Ebner
@ 2026-04-15  8:03       ` Daniel Kral
  2026-04-15  8:21         ` Christian Ebner
  2026-04-15  8:06       ` Thomas Lamprecht
  1 sibling, 1 reply; 40+ messages in thread
From: Daniel Kral @ 2026-04-15  8:03 UTC (permalink / raw)
  To: Christian Ebner, Michael Köppl, pbs-devel

On Wed Apr 15, 2026 at 8:48 AM CEST, Christian Ebner wrote:
> On 4/14/26 4:31 PM, Michael Köppl wrote:
>> 2 comments inline
>> 
>> On Tue Apr 14, 2026 at 2:59 PM CEST, Christian Ebner wrote:
>> 
>> [snip]
>> 
>>> +/// Store the encryption key to file.
>>> +///
>>> +/// Inserts the key in the config and stores it to the given file.
>>> +pub fn store_key(id: &str, key: &KeyConfig) -> Result<(), Error> {
>>> +    let _lock = lock_config()?;
>>> +    let (mut config, _digest) = config()?;
>>> +
>>> +    if config.sections.contains_key(id) {
>>> +        bail!("key with id '{id}' already exists.");
>>> +    }
>>> +
>>> +    let backup_user = crate::backup_user()?;
>>> +    let dir_options = CreateOptions::new()
>>> +        .perm(Mode::from_bits_truncate(0o0750))
>>> +        .owner(Uid::from_raw(0))
>>> +        .group(backup_user.gid);
>>> +
>>> +    proxmox_sys::fs::ensure_dir_exists(ENCRYPTION_KEYS_DIR, &dir_options, true)?;
>>> +
>>> +    let key_path = format!("{ENCRYPTION_KEYS_DIR}{id}.enc");
>>> +    let key_lock_path = format!("{key_path}.lck");
>>> +
>>> +    // lock to avoid race with key deletion
>>> +    open_backup_lockfile(&key_lock_path, None, true)?;
>> 
>> This needs to be assigned to a variable, no? Otherwise, the lock would
>> be immediately dropped.
>
> Oh, good catch! Indeed without this the lock would be immediately 
> dropped, will be fixed. Thanks!
>

Would it make sense to add a #[must_use = "..."] attribute [0] to
open_backup_lockfile() or even the BackupLockGuard in general or would
it be too strict here?

[0] https://doc.rust-lang.org/reference/attributes/diagnostics.html#the-must_use-attribute
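
For illustration, a rough sketch of what this could look like, using a
simplified stand-in for the real `BackupLockGuard` (the actual type and
helper in proxmox-sys also handle flock() and timeouts): tagging the guard
type itself with `#[must_use]` makes rustc warn at every call site that
discards a returned guard, without annotating each returning function.

```rust
// Sketch: #[must_use] on the guard type warns on discarded lock guards.
use std::fs::File;
use std::io;

#[must_use = "if unused the lock file is closed and the lock is released immediately"]
pub struct BackupLockGuard {
    _file: Option<File>,
}

// simplified: the real helper also applies flock() with a timeout
pub fn open_backup_lockfile(path: &str) -> io::Result<BackupLockGuard> {
    Ok(BackupLockGuard { _file: Some(File::create(path)?) })
}

fn main() -> io::Result<()> {
    // open_backup_lockfile("/tmp/demo.lck")?; // <- would now trigger an unused_must_use warning
    let _lock = open_backup_lockfile("/tmp/demo.lck")?; // bound: no warning, lock held
    Ok(())
}
```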

>> 
>> In other places we have something like let _lock = lock_config()?;
>> 

* Re: [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling
  2026-04-15  6:48     ` Christian Ebner
  2026-04-15  8:03       ` Daniel Kral
@ 2026-04-15  8:06       ` Thomas Lamprecht
  1 sibling, 0 replies; 40+ messages in thread
From: Thomas Lamprecht @ 2026-04-15  8:06 UTC (permalink / raw)
  To: Christian Ebner, Michael Köppl, pbs-devel

On 15/04/2026 08:47, Christian Ebner wrote:
>>>
>>> +    // lock to avoid race with key deletion
>>> +    open_backup_lockfile(&key_lock_path, None, true)?;
>>
>> This needs to be assigned to a variable, no? Otherwise, the lock would
>> be immediately dropped.
> 
> Oh, good catch! Indeed without this the lock would be immediately dropped, will be fixed. Thanks!

Might be good to tag the open_backup_lockfile fn with the #[must_use]
attr:

https://doc.rust-lang.org/reference/attributes/diagnostics.html#the-must_use-attribute

* Re: [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling
  2026-04-15  8:03       ` Daniel Kral
@ 2026-04-15  8:21         ` Christian Ebner
  0 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-15  8:21 UTC (permalink / raw)
  To: Daniel Kral, Michael Köppl, pbs-devel

On 4/15/26 10:02 AM, Daniel Kral wrote:
> On Wed Apr 15, 2026 at 8:48 AM CEST, Christian Ebner wrote:
>> On 4/14/26 4:31 PM, Michael Köppl wrote:
>>> 2 comments inline
>>>
>>> On Tue Apr 14, 2026 at 2:59 PM CEST, Christian Ebner wrote:
>>>> +    // lock to avoid race with key deletion
>>>> +    open_backup_lockfile(&key_lock_path, None, true)?;
>>>
>>> This needs to be assigned to a variable, no? Otherwise, the lock would
>>> be immediately dropped.
>>
>> Oh, good catch! Indeed without this the lock would be immediately
>> dropped, will be fixed. Thanks!
>>
> 
> Would it make sense to add a #[must_use = "..."] attribute [0] to
> open_backup_lockfile() or even the BackupLockGuard in general or would
> it be too strict here?


Yes, adding it directly to `BackupLockGuard` makes the most sense here. 
And with that, only the unused location above is reported.

Will send a dedicated patch for this shortly, thanks!

* Re: [PATCH proxmox-backup v3 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
@ 2026-04-15 14:49   ` Michael Köppl
  2026-04-15 15:25     ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Michael Köppl @ 2026-04-15 14:49 UTC (permalink / raw)
  To: Christian Ebner, pbs-devel

On Tue Apr 14, 2026 at 2:59 PM CEST, Christian Ebner wrote:

[snip]

>      let mut encrypt_using_key = None;
> +    if params.crypt_config.is_some() {
> +        // Check if snapshot is fully encrypted or not encrypted at all:
> +        // refuse progress otherwise to upload partially unencrypted contents or mix encryption key.
> +        let files = source_manifest.files();
> +        let all_unencrypted = files
> +            .iter()
> +            .all(|f| f.chunk_crypt_mode() == CryptMode::None);
> +        let any_unencrypted = files
> +            .iter()
> +            .any(|f| f.chunk_crypt_mode() == CryptMode::None);
> +
> +        if all_unencrypted {
> +            encrypt_using_key = params.crypt_config.clone();
> +            info!("Encrypt and push unencrypted snapshot '{snapshot}'");

nit: Might just be me, but to me this reads like it would encrypt the
snapshot and then just push the unencrypted one. Perhaps something like
"Encrypt and push previously unencrypted snapshot"?

> +        } else if any_unencrypted {
> +            warn!("Encountered partially encrypted snapshot '{snapshot}', refuse to re-encrypt and skip");
> +            return Ok(stats);
> +        } else {
> +            info!("Pushing already encrypted snapshot '{snapshot}' without re-encryption");
> +        }
> +    }

> [snip]

* Re: [PATCH proxmox-backup v3 21/30] ui: expose assigning encryption key to sync jobs
  2026-04-14 12:59 ` [PATCH proxmox-backup v3 21/30] ui: expose assigning encryption key to sync jobs Christian Ebner
@ 2026-04-15 14:49   ` Michael Köppl
  2026-04-15 15:20     ` Christian Ebner
  0 siblings, 1 reply; 40+ messages in thread
From: Michael Köppl @ 2026-04-15 14:49 UTC (permalink / raw)
  To: Christian Ebner, pbs-devel

On Tue Apr 14, 2026 at 2:59 PM CEST, Christian Ebner wrote:

[snip]

> +                column2: [
> +                    {
> +                        xtype: 'box',
> +                        style: {
> +                            'inline-size': '325px',
> +                            'overflow-wrap': 'break-word',
> +                        },
> +                        padding: '5',
> +                        html: gettext(
> +                            'Active encryption key is used to encrypt snapshots which are not encrypted on the source during sync. Already encrypted contents are unaffected, partially encrypted contents skipped if set.',

@Daniel and I discussed this off-list during testing and both found it
a bit difficult to understand at first glance what this means. Perhaps
something like this could improve it, also using active voice:

"When pushing, the system uses the active encryption key to encrypt
unencrypted source snapshots. It leaves existing encrypted content
as-is, and skips partially encrypted content if the skip setting is
turned on."

> +                        ),
> +                        cbind: {
> +                            hidden: '{!syncDirectionPush}',
> +                        },
> +                    },
> +                    {
> +                        xtype: 'box',
> +                        style: {
> +                            'inline-size': '325px',
> +                            'overflow-wrap': 'break-word',
> +                        },
> +                        padding: '5',
> +                        html: gettext(
> +                            'Associated keys store a reference to keys in order to protect them from removal without prior disassociation. On changing the active encryption key, the previous key is added to the associated keys in order to protect from accidental deletion in case it still is required to decrypt contents.',

same as above, perhaps something like:

"To prevent premature removal, associated keys hold a reference to a key
until you explicitly unlink it. When you change your active encryption
key, the system automatically associates the old key to protect it from
accidental deletion, ensuring you can still decrypt older contents."

> +                        ),
> +                        cbind: {
> +                            hidden: '{!syncDirectionPush}',
> +                        },
> +                    },
> +                ],
> +            },
>          ],
>      },
>  });






* Re: [PATCH proxmox-backup v3 21/30] ui: expose assigning encryption key to sync jobs
  2026-04-15 14:49   ` Michael Köppl
@ 2026-04-15 15:20     ` Christian Ebner
  0 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-15 15:20 UTC (permalink / raw)
  To: Michael Köppl, pbs-devel

On 4/15/26 4:47 PM, Michael Köppl wrote:
> On Tue Apr 14, 2026 at 2:59 PM CEST, Christian Ebner wrote:
> 
> [snip]
> 
>> +                column2: [
>> +                    {
>> +                        xtype: 'box',
>> +                        style: {
>> +                            'inline-size': '325px',
>> +                            'overflow-wrap': 'break-word',
>> +                        },
>> +                        padding: '5',
>> +                        html: gettext(
>> +                            'Active encryption key is used to encrypt snapshots which are not encrypted on the source during sync. Already encrypted contents are unaffected, partially encrypted contents skipped if set.',
> 
> @Daniel and I discussed this off-list during testing and both found it
> a bit difficult to understand at first glance what this means. Perhaps
> something like this could improve it, also using active voice:
> 
> "When pushing, the system uses the active encryption key to encrypt
> unencrypted sources snapshots. It leaves existing encrypted content
> as-is, and skips partially encrypted content if the skip setting is
> turned on."
> 
>> +                        ),
>> +                        cbind: {
>> +                            hidden: '{!syncDirectionPush}',
>> +                        },
>> +                    },
>> +                    {
>> +                        xtype: 'box',
>> +                        style: {
>> +                            'inline-size': '325px',
>> +                            'overflow-wrap': 'break-word',
>> +                        },
>> +                        padding: '5',
>> +                        html: gettext(
>> +                            'Associated keys store a reference to keys in order to protect them from removal without prior disassociation. On changing the active encryption key, the previous key is added to the associated keys in order to protect from accidental deletion in case it still is required to decrypt contents.',
> 
> same as above, perhaps something like:
> 
> "To prevent premature removal, associated keys hold a reference to a key
> until you explicitly unlink it. When you change your active encryption
> key, the system automatically associates the old key to protect it from
> accidental deletion, ensuring you can still decrypt older contents."
> 
>> +                        ),
>> +                        cbind: {
>> +                            hidden: '{!syncDirectionPush}',
>> +                        },
>> +                    },
>> +                ],
>> +            },
>>           ],
>>       },
>>   });
> 

Agreed, thanks for the suggestions: will incorporate these!
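
For reference, the key-rotation behavior described in the help texts above
(the previous active key is moved into the associated list so it stays
protected from removal) could be sketched roughly as follows. Note that
`SyncJobKeys`, `set_active` and `can_remove` are illustrative names, not
the actual PBS config types or API:

```rust
// Hypothetical sketch of the rotation/association behavior discussed
// above: changing the active key moves the previous one into the
// associated list, and no referenced key may be removed.
#[derive(Default, Debug)]
struct SyncJobKeys {
    active: Option<String>,  // fingerprint of the active encryption key
    associated: Vec<String>, // keys protected from removal
}

impl SyncJobKeys {
    fn set_active(&mut self, new_key: String) {
        // keep the previous active key around, it may still be
        // required to decrypt contents pushed with it
        if let Some(old) = self.active.take() {
            if old != new_key && !self.associated.contains(&old) {
                self.associated.push(old);
            }
        }
        self.active = Some(new_key);
    }

    fn can_remove(&self, fingerprint: &str) -> bool {
        // removal is only allowed after explicit disassociation
        self.active.as_deref() != Some(fingerprint)
            && !self.associated.iter().any(|f| f == fingerprint)
    }
}

fn main() {
    let mut keys = SyncJobKeys::default();
    keys.set_active("key-a".into());
    keys.set_active("key-b".into());
    assert!(!keys.can_remove("key-a")); // now associated
    assert!(!keys.can_remove("key-b")); // active
    assert!(keys.can_remove("key-c")); // unreferenced key
    println!("ok");
}
```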





* Re: [PATCH proxmox-backup v3 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key
  2026-04-15 14:49   ` Michael Köppl
@ 2026-04-15 15:25     ` Christian Ebner
  0 siblings, 0 replies; 40+ messages in thread
From: Christian Ebner @ 2026-04-15 15:25 UTC (permalink / raw)
  To: Michael Köppl, pbs-devel

On 4/15/26 4:47 PM, Michael Köppl wrote:
> On Tue Apr 14, 2026 at 2:59 PM CEST, Christian Ebner wrote:
> 
> [snip]
> 
>>       let mut encrypt_using_key = None;
>> +    if params.crypt_config.is_some() {
>> +        // Check if snapshot is fully encrypted or not encrypted at all:
>> +        // refuse progress otherwise to upload partially unencrypted contents or mix encryption key.
>> +        let files = source_manifest.files();
>> +        let all_unencrypted = files
>> +            .iter()
>> +            .all(|f| f.chunk_crypt_mode() == CryptMode::None);
>> +        let any_unencrypted = files
>> +            .iter()
>> +            .any(|f| f.chunk_crypt_mode() == CryptMode::None);
>> +
>> +        if all_unencrypted {
>> +            encrypt_using_key = params.crypt_config.clone();
>> +            info!("Encrypt and push unencrypted snapshot '{snapshot}'");
> 
> nit: Might just be me, but to me this reads like it would encrypt the
> snapshot and then just push the unencrypted one. Perhaps something like
> "Encrypt and push previously unencrypted snapshot"?

True, now that you mention it I cannot unsee it. But maybe even better:
"Encrypt source snapshot '{}' on the fly while pushing to remote"?

> 
>> +        } else if any_unencrypted {
>> +            warn!("Encountered partially encrypted snapshot '{snapshot}', refuse to re-encrypt and skip");
>> +            return Ok(stats);
>> +        } else {
>> +            info!("Pushing already encrypted snapshot '{snapshot}' without re-encryption");
>> +        }
>> +    }
> 
> [snip]
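
The classification the quoted hunk performs (fully encrypted vs. fully
unencrypted vs. mixed, based on the per-file chunk crypt mode) can be
sketched in isolation like this. `CryptMode`, `SnapshotCryptState` and
`classify` are simplified stand-ins here, not the actual
pbs-datastore/pbs-api-types definitions:

```rust
// Simplified sketch of the encryption-state check from the patch above:
// all chunk crypt modes `None` -> encrypt on the fly while pushing,
// a mix -> refuse and skip, all encrypted -> push without re-encryption.
#[derive(Clone, Copy, PartialEq, Debug)]
enum CryptMode {
    None,
    Encrypt,
}

#[derive(Debug, PartialEq)]
enum SnapshotCryptState {
    Unencrypted,        // safe to encrypt with the configured key
    PartiallyEncrypted, // refuse: would mix keys / leak plaintext
    FullyEncrypted,     // push as-is, no re-encryption
}

fn classify(modes: &[CryptMode]) -> SnapshotCryptState {
    let all_unencrypted = modes.iter().all(|m| *m == CryptMode::None);
    let any_unencrypted = modes.iter().any(|m| *m == CryptMode::None);
    if all_unencrypted {
        SnapshotCryptState::Unencrypted
    } else if any_unencrypted {
        SnapshotCryptState::PartiallyEncrypted
    } else {
        SnapshotCryptState::FullyEncrypted
    }
}

fn main() {
    use CryptMode::*;
    assert_eq!(classify(&[None, None]), SnapshotCryptState::Unencrypted);
    assert_eq!(classify(&[None, Encrypt]), SnapshotCryptState::PartiallyEncrypted);
    assert_eq!(classify(&[Encrypt]), SnapshotCryptState::FullyEncrypted);
    println!("ok");
}
```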






end of thread, other threads:[~2026-04-15 15:25 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-14 12:58 [PATCH proxmox{,-backup} v3 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
2026-04-14 12:58 ` [PATCH proxmox v3 01/30] pbs-api-types: define en-/decryption key type and schema Christian Ebner
2026-04-14 12:58 ` [PATCH proxmox v3 02/30] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
2026-04-14 12:58 ` [PATCH proxmox-backup v3 03/30] sync: push: use tracing macros instead of log Christian Ebner
2026-04-14 12:58 ` [PATCH proxmox-backup v3 04/30] datastore: blob: implement async reader for data blobs Christian Ebner
2026-04-14 12:58 ` [PATCH proxmox-backup v3 05/30] datastore: manifest: add helper for change detection fingerprint Christian Ebner
2026-04-14 12:58 ` [PATCH proxmox-backup v3 06/30] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 07/30] pbs-config: implement encryption key config handling Christian Ebner
2026-04-14 14:32   ` Michael Köppl
2026-04-15  6:48     ` Christian Ebner
2026-04-15  8:03       ` Daniel Kral
2026-04-15  8:21         ` Christian Ebner
2026-04-15  8:06       ` Thomas Lamprecht
2026-04-14 12:59 ` [PATCH proxmox-backup v3 08/30] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 09/30] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 10/30] sync: add helper to check encryption key acls and load key Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 11/30] api: config: add endpoints for encryption key manipulation Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 12/30] api: config: check sync owner has access to en-/decryption keys Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 13/30] api: config: allow encryption key manipulation for sync job Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 14/30] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 15/30] api: push sync: expose optional encryption key for push sync Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 16/30] sync: push: optionally encrypt data blob on upload Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 17/30] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 18/30] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
2026-04-15 14:49   ` Michael Köppl
2026-04-15 15:25     ` Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 20/30] ui: define and expose encryption key management menu item and windows Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 21/30] ui: expose assigning encryption key to sync jobs Christian Ebner
2026-04-15 14:49   ` Michael Köppl
2026-04-15 15:20     ` Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 22/30] sync: pull: load encryption key if given in job config Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 23/30] sync: expand source chunk reader trait by crypt config Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 24/30] sync: pull: introduce and use decrypt index writer if " Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 25/30] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 26/30] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 27/30] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 28/30] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 29/30] api: encryption keys: allow to toggle the archived state for keys Christian Ebner
2026-04-14 12:59 ` [PATCH proxmox-backup v3 30/30] docs: add section describing server side encryption for sync jobs Christian Ebner

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal