* [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs
@ 2026-04-20 16:15 Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox v4 01/30] pbs-api-types: define en-/decryption key type and schema Christian Ebner
` (29 more replies)
0 siblings, 30 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
This patch series implements support for encrypting backup snapshots
when pushing from a source PBS instance to an untrusted remote target
PBS instance. Further, it adds support for decrypting snapshots that
are encrypted on the remote source PBS when pulling the contents to
the local target PBS instance. This allows performing full server side
encryption/decryption when syncing with a less trusted remote PBS.
In order to encrypt/decrypt snapshots, a new encryption key entity
is introduced, created as a global instance on the PBS and managed
by its own dedicated config. Keys with their secret are stored
in dedicated files, so they only need to be loaded when accessing the
key, not when listing the configuration. Sync encryption keys can be
archived, rendering them no longer usable to encrypt new contents
while still allowing decryption. In order to remove a sync encryption
key, it must be archived first and no longer be associated with any
sync job config, a constraint added as a safety net to avoid
accidental key removal.
The same centralized key management is also used for tape encryption
keys, so they are on par UI-wise; the configs however remain separated
for the time being.
Sync jobs in push direction are extended to receive an additional
active encryption key parameter, which is used to encrypt
unencrypted snapshots when pushing to the remote target.
A list of associated keys is kept, adding the previous encryption key
of the push sync job when the key is rotated.
For pull sync jobs, the active encryption key parameter is not
considered; instead all associated keys are loaded and used to
decrypt snapshots whose key fingerprint matches the one found in the
source manifest. In order to encrypt/decrypt the contents, chunks,
index files, blobs and the manifest are additionally processed and
rewritten when required.
Changes since version 3 (thanks a lot to @Michael and @Thomas for review):
patch 07:
- fix missing lock variable binding; the suggested `must_use` attribute has
  already been sent as a separate patch and applied
- Drop key archive helper, it cannot be reused with the new check for the key
  being in-use
- implement save_config instead, to be used when toggling key archive state
patch 10:
- adapted outdated/incorrect docstring, indirectly fixing typo as well
- drop unused Option wrapper for check_privs_and_load_key_config(), as it always
returns a key or fails anyways
patch 11:
- Adapt archive key endpoint, as it must check whether the key is in-use as
  active encryption key
patch 13:
- Add missing permission checks on associated key updates as well as owner
updates
- Drop needless owner mapping when checking active encryption key updates;
  data.owner already contains the updated owner or the pre-configured one (if
  unchanged)
- Extend commit message to reflect these changes as well
patch 15:
- drop unused Option wrapper handling for check_privs_and_load_key_config()
patch 16:
- keep encrypt_using_key non-mutable until actually required
patch 18:
- never load known chunks if the active encryption key is set but the local
  source is pre-encrypted, as the chunks cannot be reused
patch 19:
- adapt log message for pre-encrypted snapshots not being re-encrypted
- adapt log message for snapshots being encrypted when pushed
- only push target manifest signature in case of snapshots not being re-encrypted
- make encrypt_using_key mutable only in this patch
patch 21:
- adapt infotext explaining active encryption key and associated keys for push
sync jobs
patch 22:
- drop unused Option wrapper handling for check_privs_and_load_key_config()
- make optional crypt_config non-mutable until actually used
patch 24:
- silence clippy unused warnings by prefixing variable binding with `_`
- add comment explaining why the rewritten index replaces the temp file instead
of directly writing the final index file
- drop the DecryptedIndexWriter::None enum variant, wrap into Option instead
patch 25:
- set decrypted digest also when entry already contains digest, but the mapping
has not been set yet
- explicitly mention this behaviour in the docstring
- refactor return values for check_reusable to be a dedicated type
- make new_manifest non-mutable until actually used
patch 26:
- use create(true).truncate(true) instead of create_new(true) to be resilient
against leftover stale index files
patch 27:
- adapt to csum binding being silenced in patch 24.
- adapt to DecryptedIndexWriter::None enum variant being dropped in favor of
Option wrapping
- adapt to new structured return type for check_reusable()
patch 28:
- Remove outdated change-detection-fingerprint when decrypting contents
- fsync new manifest for improved crash safety
patch 29:
- rename from archive_key() to toggle_archive_key() to make the intent clear and
adapt docstring accordingly
patch 30:
- Mention that sync jobs with a set active encryption key will not skip
  pre-encrypted snapshots, but rather push them as-is.
- Mention that sync jobs configured with encrypted-only flag set will never
use the active encryption key since unencrypted ones are then ignored.
- Fix `rotation` typo
- Explicitly spell out that key0 is currently the active encryption key and
  therefore will be added as associated key on rotation
Changes since version 2 (thanks a lot to @Thomas for review):
- Add dedicated lock file for per-key file locks when creating/deleting sync
keys.
- Add initial documentation for server side encryption/decryption during sync
  jobs.
- Adapt key archive endpoint to be able to toggle, kept as dedicated patch as
unsure about impl details.
- Detect unusable keys early when provided on key creation as an upload via the API.
- List all associated sync jobs when checking with encryption_key_in_use().
- Fix check for key access when setting active encryption key. It must fail for
archived keys.
- Add flag to check for not allowing to set archived key as active encryption
key.
- Drop associated keys also on active encryption key update, re-add the rotated
  one afterwards if required.
- Refactor check for un-/partially-/fully-encrypted backup snapshots.
- Include snapshot name in log message for skipped snapshots.
- Add missing return on error when requesting key archiving for tape.
- Handle errors for api calls to load tape and sync keys in ui by wrapping into
try-catch-block.
- Also drop verify state on pull, do not rely on inherent check to better
protect against bugs and corruptions.
- Switch field label for associated keys based on sync direction.
- Add comment field explaining active encryption key and associated keys and
their relation on key rotation.
- Also store key id together with key config when loading associated keys, so it
can be logged later when key fingerprint matched.
- Squash new manifest registration into patch 26, keeping logic together
- Fix bogus check: must use change-detection-fingerprint, not key-fingerprint,
  to detect changes on an already existing manifest.
- Convert unprotected manifest part to json value to drop key-fingerprint.
- Log id of key used for decryption, not just fingerprint
- Switch all remaining `log` macros for sync to use `tracing`.
- Fix typos in commit message for async DataBlob reader patch.
- Double column width for `hint` field.
- Fix icons for type based menu buttons and type column
- Drop dead code `crypt-key-fp`.
- Fix error messages by s/seems/seem/ and wrap in gettext()
- Document config lock requirements for delete_key().
- Drop outdated comment on key file lock drop, it's a dedicated file now.
Changes since version 1 (thanks a lot to @all reviewers/testers!):
- Implement encryption key archiving and key rotation logic, allowing
to specify active encryption key for push syncs, and a list of
previously used ones. For pull multiple decryption keys can now be
configured.
- Rework the UI to add support for key archiving, manage key association
in sync jobs and to also manage tape encryption keys in the same
centralized grid.
- Check for key still being in-use by sync job before removing it
- Fully encrypted snapshots are now pushed as-is if an encryption key
is configured.
- Fixed inefficient resync of pre-existing target snapshot on pull,
detect file changes in manifest via fingerprinting.
- Avoid overwriting pre-existing decrypted local snapshot by encrypted
snapshot when no (or mismatching) decryption key is passed for pull
job.
- Rename EncryptionKey to CryptKey, as the key is also used for
  decryption.
- Remove key from config before removing keyfile
- Add locking mechanism to avoid races in key config writing
- Fix gathering of known chunks from previous snapshot in push for
dynamic index files
- Detect config changes by checking for digest mismatch
- Guard key loading by PRIV_SYS_MODIFY
- Use tracing::info! instead of log::info!
- Fix clearing of encryption/decryption key via sync job config window
- Fix creating new sync job without crypt key configured
- Check key exists and can be accessed when set in sync job
- Fix min key id length for key edit window
- Fixed drag-and-drop for key file upload
- Fix outdated comments, typos, etc.
Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=7251
proxmox:
Christian Ebner (2):
pbs-api-types: define en-/decryption key type and schema
pbs-api-types: sync job: add optional cryptographic keys to config
pbs-api-types/src/jobs.rs | 21 ++++++++++++++--
pbs-api-types/src/key_derivation.rs | 38 ++++++++++++++++++++++++++---
pbs-api-types/src/lib.rs | 2 +-
3 files changed, 55 insertions(+), 6 deletions(-)
proxmox-backup:
Christian Ebner (28):
sync: push: use tracing macros instead of log
datastore: blob: implement async reader for data blobs
datastore: manifest: add helper for change detection fingerprint
pbs-key-config: introduce store_with() for KeyConfig
pbs-config: implement encryption key config handling
pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
ui: expose 'encryption-keys' as acl subpath for 'system'
sync: add helper to check encryption key acls and load key
api: config: add endpoints for encryption key manipulation
api: config: check sync owner has access to en-/decryption keys
api: config: allow encryption key manipulation for sync job
sync: push: rewrite manifest instead of pushing pre-existing one
api: push sync: expose optional encryption key for push sync
sync: push: optionally encrypt data blob on upload
sync: push: optionally encrypt client log on upload if key is given
sync: push: add helper for loading known chunks from previous snapshot
fix #7251: api: push: encrypt snapshots using configured encryption
key
ui: define and expose encryption key management menu item and windows
ui: expose assigning encryption key to sync jobs
sync: pull: load encryption key if given in job config
sync: expand source chunk reader trait by crypt config
sync: pull: introduce and use decrypt index writer if crypt config
sync: pull: extend encountered chunk by optional decrypted digest
sync: pull: decrypt blob files on pull if encryption key is configured
sync: pull: decrypt chunks and rewrite index file for matching key
sync: pull: decrypt snapshots with matching encryption key fingerprint
api: encryption keys: allow to toggle the archived state for keys
docs: add section describing server side encryption for sync jobs
docs/managing-remotes.rst | 54 ++++
pbs-config/Cargo.toml | 2 +
pbs-config/src/acl.rs | 4 +-
pbs-config/src/encryption_keys.rs | 201 ++++++++++++
pbs-config/src/lib.rs | 1 +
pbs-datastore/src/data_blob.rs | 18 +-
pbs-datastore/src/manifest.rs | 20 ++
pbs-key-config/src/lib.rs | 36 ++-
src/api2/config/encryption_keys.rs | 229 ++++++++++++++
src/api2/config/mod.rs | 2 +
src/api2/config/sync.rs | 113 ++++++-
src/api2/pull.rs | 15 +-
src/api2/push.rs | 8 +-
src/server/pull.rs | 484 +++++++++++++++++++++++++----
src/server/push.rs | 315 ++++++++++++++-----
src/server/sync.rs | 57 +++-
www/Makefile | 3 +
www/NavigationTree.js | 6 +
www/Utils.js | 1 +
www/config/EncryptionKeysView.js | 346 +++++++++++++++++++++
www/form/EncryptionKeySelector.js | 96 ++++++
www/form/PermissionPathSelector.js | 1 +
www/window/EncryptionKeysEdit.js | 382 +++++++++++++++++++++++
www/window/SyncJobEdit.js | 62 ++++
24 files changed, 2295 insertions(+), 161 deletions(-)
create mode 100644 pbs-config/src/encryption_keys.rs
create mode 100644 src/api2/config/encryption_keys.rs
create mode 100644 www/config/EncryptionKeysView.js
create mode 100644 www/form/EncryptionKeySelector.js
create mode 100644 www/window/EncryptionKeysEdit.js
Summary over all repositories:
27 files changed, 2350 insertions(+), 167 deletions(-)
--
Generated by murpp 0.11.0
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox v4 01/30] pbs-api-types: define en-/decryption key type and schema
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox v4 02/30] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
` (28 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Will be used to store and uniquely identify en-/decryption keys in
their respective config. Contains the KeyInfo extended by the unique
key identifier and an optional `archived-at` timestamp for keys which
are marked to no longer be used for encrypting new content, only
decrypting.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-api-types/src/key_derivation.rs | 38 ++++++++++++++++++++++++++---
pbs-api-types/src/lib.rs | 2 +-
2 files changed, 36 insertions(+), 4 deletions(-)
diff --git a/pbs-api-types/src/key_derivation.rs b/pbs-api-types/src/key_derivation.rs
index 57ae353a..0815a1f4 100644
--- a/pbs-api-types/src/key_derivation.rs
+++ b/pbs-api-types/src/key_derivation.rs
@@ -3,12 +3,13 @@ use serde::{Deserialize, Serialize};
#[cfg(feature = "enum-fallback")]
use proxmox_fixed_string::FixedString;
-use proxmox_schema::api;
+use proxmox_schema::api_types::SAFE_ID_FORMAT;
+use proxmox_schema::{api, Schema, StringSchema, Updater};
use crate::CERT_FINGERPRINT_SHA256_SCHEMA;
#[api(default: "scrypt")]
-#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
+#[derive(Clone, Copy, Debug, Deserialize, Serialize, PartialEq)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
@@ -41,7 +42,7 @@ impl Default for Kdf {
},
},
)]
-#[derive(Deserialize, Serialize)]
+#[derive(Clone, Default, Deserialize, Serialize, Updater, PartialEq)]
/// Encryption Key Information
pub struct KeyInfo {
/// Path to key (if stored in a file)
@@ -59,3 +60,34 @@ pub struct KeyInfo {
#[serde(skip_serializing_if = "Option::is_none")]
pub hint: Option<String>,
}
+
+/// ID to uniquely identify an encryption/decryption key.
+pub const CRYPT_KEY_ID_SCHEMA: Schema =
+ StringSchema::new("ID to uniquely identify encryption/decription key")
+ .format(&SAFE_ID_FORMAT)
+ .min_length(3)
+ .max_length(32)
+ .schema();
+
+#[api(
+ properties: {
+ id: {
+ schema: CRYPT_KEY_ID_SCHEMA,
+ },
+ info: {
+ type: KeyInfo,
+ },
+ },
+)]
+#[derive(Clone, Default, Deserialize, Serialize, Updater, PartialEq)]
+#[serde(rename_all = "kebab-case")]
+/// Encryption/Decryption Key Info with ID.
+pub struct CryptKey {
+ #[updater(skip)]
+ pub id: String,
+ #[serde(flatten)]
+ pub info: KeyInfo,
+ /// Timestamp when key was archived (not set if key is active).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub archived_at: Option<i64>,
+}
diff --git a/pbs-api-types/src/lib.rs b/pbs-api-types/src/lib.rs
index 54547291..2f5dfea6 100644
--- a/pbs-api-types/src/lib.rs
+++ b/pbs-api-types/src/lib.rs
@@ -104,7 +104,7 @@ mod jobs;
pub use jobs::*;
mod key_derivation;
-pub use key_derivation::{Kdf, KeyInfo};
+pub use key_derivation::{CryptKey, Kdf, KeyInfo, CRYPT_KEY_ID_SCHEMA};
mod maintenance;
pub use maintenance::*;
--
2.47.3
* [PATCH proxmox v4 02/30] pbs-api-types: sync job: add optional cryptographic keys to config
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox v4 01/30] pbs-api-types: define en-/decryption key type and schema Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 03/30] sync: push: use tracing macros instead of log Christian Ebner
` (27 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Allows configuring the active encryption key, used to encrypt
backups when performing sync jobs in push direction.
Further, allows associating keys with a sync job, used as decryption
keys for pull sync jobs for snapshots with matching key fingerprint.
For push sync jobs the associated keys are used to keep track of
previously in-use encryption keys, assuring that they are only
removable if the user actually removed the association. This serves
as a safety net against involuntary key deletion.
The field name `associated-key` was chosen since the sync job config
stores the list items on separate lines, each prefixed with the
property name, so the resulting config will be structured as shown:
```
sync: encrypt-sync
active-encryption-key key2
associated-key key1
associated-key key0
...
```
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-api-types/src/jobs.rs | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/pbs-api-types/src/jobs.rs b/pbs-api-types/src/jobs.rs
index 7e6dfb94..59f2820f 100644
--- a/pbs-api-types/src/jobs.rs
+++ b/pbs-api-types/src/jobs.rs
@@ -13,8 +13,9 @@ use proxmox_schema::*;
use crate::{
Authid, BackupNamespace, BackupType, NotificationMode, RateLimitConfig, Userid,
BACKUP_GROUP_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_NS_RE, DATASTORE_SCHEMA,
- DRIVE_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PROXMOX_SAFE_ID_FORMAT,
- PROXMOX_SAFE_ID_REGEX_STR, REMOTE_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
+ DRIVE_NAME_SCHEMA, CRYPT_KEY_ID_SCHEMA, MEDIA_POOL_NAME_SCHEMA,
+ NS_MAX_DEPTH_REDUCED_SCHEMA, PROXMOX_SAFE_ID_FORMAT, PROXMOX_SAFE_ID_REGEX_STR,
+ REMOTE_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
};
const_regex! {
@@ -664,6 +665,18 @@ pub const UNMOUNT_ON_SYNC_DONE_SCHEMA: Schema =
type: SyncDirection,
optional: true,
},
+ "active-encryption-key": {
+ schema: CRYPT_KEY_ID_SCHEMA,
+ optional: true,
+ },
+ "associated-key": {
+ type: Array,
+ description: "List of cryptographic keys associated with sync job.",
+ items: {
+ schema: CRYPT_KEY_ID_SCHEMA,
+ },
+ optional: true,
+ },
}
)]
#[derive(Serialize, Deserialize, Clone, Updater, PartialEq)]
@@ -709,6 +722,10 @@ pub struct SyncJobConfig {
pub unmount_on_done: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub sync_direction: Option<SyncDirection>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub active_encryption_key: Option<String>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub associated_key: Option<Vec<String>>,
}
impl SyncJobConfig {
--
2.47.3
* [PATCH proxmox-backup v4 03/30] sync: push: use tracing macros instead of log
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox v4 01/30] pbs-api-types: define en-/decryption key type and schema Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox v4 02/30] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 04/30] datastore: blob: implement async reader for data blobs Christian Ebner
` (26 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
In order to keep logging consistent, drop all occurrences of log::info!
and log::warn! and use the tracing macros instead.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/push.rs | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/server/push.rs b/src/server/push.rs
index e69973b4f..21f326aba 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -828,8 +828,8 @@ pub(crate) async fn push_snapshot(
Ok((manifest, _raw_size)) => manifest,
Err(err) => {
// No manifest in snapshot or failed to read, warn and skip
- log::warn!("Encountered errors: {err:#}");
- log::warn!("Failed to load manifest for '{snapshot}'!");
+ warn!("Encountered errors: {err:#}");
+ warn!("Failed to load manifest for '{snapshot}'!");
return Ok(stats);
}
};
@@ -863,7 +863,7 @@ pub(crate) async fn push_snapshot(
if fetch_previous_manifest {
match backup_writer.download_previous_manifest().await {
Ok(manifest) => previous_manifest = Some(Arc::new(manifest)),
- Err(err) => log::info!("Could not download previous manifest - {err}"),
+ Err(err) => info!("Could not download previous manifest - {err}"),
}
};
--
2.47.3
* [PATCH proxmox-backup v4 04/30] datastore: blob: implement async reader for data blobs
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (2 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 03/30] sync: push: use tracing macros instead of log Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 05/30] datastore: manifest: add helper for change detection fingerprint Christian Ebner
` (25 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
So it can be used to load the blob file when server side encryption
is required during push sync jobs, which run in an async context.
Factor out the DataBlob and CRC check, which is identical for the sync
and async reader implementations.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/data_blob.rs | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/pbs-datastore/src/data_blob.rs b/pbs-datastore/src/data_blob.rs
index 0c05c5a40..b434243c0 100644
--- a/pbs-datastore/src/data_blob.rs
+++ b/pbs-datastore/src/data_blob.rs
@@ -2,6 +2,7 @@ use std::io::Write;
use anyhow::{bail, Error};
use openssl::symm::{decrypt_aead, Mode};
+use tokio::io::{AsyncRead, AsyncReadExt};
use proxmox_io::{ReadExt, WriteExt};
@@ -238,15 +239,26 @@ impl DataBlob {
}
}
- /// Load blob from ``reader``, verify CRC
+ /// Load data blob via given sync ``reader`` and verify its CRC
pub fn load_from_reader(reader: &mut dyn std::io::Read) -> Result<Self, Error> {
let mut data = Vec::with_capacity(1024 * 1024);
reader.read_to_end(&mut data)?;
+ Self::from_raw_with_crc_check(data)
+ }
- let blob = Self::from_raw(data)?;
+ /// Load data blob via given async ``reader`` and verify its CRC
+ pub async fn load_from_async_reader(
+ reader: &mut (dyn AsyncRead + Unpin + Send),
+ ) -> Result<Self, Error> {
+ let mut data = Vec::with_capacity(1024 * 1024);
+ reader.read_to_end(&mut data).await?;
+ Self::from_raw_with_crc_check(data)
+ }
+ /// Generates a data blob from raw input data and checks for matching CRC in header
+ fn from_raw_with_crc_check(raw_data: Vec<u8>) -> Result<Self, Error> {
+ let blob = Self::from_raw(raw_data)?;
blob.verify_crc()?;
-
Ok(blob)
}
--
2.47.3
* [PATCH proxmox-backup v4 05/30] datastore: manifest: add helper for change detection fingerprint
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (3 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 04/30] datastore: blob: implement async reader for data blobs Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 06/30] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
` (24 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Generates a checksum over the file names and checksums of the manifest,
to be stored in the encrypted snapshot's manifest when doing server
side encryption during push sync. The fingerprint will then be used on
pull to detect that a manifest's file contents did not change and can
therefore be skipped (no resync required). The usual byte-wise
comparison is not feasible for this.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-datastore/src/manifest.rs | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/pbs-datastore/src/manifest.rs b/pbs-datastore/src/manifest.rs
index fb734a674..5f7d3efcc 100644
--- a/pbs-datastore/src/manifest.rs
+++ b/pbs-datastore/src/manifest.rs
@@ -236,6 +236,26 @@ impl BackupManifest {
}
Ok(Some(serde_json::from_value::<SnapshotVerifyState>(verify)?))
}
+
+ /// Set the fingerprint used to detect changes for encrypted -> decrypted syncs
+ pub fn set_change_detection_fingerprint(
+ &mut self,
+ fingerprint: &[u8; 32],
+ ) -> Result<(), Error> {
+ let fp_str = hex::encode(fingerprint);
+ self.unprotected["change-detection-fingerprint"] = serde_json::to_value(fp_str)?;
+ Ok(())
+ }
+
+ /// Generate the fingerprint used to detect changes for encrypted -> decrypted syncs
+ pub fn change_detection_fingerprint(&self) -> Result<[u8; 32], Error> {
+ let mut csum = openssl::sha::Sha256::new();
+ for file_info in self.files() {
+ csum.update(file_info.filename.as_bytes());
+ csum.update(&file_info.csum);
+ }
+ Ok(csum.finish())
+ }
}
impl TryFrom<super::DataBlob> for BackupManifest {
--
2.47.3
* [PATCH proxmox-backup v4 06/30] pbs-key-config: introduce store_with() for KeyConfig
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (4 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 05/30] datastore: manifest: add helper for change detection fingerprint Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 07/30] pbs-config: implement encryption key config handling Christian Ebner
` (23 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Extends the behavior of KeyConfig::store() to allow optionally
specifying the mode and ownership of the file the key is stored with.
Defaults to the same behavior as KeyConfig::store() if none of the
optional parameters are set; therefore the same implementation is
reused for it as well.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-key-config/src/lib.rs | 36 +++++++++++++++++++++++++++++++-----
1 file changed, 31 insertions(+), 5 deletions(-)
diff --git a/pbs-key-config/src/lib.rs b/pbs-key-config/src/lib.rs
index 123fa5966..258fb197b 100644
--- a/pbs-key-config/src/lib.rs
+++ b/pbs-key-config/src/lib.rs
@@ -3,6 +3,8 @@ use std::os::fd::AsRawFd;
use std::path::Path;
use anyhow::{bail, format_err, Context, Error};
+use nix::sys::stat::Mode;
+use nix::unistd::{Gid, Uid};
use serde::{Deserialize, Serialize};
use proxmox_lang::try_block;
@@ -237,25 +239,49 @@ impl KeyConfig {
/// Store a KeyConfig to path
pub fn store<P: AsRef<Path>>(&self, path: P, replace: bool) -> Result<(), Error> {
+ self.store_with(path, replace, None, None, None)
+ }
+
+ /// Store a KeyConfig to path with given ownership and mode.
+ /// Requires the process to run with permissions to do so.
+ pub fn store_with<P: AsRef<Path>>(
+ &self,
+ path: P,
+ replace: bool,
+ mode: Option<Mode>,
+ owner: Option<Uid>,
+ group: Option<Gid>,
+ ) -> Result<(), Error> {
let path: &Path = path.as_ref();
let data = serde_json::to_string(self)?;
try_block!({
if replace {
- let mode = nix::sys::stat::Mode::S_IRUSR | nix::sys::stat::Mode::S_IWUSR;
- replace_file(path, data.as_bytes(), CreateOptions::new().perm(mode), true)?;
+ let mode =
+ mode.unwrap_or(nix::sys::stat::Mode::S_IRUSR | nix::sys::stat::Mode::S_IWUSR);
+ let mut create_options = CreateOptions::new().perm(mode);
+ if let Some(owner) = owner {
+ create_options = create_options.owner(owner);
+ }
+ if let Some(group) = group {
+ create_options = create_options.group(group);
+ }
+ replace_file(path, data.as_bytes(), create_options, true)?;
} else {
use std::os::unix::fs::OpenOptionsExt;
-
+ let mode = mode.map(|m| m.bits()).unwrap_or(0o0600);
let mut file = std::fs::OpenOptions::new()
.write(true)
- .mode(0o0600)
+ .mode(mode)
.create_new(true)
.open(path)?;
file.write_all(data.as_bytes())?;
- nix::unistd::fsync(file.as_raw_fd())?;
+
+ let fd = file.as_raw_fd();
+ nix::unistd::fchown(fd, owner, group)?;
+ nix::unistd::fsync(fd)?;
}
Ok(())
--
2.47.3
* [PATCH proxmox-backup v4 07/30] pbs-config: implement encryption key config handling
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (5 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 06/30] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 08/30] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
` (22 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Implements the handling for encryption key configuration and files.
Individual encryption keys with the secret key material are stored in
individual files, while the config stores duplicated key info, so the
actual key only needs to be loaded when accessed, not for listing.
The key's fingerprint is compared to the one stored in the config
when loading the key, in order to detect possible mismatches.
Races between key creation and deletion are avoided by locking both
the config and the individual key file.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-config/Cargo.toml | 2 +
pbs-config/src/encryption_keys.rs | 201 ++++++++++++++++++++++++++++++
pbs-config/src/lib.rs | 1 +
3 files changed, 204 insertions(+)
create mode 100644 pbs-config/src/encryption_keys.rs
diff --git a/pbs-config/Cargo.toml b/pbs-config/Cargo.toml
index ea2496843..04687cb59 100644
--- a/pbs-config/Cargo.toml
+++ b/pbs-config/Cargo.toml
@@ -20,6 +20,7 @@ serde.workspace = true
serde_json.workspace = true
proxmox-http.workspace = true
+proxmox-lang.workspace = true
proxmox-notify.workspace = true
proxmox-router = { workspace = true, default-features = false }
proxmox-s3-client.workspace = true
@@ -32,3 +33,4 @@ proxmox-uuid.workspace = true
pbs-api-types.workspace = true
pbs-buildcfg.workspace = true
+pbs-key-config.workspace = true
diff --git a/pbs-config/src/encryption_keys.rs b/pbs-config/src/encryption_keys.rs
new file mode 100644
index 000000000..d3743daa6
--- /dev/null
+++ b/pbs-config/src/encryption_keys.rs
@@ -0,0 +1,201 @@
+use std::collections::HashMap;
+use std::sync::LazyLock;
+
+use anyhow::{bail, format_err, Error};
+use nix::{sys::stat::Mode, unistd::Uid};
+use serde::Deserialize;
+
+use pbs_api_types::{CryptKey, KeyInfo, CRYPT_KEY_ID_SCHEMA};
+use proxmox_schema::ApiType;
+use proxmox_section_config::{SectionConfig, SectionConfigData, SectionConfigPlugin};
+use proxmox_sys::fs::CreateOptions;
+
+use pbs_buildcfg::configdir;
+use pbs_key_config::KeyConfig;
+
+use crate::{open_backup_lockfile, replace_backup_config, BackupLockGuard};
+
+pub static CONFIG: LazyLock<SectionConfig> = LazyLock::new(init);
+
+fn init() -> SectionConfig {
+ let obj_schema = CryptKey::API_SCHEMA.unwrap_all_of_schema();
+ let plugin = SectionConfigPlugin::new(
+ ENCRYPTION_KEYS_CFG_TYPE_ID.to_string(),
+ Some(String::from("id")),
+ obj_schema,
+ );
+ let mut config = SectionConfig::new(&CRYPT_KEY_ID_SCHEMA);
+ config.register_plugin(plugin);
+
+ config
+}
+
+/// Configuration file location for encryption keys.
+pub const ENCRYPTION_KEYS_CFG_FILENAME: &str = configdir!("/encryption-keys.cfg");
+/// Configuration lock file used to prevent concurrent configuration update operations.
+pub const ENCRYPTION_KEYS_CFG_LOCKFILE: &str = configdir!("/.encryption-keys.lck");
+/// Directory where to store the actual encryption keys
+pub const ENCRYPTION_KEYS_DIR: &str = configdir!("/encryption-keys/");
+
+/// Config type for encryption key config entries
+pub const ENCRYPTION_KEYS_CFG_TYPE_ID: &str = "sync-key";
+
+/// Get exclusive lock for encryption key configuration update.
+pub fn lock_config() -> Result<BackupLockGuard, Error> {
+ open_backup_lockfile(ENCRYPTION_KEYS_CFG_LOCKFILE, None, true)
+}
+
+/// Load encryption key configuration from file.
+pub fn config() -> Result<(SectionConfigData, [u8; 32]), Error> {
+ let content = proxmox_sys::fs::file_read_optional_string(ENCRYPTION_KEYS_CFG_FILENAME)?;
+ let content = content.unwrap_or_default();
+ let digest = openssl::sha::sha256(content.as_bytes());
+ let data = CONFIG.parse(ENCRYPTION_KEYS_CFG_FILENAME, &content)?;
+ Ok((data, digest))
+}
+
+/// Save given key configuration to file.
+pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
+ let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, config)?;
+ replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())
+}
+
+/// Shell completion helper to complete encryption key ids as found in the config.
+pub fn complete_encryption_key_id(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+ match config() {
+ Ok((data, _digest)) => data.sections.keys().map(|id| id.to_string()).collect(),
+ Err(_) => Vec::new(),
+ }
+}
+
+/// Load the encryption key from file.
+///
+/// Looks up the key in the config and tries to load it from the referenced file.
+/// Upon loading, the fingerprint from the config is compared to the one stored in
+/// the key file. Loading archived keys fails if `fail_on_archived` is set.
+pub fn load_key_config(id: &str, fail_on_archived: bool) -> Result<KeyConfig, Error> {
+ let _lock = lock_config()?;
+ let (config, _digest) = config()?;
+
+ let key: CryptKey = config.lookup(ENCRYPTION_KEYS_CFG_TYPE_ID, id)?;
+ if fail_on_archived && key.archived_at.is_some() {
+ bail!("cannot load archived encryption key {id}");
+ }
+ let key_config = match &key.info.path {
+ Some(path) => KeyConfig::load(path)?,
+ None => bail!("missing path for encryption key {id}"),
+ };
+
+ let stored_key_info = KeyInfo::from(&key_config);
+
+ if key.info.fingerprint != stored_key_info.fingerprint {
+ bail!("loaded key does not match the config for key {id}");
+ }
+
+ Ok(key_config)
+}
+
+/// Store the encryption key to file.
+///
+/// Inserts the key in the config and stores it to the given file.
+pub fn store_key(id: &str, key: &KeyConfig) -> Result<(), Error> {
+ let _lock = lock_config()?;
+ let (mut config, _digest) = config()?;
+
+ if config.sections.contains_key(id) {
+ bail!("key with id '{id}' already exists.");
+ }
+
+ let backup_user = crate::backup_user()?;
+ let dir_options = CreateOptions::new()
+ .perm(Mode::from_bits_truncate(0o0750))
+ .owner(Uid::from_raw(0))
+ .group(backup_user.gid);
+
+ proxmox_sys::fs::ensure_dir_exists(ENCRYPTION_KEYS_DIR, &dir_options, true)?;
+
+ let key_path = format!("{ENCRYPTION_KEYS_DIR}{id}.enc");
+ let key_lock_path = format!("{key_path}.lck");
+
+ // lock to avoid race with key deletion
+ let _lock = open_backup_lockfile(&key_lock_path, None, true)?;
+
+ // assert the key file is empty or does not exist
+ match std::fs::metadata(&key_path) {
+ Ok(metadata) => {
+ if metadata.len() > 0 {
+ bail!("detected pre-existing key file, refusing to overwrite.");
+ }
+ }
+ Err(err) if err.kind() == std::io::ErrorKind::NotFound => (),
+ Err(err) => return Err(err.into()),
+ }
+
+ let keyfile_mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
+
+ key.store_with(
+ &key_path,
+ true,
+ Some(keyfile_mode),
+ Some(Uid::from_raw(0)),
+ Some(backup_user.gid),
+ )?;
+
+ let mut info = KeyInfo::from(key);
+ info.path = Some(key_path.clone());
+
+ let crypt_key = CryptKey {
+ id: id.to_string(),
+ info,
+ archived_at: None,
+ };
+
+ let result = proxmox_lang::try_block!({
+ config.set_data(id, ENCRYPTION_KEYS_CFG_TYPE_ID, crypt_key)?;
+ save_config(&config)
+ });
+
+ if result.is_err() {
+ let _ = std::fs::remove_file(key_path);
+ }
+
+ result
+}
+
+/// Delete the encryption key from config.
+///
+/// Returns true if the key was removed successfully, false if there was no matching key.
+/// Safety: caller must acquire and hold config lock.
+pub fn delete_key(id: &str, mut config: SectionConfigData) -> Result<bool, Error> {
+ if let Some((_, key)) = config.sections.remove(id) {
+ let key =
+ CryptKey::deserialize(key).map_err(|_err| format_err!("failed to parse key config"))?;
+
+ if key.archived_at.is_none() {
+ bail!("key still active, deleting is only possible for archived keys");
+ }
+
+ if let Some(key_path) = &key.info.path {
+ let key_lock_path = format!("{key_path}.lck");
+ // Avoid races with key insertion
+ let _lock = open_backup_lockfile(key_lock_path, None, true)?;
+
+ let key_config = KeyConfig::load(key_path)?;
+ let stored_key_info = KeyInfo::from(&key_config);
+ // Check the key is the expected one
+ if key.info.fingerprint != stored_key_info.fingerprint {
+ bail!("unexpected key detected in key file, refuse to delete");
+ }
+
+ let raw = CONFIG.write(ENCRYPTION_KEYS_CFG_FILENAME, &config)?;
+ // drops config lock
+ replace_backup_config(ENCRYPTION_KEYS_CFG_FILENAME, raw.as_bytes())?;
+
+ std::fs::remove_file(key_path)?;
+ return Ok(true);
+ }
+
+ bail!("missing key file path for key '{id}'");
+ }
+ Ok(false)
+}
diff --git a/pbs-config/src/lib.rs b/pbs-config/src/lib.rs
index 88a31904f..d1bc9f862 100644
--- a/pbs-config/src/lib.rs
+++ b/pbs-config/src/lib.rs
@@ -4,6 +4,7 @@ pub use cached_user_info::CachedUserInfo;
pub mod datastore;
pub mod domains;
pub mod drive;
+pub mod encryption_keys;
pub mod key_value;
pub mod media_pool;
pub mod metrics;
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox-backup v4 08/30] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (6 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 07/30] pbs-config: implement encryption key config handling Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 09/30] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
` (21 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Adds a dedicated subpath for permission checks on encryption key
configurations in the ACL path component check. This allows setting
permissions on either the whole encryption keys config or on
individual encryption key ids.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
pbs-config/src/acl.rs | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/pbs-config/src/acl.rs b/pbs-config/src/acl.rs
index 2abbf5802..d18a346ff 100644
--- a/pbs-config/src/acl.rs
+++ b/pbs-config/src/acl.rs
@@ -127,8 +127,8 @@ pub fn check_acl_path(path: &str) -> Result<(), Error> {
_ => {}
}
}
- "s3-endpoint" => {
- // /system/s3-endpoint/{id}
+ "s3-endpoint" | "encryption-keys" => {
+ // /system/<matched-component>/{id}
if components_len <= 3 {
return Ok(());
}
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox-backup v4 09/30] ui: expose 'encryption-keys' as acl subpath for 'system'
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (7 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 08/30] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 10/30] sync: add helper to check encryption key acls and load key Christian Ebner
` (20 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Allows selecting the 'encryption-keys' subpath to restrict
permissions to either the full encryption keys configuration or to
the matching key id.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/form/PermissionPathSelector.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/form/PermissionPathSelector.js b/www/form/PermissionPathSelector.js
index e5f2aec46..64de42888 100644
--- a/www/form/PermissionPathSelector.js
+++ b/www/form/PermissionPathSelector.js
@@ -15,6 +15,7 @@ Ext.define('PBS.data.PermissionPathsStore', {
{ value: '/system' },
{ value: '/system/certificates' },
{ value: '/system/disks' },
+ { value: '/system/encryption-keys' },
{ value: '/system/log' },
{ value: '/system/network' },
{ value: '/system/network/dns' },
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox-backup v4 10/30] sync: add helper to check encryption key acls and load key
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (8 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 09/30] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 11/30] api: config: add endpoints for encryption key manipulation Christian Ebner
` (19 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Introduces a common helper function to be used when loading an
encryption key in a sync job, for either push or pull direction.
For the given user, access to the key with the provided id is
checked, and the key config containing the secret is loaded from the
file referenced by the config.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/sync.rs | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/src/server/sync.rs b/src/server/sync.rs
index aedf4a271..18480a109 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -21,12 +21,14 @@ use proxmox_router::HttpError;
use pbs_api_types::{
Authid, BackupDir, BackupGroup, BackupNamespace, CryptMode, GroupListItem, SnapshotListItem,
SyncDirection, SyncJobConfig, VerifyState, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME,
- MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+ MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_SYS_MODIFY,
};
use pbs_client::{BackupReader, BackupRepository, HttpClient, RemoteChunkReader};
+use pbs_config::CachedUserInfo;
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::read_chunk::AsyncReadChunk;
use pbs_datastore::{BackupManifest, DataStore, ListNamespacesRecursive, LocalChunkReader};
+use pbs_tools::crypt_config::CryptConfig;
use crate::backup::ListAccessibleBackupGroups;
use crate::server::jobstate::Job;
@@ -791,3 +793,27 @@ pub(super) fn exclude_not_verified_or_encrypted(
false
}
+
+/// Helper to check if user has access to given encryption key and load it from config.
+pub(crate) fn check_privs_and_load_key_config(
+ key_id: &str,
+ user: &Authid,
+ fail_on_archived: bool,
+) -> Result<Arc<CryptConfig>, Error> {
+ let user_info = CachedUserInfo::new()?;
+ user_info.check_privs(
+ user,
+ &["system", "encryption-keys", key_id],
+ PRIV_SYS_MODIFY,
+ true,
+ )?;
+
+ let key_config = pbs_config::encryption_keys::load_key_config(key_id, fail_on_archived)?;
+ // pass empty passphrase to get raw key material of unprotected key
+ let (enc_key, _created, fingerprint) = key_config.decrypt(&|| Ok(Vec::new()))?;
+
+ info!("Loaded encryption key '{key_id}' with fingerprint '{fingerprint}'");
+
+ let crypt_config = Arc::new(CryptConfig::new(enc_key)?);
+ Ok(crypt_config)
+}
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox-backup v4 11/30] api: config: add endpoints for encryption key manipulation
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (9 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 10/30] sync: add helper to check encryption key acls and load key Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 12/30] api: config: check sync owner has access to en-/decryption keys Christian Ebner
` (18 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Defines the API endpoints for listing existing keys as defined in
the config, creating new keys, and archiving or removing keys.
New keys are either generated on the server side or uploaded as a
JSON string. Password protected keys are currently not supported and
will be added at a later stage, once a general mechanism for secrets
handling is implemented for PBS.
Keys are archived by setting the `archived-at` timestamp, marking
them as no longer usable for encrypting new content. Archiving is
only possible if the key is not in use as the active encryption key
of any sync job.
Removing a key requires it to be archived first. Further, removal is
only possible when the key is no longer referenced by any sync job
config, protecting against accidental deletion of an in-use key.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/config/encryption_keys.rs | 231 +++++++++++++++++++++++++++++
src/api2/config/mod.rs | 2 +
2 files changed, 233 insertions(+)
create mode 100644 src/api2/config/encryption_keys.rs
diff --git a/src/api2/config/encryption_keys.rs b/src/api2/config/encryption_keys.rs
new file mode 100644
index 000000000..620eb8f69
--- /dev/null
+++ b/src/api2/config/encryption_keys.rs
@@ -0,0 +1,231 @@
+use anyhow::{bail, format_err, Error};
+use serde_json::Value;
+
+use proxmox_router::{Permission, Router, RpcEnvironment};
+use proxmox_schema::api;
+
+use pbs_api_types::{
+ Authid, CryptKey, SyncJobConfig, CRYPT_KEY_ID_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY,
+ PROXMOX_CONFIG_DIGEST_SCHEMA,
+};
+
+use pbs_config::encryption_keys::{self, ENCRYPTION_KEYS_CFG_TYPE_ID};
+use pbs_config::CachedUserInfo;
+
+use pbs_key_config::KeyConfig;
+
+#[api(
+ input: {
+ properties: {
+ "include-archived": {
+ type: bool,
+ description: "List also keys which have been archived.",
+ optional: true,
+ default: false,
+ },
+ },
+ },
+ returns: {
+ description: "List of configured encryption keys.",
+ type: Array,
+ items: { type: CryptKey },
+ },
+ access: {
+ permission: &Permission::Anybody,
+ description: "List configured encryption keys filtered by Sys.Audit privileges",
+ },
+)]
+/// List configured encryption keys.
+pub fn list_keys(
+ include_archived: bool,
+ _param: Value,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<CryptKey>, Error> {
+ let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let (config, digest) = encryption_keys::config()?;
+
+ let list: Vec<CryptKey> = config.convert_to_typed_array(ENCRYPTION_KEYS_CFG_TYPE_ID)?;
+ let list = list
+ .into_iter()
+ .filter(|key| {
+ if !include_archived && key.archived_at.is_some() {
+ return false;
+ }
+ let privs = user_info.lookup_privs(&auth_id, &["system", "encryption-keys", &key.id]);
+ privs & PRIV_SYS_AUDIT != 0
+ })
+ .collect();
+
+ rpcenv["digest"] = hex::encode(digest).into();
+
+ Ok(list)
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ id: {
+ schema: CRYPT_KEY_ID_SCHEMA,
+ },
+ key: {
+ description: "Use provided key instead of creating new one.",
+ type: String,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system", "encryption-keys"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Create new encryption key instance or use the provided one.
+pub fn create_key(
+ id: String,
+ key: Option<String>,
+ _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<KeyConfig, Error> {
+ let key_config = if let Some(key) = &key {
+ let key_config: KeyConfig = serde_json::from_str(key)
+ .map_err(|err| format_err!("failed to parse provided key: {err}"))?;
+ // early detect unusable keys
+ if key_config.kdf.is_some() {
+ bail!("protected keys not supported");
+ }
+ let _ = key_config
+ .decrypt(&|| Ok(Vec::new()))
+ .map_err(|err| format_err!("failed to load provided key: {err}"))?;
+ key_config
+ } else {
+ let mut raw_key = [0u8; 32];
+ proxmox_sys::linux::fill_with_random_data(&mut raw_key)?;
+ KeyConfig::without_password(raw_key)?
+ };
+
+ encryption_keys::store_key(&id, &key_config)?;
+
+ Ok(key_config)
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ id: {
+ schema: CRYPT_KEY_ID_SCHEMA,
+ },
+ digest: {
+ optional: true,
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system", "encryption-keys", "{id}"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Mark the key by given id as archived, no longer usable to encrypt contents.
+pub fn archive_key(
+ id: String,
+ digest: Option<String>,
+ _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let _lock = encryption_keys::lock_config()?;
+ let (mut config, expected_digest) = encryption_keys::config()?;
+
+ pbs_config::detect_modified_configuration_file(digest, &expected_digest)?;
+
+ let mut key: CryptKey = config.lookup(ENCRYPTION_KEYS_CFG_TYPE_ID, &id)?;
+
+ if key.archived_at.is_some() {
+ bail!("key already marked as archived");
+ } else {
+ check_encryption_key_in_use(&id, false)?;
+ }
+
+ key.archived_at = Some(proxmox_time::epoch_i64());
+
+ config.set_data(&id, ENCRYPTION_KEYS_CFG_TYPE_ID, &key)?;
+ // drops config lock
+ encryption_keys::save_config(&config)?;
+
+ Ok(())
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ id: {
+ schema: CRYPT_KEY_ID_SCHEMA,
+ },
+ digest: {
+ optional: true,
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system", "encryption-keys", "{id}"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Remove encryption key.
+pub fn delete_key(
+ id: String,
+ digest: Option<String>,
+ _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let _lock = encryption_keys::lock_config()?;
+ let (config, expected_digest) = encryption_keys::config()?;
+
+ pbs_config::detect_modified_configuration_file(digest, &expected_digest)?;
+
+ check_encryption_key_in_use(&id, true)?;
+
+ encryption_keys::delete_key(&id, config)?;
+
+ Ok(())
+}
+
+// Check if sync jobs hold given key as active encryption key and if flag set, if sync jobs have it
+// as associated key.
+fn check_encryption_key_in_use(id: &str, include_associated: bool) -> Result<(), Error> {
+ let (config, _digest) = pbs_config::sync::config()?;
+
+ let mut used_by_jobs = Vec::new();
+
+ let job_list: Vec<SyncJobConfig> = config.convert_to_typed_array("sync")?;
+ for job in job_list {
+ if job.active_encryption_key.as_deref() == Some(id) {
+ used_by_jobs.push(job.id.clone());
+ }
+ if include_associated
+ && job
+ .associated_key
+ .as_deref()
+ .unwrap_or(&[])
+ .contains(&id.to_string())
+ {
+ used_by_jobs.push(job.id.clone());
+ }
+ }
+
+ if !used_by_jobs.is_empty() {
+ let plural = if used_by_jobs.len() > 1 { "s" } else { "" };
+ let ids = used_by_jobs.join(", ");
+ bail!("encryption key in use by sync job{plural}: '{ids}'");
+ }
+
+ Ok(())
+}
+
+const ITEM_ROUTER: Router = Router::new()
+ .post(&API_METHOD_ARCHIVE_KEY)
+ .delete(&API_METHOD_DELETE_KEY);
+
+pub const ROUTER: Router = Router::new()
+ .get(&API_METHOD_LIST_KEYS)
+ .post(&API_METHOD_CREATE_KEY)
+ .match_all("id", &ITEM_ROUTER);
diff --git a/src/api2/config/mod.rs b/src/api2/config/mod.rs
index 1cd9ead76..0281bcfae 100644
--- a/src/api2/config/mod.rs
+++ b/src/api2/config/mod.rs
@@ -9,6 +9,7 @@ pub mod acme;
pub mod changer;
pub mod datastore;
pub mod drive;
+pub mod encryption_keys;
pub mod media_pool;
pub mod metrics;
pub mod notifications;
@@ -28,6 +29,7 @@ const SUBDIRS: SubdirMap = &sorted!([
("changer", &changer::ROUTER),
("datastore", &datastore::ROUTER),
("drive", &drive::ROUTER),
+ ("encryption-keys", &encryption_keys::ROUTER),
("media-pool", &media_pool::ROUTER),
("metrics", &metrics::ROUTER),
("notifications", ¬ifications::ROUTER),
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox-backup v4 12/30] api: config: check sync owner has access to en-/decryption keys
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (10 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 11/30] api: config: add endpoints for encryption key manipulation Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 13/30] api: config: allow encryption key manipulation for sync job Christian Ebner
` (17 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
This ensures a sync job cannot be configured with a non-existent
key, or with a key not accessible to the given sync
owner/local-user.
Key access is checked by loading the key from the key file.
When setting the active encryption key for push sync jobs, it is
further ensured that the key is not yet archived.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/config/sync.rs | 29 +++++++++++++++++++++++++++++
1 file changed, 29 insertions(+)
diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index 67fa3182c..75b99c2a7 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -62,6 +62,22 @@ fn is_correct_owner(auth_id: &Authid, job: &SyncJobConfig) -> bool {
}
}
+// Check access and test key loading works as expected for sync job owner/user.
+fn sync_user_can_access_optional_key(
+ key_id: Option<&str>,
+ owner: &Authid,
+ fail_on_archived: bool,
+) -> Result<(), Error> {
+ if let Some(key_id) = key_id {
+ if crate::server::sync::check_privs_and_load_key_config(key_id, owner, fail_on_archived)
+ .is_err()
+ {
+ bail!("no such key or cannot access key '{key_id}'");
+ }
+ }
+ Ok(())
+}
+
/// checks whether user can run the corresponding sync job, depending on sync direction
///
/// namespace creation/deletion ACL and backup group ownership checks happen in the pull/push code
@@ -251,6 +267,19 @@ pub fn create_sync_job(
}
}
+ let owner = config
+ .owner
+ .as_ref()
+ .unwrap_or_else(|| Authid::root_auth_id());
+
+ if sync_direction == SyncDirection::Push {
+ sync_user_can_access_optional_key(config.active_encryption_key.as_deref(), owner, true)?;
+ } else {
+ for key in config.associated_key.as_deref().unwrap_or(&[]) {
+ sync_user_can_access_optional_key(Some(key), owner, false)?;
+ }
+ }
+
let (mut section_config, _digest) = sync::config()?;
if section_config.sections.contains_key(&config.id) {
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox-backup v4 13/30] api: config: allow encryption key manipulation for sync job
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (11 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 12/30] api: config: check sync owner has access to en-/decryption keys Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 14/30] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
` (16 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Since the SyncJobConfig has been extended to include an optional
active encryption key, set its default to none.
Extend the API config update handler to also set, update or delete
the active encryption key based on the provided parameters.
Associated keys are updated accordingly as well; it is however
ensured that the previously active key remains associated when it is
changed.
The active encryption key is used to encrypt unencrypted backup
snapshots during push sync. Any of the associated keys will be used
to decrypt snapshots with a matching key fingerprint during pull
sync.
During updates to the active encryption key, the associated keys
and/or the owner, it is ensured that the respective sync job owner
has access to the keys.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/config/sync.rs | 84 +++++++++++++++++++++++++++++++++++++++--
1 file changed, 81 insertions(+), 3 deletions(-)
diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index 75b99c2a7..8208d5226 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -330,6 +330,22 @@ pub fn read_sync_job(id: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Sync
Ok(sync_job)
}
+fn keep_previous_key_as_associated(
+ current_active: Option<&String>,
+ associated_keys: &mut Option<Vec<String>>,
+) {
+ if let Some(prev) = current_active {
+ match associated_keys {
+ Some(ref mut keys) => {
+ if !keys.contains(prev) {
+ keys.push(prev.clone());
+ }
+ }
+ None => *associated_keys = Some(vec![prev.clone()]),
+ }
+ }
+}
+
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
@@ -373,6 +389,10 @@ pub enum DeletableProperty {
UnmountOnDone,
/// Delete the sync_direction property,
SyncDirection,
+ /// Delete the active encryption key property,
+ ActiveEncryptionKey,
+ /// Delete associated key property,
+ AssociatedKey,
}
#[api(
@@ -414,7 +434,7 @@ required sync job owned by user. Additionally, remove vanished requires RemoteDa
#[allow(clippy::too_many_arguments)]
pub fn update_sync_job(
id: String,
- update: SyncJobConfigUpdater,
+ mut update: SyncJobConfigUpdater,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
@@ -437,6 +457,9 @@ pub fn update_sync_job(
}
if let Some(delete) = delete {
+ // temporarily hold the previously active key in case of updates,
+ // so it can be re-added as an associated key afterwards.
+ let mut previous_active_encryption_key = None;
for delete_prop in delete {
match delete_prop {
DeletableProperty::Remote => {
@@ -496,8 +519,23 @@ pub fn update_sync_job(
DeletableProperty::SyncDirection => {
data.sync_direction = None;
}
+ DeletableProperty::ActiveEncryptionKey => {
+ // Previously active encryption keys are always rotated to
+ // become an associated key in order to hinder unintended
+ // deletion (e.g. key got rotated for the push, but it still
+ // is intended to be used for restore/pull existing snapshots).
+ previous_active_encryption_key = data.active_encryption_key.take();
+ }
+ DeletableProperty::AssociatedKey => {
+ // Previous active encryption key might be added as associated below.
+ data.associated_key = None;
+ }
}
}
+ keep_previous_key_as_associated(
+ previous_active_encryption_key.as_ref(),
+ &mut data.associated_key,
+ );
}
if let Some(comment) = update.comment {
@@ -524,8 +562,18 @@ pub fn update_sync_job(
if let Some(remote_ns) = update.remote_ns {
data.remote_ns = Some(remote_ns);
}
- if let Some(owner) = update.owner {
- data.owner = Some(owner);
+ if let Some(owner) = &update.owner {
+ data.owner = Some(owner.clone());
+ // must assure new owner can access pre-configured keys, other cases are
+ // checked on respective key updates
+ if update.active_encryption_key.is_none() {
+ sync_user_can_access_optional_key(data.active_encryption_key.as_deref(), owner, true)?;
+ }
+ if update.associated_key.is_none() {
+ for key in data.associated_key.as_deref().unwrap_or(&[]) {
+ sync_user_can_access_optional_key(Some(key), owner, true)?;
+ }
+ }
}
if let Some(group_filter) = update.group_filter {
data.group_filter = Some(group_filter);
@@ -555,6 +603,34 @@ pub fn update_sync_job(
data.sync_direction = Some(sync_direction);
}
+ if let Some(active_encryption_key) = update.active_encryption_key {
+ // owner updated above already, so can use the one in data
+ let owner = data
+ .owner
+ .as_ref()
+ .unwrap_or_else(|| Authid::root_auth_id());
+ sync_user_can_access_optional_key(Some(&active_encryption_key), owner, true)?;
+
+ keep_previous_key_as_associated(
+ data.active_encryption_key.as_ref(),
+ &mut update.associated_key,
+ );
+ data.active_encryption_key = Some(active_encryption_key);
+ }
+
+ if let Some(associated_key) = update.associated_key {
+ // owner updated above already, so can use the one in data
+ let owner = data
+ .owner
+ .as_ref()
+ .unwrap_or_else(|| Authid::root_auth_id());
+ // Don't allow associating keys the local user/owner can't access
+ for key in &associated_key {
+ sync_user_can_access_optional_key(Some(key), owner, false)?;
+ }
+ data.associated_key = Some(associated_key);
+ }
+
if update.limit.rate_in.is_some() {
data.limit.rate_in = update.limit.rate_in;
}
@@ -727,6 +803,8 @@ acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
run_on_mount: None,
unmount_on_done: None,
sync_direction: None, // use default
+ active_encryption_key: None,
+ associated_key: None,
};
// should work without ACLs
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox-backup v4 14/30] sync: push: rewrite manifest instead of pushing pre-existing one
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (12 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 13/30] api: config: allow encryption key manipulation for sync job Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 15/30] api: push sync: expose optional encryption key for push sync Christian Ebner
` (15 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
In preparation for being able to encrypt unencrypted backup snapshots
during push sync jobs.
Previously the pre-existing manifest file was pushed to the remote
target since it did not require modifications and contained all the
files with the correct metadata. When encrypting, the files must
however be marked as encrypted by individually setting the crypt mode
and the manifest must be signed and the encryption key fingerprint
added to the unprotected part of the manifest.
Therefore, the manifest is now recreated and updated accordingly. To
do so, pushing of the index must return the full BackupStats, not
just the sync stats used for accounting.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/push.rs | 59 +++++++++++++++++++++++++++++++++-------------
1 file changed, 43 insertions(+), 16 deletions(-)
diff --git a/src/server/push.rs b/src/server/push.rs
index 21f326aba..c12819377 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -17,8 +17,8 @@ use pbs_api_types::{
PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
};
use pbs_client::{
- BackupRepository, BackupWriter, BackupWriterOptions, HttpClient, IndexType, MergedChunkInfo,
- UploadOptions,
+ BackupRepository, BackupStats, BackupWriter, BackupWriterOptions, HttpClient, IndexType,
+ MergedChunkInfo, UploadOptions,
};
use pbs_config::CachedUserInfo;
use pbs_datastore::data_blob::ChunkInfo;
@@ -26,7 +26,7 @@ use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{DataStore, StoreProgress};
+use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
use super::sync::{
check_namespace_depth_limit, exclude_not_verified_or_encrypted,
@@ -886,6 +886,7 @@ pub(crate) async fn push_snapshot(
// Avoid double upload penalty by remembering already seen chunks
let known_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64 * 1024)));
+ let mut target_manifest = BackupManifest::new(snapshot.clone());
for entry in source_manifest.files() {
let mut path = backup_dir.full_path();
@@ -898,6 +899,12 @@ pub(crate) async fn push_snapshot(
let backup_stats = backup_writer
.upload_blob(file, archive_name.as_ref())
.await?;
+ target_manifest.add_file(
+ &archive_name,
+ backup_stats.size,
+ backup_stats.csum,
+ entry.chunk_crypt_mode(),
+ )?;
stats.add(SyncStats {
chunk_count: backup_stats.chunk_count as usize,
bytes: backup_stats.size as usize,
@@ -920,7 +927,7 @@ pub(crate) async fn push_snapshot(
let chunk_reader = reader
.chunk_reader(entry.chunk_crypt_mode())
.context("failed to get chunk reader")?;
- let sync_stats = push_index(
+ let upload_stats = push_index(
&archive_name,
index,
chunk_reader,
@@ -929,7 +936,18 @@ pub(crate) async fn push_snapshot(
known_chunks.clone(),
)
.await?;
- stats.add(sync_stats);
+ target_manifest.add_file(
+ &archive_name,
+ upload_stats.size,
+ upload_stats.csum,
+ entry.chunk_crypt_mode(),
+ )?;
+ stats.add(SyncStats {
+ chunk_count: upload_stats.chunk_count as usize,
+ bytes: upload_stats.size as usize,
+ elapsed: upload_stats.duration,
+ removed: None,
+ });
}
ArchiveType::FixedIndex => {
if let Some(manifest) = upload_options.previous_manifest.as_ref() {
@@ -947,7 +965,7 @@ pub(crate) async fn push_snapshot(
.chunk_reader(entry.chunk_crypt_mode())
.context("failed to get chunk reader")?;
let size = index.index_bytes();
- let sync_stats = push_index(
+ let upload_stats = push_index(
&archive_name,
index,
chunk_reader,
@@ -956,7 +974,18 @@ pub(crate) async fn push_snapshot(
known_chunks.clone(),
)
.await?;
- stats.add(sync_stats);
+ target_manifest.add_file(
+ &archive_name,
+ upload_stats.size,
+ upload_stats.csum,
+ entry.chunk_crypt_mode(),
+ )?;
+ stats.add(SyncStats {
+ chunk_count: upload_stats.chunk_count as usize,
+ bytes: upload_stats.size as usize,
+ elapsed: upload_stats.duration,
+ removed: None,
+ });
}
}
} else {
@@ -979,8 +1008,11 @@ pub(crate) async fn push_snapshot(
.await?;
}
- // Rewrite manifest for pushed snapshot, recreating manifest from source on target
- let manifest_json = serde_json::to_value(source_manifest)?;
+ // Rewrite manifest for pushed snapshot, recreating manifest from source on target,
+ // needs to update all relevant info for new manifest.
+ target_manifest.unprotected = source_manifest.unprotected;
+ target_manifest.signature = source_manifest.signature;
+ let manifest_json = serde_json::to_value(target_manifest)?;
let manifest_string = serde_json::to_string_pretty(&manifest_json)?;
let backup_stats = backup_writer
.upload_blob_from_data(
@@ -1012,7 +1044,7 @@ async fn push_index(
backup_writer: &BackupWriter,
index_type: IndexType,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
-) -> Result<SyncStats, Error> {
+) -> Result<BackupStats, Error> {
let (upload_channel_tx, upload_channel_rx) = mpsc::channel(20);
let mut chunk_infos =
stream::iter(0..index.index_count()).map(move |pos| index.chunk_info(pos).unwrap());
@@ -1064,10 +1096,5 @@ async fn push_index(
.upload_index_chunk_info(filename, merged_chunk_info_stream, upload_options)
.await?;
- Ok(SyncStats {
- chunk_count: upload_stats.chunk_count as usize,
- bytes: upload_stats.size as usize,
- elapsed: upload_stats.duration,
- removed: None,
- })
+ Ok(upload_stats)
}
--
2.47.3
* [PATCH proxmox-backup v4 15/30] api: push sync: expose optional encryption key for push sync
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (13 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 14/30] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 16/30] sync: push: optionally encrypt data blob on upload Christian Ebner
` (14 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Exposes the optional encryption key id to be used for server side
encryption of contents during push sync jobs. Only expose the
parameter for now and load the key if given; the logic to use it will
be implemented in subsequent code changes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/push.rs | 8 +++++++-
src/server/push.rs | 14 ++++++++++++++
src/server/sync.rs | 1 +
3 files changed, 22 insertions(+), 1 deletion(-)
diff --git a/src/api2/push.rs b/src/api2/push.rs
index e5edc13e0..84d93621b 100644
--- a/src/api2/push.rs
+++ b/src/api2/push.rs
@@ -2,7 +2,7 @@ use anyhow::{format_err, Error};
use futures::{future::FutureExt, select};
use pbs_api_types::{
- Authid, BackupNamespace, GroupFilter, RateLimitConfig, DATASTORE_SCHEMA,
+ Authid, BackupNamespace, GroupFilter, RateLimitConfig, CRYPT_KEY_ID_SCHEMA, DATASTORE_SCHEMA,
GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP, PRIV_REMOTE_DATASTORE_PRUNE,
REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA, SYNC_ENCRYPTED_ONLY_SCHEMA,
@@ -108,6 +108,10 @@ fn check_push_privs(
schema: TRANSFER_LAST_SCHEMA,
optional: true,
},
+ "encryption-key": {
+ schema: CRYPT_KEY_ID_SCHEMA,
+ optional: true,
+ },
},
},
access: {
@@ -133,6 +137,7 @@ async fn push(
verified_only: Option<bool>,
limit: RateLimitConfig,
transfer_last: Option<usize>,
+ encryption_key: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -164,6 +169,7 @@ async fn push(
verified_only,
limit,
transfer_last,
+ encryption_key,
)
.await?;
diff --git a/src/server/push.rs b/src/server/push.rs
index c12819377..0a2a42be5 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -27,6 +27,7 @@ use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::read_chunk::AsyncReadChunk;
use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
+use pbs_tools::crypt_config::CryptConfig;
use super::sync::{
check_namespace_depth_limit, exclude_not_verified_or_encrypted,
@@ -83,6 +84,9 @@ pub(crate) struct PushParameters {
verified_only: bool,
/// How many snapshots should be transferred at most (taking the newest N snapshots)
transfer_last: Option<usize>,
+ /// Encryption key to use for pushing unencrypted backup snapshots. Does not affect
+ /// already encrypted snapshots.
+ crypt_config: Option<(String, Arc<CryptConfig>)>,
}
impl PushParameters {
@@ -102,6 +106,7 @@ impl PushParameters {
verified_only: Option<bool>,
limit: RateLimitConfig,
transfer_last: Option<usize>,
+ active_encryption_key: Option<String>,
) -> Result<Self, Error> {
if let Some(max_depth) = max_depth {
ns.check_max_depth(max_depth)?;
@@ -155,6 +160,14 @@ impl PushParameters {
};
let group_filter = group_filter.unwrap_or_default();
+ let crypt_config = if let Some(key_id) = &active_encryption_key {
+ let crypt_config =
+ crate::server::sync::check_privs_and_load_key_config(key_id, &local_user, true)?;
+ Some((key_id.to_string(), crypt_config))
+ } else {
+ None
+ };
+
Ok(Self {
source,
target,
@@ -165,6 +178,7 @@ impl PushParameters {
encrypted_only,
verified_only,
transfer_last,
+ crypt_config,
})
}
diff --git a/src/server/sync.rs b/src/server/sync.rs
index 18480a109..da1b0a06f 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -677,6 +677,7 @@ pub fn do_sync_job(
sync_job.verified_only,
sync_job.limit.clone(),
sync_job.transfer_last,
+ sync_job.active_encryption_key,
)
.await?;
push_store(push_params).await?
--
2.47.3
* [PATCH proxmox-backup v4 16/30] sync: push: optionally encrypt data blob on upload
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (14 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 15/30] api: push sync: expose optional encryption key for push sync Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 17/30] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
` (13 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Encrypt the data blob with the configured encryption key during syncs
in push direction, if one is given.
Introduce a helper to read and decode the data blob from the source
into raw data and re-encrypt it, so the new blob is compressed and
encrypted with the correct header when uploading. The same helper will
be reused for client log uploads in subsequent code changes.
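The helper's shape can be sketched as follows, with stand-in `decode_blob` and `upload_raw` functions instead of the real pbs-client API; a real decode would strip the blob header and decompress (the source blob is unencrypted here), while the real upload path compresses and encrypts again with the new key.

```rust
// Sketch of the decode-then-reupload pipeline described above.
// decode_blob and upload_raw are hypothetical stand-ins, not pbs-client calls.
use std::io::Read;

fn decode_blob(mut reader: impl Read) -> std::io::Result<Vec<u8>> {
    // Real code would strip the blob header and decompress; here the
    // payload is passed through unchanged for illustration.
    let mut raw = Vec::new();
    reader.read_to_end(&mut raw)?;
    Ok(raw)
}

fn upload_raw(raw: Vec<u8>, encrypt: bool) -> (usize, bool) {
    // Stand-in for upload_blob_from_data: reports the payload size and
    // whether the upload path would encrypt it.
    (raw.len(), encrypt)
}

fn main() -> std::io::Result<()> {
    // Re-encode: load stored blob bytes, decode to raw data, re-upload
    // so the writer re-compresses and encrypts with the active key.
    let raw = decode_blob(&[1u8, 2, 3][..])?;
    let (size, encrypted) = upload_raw(raw, true);
    assert_eq!(size, 3);
    assert!(encrypted);
    Ok(())
}
```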
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/push.rs | 39 +++++++++++++++++++++++++++++++++------
1 file changed, 33 insertions(+), 6 deletions(-)
diff --git a/src/server/push.rs b/src/server/push.rs
index 0a2a42be5..f8678a78b 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1,6 +1,7 @@
//! Sync datastore by pushing contents to remote server
use std::collections::HashSet;
+use std::path::Path;
use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Context, Error};
@@ -26,7 +27,7 @@ use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{BackupManifest, DataStore, StoreProgress};
+use pbs_datastore::{BackupManifest, DataBlob, DataStore, StoreProgress};
use pbs_tools::crypt_config::CryptConfig;
use super::sync::{
@@ -857,6 +858,8 @@ pub(crate) async fn push_snapshot(
return Ok(stats);
}
+ let encrypt_using_key = None;
+
// Writer instance locks the snapshot on the remote side
let backup_writer = BackupWriter::start(
¶ms.target.client,
@@ -864,7 +867,7 @@ pub(crate) async fn push_snapshot(
datastore: params.target.repo.store(),
ns: &target_ns,
backup: snapshot,
- crypt_config: None,
+ crypt_config: encrypt_using_key.clone(),
debug: false,
benchmark: false,
no_cache: false,
@@ -909,10 +912,20 @@ pub(crate) async fn push_snapshot(
let archive_name = BackupArchiveName::from_path(&entry.filename)?;
match archive_name.archive_type() {
ArchiveType::Blob => {
- let file = std::fs::File::open(&path)?;
- let backup_stats = backup_writer
- .upload_blob(file, archive_name.as_ref())
- .await?;
+ let backup_stats = if encrypt_using_key.is_some() {
+ reencode_encrypted_and_upload_blob(
+ path,
+ &archive_name,
+ &backup_writer,
+ &upload_options,
+ )
+ .await?
+ } else {
+ let file = std::fs::File::open(&path)?;
+ backup_writer
+ .upload_blob(file, archive_name.as_ref())
+ .await?
+ };
target_manifest.add_file(
&archive_name,
backup_stats.size,
@@ -1047,6 +1060,20 @@ pub(crate) async fn push_snapshot(
Ok(stats)
}
+async fn reencode_encrypted_and_upload_blob<P: AsRef<Path>>(
+ path: P,
+ archive_name: &BackupArchiveName,
+ backup_writer: &BackupWriter,
+ upload_options: &UploadOptions,
+) -> Result<BackupStats, Error> {
+ let mut file = tokio::fs::File::open(&path).await?;
+ let data_blob = DataBlob::load_from_async_reader(&mut file).await?;
+ let raw_data = data_blob.decode(None, None)?;
+ backup_writer
+ .upload_blob_from_data(raw_data, archive_name.as_ref(), upload_options.clone())
+ .await
+}
+
// Read fixed or dynamic index and push to target by uploading via the backup writer instance
//
// For fixed indexes, the size must be provided as given by the index reader.
--
2.47.3
* [PATCH proxmox-backup v4 17/30] sync: push: optionally encrypt client log on upload if key is given
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (15 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 16/30] sync: push: optionally encrypt data blob on upload Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 18/30] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
` (12 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Encrypt the client log blob with the configured encryption key during
syncs in push direction, if one is given. The client log is not part
of the manifest and therefore needs to be handled separately.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/push.rs | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/src/server/push.rs b/src/server/push.rs
index f8678a78b..74094225c 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1026,13 +1026,23 @@ pub(crate) async fn push_snapshot(
let client_log_name = &CLIENT_LOG_BLOB_NAME;
client_log_path.push(client_log_name.as_ref());
if client_log_path.is_file() {
- backup_writer
- .upload_blob_from_file(
- &client_log_path,
- client_log_name.as_ref(),
- upload_options.clone(),
+ if encrypt_using_key.is_some() {
+ reencode_encrypted_and_upload_blob(
+ client_log_path,
+ client_log_name,
+ &backup_writer,
+ &upload_options,
)
.await?;
+ } else {
+ backup_writer
+ .upload_blob_from_file(
+ &client_log_path,
+ client_log_name.as_ref(),
+ upload_options.clone(),
+ )
+ .await?;
+ }
}
// Rewrite manifest for pushed snapshot, recreating manifest from source on target,
--
2.47.3
* [PATCH proxmox-backup v4 18/30] sync: push: add helper for loading known chunks from previous snapshot
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (16 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 17/30] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
` (11 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Loading of known chunks only makes sense for snapshots which do not
need encryption while pushing. To check this, move the known-chunk
loading into a common helper method and distinguish between dynamic
and fixed index loading based on the archive type.
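The reuse decision described above — only load known chunks from the previous manifest when they will not be re-encrypted — can be sketched as a small predicate; fingerprints are modeled as raw 32-byte arrays and the function name is illustrative, not the actual helper added by this patch.

```rust
// Sketch of the chunk-reuse decision: previous chunks are only reusable
// when no re-encryption happens, or when the previous snapshot was already
// encrypted with the same key (matching fingerprints).
fn can_reuse_previous_chunks(
    push_key_fp: Option<&[u8; 32]>,
    previous_manifest_fp: Option<&[u8; 32]>,
) -> bool {
    match push_key_fp {
        // No encryption during push: previous chunks are always reusable.
        None => true,
        Some(key_fp) => match previous_manifest_fp {
            // Reusable only if the previous snapshot used the same key.
            Some(prev_fp) => prev_fp == key_fp,
            // Previous snapshot unencrypted: its chunks get re-encrypted,
            // so nothing can be reused.
            None => false,
        },
    }
}

fn main() {
    assert!(can_reuse_previous_chunks(None, None));
    assert!(can_reuse_previous_chunks(Some(&[7u8; 32]), Some(&[7u8; 32])));
    assert!(!can_reuse_previous_chunks(Some(&[7u8; 32]), Some(&[8u8; 32])));
    assert!(!can_reuse_previous_chunks(Some(&[7u8; 32]), None));
    println!("chunk reuse decision behaves as expected");
}
```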
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/push.rs | 69 ++++++++++++++++++++++++++++++++--------------
1 file changed, 49 insertions(+), 20 deletions(-)
diff --git a/src/server/push.rs b/src/server/push.rs
index 74094225c..5c76cf2eb 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -816,6 +816,45 @@ pub(crate) async fn push_group(
Ok(stats)
}
+async fn load_previous_snapshot_known_chunks(
+ params: &PushParameters,
+ upload_options: &UploadOptions,
+ backup_writer: &BackupWriter,
+ archive_name: &BackupArchiveName,
+ known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+) {
+ if let Some(manifest) = upload_options.previous_manifest.as_ref() {
+ if let Some((_id, crypt_config)) = ¶ms.crypt_config {
+ if let Ok(Some(fingerprint)) = manifest.fingerprint() {
+ if *fingerprint.bytes() == crypt_config.fingerprint() {
+ // needs encryption during push, cannot reuse chunks from previous manifest
+ return;
+ }
+ } else {
+ // previous manifest is not pre-encrypted or failed to check if so,
+ // fallback to not reuse chunks
+ return;
+ }
+ }
+
+ // Add known chunks, ignore errors since archive might not be present and it is better
+ // to proceed on unrelated errors than to fail here.
+ match archive_name.archive_type() {
+ ArchiveType::FixedIndex => {
+ let _res = backup_writer
+ .download_previous_fixed_index(archive_name, manifest, known_chunks)
+ .await;
+ }
+ ArchiveType::DynamicIndex => {
+ let _res = backup_writer
+ .download_previous_dynamic_index(archive_name, manifest, known_chunks)
+ .await;
+ }
+ ArchiveType::Blob => (),
+ };
+ }
+}
+
/// Push snapshot to target
///
/// Creates a new snapshot on the target and pushes the content of the source snapshot to the
@@ -910,6 +949,16 @@ pub(crate) async fn push_snapshot(
path.push(&entry.filename);
if path.try_exists()? {
let archive_name = BackupArchiveName::from_path(&entry.filename)?;
+
+ load_previous_snapshot_known_chunks(
+ params,
+ &upload_options,
+ &backup_writer,
+ &archive_name,
+ known_chunks.clone(),
+ )
+ .await;
+
match archive_name.archive_type() {
ArchiveType::Blob => {
let backup_stats = if encrypt_using_key.is_some() {
@@ -940,16 +989,6 @@ pub(crate) async fn push_snapshot(
});
}
ArchiveType::DynamicIndex => {
- if let Some(manifest) = upload_options.previous_manifest.as_ref() {
- // Add known chunks, ignore errors since archive might not be present
- let _res = backup_writer
- .download_previous_dynamic_index(
- &archive_name,
- manifest,
- known_chunks.clone(),
- )
- .await;
- }
let index = DynamicIndexReader::open(&path)?;
let chunk_reader = reader
.chunk_reader(entry.chunk_crypt_mode())
@@ -977,16 +1016,6 @@ pub(crate) async fn push_snapshot(
});
}
ArchiveType::FixedIndex => {
- if let Some(manifest) = upload_options.previous_manifest.as_ref() {
- // Add known chunks, ignore errors since archive might not be present
- let _res = backup_writer
- .download_previous_fixed_index(
- &archive_name,
- manifest,
- known_chunks.clone(),
- )
- .await;
- }
let index = FixedIndexReader::open(&path)?;
let chunk_reader = reader
.chunk_reader(entry.chunk_crypt_mode())
--
2.47.3
* [PATCH proxmox-backup v4 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (17 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 18/30] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 20/30] ui: define and expose encryption key management menu item and windows Christian Ebner
` (10 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
If an encryption key id is provided in the push parameters, the key
is loaded at the start of the push sync job and passed along via the
crypt config.
Backup snapshots which are already fully encrypted are pushed as is,
while partially encrypted snapshots are skipped to avoid mixing
contents. Pre-existing snapshots on the remote are however not checked
to match the key.
Special care has to be taken when tracking the already encountered
chunks. For regular push sync jobs, chunk upload is optimized to skip
re-upload of chunks from the previous snapshot (if any) and of new,
but already encountered chunks for the current group sync. Since the
chunks now have to be re-processed anyway, do not load the chunks from
the previous snapshot into memory if they need re-encryption, and keep
track of the unencrypted -> encrypted digest mapping in a hashmap to
avoid re-processing. This might be optimized in the future by e.g.
moving the tracking to an LRU cache, which however requires more
careful evaluation of memory consumption.
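The digest-mapping idea can be sketched with a plain `HashMap`; `reencrypt_digest` below is a hypothetical stand-in for re-encrypting a chunk and computing its new digest — only the memoization pattern reflects the patch, not the actual crypto.

```rust
// Sketch of memoizing the unencrypted -> encrypted digest mapping so a
// chunk seen a second time is not decoded and re-encrypted again.
use std::collections::HashMap;

type Digest = [u8; 32];

// Hypothetical stand-in for "re-encrypt chunk, compute new digest";
// real code would decode the chunk and rebuild it via the crypt config.
fn reencrypt_digest(plain: &Digest) -> Digest {
    let mut d = *plain;
    for b in d.iter_mut() {
        *b ^= 0xAA;
    }
    d
}

// Returns the encrypted digest and whether the chunk had to be processed.
fn lookup_or_reencrypt(map: &mut HashMap<Digest, Digest>, plain: Digest) -> (Digest, bool) {
    if let Some(enc) = map.get(&plain) {
        // Known chunk: reuse the already-computed encrypted digest.
        (*enc, false)
    } else {
        let enc = reencrypt_digest(&plain);
        map.insert(plain, enc);
        (enc, true)
    }
}

fn main() {
    let mut map = HashMap::new();
    let plain = [1u8; 32];
    let (enc1, fresh1) = lookup_or_reencrypt(&mut map, plain);
    let (enc2, fresh2) = lookup_or_reencrypt(&mut map, plain);
    assert!(fresh1);
    assert!(!fresh2); // second encounter served from the map
    assert_eq!(enc1, enc2);
    println!("second encounter skipped re-encryption");
}
```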
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=7251
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/push.rs | 122 ++++++++++++++++++++++++++++++++++-----------
1 file changed, 94 insertions(+), 28 deletions(-)
diff --git a/src/server/push.rs b/src/server/push.rs
index 5c76cf2eb..bc8c3810f 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1,6 +1,6 @@
//! Sync datastore by pushing contents to remote server
-use std::collections::HashSet;
+use std::collections::{HashMap, HashSet};
use std::path::Path;
use std::sync::{Arc, Mutex};
@@ -12,17 +12,17 @@ use tracing::{info, warn};
use pbs_api_types::{
print_store_and_ns, ApiVersion, ApiVersionInfo, ArchiveType, Authid, BackupArchiveName,
- BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, GroupFilter, GroupListItem,
- NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem, CLIENT_LOG_BLOB_NAME,
- MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP,
- PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
+ BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, CryptMode, GroupFilter,
+ GroupListItem, NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem,
+ CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+ PRIV_REMOTE_DATASTORE_BACKUP, PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
};
use pbs_client::{
BackupRepository, BackupStats, BackupWriter, BackupWriterOptions, HttpClient, IndexType,
MergedChunkInfo, UploadOptions,
};
use pbs_config::CachedUserInfo;
-use pbs_datastore::data_blob::ChunkInfo;
+use pbs_datastore::data_blob::{ChunkInfo, DataChunkBuilder};
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
@@ -897,7 +897,28 @@ pub(crate) async fn push_snapshot(
return Ok(stats);
}
- let encrypt_using_key = None;
+ let mut encrypt_using_key = None;
+ if params.crypt_config.is_some() {
+ // Check if snapshot is fully encrypted or not encrypted at all:
+ // refuse progress otherwise to upload partially unencrypted contents or mix encryption key.
+ let files = source_manifest.files();
+ let all_unencrypted = files
+ .iter()
+ .all(|f| f.chunk_crypt_mode() == CryptMode::None);
+ let any_unencrypted = files
+ .iter()
+ .any(|f| f.chunk_crypt_mode() == CryptMode::None);
+
+ if all_unencrypted {
+ encrypt_using_key = params.crypt_config.clone();
+ info!("Encrypt source snapshot '{snapshot}' on the fly while pushing to remote");
+ } else if any_unencrypted {
+ warn!("Encountered partially encrypted snapshot '{snapshot}', refuse to re-encrypt and skip");
+ return Ok(stats);
+ } else {
+ info!("Snapshot '{snapshot}' already encrypted with client key, not re-encypting with configured active encryption key");
+ }
+ }
// Writer instance locks the snapshot on the remote side
let backup_writer = BackupWriter::start(
@@ -906,7 +927,9 @@ pub(crate) async fn push_snapshot(
datastore: params.target.repo.store(),
ns: &target_ns,
backup: snapshot,
- crypt_config: encrypt_using_key.clone(),
+ crypt_config: encrypt_using_key
+ .as_ref()
+ .map(|(_id, conf)| Arc::clone(conf)),
debug: false,
benchmark: false,
no_cache: false,
@@ -923,19 +946,20 @@ pub(crate) async fn push_snapshot(
}
};
- // Dummy upload options: the actual compression and/or encryption already happened while
- // the chunks were generated during creation of the backup snapshot, therefore pre-existing
- // chunks (already compressed and/or encrypted) can be pushed to the target.
+ // Dummy upload options: The actual compression already happened while
+ // the chunks were generated during creation of the backup snapshot,
+ // therefore pre-existing chunks (already compressed) can be pushed to
+ // the target.
+ //
// Further, these steps are skipped in the backup writer upload stream.
//
// Therefore, these values do not need to fit the values given in the manifest.
// The original manifest is uploaded in the end anyways.
//
// Compression is set to true so that the uploaded manifest will be compressed.
- // Encrypt is set to assure that above files are not encrypted.
let upload_options = UploadOptions {
compress: true,
- encrypt: false,
+ encrypt: encrypt_using_key.is_some(),
previous_manifest,
..UploadOptions::default()
};
@@ -949,6 +973,10 @@ pub(crate) async fn push_snapshot(
path.push(&entry.filename);
if path.try_exists()? {
let archive_name = BackupArchiveName::from_path(&entry.filename)?;
+ let crypt_mode = match &encrypt_using_key {
+ Some(_) => CryptMode::Encrypt,
+ None => entry.chunk_crypt_mode(),
+ };
load_previous_snapshot_known_chunks(
params,
@@ -979,7 +1007,7 @@ pub(crate) async fn push_snapshot(
&archive_name,
backup_stats.size,
backup_stats.csum,
- entry.chunk_crypt_mode(),
+ crypt_mode,
)?;
stats.add(SyncStats {
chunk_count: backup_stats.chunk_count as usize,
@@ -1000,13 +1028,16 @@ pub(crate) async fn push_snapshot(
&backup_writer,
IndexType::Dynamic,
known_chunks.clone(),
+ encrypt_using_key
+ .as_ref()
+ .map(|(_id, conf)| Arc::clone(conf)),
)
.await?;
target_manifest.add_file(
&archive_name,
upload_stats.size,
upload_stats.csum,
- entry.chunk_crypt_mode(),
+ crypt_mode,
)?;
stats.add(SyncStats {
chunk_count: upload_stats.chunk_count as usize,
@@ -1028,13 +1059,16 @@ pub(crate) async fn push_snapshot(
&backup_writer,
IndexType::Fixed(Some(size)),
known_chunks.clone(),
+ encrypt_using_key
+ .as_ref()
+ .map(|(_id, conf)| Arc::clone(conf)),
)
.await?;
target_manifest.add_file(
&archive_name,
upload_stats.size,
upload_stats.csum,
- entry.chunk_crypt_mode(),
+ crypt_mode,
)?;
stats.add(SyncStats {
chunk_count: upload_stats.chunk_count as usize,
@@ -1076,15 +1110,25 @@ pub(crate) async fn push_snapshot(
// Rewrite manifest for pushed snapshot, recreating manifest from source on target,
// needs to update all relevant info for new manifest.
- target_manifest.unprotected = source_manifest.unprotected;
- target_manifest.signature = source_manifest.signature;
- let manifest_json = serde_json::to_value(target_manifest)?;
- let manifest_string = serde_json::to_string_pretty(&manifest_json)?;
+ target_manifest.unprotected = source_manifest.unprotected.clone();
+ let manifest_string = if encrypt_using_key.is_some() {
+ let fp = source_manifest.change_detection_fingerprint()?;
+ target_manifest.set_change_detection_fingerprint(&fp)?;
+ target_manifest.to_string(encrypt_using_key.as_ref().map(|(_id, conf)| conf.as_ref()))?
+ } else {
+ target_manifest.signature = source_manifest.signature.clone();
+ let manifest_json = serde_json::to_value(target_manifest)?;
+ serde_json::to_string_pretty(&manifest_json)?
+ };
let backup_stats = backup_writer
.upload_blob_from_data(
manifest_string.into_bytes(),
MANIFEST_BLOB_NAME.as_ref(),
- upload_options,
+ UploadOptions {
+ compress: true,
+ encrypt: false,
+ ..UploadOptions::default()
+ },
)
.await?;
backup_writer.finish().await?;
@@ -1124,12 +1168,15 @@ async fn push_index(
backup_writer: &BackupWriter,
index_type: IndexType,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+ crypt_config: Option<Arc<CryptConfig>>,
) -> Result<BackupStats, Error> {
let (upload_channel_tx, upload_channel_rx) = mpsc::channel(20);
let mut chunk_infos =
stream::iter(0..index.index_count()).map(move |pos| index.chunk_info(pos).unwrap());
+ let crypt_config_cloned = crypt_config.clone();
tokio::spawn(async move {
+ let mut encrypted_mapping = HashMap::new();
while let Some(chunk_info) = chunk_infos.next().await {
// Avoid reading known chunks, as they are not uploaded by the backup writer anyways
let needs_upload = {
@@ -1143,20 +1190,39 @@ async fn push_index(
chunk_reader
.read_raw_chunk(&chunk_info.digest)
.await
- .map(|chunk| {
- MergedChunkInfo::New(ChunkInfo {
+ .and_then(|chunk| {
+ let (chunk, digest, chunk_len) = match crypt_config_cloned.as_ref() {
+ Some(crypt_config) => {
+ let data = chunk.decode(None, Some(&chunk_info.digest))?;
+ let (chunk, digest) = DataChunkBuilder::new(&data)
+ .compress(true)
+ .crypt_config(crypt_config)
+ .build()?;
+ encrypted_mapping.insert(chunk_info.digest, digest);
+ (chunk, digest, data.len() as u64)
+ }
+ None => (chunk, chunk_info.digest, chunk_info.size()),
+ };
+
+ Ok(MergedChunkInfo::New(ChunkInfo {
chunk,
- digest: chunk_info.digest,
- chunk_len: chunk_info.size(),
+ digest,
+ chunk_len,
offset: chunk_info.range.start,
- })
+ }))
})
} else {
+ let digest =
+ if let Some(encrypted_digest) = encrypted_mapping.get(&chunk_info.digest) {
+ *encrypted_digest
+ } else {
+ chunk_info.digest
+ };
Ok(MergedChunkInfo::Known(vec![(
// Pass size instead of offset, will be replaced with offset by the backup
// writer
chunk_info.size(),
- chunk_info.digest,
+ digest,
)]))
};
let _ = upload_channel_tx.send(merged_chunk_info).await;
@@ -1167,7 +1233,7 @@ async fn push_index(
let upload_options = UploadOptions {
compress: true,
- encrypt: false,
+ encrypt: crypt_config.is_some(),
index_type,
..UploadOptions::default()
};
--
2.47.3
* [PATCH proxmox-backup v4 20/30] ui: define and expose encryption key management menu item and windows
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (18 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 21/30] ui: expose assigning encryption key to sync jobs Christian Ebner
` (9 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Allows creating or removing encryption keys via the WebUI. A new key
entity can be added by either generating a new key by the server
itself or uploading a pre-existing key via a key file, similar to
what Proxmox VE currently allows when setting up a PBS storage.
After creation, the key is shown in a dialog which allows exporting
it. This reuses the same logic as PVE, with slight adaptations to
include the key id and a different API endpoint.
On removal, the user is informed about the risk of no longer being
able to decrypt snapshots.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/Makefile | 2 +
www/NavigationTree.js | 6 +
www/Utils.js | 1 +
www/config/EncryptionKeysView.js | 336 +++++++++++++++++++++++++++
www/window/EncryptionKeysEdit.js | 382 +++++++++++++++++++++++++++++++
5 files changed, 727 insertions(+)
create mode 100644 www/config/EncryptionKeysView.js
create mode 100644 www/window/EncryptionKeysEdit.js
diff --git a/www/Makefile b/www/Makefile
index 5a60e47e1..08ad50846 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -70,6 +70,7 @@ JSSRC= \
config/GCView.js \
config/WebauthnView.js \
config/CertificateView.js \
+ config/EncryptionKeysView.js \
config/NodeOptionView.js \
config/MetricServerView.js \
config/NotificationConfigView.js \
@@ -78,6 +79,7 @@ JSSRC= \
window/BackupGroupChangeOwner.js \
window/CreateDirectory.js \
window/DataStoreEdit.js \
+ window/EncryptionKeysEdit.js \
window/NamespaceEdit.js \
window/MaintenanceOptions.js \
window/NotesEdit.js \
diff --git a/www/NavigationTree.js b/www/NavigationTree.js
index 35b8d693b..f596c7d1b 100644
--- a/www/NavigationTree.js
+++ b/www/NavigationTree.js
@@ -74,6 +74,12 @@ Ext.define('PBS.store.NavigationStore', {
path: 'pbsCertificateConfiguration',
leaf: true,
},
+ {
+ text: gettext('Encryption Keys'),
+ iconCls: 'fa fa-lock',
+ path: 'pbsEncryptionKeysView',
+ leaf: true,
+ },
{
text: gettext('Notifications'),
iconCls: 'fa fa-bell-o',
diff --git a/www/Utils.js b/www/Utils.js
index 350ab820b..bf4b025c7 100644
--- a/www/Utils.js
+++ b/www/Utils.js
@@ -451,6 +451,7 @@ Ext.define('PBS.Utils', {
prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
prunejob: (type, id) => PBS.Utils.render_prune_job_worker_id(id, gettext('Prune Job')),
reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read Objects')),
+ 'remove-encryption-key': [gettext('Key'), gettext('Remove Key')],
'rewind-media': [gettext('Drive'), gettext('Rewind Media')],
's3-refresh': [gettext('Datastore'), gettext('S3 Refresh')],
sync: ['Datastore', gettext('Remote Sync')],
diff --git a/www/config/EncryptionKeysView.js b/www/config/EncryptionKeysView.js
new file mode 100644
index 000000000..35f147799
--- /dev/null
+++ b/www/config/EncryptionKeysView.js
@@ -0,0 +1,336 @@
+Ext.define('pbs-encryption-keys', {
+ extend: 'Ext.data.Model',
+ fields: ['id', 'type', 'hint', 'fingerprint', 'created', 'archived-at'],
+ idProperty: 'id',
+});
+
+Ext.define('PBS.config.EncryptionKeysView', {
+ extend: 'Ext.grid.GridPanel',
+ alias: 'widget.pbsEncryptionKeysView',
+
+ title: gettext('Encryption Keys'),
+
+ stateful: true,
+ stateId: 'grid-encryption-keys',
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ addSyncEncryptionKey: function () {
+ let me = this;
+ Ext.create('PBS.window.EncryptionKeysEdit', {
+ listeners: {
+ destroy: function () {
+ me.reload();
+ },
+ },
+ }).show();
+ },
+
+ addTapeEncryptionKey: function () {
+ let me = this;
+ Ext.create('PBS.TapeManagement.EncryptionEditWindow', {
+ listeners: {
+ destroy: function () {
+ me.reload();
+ },
+ },
+ }).show();
+ },
+
+ archiveEncryptionKey: function () {
+ let me = this;
+ let view = me.getView();
+ let selection = view.getSelection();
+
+ if (!selection || selection.length < 1) {
+ return;
+ }
+
+ if (selection[0].data.type === 'tape') {
+ Ext.Msg.alert(gettext('Error'), gettext('cannot archive tape key'));
+ return;
+ }
+
+ let keyID = selection[0].data.id;
+ Proxmox.Utils.API2Request({
+ url: `/api2/extjs/config/encryption-keys/${keyID}`,
+ method: 'POST',
+ waitMsgTarget: view,
+ failure: function (response, opts) {
+ Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+ },
+ success: function (response, opts) {
+ view.getSelectionModel().deselectAll();
+ me.reload();
+ },
+ });
+ },
+
+ removeEncryptionKey: function () {
+ let me = this;
+ let view = me.getView();
+ let selection = view.getSelection();
+
+ if (!selection || selection.length < 1) {
+ return;
+ }
+
+ let keyType = selection[0].data.type;
+ let keyID = selection[0].data.id;
+ let keyFp = selection[0].data.fingerprint;
+ let endpointUrl =
+ keyType === 'tape'
+ ? `/api2/extjs/config/tape-encryption-keys/${keyFp}`
+ : `/api2/extjs/config/encryption-keys/${keyID}`;
+
+ Ext.create('Proxmox.window.SafeDestroy', {
+ url: endpointUrl,
+ item: {
+ id: `${keyType}/${keyID}`,
+ },
+ autoShow: true,
+ showProgress: false,
+ taskName: 'remove-encryption-key',
+ listeners: {
+ destroy: () => me.reload(),
+ },
+ additionalItems: [
+ {
+ xtype: 'box',
+ userCls: 'pmx-hint',
+ style: {
+ 'inline-size': '375px',
+ 'overflow-wrap': 'break-word',
+ },
+ padding: '5',
+ html: gettext(
+ 'Make sure you have a backup of the encryption key!<br><br>You will not be able to decrypt contents encrypted with this key once removed.',
+ ),
+ },
+ ],
+ }).show();
+ },
+
+ restoreEncryptionKey: function () {
+ Ext.create('Proxmox.window.Edit', {
+ title: gettext('Restore Key'),
+ isCreate: true,
+ submitText: gettext('Restore'),
+ method: 'POST',
+ url: `/api2/extjs/tape/drive`,
+ submitUrl: function (url, values) {
+ let drive = values.drive;
+ delete values.drive;
+ return `${url}/${drive}/restore-key`;
+ },
+ items: [
+ {
+ xtype: 'pbsDriveSelector',
+ fieldLabel: gettext('Drive'),
+ name: 'drive',
+ },
+ {
+ xtype: 'textfield',
+ inputType: 'password',
+ fieldLabel: gettext('Password'),
+ name: 'password',
+ },
+ ],
+ }).show();
+ },
+
+ reload: async function () {
+ let me = this;
+ let view = me.getView();
+
+ let syncKeysFuture = Proxmox.Async.api2({
+ url: '/api2/extjs/config/encryption-keys',
+ method: 'GET',
+ params: {
+ 'include-archived': true,
+ },
+ });
+
+ let tapeKeysFuture = Proxmox.Async.api2({
+ url: '/api2/extjs/config/tape-encryption-keys',
+ method: 'GET',
+ });
+
+ let combinedKeys = [];
+
+ try {
+ let syncKeys = await syncKeysFuture;
+ if (syncKeys?.result?.data) {
+ syncKeys.result.data.forEach((key) => {
+ key.type = 'sync';
+ combinedKeys.push(key);
+ });
+ }
+ } catch (error) {
+ Ext.Msg.alert(gettext('Error'), error);
+ }
+
+ try {
+ let tapeKeys = await tapeKeysFuture;
+ if (tapeKeys?.result?.data) {
+ tapeKeys.result.data.forEach((key) => {
+ key.id = `${key.created}-${key.fingerprint.substring(0, 9).replace(/:/g, '')}`;
+ key.type = 'tape';
+ combinedKeys.push(key);
+ });
+ }
+ } catch (error) {
+ Ext.Msg.alert(gettext('Error'), error);
+ }
+
+ let store = view.getStore().rstore;
+ store.loadData(combinedKeys);
+ store.fireEvent('load', store, combinedKeys, true);
+ },
+
+ init: function () {
+ let me = this;
+ me.reload();
+ me.updateTask = Ext.TaskManager.start({
+ run: () => me.reload(),
+ interval: 5000,
+ });
+ },
+
+ destroy: function () {
+ let me = this;
+ if (me.updateTask) {
+ Ext.TaskManager.stop(me.updateTask);
+ }
+ },
+ },
+
+ listeners: {
+ activate: 'reload',
+ },
+
+ store: {
+ type: 'diff',
+ autoDestroy: true,
+ autoDestroyRstore: true,
+ sorters: 'id',
+ rstore: {
+ type: 'store',
+ storeid: 'pbs-encryption-keys',
+ model: 'pbs-encryption-keys',
+ proxy: {
+ type: 'memory',
+ },
+ },
+ },
+
+ tbar: [
+ {
+ text: gettext('Add'),
+ menu: [
+ {
+ text: gettext('Add Sync Encryption Key'),
+ iconCls: 'fa fa-refresh',
+ handler: 'addSyncEncryptionKey',
+ selModel: false,
+ },
+ {
+ text: gettext('Add Tape Encryption Key'),
+ iconCls: 'pbs-icon-tape',
+ handler: 'addTapeEncryptionKey',
+ selModel: false,
+ },
+ ],
+ },
+ '-',
+ {
+ xtype: 'proxmoxButton',
+ text: gettext('Archive'),
+ handler: 'archiveEncryptionKey',
+ dangerous: true,
+ confirmMsg: Ext.String.format(
+ gettext('Archiving will render the key unusable to encrypt new content, proceed?'),
+ ),
+ disabled: true,
+ enableFn: (item) => item.data.type === 'sync' && !item.data['archived-at'],
+ },
+ '-',
+ {
+ xtype: 'proxmoxButton',
+ text: gettext('Remove'),
+ handler: 'removeEncryptionKey',
+ disabled: true,
+ enableFn: (item) =>
+ (item.data.type === 'sync' && !!item.data['archived-at']) ||
+ item.data.type === 'tape',
+ },
+ '-',
+ {
+ text: gettext('Restore Key'),
+ xtype: 'proxmoxButton',
+ handler: 'restoreEncryptionKey',
+ disabled: true,
+ enableFn: (item) => item.data.type === 'tape',
+ },
+ ],
+
+ viewConfig: {
+ trackOver: false,
+ },
+
+ columns: [
+ {
+ dataIndex: 'id',
+ header: gettext('Key ID'),
+ renderer: Ext.String.htmlEncode,
+ sortable: true,
+ width: 200,
+ },
+ {
+ dataIndex: 'type',
+ header: gettext('Type'),
+ renderer: function (value) {
+ let iconCls, label;
+ if (value === 'sync') {
+ iconCls = 'fa fa-refresh';
+ label = gettext('Sync');
+ } else if (value === 'tape') {
+ iconCls = 'fa pbs-icon-tape';
+ label = gettext('Tape');
+ } else {
+ return value;
+ }
+ return `<i class="${iconCls}"></i> ${label}`;
+ },
+ sortable: true,
+ width: 50,
+ },
+ {
+ dataIndex: 'hint',
+ header: gettext('Hint'),
+ sortable: true,
+ width: 100,
+ },
+ {
+ dataIndex: 'fingerprint',
+ header: gettext('Fingerprint'),
+ sortable: false,
+ width: 600,
+ },
+ {
+ dataIndex: 'created',
+ header: gettext('Created'),
+ renderer: Proxmox.Utils.render_timestamp,
+ sortable: true,
+ flex: 1,
+ },
+ {
+ dataIndex: 'archived-at',
+ header: gettext('Archived'),
+ renderer: (val) => (val ? Proxmox.Utils.render_timestamp(val) : ''),
+ sortable: true,
+ flex: 1,
+ },
+ ],
+});
diff --git a/www/window/EncryptionKeysEdit.js b/www/window/EncryptionKeysEdit.js
new file mode 100644
index 000000000..dfabdd5ea
--- /dev/null
+++ b/www/window/EncryptionKeysEdit.js
@@ -0,0 +1,382 @@
+Ext.define('PBS.ShowEncryptionKey', {
+ extend: 'Ext.window.Window',
+ xtype: 'pbsShowEncryptionKey',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ width: 600,
+ modal: true,
+ resizable: false,
+ title: gettext('Important: Save your Encryption Key'),
+
+ // avoid close by ESC key, force user to more manual action
+ onEsc: Ext.emptyFn,
+ closable: false,
+
+ items: [
+ {
+ xtype: 'form',
+ layout: {
+ type: 'vbox',
+ align: 'stretch',
+ },
+ bodyPadding: 10,
+ border: false,
+ defaults: {
+ anchor: '100%',
+ border: false,
+ padding: '10 0 0 0',
+ },
+ items: [
+ {
+ xtype: 'textfield',
+ fieldLabel: gettext('Key ID'),
+ labelWidth: 80,
+ inputId: 'keyID',
+ cbind: {
+ value: '{keyID}',
+ },
+ editable: false,
+ },
+ {
+ xtype: 'textfield',
+ fieldLabel: gettext('Key'),
+ labelWidth: 80,
+ inputId: 'encryption-key',
+ cbind: {
+ value: '{key}',
+ },
+ editable: false,
+ },
+ {
+ xtype: 'component',
+ html:
+ gettext(
+ 'Keep your encryption key safe, but easily accessible for disaster recovery.',
+ ) +
+ '<br>' +
+ gettext('We recommend the following safe-keeping strategy:'),
+ },
+ {
+ xtype: 'container',
+ layout: 'hbox',
+ items: [
+ {
+ xtype: 'component',
+ html: '1. ' + gettext('Save the key in your password manager.'),
+ flex: 1,
+ },
+ {
+ xtype: 'button',
+ text: gettext('Copy Key'),
+ iconCls: 'fa fa-clipboard x-btn-icon-el-default-toolbar-small',
+ cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+ width: 110,
+ handler: function (b) {
+ document.getElementById('encryption-key').select();
+ document.execCommand('copy');
+ },
+ },
+ ],
+ },
+ {
+ xtype: 'container',
+ layout: 'hbox',
+ items: [
+ {
+ xtype: 'component',
+ html:
+ '2. ' +
+ gettext(
+ 'Download the key to a USB (pen) drive, placed in secure vault.',
+ ),
+ flex: 1,
+ },
+ {
+ xtype: 'button',
+ text: gettext('Download'),
+ iconCls: 'fa fa-download x-btn-icon-el-default-toolbar-small',
+ cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+ width: 110,
+ handler: function (b) {
+ let showWindow = this.up('window');
+
+ let filename = `${showWindow.keyID}.enc`;
+
+ let hiddenElement = document.createElement('a');
+ hiddenElement.href =
+ 'data:attachment/text,' + encodeURI(showWindow.key);
+ hiddenElement.target = '_blank';
+ hiddenElement.download = filename;
+ hiddenElement.click();
+ },
+ },
+ ],
+ },
+ {
+ xtype: 'container',
+ layout: 'hbox',
+ items: [
+ {
+ xtype: 'component',
+ html:
+ '3. ' +
+ gettext('Print as paperkey, laminated and placed in secure vault.'),
+ flex: 1,
+ },
+ {
+ xtype: 'button',
+ text: gettext('Print Key'),
+ iconCls: 'fa fa-print x-btn-icon-el-default-toolbar-small',
+ cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+ width: 110,
+ handler: function (b) {
+ let showWindow = this.up('window');
+ showWindow.paperkey(showWindow.key);
+ },
+ },
+ ],
+ },
+ ],
+ },
+ {
+ xtype: 'component',
+ border: false,
+ padding: '10 10 10 10',
+ userCls: 'pmx-hint',
+ html: gettext(
+ 'Please save the encryption key - losing it will render any backup created with it unusable',
+ ),
+ },
+ ],
+ buttons: [
+ {
+ text: gettext('Close'),
+ handler: function (b) {
+ let showWindow = this.up('window');
+ showWindow.close();
+ },
+ },
+ ],
+ paperkey: function (keyString) {
+ let me = this;
+
+ const key = JSON.parse(keyString);
+
+ const qrwidth = 500;
+ let qrdiv = document.createElement('div');
+ let qrcode = new QRCode(qrdiv, {
+ width: qrwidth,
+ height: qrwidth,
+ correctLevel: QRCode.CorrectLevel.H,
+ });
+ qrcode.makeCode(keyString);
+
+ let shortKeyFP = '';
+ if (key.fingerprint) {
+ shortKeyFP = PBS.Utils.renderKeyID(key.fingerprint);
+ }
+
+ let printFrame = document.createElement('iframe');
+ Object.assign(printFrame.style, {
+ position: 'fixed',
+ right: '0',
+ bottom: '0',
+ width: '0',
+ height: '0',
+ border: '0',
+ });
+ const prettifiedKey = JSON.stringify(key, null, 2);
+ const keyQrBase64 = qrdiv.children[0].toDataURL('image/png');
+ const html = `<html><head><script>
+ window.addEventListener('DOMContentLoaded', (ev) => window.print());
+ </script><style>@media print and (max-height: 150mm) {
+ h4, p { margin: 0; font-size: 1em; }
+ }</style></head><body style="padding: 5px;">
+ <h4>Encryption Key '${me.keyID}' (${shortKeyFP})</h4>
+<p style="font-size:1.2em;font-family:monospace;white-space:pre-wrap;overflow-wrap:break-word;">
+-----BEGIN PROXMOX BACKUP KEY-----
+${prettifiedKey}
+-----END PROXMOX BACKUP KEY-----</p>
+ <center><img style="width: 100%; max-width: ${qrwidth}px;" src="${keyQrBase64}"></center>
+ </body></html>`;
+
+ printFrame.src = 'data:text/html;base64,' + btoa(html);
+ document.body.appendChild(printFrame);
+ me.on('destroy', () => document.body.removeChild(printFrame));
+ },
+});
+
+Ext.define('PBS.window.EncryptionKeysEdit', {
+ extend: 'Proxmox.window.Edit',
+ xtype: 'widget.pbsEncryptionKeysEdit',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ width: 400,
+
+ fieldDefaults: { labelWidth: 120 },
+
+ subject: gettext('Encryption Key'),
+
+ cbindData: function (initialConfig) {
+ let me = this;
+
+ me.url = '/api2/extjs/config/encryption-keys';
+ me.method = 'POST';
+ me.autoLoad = false;
+
+ return {};
+ },
+
+ apiCallDone: function (success, response, options) {
+ let me = this;
+
+ if (!me.rendered) {
+ return;
+ }
+
+ let res = response.result.data;
+ if (!res) {
+ return;
+ }
+
+ let keyIdField = me.down('field[name=id]');
+ Ext.create('PBS.ShowEncryptionKey', {
+ autoShow: true,
+ keyID: keyIdField.getValue(),
+ key: JSON.stringify(res),
+ });
+ },
+
+ viewModel: {
+ data: {
+ keepCryptVisible: false,
+ },
+ },
+
+ items: [
+ {
+ xtype: 'pmxDisplayEditField',
+ name: 'id',
+ fieldLabel: gettext('Encryption Key ID'),
+ renderer: Ext.htmlEncode,
+ allowBlank: false,
+ minLength: 3,
+ editable: true,
+ },
+ {
+ xtype: 'displayfield',
+ fieldLabel: gettext('Key Source'),
+ padding: '2 0',
+ },
+ {
+ xtype: 'radiofield',
+ name: 'keysource',
+ value: true,
+ inputValue: 'new',
+ submitValue: false,
+ boxLabel: gettext('Auto-generate a new encryption key'),
+ padding: '0 0 0 25',
+ },
+ {
+ xtype: 'radiofield',
+ name: 'keysource',
+ inputValue: 'upload',
+ submitValue: false,
+ boxLabel: gettext('Upload an existing encryption key'),
+ padding: '0 0 0 25',
+ listeners: {
+ change: function (f, value) {
+ let editWindow = this.up('window');
+ if (!editWindow.rendered) {
+ return;
+ }
+ let uploadKeyField = editWindow.down('field[name=key]');
+ uploadKeyField.setDisabled(!value);
+ uploadKeyField.setHidden(!value);
+
+ let uploadKeyButton = editWindow.down('filebutton[name=upload-button]');
+ uploadKeyButton.setDisabled(!value);
+ uploadKeyButton.setHidden(!value);
+
+ if (value) {
+ uploadKeyField.validate();
+ } else {
+ uploadKeyField.reset();
+ }
+ },
+ },
+ },
+ {
+ xtype: 'fieldcontainer',
+ layout: 'hbox',
+ items: [
+ {
+ xtype: 'proxmoxtextfield',
+ name: 'key',
+ fieldLabel: gettext('Upload From File'),
+ value: '',
+ disabled: true,
+ hidden: true,
+ allowBlank: false,
+ labelAlign: 'right',
+ flex: 1,
+ emptyText: gettext('Drag-and-drop key file here.'),
+ validator: function (value) {
+ if (value.length) {
+ let key;
+ try {
+ key = JSON.parse(value);
+ } catch (e) {
+ return Ext.String.format(gettext('Failed to parse key - {0}'), e);
+ }
+ if (key.data === undefined) {
+ return gettext('Does not seem like a valid Proxmox Backup key!');
+ }
+ }
+ return true;
+ },
+ afterRender: function () {
+ let me = this;
+ if (!window.FileReader) {
+ // No FileReader support in this browser
+ return;
+ }
+ let cancel = function (ev) {
+ ev = ev.event;
+ if (ev.preventDefault) {
+ ev.preventDefault();
+ }
+ };
+ this.inputEl.on('dragover', cancel);
+ this.inputEl.on('dragenter', cancel);
+ this.inputEl.on('drop', (ev) => {
+ cancel(ev);
+ let reader = new FileReader();
+ reader.onload = (ev) => me.setValue(ev.target.result);
+ reader.readAsText(ev.event.dataTransfer.files[0]);
+ });
+ },
+ },
+ {
+ xtype: 'filebutton',
+ name: 'upload-button',
+ iconCls: 'fa fa-fw fa-folder-open-o x-btn-icon-el-default-toolbar-small',
+ cls: 'x-btn-default-toolbar-small proxmox-inline-button',
+ margin: '0 0 0 4',
+ disabled: true,
+ hidden: true,
+ listeners: {
+ change: function (btn, e, value) {
+ let ev = e.event;
+ let field = btn.up().down('proxmoxtextfield[name=key]');
+ let reader = new FileReader();
+ reader.onload = (ev) => field.setValue(ev.target.result);
+ reader.readAsText(ev.target.files[0]);
+ btn.reset();
+ },
+ },
+ },
+ ],
+ },
+ ],
+});
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox-backup v4 21/30] ui: expose assigning encryption key to sync jobs
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (19 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 20/30] ui: define and expose encryption key management menu item and windows Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 22/30] sync: pull: load encryption key if given in job config Christian Ebner
` (8 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
This allows selecting pre-defined encryption keys and assigning them
to the sync job configuration.
A sync key can either be assigned as the active encryption key of a
sync job in push direction, to be used when pushing new contents, or
be associated with a sync job in pull direction, to be used for
decrypting contents with a matching key fingerprint.
Only keys which are not archived can be used as the active encryption
key, while associations can also be made with archived keys, so that
contents can still be decrypted on pull and a key cannot be removed
while associated with either a push or pull sync job.
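The eligibility rules above can be sketched as follows. This is a simplified illustration only: the `KeyInfo` struct, its field names, and the helper functions are assumptions for the sketch, not the actual PBS types or API:

```rust
// Simplified sketch of the key selection rules; `KeyInfo` and its fields
// are illustrative assumptions, not the actual PBS API types.
struct KeyInfo {
    id: String,
    archived_at: Option<i64>, // epoch timestamp set once the key is archived
}

// Only keys that are not archived qualify as the active encryption key
// of a push sync job.
fn usable_as_active(key: &KeyInfo) -> bool {
    key.archived_at.is_none()
}

// Archived or not, any key may remain associated, keeping pulled or
// previously pushed contents decryptable and blocking key removal.
fn usable_as_associated(_key: &KeyInfo) -> bool {
    true
}

fn main() {
    let fresh = KeyInfo { id: "sync-key-a".into(), archived_at: None };
    let archived = KeyInfo { id: "sync-key-b".into(), archived_at: Some(1_745_000_000) };
    assert!(usable_as_active(&fresh));
    assert!(!usable_as_active(&archived));
    assert!(usable_as_associated(&archived));
    println!("{} usable, {} archived-but-associated", fresh.id, archived.id);
}
```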
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
www/Makefile | 1 +
www/form/EncryptionKeySelector.js | 96 +++++++++++++++++++++++++++++++
www/window/SyncJobEdit.js | 62 ++++++++++++++++++++
3 files changed, 159 insertions(+)
create mode 100644 www/form/EncryptionKeySelector.js
diff --git a/www/Makefile b/www/Makefile
index 08ad50846..51da9d74e 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -55,6 +55,7 @@ JSSRC= \
form/GroupSelector.js \
form/GroupFilter.js \
form/VerifyOutdatedAfter.js \
+ form/EncryptionKeySelector.js \
data/RunningTasksStore.js \
button/TaskButton.js \
panel/PrunePanel.js \
diff --git a/www/form/EncryptionKeySelector.js b/www/form/EncryptionKeySelector.js
new file mode 100644
index 000000000..e0390e56a
--- /dev/null
+++ b/www/form/EncryptionKeySelector.js
@@ -0,0 +1,96 @@
+Ext.define('PBS.form.EncryptionKeySelector', {
+ extend: 'Ext.form.field.ComboBox',
+ alias: 'widget.pbsEncryptionKeySelector',
+
+ queryMode: 'local',
+
+ valueField: 'id',
+ displayField: 'id',
+
+ emptyText: gettext('None'),
+
+ listConfig: {
+ columns: [
+ {
+ dataIndex: 'id',
+ header: gettext('Key ID'),
+ sortable: true,
+ flex: 1,
+ },
+ {
+ dataIndex: 'created',
+ header: gettext('Created'),
+ sortable: true,
+ renderer: Proxmox.Utils.render_timestamp,
+ flex: 1,
+ },
+ {
+ dataIndex: 'archived-at',
+ header: gettext('Archived'),
+ renderer: (val) => (val ? Proxmox.Utils.render_timestamp(val) : ''),
+ sortable: true,
+ flex: 1,
+ },
+ ],
+ emptyText: `<div class="x-grid-empty">${gettext('No key accessible.')}</div>`,
+ },
+
+ config: {
+ deleteEmpty: true,
+ extraRequestParams: {},
+ },
+ // override framework function to implement deleteEmpty behaviour
+ getSubmitData: function () {
+ let me = this;
+
+ let data = null;
+ if (!me.disabled && me.submitValue) {
+ let val = me.getSubmitValue();
+ if (val !== null && val !== '') {
+ data = {};
+ data[me.getName()] = val;
+ } else if (me.getDeleteEmpty()) {
+ data = {};
+ data.delete = me.getName();
+ }
+ }
+
+ return data;
+ },
+
+ triggers: {
+ clear: {
+ cls: 'pmx-clear-trigger',
+ weight: -1,
+ hidden: true,
+ handler: function () {
+ this.triggers.clear.setVisible(false);
+ this.setValue('');
+ },
+ },
+ },
+
+ listeners: {
+ change: function (field, value) {
+ let canClear = (value ?? '') !== '';
+ field.triggers.clear.setVisible(canClear);
+ },
+ },
+
+ initComponent: function () {
+ let me = this;
+
+ me.store = Ext.create('Ext.data.Store', {
+ model: 'pbs-encryption-keys',
+ autoLoad: true,
+ proxy: {
+ type: 'proxmox',
+ timeout: 30 * 1000,
+ url: `/api2/json/config/encryption-keys`,
+ extraParams: me.extraRequestParams,
+ },
+ });
+
+ me.callParent();
+ },
+});
diff --git a/www/window/SyncJobEdit.js b/www/window/SyncJobEdit.js
index 074c7855a..510c3f89c 100644
--- a/www/window/SyncJobEdit.js
+++ b/www/window/SyncJobEdit.js
@@ -34,10 +34,12 @@ Ext.define('PBS.window.SyncJobEdit', {
if (me.syncDirection === 'push') {
me.subject = gettext('Sync Job - Push Direction');
me.syncDirectionPush = true;
+ me.syncCryptKeyMultiSelect = false;
me.syncRemoteLabel = gettext('Target Remote');
me.syncRemoteDatastore = gettext('Target Datastore');
me.syncRemoteNamespace = gettext('Target Namespace');
me.syncLocalOwner = gettext('Local User');
+ me.associatedKeysLabel = gettext('Associated Keys');
// Sync direction request parameter is only required for creating new jobs,
// for edit and delete it is derived from the job config given by it's id.
if (me.isCreate) {
@@ -52,6 +54,7 @@ Ext.define('PBS.window.SyncJobEdit', {
me.syncRemoteDatastore = gettext('Source Datastore');
me.syncRemoteNamespace = gettext('Source Namespace');
me.syncLocalOwner = gettext('Local Owner');
+ me.associatedKeysLabel = gettext('Decryption Keys');
}
return {};
@@ -560,6 +563,65 @@ Ext.define('PBS.window.SyncJobEdit', {
},
],
},
+ {
+ xtype: 'inputpanel',
+ title: gettext('Encryption'),
+ column1: [
+ {
+ xtype: 'pbsEncryptionKeySelector',
+ name: 'active-encryption-key',
+ fieldLabel: gettext('Active Encryption Key'),
+ multiSelect: false,
+ cbind: {
+ deleteEmpty: '{!isCreate}',
+ disabled: '{!syncDirectionPush}',
+ hidden: '{!syncDirectionPush}',
+ },
+ },
+ {
+ xtype: 'pbsEncryptionKeySelector',
+ name: 'associated-key',
+ multiSelect: true,
+ cbind: {
+ fieldLabel: '{associatedKeysLabel}',
+ deleteEmpty: '{!isCreate}',
+ },
+ extraRequestParams: {
+ 'include-archived': true,
+ },
+ },
+ ],
+ column2: [
+ {
+ xtype: 'box',
+ style: {
+ 'inline-size': '325px',
+ 'overflow-wrap': 'break-word',
+ },
+ padding: '5',
+ html: gettext(
'When pushing, the system uses the active encryption key to encrypt unencrypted source snapshots. It leaves existing encrypted content as-is, and skips partially encrypted content.',
+ ),
+ cbind: {
+ hidden: '{!syncDirectionPush}',
+ },
+ },
+ {
+ xtype: 'box',
+ style: {
+ 'inline-size': '325px',
+ 'overflow-wrap': 'break-word',
+ },
+ padding: '5',
+ html: gettext(
+ 'To prevent premature removal, associated keys hold a reference to a key until you explicitly unlink it. When you change your active encryption key, the system automatically associates the old key to protect it from accidental deletion, ensuring you can still decrypt older contents.',
+ ),
+ cbind: {
+ hidden: '{!syncDirectionPush}',
+ },
+ },
+ ],
+ },
],
},
});
--
2.47.3
* [PATCH proxmox-backup v4 22/30] sync: pull: load encryption key if given in job config
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (20 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 21/30] ui: expose assigning encryption key to sync jobs Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 23/30] sync: expand source chunk reader trait by crypt config Christian Ebner
` (7 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
If configured and passed in on PullParameters construction, check
access and load the encryption keys. Snapshots encrypted with a key
matching one of these fingerprints should be decrypted during pull
sync.
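The fingerprint matching can be illustrated with a small sketch. Key ids and fingerprints are modeled as plain strings here, whereas the real code carries `Arc<CryptConfig>` instances and a dedicated fingerprint type; `select_key` is a hypothetical helper name:

```rust
use std::collections::HashMap;

// Sketch: pick the decryption key whose fingerprint matches the one
// recorded in the snapshot manifest. Returns the matching key id, if any.
fn select_key<'a>(keys: &'a HashMap<String, String>, manifest_fp: &str) -> Option<&'a str> {
    keys.iter()
        .find(|(_, fp)| fp.as_str() == manifest_fp)
        .map(|(id, _)| id.as_str())
}

fn main() {
    let mut keys = HashMap::new();
    keys.insert("sync-key-1".to_string(), "aa:bb:cc".to_string());
    keys.insert("sync-key-2".to_string(), "dd:ee:ff".to_string());
    // A snapshot encrypted with the second key is matched by fingerprint.
    assert_eq!(select_key(&keys, "dd:ee:ff"), Some("sync-key-2"));
    // Snapshots with an unknown fingerprint find no key and stay encrypted.
    assert_eq!(select_key(&keys, "00:11:22"), None);
}
```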
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/api2/pull.rs | 15 +++++++++++++--
src/server/pull.rs | 17 +++++++++++++++++
2 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 4b1fd5e60..7ca12fe99 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -7,8 +7,8 @@ use proxmox_router::{Permission, Router, RpcEnvironment};
use proxmox_schema::api;
use pbs_api_types::{
- Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, DATASTORE_SCHEMA,
- GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
+ Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, CRYPT_KEY_ID_SCHEMA,
+ DATASTORE_SCHEMA, GROUP_FILTER_LIST_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA,
RESYNC_CORRUPT_SCHEMA, SYNC_ENCRYPTED_ONLY_SCHEMA, SYNC_VERIFIED_ONLY_SCHEMA,
TRANSFER_LAST_SCHEMA,
@@ -91,6 +91,7 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
sync_job.encrypted_only,
sync_job.verified_only,
sync_job.resync_corrupt,
+ sync_job.associated_key.clone(),
)
}
}
@@ -148,6 +149,14 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
schema: RESYNC_CORRUPT_SCHEMA,
optional: true,
},
+ "decryption-keys": {
+ type: Array,
+ description: "List of decryption keys.",
+ items: {
+ schema: CRYPT_KEY_ID_SCHEMA,
+ },
+ optional: true,
+ },
},
},
access: {
@@ -175,6 +184,7 @@ async fn pull(
encrypted_only: Option<bool>,
verified_only: Option<bool>,
resync_corrupt: Option<bool>,
+ decryption_keys: Option<Vec<String>>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -215,6 +225,7 @@ async fn pull(
encrypted_only,
verified_only,
resync_corrupt,
+ decryption_keys,
)?;
// fixme: set to_stdout to false?
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 688f95574..1376771e3 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -8,6 +8,7 @@ use std::sync::{Arc, Mutex};
use std::time::SystemTime;
use anyhow::{bail, format_err, Context, Error};
+use pbs_tools::crypt_config::CryptConfig;
use proxmox_human_byte::HumanByte;
use tracing::{info, warn};
@@ -65,6 +66,8 @@ pub(crate) struct PullParameters {
verified_only: bool,
/// Whether to re-sync corrupted snapshots
resync_corrupt: bool,
+ /// Decryption key ids and configs to decrypt snapshots with matching key fingerprint
+ crypt_configs: Vec<(String, Arc<CryptConfig>)>,
}
impl PullParameters {
@@ -85,6 +88,7 @@ impl PullParameters {
encrypted_only: Option<bool>,
verified_only: Option<bool>,
resync_corrupt: Option<bool>,
+ decryption_keys: Option<Vec<String>>,
) -> Result<Self, Error> {
if let Some(max_depth) = max_depth {
ns.check_max_depth(max_depth)?;
@@ -126,6 +130,18 @@ impl PullParameters {
let group_filter = group_filter.unwrap_or_default();
+ let crypt_configs = if let Some(key_ids) = &decryption_keys {
+ let mut crypt_configs = Vec::with_capacity(key_ids.len());
+ for key_id in key_ids {
+ let crypt_config =
+ crate::server::sync::check_privs_and_load_key_config(key_id, &owner, false)?;
+ crypt_configs.push((key_id.to_string(), crypt_config));
+ }
+ crypt_configs
+ } else {
+ Vec::new()
+ };
+
Ok(Self {
source,
target,
@@ -137,6 +153,7 @@ impl PullParameters {
encrypted_only,
verified_only,
resync_corrupt,
+ crypt_configs,
})
}
}
--
2.47.3
* [PATCH proxmox-backup v4 23/30] sync: expand source chunk reader trait by crypt config
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (21 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 22/30] sync: pull: load encryption key if given in job config Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 24/30] sync: pull: introduce and use decrypt index writer if " Christian Ebner
` (6 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Allows passing in the crypt config for the source chunk reader,
making it possible to decrypt chunks when fetching.
This will be used by the pull sync job to decrypt snapshot chunks
which have been encrypted with an encryption key matching one given
in the pull job configuration.
This remains disarmed by not setting the crypt config until the rest
of the logic to correctly decrypt snapshots on pull, including
manifest, index files and chunks, is put in place in subsequent code
changes.
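The "disarmed" state can be sketched as follows: every call site passes `None` for the new parameter, so the reader keeps its pre-patch passthrough behaviour. The types below are simplified stand-ins for the real `SyncSourceReader`/`CryptConfig` interfaces, not the actual implementation:

```rust
// Placeholder standing in for pbs_tools::crypt_config::CryptConfig.
struct CryptConfig;

// With `None` the chunk passes through untouched -- the pre-patch behaviour
// every call site keeps for now; decryption lands in subsequent patches.
fn read_chunk(crypt_config: Option<&CryptConfig>, raw: &[u8]) -> Vec<u8> {
    match crypt_config {
        Some(_cfg) => unimplemented!("decryption wired up in subsequent patches"),
        None => raw.to_vec(),
    }
}

fn main() {
    let raw = b"chunk-data";
    // No crypt config given: chunk data is returned unchanged.
    assert_eq!(read_chunk(None, raw), raw.to_vec());
}
```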
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 8 ++++++--
src/server/push.rs | 4 ++--
src/server/sync.rs | 28 ++++++++++++++++++++++------
3 files changed, 30 insertions(+), 10 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 1376771e3..4587af5c9 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -302,6 +302,7 @@ async fn pull_single_archive<'a>(
snapshot: &'a pbs_datastore::BackupDir,
archive_info: &'a FileInfo,
encountered_chunks: Arc<Mutex<EncounteredChunks>>,
+ crypt_config: Option<Arc<CryptConfig>>,
backend: &DatastoreBackend,
) -> Result<SyncStats, Error> {
let archive_name = &archive_info.filename;
@@ -332,7 +333,7 @@ async fn pull_single_archive<'a>(
} else {
let stats = pull_index_chunks(
reader
- .chunk_reader(archive_info.crypt_mode)
+ .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
.context("failed to get chunk reader")?,
snapshot.datastore().clone(),
index,
@@ -355,7 +356,7 @@ async fn pull_single_archive<'a>(
} else {
let stats = pull_index_chunks(
reader
- .chunk_reader(archive_info.crypt_mode)
+ .chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
.context("failed to get chunk reader")?,
snapshot.datastore().clone(),
index,
@@ -469,6 +470,8 @@ async fn pull_snapshot<'a>(
return Ok(sync_stats);
}
+ let crypt_config = None;
+
let backend = ¶ms.target.backend;
for item in manifest.files() {
let mut path = snapshot.full_path();
@@ -515,6 +518,7 @@ async fn pull_snapshot<'a>(
snapshot,
item,
encountered_chunks.clone(),
+ crypt_config.clone(),
backend,
)
.await?;
diff --git a/src/server/push.rs b/src/server/push.rs
index bc8c3810f..69958c89b 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1019,7 +1019,7 @@ pub(crate) async fn push_snapshot(
ArchiveType::DynamicIndex => {
let index = DynamicIndexReader::open(&path)?;
let chunk_reader = reader
- .chunk_reader(entry.chunk_crypt_mode())
+ .chunk_reader(None, entry.chunk_crypt_mode())
.context("failed to get chunk reader")?;
let upload_stats = push_index(
&archive_name,
@@ -1049,7 +1049,7 @@ pub(crate) async fn push_snapshot(
ArchiveType::FixedIndex => {
let index = FixedIndexReader::open(&path)?;
let chunk_reader = reader
- .chunk_reader(entry.chunk_crypt_mode())
+ .chunk_reader(None, entry.chunk_crypt_mode())
.context("failed to get chunk reader")?;
let size = index.index_bytes();
let upload_stats = push_index(
diff --git a/src/server/sync.rs b/src/server/sync.rs
index da1b0a06f..e97936ef6 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -90,7 +90,11 @@ impl SyncStats {
/// and checking whether chunk sync should be skipped.
pub(crate) trait SyncSourceReader: Send + Sync {
/// Returns a chunk reader with the specified encryption mode.
- fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error>;
+ fn chunk_reader(
+ &self,
+ crypt_config: Option<Arc<CryptConfig>>,
+ crypt_mode: CryptMode,
+ ) -> Result<Arc<dyn AsyncReadChunk>, Error>;
/// Asynchronously loads a file from the source into a local file.
/// `filename` is the name of the file to load from the source.
@@ -117,9 +121,17 @@ pub(crate) struct LocalSourceReader {
#[async_trait::async_trait]
impl SyncSourceReader for RemoteSourceReader {
- fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error> {
- let chunk_reader =
- RemoteChunkReader::new(self.backup_reader.clone(), None, crypt_mode, HashMap::new());
+ fn chunk_reader(
+ &self,
+ crypt_config: Option<Arc<CryptConfig>>,
+ crypt_mode: CryptMode,
+ ) -> Result<Arc<dyn AsyncReadChunk>, Error> {
+ let chunk_reader = RemoteChunkReader::new(
+ self.backup_reader.clone(),
+ crypt_config,
+ crypt_mode,
+ HashMap::new(),
+ );
Ok(Arc::new(chunk_reader))
}
@@ -191,8 +203,12 @@ impl SyncSourceReader for RemoteSourceReader {
#[async_trait::async_trait]
impl SyncSourceReader for LocalSourceReader {
- fn chunk_reader(&self, crypt_mode: CryptMode) -> Result<Arc<dyn AsyncReadChunk>, Error> {
- let chunk_reader = LocalChunkReader::new(self.datastore.clone(), None, crypt_mode)?;
+ fn chunk_reader(
+ &self,
+ crypt_config: Option<Arc<CryptConfig>>,
+ crypt_mode: CryptMode,
+ ) -> Result<Arc<dyn AsyncReadChunk>, Error> {
+ let chunk_reader = LocalChunkReader::new(self.datastore.clone(), crypt_config, crypt_mode)?;
Ok(Arc::new(chunk_reader))
}
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH proxmox-backup v4 24/30] sync: pull: introduce and use decrypt index writer if crypt config
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (22 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 23/30] sync: expand source chunk reader trait by crypt config Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 25/30] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
` (5 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
In order to decrypt an encrypted index file during a pull sync job
when a matching decryption key is configured, the index has to be
rewritten, as the chunks have to be decrypted and the new digests
calculated based on the decrypted chunk data. The newly written index
file finally needs to replace the original one, which is achieved by
replacing the original tempfile after pulling the chunks.
In order to be able to do so, provide a DecryptedIndexWriter instance
to the chunk pulling logic. The DecryptedIndexWriter provides variants
for fixed and dynamic index writers, or none if no rewriting should
happen.
This remains disarmed for the time being: the crypt config is never
passed until the logic to decrypt the chunks and re-calculate the
digests is in place, which is done in subsequent code changes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 138 ++++++++++++++++++++++++++++++---------------
1 file changed, 92 insertions(+), 46 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 4587af5c9..174e50a91 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -21,8 +21,8 @@ use pbs_api_types::{
use pbs_client::BackupRepository;
use pbs_config::CachedUserInfo;
use pbs_datastore::data_blob::DataBlob;
-use pbs_datastore::dynamic_index::DynamicIndexReader;
-use pbs_datastore::fixed_index::FixedIndexReader;
+use pbs_datastore::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
+use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{BackupManifest, FileInfo};
use pbs_datastore::read_chunk::AsyncReadChunk;
@@ -164,6 +164,7 @@ async fn pull_index_chunks<I: IndexFile>(
index: I,
encountered_chunks: Arc<Mutex<EncounteredChunks>>,
backend: &DatastoreBackend,
+ decrypted_index_writer: Option<DecryptedIndexWriter>,
) -> Result<SyncStats, Error> {
use futures::stream::{self, StreamExt, TryStreamExt};
@@ -199,55 +200,61 @@ async fn pull_index_chunks<I: IndexFile>(
let bytes = Arc::new(AtomicUsize::new(0));
let chunk_count = Arc::new(AtomicUsize::new(0));
- stream
- .map(|info| {
- let target = Arc::clone(&target);
- let chunk_reader = chunk_reader.clone();
- let bytes = Arc::clone(&bytes);
- let chunk_count = Arc::clone(&chunk_count);
- let verify_and_write_channel = verify_and_write_channel.clone();
- let encountered_chunks = Arc::clone(&encountered_chunks);
-
- Ok::<_, Error>(async move {
- {
- // limit guard scope
- let mut guard = encountered_chunks.lock().unwrap();
- if let Some(touched) = guard.check_reusable(&info.digest) {
- if touched {
- return Ok::<_, Error>(());
- }
- let chunk_exists = proxmox_async::runtime::block_in_place(|| {
- target.cond_touch_chunk(&info.digest, false)
- })?;
- if chunk_exists {
- guard.mark_touched(&info.digest);
- //info!("chunk {} exists {}", pos, hex::encode(digest));
- return Ok::<_, Error>(());
- }
+ let stream = stream.map(|info| {
+ let target = Arc::clone(&target);
+ let chunk_reader = chunk_reader.clone();
+ let bytes = Arc::clone(&bytes);
+ let chunk_count = Arc::clone(&chunk_count);
+ let verify_and_write_channel = verify_and_write_channel.clone();
+ let encountered_chunks = Arc::clone(&encountered_chunks);
+
+ Ok::<_, Error>(async move {
+ {
+ // limit guard scope
+ let mut guard = encountered_chunks.lock().unwrap();
+ if let Some(touched) = guard.check_reusable(&info.digest) {
+ if touched {
+ return Ok::<_, Error>(());
+ }
+ let chunk_exists = proxmox_async::runtime::block_in_place(|| {
+ target.cond_touch_chunk(&info.digest, false)
+ })?;
+ if chunk_exists {
+ guard.mark_touched(&info.digest);
+ //info!("chunk {} exists {}", pos, hex::encode(digest));
+ return Ok::<_, Error>(());
}
- // mark before actually downloading the chunk, so this happens only once
- guard.mark_reusable(&info.digest);
- guard.mark_touched(&info.digest);
}
+ // mark before actually downloading the chunk, so this happens only once
+ guard.mark_reusable(&info.digest);
+ guard.mark_touched(&info.digest);
+ }
- //info!("sync {} chunk {}", pos, hex::encode(digest));
- let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
- let raw_size = chunk.raw_size() as usize;
+ //info!("sync {} chunk {}", pos, hex::encode(digest));
+ let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+ let raw_size = chunk.raw_size() as usize;
- // decode, verify and write in a separate threads to maximize throughput
- proxmox_async::runtime::block_in_place(|| {
- verify_and_write_channel.send((chunk, info.digest, info.size()))
- })?;
+ // decode, verify and write in a separate threads to maximize throughput
+ proxmox_async::runtime::block_in_place(|| {
+ verify_and_write_channel.send((chunk, info.digest, info.size()))
+ })?;
- bytes.fetch_add(raw_size, Ordering::SeqCst);
- chunk_count.fetch_add(1, Ordering::SeqCst);
+ bytes.fetch_add(raw_size, Ordering::SeqCst);
+ chunk_count.fetch_add(1, Ordering::SeqCst);
- Ok(())
- })
+ Ok(())
})
- .try_buffer_unordered(20)
- .try_for_each(|_res| futures::future::ok(()))
- .await?;
+ });
+
+ if decrypted_index_writer.is_none() {
+ stream
+ .try_buffer_unordered(20)
+ .try_for_each(|_res| futures::future::ok(()))
+ .await?;
+ } else {
+ // must keep chunk order to correctly rewrite index file
+ stream.try_for_each(|item| item).await?;
+ }
drop(verify_and_write_channel);
@@ -328,9 +335,15 @@ async fn pull_single_archive<'a>(
let (csum, size) = index.compute_csum();
verify_archive(archive_info, &csum, size)?;
- if reader.skip_chunk_sync(snapshot.datastore().name()) {
+ if crypt_config.is_none() && reader.skip_chunk_sync(snapshot.datastore().name()) {
info!("skipping chunk sync for same datastore");
} else {
+ let new_index_writer = if crypt_config.is_some() {
+ let writer = DynamicIndexWriter::create(&path)?;
+ Some(DecryptedIndexWriter::Dynamic(Arc::new(Mutex::new(writer))))
+ } else {
+ None
+ };
let stats = pull_index_chunks(
reader
.chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
@@ -339,8 +352,18 @@ async fn pull_single_archive<'a>(
index,
encountered_chunks,
backend,
+ new_index_writer.clone(),
)
.await?;
+ if let Some(DecryptedIndexWriter::Dynamic(index)) = &new_index_writer {
+ let _csum = index.lock().unwrap().close()?;
+
+ // For both cases, with and without rewriting the index the final index is
+ // persisted with a rename of the tempfile. Therefore, overwrite current
+ // tempfile here so it will be finally persisted instead.
+ std::fs::rename(&path, &tmp_path)?;
+ }
+
sync_stats.add(stats);
}
}
@@ -351,9 +374,16 @@ async fn pull_single_archive<'a>(
let (csum, size) = index.compute_csum();
verify_archive(archive_info, &csum, size)?;
- if reader.skip_chunk_sync(snapshot.datastore().name()) {
+ if crypt_config.is_none() && reader.skip_chunk_sync(snapshot.datastore().name()) {
info!("skipping chunk sync for same datastore");
} else {
+ let new_index_writer = if crypt_config.is_some() {
+ let writer =
+ FixedIndexWriter::create(&path, Some(size), index.chunk_size as u32)?;
+ Some(DecryptedIndexWriter::Fixed(Arc::new(Mutex::new(writer))))
+ } else {
+ None
+ };
let stats = pull_index_chunks(
reader
.chunk_reader(crypt_config.clone(), archive_info.crypt_mode)
@@ -362,8 +392,18 @@ async fn pull_single_archive<'a>(
index,
encountered_chunks,
backend,
+ new_index_writer.clone(),
)
.await?;
+ if let Some(DecryptedIndexWriter::Fixed(index)) = &new_index_writer {
+ let _csum = index.lock().unwrap().close()?;
+
+ // For both cases, with and without rewriting the index the final index is
+ // persisted with a rename of the tempfile. Therefore, overwrite current
+ // tempfile here so it will be finally persisted instead.
+ std::fs::rename(&path, &tmp_path)?;
+ }
+
sync_stats.add(stats);
}
}
@@ -1283,3 +1323,9 @@ impl EncounteredChunks {
}
}
}
+
+#[derive(Clone)]
+enum DecryptedIndexWriter {
+ Fixed(Arc<Mutex<FixedIndexWriter>>),
+ Dynamic(Arc<Mutex<DynamicIndexWriter>>),
+}
--
2.47.3
* [PATCH proxmox-backup v4 25/30] sync: pull: extend encountered chunk by optional decrypted digest
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (23 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 24/30] sync: pull: introduce and use decrypt index writer if " Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 26/30] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
` (4 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
For index files being decrypted during the pull, it is not enough to
keep track of the processed source chunks; the decrypted digest has
to be known as well in order to rewrite the index file.
Extend the encountered chunks state so this can be tracked as well.
To not introduce clippy warnings and to keep the code readable,
introduce the EncounteredChunkInfo struct as internal type for the
hash map values.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 78 +++++++++++++++++++++++++++++++++-------------
1 file changed, 56 insertions(+), 22 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 174e50a91..1c1c9458e 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -176,7 +176,7 @@ async fn pull_index_chunks<I: IndexFile>(
.filter(|info| {
let guard = encountered_chunks.lock().unwrap();
match guard.check_reusable(&info.digest) {
- Some(touched) => !touched, // reusable and already touched, can always skip
+ Some(reusable) => !reusable.touched, // reusable and already touched, can always skip
None => true,
}
}),
@@ -212,22 +212,22 @@ async fn pull_index_chunks<I: IndexFile>(
{
// limit guard scope
let mut guard = encountered_chunks.lock().unwrap();
- if let Some(touched) = guard.check_reusable(&info.digest) {
- if touched {
+ if let Some(reusable) = guard.check_reusable(&info.digest) {
+ if reusable.touched {
return Ok::<_, Error>(());
}
let chunk_exists = proxmox_async::runtime::block_in_place(|| {
target.cond_touch_chunk(&info.digest, false)
})?;
if chunk_exists {
- guard.mark_touched(&info.digest);
+ guard.mark_touched(&info.digest, None);
//info!("chunk {} exists {}", pos, hex::encode(digest));
return Ok::<_, Error>(());
}
}
// mark before actually downloading the chunk, so this happens only once
- guard.mark_reusable(&info.digest);
- guard.mark_touched(&info.digest);
+ guard.mark_reusable(&info.digest, None);
+ guard.mark_touched(&info.digest, None);
}
//info!("sync {} chunk {}", pos, hex::encode(digest));
@@ -828,7 +828,7 @@ async fn pull_group(
for pos in 0..index.index_count() {
let chunk_info = index.chunk_info(pos).unwrap();
- reusable_chunks.mark_reusable(&chunk_info.digest);
+ reusable_chunks.mark_reusable(&chunk_info.digest, None);
}
}
}
@@ -1266,12 +1266,23 @@ async fn pull_ns(
Ok((progress, sync_stats, errors))
}
+struct EncounteredChunkInfo {
+ reusable: bool,
+ touched: bool,
+ decrypted_digest: Option<[u8; 32]>,
+}
+
/// Store the state of encountered chunks, tracking if they can be reused for the
/// index file currently being pulled and if the chunk has already been touched
/// during this sync.
struct EncounteredChunks {
- // key: digest, value: (reusable, touched)
- chunk_set: HashMap<[u8; 32], (bool, bool)>,
+ chunk_set: HashMap<[u8; 32], EncounteredChunkInfo>,
+}
+
+/// Properties of a reusable chunk
+struct ReusableEncounteredChunk<'a> {
+ touched: bool,
+ decrypted_digest: Option<&'a [u8; 32]>,
}
impl EncounteredChunks {
@@ -1284,41 +1295,64 @@ impl EncounteredChunks {
/// Check if the current state allows to reuse this chunk and if so,
/// if the chunk has already been touched.
- fn check_reusable(&self, digest: &[u8; 32]) -> Option<bool> {
- if let Some((reusable, touched)) = self.chunk_set.get(digest) {
- if !reusable {
+ fn check_reusable(&self, digest: &[u8; 32]) -> Option<ReusableEncounteredChunk<'_>> {
+ if let Some(chunk_info) = self.chunk_set.get(digest) {
+ if !chunk_info.reusable {
None
} else {
- Some(*touched)
+ Some(ReusableEncounteredChunk {
+ touched: chunk_info.touched,
+ decrypted_digest: chunk_info.decrypted_digest.as_ref(),
+ })
}
} else {
None
}
}
- /// Mark chunk as reusable, inserting it as un-touched if not present
- fn mark_reusable(&mut self, digest: &[u8; 32]) {
+ /// Mark chunk as reusable, inserting it as un-touched if not present.
+ ///
+ /// If the mapping already contains the digest, set the decrypted digest only
+ /// if not already set previously.
+ fn mark_reusable(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
match self.chunk_set.entry(*digest) {
Entry::Occupied(mut occupied) => {
- let (reusable, _touched) = occupied.get_mut();
- *reusable = true;
+ let chunk_info = occupied.get_mut();
+ chunk_info.reusable = true;
+ if chunk_info.decrypted_digest.is_none() {
+ chunk_info.decrypted_digest = decrypted_digest;
+ }
}
Entry::Vacant(vacant) => {
- vacant.insert((true, false));
+ vacant.insert(EncounteredChunkInfo {
+ reusable: true,
+ touched: false,
+ decrypted_digest,
+ });
}
}
}
/// Mark chunk as touched during this sync, inserting it as not reusable
/// but touched if not present.
- fn mark_touched(&mut self, digest: &[u8; 32]) {
+ ///
+ /// If the mapping already contains the digest, set the decrypted digest only
+ /// if not already set previously.
+ fn mark_touched(&mut self, digest: &[u8; 32], decrypted_digest: Option<[u8; 32]>) {
match self.chunk_set.entry(*digest) {
Entry::Occupied(mut occupied) => {
- let (_reusable, touched) = occupied.get_mut();
- *touched = true;
+ let chunk_info = occupied.get_mut();
+ chunk_info.touched = true;
+ if chunk_info.decrypted_digest.is_none() {
+ chunk_info.decrypted_digest = decrypted_digest;
+ }
}
Entry::Vacant(vacant) => {
- vacant.insert((false, true));
+ vacant.insert(EncounteredChunkInfo {
+ reusable: false,
+ touched: true,
+ decrypted_digest,
+ });
}
}
}
--
2.47.3
* [PATCH proxmox-backup v4 26/30] sync: pull: decrypt blob files on pull if encryption key is configured
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (24 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 25/30] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 27/30] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
` (3 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
During pull, blob files are stored in a temporary file before being
renamed to the actual blob filename as stored in the manifest.
If a decryption key is configured in the pull parameters, use the
temporary blob file after downloading it from the remote to decrypt
the contents and re-encode them as a new, compressed but unencrypted
blob file. Rename the decrypted tempfile over the regular tmpfile, so
it is the one finally moved into place.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 64 +++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 61 insertions(+), 3 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 1c1c9458e..12da4404e 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -2,7 +2,8 @@
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
-use std::io::Seek;
+use std::io::{BufReader, Read, Seek, Write};
+use std::os::fd::AsRawFd;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::time::SystemTime;
@@ -14,7 +15,7 @@ use tracing::{info, warn};
use pbs_api_types::{
print_store_and_ns, ArchiveType, Authid, BackupArchiveName, BackupDir, BackupGroup,
- BackupNamespace, GroupFilter, Operation, RateLimitConfig, Remote, SnapshotListItem,
+ BackupNamespace, CryptMode, GroupFilter, Operation, RateLimitConfig, Remote, SnapshotListItem,
VerifyState, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, MAX_NAMESPACE_DEPTH,
PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
};
@@ -26,7 +27,9 @@ use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{BackupManifest, FileInfo};
use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::{check_backup_owner, DataStore, DatastoreBackend, StoreProgress};
+use pbs_datastore::{
+ check_backup_owner, DataBlobReader, DataStore, DatastoreBackend, StoreProgress,
+};
use pbs_tools::sha::sha256;
use super::sync::{
@@ -311,6 +314,7 @@ async fn pull_single_archive<'a>(
encountered_chunks: Arc<Mutex<EncounteredChunks>>,
crypt_config: Option<Arc<CryptConfig>>,
backend: &DatastoreBackend,
+ new_manifest: Option<Arc<Mutex<BackupManifest>>>,
) -> Result<SyncStats, Error> {
let archive_name = &archive_info.filename;
let mut path = snapshot.full_path();
@@ -411,6 +415,58 @@ async fn pull_single_archive<'a>(
tmpfile.rewind()?;
let (csum, size) = sha256(&mut tmpfile)?;
verify_archive(archive_info, &csum, size)?;
+
+ if crypt_config.is_some() {
+ let crypt_config = crypt_config.clone();
+ let tmp_path = tmp_path.clone();
+ let archive_name = archive_name.clone();
+
+ tokio::task::spawn_blocking(move || {
+ // must rewind again since after verifying cursor is at the end of the file
+ tmpfile.rewind()?;
+ let reader = DataBlobReader::new(tmpfile, crypt_config)?;
+ let mut reader = BufReader::new(reader);
+ let mut raw_data = Vec::new();
+ reader.read_to_end(&mut raw_data)?;
+
+ let blob = DataBlob::encode(&raw_data, None, true)?;
+ let raw_blob = blob.into_inner();
+
+ let mut decrypted_tmp_path = tmp_path.clone();
+ decrypted_tmp_path.set_extension("dectmp");
+ let result = proxmox_lang::try_block!({
+ let mut decrypted_tmpfile = std::fs::OpenOptions::new()
+ .read(true)
+ .write(true)
+ .create(true)
+ .truncate(true)
+ .open(&decrypted_tmp_path)?;
+ decrypted_tmpfile.write_all(&raw_blob)?;
+ decrypted_tmpfile.flush()?;
+ decrypted_tmpfile.rewind()?;
+ let (csum, size) = sha256(&mut decrypted_tmpfile)?;
+
+ if let Some(new_manifest) = new_manifest {
+ let mut new_manifest = new_manifest.lock().unwrap();
+ let name = archive_name.as_str().try_into()?;
+ new_manifest.add_file(&name, size, csum, CryptMode::None)?;
+ }
+
+ nix::unistd::fsync(decrypted_tmpfile.as_raw_fd())?;
+
+ std::fs::rename(&decrypted_tmp_path, &tmp_path)?;
+ Ok(())
+ });
+
+ if result.is_err() {
+ let _ = std::fs::remove_file(&decrypted_tmp_path);
+ }
+
+ result
+ })
+ .await?
+ .map_err(|err: Error| format_err!("Failed when decrypting blob {path:?}: {err}"))?;
+ }
}
}
if let Err(err) = std::fs::rename(&tmp_path, &path) {
@@ -511,6 +567,7 @@ async fn pull_snapshot<'a>(
}
let crypt_config = None;
+ let new_manifest = None;
let backend = ¶ms.target.backend;
for item in manifest.files() {
@@ -560,6 +617,7 @@ async fn pull_snapshot<'a>(
encountered_chunks.clone(),
crypt_config.clone(),
backend,
+ new_manifest.clone(),
)
.await?;
sync_stats.add(stats);
--
2.47.3
* [PATCH proxmox-backup v4 27/30] sync: pull: decrypt chunks and rewrite index file for matching key
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (25 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 26/30] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 28/30] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
` (2 subsequent siblings)
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Once the matching decryption key is provided, use it to decrypt
the chunks on pull and rewrite the index file based on the decrypted
chunk digests and offsets.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 143 +++++++++++++++++++++++++++++++++++++--------
1 file changed, 120 insertions(+), 23 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 12da4404e..aeb82af99 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -4,7 +4,7 @@ use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::io::{BufReader, Read, Seek, Write};
use std::os::fd::AsRawFd;
-use std::sync::atomic::{AtomicUsize, Ordering};
+use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::time::SystemTime;
@@ -21,7 +21,7 @@ use pbs_api_types::{
};
use pbs_client::BackupRepository;
use pbs_config::CachedUserInfo;
-use pbs_datastore::data_blob::DataBlob;
+use pbs_datastore::data_blob::{DataBlob, DataChunkBuilder};
use pbs_datastore::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
use pbs_datastore::fixed_index::{FixedIndexReader, FixedIndexWriter};
use pbs_datastore::index::IndexFile;
@@ -179,7 +179,16 @@ async fn pull_index_chunks<I: IndexFile>(
.filter(|info| {
let guard = encountered_chunks.lock().unwrap();
match guard.check_reusable(&info.digest) {
- Some(reusable) => !reusable.touched, // reusable and already touched, can always skip
+ Some(reusable) => {
+ if reusable.decrypted_digest.is_some() {
+ // if there is a mapping, then the chunk digest must be rewritten to
+ // the index, cannot skip here but optimized when processing the stream
+ true
+ } else {
+ // reusable and already touched, can always skip
+ !reusable.touched
+ }
+ }
None => true,
}
}),
@@ -201,6 +210,7 @@ async fn pull_index_chunks<I: IndexFile>(
let verify_and_write_channel = verify_pool.channel();
let bytes = Arc::new(AtomicUsize::new(0));
+ let offset = Arc::new(AtomicU64::new(0));
let chunk_count = Arc::new(AtomicUsize::new(0));
let stream = stream.map(|info| {
@@ -210,36 +220,123 @@ async fn pull_index_chunks<I: IndexFile>(
let chunk_count = Arc::clone(&chunk_count);
let verify_and_write_channel = verify_and_write_channel.clone();
let encountered_chunks = Arc::clone(&encountered_chunks);
+ let offset = Arc::clone(&offset);
+ let decrypted_index_writer = decrypted_index_writer.clone();
Ok::<_, Error>(async move {
- {
- // limit guard scope
- let mut guard = encountered_chunks.lock().unwrap();
- if let Some(reusable) = guard.check_reusable(&info.digest) {
- if reusable.touched {
- return Ok::<_, Error>(());
+ //info!("sync {} chunk {}", pos, hex::encode(digest));
+ let (chunk, digest, size) = match decrypted_index_writer {
+ Some(DecryptedIndexWriter::Fixed(index)) => {
+ if let Some(reusable) = encountered_chunks
+ .lock()
+ .unwrap()
+ .check_reusable(&info.digest)
+ {
+ if let Some(decrypted_digest) = reusable.decrypted_digest {
+ // already got the decrypted digest and chunk has been written,
+ // no need to process again
+ let size = info.size();
+ let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+
+ index.lock().unwrap().add_chunk(
+ start_offset,
+ size as u32,
+ decrypted_digest,
+ )?;
+
+ return Ok::<_, Error>(());
+ }
}
- let chunk_exists = proxmox_async::runtime::block_in_place(|| {
- target.cond_touch_chunk(&info.digest, false)
- })?;
- if chunk_exists {
- guard.mark_touched(&info.digest, None);
- //info!("chunk {} exists {}", pos, hex::encode(digest));
- return Ok::<_, Error>(());
+
+ let chunk_data = chunk_reader.read_chunk(&info.digest).await?;
+ let (chunk, digest) =
+ DataChunkBuilder::new(&chunk_data).compress(true).build()?;
+
+ let size = chunk_data.len() as u64;
+ let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+
+ index
+ .lock()
+ .unwrap()
+ .add_chunk(start_offset, size as u32, &digest)?;
+
+ encountered_chunks
+ .lock()
+ .unwrap()
+ .mark_reusable(&info.digest, Some(digest));
+
+ (chunk, digest, size)
+ }
+ Some(DecryptedIndexWriter::Dynamic(index)) => {
+ if let Some(reusable) = encountered_chunks
+ .lock()
+ .unwrap()
+ .check_reusable(&info.digest)
+ {
+ if let Some(decrypted_digest) = reusable.decrypted_digest {
+ // already got the decrypted digest and chunk has been written,
+ // no need to process again
+ let size = info.size();
+ let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+ let end_offset = start_offset + size;
+
+ index
+ .lock()
+ .unwrap()
+ .add_chunk(end_offset, decrypted_digest)?;
+
+ return Ok::<_, Error>(());
+ }
}
+
+ let chunk_data = chunk_reader.read_chunk(&info.digest).await?;
+ let (chunk, digest) =
+ DataChunkBuilder::new(&chunk_data).compress(true).build()?;
+
+ let size = chunk_data.len() as u64;
+ let start_offset = offset.fetch_add(size, Ordering::SeqCst);
+ let end_offset = start_offset + size;
+
+ index.lock().unwrap().add_chunk(end_offset, &digest)?;
+
+ encountered_chunks
+ .lock()
+ .unwrap()
+ .mark_reusable(&info.digest, Some(digest));
+
+ (chunk, digest, size)
}
- // mark before actually downloading the chunk, so this happens only once
- guard.mark_reusable(&info.digest, None);
- guard.mark_touched(&info.digest, None);
- }
+ None => {
+ {
+ // limit guard scope
+ let mut guard = encountered_chunks.lock().unwrap();
+ if let Some(reusable) = guard.check_reusable(&info.digest) {
+ if reusable.touched {
+ return Ok::<_, Error>(());
+ }
+ let chunk_exists = proxmox_async::runtime::block_in_place(|| {
+ target.cond_touch_chunk(&info.digest, false)
+ })?;
+ if chunk_exists {
+ guard.mark_touched(&info.digest, None);
+ //info!("chunk {} exists {}", pos, hex::encode(digest));
+ return Ok::<_, Error>(());
+ }
+ }
+ // mark before actually downloading the chunk, so this happens only once
+ guard.mark_reusable(&info.digest, None);
+ guard.mark_touched(&info.digest, None);
+ }
- //info!("sync {} chunk {}", pos, hex::encode(digest));
- let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+ let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
+ (chunk, info.digest, info.size())
+ }
+ };
let raw_size = chunk.raw_size() as usize;
// decode, verify and write in a separate threads to maximize throughput
proxmox_async::runtime::block_in_place(|| {
- verify_and_write_channel.send((chunk, info.digest, info.size()))
+ verify_and_write_channel.send((chunk, digest, size))
})?;
bytes.fetch_add(raw_size, Ordering::SeqCst);
--
2.47.3
* [PATCH proxmox-backup v4 28/30] sync: pull: decrypt snapshots with matching encryption key fingerprint
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (26 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 27/30] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 29/30] api: encryption keys: allow to toggle the archived state for keys Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 30/30] docs: add section describing server side encryption for sync jobs Christian Ebner
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Decrypt any backup snapshot during pull which was encrypted with a
matching encryption key. Keys are matched by comparing the key
fingerprint stored in the source manifest against the fingerprints of
the keys configured for the pull sync job.
On a match, pass the key's crypt config along to the index and chunk
readers and write the local files unencrypted instead of simply
downloading them. A new manifest file is written in place of the
original one, with the files registered accordingly.
If the local snapshot already exists (resync), refuse to sync without
decryption if the target snapshot is unencrypted but the source is
encrypted.
To detect file changes on resync, compare against the file change
fingerprint that was calculated on the decrypted files before the
encrypting push sync.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
src/server/pull.rs | 104 ++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 99 insertions(+), 5 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index aeb82af99..c5924e82b 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -11,6 +11,9 @@ use std::time::SystemTime;
use anyhow::{bail, format_err, Context, Error};
use pbs_tools::crypt_config::CryptConfig;
use proxmox_human_byte::HumanByte;
+use serde_json::Value;
+use tokio::fs::OpenOptions;
+use tokio::io::AsyncWriteExt;
use tracing::{info, warn};
use pbs_api_types::{
@@ -457,12 +460,23 @@ async fn pull_single_archive<'a>(
)
.await?;
if let Some(DecryptedIndexWriter::Dynamic(index)) = &new_index_writer {
- let _csum = index.lock().unwrap().close()?;
+ let csum = index.lock().unwrap().close()?;
// For both cases, with and without rewriting the index the final index is
// persisted with a rename of the tempfile. Therefore, overwrite current
// tempfile here so it will be finally persisted instead.
std::fs::rename(&path, &tmp_path)?;
+
+ if let Some(new_manifest) = new_manifest {
+ let name = archive_name.as_str().try_into()?;
+ // size is identical to original, encrypted index
+ new_manifest.lock().unwrap().add_file(
+ &name,
+ size,
+ csum,
+ CryptMode::None,
+ )?;
+ }
}
sync_stats.add(stats);
@@ -497,12 +511,23 @@ async fn pull_single_archive<'a>(
)
.await?;
if let Some(DecryptedIndexWriter::Fixed(index)) = &new_index_writer {
- let _csum = index.lock().unwrap().close()?;
+ let csum = index.lock().unwrap().close()?;
// For both cases, with and without rewriting the index the final index is
// persisted with a rename of the tempfile. Therefore, overwrite current
// tempfile here so it will be finally persisted instead.
std::fs::rename(&path, &tmp_path)?;
+
+ if let Some(new_manifest) = new_manifest {
+ let name = archive_name.as_str().try_into()?;
+ // size is identical to original, encrypted index
+ new_manifest.lock().unwrap().add_file(
+ &name,
+ size,
+ csum,
+ CryptMode::None,
+ )?;
+ }
}
sync_stats.add(stats);
@@ -621,6 +646,7 @@ async fn pull_snapshot<'a>(
return Ok(sync_stats);
}
+ let mut local_manifest_file_fp = None;
if manifest_name.exists() && !corrupt {
let manifest_blob = proxmox_lang::try_block!({
let mut manifest_file = std::fs::File::open(&manifest_name).map_err(|err| {
@@ -641,12 +667,31 @@ async fn pull_snapshot<'a>(
info!("no data changes");
let _ = std::fs::remove_file(&tmp_manifest_name);
return Ok(sync_stats); // nothing changed
+ } else {
+ let manifest = BackupManifest::try_from(manifest_blob)?;
+ if !params.crypt_configs.is_empty() {
+ let fp = manifest.change_detection_fingerprint()?;
+ local_manifest_file_fp = Some(hex::encode(fp));
+ }
}
}
- let manifest_data = tmp_manifest_blob.raw_data().to_vec();
+ let mut manifest_data = tmp_manifest_blob.raw_data().to_vec();
let manifest = BackupManifest::try_from(tmp_manifest_blob)?;
+ if let Value::String(fp) = &manifest.unprotected["change-detection-fingerprint"] {
+ if let Some(local) = &local_manifest_file_fp {
+ if fp == local {
+ if !client_log_name.exists() {
+ reader.try_download_client_log(&client_log_name).await?;
+ };
+ info!("no data changes");
+ let _ = std::fs::remove_file(&tmp_manifest_name);
+ return Ok(sync_stats);
+ }
+ }
+ }
+
if ignore_not_verified_or_encrypted(
&manifest,
snapshot.dir(),
@@ -663,8 +708,23 @@ async fn pull_snapshot<'a>(
return Ok(sync_stats);
}
- let crypt_config = None;
- let new_manifest = None;
+ let mut crypt_config = None;
+ let mut new_manifest = None;
+ if let Ok(Some(source_fingerprint)) = manifest.fingerprint() {
+        for (key_id, config) in &params.crypt_configs {
+ if config.fingerprint() == *source_fingerprint.bytes() {
+ crypt_config = Some(Arc::clone(config));
+ new_manifest = Some(Arc::new(Mutex::new(BackupManifest::new(snapshot.into()))));
+ info!("Found matching key '{key_id}' with fingerprint {source_fingerprint}, decrypt on pull");
+ break;
+ }
+ }
+ }
+
+ // pre-existing local manifest for unencrypted snapshot, never overwrite with encrypted
+ if local_manifest_file_fp.is_some() && crypt_config.is_none() {
+ bail!("local unencrypted snapshot detected, refuse to sync without source decryption");
+ }
    let backend = &params.target.backend;
for item in manifest.files() {
@@ -720,6 +780,40 @@ async fn pull_snapshot<'a>(
sync_stats.add(stats);
}
+ if let Some(new_manifest) = new_manifest {
+ let mut new_manifest = Arc::try_unwrap(new_manifest)
+ .map_err(|_arc| {
+ format_err!("failed to take ownership of still referenced new manifest")
+ })?
+ .into_inner()
+ .unwrap();
+
+        // copy over notes etc., but drop the encryption key fingerprint and verify
+        // state, to be re-verified independently of the sync.
+ new_manifest.unprotected = manifest.unprotected.clone();
+ if let Some(unprotected) = new_manifest.unprotected.as_object_mut() {
+ unprotected.remove("change-detection-fingerprint");
+ unprotected.remove("key-fingerprint");
+ unprotected.remove("verify_state");
+ } else {
+ bail!("Encountered unexpected manifest without 'unprotected' section.");
+ }
+
+ let manifest_string = new_manifest.to_string(None)?;
+ let manifest_blob = DataBlob::encode(manifest_string.as_bytes(), None, true)?;
+ // update contents to be uploaded to backend
+ manifest_data = manifest_blob.raw_data().to_vec();
+
+ let mut tmp_manifest_file = OpenOptions::new()
+ .write(true)
+ .truncate(true) // clear pre-existing manifest content
+ .open(&tmp_manifest_name)
+ .await?;
+ tmp_manifest_file.write_all(&manifest_data).await?;
+ tmp_manifest_file.flush().await?;
+ nix::unistd::fsync(tmp_manifest_file.as_raw_fd())?;
+ }
+
if let Err(err) = std::fs::rename(&tmp_manifest_name, &manifest_name) {
bail!("Atomic rename file {:?} failed - {}", manifest_name, err);
}
--
2.47.3
^ permalink raw reply [flat|nested] 31+ messages in thread
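The key-selection step this patch adds (matching one of the sync job's loaded crypt configs against the key fingerprint recorded in the source manifest) can be sketched in isolation. This is an illustrative standalone model, not the actual PBS types: `CryptConfigStub` and the raw 32-byte fingerprint stand in for the real `CryptConfig` and `Fingerprint` types.

```rust
use std::collections::BTreeMap;

// Illustrative stand-in for a loaded crypt config: only the fingerprint matters here.
struct CryptConfigStub {
    fingerprint: [u8; 32],
}

// Pick the first configured key whose fingerprint matches the one recorded in
// the source manifest; return its id so the caller can log which key was used.
fn select_key<'a>(
    crypt_configs: &'a BTreeMap<String, CryptConfigStub>,
    source_fingerprint: &[u8; 32],
) -> Option<&'a str> {
    crypt_configs
        .iter()
        .find(|(_, config)| config.fingerprint == *source_fingerprint)
        .map(|(id, _)| id.as_str())
}

fn main() {
    let mut configs = BTreeMap::new();
    configs.insert("key0".to_string(), CryptConfigStub { fingerprint: [0u8; 32] });
    configs.insert("key1".to_string(), CryptConfigStub { fingerprint: [1u8; 32] });

    // A matching fingerprint selects that key; an unknown one selects nothing,
    // which in the patch above means the snapshot is pulled without decryption
    // (or the sync bails if a local unencrypted snapshot already exists).
    assert_eq!(select_key(&configs, &[1u8; 32]), Some("key1"));
    assert_eq!(select_key(&configs, &[9u8; 32]), None);
    println!("key selection ok");
}
```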
* [PATCH proxmox-backup v4 29/30] api: encryption keys: allow to toggle the archived state for keys
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (27 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 28/30] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 30/30] docs: add section describing server side encryption for sync jobs Christian Ebner
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
Adapt the API endpoint to not only allow archiving a key, but to
toggle its archived state by setting or stripping the optional
`archived-at` timestamp in the config.
Expose this in the UI by adapting the corresponding button
accordingly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
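As a standalone illustration of the toggle semantics described above (placeholder names, not the actual PBS types): clear `archived-at` if it is set, otherwise stamp it with the current epoch. Note the real endpoint additionally refuses to archive a key that is still assigned as an active encryption key to a sync job.

```rust
// Illustrative sketch of the toggle: an archived key (Some timestamp) is
// restored to active (None); an active key is archived at the given epoch.
fn toggle_archived(archived_at: &mut Option<i64>, now_epoch: i64) {
    if archived_at.is_some() {
        *archived_at = None; // restore: key becomes usable for encryption again
    } else {
        *archived_at = Some(now_epoch); // archive: key may only decrypt from now on
    }
}

fn main() {
    let mut archived_at: Option<i64> = None;
    toggle_archived(&mut archived_at, 1_750_000_000);
    assert_eq!(archived_at, Some(1_750_000_000));
    toggle_archived(&mut archived_at, 1_750_000_100);
    assert_eq!(archived_at, None);
    println!("toggle ok");
}
```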
src/api2/config/encryption_keys.rs | 12 +++++-------
www/config/EncryptionKeysView.js | 24 +++++++++++++++++-------
2 files changed, 22 insertions(+), 14 deletions(-)
diff --git a/src/api2/config/encryption_keys.rs b/src/api2/config/encryption_keys.rs
index 620eb8f69..c3cc7e7b8 100644
--- a/src/api2/config/encryption_keys.rs
+++ b/src/api2/config/encryption_keys.rs
@@ -126,8 +126,8 @@ pub fn create_key(
permission: &Permission::Privilege(&["system", "encryption-keys", "{id}"], PRIV_SYS_MODIFY, false),
},
)]
-/// Mark the key by given id as archived, no longer usable to encrypt contents.
-pub fn archive_key(
+/// Toggle the archived state for the key with the given id; archived keys are no longer usable to encrypt contents.
+pub fn toggle_key_archive_state(
id: String,
digest: Option<String>,
_rpcenv: &mut dyn RpcEnvironment,
@@ -140,15 +140,13 @@ pub fn archive_key(
let mut key: CryptKey = config.lookup(ENCRYPTION_KEYS_CFG_TYPE_ID, &id)?;
if key.archived_at.is_some() {
- bail!("key already marked as archived");
+ key.archived_at = None;
} else {
check_encryption_key_in_use(&id, false)?;
+ key.archived_at = Some(proxmox_time::epoch_i64());
}
- key.archived_at = Some(proxmox_time::epoch_i64());
-
config.set_data(&id, ENCRYPTION_KEYS_CFG_TYPE_ID, &key)?;
- // drops config lock
encryption_keys::save_config(&config)?;
Ok(())
@@ -222,7 +220,7 @@ fn check_encryption_key_in_use(id: &str, include_associated: bool) -> Result<(),
}
const ITEM_ROUTER: Router = Router::new()
- .post(&API_METHOD_ARCHIVE_KEY)
+ .post(&API_METHOD_TOGGLE_KEY_ARCHIVE_STATE)
.delete(&API_METHOD_DELETE_KEY);
pub const ROUTER: Router = Router::new()
diff --git a/www/config/EncryptionKeysView.js b/www/config/EncryptionKeysView.js
index 35f147799..77542932d 100644
--- a/www/config/EncryptionKeysView.js
+++ b/www/config/EncryptionKeysView.js
@@ -38,7 +38,7 @@ Ext.define('PBS.config.EncryptionKeysView', {
}).show();
},
- archiveEncryptionKey: function () {
+ toggleEncryptionKeyArchiveState: function () {
let me = this;
let view = me.getView();
let selection = view.getSelection();
@@ -246,14 +246,24 @@ Ext.define('PBS.config.EncryptionKeysView', {
'-',
{
xtype: 'proxmoxButton',
- text: gettext('Archive'),
- handler: 'archiveEncryptionKey',
+ text: gettext('Toggle Archived'),
+ handler: 'toggleEncryptionKeyArchiveState',
dangerous: true,
- confirmMsg: Ext.String.format(
- gettext('Archiving will render the key unusable to encrypt new content, proceed?'),
- ),
+ confirmMsg: (item) => {
+ let msg;
+ if (item.data['archived-at']) {
+ msg = gettext(
+ 'Are you sure you want to restore the archived key to be active again?',
+ );
+ } else {
+ msg = gettext(
+ 'Archiving will render the key unusable to encrypt new content, proceed?',
+ );
+ }
+ return Ext.String.format(msg);
+ },
disabled: true,
- enableFn: (item) => item.data.type === 'sync' && !item.data['archived-at'],
+ enableFn: (item) => item.data.type === 'sync',
},
'-',
{
--
2.47.3
* [PATCH proxmox-backup v4 30/30] docs: add section describing server side encryption for sync jobs
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
` (28 preceding siblings ...)
2026-04-20 16:15 ` [PATCH proxmox-backup v4 29/30] api: encryption keys: allow to toggle the archived state for keys Christian Ebner
@ 2026-04-20 16:15 ` Christian Ebner
29 siblings, 0 replies; 31+ messages in thread
From: Christian Ebner @ 2026-04-20 16:15 UTC (permalink / raw)
To: pbs-devel
In particular, clarify the terminology of active encryption keys,
associated keys, key archiving and the requirements for key removal.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
docs/managing-remotes.rst | 54 +++++++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
diff --git a/docs/managing-remotes.rst b/docs/managing-remotes.rst
index 95ac4823c..39df18744 100644
--- a/docs/managing-remotes.rst
+++ b/docs/managing-remotes.rst
@@ -302,3 +302,57 @@ The following permissions are required for a sync job in push direction:
.. note:: Sync jobs in push direction require namespace support on the remote
Proxmox Backup Server instance (minimum version 2.2).
+
+Server Side Encryption/Decryption During Sync
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Sync jobs in push direction allow encrypting unencrypted snapshots when syncing
+to a less trusted remote Proxmox Backup Server instance. For this, a server side
+encryption key can be assigned to the sync job. This key will then be used to
+encrypt the contents before pushing them to the remote, analogous to performing
+a backup with an encryption key. Already encrypted snapshots are not re-encrypted
+but rather pushed unmodified. Snapshots containing only partially encrypted
+contents are skipped for security reasons.
+
+Therefore, sync jobs using the ``encrypted-only`` flag will never use the
+``active-encryption-key`` when pushing snapshots, since only already encrypted
+snapshots are being synced.
+
+On the other hand, sync jobs in pull direction allow assigning a number of
+associated keys, which will be used to decrypt snapshot contents if the key
+fingerprint of one of the listed keys matches the one used to encrypt the
+backup snapshot. The active encryption key has no effect for sync jobs in pull
+direction and should not be set.
+
+To configure the sync job, and for the sync job owner/local user to access
+the keys during sync, ``System.Modify`` permissions are required on the
+``/system/encryption-keys/{key}`` path.
+
+.. note:: Encryption key handling comes with a few risks, especially around key
+   rotation. Therefore, only active keys can be used to encrypt new snapshot
+   contents during push sync. If the active encryption key is changed, the
+   previous key is kept as an associated key on the sync job, in order to
+   protect it from accidental removal. Further, any encryption key can be
+   archived, rendering it no longer usable for encryption, only for decrypting
+   pre-existing contents. An encryption key usable for sync jobs must therefore
+   be archived and disassociated from all sync jobs before it can be removed.
+
+The following command can be used to assign the active encryption key for a sync
+job.
+
+.. code-block:: console
+
+ # proxmox-backup-manager sync-job update pbs2-push --active-encryption-key key0
+
+Setting the associated keys will drop any key not present in the given key
+list, with the exception of the previously assigned active encryption key, if
+it is updated as well. The previously assigned active encryption key (in the
+example above ``key0``) is always pushed to the list of associated keys on
+rotation. For example, since ``key0`` is currently the active encryption key,
+the command below would assign ``key1`` as the new active encryption key and
+result in ``key0,key2,key3`` as the associated keys for the sync job.
+
+.. code-block:: console
+
+ # proxmox-backup-manager sync-job update pbs2-push --active-encryption-key key1 --associated-key key2 --associated-key key3
--
2.47.3
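The rotation rule the docs describe can be modeled standalone (illustrative functions and field layout, not the actual PBS sync-job config types): setting the associated keys replaces the list, but a rotated-out active key is always kept as an associated key.

```rust
// Illustrative model of the documented rotation rule: the new associated-key
// list replaces the old one, and the previously active key is prepended so
// it is protected from accidental removal.
fn update_keys(
    active: &mut String,
    associated: &mut Vec<String>,
    new_active: String,
    new_associated: Vec<String>,
) {
    let previous = std::mem::replace(active, new_active);
    *associated = new_associated;
    if *active != previous && !associated.contains(&previous) {
        associated.insert(0, previous); // keep the rotated-out key as associated
    }
}

fn main() {
    // Mirrors the docs example: active key0, then rotate to key1 while
    // setting key2 and key3 as associated keys.
    let mut active = "key0".to_string();
    let mut associated: Vec<String> = Vec::new();

    update_keys(
        &mut active,
        &mut associated,
        "key1".to_string(),
        vec!["key2".to_string(), "key3".to_string()],
    );

    assert_eq!(active, "key1");
    assert_eq!(associated, vec!["key0", "key2", "key3"]);
    println!("rotation ok");
}
```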
Thread overview: 31+ messages
2026-04-20 16:15 [PATCH proxmox{,-backup} v4 00/30] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox v4 01/30] pbs-api-types: define en-/decryption key type and schema Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox v4 02/30] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 03/30] sync: push: use tracing macros instead of log Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 04/30] datastore: blob: implement async reader for data blobs Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 05/30] datastore: manifest: add helper for change detection fingerprint Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 06/30] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 07/30] pbs-config: implement encryption key config handling Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 08/30] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 09/30] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 10/30] sync: add helper to check encryption key acls and load key Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 11/30] api: config: add endpoints for encryption key manipulation Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 12/30] api: config: check sync owner has access to en-/decryption keys Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 13/30] api: config: allow encryption key manipulation for sync job Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 14/30] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 15/30] api: push sync: expose optional encryption key for push sync Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 16/30] sync: push: optionally encrypt data blob on upload Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 17/30] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 18/30] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 19/30] fix #7251: api: push: encrypt snapshots using configured encryption key Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 20/30] ui: define and expose encryption key management menu item and windows Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 21/30] ui: expose assigning encryption key to sync jobs Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 22/30] sync: pull: load encryption key if given in job config Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 23/30] sync: expand source chunk reader trait by crypt config Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 24/30] sync: pull: introduce and use decrypt index writer if " Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 25/30] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 26/30] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 27/30] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 28/30] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 29/30] api: encryption keys: allow to toggle the archived state for keys Christian Ebner
2026-04-20 16:15 ` [PATCH proxmox-backup v4 30/30] docs: add section describing server side encryption for sync jobs Christian Ebner