public inbox for pbs-devel@lists.proxmox.com
From: Christian Ebner <c.ebner@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [PATCH proxmox-backup v2 18/27] fix #7251: api: push: encrypt snapshots using configured encryption key
Date: Fri, 10 Apr 2026 18:54:45 +0200	[thread overview]
Message-ID: <20260410165454.1578501-19-c.ebner@proxmox.com> (raw)
In-Reply-To: <20260410165454.1578501-1-c.ebner@proxmox.com>

If an encryption key id is provided in the push parameters, the key
is loaded at the start of the push sync job and passed along via the
crypt config.

Backup snapshots which are already fully encrypted are pushed as-is,
while partially encrypted snapshots are skipped to avoid mixing of
contents. Pre-existing snapshots on the remote are, however, not
checked to match the key.
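
The resulting three-way decision (encrypt-and-push, push as-is, or skip) can be sketched as follows; `CryptMode` and the `classify` helper are simplified, hypothetical stand-ins, not the actual types and functions used in push.rs:

```rust
// Simplified stand-in for the crypt mode stored per file in the manifest.
#[derive(Clone, Copy, PartialEq)]
enum CryptMode {
    None,
    Encrypt,
}

/// Classify a snapshot by the chunk crypt modes of its files:
/// - Some(true):  all files unencrypted -> re-encrypt and push
/// - Some(false): all files encrypted   -> push without re-encryption
/// - None:        mixed                 -> refuse and skip
fn classify(modes: &[CryptMode]) -> Option<bool> {
    let unencrypted = modes.iter().filter(|m| **m == CryptMode::None).count();
    match unencrypted {
        0 => Some(false),
        n if n == modes.len() => Some(true),
        _ => None,
    }
}
```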

Special care has to be taken when tracking already encountered
chunks. For regular push sync jobs, chunk upload is optimized to skip
re-upload of chunks from the previous snapshot (if any) and of new,
but already encountered chunks for the current group sync. Since the
chunks now have to be re-processed anyway, do not load the chunks from
the previous snapshot into memory if they need re-encryption, and keep
track of the unencrypted -> encrypted digest mapping in a hashmap to
avoid re-processing. This might be optimized in the future by e.g.
moving the tracking to an LRU cache, which however requires a more
careful evaluation of memory consumption.
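
A minimal sketch of the digest-mapping idea described above; the `ReencryptTracker` name and its API are illustrative assumptions, not the actual code in push.rs:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a chunk digest (SHA-256).
type Digest = [u8; 32];

/// Tracks the unencrypted -> encrypted digest mapping so each source
/// chunk is only re-encrypted once per group sync.
struct ReencryptTracker {
    mapping: HashMap<Digest, Digest>,
}

impl ReencryptTracker {
    fn new() -> Self {
        Self {
            mapping: HashMap::new(),
        }
    }

    /// Return the cached encrypted digest for `source`, or invoke the
    /// given closure to re-encrypt the chunk and cache the result.
    fn get_or_encrypt<F: FnOnce() -> Digest>(&mut self, source: Digest, encrypt: F) -> Digest {
        *self.mapping.entry(source).or_insert_with(encrypt)
    }
}
```

On the second encounter of the same source digest the closure is never called, which is exactly the re-processing this mapping is meant to avoid.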

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=7251
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
 src/server/push.rs | 112 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 86 insertions(+), 26 deletions(-)

diff --git a/src/server/push.rs b/src/server/push.rs
index 9b7a4adcb..f433ca50d 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -1,6 +1,6 @@
 //! Sync datastore by pushing contents to remote server
 
-use std::collections::HashSet;
+use std::collections::{HashMap, HashSet};
 use std::path::Path;
 use std::sync::{Arc, Mutex};
 
@@ -12,17 +12,17 @@ use tracing::{info, warn};
 
 use pbs_api_types::{
     print_store_and_ns, ApiVersion, ApiVersionInfo, ArchiveType, Authid, BackupArchiveName,
-    BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, GroupFilter, GroupListItem,
-    NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem, CLIENT_LOG_BLOB_NAME,
-    MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP,
-    PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
+    BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, CryptMode, GroupFilter,
+    GroupListItem, NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem,
+    CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+    PRIV_REMOTE_DATASTORE_BACKUP, PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
 };
 use pbs_client::{
     BackupRepository, BackupStats, BackupWriter, BackupWriterOptions, HttpClient, IndexType,
     MergedChunkInfo, UploadOptions,
 };
 use pbs_config::CachedUserInfo;
-use pbs_datastore::data_blob::ChunkInfo;
+use pbs_datastore::data_blob::{ChunkInfo, DataChunkBuilder};
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
@@ -886,6 +886,27 @@ pub(crate) async fn push_snapshot(
     }
 
     let mut encrypt_using_key = None;
+    if params.crypt_config.is_some() {
+        let mut contains_unencrypted_file = false;
+        // Check if snapshot is fully encrypted or not encrypted at all:
+        // otherwise refuse to proceed, to avoid uploading partially unencrypted contents or mixing encryption keys.
+        if source_manifest.files().iter().all(|file| {
+            if file.chunk_crypt_mode() == CryptMode::None {
+                contains_unencrypted_file = true;
+                true
+            } else {
+                false
+            }
+        }) {
+            encrypt_using_key = params.crypt_config.clone();
+            info!("Encrypt and push unencrypted snapshot '{snapshot}'");
+        } else if contains_unencrypted_file {
+            warn!("Encountered partially encrypted snapshot, refuse to re-encrypt and skip");
+            return Ok(stats);
+        } else {
+            info!("Pushing already encrypted snapshot '{snapshot}' without re-encryption");
+        }
+    }
 
     // Writer instance locks the snapshot on the remote side
     let backup_writer = BackupWriter::start(
@@ -911,19 +932,20 @@ pub(crate) async fn push_snapshot(
         }
     };
 
-    // Dummy upload options: the actual compression and/or encryption already happened while
-    // the chunks were generated during creation of the backup snapshot, therefore pre-existing
-    // chunks (already compressed and/or encrypted) can be pushed to the target.
+    // Dummy upload options: The actual compression already happened while
+    // the chunks were generated during creation of the backup snapshot,
+    // therefore pre-existing chunks (already compressed) can be pushed to
+    // the target.
+    //
     // Further, these steps are skipped in the backup writer upload stream.
     //
     // Therefore, these values do not need to fit the values given in the manifest.
     // The original manifest is uploaded in the end anyways.
     //
     // Compression is set to true so that the uploaded manifest will be compressed.
-    // Encrypt is set to assure that above files are not encrypted.
     let upload_options = UploadOptions {
         compress: true,
-        encrypt: false,
+        encrypt: encrypt_using_key.is_some(),
         previous_manifest,
         ..UploadOptions::default()
     };
@@ -937,6 +959,10 @@ pub(crate) async fn push_snapshot(
         path.push(&entry.filename);
         if path.try_exists()? {
             let archive_name = BackupArchiveName::from_path(&entry.filename)?;
+            let crypt_mode = match &encrypt_using_key {
+                Some(_) => CryptMode::Encrypt,
+                None => entry.chunk_crypt_mode(),
+            };
 
             load_previous_snapshot_known_chunks(
                 params,
@@ -967,7 +993,7 @@ pub(crate) async fn push_snapshot(
                         &archive_name,
                         backup_stats.size,
                         backup_stats.csum,
-                        entry.chunk_crypt_mode(),
+                        crypt_mode,
                     )?;
                     stats.add(SyncStats {
                         chunk_count: backup_stats.chunk_count as usize,
@@ -988,13 +1014,14 @@ pub(crate) async fn push_snapshot(
                         &backup_writer,
                         IndexType::Dynamic,
                         known_chunks.clone(),
+                        encrypt_using_key.clone(),
                     )
                     .await?;
                     target_manifest.add_file(
                         &archive_name,
                         upload_stats.size,
                         upload_stats.csum,
-                        entry.chunk_crypt_mode(),
+                        crypt_mode,
                     )?;
                     stats.add(SyncStats {
                         chunk_count: upload_stats.chunk_count as usize,
@@ -1016,13 +1043,14 @@ pub(crate) async fn push_snapshot(
                         &backup_writer,
                         IndexType::Fixed(Some(size)),
                         known_chunks.clone(),
+                        encrypt_using_key.clone(),
                     )
                     .await?;
                     target_manifest.add_file(
                         &archive_name,
                         upload_stats.size,
                         upload_stats.csum,
-                        entry.chunk_crypt_mode(),
+                        crypt_mode,
                     )?;
                     stats.add(SyncStats {
                         chunk_count: upload_stats.chunk_count as usize,
@@ -1064,15 +1092,25 @@ pub(crate) async fn push_snapshot(
 
     // Rewrite manifest for pushed snapshot, recreating manifest from source on target,
     // needs to update all relevant info for new manifest.
-    target_manifest.unprotected = source_manifest.unprotected;
-    target_manifest.signature = source_manifest.signature;
-    let manifest_json = serde_json::to_value(target_manifest)?;
-    let manifest_string = serde_json::to_string_pretty(&manifest_json)?;
+    target_manifest.unprotected = source_manifest.unprotected.clone();
+    target_manifest.signature = source_manifest.signature.clone();
+    let manifest_string = if encrypt_using_key.is_some() {
+        let fp = source_manifest.change_detection_fingerprint()?;
+        target_manifest.set_change_detection_fingerprint(&fp)?;
+        target_manifest.to_string(encrypt_using_key.as_ref().map(Arc::as_ref))?
+    } else {
+        let manifest_json = serde_json::to_value(target_manifest)?;
+        serde_json::to_string_pretty(&manifest_json)?
+    };
     let backup_stats = backup_writer
         .upload_blob_from_data(
             manifest_string.into_bytes(),
             MANIFEST_BLOB_NAME.as_ref(),
-            upload_options,
+            UploadOptions {
+                compress: true,
+                encrypt: false,
+                ..UploadOptions::default()
+            },
         )
         .await?;
     backup_writer.finish().await?;
@@ -1112,12 +1150,15 @@ async fn push_index(
     backup_writer: &BackupWriter,
     index_type: IndexType,
     known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+    crypt_config: Option<Arc<CryptConfig>>,
 ) -> Result<BackupStats, Error> {
     let (upload_channel_tx, upload_channel_rx) = mpsc::channel(20);
     let mut chunk_infos =
         stream::iter(0..index.index_count()).map(move |pos| index.chunk_info(pos).unwrap());
 
+    let crypt_config_cloned = crypt_config.clone();
     tokio::spawn(async move {
+        let mut encrypted_mapping = HashMap::new();
         while let Some(chunk_info) = chunk_infos.next().await {
             // Avoid reading known chunks, as they are not uploaded by the backup writer anyways
             let needs_upload = {
@@ -1131,20 +1172,39 @@ async fn push_index(
                 chunk_reader
                     .read_raw_chunk(&chunk_info.digest)
                     .await
-                    .map(|chunk| {
-                        MergedChunkInfo::New(ChunkInfo {
+                    .and_then(|chunk| {
+                        let (chunk, digest, chunk_len) = match crypt_config_cloned.as_ref() {
+                            Some(crypt_config) => {
+                                let data = chunk.decode(None, Some(&chunk_info.digest))?;
+                                let (chunk, digest) = DataChunkBuilder::new(&data)
+                                    .compress(true)
+                                    .crypt_config(crypt_config)
+                                    .build()?;
+                                encrypted_mapping.insert(chunk_info.digest, digest);
+                                (chunk, digest, data.len() as u64)
+                            }
+                            None => (chunk, chunk_info.digest, chunk_info.size()),
+                        };
+
+                        Ok(MergedChunkInfo::New(ChunkInfo {
                             chunk,
-                            digest: chunk_info.digest,
-                            chunk_len: chunk_info.size(),
+                            digest,
+                            chunk_len,
                             offset: chunk_info.range.start,
-                        })
+                        }))
                     })
             } else {
+                let digest =
+                    if let Some(encrypted_digest) = encrypted_mapping.get(&chunk_info.digest) {
+                        *encrypted_digest
+                    } else {
+                        chunk_info.digest
+                    };
                 Ok(MergedChunkInfo::Known(vec![(
                     // Pass size instead of offset, will be replaced with offset by the backup
                     // writer
                     chunk_info.size(),
-                    chunk_info.digest,
+                    digest,
                 )]))
             };
             let _ = upload_channel_tx.send(merged_chunk_info).await;
@@ -1155,7 +1215,7 @@ async fn push_index(
 
     let upload_options = UploadOptions {
         compress: true,
-        encrypt: false,
+        encrypt: crypt_config.is_some(),
         index_type,
         ..UploadOptions::default()
     };
-- 
2.47.3

Thread overview: 28+ messages
2026-04-10 16:54 [PATCH proxmox{,-backup} v2 00/27] fix #7251: implement server side encryption support for push sync jobs Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox v2 01/27] pbs-api-types: define en-/decryption key type and schema Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox v2 02/27] pbs-api-types: sync job: add optional cryptographic keys to config Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 03/27] datastore: blob: implement async reader for data blobs Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 04/27] datastore: manifest: add helper for change detection fingerprint Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 05/27] pbs-key-config: introduce store_with() for KeyConfig Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 06/27] pbs-config: implement encryption key config handling Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 07/27] pbs-config: acls: add 'encryption-keys' as valid 'system' subpath Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 08/27] ui: expose 'encryption-keys' as acl subpath for 'system' Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 09/27] sync: add helper to check encryption key acls and load key Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 10/27] api: config: add endpoints for encryption key manipulation Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 11/27] api: config: check sync owner has access to en-/decryption keys Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 12/27] api: config: allow encryption key manipulation for sync job Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 13/27] sync: push: rewrite manifest instead of pushing pre-existing one Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 14/27] api: push sync: expose optional encryption key for push sync Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 15/27] sync: push: optionally encrypt data blob on upload Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 16/27] sync: push: optionally encrypt client log on upload if key is given Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 17/27] sync: push: add helper for loading known chunks from previous snapshot Christian Ebner
2026-04-10 16:54 ` Christian Ebner [this message]
2026-04-10 16:54 ` [PATCH proxmox-backup v2 19/27] ui: define and expose encryption key management menu item and windows Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 20/27] ui: expose assigning encryption key to sync jobs Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 21/27] sync: pull: load encryption key if given in job config Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 22/27] sync: expand source chunk reader trait by crypt config Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 23/27] sync: pull: introduce and use decrypt index writer if " Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 24/27] sync: pull: extend encountered chunk by optional decrypted digest Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 25/27] sync: pull: decrypt blob files on pull if encryption key is configured Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 26/27] sync: pull: decrypt chunks and rewrite index file for matching key Christian Ebner
2026-04-10 16:54 ` [PATCH proxmox-backup v2 27/27] sync: pull: decrypt snapshots with matching encryption key fingerprint Christian Ebner
