public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup 0/4] add missing digest verification
@ 2020-08-03 12:10 Fabian Grünbichler
  2020-08-03 12:10 ` [pbs-devel] [PATCH proxmox-backup 1/4] blobs: attempt to verify on decode when possible Fabian Grünbichler
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Fabian Grünbichler @ 2020-08-03 12:10 UTC (permalink / raw)
  To: pbs-devel

the last one is a bit unrelated, but I stumbled upon it while testing
this series ;)

Fabian Grünbichler (4):
  blobs: attempt to verify on decode when possible
  sync: verify chunk size and digest, if possible
  sync: verify size and checksum of pulled archives
  datastore: allow browsing signed pxar files

 src/backup/data_blob.rs           | 32 +++++++++++++++++++++----
 src/backup/datastore.rs           |  3 ++-
 src/backup/manifest.rs            |  3 ++-
 src/backup/read_chunk.rs          |  6 ++---
 src/backup/verify.rs              | 18 +++++++-------
 src/client/backup_reader.rs       |  3 ++-
 src/client/backup_writer.rs       |  3 ++-
 src/client/pull.rs                | 40 +++++++++++++++++++++++++------
 src/client/remote_chunk_reader.rs | 12 ++++------
 tests/blob_writer.rs              | 16 ++++++++-----
 www/DataStoreContent.js           |  2 +-
 11 files changed, 95 insertions(+), 43 deletions(-)

-- 
2.20.1





^ permalink raw reply	[flat|nested] 6+ messages in thread

* [pbs-devel] [PATCH proxmox-backup 1/4] blobs: attempt to verify on decode when possible
  2020-08-03 12:10 [pbs-devel] [PATCH proxmox-backup 0/4] add missing digest verification Fabian Grünbichler
@ 2020-08-03 12:10 ` Fabian Grünbichler
  2020-08-03 12:10 ` [pbs-devel] [PATCH proxmox-backup 2/4] sync: verify chunk size and digest, if possible Fabian Grünbichler
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Fabian Grünbichler @ 2020-08-03 12:10 UTC (permalink / raw)
  To: pbs-devel

regular chunks are only decoded when their contents are accessed, in
which case we need to have the key anyway and want to verify the digest.

for blobs we need to verify beforehand, since their checksums are always
calculated based on their raw content and stored in the manifest.

manifests are also stored as blobs, but don't have a digest in the
traditional sense (they might have a signature covering parts of their
contents, but that is verified already when loading the manifest).

this commit does not cover pull/sync code which copies blobs and chunks
as-is without decoding them.
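
as a standalone illustration, the decode-with-optional-digest pattern
boils down to something like this (a simplified sketch, not the PBS
code: `toy_digest` stands in for openssl::sha::sha256, and the
decompression/decryption branches are elided):

```rust
// stand-in for openssl::sha::sha256 so the sketch is self-contained
fn toy_digest(data: &[u8]) -> [u8; 32] {
    let mut d = [0u8; 32];
    for (i, b) in data.iter().enumerate() {
        d[i % 32] = d[i % 32].wrapping_add(*b);
    }
    d
}

// callers pass Some(digest) when a manifest digest is available, and
// None (e.g. for the manifest blob itself) to skip the check
fn decode(raw: &[u8], digest: Option<&[u8; 32]>) -> Result<Vec<u8>, String> {
    // real code decompresses/decrypts here depending on the blob magic
    let data = raw.to_vec();
    if let Some(expected) = digest {
        if &toy_digest(&data) != expected {
            return Err("detected blob with wrong digest".to_string());
        }
    }
    Ok(data)
}

fn main() {
    let payload = b"some blob payload";
    let good = toy_digest(payload);
    assert!(decode(payload, Some(&good)).is_ok());
    assert!(decode(payload, None).is_ok()); // no expected digest: skip check
    assert!(decode(payload, Some(&[0u8; 32])).is_err());
    println!("ok");
}
```

the point of making the digest an Option is exactly the manifest case
above: verification happens whenever the caller can supply a digest,
and is explicitly (and visibly) skipped where none exists.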

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/backup/data_blob.rs           | 32 +++++++++++++++++++++++++++----
 src/backup/datastore.rs           |  3 ++-
 src/backup/manifest.rs            |  3 ++-
 src/backup/read_chunk.rs          |  6 ++----
 src/backup/verify.rs              | 18 ++++++++---------
 src/client/backup_reader.rs       |  3 ++-
 src/client/backup_writer.rs       |  3 ++-
 src/client/remote_chunk_reader.rs |  8 ++------
 tests/blob_writer.rs              | 16 ++++++++++------
 9 files changed, 59 insertions(+), 33 deletions(-)

diff --git a/src/backup/data_blob.rs b/src/backup/data_blob.rs
index af9ebf8a..59336b80 100644
--- a/src/backup/data_blob.rs
+++ b/src/backup/data_blob.rs
@@ -185,16 +185,23 @@ impl DataBlob {
     }
 
     /// Decode blob data
-    pub fn decode(&self, config: Option<&CryptConfig>) -> Result<Vec<u8>, Error> {
+    pub fn decode(&self, config: Option<&CryptConfig>, digest: Option<&[u8; 32]>) -> Result<Vec<u8>, Error> {
 
         let magic = self.magic();
 
         if magic == &UNCOMPRESSED_BLOB_MAGIC_1_0 {
             let data_start = std::mem::size_of::<DataBlobHeader>();
-            Ok(self.raw_data[data_start..].to_vec())
+            let data = self.raw_data[data_start..].to_vec();
+            if let Some(digest) = digest {
+                Self::verify_digest(&data, None, digest)?;
+            }
+            Ok(data)
         } else if magic == &COMPRESSED_BLOB_MAGIC_1_0 {
             let data_start = std::mem::size_of::<DataBlobHeader>();
             let data = zstd::block::decompress(&self.raw_data[data_start..], MAX_BLOB_SIZE)?;
+            if let Some(digest) = digest {
+                Self::verify_digest(&data, None, digest)?;
+            }
             Ok(data)
         } else if magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0 {
             let header_len = std::mem::size_of::<EncryptedDataBlobHeader>();
@@ -208,6 +215,9 @@ impl DataBlob {
                 } else {
                     config.decode_uncompressed_chunk(&self.raw_data[header_len..], &head.iv, &head.tag)?
                 };
+                if let Some(digest) = digest {
+                    Self::verify_digest(&data, Some(config), digest)?;
+                }
                 Ok(data)
             } else {
                 bail!("unable to decrypt blob - missing CryptConfig");
@@ -276,12 +286,26 @@ impl DataBlob {
             return Ok(());
         }
 
-        let data = self.decode(None)?;
+        // verifies digest!
+        let data = self.decode(None, Some(expected_digest))?;
 
         if expected_chunk_size != data.len() {
             bail!("detected chunk with wrong length ({} != {})", expected_chunk_size, data.len());
         }
-        let digest = openssl::sha::sha256(&data);
+
+        Ok(())
+    }
+
+    fn verify_digest(
+        data: &[u8],
+        config: Option<&CryptConfig>,
+        expected_digest: &[u8; 32],
+    ) -> Result<(), Error> {
+
+        let digest = match config {
+            Some(config) => config.compute_digest(data),
+            None => openssl::sha::sha256(&data),
+        };
         if &digest != expected_digest {
             bail!("detected chunk with wrong digest.");
         }
diff --git a/src/backup/datastore.rs b/src/backup/datastore.rs
index 8d882d29..a8bb282b 100644
--- a/src/backup/datastore.rs
+++ b/src/backup/datastore.rs
@@ -591,7 +591,8 @@ impl DataStore {
         backup_dir: &BackupDir,
     ) -> Result<Value, Error> {
         let blob = self.load_blob(backup_dir, MANIFEST_BLOB_NAME)?;
-        let manifest_data = blob.decode(None)?;
+        // no expected digest available
+        let manifest_data = blob.decode(None, None)?;
         let manifest: Value = serde_json::from_slice(&manifest_data[..])?;
         Ok(manifest)
     }
diff --git a/src/backup/manifest.rs b/src/backup/manifest.rs
index 44df6158..a42cdeb7 100644
--- a/src/backup/manifest.rs
+++ b/src/backup/manifest.rs
@@ -238,7 +238,8 @@ impl TryFrom<super::DataBlob> for BackupManifest {
     type Error = Error;
 
     fn try_from(blob: super::DataBlob) -> Result<Self, Error> {
-        let data = blob.decode(None)
+        // no expected digest available
+        let data = blob.decode(None, None)
             .map_err(|err| format_err!("decode backup manifest blob failed - {}", err))?;
         let json: Value = serde_json::from_slice(&data[..])
             .map_err(|err| format_err!("unable to parse backup manifest json - {}", err))?;
diff --git a/src/backup/read_chunk.rs b/src/backup/read_chunk.rs
index fb8296fd..200f53ea 100644
--- a/src/backup/read_chunk.rs
+++ b/src/backup/read_chunk.rs
@@ -40,9 +40,7 @@ impl ReadChunk for LocalChunkReader {
     fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
         let chunk = ReadChunk::read_raw_chunk(self, digest)?;
 
-        let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
-
-        // fixme: verify digest?
+        let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref), Some(digest))?;
 
         Ok(raw_data)
     }
@@ -85,7 +83,7 @@ impl AsyncReadChunk for LocalChunkReader {
         Box::pin(async move {
             let chunk = AsyncReadChunk::read_raw_chunk(self, digest).await?;
 
-            let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
+            let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref), Some(digest))?;
 
             // fixme: verify digest?
 
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index 0617fbf6..ec47534c 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -6,7 +6,7 @@ use crate::server::WorkerTask;
 
 use super::{
     DataStore, BackupGroup, BackupDir, BackupInfo, IndexFile,
-    ENCR_COMPR_BLOB_MAGIC_1_0, ENCRYPTED_BLOB_MAGIC_1_0,
+    CryptMode,
     FileInfo, ArchiveType, archive_type,
 };
 
@@ -24,15 +24,15 @@ fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -
         bail!("wrong index checksum");
     }
 
-    let magic = blob.magic();
-
-    if magic == &ENCR_COMPR_BLOB_MAGIC_1_0 || magic == &ENCRYPTED_BLOB_MAGIC_1_0 {
-        return Ok(());
+    match blob.crypt_mode()? {
+        CryptMode::Encrypt => Ok(()),
+        CryptMode::None => {
+            // digest already verified above
+            blob.decode(None, None)?;
+            Ok(())
+        },
+        CryptMode::SignOnly => bail!("Invalid CryptMode for blob"),
     }
-
-    blob.decode(None)?;
-
-    Ok(())
 }
 
 fn verify_index_chunks(
diff --git a/src/client/backup_reader.rs b/src/client/backup_reader.rs
index c60b6524..d4185716 100644
--- a/src/client/backup_reader.rs
+++ b/src/client/backup_reader.rs
@@ -130,7 +130,8 @@ impl BackupReader {
         let mut raw_data = Vec::with_capacity(64 * 1024);
         self.download(MANIFEST_BLOB_NAME, &mut raw_data).await?;
         let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
-        let data = blob.decode(None)?;
+        // no expected digest available
+        let data = blob.decode(None, None)?;
 
         let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
 
diff --git a/src/client/backup_writer.rs b/src/client/backup_writer.rs
index 0b0ef93b..38686f67 100644
--- a/src/client/backup_writer.rs
+++ b/src/client/backup_writer.rs
@@ -480,7 +480,8 @@ impl BackupWriter {
         self.h2.download("previous", Some(param), &mut raw_data).await?;
 
         let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
-        let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
+        // no expected digest available
+        let data = blob.decode(self.crypt_config.as_ref().map(Arc::as_ref), None)?;
 
         let manifest = BackupManifest::from_data(&data[..], self.crypt_config.as_ref().map(Arc::as_ref))?;
 
diff --git a/src/client/remote_chunk_reader.rs b/src/client/remote_chunk_reader.rs
index bf195d66..4db11477 100644
--- a/src/client/remote_chunk_reader.rs
+++ b/src/client/remote_chunk_reader.rs
@@ -62,9 +62,7 @@ impl ReadChunk for RemoteChunkReader {
 
         let chunk = ReadChunk::read_raw_chunk(self, digest)?;
 
-        let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
-
-        // fixme: verify digest?
+        let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref), Some(digest))?;
 
         let use_cache = self.cache_hint.contains_key(digest);
         if use_cache {
@@ -94,9 +92,7 @@ impl AsyncReadChunk for RemoteChunkReader {
 
             let chunk = Self::read_raw_chunk(self, digest).await?;
 
-            let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
-
-            // fixme: verify digest?
+            let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref), Some(digest))?;
 
             let use_cache = self.cache_hint.contains_key(digest);
             if use_cache {
diff --git a/tests/blob_writer.rs b/tests/blob_writer.rs
index 3d17ebd6..7ea25bb8 100644
--- a/tests/blob_writer.rs
+++ b/tests/blob_writer.rs
@@ -21,9 +21,13 @@ lazy_static! {
         let key = [1u8; 32];
         Arc::new(CryptConfig::new(key).unwrap())
     };
+
+    static ref TEST_DIGEST_PLAIN: [u8; 32] = [83, 154, 96, 195, 167, 204, 38, 142, 204, 224, 130, 201, 24, 71, 2, 188, 130, 155, 177, 6, 162, 100, 61, 238, 38, 219, 63, 240, 191, 132, 87, 238];
+
+    static ref TEST_DIGEST_ENC: [u8; 32] = [50, 162, 191, 93, 255, 132, 9, 14, 127, 23, 92, 39, 246, 102, 245, 204, 130, 104, 4, 106, 182, 239, 218, 14, 80, 17, 150, 188, 239, 253, 198, 117];
 }
 
-fn verify_test_blob(mut cursor: Cursor<Vec<u8>>) -> Result<(), Error> {
+fn verify_test_blob(mut cursor: Cursor<Vec<u8>>, digest: &[u8; 32]) -> Result<(), Error> {
 
     // run read tests with different buffer sizes
     for size in [1, 3, 64*1024].iter() {
@@ -52,7 +56,7 @@ fn verify_test_blob(mut cursor: Cursor<Vec<u8>>) -> Result<(), Error> {
 
     let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
 
-    let data = blob.decode(Some(&CRYPT_CONFIG))?;
+    let data = blob.decode(Some(&CRYPT_CONFIG), Some(digest))?;
     if data != *TEST_DATA {
         bail!("blob data is wrong (decode)");
     }
@@ -65,7 +69,7 @@ fn test_uncompressed_blob_writer() -> Result<(), Error> {
     let mut blob_writer = DataBlobWriter::new_uncompressed(tmp)?;
     blob_writer.write_all(&TEST_DATA)?;
 
-    verify_test_blob(blob_writer.finish()?)
+    verify_test_blob(blob_writer.finish()?, &*TEST_DIGEST_PLAIN)
 }
 
 #[test]
@@ -74,7 +78,7 @@ fn test_compressed_blob_writer() -> Result<(), Error> {
     let mut blob_writer = DataBlobWriter::new_compressed(tmp)?;
     blob_writer.write_all(&TEST_DATA)?;
 
-    verify_test_blob(blob_writer.finish()?)
+    verify_test_blob(blob_writer.finish()?, &*TEST_DIGEST_PLAIN)
 }
 
 #[test]
@@ -83,7 +87,7 @@ fn test_encrypted_blob_writer() -> Result<(), Error> {
     let mut blob_writer = DataBlobWriter::new_encrypted(tmp, CRYPT_CONFIG.clone())?;
     blob_writer.write_all(&TEST_DATA)?;
 
-    verify_test_blob(blob_writer.finish()?)
+    verify_test_blob(blob_writer.finish()?, &*TEST_DIGEST_ENC)
 }
 
 #[test]
@@ -92,5 +96,5 @@ fn test_encrypted_compressed_blob_writer() -> Result<(), Error> {
     let mut blob_writer = DataBlobWriter::new_encrypted_compressed(tmp, CRYPT_CONFIG.clone())?;
     blob_writer.write_all(&TEST_DATA)?;
 
-    verify_test_blob(blob_writer.finish()?)
+    verify_test_blob(blob_writer.finish()?, &*TEST_DIGEST_ENC)
 }
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup 2/4] sync: verify chunk size and digest, if possible
  2020-08-03 12:10 [pbs-devel] [PATCH proxmox-backup 0/4] add missing digest verification Fabian Grünbichler
  2020-08-03 12:10 ` [pbs-devel] [PATCH proxmox-backup 1/4] blobs: attempt to verify on decode when possible Fabian Grünbichler
@ 2020-08-03 12:10 ` Fabian Grünbichler
  2020-08-03 12:10 ` [pbs-devel] [PATCH proxmox-backup 3/4] sync: verify size and checksum of pulled archives Fabian Grünbichler
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Fabian Grünbichler @ 2020-08-03 12:10 UTC (permalink / raw)
  To: pbs-devel

for encrypted chunks this is currently not possible, as we need the key
to decode the chunk.
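
roughly, the pull-side check looks like this (an assumed, simplified
shape, not the actual verify_unencrypted implementation; `toy_digest`
stands in for SHA-256): the size and digest are checked for unencrypted
chunks, while encrypted chunks are skipped because without the key we
can neither decode them nor recompute their digest.

```rust
#[derive(PartialEq)]
enum CryptMode { None, Encrypt }

struct Chunk { mode: CryptMode, data: Vec<u8> }

// stand-in for the real SHA-256 digest so the sketch is self-contained
fn toy_digest(data: &[u8]) -> [u8; 32] {
    let mut d = [0u8; 32];
    for (i, b) in data.iter().enumerate() {
        d[i % 32] = d[i % 32].wrapping_add(*b);
    }
    d
}

fn verify_unencrypted(
    chunk: &Chunk,
    expected_size: usize,
    expected_digest: &[u8; 32],
) -> Result<(), String> {
    if chunk.mode == CryptMode::Encrypt {
        return Ok(()); // cannot verify without the key
    }
    if chunk.data.len() != expected_size {
        return Err(format!(
            "detected chunk with wrong length ({} != {})",
            expected_size,
            chunk.data.len()
        ));
    }
    if &toy_digest(&chunk.data) != expected_digest {
        return Err("detected chunk with wrong digest".to_string());
    }
    Ok(())
}

fn main() {
    let plain = Chunk { mode: CryptMode::None, data: b"chunk".to_vec() };
    let digest = toy_digest(&plain.data);
    assert!(verify_unencrypted(&plain, 5, &digest).is_ok());
    assert!(verify_unencrypted(&plain, 4, &digest).is_err()); // wrong size
    let enc = Chunk { mode: CryptMode::Encrypt, data: b"????".to_vec() };
    assert!(verify_unencrypted(&enc, 0, &[0u8; 32]).is_ok()); // skipped
    println!("ok");
}
```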

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/client/pull.rs                | 10 ++++++----
 src/client/remote_chunk_reader.rs |  4 ++--
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/src/client/pull.rs b/src/client/pull.rs
index fc193a8d..629e8266 100644
--- a/src/client/pull.rs
+++ b/src/client/pull.rs
@@ -27,16 +27,18 @@ async fn pull_index_chunks<I: IndexFile>(
 
 
     for pos in 0..index.index_count() {
-        let digest = index.index_digest(pos).unwrap();
-        let chunk_exists = target.cond_touch_chunk(digest, false)?;
+        let info = index.chunk_info(pos).unwrap();
+        let chunk_exists = target.cond_touch_chunk(&info.digest, false)?;
         if chunk_exists {
             //worker.log(format!("chunk {} exists {}", pos, proxmox::tools::digest_to_hex(digest)));
             continue;
         }
         //worker.log(format!("sync {} chunk {}", pos, proxmox::tools::digest_to_hex(digest)));
-        let chunk = chunk_reader.read_raw_chunk(&digest).await?;
+        let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
 
-        target.insert_chunk(&chunk, &digest)?;
+        chunk.verify_unencrypted(info.size() as usize, &info.digest)?;
+
+        target.insert_chunk(&chunk, &info.digest)?;
     }
 
     Ok(())
diff --git a/src/client/remote_chunk_reader.rs b/src/client/remote_chunk_reader.rs
index 4db11477..b30d0567 100644
--- a/src/client/remote_chunk_reader.rs
+++ b/src/client/remote_chunk_reader.rs
@@ -35,6 +35,8 @@ impl RemoteChunkReader {
         }
     }
 
+    /// Downloads raw chunk. This only verifies the (untrusted) CRC32, use
+    /// DataBlob::verify_unencrypted or DataBlob::decode before storing/processing further.
     pub async fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
         let mut chunk_data = Vec::with_capacity(4 * 1024 * 1024);
 
@@ -43,8 +45,6 @@ impl RemoteChunkReader {
             .await?;
 
         let chunk = DataBlob::load_from_reader(&mut &chunk_data[..])?;
-        
-        // fixme: verify digest?
 
         Ok(chunk)
     }
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup 3/4] sync: verify size and checksum of pulled archives
  2020-08-03 12:10 [pbs-devel] [PATCH proxmox-backup 0/4] add missing digest verification Fabian Grünbichler
  2020-08-03 12:10 ` [pbs-devel] [PATCH proxmox-backup 1/4] blobs: attempt to verify on decode when possible Fabian Grünbichler
  2020-08-03 12:10 ` [pbs-devel] [PATCH proxmox-backup 2/4] sync: verify chunk size and digest, if possible Fabian Grünbichler
@ 2020-08-03 12:10 ` Fabian Grünbichler
  2020-08-03 12:10 ` [pbs-devel] [PATCH proxmox-backup 4/4] datastore: allow browsing signed pxar files Fabian Grünbichler
  2020-08-04  5:29 ` [pbs-devel] applied: [PATCH proxmox-backup 0/4] add missing digest verification Dietmar Maurer
  4 siblings, 0 replies; 6+ messages in thread
From: Fabian Grünbichler @ 2020-08-03 12:10 UTC (permalink / raw)
  To: pbs-devel

and not just of previously synced ones.

we can't use BackupManifest::verify_file as the archive is still stored
under the tmp path at this point.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/client/pull.rs | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/src/client/pull.rs b/src/client/pull.rs
index 629e8266..429ab458 100644
--- a/src/client/pull.rs
+++ b/src/client/pull.rs
@@ -62,15 +62,32 @@ async fn download_manifest(
     Ok(tmp_manifest_file)
 }
 
+fn verify_archive(
+    info: &FileInfo,
+    csum: &[u8; 32],
+    size: u64,
+) -> Result<(), Error> {
+    if size != info.size {
+        bail!("wrong size for file '{}' ({} != {})", info.filename, info.size, size);
+    }
+
+    if csum != &info.csum {
+        bail!("wrong checksum for file '{}'", info.filename);
+    }
+
+    Ok(())
+}
+
 async fn pull_single_archive(
     worker: &WorkerTask,
     reader: &BackupReader,
     chunk_reader: &mut RemoteChunkReader,
     tgt_store: Arc<DataStore>,
     snapshot: &BackupDir,
-    archive_name: &str,
+    archive_info: &FileInfo,
 ) -> Result<(), Error> {
 
+    let archive_name = &archive_info.filename;
     let mut path = tgt_store.base_path();
     path.push(snapshot.relative_path());
     path.push(archive_name);
@@ -91,16 +108,23 @@ async fn pull_single_archive(
         ArchiveType::DynamicIndex => {
             let index = DynamicIndexReader::new(tmpfile)
                 .map_err(|err| format_err!("unable to read dynamic index {:?} - {}", tmp_path, err))?;
+            let (csum, size) = index.compute_csum();
+            verify_archive(archive_info, &csum, size)?;
 
             pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?;
         }
         ArchiveType::FixedIndex => {
             let index = FixedIndexReader::new(tmpfile)
                 .map_err(|err| format_err!("unable to read fixed index '{:?}' - {}", tmp_path, err))?;
+            let (csum, size) = index.compute_csum();
+            verify_archive(archive_info, &csum, size)?;
 
             pull_index_chunks(worker, chunk_reader, tgt_store.clone(), index).await?;
         }
-        ArchiveType::Blob => { /* nothing to do */ }
+        ArchiveType::Blob => {
+            let (csum, size) = compute_file_csum(&mut tmpfile)?;
+            verify_archive(archive_info, &csum, size)?;
+        }
     }
     if let Err(err) = std::fs::rename(&tmp_path, &path) {
         bail!("Atomic rename file {:?} failed - {}", path, err);
@@ -248,7 +272,7 @@ async fn pull_snapshot(
             &mut chunk_reader,
             tgt_store.clone(),
             snapshot,
-            &item.filename,
+            &item,
         ).await?;
     }
 
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup 4/4] datastore: allow browsing signed pxar files
  2020-08-03 12:10 [pbs-devel] [PATCH proxmox-backup 0/4] add missing digest verification Fabian Grünbichler
                   ` (2 preceding siblings ...)
  2020-08-03 12:10 ` [pbs-devel] [PATCH proxmox-backup 3/4] sync: verify size and checksum of pulled archives Fabian Grünbichler
@ 2020-08-03 12:10 ` Fabian Grünbichler
  2020-08-04  5:29 ` [pbs-devel] applied: [PATCH proxmox-backup 0/4] add missing digest verification Dietmar Maurer
  4 siblings, 0 replies; 6+ messages in thread
From: Fabian Grünbichler @ 2020-08-03 12:10 UTC (permalink / raw)
  To: pbs-devel

just because we can't verify the signature does not mean the contents
are not accessible. it might make sense to make it obvious with a hint
or click-through warning that no signature verification can take place
for this and for downloading.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 www/DataStoreContent.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/DataStoreContent.js b/www/DataStoreContent.js
index e8690486..62e57817 100644
--- a/www/DataStoreContent.js
+++ b/www/DataStoreContent.js
@@ -505,7 +505,7 @@ Ext.define('PBS.DataStoreContent', {
 			return !(data.leaf &&
 			    data.filename &&
 			    data.filename.endsWith('pxar.didx') &&
-			    data['crypt-mode'] < 2);
+			    data['crypt-mode'] < 3);
 		    }
 		},
 	    ]
-- 
2.20.1






* [pbs-devel] applied: [PATCH proxmox-backup 0/4] add missing digest verification
  2020-08-03 12:10 [pbs-devel] [PATCH proxmox-backup 0/4] add missing digest verification Fabian Grünbichler
                   ` (3 preceding siblings ...)
  2020-08-03 12:10 ` [pbs-devel] [PATCH proxmox-backup 4/4] datastore: allow browsing signed pxar files Fabian Grünbichler
@ 2020-08-04  5:29 ` Dietmar Maurer
  4 siblings, 0 replies; 6+ messages in thread
From: Dietmar Maurer @ 2020-08-04  5:29 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Fabian Grünbichler

applied





