public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type
@ 2024-11-13 10:50 Christian Ebner
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 1/5] datastore: move `ArchiveType` to api types Christian Ebner
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Christian Ebner @ 2024-11-13 10:50 UTC (permalink / raw)
  To: pbs-devel

There is currently no dedicated api type for the archive names that are
given as input parameters to several api methods.

This patch series introduces a dedicated type for archive names, in
order to collect the code for checks and eventual mappings in one
location and to reduce possible unintentional misuse caused by passing
incorrect argument values to the functions and methods consuming
archive names.

Further, drop all archive name constants in favor of helper methods on
the api type to generate `BackupArchiveName` instances for them. This
allows for direct comparison with other `BackupArchiveName` instances.
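
For illustration, a minimal sketch of what such a comparison could look
like on the consumer side (the `manifest()` and `client_log()` helpers
are introduced in patch 2; the surrounding function is hypothetical):
```
use pbs_api_types::BackupArchiveName;

// Hypothetical consumer code: compare a parsed archive name against the
// well-known archives via the new helpers instead of string constants.
fn describe(archive_name: &BackupArchiveName) -> &'static str {
    if *archive_name == BackupArchiveName::manifest() {
        "snapshot manifest (index.json.blob)"
    } else if *archive_name == BackupArchiveName::client_log() {
        "client log (client.log.blob)"
    } else {
        "regular backup archive"
    }
}
```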

As a positive side effect, the mapping now also allows the server
archive type extension to be passed optionally as input to several
commands, e.g.
```
proxmox-backup-client restore <snapshot> <name>.pxar.didx <target>
```
is now valid and equivalent to
```
proxmox-backup-client restore <snapshot> <name>.pxar <target>
```

Changes since version 3:
- Removed catch-all fallback to blob type, reworked type parsing logic
- Removed archive name constants in favor of helper methods to generate
  archive names for them
- Extended tests

Changes since version 2:
- Rebased onto current master
- Amended commit messages

Changes since version 1 (thanks @Gabriel):
- Rebased onto current master
- Added unit tests for archive name parsing
- Added missing check for invalid archive names ending with '/'
- Inlined variable names for format strings
- Import implemented traits at top

Christian Ebner (5):
  datastore: move `ArchiveType` to api types
  api types: introduce `BackupArchiveName` type
  client/server: use dedicated api type for all archive names
  client: drop unused parse_archive_type helper
  api types: add unit tests for backup archive name parsing

 pbs-api-types/src/datastore.rs       | 238 ++++++++++++++++++++++++++-
 pbs-client/src/backup_reader.rs      |  18 +-
 pbs-client/src/backup_writer.rs      |  45 +++--
 pbs-client/src/pxar/tools.rs         |   3 +-
 pbs-client/src/tools/mod.rs          |  28 ++--
 pbs-datastore/src/backup_info.rs     |  21 +--
 pbs-datastore/src/datastore.rs       |   6 +-
 pbs-datastore/src/lib.rs             |   3 -
 pbs-datastore/src/manifest.rs        |  55 +++----
 pbs-datastore/src/snapshot_reader.rs |  11 +-
 proxmox-backup-client/src/catalog.rs |  35 ++--
 proxmox-backup-client/src/helper.rs  |   7 +-
 proxmox-backup-client/src/main.rs    | 138 +++++++++-------
 proxmox-backup-client/src/mount.rs   |  33 ++--
 proxmox-file-restore/src/main.rs     |  13 +-
 src/api2/admin/datastore.rs          |  70 ++++----
 src/api2/backup/mod.rs               |   3 +-
 src/api2/reader/mod.rs               |   7 +-
 src/api2/tape/restore.rs             |  17 +-
 src/backup/mod.rs                    |   3 -
 src/backup/verify.rs                 |   7 +-
 src/bin/proxmox_backup_debug/diff.rs |  16 +-
 src/server/pull.rs                   |  24 +--
 src/server/sync.rs                   |  10 +-
 tests/prune.rs                       |   5 +-
 25 files changed, 522 insertions(+), 294 deletions(-)

-- 
2.39.5



* [pbs-devel] [PATCH v4 proxmox-backup 1/5] datastore: move `ArchiveType` to api types
  2024-11-13 10:50 [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
@ 2024-11-13 10:50 ` Christian Ebner
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 2/5] api types: introduce `BackupArchiveName` type Christian Ebner
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2024-11-13 10:50 UTC (permalink / raw)
  To: pbs-devel

Move the `ArchiveType` to the api types to avoid crate dependencies on
`pbs-datastore`.

This is in preparation for introducing a dedicated `BackupArchiveName`
api type, which allows setting the corresponding archive type variant
when parsing the archive name based on its filename extension.
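
As a rough illustration of the parsing behavior after the move (a
sketch for demonstration only, not part of the patch):
```
use pbs_api_types::ArchiveType;

fn example() -> Result<(), anyhow::Error> {
    // the server file extension decides the variant ...
    let ty = ArchiveType::from_path("root.pxar.didx")?;
    assert!(ty == ArchiveType::DynamicIndex);
    // ... anything other than didx/fidx/blob is rejected
    assert!(ArchiveType::from_path("vm-disk.img").is_err());
    Ok(())
}
```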

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 3:
- no changes

 pbs-api-types/src/datastore.rs       | 23 ++++++++++++++++++++++-
 pbs-client/src/backup_writer.rs      |  4 ++--
 pbs-datastore/src/datastore.rs       |  6 +++---
 pbs-datastore/src/manifest.rs        | 24 +-----------------------
 pbs-datastore/src/snapshot_reader.rs |  4 ++--
 proxmox-backup-client/src/main.rs    | 12 +++++-------
 src/api2/backup/mod.rs               |  3 +--
 src/api2/reader/mod.rs               |  7 +++----
 src/api2/tape/restore.rs             | 10 +++++-----
 src/backup/verify.rs                 |  7 ++++---
 src/server/pull.rs                   |  9 ++++-----
 11 files changed, 52 insertions(+), 57 deletions(-)

diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index 31767417a..dfa6bb259 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -1,5 +1,5 @@
 use std::fmt;
-use std::path::PathBuf;
+use std::path::{Path, PathBuf};
 
 use anyhow::{bail, format_err, Error};
 use const_format::concatcp;
@@ -1569,3 +1569,24 @@ pub fn print_store_and_ns(store: &str, ns: &BackupNamespace) -> String {
         format!("datastore '{}', namespace '{}'", store, ns)
     }
 }
+
+#[derive(PartialEq, Eq)]
+/// Allowed variants of backup archives to be contained in a snapshot's manifest
+pub enum ArchiveType {
+    FixedIndex,
+    DynamicIndex,
+    Blob,
+}
+
+impl ArchiveType {
+    pub fn from_path(archive_name: impl AsRef<Path>) -> Result<Self, Error> {
+        let archive_name = archive_name.as_ref();
+        let archive_type = match archive_name.extension().and_then(|ext| ext.to_str()) {
+            Some("didx") => ArchiveType::DynamicIndex,
+            Some("fidx") => ArchiveType::FixedIndex,
+            Some("blob") => ArchiveType::Blob,
+            _ => bail!("unknown archive type: {archive_name:?}"),
+        };
+        Ok(archive_type)
+    }
+}
diff --git a/pbs-client/src/backup_writer.rs b/pbs-client/src/backup_writer.rs
index 4d2e8a801..8adaf9ef2 100644
--- a/pbs-client/src/backup_writer.rs
+++ b/pbs-client/src/backup_writer.rs
@@ -11,12 +11,12 @@ use tokio::io::AsyncReadExt;
 use tokio::sync::{mpsc, oneshot};
 use tokio_stream::wrappers::ReceiverStream;
 
-use pbs_api_types::{BackupDir, BackupNamespace};
+use pbs_api_types::{ArchiveType, BackupDir, BackupNamespace};
 use pbs_datastore::data_blob::{ChunkInfo, DataBlob, DataChunkBuilder};
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{ArchiveType, BackupManifest, MANIFEST_BLOB_NAME};
+use pbs_datastore::manifest::{BackupManifest, MANIFEST_BLOB_NAME};
 use pbs_datastore::{CATALOG_NAME, PROXMOX_BACKUP_PROTOCOL_ID_V1};
 use pbs_tools::crypt_config::CryptConfig;
 
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index d0f3c53ac..d5419f881 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -18,8 +18,9 @@ use proxmox_sys::process_locker::ProcessLockSharedGuard;
 use proxmox_worker_task::WorkerTaskContext;
 
 use pbs_api_types::{
-    Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig, DatastoreFSyncLevel,
-    DatastoreTuning, GarbageCollectionStatus, MaintenanceMode, MaintenanceType, Operation, UPID,
+    ArchiveType, Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig,
+    DatastoreFSyncLevel, DatastoreTuning, GarbageCollectionStatus, MaintenanceMode,
+    MaintenanceType, Operation, UPID,
 };
 
 use crate::backup_info::{BackupDir, BackupGroup, BackupGroupDeleteStats};
@@ -28,7 +29,6 @@ use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
 use crate::fixed_index::{FixedIndexReader, FixedIndexWriter};
 use crate::hierarchy::{ListGroups, ListGroupsType, ListNamespaces, ListNamespacesRecursive};
 use crate::index::IndexFile;
-use crate::manifest::ArchiveType;
 use crate::task_tracking::{self, update_active_operations};
 use crate::DataBlob;
 
diff --git a/pbs-datastore/src/manifest.rs b/pbs-datastore/src/manifest.rs
index c3df01427..823c85003 100644
--- a/pbs-datastore/src/manifest.rs
+++ b/pbs-datastore/src/manifest.rs
@@ -1,11 +1,9 @@
-use std::path::Path;
-
 use anyhow::{bail, format_err, Error};
 
 use serde::{Deserialize, Serialize};
 use serde_json::{json, Value};
 
-use pbs_api_types::{BackupType, CryptMode, Fingerprint};
+use pbs_api_types::{ArchiveType, BackupType, CryptMode, Fingerprint};
 use pbs_tools::crypt_config::CryptConfig;
 
 pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
@@ -56,26 +54,6 @@ pub struct BackupManifest {
     pub signature: Option<String>,
 }
 
-#[derive(PartialEq, Eq)]
-pub enum ArchiveType {
-    FixedIndex,
-    DynamicIndex,
-    Blob,
-}
-
-impl ArchiveType {
-    pub fn from_path(archive_name: impl AsRef<Path>) -> Result<Self, Error> {
-        let archive_name = archive_name.as_ref();
-        let archive_type = match archive_name.extension().and_then(|ext| ext.to_str()) {
-            Some("didx") => ArchiveType::DynamicIndex,
-            Some("fidx") => ArchiveType::FixedIndex,
-            Some("blob") => ArchiveType::Blob,
-            _ => bail!("unknown archive type: {:?}", archive_name),
-        };
-        Ok(archive_type)
-    }
-}
-
 impl BackupManifest {
     pub fn new(snapshot: pbs_api_types::BackupDir) -> Self {
         Self {
diff --git a/pbs-datastore/src/snapshot_reader.rs b/pbs-datastore/src/snapshot_reader.rs
index f9c772079..432701ea0 100644
--- a/pbs-datastore/src/snapshot_reader.rs
+++ b/pbs-datastore/src/snapshot_reader.rs
@@ -8,13 +8,13 @@ use nix::dir::Dir;
 
 use proxmox_sys::fs::lock_dir_noblock_shared;
 
-use pbs_api_types::{print_store_and_ns, BackupNamespace, Operation};
+use pbs_api_types::{print_store_and_ns, ArchiveType, BackupNamespace, Operation};
 
 use crate::backup_info::BackupDir;
 use crate::dynamic_index::DynamicIndexReader;
 use crate::fixed_index::FixedIndexReader;
 use crate::index::IndexFile;
-use crate::manifest::{ArchiveType, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME};
+use crate::manifest::{CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME};
 use crate::DataStore;
 
 /// Helper to access the contents of a datastore backup snapshot
diff --git a/proxmox-backup-client/src/main.rs b/proxmox-backup-client/src/main.rs
index e4034aa99..f6fb3555e 100644
--- a/proxmox-backup-client/src/main.rs
+++ b/proxmox-backup-client/src/main.rs
@@ -25,10 +25,10 @@ use pxar::accessor::aio::Accessor;
 use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
 
 use pbs_api_types::{
-    Authid, BackupDir, BackupGroup, BackupNamespace, BackupPart, BackupType, ClientRateLimitConfig,
-    CryptMode, Fingerprint, GroupListItem, PruneJobOptions, PruneListItem, RateLimitConfig,
-    SnapshotListItem, StorageStatus, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA,
-    BACKUP_TYPE_SCHEMA,
+    ArchiveType, Authid, BackupDir, BackupGroup, BackupNamespace, BackupPart, BackupType,
+    ClientRateLimitConfig, CryptMode, Fingerprint, GroupListItem, PruneJobOptions, PruneListItem,
+    RateLimitConfig, SnapshotListItem, StorageStatus, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA,
+    BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA,
 };
 use pbs_client::catalog_shell::Shell;
 use pbs_client::pxar::{ErrorHandler as PxarErrorHandler, MetadataArchiveReader, PxarPrevRef};
@@ -54,9 +54,7 @@ use pbs_datastore::chunk_store::verify_chunk_size;
 use pbs_datastore::dynamic_index::{BufferedDynamicReader, DynamicIndexReader, LocalDynamicReadAt};
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{
-    ArchiveType, BackupManifest, ENCRYPTED_KEY_BLOB_NAME, MANIFEST_BLOB_NAME,
-};
+use pbs_datastore::manifest::{BackupManifest, ENCRYPTED_KEY_BLOB_NAME, MANIFEST_BLOB_NAME};
 use pbs_datastore::read_chunk::AsyncReadChunk;
 use pbs_datastore::CATALOG_NAME;
 use pbs_key_config::{decrypt_key, rsa_encrypt_key_config, KeyConfig};
diff --git a/src/api2/backup/mod.rs b/src/api2/backup/mod.rs
index ea0d0292e..92e79a267 100644
--- a/src/api2/backup/mod.rs
+++ b/src/api2/backup/mod.rs
@@ -19,13 +19,12 @@ use proxmox_sortable_macro::sortable;
 use proxmox_sys::fs::lock_dir_noblock_shared;
 
 use pbs_api_types::{
-    Authid, BackupNamespace, BackupType, Operation, SnapshotVerifyState, VerifyState,
+    ArchiveType, Authid, BackupNamespace, BackupType, Operation, SnapshotVerifyState, VerifyState,
     BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA,
     BACKUP_TYPE_SCHEMA, CHUNK_DIGEST_SCHEMA, DATASTORE_SCHEMA, PRIV_DATASTORE_BACKUP,
 };
 use pbs_config::CachedUserInfo;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::ArchiveType;
 use pbs_datastore::{DataStore, PROXMOX_BACKUP_PROTOCOL_ID_V1};
 use pbs_tools::json::{required_array_param, required_integer_param, required_string_param};
 
diff --git a/src/api2/reader/mod.rs b/src/api2/reader/mod.rs
index 23051653e..50f80de43 100644
--- a/src/api2/reader/mod.rs
+++ b/src/api2/reader/mod.rs
@@ -19,13 +19,12 @@ use proxmox_sortable_macro::sortable;
 use proxmox_sys::fs::lock_dir_noblock_shared;
 
 use pbs_api_types::{
-    Authid, Operation, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA,
-    BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, CHUNK_DIGEST_SCHEMA, DATASTORE_SCHEMA,
-    PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+    ArchiveType, Authid, Operation, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA,
+    BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, CHUNK_DIGEST_SCHEMA,
+    DATASTORE_SCHEMA, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
 };
 use pbs_config::CachedUserInfo;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::ArchiveType;
 use pbs_datastore::{DataStore, PROXMOX_BACKUP_READER_PROTOCOL_ID_V1};
 use pbs_tools::json::required_string_param;
 
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index f7481bacc..a180a4b02 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -19,10 +19,10 @@ use proxmox_uuid::Uuid;
 use proxmox_worker_task::WorkerTaskContext;
 
 use pbs_api_types::{
-    parse_ns_and_snapshot, print_ns_and_snapshot, Authid, BackupDir, BackupNamespace, CryptMode,
-    NotificationMode, Operation, TapeRestoreNamespace, Userid, DATASTORE_MAP_ARRAY_SCHEMA,
-    DATASTORE_MAP_LIST_SCHEMA, DRIVE_NAME_SCHEMA, MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP,
-    PRIV_DATASTORE_MODIFY, PRIV_TAPE_READ, TAPE_RESTORE_NAMESPACE_SCHEMA,
+    parse_ns_and_snapshot, print_ns_and_snapshot, ArchiveType, Authid, BackupDir, BackupNamespace,
+    CryptMode, NotificationMode, Operation, TapeRestoreNamespace, Userid,
+    DATASTORE_MAP_ARRAY_SCHEMA, DATASTORE_MAP_LIST_SCHEMA, DRIVE_NAME_SCHEMA, MAX_NAMESPACE_DEPTH,
+    PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_TAPE_READ, TAPE_RESTORE_NAMESPACE_SCHEMA,
     TAPE_RESTORE_SNAPSHOT_SCHEMA, UPID_SCHEMA,
 };
 use pbs_client::pxar::tools::handle_root_with_optional_format_version_prelude;
@@ -30,7 +30,7 @@ use pbs_config::CachedUserInfo;
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{ArchiveType, BackupManifest, MANIFEST_BLOB_NAME};
+use pbs_datastore::manifest::{BackupManifest, MANIFEST_BLOB_NAME};
 use pbs_datastore::{DataBlob, DataStore};
 use pbs_tape::{
     BlockReadError, MediaContentHeader, TapeRead, PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index 6ef7e8eb3..fee6ecf5f 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -11,12 +11,13 @@ use proxmox_sys::fs::lock_dir_noblock_shared;
 use proxmox_worker_task::WorkerTaskContext;
 
 use pbs_api_types::{
-    print_ns_and_snapshot, print_store_and_ns, Authid, BackupNamespace, BackupType, CryptMode,
-    SnapshotVerifyState, VerifyState, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_VERIFY, UPID,
+    print_ns_and_snapshot, print_store_and_ns, ArchiveType, Authid, BackupNamespace, BackupType,
+    CryptMode, SnapshotVerifyState, VerifyState, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_VERIFY,
+    UPID,
 };
 use pbs_datastore::backup_info::{BackupDir, BackupGroup, BackupInfo};
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{ArchiveType, BackupManifest, FileInfo};
+use pbs_datastore::manifest::{BackupManifest, FileInfo};
 use pbs_datastore::{DataBlob, DataStore, StoreProgress};
 
 use crate::tools::parallel_handler::ParallelHandler;
diff --git a/src/server/pull.rs b/src/server/pull.rs
index d9584776e..8d53ccd6e 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -11,8 +11,9 @@ use proxmox_human_byte::HumanByte;
 use tracing::info;
 
 use pbs_api_types::{
-    print_store_and_ns, Authid, BackupDir, BackupGroup, BackupNamespace, GroupFilter, Operation,
-    RateLimitConfig, Remote, MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
+    print_store_and_ns, ArchiveType, Authid, BackupDir, BackupGroup, BackupNamespace, GroupFilter,
+    Operation, RateLimitConfig, Remote, MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_AUDIT,
+    PRIV_DATASTORE_BACKUP,
 };
 use pbs_client::BackupRepository;
 use pbs_config::CachedUserInfo;
@@ -20,9 +21,7 @@ use pbs_datastore::data_blob::DataBlob;
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{
-    ArchiveType, BackupManifest, FileInfo, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME,
-};
+use pbs_datastore::manifest::{BackupManifest, FileInfo, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME};
 use pbs_datastore::read_chunk::AsyncReadChunk;
 use pbs_datastore::{check_backup_owner, DataStore, StoreProgress};
 use pbs_tools::sha::sha256;
-- 
2.39.5



* [pbs-devel] [PATCH v4 proxmox-backup 2/5] api types: introduce `BackupArchiveName` type
  2024-11-13 10:50 [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 1/5] datastore: move `ArchiveType` to api types Christian Ebner
@ 2024-11-13 10:50 ` Christian Ebner
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 3/5] client/server: use dedicated api type for all archive names Christian Ebner
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2024-11-13 10:50 UTC (permalink / raw)
  To: pbs-devel

Introduce a dedicated wrapper type to be used for backup archive names
instead of plain strings, together with associated helper methods for
archive type checks and archive name mappings.
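
A short sketch of the intended parsing behavior (illustrative only,
based on the `TryFrom<&str>` implementation and extension mapping
added below):
```
use pbs_api_types::{ArchiveType, BackupArchiveName};

fn example() -> Result<(), anyhow::Error> {
    // an explicit server archive type extension is accepted as-is
    let a = BackupArchiveName::try_from("root.pxar.didx")?;
    // a client-side name without it maps to the same server archive name
    let b = BackupArchiveName::try_from("root.pxar")?;
    assert!(a == b);
    assert!(a.archive_type() == ArchiveType::DynamicIndex);
    assert_eq!(b.to_string(), "root.pxar.didx");
    Ok(())
}
```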

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 3:
- reworked archive type parsing, removed the catch-all blob mapping and
  made the mapping more expressive by using a match statement
- added helpers for common archive names such as catalog, manifest, key
  files, etc., to be used instead of the const definitions for comparisons
- introduced an extension helper for `ArchiveType`, to be used for
  archive name string generation

 pbs-api-types/src/datastore.rs | 153 ++++++++++++++++++++++++++++++++-
 1 file changed, 152 insertions(+), 1 deletion(-)

diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index dfa6bb259..00ac63255 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -1,5 +1,7 @@
+use std::convert::{AsRef, TryFrom};
 use std::fmt;
 use std::path::{Path, PathBuf};
+use std::str::FromStr;
 
 use anyhow::{bail, format_err, Error};
 use const_format::concatcp;
@@ -1570,7 +1572,7 @@ pub fn print_store_and_ns(store: &str, ns: &BackupNamespace) -> String {
     }
 }
 
-#[derive(PartialEq, Eq)]
+#[derive(Clone, PartialEq, Eq)]
 /// Allowed variants of backup archives to be contained in a snapshot's manifest
 pub enum ArchiveType {
     FixedIndex,
@@ -1589,4 +1591,153 @@ impl ArchiveType {
         };
         Ok(archive_type)
     }
+
+    pub fn extension(&self) -> &'static str {
+        match self {
+            ArchiveType::DynamicIndex => "didx",
+            ArchiveType::FixedIndex => "fidx",
+            ArchiveType::Blob => "blob",
+        }
+    }
+}
+
+#[derive(Clone, PartialEq, Eq)]
+/// Name of archive files contained in snapshot's manifest
+pub struct BackupArchiveName {
+    // archive name including the `.fidx`, `.didx` or `.blob` archive type extension
+    name: String,
+    // archive type parsed based on given extension
+    ty: ArchiveType,
+}
+
+impl fmt::Display for BackupArchiveName {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        write!(f, "{name}", name = self.name)
+    }
+}
+
+serde_plain::derive_deserialize_from_fromstr!(BackupArchiveName, "archive name");
+
+impl FromStr for BackupArchiveName {
+    type Err = Error;
+
+    fn from_str(name: &str) -> Result<Self, Self::Err> {
+        Self::try_from(name)
+    }
+}
+
+serde_plain::derive_serialize_from_display!(BackupArchiveName);
+
+impl TryFrom<&str> for BackupArchiveName {
+    type Error = anyhow::Error;
+
+    fn try_from(value: &str) -> Result<Self, Self::Error> {
+        let (name, ty) = Self::parse_archive_type(value)?;
+        Ok(Self { name, ty })
+    }
+}
+
+impl AsRef<str> for BackupArchiveName {
+    fn as_ref(&self) -> &str {
+        &self.name
+    }
+}
+
+impl BackupArchiveName {
+    pub fn from_path(path: impl AsRef<Path>) -> Result<Self, Error> {
+        let path = path.as_ref();
+        if path.as_os_str().as_encoded_bytes().last() == Some(&b'/') {
+            bail!("invalid archive name, got directory");
+        }
+        let file_name = path
+            .file_name()
+            .ok_or_else(|| format_err!("invalid archive name"))?;
+        let name = file_name
+            .to_str()
+            .ok_or_else(|| format_err!("archive name not valid UTF-8"))?;
+
+        Self::try_from(name)
+    }
+
+    pub fn catalog() -> Self {
+        // Note: .pcat1 => Proxmox Catalog Format version 1
+        Self {
+            name: "catalog.pcat1.didx".to_string(),
+            ty: ArchiveType::DynamicIndex,
+        }
+    }
+
+    pub fn manifest() -> Self {
+        Self {
+            name: "index.json.blob".to_string(),
+            ty: ArchiveType::Blob,
+        }
+    }
+
+    pub fn client_log() -> Self {
+        Self {
+            name: "client.log.blob".to_string(),
+            ty: ArchiveType::Blob,
+        }
+    }
+
+    pub fn encrypted_key() -> Self {
+        Self {
+            name: "rsa-encrypted.key.blob".to_string(),
+            ty: ArchiveType::Blob,
+        }
+    }
+
+    pub fn archive_type(&self) -> ArchiveType {
+        self.ty.clone()
+    }
+
+    pub fn ends_with(&self, postfix: &str) -> bool {
+        self.name.ends_with(postfix)
+    }
+
+    pub fn has_pxar_filename_extension(&self) -> bool {
+        self.name.ends_with(".pxar.didx")
+            || self.name.ends_with(".mpxar.didx")
+            || self.name.ends_with(".ppxar.didx")
+    }
+
+    pub fn without_type_extension(&self) -> String {
+        self.name
+            .strip_suffix(&format!(".{ext}", ext = self.ty.extension()))
+            .unwrap()
+            .into()
+    }
+
+    fn parse_archive_type(archive_name: &str) -> Result<(String, ArchiveType), Error> {
+        // Detect archive type via given server archive name type extension, if present
+        if let Ok(archive_type) = ArchiveType::from_path(archive_name) {
+            return Ok((archive_name.into(), archive_type));
+        }
+
+        // No server archive name type extension in archive name, map based on extension
+        let archive_type = match Path::new(archive_name)
+            .extension()
+            .and_then(|ext| ext.to_str())
+        {
+            Some("pxar") => ArchiveType::DynamicIndex,
+            Some("mpxar") => ArchiveType::DynamicIndex,
+            Some("ppxar") => ArchiveType::DynamicIndex,
+            Some("pcat1") => ArchiveType::DynamicIndex,
+            Some("img") => ArchiveType::FixedIndex,
+            Some("json") => ArchiveType::Blob,
+            Some("key") => ArchiveType::Blob,
+            Some("log") => ArchiveType::Blob,
+            _ => bail!("failed to parse archive type for '{archive_name}'"),
+        };
+
+        Ok((
+            format!("{archive_name}.{ext}", ext = archive_type.extension()),
+            archive_type,
+        ))
+    }
+}
+
+impl ApiType for BackupArchiveName {
+    const API_SCHEMA: Schema = BACKUP_ARCHIVE_NAME_SCHEMA;
 }
-- 
2.39.5



* [pbs-devel] [PATCH v4 proxmox-backup 3/5] client/server: use dedicated api type for all archive names
  2024-11-13 10:50 [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 1/5] datastore: move `ArchiveType` to api types Christian Ebner
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 2/5] api types: introduce `BackupArchiveName` type Christian Ebner
@ 2024-11-13 10:50 ` Christian Ebner
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 4/5] client: drop unused parse_archive_type helper Christian Ebner
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2024-11-13 10:50 UTC (permalink / raw)
  To: pbs-devel

Instead of using plain Strings or string slices for archive names, use
the dedicated api type and its methods to parse and check the archive
type based on the archive filename extension.

This keeps the checks and mappings in the api type and restricts
function parameters to the narrower wrapper type, reducing potential
misuse.

Further, instead of declaring and using the archive name constants
throughout the codebase, use the `BackupArchiveName` helpers to
generate the archive names for manifest, client logs and encryption
keys.

This allows for easy archive name comparisons using the same
`BackupArchiveName` type, at the cost of some extra allocations, and
avoids the duplicate declaration of the `CATALOG_NAME` constant that is
currently present.

A positive ergonomic side effect of this is that commands now also
optionally accept the archive type extension when passing the archive
name.

E.g.
```
proxmox-backup-client restore <snapshot> <name>.pxar.didx <target>
```
is equal to
```
proxmox-backup-client restore <snapshot> <name>.pxar <target>
```

The previous catch-all mapping of any other archive name extension to a
blob has been dropped in favor of the consistent mapping performed by
the api type helpers.
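
For example, a consumer-side lookup can then be written along these
lines (minimal sketch; the function itself is hypothetical, only the
types and the `lookup_file_info()` signature come from this series):
```
use pbs_api_types::BackupArchiveName;
use pbs_datastore::BackupManifest;

// Look up the catalog entry in a manifest via the typed helper instead
// of the former CATALOG_NAME string constant.
fn catalog_size(manifest: &BackupManifest) -> Result<u64, anyhow::Error> {
    let catalog_name = BackupArchiveName::catalog();
    let info = manifest.lookup_file_info(&catalog_name)?;
    Ok(info.size)
}
```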

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 3:
- use api type helpers and drop consts for archive names
- adapt to rework of mapping from previous patch in this series,
  especially use the `ArchiveType::extension()` helper for pxar archive
  name generation (when mapping regular -> split pxar archives).

 pbs-client/src/backup_reader.rs      |  18 ++--
 pbs-client/src/backup_writer.rs      |  45 +++++-----
 pbs-client/src/pxar/tools.rs         |   3 +-
 pbs-client/src/tools/mod.rs          |  28 +++---
 pbs-datastore/src/backup_info.rs     |  21 ++---
 pbs-datastore/src/lib.rs             |   3 -
 pbs-datastore/src/manifest.rs        |  33 ++++---
 pbs-datastore/src/snapshot_reader.rs |  11 +--
 proxmox-backup-client/src/catalog.rs |  35 ++++----
 proxmox-backup-client/src/helper.rs  |   7 +-
 proxmox-backup-client/src/main.rs    | 124 +++++++++++++++++----------
 proxmox-backup-client/src/mount.rs   |  33 +++----
 proxmox-file-restore/src/main.rs     |  13 +--
 src/api2/admin/datastore.rs          |  70 +++++++--------
 src/api2/tape/restore.rs             |  17 ++--
 src/backup/mod.rs                    |   3 -
 src/bin/proxmox_backup_debug/diff.rs |  16 ++--
 src/server/pull.rs                   |  23 ++---
 src/server/sync.rs                   |  10 +--
 tests/prune.rs                       |   5 +-
 20 files changed, 274 insertions(+), 244 deletions(-)

diff --git a/pbs-client/src/backup_reader.rs b/pbs-client/src/backup_reader.rs
index 4706abc78..24c2edbba 100644
--- a/pbs-client/src/backup_reader.rs
+++ b/pbs-client/src/backup_reader.rs
@@ -6,13 +6,12 @@ use std::sync::Arc;
 use futures::future::AbortHandle;
 use serde_json::{json, Value};
 
-use pbs_api_types::{BackupDir, BackupNamespace};
+use pbs_api_types::{BackupArchiveName, BackupDir, BackupNamespace};
 use pbs_datastore::data_blob::DataBlob;
 use pbs_datastore::data_blob_reader::DataBlobReader;
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::MANIFEST_BLOB_NAME;
 use pbs_datastore::{BackupManifest, PROXMOX_BACKUP_READER_PROTOCOL_ID_V1};
 use pbs_tools::crypt_config::CryptConfig;
 use pbs_tools::sha::sha256;
@@ -127,7 +126,8 @@ impl BackupReader {
     /// The manifest signature is verified if we have a crypt_config.
     pub async fn download_manifest(&self) -> Result<(BackupManifest, Vec<u8>), Error> {
         let mut raw_data = Vec::with_capacity(64 * 1024);
-        self.download(MANIFEST_BLOB_NAME, &mut raw_data).await?;
+        self.download(BackupArchiveName::manifest().as_ref(), &mut raw_data)
+            .await?;
         let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
         // no expected digest available
         let data = blob.decode(None, None)?;
@@ -145,11 +145,11 @@ impl BackupReader {
     pub async fn download_blob(
         &self,
         manifest: &BackupManifest,
-        name: &str,
+        name: &BackupArchiveName,
     ) -> Result<DataBlobReader<'_, File>, Error> {
         let mut tmpfile = crate::tools::create_tmp_file()?;
 
-        self.download(name, &mut tmpfile).await?;
+        self.download(name.as_ref(), &mut tmpfile).await?;
 
         tmpfile.seek(SeekFrom::Start(0))?;
         let (csum, size) = sha256(&mut tmpfile)?;
@@ -167,11 +167,11 @@ impl BackupReader {
     pub async fn download_dynamic_index(
         &self,
         manifest: &BackupManifest,
-        name: &str,
+        name: &BackupArchiveName,
     ) -> Result<DynamicIndexReader, Error> {
         let mut tmpfile = crate::tools::create_tmp_file()?;
 
-        self.download(name, &mut tmpfile).await?;
+        self.download(name.as_ref(), &mut tmpfile).await?;
 
         let index = DynamicIndexReader::new(tmpfile)
             .map_err(|err| format_err!("unable to read dynamic index '{}' - {}", name, err))?;
@@ -190,11 +190,11 @@ impl BackupReader {
     pub async fn download_fixed_index(
         &self,
         manifest: &BackupManifest,
-        name: &str,
+        name: &BackupArchiveName,
     ) -> Result<FixedIndexReader, Error> {
         let mut tmpfile = crate::tools::create_tmp_file()?;
 
-        self.download(name, &mut tmpfile).await?;
+        self.download(name.as_ref(), &mut tmpfile).await?;
 
         let index = FixedIndexReader::new(tmpfile)
             .map_err(|err| format_err!("unable to read fixed index '{}' - {}", name, err))?;
diff --git a/pbs-client/src/backup_writer.rs b/pbs-client/src/backup_writer.rs
index 8adaf9ef2..cd142cff4 100644
--- a/pbs-client/src/backup_writer.rs
+++ b/pbs-client/src/backup_writer.rs
@@ -11,13 +11,13 @@ use tokio::io::AsyncReadExt;
 use tokio::sync::{mpsc, oneshot};
 use tokio_stream::wrappers::ReceiverStream;
 
-use pbs_api_types::{ArchiveType, BackupDir, BackupNamespace};
+use pbs_api_types::{ArchiveType, BackupArchiveName, BackupDir, BackupNamespace};
 use pbs_datastore::data_blob::{ChunkInfo, DataBlob, DataChunkBuilder};
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{BackupManifest, MANIFEST_BLOB_NAME};
-use pbs_datastore::{CATALOG_NAME, PROXMOX_BACKUP_PROTOCOL_ID_V1};
+use pbs_datastore::manifest::BackupManifest;
+use pbs_datastore::PROXMOX_BACKUP_PROTOCOL_ID_V1;
 use pbs_tools::crypt_config::CryptConfig;
 
 use proxmox_human_byte::HumanByte;
@@ -270,7 +270,7 @@ impl BackupWriter {
 
     pub async fn upload_stream(
         &self,
-        archive_name: &str,
+        archive_name: &BackupArchiveName,
         stream: impl Stream<Item = Result<bytes::BytesMut, Error>>,
         options: UploadOptions,
         injections: Option<std::sync::mpsc::Receiver<InjectChunks>>,
@@ -296,13 +296,13 @@ impl BackupWriter {
             if !manifest
                 .files()
                 .iter()
-                .any(|file| file.filename == archive_name)
+                .any(|file| file.filename == archive_name.as_ref())
             {
                 log::info!("Previous manifest does not contain an archive called '{archive_name}', skipping download..");
             } else {
                 // try, but ignore errors
-                match ArchiveType::from_path(archive_name) {
-                    Ok(ArchiveType::FixedIndex) => {
+                match archive_name.archive_type() {
+                    ArchiveType::FixedIndex => {
                         if let Err(err) = self
                             .download_previous_fixed_index(
                                 archive_name,
@@ -314,7 +314,7 @@ impl BackupWriter {
                             log::warn!("Error downloading .fidx from previous manifest: {}", err);
                         }
                     }
-                    Ok(ArchiveType::DynamicIndex) => {
+                    ArchiveType::DynamicIndex => {
                         if let Err(err) = self
                             .download_previous_dynamic_index(
                                 archive_name,
@@ -338,12 +338,6 @@ impl BackupWriter {
             .as_u64()
             .unwrap();
 
-        let archive = if log::log_enabled!(log::Level::Debug) {
-            archive_name
-        } else {
-            pbs_tools::format::strip_server_file_extension(archive_name)
-        };
-
         let upload_stats = Self::upload_chunk_info_stream(
             self.h2.clone(),
             wid,
@@ -357,12 +351,17 @@ impl BackupWriter {
             },
             options.compress,
             injections,
-            archive,
+            archive_name,
         )
         .await?;
 
         let size_dirty = upload_stats.size - upload_stats.size_reused;
         let size: HumanByte = upload_stats.size.into();
+        let archive = if log::log_enabled!(log::Level::Debug) {
+            archive_name.to_string()
+        } else {
+            archive_name.without_type_extension()
+        };
 
         if upload_stats.chunk_injected > 0 {
             log::info!(
@@ -372,7 +371,7 @@ impl BackupWriter {
             );
         }
 
-        if archive_name != CATALOG_NAME {
+        if *archive_name != BackupArchiveName::catalog() {
             let speed: HumanByte =
                 ((size_dirty * 1_000_000) / (upload_stats.duration.as_micros() as usize)).into();
             let size_dirty: HumanByte = size_dirty.into();
@@ -541,7 +540,7 @@ impl BackupWriter {
 
     pub async fn download_previous_fixed_index(
         &self,
-        archive_name: &str,
+        archive_name: &BackupArchiveName,
         manifest: &BackupManifest,
         known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
     ) -> Result<FixedIndexReader, Error> {
@@ -576,7 +575,7 @@ impl BackupWriter {
 
     pub async fn download_previous_dynamic_index(
         &self,
-        archive_name: &str,
+        archive_name: &BackupArchiveName,
         manifest: &BackupManifest,
         known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
     ) -> Result<DynamicIndexReader, Error> {
@@ -623,7 +622,7 @@ impl BackupWriter {
     pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {
         let mut raw_data = Vec::with_capacity(64 * 1024);
 
-        let param = json!({ "archive-name": MANIFEST_BLOB_NAME });
+        let param = json!({ "archive-name": BackupArchiveName::manifest().to_string() });
         self.h2
             .download("previous", Some(param), &mut raw_data)
             .await?;
@@ -651,7 +650,7 @@ impl BackupWriter {
         crypt_config: Option<Arc<CryptConfig>>,
         compress: bool,
         injections: Option<std::sync::mpsc::Receiver<InjectChunks>>,
-        archive: &str,
+        archive: &BackupArchiveName,
     ) -> impl Future<Output = Result<UploadStats, Error>> {
         let total_chunks = Arc::new(AtomicUsize::new(0));
         let total_chunks2 = total_chunks.clone();
@@ -683,9 +682,9 @@ impl BackupWriter {
         let index_csum = Arc::new(Mutex::new(Some(openssl::sha::Sha256::new())));
         let index_csum_2 = index_csum.clone();
 
-        let progress_handle = if archive.ends_with(".img")
-            || archive.ends_with(".pxar")
-            || archive.ends_with(".ppxar")
+        let progress_handle = if archive.ends_with(".img.fidx")
+            || archive.ends_with(".pxar.didx")
+            || archive.ends_with(".ppxar.didx")
         {
             Some(tokio::spawn(async move {
                 loop {
diff --git a/pbs-client/src/pxar/tools.rs b/pbs-client/src/pxar/tools.rs
index b076daf6b..483ef19b8 100644
--- a/pbs-client/src/pxar/tools.rs
+++ b/pbs-client/src/pxar/tools.rs
@@ -14,6 +14,7 @@ use pxar::accessor::ReadAt;
 use pxar::format::StatxTimestamp;
 use pxar::{mode, Entry, EntryKind, Metadata};
 
+use pbs_api_types::BackupArchiveName;
 use pbs_datastore::catalog::{ArchiveEntry, CatalogEntryType, DirEntryAttribute};
 
 use pbs_datastore::dynamic_index::{BufferedDynamicReader, LocalDynamicReadAt};
@@ -330,7 +331,7 @@ pub fn handle_root_with_optional_format_version_prelude<R: pxar::decoder::SeqRea
 }
 
 pub async fn get_remote_pxar_reader(
-    archive_name: &str,
+    archive_name: &BackupArchiveName,
     client: Arc<BackupReader>,
     manifest: &BackupManifest,
     crypt_config: Option<Arc<CryptConfig>>,
diff --git a/pbs-client/src/tools/mod.rs b/pbs-client/src/tools/mod.rs
index 28db6f348..8068dc004 100644
--- a/pbs-client/src/tools/mod.rs
+++ b/pbs-client/src/tools/mod.rs
@@ -17,7 +17,9 @@ use proxmox_router::cli::{complete_file_name, shellword_split};
 use proxmox_schema::*;
 use proxmox_sys::fs::file_get_json;
 
-use pbs_api_types::{Authid, BackupNamespace, RateLimitConfig, UserWithTokens, BACKUP_REPO_URL};
+use pbs_api_types::{
+    Authid, BackupArchiveName, BackupNamespace, RateLimitConfig, UserWithTokens, BACKUP_REPO_URL,
+};
 use pbs_datastore::BackupManifest;
 
 use crate::{BackupRepository, HttpClient, HttpClientOptions};
@@ -548,19 +550,18 @@ pub fn place_xdg_file(
 }
 
 pub fn get_pxar_archive_names(
-    archive_name: &str,
+    archive_name: &BackupArchiveName,
     manifest: &BackupManifest,
-) -> Result<(String, Option<String>), Error> {
-    let (filename, ext) = match archive_name.strip_suffix(".didx") {
-        Some(filename) => (filename, ".didx"),
-        None => (archive_name, ""),
-    };
+) -> Result<(BackupArchiveName, Option<BackupArchiveName>), Error> {
+    let filename = archive_name.without_type_extension();
+    let ext = archive_name.archive_type().extension();
 
-    // Check if archive with given extension is present
+    // Check if archive is given as split archive or regular archive and is present in manifest,
+    // otherwise goto fallback below
     if manifest
         .files()
         .iter()
-        .any(|fileinfo| fileinfo.filename == format!("{filename}.didx"))
+        .any(|fileinfo| fileinfo.filename == archive_name.as_ref())
     {
         // check if already given as one of split archive name variants
         if let Some(base) = filename
@@ -568,8 +569,8 @@ pub fn get_pxar_archive_names(
             .or_else(|| filename.strip_suffix(".ppxar"))
         {
             return Ok((
-                format!("{base}.mpxar{ext}"),
-                Some(format!("{base}.ppxar{ext}")),
+                format!("{base}.mpxar.{ext}").as_str().try_into()?,
+                Some(format!("{base}.ppxar.{ext}").as_str().try_into()?),
             ));
         }
         return Ok((archive_name.to_owned(), None));
@@ -577,7 +578,10 @@ pub fn get_pxar_archive_names(
 
     // if not, try fallback from regular to split archive
     if let Some(base) = filename.strip_suffix(".pxar") {
-        return get_pxar_archive_names(&format!("{base}.mpxar{ext}"), manifest);
+        return get_pxar_archive_names(
+            &format!("{base}.mpxar.{ext}").as_str().try_into()?,
+            manifest,
+        );
     }
 
     bail!("archive not found in manifest");
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index 414ec878d..972931000 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -8,13 +8,12 @@ use anyhow::{bail, format_err, Error};
 use proxmox_sys::fs::{lock_dir_noblock, replace_file, CreateOptions};
 
 use pbs_api_types::{
-    Authid, BackupNamespace, BackupType, GroupFilter, BACKUP_DATE_REGEX, BACKUP_FILE_REGEX,
+    Authid, BackupArchiveName, BackupNamespace, BackupType, GroupFilter, BACKUP_DATE_REGEX,
+    BACKUP_FILE_REGEX,
 };
 use pbs_config::{open_backup_lockfile, BackupLockGuard};
 
-use crate::manifest::{
-    BackupManifest, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, MANIFEST_LOCK_NAME,
-};
+use crate::manifest::{BackupManifest, MANIFEST_LOCK_NAME};
 use crate::{DataBlob, DataStore};
 
 #[derive(Default)]
@@ -168,7 +167,7 @@ impl BackupGroup {
                 }
 
                 let mut manifest_path = PathBuf::from(backup_time);
-                manifest_path.push(MANIFEST_BLOB_NAME);
+                manifest_path.push(BackupArchiveName::manifest().as_ref());
 
                 use nix::fcntl::{openat, OFlag};
                 match openat(
@@ -520,7 +519,7 @@ impl BackupDir {
 
     /// Load the manifest without a lock. Must not be written back.
     pub fn load_manifest(&self) -> Result<(BackupManifest, u64), Error> {
-        let blob = self.load_blob(MANIFEST_BLOB_NAME)?;
+        let blob = self.load_blob(BackupArchiveName::manifest().as_ref())?;
         let raw_size = blob.raw_size();
         let manifest = BackupManifest::try_from(blob)?;
         Ok((manifest, raw_size))
@@ -543,7 +542,7 @@ impl BackupDir {
         let raw_data = blob.raw_data();
 
         let mut path = self.full_path();
-        path.push(MANIFEST_BLOB_NAME);
+        path.push(BackupArchiveName::manifest().as_ref());
 
         // atomic replace invalidates flock - no other writes past this point!
         replace_file(&path, raw_data, CreateOptions::new(), false)?;
@@ -555,8 +554,8 @@ impl BackupDir {
         let full_path = self.full_path();
 
         let mut wanted_files = std::collections::HashSet::new();
-        wanted_files.insert(MANIFEST_BLOB_NAME.to_string());
-        wanted_files.insert(CLIENT_LOG_BLOB_NAME.to_string());
+        wanted_files.insert(BackupArchiveName::manifest().to_string());
+        wanted_files.insert(BackupArchiveName::client_log().to_string());
         manifest.files().iter().for_each(|item| {
             wanted_files.insert(item.filename.clone());
         });
@@ -664,7 +663,9 @@ impl BackupInfo {
 
     pub fn is_finished(&self) -> bool {
         // backup is considered unfinished if there is no manifest
-        self.files.iter().any(|name| name == MANIFEST_BLOB_NAME)
+        self.files
+            .iter()
+            .any(|name| name == BackupArchiveName::manifest().as_ref())
     }
 }
 
diff --git a/pbs-datastore/src/lib.rs b/pbs-datastore/src/lib.rs
index 202b09558..8050cf4d0 100644
--- a/pbs-datastore/src/lib.rs
+++ b/pbs-datastore/src/lib.rs
@@ -144,9 +144,6 @@
 
 #![deny(unsafe_op_in_unsafe_fn)]
 
-// Note: .pcat1 => Proxmox Catalog Format version 1
-pub const CATALOG_NAME: &str = "catalog.pcat1.didx";
-
 /// Directory path where active operations counters are saved.
 pub const ACTIVE_OPERATIONS_DIR: &str = concat!(
     pbs_buildcfg::PROXMOX_BACKUP_RUN_DIR_M!(),
diff --git a/pbs-datastore/src/manifest.rs b/pbs-datastore/src/manifest.rs
index 823c85003..51ec117ea 100644
--- a/pbs-datastore/src/manifest.rs
+++ b/pbs-datastore/src/manifest.rs
@@ -3,13 +3,10 @@ use anyhow::{bail, format_err, Error};
 use serde::{Deserialize, Serialize};
 use serde_json::{json, Value};
 
-use pbs_api_types::{ArchiveType, BackupType, CryptMode, Fingerprint};
+use pbs_api_types::{BackupArchiveName, BackupType, CryptMode, Fingerprint};
 use pbs_tools::crypt_config::CryptConfig;
 
-pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
 pub const MANIFEST_LOCK_NAME: &str = ".index.json.lck";
-pub const CLIENT_LOG_BLOB_NAME: &str = "client.log.blob";
-pub const ENCRYPTED_KEY_BLOB_NAME: &str = "rsa-encrypted.key.blob";
 
 fn crypt_mode_none() -> CryptMode {
     CryptMode::None
@@ -68,14 +65,13 @@ impl BackupManifest {
 
     pub fn add_file(
         &mut self,
-        filename: String,
+        filename: &BackupArchiveName,
         size: u64,
         csum: [u8; 32],
         crypt_mode: CryptMode,
     ) -> Result<(), Error> {
-        let _archive_type = ArchiveType::from_path(&filename)?; // check type
         self.files.push(FileInfo {
-            filename,
+            filename: filename.to_string(),
             size,
             csum,
             crypt_mode,
@@ -87,8 +83,11 @@ impl BackupManifest {
         &self.files[..]
     }
 
-    pub fn lookup_file_info(&self, name: &str) -> Result<&FileInfo, Error> {
-        let info = self.files.iter().find(|item| item.filename == name);
+    pub fn lookup_file_info(&self, name: &BackupArchiveName) -> Result<&FileInfo, Error> {
+        let info = self
+            .files
+            .iter()
+            .find(|item| item.filename == name.as_ref());
 
         match info {
             None => bail!("manifest does not contain file '{}'", name),
@@ -96,7 +95,12 @@ impl BackupManifest {
         }
     }
 
-    pub fn verify_file(&self, name: &str, csum: &[u8; 32], size: u64) -> Result<(), Error> {
+    pub fn verify_file(
+        &self,
+        name: &BackupArchiveName,
+        csum: &[u8; 32],
+        size: u64,
+    ) -> Result<(), Error> {
         let info = self.lookup_file_info(name)?;
 
         if size != info.size {
@@ -256,8 +260,13 @@ fn test_manifest_signature() -> Result<(), Error> {
 
     let mut manifest = BackupManifest::new("host/elsa/2020-06-26T13:56:05Z".parse()?);
 
-    manifest.add_file("test1.img.fidx".into(), 200, [1u8; 32], CryptMode::Encrypt)?;
-    manifest.add_file("abc.blob".into(), 200, [2u8; 32], CryptMode::None)?;
+    manifest.add_file(
+        &"test1.img.fidx".try_into()?,
+        200,
+        [1u8; 32],
+        CryptMode::Encrypt,
+    )?;
+    manifest.add_file(&"abc.blob".try_into()?, 200, [2u8; 32], CryptMode::None)?;
 
     manifest.unprotected["note"] = "This is not protected by the signature.".into();
 
diff --git a/pbs-datastore/src/snapshot_reader.rs b/pbs-datastore/src/snapshot_reader.rs
index 432701ea0..ef70c7013 100644
--- a/pbs-datastore/src/snapshot_reader.rs
+++ b/pbs-datastore/src/snapshot_reader.rs
@@ -8,13 +8,14 @@ use nix::dir::Dir;
 
 use proxmox_sys::fs::lock_dir_noblock_shared;
 
-use pbs_api_types::{print_store_and_ns, ArchiveType, BackupNamespace, Operation};
+use pbs_api_types::{
+    print_store_and_ns, ArchiveType, BackupArchiveName, BackupNamespace, Operation,
+};
 
 use crate::backup_info::BackupDir;
 use crate::dynamic_index::DynamicIndexReader;
 use crate::fixed_index::FixedIndexReader;
 use crate::index::IndexFile;
-use crate::manifest::{CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME};
 use crate::DataStore;
 
 /// Helper to access the contents of a datastore backup snapshot
@@ -62,14 +63,14 @@ impl SnapshotReader {
         };
 
         let mut client_log_path = snapshot_path;
-        client_log_path.push(CLIENT_LOG_BLOB_NAME);
+        client_log_path.push(BackupArchiveName::client_log().as_ref());
 
-        let mut file_list = vec![MANIFEST_BLOB_NAME.to_string()];
+        let mut file_list = vec![BackupArchiveName::manifest().to_string()];
         for item in manifest.files() {
             file_list.push(item.filename.clone());
         }
         if client_log_path.exists() {
-            file_list.push(CLIENT_LOG_BLOB_NAME.to_string());
+            file_list.push(BackupArchiveName::client_log().to_string());
         }
 
         Ok(Self {
diff --git a/proxmox-backup-client/src/catalog.rs b/proxmox-backup-client/src/catalog.rs
index a55c9effe..39416d1d4 100644
--- a/proxmox-backup-client/src/catalog.rs
+++ b/proxmox-backup-client/src/catalog.rs
@@ -7,9 +7,8 @@ use serde_json::Value;
 use proxmox_router::cli::*;
 use proxmox_schema::api;
 
-use pbs_api_types::BackupNamespace;
+use pbs_api_types::{BackupArchiveName, BackupNamespace};
 use pbs_client::pxar::tools::get_remote_pxar_reader;
-use pbs_client::tools::has_pxar_filename_extension;
 use pbs_client::tools::key_source::get_encryption_key_password;
 use pbs_client::{BackupReader, RemoteChunkReader};
 use pbs_tools::crypt_config::CryptConfig;
@@ -22,7 +21,7 @@ use crate::{
     complete_pxar_archive_name, complete_repository, connect, crypto_parameters, decrypt_key,
     dir_or_last_from_group, extract_repository_from_value, format_key_source, optional_ns_param,
     record_repository, BackupDir, BufferedDynamicReader, CatalogReader, DynamicIndexReader,
-    IndexFile, Shell, CATALOG_NAME, KEYFD_SCHEMA, REPO_URL_SCHEMA,
+    IndexFile, Shell, KEYFD_SCHEMA, REPO_URL_SCHEMA,
 };
 
 #[api(
@@ -90,7 +89,8 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
     let (manifest, _) = client.download_manifest().await?;
     manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
 
-    let file_info = match manifest.lookup_file_info(CATALOG_NAME) {
+    let catalog_name = BackupArchiveName::catalog();
+    let file_info = match manifest.lookup_file_info(&catalog_name) {
         Ok(file_info) => file_info,
         Err(err) => {
             let mut metadata_archives = Vec::new();
@@ -104,7 +104,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
 
             for archive in &metadata_archives {
                 let (reader, archive_size) = get_remote_pxar_reader(
-                    &archive,
+                    &archive.as_str().try_into()?,
                     client.clone(),
                     &manifest,
                     crypt_config.clone(),
@@ -128,7 +128,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
     };
 
     let index = client
-        .download_dynamic_index(&manifest, CATALOG_NAME)
+        .download_dynamic_index(&manifest, &catalog_name)
         .await?;
 
     let most_used = index.find_most_used_chunks(8);
@@ -170,8 +170,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
                 description: "Group/Snapshot path.",
             },
             "archive-name": {
-                type: String,
-                description: "Backup archive name.",
+                type: BackupArchiveName,
             },
             "repository": {
                 optional: true,
@@ -195,7 +194,8 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
     let client = connect(&repo)?;
     let backup_ns = optional_ns_param(&param)?;
     let path = required_string_param(&param, "snapshot")?;
-    let archive_name = required_string_param(&param, "archive-name")?;
+    let server_archive_name: BackupArchiveName =
+        required_string_param(&param, "archive-name")?.try_into()?;
 
     let backup_dir = dir_or_last_from_group(&client, &repo, &backup_ns, path).await?;
 
@@ -214,9 +214,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
         }
     };
 
-    let server_archive_name = if has_pxar_filename_extension(archive_name, false) {
-        format!("{}.didx", archive_name)
-    } else {
+    if !server_archive_name.has_pxar_filename_extension() {
         bail!("Can only mount pxar archives.");
     };
 
@@ -233,7 +231,8 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
     let (manifest, _) = client.download_manifest().await?;
     manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
 
-    if let Err(_err) = manifest.lookup_file_info(CATALOG_NAME) {
+    let catalog_name = BackupArchiveName::catalog();
+    if let Err(_err) = manifest.lookup_file_info(&catalog_name) {
         // No catalog, fallback to pxar archive accessor if present
         let accessor = helper::get_pxar_fuse_accessor(
             &server_archive_name,
@@ -243,7 +242,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
         )
         .await?;
 
-        let state = Shell::new(None, &server_archive_name, accessor).await?;
+        let state = Shell::new(None, &server_archive_name.as_ref(), accessor).await?;
         log::info!("Starting interactive shell");
         state.shell().await?;
         record_repository(&repo);
@@ -261,17 +260,17 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
     )
     .await?;
 
-    client.download(CATALOG_NAME, &mut tmpfile).await?;
+    client.download(catalog_name.as_ref(), &mut tmpfile).await?;
     let index = DynamicIndexReader::new(tmpfile)
         .map_err(|err| format_err!("unable to read catalog index - {}", err))?;
 
     // Note: do not use values stored in index (not trusted) - instead, computed them again
     let (csum, size) = index.compute_csum();
-    manifest.verify_file(CATALOG_NAME, &csum, size)?;
+    manifest.verify_file(&catalog_name, &csum, size)?;
 
     let most_used = index.find_most_used_chunks(8);
 
-    let file_info = manifest.lookup_file_info(CATALOG_NAME)?;
+    let file_info = manifest.lookup_file_info(&catalog_name)?;
     let chunk_reader = RemoteChunkReader::new(
         client.clone(),
         crypt_config,
@@ -286,7 +285,7 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
 
     catalogfile.seek(SeekFrom::Start(0))?;
     let catalog_reader = CatalogReader::new(catalogfile);
-    let state = Shell::new(Some(catalog_reader), &server_archive_name, decoder).await?;
+    let state = Shell::new(Some(catalog_reader), &server_archive_name.as_ref(), decoder).await?;
 
     log::info!("Starting interactive shell");
     state.shell().await?;
diff --git a/proxmox-backup-client/src/helper.rs b/proxmox-backup-client/src/helper.rs
index 60355d7d0..642d66a7b 100644
--- a/proxmox-backup-client/src/helper.rs
+++ b/proxmox-backup-client/src/helper.rs
@@ -1,6 +1,7 @@
 use std::sync::Arc;
 
 use anyhow::Error;
+use pbs_api_types::BackupArchiveName;
 use pbs_client::{BackupReader, RemoteChunkReader};
 use pbs_datastore::BackupManifest;
 use pbs_tools::crypt_config::CryptConfig;
@@ -8,7 +9,7 @@ use pbs_tools::crypt_config::CryptConfig;
 use crate::{BufferedDynamicReadAt, BufferedDynamicReader, IndexFile};
 
 pub(crate) async fn get_pxar_fuse_accessor(
-    archive_name: &str,
+    archive_name: &BackupArchiveName,
     client: Arc<BackupReader>,
     manifest: &BackupManifest,
     crypt_config: Option<Arc<CryptConfig>>,
@@ -44,7 +45,7 @@ pub(crate) async fn get_pxar_fuse_accessor(
 }
 
 pub(crate) async fn get_pxar_fuse_reader(
-    archive_name: &str,
+    archive_name: &BackupArchiveName,
     client: Arc<BackupReader>,
     manifest: &BackupManifest,
     crypt_config: Option<Arc<CryptConfig>>,
@@ -57,7 +58,7 @@ pub(crate) async fn get_pxar_fuse_reader(
 }
 
 pub(crate) async fn get_buffered_pxar_reader(
-    archive_name: &str,
+    archive_name: &BackupArchiveName,
     client: Arc<BackupReader>,
     manifest: &BackupManifest,
     crypt_config: Option<Arc<CryptConfig>>,
diff --git a/proxmox-backup-client/src/main.rs b/proxmox-backup-client/src/main.rs
index f6fb3555e..a155f56f0 100644
--- a/proxmox-backup-client/src/main.rs
+++ b/proxmox-backup-client/src/main.rs
@@ -25,10 +25,10 @@ use pxar::accessor::aio::Accessor;
 use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
 
 use pbs_api_types::{
-    ArchiveType, Authid, BackupDir, BackupGroup, BackupNamespace, BackupPart, BackupType,
-    ClientRateLimitConfig, CryptMode, Fingerprint, GroupListItem, PruneJobOptions, PruneListItem,
-    RateLimitConfig, SnapshotListItem, StorageStatus, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA,
-    BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA,
+    ArchiveType, Authid, BackupArchiveName, BackupDir, BackupGroup, BackupNamespace, BackupPart,
+    BackupType, ClientRateLimitConfig, CryptMode, Fingerprint, GroupListItem, PruneJobOptions,
+    PruneListItem, RateLimitConfig, SnapshotListItem, StorageStatus, BACKUP_ID_SCHEMA,
+    BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA,
 };
 use pbs_client::catalog_shell::Shell;
 use pbs_client::pxar::{ErrorHandler as PxarErrorHandler, MetadataArchiveReader, PxarPrevRef};
@@ -36,7 +36,7 @@ use pbs_client::tools::{
     complete_archive_name, complete_auth_id, complete_backup_group, complete_backup_snapshot,
     complete_backup_source, complete_chunk_size, complete_group_or_snapshot,
     complete_img_archive_name, complete_namespace, complete_pxar_archive_name, complete_repository,
-    connect, connect_rate_limited, extract_repository_from_value, has_pxar_filename_extension,
+    connect, connect_rate_limited, extract_repository_from_value,
     key_source::{
         crypto_parameters, format_key_source, get_encryption_key_password, KEYFD_SCHEMA,
         KEYFILE_SCHEMA, MASTER_PUBKEY_FD_SCHEMA, MASTER_PUBKEY_FILE_SCHEMA,
@@ -54,9 +54,8 @@ use pbs_datastore::chunk_store::verify_chunk_size;
 use pbs_datastore::dynamic_index::{BufferedDynamicReader, DynamicIndexReader, LocalDynamicReadAt};
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{BackupManifest, ENCRYPTED_KEY_BLOB_NAME, MANIFEST_BLOB_NAME};
+use pbs_datastore::manifest::BackupManifest;
 use pbs_datastore::read_chunk::AsyncReadChunk;
-use pbs_datastore::CATALOG_NAME;
 use pbs_key_config::{decrypt_key, rsa_encrypt_key_config, KeyConfig};
 use pbs_tools::crypt_config::CryptConfig;
 use pbs_tools::json;
@@ -196,8 +195,8 @@ pub async fn dir_or_last_from_group(
 async fn backup_directory<P: AsRef<Path>>(
     client: &BackupWriter,
     dir_path: P,
-    archive_name: &str,
-    payload_target: Option<&str>,
+    archive_name: &BackupArchiveName,
+    payload_target: Option<&BackupArchiveName>,
     chunk_size: Option<usize>,
     catalog: Option<Arc<Mutex<CatalogWriter<TokioWriterAdapter<StdChannelWriter<Error>>>>>>,
     pxar_create_options: pbs_client::pxar::PxarCreateOptions,
@@ -276,7 +275,7 @@ async fn backup_directory<P: AsRef<Path>>(
 async fn backup_image<P: AsRef<Path>>(
     client: &BackupWriter,
     image_path: P,
-    archive_name: &str,
+    archive_name: &BackupArchiveName,
     chunk_size: Option<usize>,
     upload_options: UploadOptions,
 ) -> Result<BackupStats, Error> {
@@ -606,7 +605,12 @@ fn spawn_catalog_upload(
 
     tokio::spawn(async move {
         let catalog_upload_result = client
-            .upload_stream(CATALOG_NAME, catalog_chunk_stream, upload_options, None)
+            .upload_stream(
+                &BackupArchiveName::catalog(),
+                catalog_chunk_stream,
+                upload_options,
+                None,
+            )
             .await;
 
         if let Err(ref err) = catalog_upload_result {
@@ -1005,13 +1009,21 @@ async fn create_backup(
     };
 
     for (backup_type, filename, target_base, extension, size) in upload_list {
-        let target = format!("{target_base}.{extension}");
+        let target: BackupArchiveName = format!("{target_base}.{extension}").as_str().try_into()?;
         match (backup_type, dry_run) {
             // dry-run
-            (BackupSpecificationType::CONFIG, true) => log_file("config file", &filename, &target),
-            (BackupSpecificationType::LOGFILE, true) => log_file("log file", &filename, &target),
-            (BackupSpecificationType::PXAR, true) => log_file("directory", &filename, &target),
-            (BackupSpecificationType::IMAGE, true) => log_file("image", &filename, &target),
+            (BackupSpecificationType::CONFIG, true) => {
+                log_file("config file", &filename, target.as_ref())
+            }
+            (BackupSpecificationType::LOGFILE, true) => {
+                log_file("log file", &filename, target.as_ref())
+            }
+            (BackupSpecificationType::PXAR, true) => {
+                log_file("directory", &filename, target.as_ref())
+            }
+            (BackupSpecificationType::IMAGE, true) => {
+                log_file("image", &filename, &target.as_ref())
+            }
             // no dry-run
             (BackupSpecificationType::CONFIG, false) => {
                 let upload_options = UploadOptions {
@@ -1020,11 +1032,11 @@ async fn create_backup(
                     ..UploadOptions::default()
                 };
 
-                log_file("config file", &filename, &target);
+                log_file("config file", &filename, target.as_ref());
                 let stats = client
-                    .upload_blob_from_file(&filename, &target, upload_options)
+                    .upload_blob_from_file(&filename, target.as_ref(), upload_options)
                     .await?;
-                manifest.add_file(target, stats.size, stats.csum, crypto.mode)?;
+                manifest.add_file(&target, stats.size, stats.csum, crypto.mode)?;
             }
             (BackupSpecificationType::LOGFILE, false) => {
                 // fixme: remove - not needed anymore ?
@@ -1034,11 +1046,11 @@ async fn create_backup(
                     ..UploadOptions::default()
                 };
 
-                log_file("log file", &filename, &target);
+                log_file("log file", &filename, target.as_ref());
                 let stats = client
-                    .upload_blob_from_file(&filename, &target, upload_options)
+                    .upload_blob_from_file(&filename, target.as_ref(), upload_options)
                     .await?;
-                manifest.add_file(target, stats.size, stats.csum, crypto.mode)?;
+                manifest.add_file(&target, stats.size, stats.csum, crypto.mode)?;
             }
             (BackupSpecificationType::PXAR, false) => {
                 let target_base = if let Some(base) = target_base.strip_suffix(".pxar") {
@@ -1050,8 +1062,14 @@ async fn create_backup(
                 let (target, payload_target) =
                     if detection_mode.is_metadata() || detection_mode.is_data() {
                         (
-                            format!("{target_base}.mpxar.{extension}"),
-                            Some(format!("{target_base}.ppxar.{extension}")),
+                            format!("{target_base}.mpxar.{extension}")
+                                .as_str()
+                                .try_into()?,
+                            Some(
+                                format!("{target_base}.ppxar.{extension}")
+                                    .as_str()
+                                    .try_into()?,
+                            ),
                         )
                     } else {
                         (target, None)
@@ -1065,12 +1083,12 @@ async fn create_backup(
                     catalog_result_rx = Some(catalog_upload_res.result);
                 }
 
-                log_file("directory", &filename, &target);
+                log_file("directory", &filename, target.as_ref());
                 if let Some(catalog) = catalog.as_ref() {
                     catalog
                         .lock()
                         .unwrap()
-                        .start_directory(std::ffi::CString::new(target.as_str())?.as_c_str())?;
+                        .start_directory(std::ffi::CString::new(target.as_ref())?.as_c_str())?;
                 }
 
                 let mut previous_ref = None;
@@ -1137,7 +1155,7 @@ async fn create_backup(
                     &client,
                     &filename,
                     &target,
-                    payload_target.as_deref(),
+                    payload_target.as_ref().as_deref(),
                     chunk_size_opt,
                     catalog.as_ref().cloned(),
                     pxar_options,
@@ -1147,20 +1165,20 @@ async fn create_backup(
 
                 if let Some(payload_stats) = payload_stats {
                     manifest.add_file(
-                        payload_target
+                        &payload_target
                             .ok_or_else(|| format_err!("missing payload target archive"))?,
                         payload_stats.size,
                         payload_stats.csum,
                         crypto.mode,
                     )?;
                 }
-                manifest.add_file(target, stats.size, stats.csum, crypto.mode)?;
+                manifest.add_file(&target, stats.size, stats.csum, crypto.mode)?;
                 if let Some(catalog) = catalog.as_ref() {
                     catalog.lock().unwrap().end_directory()?;
                 }
             }
             (BackupSpecificationType::IMAGE, false) => {
-                log_file("image", &filename, &target);
+                log_file("image", &filename, target.as_ref());
 
                 let upload_options = UploadOptions {
                     previous_manifest: previous_manifest.clone(),
@@ -1172,7 +1190,7 @@ async fn create_backup(
                 let stats =
                     backup_image(&client, &filename, &target, chunk_size_opt, upload_options)
                         .await?;
-                manifest.add_file(target, stats.size, stats.csum, crypto.mode)?;
+                manifest.add_file(&target, stats.size, stats.csum, crypto.mode)?;
             }
         }
     }
@@ -1194,12 +1212,17 @@ async fn create_backup(
 
         if let Some(catalog_result_rx) = catalog_result_rx {
             let stats = catalog_result_rx.await??;
-            manifest.add_file(CATALOG_NAME.to_owned(), stats.size, stats.csum, crypto.mode)?;
+            manifest.add_file(
+                &BackupArchiveName::catalog(),
+                stats.size,
+                stats.csum,
+                crypto.mode,
+            )?;
         }
     }
 
     if let Some(rsa_encrypted_key) = rsa_encrypted_key {
-        let target = ENCRYPTED_KEY_BLOB_NAME;
+        let target = BackupArchiveName::encrypted_key();
         log::info!("Upload RSA encoded key to '{}' as {}", repo, target);
         let options = UploadOptions {
             compress: false,
@@ -1207,9 +1230,9 @@ async fn create_backup(
             ..UploadOptions::default()
         };
         let stats = client
-            .upload_blob_from_data(rsa_encrypted_key, target, options)
+            .upload_blob_from_data(rsa_encrypted_key, target.as_ref(), options)
             .await?;
-        manifest.add_file(target.to_string(), stats.size, stats.csum, crypto.mode)?;
+        manifest.add_file(&target, stats.size, stats.csum, crypto.mode)?;
     }
     // create manifest (index.json)
     // manifests are never encrypted, but include a signature
@@ -1225,7 +1248,11 @@ async fn create_backup(
         ..UploadOptions::default()
     };
     client
-        .upload_blob_from_data(manifest.into_bytes(), MANIFEST_BLOB_NAME, options)
+        .upload_blob_from_data(
+            manifest.into_bytes(),
+            BackupArchiveName::manifest().as_ref(),
+            options,
+        )
         .await?;
 
     client.finish().await?;
@@ -1238,7 +1265,7 @@ async fn create_backup(
 }
 
 async fn prepare_reference(
-    target: &str,
+    target: &BackupArchiveName,
     manifest: Arc<BackupManifest>,
     backup_writer: &BackupWriter,
     backup_reader: Arc<BackupReader>,
@@ -1250,7 +1277,11 @@ async fn prepare_reference(
             Ok((target, payload_target)) => (target, payload_target),
             Err(_) => return Ok(None),
         };
-    let payload_target = payload_target.unwrap_or_default();
+    let payload_target = if let Some(payload_target) = payload_target {
+        payload_target
+    } else {
+        return Ok(None);
+    };
 
     let metadata_ref_index = if let Ok(index) = backup_reader
         .download_dynamic_index(&manifest, &target)
@@ -1299,7 +1330,7 @@ async fn prepare_reference(
     Ok(Some(pbs_client::pxar::PxarPrevRef {
         accessor,
         payload_index: payload_ref_index,
-        archive_name: target,
+        archive_name: target.to_string(),
     }))
 }
 
@@ -1486,7 +1517,8 @@ async fn restore(
 ) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
 
-    let archive_name = json::required_string_param(&param, "archive-name")?;
+    let archive_name: BackupArchiveName =
+        json::required_string_param(&param, "archive-name")?.try_into()?;
 
     let rate_limit = RateLimitConfig::from_client_config(limit);
 
@@ -1525,11 +1557,9 @@ async fn restore(
     )
     .await?;
 
-    let (archive_name, archive_type) = parse_archive_type(archive_name);
-
     let (manifest, backup_index_data) = client.download_manifest().await?;
 
-    if archive_name == ENCRYPTED_KEY_BLOB_NAME && crypt_config.is_none() {
+    if archive_name == BackupArchiveName::encrypted_key() && crypt_config.is_none() {
         log::info!("Restoring encrypted key blob without original key - skipping manifest fingerprint check!")
     } else {
         if manifest.signature.is_some() {
@@ -1543,7 +1573,7 @@ async fn restore(
         manifest.check_fingerprint(crypt_config.as_ref().map(Arc::as_ref))?;
     }
 
-    if archive_name == MANIFEST_BLOB_NAME {
+    if archive_name == BackupArchiveName::manifest() {
         if let Some(target) = target {
             replace_file(target, &backup_index_data, CreateOptions::new(), false)?;
         } else {
@@ -1557,7 +1587,7 @@ async fn restore(
         return Ok(Value::Null);
     }
 
-    if archive_type == ArchiveType::Blob {
+    if archive_name.archive_type() == ArchiveType::Blob {
         let mut reader = client.download_blob(&manifest, &archive_name).await?;
 
         if let Some(target) = target {
@@ -1576,7 +1606,7 @@ async fn restore(
             std::io::copy(&mut reader, &mut writer)
                 .map_err(|err| format_err!("unable to pipe data - {}", err))?;
         }
-    } else if archive_type == ArchiveType::DynamicIndex {
+    } else if archive_name.archive_type() == ArchiveType::DynamicIndex {
         let (archive_name, payload_archive_name) =
             pbs_client::tools::get_pxar_archive_names(&archive_name, &manifest)?;
 
@@ -1680,7 +1710,7 @@ async fn restore(
             std::io::copy(&mut reader, &mut writer)
                 .map_err(|err| format_err!("unable to pipe data - {}", err))?;
         }
-    } else if archive_type == ArchiveType::FixedIndex {
+    } else if archive_name.archive_type() == ArchiveType::FixedIndex {
         let file_info = manifest.lookup_file_info(&archive_name)?;
         let index = client
             .download_fixed_index(&manifest, &archive_name)
diff --git a/proxmox-backup-client/src/mount.rs b/proxmox-backup-client/src/mount.rs
index c15e030f5..0048a8ad4 100644
--- a/proxmox-backup-client/src/mount.rs
+++ b/proxmox-backup-client/src/mount.rs
@@ -18,8 +18,7 @@ use proxmox_schema::*;
 use proxmox_sortable_macro::sortable;
 use proxmox_systemd;
 
-use pbs_api_types::BackupNamespace;
-use pbs_client::tools::has_pxar_filename_extension;
+use pbs_api_types::{ArchiveType, BackupArchiveName, BackupNamespace};
 use pbs_client::tools::key_source::get_encryption_key_password;
 use pbs_client::{BackupReader, RemoteChunkReader};
 use pbs_datastore::cached_chunk_reader::CachedChunkReader;
@@ -47,11 +46,7 @@ const API_METHOD_MOUNT: ApiMethod = ApiMethod::new(
                 false,
                 &StringSchema::new("Group/Snapshot path.").schema()
             ),
-            (
-                "archive-name",
-                false,
-                &StringSchema::new("Backup archive name.").schema()
-            ),
+            ("archive-name", false, &BackupArchiveName::API_SCHEMA),
             (
                 "target",
                 false,
@@ -87,11 +82,7 @@ WARNING: Only do this with *trusted* backups!",
                 false,
                 &StringSchema::new("Group/Snapshot path.").schema()
             ),
-            (
-                "archive-name",
-                false,
-                &StringSchema::new("Backup archive name.").schema()
-            ),
+            ("archive-name", false, &BackupArchiveName::API_SCHEMA),
             ("repository", true, &REPO_URL_SCHEMA),
             (
                 "keyfile",
@@ -208,7 +199,8 @@ fn mount(
 
 async fn mount_do(param: Value, pipe: Option<OwnedFd>) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
-    let archive_name = required_string_param(&param, "archive-name")?;
+    let server_archive_name: BackupArchiveName =
+        required_string_param(&param, "archive-name")?.try_into()?;
     let client = connect(&repo)?;
 
     let target = param["target"].as_str();
@@ -230,16 +222,14 @@ async fn mount_do(param: Value, pipe: Option<OwnedFd>) -> Result<Value, Error> {
         }
     };
 
-    let server_archive_name = if has_pxar_filename_extension(archive_name, false) {
+    if server_archive_name.has_pxar_filename_extension() {
         if target.is_none() {
             bail!("use the 'mount' command to mount pxar archives");
         }
-        format!("{}.didx", archive_name)
-    } else if archive_name.ends_with(".img") {
+    } else if server_archive_name.ends_with(".img.fidx") {
         if target.is_some() {
             bail!("use the 'map' command to map drive images");
         }
-        format!("{}.fidx", archive_name)
     } else {
         bail!("Can only mount/map pxar archives and drive images.");
     };
@@ -291,7 +281,7 @@ async fn mount_do(param: Value, pipe: Option<OwnedFd>) -> Result<Value, Error> {
     let mut interrupt =
         futures::future::select(interrupt_int.recv().boxed(), interrupt_term.recv().boxed());
 
-    if server_archive_name.ends_with(".didx") {
+    if server_archive_name.archive_type() == ArchiveType::DynamicIndex {
         let decoder = helper::get_pxar_fuse_accessor(
             &server_archive_name,
             client.clone(),
@@ -312,7 +302,7 @@ async fn mount_do(param: Value, pipe: Option<OwnedFd>) -> Result<Value, Error> {
                 // exit on interrupted
             }
         }
-    } else if server_archive_name.ends_with(".fidx") {
+    } else if server_archive_name.archive_type() == ArchiveType::FixedIndex {
         let file_info = manifest.lookup_file_info(&server_archive_name)?;
         let index = client
             .download_fixed_index(&manifest, &server_archive_name)
@@ -326,7 +316,10 @@ async fn mount_do(param: Value, pipe: Option<OwnedFd>) -> Result<Value, Error> {
         );
         let reader = CachedChunkReader::new(chunk_reader, index, 8).seekable();
 
-        let name = &format!("{}:{}/{}", repo, path, archive_name);
+        let name = &format!(
+            "{repo}:{path}/{}",
+            server_archive_name.without_type_extension(),
+        );
         let name_escaped = proxmox_systemd::escape_unit(name, false);
 
         let mut session =
diff --git a/proxmox-file-restore/src/main.rs b/proxmox-file-restore/src/main.rs
index 08354b454..5434a1351 100644
--- a/proxmox-file-restore/src/main.rs
+++ b/proxmox-file-restore/src/main.rs
@@ -5,6 +5,7 @@ use std::sync::Arc;
 
 use anyhow::{bail, format_err, Error};
 use futures::StreamExt;
+use pbs_api_types::BackupArchiveName;
 use serde_json::{json, Value};
 use tokio::io::AsyncWriteExt;
 
@@ -37,7 +38,6 @@ use pbs_client::{BackupReader, BackupRepository, RemoteChunkReader};
 use pbs_datastore::catalog::{ArchiveEntry, CatalogReader, DirEntryAttribute};
 use pbs_datastore::dynamic_index::BufferedDynamicReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::CATALOG_NAME;
 use pbs_key_config::decrypt_key;
 use pbs_tools::crypt_config::CryptConfig;
 
@@ -149,9 +149,10 @@ async fn list_files(
             Ok(entries)
         }
         ExtractPath::Pxar(file, mut path) => {
-            if let Ok(file_info) = manifest.lookup_file_info(CATALOG_NAME) {
+            let catalog_name = BackupArchiveName::catalog();
+            if let Ok(file_info) = manifest.lookup_file_info(&catalog_name) {
                 let index = client
-                    .download_dynamic_index(&manifest, CATALOG_NAME)
+                    .download_dynamic_index(&manifest, &catalog_name)
                     .await?;
                 let most_used = index.find_most_used_chunks(8);
                 let chunk_reader = RemoteChunkReader::new(
@@ -172,6 +173,7 @@ async fn list_files(
                     path = vec![b'/'];
                 }
 
+                let file: BackupArchiveName = file.as_str().try_into()?;
                 let (archive_name, _payload_archive_name) =
                     pbs_client::tools::get_pxar_archive_names(&file, &manifest)?;
 
@@ -191,7 +193,7 @@ async fn list_files(
                 pbs_client::pxar::tools::pxar_metadata_catalog_lookup(
                     accessor,
                     path,
-                    Some(&archive_name),
+                    Some(archive_name.as_ref()),
                 )
                 .await
             }
@@ -476,10 +478,11 @@ async fn extract(
 
     match path {
         ExtractPath::Pxar(archive_name, path) => {
+            let archive_name: BackupArchiveName = archive_name.as_str().try_into()?;
             let (archive_name, payload_archive_name) =
                 pbs_client::tools::get_pxar_archive_names(&archive_name, &manifest)?;
             let (reader, archive_size) = get_remote_pxar_reader(
-                &archive_name,
+                &archive_name.try_into()?,
                 client.clone(),
                 &manifest,
                 crypt_config.clone(),
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index b73ad0ff0..3ca9d35ae 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -34,15 +34,15 @@ use pxar::accessor::aio::Accessor;
 use pxar::EntryKind;
 
 use pbs_api_types::{
-    print_ns_and_snapshot, print_store_and_ns, Authid, BackupContent, BackupNamespace, BackupType,
-    Counts, CryptMode, DataStoreConfig, DataStoreListItem, DataStoreStatus,
-    GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, Operation,
-    PruneJobOptions, SnapshotListItem, SnapshotVerifyState, BACKUP_ARCHIVE_NAME_SCHEMA,
-    BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA,
-    DATASTORE_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA, MAX_NAMESPACE_DEPTH, NS_MAX_DEPTH_SCHEMA,
-    PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE,
-    PRIV_DATASTORE_READ, PRIV_DATASTORE_VERIFY, UPID, UPID_SCHEMA,
-    VERIFICATION_OUTDATED_AFTER_SCHEMA,
+    print_ns_and_snapshot, print_store_and_ns, ArchiveType, Authid, BackupArchiveName,
+    BackupContent, BackupNamespace, BackupType, Counts, CryptMode, DataStoreConfig,
+    DataStoreListItem, DataStoreStatus, GarbageCollectionJobStatus, GroupListItem,
+    JobScheduleStatus, KeepOptions, Operation, PruneJobOptions, SnapshotListItem,
+    SnapshotVerifyState, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA,
+    BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, DATASTORE_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA,
+    MAX_NAMESPACE_DEPTH, NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
+    PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, PRIV_DATASTORE_VERIFY, UPID,
+    UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA,
 };
 use pbs_client::pxar::{create_tar, create_zip};
 use pbs_config::CachedUserInfo;
@@ -54,11 +54,11 @@ use pbs_datastore::data_blob_reader::DataBlobReader;
 use pbs_datastore::dynamic_index::{BufferedDynamicReader, DynamicIndexReader, LocalDynamicReadAt};
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{BackupManifest, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME};
+use pbs_datastore::manifest::BackupManifest;
 use pbs_datastore::prune::compute_prune_info;
 use pbs_datastore::{
     check_backup_owner, task_tracking, BackupDir, BackupGroup, DataStore, LocalChunkReader,
-    StoreProgress, CATALOG_NAME,
+    StoreProgress,
 };
 use pbs_tools::json::required_string_param;
 use proxmox_rest_server::{formatter, WorkerTask};
@@ -124,7 +124,7 @@ fn read_backup_index(
     }
 
     result.push(BackupContent {
-        filename: MANIFEST_BLOB_NAME.to_string(),
+        filename: BackupArchiveName::manifest().to_string(),
         crypt_mode: match manifest.signature {
             Some(_) => Some(CryptMode::SignOnly),
             None => Some(CryptMode::None),
@@ -1468,12 +1468,13 @@ pub fn download_file_decoded(
             &backup_dir_api.group,
         )?;
 
-        let file_name = required_string_param(&param, "file-name")?.to_owned();
+        let file_name: BackupArchiveName =
+            required_string_param(&param, "file-name")?.try_into()?;
         let backup_dir = datastore.backup_dir(backup_ns.clone(), backup_dir_api.clone())?;
 
         let (manifest, files) = read_backup_index(&backup_dir)?;
         for file in files {
-            if file.filename == file_name && file.crypt_mode == Some(CryptMode::Encrypt) {
+            if file.filename == file_name.as_ref() && file.crypt_mode == Some(CryptMode::Encrypt) {
                 bail!("cannot decode '{}' - is encrypted", file_name);
             }
         }
@@ -1488,12 +1489,10 @@ pub fn download_file_decoded(
 
         let mut path = datastore.base_path();
         path.push(backup_dir.relative_path());
-        path.push(&file_name);
+        path.push(file_name.as_ref());
 
-        let (_, extension) = file_name.rsplit_once('.').unwrap();
-
-        let body = match extension {
-            "didx" => {
+        let body = match file_name.archive_type() {
+            ArchiveType::DynamicIndex => {
                 let index = DynamicIndexReader::open(&path).map_err(|err| {
                     format_err!("unable to read dynamic index '{:?}' - {}", &path, err)
                 })?;
@@ -1507,7 +1506,7 @@ pub fn download_file_decoded(
                     err
                 }))
             }
-            "fidx" => {
+            ArchiveType::FixedIndex => {
                 let index = FixedIndexReader::open(&path).map_err(|err| {
                     format_err!("unable to read fixed index '{:?}' - {}", &path, err)
                 })?;
@@ -1526,7 +1525,7 @@ pub fn download_file_decoded(
                     ),
                 )
             }
-            "blob" => {
+            ArchiveType::Blob => {
                 let file = std::fs::File::open(&path)
                     .map_err(|err| http_err!(BAD_REQUEST, "File open failed: {}", err))?;
 
@@ -1541,9 +1540,6 @@ pub fn download_file_decoded(
                     ),
                 )
             }
-            extension => {
-                bail!("cannot download '{}' files", extension);
-            }
         };
 
         // fixme: set other headers ?
@@ -1600,10 +1596,10 @@ pub fn upload_backup_log(
         )?;
         let backup_dir = datastore.backup_dir(backup_ns.clone(), backup_dir_api.clone())?;
 
-        let file_name = CLIENT_LOG_BLOB_NAME;
+        let file_name = BackupArchiveName::client_log();
 
         let mut path = backup_dir.full_path();
-        path.push(file_name);
+        path.push(file_name.as_ref());
 
         if path.exists() {
             bail!("backup already contains a log.");
@@ -1658,7 +1654,7 @@ fn decode_path(path: &str) -> Result<Vec<u8>, Error> {
                 type: String,
             },
             "archive-name": {
-                schema: BACKUP_ARCHIVE_NAME_SCHEMA,
+                type: BackupArchiveName,
                 optional: true,
             },
         },
@@ -1675,12 +1671,12 @@ pub async fn catalog(
     ns: Option<BackupNamespace>,
     backup_dir: pbs_api_types::BackupDir,
     filepath: String,
-    archive_name: Option<String>,
+    archive_name: Option<BackupArchiveName>,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Vec<ArchiveEntry>, Error> {
     let file_name = archive_name
         .clone()
-        .unwrap_or_else(|| CATALOG_NAME.to_string());
+        .unwrap_or_else(|| BackupArchiveName::catalog());
 
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
 
@@ -1700,7 +1696,7 @@ pub async fn catalog(
 
     let (manifest, files) = read_backup_index(&backup_dir)?;
     for file in files {
-        if file.filename == file_name && file.crypt_mode == Some(CryptMode::Encrypt) {
+        if file.filename == file_name.as_ref() && file.crypt_mode == Some(CryptMode::Encrypt) {
             bail!("cannot decode '{file_name}' - is encrypted");
         }
     }
@@ -1709,7 +1705,7 @@ pub async fn catalog(
         tokio::task::spawn_blocking(move || {
             let mut path = datastore.base_path();
             path.push(backup_dir.relative_path());
-            path.push(&file_name);
+            path.push(file_name.as_ref());
 
             let index = DynamicIndexReader::open(&path)
                 .map_err(|err| format_err!("unable to read dynamic index '{path:?}' - {err}"))?;
@@ -1759,7 +1755,7 @@ pub const API_METHOD_PXAR_FILE_DOWNLOAD: ApiMethod = ApiMethod::new(
             ("backup-time", false, &BACKUP_TIME_SCHEMA),
             ("filepath", false, &StringSchema::new("Base64 encoded path").schema()),
             ("tar", true, &BooleanSchema::new("Download as .tar.zst").schema()),
-            ("archive-name", true, &BACKUP_ARCHIVE_NAME_SCHEMA),
+            ("archive-name", true, &BackupArchiveName::API_SCHEMA),
         ]),
     )
 ).access(
@@ -1774,11 +1770,11 @@ fn get_local_pxar_reader(
     datastore: Arc<DataStore>,
     manifest: &BackupManifest,
     backup_dir: &BackupDir,
-    pxar_name: &str,
+    pxar_name: &BackupArchiveName,
 ) -> Result<(LocalDynamicReadAt<LocalChunkReader>, u64), Error> {
     let mut path = datastore.base_path();
     path.push(backup_dir.relative_path());
-    path.push(pxar_name);
+    path.push(pxar_name.as_ref());
 
     let index = DynamicIndexReader::open(&path)
         .map_err(|err| format_err!("unable to read dynamic index '{:?}' - {}", &path, err))?;
@@ -1836,16 +1832,16 @@ pub fn pxar_file_download(
             let file_path = split.next().unwrap_or(b"/");
             (pxar_name.to_owned(), file_path.to_owned())
         };
-        let pxar_name = std::str::from_utf8(&pxar_name)?;
+        let pxar_name: BackupArchiveName = std::str::from_utf8(&pxar_name)?.try_into()?;
         let (manifest, files) = read_backup_index(&backup_dir)?;
         for file in files {
-            if file.filename == pxar_name && file.crypt_mode == Some(CryptMode::Encrypt) {
+            if file.filename == pxar_name.as_ref() && file.crypt_mode == Some(CryptMode::Encrypt) {
                 bail!("cannot decode '{}' - is encrypted", pxar_name);
             }
         }
 
         let (pxar_name, payload_archive_name) =
-            pbs_client::tools::get_pxar_archive_names(pxar_name, &manifest)?;
+            pbs_client::tools::get_pxar_archive_names(&pxar_name, &manifest)?;
         let (reader, archive_size) =
             get_local_pxar_reader(datastore.clone(), &manifest, &backup_dir, &pxar_name)?;
 
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index a180a4b02..65eda56dd 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -19,18 +19,18 @@ use proxmox_uuid::Uuid;
 use proxmox_worker_task::WorkerTaskContext;
 
 use pbs_api_types::{
-    parse_ns_and_snapshot, print_ns_and_snapshot, ArchiveType, Authid, BackupDir, BackupNamespace,
-    CryptMode, NotificationMode, Operation, TapeRestoreNamespace, Userid,
-    DATASTORE_MAP_ARRAY_SCHEMA, DATASTORE_MAP_LIST_SCHEMA, DRIVE_NAME_SCHEMA, MAX_NAMESPACE_DEPTH,
-    PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_TAPE_READ, TAPE_RESTORE_NAMESPACE_SCHEMA,
-    TAPE_RESTORE_SNAPSHOT_SCHEMA, UPID_SCHEMA,
+    parse_ns_and_snapshot, print_ns_and_snapshot, ArchiveType, Authid, BackupArchiveName,
+    BackupDir, BackupNamespace, CryptMode, NotificationMode, Operation, TapeRestoreNamespace,
+    Userid, DATASTORE_MAP_ARRAY_SCHEMA, DATASTORE_MAP_LIST_SCHEMA, DRIVE_NAME_SCHEMA,
+    MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_TAPE_READ,
+    TAPE_RESTORE_NAMESPACE_SCHEMA, TAPE_RESTORE_SNAPSHOT_SCHEMA, UPID_SCHEMA,
 };
 use pbs_client::pxar::tools::handle_root_with_optional_format_version_prelude;
 use pbs_config::CachedUserInfo;
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{BackupManifest, MANIFEST_BLOB_NAME};
+use pbs_datastore::manifest::BackupManifest;
 use pbs_datastore::{DataBlob, DataStore};
 use pbs_tape::{
     BlockReadError, MediaContentHeader, TapeRead, PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
@@ -1652,7 +1652,8 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
     }
 
     let root_path = Path::new("/");
-    let manifest_file_name = OsStr::new(MANIFEST_BLOB_NAME);
+    let manifest_archive_name = BackupArchiveName::manifest();
+    let manifest_file_name = OsStr::new(manifest_archive_name.as_ref());
 
     let mut manifest = None;
 
@@ -1732,7 +1733,7 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
 
     // commit manifest
     let mut manifest_path = snapshot_path.to_owned();
-    manifest_path.push(MANIFEST_BLOB_NAME);
+    manifest_path.push(BackupArchiveName::manifest().as_ref());
     let mut tmp_manifest_path = manifest_path.clone();
     tmp_manifest_path.set_extension("tmp");
 
diff --git a/src/backup/mod.rs b/src/backup/mod.rs
index 8c84b8ce8..c5dae69a6 100644
--- a/src/backup/mod.rs
+++ b/src/backup/mod.rs
@@ -1,8 +1,5 @@
 //! Server/client-specific parts for what's otherwise in pbs-datastore.
 
-// Note: .pcat1 => Proxmox Catalog Format version 1
-pub const CATALOG_NAME: &str = "catalog.pcat1.didx";
-
 mod verify;
 pub use verify::*;
 
diff --git a/src/bin/proxmox_backup_debug/diff.rs b/src/bin/proxmox_backup_debug/diff.rs
index b0436d048..dcd351d93 100644
--- a/src/bin/proxmox_backup_debug/diff.rs
+++ b/src/bin/proxmox_backup_debug/diff.rs
@@ -13,7 +13,7 @@ use proxmox_human_byte::HumanByte;
 use proxmox_router::cli::{CliCommand, CliCommandMap, CommandLineInterface};
 use proxmox_schema::api;
 
-use pbs_api_types::{BackupNamespace, BackupPart};
+use pbs_api_types::{BackupArchiveName, BackupNamespace, BackupPart};
 use pbs_client::tools::key_source::{
     crypto_parameters, format_key_source, get_encryption_key_password, KEYFD_SCHEMA,
 };
@@ -70,8 +70,7 @@ pub fn diff_commands() -> CommandLineInterface {
                 type: String,
             },
             "archive-name": {
-                description: "Name of the .pxar archive",
-                type: String,
+                type: BackupArchiveName,
             },
             "repository": {
                 optional: true,
@@ -106,7 +105,7 @@ pub fn diff_commands() -> CommandLineInterface {
 async fn diff_archive_cmd(
     prev_snapshot: String,
     snapshot: String,
-    archive_name: String,
+    archive_name: BackupArchiveName,
     compare_content: bool,
     color: Option<ColorMode>,
     ns: Option<BackupNamespace>,
@@ -140,12 +139,11 @@ async fn diff_archive_cmd(
 
     let output_params = OutputParams { color };
 
-    if archive_name.ends_with(".pxar") {
-        let file_name = format!("{}.didx", archive_name);
+    if archive_name.ends_with(".pxar.didx") {
         diff_archive(
             &prev_snapshot,
             &snapshot,
-            &file_name,
+            &archive_name,
             &repo_params,
             compare_content,
             &output_params,
@@ -161,7 +159,7 @@ async fn diff_archive_cmd(
 async fn diff_archive(
     snapshot_a: &str,
     snapshot_b: &str,
-    file_name: &str,
+    file_name: &BackupArchiveName,
     repo_params: &RepoParams,
     compare_contents: bool,
     output_params: &OutputParams,
@@ -249,7 +247,7 @@ struct OutputParams {
 
 async fn open_dynamic_index(
     snapshot: &str,
-    archive_name: &str,
+    archive_name: &BackupArchiveName,
     params: &RepoParams,
 ) -> Result<(DynamicIndexReader, Accessor), Error> {
     let backup_reader = create_backup_reader(snapshot, params).await?;
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 8d53ccd6e..7a72f6615 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -11,9 +11,9 @@ use proxmox_human_byte::HumanByte;
 use tracing::info;
 
 use pbs_api_types::{
-    print_store_and_ns, ArchiveType, Authid, BackupDir, BackupGroup, BackupNamespace, GroupFilter,
-    Operation, RateLimitConfig, Remote, MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_AUDIT,
-    PRIV_DATASTORE_BACKUP,
+    print_store_and_ns, ArchiveType, Authid, BackupArchiveName, BackupDir, BackupGroup,
+    BackupNamespace, GroupFilter, Operation, RateLimitConfig, Remote, MAX_NAMESPACE_DEPTH,
+    PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
 };
 use pbs_client::BackupRepository;
 use pbs_config::CachedUserInfo;
@@ -21,7 +21,7 @@ use pbs_datastore::data_blob::DataBlob;
 use pbs_datastore::dynamic_index::DynamicIndexReader;
 use pbs_datastore::fixed_index::FixedIndexReader;
 use pbs_datastore::index::IndexFile;
-use pbs_datastore::manifest::{BackupManifest, FileInfo, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME};
+use pbs_datastore::manifest::{BackupManifest, FileInfo};
 use pbs_datastore::read_chunk::AsyncReadChunk;
 use pbs_datastore::{check_backup_owner, DataStore, StoreProgress};
 use pbs_tools::sha::sha256;
@@ -334,16 +334,16 @@ async fn pull_snapshot<'a>(
 ) -> Result<SyncStats, Error> {
     let mut sync_stats = SyncStats::default();
     let mut manifest_name = snapshot.full_path();
-    manifest_name.push(MANIFEST_BLOB_NAME);
+    manifest_name.push(BackupArchiveName::manifest().as_ref());
 
     let mut client_log_name = snapshot.full_path();
-    client_log_name.push(CLIENT_LOG_BLOB_NAME);
+    client_log_name.push(BackupArchiveName::client_log().as_ref());
 
     let mut tmp_manifest_name = manifest_name.clone();
     tmp_manifest_name.set_extension("tmp");
     let tmp_manifest_blob;
     if let Some(data) = reader
-        .load_file_into(MANIFEST_BLOB_NAME, &tmp_manifest_name)
+        .load_file_into(BackupArchiveName::manifest().as_ref(), &tmp_manifest_name)
         .await?
     {
         tmp_manifest_blob = data;
@@ -381,11 +381,12 @@ async fn pull_snapshot<'a>(
         path.push(&item.filename);
 
         if path.exists() {
-            match ArchiveType::from_path(&item.filename)? {
+            let filename: BackupArchiveName = item.filename.as_str().try_into()?;
+            match filename.archive_type() {
                 ArchiveType::DynamicIndex => {
                     let index = DynamicIndexReader::open(&path)?;
                     let (csum, size) = index.compute_csum();
-                    match manifest.verify_file(&item.filename, &csum, size) {
+                    match manifest.verify_file(&filename, &csum, size) {
                         Ok(_) => continue,
                         Err(err) => {
                             info!("detected changed file {path:?} - {err}");
@@ -395,7 +396,7 @@ async fn pull_snapshot<'a>(
                 ArchiveType::FixedIndex => {
                     let index = FixedIndexReader::open(&path)?;
                     let (csum, size) = index.compute_csum();
-                    match manifest.verify_file(&item.filename, &csum, size) {
+                    match manifest.verify_file(&filename, &csum, size) {
                         Ok(_) => continue,
                         Err(err) => {
                             info!("detected changed file {path:?} - {err}");
@@ -405,7 +406,7 @@ async fn pull_snapshot<'a>(
                 ArchiveType::Blob => {
                     let mut tmpfile = std::fs::File::open(&path)?;
                     let (csum, size) = sha256(&mut tmpfile)?;
-                    match manifest.verify_file(&item.filename, &csum, size) {
+                    match manifest.verify_file(&filename, &csum, size) {
                         Ok(_) => continue,
                         Err(err) => {
                             info!("detected changed file {path:?} - {err}");
diff --git a/src/server/sync.rs b/src/server/sync.rs
index bd68dda46..1f11ee5b4 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -14,12 +14,11 @@ use tracing::{info, warn};
 use proxmox_router::HttpError;
 
 use pbs_api_types::{
-    Authid, BackupDir, BackupGroup, BackupNamespace, CryptMode, GroupListItem, SnapshotListItem,
-    MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+    Authid, BackupArchiveName, BackupDir, BackupGroup, BackupNamespace, CryptMode, GroupListItem,
+    SnapshotListItem, MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
 };
 use pbs_client::{BackupReader, BackupRepository, HttpClient, RemoteChunkReader};
 use pbs_datastore::data_blob::DataBlob;
-use pbs_datastore::manifest::CLIENT_LOG_BLOB_NAME;
 use pbs_datastore::read_chunk::AsyncReadChunk;
 use pbs_datastore::{DataStore, ListNamespacesRecursive, LocalChunkReader};
 
@@ -156,15 +155,16 @@ impl SyncSourceReader for RemoteSourceReader {
             .open(&tmp_path)?;
 
         // Note: be silent if there is no log - only log successful download
+        let client_log_name = BackupArchiveName::client_log();
         if let Ok(()) = self
             .backup_reader
-            .download(CLIENT_LOG_BLOB_NAME, tmpfile)
+            .download(client_log_name.as_ref(), tmpfile)
             .await
         {
             if let Err(err) = std::fs::rename(&tmp_path, to_path) {
                 bail!("Atomic rename file {to_path:?} failed - {err}");
             }
-            info!("got backup log file {CLIENT_LOG_BLOB_NAME:?}");
+            info!("got backup log file {client_log_name}");
         }
 
         Ok(())
diff --git a/tests/prune.rs b/tests/prune.rs
index 3b3209698..edc614821 100644
--- a/tests/prune.rs
+++ b/tests/prune.rs
@@ -2,8 +2,7 @@ use std::path::PathBuf;
 
 use anyhow::Error;
 
-use pbs_api_types::PruneJobOptions;
-use pbs_datastore::manifest::MANIFEST_BLOB_NAME;
+use pbs_api_types::{BackupArchiveName, PruneJobOptions};
 use pbs_datastore::prune::compute_prune_info;
 use pbs_datastore::{BackupDir, BackupInfo};
 
@@ -34,7 +33,7 @@ fn create_info(snapshot: &str, partial: bool) -> BackupInfo {
     let mut files = Vec::new();
 
     if !partial {
-        files.push(String::from(MANIFEST_BLOB_NAME));
+        files.push(BackupArchiveName::manifest().to_string());
     }
 
     BackupInfo {
-- 
2.39.5




* [pbs-devel] [PATCH v4 proxmox-backup 4/5] client: drop unused parse_archive_type helper
  2024-11-13 10:50 [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
                   ` (2 preceding siblings ...)
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 3/5] client/server: use dedicated api type for all archive names Christian Ebner
@ 2024-11-13 10:50 ` Christian Ebner
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 5/5] api types: add unit tests for backup archive name parsing Christian Ebner
  2024-11-22 10:33 ` [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
  5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2024-11-13 10:50 UTC (permalink / raw)
  To: pbs-devel

Parsing the archive type from the archive name's extension is now
handled by `BackupArchiveName`.
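
For illustration, a minimal sketch of how a caller can now derive the
server archive name and its type via `BackupArchiveName` (the
`TryFrom<&str>` conversion and the `archive_type()` accessor are the
ones introduced earlier in this series; the `resolve_archive` helper
itself is only a hypothetical example, not part of the patch):

```
use pbs_api_types::{ArchiveType, BackupArchiveName};

// Hypothetical example helper: map a user supplied name such as
// "root.pxar" or "root.pxar.didx" to the canonical server archive
// name and dispatch on its type.
fn resolve_archive(name: &str) -> Result<BackupArchiveName, anyhow::Error> {
    let archive: BackupArchiveName = name.try_into()?;
    match archive.archive_type() {
        ArchiveType::DynamicIndex => println!("dynamic index: {archive}"),
        ArchiveType::FixedIndex => println!("fixed index: {archive}"),
        ArchiveType::Blob => println!("blob: {archive}"),
    }
    Ok(archive)
}
```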

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 3:
- no changes

 proxmox-backup-client/src/main.rs | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/proxmox-backup-client/src/main.rs b/proxmox-backup-client/src/main.rs
index a155f56f0..581bc245b 100644
--- a/proxmox-backup-client/src/main.rs
+++ b/proxmox-backup-client/src/main.rs
@@ -1380,18 +1380,6 @@ async fn dump_image<W: Write>(
     Ok(())
 }
 
-fn parse_archive_type(name: &str) -> (String, ArchiveType) {
-    if name.ends_with(".didx") || name.ends_with(".fidx") || name.ends_with(".blob") {
-        (name.into(), ArchiveType::from_path(name).unwrap())
-    } else if has_pxar_filename_extension(name, false) {
-        (format!("{}.didx", name), ArchiveType::DynamicIndex)
-    } else if name.ends_with(".img") {
-        (format!("{}.fidx", name), ArchiveType::FixedIndex)
-    } else {
-        (format!("{}.blob", name), ArchiveType::Blob)
-    }
-}
-
 #[api(
     input: {
         properties: {
-- 
2.39.5




* [pbs-devel] [PATCH v4 proxmox-backup 5/5] api types: add unit tests for backup archive name parsing
  2024-11-13 10:50 [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
                   ` (3 preceding siblings ...)
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 4/5] client: drop unused parse_archive_type helper Christian Ebner
@ 2024-11-13 10:50 ` Christian Ebner
  2024-11-22 10:33 ` [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
  5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2024-11-13 10:50 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
---
changes since version 3:
- extend tests to cover currently used blob files since the catchall blob mapping was removed
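
The new tests can be run in isolation with something along the lines
of the following (assuming the crate is named after the
`pbs-api-types` directory shown below):

```
cargo test -p pbs-api-types datastore::tests
```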

 pbs-api-types/src/datastore.rs | 64 ++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index 00ac63255..f2bd26d85 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -1741,3 +1741,67 @@ impl BackupArchiveName {
 impl ApiType for BackupArchiveName {
     const API_SCHEMA: Schema = BACKUP_ARCHIVE_NAME_SCHEMA;
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_invalid_backup_archive_names() {
+        let invalid_archive_names = ["/invalid/", "/invalid/..", "/invalid/archive-name.invalid"];
+
+        for archive_name in invalid_archive_names {
+            assert!(BackupArchiveName::from_path(archive_name).is_err());
+        }
+    }
+
+    #[test]
+    fn test_valid_didx_backup_archive_names() {
+        let valid_archive_names = [
+            "/valid/archive-name.pxar",
+            "/valid/archive-name.pxar.didx",
+            "/valid/archive-name.mpxar",
+            "/valid/archive-name.mpxar.didx",
+            "/valid/archive-name.ppxar",
+            "/valid/archive-name.ppxar.didx",
+            "/valid/archive-name.pcat1",
+            "/valid/archive-name.pcat1.didx",
+        ];
+
+        for archive_name in valid_archive_names {
+            let archive = BackupArchiveName::from_path(archive_name).unwrap();
+            assert!(archive.as_ref().ends_with(".didx"));
+            assert!(archive.archive_type() == ArchiveType::DynamicIndex);
+        }
+    }
+
+    #[test]
+    fn test_valid_fidx_backup_archive_names() {
+        let valid_archive_names = ["/valid/archive-name.img", "/valid/archive-name.img.fidx"];
+
+        for archive_name in valid_archive_names {
+            let archive = BackupArchiveName::from_path(archive_name).unwrap();
+            assert!(archive.as_ref() == "archive-name.img.fidx");
+            assert!(archive.without_type_extension() == "archive-name.img");
+            assert!(archive.archive_type() == ArchiveType::FixedIndex);
+        }
+    }
+
+    #[test]
+    fn test_valid_blob_backup_archive_names() {
+        let valid_archive_names = [
+            "/valid/index.json",
+            "/valid/index.json.blob",
+            "/valid/rsa-encrypted.key",
+            "/valid/rsa-encrypted.key.blob",
+            "/valid/archive-name.log",
+            "/valid/archive-name.log.blob",
+        ];
+
+        for archive_name in valid_archive_names {
+            let archive = BackupArchiveName::from_path(archive_name).unwrap();
+            assert!(archive.as_ref().ends_with(".blob"));
+            assert!(archive.archive_type() == ArchiveType::Blob);
+        }
+    }
+}
-- 
2.39.5




* Re: [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type
  2024-11-13 10:50 [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
                   ` (4 preceding siblings ...)
  2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 5/5] api types: add unit tests for backup archive name parsing Christian Ebner
@ 2024-11-22 10:33 ` Christian Ebner
  5 siblings, 0 replies; 7+ messages in thread
From: Christian Ebner @ 2024-11-22 10:33 UTC (permalink / raw)
  To: pbs-devel

superseded-by version 5:
https://lore.proxmox.com/pbs-devel/20241122103011.165010-1-c.ebner@proxmox.com/T



Thread overview: 7+ messages
2024-11-13 10:50 [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 1/5] datastore: move `ArchiveType` to api types Christian Ebner
2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 2/5] api types: introduce `BackupArchiveName` type Christian Ebner
2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 3/5] client/server: use dedicated api type for all archive names Christian Ebner
2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 4/5] client: drop unused parse_archive_type helper Christian Ebner
2024-11-13 10:50 ` [pbs-devel] [PATCH v4 proxmox-backup 5/5] api types: add unit tests for backup archive name parsing Christian Ebner
2024-11-22 10:33 ` [pbs-devel] [PATCH v4 proxmox-backup 0/5] introduce dedicated archive name api type Christian Ebner
