* [pbs-devel] [PATCH proxmox-backup v2 00/12] introduce typestate for datastore/chunkstore
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
This patch series introduces two traits, CanRead and CanWrite, to define whether
a datastore reference is readable, writable, or neither. Functions that read
or write are now implemented in `impl<T: CanRead>` or `impl<T: CanWrite>` blocks, ensuring
that they are only available to references that are supposed to read/write.
Motivation:
Currently we track the number of read/write references to a datastore, but we
don't track Lookup operations, since they neither read nor write. They still
need a chunkstore, though, so even though they don't necessarily do IO
directly, they hold an open file handle. This is a problem for things like
unmounting: lookup operations are currently so short that you'd need really
unlucky timing to actually run into problems, but still, if a datastore is in
"offline" maintenance mode, we shouldn't open file handles on it.
By encoding state in the type:
1. We can hand out non-readable/writable references for lookup operations.
2. The compiler ensures correct usage of references. Since it is easy to miss
   what might happen a few function calls down the line, having the compiler
   yell at you about easily missed things like this is, I think, a really
   good thing. (A minimal sketch of the pattern follows below.)
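
To illustrate, here is a condensed sketch of the pattern used by the series
(marker types and sealed traits from patch 01, the impl-block split from
patch 02); the method bodies are stand-ins:

    mod private {
        pub trait Sealed: Clone + Copy {}
        impl Sealed for super::Read {}
        impl Sealed for super::Write {}
        impl Sealed for super::Lookup {}
    }

    pub trait CanRead: private::Sealed {}
    pub trait CanWrite: CanRead + private::Sealed {}

    #[derive(Clone, Copy, Debug)]
    pub struct Read;
    #[derive(Clone, Copy, Debug)]
    pub struct Write;
    #[derive(Clone, Copy, Debug)]
    pub struct Lookup;

    impl CanRead for Read {}
    impl CanRead for Write {}
    impl CanWrite for Write {}

    pub struct ChunkStore<T> {
        // ... fields unchanged ...
        _marker: std::marker::PhantomData<T>,
    }

    impl<T: CanRead> ChunkStore<T> {
        // read-only operations, e.g. get_chunk_iterator(), live here
    }

    impl<T: CanWrite> ChunkStore<T> {
        // mutating operations, e.g. insert_chunk(), live here
    }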
Changes:
* Added CanRead and CanWrite traits.
* Separated functions into impl<T: CanRead> or impl<T: CanWrite>.
* Introduced three new datastore lookup functions that return concrete types
  implementing CanRead, CanWrite, or neither (see the usage sketch below).
* Renamed lookup_datastore() to open_datastore() and made it private.
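
As a rough usage sketch (the lookup function names are from patch 03; the
store name and call sites are illustrative, not lifted from the series):

    // Lookup: neither CanRead nor CanWrite, so no IO methods are available.
    let store = DataStore::lookup_datastore("store1")?;

    // Read: returns an Arc<DataStore<Read>>, only CanRead methods compile.
    let store = DataStore::lookup_datastore_read("store1")?;
    let _iter = store.get_chunk_iterator()?;

    // Write: returns an Arc<DataStore<Write>>; CanRead and CanWrite methods
    // compile. Calling a write method on a DataStore<Read> is a compile error.
    let store = DataStore::lookup_datastore_write("store1")?;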
The main downside is needing separate datastore caches for read and write references due to
concrete type requirements in the cache HashMap.
Almost all changes are either adding generics or moving functions into the
appropriate trait implementations. The logic itself is only touched in three
places:
- once in lookup_datastore();
- once in check_privs_and_load_store() in /api/admin/datastore: this function
  now only checks the privs, the datastore opening happens directly in the
  endpoint functions;
- (new in v2) the check whether a GC is currently running is now done without
  needing a datastore reference: instead we just try to take the GC lock
  directly on the cached write reference (if one even exists) of the datastore
  in question. This was only used once, by the job scheduler; now we just call
  a function that checks the relevant cache entries instead of getting the
  whole store reference (see the sketch below).
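
A minimal sketch of that check, assuming a free-standing helper along these
lines (the actual name and locking details live in the patch, so treat this
as illustrative only):

    /// Returns true if a GC task is currently running on the given store.
    pub fn is_garbage_collection_running(name: &str) -> bool {
        // Only a cached *write* reference can be running a GC at all.
        if let Some(store) = DATASTORE_MAP_WRITE.lock().unwrap().get(name) {
            // If the GC mutex is already held, a GC is currently running.
            return store.gc_mutex.try_lock().is_err();
        }
        false
    }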
changes since v1:
- seal the trait implementations
- re-structure the patches
- changed how the check for a running GC is done
- "rebased" onto master; it was actually mostly rewritten, given the age and
  type of the changes it just wouldn't really apply all that well anymore...
- we used Operation::Read for verification, but it turns out verification also
  renames corrupted chunks; only noticed because the compiler yelled at me :).
  Not necessarily changed from v1, just wasn't mentioned there.
--
Since I didn't include compile times with v1: @Wolfgang suggested manually
monomorphising some functions to potentially reduce the impact on compile
times/binary sizes. But given the minimal differences in compile times and
binary sizes, I don't think that would be worth the effort (a sketch of the
technique follows below).
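
For reference, a minimal sketch of the usual manual monomorphisation trick
(hoisting the work into a non-generic inner function so it is compiled only
once; the names here are illustrative, not from the series):

    use std::path::Path;
    use anyhow::Error;

    fn open_index<P: AsRef<Path>>(filename: P) -> Result<(), Error> {
        // Non-generic inner function: one copy in the binary, no matter how
        // many concrete `P` types the generic wrapper is instantiated with.
        fn open_index_inner(path: &Path) -> Result<(), Error> {
            // ... the actual (large) function body would live here ...
            Ok(())
        }
        open_index_inner(filename.as_ref())
    }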
Binary sizes were unchanged (`ls -lah`).
Compile times:
        | dbg  | release
--------|------|---------
master  | 52s  | 92s
series  | 53s  | 94s

individual measurements:
* master -> dbg: 52s, 52s, 53s; release: 92s, 93s, 92s
* series -> dbg: 53s, 53s, 53s; release: 94s, 96s, 95s
Hannes Laimer (12):
chunkstore: add CanRead and CanWrite trait
chunkstore: separate functions into impl block
datastore: add generics and new lookup functions
datastore: separate functions into impl block
backup_info: add generics and separate functions into impl blocks
pbs-datastore: add generics and separate functions into impl blocks
api: backup: env: add generics and separate functions into impl block
api/backup/bin/server/tape: add missing generics
examples/tests: add missing generics
api: admin: pull datastore loading out of check_privs helper
datastore: move `fn gc_running` out of DataStoreImpl
api/server: replace datastore_lookup with new, state-typed datastore
returning functions
pbs-datastore/examples/ls-snapshots.rs | 4 +-
pbs-datastore/src/backup_info.rs | 579 ++++----
pbs-datastore/src/chunk_store.rs | 329 +++--
pbs-datastore/src/datastore.rs | 1342 ++++++++++---------
pbs-datastore/src/dynamic_index.rs | 22 +-
pbs-datastore/src/fixed_index.rs | 50 +-
pbs-datastore/src/hierarchy.rs | 92 +-
pbs-datastore/src/lib.rs | 3 +-
pbs-datastore/src/local_chunk_reader.rs | 13 +-
pbs-datastore/src/prune.rs | 19 +-
pbs-datastore/src/snapshot_reader.rs | 31 +-
src/api2/admin/datastore.rs | 161 +--
src/api2/admin/namespace.rs | 10 +-
src/api2/backup/environment.rs | 337 ++---
src/api2/backup/mod.rs | 29 +-
src/api2/backup/upload_chunk.rs | 19 +-
src/api2/config/datastore.rs | 5 +-
src/api2/reader/environment.rs | 30 +-
src/api2/reader/mod.rs | 13 +-
src/api2/status/mod.rs | 8 +-
src/api2/tape/backup.rs | 21 +-
src/api2/tape/drive.rs | 3 +-
src/api2/tape/restore.rs | 83 +-
src/backup/hierarchy.rs | 23 +-
src/backup/verify.rs | 53 +-
src/bin/proxmox-backup-proxy.rs | 26 +-
src/server/gc_job.rs | 7 +-
src/server/prune_job.rs | 9 +-
src/server/pull.rs | 32 +-
src/server/push.rs | 7 +-
src/server/sync.rs | 13 +-
src/server/verify_job.rs | 4 +-
src/tape/file_formats/snapshot_archive.rs | 5 +-
src/tape/pool_writer/mod.rs | 11 +-
src/tape/pool_writer/new_chunks_iterator.rs | 7 +-
tests/prune.rs | 8 +-
36 files changed, 1794 insertions(+), 1614 deletions(-)
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 01/12] chunkstore: add CanRead and CanWrite trait
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
pbs-datastore/src/chunk_store.rs | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 29a3d477..9a77bef2 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -21,14 +21,36 @@ use crate::file_formats::{
};
use crate::DataBlob;
+mod private {
+ pub trait Sealed: Clone + Copy {}
+ impl Sealed for super::Read {}
+ impl Sealed for super::Write {}
+ impl Sealed for super::Lookup {}
+}
+
+pub trait CanRead: private::Sealed {}
+pub trait CanWrite: CanRead + private::Sealed {}
+
+#[derive(Clone, Copy, Debug)]
+pub struct Read;
+#[derive(Clone, Copy, Debug)]
+pub struct Write;
+#[derive(Clone, Copy, Debug)]
+pub struct Lookup;
+
+impl CanRead for Read {}
+impl CanRead for Write {}
+impl CanWrite for Write {}
+
/// File system based chunk store
-pub struct ChunkStore {
+pub struct ChunkStore<T> {
name: String, // used for error reporting
pub(crate) base: PathBuf,
chunk_dir: PathBuf,
mutex: Mutex<()>,
locker: Option<Arc<Mutex<ProcessLocker>>>,
sync_level: DatastoreFSyncLevel,
+ _marker: std::marker::PhantomData<T>,
}
// TODO: what about sysctl setting vm.vfs_cache_pressure (0 - 100) ?
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 02/12] chunkstore: separate functions into impl block
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
... based on whether they are reading/writing.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
pbs-datastore/src/chunk_store.rs | 305 +++++++++++++++++--------------
1 file changed, 169 insertions(+), 136 deletions(-)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 9a77bef2..e998c798 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -88,30 +88,29 @@ fn digest_to_prefix(digest: &[u8]) -> PathBuf {
path.into()
}
-impl ChunkStore {
- #[doc(hidden)]
- pub unsafe fn panic_store() -> Self {
- Self {
- name: String::new(),
- base: PathBuf::new(),
- chunk_dir: PathBuf::new(),
- mutex: Mutex::new(()),
- locker: None,
- sync_level: Default::default(),
- }
- }
+impl ChunkStore<Lookup> {
+ pub fn open_lookup<P: Into<PathBuf>>(name: &str, base: P) -> Result<Self, Error> {
+ let base: PathBuf = base.into();
- fn chunk_dir<P: AsRef<Path>>(path: P) -> PathBuf {
- let mut chunk_dir: PathBuf = PathBuf::from(path.as_ref());
- chunk_dir.push(".chunks");
+ if !base.is_absolute() {
+ bail!("expected absolute path - got {:?}", base);
+ }
- chunk_dir
- }
+ let chunk_dir = Self::chunk_dir(&base);
- pub fn base(&self) -> &Path {
- &self.base
+ Ok(Self {
+ name: name.to_owned(),
+ base,
+ chunk_dir,
+ locker: None,
+ mutex: Mutex::new(()),
+ sync_level: DatastoreFSyncLevel::None,
+ _marker: std::marker::PhantomData,
+ })
}
+}
+impl ChunkStore<Write> {
pub fn create<P>(
name: &str,
path: P,
@@ -174,13 +173,9 @@ impl ChunkStore {
Self::open(name, base, sync_level)
}
+}
- fn lockfile_path<P: Into<PathBuf>>(base: P) -> PathBuf {
- let mut lockfile_path: PathBuf = base.into();
- lockfile_path.push(".lock");
- lockfile_path
- }
-
+impl<T: CanRead> ChunkStore<T> {
/// Check if the chunkstore path is absolute and that we can
/// access it. Returns the absolute '.chunks' path on success.
fn chunk_dir_accessible(base: &Path) -> Result<PathBuf, Error> {
@@ -209,7 +204,7 @@ impl ChunkStore {
) -> Result<Self, Error> {
let base: PathBuf = base.into();
- let chunk_dir = ChunkStore::chunk_dir_accessible(&base)?;
+ let chunk_dir = Self::chunk_dir_accessible(&base)?;
let lockfile_path = Self::lockfile_path(&base);
@@ -222,59 +217,10 @@ impl ChunkStore {
locker: Some(locker),
mutex: Mutex::new(()),
sync_level,
+ _marker: std::marker::PhantomData,
})
}
- pub fn touch_chunk(&self, digest: &[u8; 32]) -> Result<(), Error> {
- // unwrap: only `None` in unit tests
- assert!(self.locker.is_some());
-
- self.cond_touch_chunk(digest, true)?;
- Ok(())
- }
-
- pub fn cond_touch_chunk(&self, digest: &[u8; 32], assert_exists: bool) -> Result<bool, Error> {
- // unwrap: only `None` in unit tests
- assert!(self.locker.is_some());
-
- let (chunk_path, _digest_str) = self.chunk_path(digest);
- self.cond_touch_path(&chunk_path, assert_exists)
- }
-
- pub fn cond_touch_path(&self, path: &Path, assert_exists: bool) -> Result<bool, Error> {
- // unwrap: only `None` in unit tests
- assert!(self.locker.is_some());
-
- let times: [libc::timespec; 2] = [
- // access time -> update to now
- libc::timespec {
- tv_sec: 0,
- tv_nsec: libc::UTIME_NOW,
- },
- // modification time -> keep as is
- libc::timespec {
- tv_sec: 0,
- tv_nsec: libc::UTIME_OMIT,
- },
- ];
-
- use nix::NixPath;
-
- let res = path.with_nix_path(|cstr| unsafe {
- let tmp = libc::utimensat(-1, cstr.as_ptr(), &times[0], libc::AT_SYMLINK_NOFOLLOW);
- nix::errno::Errno::result(tmp)
- })?;
-
- if let Err(err) = res {
- if !assert_exists && err == nix::errno::Errno::ENOENT {
- return Ok(false);
- }
- bail!("update atime failed for chunk/file {path:?} - {err}");
- }
-
- Ok(true)
- }
-
pub fn get_chunk_iterator(
&self,
) -> Result<
@@ -370,10 +316,116 @@ impl ChunkStore {
.fuse())
}
+ /// Checks permissions and owner of passed path.
+ fn check_permissions<P: AsRef<Path>>(path: P, file_mode: u32) -> Result<(), Error> {
+ match nix::sys::stat::stat(path.as_ref()) {
+ Ok(stat) => {
+ if stat.st_uid != u32::from(pbs_config::backup_user()?.uid)
+ || stat.st_gid != u32::from(pbs_config::backup_group()?.gid)
+ || stat.st_mode & 0o777 != file_mode
+ {
+ bail!(
+ "unable to open existing chunk store path {:?} - permissions or owner not correct",
+ path.as_ref(),
+ );
+ }
+ }
+ Err(err) => {
+ bail!(
+ "unable to open existing chunk store path {:?} - {err}",
+ path.as_ref(),
+ );
+ }
+ }
+ Ok(())
+ }
+
+ /// Verify vital files in datastore. Checks the owner and permissions of: the chunkstore, it's
+ /// subdirectories and the lock file.
+ pub fn verify_chunkstore<P: AsRef<Path>>(path: P) -> Result<(), Error> {
+ // Check datastore root path perm/owner
+ Self::check_permissions(path.as_ref(), 0o755)?;
+
+ let chunk_dir = Self::chunk_dir(path.as_ref());
+ // Check datastore .chunks path perm/owner
+ Self::check_permissions(&chunk_dir, 0o750)?;
+
+ // Check all .chunks subdirectories
+ for i in 0..64 * 1024 {
+ let mut l1path = chunk_dir.clone();
+ l1path.push(format!("{:04x}", i));
+ Self::check_permissions(&l1path, 0o750)?;
+ }
+
+ // Check .lock file
+ let lockfile_path = Self::lockfile_path(path.as_ref());
+ Self::check_permissions(lockfile_path, 0o644)?;
+ Ok(())
+ }
+
+ pub fn try_shared_lock(&self) -> Result<ProcessLockSharedGuard, Error> {
+ // unwrap: only `None` in unit tests
+ ProcessLocker::try_shared_lock(self.locker.clone().unwrap())
+ }
pub fn oldest_writer(&self) -> Option<i64> {
// unwrap: only `None` in unit tests
ProcessLocker::oldest_shared_lock(self.locker.clone().unwrap())
}
+}
+
+impl<T: CanWrite> ChunkStore<T> {
+ pub fn touch_chunk(&self, digest: &[u8; 32]) -> Result<(), Error> {
+ // unwrap: only `None` in unit tests
+ assert!(self.locker.is_some());
+
+ self.cond_touch_chunk(digest, true)?;
+ Ok(())
+ }
+
+ pub fn cond_touch_chunk(&self, digest: &[u8; 32], assert_exists: bool) -> Result<bool, Error> {
+ // unwrap: only `None` in unit tests
+ assert!(self.locker.is_some());
+
+ let (chunk_path, _digest_str) = self.chunk_path(digest);
+ self.cond_touch_path(&chunk_path, assert_exists)
+ }
+
+ pub fn cond_touch_path(&self, path: &Path, assert_exists: bool) -> Result<bool, Error> {
+ // unwrap: only `None` in unit tests
+ assert!(self.locker.is_some());
+
+ const UTIME_NOW: i64 = (1 << 30) - 1;
+ const UTIME_OMIT: i64 = (1 << 30) - 2;
+
+ let times: [libc::timespec; 2] = [
+ // access time -> update to now
+ libc::timespec {
+ tv_sec: 0,
+ tv_nsec: UTIME_NOW,
+ },
+ // modification time -> keep as is
+ libc::timespec {
+ tv_sec: 0,
+ tv_nsec: UTIME_OMIT,
+ },
+ ];
+
+ use nix::NixPath;
+
+ let res = path.with_nix_path(|cstr| unsafe {
+ let tmp = libc::utimensat(-1, cstr.as_ptr(), &times[0], libc::AT_SYMLINK_NOFOLLOW);
+ nix::errno::Errno::result(tmp)
+ })?;
+
+ if let Err(err) = res {
+ if !assert_exists && err == nix::errno::Errno::ENOENT {
+ return Ok(false);
+ }
+ bail!("update atime failed for chunk/file {path:?} - {err}");
+ }
+
+ Ok(true)
+ }
pub fn sweep_unused_chunks(
&self,
@@ -611,6 +663,43 @@ impl ChunkStore {
Ok((false, encoded_size))
}
+ pub fn try_exclusive_lock(&self) -> Result<ProcessLockExclusiveGuard, Error> {
+ // unwrap: only `None` in unit tests
+ ProcessLocker::try_exclusive_lock(self.locker.clone().unwrap())
+ }
+}
+
+impl<T> ChunkStore<T> {
+ #[doc(hidden)]
+ pub fn dummy_store() -> Self {
+ Self {
+ name: String::new(),
+ base: PathBuf::new(),
+ chunk_dir: PathBuf::new(),
+ mutex: Mutex::new(()),
+ locker: None,
+ sync_level: Default::default(),
+ _marker: std::marker::PhantomData,
+ }
+ }
+
+ fn chunk_dir<P: AsRef<Path>>(path: P) -> PathBuf {
+ let mut chunk_dir: PathBuf = PathBuf::from(path.as_ref());
+ chunk_dir.push(".chunks");
+
+ chunk_dir
+ }
+
+ pub fn base(&self) -> &Path {
+ &self.base
+ }
+
+ fn lockfile_path<P: Into<PathBuf>>(base: P) -> PathBuf {
+ let mut lockfile_path: PathBuf = base.into();
+ lockfile_path.push(".lock");
+ lockfile_path
+ }
+
pub fn chunk_path(&self, digest: &[u8; 32]) -> (PathBuf, String) {
// unwrap: only `None` in unit tests
assert!(self.locker.is_some());
@@ -642,63 +731,6 @@ impl ChunkStore {
self.base.clone()
}
-
- pub fn try_shared_lock(&self) -> Result<ProcessLockSharedGuard, Error> {
- // unwrap: only `None` in unit tests
- ProcessLocker::try_shared_lock(self.locker.clone().unwrap())
- }
-
- pub fn try_exclusive_lock(&self) -> Result<ProcessLockExclusiveGuard, Error> {
- // unwrap: only `None` in unit tests
- ProcessLocker::try_exclusive_lock(self.locker.clone().unwrap())
- }
-
- /// Checks permissions and owner of passed path.
- fn check_permissions<T: AsRef<Path>>(path: T, file_mode: u32) -> Result<(), Error> {
- match nix::sys::stat::stat(path.as_ref()) {
- Ok(stat) => {
- if stat.st_uid != u32::from(pbs_config::backup_user()?.uid)
- || stat.st_gid != u32::from(pbs_config::backup_group()?.gid)
- || stat.st_mode & 0o777 != file_mode
- {
- bail!(
- "unable to open existing chunk store path {:?} - permissions or owner not correct",
- path.as_ref(),
- );
- }
- }
- Err(err) => {
- bail!(
- "unable to open existing chunk store path {:?} - {err}",
- path.as_ref(),
- );
- }
- }
- Ok(())
- }
-
- /// Verify vital files in datastore. Checks the owner and permissions of: the chunkstore, it's
- /// subdirectories and the lock file.
- pub fn verify_chunkstore<T: AsRef<Path>>(path: T) -> Result<(), Error> {
- // Check datastore root path perm/owner
- ChunkStore::check_permissions(path.as_ref(), 0o755)?;
-
- let chunk_dir = Self::chunk_dir(path.as_ref());
- // Check datastore .chunks path perm/owner
- ChunkStore::check_permissions(&chunk_dir, 0o750)?;
-
- // Check all .chunks subdirectories
- for i in 0..64 * 1024 {
- let mut l1path = chunk_dir.clone();
- l1path.push(format!("{:04x}", i));
- ChunkStore::check_permissions(&l1path, 0o750)?;
- }
-
- // Check .lock file
- let lockfile_path = Self::lockfile_path(path.as_ref());
- ChunkStore::check_permissions(lockfile_path, 0o644)?;
- Ok(())
- }
}
#[test]
@@ -708,13 +740,14 @@ fn test_chunk_store1() {
if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
- let chunk_store = ChunkStore::open("test", &path, DatastoreFSyncLevel::None);
+ let chunk_store: Result<ChunkStore<Read>, _> =
+ ChunkStore::open("test", &path, DatastoreFSyncLevel::None);
assert!(chunk_store.is_err());
let user = nix::unistd::User::from_uid(nix::unistd::Uid::current())
.unwrap()
.unwrap();
- let chunk_store =
+ let chunk_store: ChunkStore<Write> =
ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None).unwrap();
let (chunk, digest) = crate::data_blob::DataChunkBuilder::new(&[0u8, 1u8])
@@ -727,7 +760,7 @@ fn test_chunk_store1() {
let (exists, _) = chunk_store.insert_chunk(&chunk, &digest).unwrap();
assert!(exists);
- let chunk_store =
+ let chunk_store: Result<ChunkStore<Write>, _> =
ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None);
assert!(chunk_store.is_err());
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 03/12] datastore: add generics and new lookup functions
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
pbs-datastore/src/datastore.rs | 80 +++++++++++++++++++++++++++++-----
1 file changed, 68 insertions(+), 12 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index cbf78ecb..6936875e 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -8,6 +8,7 @@ use std::time::Duration;
use anyhow::{bail, format_err, Context, Error};
use nix::unistd::{unlinkat, UnlinkatFlags};
+use pbs_config::BackupLockGuard;
use pbs_tools::lru_cache::LruCache;
use tracing::{info, warn};
@@ -29,7 +30,7 @@ use pbs_api_types::{
use pbs_config::BackupLockGuard;
use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, OLD_LOCKING};
-use crate::chunk_store::ChunkStore;
+use crate::chunk_store::{CanRead, CanWrite, ChunkStore, Lookup as L, Read as R, Write as W};
use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
use crate::fixed_index::{FixedIndexReader, FixedIndexWriter};
use crate::hierarchy::{ListGroups, ListGroupsType, ListNamespaces, ListNamespacesRecursive};
@@ -37,7 +38,12 @@ use crate::index::IndexFile;
use crate::task_tracking::{self, update_active_operations};
use crate::DataBlob;
-static DATASTORE_MAP: LazyLock<Mutex<HashMap<String, Arc<DataStoreImpl>>>> =
+type DataStoreCache<T> = HashMap<String, Arc<DataStoreImpl<T>>>;
+
+static DATASTORE_MAP_READ: LazyLock<Mutex<DataStoreCache<R>>> =
+ LazyLock::new(|| Mutex::new(HashMap::new()));
+
+static DATASTORE_MAP_WRITE: LazyLock<Mutex<DataStoreCache<W>>> =
LazyLock::new(|| Mutex::new(HashMap::new()));
/// checks if auth_id is owner, or, if owner is a token, if
@@ -117,8 +123,8 @@ pub fn ensure_datastore_is_mounted(config: &DataStoreConfig) -> Result<(), Error
///
/// A Datastore can store severals backups, and provides the
/// management interface for backup.
-pub struct DataStoreImpl {
- chunk_store: Arc<ChunkStore>,
+pub struct DataStoreImpl<T> {
+ chunk_store: Arc<ChunkStore<T>>,
gc_mutex: Mutex<()>,
last_gc_status: Mutex<GarbageCollectionStatus>,
verify_new: bool,
@@ -127,12 +133,12 @@ pub struct DataStoreImpl {
sync_level: DatastoreFSyncLevel,
}
-impl DataStoreImpl {
+impl<T> DataStoreImpl<T> {
// This one just panics on everything
#[doc(hidden)]
- pub(crate) unsafe fn new_test() -> Arc<Self> {
+ pub(crate) fn new_test() -> Arc<Self> {
Arc::new(Self {
- chunk_store: Arc::new(unsafe { ChunkStore::panic_store() }),
+ chunk_store: Arc::new(ChunkStore::dummy_store()),
gc_mutex: Mutex::new(()),
last_gc_status: Mutex::new(GarbageCollectionStatus::default()),
verify_new: false,
@@ -143,12 +149,12 @@ impl DataStoreImpl {
}
}
-pub struct DataStore {
- inner: Arc<DataStoreImpl>,
+pub struct DataStore<T> {
+ inner: Arc<DataStoreImpl<T>>,
operation: Option<Operation>,
}
-impl Clone for DataStore {
+impl<T> Clone for DataStore<T> {
fn clone(&self) -> Self {
let mut new_operation = self.operation;
if let Some(operation) = self.operation {
@@ -165,7 +171,7 @@ impl Clone for DataStore {
}
}
-impl Drop for DataStore {
+impl<T> Drop for DataStore<T> {
fn drop(&mut self) {
if let Some(operation) = self.operation {
let mut last_task = false;
@@ -188,12 +194,62 @@ impl Drop for DataStore {
});
if remove_from_cache {
- DATASTORE_MAP.lock().unwrap().remove(self.name());
+ DATASTORE_MAP_READ.lock().unwrap().remove(self.name());
+ DATASTORE_MAP_WRITE.lock().unwrap().remove(self.name());
}
}
}
}
+impl DataStore<L> {
+ pub fn lookup_datastore(name: &str) -> Result<Arc<Self>, Error> {
+ let (config, digest, _lock) = Self::read_config(name)?;
+ let chunk_store = Arc::new(ChunkStore::open_lookup(name, config.absolute_path())?);
+ let tuning: DatastoreTuning = serde_json::from_value(
+ DatastoreTuning::API_SCHEMA
+ .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
+ )?;
+ let store = DataStoreImpl {
+ chunk_store,
+ gc_mutex: Mutex::new(()),
+ last_gc_status: Mutex::new(GarbageCollectionStatus::default()),
+ verify_new: config.verify_new.unwrap_or(false),
+ chunk_order: tuning.chunk_order.unwrap_or_default(),
+ last_digest: Some(digest),
+ sync_level: tuning.sync_level.unwrap_or_default(),
+ };
+
+ Ok(Arc::new(Self {
+ inner: Arc::new(store),
+ operation: Some(Operation::Lookup),
+ }))
+ }
+}
+
+impl DataStore<R> {
+ pub fn lookup_datastore_read(name: &str) -> Result<Arc<Self>, Error> {
+ let mut datastore_cache = DATASTORE_MAP_READ.lock().unwrap();
+ let cache_entry = datastore_cache.get(name);
+ let store = Self::open_datastore(name, Some(Operation::Read), cache_entry.cloned())?;
+ if cache_entry.is_none() {
+ datastore_cache.insert(name.to_string(), store.inner.clone());
+ }
+ Ok(store)
+ }
+}
+
+impl DataStore<W> {
+ pub fn lookup_datastore_write(name: &str) -> Result<Arc<Self>, Error> {
+ let mut datastore_cache = DATASTORE_MAP_WRITE.lock().unwrap();
+ let cache_entry = datastore_cache.get(name);
+ let store = Self::open_datastore(name, Some(Operation::Write), cache_entry.cloned())?;
+ if cache_entry.is_none() {
+ datastore_cache.insert(name.to_string(), store.inner.clone());
+ }
+ Ok(store)
+ }
+}
+
impl DataStore {
// This one just panics on everything
#[doc(hidden)]
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 04/12] datastore: separate functions into impl block
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
... based on whether they are reading/writing.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
pbs-datastore/src/datastore.rs | 1282 ++++++++++++++++----------------
1 file changed, 643 insertions(+), 639 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 6936875e..66a2e209 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -27,7 +27,6 @@ use pbs_api_types::{
DataStoreConfig, DatastoreFSyncLevel, DatastoreTuning, GarbageCollectionStatus,
MaintenanceMode, MaintenanceType, Operation, UPID,
};
-use pbs_config::BackupLockGuard;
use crate::backup_info::{BackupDir, BackupGroup, BackupInfo, OLD_LOCKING};
use crate::chunk_store::{CanRead, CanWrite, ChunkStore, Lookup as L, Read as R, Write as W};
@@ -250,28 +249,87 @@ impl DataStore<W> {
}
}
-impl DataStore {
- // This one just panics on everything
- #[doc(hidden)]
- pub(crate) unsafe fn new_test() -> Arc<Self> {
- Arc::new(Self {
- inner: unsafe { DataStoreImpl::new_test() },
- operation: None,
- })
+impl<T: CanRead + 'static> DataStore<T> {
+ /// Get a streaming iter over single-level backup namespaces of a datatstore, filtered by Ok
+ ///
+ /// The iterated item's result is already unwrapped, if it contained an error it will be
+ /// logged. Can be useful in iterator chain commands
+ pub fn iter_backup_ns_ok(
+ self: &Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ ) -> Result<impl Iterator<Item = BackupNamespace> + 'static, Error> {
+ let this = Arc::clone(self);
+ Ok(
+ ListNamespaces::new(Arc::clone(self), ns)?.filter_map(move |ns| match ns {
+ Ok(ns) => Some(ns),
+ Err(err) => {
+ log::error!("list groups error on datastore {} - {}", this.name(), err);
+ None
+ }
+ }),
+ )
+ }
+
+ /// Get a streaming iter over single-level backup namespaces of a datatstore, filtered by Ok
+ ///
+ /// The iterated item's result is already unwrapped, if it contained an error it will be
+ /// logged. Can be useful in iterator chain commands
+ pub fn recursive_iter_backup_ns_ok(
+ self: &Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ max_depth: Option<usize>,
+ ) -> Result<impl Iterator<Item = BackupNamespace> + 'static, Error> {
+ let this = Arc::clone(self);
+ Ok(if let Some(depth) = max_depth {
+ ListNamespacesRecursive::new_max_depth(Arc::clone(self), ns, depth)?
+ } else {
+ ListNamespacesRecursive::new(Arc::clone(self), ns)?
+ }
+ .filter_map(move |ns| match ns {
+ Ok(ns) => Some(ns),
+ Err(err) => {
+ log::error!("list groups error on datastore {} - {}", this.name(), err);
+ None
+ }
+ }))
+ }
+
+ /// Get a streaming iter over top-level backup groups of a datastore of a particular type,
+ /// filtered by `Ok` results
+ ///
+ /// The iterated item's result is already unwrapped, if it contained an error it will be
+ /// logged. Can be useful in iterator chain commands
+ pub fn iter_backup_type_ok(
+ self: &Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ ty: BackupType,
+ ) -> Result<impl Iterator<Item = BackupGroup> + 'static, Error> {
+ Ok(self.iter_backup_type(ns, ty)?.ok())
+ }
+
+ /// Get a streaming iter over top-level backup groups of a datatstore, filtered by Ok results
+ ///
+ /// The iterated item's result is already unwrapped, if it contained an error it will be
+ /// logged. Can be useful in iterator chain commands
+ pub fn iter_backup_groups_ok(
+ self: &Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ ) -> Result<impl Iterator<Item = BackupGroup> + 'static, Error> {
+ Ok(self.iter_backup_groups(ns)?.ok())
}
+}
- pub fn lookup_datastore(
+impl<T: CanRead> DataStore<T> {
+ pub fn open_datastore(
name: &str,
operation: Option<Operation>,
- ) -> Result<Arc<DataStore>, Error> {
+ cache_entry: Option<Arc<DataStoreImpl<T>>>,
+ ) -> Result<Arc<DataStore<T>>, Error> {
// Avoid TOCTOU between checking maintenance mode and updating active operation counter, as
// we use it to decide whether it is okay to delete the datastore.
- let _config_lock = pbs_config::datastore::lock_config()?;
-
// we could use the ConfigVersionCache's generation for staleness detection, but we load
// the config anyway -> just use digest, additional benefit: manual changes get detected
- let (config, digest) = pbs_config::datastore::config()?;
- let config: DataStoreConfig = config.lookup("datastore", name)?;
+ let (config, digest, _lock) = Self::read_config(name)?;
if let Some(maintenance_mode) = config.get_maintenance_mode() {
if let Err(error) = maintenance_mode.check(operation) {
@@ -280,16 +338,11 @@ impl DataStore {
}
if get_datastore_mount_status(&config) == Some(false) {
- let mut datastore_cache = DATASTORE_MAP.lock().unwrap();
- datastore_cache.remove(&config.name);
bail!("datastore '{}' is not mounted", config.name);
}
- let mut datastore_cache = DATASTORE_MAP.lock().unwrap();
- let entry = datastore_cache.get(name);
-
// reuse chunk store so that we keep using the same process locker instance!
- let chunk_store = if let Some(datastore) = &entry {
+ let chunk_store = if let Some(datastore) = &cache_entry {
let last_digest = datastore.last_digest.as_ref();
if let Some(true) = last_digest.map(|last_digest| last_digest == &digest) {
if let Some(operation) = operation {
@@ -306,73 +359,25 @@ impl DataStore {
DatastoreTuning::API_SCHEMA
.parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
)?;
- Arc::new(ChunkStore::open(
+ Arc::new(ChunkStore::<T>::open(
name,
config.absolute_path(),
tuning.sync_level.unwrap_or_default(),
)?)
};
- let datastore = DataStore::with_store_and_config(chunk_store, config, Some(digest))?;
-
- let datastore = Arc::new(datastore);
- datastore_cache.insert(name.to_string(), datastore.clone());
+ let datastore = Self::with_store_and_config(chunk_store, config, Some(digest))?;
if let Some(operation) = operation {
update_active_operations(name, operation, 1)?;
}
Ok(Arc::new(Self {
- inner: datastore,
+ inner: datastore.into(),
operation,
}))
}
- /// removes all datastores that are not configured anymore
- pub fn remove_unused_datastores() -> Result<(), Error> {
- let (config, _digest) = pbs_config::datastore::config()?;
-
- let mut map = DATASTORE_MAP.lock().unwrap();
- // removes all elements that are not in the config
- map.retain(|key, _| config.sections.contains_key(key));
- Ok(())
- }
-
- /// trigger clearing cache entry based on maintenance mode. Entry will only
- /// be cleared iff there is no other task running, if there is, the end of the
- /// last running task will trigger the clearing of the cache entry.
- pub fn update_datastore_cache(name: &str) -> Result<(), Error> {
- let (config, _digest) = pbs_config::datastore::config()?;
- let datastore: DataStoreConfig = config.lookup("datastore", name)?;
- if datastore
- .get_maintenance_mode()
- .is_some_and(|m| m.clear_from_cache())
- {
- // the datastore drop handler does the checking if tasks are running and clears the
- // cache entry, so we just have to trigger it here
- let _ = DataStore::lookup_datastore(name, Some(Operation::Lookup));
- }
-
- Ok(())
- }
-
- /// Open a raw database given a name and a path.
- ///
- /// # Safety
- /// See the safety section in `open_from_config`
- pub unsafe fn open_path(
- name: &str,
- path: impl AsRef<Path>,
- operation: Option<Operation>,
- ) -> Result<Arc<Self>, Error> {
- let path = path
- .as_ref()
- .to_str()
- .ok_or_else(|| format_err!("non-utf8 paths not supported"))?
- .to_owned();
- unsafe { Self::open_from_config(DataStoreConfig::new(name.to_owned(), path), operation) }
- }
-
/// Open a datastore given a raw configuration.
///
/// # Safety
@@ -394,7 +399,7 @@ impl DataStore {
DatastoreTuning::API_SCHEMA
.parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
)?;
- let chunk_store = ChunkStore::open(
+ let chunk_store = ChunkStore::<T>::open(
&name,
config.absolute_path(),
tuning.sync_level.unwrap_or_default(),
@@ -413,10 +418,10 @@ impl DataStore {
}
fn with_store_and_config(
- chunk_store: Arc<ChunkStore>,
+ chunk_store: Arc<ChunkStore<T>>,
config: DataStoreConfig,
last_digest: Option<[u8; 32]>,
- ) -> Result<DataStoreImpl, Error> {
+ ) -> Result<DataStoreImpl<T>, Error> {
let mut gc_status_path = chunk_store.base_path();
gc_status_path.push(".gc-status");
@@ -448,6 +453,23 @@ impl DataStore {
})
}
+ /// Open a raw database given a name and a path.
+ ///
+ /// # Safety
+ /// See the safety section in `open_from_config`
+ pub unsafe fn open_path(
+ name: &str,
+ path: impl AsRef<Path>,
+ operation: Option<Operation>,
+ ) -> Result<Arc<Self>, Error> {
+ let path = path
+ .as_ref()
+ .to_str()
+ .ok_or_else(|| format_err!("non-utf8 paths not supported"))?
+ .to_owned();
+ unsafe { Self::open_from_config(DataStoreConfig::new(name.to_owned(), path), operation) }
+ }
+
pub fn get_chunk_iterator(
&self,
) -> Result<
@@ -457,53 +479,6 @@ impl DataStore {
self.inner.chunk_store.get_chunk_iterator()
}
- pub fn create_fixed_writer<P: AsRef<Path>>(
- &self,
- filename: P,
- size: usize,
- chunk_size: usize,
- ) -> Result<FixedIndexWriter, Error> {
- let index = FixedIndexWriter::create(
- self.inner.chunk_store.clone(),
- filename.as_ref(),
- size,
- chunk_size,
- )?;
-
- Ok(index)
- }
-
- pub fn open_fixed_reader<P: AsRef<Path>>(
- &self,
- filename: P,
- ) -> Result<FixedIndexReader, Error> {
- let full_path = self.inner.chunk_store.relative_path(filename.as_ref());
-
- let index = FixedIndexReader::open(&full_path)?;
-
- Ok(index)
- }
-
- pub fn create_dynamic_writer<P: AsRef<Path>>(
- &self,
- filename: P,
- ) -> Result<DynamicIndexWriter, Error> {
- let index = DynamicIndexWriter::create(self.inner.chunk_store.clone(), filename.as_ref())?;
-
- Ok(index)
- }
-
- pub fn open_dynamic_reader<P: AsRef<Path>>(
- &self,
- filename: P,
- ) -> Result<DynamicIndexReader, Error> {
- let full_path = self.inner.chunk_store.relative_path(filename.as_ref());
-
- let index = DynamicIndexReader::open(&full_path)?;
-
- Ok(index)
- }
-
pub fn open_index<P>(&self, filename: P) -> Result<Box<dyn IndexFile + Send>, Error>
where
P: AsRef<Path>,
@@ -543,73 +518,26 @@ impl DataStore {
Ok(())
}
- pub fn name(&self) -> &str {
- self.inner.chunk_store.name()
- }
-
- pub fn base_path(&self) -> PathBuf {
- self.inner.chunk_store.base_path()
- }
-
- /// Returns the absolute path for a backup namespace on this datastore
- pub fn namespace_path(&self, ns: &BackupNamespace) -> PathBuf {
- let mut path = self.base_path();
- path.reserve(ns.path_len());
- for part in ns.components() {
- path.push("ns");
- path.push(part);
- }
- path
- }
+ pub fn open_fixed_reader<P: AsRef<Path>>(
+ &self,
+ filename: P,
+ ) -> Result<FixedIndexReader, Error> {
+ let full_path = self.inner.chunk_store.relative_path(filename.as_ref());
- /// Returns the absolute path for a backup_type
- pub fn type_path(&self, ns: &BackupNamespace, backup_type: BackupType) -> PathBuf {
- let mut full_path = self.namespace_path(ns);
- full_path.push(backup_type.to_string());
- full_path
- }
+ let index = FixedIndexReader::open(&full_path)?;
- /// Returns the absolute path for a backup_group
- pub fn group_path(
- &self,
- ns: &BackupNamespace,
- backup_group: &pbs_api_types::BackupGroup,
- ) -> PathBuf {
- let mut full_path = self.namespace_path(ns);
- full_path.push(backup_group.to_string());
- full_path
+ Ok(index)
}
- /// Returns the absolute path for backup_dir
- pub fn snapshot_path(
+ pub fn open_dynamic_reader<P: AsRef<Path>>(
&self,
- ns: &BackupNamespace,
- backup_dir: &pbs_api_types::BackupDir,
- ) -> PathBuf {
- let mut full_path = self.namespace_path(ns);
- full_path.push(backup_dir.to_string());
- full_path
- }
-
- /// Create a backup namespace.
- pub fn create_namespace(
- self: &Arc<Self>,
- parent: &BackupNamespace,
- name: String,
- ) -> Result<BackupNamespace, Error> {
- if !self.namespace_exists(parent) {
- bail!("cannot create new namespace, parent {parent} doesn't already exists");
- }
-
- // construct ns before mkdir to enforce max-depth and name validity
- let ns = BackupNamespace::from_parent_ns(parent, name)?;
-
- let mut ns_full_path = self.base_path();
- ns_full_path.push(ns.path());
+ filename: P,
+ ) -> Result<DynamicIndexReader, Error> {
+ let full_path = self.inner.chunk_store.relative_path(filename.as_ref());
- std::fs::create_dir_all(ns_full_path)?;
+ let index = DynamicIndexReader::open(&full_path)?;
- Ok(ns)
+ Ok(index)
}
/// Returns if the given namespace exists on the datastore
@@ -619,7 +547,401 @@ impl DataStore {
path.exists()
}
- /// Remove all backup groups of a single namespace level but not the namespace itself.
+ /// Returns the time of the last successful backup
+ ///
+ /// Or None if there is no backup in the group (or the group dir does not exist).
+ pub fn last_successful_backup(
+ self: &Arc<Self>,
+ ns: &BackupNamespace,
+ backup_group: &pbs_api_types::BackupGroup,
+ ) -> Result<Option<i64>, Error> {
+ let backup_group = self.backup_group(ns.clone(), backup_group.clone());
+
+ let group_path = backup_group.full_group_path();
+
+ if group_path.exists() {
+ backup_group.last_successful_backup()
+ } else {
+ Ok(None)
+ }
+ }
+
+ /// Returns the backup owner.
+ ///
+ /// The backup owner is the entity who first created the backup group.
+ pub fn get_owner(
+ &self,
+ ns: &BackupNamespace,
+ backup_group: &pbs_api_types::BackupGroup,
+ ) -> Result<Authid, Error> {
+ let full_path = self.owner_path(ns, backup_group);
+ let owner = proxmox_sys::fs::file_read_firstline(full_path)?;
+ owner
+ .trim_end() // remove trailing newline
+ .parse()
+ .map_err(|err| format_err!("parsing owner for {backup_group} failed: {err}"))
+ }
+
+ pub fn owns_backup(
+ &self,
+ ns: &BackupNamespace,
+ backup_group: &pbs_api_types::BackupGroup,
+ auth_id: &Authid,
+ ) -> Result<bool, Error> {
+ let owner = self.get_owner(ns, backup_group)?;
+
+ Ok(check_backup_owner(&owner, auth_id).is_ok())
+ }
+
+ /// Get a streaming iter over single-level backup namespaces of a datatstore
+ ///
+ /// The iterated item is still a Result that can contain errors from rather unexptected FS or
+ /// parsing errors.
+ pub fn iter_backup_ns(
+ self: &Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ ) -> Result<ListNamespaces, Error> {
+ ListNamespaces::new(Arc::clone(self), ns)
+ }
+
+ /// Get a streaming iter over single-level backup namespaces of a datatstore
+ ///
+ /// The iterated item is still a Result that can contain errors from rather unexptected FS or
+ /// parsing errors.
+ pub fn recursive_iter_backup_ns(
+ self: &Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ ) -> Result<ListNamespacesRecursive, Error> {
+ ListNamespacesRecursive::new(Arc::clone(self), ns)
+ }
+
+ /// Get a streaming iter over top-level backup groups of a datatstore of a particular type.
+ ///
+ /// The iterated item is still a Result that can contain errors from rather unexptected FS or
+ /// parsing errors.
+ pub fn iter_backup_type(
+ self: &Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ ty: BackupType,
+ ) -> Result<ListGroupsType, Error> {
+ ListGroupsType::new(Arc::clone(self), ns, ty)
+ }
+
+ /// Get a streaming iter over top-level backup groups of a datatstore
+ ///
+ /// The iterated item is still a Result that can contain errors from rather unexptected FS or
+ /// parsing errors.
+ pub fn iter_backup_groups(
+ self: &Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ ) -> Result<ListGroups, Error> {
+ ListGroups::new(Arc::clone(self), ns)
+ }
+
+ /// Get a in-memory vector for all top-level backup groups of a datatstore
+ ///
+ /// NOTE: using the iterator directly is most often more efficient w.r.t. memory usage
+ pub fn list_backup_groups(
+ self: &Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ ) -> Result<Vec<BackupGroup>, Error> {
+ ListGroups::new(Arc::clone(self), ns)?.collect()
+ }
+
+ /// Lookup all index files to be found in the datastore without taking any logical iteration
+ /// into account.
+ /// The filesystem is walked recursevly to detect index files based on their archive type based
+ /// on the filename. This however excludes the chunks folder, hidden files and does not follow
+ /// symlinks.
+ fn list_index_files(&self) -> Result<HashSet<PathBuf>, Error> {
+ let base = self.base_path();
+
+ let mut list = HashSet::new();
+
+ use walkdir::WalkDir;
+
+ let walker = WalkDir::new(base).into_iter();
+
+ // make sure we skip .chunks (and other hidden files to keep it simple)
+ fn is_hidden(entry: &walkdir::DirEntry) -> bool {
+ entry
+ .file_name()
+ .to_str()
+ .map(|s| s.starts_with('.'))
+ .unwrap_or(false)
+ }
+ let handle_entry_err = |err: walkdir::Error| {
+ // first, extract the actual IO error and the affected path
+ let (inner, path) = match (err.io_error(), err.path()) {
+ (None, _) => return Ok(()), // not an IO-error
+ (Some(inner), Some(path)) => (inner, path),
+ (Some(inner), None) => bail!("unexpected error on datastore traversal: {inner}"),
+ };
+ if inner.kind() == io::ErrorKind::PermissionDenied {
+ if err.depth() <= 1 && path.ends_with("lost+found") {
+ // allow skipping of (root-only) ext4 fsck-directory on EPERM ..
+ return Ok(());
+ }
+ // .. but do not ignore EPERM in general, otherwise we might prune too many chunks.
+ // E.g., if users messed up with owner/perms on a rsync
+ bail!("cannot continue garbage-collection safely, permission denied on: {path:?}");
+ } else if inner.kind() == io::ErrorKind::NotFound {
+ log::info!("ignoring vanished file: {path:?}");
+ return Ok(());
+ } else {
+ bail!("unexpected error on datastore traversal: {inner} - {path:?}");
+ }
+ };
+ for entry in walker.filter_entry(|e| !is_hidden(e)) {
+ let path = match entry {
+ Ok(entry) => entry.into_path(),
+ Err(err) => {
+ handle_entry_err(err)?;
+ continue;
+ }
+ };
+ if let Ok(archive_type) = ArchiveType::from_path(&path) {
+ if archive_type == ArchiveType::FixedIndex
+ || archive_type == ArchiveType::DynamicIndex
+ {
+ list.insert(path);
+ }
+ }
+ }
+
+ Ok(list)
+ }
+
+ // Similar to open index, but return with Ok(None) if index file vanished.
+ fn open_index_reader(&self, absolute_path: &Path) -> Result<Option<Box<dyn IndexFile>>, Error> {
+ let archive_type = match ArchiveType::from_path(absolute_path) {
+ // ignore archives with unknown archive type
+ Ok(ArchiveType::Blob) | Err(_) => bail!("unexpected archive type"),
+ Ok(archive_type) => archive_type,
+ };
+
+ if absolute_path.is_relative() {
+ bail!("expected absolute path, got '{absolute_path:?}'");
+ }
+
+ let file = match std::fs::File::open(absolute_path) {
+ Ok(file) => file,
+ // ignore vanished files
+ Err(err) if err.kind() == io::ErrorKind::NotFound => return Ok(None),
+ Err(err) => {
+ return Err(Error::from(err).context(format!("can't open file '{absolute_path:?}'")))
+ }
+ };
+
+ match archive_type {
+ ArchiveType::FixedIndex => {
+ let reader = FixedIndexReader::new(file)
+ .with_context(|| format!("can't open fixed index '{absolute_path:?}'"))?;
+ Ok(Some(Box::new(reader)))
+ }
+ ArchiveType::DynamicIndex => {
+ let reader = DynamicIndexReader::new(file)
+ .with_context(|| format!("can't open dynamic index '{absolute_path:?}'"))?;
+ Ok(Some(Box::new(reader)))
+ }
+ ArchiveType::Blob => bail!("unexpected archive type blob"),
+ }
+ }
+
+ pub fn last_gc_status(&self) -> GarbageCollectionStatus {
+ self.inner.last_gc_status.lock().unwrap().clone()
+ }
+
+ pub fn garbage_collection_running(&self) -> bool {
+ self.inner.gc_mutex.try_lock().is_err()
+ }
+
+ pub fn try_shared_chunk_store_lock(&self) -> Result<ProcessLockSharedGuard, Error> {
+ self.inner.chunk_store.try_shared_lock()
+ }
+
+ pub fn stat_chunk(&self, digest: &[u8; 32]) -> Result<std::fs::Metadata, Error> {
+ let (chunk_path, _digest_str) = self.inner.chunk_store.chunk_path(digest);
+ std::fs::metadata(chunk_path).map_err(Error::from)
+ }
+
+ pub fn load_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
+ let (chunk_path, digest_str) = self.inner.chunk_store.chunk_path(digest);
+
+ proxmox_lang::try_block!({
+ let mut file = std::fs::File::open(&chunk_path)?;
+ DataBlob::load_from_reader(&mut file)
+ })
+ .map_err(|err| {
+ format_err!(
+ "store '{}', unable to load chunk '{}' - {}",
+ self.name(),
+ digest_str,
+ err,
+ )
+ })
+ }
+
+ /// returns a list of chunks sorted by their inode number on disk chunks that couldn't get
+ /// stat'ed are placed at the end of the list
+ pub fn get_chunks_in_order<F, A>(
+ &self,
+ index: &(dyn IndexFile + Send),
+ skip_chunk: F,
+ check_abort: A,
+ ) -> Result<Vec<(usize, u64)>, Error>
+ where
+ F: Fn(&[u8; 32]) -> bool,
+ A: Fn(usize) -> Result<(), Error>,
+ {
+ let index_count = index.index_count();
+ let mut chunk_list = Vec::with_capacity(index_count);
+ use std::os::unix::fs::MetadataExt;
+ for pos in 0..index_count {
+ check_abort(pos)?;
+
+ let info = index.chunk_info(pos).unwrap();
+
+ if skip_chunk(&info.digest) {
+ continue;
+ }
+
+ let ino = match self.inner.chunk_order {
+ ChunkOrder::Inode => {
+ match self.stat_chunk(&info.digest) {
+ Err(_) => u64::MAX, // could not stat, move to end of list
+ Ok(metadata) => metadata.ino(),
+ }
+ }
+ ChunkOrder::None => 0,
+ };
+
+ chunk_list.push((pos, ino));
+ }
+
+ match self.inner.chunk_order {
+ // sorting by inode improves data locality, which makes it lots faster on spinners
+ ChunkOrder::Inode => {
+ chunk_list.sort_unstable_by(|(_, ino_a), (_, ino_b)| ino_a.cmp(ino_b))
+ }
+ ChunkOrder::None => {}
+ }
+
+ Ok(chunk_list)
+ }
+
+ /// Open a snapshot (backup directory) from this datastore.
+ pub fn backup_dir_from_parts<D>(
+ self: &Arc<Self>,
+ ns: BackupNamespace,
+ ty: BackupType,
+ id: D,
+ time: i64,
+ ) -> Result<BackupDir, Error>
+ where
+ D: Into<String>,
+ {
+ self.backup_dir(ns, (ty, id.into(), time).into())
+ }
+
+ /// Open a snapshot (backup directory) from this datastore with a cached rfc3339 time string.
+ pub fn backup_dir_with_rfc3339<D: Into<String>>(
+ self: &Arc<Self>,
+ group: BackupGroup,
+ time_string: D,
+ ) -> Result<BackupDir, Error> {
+ BackupDir::with_rfc3339(group, time_string.into())
+ }
+
+ /// Open a backup group from this datastore.
+ pub fn backup_group_from_parts<D>(
+ self: &Arc<Self>,
+ ns: BackupNamespace,
+ ty: BackupType,
+ id: D,
+ ) -> BackupGroup
+ where
+ D: Into<String>,
+ {
+ self.backup_group(ns, (ty, id.into()).into())
+ }
+
+ /*
+ /// Open a backup group from this datastore by backup group path such as `vm/100`.
+ ///
+ /// Convenience method for `store.backup_group(path.parse()?)`
+ pub fn backup_group_from_path(self: &Arc<Self>, path: &str) -> Result<BackupGroup, Error> {
+ todo!("split out the namespace");
+ }
+ */
+
+ /// Open a backup group from this datastore.
+ pub fn backup_group(
+ self: &Arc<Self>,
+ ns: BackupNamespace,
+ group: pbs_api_types::BackupGroup,
+ ) -> BackupGroup<T> {
+ BackupGroup::new(Arc::clone(self), ns, group)
+ }
+
+ /// Open a snapshot (backup directory) from this datastore.
+ pub fn backup_dir(
+ self: &Arc<Self>,
+ ns: BackupNamespace,
+ dir: pbs_api_types::BackupDir,
+ ) -> Result<BackupDir, Error> {
+ BackupDir::with_group(self.backup_group(ns, dir.group), dir.time)
+ }
+}
+
+impl<T: CanWrite> DataStore<T> {
+ pub fn create_fixed_writer<P: AsRef<Path>>(
+ &self,
+ filename: P,
+ size: usize,
+ chunk_size: usize,
+ ) -> Result<FixedIndexWriter, Error> {
+ let index = FixedIndexWriter::create(
+ self.inner.chunk_store.clone(),
+ filename.as_ref(),
+ size,
+ chunk_size,
+ )?;
+
+ Ok(index)
+ }
+
+ pub fn create_dynamic_writer<P: AsRef<Path>>(
+ &self,
+ filename: P,
+ ) -> Result<DynamicIndexWriter, Error> {
+ let index = DynamicIndexWriter::create(self.inner.chunk_store.clone(), filename.as_ref())?;
+
+ Ok(index)
+ }
+
+ /// Create a backup namespace.
+ pub fn create_namespace(
+ self: &Arc<Self>,
+ parent: &BackupNamespace,
+ name: String,
+ ) -> Result<BackupNamespace, Error> {
+ if !self.namespace_exists(parent) {
+ bail!("cannot create new namespace, parent {parent} doesn't already exists");
+ }
+
+ // construct ns before mkdir to enforce max-depth and name validity
+ let ns = BackupNamespace::from_parent_ns(parent, name)?;
+
+ let mut ns_full_path = self.base_path();
+ ns_full_path.push(ns.path());
+
+ std::fs::create_dir_all(ns_full_path)?;
+
+ Ok(ns)
+ }
+
+ /// Remove all backup groups of a single namespace level but not the namespace itself.
///
/// Does *not* descends into child-namespaces and doesn't remoes the namespace itself either.
///
@@ -719,85 +1041,30 @@ impl DataStore {
Ok((removed_all_requested, stats))
}
- /// Remove a complete backup group including all snapshots.
- ///
- /// Returns `BackupGroupDeleteStats`, containing the number of deleted snapshots
- /// and number of protected snaphsots, which therefore were not removed.
- pub fn remove_backup_group(
- self: &Arc<Self>,
- ns: &BackupNamespace,
- backup_group: &pbs_api_types::BackupGroup,
- ) -> Result<BackupGroupDeleteStats, Error> {
- let backup_group = self.backup_group(ns.clone(), backup_group.clone());
-
- backup_group.destroy()
- }
-
- /// Remove a backup directory including all content
- pub fn remove_backup_dir(
- self: &Arc<Self>,
- ns: &BackupNamespace,
- backup_dir: &pbs_api_types::BackupDir,
- force: bool,
- ) -> Result<(), Error> {
- let backup_dir = self.backup_dir(ns.clone(), backup_dir.clone())?;
-
- backup_dir.destroy(force)
- }
-
- /// Returns the time of the last successful backup
- ///
- /// Or None if there is no backup in the group (or the group dir does not exist).
- pub fn last_successful_backup(
- self: &Arc<Self>,
- ns: &BackupNamespace,
- backup_group: &pbs_api_types::BackupGroup,
- ) -> Result<Option<i64>, Error> {
- let backup_group = self.backup_group(ns.clone(), backup_group.clone());
-
- let group_path = backup_group.full_group_path();
-
- if group_path.exists() {
- backup_group.last_successful_backup()
- } else {
- Ok(None)
- }
- }
-
- /// Return the path of the 'owner' file.
- pub(super) fn owner_path(
- &self,
- ns: &BackupNamespace,
- group: &pbs_api_types::BackupGroup,
- ) -> PathBuf {
- self.group_path(ns, group).join("owner")
- }
-
- /// Returns the backup owner.
+ /// Remove a complete backup group including all snapshots.
///
- /// The backup owner is the entity who first created the backup group.
- pub fn get_owner(
- &self,
+ /// Returns `BackupGroupDeleteStats`, containing the number of deleted snapshots
+ /// and number of protected snaphsots, which therefore were not removed.
+ pub fn remove_backup_group(
+ self: &Arc<Self>,
ns: &BackupNamespace,
backup_group: &pbs_api_types::BackupGroup,
- ) -> Result<Authid, Error> {
- let full_path = self.owner_path(ns, backup_group);
- let owner = proxmox_sys::fs::file_read_firstline(full_path)?;
- owner
- .trim_end() // remove trailing newline
- .parse()
- .map_err(|err| format_err!("parsing owner for {backup_group} failed: {err}"))
+ ) -> Result<BackupGroupDeleteStats, Error> {
+ let backup_group = self.backup_group(ns.clone(), backup_group.clone());
+
+ backup_group.destroy()
}
- pub fn owns_backup(
- &self,
+ /// Remove a backup directory including all content
+ pub fn remove_backup_dir(
+ self: &Arc<Self>,
ns: &BackupNamespace,
- backup_group: &pbs_api_types::BackupGroup,
- auth_id: &Authid,
- ) -> Result<bool, Error> {
- let owner = self.get_owner(ns, backup_group)?;
+ backup_dir: &pbs_api_types::BackupDir,
+ force: bool,
+ ) -> Result<(), Error> {
+ let backup_dir = self.backup_dir(ns.clone(), backup_dir.clone())?;
- Ok(check_backup_owner(&owner, auth_id).is_ok())
+ backup_dir.destroy(force)
}
/// Set the backup owner.
@@ -900,229 +1167,6 @@ impl DataStore {
}
}
- /// Get a streaming iter over single-level backup namespaces of a datatstore
- ///
- /// The iterated item is still a Result that can contain errors from rather unexptected FS or
- /// parsing errors.
- pub fn iter_backup_ns(
- self: &Arc<DataStore>,
- ns: BackupNamespace,
- ) -> Result<ListNamespaces, Error> {
- ListNamespaces::new(Arc::clone(self), ns)
- }
-
- /// Get a streaming iter over single-level backup namespaces of a datatstore, filtered by Ok
- ///
- /// The iterated item's result is already unwrapped, if it contained an error it will be
- /// logged. Can be useful in iterator chain commands
- pub fn iter_backup_ns_ok(
- self: &Arc<DataStore>,
- ns: BackupNamespace,
- ) -> Result<impl Iterator<Item = BackupNamespace> + 'static, Error> {
- let this = Arc::clone(self);
- Ok(
- ListNamespaces::new(Arc::clone(self), ns)?.filter_map(move |ns| match ns {
- Ok(ns) => Some(ns),
- Err(err) => {
- log::error!("list groups error on datastore {} - {}", this.name(), err);
- None
- }
- }),
- )
- }
-
- /// Get a streaming iter over single-level backup namespaces of a datatstore
- ///
- /// The iterated item is still a Result that can contain errors from rather unexptected FS or
- /// parsing errors.
- pub fn recursive_iter_backup_ns(
- self: &Arc<DataStore>,
- ns: BackupNamespace,
- ) -> Result<ListNamespacesRecursive, Error> {
- ListNamespacesRecursive::new(Arc::clone(self), ns)
- }
-
- /// Get a streaming iter over single-level backup namespaces of a datatstore, filtered by Ok
- ///
- /// The iterated item's result is already unwrapped, if it contained an error it will be
- /// logged. Can be useful in iterator chain commands
- pub fn recursive_iter_backup_ns_ok(
- self: &Arc<DataStore>,
- ns: BackupNamespace,
- max_depth: Option<usize>,
- ) -> Result<impl Iterator<Item = BackupNamespace> + 'static, Error> {
- let this = Arc::clone(self);
- Ok(if let Some(depth) = max_depth {
- ListNamespacesRecursive::new_max_depth(Arc::clone(self), ns, depth)?
- } else {
- ListNamespacesRecursive::new(Arc::clone(self), ns)?
- }
- .filter_map(move |ns| match ns {
- Ok(ns) => Some(ns),
- Err(err) => {
- log::error!("list groups error on datastore {} - {}", this.name(), err);
- None
- }
- }))
- }
-
- /// Get a streaming iter over top-level backup groups of a datatstore of a particular type.
- ///
- /// The iterated item is still a Result that can contain errors from rather unexptected FS or
- /// parsing errors.
- pub fn iter_backup_type(
- self: &Arc<DataStore>,
- ns: BackupNamespace,
- ty: BackupType,
- ) -> Result<ListGroupsType, Error> {
- ListGroupsType::new(Arc::clone(self), ns, ty)
- }
-
- /// Get a streaming iter over top-level backup groups of a datastore of a particular type,
- /// filtered by `Ok` results
- ///
- /// The iterated item's result is already unwrapped, if it contained an error it will be
- /// logged. Can be useful in iterator chain commands
- pub fn iter_backup_type_ok(
- self: &Arc<DataStore>,
- ns: BackupNamespace,
- ty: BackupType,
- ) -> Result<impl Iterator<Item = BackupGroup> + 'static, Error> {
- Ok(self.iter_backup_type(ns, ty)?.ok())
- }
-
- /// Get a streaming iter over top-level backup groups of a datatstore
- ///
- /// The iterated item is still a Result that can contain errors from rather unexptected FS or
- /// parsing errors.
- pub fn iter_backup_groups(
- self: &Arc<DataStore>,
- ns: BackupNamespace,
- ) -> Result<ListGroups, Error> {
- ListGroups::new(Arc::clone(self), ns)
- }
-
- /// Get a streaming iter over top-level backup groups of a datatstore, filtered by Ok results
- ///
- /// The iterated item's result is already unwrapped, if it contained an error it will be
- /// logged. Can be useful in iterator chain commands
- pub fn iter_backup_groups_ok(
- self: &Arc<DataStore>,
- ns: BackupNamespace,
- ) -> Result<impl Iterator<Item = BackupGroup> + 'static, Error> {
- Ok(self.iter_backup_groups(ns)?.ok())
- }
-
- /// Get a in-memory vector for all top-level backup groups of a datatstore
- ///
- /// NOTE: using the iterator directly is most often more efficient w.r.t. memory usage
- pub fn list_backup_groups(
- self: &Arc<DataStore>,
- ns: BackupNamespace,
- ) -> Result<Vec<BackupGroup>, Error> {
- ListGroups::new(Arc::clone(self), ns)?.collect()
- }
-
- /// Lookup all index files to be found in the datastore without taking any logical iteration
- /// into account.
- /// The filesystem is walked recursevly to detect index files based on their archive type based
- /// on the filename. This however excludes the chunks folder, hidden files and does not follow
- /// symlinks.
- fn list_index_files(&self) -> Result<HashSet<PathBuf>, Error> {
- let base = self.base_path();
-
- let mut list = HashSet::new();
-
- use walkdir::WalkDir;
-
- let walker = WalkDir::new(base).into_iter();
-
- // make sure we skip .chunks (and other hidden files to keep it simple)
- fn is_hidden(entry: &walkdir::DirEntry) -> bool {
- entry
- .file_name()
- .to_str()
- .map(|s| s.starts_with('.'))
- .unwrap_or(false)
- }
- let handle_entry_err = |err: walkdir::Error| {
- // first, extract the actual IO error and the affected path
- let (inner, path) = match (err.io_error(), err.path()) {
- (None, _) => return Ok(()), // not an IO-error
- (Some(inner), Some(path)) => (inner, path),
- (Some(inner), None) => bail!("unexpected error on datastore traversal: {inner}"),
- };
- if inner.kind() == io::ErrorKind::PermissionDenied {
- if err.depth() <= 1 && path.ends_with("lost+found") {
- // allow skipping of (root-only) ext4 fsck-directory on EPERM ..
- return Ok(());
- }
- // .. but do not ignore EPERM in general, otherwise we might prune too many chunks.
- // E.g., if users messed up with owner/perms on a rsync
- bail!("cannot continue garbage-collection safely, permission denied on: {path:?}");
- } else if inner.kind() == io::ErrorKind::NotFound {
- log::info!("ignoring vanished file: {path:?}");
- return Ok(());
- } else {
- bail!("unexpected error on datastore traversal: {inner} - {path:?}");
- }
- };
- for entry in walker.filter_entry(|e| !is_hidden(e)) {
- let path = match entry {
- Ok(entry) => entry.into_path(),
- Err(err) => {
- handle_entry_err(err)?;
- continue;
- }
- };
- if let Ok(archive_type) = ArchiveType::from_path(&path) {
- if archive_type == ArchiveType::FixedIndex
- || archive_type == ArchiveType::DynamicIndex
- {
- list.insert(path);
- }
- }
- }
-
- Ok(list)
- }
-
- // Similar to open index, but return with Ok(None) if index file vanished.
- fn open_index_reader(&self, absolute_path: &Path) -> Result<Option<Box<dyn IndexFile>>, Error> {
- let archive_type = match ArchiveType::from_path(absolute_path) {
- // ignore archives with unknown archive type
- Ok(ArchiveType::Blob) | Err(_) => bail!("unexpected archive type"),
- Ok(archive_type) => archive_type,
- };
-
- if absolute_path.is_relative() {
- bail!("expected absolute path, got '{absolute_path:?}'");
- }
-
- let file = match std::fs::File::open(absolute_path) {
- Ok(file) => file,
- // ignore vanished files
- Err(err) if err.kind() == io::ErrorKind::NotFound => return Ok(None),
- Err(err) => {
- return Err(Error::from(err).context(format!("can't open file '{absolute_path:?}'")))
- }
- };
-
- match archive_type {
- ArchiveType::FixedIndex => {
- let reader = FixedIndexReader::new(file)
- .with_context(|| format!("can't open fixed index '{absolute_path:?}'"))?;
- Ok(Some(Box::new(reader)))
- }
- ArchiveType::DynamicIndex => {
- let reader = DynamicIndexReader::new(file)
- .with_context(|| format!("can't open dynamic index '{absolute_path:?}'"))?;
- Ok(Some(Box::new(reader)))
- }
- ArchiveType::Blob => bail!("unexpected archive type blob"),
- }
- }
-
// mark chunks used by ``index`` as used
fn index_mark_used_chunks(
&self,
@@ -1301,15 +1345,7 @@ impl DataStore {
warn!("Found {strange_paths_count} index files outside of expected directory scheme");
}
- Ok(())
- }
-
- pub fn last_gc_status(&self) -> GarbageCollectionStatus {
- self.inner.last_gc_status.lock().unwrap().clone()
- }
-
- pub fn garbage_collection_running(&self) -> bool {
- self.inner.gc_mutex.try_lock().is_err()
+ Ok(())
}
pub fn garbage_collection(
@@ -1479,14 +1515,6 @@ impl DataStore {
Ok(())
}
- pub fn try_shared_chunk_store_lock(&self) -> Result<ProcessLockSharedGuard, Error> {
- self.inner.chunk_store.try_shared_lock()
- }
-
- pub fn chunk_path(&self, digest: &[u8; 32]) -> (PathBuf, String) {
- self.inner.chunk_store.chunk_path(digest)
- }
-
pub fn cond_touch_chunk(&self, digest: &[u8; 32], assert_exists: bool) -> Result<bool, Error> {
self.inner
.chunk_store
@@ -1497,28 +1525,6 @@ impl DataStore {
self.inner.chunk_store.insert_chunk(chunk, digest)
}
- pub fn stat_chunk(&self, digest: &[u8; 32]) -> Result<std::fs::Metadata, Error> {
- let (chunk_path, _digest_str) = self.inner.chunk_store.chunk_path(digest);
- std::fs::metadata(chunk_path).map_err(Error::from)
- }
-
- pub fn load_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
- let (chunk_path, digest_str) = self.inner.chunk_store.chunk_path(digest);
-
- proxmox_lang::try_block!({
- let mut file = std::fs::File::open(&chunk_path)?;
- DataBlob::load_from_reader(&mut file)
- })
- .map_err(|err| {
- format_err!(
- "store '{}', unable to load chunk '{}' - {}",
- self.name(),
- digest_str,
- err,
- )
- })
- }
-
/// Updates the protection status of the specified snapshot.
pub fn update_protection(&self, backup_dir: &BackupDir, protection: bool) -> Result<(), Error> {
let full_path = backup_dir.full_path();
@@ -1545,128 +1551,6 @@ impl DataStore {
Ok(())
}
- pub fn verify_new(&self) -> bool {
- self.inner.verify_new
- }
-
- /// returns a list of chunks sorted by their inode number on disk chunks that couldn't get
- /// stat'ed are placed at the end of the list
- pub fn get_chunks_in_order<F, A>(
- &self,
- index: &(dyn IndexFile + Send),
- skip_chunk: F,
- check_abort: A,
- ) -> Result<Vec<(usize, u64)>, Error>
- where
- F: Fn(&[u8; 32]) -> bool,
- A: Fn(usize) -> Result<(), Error>,
- {
- let index_count = index.index_count();
- let mut chunk_list = Vec::with_capacity(index_count);
- use std::os::unix::fs::MetadataExt;
- for pos in 0..index_count {
- check_abort(pos)?;
-
- let info = index.chunk_info(pos).unwrap();
-
- if skip_chunk(&info.digest) {
- continue;
- }
-
- let ino = match self.inner.chunk_order {
- ChunkOrder::Inode => {
- match self.stat_chunk(&info.digest) {
- Err(_) => u64::MAX, // could not stat, move to end of list
- Ok(metadata) => metadata.ino(),
- }
- }
- ChunkOrder::None => 0,
- };
-
- chunk_list.push((pos, ino));
- }
-
- match self.inner.chunk_order {
- // sorting by inode improves data locality, which makes it lots faster on spinners
- ChunkOrder::Inode => {
- chunk_list.sort_unstable_by(|(_, ino_a), (_, ino_b)| ino_a.cmp(ino_b))
- }
- ChunkOrder::None => {}
- }
-
- Ok(chunk_list)
- }
-
- /// Open a backup group from this datastore.
- pub fn backup_group(
- self: &Arc<Self>,
- ns: BackupNamespace,
- group: pbs_api_types::BackupGroup,
- ) -> BackupGroup {
- BackupGroup::new(Arc::clone(self), ns, group)
- }
-
- /// Open a backup group from this datastore.
- pub fn backup_group_from_parts<T>(
- self: &Arc<Self>,
- ns: BackupNamespace,
- ty: BackupType,
- id: T,
- ) -> BackupGroup
- where
- T: Into<String>,
- {
- self.backup_group(ns, (ty, id.into()).into())
- }
-
- /*
- /// Open a backup group from this datastore by backup group path such as `vm/100`.
- ///
- /// Convenience method for `store.backup_group(path.parse()?)`
- pub fn backup_group_from_path(self: &Arc<Self>, path: &str) -> Result<BackupGroup, Error> {
- todo!("split out the namespace");
- }
- */
-
- /// Open a snapshot (backup directory) from this datastore.
- pub fn backup_dir(
- self: &Arc<Self>,
- ns: BackupNamespace,
- dir: pbs_api_types::BackupDir,
- ) -> Result<BackupDir, Error> {
- BackupDir::with_group(self.backup_group(ns, dir.group), dir.time)
- }
-
- /// Open a snapshot (backup directory) from this datastore.
- pub fn backup_dir_from_parts<T>(
- self: &Arc<Self>,
- ns: BackupNamespace,
- ty: BackupType,
- id: T,
- time: i64,
- ) -> Result<BackupDir, Error>
- where
- T: Into<String>,
- {
- self.backup_dir(ns, (ty, id.into(), time).into())
- }
-
- /// Open a snapshot (backup directory) from this datastore with a cached rfc3339 time string.
- pub fn backup_dir_with_rfc3339<T: Into<String>>(
- self: &Arc<Self>,
- group: BackupGroup,
- time_string: T,
- ) -> Result<BackupDir, Error> {
- BackupDir::with_rfc3339(group, time_string.into())
- }
-
- /*
- /// Open a snapshot (backup directory) from this datastore by a snapshot path.
- pub fn backup_dir_from_path(self: &Arc<Self>, path: &str) -> Result<BackupDir, Error> {
- todo!("split out the namespace");
- }
- */
-
/// Syncs the filesystem of the datastore if 'sync_level' is set to
/// [`DatastoreFSyncLevel::Filesystem`]. Uses syncfs(2).
pub fn try_ensure_sync_level(&self) -> Result<(), Error> {
@@ -1786,6 +1670,126 @@ impl DataStore {
Ok(())
}
+}
+
+impl<T> DataStore<T> {
+ #[doc(hidden)]
+ pub(crate) fn new_test() -> Arc<Self> {
+ Arc::new(Self {
+ inner: DataStoreImpl::new_test(),
+ operation: None,
+ })
+ }
+
+ pub fn read_config(name: &str) -> Result<(DataStoreConfig, [u8; 32], BackupLockGuard), Error> {
+ let lock = pbs_config::datastore::lock_config()?;
+
+ let (config, digest) = pbs_config::datastore::config()?;
+ let config: DataStoreConfig = config.lookup("datastore", name)?;
+ Ok((config, digest, lock))
+ }
+
+ /// Remove cache entries for datastores that are no longer in the configuration
+ pub fn remove_unused_datastores() -> Result<(), Error> {
+ let (config, _digest) = pbs_config::datastore::config()?;
+
+ let mut map_read = DATASTORE_MAP_READ.lock().unwrap();
+ let mut map_write = DATASTORE_MAP_WRITE.lock().unwrap();
+ // removes all elements that are not in the config
+ map_read.retain(|key, _| config.sections.contains_key(key));
+ map_write.retain(|key, _| config.sections.contains_key(key));
+ Ok(())
+ }
+
+ /// Trigger clearing of a datastore's cache entry based on its maintenance mode. The entry
+ /// is only cleared if no other task is currently running; otherwise, the last running task
+ /// clears it when it finishes.
+ pub fn update_datastore_cache(name: &str) -> Result<(), Error> {
+ let (config, _digest) = pbs_config::datastore::config()?;
+ let datastore: DataStoreConfig = config.lookup("datastore", name)?;
+ if datastore
+ .get_maintenance_mode()
+ .is_some_and(|m| m.clear_from_cache())
+ {
+ // the datastore drop handler does the checking if tasks are running and clears the
+ // cache entry, so we just have to trigger it here
+ let _ = DataStore::<Lookup>::lookup_datastore(name);
+ }
+
+ Ok(())
+ }
+
+ pub fn name(&self) -> &str {
+ self.inner.chunk_store.name()
+ }
+
+ pub fn base_path(&self) -> PathBuf {
+ self.inner.chunk_store.base_path()
+ }
+
+ /// Returns the absolute path for a backup namespace on this datastore
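+ /// (each namespace component is appended as `ns/<component>`, e.g. `<base>/ns/a/ns/b` for `a/b`)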
+ pub fn namespace_path(&self, ns: &BackupNamespace) -> PathBuf {
+ let mut path = self.base_path();
+ path.reserve(ns.path_len());
+ for part in ns.components() {
+ path.push("ns");
+ path.push(part);
+ }
+ path
+ }
+
+ /// Returns the absolute path for a backup_type
+ pub fn type_path(&self, ns: &BackupNamespace, backup_type: BackupType) -> PathBuf {
+ let mut full_path = self.namespace_path(ns);
+ full_path.push(backup_type.to_string());
+ full_path
+ }
+
+ /// Returns the absolute path for a backup_group
+ pub fn group_path(
+ &self,
+ ns: &BackupNamespace,
+ backup_group: &pbs_api_types::BackupGroup,
+ ) -> PathBuf {
+ let mut full_path = self.namespace_path(ns);
+ full_path.push(backup_group.to_string());
+ full_path
+ }
+
+ /// Returns the absolute path for backup_dir
+ pub fn snapshot_path(
+ &self,
+ ns: &BackupNamespace,
+ backup_dir: &pbs_api_types::BackupDir,
+ ) -> PathBuf {
+ let mut full_path = self.namespace_path(ns);
+ full_path.push(backup_dir.to_string());
+ full_path
+ }
+
+ /// Return the path of the 'owner' file.
+ pub(super) fn owner_path(
+ &self,
+ ns: &BackupNamespace,
+ group: &pbs_api_types::BackupGroup,
+ ) -> PathBuf {
+ self.group_path(ns, group).join("owner")
+ }
+
+ pub fn chunk_path(&self, digest: &[u8; 32]) -> (PathBuf, String) {
+ self.inner.chunk_store.chunk_path(digest)
+ }
+
+ pub fn verify_new(&self) -> bool {
+ self.inner.verify_new
+ }
+
+ /*
+ /// Open a snapshot (backup directory) from this datastore by a snapshot path.
+ pub fn backup_dir_from_path(self: &Arc<Self>, path: &str) -> Result<BackupDir, Error> {
+ todo!("split out the namespace");
+ }
+ */
pub fn old_locking(&self) -> bool {
*OLD_LOCKING
--
2.39.5
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 13+ messages in thread
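To make the cache split above concrete: a minimal sketch of the marker types and the two typed cache maps. The names `Read` and `Write` and the exact static declarations are illustrative assumptions here, not code from the series.

    use std::collections::HashMap;
    use std::marker::PhantomData;
    use std::sync::{Arc, LazyLock, Mutex};

    pub struct Read;   // marker: reference may read
    pub struct Write;  // marker: reference may read and write
    pub trait CanRead {}
    pub trait CanWrite {}
    impl CanRead for Read {}
    impl CanRead for Write {}
    impl CanWrite for Write {}

    pub struct DataStore<T> {
        // chunk store, config, ... elided
        _marker: PhantomData<T>,
    }

    // Two caches are needed because the marker type is part of the cached value's type:
    static DATASTORE_MAP_READ: LazyLock<Mutex<HashMap<String, Arc<DataStore<Read>>>>> =
        LazyLock::new(|| Mutex::new(HashMap::new()));
    static DATASTORE_MAP_WRITE: LazyLock<Mutex<HashMap<String, Arc<DataStore<Write>>>>> =
        LazyLock::new(|| Mutex::new(HashMap::new()));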
* [pbs-devel] [PATCH proxmox-backup v2 05/12] backup_info: add generics and separate functions into impl blocks
2025-05-26 14:14 [pbs-devel] [PATCH proxmox-backup v2 00/12] introduce typestate for datastore/chunkstore Hannes Laimer
` (3 preceding siblings ...)
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 04/12] datastore: separate functions into impl block Hannes Laimer
@ 2025-05-26 14:14 ` Hannes Laimer
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 06/12] pbs-datastore: " Hannes Laimer
` (6 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
pbs-datastore/src/backup_info.rs | 583 ++++++++++++++++---------------
pbs-datastore/src/datastore.rs | 26 +-
2 files changed, 313 insertions(+), 296 deletions(-)
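The changes below are mechanical; condensed to a bare minimum, the pattern of moving methods into trait-bound impl blocks looks roughly like this (types and bodies are illustrative, not taken from the tree):

    use std::marker::PhantomData;

    pub trait CanRead {}
    pub trait CanWrite {}

    pub struct BackupGroup<T> {
        name: String,
        _marker: PhantomData<T>,
    }

    impl<T> BackupGroup<T> {
        // pure accessors stay available on every reference
        pub fn name(&self) -> &str {
            &self.name
        }
    }

    impl<T: CanRead> BackupGroup<T> {
        // anything that opens files on disk moves behind CanRead
        pub fn list_backups(&self) -> Vec<String> {
            Vec::new() // directory listing elided
        }
    }

    impl<T: CanWrite> BackupGroup<T> {
        // destructive operations move behind CanWrite
        pub fn destroy(&self) {
            // directory removal elided
        }
    }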
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index d4732fdd..f3ca283c 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -18,7 +18,10 @@ use pbs_api_types::{
use pbs_config::{open_backup_lockfile, BackupLockGuard};
use crate::manifest::{BackupManifest, MANIFEST_LOCK_NAME};
-use crate::{DataBlob, DataStore};
+use crate::{
+ chunk_store::{CanRead, CanWrite},
+ DataBlob, DataStore,
+};
pub const DATASTORE_LOCKS_DIR: &str = "/run/proxmox-backup/locks";
@@ -34,14 +37,14 @@ pub(crate) static OLD_LOCKING: LazyLock<bool> = LazyLock::new(|| {
/// BackupGroup is a directory containing a list of BackupDir
#[derive(Clone)]
-pub struct BackupGroup {
- store: Arc<DataStore>,
+pub struct BackupGroup<T> {
+ store: Arc<DataStore<T>>,
ns: BackupNamespace,
group: pbs_api_types::BackupGroup,
}
-impl fmt::Debug for BackupGroup {
+impl<T> fmt::Debug for BackupGroup<T> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("BackupGroup")
.field("store", &self.store.name())
@@ -51,45 +54,12 @@ impl fmt::Debug for BackupGroup {
}
}
-impl BackupGroup {
- pub(crate) fn new(
- store: Arc<DataStore>,
- ns: BackupNamespace,
- group: pbs_api_types::BackupGroup,
- ) -> Self {
- Self { store, ns, group }
- }
-
- /// Access the underlying [`BackupGroup`](pbs_api_types::BackupGroup).
- #[inline]
- pub fn group(&self) -> &pbs_api_types::BackupGroup {
- &self.group
- }
-
- #[inline]
- pub fn backup_ns(&self) -> &BackupNamespace {
- &self.ns
- }
-
- #[inline]
- pub fn backup_type(&self) -> BackupType {
- self.group.ty
- }
-
- #[inline]
- pub fn backup_id(&self) -> &str {
- &self.group.id
- }
-
- pub fn full_group_path(&self) -> PathBuf {
- self.store.group_path(&self.ns, &self.group)
- }
-
- pub fn relative_group_path(&self) -> PathBuf {
- let mut path = self.ns.path();
- path.push(self.group.ty.as_str());
- path.push(&self.group.id);
- path
+impl<T: CanRead> BackupGroup<T> {
+ /// Returns the backup owner.
+ ///
+ /// The backup owner is the entity who first created the backup group.
+ pub fn get_owner(&self) -> Result<Authid, Error> {
+ self.store.get_owner(&self.ns, self.as_ref())
}
/// Simple check whether a group exists. This does not check whether there are any snapshots,
@@ -98,7 +68,7 @@ impl BackupGroup {
self.full_group_path().exists()
}
- pub fn list_backups(&self) -> Result<Vec<BackupInfo>, Error> {
+ pub fn list_backups(&self) -> Result<Vec<BackupInfo<T>>, Error> {
let mut list = vec![];
let path = self.full_group_path();
@@ -130,7 +100,7 @@ impl BackupGroup {
}
/// Finds the latest backup inside a backup group
- pub fn last_backup(&self, only_finished: bool) -> Result<Option<BackupInfo>, Error> {
+ pub fn last_backup(&self, only_finished: bool) -> Result<Option<BackupInfo<T>>, Error> {
let backups = self.list_backups()?;
Ok(backups
.into_iter()
@@ -190,24 +160,13 @@ impl BackupGroup {
Ok(last)
}
+}
- pub fn matches(&self, filter: &GroupFilter) -> bool {
- self.group.matches(filter)
- }
-
- pub fn backup_dir(&self, time: i64) -> Result<BackupDir, Error> {
- BackupDir::with_group(self.clone(), time)
- }
-
- pub fn backup_dir_with_rfc3339<T: Into<String>>(
- &self,
- time_string: T,
- ) -> Result<BackupDir, Error> {
- BackupDir::with_rfc3339(self.clone(), time_string.into())
- }
-
- pub fn iter_snapshots(&self) -> Result<crate::ListSnapshots, Error> {
- crate::ListSnapshots::new(self.clone())
+impl<T: CanWrite> BackupGroup<T> {
+ /// Set the backup owner.
+ pub fn set_owner(&self, auth_id: &Authid, force: bool) -> Result<(), Error> {
+ self.store
+ .set_owner(&self.ns, self.as_ref(), auth_id, force)
}
/// Destroy the group inclusive all its backup snapshots (BackupDir's)
@@ -260,32 +219,6 @@ impl BackupGroup {
Ok(())
}
- /// Returns the backup owner.
- ///
- /// The backup owner is the entity who first created the backup group.
- pub fn get_owner(&self) -> Result<Authid, Error> {
- self.store.get_owner(&self.ns, self.as_ref())
- }
-
- /// Set the backup owner.
- pub fn set_owner(&self, auth_id: &Authid, force: bool) -> Result<(), Error> {
- self.store
- .set_owner(&self.ns, self.as_ref(), auth_id, force)
- }
-
- /// Returns a file name for locking a group.
- ///
- /// The lock file will be located in:
- /// `${DATASTORE_LOCKS_DIR}/${datastore name}/${lock_file_path_helper(rpath)}`
- /// where `rpath` is the relative path of the group.
- fn lock_path(&self) -> PathBuf {
- let path = Path::new(DATASTORE_LOCKS_DIR).join(self.store.name());
-
- let rpath = Path::new(self.group.ty.as_str()).join(&self.group.id);
-
- path.join(lock_file_path_helper(&self.ns, rpath))
- }
-
/// Locks a group exclusively.
pub fn lock(&self) -> Result<BackupLockGuard, Error> {
if *OLD_LOCKING {
@@ -304,34 +237,108 @@ impl BackupGroup {
}
}
-impl AsRef<pbs_api_types::BackupNamespace> for BackupGroup {
+impl<T: Clone> BackupGroup<T> {
+ pub(crate) fn new(
+ store: Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ group: pbs_api_types::BackupGroup,
+ ) -> Self {
+ Self { store, ns, group }
+ }
+
+ /// Access the underlying [`BackupGroup`](pbs_api_types::BackupGroup).
+ #[inline]
+ pub fn group(&self) -> &pbs_api_types::BackupGroup {
+ &self.group
+ }
+
+ #[inline]
+ pub fn backup_ns(&self) -> &BackupNamespace {
+ &self.ns
+ }
+
+ #[inline]
+ pub fn backup_type(&self) -> BackupType {
+ self.group.ty
+ }
+
+ #[inline]
+ pub fn backup_id(&self) -> &str {
+ &self.group.id
+ }
+
+ pub fn full_group_path(&self) -> PathBuf {
+ self.store.group_path(&self.ns, &self.group)
+ }
+
+ pub fn relative_group_path(&self) -> PathBuf {
+ let mut path = self.ns.path();
+ path.push(self.group.ty.as_str());
+ path.push(&self.group.id);
+ path
+ }
+
+ pub fn matches(&self, filter: &GroupFilter) -> bool {
+ self.group.matches(filter)
+ }
+
+ pub fn backup_dir(&self, time: i64) -> Result<BackupDir<T>, Error> {
+ BackupDir::with_group(self.clone(), time)
+ }
+
+ pub fn backup_dir_with_rfc3339<D: Into<String>>(
+ &self,
+ time_string: D,
+ ) -> Result<BackupDir<T>, Error> {
+ BackupDir::with_rfc3339(self.clone(), time_string.into())
+ }
+
+ pub fn iter_snapshots(&self) -> Result<crate::ListSnapshots, Error> {
+ crate::ListSnapshots::new(self.clone())
+ }
+
+ /// Returns a file name for locking a group.
+ ///
+ /// The lock file will be located in:
+ /// `${DATASTORE_LOCKS_DIR}/${datastore name}/${lock_file_path_helper(rpath)}`
+ /// where `rpath` is the relative path of the group.
+ fn lock_path(&self) -> PathBuf {
+ let path = Path::new(DATASTORE_LOCKS_DIR).join(self.store.name());
+
+ let rpath = Path::new(self.group.ty.as_str()).join(&self.group.id);
+
+ path.join(lock_file_path_helper(&self.ns, rpath))
+ }
+}
+
+impl<T> AsRef<pbs_api_types::BackupNamespace> for BackupGroup<T> {
#[inline]
fn as_ref(&self) -> &pbs_api_types::BackupNamespace {
&self.ns
}
}
-impl AsRef<pbs_api_types::BackupGroup> for BackupGroup {
+impl<T> AsRef<pbs_api_types::BackupGroup> for BackupGroup<T> {
#[inline]
fn as_ref(&self) -> &pbs_api_types::BackupGroup {
&self.group
}
}
-impl From<&BackupGroup> for pbs_api_types::BackupGroup {
- fn from(group: &BackupGroup) -> pbs_api_types::BackupGroup {
+impl<T> From<&BackupGroup<T>> for pbs_api_types::BackupGroup {
+ fn from(group: &BackupGroup<T>) -> pbs_api_types::BackupGroup {
group.group.clone()
}
}
-impl From<BackupGroup> for pbs_api_types::BackupGroup {
- fn from(group: BackupGroup) -> pbs_api_types::BackupGroup {
+impl<T> From<BackupGroup<T>> for pbs_api_types::BackupGroup {
+ fn from(group: BackupGroup<T>) -> pbs_api_types::BackupGroup {
group.group
}
}
-impl From<BackupDir> for BackupGroup {
- fn from(dir: BackupDir) -> BackupGroup {
+impl<T> From<BackupDir<T>> for BackupGroup<T> {
+ fn from(dir: BackupDir<T>) -> BackupGroup<T> {
BackupGroup {
store: dir.store,
ns: dir.ns,
@@ -340,8 +347,8 @@ impl From<BackupDir> for BackupGroup {
}
}
-impl From<&BackupDir> for BackupGroup {
- fn from(dir: &BackupDir) -> BackupGroup {
+impl<T> From<&BackupDir<T>> for BackupGroup<T> {
+ fn from(dir: &BackupDir<T>) -> BackupGroup<T> {
BackupGroup {
store: Arc::clone(&dir.store),
ns: dir.ns.clone(),
@@ -354,15 +361,15 @@ impl From<&BackupDir> for BackupGroup {
///
/// We also call this a backup snaphost.
#[derive(Clone)]
-pub struct BackupDir {
- store: Arc<DataStore>,
+pub struct BackupDir<T> {
+ store: Arc<DataStore<T>>,
ns: BackupNamespace,
dir: pbs_api_types::BackupDir,
// backup_time as rfc3339
backup_time_string: String,
}
-impl fmt::Debug for BackupDir {
+impl<T> fmt::Debug for BackupDir<T> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("BackupDir")
.field("store", &self.store.name())
@@ -373,102 +380,12 @@ impl fmt::Debug for BackupDir {
}
}
-impl BackupDir {
- /// Temporarily used for tests.
- #[doc(hidden)]
- pub fn new_test(dir: pbs_api_types::BackupDir) -> Self {
- Self {
- store: unsafe { DataStore::new_test() },
- backup_time_string: Self::backup_time_to_string(dir.time).unwrap(),
- ns: BackupNamespace::root(),
- dir,
- }
- }
-
- pub(crate) fn with_group(group: BackupGroup, backup_time: i64) -> Result<Self, Error> {
- let backup_time_string = Self::backup_time_to_string(backup_time)?;
- Ok(Self {
- store: group.store,
- ns: group.ns,
- dir: (group.group, backup_time).into(),
- backup_time_string,
- })
- }
-
- pub(crate) fn with_rfc3339(
- group: BackupGroup,
- backup_time_string: String,
- ) -> Result<Self, Error> {
- let backup_time = proxmox_time::parse_rfc3339(&backup_time_string)?;
- Ok(Self {
- store: group.store,
- ns: group.ns,
- dir: (group.group, backup_time).into(),
- backup_time_string,
- })
- }
-
- #[inline]
- pub fn backup_ns(&self) -> &BackupNamespace {
- &self.ns
- }
-
- #[inline]
- pub fn backup_type(&self) -> BackupType {
- self.dir.group.ty
- }
-
- #[inline]
- pub fn backup_id(&self) -> &str {
- &self.dir.group.id
- }
-
- #[inline]
- pub fn backup_time(&self) -> i64 {
- self.dir.time
- }
-
- pub fn backup_time_string(&self) -> &str {
- &self.backup_time_string
- }
-
- pub fn dir(&self) -> &pbs_api_types::BackupDir {
- &self.dir
- }
-
- pub fn group(&self) -> &pbs_api_types::BackupGroup {
- &self.dir.group
- }
-
- pub fn relative_path(&self) -> PathBuf {
- let mut path = self.ns.path();
- path.push(self.dir.group.ty.as_str());
- path.push(&self.dir.group.id);
- path.push(&self.backup_time_string);
- path
- }
-
- /// Returns the absolute path for backup_dir, using the cached formatted time string.
- pub fn full_path(&self) -> PathBuf {
- let mut path = self.store.base_path();
- path.push(self.relative_path());
- path
- }
-
- pub fn protected_file(&self) -> PathBuf {
- let mut path = self.full_path();
- path.push(".protected");
- path
- }
-
- pub fn is_protected(&self) -> bool {
- let path = self.protected_file();
- path.exists()
- }
-
- pub fn backup_time_to_string(backup_time: i64) -> Result<String, Error> {
- // fixme: can this fail? (avoid unwrap)
- proxmox_time::epoch_to_rfc3339_utc(backup_time)
+impl<T: CanRead> BackupDir<T> {
+ /// Returns the backup owner.
+ ///
+ /// The backup owner is the entity who first created the backup group.
+ pub fn get_owner(&self) -> Result<Authid, Error> {
+ self.store.get_owner(&self.ns, self.as_ref())
}
/// load a `DataBlob` from this snapshot's backup dir.
@@ -483,22 +400,38 @@ impl BackupDir {
.map_err(|err| format_err!("unable to load blob '{:?}' - {}", path, err))
}
- /// Returns the filename to lock a manifest
- ///
- /// Also creates the basedir. The lockfile is located in
- /// `${DATASTORE_LOCKS_DIR}/${datastore name}/${lock_file_path_helper(rpath)}.index.json.lck`
- /// where rpath is the relative path of the snapshot.
- fn manifest_lock_path(&self) -> PathBuf {
- let path = Path::new(DATASTORE_LOCKS_DIR).join(self.store.name());
+ /// Acquires a shared lock on a snapshot.
+ pub fn lock_shared(&self) -> Result<BackupLockGuard, Error> {
+ if *OLD_LOCKING {
+ lock_dir_noblock_shared(
+ &self.full_path(),
+ "snapshot",
+ "backup is running or snapshot is in use, could not acquire shared lock",
+ )
+ .map(BackupLockGuard::from)
+ } else {
+ lock_helper(self.store.name(), &self.lock_path(), |p| {
+ open_backup_lockfile(p, Some(Duration::from_secs(0)), false)
+ .with_context(|| format!("unable to acquire shared snapshot lock {p:?}"))
+ })
+ }
+ }
- let rpath = Path::new(self.dir.group.ty.as_str())
- .join(&self.dir.group.id)
- .join(&self.backup_time_string)
- .join(MANIFEST_LOCK_NAME);
+ /// Load the manifest without a lock. Must not be written back.
+ pub fn load_manifest(&self) -> Result<(BackupManifest, u64), Error> {
+ let blob = self.load_blob(MANIFEST_BLOB_NAME.as_ref())?;
+ let raw_size = blob.raw_size();
+ let manifest = BackupManifest::try_from(blob)?;
+ Ok((manifest, raw_size))
+ }
- path.join(lock_file_path_helper(&self.ns, rpath))
+ /// Load the verify state from the manifest.
+ pub fn verify_state(&self) -> Result<Option<VerifyState>, anyhow::Error> {
+ Ok(self.load_manifest()?.0.verify_state()?.map(|svs| svs.state))
}
+}
+impl<T: CanWrite> BackupDir<T> {
/// Locks the manifest of a snapshot, for example, to update or delete it.
pub(crate) fn lock_manifest(&self) -> Result<BackupLockGuard, Error> {
let path = if *OLD_LOCKING {
@@ -523,21 +456,6 @@ impl BackupDir {
})
}
- /// Returns a file name for locking a snapshot.
- ///
- /// The lock file will be located in:
- /// `${DATASTORE_LOCKS_DIR}/${datastore name}/${lock_file_path_helper(rpath)}`
- /// where `rpath` is the relative path of the snapshot.
- fn lock_path(&self) -> PathBuf {
- let path = Path::new(DATASTORE_LOCKS_DIR).join(self.store.name());
-
- let rpath = Path::new(self.dir.group.ty.as_str())
- .join(&self.dir.group.id)
- .join(&self.backup_time_string);
-
- path.join(lock_file_path_helper(&self.ns, rpath))
- }
-
/// Locks a snapshot exclusively.
pub fn lock(&self) -> Result<BackupLockGuard, Error> {
if *OLD_LOCKING {
@@ -555,23 +473,6 @@ impl BackupDir {
}
}
- /// Acquires a shared lock on a snapshot.
- pub fn lock_shared(&self) -> Result<BackupLockGuard, Error> {
- if *OLD_LOCKING {
- lock_dir_noblock_shared(
- &self.full_path(),
- "snapshot",
- "backup is running or snapshot is in use, could not acquire shared lock",
- )
- .map(BackupLockGuard::from)
- } else {
- lock_helper(self.store.name(), &self.lock_path(), |p| {
- open_backup_lockfile(p, Some(Duration::from_secs(0)), false)
- .with_context(|| format!("unable to acquire shared snapshot lock {p:?}"))
- })
- }
- }
-
/// Destroy the whole snapshot, bails if it's protected
///
/// Setting `force` to true skips locking and thus ignores if the backup is currently in use.
@@ -624,31 +525,6 @@ impl BackupDir {
Ok(())
}
- /// Get the datastore.
- pub fn datastore(&self) -> &Arc<DataStore> {
- &self.store
- }
-
- /// Returns the backup owner.
- ///
- /// The backup owner is the entity who first created the backup group.
- pub fn get_owner(&self) -> Result<Authid, Error> {
- self.store.get_owner(&self.ns, self.as_ref())
- }
-
- /// Lock the snapshot and open a reader.
- pub fn locked_reader(&self) -> Result<crate::SnapshotReader, Error> {
- crate::SnapshotReader::new_do(self.clone())
- }
-
- /// Load the manifest without a lock. Must not be written back.
- pub fn load_manifest(&self) -> Result<(BackupManifest, u64), Error> {
- let blob = self.load_blob(MANIFEST_BLOB_NAME.as_ref())?;
- let raw_size = blob.raw_size();
- let manifest = BackupManifest::try_from(blob)?;
- Ok((manifest, raw_size))
- }
-
/// Update the manifest of the specified snapshot. Never write a manifest directly,
/// only use this method - anything else may break locking guarantees.
pub fn update_manifest(
@@ -706,68 +582,203 @@ impl BackupDir {
Ok(())
}
+}
+
+impl<T> BackupDir<T> {
+ /// Temporarily used for tests.
+ #[doc(hidden)]
+ pub fn new_test(dir: pbs_api_types::BackupDir) -> Self {
+ Self {
+ store: DataStore::new_test(),
+ backup_time_string: Self::backup_time_to_string(dir.time).unwrap(),
+ ns: BackupNamespace::root(),
+ dir,
+ }
+ }
- /// Load the verify state from the manifest.
- pub fn verify_state(&self) -> Result<Option<VerifyState>, anyhow::Error> {
- Ok(self.load_manifest()?.0.verify_state()?.map(|svs| svs.state))
+ pub(crate) fn with_group(group: BackupGroup<T>, backup_time: i64) -> Result<Self, Error> {
+ let backup_time_string = Self::backup_time_to_string(backup_time)?;
+ Ok(Self {
+ store: group.store,
+ ns: group.ns,
+ dir: (group.group, backup_time).into(),
+ backup_time_string,
+ })
+ }
+
+ pub(crate) fn with_rfc3339(
+ group: BackupGroup<T>,
+ backup_time_string: String,
+ ) -> Result<Self, Error> {
+ let backup_time = proxmox_time::parse_rfc3339(&backup_time_string)?;
+ Ok(Self {
+ store: group.store,
+ ns: group.ns,
+ dir: (group.group, backup_time).into(),
+ backup_time_string,
+ })
+ }
+
+ #[inline]
+ pub fn backup_ns(&self) -> &BackupNamespace {
+ &self.ns
+ }
+
+ #[inline]
+ pub fn backup_type(&self) -> BackupType {
+ self.dir.group.ty
+ }
+
+ #[inline]
+ pub fn backup_id(&self) -> &str {
+ &self.dir.group.id
+ }
+
+ #[inline]
+ pub fn backup_time(&self) -> i64 {
+ self.dir.time
+ }
+
+ pub fn backup_time_string(&self) -> &str {
+ &self.backup_time_string
+ }
+
+ pub fn dir(&self) -> &pbs_api_types::BackupDir {
+ &self.dir
+ }
+
+ pub fn group(&self) -> &pbs_api_types::BackupGroup {
+ &self.dir.group
+ }
+
+ pub fn relative_path(&self) -> PathBuf {
+ let mut path = self.ns.path();
+ path.push(self.dir.group.ty.as_str());
+ path.push(&self.dir.group.id);
+ path.push(&self.backup_time_string);
+ path
+ }
+
+ /// Returns the absolute path for backup_dir, using the cached formatted time string.
+ pub fn full_path(&self) -> PathBuf {
+ let mut path = self.store.base_path();
+ path.push(self.relative_path());
+ path
+ }
+
+ pub fn protected_file(&self) -> PathBuf {
+ let mut path = self.full_path();
+ path.push(".protected");
+ path
+ }
+
+ pub fn is_protected(&self) -> bool {
+ let path = self.protected_file();
+ path.exists()
+ }
+
+ pub fn backup_time_to_string(backup_time: i64) -> Result<String, Error> {
+ // fixme: can this fail? (avoid unwrap)
+ proxmox_time::epoch_to_rfc3339_utc(backup_time)
+ }
+
+ /// Returns the filename to lock a manifest
+ ///
+ /// Also creates the basedir. The lockfile is located in
+ /// `${DATASTORE_LOCKS_DIR}/${datastore name}/${lock_file_path_helper(rpath)}.index.json.lck`
+ /// where rpath is the relative path of the snapshot.
+ fn manifest_lock_path(&self) -> PathBuf {
+ let path = Path::new(DATASTORE_LOCKS_DIR).join(self.store.name());
+
+ let rpath = Path::new(self.dir.group.ty.as_str())
+ .join(&self.dir.group.id)
+ .join(&self.backup_time_string)
+ .join(MANIFEST_LOCK_NAME);
+
+ path.join(lock_file_path_helper(&self.ns, rpath))
+ }
+
+ /// Returns a file name for locking a snapshot.
+ ///
+ /// The lock file will be located in:
+ /// `${DATASTORE_LOCKS_DIR}/${datastore name}/${lock_file_path_helper(rpath)}`
+ /// where `rpath` is the relative path of the snapshot.
+ fn lock_path(&self) -> PathBuf {
+ let path = Path::new(DATASTORE_LOCKS_DIR).join(self.store.name());
+
+ let rpath = Path::new(self.dir.group.ty.as_str())
+ .join(&self.dir.group.id)
+ .join(&self.backup_time_string);
+
+ path.join(lock_file_path_helper(&self.ns, rpath))
+ }
+
+ /// Get the datastore.
+ pub fn datastore(&self) -> &Arc<DataStore<T>> {
+ &self.store
+ }
+
+ /// Lock the snapshot and open a reader.
+ pub fn locked_reader(&self) -> Result<crate::SnapshotReader, Error> {
+ crate::SnapshotReader::new_do(self.clone())
}
}
-impl AsRef<pbs_api_types::BackupNamespace> for BackupDir {
+impl<T> AsRef<pbs_api_types::BackupNamespace> for BackupDir<T> {
fn as_ref(&self) -> &pbs_api_types::BackupNamespace {
&self.ns
}
}
-impl AsRef<pbs_api_types::BackupDir> for BackupDir {
+impl<T> AsRef<pbs_api_types::BackupDir> for BackupDir<T> {
fn as_ref(&self) -> &pbs_api_types::BackupDir {
&self.dir
}
}
-impl AsRef<pbs_api_types::BackupGroup> for BackupDir {
+impl<T> AsRef<pbs_api_types::BackupGroup> for BackupDir<T> {
fn as_ref(&self) -> &pbs_api_types::BackupGroup {
&self.dir.group
}
}
-impl From<&BackupDir> for pbs_api_types::BackupGroup {
- fn from(dir: &BackupDir) -> pbs_api_types::BackupGroup {
+impl<T> From<&BackupDir<T>> for pbs_api_types::BackupGroup {
+ fn from(dir: &BackupDir<T>) -> pbs_api_types::BackupGroup {
dir.dir.group.clone()
}
}
-impl From<BackupDir> for pbs_api_types::BackupGroup {
- fn from(dir: BackupDir) -> pbs_api_types::BackupGroup {
+impl<T> From<BackupDir<T>> for pbs_api_types::BackupGroup {
+ fn from(dir: BackupDir<T>) -> pbs_api_types::BackupGroup {
dir.dir.group
}
}
-impl From<&BackupDir> for pbs_api_types::BackupDir {
- fn from(dir: &BackupDir) -> pbs_api_types::BackupDir {
+impl<T> From<&BackupDir<T>> for pbs_api_types::BackupDir {
+ fn from(dir: &BackupDir<T>) -> pbs_api_types::BackupDir {
dir.dir.clone()
}
}
-impl From<BackupDir> for pbs_api_types::BackupDir {
- fn from(dir: BackupDir) -> pbs_api_types::BackupDir {
+impl<T> From<BackupDir<T>> for pbs_api_types::BackupDir {
+ fn from(dir: BackupDir<T>) -> pbs_api_types::BackupDir {
dir.dir
}
}
/// Detailed Backup Information, lists files inside a BackupDir
#[derive(Clone, Debug)]
-pub struct BackupInfo {
+pub struct BackupInfo<T> {
/// the backup directory
- pub backup_dir: BackupDir,
+ pub backup_dir: BackupDir<T>,
/// List of data files
pub files: Vec<String>,
/// Protection Status
pub protected: bool,
}
-impl BackupInfo {
- pub fn new(backup_dir: BackupDir) -> Result<BackupInfo, Error> {
+impl<T: CanRead> BackupInfo<T> {
+ pub fn new(backup_dir: BackupDir<T>) -> Result<BackupInfo<T>, Error> {
let path = backup_dir.full_path();
let files = list_backup_files(libc::AT_FDCWD, &path)?;
@@ -779,8 +790,10 @@ impl BackupInfo {
protected,
})
}
+}
- pub fn sort_list(list: &mut [BackupInfo], ascendending: bool) {
+impl<T> BackupInfo<T> {
+ pub fn sort_list(list: &mut [BackupInfo<T>], ascendending: bool) {
if ascendending {
// oldest first
list.sort_unstable_by(|a, b| a.backup_dir.dir.time.cmp(&b.backup_dir.dir.time));
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 66a2e209..9356750b 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -303,7 +303,7 @@ impl<T: CanRead + 'static> DataStore<T> {
self: &Arc<DataStore<T>>,
ns: BackupNamespace,
ty: BackupType,
- ) -> Result<impl Iterator<Item = BackupGroup> + 'static, Error> {
+ ) -> Result<impl Iterator<Item = BackupGroup<T>> + 'static, Error> {
Ok(self.iter_backup_type(ns, ty)?.ok())
}
@@ -314,7 +314,7 @@ impl<T: CanRead + 'static> DataStore<T> {
pub fn iter_backup_groups_ok(
self: &Arc<DataStore<T>>,
ns: BackupNamespace,
- ) -> Result<impl Iterator<Item = BackupGroup> + 'static, Error> {
+ ) -> Result<impl Iterator<Item = BackupGroup<T>> + 'static, Error> {
Ok(self.iter_backup_groups(ns)?.ok())
}
}
@@ -644,7 +644,7 @@ impl<T: CanRead> DataStore<T> {
pub fn list_backup_groups(
self: &Arc<DataStore<T>>,
ns: BackupNamespace,
- ) -> Result<Vec<BackupGroup>, Error> {
+ ) -> Result<Vec<BackupGroup<T>>, Error> {
ListGroups::new(Arc::clone(self), ns)?.collect()
}
@@ -837,7 +837,7 @@ impl<T: CanRead> DataStore<T> {
ty: BackupType,
id: D,
time: i64,
- ) -> Result<BackupDir, Error>
+ ) -> Result<BackupDir<T>, Error>
where
D: Into<String>,
{
@@ -847,10 +847,10 @@ impl<T: CanRead> DataStore<T> {
/// Open a snapshot (backup directory) from this datastore with a cached rfc3339 time string.
pub fn backup_dir_with_rfc3339<D: Into<String>>(
self: &Arc<Self>,
- group: BackupGroup,
+ group: BackupGroup<T>,
time_string: D,
- ) -> Result<BackupDir, Error> {
- BackupDir::with_rfc3339(group, time_string.into())
+ ) -> Result<BackupDir<T>, Error> {
+ BackupDir::<T>::with_rfc3339(group, time_string.into())
}
/// Open a backup group from this datastore.
@@ -859,7 +859,7 @@ impl<T: CanRead> DataStore<T> {
ns: BackupNamespace,
ty: BackupType,
id: D,
- ) -> BackupGroup
+ ) -> BackupGroup<T>
where
D: Into<String>,
{
@@ -889,7 +889,7 @@ impl<T: CanRead> DataStore<T> {
self: &Arc<Self>,
ns: BackupNamespace,
dir: pbs_api_types::BackupDir,
- ) -> Result<BackupDir, Error> {
+ ) -> Result<BackupDir<T>, Error> {
BackupDir::with_group(self.backup_group(ns, dir.group), dir.time)
}
}
@@ -1258,7 +1258,7 @@ impl<T: CanWrite> DataStore<T> {
_ => bail!("exhausted retries and unexpected counter overrun"),
};
- let mut snapshots = match group.list_backups() {
+ let mut snapshots: Vec<BackupInfo<T>> = match group.list_backups() {
Ok(snapshots) => snapshots,
Err(err) => {
if group.exists() {
@@ -1526,7 +1526,11 @@ impl<T: CanWrite> DataStore<T> {
}
/// Updates the protection status of the specified snapshot.
- pub fn update_protection(&self, backup_dir: &BackupDir, protection: bool) -> Result<(), Error> {
+ pub fn update_protection(
+ &self,
+ backup_dir: &BackupDir<T>,
+ protection: bool,
+ ) -> Result<(), Error> {
let full_path = backup_dir.full_path();
if !full_path.exists() {
--
2.39.5
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
^ permalink raw reply [flat|nested] 13+ messages in thread
* [pbs-devel] [PATCH proxmox-backup v2 06/12] pbs-datastore: add generics and separate functions into impl blocks
2025-05-26 14:14 [pbs-devel] [PATCH proxmox-backup v2 00/12] introduce typestate for datastore/chunkstore Hannes Laimer
` (4 preceding siblings ...)
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 05/12] backup_info: add generics and separate functions into impl blocks Hannes Laimer
@ 2025-05-26 14:14 ` Hannes Laimer
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 07/12] api: backup: env: add generics and separate functions into impl block Hannes Laimer
` (5 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
pbs-datastore/src/backup_info.rs | 18 ++---
pbs-datastore/src/datastore.rs | 12 ++--
pbs-datastore/src/dynamic_index.rs | 22 +++---
pbs-datastore/src/fixed_index.rs | 50 +++++++-------
pbs-datastore/src/hierarchy.rs | 92 ++++++++++++++-----------
pbs-datastore/src/local_chunk_reader.rs | 13 ++--
pbs-datastore/src/prune.rs | 19 +++--
pbs-datastore/src/snapshot_reader.rs | 25 +++----
8 files changed, 134 insertions(+), 117 deletions(-)
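Not every iterator touched below holds a `DataStore<T>`; `ListNamespaces`, for example, keeps only the base path, so the access level is carried by a `PhantomData<T>` marker. A minimal sketch of that shape, with illustrative fields:

    use std::marker::PhantomData;
    use std::path::PathBuf;

    pub trait CanRead {}

    pub struct ListNamespaces<T> {
        base_path: PathBuf,
        // no value of type T is stored; the marker only pins the access level
        _marker: PhantomData<T>,
    }

    impl<T: CanRead> Iterator for ListNamespaces<T> {
        type Item = PathBuf;

        fn next(&mut self) -> Option<PathBuf> {
            let _ = &self.base_path; // directory walking elided
            None
        }
    }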
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index f3ca283c..25d8fc08 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -160,6 +160,10 @@ impl<T: CanRead> BackupGroup<T> {
Ok(last)
}
+
+ pub fn iter_snapshots(&self) -> Result<crate::ListSnapshots<T>, Error> {
+ crate::ListSnapshots::new(self.clone())
+ }
}
impl<T: CanWrite> BackupGroup<T> {
@@ -293,10 +297,6 @@ impl<T: Clone> BackupGroup<T> {
BackupDir::with_rfc3339(self.clone(), time_string.into())
}
- pub fn iter_snapshots(&self) -> Result<crate::ListSnapshots, Error> {
- crate::ListSnapshots::new(self.clone())
- }
-
/// Returns a file name for locking a group.
///
/// The lock file will be located in:
@@ -429,6 +429,11 @@ impl<T: CanRead> BackupDir<T> {
pub fn verify_state(&self) -> Result<Option<VerifyState>, anyhow::Error> {
Ok(self.load_manifest()?.0.verify_state()?.map(|svs| svs.state))
}
+
+ /// Lock the snapshot and open a reader.
+ pub fn locked_reader(&self) -> Result<crate::SnapshotReader<T>, Error> {
+ crate::SnapshotReader::new_do(self.clone())
+ }
}
impl<T: CanWrite> BackupDir<T> {
@@ -717,11 +722,6 @@ impl<T> BackupDir<T> {
pub fn datastore(&self) -> &Arc<DataStore<T>> {
&self.store
}
-
- /// Lock the snapshot and open a reader.
- pub fn locked_reader(&self) -> Result<crate::SnapshotReader, Error> {
- crate::SnapshotReader::new_do(self.clone())
- }
}
impl<T> AsRef<pbs_api_types::BackupNamespace> for BackupDir<T> {
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 9356750b..cb2d2172 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -600,7 +600,7 @@ impl<T: CanRead> DataStore<T> {
pub fn iter_backup_ns(
self: &Arc<DataStore<T>>,
ns: BackupNamespace,
- ) -> Result<ListNamespaces, Error> {
+ ) -> Result<ListNamespaces<T>, Error> {
ListNamespaces::new(Arc::clone(self), ns)
}
@@ -611,7 +611,7 @@ impl<T: CanRead> DataStore<T> {
pub fn recursive_iter_backup_ns(
self: &Arc<DataStore<T>>,
ns: BackupNamespace,
- ) -> Result<ListNamespacesRecursive, Error> {
+ ) -> Result<ListNamespacesRecursive<T>, Error> {
ListNamespacesRecursive::new(Arc::clone(self), ns)
}
@@ -623,7 +623,7 @@ impl<T: CanRead> DataStore<T> {
self: &Arc<DataStore<T>>,
ns: BackupNamespace,
ty: BackupType,
- ) -> Result<ListGroupsType, Error> {
+ ) -> Result<ListGroupsType<T>, Error> {
ListGroupsType::new(Arc::clone(self), ns, ty)
}
@@ -634,7 +634,7 @@ impl<T: CanRead> DataStore<T> {
pub fn iter_backup_groups(
self: &Arc<DataStore<T>>,
ns: BackupNamespace,
- ) -> Result<ListGroups, Error> {
+ ) -> Result<ListGroups<T>, Error> {
ListGroups::new(Arc::clone(self), ns)
}
@@ -900,7 +900,7 @@ impl<T: CanWrite> DataStore<T> {
filename: P,
size: usize,
chunk_size: usize,
- ) -> Result<FixedIndexWriter, Error> {
+ ) -> Result<FixedIndexWriter<T>, Error> {
let index = FixedIndexWriter::create(
self.inner.chunk_store.clone(),
filename.as_ref(),
@@ -914,7 +914,7 @@ impl<T: CanWrite> DataStore<T> {
pub fn create_dynamic_writer<P: AsRef<Path>>(
&self,
filename: P,
- ) -> Result<DynamicIndexWriter, Error> {
+ ) -> Result<DynamicIndexWriter<T>, Error> {
let index = DynamicIndexWriter::create(self.inner.chunk_store.clone(), filename.as_ref())?;
Ok(index)
diff --git a/pbs-datastore/src/dynamic_index.rs b/pbs-datastore/src/dynamic_index.rs
index 8e9cb116..dbf99faa 100644
--- a/pbs-datastore/src/dynamic_index.rs
+++ b/pbs-datastore/src/dynamic_index.rs
@@ -18,7 +18,7 @@ use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
use pbs_tools::lru_cache::LruCache;
use crate::chunk_stat::ChunkStat;
-use crate::chunk_store::ChunkStore;
+use crate::chunk_store::{CanWrite, ChunkStore};
use crate::data_blob::{DataBlob, DataChunkBuilder};
use crate::file_formats;
use crate::index::{ChunkReadInfo, IndexFile};
@@ -275,8 +275,8 @@ impl IndexFile for DynamicIndexReader {
}
/// Create dynamic index files (`.dixd`)
-pub struct DynamicIndexWriter {
- store: Arc<ChunkStore>,
+pub struct DynamicIndexWriter<T> {
+ store: Arc<ChunkStore<T>>,
_lock: ProcessLockSharedGuard,
writer: BufWriter<File>,
closed: bool,
@@ -287,14 +287,14 @@ pub struct DynamicIndexWriter {
pub ctime: i64,
}
-impl Drop for DynamicIndexWriter {
+impl<T> Drop for DynamicIndexWriter<T> {
fn drop(&mut self) {
let _ = std::fs::remove_file(&self.tmp_filename); // ignore errors
}
}
-impl DynamicIndexWriter {
- pub fn create(store: Arc<ChunkStore>, path: &Path) -> Result<Self, Error> {
+impl<T: CanWrite> DynamicIndexWriter<T> {
+ pub fn create(store: Arc<ChunkStore<T>>, path: &Path) -> Result<Self, Error> {
let shared_lock = store.try_shared_lock()?;
let full_path = store.relative_path(path);
@@ -394,8 +394,8 @@ impl DynamicIndexWriter {
/// Writer which splits a binary stream into dynamic sized chunks
///
/// And store the resulting chunk list into the index file.
-pub struct DynamicChunkWriter {
- index: DynamicIndexWriter,
+pub struct DynamicChunkWriter<T: CanWrite> {
+ index: DynamicIndexWriter<T>,
closed: bool,
chunker: ChunkerImpl,
stat: ChunkStat,
@@ -404,8 +404,8 @@ pub struct DynamicChunkWriter {
chunk_buffer: Vec<u8>,
}
-impl DynamicChunkWriter {
- pub fn new(index: DynamicIndexWriter, chunk_size: usize) -> Self {
+impl<T: CanWrite> DynamicChunkWriter<T> {
+ pub fn new(index: DynamicIndexWriter<T>, chunk_size: usize) -> Self {
Self {
index,
closed: false,
@@ -490,7 +490,7 @@ impl DynamicChunkWriter {
}
}
-impl Write for DynamicChunkWriter {
+impl<T: CanWrite> Write for DynamicChunkWriter<T> {
fn write(&mut self, data: &[u8]) -> std::result::Result<usize, std::io::Error> {
let chunker = &mut self.chunker;
diff --git a/pbs-datastore/src/fixed_index.rs b/pbs-datastore/src/fixed_index.rs
index d4bfcb51..8db9f440 100644
--- a/pbs-datastore/src/fixed_index.rs
+++ b/pbs-datastore/src/fixed_index.rs
@@ -214,8 +214,8 @@ impl IndexFile for FixedIndexReader {
}
}
-pub struct FixedIndexWriter {
- store: Arc<ChunkStore>,
+pub struct FixedIndexWriter<T> {
+ store: Arc<ChunkStore<T>>,
file: File,
_lock: ProcessLockSharedGuard,
filename: PathBuf,
@@ -229,9 +229,9 @@ pub struct FixedIndexWriter {
}
// `index` is mmap()ed which cannot be thread-local so should be sendable
-unsafe impl Send for FixedIndexWriter {}
+unsafe impl<T> Send for FixedIndexWriter<T> {}
-impl Drop for FixedIndexWriter {
+impl<T> Drop for FixedIndexWriter<T> {
fn drop(&mut self) {
let _ = std::fs::remove_file(&self.tmp_filename); // ignore errors
if let Err(err) = self.unmap() {
@@ -240,10 +240,30 @@ impl Drop for FixedIndexWriter {
}
}
-impl FixedIndexWriter {
+impl<T> FixedIndexWriter<T> {
+ fn unmap(&mut self) -> Result<(), Error> {
+ if self.index.is_null() {
+ return Ok(());
+ }
+
+ let index_size = self.index_length * 32;
+
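+ // unmap the index region: index_length entries of 32 bytes each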
+ if let Err(err) =
+ unsafe { nix::sys::mman::munmap(self.index as *mut std::ffi::c_void, index_size) }
+ {
+ bail!("unmap file {:?} failed - {}", self.tmp_filename, err);
+ }
+
+ self.index = std::ptr::null_mut();
+
+ Ok(())
+ }
+}
+
+impl<T: crate::chunk_store::CanWrite> FixedIndexWriter<T> {
#[allow(clippy::cast_ptr_alignment)]
pub fn create(
- store: Arc<ChunkStore>,
+ store: Arc<ChunkStore<T>>,
path: &Path,
size: usize,
chunk_size: usize,
@@ -320,24 +340,6 @@ impl FixedIndexWriter {
self.index_length
}
- fn unmap(&mut self) -> Result<(), Error> {
- if self.index.is_null() {
- return Ok(());
- }
-
- let index_size = self.index_length * 32;
-
- if let Err(err) =
- unsafe { nix::sys::mman::munmap(self.index as *mut std::ffi::c_void, index_size) }
- {
- bail!("unmap file {:?} failed - {}", self.tmp_filename, err);
- }
-
- self.index = std::ptr::null_mut();
-
- Ok(())
- }
-
pub fn close(&mut self) -> Result<[u8; 32], Error> {
if self.index.is_null() {
bail!("cannot close already closed index file.");
diff --git a/pbs-datastore/src/hierarchy.rs b/pbs-datastore/src/hierarchy.rs
index e0bf8441..b331d1de 100644
--- a/pbs-datastore/src/hierarchy.rs
+++ b/pbs-datastore/src/hierarchy.rs
@@ -9,16 +9,17 @@ use pbs_api_types::{BackupNamespace, BackupType, BACKUP_DATE_REGEX, BACKUP_ID_RE
use proxmox_sys::fs::get_file_type;
use crate::backup_info::{BackupDir, BackupGroup};
+use crate::chunk_store::CanRead;
use crate::DataStore;
/// A iterator for all BackupDir's (Snapshots) in a BackupGroup
-pub struct ListSnapshots {
- group: BackupGroup,
+pub struct ListSnapshots<T: CanRead> {
+ group: BackupGroup<T>,
fd: proxmox_sys::fs::ReadDir,
}
-impl ListSnapshots {
- pub fn new(group: BackupGroup) -> Result<Self, Error> {
+impl<T: CanRead> ListSnapshots<T> {
+ pub fn new(group: BackupGroup<T>) -> Result<Self, Error> {
let group_path = group.full_group_path();
Ok(ListSnapshots {
fd: proxmox_sys::fs::read_subdir(libc::AT_FDCWD, &group_path)
@@ -28,8 +29,8 @@ impl ListSnapshots {
}
}
-impl Iterator for ListSnapshots {
- type Item = Result<BackupDir, Error>;
+impl<T: CanRead> Iterator for ListSnapshots<T> {
+ type Item = Result<BackupDir<T>, Error>;
fn next(&mut self) -> Option<Self::Item> {
loop {
@@ -64,21 +65,25 @@ impl Iterator for ListSnapshots {
}
/// An iterator for a single backup group type.
-pub struct ListGroupsType {
- store: Arc<DataStore>,
+pub struct ListGroupsType<T> {
+ store: Arc<DataStore<T>>,
ns: BackupNamespace,
ty: BackupType,
dir: proxmox_sys::fs::ReadDir,
}
-impl ListGroupsType {
- pub fn new(store: Arc<DataStore>, ns: BackupNamespace, ty: BackupType) -> Result<Self, Error> {
+impl<T: CanRead> ListGroupsType<T> {
+ pub fn new(
+ store: Arc<DataStore<T>>,
+ ns: BackupNamespace,
+ ty: BackupType,
+ ) -> Result<Self, Error> {
Self::new_at(libc::AT_FDCWD, store, ns, ty)
}
fn new_at(
fd: RawFd,
- store: Arc<DataStore>,
+ store: Arc<DataStore<T>>,
ns: BackupNamespace,
ty: BackupType,
) -> Result<Self, Error> {
@@ -90,13 +95,13 @@ impl ListGroupsType {
})
}
- pub(crate) fn ok(self) -> ListGroupsOk<Self> {
+ pub(crate) fn ok(self) -> ListGroupsOk<Self, T> {
ListGroupsOk::new(self)
}
}
-impl Iterator for ListGroupsType {
- type Item = Result<BackupGroup, Error>;
+impl<T: CanRead> Iterator for ListGroupsType<T> {
+ type Item = Result<BackupGroup<T>, Error>;
fn next(&mut self) -> Option<Self::Item> {
loop {
@@ -134,15 +139,15 @@ impl Iterator for ListGroupsType {
}
/// A iterator for a (single) level of Backup Groups
-pub struct ListGroups {
- store: Arc<DataStore>,
+pub struct ListGroups<T> {
+ store: Arc<DataStore<T>>,
ns: BackupNamespace,
type_fd: proxmox_sys::fs::ReadDir,
- id_state: Option<ListGroupsType>,
+ id_state: Option<ListGroupsType<T>>,
}
-impl ListGroups {
- pub fn new(store: Arc<DataStore>, ns: BackupNamespace) -> Result<Self, Error> {
+impl<T: CanRead> ListGroups<T> {
+ pub fn new(store: Arc<DataStore<T>>, ns: BackupNamespace) -> Result<Self, Error> {
Ok(Self {
type_fd: proxmox_sys::fs::read_subdir(libc::AT_FDCWD, &store.namespace_path(&ns))?,
store,
@@ -151,13 +156,13 @@ impl ListGroups {
})
}
- pub(crate) fn ok(self) -> ListGroupsOk<Self> {
+ pub(crate) fn ok(self) -> ListGroupsOk<Self, T> {
ListGroupsOk::new(self)
}
}
-impl Iterator for ListGroups {
- type Item = Result<BackupGroup, Error>;
+impl<T: CanRead> Iterator for ListGroups<T> {
+ type Item = Result<BackupGroup<T>, Error>;
fn next(&mut self) -> Option<Self::Item> {
loop {
@@ -217,36 +222,36 @@ pub(crate) trait GroupIter {
fn store_name(&self) -> &str;
}
-impl GroupIter for ListGroups {
+impl<T: CanRead> GroupIter for ListGroups<T> {
fn store_name(&self) -> &str {
self.store.name()
}
}
-impl GroupIter for ListGroupsType {
+impl<T: CanRead> GroupIter for ListGroupsType<T> {
fn store_name(&self) -> &str {
self.store.name()
}
}
-pub(crate) struct ListGroupsOk<I>(Option<I>)
+pub(crate) struct ListGroupsOk<I, T: CanRead>(Option<I>)
where
- I: GroupIter + Iterator<Item = Result<BackupGroup, Error>>;
+ I: GroupIter + Iterator<Item = Result<BackupGroup<T>, Error>>;
-impl<I> ListGroupsOk<I>
+impl<I, T: CanRead> ListGroupsOk<I, T>
where
- I: GroupIter + Iterator<Item = Result<BackupGroup, Error>>,
+ I: GroupIter + Iterator<Item = Result<BackupGroup<T>, Error>>,
{
fn new(inner: I) -> Self {
Self(Some(inner))
}
}
-impl<I> Iterator for ListGroupsOk<I>
+impl<I, T: CanRead> Iterator for ListGroupsOk<I, T>
where
- I: GroupIter + Iterator<Item = Result<BackupGroup, Error>>,
+ I: GroupIter + Iterator<Item = Result<BackupGroup<T>, Error>>,
{
- type Item = BackupGroup;
+ type Item = BackupGroup<T>;
fn next(&mut self) -> Option<Self::Item> {
if let Some(iter) = &mut self.0 {
@@ -269,19 +274,21 @@ where
}
/// A iterator for a (single) level of Namespaces
-pub struct ListNamespaces {
+pub struct ListNamespaces<T> {
ns: BackupNamespace,
base_path: PathBuf,
ns_state: Option<proxmox_sys::fs::ReadDir>,
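+ // no value of type T is stored; the marker only ties the iterator to the store's access level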
+ _marker: std::marker::PhantomData<T>,
}
-impl ListNamespaces {
+impl<T: CanRead> ListNamespaces<T> {
/// construct a new single-level namespace iterator on a datastore with an optional anchor ns
- pub fn new(store: Arc<DataStore>, ns: BackupNamespace) -> Result<Self, Error> {
+ pub fn new(store: Arc<DataStore<T>>, ns: BackupNamespace) -> Result<Self, Error> {
Ok(ListNamespaces {
ns,
base_path: store.base_path(),
ns_state: None,
+ _marker: std::marker::PhantomData,
})
}
@@ -293,11 +300,12 @@ impl ListNamespaces {
ns: ns.unwrap_or_default(),
base_path: path,
ns_state: None,
+ _marker: std::marker::PhantomData,
})
}
}
-impl Iterator for ListNamespaces {
+impl<T: CanRead> Iterator for ListNamespaces<T> {
type Item = Result<BackupNamespace, Error>;
fn next(&mut self) -> Option<Self::Item> {
@@ -361,18 +369,18 @@ impl Iterator for ListNamespaces {
/// can be useful for searching all backup groups from a certain anchor, as that can contain
/// sub-namespaces but also groups on its own level, so otherwise one would need to special case
/// the ones from the own level.
-pub struct ListNamespacesRecursive {
- store: Arc<DataStore>,
+pub struct ListNamespacesRecursive<T> {
+ store: Arc<DataStore<T>>,
/// the starting namespace we search downward from
ns: BackupNamespace,
/// the maximal recursion depth from the anchor start ns (depth == 0) downwards
max_depth: u8,
- state: Option<Vec<ListNamespaces>>, // vector to avoid code recursion
+ state: Option<Vec<ListNamespaces<T>>>, // vector to avoid code recursion
}
-impl ListNamespacesRecursive {
+impl<T: CanRead> ListNamespacesRecursive<T> {
/// Creates an recursive namespace iterator.
- pub fn new(store: Arc<DataStore>, ns: BackupNamespace) -> Result<Self, Error> {
+ pub fn new(store: Arc<DataStore<T>>, ns: BackupNamespace) -> Result<Self, Error> {
Self::new_max_depth(store, ns, pbs_api_types::MAX_NAMESPACE_DEPTH)
}
@@ -383,7 +391,7 @@ impl ListNamespacesRecursive {
/// Depth is counted relatively, that means not from the datastore as anchor, but from `ns`,
/// and it will be clamped to `min(depth, MAX_NAMESPACE_DEPTH - ns.depth())` automatically.
pub fn new_max_depth(
- store: Arc<DataStore>,
+ store: Arc<DataStore<T>>,
ns: BackupNamespace,
max_depth: usize,
) -> Result<Self, Error> {
@@ -403,7 +411,7 @@ impl ListNamespacesRecursive {
}
}
-impl Iterator for ListNamespacesRecursive {
+impl<T: CanRead> Iterator for ListNamespacesRecursive<T> {
type Item = Result<BackupNamespace, Error>;
fn next(&mut self) -> Option<Self::Item> {
diff --git a/pbs-datastore/src/local_chunk_reader.rs b/pbs-datastore/src/local_chunk_reader.rs
index 05a70c06..ccdde9f1 100644
--- a/pbs-datastore/src/local_chunk_reader.rs
+++ b/pbs-datastore/src/local_chunk_reader.rs
@@ -7,20 +7,21 @@ use anyhow::{bail, Error};
use pbs_api_types::CryptMode;
use pbs_tools::crypt_config::CryptConfig;
+use crate::chunk_store::CanRead;
use crate::data_blob::DataBlob;
use crate::read_chunk::{AsyncReadChunk, ReadChunk};
use crate::DataStore;
#[derive(Clone)]
-pub struct LocalChunkReader {
- store: Arc<DataStore>,
+pub struct LocalChunkReader<T: CanRead> {
+ store: Arc<DataStore<T>>,
crypt_config: Option<Arc<CryptConfig>>,
crypt_mode: CryptMode,
}
-impl LocalChunkReader {
+impl<T: CanRead> LocalChunkReader<T> {
pub fn new(
- store: Arc<DataStore>,
+ store: Arc<DataStore<T>>,
crypt_config: Option<Arc<CryptConfig>>,
crypt_mode: CryptMode,
) -> Self {
@@ -47,7 +48,7 @@ impl LocalChunkReader {
}
}
-impl ReadChunk for LocalChunkReader {
+impl<T: CanRead> ReadChunk for LocalChunkReader<T> {
fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let chunk = self.store.load_chunk(digest)?;
self.ensure_crypt_mode(chunk.crypt_mode()?)?;
@@ -63,7 +64,7 @@ impl ReadChunk for LocalChunkReader {
}
}
-impl AsyncReadChunk for LocalChunkReader {
+impl<T: CanRead + Send + Sync> AsyncReadChunk for LocalChunkReader<T> {
fn read_raw_chunk<'a>(
&'a self,
digest: &'a [u8; 32],
diff --git a/pbs-datastore/src/prune.rs b/pbs-datastore/src/prune.rs
index 7b6f9f75..a8eb511b 100644
--- a/pbs-datastore/src/prune.rs
+++ b/pbs-datastore/src/prune.rs
@@ -5,6 +5,8 @@ use anyhow::Error;
use pbs_api_types::KeepOptions;
+use crate::chunk_store::CanRead;
+
use super::BackupInfo;
#[derive(Clone, Copy, PartialEq, Eq)]
@@ -36,9 +38,9 @@ impl std::fmt::Display for PruneMark {
}
}
-fn mark_selections<F: Fn(&BackupInfo) -> Result<String, Error>>(
+fn mark_selections<F: Fn(&BackupInfo<T>) -> Result<String, Error>, T: CanRead>(
mark: &mut HashMap<PathBuf, PruneMark>,
- list: &[BackupInfo],
+ list: &[BackupInfo<T>],
keep: usize,
select_id: F,
) -> Result<(), Error> {
@@ -82,7 +84,10 @@ fn mark_selections<F: Fn(&BackupInfo) -> Result<String, Error>>(
Ok(())
}
-fn remove_incomplete_snapshots(mark: &mut HashMap<PathBuf, PruneMark>, list: &[BackupInfo]) {
+fn remove_incomplete_snapshots<T: CanRead>(
+ mark: &mut HashMap<PathBuf, PruneMark>,
+ list: &[BackupInfo<T>],
+) {
let mut keep_unfinished = true;
for info in list.iter() {
// backup is considered unfinished if there is no manifest
@@ -104,10 +109,10 @@ fn remove_incomplete_snapshots(mark: &mut HashMap<PathBuf, PruneMark>, list: &[B
}
/// This filters incomplete and kept backups.
-pub fn compute_prune_info(
- mut list: Vec<BackupInfo>,
+pub fn compute_prune_info<T: CanRead>(
+ mut list: Vec<BackupInfo<T>>,
options: &KeepOptions,
-) -> Result<Vec<(BackupInfo, PruneMark)>, Error> {
+) -> Result<Vec<(BackupInfo<T>, PruneMark)>, Error> {
let mut mark = HashMap::new();
BackupInfo::sort_list(&mut list, false);
@@ -154,7 +159,7 @@ pub fn compute_prune_info(
})?;
}
- let prune_info: Vec<(BackupInfo, PruneMark)> = list
+ let prune_info: Vec<(BackupInfo<T>, PruneMark)> = list
.into_iter()
.map(|info| {
let backup_id = info.backup_dir.relative_path();
diff --git a/pbs-datastore/src/snapshot_reader.rs b/pbs-datastore/src/snapshot_reader.rs
index 5da0533c..d604507d 100644
--- a/pbs-datastore/src/snapshot_reader.rs
+++ b/pbs-datastore/src/snapshot_reader.rs
@@ -16,6 +16,7 @@ use pbs_api_types::{
};
use crate::backup_info::BackupDir;
+use crate::chunk_store::CanRead;
use crate::dynamic_index::DynamicIndexReader;
use crate::fixed_index::FixedIndexReader;
use crate::index::IndexFile;
@@ -24,8 +25,8 @@ use crate::DataStore;
/// Helper to access the contents of a datastore backup snapshot
///
/// This makes it easy to iterate over all used chunks and files.
-pub struct SnapshotReader {
- snapshot: BackupDir,
+pub struct SnapshotReader<T: CanRead> {
+ snapshot: BackupDir<T>,
datastore_name: String,
file_list: Vec<String>,
locked_dir: Dir,
@@ -35,17 +36,17 @@ pub struct SnapshotReader {
_lock: BackupLockGuard,
}
-impl SnapshotReader {
+impl<T: CanRead> SnapshotReader<T> {
/// Locks the snapshot, reads the manifest and returns a new instance
pub fn new(
- datastore: Arc<DataStore>,
+ datastore: Arc<DataStore<T>>,
ns: BackupNamespace,
snapshot: pbs_api_types::BackupDir,
) -> Result<Self, Error> {
Self::new_do(datastore.backup_dir(ns, snapshot)?)
}
- pub(crate) fn new_do(snapshot: BackupDir) -> Result<Self, Error> {
+ pub(crate) fn new_do(snapshot: BackupDir<T>) -> Result<Self, Error> {
let datastore = snapshot.datastore();
let snapshot_path = snapshot.full_path();
@@ -94,7 +95,7 @@ impl SnapshotReader {
}
/// Return the snapshot directory
- pub fn snapshot(&self) -> &BackupDir {
+ pub fn snapshot(&self) -> &BackupDir<T> {
&self.snapshot
}
@@ -124,7 +125,7 @@ impl SnapshotReader {
pub fn chunk_iterator<F: Fn(&[u8; 32]) -> bool>(
&self,
skip_fn: F,
- ) -> Result<SnapshotChunkIterator<F>, Error> {
+ ) -> Result<SnapshotChunkIterator<F, T>, Error> {
SnapshotChunkIterator::new(self, skip_fn)
}
}
@@ -134,15 +135,15 @@ impl SnapshotReader {
/// Note: The iterator returns a `Result`, and the iterator state is
/// undefined after the first error. So it makes no sense to continue
/// iteration after the first error.
-pub struct SnapshotChunkIterator<'a, F: Fn(&[u8; 32]) -> bool> {
- snapshot_reader: &'a SnapshotReader,
+pub struct SnapshotChunkIterator<'a, F: Fn(&[u8; 32]) -> bool, T: CanRead> {
+ snapshot_reader: &'a SnapshotReader<T>,
todo_list: Vec<String>,
skip_fn: F,
#[allow(clippy::type_complexity)]
current_index: Option<(Rc<Box<dyn IndexFile + Send>>, usize, Vec<(usize, u64)>)>,
}
-impl<F: Fn(&[u8; 32]) -> bool> Iterator for SnapshotChunkIterator<'_, F> {
+impl<F: Fn(&[u8; 32]) -> bool, T: CanRead> Iterator for SnapshotChunkIterator<'_, F, T> {
type Item = Result<[u8; 32], Error>;
fn next(&mut self) -> Option<Self::Item> {
@@ -189,8 +190,8 @@ impl<F: Fn(&[u8; 32]) -> bool> Iterator for SnapshotChunkIterator<'_, F> {
}
}
-impl<'a, F: Fn(&[u8; 32]) -> bool> SnapshotChunkIterator<'a, F> {
- pub fn new(snapshot_reader: &'a SnapshotReader, skip_fn: F) -> Result<Self, Error> {
+impl<'a, F: Fn(&[u8; 32]) -> bool, T: CanRead> SnapshotChunkIterator<'a, F, T> {
+ pub fn new(snapshot_reader: &'a SnapshotReader<T>, skip_fn: F) -> Result<Self, Error> {
let mut todo_list = Vec::new();
for filename in snapshot_reader.file_list() {
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 07/12] api: backup: env: add generics and separate functions into impl block
2025-05-26 14:14 [pbs-devel] [PATCH proxmox-backup v2 00/12] introduce typestate for datastore/chunkstore Hannes Laimer
` (5 preceding siblings ...)
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 06/12] pbs-datastore: " Hannes Laimer
@ 2025-05-26 14:14 ` Hannes Laimer
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 08/12] api/backup/bin/server/tape: add missing generics Hannes Laimer
` (4 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
... based on whether they read or write.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
src/api2/backup/environment.rs | 337 +++++++++++++++++----------------
1 file changed, 174 insertions(+), 163 deletions(-)
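To illustrate the resulting impl-block layout: a minimal, self-contained sketch
(simplified names, not the actual pbs-datastore types; it assumes the write
marker also satisfies CanRead, as the bounds in this series suggest):

use std::marker::PhantomData;

// marker traits and concrete marker types (sealed in the real series)
trait CanRead {}
trait CanWrite: CanRead {}
struct Write;
struct Lookup;
impl CanRead for Write {}
impl CanWrite for Write {}
// Lookup implements neither trait, so only the plain impl block applies

struct Env<T> {
    finished: bool,
    _marker: PhantomData<T>,
}

impl<T> Env<T> {
    // available on every reference, even lookup-only ones
    fn finished(&self) -> bool {
        self.finished
    }
}

impl<T: CanRead> Env<T> {
    // only callable when the marker implements CanRead
    fn load(&self) -> &'static str {
        "some manifest"
    }
}

impl<T: CanWrite> Env<T> {
    // only callable when the marker implements CanWrite
    fn finish(&mut self) {
        self.finished = true;
    }
}

fn main() {
    let mut w = Env::<Write> { finished: false, _marker: PhantomData };
    w.load();
    w.finish(); // fine: Write is CanRead + CanWrite
    let l = Env::<Lookup> { finished: false, _marker: PhantomData };
    assert!(!l.finished()); // l.load() or l.finish() would not compile
}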
diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index 3d541b46..a1620fb9 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -13,6 +13,7 @@ use proxmox_sys::fs::{replace_file, CreateOptions};
use pbs_api_types::Authid;
use pbs_datastore::backup_info::{BackupDir, BackupInfo};
+use pbs_datastore::chunk_store::CanWrite;
use pbs_datastore::dynamic_index::DynamicIndexWriter;
use pbs_datastore::fixed_index::FixedIndexWriter;
use pbs_datastore::{DataBlob, DataStore};
@@ -54,17 +55,17 @@ impl std::ops::Add for UploadStatistic {
}
}
-struct DynamicWriterState {
+struct DynamicWriterState<T> {
name: String,
- index: DynamicIndexWriter,
+ index: DynamicIndexWriter<T>,
offset: u64,
chunk_count: u64,
upload_stat: UploadStatistic,
}
-struct FixedWriterState {
+struct FixedWriterState<T> {
name: String,
- index: FixedIndexWriter,
+ index: FixedIndexWriter<T>,
size: usize,
chunk_size: u32,
chunk_count: u64,
@@ -76,18 +77,18 @@ struct FixedWriterState {
// key=digest, value=length
type KnownChunksMap = HashMap<[u8; 32], u32>;
-struct SharedBackupState {
+struct SharedBackupState<T> {
finished: bool,
uid_counter: usize,
file_counter: usize, // successfully uploaded files
- dynamic_writers: HashMap<usize, DynamicWriterState>,
- fixed_writers: HashMap<usize, FixedWriterState>,
+ dynamic_writers: HashMap<usize, DynamicWriterState<T>>,
+ fixed_writers: HashMap<usize, FixedWriterState<T>>,
known_chunks: KnownChunksMap,
backup_size: u64, // sums up size of all files
backup_stat: UploadStatistic,
}
-impl SharedBackupState {
+impl<T> SharedBackupState<T> {
// Raise error if finished flag is set
fn ensure_unfinished(&self) -> Result<(), Error> {
if self.finished {
@@ -105,26 +106,32 @@ impl SharedBackupState {
/// `RpcEnvironment` implementation for backup service
#[derive(Clone)]
-pub struct BackupEnvironment {
+pub struct BackupEnvironment<T> {
env_type: RpcEnvironmentType,
result_attributes: Value,
auth_id: Authid,
pub debug: bool,
pub formatter: &'static dyn OutputFormatter,
pub worker: Arc<WorkerTask>,
- pub datastore: Arc<DataStore>,
- pub backup_dir: BackupDir,
- pub last_backup: Option<BackupInfo>,
- state: Arc<Mutex<SharedBackupState>>,
+ pub datastore: Arc<DataStore<T>>,
+ pub backup_dir: BackupDir<T>,
+ pub last_backup: Option<BackupInfo<T>>,
+ state: Arc<Mutex<SharedBackupState<T>>>,
}
-impl BackupEnvironment {
+impl<T: Send + Sync + 'static> BackupEnvironment<T> {
+ pub fn format_response(&self, result: Result<Value, Error>) -> Response<Body> {
+ self.formatter.format_result(result, self)
+ }
+}
+
+impl<T> BackupEnvironment<T> {
pub fn new(
env_type: RpcEnvironmentType,
auth_id: Authid,
worker: Arc<WorkerTask>,
- datastore: Arc<DataStore>,
- backup_dir: BackupDir,
+ datastore: Arc<DataStore<T>>,
+ backup_dir: BackupDir<T>,
) -> Self {
let state = SharedBackupState {
finished: false,
@@ -260,10 +267,148 @@ impl BackupEnvironment {
state.known_chunks.get(digest).copied()
}
+ fn log_upload_stat(
+ &self,
+ archive_name: &str,
+ csum: &[u8; 32],
+ uuid: &[u8; 16],
+ size: u64,
+ chunk_count: u64,
+ upload_stat: &UploadStatistic,
+ ) {
+ self.log(format!("Upload statistics for '{}'", archive_name));
+ self.log(format!("UUID: {}", hex::encode(uuid)));
+ self.log(format!("Checksum: {}", hex::encode(csum)));
+ self.log(format!("Size: {}", size));
+ self.log(format!("Chunk count: {}", chunk_count));
+
+ if size == 0 || chunk_count == 0 {
+ return;
+ }
+
+ self.log(format!(
+ "Upload size: {} ({}%)",
+ upload_stat.size,
+ (upload_stat.size * 100) / size
+ ));
+
+ // account for zero chunk, which might be uploaded but never used
+ let client_side_duplicates = if chunk_count < upload_stat.count {
+ 0
+ } else {
+ chunk_count - upload_stat.count
+ };
+
+ let server_side_duplicates = upload_stat.duplicates;
+
+ if (client_side_duplicates + server_side_duplicates) > 0 {
+ let per = (client_side_duplicates + server_side_duplicates) * 100 / chunk_count;
+ self.log(format!(
+ "Duplicates: {}+{} ({}%)",
+ client_side_duplicates, server_side_duplicates, per
+ ));
+ }
+
+ if upload_stat.size > 0 {
+ self.log(format!(
+ "Compression: {}%",
+ (upload_stat.compressed_size * 100) / upload_stat.size
+ ));
+ }
+ }
+
+ pub fn log<S: AsRef<str>>(&self, msg: S) {
+ info!("{}", msg.as_ref());
+ }
+
+ pub fn debug<S: AsRef<str>>(&self, msg: S) {
+ if self.debug {
+ // This is kinda weird, we would like to use tracing::debug! here and automatically
+ // filter it, but self.debug is set from the client-side and the logs are printed on
+ // client and server side. This means that if the client sets the log level to debug,
+ // both server and client need to have 'debug' logs printed.
+ self.log(msg);
+ }
+ }
+
+ /// Raise error if finished flag is not set
+ pub fn ensure_finished(&self) -> Result<(), Error> {
+ let state = self.state.lock().unwrap();
+ if !state.finished {
+ bail!("backup ended but finished flag is not set.");
+ }
+ Ok(())
+ }
+
+ /// Return true if the finished flag is set
+ pub fn finished(&self) -> bool {
+ let state = self.state.lock().unwrap();
+ state.finished
+ }
+}
+
+impl<T: CanWrite + Send + Sync + std::panic::RefUnwindSafe + 'static> BackupEnvironment<T> {
+ /// If verify-new is set on the datastore, this will run a new verify task
+ /// for the backup. If not, this will return and also drop the passed lock
+ /// immediately.
+ pub fn verify_after_complete(&self, excl_snap_lock: BackupLockGuard) -> Result<(), Error> {
+ self.ensure_finished()?;
+
+ if !self.datastore.verify_new() {
+ // no verify requested, do nothing
+ return Ok(());
+ }
+
+ // Downgrade to shared lock, the backup itself is finished
+ drop(excl_snap_lock);
+ let snap_lock = self.backup_dir.lock_shared().with_context(|| {
+ format!(
+ "while trying to verify snapshot '{:?}' after completion",
+ self.backup_dir
+ )
+ })?;
+ let worker_id = format!(
+ "{}:{}/{}/{:08X}",
+ self.datastore.name(),
+ self.backup_dir.backup_type(),
+ self.backup_dir.backup_id(),
+ self.backup_dir.backup_time()
+ );
+
+ let datastore = self.datastore.clone();
+ let backup_dir = self.backup_dir.clone();
+
+ WorkerTask::new_thread(
+ "verify",
+ Some(worker_id),
+ self.auth_id.to_string(),
+ false,
+ move |worker| {
+ worker.log_message("Automatically verifying newly added snapshot");
+
+ let verify_worker = crate::backup::VerifyWorker::new(worker.clone(), datastore);
+ if !verify_backup_dir_with_lock(
+ &verify_worker,
+ &backup_dir,
+ worker.upid().clone(),
+ None,
+ snap_lock,
+ )? {
+ bail!("verification failed - please check the log for details");
+ }
+
+ Ok(())
+ },
+ )
+ .map(|_| ())
+ }
+}
+
+impl<T: CanWrite> BackupEnvironment<T> {
/// Store the writer with a unique ID
pub fn register_dynamic_writer(
&self,
- index: DynamicIndexWriter,
+ index: DynamicIndexWriter<T>,
name: String,
) -> Result<usize, Error> {
let mut state = self.state.lock().unwrap();
@@ -289,7 +434,7 @@ impl BackupEnvironment {
/// Store the writer with a unique ID
pub fn register_fixed_writer(
&self,
- index: FixedIndexWriter,
+ index: FixedIndexWriter<T>,
name: String,
size: usize,
chunk_size: u32,
@@ -379,56 +524,6 @@ impl BackupEnvironment {
Ok(())
}
- fn log_upload_stat(
- &self,
- archive_name: &str,
- csum: &[u8; 32],
- uuid: &[u8; 16],
- size: u64,
- chunk_count: u64,
- upload_stat: &UploadStatistic,
- ) {
- self.log(format!("Upload statistics for '{}'", archive_name));
- self.log(format!("UUID: {}", hex::encode(uuid)));
- self.log(format!("Checksum: {}", hex::encode(csum)));
- self.log(format!("Size: {}", size));
- self.log(format!("Chunk count: {}", chunk_count));
-
- if size == 0 || chunk_count == 0 {
- return;
- }
-
- self.log(format!(
- "Upload size: {} ({}%)",
- upload_stat.size,
- (upload_stat.size * 100) / size
- ));
-
- // account for zero chunk, which might be uploaded but never used
- let client_side_duplicates = if chunk_count < upload_stat.count {
- 0
- } else {
- chunk_count - upload_stat.count
- };
-
- let server_side_duplicates = upload_stat.duplicates;
-
- if (client_side_duplicates + server_side_duplicates) > 0 {
- let per = (client_side_duplicates + server_side_duplicates) * 100 / chunk_count;
- self.log(format!(
- "Duplicates: {}+{} ({}%)",
- client_side_duplicates, server_side_duplicates, per
- ));
- }
-
- if upload_stat.size > 0 {
- self.log(format!(
- "Compression: {}%",
- (upload_stat.compressed_size * 100) / upload_stat.size
- ));
- }
- }
-
/// Close dynamic writer
pub fn dynamic_writer_close(
&self,
@@ -633,94 +728,6 @@ impl BackupEnvironment {
Ok(())
}
- /// If verify-new is set on the datastore, this will run a new verify task
- /// for the backup. If not, this will return and also drop the passed lock
- /// immediately.
- pub fn verify_after_complete(&self, excl_snap_lock: BackupLockGuard) -> Result<(), Error> {
- self.ensure_finished()?;
-
- if !self.datastore.verify_new() {
- // no verify requested, do nothing
- return Ok(());
- }
-
- // Downgrade to shared lock, the backup itself is finished
- drop(excl_snap_lock);
- let snap_lock = self.backup_dir.lock_shared().with_context(|| {
- format!(
- "while trying to verify snapshot '{:?}' after completion",
- self.backup_dir
- )
- })?;
- let worker_id = format!(
- "{}:{}/{}/{:08X}",
- self.datastore.name(),
- self.backup_dir.backup_type(),
- self.backup_dir.backup_id(),
- self.backup_dir.backup_time()
- );
-
- let datastore = self.datastore.clone();
- let backup_dir = self.backup_dir.clone();
-
- WorkerTask::new_thread(
- "verify",
- Some(worker_id),
- self.auth_id.to_string(),
- false,
- move |worker| {
- worker.log_message("Automatically verifying newly added snapshot");
-
- let verify_worker = crate::backup::VerifyWorker::new(worker.clone(), datastore);
- if !verify_backup_dir_with_lock(
- &verify_worker,
- &backup_dir,
- worker.upid().clone(),
- None,
- snap_lock,
- )? {
- bail!("verification failed - please check the log for details");
- }
-
- Ok(())
- },
- )
- .map(|_| ())
- }
-
- pub fn log<S: AsRef<str>>(&self, msg: S) {
- info!("{}", msg.as_ref());
- }
-
- pub fn debug<S: AsRef<str>>(&self, msg: S) {
- if self.debug {
- // This is kinda weird, we would like to use tracing::debug! here and automatically
- // filter it, but self.debug is set from the client-side and the logs are printed on
- // client and server side. This means that if the client sets the log level to debug,
- // both server and client need to have 'debug' logs printed.
- self.log(msg);
- }
- }
-
- pub fn format_response(&self, result: Result<Value, Error>) -> Response<Body> {
- self.formatter.format_result(result, self)
- }
-
- /// Raise error if finished flag is not set
- pub fn ensure_finished(&self) -> Result<(), Error> {
- let state = self.state.lock().unwrap();
- if !state.finished {
- bail!("backup ended but finished flag is not set.");
- }
- Ok(())
- }
-
- /// Return true if the finished flag is set
- pub fn finished(&self) -> bool {
- let state = self.state.lock().unwrap();
- state.finished
- }
-
/// Remove complete backup
pub fn remove_backup(&self) -> Result<(), Error> {
let mut state = self.state.lock().unwrap();
@@ -736,7 +743,7 @@ impl BackupEnvironment {
}
}
-impl RpcEnvironment for BackupEnvironment {
+impl<T: Send + Sync + 'static> RpcEnvironment for BackupEnvironment<T> {
fn result_attrib_mut(&mut self) -> &mut Value {
&mut self.result_attributes
}
@@ -758,14 +765,18 @@ impl RpcEnvironment for BackupEnvironment {
}
}
-impl AsRef<BackupEnvironment> for dyn RpcEnvironment {
- fn as_ref(&self) -> &BackupEnvironment {
- self.as_any().downcast_ref::<BackupEnvironment>().unwrap()
+impl<T: 'static> AsRef<BackupEnvironment<T>> for dyn RpcEnvironment {
+ fn as_ref(&self) -> &BackupEnvironment<T> {
+ self.as_any()
+ .downcast_ref::<BackupEnvironment<T>>()
+ .unwrap()
}
}
-impl AsRef<BackupEnvironment> for Box<dyn RpcEnvironment> {
- fn as_ref(&self) -> &BackupEnvironment {
- self.as_any().downcast_ref::<BackupEnvironment>().unwrap()
+impl<T: 'static> AsRef<BackupEnvironment<T>> for Box<dyn RpcEnvironment> {
+ fn as_ref(&self) -> &BackupEnvironment<T> {
+ self.as_any()
+ .downcast_ref::<BackupEnvironment<T>>()
+ .unwrap()
}
}
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 08/12] api/backup/bin/server/tape: add missing generics
2025-05-26 14:14 [pbs-devel] [PATCH proxmox-backup v2 00/12] introduce typestate for datastore/chunkstore Hannes Laimer
` (6 preceding siblings ...)
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 07/12] api: backup: env: add generics and separate functions into impl block Hannes Laimer
@ 2025-05-26 14:14 ` Hannes Laimer
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 09/12] examples/tests: " Hannes Laimer
` (3 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
src/api2/admin/datastore.rs | 27 ++++----
src/api2/backup/mod.rs | 21 +++---
src/api2/backup/upload_chunk.rs | 19 +++---
src/api2/config/datastore.rs | 5 +-
src/api2/reader/environment.rs | 30 +++++----
src/api2/reader/mod.rs | 5 +-
src/api2/tape/backup.rs | 11 ++--
src/api2/tape/drive.rs | 3 +-
src/api2/tape/restore.rs | 71 +++++++++++----------
src/backup/hierarchy.rs | 23 +++----
src/backup/verify.rs | 53 +++++++--------
src/bin/proxmox-backup-proxy.rs | 7 +-
src/server/gc_job.rs | 7 +-
src/server/prune_job.rs | 5 +-
src/server/pull.rs | 23 +++----
src/server/push.rs | 3 +-
src/server/sync.rs | 13 ++--
src/tape/file_formats/snapshot_archive.rs | 5 +-
src/tape/pool_writer/mod.rs | 11 ++--
src/tape/pool_writer/new_chunks_iterator.rs | 7 +-
20 files changed, 189 insertions(+), 160 deletions(-)
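The recurring `BackupEnvironment<W>`/`ReaderEnvironment<R>` annotations below
are forced by the `Any`-based downcast in the `AsRef` impls, which only matches
the exact marker type a value was created with. A small standalone sketch
(hypothetical `Env` type, not the real API):

use std::any::Any;
use std::marker::PhantomData;

struct Lookup;
struct Write;

struct Env<T>(PhantomData<T>);

fn main() {
    let env = Env::<Write>(PhantomData);
    let erased: &dyn Any = &env;
    // the downcast succeeds only for the exact marker the value was built with,
    // so endpoints must spell it out, e.g. `let env: &BackupEnvironment<W> = rpcenv.as_ref();`
    assert!(erased.downcast_ref::<Env<Write>>().is_some());
    assert!(erased.downcast_ref::<Env<Lookup>>().is_none());
}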
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 39249448..e3f93cdd 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -54,6 +54,7 @@ use pbs_config::CachedUserInfo;
use pbs_datastore::backup_info::BackupInfo;
use pbs_datastore::cached_chunk_reader::CachedChunkReader;
use pbs_datastore::catalog::{ArchiveEntry, CatalogReader};
+use pbs_datastore::chunk_store::{CanRead, Read as R};
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::data_blob_reader::DataBlobReader;
use pbs_datastore::dynamic_index::{BufferedDynamicReader, DynamicIndexReader, LocalDynamicReadAt};
@@ -79,8 +80,8 @@ use crate::server::jobstate::{compute_schedule_status, Job, JobState};
const GROUP_NOTES_FILE_NAME: &str = "notes";
-fn get_group_note_path(
- store: &DataStore,
+fn get_group_note_path<T>(
+ store: &DataStore<T>,
ns: &BackupNamespace,
group: &pbs_api_types::BackupGroup,
) -> PathBuf {
@@ -114,8 +115,8 @@ fn check_privs_and_load_store(
Ok(datastore)
}
-fn read_backup_index(
- backup_dir: &BackupDir,
+fn read_backup_index<T: CanRead>(
+ backup_dir: &BackupDir<T>,
) -> Result<(BackupManifest, Vec<BackupContent>), Error> {
let (manifest, index_size) = backup_dir.load_manifest()?;
@@ -140,8 +141,8 @@ fn read_backup_index(
Ok((manifest, result))
}
-fn get_all_snapshot_files(
- info: &BackupInfo,
+fn get_all_snapshot_files<T: CanRead>(
+ info: &BackupInfo<T>,
) -> Result<(BackupManifest, Vec<BackupContent>), Error> {
let (manifest, mut files) = read_backup_index(&info.backup_dir)?;
@@ -529,7 +530,7 @@ unsafe fn list_snapshots_blocking(
(None, None) => datastore.list_backup_groups(ns.clone())?,
};
- let info_to_snapshot_list_item = |group: &BackupGroup, owner, info: BackupInfo| {
+ let info_to_snapshot_list_item = |group: &BackupGroup<R>, owner, info: BackupInfo<R>| {
let backup = pbs_api_types::BackupDir {
group: group.into(),
time: info.backup_dir.backup_time(),
@@ -629,8 +630,8 @@ unsafe fn list_snapshots_blocking(
})
}
-async fn get_snapshots_count(
- store: &Arc<DataStore>,
+async fn get_snapshots_count<T: CanRead + Send + Sync + 'static>(
+ store: &Arc<DataStore<T>>,
owner: Option<&Authid>,
) -> Result<Counts, Error> {
let store = Arc::clone(store);
@@ -1796,12 +1797,12 @@ pub const API_METHOD_PXAR_FILE_DOWNLOAD: ApiMethod = ApiMethod::new(
&Permission::Anybody,
);
-fn get_local_pxar_reader(
- datastore: Arc<DataStore>,
+fn get_local_pxar_reader<T: CanRead>(
+ datastore: Arc<DataStore<T>>,
manifest: &BackupManifest,
- backup_dir: &BackupDir,
+ backup_dir: &BackupDir<T>,
pxar_name: &BackupArchiveName,
-) -> Result<(LocalDynamicReadAt<LocalChunkReader>, u64), Error> {
+) -> Result<(LocalDynamicReadAt<LocalChunkReader<T>>, u64), Error> {
let mut path = datastore.base_path();
path.push(backup_dir.relative_path());
path.push(pxar_name.as_ref());
diff --git a/src/api2/backup/mod.rs b/src/api2/backup/mod.rs
index 629df933..79354dbf 100644
--- a/src/api2/backup/mod.rs
+++ b/src/api2/backup/mod.rs
@@ -24,6 +24,7 @@ use pbs_api_types::{
BACKUP_TYPE_SCHEMA, CHUNK_DIGEST_SCHEMA, DATASTORE_SCHEMA, PRIV_DATASTORE_BACKUP,
};
use pbs_config::CachedUserInfo;
+use pbs_datastore::chunk_store::{Read as R, Write as W};
use pbs_datastore::index::IndexFile;
use pbs_datastore::{DataStore, PROXMOX_BACKUP_PROTOCOL_ID_V1};
use pbs_tools::json::{required_array_param, required_integer_param, required_string_param};
@@ -279,7 +280,7 @@ fn upgrade_to_backup_protocol(
return Ok(());
}
- let verify = |env: BackupEnvironment| {
+ let verify = |env: BackupEnvironment<W>| {
if let Err(err) = env.verify_after_complete(snap_guard) {
env.log(format!(
"backup finished, but starting the requested verify task failed: {}",
@@ -400,7 +401,7 @@ fn create_dynamic_index(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
let name = required_string_param(&param, "archive-name")?.to_owned();
@@ -450,7 +451,7 @@ fn create_fixed_index(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
let name = required_string_param(&param, "archive-name")?.to_owned();
let size = required_integer_param(&param, "size")? as usize;
@@ -565,7 +566,7 @@ fn dynamic_append(
);
}
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
env.debug(format!("dynamic_append {} chunks", digest_list.len()));
@@ -639,7 +640,7 @@ fn fixed_append(
);
}
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
env.debug(format!("fixed_append {} chunks", digest_list.len()));
@@ -714,7 +715,7 @@ fn close_dynamic_index(
let csum_str = required_string_param(&param, "csum")?;
let csum = <[u8; 32]>::from_hex(csum_str)?;
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
env.dynamic_writer_close(wid, chunk_count, size, csum)?;
@@ -767,7 +768,7 @@ fn close_fixed_index(
let csum_str = required_string_param(&param, "csum")?;
let csum = <[u8; 32]>::from_hex(csum_str)?;
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
env.fixed_writer_close(wid, chunk_count, size, csum)?;
@@ -781,7 +782,7 @@ fn finish_backup(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
env.finish_backup()?;
env.log("successfully finished backup");
@@ -800,7 +801,7 @@ fn get_previous_backup_time(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<R> = rpcenv.as_ref();
let backup_time = env
.last_backup
@@ -827,7 +828,7 @@ fn download_previous(
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<R> = rpcenv.as_ref();
let archive_name = required_string_param(&param, "archive-name")?.to_owned();
diff --git a/src/api2/backup/upload_chunk.rs b/src/api2/backup/upload_chunk.rs
index 20259660..bb1566ae 100644
--- a/src/api2/backup/upload_chunk.rs
+++ b/src/api2/backup/upload_chunk.rs
@@ -14,25 +14,26 @@ use proxmox_schema::*;
use proxmox_sortable_macro::sortable;
use pbs_api_types::{BACKUP_ARCHIVE_NAME_SCHEMA, CHUNK_DIGEST_SCHEMA};
+use pbs_datastore::chunk_store::{CanWrite, Lookup as L, Write as W};
use pbs_datastore::file_formats::{DataBlobHeader, EncryptedDataBlobHeader};
use pbs_datastore::{DataBlob, DataStore};
use pbs_tools::json::{required_integer_param, required_string_param};
use super::environment::*;
-pub struct UploadChunk {
+pub struct UploadChunk<T> {
stream: Body,
- store: Arc<DataStore>,
+ store: Arc<DataStore<T>>,
digest: [u8; 32],
size: u32,
encoded_size: u32,
raw_data: Option<Vec<u8>>,
}
-impl UploadChunk {
+impl<T> UploadChunk<T> {
pub fn new(
stream: Body,
- store: Arc<DataStore>,
+ store: Arc<DataStore<T>>,
digest: [u8; 32],
size: u32,
encoded_size: u32,
@@ -48,7 +49,7 @@ impl UploadChunk {
}
}
-impl Future for UploadChunk {
+impl<T: CanWrite> Future for UploadChunk<T> {
type Output = Result<([u8; 32], u32, u32, bool), Error>;
fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output> {
@@ -159,7 +160,7 @@ fn upload_fixed_chunk(
let digest_str = required_string_param(&param, "digest")?;
let digest = <[u8; 32]>::from_hex(digest_str)?;
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
let (digest, size, compressed_size, is_duplicate) =
UploadChunk::new(req_body, env.datastore.clone(), digest, size, encoded_size).await?;
@@ -228,7 +229,7 @@ fn upload_dynamic_chunk(
let digest_str = required_string_param(&param, "digest")?;
let digest = <[u8; 32]>::from_hex(digest_str)?;
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
let (digest, size, compressed_size, is_duplicate) =
UploadChunk::new(req_body, env.datastore.clone(), digest, size, encoded_size).await?;
@@ -273,7 +274,7 @@ fn upload_speedtest(
println!("Upload error: {}", err);
}
}
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<L> = rpcenv.as_ref();
Ok(env.format_response(Ok(Value::Null)))
}
.boxed()
@@ -312,7 +313,7 @@ fn upload_blob(
let file_name = required_string_param(&param, "file-name")?.to_owned();
let encoded_size = required_integer_param(&param, "encoded-size")? as usize;
- let env: &BackupEnvironment = rpcenv.as_ref();
+ let env: &BackupEnvironment<W> = rpcenv.as_ref();
if !file_name.ends_with(".blob") {
bail!("wrong blob file extension: '{}'", file_name);
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index b133be70..52fa6db1 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -30,6 +30,7 @@ use crate::api2::config::tape_backup_job::{delete_tape_backup_job, list_tape_bac
use crate::api2::config::verify::delete_verification_job;
use pbs_config::CachedUserInfo;
+use pbs_datastore::chunk_store::{Read as R, Write as W};
use pbs_datastore::get_datastore_mount_status;
use proxmox_rest_server::WorkerTask;
@@ -124,7 +125,7 @@ pub(crate) fn do_create_datastore(
};
let chunk_store = if reuse_datastore {
- ChunkStore::verify_chunkstore(&path).and_then(|_| {
+ ChunkStore::<R>::verify_chunkstore(&path).and_then(|_| {
// Must be the only instance accessing and locking the chunk store,
// dropping will close all other locks from this process on the lockfile as well.
ChunkStore::open(
@@ -666,7 +667,7 @@ pub async fn delete_datastore(
auth_id.to_string(),
to_stdout,
move |_worker| {
- pbs_datastore::DataStore::destroy(&name, destroy_data)?;
+ pbs_datastore::DataStore::<W>::destroy(&name, destroy_data)?;
// ignore errors
let _ = jobstate::remove_state_file("prune", &name);
diff --git a/src/api2/reader/environment.rs b/src/api2/reader/environment.rs
index 3b2f06f4..26f5bec6 100644
--- a/src/api2/reader/environment.rs
+++ b/src/api2/reader/environment.rs
@@ -14,25 +14,25 @@ use tracing::info;
/// `RpcEnvironment` implementation for backup reader service
#[derive(Clone)]
-pub struct ReaderEnvironment {
+pub struct ReaderEnvironment<T> {
env_type: RpcEnvironmentType,
result_attributes: Value,
auth_id: Authid,
pub debug: bool,
pub formatter: &'static dyn OutputFormatter,
pub worker: Arc<WorkerTask>,
- pub datastore: Arc<DataStore>,
- pub backup_dir: BackupDir,
+ pub datastore: Arc<DataStore<T>>,
+ pub backup_dir: BackupDir<T>,
allowed_chunks: Arc<RwLock<HashSet<[u8; 32]>>>,
}
-impl ReaderEnvironment {
+impl<T> ReaderEnvironment<T> {
pub fn new(
env_type: RpcEnvironmentType,
auth_id: Authid,
worker: Arc<WorkerTask>,
- datastore: Arc<DataStore>,
- backup_dir: BackupDir,
+ datastore: Arc<DataStore<T>>,
+ backup_dir: BackupDir<T>,
) -> Self {
Self {
result_attributes: json!({}),
@@ -71,7 +71,7 @@ impl ReaderEnvironment {
}
}
-impl RpcEnvironment for ReaderEnvironment {
+impl<T: Send + Sync + 'static> RpcEnvironment for ReaderEnvironment<T> {
fn result_attrib_mut(&mut self) -> &mut Value {
&mut self.result_attributes
}
@@ -93,14 +93,18 @@ impl RpcEnvironment for ReaderEnvironment {
}
}
-impl AsRef<ReaderEnvironment> for dyn RpcEnvironment {
- fn as_ref(&self) -> &ReaderEnvironment {
- self.as_any().downcast_ref::<ReaderEnvironment>().unwrap()
+impl<T: 'static> AsRef<ReaderEnvironment<T>> for dyn RpcEnvironment {
+ fn as_ref(&self) -> &ReaderEnvironment<T> {
+ self.as_any()
+ .downcast_ref::<ReaderEnvironment<T>>()
+ .unwrap()
}
}
-impl AsRef<ReaderEnvironment> for Box<dyn RpcEnvironment> {
- fn as_ref(&self) -> &ReaderEnvironment {
- self.as_any().downcast_ref::<ReaderEnvironment>().unwrap()
+impl<T: 'static> AsRef<ReaderEnvironment<T>> for Box<dyn RpcEnvironment> {
+ fn as_ref(&self) -> &ReaderEnvironment<T> {
+ self.as_any()
+ .downcast_ref::<ReaderEnvironment<T>>()
+ .unwrap()
}
}
diff --git a/src/api2/reader/mod.rs b/src/api2/reader/mod.rs
index cc791299..52f0953a 100644
--- a/src/api2/reader/mod.rs
+++ b/src/api2/reader/mod.rs
@@ -23,6 +23,7 @@ use pbs_api_types::{
DATASTORE_SCHEMA, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
};
use pbs_config::CachedUserInfo;
+use pbs_datastore::chunk_store::Read as R;
use pbs_datastore::index::IndexFile;
use pbs_datastore::{DataStore, PROXMOX_BACKUP_READER_PROTOCOL_ID_V1};
use pbs_tools::json::required_string_param;
@@ -247,7 +248,7 @@ fn download_file(
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
- let env: &ReaderEnvironment = rpcenv.as_ref();
+ let env: &ReaderEnvironment<R> = rpcenv.as_ref();
let file_name = required_string_param(&param, "file-name")?.to_owned();
@@ -303,7 +304,7 @@ fn download_chunk(
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
- let env: &ReaderEnvironment = rpcenv.as_ref();
+ let env: &ReaderEnvironment<R> = rpcenv.as_ref();
let digest_str = required_string_param(&param, "digest")?;
let digest = <[u8; 32]>::from_hex(digest_str)?;
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 31293a9a..306d5936 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -18,6 +18,7 @@ use pbs_api_types::{
use pbs_config::CachedUserInfo;
use pbs_datastore::backup_info::{BackupDir, BackupInfo};
+use pbs_datastore::chunk_store::CanRead;
use pbs_datastore::{DataStore, StoreProgress};
use crate::tape::TapeNotificationMode;
@@ -360,9 +361,9 @@ enum SnapshotBackupResult {
Ignored,
}
-fn backup_worker(
+fn backup_worker<T: CanRead + Send + Sync + 'static>(
worker: &WorkerTask,
- datastore: Arc<DataStore>,
+ datastore: Arc<DataStore<T>>,
pool_config: &MediaPoolConfig,
setup: &TapeBackupJobSetup,
summary: &mut TapeBackupJobSummary,
@@ -564,11 +565,11 @@ fn update_media_online_status(drive: &str) -> Result<Option<String>, Error> {
}
}
-fn backup_snapshot(
+fn backup_snapshot<T: CanRead + Send + Sync + 'static>(
worker: &WorkerTask,
pool_writer: &mut PoolWriter,
- datastore: Arc<DataStore>,
- snapshot: BackupDir,
+ datastore: Arc<DataStore<T>>,
+ snapshot: BackupDir<T>,
) -> Result<SnapshotBackupResult, Error> {
let snapshot_path = snapshot.relative_path();
info!("backup snapshot {snapshot_path:?}");
diff --git a/src/api2/tape/drive.rs b/src/api2/tape/drive.rs
index ba9051de..47fa06dc 100644
--- a/src/api2/tape/drive.rs
+++ b/src/api2/tape/drive.rs
@@ -24,6 +24,7 @@ use pbs_api_types::{
use pbs_api_types::{PRIV_TAPE_AUDIT, PRIV_TAPE_READ, PRIV_TAPE_WRITE};
use pbs_config::CachedUserInfo;
+use pbs_datastore::chunk_store::Write as W;
use pbs_tape::{
linux_list_drives::{lookup_device_identification, lto_tape_device_list, open_lto_tape_device},
sg_tape::tape_alert_flags_critical,
@@ -1342,7 +1343,7 @@ pub fn catalog_media(
drive.read_label()?; // skip over labels - we already read them above
let mut checked_chunks = HashMap::new();
- restore_media(
+ restore_media::<W>(
worker,
&mut drive,
&media_id,
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 2cc1baab..8f089c20 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -27,6 +27,7 @@ use pbs_api_types::{
};
use pbs_client::pxar::tools::handle_root_with_optional_format_version_prelude;
use pbs_config::CachedUserInfo;
+use pbs_datastore::chunk_store::{CanRead, CanWrite, Write as W};
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
@@ -120,13 +121,13 @@ impl NamespaceMap {
}
}
-pub struct DataStoreMap {
- map: HashMap<String, Arc<DataStore>>,
- default: Option<Arc<DataStore>>,
+pub struct DataStoreMap<T> {
+ map: HashMap<String, Arc<DataStore<T>>>,
+ default: Option<Arc<DataStore<T>>>,
ns_map: Option<NamespaceMap>,
}
-impl TryFrom<String> for DataStoreMap {
+impl TryFrom<String> for DataStoreMap<W> {
type Error = Error;
fn try_from(value: String) -> Result<Self, Error> {
@@ -161,7 +162,7 @@ impl TryFrom<String> for DataStoreMap {
}
}
-impl DataStoreMap {
+impl<T> DataStoreMap<T> {
fn add_namespaces_maps(&mut self, mappings: Vec<String>) -> Result<bool, Error> {
let count = mappings.len();
let ns_map = NamespaceMap::try_from(mappings)?;
@@ -169,7 +170,10 @@ impl DataStoreMap {
Ok(count > 0)
}
- fn used_datastores(&self) -> HashMap<&str, (Arc<DataStore>, Option<HashSet<BackupNamespace>>)> {
+ #[allow(clippy::type_complexity)]
+ fn used_datastores(
+ &self,
+ ) -> HashMap<&str, (Arc<DataStore<T>>, Option<HashSet<BackupNamespace>>)> {
let mut map = HashMap::new();
for (source, target) in self.map.iter() {
let ns = self.ns_map.as_ref().map(|map| map.used_namespaces(source));
@@ -189,18 +193,19 @@ impl DataStoreMap {
.map(|mapping| mapping.get_namespaces(datastore, ns))
}
- fn target_store(&self, source_datastore: &str) -> Option<Arc<DataStore>> {
+ fn target_store(&self, source_datastore: &str) -> Option<Arc<DataStore<T>>> {
self.map
.get(source_datastore)
.or(self.default.as_ref())
.map(Arc::clone)
}
+ #[allow(clippy::type_complexity)]
fn get_targets(
&self,
source_datastore: &str,
source_ns: &BackupNamespace,
- ) -> Option<(Arc<DataStore>, Option<Vec<BackupNamespace>>)> {
+ ) -> Option<(Arc<DataStore<T>>, Option<Vec<BackupNamespace>>)> {
self.target_store(source_datastore)
.map(|store| (store, self.target_ns(source_datastore, source_ns)))
}
@@ -237,9 +242,9 @@ fn check_datastore_privs(
Ok(())
}
-fn check_and_create_namespaces(
+fn check_and_create_namespaces<T: CanWrite>(
user_info: &CachedUserInfo,
- store: &Arc<DataStore>,
+ store: &Arc<DataStore<T>>,
ns: &BackupNamespace,
auth_id: &Authid,
owner: Option<&Authid>,
@@ -449,13 +454,13 @@ pub fn restore(
}
#[allow(clippy::too_many_arguments)]
-fn restore_full_worker(
+fn restore_full_worker<T: CanWrite + Send + Sync + 'static>(
worker: Arc<WorkerTask>,
inventory: Inventory,
media_set_uuid: Uuid,
drive_config: SectionConfigData,
drive_name: &str,
- store_map: DataStoreMap,
+ store_map: DataStoreMap<T>,
restore_owner: &Authid,
notification_mode: &TapeNotificationMode,
auth_id: &Authid,
@@ -529,8 +534,8 @@ fn restore_full_worker(
}
#[allow(clippy::too_many_arguments)]
-fn check_snapshot_restorable(
- store_map: &DataStoreMap,
+fn check_snapshot_restorable<T: CanRead>(
+ store_map: &DataStoreMap<T>,
store: &str,
snapshot: &str,
ns: &BackupNamespace,
@@ -618,14 +623,14 @@ fn log_required_tapes<'a>(inventory: &Inventory, list: impl Iterator<Item = &'a
}
#[allow(clippy::too_many_arguments)]
-fn restore_list_worker(
+fn restore_list_worker<T: CanWrite + Send + Sync + 'static>(
worker: Arc<WorkerTask>,
snapshots: Vec<String>,
inventory: Inventory,
media_set_uuid: Uuid,
drive_config: SectionConfigData,
drive_name: &str,
- store_map: DataStoreMap,
+ store_map: DataStoreMap<T>,
restore_owner: &Authid,
notification_mode: &TapeNotificationMode,
user_info: Arc<CachedUserInfo>,
@@ -955,16 +960,16 @@ fn get_media_set_catalog(
Ok(catalog)
}
-fn media_set_tmpdir(datastore: &DataStore, media_set_uuid: &Uuid) -> PathBuf {
+fn media_set_tmpdir<T>(datastore: &DataStore<T>, media_set_uuid: &Uuid) -> PathBuf {
let mut path = datastore.base_path();
path.push(".tmp");
path.push(media_set_uuid.to_string());
path
}
-fn snapshot_tmpdir(
+fn snapshot_tmpdir<T>(
source_datastore: &str,
- datastore: &DataStore,
+ datastore: &DataStore<T>,
snapshot: &str,
media_set_uuid: &Uuid,
) -> PathBuf {
@@ -974,9 +979,9 @@ fn snapshot_tmpdir(
path
}
-fn restore_snapshots_to_tmpdir(
+fn restore_snapshots_to_tmpdir<T>(
worker: Arc<WorkerTask>,
- store_map: &DataStoreMap,
+ store_map: &DataStoreMap<T>,
file_list: &[u64],
mut drive: Box<dyn TapeDriver>,
media_id: &MediaId,
@@ -1083,10 +1088,10 @@ fn restore_snapshots_to_tmpdir(
Ok(tmp_paths)
}
-fn restore_file_chunk_map(
+fn restore_file_chunk_map<T: CanWrite + Send + Sync + 'static>(
worker: Arc<WorkerTask>,
drive: &mut Box<dyn TapeDriver>,
- store_map: &DataStoreMap,
+ store_map: &DataStoreMap<T>,
file_chunk_map: &mut BTreeMap<u64, HashSet<[u8; 32]>>,
) -> Result<(), Error> {
for (nr, chunk_map) in file_chunk_map.iter_mut() {
@@ -1133,10 +1138,10 @@ fn restore_file_chunk_map(
Ok(())
}
-fn restore_partial_chunk_archive<'a>(
+fn restore_partial_chunk_archive<'a, T: CanWrite + Send + Sync + 'static>(
worker: Arc<WorkerTask>,
reader: Box<dyn 'a + TapeRead>,
- datastore: Arc<DataStore>,
+ datastore: Arc<DataStore<T>>,
chunk_list: &mut HashSet<[u8; 32]>,
) -> Result<usize, Error> {
let mut decoder = ChunkArchiveDecoder::new(reader);
@@ -1195,12 +1200,12 @@ fn restore_partial_chunk_archive<'a>(
/// Request and restore complete media without using existing catalog (create catalog instead)
#[allow(clippy::too_many_arguments)]
-pub fn request_and_restore_media(
+pub fn request_and_restore_media<T: CanWrite + Send + Sync + 'static>(
worker: Arc<WorkerTask>,
media_id: &MediaId,
drive_config: &SectionConfigData,
drive_name: &str,
- store_map: &DataStoreMap,
+ store_map: &DataStoreMap<T>,
checked_chunks_map: &mut HashMap<String, HashSet<[u8; 32]>>,
restore_owner: &Authid,
notification_mode: &TapeNotificationMode,
@@ -1253,11 +1258,11 @@ pub fn request_and_restore_media(
/// Restore complete media content and catalog
///
/// Only create the catalog if target is None.
-pub fn restore_media(
+pub fn restore_media<T: CanWrite + Send + Sync + 'static>(
worker: Arc<WorkerTask>,
drive: &mut Box<dyn TapeDriver>,
media_id: &MediaId,
- target: Option<(&DataStoreMap, &Authid)>,
+ target: Option<(&DataStoreMap<T>, &Authid)>,
checked_chunks_map: &mut HashMap<String, HashSet<[u8; 32]>>,
verbose: bool,
auth_id: &Authid,
@@ -1301,11 +1306,11 @@ pub fn restore_media(
}
#[allow(clippy::too_many_arguments)]
-fn restore_archive<'a>(
+fn restore_archive<'a, T: CanWrite + Send + Sync + 'static>(
worker: Arc<WorkerTask>,
mut reader: Box<dyn 'a + TapeRead>,
current_file_number: u64,
- target: Option<(&DataStoreMap, &Authid)>,
+ target: Option<(&DataStoreMap<T>, &Authid)>,
catalog: &mut MediaCatalog,
checked_chunks_map: &mut HashMap<String, HashSet<[u8; 32]>>,
verbose: bool,
@@ -1525,10 +1530,10 @@ fn scan_chunk_archive<'a>(
Ok(Some(chunks))
}
-fn restore_chunk_archive<'a>(
+fn restore_chunk_archive<'a, T: CanWrite + Send + Sync + 'static>(
worker: Arc<WorkerTask>,
reader: Box<dyn 'a + TapeRead>,
- datastore: Arc<DataStore>,
+ datastore: Arc<DataStore<T>>,
checked_chunks: &mut HashSet<[u8; 32]>,
verbose: bool,
) -> Result<Option<Vec<[u8; 32]>>, Error> {
diff --git a/src/backup/hierarchy.rs b/src/backup/hierarchy.rs
index 8dd71fcf..039e32a6 100644
--- a/src/backup/hierarchy.rs
+++ b/src/backup/hierarchy.rs
@@ -7,6 +7,7 @@ use pbs_api_types::{
PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_READ,
};
use pbs_config::CachedUserInfo;
+use pbs_datastore::chunk_store::CanRead;
use pbs_datastore::{backup_info::BackupGroup, DataStore, ListGroups, ListNamespacesRecursive};
/// Asserts that `privs` are fulfilled on datastore + (optional) namespace.
@@ -68,8 +69,8 @@ pub fn check_ns_privs_full(
);
}
-pub fn can_access_any_namespace(
- store: Arc<DataStore>,
+pub fn can_access_any_namespace<T: CanRead + 'static>(
+ store: Arc<DataStore<T>>,
auth_id: &Authid,
user_info: &CachedUserInfo,
) -> bool {
@@ -95,8 +96,8 @@ pub fn can_access_any_namespace(
///
/// Is basically just a filter-iter for pbs_datastore::ListNamespacesRecursive including access and
/// optional owner checks.
-pub struct ListAccessibleBackupGroups<'a> {
- store: &'a Arc<DataStore>,
+pub struct ListAccessibleBackupGroups<'a, T> {
+ store: &'a Arc<DataStore<T>>,
auth_id: Option<&'a Authid>,
user_info: Arc<CachedUserInfo>,
/// The priv on NS level that allows auth_id to trump the owner check
@@ -104,15 +105,15 @@ pub struct ListAccessibleBackupGroups<'a> {
/// The priv that auth_id is required to have on NS level additionally to being owner
owner_and_priv: u64,
/// Contains the internal state, group iter and a bool flag for override_owner_priv
- state: Option<(ListGroups, bool)>,
- ns_iter: ListNamespacesRecursive,
+ state: Option<(ListGroups<T>, bool)>,
+ ns_iter: ListNamespacesRecursive<T>,
}
-impl<'a> ListAccessibleBackupGroups<'a> {
+impl<'a, T: CanRead> ListAccessibleBackupGroups<'a, T> {
// TODO: builder pattern
pub fn new_owned(
- store: &'a Arc<DataStore>,
+ store: &'a Arc<DataStore<T>>,
ns: BackupNamespace,
max_depth: usize,
auth_id: Option<&'a Authid>,
@@ -122,7 +123,7 @@ impl<'a> ListAccessibleBackupGroups<'a> {
}
pub fn new_with_privs(
- store: &'a Arc<DataStore>,
+ store: &'a Arc<DataStore<T>>,
ns: BackupNamespace,
max_depth: usize,
override_owner_priv: Option<u64>,
@@ -145,8 +146,8 @@ impl<'a> ListAccessibleBackupGroups<'a> {
pub static NS_PRIVS_OK: u64 =
PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_AUDIT;
-impl Iterator for ListAccessibleBackupGroups<'_> {
- type Item = Result<BackupGroup, Error>;
+impl<T: CanRead> Iterator for ListAccessibleBackupGroups<'_, T> {
+ type Item = Result<BackupGroup<T>, Error>;
fn next(&mut self) -> Option<Self::Item> {
loop {
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index 3d2cba8a..15c2e9e4 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -15,6 +15,7 @@ use pbs_api_types::{
UPID,
};
use pbs_datastore::backup_info::{BackupDir, BackupGroup, BackupInfo};
+use pbs_datastore::chunk_store::{CanRead, CanWrite};
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{BackupManifest, FileInfo};
use pbs_datastore::{DataBlob, DataStore, StoreProgress};
@@ -25,16 +26,16 @@ use crate::backup::hierarchy::ListAccessibleBackupGroups;
/// A VerifyWorker encapsulates a task worker, datastore and information about which chunks have
/// already been verified or detected as corrupt.
-pub struct VerifyWorker {
+pub struct VerifyWorker<T> {
worker: Arc<dyn WorkerTaskContext>,
- datastore: Arc<DataStore>,
+ datastore: Arc<DataStore<T>>,
verified_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
}
-impl VerifyWorker {
+impl<T> VerifyWorker<T> {
/// Creates a new VerifyWorker for a given task worker and datastore.
- pub fn new(worker: Arc<dyn WorkerTaskContext>, datastore: Arc<DataStore>) -> Self {
+ pub fn new(worker: Arc<dyn WorkerTaskContext>, datastore: Arc<DataStore<T>>) -> Self {
Self {
worker,
datastore,
@@ -46,7 +47,7 @@ impl VerifyWorker {
}
}
-fn verify_blob(backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
+fn verify_blob<T: CanRead>(backup_dir: &BackupDir<T>, info: &FileInfo) -> Result<(), Error> {
let blob = backup_dir.load_blob(&info.filename)?;
let raw_size = blob.raw_size();
@@ -70,7 +71,7 @@ fn verify_blob(backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
}
}
-fn rename_corrupted_chunk(datastore: Arc<DataStore>, digest: &[u8; 32]) {
+fn rename_corrupted_chunk<T: CanWrite>(datastore: Arc<DataStore<T>>, digest: &[u8; 32]) {
let (path, digest_str) = datastore.chunk_path(digest);
let mut counter = 0;
@@ -97,8 +98,8 @@ fn rename_corrupted_chunk(datastore: Arc<DataStore>, digest: &[u8; 32]) {
};
}
-fn verify_index_chunks(
- verify_worker: &VerifyWorker,
+fn verify_index_chunks<T: CanWrite + Send + Sync + 'static>(
+ verify_worker: &VerifyWorker<T>,
index: Box<dyn IndexFile + Send>,
crypt_mode: CryptMode,
) -> Result<(), Error> {
@@ -238,9 +239,9 @@ fn verify_index_chunks(
Ok(())
}
-fn verify_fixed_index(
- verify_worker: &VerifyWorker,
- backup_dir: &BackupDir,
+fn verify_fixed_index<T: CanWrite + Send + Sync + 'static>(
+ verify_worker: &VerifyWorker<T>,
+ backup_dir: &BackupDir<T>,
info: &FileInfo,
) -> Result<(), Error> {
let mut path = backup_dir.relative_path();
@@ -260,9 +261,9 @@ fn verify_fixed_index(
verify_index_chunks(verify_worker, Box::new(index), info.chunk_crypt_mode())
}
-fn verify_dynamic_index(
- verify_worker: &VerifyWorker,
- backup_dir: &BackupDir,
+fn verify_dynamic_index<T: CanWrite + Send + Sync + 'static>(
+ verify_worker: &VerifyWorker<T>,
+ backup_dir: &BackupDir<T>,
info: &FileInfo,
) -> Result<(), Error> {
let mut path = backup_dir.relative_path();
@@ -291,9 +292,9 @@ fn verify_dynamic_index(
/// - Ok(true) if verify is successful
/// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted
-pub fn verify_backup_dir(
- verify_worker: &VerifyWorker,
- backup_dir: &BackupDir,
+pub fn verify_backup_dir<T: CanWrite + Send + Sync + 'static>(
+ verify_worker: &VerifyWorker<T>,
+ backup_dir: &BackupDir<T>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<bool, Error> {
@@ -325,9 +326,9 @@ pub fn verify_backup_dir(
}
/// See verify_backup_dir
-pub fn verify_backup_dir_with_lock(
- verify_worker: &VerifyWorker,
- backup_dir: &BackupDir,
+pub fn verify_backup_dir_with_lock<T: CanWrite + Send + Sync + 'static>(
+ verify_worker: &VerifyWorker<T>,
+ backup_dir: &BackupDir<T>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
_snap_lock: BackupLockGuard,
@@ -403,9 +404,9 @@ pub fn verify_backup_dir_with_lock(
/// Returns
/// - Ok((count, failed_dirs)) where failed_dirs had verification errors
/// - Err(_) if task was aborted
-pub fn verify_backup_group(
- verify_worker: &VerifyWorker,
- group: &BackupGroup,
+pub fn verify_backup_group<T: CanWrite + Send + Sync + 'static>(
+ verify_worker: &VerifyWorker<T>,
+ group: &BackupGroup<T>,
progress: &mut StoreProgress,
upid: &UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
@@ -455,8 +456,8 @@ pub fn verify_backup_group(
/// Returns
/// - Ok(failed_dirs) where failed_dirs had verification errors
/// - Err(_) if task was aborted
-pub fn verify_all_backups(
- verify_worker: &VerifyWorker,
+pub fn verify_all_backups<T: CanWrite + Send + Sync + 'static>(
+ verify_worker: &VerifyWorker<T>,
upid: &UPID,
ns: BackupNamespace,
max_depth: Option<usize>,
@@ -504,7 +505,7 @@ pub fn verify_all_backups(
.filter(|group| {
!(group.backup_type() == BackupType::Host && group.backup_id() == "benchmark")
})
- .collect::<Vec<BackupGroup>>(),
+ .collect::<Vec<BackupGroup<T>>>(),
Err(err) => {
info!("unable to list backups: {err}");
return Ok(errors);
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 1d4cf37c..bda2f17b 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -20,7 +20,8 @@ use proxmox_router::{RpcEnvironment, RpcEnvironmentType};
use proxmox_sys::fs::CreateOptions;
use proxmox_sys::logrotate::LogRotate;
-use pbs_datastore::DataStore;
+use pbs_datastore::chunk_store::Lookup as L;
+use pbs_datastore::{is_garbage_collection_running, DataStore};
use proxmox_rest_server::{
cleanup_old_tasks, cookie_from_header, rotate_task_log_archive, ApiConfig, Redirector,
@@ -265,7 +266,7 @@ async fn run() -> Result<(), Error> {
// to remove references for not configured datastores
command_sock.register_command("datastore-removed".to_string(), |_value| {
- if let Err(err) = DataStore::remove_unused_datastores() {
+ if let Err(err) = DataStore::<L>::remove_unused_datastores() {
log::error!("could not refresh datastores: {err}");
}
Ok(Value::Null)
@@ -274,7 +275,7 @@ async fn run() -> Result<(), Error> {
// clear cache entry for datastore that is in a specific maintenance mode
command_sock.register_command("update-datastore-cache".to_string(), |value| {
if let Some(name) = value.and_then(Value::as_str) {
- if let Err(err) = DataStore::update_datastore_cache(name) {
+ if let Err(err) = DataStore::<L>::update_datastore_cache(name) {
log::error!("could not trigger update datastore cache: {err}");
}
}
diff --git a/src/server/gc_job.rs b/src/server/gc_job.rs
index 64835028..c2af6c67 100644
--- a/src/server/gc_job.rs
+++ b/src/server/gc_job.rs
@@ -4,15 +4,18 @@ use std::sync::Arc;
use tracing::info;
use pbs_api_types::Authid;
+use pbs_datastore::chunk_store::CanWrite;
use pbs_datastore::DataStore;
use proxmox_rest_server::WorkerTask;
use crate::server::{jobstate::Job, send_gc_status};
/// Runs a garbage collection job.
-pub fn do_garbage_collection_job(
+pub fn do_garbage_collection_job<
+ T: CanWrite + Send + Sync + std::panic::RefUnwindSafe + 'static,
+>(
mut job: Job,
- datastore: Arc<DataStore>,
+ datastore: Arc<DataStore<T>>,
auth_id: &Authid,
schedule: Option<String>,
to_stdout: bool,
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index 1c86647a..395aaee4 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -7,6 +7,7 @@ use pbs_api_types::{
print_store_and_ns, Authid, KeepOptions, Operation, PruneJobOptions, MAX_NAMESPACE_DEPTH,
PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE,
};
+use pbs_datastore::chunk_store::CanWrite;
use pbs_datastore::prune::compute_prune_info;
use pbs_datastore::DataStore;
use proxmox_rest_server::WorkerTask;
@@ -14,10 +15,10 @@ use proxmox_rest_server::WorkerTask;
use crate::backup::ListAccessibleBackupGroups;
use crate::server::jobstate::Job;
-pub fn prune_datastore(
+pub fn prune_datastore<T: CanWrite>(
auth_id: Authid,
prune_options: PruneJobOptions,
- datastore: Arc<DataStore>,
+ datastore: Arc<DataStore<T>>,
dry_run: bool,
) -> Result<(), Error> {
let store = &datastore.name();
diff --git a/src/server/pull.rs b/src/server/pull.rs
index b1724c14..573aa805 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -18,6 +18,7 @@ use pbs_api_types::{
};
use pbs_client::BackupRepository;
use pbs_config::CachedUserInfo;
+use pbs_datastore::chunk_store::{CanWrite, Write as W};
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
@@ -34,8 +35,8 @@ use super::sync::{
use crate::backup::{check_ns_modification_privs, check_ns_privs};
use crate::tools::parallel_handler::ParallelHandler;
-pub(crate) struct PullTarget {
- store: Arc<DataStore>,
+pub(crate) struct PullTarget<T> {
+ store: Arc<DataStore<T>>,
ns: BackupNamespace,
}
@@ -44,7 +45,7 @@ pub(crate) struct PullParameters {
/// Where data is pulled from
source: Arc<dyn SyncSource>,
/// Where data should be pulled into
- target: PullTarget,
+ target: PullTarget<W>,
/// Owner of synced groups (needs to match local owner of pre-existing groups)
owner: Authid,
/// Whether to remove groups which exist locally, but not on the remote end
@@ -135,9 +136,9 @@ impl PullParameters {
}
}
-async fn pull_index_chunks<I: IndexFile>(
+async fn pull_index_chunks<I: IndexFile, T: CanWrite + Send + Sync + 'static>(
chunk_reader: Arc<dyn AsyncReadChunk>,
- target: Arc<DataStore>,
+ target: Arc<DataStore<T>>,
index: I,
downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<SyncStats, Error> {
@@ -260,9 +261,9 @@ fn verify_archive(info: &FileInfo, csum: &[u8; 32], size: u64) -> Result<(), Err
/// -- Verify tmp file checksum
/// - if archive is an index, pull referenced chunks
/// - Rename tmp file into real path
-async fn pull_single_archive<'a>(
+async fn pull_single_archive<'a, T: CanWrite + Send + Sync + 'static>(
reader: Arc<dyn SyncSourceReader + 'a>,
- snapshot: &'a pbs_datastore::BackupDir,
+ snapshot: &'a pbs_datastore::BackupDir<T>,
archive_info: &'a FileInfo,
downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<SyncStats, Error> {
@@ -343,10 +344,10 @@ async fn pull_single_archive<'a>(
/// -- if file already exists, verify contents
/// -- if not, pull it from the remote
/// - Download log if not already existing
-async fn pull_snapshot<'a>(
+async fn pull_snapshot<'a, T: CanWrite + Send + Sync + 'static>(
params: &PullParameters,
reader: Arc<dyn SyncSourceReader + 'a>,
- snapshot: &'a pbs_datastore::BackupDir,
+ snapshot: &'a pbs_datastore::BackupDir<T>,
downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
corrupt: bool,
is_new: bool,
@@ -482,10 +483,10 @@ async fn pull_snapshot<'a>(
///
/// The `reader` is configured to read from the source backup directory, while the
/// `snapshot` is pointing to the local datastore and target namespace.
-async fn pull_snapshot_from<'a>(
+async fn pull_snapshot_from<'a, T: CanWrite + Send + Sync + 'static>(
params: &PullParameters,
reader: Arc<dyn SyncSourceReader + 'a>,
- snapshot: &'a pbs_datastore::BackupDir,
+ snapshot: &'a pbs_datastore::BackupDir<T>,
downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
corrupt: bool,
) -> Result<SyncStats, Error> {
diff --git a/src/server/push.rs b/src/server/push.rs
index e71012ed..ff9d9358 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -18,6 +18,7 @@ use pbs_api_types::{
};
use pbs_client::{BackupRepository, BackupWriter, HttpClient, MergedChunkInfo, UploadOptions};
use pbs_config::CachedUserInfo;
+use pbs_datastore::chunk_store::Read as R;
use pbs_datastore::data_blob::ChunkInfo;
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
@@ -61,7 +62,7 @@ impl PushTarget {
/// Parameters for a push operation
pub(crate) struct PushParameters {
/// Source of backups to be pushed to remote
- source: Arc<LocalSource>,
+ source: Arc<LocalSource<R>>,
/// Target for backups to be pushed to
target: PushTarget,
/// User used for permission checks on the source side, including potentially filtering visible
diff --git a/src/server/sync.rs b/src/server/sync.rs
index 09814ef0..96a73503 100644
--- a/src/server/sync.rs
+++ b/src/server/sync.rs
@@ -24,6 +24,7 @@ use pbs_api_types::{
PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
};
use pbs_client::{BackupReader, BackupRepository, HttpClient, RemoteChunkReader};
+use pbs_datastore::chunk_store::CanRead;
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::read_chunk::AsyncReadChunk;
use pbs_datastore::{BackupManifest, DataStore, ListNamespacesRecursive, LocalChunkReader};
@@ -105,10 +106,10 @@ pub(crate) struct RemoteSourceReader {
pub(crate) dir: BackupDir,
}
-pub(crate) struct LocalSourceReader {
+pub(crate) struct LocalSourceReader<T> {
pub(crate) _dir_lock: Arc<Mutex<BackupLockGuard>>,
pub(crate) path: PathBuf,
- pub(crate) datastore: Arc<DataStore>,
+ pub(crate) datastore: Arc<DataStore<T>>,
}
#[async_trait::async_trait]
@@ -189,7 +190,7 @@ impl SyncSourceReader for RemoteSourceReader {
}
#[async_trait::async_trait]
-impl SyncSourceReader for LocalSourceReader {
+impl<T: CanRead + Send + Sync + 'static> SyncSourceReader for LocalSourceReader<T> {
fn chunk_reader(&self, crypt_mode: CryptMode) -> Arc<dyn AsyncReadChunk> {
Arc::new(LocalChunkReader::new(
self.datastore.clone(),
@@ -266,8 +267,8 @@ pub(crate) struct RemoteSource {
pub(crate) client: HttpClient,
}
-pub(crate) struct LocalSource {
- pub(crate) store: Arc<DataStore>,
+pub(crate) struct LocalSource<T> {
+ pub(crate) store: Arc<DataStore<T>>,
pub(crate) ns: BackupNamespace,
}
@@ -415,7 +416,7 @@ impl SyncSource for RemoteSource {
}
#[async_trait::async_trait]
-impl SyncSource for LocalSource {
+impl<T: CanRead + Send + Sync + 'static> SyncSource for LocalSource<T> {
async fn list_namespaces(
&self,
max_depth: &mut Option<usize>,
diff --git a/src/tape/file_formats/snapshot_archive.rs b/src/tape/file_formats/snapshot_archive.rs
index 9d11c04b..7f4ef01f 100644
--- a/src/tape/file_formats/snapshot_archive.rs
+++ b/src/tape/file_formats/snapshot_archive.rs
@@ -5,6 +5,7 @@ use std::task::{Context, Poll};
use proxmox_sys::error::SysError;
use proxmox_uuid::Uuid;
+use pbs_datastore::chunk_store::CanRead;
use pbs_datastore::SnapshotReader;
use pbs_tape::{MediaContentHeader, TapeWrite, PROXMOX_TAPE_BLOCK_SIZE};
@@ -21,9 +22,9 @@ use crate::tape::file_formats::{
/// `LEOM` was detected before all data was written. The stream is
/// marked incomplete in that case and does not contain all data (the
/// backup task must rewrite the whole file on the next media).
-pub fn tape_write_snapshot_archive<'a>(
+pub fn tape_write_snapshot_archive<'a, T: CanRead>(
writer: &mut (dyn TapeWrite + 'a),
- snapshot_reader: &SnapshotReader,
+ snapshot_reader: &SnapshotReader<T>,
) -> Result<Option<Uuid>, std::io::Error> {
let backup_dir = snapshot_reader.snapshot();
let snapshot =
diff --git a/src/tape/pool_writer/mod.rs b/src/tape/pool_writer/mod.rs
index 54084421..17c20add 100644
--- a/src/tape/pool_writer/mod.rs
+++ b/src/tape/pool_writer/mod.rs
@@ -15,6 +15,7 @@ use tracing::{info, warn};
use proxmox_uuid::Uuid;
+use pbs_datastore::chunk_store::CanRead;
use pbs_datastore::{DataStore, SnapshotReader};
use pbs_tape::{sg_tape::tape_alert_flags_critical, TapeWrite};
use proxmox_rest_server::WorkerTask;
@@ -452,9 +453,9 @@ impl PoolWriter {
/// archive is marked incomplete, and we do not use it. The caller
/// should mark the media as full and try again using another
/// media.
- pub fn append_snapshot_archive(
+ pub fn append_snapshot_archive<T: CanRead>(
&mut self,
- snapshot_reader: &SnapshotReader,
+ snapshot_reader: &SnapshotReader<T>,
) -> Result<(bool, usize), Error> {
let status = match self.status {
Some(ref mut status) => status,
@@ -543,10 +544,10 @@ impl PoolWriter {
Ok((leom, bytes_written))
}
- pub fn spawn_chunk_reader_thread(
+ pub fn spawn_chunk_reader_thread<T: CanRead + Send + Sync + 'static>(
&self,
- datastore: Arc<DataStore>,
- snapshot_reader: Arc<Mutex<SnapshotReader>>,
+ datastore: Arc<DataStore<T>>,
+ snapshot_reader: Arc<Mutex<SnapshotReader<T>>>,
) -> Result<(std::thread::JoinHandle<()>, NewChunksIterator), Error> {
NewChunksIterator::spawn(
datastore,
diff --git a/src/tape/pool_writer/new_chunks_iterator.rs b/src/tape/pool_writer/new_chunks_iterator.rs
index e6f418df..e1a0da20 100644
--- a/src/tape/pool_writer/new_chunks_iterator.rs
+++ b/src/tape/pool_writer/new_chunks_iterator.rs
@@ -3,6 +3,7 @@ use std::sync::{Arc, Mutex};
use anyhow::{format_err, Error};
+use pbs_datastore::chunk_store::CanRead;
use pbs_datastore::{DataBlob, DataStore, SnapshotReader};
use crate::tape::CatalogSet;
@@ -21,9 +22,9 @@ impl NewChunksIterator {
/// Creates the iterator, spawning a new thread
///
/// Make sure to join() the returned thread handle.
- pub fn spawn(
- datastore: Arc<DataStore>,
- snapshot_reader: Arc<Mutex<SnapshotReader>>,
+ pub fn spawn<T: CanRead + Send + Sync + 'static>(
+ datastore: Arc<DataStore<T>>,
+ snapshot_reader: Arc<Mutex<SnapshotReader<T>>>,
catalog_set: Arc<Mutex<CatalogSet>>,
read_threads: usize,
) -> Result<(std::thread::JoinHandle<()>, Self), Error> {
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 09/12] examples/tests: add missing generics
2025-05-26 14:14 [pbs-devel] [PATCH proxmox-backup v2 00/12] introduce typestate for datastore/chunkstore Hannes Laimer
` (7 preceding siblings ...)
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 08/12] api/backup/bin/server/tape: add missing generics Hannes Laimer
@ 2025-05-26 14:14 ` Hannes Laimer
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 10/12] api: admin: pull datastore loading out of check_privs helper Hannes Laimer
` (2 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
pbs-datastore/examples/ls-snapshots.rs | 4 ++--
tests/prune.rs | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/pbs-datastore/examples/ls-snapshots.rs b/pbs-datastore/examples/ls-snapshots.rs
index 2eeea489..cf860a05 100644
--- a/pbs-datastore/examples/ls-snapshots.rs
+++ b/pbs-datastore/examples/ls-snapshots.rs
@@ -2,7 +2,7 @@ use std::path::PathBuf;
use anyhow::{bail, Error};
-use pbs_datastore::DataStore;
+use pbs_datastore::{chunk_store::Read as R, DataStore};
fn run() -> Result<(), Error> {
let base: PathBuf = match std::env::args().nth(1) {
@@ -18,7 +18,7 @@ fn run() -> Result<(), Error> {
None => None,
};
- let store = unsafe { DataStore::open_path("", base, None)? };
+ let store = unsafe { DataStore::<R>::open_path("", base, None)? };
for ns in store.recursive_iter_backup_ns_ok(Default::default(), max_depth)? {
println!("found namespace store:/{}", ns);
diff --git a/tests/prune.rs b/tests/prune.rs
index b11449ca..02de1078 100644
--- a/tests/prune.rs
+++ b/tests/prune.rs
@@ -4,10 +4,10 @@ use anyhow::Error;
use pbs_api_types::{PruneJobOptions, MANIFEST_BLOB_NAME};
use pbs_datastore::prune::compute_prune_info;
-use pbs_datastore::{BackupDir, BackupInfo};
+use pbs_datastore::{chunk_store::Read as R, BackupDir, BackupInfo};
fn get_prune_list(
- list: Vec<BackupInfo>,
+ list: Vec<BackupInfo<R>>,
return_kept: bool,
options: &PruneJobOptions,
) -> Vec<PathBuf> {
@@ -27,7 +27,7 @@ fn get_prune_list(
.collect()
}
-fn create_info(snapshot: &str, partial: bool) -> BackupInfo {
+fn create_info(snapshot: &str, partial: bool) -> BackupInfo<R> {
let backup_dir = BackupDir::new_test(snapshot.parse().unwrap());
let mut files = Vec::new();
@@ -43,7 +43,7 @@ fn create_info(snapshot: &str, partial: bool) -> BackupInfo {
}
}
-fn create_info_protected(snapshot: &str, partial: bool) -> BackupInfo {
+fn create_info_protected(snapshot: &str, partial: bool) -> BackupInfo<R> {
let mut info = create_info(snapshot, partial);
info.protected = true;
info
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 10/12] api: admin: pull datastore loading out of check_privs helper
2025-05-26 14:14 [pbs-devel] [PATCH proxmox-backup v2 00/12] introduce typestate for datastore/chunkstore Hannes Laimer
` (8 preceding siblings ...)
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 09/12] examples/tests: " Hannes Laimer
@ 2025-05-26 14:14 ` Hannes Laimer
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 11/12] datastore: move `fn gc_running` out of DataStoreImpl Hannes Laimer
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 12/12] api/server: replace datastore_lookup with new, state-typed datastore returning functions Hannes Laimer
11 siblings, 0 replies; 13+ messages in thread
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
We have to use different lookup functions depending on what we
plan to do with the reference. Deciding which function to call
based on the passed operation type was problematic: the lookup
functions return differently typed datastore references, so there
is no single return type the helper could have.
With loading and permission checking separated, each call site
opens the datastore with the concrete type it needs and the helper
only checks privileges. The separation can also make debugging
easier, which may justify it in its own right, though that is not
why it was done here.
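To illustrate the type problem, here is a minimal, self-contained
sketch with simplified stand-in types rather than the real PBS ones
(`DataStore`, `check_privs` and the state markers below are
illustrative only):

use std::marker::PhantomData;
use std::sync::Arc;

// Simplified stand-ins for the chunk store typestates.
pub struct Read;
pub struct Write;
pub trait CanRead {}
impl CanRead for Read {}
impl CanRead for Write {}

pub struct DataStore<T> {
    name: String,
    _state: PhantomData<T>,
}

impl<T> DataStore<T> {
    pub fn open(name: &str) -> Arc<Self> {
        Arc::new(DataStore { name: name.to_string(), _state: PhantomData })
    }
}

// This helper cannot exist: the two arms have different types, so
// there is no single return type to write down.
//
// fn lookup(name: &str, write: bool) -> Arc<DataStore<???>> {
//     if write { DataStore::<Write>::open(name) }
//     else { DataStore::<Read>::open(name) }
// }

// Instead, privilege checking is generic over any readable reference
// and each call site picks the concrete lookup function itself.
pub fn check_privs<T: CanRead>(store: &Arc<DataStore<T>>) -> Result<(), String> {
    // the real helper checks ACLs here; the sketch only shows the shape
    println!("checking privs on {}", store.name);
    Ok(())
}

fn main() -> Result<(), String> {
    let datastore = DataStore::<Write>::open("store1"); // call site chose Write
    check_privs(&datastore)?; // compiles for Read and Write references alike
    Ok(())
}

The generic bound on check_privs is what keeps a single helper usable
for both reference types; the type decision itself has to move to the
call sites, which is exactly what this patch does.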
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
src/api2/admin/datastore.rs | 116 ++++++++++++++++++------------------
1 file changed, 59 insertions(+), 57 deletions(-)
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index e3f93cdd..218d7e73 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -41,7 +41,7 @@ use pbs_api_types::{
BackupContent, BackupGroupDeleteStats, BackupNamespace, BackupType, Counts, CryptMode,
DataStoreConfig, DataStoreListItem, DataStoreMountStatus, DataStoreStatus,
GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, MaintenanceMode,
- MaintenanceType, Operation, PruneJobOptions, SnapshotListItem, SnapshotVerifyState,
+ MaintenanceType, PruneJobOptions, SnapshotListItem, SnapshotVerifyState,
BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA,
BACKUP_TYPE_SCHEMA, CATALOG_NAME, CLIENT_LOG_BLOB_NAME, DATASTORE_SCHEMA,
IGNORE_VERIFIED_BACKUPS_SCHEMA, MANIFEST_BLOB_NAME, MAX_NAMESPACE_DEPTH, NS_MAX_DEPTH_SCHEMA,
@@ -92,27 +92,29 @@ fn get_group_note_path<T>(
// helper to unify common sequence of checks:
// 1. check privs on NS (full or limited access)
-// 2. load datastore
-// 3. if needed (only limited access), check owner of group
-fn check_privs_and_load_store(
- store: &str,
+// 2. if needed (only limited access), check owner of group
+fn check_privs<T: CanRead>(
+ store: &Arc<DataStore<T>>,
ns: &BackupNamespace,
auth_id: &Authid,
full_access_privs: u64,
partial_access_privs: u64,
- operation: Option<Operation>,
backup_group: &pbs_api_types::BackupGroup,
-) -> Result<Arc<DataStore>, Error> {
- let limited = check_ns_privs_full(store, ns, auth_id, full_access_privs, partial_access_privs)?;
-
- let datastore = DataStore::lookup_datastore(store, operation)?;
+) -> Result<(), Error> {
+ let limited = check_ns_privs_full(
+ store.name(),
+ ns,
+ auth_id,
+ full_access_privs,
+ partial_access_privs,
+ )?;
if limited {
- let owner = datastore.get_owner(ns, backup_group)?;
+ let owner = store.get_owner(ns, backup_group)?;
check_backup_owner(&owner, auth_id)?;
}
- Ok(datastore)
+ Ok(())
}
fn read_backup_index<T: CanRead>(
@@ -303,13 +305,13 @@ pub async fn delete_group(
tokio::task::spawn_blocking(move || {
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_write(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_PRUNE,
- Some(Operation::Write),
&group,
)?;
@@ -370,13 +372,13 @@ pub async fn list_snapshot_files(
tokio::task::spawn_blocking(move || {
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_read(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
&backup_dir.group,
)?;
@@ -424,13 +426,13 @@ pub async fn delete_snapshot(
tokio::task::spawn_blocking(move || {
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_write(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_PRUNE,
- Some(Operation::Write),
&backup_dir.group,
)?;
@@ -1001,13 +1003,13 @@ pub fn prune(
) -> Result<Value, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_write(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_PRUNE,
- Some(Operation::Write),
&group,
)?;
@@ -1405,13 +1407,13 @@ pub fn download_file(
let backup_ns = optional_ns_param(&param)?;
let backup_dir: pbs_api_types::BackupDir = Deserialize::deserialize(&param)?;
- let datastore = check_privs_and_load_store(
- store,
+ let datastore = DataStore::lookup_datastore_read(store)?;
+ check_privs(
+ &datastore,
&backup_ns,
&auth_id,
PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
&backup_dir.group,
)?;
@@ -1490,13 +1492,13 @@ pub fn download_file_decoded(
let backup_ns = optional_ns_param(&param)?;
let backup_dir_api: pbs_api_types::BackupDir = Deserialize::deserialize(&param)?;
- let datastore = check_privs_and_load_store(
- store,
+ let datastore = DataStore::lookup_datastore_read(store)?;
+ check_privs(
+ &datastore,
&backup_ns,
&auth_id,
PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
&backup_dir_api.group,
)?;
@@ -1617,13 +1619,13 @@ pub fn upload_backup_log(
let backup_dir_api: pbs_api_types::BackupDir = Deserialize::deserialize(&param)?;
- let datastore = check_privs_and_load_store(
- store,
+ let datastore = DataStore::lookup_datastore_write(store)?;
+ check_privs(
+ &datastore,
&backup_ns,
&auth_id,
0,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Write),
&backup_dir_api.group,
)?;
let backup_dir = datastore.backup_dir(backup_ns.clone(), backup_dir_api.clone())?;
@@ -1713,13 +1715,13 @@ pub async fn catalog(
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_read(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
&backup_dir.group,
)?;
@@ -1833,13 +1835,13 @@ pub fn pxar_file_download(
let ns = optional_ns_param(&param)?;
let backup_dir: pbs_api_types::BackupDir = Deserialize::deserialize(&param)?;
- let datastore = check_privs_and_load_store(
- store,
+ let datastore = DataStore::lookup_datastore_read(store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_READ,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
&backup_dir.group,
)?;
@@ -2044,13 +2046,13 @@ pub fn get_group_notes(
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_read(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
&backup_group,
)?;
@@ -2092,13 +2094,13 @@ pub fn set_group_notes(
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_write(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Write),
&backup_group,
)?;
@@ -2138,13 +2140,13 @@ pub fn get_notes(
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_read(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
&backup_dir.group,
)?;
@@ -2191,13 +2193,13 @@ pub fn set_notes(
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_write(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Write),
&backup_dir.group,
)?;
@@ -2241,13 +2243,13 @@ pub fn get_protection(
) -> Result<bool, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_read(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Read),
&backup_dir.group,
)?;
@@ -2291,13 +2293,13 @@ pub async fn set_protection(
tokio::task::spawn_blocking(move || {
let ns = ns.unwrap_or_default();
- let datastore = check_privs_and_load_store(
- &store,
+ let datastore = DataStore::lookup_datastore_write(&store)?;
+ check_privs(
+ &datastore,
&ns,
&auth_id,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_BACKUP,
- Some(Operation::Write),
&backup_dir.group,
)?;
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 11/12] datastore: move `fn gc_running` out of DataStoreImpl
2025-05-26 14:14 [pbs-devel] [PATCH proxmox-backup v2 00/12] introduce typestate for datastore/chunkstore Hannes Laimer
` (9 preceding siblings ...)
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 10/12] api: admin: pull datastore loading out of check_privs helper Hannes Laimer
@ 2025-05-26 14:14 ` Hannes Laimer
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 12/12] api/server: replace datastore_lookup with new, state-typed datastore returning functions Hannes Laimer
11 siblings, 0 replies; 13+ messages in thread
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
This way we avoid having to get a `CanWrite` datastore reference
just to check whether the GC lock can be obtained: lookup references
(neither `CanRead` nor `CanWrite`) do not come from the cache that
holds the relevant locks.
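The new check reduces to a try_lock() on the GC mutex of a cached
write reference, roughly as in this self-contained sketch (stand-in
types, and a cache passed as a parameter instead of the real static
map):

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Simplified stand-in for a cached write reference.
struct StoreInner {
    gc_mutex: Mutex<()>,
}

// GC holds gc_mutex while it runs, so a failing try_lock() on a cached
// entry means a collection is in progress. A missing entry means no
// write reference exists, hence no GC can be running either.
fn is_garbage_collection_running(
    cache: &Mutex<HashMap<String, Arc<StoreInner>>>,
    name: &str,
) -> bool {
    let cache = cache.lock().unwrap();
    cache.get(name).is_some_and(|s| s.gc_mutex.try_lock().is_err())
}

fn main() {
    let cache = Mutex::new(HashMap::new());
    cache.lock().unwrap().insert(
        "store1".to_string(),
        Arc::new(StoreInner { gc_mutex: Mutex::new(()) }),
    );

    assert!(!is_garbage_collection_running(&cache, "store1")); // lock is free
    assert!(!is_garbage_collection_running(&cache, "store2")); // not cached
}

Since the guard from a successful try_lock() is dropped immediately,
the check itself never blocks or delays a running collection.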
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
pbs-datastore/src/datastore.rs | 10 ++++++----
pbs-datastore/src/lib.rs | 3 ++-
src/bin/proxmox-backup-proxy.rs | 15 ++-------------
3 files changed, 10 insertions(+), 18 deletions(-)
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index cb2d2172..20ad73c5 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -118,6 +118,12 @@ pub fn ensure_datastore_is_mounted(config: &DataStoreConfig) -> Result<(), Error
}
}
+pub fn is_garbage_collection_running(name: &str) -> bool {
+ let datastore_cache = DATASTORE_MAP_WRITE.lock().unwrap();
+ let cache_entry = datastore_cache.get(name);
+ cache_entry.is_some_and(|s| s.gc_mutex.try_lock().is_err())
+}
+
/// Datastore Management
///
/// A Datastore can store several backups, and provides the
@@ -752,10 +758,6 @@ impl<T: CanRead> DataStore<T> {
self.inner.last_gc_status.lock().unwrap().clone()
}
- pub fn garbage_collection_running(&self) -> bool {
- self.inner.gc_mutex.try_lock().is_err()
- }
-
pub fn try_shared_chunk_store_lock(&self) -> Result<ProcessLockSharedGuard, Error> {
self.inner.chunk_store.try_shared_lock()
}
diff --git a/pbs-datastore/src/lib.rs b/pbs-datastore/src/lib.rs
index 5014b6c0..857ee78e 100644
--- a/pbs-datastore/src/lib.rs
+++ b/pbs-datastore/src/lib.rs
@@ -202,7 +202,8 @@ pub use store_progress::StoreProgress;
mod datastore;
pub use datastore::{
- check_backup_owner, ensure_datastore_is_mounted, get_datastore_mount_status, DataStore,
+ check_backup_owner, ensure_datastore_is_mounted, get_datastore_mount_status,
+ is_garbage_collection_running, DataStore,
};
mod hierarchy;
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index bda2f17b..643a2dbd 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -482,19 +482,8 @@ async fn schedule_datastore_garbage_collection() {
}
};
- {
- // limit datastore scope due to Op::Lookup
- let datastore = match DataStore::lookup_datastore(&store, Some(Operation::Lookup)) {
- Ok(datastore) => datastore,
- Err(err) => {
- eprintln!("lookup_datastore failed - {err}");
- continue;
- }
- };
-
- if datastore.garbage_collection_running() {
- continue;
- }
+ if is_garbage_collection_running(&store) {
+ continue;
}
let worker_type = "garbage_collection";
--
2.39.5
* [pbs-devel] [PATCH proxmox-backup v2 12/12] api/server: replace datastore_lookup with new, state-typed datastore returning functions
2025-05-26 14:14 [pbs-devel] [PATCH proxmox-backup v2 00/12] introduce typestate for datastore/chunkstore Hannes Laimer
` (10 preceding siblings ...)
2025-05-26 14:14 ` [pbs-devel] [PATCH proxmox-backup v2 11/12] datastore: move `fn gc_running` out of DataStoreImpl Hannes Laimer
@ 2025-05-26 14:14 ` Hannes Laimer
11 siblings, 0 replies; 13+ messages in thread
From: Hannes Laimer @ 2025-05-26 14:14 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
pbs-datastore/src/snapshot_reader.rs | 6 ++----
src/api2/admin/datastore.rs | 18 +++++++++---------
src/api2/admin/namespace.rs | 10 +++++-----
src/api2/backup/mod.rs | 8 ++++----
src/api2/reader/mod.rs | 8 ++++----
src/api2/status/mod.rs | 8 ++++----
src/api2/tape/backup.rs | 10 +++++-----
src/api2/tape/restore.rs | 12 ++++++------
src/bin/proxmox-backup-proxy.rs | 4 ++--
src/server/prune_job.rs | 4 ++--
src/server/pull.rs | 9 ++++-----
src/server/push.rs | 4 ++--
src/server/verify_job.rs | 4 ++--
13 files changed, 51 insertions(+), 54 deletions(-)
diff --git a/pbs-datastore/src/snapshot_reader.rs b/pbs-datastore/src/snapshot_reader.rs
index d604507d..6ece122f 100644
--- a/pbs-datastore/src/snapshot_reader.rs
+++ b/pbs-datastore/src/snapshot_reader.rs
@@ -11,8 +11,7 @@ use nix::sys::stat::Mode;
use pbs_config::BackupLockGuard;
use pbs_api_types::{
- print_store_and_ns, ArchiveType, BackupNamespace, Operation, CLIENT_LOG_BLOB_NAME,
- MANIFEST_BLOB_NAME,
+ print_store_and_ns, ArchiveType, BackupNamespace, CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME,
};
use crate::backup_info::BackupDir;
@@ -163,9 +162,8 @@ impl<F: Fn(&[u8; 32]) -> bool, T: CanRead> Iterator for SnapshotChunkIterator<'_
),
};
- let datastore = DataStore::lookup_datastore(
+ let datastore = DataStore::lookup_datastore_read(
self.snapshot_reader.datastore_name(),
- Some(Operation::Read),
)?;
let order =
datastore.get_chunks_in_order(&*index, &self.skip_fn, |_| Ok(()))?;
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 218d7e73..dfcb9123 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -203,7 +203,7 @@ pub fn list_groups(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_read(&store)?;
datastore
.iter_backup_groups(ns.clone())? // FIXME: Namespaces and recursion parameters!
@@ -508,7 +508,7 @@ unsafe fn list_snapshots_blocking(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_read(&store)?;
// FIXME: filter also owner before collecting, for doing that nicely the owner should move into
// backup group and provide an error free (Err -> None) accessor
@@ -720,7 +720,7 @@ pub async fn status(
}
};
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_read(&store)?;
let (counts, gc_status) = if verbose {
let filter_owner = if store_privs & PRIV_DATASTORE_AUDIT != 0 {
@@ -833,7 +833,7 @@ pub fn verify(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_write(&store)?;
let ignore_verified = ignore_verified.unwrap_or(true);
let worker_id;
@@ -1182,7 +1182,7 @@ pub fn prune_datastore(
true,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore_write(&store)?;
let ns = prune_options.ns.clone().unwrap_or_default();
let worker_id = format!("{}:{}", store, ns);
@@ -1220,7 +1220,7 @@ pub fn start_garbage_collection(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore_write(&store)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let job = Job::new("garbage_collection", &store)
@@ -1267,7 +1267,7 @@ pub fn garbage_collection_status(
..Default::default()
};
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_read(&store)?;
let status_in_memory = datastore.last_gc_status();
let state_file = JobState::load("garbage_collection", &store)
.map_err(|err| log::error!("could not open GC statefile for {store}: {err}"))
@@ -1973,7 +1973,7 @@ pub fn get_rrd_stats(
cf: RrdMode,
_param: Value,
) -> Result<Value, Error> {
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_read(&store)?;
let disk_manager = crate::tools::disks::DiskManage::new();
let mut rrd_fields = vec![
@@ -2353,7 +2353,7 @@ pub async fn set_backup_owner(
PRIV_DATASTORE_BACKUP,
)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore_write(&store)?;
let backup_group = datastore.backup_group(ns, backup_group);
let owner = backup_group.get_owner()?;
diff --git a/src/api2/admin/namespace.rs b/src/api2/admin/namespace.rs
index 6cf88d89..44a31269 100644
--- a/src/api2/admin/namespace.rs
+++ b/src/api2/admin/namespace.rs
@@ -5,8 +5,8 @@ use proxmox_router::{http_bail, ApiMethod, Permission, Router, RpcEnvironment};
use proxmox_schema::*;
use pbs_api_types::{
- Authid, BackupGroupDeleteStats, BackupNamespace, NamespaceListItem, Operation,
- DATASTORE_SCHEMA, NS_MAX_DEPTH_SCHEMA, PROXMOX_SAFE_ID_FORMAT,
+ Authid, BackupGroupDeleteStats, BackupNamespace, NamespaceListItem, DATASTORE_SCHEMA,
+ NS_MAX_DEPTH_SCHEMA, PROXMOX_SAFE_ID_FORMAT,
};
use pbs_datastore::DataStore;
@@ -54,7 +54,7 @@ pub fn create_namespace(
check_ns_modification_privs(&store, &ns, &auth_id)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore_write(&store)?;
datastore.create_namespace(&parent, name)
}
@@ -97,7 +97,7 @@ pub fn list_namespaces(
// get result up-front to avoid cloning NS, it's relatively cheap anyway (no IO normally)
let parent_access = check_ns_privs(&store, &parent, &auth_id, NS_PRIVS_OK);
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_read(&store)?;
let iter = match datastore.recursive_iter_backup_ns_ok(parent, max_depth) {
Ok(iter) => iter,
@@ -162,7 +162,7 @@ pub fn delete_namespace(
check_ns_modification_privs(&store, &ns, &auth_id)?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore_write(&store)?;
let (removed_all, stats) = datastore.remove_namespace_recursive(&ns, delete_groups)?;
if !removed_all {
diff --git a/src/api2/backup/mod.rs b/src/api2/backup/mod.rs
index 79354dbf..c160080e 100644
--- a/src/api2/backup/mod.rs
+++ b/src/api2/backup/mod.rs
@@ -19,9 +19,9 @@ use proxmox_schema::*;
use proxmox_sortable_macro::sortable;
use pbs_api_types::{
- ArchiveType, Authid, BackupNamespace, BackupType, Operation, VerifyState,
- BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA,
- BACKUP_TYPE_SCHEMA, CHUNK_DIGEST_SCHEMA, DATASTORE_SCHEMA, PRIV_DATASTORE_BACKUP,
+ ArchiveType, Authid, BackupNamespace, BackupType, VerifyState, BACKUP_ARCHIVE_NAME_SCHEMA,
+ BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA,
+ CHUNK_DIGEST_SCHEMA, DATASTORE_SCHEMA, PRIV_DATASTORE_BACKUP,
};
use pbs_config::CachedUserInfo;
use pbs_datastore::chunk_store::{Read as R, Write as W};
@@ -96,7 +96,7 @@ fn upgrade_to_backup_protocol(
)
.map_err(|err| http_err!(FORBIDDEN, "{err}"))?;
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore_write(&store)?;
let protocols = parts
.headers
diff --git a/src/api2/reader/mod.rs b/src/api2/reader/mod.rs
index 52f0953a..ec9fb751 100644
--- a/src/api2/reader/mod.rs
+++ b/src/api2/reader/mod.rs
@@ -18,9 +18,9 @@ use proxmox_schema::{BooleanSchema, ObjectSchema};
use proxmox_sortable_macro::sortable;
use pbs_api_types::{
- ArchiveType, Authid, Operation, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA,
- BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, CHUNK_DIGEST_SCHEMA,
- DATASTORE_SCHEMA, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
+ ArchiveType, Authid, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA,
+ BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, CHUNK_DIGEST_SCHEMA, DATASTORE_SCHEMA,
+ PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ,
};
use pbs_config::CachedUserInfo;
use pbs_datastore::chunk_store::Read as R;
@@ -92,7 +92,7 @@ fn upgrade_to_backup_reader_protocol(
bail!("no permissions on /{}", acl_path.join("/"));
}
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_read(&store)?;
let backup_dir = pbs_api_types::BackupDir::deserialize(&param)?;
diff --git a/src/api2/status/mod.rs b/src/api2/status/mod.rs
index e066a99c..5a85cf80 100644
--- a/src/api2/status/mod.rs
+++ b/src/api2/status/mod.rs
@@ -10,8 +10,8 @@ use proxmox_schema::api;
use proxmox_sortable_macro::sortable;
use pbs_api_types::{
- Authid, DataStoreConfig, DataStoreMountStatus, DataStoreStatusListItem, Operation,
- PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
+ Authid, DataStoreConfig, DataStoreMountStatus, DataStoreStatusListItem, PRIV_DATASTORE_AUDIT,
+ PRIV_DATASTORE_BACKUP,
};
use pbs_config::CachedUserInfo;
@@ -69,7 +69,7 @@ pub async fn datastore_status(
};
if !allowed {
- if let Ok(datastore) = DataStore::lookup_datastore(store, Some(Operation::Lookup)) {
+ if let Ok(datastore) = DataStore::lookup_datastore_read(store) {
if can_access_any_namespace(datastore, &auth_id, &user_info) {
list.push(DataStoreStatusListItem::empty(store, None, mount_status));
}
@@ -77,7 +77,7 @@ pub async fn datastore_status(
continue;
}
- let datastore = match DataStore::lookup_datastore(store, Some(Operation::Read)) {
+ let datastore = match DataStore::lookup_datastore_read(store) {
Ok(datastore) => datastore,
Err(err) => {
list.push(DataStoreStatusListItem::empty(
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 306d5936..d9ff7ba9 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -11,9 +11,9 @@ use proxmox_schema::api;
use proxmox_worker_task::WorkerTaskContext;
use pbs_api_types::{
- print_ns_and_snapshot, print_store_and_ns, Authid, MediaPoolConfig, Operation,
- TapeBackupJobConfig, TapeBackupJobSetup, TapeBackupJobStatus, JOB_ID_SCHEMA,
- PRIV_DATASTORE_READ, PRIV_TAPE_AUDIT, PRIV_TAPE_WRITE, UPID_SCHEMA,
+ print_ns_and_snapshot, print_store_and_ns, Authid, MediaPoolConfig, TapeBackupJobConfig,
+ TapeBackupJobSetup, TapeBackupJobStatus, JOB_ID_SCHEMA, PRIV_DATASTORE_READ, PRIV_TAPE_AUDIT,
+ PRIV_TAPE_WRITE, UPID_SCHEMA,
};
use pbs_config::CachedUserInfo;
@@ -151,7 +151,7 @@ pub fn do_tape_backup_job(
let worker_type = job.jobtype().to_string();
- let datastore = DataStore::lookup_datastore(&setup.store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_read(&setup.store)?;
let (config, _digest) = pbs_config::media_pool::config()?;
let pool_config: MediaPoolConfig = config.lookup("pool", &setup.pool)?;
@@ -307,7 +307,7 @@ pub fn backup(
check_backup_permission(&auth_id, &setup.store, &setup.pool, &setup.drive)?;
- let datastore = DataStore::lookup_datastore(&setup.store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_read(&setup.store)?;
let (config, _digest) = pbs_config::media_pool::config()?;
let pool_config: MediaPoolConfig = config.lookup("pool", &setup.pool)?;
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 8f089c20..1147623b 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -20,10 +20,10 @@ use proxmox_worker_task::WorkerTaskContext;
use pbs_api_types::{
parse_ns_and_snapshot, print_ns_and_snapshot, ArchiveType, Authid, BackupDir, BackupNamespace,
- CryptMode, NotificationMode, Operation, TapeRestoreNamespace, Userid,
- DATASTORE_MAP_ARRAY_SCHEMA, DATASTORE_MAP_LIST_SCHEMA, DRIVE_NAME_SCHEMA, MANIFEST_BLOB_NAME,
- MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_TAPE_READ,
- TAPE_RESTORE_NAMESPACE_SCHEMA, TAPE_RESTORE_SNAPSHOT_SCHEMA, UPID_SCHEMA,
+ CryptMode, NotificationMode, TapeRestoreNamespace, Userid, DATASTORE_MAP_ARRAY_SCHEMA,
+ DATASTORE_MAP_LIST_SCHEMA, DRIVE_NAME_SCHEMA, MANIFEST_BLOB_NAME, MAX_NAMESPACE_DEPTH,
+ PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_TAPE_READ, TAPE_RESTORE_NAMESPACE_SCHEMA,
+ TAPE_RESTORE_SNAPSHOT_SCHEMA, UPID_SCHEMA,
};
use pbs_client::pxar::tools::handle_root_with_optional_format_version_prelude;
use pbs_config::CachedUserInfo;
@@ -145,10 +145,10 @@ impl TryFrom<String> for DataStoreMap<W> {
if let Some(index) = store.find('=') {
let mut target = store.split_off(index);
target.remove(0); // remove '='
- let datastore = DataStore::lookup_datastore(&target, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore_write(&target)?;
map.insert(store, datastore);
} else if default.is_none() {
- default = Some(DataStore::lookup_datastore(&store, Some(Operation::Write))?);
+ default = Some(DataStore::lookup_datastore_write(&store)?);
} else {
bail!("multiple default stores given");
}
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 643a2dbd..4f5d9681 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -40,7 +40,7 @@ use pbs_buildcfg::configdir;
use proxmox_time::CalendarEvent;
use pbs_api_types::{
- Authid, DataStoreConfig, Operation, PruneJobConfig, SyncJobConfig, TapeBackupJobConfig,
+ Authid, DataStoreConfig, PruneJobConfig, SyncJobConfig, TapeBackupJobConfig,
VerificationJobConfig,
};
@@ -516,7 +516,7 @@ async fn schedule_datastore_garbage_collection() {
Err(_) => continue, // could not get lock
};
- let datastore = match DataStore::lookup_datastore(&store, Some(Operation::Write)) {
+ let datastore = match DataStore::lookup_datastore_write(&store) {
Ok(datastore) => datastore,
Err(err) => {
log::warn!("skipping scheduled GC on {store}, could not look it up - {err}");
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index 395aaee4..20cb9218 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -4,7 +4,7 @@ use anyhow::Error;
use tracing::{info, warn};
use pbs_api_types::{
- print_store_and_ns, Authid, KeepOptions, Operation, PruneJobOptions, MAX_NAMESPACE_DEPTH,
+ print_store_and_ns, Authid, KeepOptions, PruneJobOptions, MAX_NAMESPACE_DEPTH,
PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE,
};
use pbs_datastore::chunk_store::CanWrite;
@@ -128,7 +128,7 @@ pub fn do_prune_job(
auth_id: &Authid,
schedule: Option<String>,
) -> Result<String, Error> {
- let datastore = DataStore::lookup_datastore(&store, Some(Operation::Write))?;
+ let datastore = DataStore::lookup_datastore_write(&store)?;
let worker_type = job.jobtype().to_string();
let auth_id = auth_id.clone();
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 573aa805..d885a3a8 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -12,9 +12,8 @@ use tracing::info;
use pbs_api_types::{
print_store_and_ns, ArchiveType, Authid, BackupArchiveName, BackupDir, BackupGroup,
- BackupNamespace, GroupFilter, Operation, RateLimitConfig, Remote, VerifyState,
- CLIENT_LOG_BLOB_NAME, MANIFEST_BLOB_NAME, MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_AUDIT,
- PRIV_DATASTORE_BACKUP,
+ BackupNamespace, GroupFilter, RateLimitConfig, Remote, VerifyState, CLIENT_LOG_BLOB_NAME,
+ MANIFEST_BLOB_NAME, MAX_NAMESPACE_DEPTH, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP,
};
use pbs_client::BackupRepository;
use pbs_config::CachedUserInfo;
@@ -110,12 +109,12 @@ impl PullParameters {
})
} else {
Arc::new(LocalSource {
- store: DataStore::lookup_datastore(remote_store, Some(Operation::Read))?,
+ store: DataStore::lookup_datastore_read(remote_store)?,
ns: remote_ns,
})
};
let target = PullTarget {
- store: DataStore::lookup_datastore(store, Some(Operation::Write))?,
+ store: DataStore::lookup_datastore_write(store)?,
ns,
};
diff --git a/src/server/push.rs b/src/server/push.rs
index ff9d9358..532fc688 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -12,7 +12,7 @@ use tracing::{info, warn};
use pbs_api_types::{
print_store_and_ns, ApiVersion, ApiVersionInfo, ArchiveType, Authid, BackupArchiveName,
BackupDir, BackupGroup, BackupGroupDeleteStats, BackupNamespace, GroupFilter, GroupListItem,
- NamespaceListItem, Operation, RateLimitConfig, Remote, SnapshotListItem, CLIENT_LOG_BLOB_NAME,
+ NamespaceListItem, RateLimitConfig, Remote, SnapshotListItem, CLIENT_LOG_BLOB_NAME,
MANIFEST_BLOB_NAME, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_READ, PRIV_REMOTE_DATASTORE_BACKUP,
PRIV_REMOTE_DATASTORE_MODIFY, PRIV_REMOTE_DATASTORE_PRUNE,
};
@@ -107,7 +107,7 @@ impl PushParameters {
let remove_vanished = remove_vanished.unwrap_or(false);
let encrypted_only = encrypted_only.unwrap_or(false);
let verified_only = verified_only.unwrap_or(false);
- let store = DataStore::lookup_datastore(store, Some(Operation::Read))?;
+ let store = DataStore::lookup_datastore_read(store)?;
if !store.namespace_exists(&ns) {
bail!(
diff --git a/src/server/verify_job.rs b/src/server/verify_job.rs
index a15a257d..bd5253ed 100644
--- a/src/server/verify_job.rs
+++ b/src/server/verify_job.rs
@@ -1,7 +1,7 @@
use anyhow::{format_err, Error};
use tracing::{error, info};
-use pbs_api_types::{Authid, Operation, VerificationJobConfig};
+use pbs_api_types::{Authid, VerificationJobConfig};
use pbs_datastore::DataStore;
use proxmox_rest_server::WorkerTask;
@@ -18,7 +18,7 @@ pub fn do_verification_job(
schedule: Option<String>,
to_stdout: bool,
) -> Result<String, Error> {
- let datastore = DataStore::lookup_datastore(&verification_job.store, Some(Operation::Read))?;
+ let datastore = DataStore::lookup_datastore_write(&verification_job.store)?;
let outdated_after = verification_job.outdated_after;
let ignore_verified_snapshots = verification_job.ignore_verified.unwrap_or(true);
--
2.39.5