From: Christian Ebner <c.ebner@proxmox.com>
To: Lukas Wagner <l.wagner@proxmox.com>,
Proxmox Backup Server development discussion
<pbs-devel@lists.proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup v8 30/45] datastore: add local datastore cache for network attached storages
Date: Fri, 18 Jul 2025 16:59:23 +0200
Message-ID: <9690a105-f738-4f05-b247-eea7dc70026d@proxmox.com>
In-Reply-To: <c882b375-4749-4884-b9d0-7936b8062322@proxmox.com>
On 7/18/25 1:24 PM, Lukas Wagner wrote:
> Some rustdoc comments are missing, but otherwise looks fine to me.
>
> As a general remark, applying to this patch but also in general: I think we should put a much larger
> focus on writing unit and integration tests for any significant chunk of new code, e.g.
> the LocalDatastoreLruCache, and also slowly refactor existing code so that it can be tested.
>
> Naturally, it is additional effort, but IMO it pays off later. I'd also say that it makes
> reviews much easier, since the tests are living proof in the code that it works, and as a reviewer
> I also immediately see how the code is supposed to be used. Furthermore, tests are a good way
> to detect regressions later on, e.g. due to changing third-party dependencies, and of course
> also due to changes in the product code itself.
>
> That being said, I won't ask you to write tests for this patch now, since adding them after
> the fact is a big pain and might require a big refactor, e.g. to separate out and abstract away
> any dependencies on existing code. I just felt the urge to bring this up, since this
> is something we can definitely improve on.
Agreed, noted this in my todo list. For the time being I added the
missing docstrings as requested.
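A minimal illustration of the kind of unit test discussed above could exercise the LRU eviction policy purely through a public API. Note this is a hypothetical sketch: `TinyLru` is a simplified stand-in, not the real AsyncLruCache, which would need its ChunkStore dependency abstracted away first.

```rust
use std::collections::VecDeque;

/// Stand-in LRU over 32-byte chunk digests, eviction-only, no values.
struct TinyLru {
    capacity: usize,
    order: VecDeque<[u8; 32]>, // front = least recently used
}

impl TinyLru {
    fn new(capacity: usize) -> Self {
        Self { capacity, order: VecDeque::new() }
    }

    /// Insert as most recently used; returns the evicted digest, if any.
    fn insert(&mut self, digest: [u8; 32]) -> Option<[u8; 32]> {
        self.order.retain(|d| d != &digest);
        self.order.push_back(digest);
        if self.order.len() > self.capacity {
            self.order.pop_front()
        } else {
            None
        }
    }

    fn contains(&self, digest: &[u8; 32]) -> bool {
        self.order.contains(digest)
    }
}

fn main() {
    let mut cache = TinyLru::new(2);
    let (a, b, c) = ([1u8; 32], [2u8; 32], [3u8; 32]);
    assert_eq!(cache.insert(a), None);
    assert_eq!(cache.insert(b), None);
    // capacity exceeded: a was least recently used, so it gets evicted
    assert_eq!(cache.insert(c), Some(a));
    assert!(cache.contains(&b) && cache.contains(&c) && !cache.contains(&a));
    println!("eviction test ok");
}
```

Tests of this shape document the intended replacement behavior without touching the filesystem at all.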
> On 2025-07-15 14:53, Christian Ebner wrote:
>> Use a local datastore as cache, using an LRU cache replacement policy,
>> for operations on a datastore backed by a network, e.g. by an S3 object
>> store backend. The goal is to reduce the number of requests to the
>> backend and thereby save costs (monetary as well as time).
>>
>> Cached chunks are stored on the local datastore cache, which already
>> contains the datastore's contents metadata (namespace, group,
>> snapshot, owner, index files, etc.) used to perform fast lookups.
>> The cache itself only stores chunk digests, not the raw data itself.
>> When payload data is required, contents are looked up and read from
>> the local datastore cache filesystem, with a fallback to fetch from
>> the backend if the presumably cached entry is not found.
>>
>> The cacher allows fetching cache items on cache misses via the access
>> method.
>>
>> The capacity of the cache is derived from the local datastore cache
>> filesystem or the user-configured value, whichever is smaller.
>> The capacity is only set on instantiation of the store, and the current
>> value is kept as long as the datastore remains cached in the datastore
>> cache. To change the value, the store has to either be set to offline
>> mode and back, or the services have to be restarted.
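The capacity derivation described above (and implemented in the datastore.rs hunk below) can be sketched in isolation: the capacity is the number of 16 MiB chunk slots available on the cache filesystem, clamped by the optional user-configured limit. `derive_cache_capacity` is a hypothetical standalone helper for illustration, not a function from the patch.

```rust
/// Each cache slot accounts for one chunk of up to 16 MiB.
const CHUNK_SLOT_SIZE: u64 = 16 * 1024 * 1024;

/// Derive the LRU cache capacity from the filesystem's available bytes,
/// optionally clamped by a user-configured maximum cache size in bytes.
fn derive_cache_capacity(fs_available: u64, max_cache_size: Option<u64>) -> usize {
    let mut capacity = fs_available / CHUNK_SLOT_SIZE;
    if let Some(max) = max_cache_size {
        capacity = capacity.min(max / CHUNK_SLOT_SIZE);
    }
    usize::try_from(capacity).unwrap_or_default()
}

fn main() {
    // 1 GiB available, no user limit: 64 slots
    assert_eq!(derive_cache_capacity(1 << 30, None), 64);
    // 1 GiB available, 512 MiB user limit: the smaller value wins, 32 slots
    assert_eq!(derive_cache_capacity(1 << 30, Some(512 * 1024 * 1024)), 32);
    println!("capacity derivation ok");
}
```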
>>
>> Basic performance tests:
>>
>> Backup and upload of the contents of the linux git repository to AWS S3,
>> snapshots removed in-between each backup run to avoid PBS' other
>> chunk reuse optimizations.
>>
>> no-cache:
>> had to backup 5.069 GiB of 5.069 GiB (compressed 3.718 GiB) in 50.76 s (average 102.258 MiB/s)
>> empty-cache:
>> had to backup 5.069 GiB of 5.069 GiB (compressed 3.718 GiB) in 50.42 s (average 102.945 MiB/s)
>> all-cached:
>> had to backup 5.069 GiB of 5.069 GiB (compressed 3.718 GiB) in 43.78 s (average 118.554 MiB/s)
>>
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> changes since version 7:
>> - use info instead of warn, as these might end up in the task logs as
>> well, possibly causing confusion if logged at warning level
>>
>> pbs-datastore/src/datastore.rs | 70 ++++++-
>> pbs-datastore/src/lib.rs | 3 +
>> .../src/local_datastore_lru_cache.rs | 172 ++++++++++++++++++
>> 3 files changed, 244 insertions(+), 1 deletion(-)
>> create mode 100644 pbs-datastore/src/local_datastore_lru_cache.rs
>>
>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>> index 89f45e7f8..cab0f5b4d 100644
>> --- a/pbs-datastore/src/datastore.rs
>> +++ b/pbs-datastore/src/datastore.rs
>> @@ -40,9 +40,10 @@ use crate::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
>> use crate::fixed_index::{FixedIndexReader, FixedIndexWriter};
>> use crate::hierarchy::{ListGroups, ListGroupsType, ListNamespaces, ListNamespacesRecursive};
>> use crate::index::IndexFile;
>> +use crate::local_datastore_lru_cache::S3Cacher;
>> use crate::s3::S3_CONTENT_PREFIX;
>> use crate::task_tracking::{self, update_active_operations};
>> -use crate::DataBlob;
>> +use crate::{DataBlob, LocalDatastoreLruCache};
>>
>> static DATASTORE_MAP: LazyLock<Mutex<HashMap<String, Arc<DataStoreImpl>>>> =
>> LazyLock::new(|| Mutex::new(HashMap::new()));
>> @@ -136,6 +137,7 @@ pub struct DataStoreImpl {
>> last_digest: Option<[u8; 32]>,
>> sync_level: DatastoreFSyncLevel,
>> backend_config: DatastoreBackendConfig,
>> + lru_store_caching: Option<LocalDatastoreLruCache>,
>> }
>>
>> impl DataStoreImpl {
>> @@ -151,6 +153,7 @@ impl DataStoreImpl {
>> last_digest: None,
>> sync_level: Default::default(),
>> backend_config: Default::default(),
>> + lru_store_caching: None,
>> })
>> }
>> }
>> @@ -255,6 +258,37 @@ impl DataStore {
>> Ok(backend_type)
>> }
>>
>> + pub fn cache(&self) -> Option<&LocalDatastoreLruCache> {
>> + self.inner.lru_store_caching.as_ref()
>> + }
>> +
>> + /// Check if the digest is present in the local datastore cache.
>> + /// Always returns false if there is no cache configured for this datastore.
>> + pub fn cache_contains(&self, digest: &[u8; 32]) -> bool {
>> + if let Some(cache) = self.inner.lru_store_caching.as_ref() {
>> + return cache.contains(digest);
>> + }
>> + false
>> + }
>> +
>> + /// Insert digest as most recently used in the cache.
>> + /// Returns with success if there is no cache configured for this datastore.
>> + pub fn cache_insert(&self, digest: &[u8; 32], chunk: &DataBlob) -> Result<(), Error> {
>> + if let Some(cache) = self.inner.lru_store_caching.as_ref() {
>> + return cache.insert(digest, chunk);
>> + }
>> + Ok(())
>> + }
>> +
>
> Missing rustdoc comment for this pub fn
>
>> + pub fn cacher(&self) -> Result<Option<S3Cacher>, Error> {
>> + self.backend().map(|backend| match backend {
>> + DatastoreBackend::S3(s3_client) => {
>> + Some(S3Cacher::new(s3_client, self.inner.chunk_store.clone()))
>> + }
>> + DatastoreBackend::Filesystem => None,
>> + })
>> + }
>> +
>> pub fn lookup_datastore(
>> name: &str,
>> operation: Option<Operation>,
>> @@ -437,6 +471,33 @@ impl DataStore {
>> .parse_property_string(config.backend.as_deref().unwrap_or(""))?,
>> )?;
>>
>> + let lru_store_caching = if DatastoreBackendType::S3 == backend_config.ty.unwrap_or_default()
>> + {
>> + let mut cache_capacity = 0;
>> + if let Ok(fs_info) = proxmox_sys::fs::fs_info(&chunk_store.base_path()) {
>> + cache_capacity = fs_info.available / (16 * 1024 * 1024);
>> + }
>> + if let Some(max_cache_size) = backend_config.max_cache_size {
>> + info!(
>> + "Got requested max cache size {max_cache_size} for store {}",
>> + config.name
>> + );
>> + let max_cache_capacity = max_cache_size.as_u64() / (16 * 1024 * 1024);
>> + cache_capacity = cache_capacity.min(max_cache_capacity);
>> + }
>> + let cache_capacity = usize::try_from(cache_capacity).unwrap_or_default();
>> +
>> + info!(
>> + "Using datastore cache with capacity {cache_capacity} for store {}",
>> + config.name
>> + );
>> +
>> + let cache = LocalDatastoreLruCache::new(cache_capacity, chunk_store.clone());
>> + Some(cache)
>> + } else {
>> + None
>> + };
>> +
>> Ok(DataStoreImpl {
>> chunk_store,
>> gc_mutex: Mutex::new(()),
>> @@ -446,6 +507,7 @@ impl DataStore {
>> last_digest,
>> sync_level: tuning.sync_level.unwrap_or_default(),
>> backend_config,
>> + lru_store_caching,
>> })
>> }
>>
>> @@ -1580,6 +1642,12 @@ impl DataStore {
>> chunk_count += 1;
>>
>> if atime < min_atime {
>> + if let Some(cache) = self.cache() {
>> + let mut digest_bytes = [0u8; 32];
>> + hex::decode_to_slice(digest.as_bytes(), &mut digest_bytes)?;
>> + // ignore errors, phase 3 will retry cleanup anyways
>> + let _ = cache.remove(&digest_bytes);
>> + }
>> delete_list.push(content.key);
>> if bad {
>> gc_status.removed_bad += 1;
>> diff --git a/pbs-datastore/src/lib.rs b/pbs-datastore/src/lib.rs
>> index ca6fdb7d8..b9eb035c2 100644
>> --- a/pbs-datastore/src/lib.rs
>> +++ b/pbs-datastore/src/lib.rs
>> @@ -217,3 +217,6 @@ pub use snapshot_reader::SnapshotReader;
>>
>> mod local_chunk_reader;
>> pub use local_chunk_reader::LocalChunkReader;
>> +
>> +mod local_datastore_lru_cache;
>> +pub use local_datastore_lru_cache::LocalDatastoreLruCache;
>> diff --git a/pbs-datastore/src/local_datastore_lru_cache.rs b/pbs-datastore/src/local_datastore_lru_cache.rs
>> new file mode 100644
>> index 000000000..bb64c52f3
>> --- /dev/null
>> +++ b/pbs-datastore/src/local_datastore_lru_cache.rs
>> @@ -0,0 +1,172 @@
>> +//! Use a local datastore as cache for operations on a datastore attached via
>> +//! a network layer (e.g. via the S3 backend).
>> +
>> +use std::future::Future;
>> +use std::sync::Arc;
>> +
>> +use anyhow::{bail, Error};
>> +use http_body_util::BodyExt;
>> +
>> +use pbs_tools::async_lru_cache::{AsyncCacher, AsyncLruCache};
>> +use proxmox_s3_client::S3Client;
>> +
>> +use crate::ChunkStore;
>> +use crate::DataBlob;
>> +
>
> v missing rustdoc for pub struct
>
>> +#[derive(Clone)]
>> +pub struct S3Cacher {
>> + client: Arc<S3Client>,
>> + store: Arc<ChunkStore>,
>> +}
>> +
>> +impl AsyncCacher<[u8; 32], ()> for S3Cacher {
>> + fn fetch(
>> + &self,
>> + key: [u8; 32],
>> + ) -> Box<dyn Future<Output = Result<Option<()>, Error>> + Send + 'static> {
>> + let client = self.client.clone();
>> + let store = self.store.clone();
>
> rather use Arc::clone(&...) here to avoid ambiguity
>
>> + Box::new(async move {
>> + let object_key = crate::s3::object_key_from_digest(&key)?;
>> + match client.get_object(object_key).await? {
>> + None => bail!("could not fetch object with key {}", hex::encode(key)),
>> + Some(response) => {
>> + let bytes = response.content.collect().await?.to_bytes();
>> + let chunk = DataBlob::from_raw(bytes.to_vec())?;
>> + store.insert_chunk(&chunk, &key)?;
>> + Ok(Some(()))
>> + }
>> + }
>> + })
>> + }
>> +}
>> +
>> +impl S3Cacher {
>
> v missing rustdoc for pub fn
>
>> + pub fn new(client: Arc<S3Client>, store: Arc<ChunkStore>) -> Self {
>> + Self { client, store }
>> + }
>> +}
>> +
>> +/// LRU cache using local datastore for caching chunks
>> +///
>> +/// Uses an LRU cache, but stores the values on the filesystem
>> +/// rather than in-memory
>> +pub struct LocalDatastoreLruCache {
>> + cache: AsyncLruCache<[u8; 32], ()>,
>> + store: Arc<ChunkStore>,
>> +}
>> +
>> +impl LocalDatastoreLruCache {
>> + pub fn new(capacity: usize, store: Arc<ChunkStore>) -> Self {
>> + Self {
>> + cache: AsyncLruCache::new(capacity),
>> + store,
>> + }
>> + }
>> +
>> + /// Insert a new chunk into the local datastore cache.
>> + ///
>> + /// Fails if the chunk cannot be inserted successfully.
>> + pub fn insert(&self, digest: &[u8; 32], chunk: &DataBlob) -> Result<(), Error> {
>> + self.store.insert_chunk(chunk, digest)?;
>> + self.cache.insert(*digest, (), |digest| {
>> + let (path, _digest_str) = self.store.chunk_path(&digest);
>> + // Truncate to free up space but keep the inode around, since that
>> + // is used as marker for chunks in use by garbage collection.
>> + if let Err(err) = nix::unistd::truncate(&path, 0) {
>> + if err != nix::errno::Errno::ENOENT {
>> + return Err(Error::from(err));
>> + }
>> + }
>> + Ok(())
>> + })
>> + }
>> +
>> + /// Remove a chunk from the local datastore cache.
>> + ///
>> + /// Fails if the chunk cannot be deleted successfully.
>> + pub fn remove(&self, digest: &[u8; 32]) -> Result<(), Error> {
>> + self.cache.remove(*digest);
>> + let (path, _digest_str) = self.store.chunk_path(digest);
>> + std::fs::remove_file(path).map_err(Error::from)
>> + }
>> +
>
> v missing rustdoc
>
>> + pub async fn access(
>> + &self,
>> + digest: &[u8; 32],
>> + cacher: &mut S3Cacher,
>> + ) -> Result<Option<DataBlob>, Error> {
>> + if self
>> + .cache
>> + .access(*digest, cacher, |digest| {
>> + let (path, _digest_str) = self.store.chunk_path(&digest);
>> + // Truncate to free up space but keep the inode around, since that
>> + // is used as marker for chunks in use by garbage collection.
>> + if let Err(err) = nix::unistd::truncate(&path, 0) {
>> + if err != nix::errno::Errno::ENOENT {
>> + return Err(Error::from(err));
>> + }
>> + }
>> + Ok(())
>> + })
>> + .await?
>> + .is_some()
>> + {
>> + let (path, _digest_str) = self.store.chunk_path(digest);
>> + let mut file = match std::fs::File::open(&path) {
>> + Ok(file) => file,
>> + Err(err) => {
>> + // Expected chunk to be present since LRU cache has it, but it is missing
>> + // locally, try to fetch again
>> + if err.kind() == std::io::ErrorKind::NotFound {
>> + let object_key = crate::s3::object_key_from_digest(digest)?;
>> + match cacher.client.get_object(object_key).await? {
>> + None => {
>> + bail!("could not fetch object with key {}", hex::encode(digest))
>> + }
>> + Some(response) => {
>> + let bytes = response.content.collect().await?.to_bytes();
>> + let chunk = DataBlob::from_raw(bytes.to_vec())?;
>> + self.store.insert_chunk(&chunk, digest)?;
>> + std::fs::File::open(&path)?
>> + }
>> + }
>> + } else {
>> + return Err(Error::from(err));
>> + }
>> + }
>> + };
>> + let chunk = match DataBlob::load_from_reader(&mut file) {
>> + Ok(chunk) => chunk,
>> + Err(err) => {
>> + use std::io::Seek;
>> + // Check if file is empty marker file, try fetching content if so
>> + if file.seek(std::io::SeekFrom::End(0))? == 0 {
>> + let object_key = crate::s3::object_key_from_digest(digest)?;
>> + match cacher.client.get_object(object_key).await? {
>> + None => {
>> + bail!("could not fetch object with key {}", hex::encode(digest))
>> + }
>> + Some(response) => {
>> + let bytes = response.content.collect().await?.to_bytes();
>> + let chunk = DataBlob::from_raw(bytes.to_vec())?;
>> + self.store.insert_chunk(&chunk, digest)?;
>> + let mut file = std::fs::File::open(&path)?;
>> + DataBlob::load_from_reader(&mut file)?
>> + }
>> + }
>> + } else {
>> + return Err(err);
>> + }
>> + }
>> + };
>> + Ok(Some(chunk))
>> + } else {
>> + Ok(None)
>> + }
>> + }
>> +
>
> v missing rustdoc
>
>> + pub fn contains(&self, digest: &[u8; 32]) -> bool {
>> + self.cache.contains(*digest)
>> + }
>> +}
>
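As an aside, the truncate-but-keep-inode trick used in the insert and access callbacks above can be demonstrated in isolation. This sketch uses only std (the patch uses nix::unistd::truncate; OpenOptions plus set_len is the std-only equivalent) and a temp-file path chosen purely for the demo:

```rust
use std::fs;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("chunk-marker-demo");

    // Simulate a cached chunk with some payload.
    fs::File::create(&path)?.write_all(b"chunk payload")?;
    assert!(fs::metadata(&path)?.len() > 0);

    // Evict: free the payload space but keep the inode around. Garbage
    // collection can still see the chunk file as "known", while the data
    // itself is gone and must be re-fetched from the backend on access.
    fs::OpenOptions::new().write(true).open(&path)?.set_len(0)?;

    // The file still exists, but is now an empty marker.
    assert!(path.exists());
    assert_eq!(fs::metadata(&path)?.len(), 0);

    fs::remove_file(&path)?;
    println!("marker demo ok");
    Ok(())
}
```

This is also why the access path above has to handle the zero-length case: finding the digest in the LRU cache does not guarantee the payload is still on disk.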
_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel