From: Wolfgang Bumiller <w.bumiller@proxmox.com>
To: Stefan Reiter <s.reiter@proxmox.com>
Cc: pve-devel@lists.proxmox.com, pbs-devel@lists.proxmox.com
Subject: Re: [pve-devel] [pbs-devel] [PATCH proxmox-backup-qemu 8/9] add shared_cache module
Date: Fri, 4 Jun 2021 14:16:27 +0200
Message-ID: <20210604121627.hrvzcoju6nape6cf@olga.proxmox.com>
In-Reply-To: <20210602143833.4423-9-s.reiter@proxmox.com>

On Wed, Jun 02, 2021 at 04:38:32PM +0200, Stefan Reiter wrote:
> Provides a shared AsyncLruCache of 256MB (w/ 4MB chunks) that can be
> used by multiple readers at the same time. It is dropped once no more
> readers exist, so the memory gets freed if all QEMU block/pbs instances
> disappear.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  src/lib.rs          |  7 ++++++-
>  src/shared_cache.rs | 36 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 42 insertions(+), 1 deletion(-)
>  create mode 100644 src/shared_cache.rs
> 
> diff --git a/src/lib.rs b/src/lib.rs
> index 05d7b58..aa167f7 100644
> --- a/src/lib.rs
> +++ b/src/lib.rs
> @@ -25,6 +25,7 @@ mod restore;
>  use restore::*;
>  
>  mod tools;
> +mod shared_cache;
>  
>  pub const PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE: u64 = 1024*1024*4;
>  
> @@ -804,7 +805,11 @@ pub extern "C" fn proxmox_restore_connect_async(
>  pub extern "C" fn proxmox_restore_disconnect(handle: *mut ProxmoxRestoreHandle) {
>  
>      let restore_task = handle as * mut Arc<RestoreTask>;
> -    unsafe { Box::from_raw(restore_task) }; //drop(restore_task)
> +    let restore_task = unsafe { Box::from_raw(restore_task) };
> +    drop(restore_task);
> +
> +    // after dropping, cache may be unused (if no other handles open)
> +    shared_cache::shared_chunk_cache_cleanup();
>  }
>  
>  /// Restore an image (sync)
> diff --git a/src/shared_cache.rs b/src/shared_cache.rs
> new file mode 100644
> index 0000000..bebae5b
> --- /dev/null
> +++ b/src/shared_cache.rs
> @@ -0,0 +1,36 @@
> +use once_cell::sync::OnceCell;
> +use proxmox_backup::backup::ChunkCache;
> +use proxmox_backup::tools::async_lru_cache::AsyncLruCache;
> +use std::sync::{Arc, Mutex};
> +
> +const SHARED_CACHE_CAPACITY: usize = 64; // 256 MB
> +static SHARED_CACHE: OnceCell<Mutex<Option<ChunkCache>>> = OnceCell::new();

Using OnceCell *and* Option is a bit much; `get_shared_chunk_cache()` can
just initialize the cache in `get_or_init()` directly.
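
Untested sketch of what I mean for the get path, reusing the names from
your patch (the cleanup side is left out here):

    use once_cell::sync::OnceCell;
    use std::sync::{Arc, Mutex};

    use proxmox_backup::backup::ChunkCache;
    use proxmox_backup::tools::async_lru_cache::AsyncLruCache;

    const SHARED_CACHE_CAPACITY: usize = 64; // 64 * 4 MB chunks = 256 MB
    static SHARED_CACHE: OnceCell<Mutex<ChunkCache>> = OnceCell::new();

    pub fn get_shared_chunk_cache() -> ChunkCache {
        // lazily create the cache on first access, no Option needed
        let guard = SHARED_CACHE
            .get_or_init(|| Mutex::new(Arc::new(AsyncLruCache::new(SHARED_CACHE_CAPACITY))))
            .lock()
            .unwrap();
        Arc::clone(&*guard)
    }

The cleanup could then swap in a fresh empty cache instead of take()ing
an Option once the strong count drops to 1, which should free the old
one just the same.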

> +
> +pub fn get_shared_chunk_cache() -> ChunkCache {
> +    let mut guard = SHARED_CACHE
> +        .get_or_init(|| Mutex::new(None))
> +        .lock()
> +        .unwrap();
> +    match &*guard {
> +        Some(cache) => Arc::clone(cache),
> +        None => {
> +            let cache = Arc::new(AsyncLruCache::new(SHARED_CACHE_CAPACITY));
> +            *guard = Some(Arc::clone(&cache));
> +            cache
> +        }
> +    }
> +}
> +
> +pub fn shared_chunk_cache_cleanup() {
> +    let mut guard = SHARED_CACHE
> +        .get_or_init(|| Mutex::new(None))
> +        .lock()
> +        .unwrap();
> +    if let Some(arc) = guard.as_ref() {
> +        let refcount = Arc::strong_count(arc);
> +        if refcount == 1 {
> +            // no one else is using the cache anymore, drop it
> +            let _drop = guard.take();
> +        }
> +    }
> +}
> -- 
> 2.30.2




