public inbox for pve-devel@lists.proxmox.com
From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com,
	"Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH storage 1/2] zfs: only use cache when listing images locally
Date: Wed, 16 Nov 2022 14:30:22 +0100	[thread overview]
Message-ID: <39b12b91-05eb-8ad3-d7e0-6e67a3d1d103@proxmox.com> (raw)
In-Reply-To: <1668596522.lpeo4rqk2k.astroid@yuna.none>

On 16.11.22 at 12:18, Fabian Grünbichler wrote:
> On November 7, 2022 12:00 pm, Fiona Ebner wrote:
>> The plugin for remote ZFS storages currently also uses the same
>> list_images() as the plugin for local ZFS storages. The issue with
>> this is that there is only one cache which does not remember the
>> target host where the information originated.
>>
>> Simply restrict the cache to be used for the local ZFS plugin only. An
>> alternative solution would be to use a cache for each target host, but
>> that seems a bit more involved and could still be added in the future.
> 
> wouldn't it be sufficient to just do
> 
> $cache->{zfs}->{$storeid}
> 
> when filling/querying the cache, and combining that with *always* listing only
> the storage-relevant pool?

Yes, should work. I'll send a v2 with that.
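The per-storage cache keying suggested above can be sketched roughly as follows (a Python illustration of the idea, not the actual Perl plugin code; `zfs_list`, `list_images`, and the pool/volume names are hypothetical stand-ins):

```python
# Sketch of a cache keyed per storage id, mirroring the suggested
# $cache->{zfs}->{$storeid}: the backend is queried at most once per
# storage, so a caller iterating repeatedly (e.g. during a rescan)
# does not re-run the listing for a storage it already cached.

calls = {"zfs_list": 0}  # count backend invocations, for illustration


def zfs_list(pool):
    """Stand-in for running a `zfs list` limited to one pool."""
    calls["zfs_list"] += 1
    return [f"{pool}/vm-100-disk-0"]


def list_images(storeid, pool, cache):
    # Key the cache by storage id and always list only the
    # storage-relevant pool, as proposed above.
    zfs_cache = cache.setdefault("zfs", {})
    if storeid not in zfs_cache:
        zfs_cache[storeid] = zfs_list(pool)
    return zfs_cache[storeid]


cache = {}
for _ in range(3):  # repeated lookups, as in a loop over all storages
    volumes = list_images("local-zfs", "rpool/data", cache)
# Despite three lookups, only one backend listing was issued.
```

This keeps the remote-ZFS case correct as well, since different storages (and thus different target hosts) never share a cache entry.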

> 
> the only case where we actually benefit from listing *all* zfs volumes/datasets
> is when
> - there are multiple storages configured referencing overlapping parts of the
> ZFS hierarchy
> - vdisk_list is called with a volume_list with multiple such storages being part
> of the set, or with $vmid but no $storeid (rescan, or purging unreferenced guest
> disks on guest removal)

The cache is already useful if there are two ZFS storages, nothing as
fancy as the above is needed ;) Then for rescan and other operations that
iterate over all storages, only one zfs list call is issued, rather than
one for each ZFS storage.

> 
> in practice, it likely doesn't make much difference since ZFS should cache the
> metadata for the overlapping parts in memory anyway (given that we'd then call
> 'zfs list' in a loop with different starting points).
> 
> whereas, for most regular cases listing happens without a cache anyway (or with
> a cache, but only a single storage involved), so there is no benefit in querying
> volumes belonging to other storages since we are not interested in them anyway.
> 

Yes, I'd also guess that in practice the benefit of the current list-all
cache is rather limited.

> sidenote: it seems like vdisk_list's volume_list is not used anywhere as parameter?
> 

Seems to be only used in the ZFS tests in pve-storage ;)


Thread overview: 6+ messages
2022-11-07 11:00 Fiona Ebner
2022-11-07 11:00 ` [pve-devel] [PATCH storage 2/2] zfs: list zvol: list single pool only for remote ZFS Fiona Ebner
2022-11-16 11:18 ` [pve-devel] [PATCH storage 1/2] zfs: only use cache when listing images locally Fabian Grünbichler
2022-11-16 13:30   ` Fiona Ebner [this message]
2022-11-28 12:19     ` Fiona Ebner
2022-11-29  9:50       ` Fabian Grünbichler
