From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <39b12b91-05eb-8ad3-d7e0-6e67a3d1d103@proxmox.com>
Date: Wed, 16 Nov 2022 14:30:22 +0100
To: pve-devel@lists.proxmox.com, Fabian Grünbichler
References: <20221107110035.93972-1-f.ebner@proxmox.com> <1668596522.lpeo4rqk2k.astroid@yuna.none>
From: Fiona Ebner
In-Reply-To: <1668596522.lpeo4rqk2k.astroid@yuna.none>
Subject: Re: [pve-devel] [PATCH storage 1/2] zfs: only use cache when listing images locally

On 16.11.22 at 12:18, Fabian Grünbichler wrote:
> On November 7, 2022 12:00 pm, Fiona Ebner wrote:
>> The plugin for remote ZFS storages currently also uses the same
>> list_images() as the plugin for local ZFS storages. The issue with
>> this is that there is only one cache which does not remember the
>> target host where the information originated.
>>
>> Simply restrict the cache to be used for the local ZFS plugin only. An
>> alternative solution would be to use a cache for each target host, but
>> that seems a bit more involved and could still be added in the future.
>
> wouldn't it be sufficient to just do
>
> $cache->{zfs}->{$storeid}
>
> when filling/querying the cache, and combining that with *always* listing only
> the storage-relevant pool?

Yes, that should work. I'll send a v2 with that.

>
> the only case where we actually benefit from listing *all* zfs volumes/datasets
> is when
> - there are multiple storages configured referencing overlapping parts of the
> ZFS hierarchy
> - vdisk_list is called with a volume_list with multiple such storages being part
> of the set, or with $vmid but no $storeid (rescan, or purging unreferenced guest
> disks on guest removal)

The cache is already useful if there are two ZFS storages, nothing as
fancy as the above is needed ;) Then, for rescan and other operations that
iterate over all storages, only one 'zfs list' call is issued, rather than
one for each ZFS storage.
>
> in practice, it likely doesn't make much difference since ZFS should cache the
> metadata for the overlapping parts in memory anyway (given that we'd then call
> 'zfs list' in a loop with different starting points).
>
> whereas, for most regular cases listing happens without a cache anyway (or with
> a cache, but only a single storage involved), so there is no benefit in querying
> volumes belonging to other storages since we are not interested in them anyway.
>

Yes, I'd also guess that in practice the benefit of the current
list-all cache is rather limited.

> sidenote: it seems like vdisk_list's volume_list is not used anywhere as parameter?
>

Seems to be only used in the ZFS tests in pve-storage ;)
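
For illustration, a minimal sketch of the per-$storeid cache shape suggested
above. The helper name zfs_list_for_pool() and the returned structure are
made up for the example, not the actual ZFSPoolPlugin code; the point is only
that each storage gets its own cache entry and only its own pool is listed:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for the plugin's 'zfs list' invocation; with the
# suggested change, only the storage's own pool would be listed, not the
# whole ZFS hierarchy.
sub zfs_list_for_pool {
    my ($pool) = @_;
    return { "$pool/vm-100-disk-0" => { size => 4 * 1024**3 } };
}

# Cache keyed by $storeid, so two ZFS storages (e.g. a local and a
# remote one) no longer share a single global 'zfs' entry.
sub list_images_cached {
    my ($storeid, $scfg, $cache) = @_;

    $cache->{zfs}->{$storeid} //= zfs_list_for_pool($scfg->{pool});

    return $cache->{zfs}->{$storeid};
}

my $cache = {};
my $res = list_images_cached('local-zfs', { pool => 'rpool/data' }, $cache);
```

A second call with the same $storeid and $cache would hit the cached entry,
while a different $storeid triggers its own listing.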