From mboxrd@z Thu Jan  1 00:00:00 1970
From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Tue, 20 Dec 2022 14:16:36 +0100
Message-Id: <20221220131638.223940-1-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH v2 storage 1/3] zfs: list: only cache and list
 images for requested storage/pool

The plugin for remote ZFS storages currently uses the same
list_images() as the plugin for local ZFS storages. There is only one
cache, and it does not remember the target host where the information
originated.

This is problematic for rescan in qemu-server:
1. Via list_images() and zfs_list_zvol(), ZFSPlugin.pm's zfs_request()
   is executed for a remote ZFS storage.
2. $cache->{zfs} is set to the result of scanning the images there.
3. Another list_images() call for a local ZFS storage happens and uses
   the cache with the wrong information.

The only two operations currently using the cache are:
1. Disk rescan in qemu-server, which is affected by the issue. It is
   done as part of (or as its own) long-running operation.
2. prepare_local_job for pvesr. But the non-local ZFS plugin does not
   support replication, so the issue cannot trigger there. The cache
   only helps when a replicated guest has volumes on multiple ZFS
   storages; otherwise, it is less efficient than having no cache and
   querying each storage by itself.

Fix the issue by making the cache $storeid-specific. This effectively
makes the cache superfluous, because no current caller uses the same
$storeid multiple times. As argued above, this should not hurt the
callers described above much, and it actually improves performance for
all other callers.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Changes from v1:
    * Always only list images for $scfg->{pool} (and drop the patch
      that would only do so for the remote ZFS plugin).
    * This makes the cache useless, so add a patch removing it.
    * Also add a patch for cleanup.

See here for the previous discussion:
https://lists.proxmox.com/pipermail/pve-devel/2022-November/054485.html
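To make the stale-cache scenario above concrete, here is a minimal,
self-contained sketch. It is not the plugin code: the storage IDs and
the %images_on data are made up, and the hash merely stands in for
what zfs_list_zvol() would return for each storage.

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for zfs_list_zvol(): what each storage's
# `zfs list` would actually return (illustrative values only).
my %images_on = (
    'local-zfs'  => 'vm-100-disk-0',
    'remote-zfs' => 'vm-200-disk-0',
);

# Before this patch: one shared cache slot for all ZFS storages.
sub list_images_old {
    my ($storeid, $cache) = @_;
    $cache->{zfs} //= $images_on{$storeid};    # first caller wins
    return $cache->{zfs};
}

# After this patch: the cache is keyed by $storeid.
sub list_images_new {
    my ($storeid, $cache) = @_;
    $cache->{zfs}->{$storeid} //= $images_on{$storeid};
    return $cache->{zfs}->{$storeid};
}

my $cache = {};
list_images_old('remote-zfs', $cache);                 # steps 1+2
print "old: ", list_images_old('local-zfs', $cache),   # step 3
    " (stale, belongs to remote-zfs)\n";

$cache = {};
list_images_new('remote-zfs', $cache);
print "new: ", list_images_new('local-zfs', $cache), " (correct)\n";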
 PVE/Storage/ZFSPoolPlugin.pm | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index e264fde..0f16e7d 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -254,11 +254,12 @@ sub free_image {
 sub list_images {
     my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
 
-    $cache->{zfs} = $class->zfs_list_zvol($scfg) if !$cache->{zfs};
-    my $zfspool = $scfg->{pool};
+    $cache->{zfs}->{$storeid} = $class->zfs_list_zvol($scfg)
+	if !$cache->{zfs}->{$storeid};
+
     my $res = [];
 
-    if (my $dat = $cache->{zfs}->{$zfspool}) {
+    if (my $dat = $cache->{zfs}->{$storeid}) {
 
 	foreach my $image (keys %$dat) {
 
@@ -375,20 +376,32 @@ sub zfs_delete_zvol {
 sub zfs_list_zvol {
     my ($class, $scfg) = @_;
 
-    my $text = $class->zfs_request($scfg, 10, 'list', '-o', 'name,volsize,origin,type,refquota', '-t', 'volume,filesystem', '-Hrp');
+    my $text = $class->zfs_request(
+	$scfg,
+	10,
+	'list',
+	'-o',
+	'name,volsize,origin,type,refquota',
+	'-t',
+	'volume,filesystem',
+	'-Hrp',
+	$scfg->{pool},
+    );
     my $zvols = zfs_parse_zvol_list($text);
     return undef if !$zvols;
 
     my $list = ();
     foreach my $zvol (@$zvols) {
-	my $pool = $zvol->{pool};
+	# The "pool" in $scfg is not the same as ZFS pool, so it's necessary to filter here.
+	next if $scfg->{pool} ne $zvol->{pool};
+
 	my $name = $zvol->{name};
 	my $parent = $zvol->{origin};
 	if($zvol->{origin} && $zvol->{origin} =~ m/^$scfg->{pool}\/(\S+)$/){
 	    $parent = $1;
 	}
 
-	$list->{$pool}->{$name} = {
+	$list->{$name} = {
 	    name => $name,
 	    size => $zvol->{size},
 	    parent => $parent,
-- 
2.30.2
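For clarity, a similar standalone sketch of why zfs_list_zvol() still
needs the new `next if` filter even though `zfs list` is now limited
to $scfg->{pool}: the -r flag also recurses into nested datasets. The
dataset names below are made up, and the regex is only a simplified
version of the name split done by zfs_parse_zvol_list().

#!/usr/bin/perl
use strict;
use warnings;

my $scfg = { pool => 'tank/pve' };

# Dataset names as the first column of `zfs list -Hrp ... tank/pve`
# might report them (illustrative values only):
for my $dataset (qw(
    tank/pve/vm-100-disk-0
    tank/pve/other/vm-200-disk-0
)) {
    # Simplified name split: the prefix is the "pool" ($zvol->{pool}),
    # the last component is the volume name.
    next if $dataset !~ m!^(\S+)/((?:vm|base|subvol|basevol)-\d+-\S+)$!;
    my ($pool, $name) = ($1, $2);

    if ($scfg->{pool} ne $pool) {
        print "skip  $dataset (belongs to dataset '$pool')\n";
    } else {
        print "keep  $dataset -> image '$name'\n";
    }
}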