public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH v2 storage 1/3] zfs: list: only cache and list images for requested storage/pool
@ 2022-12-20 13:16 Fiona Ebner
  2022-12-20 13:16 ` [pve-devel] [PATCH v2 storage 2/3] zfs: list images: don't use cache Fiona Ebner
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Fiona Ebner @ 2022-12-20 13:16 UTC (permalink / raw)
  To: pve-devel

The plugin for remote ZFS storages currently uses the same
list_images() as the plugin for local ZFS storages. There is only one
cache, which does not remember the target host the information
originated from.

This is problematic for rescan in qemu-server:
1. Via list_images() and zfs_list_zvol(), ZFSPlugin.pm's zfs_request()
   is executed for a remote ZFS.
2. $cache->{zfs} is set to the result of scanning the images there.
3. Another list_images() for a local ZFS storage happens and uses the
   cache with the wrong information (see the sketch below).
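
To illustrate, here is a minimal standalone sketch of the collision
(storage IDs, pool name and volume data are made up for the example,
they are not taken from the plugin):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $cache = {};

    # Steps 1+2: the remote ZFS storage scans its images and fills the
    # shared cache, which is keyed only by the configured pool name.
    my $remote_scan = { tank => { 'vm-100-disk-0' => { size => 1 } } };
    $cache->{zfs} = $remote_scan if !$cache->{zfs};

    # Step 3: a local ZFS storage that also uses pool 'tank' finds the
    # cache already populated and lists the remote host's volumes.
    my $local_scan = { tank => { 'vm-200-disk-0' => { size => 2 } } };
    $cache->{zfs} = $local_scan if !$cache->{zfs}; # skipped, cache is set

    my $dat = $cache->{zfs}->{tank};
    print join(', ', sort keys %$dat), "\n"; # prints vm-100-disk-0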

The only two operations using the cache currently are
1. Disk rescan in qemu-server, which is affected by the issue. It is
   done as part of (or as) a long-running operation.
2. prepare_local_job for pvesr, but the non-local ZFS plugin doesn't
   support replication, so it cannot trigger there. The cache only
   helps if there is a replicated guest with volumes on different
   ZFS storages; otherwise, it is less efficient than having no
   cache, i.e. querying each storage by itself.

Fix the issue by making the cache $storeid-specific, which effectively
makes the cache superfluous, because no caller uses the same $storeid
multiple times. As argued above, this should not hurt those callers
much and actually improves performance for all other callers.
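
For comparison, a sketch of the same two storages with the cache keyed
by storage ID instead (again with made-up IDs and data):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $cache = {};

    # Each storage gets its own slot, so two storages that happen to use
    # the same ZFS pool name no longer see each other's results.
    $cache->{zfs}->{'remote-zfs'} //= { 'vm-100-disk-0' => { size => 1 } };
    $cache->{zfs}->{'local-zfs'}  //= { 'vm-200-disk-0' => { size => 2 } };

    for my $storeid (sort keys %{ $cache->{zfs} }) {
        print "$storeid: ", join(', ', keys %{ $cache->{zfs}->{$storeid} }), "\n";
    }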

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Changes from v1:
    * Always only list images for $scfg->{pool} (and drop patch that
      would only do so for the remote ZFS plugin).
    * This makes the cache useless, so add a patch removing it.
    * Also add a patch for cleanup.

See here for previous discussion:
https://lists.proxmox.com/pipermail/pve-devel/2022-November/054485.html
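
For reference, with the pool now passed along, the command issued by
zfs_list_zvol() ends up roughly like this (the pool name is just an
example):

    zfs list -o name,volsize,origin,type,refquota -t volume,filesystem -Hrp tank/data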

 PVE/Storage/ZFSPoolPlugin.pm | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index e264fde..0f16e7d 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -254,11 +254,12 @@ sub free_image {
 sub list_images {
     my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
 
-    $cache->{zfs} = $class->zfs_list_zvol($scfg) if !$cache->{zfs};
-    my $zfspool = $scfg->{pool};
+    $cache->{zfs}->{$storeid} = $class->zfs_list_zvol($scfg)
+	if !$cache->{zfs}->{$storeid};
+
     my $res = [];
 
-    if (my $dat = $cache->{zfs}->{$zfspool}) {
+    if (my $dat = $cache->{zfs}->{$storeid}) {
 
 	foreach my $image (keys %$dat) {
 
@@ -375,20 +376,32 @@ sub zfs_delete_zvol {
 sub zfs_list_zvol {
     my ($class, $scfg) = @_;
 
-    my $text = $class->zfs_request($scfg, 10, 'list', '-o', 'name,volsize,origin,type,refquota', '-t', 'volume,filesystem', '-Hrp');
+    my $text = $class->zfs_request(
+	$scfg,
+	10,
+	'list',
+	'-o',
+	'name,volsize,origin,type,refquota',
+	'-t',
+	'volume,filesystem',
+	'-Hrp',
+	$scfg->{pool},
+    );
     my $zvols = zfs_parse_zvol_list($text);
     return undef if !$zvols;
 
     my $list = ();
     foreach my $zvol (@$zvols) {
-	my $pool = $zvol->{pool};
+	# The "pool" in $scfg is not the same as ZFS pool, so it's necessary to filter here.
+	next if $scfg->{pool} ne $zvol->{pool};
+
 	my $name = $zvol->{name};
 	my $parent = $zvol->{origin};
 	if($zvol->{origin} && $zvol->{origin} =~ m/^$scfg->{pool}\/(\S+)$/){
 	    $parent = $1;
 	}
 
-	$list->{$pool}->{$name} = {
+	$list->{$name} = {
 	    name => $name,
 	    size => $zvol->{size},
 	    parent => $parent,
-- 
2.30.2

Thread overview: 4+ messages
2022-12-20 13:16 [pve-devel] [PATCH v2 storage 1/3] zfs: list: only cache and list images for requested storage/pool Fiona Ebner
2022-12-20 13:16 ` [pve-devel] [PATCH v2 storage 2/3] zfs: list images: don't use cache Fiona Ebner
2022-12-20 13:16 ` [pve-devel] [PATCH v2 storage 3/3] zfs: list images: code cleanup Fiona Ebner
2022-12-21 10:09 ` [pve-devel] applied-series: [PATCH v2 storage 1/3] zfs: list: only Fabian Grünbichler
