From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: [pve-devel] applied-series: [PATCH v2 storage 1/3] zfs: list: only cache and list images for requested storage/pool
Date: Wed, 21 Dec 2022 11:09:01 +0100
Message-ID: <1671616821.xwylqnkt0c.astroid@yuna.none>
In-Reply-To: <20221220131638.223940-1-f.ebner@proxmox.com>

thanks!

some possible room for further improvements (rough sketch below):
- zfs_list_zvol could add `-d1` to only list $scfg->{pool} and its direct
  children (then the only thing left to filter out is the "pool" dataset
  itself, no indirect descendants)
- $list in zfs_list_zvol could be initialized as `{}`, then we don't need an
  explicit fallback in list_images
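
to make both points concrete, a rough and untested sketch of how the relevant
part of zfs_list_zvol could look (the argument layout mirrors the
zfs_request() call from the patch below; only the `-d1` flag and the `{}`
initialization are new):

    # untested sketch, not actual plugin code
    my $text = $class->zfs_request(
        $scfg,
        10,
        'list',
        '-o',
        'name,volsize,origin,type,refquota',
        '-t',
        'volume,filesystem',
        '-d1', # new: only $scfg->{pool} itself and its direct children
        '-Hrp',
        $scfg->{pool},
    );
    my $zvols = zfs_parse_zvol_list($text);
    return undef if !$zvols;

    # hash ref instead of `my $list = ();`, so list_images() always gets a
    # hash ref back and no longer needs an explicit fallback
    my $list = {};
    foreach my $zvol (@$zvols) {
        # with -d1 there are no indirect descendants left to filter out,
        # only the "pool" dataset itself
        next if $scfg->{pool} ne $zvol->{pool};
        ...
    }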

On December 20, 2022 2:16 pm, Fiona Ebner wrote:
> The plugin for remote ZFS storages currently also uses the same
> list_images() as the plugin for local ZFS storages. There is only
> one cache which does not remember the target host where the
> information originated.
> 
> This is problematic for rescan in qemu-server:
> 1. Via list_images() and zfs_list_zvol(), ZFSPlugin.pm's zfs_request()
>    is executed for a remote ZFS.
> 2. $cache->{zfs} is set to the result of scanning the images there.
> 3. Another list_images() for a local ZFS storage happens and uses the
>    cache with the wrong information.
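
(purely to illustrate the collision in steps 1-3 above - a self-contained toy
script, not code from the plugin; the storage IDs and scan results are made
up:)

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $cache = {};

    # old behaviour: one shared slot, no matter which storage asks -
    # $storeid is accepted but ignored, which is exactly the bug
    sub list_shared {
        my ($storeid, $scan) = @_;
        $cache->{zfs} = $scan->() if !$cache->{zfs};
        return $cache->{zfs};
    }

    my $remote_result = { 'vm-100-disk-0' => 1 };
    my $local_result  = { 'vm-200-disk-0' => 1 };

    # the remote storage is scanned first and fills the shared slot ...
    my $remote = list_shared('zfs-over-iscsi', sub { return $remote_result });
    # ... so the local storage gets the remote data back instead of its own
    my $local = list_shared('local-zfs', sub { return $local_result });

    print "local listing wrongly contains the remote volume\n"
        if $local->{'vm-100-disk-0'};

    # keying by $storeid as in this patch keeps the results separate:
    #   $cache->{zfs}->{$storeid} = $scan->() if !$cache->{zfs}->{$storeid};
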
> 
> The only two operations using the cache currently are
> 1. Disk rescan in qemu-server, which is affected by the issue. It is
>    done as part of (or as) a long-running operation.
> 2. prepare_local_job for pvesr, but the non-local ZFS plugin doesn't
>    support replication, so the issue cannot trigger there. The cache
>    only helps if a replicated guest has volumes on different ZFS
>    storages; otherwise it is less efficient than having no cache and
>    querying each storage by itself.
> 
> Fix the issue by making the cache $storeid-specific, which effectively
> makes the cache superfluous, because no caller uses the same $storeid
> multiple times. As argued above, this should not hurt the two callers
> described there much and actually improves performance for all other
> callers.
> 
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> 
> Changes from v1:
>     * Always only list images for $scfg->{pool} (and drop patch that
>       would only do so for the remote ZFS plugin).
>     * This makes the cache useless, so add a patch removing it.
>     * Also add a patch for cleanup.
> 
> See here for previous discussion:
> https://lists.proxmox.com/pipermail/pve-devel/2022-November/054485.html
> 
>  PVE/Storage/ZFSPoolPlugin.pm | 25 +++++++++++++++++++------
>  1 file changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index e264fde..0f16e7d 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -254,11 +254,12 @@ sub free_image {
>  sub list_images {
>      my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
>  
> -    $cache->{zfs} = $class->zfs_list_zvol($scfg) if !$cache->{zfs};
> -    my $zfspool = $scfg->{pool};
> +    $cache->{zfs}->{$storeid} = $class->zfs_list_zvol($scfg)
> +	if !$cache->{zfs}->{$storeid};
> +
>      my $res = [];
>  
> -    if (my $dat = $cache->{zfs}->{$zfspool}) {
> +    if (my $dat = $cache->{zfs}->{$storeid}) {
>  
>  	foreach my $image (keys %$dat) {
>  
> @@ -375,20 +376,32 @@ sub zfs_delete_zvol {
>  sub zfs_list_zvol {
>      my ($class, $scfg) = @_;
>  
> -    my $text = $class->zfs_request($scfg, 10, 'list', '-o', 'name,volsize,origin,type,refquota', '-t', 'volume,filesystem', '-Hrp');
> +    my $text = $class->zfs_request(
> +	$scfg,
> +	10,
> +	'list',
> +	'-o',
> +	'name,volsize,origin,type,refquota',
> +	'-t',
> +	'volume,filesystem',
> +	'-Hrp',
> +	$scfg->{pool},
> +    );
>      my $zvols = zfs_parse_zvol_list($text);
>      return undef if !$zvols;
>  
>      my $list = ();
>      foreach my $zvol (@$zvols) {
> -	my $pool = $zvol->{pool};
> +	# The "pool" in $scfg is not the same as ZFS pool, so it's necessary to filter here.
> +	next if $scfg->{pool} ne $zvol->{pool};
> +
>  	my $name = $zvol->{name};
>  	my $parent = $zvol->{origin};
>  	if($zvol->{origin} && $zvol->{origin} =~ m/^$scfg->{pool}\/(\S+)$/){
>  	    $parent = $1;
>  	}
>  
> -	$list->{$pool}->{$name} = {
> +	$list->{$name} = {
>  	    name => $name,
>  	    size => $zvol->{size},
>  	    parent => $parent,
> -- 
> 2.30.2




Thread overview:
2022-12-20 13:16 [pve-devel] [PATCH v2 storage 1/3] zfs: list: only cache and list images for requested storage/pool Fiona Ebner
2022-12-20 13:16 ` [pve-devel] [PATCH v2 storage 2/3] zfs: list images: don't use cache Fiona Ebner
2022-12-20 13:16 ` [pve-devel] [PATCH v2 storage 3/3] zfs: list images: code cleanup Fiona Ebner
2022-12-21 10:09 ` Fabian Grünbichler [this message]
