From mboxrd@z Thu Jan  1 00:00:00 1970
From: Stoiko Ivanov
To: pve-devel@lists.proxmox.com
Date: Thu, 18 Feb 2021 12:33:28 +0100
Message-Id: <20210218113328.14488-1-s.ivanov@proxmox.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [RFC storage] zfspoolplugin: check if mounted instead of imported
List-Id: Proxmox VE development discussion
This patch addresses an issue we recently saw on a production machine:
* after booting, a ZFS pool failed to get imported (due to an empty
  /etc/zfs/zpool.cache)
* pvestatd/guest-startall eventually tried to import the pool
* the pool was imported, yet the pool's datasets remained unmounted

A bit of debugging showed that `zpool import` is not atomic; in fact it
does fork+exec `mount` with the appropriate parameters. If an import runs
longer than the hardcoded timeout of 15s, it can happen that the pool gets
imported, but the zpool command (and its forks) gets terminated due to
timing out.

Reproducing this is straightforward: set (drastic) bandwidth+iops limits
on a guest's disk which contains a zpool, e.g.:

`qm set 100 -scsi1 wd:vm-100-disk-1,iops_rd=10,iops_rd_max=20,\
iops_wr=15,iops_wr_max=20,mbps_rd=10,mbps_rd_max=15,mbps_wr=10,mbps_wr_max=15`

Afterwards, running `timeout 15 zpool import` resulted in exactly that
situation in the guest on my machine.

The patch changes the check in activate_storage for the ZFSPoolPlugin to
check whether the configured 'pool' (which can also be a sub-dataset) is
mounted (via `zfs get mounted`):
* in case this errors out, we try to import the pool (and run
  `zfs mount -a` if the dataset is not mounted after importing)
* in case it succeeds but reports the dataset as not mounted, we run
  `zfs mount -a`

The choice of running `zfs mount -a` has potential for regression, in case
someone manually unmounts a dataset after booting the system (but does not
set the canmount option).

Signed-off-by: Stoiko Ivanov
---
* sending as RFC since I have the feeling that I miss quite a few
  corner-cases.
* an alternative to getting the mounted property via `zfs get` might be
  parsing /proc/mounts (but `zfs get mounted` does mostly just that, plus
  stat'ing files in /sys and 2 (potentially blocking) ioctls)
* I could reproduce the issue with ZFS 2.0.1 and 0.8.6 - so my current
  guess is that the issue on the production box might be related to some
  other performance regression in its disk subsystem (maybe introduced
  with the new kernel)
* huge thanks to Dominik for many suggestions (including the idea of
  limiting the disk speed via our stack instead of dm-delay :)

 PVE/Storage/ZFSPoolPlugin.pm | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 105d802..32ad1c9 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -525,8 +525,17 @@ sub activate_storage {
     my ($class, $storeid, $scfg, $cache) = @_;
 
     # Note: $scfg->{pool} can include dataset <pool>/<dataset>
-    my $pool = $scfg->{pool};
-    $pool =~ s!/.*$!!;
+    my $dataset = $scfg->{pool};
+    my $pool = ($dataset =~ s!/.*$!!r);
+
+    my $dataset_mounted = sub {
+        my $mounted = eval { $class->zfs_get_properties($scfg, 'mounted', "$dataset") };
+        if ($@) {
+            warn "$@\n";
+            return undef;
+        }
+        return defined($mounted) && $mounted =~ m/^yes$/;
+    };
 
     my $pool_imported = sub {
         my @param = ('-o', 'name', '-H', "$pool");
@@ -538,13 +547,21 @@ sub activate_storage {
         return defined($res) && $res =~ m/$pool/;
     };
 
-    if (!$pool_imported->()) {
+    if (!defined($dataset_mounted->())) {
+        # error from zfs get mounted - pool import necessary
         # import can only be done if not yet imported!
         my @param = ('-d', '/dev/disk/by-id/', '-o', 'cachefile=none', "$pool");
         eval { $class->zfs_request($scfg, undef, 'zpool_import', @param) };
         if (my $err = $@) {
             # just could've raced with another import, so recheck if it is imported
-            die "could not activate storage '$storeid', $@\n" if !$pool_imported->();
+            die "could not activate storage '$storeid', $err\n" if !$pool_imported->();
+        }
+        if (!$dataset_mounted->()) {
+            eval { $class->zfs_request($scfg, undef, 'mount', '-a') };
+            if (my $err = $@) {
+                # just could've raced with another import, so recheck if it is imported
+                die "could not activate storage '$storeid', $err\n";
+            }
         }
     }
     return 1;
-- 
2.20.1
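P.S.: for reviewers who prefer to trace the new activate_storage control
flow outside of Perl, here is a minimal Python transliteration. The four
callables (`dataset_mounted`, `pool_imported`, `zpool_import`,
`zfs_mount_all`) are hypothetical stand-ins for the zfs/zpool invocations
in the patch, not real APIs - this is only a sketch of the branching
logic, not the implementation:

```python
# Sketch of the patched activate_storage() control flow (the real code is
# Perl in PVE/Storage/ZFSPoolPlugin.pm; all callables are stand-ins).
def activate_storage(dataset_mounted, pool_imported, zpool_import, zfs_mount_all):
    # dataset_mounted() mirrors `zfs get mounted`: True/False on success,
    # None if the zfs command itself failed (pool most likely not imported)
    if dataset_mounted() is None:
        try:
            # `zpool import -d /dev/disk/by-id/ -o cachefile=none <pool>`
            zpool_import()
        except Exception as err:
            # could just have raced with a concurrent import - recheck
            # before giving up
            if not pool_imported():
                raise RuntimeError(f"could not activate storage, {err}")
        if not dataset_mounted():
            zfs_mount_all()  # `zfs mount -a`
    return True
```

Stubbing the four callables makes the race-recheck behaviour easy to
exercise in isolation, without an actual pool.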