Date: Tue, 21 May 2024 15:31:33 +0200
From: "Max Carrara"
To: "Proxmox VE development discussion"
Subject: Re: [pve-devel] [PATCH zfsonlinux v2 0/2] Update to ZFS 2.2.4

On Tue May 7, 2024 at 5:02 PM CEST, Stoiko Ivanov wrote:
> v1->v2:
> Patch 2/2 (adaptation of arc_summary/arcstat patch) modified:
> * right after sending the v1 I saw a report where pinning kernel 6.2
>   (thus ZFS 2.1) leads to a similar traceback - which I seem to have
>   overlooked when packaging 2.2.0 ...
>   I adapted the patch by booting a VM with kernel 6.2 and the current
>   userspace and running arc_summary / arcstat -a until no traceback was
>   displayed with a single-disk pool.

Testing
-------

* Built and installed ZFS with those two patches on my test VM
  - Note: Couldn't install zfs-initramfs and zfs-dracut due to a
    dependency issue
    - zfs-initramfs depends on initramfs-tools, but the installation
      complained it wasn't available (even though the package is
      installed ...)
    - zfs-dracut did the same for dracut
    - initramfs-tools then conflicts with the virtual linux-initramfs-tool
    - Removing zfs-initramfs from the packages to be installed "fixed"
      this; all other packages then installed without any issue
* `arcstat -a` and `arc_summary` correctly displayed the new values while
  the old kernel was still running
* Didn't encounter any exceptions
* VM also survived a reboot - same results with the new kernel
* Didn't notice anything off overall while the VM was running - will
  holler if I find anything

Review
------

Looked specifically at patch 02; applied and diffed it on the upstream
ZFS sources checked out at tag `zfs-2.2.4`.

What can I say, it's just replacing calls to `obj.__getitem__()` with
`obj.get('foo', 0)` - so, pretty straightforward; see the sketch in the
P.S. below. (The original code could use a brush-up, but that's beside
the point.)

Summary
-------

All in all, LGTM - I haven't really looked at patch 01 in detail, so I'll
add my R-b tag only to patch 02.

Good work!
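P.S.: For anyone curious about the pattern in patch 02, here is a minimal,
hypothetical sketch of the idea. It is not the actual arc_summary code;
the stat names ('iohits', 'uncached_hits') and the helper functions are
made up for illustration and are not necessarily the keys the real patch
guards:

    #!/usr/bin/env python3
    """Illustration of guarding kstat lookups so that a new user-space
    running on an old kernel (which doesn't export the new stats yet)
    doesn't blow up with a KeyError traceback."""


    def parse_kstats(raw):
        """Turn 'name type data' lines (as found in
        /proc/spl/kstat/zfs/arcstats) into a dict of name -> int."""
        stats = {}
        for line in raw.splitlines():
            parts = line.split()
            if len(parts) == 3 and parts[2].isdigit():
                stats[parts[0]] = int(parts[2])
        return stats


    def print_prefetch_summary(stats):
        # Old style: stats['iohits'] raises KeyError when the running
        # kernel's ZFS module doesn't export the stat yet.
        # Guarded style: missing keys simply read as 0.
        print('ARC I/O hits:  ', stats.get('iohits', 0))
        print('Uncached hits: ', stats.get('uncached_hits', 0))


    if __name__ == '__main__':
        # Stats as an old kernel might report them - no 'iohits' and no
        # 'uncached_hits'; the guarded accessors print 0 instead of
        # raising an exception.
        old_kernel_raw = 'hits 4 123456\nmisses 4 7890\n'
        print_prefetch_summary(parse_kstats(old_kernel_raw))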
>
> original cover-letter for v1:
> This patchset updates ZFS to the recently released 2.2.4
>
> We had about half of the patches already in 2.2.3-2, due to the needed
> support for kernel 6.8.
>
> Compared to the last 2.2 point releases this one contains quite a few
> potential performance improvements:
> * for ZVOL workloads (relevant for qemu guests) multiple taskqs were
>   introduced [1] - this change is active by default (can be put back to
>   the old behavior by explicitly setting `zvol_num_taskqs=1`)
> * the interface for ZFS submitting operations to the kernel's block layer
>   was augmented to better deal with split pages [2] - which should also
>   improve performance, and prevent unaligned writes which are rejected by
>   e.g. the SCSI subsystem. - The default remains the current code
>   (`zfs_vdev_disk_classic=0` turns on the 'new' behavior ...)
> * Speculative prefetching was improved [3], which introduced new kstats,
>   which are reported by `arc_summary` and `arcstat`; as before with the
>   MRU/MFU additions, there was no guard for running the new user-space
>   with an old kernel, resulting in Python exceptions in both tools.
>   I adapted the patch where Thomas fixed that back in the 2.1 release
>   times. - sending as a separate patch for easier review - and I hope it's
>   ok that I dropped the S-o-b tag (as it's changed code) - glad to resend
>   it, if this should be adapted.
>
> Minimally tested on 2 VMs (the arcstat/arc_summary changes by running with
> an old kernel and new user-space)
>
> [0] https://github.com/openzfs/zfs/releases/tag/zfs-2.2.4
> [1] https://github.com/openzfs/zfs/pull/15992
> [2] https://github.com/openzfs/zfs/pull/15588
> [3] https://github.com/openzfs/zfs/pull/16022
>
> Stoiko Ivanov (2):
>   update zfs submodule to 2.2.4 and refresh patches
>   update arc_summary arcstat patch with new introduced values
>
>  ...md-unit-for-importing-specific-pools.patch |   4 +-
>  ...-move-manpage-arcstat-1-to-arcstat-8.patch |   2 +-
>  ...-guard-access-to-freshly-introduced-.patch | 438 ++++++++++++
>  ...-guard-access-to-l2arc-MFU-MRU-stats.patch | 113 ---
>  ...hten-bounds-for-noalloc-stat-availab.patch |   4 +-
>  ...rectly-handle-partition-16-and-later.patch |  52 --
>  ...-use-splice_copy_file_range-for-fall.patch | 135 ----
>  .../0014-linux-5.4-compat-page_size.patch     | 121 ----
>  .../patches/0015-abd-add-page-iterator.patch  | 334 ---------
>  ...-existing-functions-to-vdev_classic_.patch | 349 ---------
>  ...v_disk-reorganise-vdev_disk_io_start.patch | 111 ---
>  ...-read-write-IO-function-configurable.patch |  69 --
>  ...e-BIO-filling-machinery-to-avoid-spl.patch | 671 ------------------
>  ...dule-parameter-to-select-BIO-submiss.patch | 104 ---
>  ...se-bio_chain-to-submit-multiple-BIOs.patch | 363 ----------
>  ...on-t-use-compound-heads-on-Linux-4.5.patch |  96 ---
>  ...ault-to-classic-submission-for-2.2.x.patch |  90 ---
>  ...ion-caused-by-mmap-flushing-problems.patch | 104 ---
>  ...touch-vbio-after-its-handed-off-to-t.patch |  57 --
>  debian/patches/series                         |  16 +-
>  upstream                                      |   2 +-
>  21 files changed, 445 insertions(+), 2790 deletions(-)
>  create mode 100644 debian/patches/0009-arc-stat-summary-guard-access-to-freshly-introduced-.patch
>  delete mode 100644 debian/patches/0009-arc-stat-summary-guard-access-to-l2arc-MFU-MRU-stats.patch
>  delete mode 100644 debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch
>  delete mode 100644 debian/patches/0013-Linux-6.8-compat-use-splice_copy_file_range-for-fall.patch
>  delete mode 100644 debian/patches/0014-linux-5.4-compat-page_size.patch
>  delete mode 100644 debian/patches/0015-abd-add-page-iterator.patch
>  delete mode 100644 debian/patches/0016-vdev_disk-rename-existing-functions-to-vdev_classic_.patch
>  delete mode 100644 debian/patches/0017-vdev_disk-reorganise-vdev_disk_io_start.patch
>  delete mode 100644 debian/patches/0018-vdev_disk-make-read-write-IO-function-configurable.patch
>  delete mode 100644 debian/patches/0019-vdev_disk-rewrite-BIO-filling-machinery-to-avoid-spl.patch
>  delete mode 100644 debian/patches/0020-vdev_disk-add-module-parameter-to-select-BIO-submiss.patch
>  delete mode 100644 debian/patches/0021-vdev_disk-use-bio_chain-to-submit-multiple-BIOs.patch
>  delete mode 100644 debian/patches/0022-abd_iter_page-don-t-use-compound-heads-on-Linux-4.5.patch
>  delete mode 100644 debian/patches/0023-vdev_disk-default-to-classic-submission-for-2.2.x.patch
>  delete mode 100644 debian/patches/0024-Fix-corruption-caused-by-mmap-flushing-problems.patch
>  delete mode 100644 debian/patches/0025-vdev_disk-don-t-touch-vbio-after-its-handed-off-to-t.patch

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel