From: "Max Carrara" <m.carrara@proxmox.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH zfsonlinux v2 0/2] Update to ZFS 2.2.4
Date: Tue, 21 May 2024 15:31:33 +0200
Message-ID: <D1FCYVCBE63Y.2S9GI724G9IGU@proxmox.com>
In-Reply-To: <20240507150210.1391522-1-s.ivanov@proxmox.com>
On Tue May 7, 2024 at 5:02 PM CEST, Stoiko Ivanov wrote:
> v1->v2:
> Patch 2/2 (adaptation of arc_summary/arcstat patch) modified:
> * right after sending the v1 I saw a report where pinning kernel 6.2 (thus
> ZFS 2.1) leads to a similar traceback - which I seem to have overlooked
> when packaging 2.2.0 ...
> adapted the patch by booting a VM with kernel 6.2 and the current
> userspace, and running arc_summary / arcstat -a until no traceback was
> displayed with a single-disk pool.
Testing
-------
* Built and installed ZFS with those two patches on my test VM
- Note: Couldn't install zfs-initramfs and zfs-dracut due to some
dependency issue
- zfs-initramfs depends on initramfs-tools, but apt complained it wasn't
available (even though the package is installed ...)
- zfs-dracut did the same for dracut
- initramfs-tools then conflicts with the virtual linux-initramfs-tool
- Removing zfs-initramfs from the packages to be installed "fixed"
this; all other packages then installed without any issue
* `arcstat -a` and `arc_summary` correctly displayed the new values
while the old kernel was still running (see the sketch after this list)
* Didn't encounter any exceptions
* VM also survived a reboot - same results for new kernel
* Didn't notice anything off overall while the VM was running - will
holler if I find anything
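
For the record, a rough sketch of how that check could be scripted - purely
hypothetical glue, not part of the series; it assumes both tools are on
$PATH and treats a Python traceback on stderr as a failure:

    #!/usr/bin/env python3
    # Hypothetical helper (not from the series): run both tools and check
    # that neither of them dies with a Python traceback.
    import subprocess
    import sys

    def runs_clean(cmd):
        # Capture stdout/stderr; an unguarded kstat access would surface
        # as "Traceback (most recent call last): ..." on stderr.
        res = subprocess.run(cmd, capture_output=True, text=True)
        return res.returncode == 0 and "Traceback" not in res.stderr

    if __name__ == "__main__":
        ok = all(runs_clean(c) for c in (["arcstat", "-a"], ["arc_summary"]))
        sys.exit(0 if ok else 1)
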
Review
------
Looked specifically at patch 02; applied it on top of the upstream ZFS
sources checked out at tag `zfs-2.2.4` and diffed the result. What can
I say - it just replaces calls to `obj.__getitem__()` with
`obj.get('foo', 0)`, so it's pretty straightforward. (The original code
could use a brush-up, but that's beside the point.)
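
To illustrate the pattern - just a sketch, the dict and key below are
made up and not the actual kstat names patch 02 touches:

    # Guarded-access pattern as used by the patch; "prefetch_hits" is a
    # placeholder key, not one of the real fields.
    kstats = {"hits": 42}                 # what an old kernel might export

    # old style - raises KeyError if the kernel doesn't know the field:
    #   value = kstats["prefetch_hits"]

    # patched style - falls back to 0, so old kernels no longer trip it up:
    value = kstats.get("prefetch_hits", 0)
    print(value)  # -> 0
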
Summary
-------
All in all, LGTM - haven't really looked at patch 01 in detail, so I'll
add my R-b tag only to patch 02. Good work!
>
> original cover-letter for v1:
> This patchset updates ZFS to the recently released 2.2.4 [0]
>
> We had about half of the patches already in 2.2.3-2, due to the needed
> support for kernel 6.8.
>
> Compared to the last 2.2 point releases, this one contains quite a few
> potential performance improvements:
> * for ZVOL workloads (relevant for qemu guests) multiple taskqs were
> introduced [1] - this change is active by default (can be put back to
> the old behavior by explicitly setting `zvol_num_taskqs=1`)
> * the interface ZFS uses to submit operations to the kernel's block layer
> was augmented to better deal with split pages [2] - which should also
> improve performance and prevent unaligned writes, which are rejected by
> e.g. the SCSI subsystem. The default remains the current code path
> (`zfs_vdev_disk_classic=0` turns on the 'new' behavior...)
> * Speculative prefetching was improved [3], which introduced new kstats
> that are reported by `arc_summary` and `arcstat`. As before with the
> MRU/MFU additions, there was no guard for running the new user-space
> with an old kernel, resulting in Python exceptions in both tools.
> I adapted the patch where Thomas fixed that back in the 2.1 release
> days - sending it as a separate patch for easier review - and I hope
> it's ok that I dropped the S-o-b tag (as the code changed); glad to
> resend it, if this should be adapted.
>
> Minimally tested on 2 VMs (the arcstat/arc_summary changes by running with
> an old kernel and new user-space)
>
>
> [0] https://github.com/openzfs/zfs/releases/tag/zfs-2.2.4
> [1] https://github.com/openzfs/zfs/pull/15992
> [2] https://github.com/openzfs/zfs/pull/15588
> [3] https://github.com/openzfs/zfs/pull/16022
>
> Stoiko Ivanov (2):
> update zfs submodule to 2.2.4 and refresh patches
> update arc_summary arcstat patch with new introduced values
>
> ...md-unit-for-importing-specific-pools.patch | 4 +-
> ...-move-manpage-arcstat-1-to-arcstat-8.patch | 2 +-
> ...-guard-access-to-freshly-introduced-.patch | 438 ++++++++++++
> ...-guard-access-to-l2arc-MFU-MRU-stats.patch | 113 ---
> ...hten-bounds-for-noalloc-stat-availab.patch | 4 +-
> ...rectly-handle-partition-16-and-later.patch | 52 --
> ...-use-splice_copy_file_range-for-fall.patch | 135 ----
> .../0014-linux-5.4-compat-page_size.patch | 121 ----
> .../patches/0015-abd-add-page-iterator.patch | 334 ---------
> ...-existing-functions-to-vdev_classic_.patch | 349 ---------
> ...v_disk-reorganise-vdev_disk_io_start.patch | 111 ---
> ...-read-write-IO-function-configurable.patch | 69 --
> ...e-BIO-filling-machinery-to-avoid-spl.patch | 671 ------------------
> ...dule-parameter-to-select-BIO-submiss.patch | 104 ---
> ...se-bio_chain-to-submit-multiple-BIOs.patch | 363 ----------
> ...on-t-use-compound-heads-on-Linux-4.5.patch | 96 ---
> ...ault-to-classic-submission-for-2.2.x.patch | 90 ---
> ...ion-caused-by-mmap-flushing-problems.patch | 104 ---
> ...touch-vbio-after-its-handed-off-to-t.patch | 57 --
> debian/patches/series | 16 +-
> upstream | 2 +-
> 21 files changed, 445 insertions(+), 2790 deletions(-)
> create mode 100644 debian/patches/0009-arc-stat-summary-guard-access-to-freshly-introduced-.patch
> delete mode 100644 debian/patches/0009-arc-stat-summary-guard-access-to-l2arc-MFU-MRU-stats.patch
> delete mode 100644 debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch
> delete mode 100644 debian/patches/0013-Linux-6.8-compat-use-splice_copy_file_range-for-fall.patch
> delete mode 100644 debian/patches/0014-linux-5.4-compat-page_size.patch
> delete mode 100644 debian/patches/0015-abd-add-page-iterator.patch
> delete mode 100644 debian/patches/0016-vdev_disk-rename-existing-functions-to-vdev_classic_.patch
> delete mode 100644 debian/patches/0017-vdev_disk-reorganise-vdev_disk_io_start.patch
> delete mode 100644 debian/patches/0018-vdev_disk-make-read-write-IO-function-configurable.patch
> delete mode 100644 debian/patches/0019-vdev_disk-rewrite-BIO-filling-machinery-to-avoid-spl.patch
> delete mode 100644 debian/patches/0020-vdev_disk-add-module-parameter-to-select-BIO-submiss.patch
> delete mode 100644 debian/patches/0021-vdev_disk-use-bio_chain-to-submit-multiple-BIOs.patch
> delete mode 100644 debian/patches/0022-abd_iter_page-don-t-use-compound-heads-on-Linux-4.5.patch
> delete mode 100644 debian/patches/0023-vdev_disk-default-to-classic-submission-for-2.2.x.patch
> delete mode 100644 debian/patches/0024-Fix-corruption-caused-by-mmap-flushing-problems.patch
> delete mode 100644 debian/patches/0025-vdev_disk-don-t-touch-vbio-after-its-handed-off-to-t.patch
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel