* [pve-devel] [PATCH zfsonlinux 1/8] update zfs submodule to 2.3.1 and refresh patches
2025-03-31 13:41 [pve-devel] [PATCH zfsonlinux 0/8] update to ZFS 2.3.1 Stoiko Ivanov
From: Stoiko Ivanov @ 2025-03-31 13:41 UTC
To: pve-devel
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
---
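Note (commentary for this cover area, not part of the applied patch): the dropped 0009 patch below guarded newly introduced kstat keys in arc_summary/arcstat so a newer userspace survives an older kernel module. A minimal sketch of that pattern, with a hypothetical stats dict standing in for a 2.0-module kstat snapshot:

```python
# Stats dict as an older module might export it: no "iohits" key yet.
arc_stats = {"hits": 1000, "misses": 50}  # hypothetical kstat snapshot

# Brittle access: arc_stats["iohits"] raises KeyError on the old module.
# Guarded access falls back to 0 instead, as the 0009 patch did.
iohits = arc_stats.get("iohits", 0)

all_accesses = int(arc_stats["hits"]) + iohits + int(arc_stats["misses"])
print(all_accesses)  # 1050
```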
...META-and-DCH-consistency-in-autoconf.patch | 2 +-
.../0002-always-load-ZFS-module-on-boot.patch | 2 +-
...o-the-zed-binary-on-the-systemd-unit.patch | 2 +-
...ith-d-dev-disk-by-id-in-scan-service.patch | 2 +-
debian/patches/0005-Enable-zed-emails.patch | 2 +-
.../0006-dont-symlink-zed-scripts.patch | 2 +-
...md-unit-for-importing-specific-pools.patch | 6 +-
...-move-manpage-arcstat-1-to-arcstat-8.patch | 4 +-
...-guard-access-to-freshly-introduced-.patch | 439 ------------------
...ten-bounds-for-noalloc-stat-availab.patch} | 4 +-
...runcate_shares-without-etc-exports.d.patch | 77 ---
...rrectly-detect-flush-requests-17131.patch} | 11 +-
...-use-LVM-autoactivation-for-activat.patch} | 7 +-
...ops-d_revalidate-now-takes-four-args.patch | 103 ----
...14-BLK_MQ_F_SHOULD_MERGE-was-removed.patch | 44 --
debian/patches/series | 10 +-
upstream | 2 +-
17 files changed, 23 insertions(+), 696 deletions(-)
delete mode 100644 debian/patches/0009-arc-stat-summary-guard-access-to-freshly-introduced-.patch
rename debian/patches/{0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch => 0009-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch} (94%)
delete mode 100644 debian/patches/0010-Fix-nfs_truncate_shares-without-etc-exports.d.patch
rename debian/patches/{0012-linux-zvols-correctly-detect-flush-requests.patch => 0010-linux-zvols-correctly-detect-flush-requests-17131.patch} (88%)
rename debian/patches/{0013-contrib-initramfs-use-LVM-autoactivation-for-activat.patch => 0011-contrib-initramfs-use-LVM-autoactivation-for-activat.patch} (92%)
delete mode 100644 debian/patches/0014-Linux-6.14-dops-d_revalidate-now-takes-four-args.patch
delete mode 100644 debian/patches/0015-Linux-6.14-BLK_MQ_F_SHOULD_MERGE-was-removed.patch
diff --git a/debian/patches/0001-Check-for-META-and-DCH-consistency-in-autoconf.patch b/debian/patches/0001-Check-for-META-and-DCH-consistency-in-autoconf.patch
index 504bb9987..41fa5b583 100644
--- a/debian/patches/0001-Check-for-META-and-DCH-consistency-in-autoconf.patch
+++ b/debian/patches/0001-Check-for-META-and-DCH-consistency-in-autoconf.patch
@@ -10,7 +10,7 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1 file changed, 29 insertions(+), 5 deletions(-)
diff --git a/config/zfs-meta.m4 b/config/zfs-meta.m4
-index 20064a0fb..4d5f545ad 100644
+index 20064a0fb5957288640494bd6a640942796050b4..4d5f545adc23c3c97604b7a1ac47e9711c81e9fd 100644
--- a/config/zfs-meta.m4
+++ b/config/zfs-meta.m4
@@ -1,9 +1,10 @@
diff --git a/debian/patches/0002-always-load-ZFS-module-on-boot.patch b/debian/patches/0002-always-load-ZFS-module-on-boot.patch
index 6b1e068b1..87133b7f5 100644
--- a/debian/patches/0002-always-load-ZFS-module-on-boot.patch
+++ b/debian/patches/0002-always-load-ZFS-module-on-boot.patch
@@ -19,7 +19,7 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/etc/modules-load.d/zfs.conf b/etc/modules-load.d/zfs.conf
-index 44e1bb3ed..7509b03cb 100644
+index 44e1bb3ed906b18eaafb6376471380b669590357..7509b03cb7dd8f326db514841f3f90d47021caa7 100644
--- a/etc/modules-load.d/zfs.conf
+++ b/etc/modules-load.d/zfs.conf
@@ -1,3 +1,3 @@
diff --git a/debian/patches/0003-Fix-the-path-to-the-zed-binary-on-the-systemd-unit.patch b/debian/patches/0003-Fix-the-path-to-the-zed-binary-on-the-systemd-unit.patch
index fa365df58..51f3e4e24 100644
--- a/debian/patches/0003-Fix-the-path-to-the-zed-binary-on-the-systemd-unit.patch
+++ b/debian/patches/0003-Fix-the-path-to-the-zed-binary-on-the-systemd-unit.patch
@@ -13,7 +13,7 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/etc/systemd/system/zfs-zed.service.in b/etc/systemd/system/zfs-zed.service.in
-index be2fc6734..7606604ec 100644
+index be2fc67348f937b074554377f46d51ef81ef5105..7606604ec0251a1239e078dc15fd969a584bb843 100644
--- a/etc/systemd/system/zfs-zed.service.in
+++ b/etc/systemd/system/zfs-zed.service.in
@@ -5,7 +5,7 @@ ConditionPathIsDirectory=/sys/module/zfs
diff --git a/debian/patches/0004-import-with-d-dev-disk-by-id-in-scan-service.patch b/debian/patches/0004-import-with-d-dev-disk-by-id-in-scan-service.patch
index 7ea61c811..d551dc595 100644
--- a/debian/patches/0004-import-with-d-dev-disk-by-id-in-scan-service.patch
+++ b/debian/patches/0004-import-with-d-dev-disk-by-id-in-scan-service.patch
@@ -14,7 +14,7 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/etc/systemd/system/zfs-import-scan.service.in b/etc/systemd/system/zfs-import-scan.service.in
-index c5dd45d87..1c792edf0 100644
+index c5dd45d87e683bd78d0d09c506f5b3db179edf2e..1c792edf054271560ec482b0ed8eab8e7b900253 100644
--- a/etc/systemd/system/zfs-import-scan.service.in
+++ b/etc/systemd/system/zfs-import-scan.service.in
@@ -14,7 +14,7 @@ ConditionPathIsDirectory=/sys/module/zfs
diff --git a/debian/patches/0005-Enable-zed-emails.patch b/debian/patches/0005-Enable-zed-emails.patch
index c3ccdecd7..87769ce10 100644
--- a/debian/patches/0005-Enable-zed-emails.patch
+++ b/debian/patches/0005-Enable-zed-emails.patch
@@ -13,7 +13,7 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cmd/zed/zed.d/zed.rc b/cmd/zed/zed.d/zed.rc
-index 859c6f9cb..9d1ee1560 100644
+index af56147a969b0bef63a9cd6b16b1b59c186298b0..47fb01631ec46980ba1ffbce5d2bb69a3e7150fc 100644
--- a/cmd/zed/zed.d/zed.rc
+++ b/cmd/zed/zed.d/zed.rc
@@ -41,7 +41,7 @@ ZED_EMAIL_ADDR="root"
diff --git a/debian/patches/0006-dont-symlink-zed-scripts.patch b/debian/patches/0006-dont-symlink-zed-scripts.patch
index 2591e8604..eeb8791df 100644
--- a/debian/patches/0006-dont-symlink-zed-scripts.patch
+++ b/debian/patches/0006-dont-symlink-zed-scripts.patch
@@ -32,7 +32,7 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cmd/zed/zed.d/Makefile.am b/cmd/zed/zed.d/Makefile.am
-index 093a04c46..e5e735d00 100644
+index 093a04c4636a7455f8ca4b2644f79967736d6233..e5e735d0013a6a20120ff9529a9275557cf88713 100644
--- a/cmd/zed/zed.d/Makefile.am
+++ b/cmd/zed/zed.d/Makefile.am
@@ -50,7 +50,7 @@ zed-install-data-hook:
diff --git a/debian/patches/0007-Add-systemd-unit-for-importing-specific-pools.patch b/debian/patches/0007-Add-systemd-unit-for-importing-specific-pools.patch
index 0600296fb..d9fa32321 100644
--- a/debian/patches/0007-Add-systemd-unit-for-importing-specific-pools.patch
+++ b/debian/patches/0007-Add-systemd-unit-for-importing-specific-pools.patch
@@ -23,7 +23,7 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
create mode 100644 etc/systemd/system/zfs-import@.service.in
diff --git a/etc/Makefile.am b/etc/Makefile.am
-index 7187762d3..de131dc87 100644
+index 7187762d380296981b0c440824676f8e47355816..de131dc87952e0668ef9d2b5b951634c939bfff5 100644
--- a/etc/Makefile.am
+++ b/etc/Makefile.am
@@ -54,6 +54,7 @@ dist_systemdpreset_DATA = \
@@ -35,7 +35,7 @@ index 7187762d3..de131dc87 100644
%D%/systemd/system/zfs-mount.service \
%D%/systemd/system/zfs-scrub-monthly@.timer \
diff --git a/etc/systemd/system/50-zfs.preset b/etc/systemd/system/50-zfs.preset
-index e4056a92c..030611419 100644
+index e4056a92cd985380aa7346f4b291e4c3f1446fee..030611419816c83bcb86fc9c23d31855014eb914 100644
--- a/etc/systemd/system/50-zfs.preset
+++ b/etc/systemd/system/50-zfs.preset
@@ -1,6 +1,7 @@
@@ -48,7 +48,7 @@ index e4056a92c..030611419 100644
enable zfs-share.service
diff --git a/etc/systemd/system/zfs-import@.service.in b/etc/systemd/system/zfs-import@.service.in
new file mode 100644
-index 000000000..5bd19fb79
+index 0000000000000000000000000000000000000000..5bd19fb795e2b5554b3932e0b84b2668c2dcac51
--- /dev/null
+++ b/etc/systemd/system/zfs-import@.service.in
@@ -0,0 +1,18 @@
diff --git a/debian/patches/0008-Patch-move-manpage-arcstat-1-to-arcstat-8.patch b/debian/patches/0008-Patch-move-manpage-arcstat-1-to-arcstat-8.patch
index 9a4aea56e..af7509c64 100644
--- a/debian/patches/0008-Patch-move-manpage-arcstat-1-to-arcstat-8.patch
+++ b/debian/patches/0008-Patch-move-manpage-arcstat-1-to-arcstat-8.patch
@@ -15,7 +15,7 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
rename man/{man1/arcstat.1 => man8/arcstat.8} (99%)
diff --git a/man/Makefile.am b/man/Makefile.am
-index 43bb014dd..a9293468a 100644
+index fde7049337640aadc5c4c699c1ccf56c444f9850..f7bd823a2343b7089d38351d510c3ade451f55a1 100644
--- a/man/Makefile.am
+++ b/man/Makefile.am
@@ -2,7 +2,6 @@ dist_noinst_man_MANS = \
@@ -38,7 +38,7 @@ diff --git a/man/man1/arcstat.1 b/man/man8/arcstat.8
similarity index 99%
rename from man/man1/arcstat.1
rename to man/man8/arcstat.8
-index 82358fa68..a8fb55498 100644
+index 019a8270204a2551035a407ca0e3537ef1d345f3..4128104d5e32ea8804c80712971685c1279cac9b 100644
--- a/man/man1/arcstat.1
+++ b/man/man8/arcstat.8
@@ -13,7 +13,7 @@
diff --git a/debian/patches/0009-arc-stat-summary-guard-access-to-freshly-introduced-.patch b/debian/patches/0009-arc-stat-summary-guard-access-to-freshly-introduced-.patch
deleted file mode 100644
index 75e68c895..000000000
--- a/debian/patches/0009-arc-stat-summary-guard-access-to-freshly-introduced-.patch
+++ /dev/null
@@ -1,439 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Thomas Lamprecht <t.lamprecht@proxmox.com>
-Date: Wed, 10 Nov 2021 09:29:47 +0100
-Subject: [PATCH] arc stat/summary: guard access to freshly introduced stats
-
-l2arc MFU/MRU and zfetch past future and stride stats were introduced
-in 2.1 and 2.2.4 respectively:
-
-commit 085321621e79a75bea41c2b6511da6ebfbf2ba0a added printing MFU
-and MRU stats for 2.1 user space tools, but those keys are not
-available in the 2.0 module. That means it may break the arcstat and
-arc_summary tools after upgrade to 2.1 (user space), before a reboot
-to the new 2.1 ZFS kernel-module happened, due to python raising a
-KeyError on the dict access then.
-
-Move those two keys to a .get accessor with `0` as fallback, as it
-should be better to show some possible wrong data for new stat-keys
-than throwing an exception.
-
-also move l2_mfu_asize l2_mru_asize l2_prefetch_asize
-l2_bufc_data_asize l2_bufc_metadata_asize to .get accessor
-(these are only present with a cache device in the pool)
-
-guard access to iohits and uncached state introduced in
-792a6ee462efc15a7614f27e13f0f8aaa9414a08
-
-guard access to zfetch past future stride stats introduced in
-026fe796465e3da7b27d06ef5338634ee6dd30d8
-
-These are present in the current kernel, but lead to an exception, if
-running the new user-space with an old kernel module.
-
-Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
----
- cmd/arc_summary | 132 ++++++++++++++++++++++++------------------------
- cmd/arcstat.in | 48 +++++++++---------
- 2 files changed, 90 insertions(+), 90 deletions(-)
-
-diff --git a/cmd/arc_summary b/cmd/arc_summary
-index 100fb1987..30f5d23e9 100755
---- a/cmd/arc_summary
-+++ b/cmd/arc_summary
-@@ -551,21 +551,21 @@ def section_arc(kstats_dict):
- arc_target_size = arc_stats['c']
- arc_max = arc_stats['c_max']
- arc_min = arc_stats['c_min']
-- meta = arc_stats['meta']
-- pd = arc_stats['pd']
-- pm = arc_stats['pm']
-- anon_data = arc_stats['anon_data']
-- anon_metadata = arc_stats['anon_metadata']
-- mfu_data = arc_stats['mfu_data']
-- mfu_metadata = arc_stats['mfu_metadata']
-- mru_data = arc_stats['mru_data']
-- mru_metadata = arc_stats['mru_metadata']
-- mfug_data = arc_stats['mfu_ghost_data']
-- mfug_metadata = arc_stats['mfu_ghost_metadata']
-- mrug_data = arc_stats['mru_ghost_data']
-- mrug_metadata = arc_stats['mru_ghost_metadata']
-- unc_data = arc_stats['uncached_data']
-- unc_metadata = arc_stats['uncached_metadata']
-+ meta = arc_stats.get('meta', 0)
-+ pd = arc_stats.get('pd', 0)
-+ pm = arc_stats.get('pm', 0)
-+ anon_data = arc_stats.get('anon_data', 0)
-+ anon_metadata = arc_stats.get('anon_metadata', 0)
-+ mfu_data = arc_stats.get('mfu_data', 0)
-+ mfu_metadata = arc_stats.get('mfu_metadata', 0)
-+ mru_data = arc_stats.get('mru_data', 0)
-+ mru_metadata = arc_stats.get('mru_metadata', 0)
-+ mfug_data = arc_stats.get('mfu_ghost_data', 0)
-+ mfug_metadata = arc_stats.get('mfu_ghost_metadata', 0)
-+ mrug_data = arc_stats.get('mru_ghost_data', 0)
-+ mrug_metadata = arc_stats.get('mru_ghost_metadata', 0)
-+ unc_data = arc_stats.get('uncached_data', 0)
-+ unc_metadata = arc_stats.get('uncached_metadata', 0)
- bonus_size = arc_stats['bonus_size']
- dnode_limit = arc_stats['arc_dnode_limit']
- dnode_size = arc_stats['dnode_size']
-@@ -655,13 +655,13 @@ def section_arc(kstats_dict):
- prt_i1('L2 cached evictions:', f_bytes(arc_stats['evict_l2_cached']))
- prt_i1('L2 eligible evictions:', f_bytes(arc_stats['evict_l2_eligible']))
- prt_i2('L2 eligible MFU evictions:',
-- f_perc(arc_stats['evict_l2_eligible_mfu'],
-+ f_perc(arc_stats.get('evict_l2_eligible_mfu', 0), # 2.0 module compat
- arc_stats['evict_l2_eligible']),
-- f_bytes(arc_stats['evict_l2_eligible_mfu']))
-+ f_bytes(arc_stats.get('evict_l2_eligible_mfu', 0)))
- prt_i2('L2 eligible MRU evictions:',
-- f_perc(arc_stats['evict_l2_eligible_mru'],
-+ f_perc(arc_stats.get('evict_l2_eligible_mru', 0), # 2.0 module compat
- arc_stats['evict_l2_eligible']),
-- f_bytes(arc_stats['evict_l2_eligible_mru']))
-+ f_bytes(arc_stats.get('evict_l2_eligible_mru', 0)))
- prt_i1('L2 ineligible evictions:',
- f_bytes(arc_stats['evict_l2_ineligible']))
- print()
-@@ -672,106 +672,106 @@ def section_archits(kstats_dict):
- """
-
- arc_stats = isolate_section('arcstats', kstats_dict)
-- all_accesses = int(arc_stats['hits'])+int(arc_stats['iohits'])+\
-+ all_accesses = int(arc_stats['hits'])+int(arc_stats.get('iohits', 0))+\
- int(arc_stats['misses'])
-
- prt_1('ARC total accesses:', f_hits(all_accesses))
- ta_todo = (('Total hits:', arc_stats['hits']),
-- ('Total I/O hits:', arc_stats['iohits']),
-+ ('Total I/O hits:', arc_stats.get('iohits', 0)),
- ('Total misses:', arc_stats['misses']))
- for title, value in ta_todo:
- prt_i2(title, f_perc(value, all_accesses), f_hits(value))
- print()
-
- dd_total = int(arc_stats['demand_data_hits']) +\
-- int(arc_stats['demand_data_iohits']) +\
-+ int(arc_stats.get('demand_data_iohits', 0)) +\
- int(arc_stats['demand_data_misses'])
- prt_2('ARC demand data accesses:', f_perc(dd_total, all_accesses),
- f_hits(dd_total))
- dd_todo = (('Demand data hits:', arc_stats['demand_data_hits']),
-- ('Demand data I/O hits:', arc_stats['demand_data_iohits']),
-+ ('Demand data I/O hits:', arc_stats.get('demand_data_iohits', 0)),
- ('Demand data misses:', arc_stats['demand_data_misses']))
- for title, value in dd_todo:
- prt_i2(title, f_perc(value, dd_total), f_hits(value))
- print()
-
- dm_total = int(arc_stats['demand_metadata_hits']) +\
-- int(arc_stats['demand_metadata_iohits']) +\
-+ int(arc_stats.get('demand_metadata_iohits', 0)) +\
- int(arc_stats['demand_metadata_misses'])
- prt_2('ARC demand metadata accesses:', f_perc(dm_total, all_accesses),
- f_hits(dm_total))
- dm_todo = (('Demand metadata hits:', arc_stats['demand_metadata_hits']),
- ('Demand metadata I/O hits:',
-- arc_stats['demand_metadata_iohits']),
-+ arc_stats.get('demand_metadata_iohits', 0)),
- ('Demand metadata misses:', arc_stats['demand_metadata_misses']))
- for title, value in dm_todo:
- prt_i2(title, f_perc(value, dm_total), f_hits(value))
- print()
-
- pd_total = int(arc_stats['prefetch_data_hits']) +\
-- int(arc_stats['prefetch_data_iohits']) +\
-+ int(arc_stats.get('prefetch_data_iohits', 0)) +\
- int(arc_stats['prefetch_data_misses'])
- prt_2('ARC prefetch data accesses:', f_perc(pd_total, all_accesses),
- f_hits(pd_total))
- pd_todo = (('Prefetch data hits:', arc_stats['prefetch_data_hits']),
-- ('Prefetch data I/O hits:', arc_stats['prefetch_data_iohits']),
-+ ('Prefetch data I/O hits:', arc_stats.get('prefetch_data_iohits', 0)),
- ('Prefetch data misses:', arc_stats['prefetch_data_misses']))
- for title, value in pd_todo:
- prt_i2(title, f_perc(value, pd_total), f_hits(value))
- print()
-
- pm_total = int(arc_stats['prefetch_metadata_hits']) +\
-- int(arc_stats['prefetch_metadata_iohits']) +\
-+ int(arc_stats.get('prefetch_metadata_iohits', 0)) +\
- int(arc_stats['prefetch_metadata_misses'])
- prt_2('ARC prefetch metadata accesses:', f_perc(pm_total, all_accesses),
- f_hits(pm_total))
- pm_todo = (('Prefetch metadata hits:',
- arc_stats['prefetch_metadata_hits']),
- ('Prefetch metadata I/O hits:',
-- arc_stats['prefetch_metadata_iohits']),
-+ arc_stats.get('prefetch_metadata_iohits', 0)),
- ('Prefetch metadata misses:',
- arc_stats['prefetch_metadata_misses']))
- for title, value in pm_todo:
- prt_i2(title, f_perc(value, pm_total), f_hits(value))
- print()
-
-- all_prefetches = int(arc_stats['predictive_prefetch'])+\
-- int(arc_stats['prescient_prefetch'])
-+ all_prefetches = int(arc_stats.get('predictive_prefetch', 0))+\
-+ int(arc_stats.get('prescient_prefetch', 0))
- prt_2('ARC predictive prefetches:',
-- f_perc(arc_stats['predictive_prefetch'], all_prefetches),
-- f_hits(arc_stats['predictive_prefetch']))
-+ f_perc(arc_stats.get('predictive_prefetch', 0), all_prefetches),
-+ f_hits(arc_stats.get('predictive_prefetch', 0)))
- prt_i2('Demand hits after predictive:',
- f_perc(arc_stats['demand_hit_predictive_prefetch'],
-- arc_stats['predictive_prefetch']),
-+ arc_stats.get('predictive_prefetch', 0)),
- f_hits(arc_stats['demand_hit_predictive_prefetch']))
- prt_i2('Demand I/O hits after predictive:',
-- f_perc(arc_stats['demand_iohit_predictive_prefetch'],
-- arc_stats['predictive_prefetch']),
-- f_hits(arc_stats['demand_iohit_predictive_prefetch']))
-- never = int(arc_stats['predictive_prefetch']) -\
-+ f_perc(arc_stats.get('demand_iohit_predictive_prefetch', 0),
-+ arc_stats.get('predictive_prefetch', 0)),
-+ f_hits(arc_stats.get('demand_iohit_predictive_prefetch', 0)))
-+ never = int(arc_stats.get('predictive_prefetch', 0)) -\
- int(arc_stats['demand_hit_predictive_prefetch']) -\
-- int(arc_stats['demand_iohit_predictive_prefetch'])
-+ int(arc_stats.get('demand_iohit_predictive_prefetch', 0))
- prt_i2('Never demanded after predictive:',
-- f_perc(never, arc_stats['predictive_prefetch']),
-+ f_perc(never, arc_stats.get('predictive_prefetch', 0)),
- f_hits(never))
- print()
-
- prt_2('ARC prescient prefetches:',
-- f_perc(arc_stats['prescient_prefetch'], all_prefetches),
-- f_hits(arc_stats['prescient_prefetch']))
-+ f_perc(arc_stats.get('prescient_prefetch', 0), all_prefetches),
-+ f_hits(arc_stats.get('prescient_prefetch', 0)))
- prt_i2('Demand hits after prescient:',
- f_perc(arc_stats['demand_hit_prescient_prefetch'],
-- arc_stats['prescient_prefetch']),
-+ arc_stats.get('prescient_prefetch', 0)),
- f_hits(arc_stats['demand_hit_prescient_prefetch']))
- prt_i2('Demand I/O hits after prescient:',
-- f_perc(arc_stats['demand_iohit_prescient_prefetch'],
-- arc_stats['prescient_prefetch']),
-- f_hits(arc_stats['demand_iohit_prescient_prefetch']))
-- never = int(arc_stats['prescient_prefetch'])-\
-+ f_perc(arc_stats.get('demand_iohit_prescient_prefetch', 0),
-+ arc_stats.get('prescient_prefetch', 0)),
-+ f_hits(arc_stats.get('demand_iohit_prescient_prefetch', 0)))
-+ never = int(arc_stats.get('prescient_prefetch', 0))-\
- int(arc_stats['demand_hit_prescient_prefetch'])-\
-- int(arc_stats['demand_iohit_prescient_prefetch'])
-+ int(arc_stats.get('demand_iohit_prescient_prefetch', 0))
- prt_i2('Never demanded after prescient:',
-- f_perc(never, arc_stats['prescient_prefetch']),
-+ f_perc(never, arc_stats.get('prescient_prefetch', 0)),
- f_hits(never))
- print()
-
-@@ -782,7 +782,7 @@ def section_archits(kstats_dict):
- arc_stats['mfu_ghost_hits']),
- ('Most recently used (MRU) ghost:',
- arc_stats['mru_ghost_hits']),
-- ('Uncached:', arc_stats['uncached_hits']))
-+ ('Uncached:', arc_stats.get('uncached_hits', 0)))
- for title, value in cl_todo:
- prt_i2(title, f_perc(value, all_accesses), f_hits(value))
- print()
-@@ -794,26 +794,26 @@ def section_dmu(kstats_dict):
- zfetch_stats = isolate_section('zfetchstats', kstats_dict)
-
- zfetch_access_total = int(zfetch_stats['hits']) +\
-- int(zfetch_stats['future']) + int(zfetch_stats['stride']) +\
-- int(zfetch_stats['past']) + int(zfetch_stats['misses'])
-+ int(zfetch_stats.get('future', 0)) + int(zfetch_stats.get('stride', 0)) +\
-+ int(zfetch_stats.get('past', 0)) + int(zfetch_stats['misses'])
-
- prt_1('DMU predictive prefetcher calls:', f_hits(zfetch_access_total))
- prt_i2('Stream hits:',
- f_perc(zfetch_stats['hits'], zfetch_access_total),
- f_hits(zfetch_stats['hits']))
-- future = int(zfetch_stats['future']) + int(zfetch_stats['stride'])
-+ future = int(zfetch_stats.get('future', 0)) + int(zfetch_stats.get('stride', 0))
- prt_i2('Hits ahead of stream:', f_perc(future, zfetch_access_total),
- f_hits(future))
- prt_i2('Hits behind stream:',
-- f_perc(zfetch_stats['past'], zfetch_access_total),
-- f_hits(zfetch_stats['past']))
-+ f_perc(zfetch_stats.get('past', 0), zfetch_access_total),
-+ f_hits(zfetch_stats.get('past', 0)))
- prt_i2('Stream misses:',
- f_perc(zfetch_stats['misses'], zfetch_access_total),
- f_hits(zfetch_stats['misses']))
- prt_i2('Streams limit reached:',
- f_perc(zfetch_stats['max_streams'], zfetch_stats['misses']),
- f_hits(zfetch_stats['max_streams']))
-- prt_i1('Stream strides:', f_hits(zfetch_stats['stride']))
-+ prt_i1('Stream strides:', f_hits(zfetch_stats.get('stride', 0)))
- prt_i1('Prefetches issued', f_hits(zfetch_stats['io_issued']))
- print()
-
-@@ -860,20 +860,20 @@ def section_l2arc(kstats_dict):
- f_perc(arc_stats['l2_hdr_size'], arc_stats['l2_size']),
- f_bytes(arc_stats['l2_hdr_size']))
- prt_i2('MFU allocated size:',
-- f_perc(arc_stats['l2_mfu_asize'], arc_stats['l2_asize']),
-- f_bytes(arc_stats['l2_mfu_asize']))
-+ f_perc(arc_stats.get('l2_mfu_asize', 0), arc_stats['l2_asize']),
-+ f_bytes(arc_stats.get('l2_mfu_asize', 0))) # 2.0 module compat
- prt_i2('MRU allocated size:',
-- f_perc(arc_stats['l2_mru_asize'], arc_stats['l2_asize']),
-- f_bytes(arc_stats['l2_mru_asize']))
-+ f_perc(arc_stats.get('l2_mru_asize', 0), arc_stats['l2_asize']),
-+ f_bytes(arc_stats.get('l2_mru_asize', 0))) # 2.0 module compat
- prt_i2('Prefetch allocated size:',
-- f_perc(arc_stats['l2_prefetch_asize'], arc_stats['l2_asize']),
-- f_bytes(arc_stats['l2_prefetch_asize']))
-+ f_perc(arc_stats.get('l2_prefetch_asize', 0), arc_stats['l2_asize']),
-+ f_bytes(arc_stats.get('l2_prefetch_asize',0))) # 2.0 module compat
- prt_i2('Data (buffer content) allocated size:',
-- f_perc(arc_stats['l2_bufc_data_asize'], arc_stats['l2_asize']),
-- f_bytes(arc_stats['l2_bufc_data_asize']))
-+ f_perc(arc_stats.get('l2_bufc_data_asize', 0), arc_stats['l2_asize']),
-+ f_bytes(arc_stats.get('l2_bufc_data_asize', 0))) # 2.0 module compat
- prt_i2('Metadata (buffer content) allocated size:',
-- f_perc(arc_stats['l2_bufc_metadata_asize'], arc_stats['l2_asize']),
-- f_bytes(arc_stats['l2_bufc_metadata_asize']))
-+ f_perc(arc_stats.get('l2_bufc_metadata_asize', 0), arc_stats['l2_asize']),
-+ f_bytes(arc_stats.get('l2_bufc_metadata_asize', 0))) # 2.0 module compat
-
- print()
- prt_1('L2ARC breakdown:', f_hits(l2_access_total))
-diff --git a/cmd/arcstat.in b/cmd/arcstat.in
-index c4f10a1d6..bf47ec90e 100755
---- a/cmd/arcstat.in
-+++ b/cmd/arcstat.in
-@@ -510,7 +510,7 @@ def calculate():
- v = dict()
- v["time"] = time.strftime("%H:%M:%S", time.localtime())
- v["hits"] = d["hits"] // sint
-- v["iohs"] = d["iohits"] // sint
-+ v["iohs"] = d.get("iohits", 0) // sint
- v["miss"] = d["misses"] // sint
- v["read"] = v["hits"] + v["iohs"] + v["miss"]
- v["hit%"] = 100 * v["hits"] // v["read"] if v["read"] > 0 else 0
-@@ -518,7 +518,7 @@ def calculate():
- v["miss%"] = 100 - v["hit%"] - v["ioh%"] if v["read"] > 0 else 0
-
- v["dhit"] = (d["demand_data_hits"] + d["demand_metadata_hits"]) // sint
-- v["dioh"] = (d["demand_data_iohits"] + d["demand_metadata_iohits"]) // sint
-+ v["dioh"] = (d.get("demand_data_iohits", 0) + d.get("demand_metadata_iohits", 0)) // sint
- v["dmis"] = (d["demand_data_misses"] + d["demand_metadata_misses"]) // sint
-
- v["dread"] = v["dhit"] + v["dioh"] + v["dmis"]
-@@ -527,7 +527,7 @@ def calculate():
- v["dm%"] = 100 - v["dh%"] - v["di%"] if v["dread"] > 0 else 0
-
- v["ddhit"] = d["demand_data_hits"] // sint
-- v["ddioh"] = d["demand_data_iohits"] // sint
-+ v["ddioh"] = d.get("demand_data_iohits", 0) // sint
- v["ddmis"] = d["demand_data_misses"] // sint
-
- v["ddread"] = v["ddhit"] + v["ddioh"] + v["ddmis"]
-@@ -536,7 +536,7 @@ def calculate():
- v["ddm%"] = 100 - v["ddh%"] - v["ddi%"] if v["ddread"] > 0 else 0
-
- v["dmhit"] = d["demand_metadata_hits"] // sint
-- v["dmioh"] = d["demand_metadata_iohits"] // sint
-+ v["dmioh"] = d.get("demand_metadata_iohits", 0) // sint
- v["dmmis"] = d["demand_metadata_misses"] // sint
-
- v["dmread"] = v["dmhit"] + v["dmioh"] + v["dmmis"]
-@@ -545,8 +545,8 @@ def calculate():
- v["dmm%"] = 100 - v["dmh%"] - v["dmi%"] if v["dmread"] > 0 else 0
-
- v["phit"] = (d["prefetch_data_hits"] + d["prefetch_metadata_hits"]) // sint
-- v["pioh"] = (d["prefetch_data_iohits"] +
-- d["prefetch_metadata_iohits"]) // sint
-+ v["pioh"] = (d.get("prefetch_data_iohits", 0) +
-+ d.get("prefetch_metadata_iohits", 0)) // sint
- v["pmis"] = (d["prefetch_data_misses"] +
- d["prefetch_metadata_misses"]) // sint
-
-@@ -556,7 +556,7 @@ def calculate():
- v["pm%"] = 100 - v["ph%"] - v["pi%"] if v["pread"] > 0 else 0
-
- v["pdhit"] = d["prefetch_data_hits"] // sint
-- v["pdioh"] = d["prefetch_data_iohits"] // sint
-+ v["pdioh"] = d.get("prefetch_data_iohits", 0) // sint
- v["pdmis"] = d["prefetch_data_misses"] // sint
-
- v["pdread"] = v["pdhit"] + v["pdioh"] + v["pdmis"]
-@@ -565,7 +565,7 @@ def calculate():
- v["pdm%"] = 100 - v["pdh%"] - v["pdi%"] if v["pdread"] > 0 else 0
-
- v["pmhit"] = d["prefetch_metadata_hits"] // sint
-- v["pmioh"] = d["prefetch_metadata_iohits"] // sint
-+ v["pmioh"] = d.get("prefetch_metadata_iohits", 0) // sint
- v["pmmis"] = d["prefetch_metadata_misses"] // sint
-
- v["pmread"] = v["pmhit"] + v["pmioh"] + v["pmmis"]
-@@ -575,8 +575,8 @@ def calculate():
-
- v["mhit"] = (d["prefetch_metadata_hits"] +
- d["demand_metadata_hits"]) // sint
-- v["mioh"] = (d["prefetch_metadata_iohits"] +
-- d["demand_metadata_iohits"]) // sint
-+ v["mioh"] = (d.get("prefetch_metadata_iohits", 0) +
-+ d.get("demand_metadata_iohits", 0)) // sint
- v["mmis"] = (d["prefetch_metadata_misses"] +
- d["demand_metadata_misses"]) // sint
-
-@@ -592,24 +592,24 @@ def calculate():
- v["mru"] = d["mru_hits"] // sint
- v["mrug"] = d["mru_ghost_hits"] // sint
- v["mfug"] = d["mfu_ghost_hits"] // sint
-- v["unc"] = d["uncached_hits"] // sint
-+ v["unc"] = d.get("uncached_hits", 0) // sint
- v["eskip"] = d["evict_skip"] // sint
- v["el2skip"] = d["evict_l2_skip"] // sint
- v["el2cach"] = d["evict_l2_cached"] // sint
- v["el2el"] = d["evict_l2_eligible"] // sint
-- v["el2mfu"] = d["evict_l2_eligible_mfu"] // sint
-- v["el2mru"] = d["evict_l2_eligible_mru"] // sint
-+ v["el2mfu"] = d.get("evict_l2_eligible_mfu", 0) // sint
-+ v["el2mru"] = d.get("evict_l2_eligible_mru", 0) // sint
- v["el2inel"] = d["evict_l2_ineligible"] // sint
- v["mtxmis"] = d["mutex_miss"] // sint
-- v["ztotal"] = (d["zfetch_hits"] + d["zfetch_future"] + d["zfetch_stride"] +
-- d["zfetch_past"] + d["zfetch_misses"]) // sint
-+ v["ztotal"] = (d["zfetch_hits"] + d.get("zfetch_future", 0) + d.get("zfetch_stride", 0) +
-+ d.get("zfetch_past", 0) + d["zfetch_misses"]) // sint
- v["zhits"] = d["zfetch_hits"] // sint
-- v["zahead"] = (d["zfetch_future"] + d["zfetch_stride"]) // sint
-- v["zpast"] = d["zfetch_past"] // sint
-+ v["zahead"] = (d.get("zfetch_future", 0) + d.get("zfetch_stride", 0)) // sint
-+ v["zpast"] = d.get("zfetch_past", 0) // sint
- v["zmisses"] = d["zfetch_misses"] // sint
- v["zmax"] = d["zfetch_max_streams"] // sint
-- v["zfuture"] = d["zfetch_future"] // sint
-- v["zstride"] = d["zfetch_stride"] // sint
-+ v["zfuture"] = d.get("zfetch_future", 0) // sint
-+ v["zstride"] = d.get("zfetch_stride", 0) // sint
- v["zissued"] = d["zfetch_io_issued"] // sint
- v["zactive"] = d["zfetch_io_active"] // sint
-
-@@ -624,11 +624,11 @@ def calculate():
- v["l2size"] = cur["l2_size"]
- v["l2bytes"] = d["l2_read_bytes"] // sint
-
-- v["l2pref"] = cur["l2_prefetch_asize"]
-- v["l2mfu"] = cur["l2_mfu_asize"]
-- v["l2mru"] = cur["l2_mru_asize"]
-- v["l2data"] = cur["l2_bufc_data_asize"]
-- v["l2meta"] = cur["l2_bufc_metadata_asize"]
-+ v["l2pref"] = cur.get("l2_prefetch_asize", 0)
-+ v["l2mfu"] = cur.get("l2_mfu_asize", 0)
-+ v["l2mru"] = cur.get("l2_mru_asize", 0)
-+ v["l2data"] = cur.get("l2_bufc_data_asize", 0)
-+ v["l2meta"] = cur.get("l2_bufc_metadata_asize", 0)
- v["l2pref%"] = 100 * v["l2pref"] // v["l2asize"]
- v["l2mfu%"] = 100 * v["l2mfu"] // v["l2asize"]
- v["l2mru%"] = 100 * v["l2mru"] // v["l2asize"]
diff --git a/debian/patches/0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch b/debian/patches/0009-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
similarity index 94%
rename from debian/patches/0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
rename to debian/patches/0009-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
index 29c7f9abb..7110becb4 100644
--- a/debian/patches/0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
+++ b/debian/patches/0009-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
@@ -51,10 +51,10 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/cmd/zpool/zpool_main.c b/cmd/zpool/zpool_main.c
-index ed0b8d7a1..f3acc49d0 100644
+index 5fcf0991de6623b44a5af07ad313b59017ae4484..b5d0735cd089b3286b0740e5691d3fcda54a44cb 100644
--- a/cmd/zpool/zpool_main.c
+++ b/cmd/zpool/zpool_main.c
-@@ -2663,7 +2663,8 @@ print_status_config(zpool_handle_t *zhp, status_cbdata_t *cb, const char *name,
+@@ -3116,7 +3116,8 @@ print_status_config(zpool_handle_t *zhp, status_cbdata_t *cb, const char *name,
if (vs->vs_scan_removing != 0) {
(void) printf(gettext(" (removing)"));
diff --git a/debian/patches/0010-Fix-nfs_truncate_shares-without-etc-exports.d.patch b/debian/patches/0010-Fix-nfs_truncate_shares-without-etc-exports.d.patch
deleted file mode 100644
index 3fa72205d..000000000
--- a/debian/patches/0010-Fix-nfs_truncate_shares-without-etc-exports.d.patch
+++ /dev/null
@@ -1,77 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: siv0 <github@nomore.at>
-Date: Tue, 31 Oct 2023 21:57:54 +0100
-Subject: [PATCH] Fix nfs_truncate_shares without /etc/exports.d
-
-Calling nfs_reset_shares on Linux prints a warning:
-`failed to lock /etc/exports.d/zfs.exports.lock: No such file or
-directory`
-when /etc/exports.d does not exist. The directory gets created, when a
-filesystem is actually exported through nfs_toggle_share and
-nfs_init_share. The truncation of /etc/exports.d/zfs.exports happens
-unconditionally when calling `zfs mount -a` (via zfs_do_mount and
-share_mount in `cmd/zfs/zfs_main.c`).
-
-Fixing the issue only in the Linux part, since the exports file on
-freebsd is in `/etc/zfs/`, which seems present on 2 FreeBSD systems I
-have access to (through `/etc/zfs/compatibility.d/`), while a Debian
-box does not have the directory even if `/usr/sbin/exportfs` is
-present through the `nfs-kernel-server` package.
-
-The code for exports_available is copied from nfs_available above.
-
-Fixes: ede037cda73675f42b1452187e8dd3438fafc220
-("Make zfs-share service resilient to stale exports")
-
-Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
-Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
-Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
-Closes #15369
-Closes #15468
-(cherry picked from commit 41e55b476bcfc90f1ad81c02c5375367fdace9e9)
-Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
----
- lib/libshare/os/linux/nfs.c | 18 ++++++++++++++++++
- 1 file changed, 18 insertions(+)
-
-diff --git a/lib/libshare/os/linux/nfs.c b/lib/libshare/os/linux/nfs.c
-index 004946b0c..3dce81840 100644
---- a/lib/libshare/os/linux/nfs.c
-+++ b/lib/libshare/os/linux/nfs.c
-@@ -47,6 +47,7 @@
-
-
- static boolean_t nfs_available(void);
-+static boolean_t exports_available(void);
-
- typedef int (*nfs_shareopt_callback_t)(const char *opt, const char *value,
- void *cookie);
-@@ -539,6 +540,8 @@ nfs_commit_shares(void)
- static void
- nfs_truncate_shares(void)
- {
-+ if (!exports_available())
-+ return;
- nfs_reset_shares(ZFS_EXPORTS_LOCK, ZFS_EXPORTS_FILE);
- }
-
-@@ -566,3 +569,18 @@ nfs_available(void)
-
- return (avail == 1);
- }
-+
-+static boolean_t
-+exports_available(void)
-+{
-+ static int avail;
-+
-+ if (!avail) {
-+ if (access(ZFS_EXPORTS_DIR, F_OK) != 0)
-+ avail = -1;
-+ else
-+ avail = 1;
-+ }
-+
-+ return (avail == 1);
-+}
diff --git a/debian/patches/0012-linux-zvols-correctly-detect-flush-requests.patch b/debian/patches/0010-linux-zvols-correctly-detect-flush-requests-17131.patch
similarity index 88%
rename from debian/patches/0012-linux-zvols-correctly-detect-flush-requests.patch
rename to debian/patches/0010-linux-zvols-correctly-detect-flush-requests-17131.patch
index 25159efd1..b38acae03 100644
--- a/debian/patches/0012-linux-zvols-correctly-detect-flush-requests.patch
+++ b/debian/patches/0010-linux-zvols-correctly-detect-flush-requests-17131.patch
@@ -1,7 +1,7 @@
-From 4482e91446c35d4194be49b715c6bb8a3ad9ba18 Mon Sep 17 00:00:00 2001
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Fabian-Gruenbichler <f.gruenbichler@proxmox.com>
Date: Wed, 12 Mar 2025 22:39:01 +0100
-Subject: [PATCH 12/12] linux: zvols: correctly detect flush requests (#17131)
+Subject: [PATCH] linux: zvols: correctly detect flush requests (#17131)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@@ -46,10 +46,10 @@ Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/os/linux/kernel/linux/blkdev_compat.h b/include/os/linux/kernel/linux/blkdev_compat.h
-index c0d377074..26e7b0b2a 100644
+index d96708c600ac6f8b07354286a0745dcdcf84cccb..9af496e8777b389733745ce37655c1ee1de67633 100644
--- a/include/os/linux/kernel/linux/blkdev_compat.h
+++ b/include/os/linux/kernel/linux/blkdev_compat.h
-@@ -356,7 +356,7 @@ bio_set_flush(struct bio *bio)
+@@ -383,7 +383,7 @@ bio_set_flush(struct bio *bio)
static inline boolean_t
bio_is_flush(struct bio *bio)
{
@@ -58,6 +58,3 @@ index c0d377074..26e7b0b2a 100644
}
/*
---
-2.39.5
-
diff --git a/debian/patches/0013-contrib-initramfs-use-LVM-autoactivation-for-activat.patch b/debian/patches/0011-contrib-initramfs-use-LVM-autoactivation-for-activat.patch
similarity index 92%
rename from debian/patches/0013-contrib-initramfs-use-LVM-autoactivation-for-activat.patch
rename to debian/patches/0011-contrib-initramfs-use-LVM-autoactivation-for-activat.patch
index e51096d0a..752242791 100644
--- a/debian/patches/0013-contrib-initramfs-use-LVM-autoactivation-for-activat.patch
+++ b/debian/patches/0011-contrib-initramfs-use-LVM-autoactivation-for-activat.patch
@@ -1,4 +1,4 @@
-From 3726c500deffadce1012b5c5ccf19515bb58bdbb Mon Sep 17 00:00:00 2001
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Friedrich Weber <f.weber@proxmox.com>
Date: Thu, 6 Mar 2025 11:44:36 +0100
Subject: [PATCH] contrib/initramfs: use LVM autoactivation for activating VGs
@@ -29,7 +29,7 @@ Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/contrib/initramfs/scripts/local-top/zfs b/contrib/initramfs/scripts/local-top/zfs
-index 6b80e9f43..fc455077e 100755
+index 6b80e9f43607215a377adac12936ddfa463e1d79..fc455077ec94fbd14343023ace2d7c2c7ba71159 100755
--- a/contrib/initramfs/scripts/local-top/zfs
+++ b/contrib/initramfs/scripts/local-top/zfs
@@ -41,9 +41,9 @@ activate_vg()
@@ -44,6 +44,3 @@ index 6b80e9f43..fc455077e 100755
return $?
}
---
-2.39.5
-
diff --git a/debian/patches/0014-Linux-6.14-dops-d_revalidate-now-takes-four-args.patch b/debian/patches/0014-Linux-6.14-dops-d_revalidate-now-takes-four-args.patch
deleted file mode 100644
index ccf81a2c9..000000000
--- a/debian/patches/0014-Linux-6.14-dops-d_revalidate-now-takes-four-args.patch
+++ /dev/null
@@ -1,103 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Rob Norris <robn@despairlabs.com>
-Date: Wed, 5 Feb 2025 17:14:20 +1100
-Subject: [PATCH] Linux 6.14: dops->d_revalidate now takes four args
-
-This is a convenience for filesystems that need the inode of their
-parent or their own name, as its often complicated to get that
-information. We don't need those things, so this is just detecting which
-prototype is expected and adjusting our callback to match.
-
-Sponsored-by: https://despairlabs.com/sponsor/
-Signed-off-by: Rob Norris <robn@despairlabs.com>
-Reviewed-by: Alexander Motin <mav@FreeBSD.org>
-Reviewed-by: Tony Hutter <hutter2@llnl.gov>
-(cherry picked from commit 7ef6b70e960a7cc504242952699057f0ee616449)
-Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
----
- config/kernel-automount.m4 | 41 ++++++++++++++++++++++++++++++--
- module/os/linux/zfs/zpl_ctldir.c | 6 +++++
- 2 files changed, 45 insertions(+), 2 deletions(-)
-
-diff --git a/config/kernel-automount.m4 b/config/kernel-automount.m4
-index 52f1931b7..b5f1392d0 100644
---- a/config/kernel-automount.m4
-+++ b/config/kernel-automount.m4
-@@ -5,7 +5,7 @@ dnl # solution to handling automounts. Prior to this cifs/nfs clients
- dnl # which required automount support would abuse the follow_link()
- dnl # operation on directories for this purpose.
- dnl #
--AC_DEFUN([ZFS_AC_KERNEL_SRC_AUTOMOUNT], [
-+AC_DEFUN([ZFS_AC_KERNEL_SRC_D_AUTOMOUNT], [
- ZFS_LINUX_TEST_SRC([dentry_operations_d_automount], [
- #include <linux/dcache.h>
- static struct vfsmount *d_automount(struct path *p) { return NULL; }
-@@ -15,7 +15,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_AUTOMOUNT], [
- ])
- ])
-
--AC_DEFUN([ZFS_AC_KERNEL_AUTOMOUNT], [
-+AC_DEFUN([ZFS_AC_KERNEL_D_AUTOMOUNT], [
- AC_MSG_CHECKING([whether dops->d_automount() exists])
- ZFS_LINUX_TEST_RESULT([dentry_operations_d_automount], [
- AC_MSG_RESULT(yes)
-@@ -23,3 +23,40 @@ AC_DEFUN([ZFS_AC_KERNEL_AUTOMOUNT], [
- ZFS_LINUX_TEST_ERROR([dops->d_automount()])
- ])
- ])
-+
-+dnl #
-+dnl # 6.14 API change
-+dnl # dops->d_revalidate now has four args.
-+dnl #
-+AC_DEFUN([ZFS_AC_KERNEL_SRC_D_REVALIDATE_4ARGS], [
-+ ZFS_LINUX_TEST_SRC([dentry_operations_d_revalidate_4args], [
-+ #include <linux/dcache.h>
-+ static int d_revalidate(struct inode *dir,
-+ const struct qstr *name, struct dentry *dentry,
-+ unsigned int fl) { return 0; }
-+ struct dentry_operations dops __attribute__ ((unused)) = {
-+ .d_revalidate = d_revalidate,
-+ };
-+ ])
-+])
-+
-+AC_DEFUN([ZFS_AC_KERNEL_D_REVALIDATE_4ARGS], [
-+ AC_MSG_CHECKING([whether dops->d_revalidate() takes 4 args])
-+ ZFS_LINUX_TEST_RESULT([dentry_operations_d_revalidate_4args], [
-+ AC_MSG_RESULT(yes)
-+ AC_DEFINE(HAVE_D_REVALIDATE_4ARGS, 1,
-+ [dops->d_revalidate() takes 4 args])
-+ ],[
-+ AC_MSG_RESULT(no)
-+ ])
-+])
-+
-+AC_DEFUN([ZFS_AC_KERNEL_SRC_AUTOMOUNT], [
-+ ZFS_AC_KERNEL_SRC_D_AUTOMOUNT
-+ ZFS_AC_KERNEL_SRC_D_REVALIDATE_4ARGS
-+])
-+
-+AC_DEFUN([ZFS_AC_KERNEL_AUTOMOUNT], [
-+ ZFS_AC_KERNEL_D_AUTOMOUNT
-+ ZFS_AC_KERNEL_D_REVALIDATE_4ARGS
-+])
-diff --git a/module/os/linux/zfs/zpl_ctldir.c b/module/os/linux/zfs/zpl_ctldir.c
-index 56a30be51..d6a755af6 100644
---- a/module/os/linux/zfs/zpl_ctldir.c
-+++ b/module/os/linux/zfs/zpl_ctldir.c
-@@ -185,8 +185,14 @@ zpl_snapdir_automount(struct path *path)
- * as of the 3.18 kernel revaliding the mountpoint dentry will result in
- * the snapshot being immediately unmounted.
- */
-+#ifdef HAVE_D_REVALIDATE_4ARGS
-+static int
-+zpl_snapdir_revalidate(struct inode *dir, const struct qstr *name,
-+ struct dentry *dentry, unsigned int flags)
-+#else
- static int
- zpl_snapdir_revalidate(struct dentry *dentry, unsigned int flags)
-+#endif
- {
- return (!!dentry->d_inode);
- }
diff --git a/debian/patches/0015-Linux-6.14-BLK_MQ_F_SHOULD_MERGE-was-removed.patch b/debian/patches/0015-Linux-6.14-BLK_MQ_F_SHOULD_MERGE-was-removed.patch
deleted file mode 100644
index c99461c32..000000000
--- a/debian/patches/0015-Linux-6.14-BLK_MQ_F_SHOULD_MERGE-was-removed.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Rob Norris <robn@despairlabs.com>
-Date: Wed, 5 Feb 2025 17:52:45 +1100
-Subject: [PATCH] Linux 6.14: BLK_MQ_F_SHOULD_MERGE was removed
-
-According to the upstream change, all callers set it, and all block
-devices either honoured it or ignored it, so removing it entirely allows
-a bunch of handling for the "unset" case to be removed, and it becomes
-effectively implied.
-
-We follow suit, and keep setting it for older kernels.
-
-Sponsored-by: https://despairlabs.com/sponsor/
-Signed-off-by: Rob Norris <robn@despairlabs.com>
-Reviewed-by: Alexander Motin <mav@FreeBSD.org>
-Reviewed-by: Tony Hutter <hutter2@llnl.gov>
-(cherry picked from commit 2ca91ba3cf209c6f1db42247ff2ca3f9ce4f2d4d)
-Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
----
- module/os/linux/zfs/zvol_os.c | 11 ++++++++++-
- 1 file changed, 10 insertions(+), 1 deletion(-)
-
-diff --git a/module/os/linux/zfs/zvol_os.c b/module/os/linux/zfs/zvol_os.c
-index 01f812b8e..4c61ae232 100644
---- a/module/os/linux/zfs/zvol_os.c
-+++ b/module/os/linux/zfs/zvol_os.c
-@@ -202,7 +202,16 @@ static int zvol_blk_mq_alloc_tag_set(zvol_state_t *zv)
- * We need BLK_MQ_F_BLOCKING here since we do blocking calls in
- * zvol_request_impl()
- */
-- zso->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING;
-+ zso->tag_set.flags = BLK_MQ_F_BLOCKING;
-+
-+#ifdef BLK_MQ_F_SHOULD_MERGE
-+ /*
-+ * Linux 6.14 removed BLK_MQ_F_SHOULD_MERGE and made it implicit.
-+ * For older kernels, we set it.
-+ */
-+ zso->tag_set.flags |= BLK_MQ_F_SHOULD_MERGE;
-+#endif
-+
- zso->tag_set.driver_data = zv;
-
- return (blk_mq_alloc_tag_set(&zso->tag_set));
diff --git a/debian/patches/series b/debian/patches/series
index 7914934db..e3103f9b4 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -6,10 +6,6 @@
0006-dont-symlink-zed-scripts.patch
0007-Add-systemd-unit-for-importing-specific-pools.patch
0008-Patch-move-manpage-arcstat-1-to-arcstat-8.patch
-0009-arc-stat-summary-guard-access-to-freshly-introduced-.patch
-0010-Fix-nfs_truncate_shares-without-etc-exports.d.patch
-0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
-0012-linux-zvols-correctly-detect-flush-requests.patch
-0013-contrib-initramfs-use-LVM-autoactivation-for-activat.patch
-0014-Linux-6.14-dops-d_revalidate-now-takes-four-args.patch
-0015-Linux-6.14-BLK_MQ_F_SHOULD_MERGE-was-removed.patch
+0009-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
+0010-linux-zvols-correctly-detect-flush-requests-17131.patch
+0011-contrib-initramfs-use-LVM-autoactivation-for-activat.patch
diff --git a/upstream b/upstream
index e269af1b3..f3e4043a3 160000
--- a/upstream
+++ b/upstream
@@ -1 +1 @@
-Subproject commit e269af1b3c7b1b1c000d05f147a2f75e5e72e0ca
+Subproject commit f3e4043a36942e67ccbc05318479a07d242fc611
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH zfsonlinux 8/8] cherry-pick fix for ABI break from zfs 2.3.2-staging
2025-03-31 13:41 [pve-devel] [PATCH zfsonlinux 0/8] update to ZFS 2.3.1 Stoiko Ivanov
` (6 preceding siblings ...)
2025-03-31 13:41 ` [pve-devel] [PATCH zfsonlinux 7/8] d/control: add Multi-Arch attributes for binary packages Stoiko Ivanov
@ 2025-03-31 13:41 ` Stoiko Ivanov
7 siblings, 0 replies; 9+ messages in thread
From: Stoiko Ivanov @ 2025-03-31 13:41 UTC (permalink / raw)
To: pve-devel
Without this patch, many common operations break when running with
a kernel module < 2.3.1.
Noticed while testing replication with our current 2.2.7 kernel module
and a userspace from 2.3.1.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
---
...ount-matches-and-injections-for-each.patch | 500 ++++++++++++++++++
debian/patches/series | 1 +
2 files changed, 501 insertions(+)
create mode 100644 debian/patches/0012-Revert-zinject-count-matches-and-injections-for-each.patch
diff --git a/debian/patches/0012-Revert-zinject-count-matches-and-injections-for-each.patch b/debian/patches/0012-Revert-zinject-count-matches-and-injections-for-each.patch
new file mode 100644
index 000000000..9f829f124
--- /dev/null
+++ b/debian/patches/0012-Revert-zinject-count-matches-and-injections-for-each.patch
@@ -0,0 +1,500 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Rob Norris <rob.norris@klarasystems.com>
+Date: Tue, 25 Mar 2025 07:49:10 +1100
+Subject: [PATCH] Revert "zinject: count matches and injections for each
+ handler" (#17137)
+
+Adding fields to zinject_record_t unexpectedly extended zfs_cmd_t,
+preventing some things working properly with 2.3.1 userspace tools
+against 2.3.0 kernel module.
+
+This reverts commit fabdd502f4f04e27d057aedc7fb7697e7bd95b74.
+
+Sponsored-by: Klara, Inc.
+Sponsored-by: Wasabi Technology, Inc.
+
+Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
+Reviewed-by: Alexander Motin <mav@FreeBSD.org>
+Reviewed-by: Tony Hutter <hutter2@llnl.gov>
+(cherry picked from commit 5f7037067e3113332ebfcb2913fd5d5183898540)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ cmd/zinject/zinject.c | 64 +++-----
+ include/sys/zfs_ioctl.h | 2 -
+ module/zfs/zio_inject.c | 58 ++-----
+ tests/runfiles/common.run | 2 +-
+ tests/zfs-tests/tests/Makefile.am | 1 -
+ .../cli_root/zinject/zinject_counts.ksh | 142 ------------------
+ 6 files changed, 36 insertions(+), 233 deletions(-)
+ delete mode 100755 tests/zfs-tests/tests/functional/cli_root/zinject/zinject_counts.ksh
+
+diff --git a/cmd/zinject/zinject.c b/cmd/zinject/zinject.c
+index 4374e69a7f94d9529f9db33c8646bd30010f0ec7..fdb2221eaea63f7f7b290323eaf0d810c2b76697 100644
+--- a/cmd/zinject/zinject.c
++++ b/cmd/zinject/zinject.c
+@@ -434,29 +434,26 @@ print_data_handler(int id, const char *pool, zinject_record_t *record,
+
+ if (*count == 0) {
+ (void) printf("%3s %-15s %-6s %-6s %-8s %3s %-4s "
+- "%-15s %-6s %-15s\n", "ID", "POOL", "OBJSET", "OBJECT",
+- "TYPE", "LVL", "DVAs", "RANGE", "MATCH", "INJECT");
++ "%-15s\n", "ID", "POOL", "OBJSET", "OBJECT", "TYPE",
++ "LVL", "DVAs", "RANGE");
+ (void) printf("--- --------------- ------ "
+- "------ -------- --- ---- --------------- "
+- "------ ------\n");
++ "------ -------- --- ---- ---------------\n");
+ }
+
+ *count += 1;
+
+- char rangebuf[32];
+- if (record->zi_start == 0 && record->zi_end == -1ULL)
+- snprintf(rangebuf, sizeof (rangebuf), "all");
+- else
+- snprintf(rangebuf, sizeof (rangebuf), "[%llu, %llu]",
+- (u_longlong_t)record->zi_start,
+- (u_longlong_t)record->zi_end);
++ (void) printf("%3d %-15s %-6llu %-6llu %-8s %-3d 0x%02x ",
++ id, pool, (u_longlong_t)record->zi_objset,
++ (u_longlong_t)record->zi_object, type_to_name(record->zi_type),
++ record->zi_level, record->zi_dvas);
+
+
+- (void) printf("%3d %-15s %-6llu %-6llu %-8s %-3d 0x%02x %-15s "
+- "%6lu %6lu\n", id, pool, (u_longlong_t)record->zi_objset,
+- (u_longlong_t)record->zi_object, type_to_name(record->zi_type),
+- record->zi_level, record->zi_dvas, rangebuf,
+- record->zi_match_count, record->zi_inject_count);
++ if (record->zi_start == 0 &&
++ record->zi_end == -1ULL)
++ (void) printf("all\n");
++ else
++ (void) printf("[%llu, %llu]\n", (u_longlong_t)record->zi_start,
++ (u_longlong_t)record->zi_end);
+
+ return (0);
+ }
+@@ -474,14 +471,11 @@ print_device_handler(int id, const char *pool, zinject_record_t *record,
+ return (0);
+
+ if (*count == 0) {
+- (void) printf("%3s %-15s %-16s %-5s %-10s %-9s "
+- "%-6s %-6s\n",
+- "ID", "POOL", "GUID", "TYPE", "ERROR", "FREQ",
+- "MATCH", "INJECT");
++ (void) printf("%3s %-15s %-16s %-5s %-10s %-9s\n",
++ "ID", "POOL", "GUID", "TYPE", "ERROR", "FREQ");
+ (void) printf(
+ "--- --------------- ---------------- "
+- "----- ---------- --------- "
+- "------ ------\n");
++ "----- ---------- ---------\n");
+ }
+
+ *count += 1;
+@@ -489,10 +483,9 @@ print_device_handler(int id, const char *pool, zinject_record_t *record,
+ double freq = record->zi_freq == 0 ? 100.0f :
+ (((double)record->zi_freq) / ZI_PERCENTAGE_MAX) * 100.0f;
+
+- (void) printf("%3d %-15s %llx %-5s %-10s %8.4f%% "
+- "%6lu %6lu\n", id, pool, (u_longlong_t)record->zi_guid,
+- iotype_to_str(record->zi_iotype), err_to_str(record->zi_error),
+- freq, record->zi_match_count, record->zi_inject_count);
++ (void) printf("%3d %-15s %llx %-5s %-10s %8.4f%%\n", id, pool,
++ (u_longlong_t)record->zi_guid, iotype_to_str(record->zi_iotype),
++ err_to_str(record->zi_error), freq);
+
+ return (0);
+ }
+@@ -510,25 +503,18 @@ print_delay_handler(int id, const char *pool, zinject_record_t *record,
+ return (0);
+
+ if (*count == 0) {
+- (void) printf("%3s %-15s %-16s %-10s %-5s %-9s "
+- "%-6s %-6s\n",
+- "ID", "POOL", "GUID", "DELAY (ms)", "LANES", "FREQ",
+- "MATCH", "INJECT");
+- (void) printf("--- --------------- ---------------- "
+- "---------- ----- --------- "
+- "------ ------\n");
++ (void) printf("%3s %-15s %-15s %-15s %s\n",
++ "ID", "POOL", "DELAY (ms)", "LANES", "GUID");
++ (void) printf("--- --------------- --------------- "
++ "--------------- ----------------\n");
+ }
+
+ *count += 1;
+
+- double freq = record->zi_freq == 0 ? 100.0f :
+- (((double)record->zi_freq) / ZI_PERCENTAGE_MAX) * 100.0f;
+-
+- (void) printf("%3d %-15s %llx %10llu %5llu %8.4f%% "
+- "%6lu %6lu\n", id, pool, (u_longlong_t)record->zi_guid,
++ (void) printf("%3d %-15s %-15llu %-15llu %llx\n", id, pool,
+ (u_longlong_t)NSEC2MSEC(record->zi_timer),
+ (u_longlong_t)record->zi_nlanes,
+- freq, record->zi_match_count, record->zi_inject_count);
++ (u_longlong_t)record->zi_guid);
+
+ return (0);
+ }
+diff --git a/include/sys/zfs_ioctl.h b/include/sys/zfs_ioctl.h
+index a8c3ffc76455c4165252b367d39df8f9ba0efc6e..7297ac7f4b3ec9e11d76e85105011c528adbabd5 100644
+--- a/include/sys/zfs_ioctl.h
++++ b/include/sys/zfs_ioctl.h
+@@ -421,8 +421,6 @@ typedef struct zinject_record {
+ uint64_t zi_nlanes;
+ uint32_t zi_cmd;
+ uint32_t zi_dvas;
+- uint64_t zi_match_count; /* count of times matched */
+- uint64_t zi_inject_count; /* count of times injected */
+ } zinject_record_t;
+
+ #define ZINJECT_NULL 0x1
+diff --git a/module/zfs/zio_inject.c b/module/zfs/zio_inject.c
+index f90044299cef4b936e71100f2c01e14750e4c7b6..05b8da3d4e51781bc988de7a8e247e8686a0f91f 100644
+--- a/module/zfs/zio_inject.c
++++ b/module/zfs/zio_inject.c
+@@ -129,9 +129,6 @@ static boolean_t
+ zio_match_handler(const zbookmark_phys_t *zb, uint64_t type, int dva,
+ zinject_record_t *record, int error)
+ {
+- boolean_t matched = B_FALSE;
+- boolean_t injected = B_FALSE;
+-
+ /*
+ * Check for a match against the MOS, which is based on type
+ */
+@@ -140,8 +137,9 @@ zio_match_handler(const zbookmark_phys_t *zb, uint64_t type, int dva,
+ record->zi_object == DMU_META_DNODE_OBJECT) {
+ if (record->zi_type == DMU_OT_NONE ||
+ type == record->zi_type)
+- matched = B_TRUE;
+- goto done;
++ return (freq_triggered(record->zi_freq));
++ else
++ return (B_FALSE);
+ }
+
+ /*
+@@ -155,20 +153,10 @@ zio_match_handler(const zbookmark_phys_t *zb, uint64_t type, int dva,
+ (record->zi_dvas == 0 ||
+ (dva != ZI_NO_DVA && (record->zi_dvas & (1ULL << dva)))) &&
+ error == record->zi_error) {
+- matched = B_TRUE;
+- goto done;
+- }
+-
+-done:
+- if (matched) {
+- record->zi_match_count++;
+- injected = freq_triggered(record->zi_freq);
++ return (freq_triggered(record->zi_freq));
+ }
+
+- if (injected)
+- record->zi_inject_count++;
+-
+- return (injected);
++ return (B_FALSE);
+ }
+
+ /*
+@@ -189,11 +177,8 @@ zio_handle_panic_injection(spa_t *spa, const char *tag, uint64_t type)
+ continue;
+
+ if (handler->zi_record.zi_type == type &&
+- strcmp(tag, handler->zi_record.zi_func) == 0) {
+- handler->zi_record.zi_match_count++;
+- handler->zi_record.zi_inject_count++;
++ strcmp(tag, handler->zi_record.zi_func) == 0)
+ panic("Panic requested in function %s\n", tag);
+- }
+ }
+
+ rw_exit(&inject_lock);
+@@ -351,8 +336,6 @@ zio_handle_label_injection(zio_t *zio, int error)
+
+ if (zio->io_vd->vdev_guid == handler->zi_record.zi_guid &&
+ (offset >= start && offset <= end)) {
+- handler->zi_record.zi_match_count++;
+- handler->zi_record.zi_inject_count++;
+ ret = error;
+ break;
+ }
+@@ -443,16 +426,12 @@ zio_handle_device_injection_impl(vdev_t *vd, zio_t *zio, int err1, int err2)
+
+ if (handler->zi_record.zi_error == err1 ||
+ handler->zi_record.zi_error == err2) {
+- handler->zi_record.zi_match_count++;
+-
+ /*
+ * limit error injection if requested
+ */
+ if (!freq_triggered(handler->zi_record.zi_freq))
+ continue;
+
+- handler->zi_record.zi_inject_count++;
+-
+ /*
+ * For a failed open, pretend like the device
+ * has gone away.
+@@ -488,8 +467,6 @@ zio_handle_device_injection_impl(vdev_t *vd, zio_t *zio, int err1, int err2)
+ break;
+ }
+ if (handler->zi_record.zi_error == ENXIO) {
+- handler->zi_record.zi_match_count++;
+- handler->zi_record.zi_inject_count++;
+ ret = SET_ERROR(EIO);
+ break;
+ }
+@@ -532,8 +509,6 @@ zio_handle_ignored_writes(zio_t *zio)
+ handler->zi_record.zi_cmd != ZINJECT_IGNORED_WRITES)
+ continue;
+
+- handler->zi_record.zi_match_count++;
+-
+ /*
+ * Positive duration implies # of seconds, negative
+ * a number of txgs
+@@ -546,10 +521,8 @@ zio_handle_ignored_writes(zio_t *zio)
+ }
+
+ /* Have a "problem" writing 60% of the time */
+- if (random_in_range(100) < 60) {
+- handler->zi_record.zi_inject_count++;
++ if (random_in_range(100) < 60)
+ zio->io_pipeline &= ~ZIO_VDEV_IO_STAGES;
+- }
+ break;
+ }
+
+@@ -573,9 +546,6 @@ spa_handle_ignored_writes(spa_t *spa)
+ handler->zi_record.zi_cmd != ZINJECT_IGNORED_WRITES)
+ continue;
+
+- handler->zi_record.zi_match_count++;
+- handler->zi_record.zi_inject_count++;
+-
+ if (handler->zi_record.zi_duration > 0) {
+ VERIFY(handler->zi_record.zi_timer == 0 ||
+ ddi_time_after64(
+@@ -657,6 +627,9 @@ zio_handle_io_delay(zio_t *zio)
+ if (handler->zi_record.zi_cmd != ZINJECT_DELAY_IO)
+ continue;
+
++ if (!freq_triggered(handler->zi_record.zi_freq))
++ continue;
++
+ if (vd->vdev_guid != handler->zi_record.zi_guid)
+ continue;
+
+@@ -679,12 +652,6 @@ zio_handle_io_delay(zio_t *zio)
+ ASSERT3U(handler->zi_record.zi_nlanes, >,
+ handler->zi_next_lane);
+
+- handler->zi_record.zi_match_count++;
+-
+- /* Limit the use of this handler if requested */
+- if (!freq_triggered(handler->zi_record.zi_freq))
+- continue;
+-
+ /*
+ * We want to issue this IO to the lane that will become
+ * idle the soonest, so we compare the soonest this
+@@ -756,9 +723,6 @@ zio_handle_io_delay(zio_t *zio)
+ */
+ min_handler->zi_next_lane = (min_handler->zi_next_lane + 1) %
+ min_handler->zi_record.zi_nlanes;
+-
+- min_handler->zi_record.zi_inject_count++;
+-
+ }
+
+ mutex_exit(&inject_delay_mtx);
+@@ -781,11 +745,9 @@ zio_handle_pool_delay(spa_t *spa, hrtime_t elapsed, zinject_type_t command)
+ handler = list_next(&inject_handlers, handler)) {
+ ASSERT3P(handler->zi_spa_name, !=, NULL);
+ if (strcmp(spa_name(spa), handler->zi_spa_name) == 0) {
+- handler->zi_record.zi_match_count++;
+ uint64_t pause =
+ SEC2NSEC(handler->zi_record.zi_duration);
+ if (pause > elapsed) {
+- handler->zi_record.zi_inject_count++;
+ delay = pause - elapsed;
+ }
+ id = handler->zi_id;
+diff --git a/tests/runfiles/common.run b/tests/runfiles/common.run
+index 8e1ffab5b4ebecd247719a23648295b73719eb3f..ee1f29595222a3a140084bb81a65c2061d611de6 100644
+--- a/tests/runfiles/common.run
++++ b/tests/runfiles/common.run
+@@ -159,7 +159,7 @@ tests = ['json_sanity']
+ tags = ['functional', 'cli_root', 'json']
+
+ [tests/functional/cli_root/zinject]
+-tests = ['zinject_args', 'zinject_counts', 'zinject_probe']
++tests = ['zinject_args', 'zinject_probe']
+ pre =
+ post =
+ tags = ['functional', 'cli_root', 'zinject']
+diff --git a/tests/zfs-tests/tests/Makefile.am b/tests/zfs-tests/tests/Makefile.am
+index 24eeac11299f4dc4fea096f41e8bca3c8731403a..52a0bd02818455147c152390c71018694a3723d6 100644
+--- a/tests/zfs-tests/tests/Makefile.am
++++ b/tests/zfs-tests/tests/Makefile.am
+@@ -615,7 +615,6 @@ nobase_dist_datadir_zfs_tests_tests_SCRIPTS += \
+ functional/cli_root/json/setup.ksh \
+ functional/cli_root/json/json_sanity.ksh \
+ functional/cli_root/zinject/zinject_args.ksh \
+- functional/cli_root/zinject/zinject_counts.ksh \
+ functional/cli_root/zinject/zinject_probe.ksh \
+ functional/cli_root/zdb/zdb_002_pos.ksh \
+ functional/cli_root/zdb/zdb_003_pos.ksh \
+diff --git a/tests/zfs-tests/tests/functional/cli_root/zinject/zinject_counts.ksh b/tests/zfs-tests/tests/functional/cli_root/zinject/zinject_counts.ksh
+deleted file mode 100755
+index 19b223aba46cba9a1689762affc6ef7d0325abc3..0000000000000000000000000000000000000000
+--- a/tests/zfs-tests/tests/functional/cli_root/zinject/zinject_counts.ksh
++++ /dev/null
+@@ -1,142 +0,0 @@
+-#!/bin/ksh -p
+-#
+-# CDDL HEADER START
+-#
+-# The contents of this file are subject to the terms of the
+-# Common Development and Distribution License (the "License").
+-# You may not use this file except in compliance with the License.
+-#
+-# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+-# or https://opensource.org/licenses/CDDL-1.0.
+-# See the License for the specific language governing permissions
+-# and limitations under the License.
+-#
+-# When distributing Covered Code, include this CDDL HEADER in each
+-# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+-# If applicable, add the following below this CDDL HEADER, with the
+-# fields enclosed by brackets "[]" replaced with your own identifying
+-# information: Portions Copyright [yyyy] [name of copyright owner]
+-#
+-# CDDL HEADER END
+-#
+-
+-#
+-# Copyright (c) 2025, Klara, Inc.
+-#
+-
+-#
+-# This test sets various injections, does some IO to trigger them, and then
+-# checks the "match" and "inject" counters on the injection records to ensure
+-# that they're being counted properly.
+-#
+-# Note that this is a test of the counters, not injection generally. We're
+-# usually only looking for the counters moving at all, not caring too much
+-# about their actual values.
+-
+-. $STF_SUITE/include/libtest.shlib
+-
+-verify_runnable "global"
+-
+-log_assert "Check zinject counts are displayed and advanced as expected."
+-
+-DISK1=${DISKS%% *}
+-
+-function cleanup
+-{
+- zinject -c all
+- default_cleanup_noexit
+-}
+-
+-log_onexit cleanup
+-
+-default_mirror_setup_noexit $DISKS
+-
+-# Call zinject, get the match and inject counts, and make sure they look
+-# plausible for the requested frequency.
+-function check_count_freq
+-{
+- typeset -i freq=$1
+-
+- # assuming a single rule, with the match and inject counts in the
+- # last two columns
+- typeset rule=$(zinject | grep -m 1 -oE '^ *[0-9].*[0-9]$')
+-
+- log_note "check_count_freq: using rule: $rule"
+-
+- typeset -a record=($(echo $rule | grep -oE ' [0-9]+ +[0-9]+$'))
+- typeset -i match=${record[0]}
+- typeset -i inject=${record[1]}
+-
+- log_note "check_count_freq: freq=$freq match=$match inject=$inject"
+-
+- # equality check, for 100% frequency, or if we've never matched the rule
+- if [[ $match -eq 0 || $freq -eq 100 ]] ; then
+- return [[ $match -eq $inject ]]
+- fi
+-
+- # Compute the expected injection count, and compare. Because we're
+- # not testing the fine details here, it's considered good-enough for
+- # the injection account to be within +/- 10% of the expected count.
+- typeset -i expect=$(($match * $freq / 100))
+- typeset -i diff=$((($expect - $inject) / 10))
+- return [[ $diff -ge -1 && $diff -le 1 ]]
+-}
+-
+-# Test device IO injections by injecting write errors, doing some writes,
+-# and making sure the count moved
+-function test_device_injection
+-{
+- for freq in 100 50 ; do
+- log_must zinject -d $DISK1 -e io -T write -f $freq $TESTPOOL
+-
+- log_must dd if=/dev/urandom of=/$TESTPOOL/file bs=1M count=1
+- log_must zpool sync
+-
+- log_must check_count_freq $freq
+-
+- log_must zinject -c all
+- done
+-}
+-
+-# Test object injections by writing a file, injecting checksum errors and
+-# trying to read it back
+-function test_object_injection
+-{
+- log_must dd if=/dev/urandom of=/$TESTPOOL/file bs=1M count=1
+- zpool sync
+-
+- for freq in 100 50 ; do
+- log_must zinject -t data -e checksum -f $freq /$TESTPOOL/file
+-
+- cat /tank/file > /dev/null || true
+-
+- log_must check_count_freq $freq
+-
+- log_must zinject -c all
+- done
+-}
+-
+-# Test delay injections, by injecting delays and writing
+-function test_delay_injection
+-{
+- for freq in 100 50 ; do
+- log_must zinject -d $DISK1 -D 50:1 -f $freq $TESTPOOL
+-
+- log_must dd if=/dev/urandom of=/$TESTPOOL/file bs=1M count=1
+- zpool sync
+-
+- log_must check_count_freq $freq
+-
+- log_must zinject -c all
+- done
+-}
+-
+-# Disable cache, to ensure reads induce IO
+-log_must zfs set primarycache=none $TESTPOOL
+-
+-# Test 'em all.
+-log_must test_device_injection
+-log_must test_object_injection
+-log_must test_delay_injection
+-
+-log_pass "zinject counts are displayed and advanced as expected."
diff --git a/debian/patches/series b/debian/patches/series
index e3103f9b4..71bce2b7e 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -9,3 +9,4 @@
0009-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
0010-linux-zvols-correctly-detect-flush-requests-17131.patch
0011-contrib-initramfs-use-LVM-autoactivation-for-activat.patch
+0012-Revert-zinject-count-matches-and-injections-for-each.patch
--
2.39.5