* [pve-devel] [PATCH zfsonlinux] update submodule and patches to zfs-2.0.3
@ 2021-02-12 17:28 Stoiko Ivanov
2021-02-15 15:40 ` [pve-devel] applied: " Thomas Lamprecht
From: Stoiko Ivanov @ 2021-02-12 17:28 UTC
To: pve-devel
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
---
did a quick test on my zfs storage-replication testcluster:
* both systems failed to import their zfs pools upon boot (I'm quite sure
it's related to upstream commit 642d86af0d91b2bf88d5ea34cb6888b03c39c459)
* both systems were used for running the zfs testsuite - so probably don't
really represent a clean production-ready state
* importing the pool worked after a manual import (consistently, even
after reboots)
* did not have similar issues on 4 other systems I tested this on
...ith-d-dev-disk-by-id-in-scan-service.patch | 2 +-
.../0010-Set-file-mode-during-zfs_write.patch | 39 -------------------
debian/patches/series | 1 -
upstream | 2 +-
4 files changed, 2 insertions(+), 42 deletions(-)
delete mode 100644 debian/patches/0010-Set-file-mode-during-zfs_write.patch
diff --git a/debian/patches/0004-import-with-d-dev-disk-by-id-in-scan-service.patch b/debian/patches/0004-import-with-d-dev-disk-by-id-in-scan-service.patch
index 12dfde85..46b03fd4 100644
--- a/debian/patches/0004-import-with-d-dev-disk-by-id-in-scan-service.patch
+++ b/debian/patches/0004-import-with-d-dev-disk-by-id-in-scan-service.patch
@@ -14,7 +14,7 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/etc/systemd/system/zfs-import-scan.service.in b/etc/systemd/system/zfs-import-scan.service.in
-index 6520f3246..1718f98a2 100644
+index f0317e23e..9a5e9cb17 100644
--- a/etc/systemd/system/zfs-import-scan.service.in
+++ b/etc/systemd/system/zfs-import-scan.service.in
@@ -13,7 +13,7 @@ ConditionPathIsDirectory=/sys/module/zfs
diff --git a/debian/patches/0010-Set-file-mode-during-zfs_write.patch b/debian/patches/0010-Set-file-mode-during-zfs_write.patch
deleted file mode 100644
index c164d13a..00000000
--- a/debian/patches/0010-Set-file-mode-during-zfs_write.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Antonio Russo <aerusso@aerusso.net>
-Date: Mon, 8 Feb 2021 10:15:05 -0700
-Subject: [PATCH] Set file mode during zfs_write
-
-3d40b65 refactored zfs_vnops.c, which shared much code verbatim between
-Linux and BSD. After a successful write, the suid/sgid bits are reset,
-and the mode to be written is stored in newmode. On Linux, this was
-propagated to both the in-memory inode and znode, which is then updated
-with sa_update.
-
-3d40b65 accidentally removed the initialization of newmode, which
-happened to occur on the same line as the inode update (which has been
-moved out of the function).
-
-The uninitialized newmode can be saved to disk, leading to a crash on
-stat() of that file, in addition to a merely incorrect file mode.
-
-Reviewed-by: Ryan Moeller <ryan@ixsystems.com>
-Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
-Signed-off-by: Antonio Russo <aerusso@aerusso.net>
-Closes #11474
-Closes #11576
----
- module/zfs/zfs_vnops.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/module/zfs/zfs_vnops.c b/module/zfs/zfs_vnops.c
-index 17ea788f3..e54488882 100644
---- a/module/zfs/zfs_vnops.c
-+++ b/module/zfs/zfs_vnops.c
-@@ -528,6 +528,7 @@ zfs_write(znode_t *zp, uio_t *uio, int ioflag, cred_t *cr)
- ((zp->z_mode & S_ISUID) != 0 && uid == 0)) != 0) {
- uint64_t newmode;
- zp->z_mode &= ~(S_ISUID | S_ISGID);
-+ newmode = zp->z_mode;
- (void) sa_update(zp->z_sa_hdl, SA_ZPL_MODE(zfsvfs),
- (void *)&newmode, sizeof (uint64_t), tx);
- }
diff --git a/debian/patches/series b/debian/patches/series
index bd60b69f..91b8a3b1 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -7,4 +7,3 @@
0007-Use-installed-python3.patch
0008-Add-systemd-unit-for-importing-specific-pools.patch
0009-Patch-move-manpage-arcstat-1-to-arcstat-8.patch
-0010-Set-file-mode-during-zfs_write.patch
diff --git a/upstream b/upstream
index d022406a..9f5f8662 160000
--- a/upstream
+++ b/upstream
@@ -1 +1 @@
-Subproject commit d022406a1499279167362f9c36280e1f847204e2
+Subproject commit 9f5f86626620c52ad1bebf27d17cece6a28d39a0
--
2.20.1
* [pve-devel] applied: [PATCH zfsonlinux] update submodule and patches to zfs-2.0.3
2021-02-12 17:28 [pve-devel] [PATCH zfsonlinux] update submodule and patches to zfs-2.0.3 Stoiko Ivanov
@ 2021-02-15 15:40 ` Thomas Lamprecht
From: Thomas Lamprecht @ 2021-02-15 15:40 UTC
To: Proxmox VE development discussion, Stoiko Ivanov
On 12.02.21 18:28, Stoiko Ivanov wrote:
> Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
> ---
> did a quick test on my zfs storage-replication testcluster:
> * both systems failed to import their zfs pools upon boot (I'm quite sure
> it's related to upstream commit 642d86af0d91b2bf88d5ea34cb6888b03c39c459)
> * both systems were used for running the zfs testsuite - so probably don't
> really represent a clean production-ready state
> * importing the pool worked after a manual import (consistently, even
> after reboots)
> * did not have similar issues on 4 other systems I tested this on
>
> ...ith-d-dev-disk-by-id-in-scan-service.patch | 2 +-
> .../0010-Set-file-mode-during-zfs_write.patch | 39 -------------------
> debian/patches/series | 1 -
> upstream | 2 +-
> 4 files changed, 2 insertions(+), 42 deletions(-)
> delete mode 100644 debian/patches/0010-Set-file-mode-during-zfs_write.patch
>
>
applied, thanks!