From mboxrd@z Thu Jan  1 00:00:00 1970
From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH kernel] cherry-pick fix for NULL pointer dereference in ceph_mds_auth_match()
Date: Tue, 31 Mar 2026 17:04:29 +0200
Message-ID: <20260331150439.864438-1-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.47.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: Proxmox VE development discussion

As reported in enterprise support, there is a regression between
kernels 6.17.4-2-pve and 6.17.13-2-pve that can result in a NULL
pointer dereference in ceph_mds_auth_match() when using an external
CephFS file system.

The only relevant commit touching fs/ceph/ between those kernels is a
backport of 22c73d52a6d0 ("ceph: fix multifs mds auth caps issue"),
namely f26ac354dcb3f ("ceph: fix multifs mds auth caps issue").

There is an explicit upstream fix for that commit/issue, namely
7987cce375ac8 ("ceph: fix NULL pointer dereference in
ceph_mds_auth_match()"). Pick it up.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Based on 5b5bf10 ("update ABI file for 6.17.13-2-pve (amd64)") rather
than current master, which is already using 7.0 as a base.

Did not reproduce the original issue, but did a quick smoke test with
an external CephFS file system.
 ...inter-dereference-in-ceph_mds_auth_m.patch | 189 ++++++++++++++++++
 1 file changed, 189 insertions(+)
 create mode 100644 patches/kernel/0035-ceph-fix-NULL-pointer-dereference-in-ceph_mds_auth_m.patch

diff --git a/patches/kernel/0035-ceph-fix-NULL-pointer-dereference-in-ceph_mds_auth_m.patch b/patches/kernel/0035-ceph-fix-NULL-pointer-dereference-in-ceph_mds_auth_m.patch
new file mode 100644
index 0000000..d8115eb
--- /dev/null
+++ b/patches/kernel/0035-ceph-fix-NULL-pointer-dereference-in-ceph_mds_auth_m.patch
@@ -0,0 +1,189 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Viacheslav Dubeyko
+Date: Tue, 3 Feb 2026 14:54:46 -0800
+Subject: [PATCH] ceph: fix NULL pointer dereference in ceph_mds_auth_match()
+
+The CephFS kernel client has regression starting from 6.18-rc1.
+We have issue in ceph_mds_auth_match() if fs_name == NULL:
+
+  const char *fs_name = mdsc->fsc->mount_options->mds_namespace;
+  ...
+  if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) {
+      /* fsname mismatch, try next one */
+      return 0;
+  }
+
+Patrick Donnelly suggested that: In summary, we should definitely start
+decoding `fs_name` from the MDSMap and do strict authorizations checks
+against it. Note that the `-o mds_namespace=foo` should only be used for
+selecting the file system to mount and nothing else. It's possible
+no mds_namespace is specified but the kernel will mount the only
+file system that exists which may have name "foo".
+
+This patch reworks ceph_mdsmap_decode() and namespace_equals() with
+the goal of supporting the suggested concept. Now struct ceph_mdsmap
+contains m_fs_name field that receives copy of extracted FS name
+by ceph_extract_encoded_string(). For the case of "old" CephFS file
+systems, it is used "cephfs" name.
+
+[ idryomov: replace redundant %*pE with %s in ceph_mdsmap_decode(),
+  get rid of a series of strlen() calls in ceph_namespace_match(),
+  drop changes to namespace_equals() body to avoid treating empty
+  mds_namespace as equal, drop changes to ceph_mdsc_handle_fsmap()
+  as namespace_equals() isn't an equivalent substitution there ]
+
+Cc: stable@vger.kernel.org
+Fixes: 22c73d52a6d0 ("ceph: fix multifs mds auth caps issue")
+Link: https://tracker.ceph.com/issues/73886
+Signed-off-by: Viacheslav Dubeyko
+Reviewed-by: Patrick Donnelly
+Tested-by: Patrick Donnelly
+Signed-off-by: Ilya Dryomov
+(cherry picked from commit 7987cce375ac8ce98e170a77aa2399f2cf6eb99f)
+Signed-off-by: Fiona Ebner
+---
+ fs/ceph/mds_client.c         |  5 +++--
+ fs/ceph/mdsmap.c             | 26 +++++++++++++++++++-------
+ fs/ceph/mdsmap.h             |  1 +
+ fs/ceph/super.h              | 16 ++++++++++++++--
+ include/linux/ceph/ceph_fs.h |  6 ++++++
+ 5 files changed, 43 insertions(+), 11 deletions(-)
+
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 3efbc11596e003193021f08b4f0d75ef36a7da7a..2d0bdb223db6714d253d76b8c0fd1a8347da33e5 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -5649,7 +5649,7 @@ static int ceph_mds_auth_match(struct ceph_mds_client *mdsc,
+ 	u32 caller_uid = from_kuid(&init_user_ns, cred->fsuid);
+ 	u32 caller_gid = from_kgid(&init_user_ns, cred->fsgid);
+ 	struct ceph_client *cl = mdsc->fsc->client;
+-	const char *fs_name = mdsc->fsc->mount_options->mds_namespace;
++	const char *fs_name = mdsc->mdsmap->m_fs_name;
+ 	const char *spath = mdsc->fsc->mount_options->server_path;
+ 	bool gid_matched = false;
+ 	u32 gid, tlen, len;
+@@ -5657,7 +5657,8 @@ static int ceph_mds_auth_match(struct ceph_mds_client *mdsc,
+ 	doutc(cl, "fsname check fs_name=%s match.fs_name=%s\n",
+ 	      fs_name, auth->match.fs_name ?
+ 	      auth->match.fs_name : "");
+-	if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) {
++
++	if (!ceph_namespace_match(auth->match.fs_name, fs_name)) {
+ 		/* fsname mismatch, try next one */
+ 		return 0;
+ 	}
+diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c
+index e82f09e50a8e7f7026b0602e8799f93f4a112692..dbe9f40cb262763c7877dbd73da9e97c7d5182b7 100644
+--- a/fs/ceph/mdsmap.c
++++ b/fs/ceph/mdsmap.c
+@@ -354,22 +354,33 @@ struct ceph_mdsmap *ceph_mdsmap_decode(struct ceph_mds_client *mdsc, void **p,
+ 		__decode_and_drop_type(p, end, u8, bad_ext);
+ 	}
+ 	if (mdsmap_ev >= 8) {
+-		u32 fsname_len;
++		size_t fsname_len;
++
+ 		/* enabled */
+ 		ceph_decode_8_safe(p, end, m->m_enabled, bad_ext);
++
+ 		/* fs_name */
+-		ceph_decode_32_safe(p, end, fsname_len, bad_ext);
++		m->m_fs_name = ceph_extract_encoded_string(p, end,
++							   &fsname_len,
++							   GFP_NOFS);
++		if (IS_ERR(m->m_fs_name)) {
++			m->m_fs_name = NULL;
++			goto nomem;
++		}
+ 
+ 		/* validate fsname against mds_namespace */
+-		if (!namespace_equals(mdsc->fsc->mount_options, *p,
++		if (!namespace_equals(mdsc->fsc->mount_options, m->m_fs_name,
+ 				      fsname_len)) {
+-			pr_warn_client(cl, "fsname %*pE doesn't match mds_namespace %s\n",
+-				       (int)fsname_len, (char *)*p,
++			pr_warn_client(cl, "fsname %s doesn't match mds_namespace %s\n",
++				       m->m_fs_name,
+ 				       mdsc->fsc->mount_options->mds_namespace);
+ 			goto bad;
+ 		}
+-		/* skip fsname after validation */
+-		ceph_decode_skip_n(p, end, fsname_len, bad);
++	} else {
++		m->m_enabled = false;
++		m->m_fs_name = kstrdup(CEPH_OLD_FS_NAME, GFP_NOFS);
++		if (!m->m_fs_name)
++			goto nomem;
+ 	}
+ 	/* damaged */
+ 	if (mdsmap_ev >= 9) {
+@@ -431,6 +442,7 @@ void ceph_mdsmap_destroy(struct ceph_mdsmap *m)
+ 		kfree(m->m_info);
+ 	}
+ 	kfree(m->m_data_pg_pools);
++	kfree(m->m_fs_name);
+ 	kfree(m);
+ }
+ 
+diff --git a/fs/ceph/mdsmap.h b/fs/ceph/mdsmap.h
+index 1f2171dd01bfa34a404eef00113646bdcb978980..d48d07c3516d447a8f3e684f6ded1f1ca1674417 100644
+--- a/fs/ceph/mdsmap.h
++++ b/fs/ceph/mdsmap.h
+@@ -45,6 +45,7 @@ struct ceph_mdsmap {
+ 	bool m_enabled;
+ 	bool m_damaged;
+ 	int m_num_laggy;
++	char *m_fs_name;
+ };
+ 
+ static inline struct ceph_entity_addr *
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 4ac6561285b18a48c1e24f0752296b88e51c7971..7b67853c0ffe5e391e9098a76ced3b0a4fda94cc 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -104,14 +104,26 @@ struct ceph_mount_options {
+ 	struct fscrypt_dummy_policy dummy_enc_policy;
+ };
+ 
++#define CEPH_NAMESPACE_WILDCARD "*"
++
++static inline bool ceph_namespace_match(const char *pattern,
++					const char *target)
++{
++	if (!pattern || !pattern[0] ||
++	    !strcmp(pattern, CEPH_NAMESPACE_WILDCARD))
++		return true;
++
++	return !strcmp(pattern, target);
++}
++
+ /*
+  * Check if the mds namespace in ceph_mount_options matches
+  * the passed in namespace string. First time match (when
+  * ->mds_namespace is NULL) is treated specially, since
+  * ->mds_namespace needs to be initialized by the caller.
+  */
+-static inline int namespace_equals(struct ceph_mount_options *fsopt,
+-				   const char *namespace, size_t len)
++static inline bool namespace_equals(struct ceph_mount_options *fsopt,
++				    const char *namespace, size_t len)
+ {
+ 	return !(fsopt->mds_namespace &&
+ 		 (strlen(fsopt->mds_namespace) != len ||
+diff --git a/include/linux/ceph/ceph_fs.h b/include/linux/ceph/ceph_fs.h
+index c7f2c63b3bc3fb3c6a1ec0d114fcaf9cd7489de3..08e5dbe15ca4446081779f035d64f0a6bc34b382 100644
+--- a/include/linux/ceph/ceph_fs.h
++++ b/include/linux/ceph/ceph_fs.h
+@@ -31,6 +31,12 @@
+ #define CEPH_INO_CEPH 2 /* hidden .ceph dir */
+ #define CEPH_INO_GLOBAL_SNAPREALM 3 /* global dummy snaprealm */
+ 
++/*
++ * name for "old" CephFS file systems,
++ * see ceph.git e2b151d009640114b2565c901d6f41f6cd5ec652
++ */
++#define CEPH_OLD_FS_NAME "cephfs"
++
+ /* arbitrary limit on max # of monitors (cluster of 3 is typical) */
+ #define CEPH_MAX_MON 31
+ 
-- 
2.47.3