Subject: Re: [PATCH manager 0/1] ceph: add opt-in locality-aware replica reads (crush_location_hook)
From: "Kefu Chai" <k.chai@proxmox.com>
Date: Thu, 26 Mar 2026 11:44:40 +0800
In-Reply-To: <20260325035104.2264118-1-k.chai@proxmox.com>
List-Id: Proxmox VE development discussion

Hi,

Putting this series on hold for now.

Friedrich kindly pointed out that Tentacle v20.2.0 ships with a regression [1]
that affects rbd_read_from_replica_policy=localize. Commit 4b01c004b5d [2]
("PrimaryLogPG: don't accept ops with mixed balance_reads and rwordered
flags") causes the OSD to reject write ops that carry the LOCALIZE_READS flag,
returning -EINVAL. Since librbd sets this flag connection-wide when the
localize policy is active, this can lead to silent write failures.

The fix (a revert, PR #66611 [3]) has been merged to the tentacle branch and
should ship with v20.2.1, which is currently in QE validation [4]. Squid is
not affected; the problematic commit was only cherry-picked into tentacle.

I'll resend once v20.2.1 is released and picked up by our Tentacle packages.
The patch itself is opt-in, so there's no urgency.
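For reference, the affected setup is the client-side opt-in configuration
along these lines (the hook path below is only illustrative, the actual
install location is decided by the patch):

    [client]
    rbd_read_from_replica_policy = localize
    crush_location_hook = /usr/libexec/ceph-crush-location

With a v20.2.0 OSD on the other end, writes sent over a connection configured
like this can fail with -EINVAL, so I'd suggest not enabling the policy by
hand on Tentacle clusters until v20.2.1 is out.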
Thanks,
Kefu

[1] https://tracker.ceph.com/issues/73997
[2] https://github.com/ceph/ceph/commit/4b01c004b5dc342cbdfb7cb26b47f6afe6245599
[3] https://github.com/ceph/ceph/pull/66611
[4] https://tracker.ceph.com/issues/74838

On Wed Mar 25, 2026 at 11:51 AM CST, Kefu Chai wrote:
> This patch was prompted by a forum thread [1] in which a user reported
> persistent high IO wait on PostgreSQL VMs running on a three-AZ Ceph
> cluster. The discussion surfaced a general optimization opportunity:
> librbd, by default, always reads from the primary OSD regardless of
> its location. In a multi-AZ deployment, that can mean every read pays
> a cross-AZ round-trip even when a same-AZ replica is available.
>
> rbd_read_from_replica_policy = localize addresses this by directing
> librbd to prefer the nearest replica, but it requires the client to
> declare its own position in the CRUSH hierarchy. This patch ships a
> hook script that supplies that position by querying the live CRUSH map
> (ceph osd crush find), and wires it up as an opt-in in pveceph init.
>
> The benefit scales with topology: in a multi-AZ cluster it keeps reads
> within the same AZ; in a hyperconverged setup, reads to a co-located
> OSD never leave the host at all. The feature is opt-in because it can
> degrade performance when replicas are equidistant or when the hook
> falls back to an incorrect CRUSH root; see the commit message for
> details.
>
> [1] https://forum.proxmox.com/threads/ceph-vm-with-high-io-wait.181751/
>
> Kefu Chai (1):
>   ceph: add opt-in locality-aware replica reads (crush_location_hook)
>
>  PVE/API2/Ceph.pm                       | 17 ++++++++++
>  bin/Makefile                           |  3 +-
>  bin/ceph-crush-location                | 43 ++++++++++++++++++++++++++
>  www/manager6/ceph/CephInstallWizard.js |  8 ++++-
>  4 files changed, 69 insertions(+), 2 deletions(-)
>  create mode 100644 bin/ceph-crush-location
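To give a rough idea of what the hook described above provides: Ceph runs the
configured executable and expects the client's CRUSH position as key=value
pairs on stdout. A minimal sketch (illustrative only, not the script from this
patch, which derives the location from the live CRUSH map via
ceph osd crush find and handles fallbacks) could look like:

    #!/bin/sh
    # Print this node's position in the CRUSH hierarchy so librbd can pick a
    # nearby replica when rbd_read_from_replica_policy=localize is set.
    # The AZ value here comes from a hypothetical local file an admin would
    # maintain; the real script queries the cluster instead.
    set -eu
    az="$(cat /etc/pve/ceph-az 2>/dev/null || echo default)"
    echo "root=default datacenter=$az host=$(hostname -s)"

Only the output format matters to the client; how the location is produced is
entirely up to the script.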