From: Kefu Chai <k.chai@proxmox.com>
To: Nicolas Frey <n.frey@proxmox.com>, pve-devel@lists.proxmox.com
Subject: Re: [PATCH pve-docs 1/1] ceph: add warning about mixing device-specific with device-unspecific CRUSH rules when using autoscaler
In-Reply-To: <20260421080533.105169-1-n.frey@proxmox.com>
References: <20260421080533.105169-1-n.frey@proxmox.com>
Date: Wed, 22 Apr 2026 14:24:40 +0800
Message-ID: <878qafikbb.fsf@proxmox.com>

Nicolas Frey writes:

Hi Nicolas,

thanks for this improvement. A few nits:

> Suggested-by: Friedrich Weber
> Signed-off-by: Nicolas Frey
> ---
>  pveceph.adoc | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index 2aae6d6..dfa5d95 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -916,6 +916,9 @@ TIP: If the pool already contains objects, these must be moved accordingly.
>  Depending on your setup, this may introduce a big performance impact on your
>  cluster. As an alternative, you can create a new pool and move disks separately.
>
> +WARNING: When using the autoscaler, all pools must either exclusively be assigned
> +device-specific or device-unspecific CRUSH rules. Mixing them across pools will

"device-specific" / "device-unspecific" is not Ceph's terminology. Ceph's terms
are "CRUSH rules that specify a device class", or simply "device class rules".

> +prevent the autoscaler from functioning.

This might overstate the problem caused by overlapping roots. The autoscaler
keeps running: it just skips the pools whose CRUSH rules' OSD sets overlap, and
their pg_num is not adjusted. In other words, non-overlapping pools continue to
scale. So we can probably be more specific here, for example:

  When using the PG autoscaler, all pools in the cluster must use CRUSH rules
  of the same kind, either all specifying a device class, or all without one.
  Otherwise the autoscaler will skip the affected pools and their `pg_num`
  will not be adjusted.

What do you think?

>
>  Ceph Client
>  -----------
> --
> 2.47.3
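
For reference, the mixed-rule situation is easy to reproduce on a test
cluster along these lines (rule and pool names below are made up for
illustration, and this assumes the pg_autoscaler mgr module is enabled):

  # one replicated rule restricted to a device class, one without
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  ceph osd crush rule create-replicated replicated_any default host

  # assign the two rules to different pools
  ceph osd pool set poolA crush_rule replicated_ssd
  ceph osd pool set poolB crush_rule replicated_any

  # the autoscaler keeps running, but the pools whose rules' OSD sets
  # overlap are skipped and their pg_num is not adjusted
  ceph osd pool autoscale-status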