From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 15 Jan 2021 15:19:25 +0100 (CET)
From: Dylan Whyte
To: Proxmox VE development discussion, Alwin Antreich
Message-ID: <13732644.52.1610720366269@webmail.proxmox.com>
In-Reply-To: <20210115131716.243126-2-a.antreich@proxmox.com>
References: <20210115131716.243126-1-a.antreich@proxmox.com>
 <20210115131716.243126-2-a.antreich@proxmox.com>
MIME-Version: 1.0
Subject: Re: [pve-devel] [PATCH docs 2/2] ceph: add explanation on the pg autoscaler
List-Id: Proxmox VE development discussion

Sorry, I didn't prefix the other reply with a comment. Anyway, everything was
pretty much fine; I just had some little issues. With this, I have some even
more minor issues.

> On 15.01.2021 14:17 Alwin Antreich wrote:
>
> Signed-off-by: Alwin Antreich
> ---
>  pveceph.adoc | 36 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 36 insertions(+)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index 42dfb02..da8d35e 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -540,6 +540,42 @@ pveceph pool destroy
>  NOTE: Deleting the data of a pool is a background task and can take some time.
>  You will notice that the data usage in the cluster is decreasing.
>
> +
> +PG Autoscaler
> +~~~~~~~~~~~~~
> +
> +The PG autoscaler allows the cluster to consider the amount of (expected) data
> +stored in each pool and to choose the appropriate pg_num values automatically.
> +
> +You may need to activate the PG autoscaler module before adjustments can take
> +effect.
> +[source,bash]
> +----
> +ceph mgr module enable pg_autoscaler
> +----
> +
> +The autoscaler is configured on a per pool basis and has the following modes:
> +
> +[horizontal]
> +warn:: A health warning is issued if the suggested `pg_num` value is too
> +different from the current value.
s/is too different/differs too much/ (note: "too different" seems grammatically
correct, but something sounds strange about it here. It could just be a
personal thing.)

> +on:: The `pg_num` is adjusted automatically with no need for any manual
> +interaction.
> +off:: No automatic `pg_num` adjustments are made, no warning will be issued

s/made, no/made, and no/

> +if the PG count is far from optimal.
> +
> +The scaling factor can be adjusted to facilitate future data storage, with the
> +`target_size`, `target_size_ratio` and the `pg_num_min` options.
> +
> +WARNING: By default, the autoscaler considers tuning the PG count of a pool if
> +it is off by a factor of 3. This will lead to a considerable shift in data
> +placement and might introduce a high load on the cluster.
> +
> +You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
> +https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
> +Nautilus: PG merging and autotuning].
> +
> +
> [[pve_ceph_device_classes]]
> Ceph CRUSH & device classes
> ---------------------------
> --
> 2.29.2
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
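
One optional suggestion: since the section describes the per-pool modes but
only shows the `mgr module enable` command, it might help readers to also see
how a mode and a size hint are actually set on a pool. Something along these
lines could work (the pool name `mypool` and the ratio value are just
placeholders, not part of the patch):

```shell
# Set the autoscaler mode for a single pool ("mypool" is a placeholder name)
ceph osd pool set mypool pg_autoscale_mode on

# Optionally give the autoscaler a hint about the pool's expected share of
# the cluster's data, so it can plan pg_num ahead of time
ceph osd pool set mypool target_size_ratio 0.5

# Review the autoscaler's current state and suggestions for all pools
ceph osd pool autoscale-status
```

Feel free to ignore this if it's considered out of scope for the patch.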