From: Dylan Whyte <d.whyte@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v2 pve-docs 2/2] ceph: add explanation on the pg autoscaler
Date: Thu, 18 Feb 2021 11:39:10 +0100
Message-ID: <20210218103910.21127-2-d.whyte@proxmox.com>
In-Reply-To: <20210218103910.21127-1-d.whyte@proxmox.com>
Signed-off-by: Alwin Antreich
Edited-by: Dylan Whyte <d.whyte@proxmox.com>
---
v1->v2:
* minor language fixup
pveceph.adoc | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/pveceph.adoc b/pveceph.adoc
index 9253613..9ef268b 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -540,6 +540,42 @@ pveceph pool destroy <name>
NOTE: Deleting the data of a pool is a background task and can take some time.
You will notice that the data usage in the cluster is decreasing.
+
+PG Autoscaler
+~~~~~~~~~~~~~
+
+The PG autoscaler allows the cluster to consider the amount of (expected) data
+stored in each pool and to choose the appropriate `pg_num` values automatically.
+
+You may need to activate the PG autoscaler module before adjustments can take
+effect.
+
+[source,bash]
+----
+ceph mgr module enable pg_autoscaler
+----
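+
+To confirm that the module is active, you can list the enabled manager modules
+(a quick check with the standard Ceph CLI):
+
+[source,bash]
+----
+ceph mgr module ls
+----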
+
+The autoscaler is configured on a per-pool basis and has the following modes:
+
+[horizontal]
+warn:: A health warning is issued if the suggested `pg_num` value differs too
+much from the current value.
+on:: The `pg_num` is adjusted automatically with no need for any manual
+interaction.
+off:: No automatic `pg_num` adjustments are made, and no warning will be issued
+if the PG count is far from optimal.
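+
+For example, to enable automatic scaling on an existing pool, you could set the
+mode with the standard Ceph command (shown here as a sketch; replace
+`<pool-name>` accordingly):
+
+[source,bash]
+----
+ceph osd pool set <pool-name> pg_autoscale_mode on
+----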
+
+The scaling factor can be adjusted to accommodate future data growth, using the
+`target_size`, `target_size_ratio`, and `pg_num_min` options.
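+
+For example, assuming a pool that is expected to eventually hold around 100
+TiB, the options could be set as follows (a sketch using the Ceph-level pool
+properties, where the size option is named `target_size_bytes`):
+
+[source,bash]
+----
+# hint that the pool will eventually store about 100 TiB of data
+ceph osd pool set <pool-name> target_size_bytes 100T
+# alternatively, expect the pool to use half of the cluster's raw capacity
+ceph osd pool set <pool-name> target_size_ratio 0.5
+# never let the autoscaler go below 32 PGs for this pool
+ceph osd pool set <pool-name> pg_num_min 32
+----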
+
+WARNING: By default, the autoscaler considers tuning the PG count of a pool if
+it is off by a factor of 3. Such an adjustment will lead to a considerable
+shift in data placement and might introduce a high load on the cluster.
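+
+Before switching a pool to `on`, it can therefore be worth reviewing what the
+autoscaler would change, using its status overview (standard Ceph command):
+
+[source,bash]
+----
+ceph osd pool autoscale-status
+----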
+
+You can find a more in-depth introduction to the PG autoscaler in the Ceph blog
+post https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
+Nautilus: PG merging and autotuning].
+
+
[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
--
2.20.1