From: Alwin Antreich <a.antreich@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH docs 2/2] ceph: add explanation on the pg autoscaler
Date: Fri, 15 Jan 2021 14:17:16 +0100
Message-ID: <20210115131716.243126-2-a.antreich@proxmox.com>
In-Reply-To: <20210115131716.243126-1-a.antreich@proxmox.com>

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 pveceph.adoc | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/pveceph.adoc b/pveceph.adoc
index 42dfb02..da8d35e 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -540,6 +540,42 @@ pveceph pool destroy <name>
 NOTE: Deleting the data of a pool is a background task and can take some time.
 You will notice that the data usage in the cluster is decreasing.
 
+
+PG Autoscaler
+~~~~~~~~~~~~~
+
+The PG autoscaler allows the cluster to consider the amount of (expected) data
+stored in each pool and to automatically choose an appropriate `pg_num` value.
+
+You may need to activate the PG autoscaler module before adjustments can take
+effect.
+[source,bash]
+----
+ceph mgr module enable pg_autoscaler
+----
+
+The autoscaler is configured on a per-pool basis and has the following modes
+(an example of setting the mode on a pool follows the list):
+
+[horizontal]
+warn:: A health warning is issued if the suggested `pg_num` value deviates too
+much from the current value.
+on:: The `pg_num` is adjusted automatically with no need for any manual
+interaction.
+off:: No automatic `pg_num` adjustments are made, and no warning is issued if
+the PG count is far from optimal.
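+
+For example, to set the mode of an existing pool directly with the `ceph` CLI
+(the pool name `testpool` below is only a placeholder):
+[source,bash]
+----
+ceph osd pool set testpool pg_autoscale_mode on
+----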
+
+To account for future data growth, you can adjust the autoscaler's calculation
+with the `target_size`, `target_size_ratio`, and `pg_num_min` options.
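+
+As a minimal sketch, assuming a hypothetical pool named `vm_pool` that is
+expected to hold about half of the cluster's data (on the `ceph` CLI, the size
+target is set via the `target_size_bytes` pool property):
+[source,bash]
+----
+# expected share of the cluster's total data (here 50%)
+ceph osd pool set vm_pool target_size_ratio 0.5
+# alternatively, an absolute expected size could be set instead:
+#   ceph osd pool set vm_pool target_size_bytes 500G
+# never let the autoscaler go below this PG count
+ceph osd pool set vm_pool pg_num_min 32
+----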
+
+WARNING: By default, the autoscaler only considers adjusting the PG count of a
+pool if its current value is off by a factor of 3 or more. The resulting change
+in `pg_num` leads to a considerable shift in data placement and might introduce
+a high load on the cluster.
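+
+To review what the autoscaler would change before switching a pool to `on`,
+you can inspect its current recommendations:
+[source,bash]
+----
+ceph osd pool autoscale-status
+----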
+
+You can find a more in-depth introduction to the PG autoscaler on the Ceph
+blog: https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
+Nautilus: PG merging and autotuning].
+
+
 [[pve_ceph_device_classes]]
 Ceph CRUSH & device classes
 ---------------------------
-- 
2.29.2