From: Dylan Whyte <d.whyte@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v2 pve-docs 1/2] docs: ceph: explain pool options
Date: Thu, 18 Feb 2021 11:39:09 +0100
Message-ID: <20210218103910.21127-1-d.whyte@proxmox.com>

Signed-off-by: Alwin Antreich
Edited-by: Dylan Whyte <d.whyte@proxmox.com>
---

v1->v2:
* Minor language fixup

 pveceph.adoc | 47 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 39 insertions(+), 8 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index fd3fded..9253613 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
 allows I/O on an object when it has only 1 replica, which could lead to data
 loss, incomplete PGs, or unfound objects.
 
-It is advised to calculate the PG number depending on your setup, you can find
-the formula and the PG calculator footnote:[PG calculator
-https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
-increase and decrease the number of PGs later on footnote:[Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+It is advised that you calculate the PG number based on your setup. You can
+find the formula and the PG calculator footnote:[PG calculator
+https://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the
+number of PGs footnoteref:[placement_groups,Placement Groups
+{cephdocs-url}/rados/operations/placement-groups/] after the setup.
 
+In addition to manual adjustment, the PG autoscaler
+footnoteref:[autoscaler,Automated Scaling
+{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
+automatically scale the PG count for a pool in the background.
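As a rough illustration of the two approaches (a sketch, not part of the patch:
`mypool` is a placeholder pool name, and these are upstream Ceph commands
available from Nautilus onward, where the `pg_autoscaler` mgr module may first
need to be enabled):

[source,bash]
----
# Show the autoscaler's view of each pool, including its optimal PG count
ceph osd pool autoscale-status

# Manually change the PG count of a pool (possible since Nautilus)
ceph osd pool set mypool pg_num 64

# Or let the autoscaler adjust the PG count in the background
ceph osd pool set mypool pg_autoscale_mode on
----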
 
 You can create pools through the command line or the GUI of each PVE host,
 under **Ceph -> Pools**.
@@ -485,6 +489,34 @@ If you would like to automatically also get a storage definition for your pool,
 mark the checkbox "Add storages" in the GUI or use the command line option
 '--add_storages' at pool creation.
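For example, a pool using the default values listed below could be created and
added as storage in one step (a sketch; `mypool` is a placeholder name):

[source,bash]
----
# Create a replicated pool (3 copies, minimum 2, 128 PGs) and
# add a matching PVE storage definition for it
pveceph pool create mypool --size 3 --min_size 2 --pg_num 128 --add_storages
----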
 
+.Base Options
+Name:: The name of the pool. This must be unique and can't be changed afterwards.
+Size:: The number of replicas per object. Ceph always tries to have this many
+copies of an object. Default: `3`.
+PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
+the pool. If set to `warn`, it produces a warning message when a pool
+has a non-optimal PG count. Default: `warn`.
+Add as Storage:: Configure a VM or container storage using the new pool.
+Default: `true`.
+
+.Advanced Options
+Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
+the pool if a PG has fewer than this many replicas. Default: `2`.
+Crush Rule:: The rule to use for mapping object placement in the cluster. These
+rules define how data is placed within the cluster. See
+xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
+device-based rules.
+# of PGs:: The number of placement groups footnoteref:[placement_groups] that
+the pool should have at the beginning. Default: `128`.
+Target Size:: The estimated amount of data expected in the pool. The PG
+autoscaler uses this size to estimate the optimal PG count.
+Target Size Ratio:: The ratio of data that is expected in the pool. The PG
+autoscaler uses the ratio relative to the ratios set on other pools. It takes
+precedence over the `target size` if both are set.
+Min. # of PGs:: The minimum number of placement groups. This setting is used to
+fine-tune the lower bound of the PG count for that pool. The PG autoscaler
+will not merge PGs below this threshold.
+
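To give the autoscaler a usage hint for an existing pool, the two target size
options can also be set with the native Ceph tooling (a sketch; `mypool` and
the values are placeholders):

[source,bash]
----
# Expected absolute amount of data in the pool
ceph osd pool set mypool target_size_bytes 100T

# Or the expected share of data relative to other pools with a ratio set;
# the ratio takes precedence if both are configured
ceph osd pool set mypool target_size_ratio 1.0
----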
 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
 {cephdocs-url}/rados/operations/pools/]
@@ -697,10 +729,9 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
 `'cephfs_metadata'' with one quarter of the data pool's placement groups (`32`).
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
-number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+number (`pg_num`) for your setup footnoteref:[placement_groups].
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
-storage configuration after it was created successfully.
+storage configuration after it has been created successfully.
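For context, the command under discussion is a single `pveceph` call (a sketch
restating it with the default CephFS name made explicit):

[source,bash]
----
# Create a CephFS named 'cephfs' with 128 data-pool PGs and add it to
# the PVE storage configuration
pveceph fs create --name cephfs --pg_num 128 --add-storage
----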
 
 Destroy CephFS
 ~~~~~~~~~~~~~~
-- 
2.20.1

Thread overview: 4+ messages
2021-02-18 10:39 Dylan Whyte [this message]
2021-02-18 10:39 ` [pve-devel] [PATCH v2 pve-docs 2/2] ceph: add explanation on the pg autoscaler Dylan Whyte
2021-02-25 18:29   ` [pve-devel] applied: " Thomas Lamprecht
2021-02-25 18:29 ` [pve-devel] applied: [PATCH v2 pve-docs 1/2] docs: ceph: explain pool options Thomas Lamprecht
