From: Alwin Antreich <a.antreich@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options
Date: Fri, 15 Jan 2021 14:17:15 +0100
Message-ID: <20210115131716.243126-1-a.antreich@proxmox.com>
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
pveceph.adoc | 45 ++++++++++++++++++++++++++++++++++++++-------
1 file changed, 38 insertions(+), 7 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index fd3fded..42dfb02 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica which could lead to data
loss, incomplete PGs or unfound objects.
-It is advised to calculate the PG number depending on your setup, you can find
-the formula and the PG calculator footnote:[PG calculator
-https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
-increase and decrease the number of PGs later on footnote:[Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+It is advisable to calculate the PG number depending on your setup. You can
+find the formula and the PG calculator footnote:[PG calculator
+https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards, you can increase
+and decrease the number of PGs footnoteref:[placement_groups,Placement Groups
+{cephdocs-url}/rados/operations/placement-groups/] later on.
+In addition to manual adjustment, the PG autoscaler
+footnoteref:[autoscaler,Automated Scaling
+{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
+automatically scale the PG count for a pool in the background.
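+
+For example, the PG count of an existing pool can be adjusted, and the
+autoscaler status checked, with the Ceph CLI (a sketch; `<pool-name>` is a
+placeholder for your pool name):
+
+[source,bash]
+----
+# increase (or decrease) the number of PGs of a pool
+ceph osd pool set <pool-name> pg_num 64
+
+# show the autoscaler's view of all pools
+ceph osd pool autoscale-status
+----
+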
You can create pools through command line or on the GUI on each PVE host under
**Ceph -> Pools**.
@@ -485,6 +489,34 @@ If you would like to automatically also get a storage definition for your pool,
mark the checkbox "Add storages" in the GUI or use the command line option
'--add_storages' at pool creation.
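+
+For example, a replicated pool with the default settings and a matching
+storage definition could be created on the command line like this (a sketch;
+the pool name is a placeholder and the values are the defaults listed below):
+
+[source,bash]
+----
+pveceph pool create <pool-name> --size 3 --pg_num 128 --add_storages
+----
+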
+.Base Options
+Name:: The name of the pool. It must be unique and can't be changed afterwards.
+Size:: The number of replicas per object. Ceph always tries to have that many
+copies of an object. Default: `3`.
+PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
+the pool. If set to `warn`, it produces a warning message when a pool has a
+non-optimal PG count. Default: `warn`.
+Add as Storage:: Configure a VM or container storage using the new pool.
+Default: `true`.
+
+.Advanced Options
+Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
+the pool if a PG has fewer than this many replicas. Default: `2`.
+Crush Rule:: The rule to use for mapping object placement in the cluster. These
+rules define how data is placed within the cluster. See
+xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
+device-based rules.
+# of PGs:: The number of placement groups footnoteref:[placement_groups] that
+the pool should have at the beginning. Default: `128`.
+Target Size:: The estimated amount of data expected in the pool. The PG
+autoscaler uses that size to estimate the optimal PG count.
+Target Size Ratio:: The ratio of data that is expected in the pool. The PG
+autoscaler uses the ratio relative to other pools with a ratio set. It takes
+precedence over the `target size` if both are set (see the example below).
+Min. # of PGs:: The minimum number of placement groups. This setting is used
+to fine-tune the lower bound of the PG count for that pool. The PG autoscaler
+will not merge PGs below this threshold.
+
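+The autoscaler-related values can also be changed on an existing pool with the
+Ceph CLI, for example (a sketch; pool name and values are placeholders):
+
+[source,bash]
+----
+ceph osd pool set <pool-name> pg_autoscale_mode on
+ceph osd pool set <pool-name> target_size_ratio 0.5
+# alternatively, set an absolute size estimate (here 100 GiB)
+ceph osd pool set <pool-name> target_size_bytes 107374182400
+----
+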
Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/] manual.
@@ -697,8 +729,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
`'cephfs_data'' with `128` placement groups and a pool for its metadata named
`'cephfs_metadata'' with one quarter of the data pools placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
-number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
storage configuration after it was created successfully.
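+
+For example, a CephFS with the placement group counts mentioned above could be
+created like this (a sketch; adjust `--pg_num` to your setup):
+
+[source,bash]
+----
+pveceph fs create --pg_num 128 --add-storage
+----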
--
2.29.2