* [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options
@ 2021-01-15 13:17 Alwin Antreich
  2021-01-15 13:17 ` [pve-devel] [PATCH docs 2/2] ceph: add explanation on the pg autoscaler Alwin Antreich
  2021-01-15 14:05 ` [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options Dylan Whyte
  0 siblings, 2 replies; 4+ messages in thread
From: Alwin Antreich @ 2021-01-15 13:17 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 pveceph.adoc | 45 ++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 38 insertions(+), 7 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index fd3fded..42dfb02 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
 allows I/O on an object when it has only 1 replica which could lead to data
 loss, incomplete PGs or unfound objects.
 
-It is advised to calculate the PG number depending on your setup, you can find
-the formula and the PG calculator footnote:[PG calculator
-https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
-increase and decrease the number of PGs later on footnote:[Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+It is advisable to calculate the PG number based on your setup. You can find
+the formula and the PG calculator footnote:[PG calculator
+https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards, you can increase
+and decrease the number of PGs footnoteref:[placement_groups,Placement Groups
+{cephdocs-url}/rados/operations/placement-groups/] later on.
 
+In addition to manual adjustment, the PG autoscaler
+footnoteref:[autoscaler,Automated Scaling
+{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
+automatically scale the PG count for a pool in the background.
 
 You can create pools through command line or on the GUI on each PVE host under
 **Ceph -> Pools**.
@@ -485,6 +489,34 @@ If you would like to automatically also get a storage definition for your pool,
 mark the checkbox "Add storages" in the GUI or use the command line option
 '--add_storages' at pool creation.
 
+.Base Options
+Name:: The name of the pool. It must be unique and can't be changed afterwards.
+Size:: The number of replicas per object. Ceph always tries to have that many
+copies of an object. Default: `3`.
+PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
+the pool. If set to `warn`, a warning message is issued when a pool has a
+non-optimal PG count. Default: `warn`.
+Add as Storage:: Configure a VM or container storage using the new pool.
+Default: `true`.
+
+.Advanced Options
+Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
+the pool if a PG has fewer than this many replicas. Default: `2`.
+Crush Rule:: The rule to use for mapping object placement in the cluster. These
+rules define how data is placed within the cluster. See
+xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
+device-based rules.
+# of PGs:: The number of placement groups footnoteref:[placement_groups] that
+the pool should have at the beginning. Default: `128`.
+Target Size:: The estimated amount of data expected in the pool. The PG
+autoscaler uses that size to estimate the optimal PG count.
+Target Size Ratio:: The ratio of data that is expected in the pool. The PG
+autoscaler uses the ratio relative to other pools with a ratio set. It takes
+precedence over the `target size` if both are set.
+Min. # of PGs:: The minimum number of placement groups. This setting is used
+to fine-tune the lower bound of the PG count for that pool. The PG autoscaler
+will not merge PGs below this threshold.
+
 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
 {cephdocs-url}/rados/operations/pools/]
@@ -697,8 +729,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
 `'cephfs_metadata'' with one quarter of the data pools placement groups (`32`).
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
-number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+number (`pg_num`) for your setup footnoteref:[placement_groups].
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
 storage configuration after it was created successfully.
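
For illustration, the pool options documented above can also be given on the
command line at pool creation. A rough sketch, assuming the pveceph CLI
exposes the same fields as the GUI (check `pveceph pool create --help` for
the exact parameter names; `testpool` is just a placeholder):

    pveceph pool create testpool --size 3 --min_size 2 --pg_num 128 \
        --pg_autoscale_mode warn --add_storages 1

Settings of an existing pool can later be adjusted directly through Ceph,
e.g. with `ceph osd pool set <pool> <option> <value>`.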
 
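Similarly, the autoscaler behaviour described in the first hunk can be
inspected and tuned with standard Ceph tooling, for instance (pool name and
ratio value are placeholders):

    ceph osd pool autoscale-status
    ceph osd pool set <pool> pg_autoscale_mode on
    ceph osd pool set <pool> target_size_ratio 0.5
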
-- 
2.29.2




