* [pve-devel] [PATCH v2 pve-docs 1/2] docs: ceph: explain pool options
@ 2021-02-18 10:39 Dylan Whyte
2021-02-18 10:39 ` [pve-devel] [PATCH v2 pve-docs 2/2] ceph: add explanation on the pg autoscaler Dylan Whyte
2021-02-25 18:29 ` [pve-devel] applied: [PATCH v2 pve-docs 1/2] docs: ceph: explain pool options Thomas Lamprecht
0 siblings, 2 replies; 4+ messages in thread
From: Dylan Whyte @ 2021-02-18 10:39 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Alwin Antreich
edited-by: Dylan Whyte <d.whyte@proxmox.com>
---
v1->v2:
* Minor language fixup
pveceph.adoc | 47 +++++++++++++++++++++++++++++++++++++++--------
1 file changed, 39 insertions(+), 8 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index fd3fded..9253613 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs, or unfound objects.
-It is advised to calculate the PG number depending on your setup, you can find
-the formula and the PG calculator footnote:[PG calculator
-https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
-increase and decrease the number of PGs later on footnote:[Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+It is advised that you calculate the PG number based on your setup. You can
+find the formula and the PG calculator footnote:[PG calculator
+https://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the
+number of PGs footnoteref:[placement_groups,Placement Groups
+{cephdocs-url}/rados/operations/placement-groups/] after the setup.
+In addition to manual adjustment, the PG autoscaler
+footnoteref:[autoscaler,Automated Scaling
+{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
+automatically scale the PG count for a pool in the background.
You can create pools from the command line or via the GUI of each PVE host,
under **Ceph -> Pools**.
@@ -485,6 +489,34 @@ If you would like to automatically get a storage definition for your pool as
well, mark the checkbox "Add storages" in the GUI, or use the command line
option '--add_storages' at pool creation.
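+
+For example, a pool with the default settings could be created and added as a
+storage from the command line roughly as follows (a sketch; the pool name
+`testpool` is just an example):
+
+[source,bash]
+----
+# create a replicated pool with 128 PGs and add it as a PVE storage
+pveceph pool create testpool --pg_num 128 --add_storages
+----
+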
+.Base Options
+Name:: The name of the pool. This must be unique and can't be changed afterwards.
+Size:: The number of replicas per object. Ceph always tries to have this many
+copies of an object. Default: `3`.
+PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
+the pool. If set to `warn`, it produces a warning message when a pool
+has a non-optimal PG count. Default: `warn`.
+Add as Storage:: Configure a VM or container storage using the new pool.
+Default: `true`.
+
+.Advanced Options
+Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
+the pool if a PG has fewer than this many replicas. Default: `2`.
+Crush Rule:: The rule to use for mapping object placement in the cluster. These
+rules define how data is placed within the cluster. See
+xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
+device-based rules.
+# of PGs:: The number of placement groups footnoteref:[placement_groups] that
+the pool should have at the beginning. Default: `128`.
+Target Size:: The estimated amount of data expected in the pool. The PG
+autoscaler uses this size to estimate the optimal PG count.
+Target Size Ratio:: The ratio of data that is expected in the pool. The PG
+autoscaler uses this ratio relative to the ratios set on other pools. It takes
+precedence over the `target size` if both are set.
+Min. # of PGs:: The minimum number of placement groups. This setting is used to
+fine-tune the lower bound of the PG count for that pool. The PG autoscaler
+will not merge PGs below this threshold.
+
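+As a rough sketch of how these advanced options map to Ceph itself, the
+autoscaler-related values can also be adjusted on an existing pool with the
+upstream `ceph` CLI (`testpool` is again just an example name):
+
+[source,bash]
+----
+# expect about 20% of the cluster's data to end up in this pool
+ceph osd pool set testpool target_size_ratio 0.2
+# keep the autoscaler from merging the pool below 32 PGs
+ceph osd pool set testpool pg_num_min 32
+----
+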
Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/]
@@ -697,10 +729,9 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
`'cephfs_metadata'' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
-number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
-storage configuration after it was created successfully.
+storage configuration after it has been created successfully.
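+
+For reference, the creation command described above could look roughly like
+this (a sketch using the values mentioned in this paragraph):
+
+[source,bash]
+----
+# create a CephFS with 128 data-pool PGs and register it as a PVE storage
+pveceph fs create --pg_num 128 --add-storage
+----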
Destroy CephFS
~~~~~~~~~~~~~~
--
2.20.1
* [pve-devel] [PATCH v2 pve-docs 2/2] ceph: add explanation on the pg autoscaler
2021-02-18 10:39 [pve-devel] [PATCH v2 pve-docs 1/2] docs: ceph: explain pool options Dylan Whyte
@ 2021-02-18 10:39 ` Dylan Whyte
2021-02-25 18:29 ` [pve-devel] applied: " Thomas Lamprecht
2021-02-25 18:29 ` [pve-devel] applied: [PATCH v2 pve-docs 1/2] docs: ceph: explain pool options Thomas Lamprecht
1 sibling, 1 reply; 4+ messages in thread
From: Dylan Whyte @ 2021-02-18 10:39 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Alwin Antreich
edited-by: Dylan Whyte <d.whyte@proxmox.com>
---
v1->v2:
* minor language fixup
pveceph.adoc | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/pveceph.adoc b/pveceph.adoc
index 9253613..9ef268b 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -540,6 +540,42 @@ pveceph pool destroy <name>
NOTE: Deleting the data of a pool is a background task and can take some time.
You will notice that the data usage in the cluster is decreasing.
+
+PG Autoscaler
+~~~~~~~~~~~~~
+
+The PG autoscaler allows the cluster to consider the amount of (expected) data
+stored in each pool and to choose the appropriate pg_num values automatically.
+
+You may need to activate the PG autoscaler module before adjustments can take
+effect.
+[source,bash]
+----
+ceph mgr module enable pg_autoscaler
+----
+
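+Once the module is active, you can check what the autoscaler currently
+suggests for each pool (the exact output columns may vary between Ceph
+releases):
+
+[source,bash]
+----
+# show current and suggested PG counts per pool
+ceph osd pool autoscale-status
+----
+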
+The autoscaler is configured on a per-pool basis and has the following modes:
+
+[horizontal]
+warn:: A health warning is issued if the suggested `pg_num` value differs too
+much from the current value.
+on:: The `pg_num` is adjusted automatically with no need for any manual
+interaction.
+off:: No automatic `pg_num` adjustments are made, and no warning will be issued
+if the PG count is far from optimal.
+
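+For instance, to switch a single pool to fully automatic scaling (`testpool`
+is just an example name):
+
+[source,bash]
+----
+# let the autoscaler adjust pg_num for this pool on its own
+ceph osd pool set testpool pg_autoscale_mode on
+----
+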
+The scaling factor can be adjusted to facilitate future data storage, using
+the `target_size`, `target_size_ratio`, and `pg_num_min` options.
+
+WARNING: By default, the autoscaler considers tuning the PG count of a pool if
+it is off by a factor of 3. This will lead to a considerable shift in data
+placement and might introduce a high load on the cluster.
+
+You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
+https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
+Nautilus: PG merging and autotuning].
+
+
[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
--
2.20.1
* [pve-devel] applied: [PATCH v2 pve-docs 1/2] docs: ceph: explain pool options
2021-02-18 10:39 [pve-devel] [PATCH v2 pve-docs 1/2] docs: ceph: explain pool options Dylan Whyte
2021-02-18 10:39 ` [pve-devel] [PATCH v2 pve-docs 2/2] ceph: add explanation on the pg autoscaler Dylan Whyte
@ 2021-02-25 18:29 ` Thomas Lamprecht
1 sibling, 0 replies; 4+ messages in thread
From: Thomas Lamprecht @ 2021-02-25 18:29 UTC (permalink / raw)
To: Proxmox VE development discussion, Dylan Whyte
On 18.02.21 11:39, Dylan Whyte wrote:
> Signed-off-by: Alwin Antreich
should have been:
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> edited-by: Dylan Whyte <d.whyte@proxmox.com>
Please always sign off your patches; multiple trailers are just fine.
I took the liberty of fixing this.
> ---
>
> v1->v2:
> * Minor language fixup
>
> pveceph.adoc | 47 +++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 39 insertions(+), 8 deletions(-)
>
>
applied, thanks!
* [pve-devel] applied: [PATCH v2 pve-docs 2/2] ceph: add explanation on the pg autoscaler
2021-02-18 10:39 ` [pve-devel] [PATCH v2 pve-docs 2/2] ceph: add explanation on the pg autoscaler Dylan Whyte
@ 2021-02-25 18:29 ` Thomas Lamprecht
0 siblings, 0 replies; 4+ messages in thread
From: Thomas Lamprecht @ 2021-02-25 18:29 UTC (permalink / raw)
To: Proxmox VE development discussion, Dylan Whyte
On 18.02.21 11:39, Dylan Whyte wrote:
> Signed-off-by: Alwin Antreich
same as in the other patch
> edited-by: Dylan Whyte <d.whyte@proxmox.com>
> ---
> v1->v2:
> * minor language fixup
>
> pveceph.adoc | 36 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 36 insertions(+)
>
>
applied, thanks!