* [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options
@ 2021-01-15 13:17 Alwin Antreich
2021-01-15 13:17 ` [pve-devel] [PATCH docs 2/2] ceph: add explanation on the pg autoscaler Alwin Antreich
2021-01-15 14:05 ` [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options Dylan Whyte
From: Alwin Antreich @ 2021-01-15 13:17 UTC
To: pve-devel
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
pveceph.adoc | 45 ++++++++++++++++++++++++++++++++++++++-------
1 file changed, 38 insertions(+), 7 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index fd3fded..42dfb02 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica which could lead to data
loss, incomplete PGs or unfound objects.
-It is advised to calculate the PG number depending on your setup, you can find
-the formula and the PG calculator footnote:[PG calculator
-https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
-increase and decrease the number of PGs later on footnote:[Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+It is advisable to calculate the PG number depending on your setup. You can
+find the formula and the PG calculator footnote:[PG calculator
+https://ceph.com/pgcalc/] online. Ceph Nautilus and newer, allow to increase
+and decrease the number of PGs footnoteref:[placement_groups,Placement Groups
+{cephdocs-url}/rados/operations/placement-groups/] later on.
+In addition to manual adjustment, the PG autoscaler
+footnoteref:[autoscaler,Automated Scaling
+{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
+automatically scale the PG count for a pool in the background.
You can create pools through command line or on the GUI on each PVE host under
**Ceph -> Pools**.
@@ -485,6 +489,34 @@ If you would like to automatically also get a storage definition for your pool,
mark the checkbox "Add storages" in the GUI or use the command line option
'--add_storages' at pool creation.
+.Base Options
+Name:: The name of the pool. It must be unique and can't be changed afterwards.
+Size:: The number of replicas per object. Ceph always tries to have that many
+copies of an object. Default: `3`.
+PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
+the pool. If set to `warn`, introducing a warning message when a pool
+is too far away from an optimal PG count. Default: `warn`.
+Add as Storage:: Configure a VM and container storage using the new pool.
+Default: `true`.
+
+.Advanced Options
+Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
+the pool if a PG has less than this many replicas. Default: `2`.
+Crush Rule:: The rule to use for mapping object placement in the cluster. These
+rules define how data is placed within the cluster. See
+xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
+device-based rules.
+# of PGs:: The number of placement groups footnoteref:[placement_groups] that
+the pool should have at the beginning. Default: `128`.
+Target Size:: The estimated amount of data expected in the pool. The PG
+autoscaler uses that size to estimate the optimal PG count.
+Target Size Ratio:: The ratio of data that is expected in the pool. The PG
+autoscaler uses the ratio relative to other ratio sets. It takes precedence
+over the `target size` if both are set.
+Min. # of PGs:: The minimal number of placement groups. This setting is used to
+fine-tune the lower amount of the PG count for that pool. The PG autoscaler
+will not merge PGs below this threshold.
+
Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/]
@@ -697,8 +729,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
`'cephfs_metadata'' with one quarter of the data pools placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
-number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
storage configuration after it was created successfully.
--
2.29.2
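As a rough sketch of how the options described above map to the command line
(the pool name `testpool` is a placeholder; `--size`, `--min_size`, `--pg_num`
and `--add_storages` are the `pveceph` options the patch text refers to):

[source,bash]
----
# create a replicated pool with 3 copies per object, allow I/O as long as
# at least 2 copies exist, start with 128 PGs, and add a storage definition
pveceph pool create testpool --size 3 --min_size 2 --pg_num 128 --add_storages

# the CephFS counterpart mentioned at the end of the patch
pveceph fs create --pg_num 128 --add-storage
----

For the initial PG count, the PG calculator's usual rule of thumb is roughly
(number of OSDs * 100) / size, rounded to the nearest power of two; for
example, 12 OSDs with size `3` gives 400, which rounds to `512`.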
* [pve-devel] [PATCH docs 2/2] ceph: add explanation on the pg autoscaler
2021-01-15 13:17 [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options Alwin Antreich
@ 2021-01-15 13:17 ` Alwin Antreich
2021-01-15 14:19 ` Dylan Whyte
2021-01-15 14:05 ` [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options Dylan Whyte
From: Alwin Antreich @ 2021-01-15 13:17 UTC
To: pve-devel
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
pveceph.adoc | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/pveceph.adoc b/pveceph.adoc
index 42dfb02..da8d35e 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -540,6 +540,42 @@ pveceph pool destroy <name>
NOTE: Deleting the data of a pool is a background task and can take some time.
You will notice that the data usage in the cluster is decreasing.
+
+PG Autoscaler
+~~~~~~~~~~~~~
+
+The PG autoscaler allows the cluster to consider the amount of (expected) data
+stored in each pool and to choose the appropriate pg_num values automatically.
+
+You may need to activate the PG autoscaler module before adjustments can take
+effect.
+[source,bash]
+----
+ceph mgr module enable pg_autoscaler
+----
+
+The autoscaler is configured on a per pool basis and has the following modes:
+
+[horizontal]
+warn:: A health warning is issued if the suggested `pg_num` value is too
+different from the current value.
+on:: The `pg_num` is adjusted automatically with no need for any manual
+interaction.
+off:: No automatic `pg_num` adjustments are made, no warning will be issued
+if the PG count is far from optimal.
+
+The scaling factor can be adjusted to facilitate future data storage, with the
+`target_size`, `target_size_ratio` and the `pg_num_min` options.
+
+WARNING: By default, the autoscaler considers tuning the PG count of a pool if
+it is off by a factor of 3. This will lead to a considerable shift in data
+placement and might introduce a high load on the cluster.
+
+You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
+https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
+Nautilus: PG merging and autotuning].
+
+
[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
--
2.29.2
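To sketch how the autoscaler modes and scaling hints above are applied per
pool (the pool name `testpool` and the values are examples; the property
names follow the upstream autoscaler documentation):

[source,bash]
----
# choose the autoscale mode of a pool: warn, on or off
ceph osd pool set testpool pg_autoscale_mode on

# hint at the expected amount of data so PGs can be sized up front;
# the ratio takes precedence if both are set
ceph osd pool set testpool target_size_bytes 100G
ceph osd pool set testpool target_size_ratio 0.5

# lower bound below which the autoscaler will not merge PGs
ceph osd pool set testpool pg_num_min 32

# compare the current and the optimal PG count of each pool
ceph osd pool autoscale-status
----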
* Re: [pve-devel] [PATCH docs 2/2] ceph: add explanation on the pg autoscaler
2021-01-15 13:17 ` [pve-devel] [PATCH docs 2/2] ceph: add explanation on the pg autoscaler Alwin Antreich
@ 2021-01-15 14:19 ` Dylan Whyte
From: Dylan Whyte @ 2021-01-15 14:19 UTC
To: Proxmox VE development discussion, Alwin Antreich
Sorry, I didn't prefix the other reply with a comment. Anyway, everything was pretty much fine; I just had some small issues. With this one, my issues are even more minor.
> On 15.01.2021 14:17 Alwin Antreich <a.antreich@proxmox.com> wrote:
>
>
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
> pveceph.adoc | 36 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 36 insertions(+)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index 42dfb02..da8d35e 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -540,6 +540,42 @@ pveceph pool destroy <name>
> NOTE: Deleting the data of a pool is a background task and can take some time.
> You will notice that the data usage in the cluster is decreasing.
>
> +
> +PG Autoscaler
> +~~~~~~~~~~~~~
> +
> +The PG autoscaler allows the cluster to consider the amount of (expected) data
> +stored in each pool and to choose the appropriate pg_num values automatically.
> +
> +You may need to activate the PG autoscaler module before adjustments can take
> +effect.
> +[source,bash]
> +----
> +ceph mgr module enable pg_autoscaler
> +----
> +
> +The autoscaler is configured on a per pool basis and has the following modes:
> +
> +[horizontal]
> +warn:: A health warning is issued if the suggested `pg_num` value is too
> +different from the current value.
>
s/is too different/differs too much/
(note: "too different" seems grammatically correct, but something sounds strange about it here. It could just be a personal thing...)
> +on:: The `pg_num` is adjusted automatically with no need for any manual
> +interaction.
> +off:: No automatic `pg_num` adjustments are made, no warning will be issued
>
s/made, no/made, and no/
> +if the PG count is far from optimal.
> +
> +The scaling factor can be adjusted to facilitate future data storage, with the
> +`target_size`, `target_size_ratio` and the `pg_num_min` options.
> +
> +WARNING: By default, the autoscaler considers tuning the PG count of a pool if
> +it is off by a factor of 3. This will lead to a considerable shift in data
> +placement and might introduce a high load on the cluster.
> +
> +You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
> +https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
> +Nautilus: PG merging and autotuning].
> +
> +
> [[pve_ceph_device_classes]]
> Ceph CRUSH & device classes
> ---------------------------
> --
> 2.29.2
* Re: [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options
2021-01-15 13:17 [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options Alwin Antreich
2021-01-15 13:17 ` [pve-devel] [PATCH docs 2/2] ceph: add explanation on the pg autoscaler Alwin Antreich
@ 2021-01-15 14:05 ` Dylan Whyte
From: Dylan Whyte @ 2021-01-15 14:05 UTC
To: Proxmox VE development discussion, Alwin Antreich
> On 15.01.2021 14:17 Alwin Antreich <a.antreich@proxmox.com> wrote:
>
>
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
> pveceph.adoc | 45 ++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 38 insertions(+), 7 deletions(-)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index fd3fded..42dfb02 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
> allows I/O on an object when it has only 1 replica which could lead to data
> loss, incomplete PGs or unfound objects.
>
> -It is advised to calculate the PG number depending on your setup, you can find
> -the formula and the PG calculator footnote:[PG calculator
> -https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
> -increase and decrease the number of PGs later on footnote:[Placement Groups
> -{cephdocs-url}/rados/operations/placement-groups/].
> +It is advisable to calculate the PG number depending on your setup. You can
> +find the formula and the PG calculator footnote:[PG calculator
> +https://ceph.com/pgcalc/] online. Ceph Nautilus and newer, allow to increase
> +and decrease the number of PGs footnoteref:[placement_groups,Placement Groups
>
s/Ceph Nautilus and newer, allow to/Ceph Nautilus and newer allow you to/
or "Ceph Nautilus and newer allow you to change the number of PGs", depending on whether you want "increase and decrease" to be clear or not.
> +{cephdocs-url}/rados/operations/placement-groups/] later on.
>
s/later on/after setup/
> +In addition to manual adjustment, the PG autoscaler
> +footnoteref:[autoscaler,Automated Scaling
> +{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
> +automatically scale the PG count for a pool in the background.
>
> You can create pools through command line or on the GUI on each PVE host under
> **Ceph -> Pools**.
> @@ -485,6 +489,34 @@ If you would like to automatically also get a storage definition for your pool,
> mark the checkbox "Add storages" in the GUI or use the command line option
> '--add_storages' at pool creation.
>
> +.Base Options
> +Name:: The name of the pool. It must be unique and can't be changed afterwards.
>
s/It must/This must/
s/unique and/unique, and/
> +Size:: The number of replicas per object. Ceph always tries to have that many
>
s/have that/have this/
> +copies of an object. Default: `3`.
> +PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
> +the pool. If set to `warn`, introducing a warning message when a pool
>
s/introducing/it produces/
> +is too far away from an optimal PG count. Default: `warn`.
>
s/is too far away from an optimal/has a suboptimal/
> +Add as Storage:: Configure a VM and container storage using the new pool.
>
s/VM and container/VM and/or container/
> +Default: `true`.
> +
> +.Advanced Options
> +Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
> +the pool if a PG has less than this many replicas. Default: `2`.
> +Crush Rule:: The rule to use for mapping object placement in the cluster. These
> +rules define how data is placed within the cluster. See
> +xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
> +device-based rules.
> +# of PGs:: The number of placement groups footnoteref:[placement_groups] that
> +the pool should have at the beginning. Default: `128`.
> +Target Size:: The estimated amount of data expected in the pool. The PG
> +autoscaler uses that size to estimate the optimal PG count.
>
s/that size/this size/
> +Target Size Ratio:: The ratio of data that is expected in the pool. The PG
> +autoscaler uses the ratio relative to other ratio sets. It takes precedence
> +over the `target size` if both are set.
> +Min. # of PGs:: The minimal number of placement groups. This setting is used to
>
s/minimal/minimum/
> +fine-tune the lower amount of the PG count for that pool. The PG autoscaler
>
s/lower amount/lower bound/
> +will not merge PGs below this threshold.
> +
> Further information on Ceph pool handling can be found in the Ceph pool
> operation footnote:[Ceph pool operation
> {cephdocs-url}/rados/operations/pools/]
> @@ -697,8 +729,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
> `'cephfs_metadata'' with one quarter of the data pools placement groups (`32`).
> Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
> Ceph documentation for more information regarding a fitting placement group
> -number (`pg_num`) for your setup footnote:[Ceph Placement Groups
> -{cephdocs-url}/rados/operations/placement-groups/].
> +number (`pg_num`) for your setup footnoteref:[placement_groups].
> Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
> storage configuration after it was created successfully.
>
s/was/has been/
> --
> 2.29.2