From: Dylan Whyte
To: pve-devel@lists.proxmox.com
Date: Thu, 18 Feb 2021 11:39:09 +0100
Message-Id: <20210218103910.21127-1-d.whyte@proxmox.com>
X-Mailer: git-send-email 2.20.1
Subject: [pve-devel] [PATCH v2 pve-docs 1/2] docs: ceph: explain pool options

Signed-off-by: Alwin Antreich
edited-by: Dylan Whyte
---
v1->v2:
* Minor language fixup

 pveceph.adoc | 47 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 39 insertions(+), 8 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index fd3fded..9253613 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
 allows I/O on an object when it has only 1 replica which could lead to data
 loss, incomplete PGs or unfound objects.
 
-It is advised to calculate the PG number depending on your setup, you can find
-the formula and the PG calculator footnote:[PG calculator
-https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
-increase and decrease the number of PGs later on footnote:[Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+It is advised that you calculate the PG number based on your setup. You can
+find the formula and the PG calculator footnote:[PG calculator
+https://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the
+number of PGs footnoteref:[placement_groups,Placement Groups
+{cephdocs-url}/rados/operations/placement-groups/] after the setup.
+In addition to manual adjustment, the PG autoscaler
+footnoteref:[autoscaler,Automated Scaling
+{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
+automatically scale the PG count for a pool in the background.
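+
+// A minimal illustrative sketch (not part of the original hunk): these are
+// standard upstream Ceph commands for inspecting and steering the autoscaler;
+// the pool name 'vm_pool' is only a placeholder.
+[source,bash]
+----
+# show current and autoscaler-suggested PG counts for all pools
+ceph osd pool autoscale-status
+# let the autoscaler adjust the PG count of this pool on its own
+ceph osd pool set vm_pool pg_autoscale_mode on
+# or change the PG count manually (possible since Nautilus)
+ceph osd pool set vm_pool pg_num 128
+----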
 
 You can create pools through command line or on the GUI on each PVE host
 under **Ceph -> Pools**.
 
@@ -485,6 +489,34 @@ If you would like to automatically also get a storage definition for your
 pool, mark the checkbox "Add storages" in the GUI or use the command line
 option '--add_storages' at pool creation.
 
+.Base Options
+Name:: The name of the pool. This must be unique and can't be changed afterwards.
+Size:: The number of replicas per object. Ceph always tries to have this many
+copies of an object. Default: `3`.
+PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
+the pool. If set to `warn`, it produces a warning message when a pool
+has a non-optimal PG count. Default: `warn`.
+Add as Storage:: Configure a VM or container storage using the new pool.
+Default: `true`.
+
+.Advanced Options
+Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
+the pool if a PG has fewer than this many replicas. Default: `2`.
+Crush Rule:: The rule to use for mapping object placement in the cluster. These
+rules define how data is placed within the cluster. See
+xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
+device-based rules.
+# of PGs:: The number of placement groups footnoteref:[placement_groups] that
+the pool should have at the beginning. Default: `128`.
+Target Size:: The estimated amount of data expected in the pool. The PG
+autoscaler uses this size to estimate the optimal PG count.
+Target Size Ratio:: The ratio of data that is expected in the pool. The PG
+autoscaler uses the ratio relative to other pools with a ratio set. It takes
+precedence over the `target size` if both are set.
+Min. # of PGs:: The minimum number of placement groups. This setting is used to
+fine-tune the lower bound of the PG count for that pool. The PG autoscaler
+will not merge PGs below this threshold.
+
 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
 {cephdocs-url}/rados/operations/pools/]
@@ -697,10 +729,9 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
 `'cephfs_metadata'' with one quarter of the data pools placement groups (`32`).
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
-number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+number (`pg_num`) for your setup footnoteref:[placement_groups].
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
-storage configuration after it was created successfully.
+storage configuration after it has been created successfully.
 
 Destroy CephFS
 ~~~~~~~~~~~~~~
-- 
2.20.1
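
As a rough usage sketch to accompany the pool options documented above: the
pool name `vm_pool` is a placeholder, and the exact set of `pveceph` flags can
vary between Proxmox VE versions (see `man pveceph`), while the
`ceph osd pool set` calls are plain upstream Ceph commands, so treat this as
illustrative rather than as part of the patch.

[source,bash]
----
# create a replicated pool with explicit size/min_size and an initial PG count,
# and add a matching storage definition (the '--add_storages' option above)
pveceph pool create vm_pool --size 3 --min_size 2 --pg_num 128 --add_storages

# autoscaler-related settings can also be applied with upstream Ceph tooling
ceph osd pool set vm_pool pg_autoscale_mode warn   # warn | on | off
ceph osd pool set vm_pool target_size_ratio 0.3    # expected share of total data
ceph osd pool set vm_pool pg_num_min 32            # lower bound for the autoscaler
----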