From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 565BC68E84
 for ; Fri, 15 Jan 2021 14:17:52 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 4E291254F7
 for ; Fri, 15 Jan 2021 14:17:22 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com [212.186.127.180])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id 54C6F254EB
 for ; Fri, 15 Jan 2021 14:17:20 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 17D8844993
 for ; Fri, 15 Jan 2021 14:17:20 +0100 (CET)
From: Alwin Antreich
To: pve-devel@lists.proxmox.com
Date: Fri, 15 Jan 2021 14:17:15 +0100
Message-Id: <20210115131716.243126-1-a.antreich@proxmox.com>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SPAM-LEVEL: Spam detection results: 0
 AWL 0.015 Adjusted score from AWL reputation of From: address
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 KAM_SHORT 0.001 Use of a URL Shortener for very short URL
 RCVD_IN_DNSWL_MED -2.3 Sender listed at https://www.dnswl.org/, medium trust
 SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS -0.001 SPF: sender matches SPF record
 URIBL_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to URIBL was blocked. See
 http://wiki.apache.org/spamassassin/DnsBlocklists#dnsbl-block for more
 information. [ceph.com]
Subject: [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 15 Jan 2021 13:17:52 -0000

Signed-off-by: Alwin Antreich
---
 pveceph.adoc | 45 ++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 38 insertions(+), 7 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index fd3fded..42dfb02 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
 allows I/O on an object when it has only 1 replica which could lead to data
 loss, incomplete PGs or unfound objects.
 
-It is advised to calculate the PG number depending on your setup, you can find
-the formula and the PG calculator footnote:[PG calculator
-https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
-increase and decrease the number of PGs later on footnote:[Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+It is advisable to calculate the PG number based on your setup. You can
+find the formula and the PG calculator footnote:[PG calculator
+https://ceph.com/pgcalc/] online. Ceph Nautilus and newer allow increasing
+and decreasing the number of PGs footnoteref:[placement_groups,Placement Groups
+{cephdocs-url}/rados/operations/placement-groups/] later on.
+In addition to manual adjustment, the PG autoscaler
+footnoteref:[autoscaler,Automated Scaling
+{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
+automatically scale the PG count for a pool in the background.
 
 You can create pools through command line or on the GUI on each PVE host under
 **Ceph -> Pools**.
 
@@ -485,6 +489,34 @@ If you would like to automatically also get a storage definition for your
 pool, mark the checkbox "Add storages" in the GUI or use the command line
 option '--add_storages' at pool creation.
 
+.Base Options
+Name:: The name of the pool. It must be unique and can't be changed afterwards.
+Size:: The number of replicas per object. Ceph always tries to have that many
+copies of an object. Default: `3`.
+PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
+the pool. If set to `warn`, a warning message is issued when a pool is too far
+away from an optimal PG count. Default: `warn`.
+Add as Storage:: Configure a VM and container storage using the new pool.
+Default: `true`.
+
+.Advanced Options
+Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
+the pool if a PG has fewer than this many replicas. Default: `2`.
+Crush Rule:: The rule to use for mapping object placement in the cluster. These
+rules define how data is placed within the cluster. See
+xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
+device-based rules.
+# of PGs:: The number of placement groups footnoteref:[placement_groups] that
+the pool should have at the beginning. Default: `128`.
+Target Size:: The estimated amount of data expected in the pool. The PG
+autoscaler uses that size to estimate the optimal PG count.
+Target Size Ratio:: The ratio of data that is expected in the pool. The PG
+autoscaler uses the ratio relative to other pools with a ratio set. It takes
+precedence over the `target size` if both are set.
+Min. # of PGs:: The minimum number of placement groups. This setting is used to
+fine-tune the lower bound of the PG count for that pool. The PG autoscaler
+will not merge PGs below this threshold.
+
 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
 {cephdocs-url}/rados/operations/pools/]
@@ -697,8 +729,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
 `'cephfs_metadata'' with one quarter of the data pools placement groups (`32`).
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
-number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-{cephdocs-url}/rados/operations/placement-groups/].
+number (`pg_num`) for your setup footnoteref:[placement_groups].
 
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
 storage configuration after it was created successfully.
-- 
2.29.2
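
As a sketch of how the documented options map onto the command line, the
following assumes the `pveceph pool create` subcommand with option names
mirroring the GUI fields. Only `--size`, `--min_size`, `--pg_num` and
`--add_storages` are taken as given here; `--pg_autoscale_mode` and
`--target_size_ratio` are assumed names for the autoscaler-related fields and
should be checked against `pveceph help pool create` on the installed version.
The pool names are placeholders.

----
# Replicated pool with explicit size/min_size and an initial PG count,
# registered as a VM/container storage in the same step (--add_storages).
pveceph pool create vm-pool --size 3 --min_size 2 --pg_num 128 --add_storages

# Assumed option names for the autoscaler-related fields; verify before use.
pveceph pool create bulk-pool --pg_autoscale_mode warn --target_size_ratio 0.2
----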
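
The autoscaler-related settings can also be inspected and changed on an
existing pool with plain Ceph commands (Nautilus or newer), independent of the
{pve} tooling. This is a generic Ceph-level sketch; `vm-pool` is again a
placeholder and the values are arbitrary examples.

----
# Show the autoscaler's view of every pool: current and optimal PG count,
# target size/ratio and the configured autoscale mode.
ceph osd pool autoscale-status

# Only warn about a non-optimal PG count instead of changing pg_num directly.
ceph osd pool set vm-pool pg_autoscale_mode warn

# Hint the expected amount of data, absolutely or as a ratio; as noted above,
# the ratio takes precedence if both are set.
ceph osd pool set vm-pool target_size_bytes 100G
ceph osd pool set vm-pool target_size_ratio 0.2
----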