Date: Fri, 15 Jan 2021 15:05:58 +0100 (CET)
From: Dylan Whyte
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Alwin Antreich
Message-ID: <832714900.48.1610719558401@webmail.proxmox.com>
In-Reply-To: <20210115131716.243126-1-a.antreich@proxmox.com>
References: <20210115131716.243126-1-a.antreich@proxmox.com>
Subject: Re: [pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options

> On 15.01.2021 14:17 Alwin Antreich wrote:
>
>
> Signed-off-by: Alwin Antreich
> ---
>  pveceph.adoc | 45 ++++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 38 insertions(+), 7 deletions(-)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index fd3fded..42dfb02 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
>  allows I/O on an object when it has only 1 replica which could lead to data
>  loss, incomplete PGs or unfound objects.
>
> -It is advised to calculate the PG number depending on your setup, you can find
> -the formula and the PG calculator footnote:[PG calculator
> -https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
> -increase and decrease the number of PGs later on footnote:[Placement Groups
> -{cephdocs-url}/rados/operations/placement-groups/].
> +It is advisable to calculate the PG number depending on your setup. You can
> +find the formula and the PG calculator footnote:[PG calculator
> +https://ceph.com/pgcalc/] online. Ceph Nautilus and newer, allow to increase
> +and decrease the number of PGs footnoteref:[placement_groups,Placement Groups

s/Ceph Nautilus and newer, allow to/Ceph Nautilus and newer allow you to/
or "Ceph Nautilus and newer allow you to change the number of PGs", depending
on whether you want to keep "increase and decrease" explicit.

> +{cephdocs-url}/rados/operations/placement-groups/] later on.

s/later on/after setup/

> +In addition to manual adjustment, the PG autoscaler
> +footnoteref:[autoscaler,Automated Scaling
> +{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
> +automatically scale the PG count for a pool in the background.
>
>  You can create pools through command line or on the GUI on each PVE host under
>  **Ceph -> Pools**.
> @@ -485,6 +489,34 @@ If you would like to automatically also get a storage definition for your pool,
>  mark the checkbox "Add storages" in the GUI or use the command line option
>  '--add_storages' at pool creation.
>
> +.Base Options
> +Name:: The name of the pool. It must be unique and can't be changed afterwards.

s/It must/This must/ s/unique and/unique, and/

> +Size:: The number of replicas per object. Ceph always tries to have that many

s/have that/have this/

> +copies of an object. Default: `3`.
> +PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
> +the pool. If set to `warn`, introducing a warning message when a pool

s/introducing/it produces/

> +is too far away from an optimal PG count. Default: `warn`.

s/is too far away from an optimal/has a suboptimal/

> +Add as Storage:: Configure a VM and container storage using the new pool.

s/VM and container/VM and\/or container/

> +Default: `true`.
> +
> +.Advanced Options
> +Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
> +the pool if a PG has less than this many replicas. Default: `2`.
> +Crush Rule:: The rule to use for mapping object placement in the cluster. These
> +rules define how data is placed within the cluster. See
> +xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
> +device-based rules.
> +# of PGs:: The number of placement groups footnoteref:[placement_groups] that
> +the pool should have at the beginning. Default: `128`.
> +Traget Size:: The estimated amount of data expected in the pool. The PG
> +autoscaler uses that size to estimate the optimal PG count.

s/Traget Size/Target Size/ s/that size/this size/

> +Target Size Ratio:: The ratio of data that is expected in the pool. The PG
> +autoscaler uses the ratio relative to other ratio sets. It takes precedence
> +over the `target size` if both are set.
> +Min. # of PGs:: The minimal number of placement groups. This setting is used to

s/minimal/minimum/

> +fine-tune the lower amount of the PG count for that pool. The PG autoscaler

s/lower amount/lower bound/

> +will not merge PGs below this threshold.
> +
>  Further information on Ceph pool handling can be found in the Ceph pool
>  operation footnote:[Ceph pool operation
>  {cephdocs-url}/rados/operations/pools/]
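
Just an idea, feel free to ignore: since this section now documents all the
pool options, it might be nice if the docs also showed a short CLI example of
how the GUI fields map to `pveceph pool create`. A rough, untested sketch (I
took the option names from the GUI labels and the API and did not verify that
the CLI accepts every one of them under exactly these spellings; <pool-name>
and the `0.5` ratio are only placeholders/example values):

    # create a replicated pool, let the autoscaler only warn about a
    # suboptimal PG count, and add it as PVE storage right away
    pveceph pool create <pool-name> --size 3 --min_size 2 \
        --pg_autoscale_mode warn --target_size_ratio 0.5 --add_storages
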
> @@ -697,8 +729,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
>  `'cephfs_metadata'' with one quarter of the data pools placement groups (`32`).
>  Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
>  Ceph documentation for more information regarding a fitting placement group
> -number (`pg_num`) for your setup footnote:[Ceph Placement Groups
> -{cephdocs-url}/rados/operations/placement-groups/].
> +number (`pg_num`) for your setup footnoteref:[placement_groups].
>  Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
>  storage configuration after it was created successfully.

s/was/has been/

> --
> 2.29.2
>
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
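
One more optional thought, on top of the language comments above: the text now
mentions that the PG count can be changed after setup and that the autoscaler
can manage it in the background, so it might help readers to also see how to
inspect and adjust this from the shell. These are plain upstream Ceph commands
(the pool name and the value 64 are only example values):

    # show per-pool PG counts and the autoscaler's suggestions
    ceph osd pool autoscale-status

    # manually raise or lower a pool's PG count (Nautilus and newer)
    ceph osd pool set <pool-name> pg_num 64

No need to act on this if you think it is out of scope for the patch.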