From: Dylan Whyte
To: pve-devel@lists.proxmox.com
Date: Mon, 26 Apr 2021 17:27:41 +0200
Message-Id: <20210426152741.25253-2-d.whyte@proxmox.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210426152741.25253-1-d.whyte@proxmox.com>
References: <20210426152741.25253-1-d.whyte@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH pve-docs 2/2] ceph: language fixup for storage section

improve language of the cephfs storage backend section.

Signed-off-by: Dylan Whyte
---
 pve-storage-cephfs.adoc | 63 +++++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 31 deletions(-)

diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
index c8615a9..b5d99db 100644
--- a/pve-storage-cephfs.adoc
+++ b/pve-storage-cephfs.adoc
@@ -8,31 +8,31 @@ endif::wiki[]
 
 Storage pool type: `cephfs`
 
-CephFS implements a POSIX-compliant filesystem using a http://ceph.com[Ceph]
-storage cluster to store its data. As CephFS builds on Ceph it shares most of
-its properties, this includes redundancy, scalability, self healing and high
+CephFS implements a POSIX-compliant filesystem, using a http://ceph.com[Ceph]
+storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
+its properties. This includes redundancy, scalability, self-healing, and high
 availability.
 
-TIP: {pve} can xref:chapter_pveceph[manage ceph setups], which makes
-configuring a CephFS storage easier. As recent hardware has plenty of CPU power
-and RAM, running storage services and VMs on same node is possible without a
-big performance impact.
+TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
+configuring a CephFS storage easier. As modern hardware offers a lot of
+processing power and RAM, running storage services and VMs on the same node is
+possible without a significant performance impact.
 
-To use the CephFS storage plugin you need update the debian stock Ceph client.
-Add our Ceph repository xref:sysadmin_package_repositories_ceph[Ceph repository].
-Once added, run an `apt update` and `apt dist-upgrade` cycle to get the newest
-packages.
+To use the CephFS storage plugin, you must replace the stock Debian Ceph client
+by adding our xref:sysadmin_package_repositories_ceph[Ceph repository].
+Once added, run `apt update`, followed by `apt dist-upgrade`, in order to get
+the newest packages.
 
-You need to make sure that there is no other Ceph repository configured,
-otherwise the installation will fail or there will be mixed package
-versions on the node, leading to unexpected behavior.
+WARNING: Please ensure that there are no other Ceph repositories configured.
+Otherwise, the installation will fail or there will be mixed package versions
+on the node, leading to unexpected behavior.
 
 [[storage_cephfs_config]]
 Configuration
 ~~~~~~~~~~~~~
 
 This backend supports the common storage properties `nodes`,
-`disable`, `content`, and the following `cephfs` specific properties:
+`disable`, `content`, as well as the following `cephfs` specific properties:
 
 monhost::
 
@@ -45,7 +45,7 @@ The local mount point. Optional, defaults to `/mnt/pve//`.
 
 username::
 
-Ceph user id. Optional, only needed if Ceph is not running on the PVE cluster
+Ceph user id. Optional, only needed if Ceph is not running on the PVE cluster,
 where it defaults to `admin`.
 
 subdir::
@@ -57,7 +57,7 @@ fuse::
 Access CephFS through FUSE, instead of the kernel client. Optional, defaults
 to `0`.
 
-.Configuration Example for a external Ceph cluster (`/etc/pve/storage.cfg`)
+.Configuration example for an external Ceph cluster (`/etc/pve/storage.cfg`)
 ----
 cephfs: cephfs-external
 	monhost 10.1.1.20 10.1.1.21 10.1.1.22
@@ -65,13 +65,13 @@ cephfs: cephfs-external
 	content backup
 	username admin
 ----
-NOTE: Don't forget to setup the client secret key file if cephx was not turned
-off.
+NOTE: Don't forget to set up the client's secret key file, if cephx was not
+disabled.
 
 Authentication
 ~~~~~~~~~~~~~~
 
-If you use the, by-default enabled, `cephx` authentication, you need to copy
+If you use `cephx` authentication, which is enabled by default, you need to copy
 the secret from your external Ceph cluster to a Proxmox VE host.
 
 Create the directory `/etc/pve/priv/ceph` with
@@ -82,25 +82,26 @@ Then copy the secret
 
  scp cephfs.secret :/etc/pve/priv/ceph/.secret
 
-The secret must be named to match your ``. Copying the
+The secret must be renamed to match your ``. Copying the
 secret generally requires root privileges. The file must only contain the
-secret key itself, opposed to the `rbd` backend which also contains a
+secret key itself, as opposed to the `rbd` backend which also contains a
 `[client.userid]` section.
 
-A secret can be received from the ceph cluster (as ceph admin) by issuing the
-following command. Replace the `userid` with the actual client ID configured to
-access the cluster. For further ceph user management see the Ceph docs
-footnote:[Ceph user management {cephdocs-url}/rados/operations/user-management/].
+A secret can be received from the Ceph cluster (as Ceph admin) by issuing the
+command below, where `userid` is the client ID that has been configured to
+access the cluster. For further information on Ceph user management, see the
+Ceph docs footnote:[Ceph user management
+{cephdocs-url}/rados/operations/user-management/].
 
  ceph auth get-key client.userid > cephfs.secret
 
-If Ceph is installed locally on the PVE cluster, i.e., setup with `pveceph`,
-this is done automatically.
+If Ceph is installed locally on the PVE cluster, that is, it was set up using
+`pveceph`, this is done automatically.
 
 Storage Features
 ~~~~~~~~~~~~~~~~
 
-The `cephfs` backend is a POSIX-compliant filesystem on top of a Ceph cluster.
+The `cephfs` backend is a POSIX-compliant filesystem, on top of a Ceph cluster.
 
 .Storage features for backend `cephfs`
 [width="100%",cols="m,m,3*d",options="header"]
@@ -108,8 +109,8 @@
 |Content types |Image formats |Shared |Snapshots |Clones
 |vztmpl iso backup snippets |none |yes |yes^[1]^ |no
 |==============================================================================
-^[1]^ Snapshots, while no known bugs, cannot be guaranteed to be stable yet, as
-they lack testing.
+^[1]^ While no known bugs exist, snapshots are not yet guaranteed to be stable,
+as they lack sufficient testing.
 
 ifdef::wiki[]
 
-- 
2.20.1
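For reviewers trying the patched authentication section out, the secret-key workflow it describes can be sketched as a small shell script. This is only an illustration, not part of the patch: the storage name `cephfs-external` and user `admin` are taken from the docs' example config, and the `<storage-id>.secret` naming is an assumption based on the surrounding text (the placeholder is elided in the quoted hunk). The cluster-side commands are left commented out, since they require a running Ceph cluster and root privileges.

```shell
#!/bin/sh
# Sketch of the cephx secret-key workflow from the patched docs.
# STORAGE_ID and USERID are hypothetical example values; substitute your own.
STORAGE_ID="cephfs-external"
USERID="admin"

# On the Ceph admin node: extract the client key (needs a live cluster).
# ceph auth get-key "client.$USERID" > cephfs.secret

# On the Proxmox VE host: the key file must be renamed to match the
# storage name and must contain only the key itself (no [client.*] section).
# mkdir -p /etc/pve/priv/ceph
# cp cephfs.secret "/etc/pve/priv/ceph/$STORAGE_ID.secret"

# Print the destination path the cephfs plugin would read:
echo "/etc/pve/priv/ceph/$STORAGE_ID.secret"
```

Running it prints `/etc/pve/priv/ceph/cephfs-external.secret`, the path at which the plugin would look for the key under these assumed names.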