From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Wed, 26 Jan 2022 11:18:43 +0100
Message-Id: <20220126101844.558040-2-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220126101844.558040-1-a.lauterer@proxmox.com>
References: <20220126101844.558040-1-a.lauterer@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH v2 docs 2/3] storage: rbd: cephfs: update
 authentication section
List-Id: Proxmox VE development discussion

It is no longer necessary to place the keyring/secret file manually in
the correct location, as this can now be done with pvesm and the
GUI/API.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes: restructured the overall flow according to @Fabian_E's
suggestions: first the CLI way, then a mention of the GUI, before
giving the background information on where the secret/keyring is
actually stored.

I also added more detailed CLI examples and changed their style.

Also rephrased a few other sentences that were rather hard to read,
especially to mention that this is done automatically in a
hyperconverged setup.

Fixed footnotes. There should only be one footnote regarding the Ceph
user management docs now.

 pve-storage-cephfs.adoc | 47 ++++++++++++++++++++++++++++-------------
 pve-storage-rbd.adoc    | 42 +++++++++++++++++++++++++++---------
 2 files changed, 64 insertions(+), 25 deletions(-)

diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
index 4035617..88b92e6 100644
--- a/pve-storage-cephfs.adoc
+++ b/pve-storage-cephfs.adoc
@@ -71,32 +71,49 @@ disabled.
 Authentication
 ~~~~~~~~~~~~~~
 
-If you use `cephx` authentication, which is enabled by default, you need to copy
-the secret from your external Ceph cluster to a Proxmox VE host.
+If you use `cephx` authentication, which is enabled by default, you need to
+provide the secret from the external Ceph cluster.
 
-Create the directory `/etc/pve/priv/ceph` with
+To configure the storage via the CLI, you first need to make the file
+containing the secret available. One way is to copy the file from the external
+Ceph cluster directly to one of the {pve} nodes. The following example will
+copy it to the `/root` directory of the node on which we run it:
 
- mkdir /etc/pve/priv/ceph
+----
+# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
+----
+
+Then use the `pvesm` CLI tool to configure the external CephFS storage. Use the
+`--keyring` parameter, which needs to be a path to the secret file that you
+copied. For example:
+
+----
+# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
+----
 
-Then copy the secret
+When configuring an external CephFS storage via the GUI, you can copy and paste
+the secret into the appropriate field.
 
- scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret
+The secret is only the key itself, as opposed to the `rbd` backend which also
+contains a `[client.userid]` section.
 
-The secret must be renamed to match your `<STORAGE_ID>`. Copying the
-secret generally requires root privileges. The file must only contain the
-secret key itself, as opposed to the `rbd` backend which also contains a
-`[client.userid]` section.
+The secret will be stored at
+
+----
+# /etc/pve/priv/ceph/<STORAGE_ID>.secret
+----
 
 A secret can be received from the Ceph cluster (as Ceph admin) by issuing the
 command below, where `userid` is the client ID that has been configured to
 access the cluster. For further information on Ceph user management, see the
-Ceph docs footnote:[Ceph user management
-{cephdocs-url}/rados/operations/user-management/].
+Ceph docs.footnoteref:[cephusermgmt]
 
- ceph auth get-key client.userid > cephfs.secret
+----
+# ceph auth get-key client.userid > cephfs.secret
+----
 
-If Ceph is installed locally on the PVE cluster, that is, it was set up using
-`pveceph`, this is done automatically.
+If Ceph is installed locally on the {pve} cluster, this is done automatically
+when adding the storage.
 
 Storage Features
 ~~~~~~~~~~~~~~~~
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index 917926d..4002fd3 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -1,3 +1,4 @@
+:fn-ceph-user-mgmt: footnote:cephusermgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]
 [[ceph_rados_block_devices]]
 Ceph RADOS Block Devices (RBD)
 ------------------------------
@@ -69,22 +70,43 @@ TIP: You can use the `rbd` utility to do low-level management tasks.
 Authentication
 ~~~~~~~~~~~~~~
 
-If you use `cephx` authentication, you need to copy the keyfile from your
-external Ceph cluster to a Proxmox VE host.
+If you use `cephx` authentication, which is enabled by default, you need to
+provide the keyring from the external Ceph cluster.
 
-Create the directory `/etc/pve/priv/ceph` with
+To configure the storage via the CLI, you first need to make the file
+containing the keyring available. One way is to copy the file from the external
+Ceph cluster directly to one of the {pve} nodes. The following example will
+copy it to the `/root` directory of the node on which we run it:
 
- mkdir /etc/pve/priv/ceph
+----
+# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
+----
+
+Then use the `pvesm` CLI tool to configure the external RBD storage. Use the
+`--keyring` parameter, which needs to be a path to the keyring file that you
+copied. For example:
+
+----
+# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
+----
+
+When configuring an external RBD storage via the GUI, you can copy and paste
+the keyring into the appropriate field.
+
+The keyring will be stored at
+
+----
+# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
+----
 
-Then copy the keyring
+If Ceph is installed locally on the {pve} cluster, this is done automatically
+when adding the storage.
 
- scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring
+TIP: Creating a keyring with only the needed capabilities is recommended when
+connecting to an external cluster. For further information on Ceph user
+management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/operations/user-management/[Ceph User Management]]
 
-The keyring must be named to match your `<STORAGE_ID>`. Copying the
-keyring generally requires root privileges.
 
-If Ceph is installed locally on the PVE cluster, this is done automatically by
-'pveceph' or in the GUI.
 
 Storage Features
 ~~~~~~~~~~~~~~~~
-- 
2.30.2
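
P.S.: The TIP about a keyring with only the needed capabilities could be
illustrated with a concrete command in a follow-up. A minimal sketch, run
as a Ceph admin on the external cluster; the client name `pve` and the
pool name `vmpool` are placeholders, not part of this patch:

----
# ceph auth get-or-create client.pve mon 'profile rbd' osd 'profile rbd pool=vmpool' > /root/rbd.keyring
----

The resulting file could then be passed to `pvesm add rbd ... --keyring`,
just like the admin keyring in the example above.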