From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v2 docs 2/3] storage: rbd: cephfs: update authentication section
Date: Wed, 26 Jan 2022 11:18:43 +0100 [thread overview]
Message-ID: <20220126101844.558040-2-a.lauterer@proxmox.com> (raw)
In-Reply-To: <20220126101844.558040-1-a.lauterer@proxmox.com>
It is no longer necessary to manually place the keyring/secret file in the
correct location, as this can now be done with pvesm and the GUI/API.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes:
restructured the overall flow according to @Fabian_E's suggestions:
first the CLI way, then mentioning the GUI, before giving the background
information on where the secret/keyring is actually stored.
I also added more detailed CLI examples and changed their style.
Also rephrased a few other sentences that were rather hard to read,
especially to mention that this is done automatically in a
hyperconverged setup.
Fixed the footnotes. There should only be one footnote regarding the Ceph
user management docs now.
pve-storage-cephfs.adoc | 47 ++++++++++++++++++++++++++++-------------
pve-storage-rbd.adoc | 42 +++++++++++++++++++++++++++---------
2 files changed, 64 insertions(+), 25 deletions(-)
diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
index 4035617..88b92e6 100644
--- a/pve-storage-cephfs.adoc
+++ b/pve-storage-cephfs.adoc
@@ -71,32 +71,49 @@ disabled.
Authentication
~~~~~~~~~~~~~~
-If you use `cephx` authentication, which is enabled by default, you need to copy
-the secret from your external Ceph cluster to a Proxmox VE host.
+If you use `cephx` authentication, which is enabled by default, you need to
+provide the secret from the external Ceph cluster.
-Create the directory `/etc/pve/priv/ceph` with
+To configure the storage via the CLI, you first need to make the file
+containing the secret available. One way is to copy the file from the external
+Ceph cluster directly to one of the {pve} nodes. The following example will
+copy it to the `/root` directory of the node on which we run it:
- mkdir /etc/pve/priv/ceph
+----
+# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
+----
+
+Then use the `pvesm` CLI tool to configure the external CephFS storage. Use
+the `--keyring` parameter, which needs to be a path to the secret file that
+you copied. For example:
+
+----
+# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
+----
-Then copy the secret
+When configuring an external CephFS storage via the GUI, you can copy and
+paste the secret into the appropriate field.
- scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret
+The secret is only the key itself, as opposed to the `rbd` backend which also
+contains a `[client.userid]` section.
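+
+For example, a CephFS secret file contains nothing but the base64-encoded key
+itself (the key shown below is just a placeholder):
+
+----
+AQD3VWZmAAAAABAAexamplekeyexamplekeyexample==
+----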
-The secret must be renamed to match your `<STORAGE_ID>`. Copying the
-secret generally requires root privileges. The file must only contain the
-secret key itself, as opposed to the `rbd` backend which also contains a
-`[client.userid]` section.
+The secret will be stored at
+
+----
+/etc/pve/priv/ceph/<STORAGE_ID>.secret
+----
A secret can be received from the Ceph cluster (as Ceph admin) by issuing the
command below, where `userid` is the client ID that has been configured to
access the cluster. For further information on Ceph user management, see the
-Ceph docs footnote:[Ceph user management
-{cephdocs-url}/rados/operations/user-management/].
+Ceph docs.footnoteref:[cephusermgmt]
- ceph auth get-key client.userid > cephfs.secret
+----
+# ceph auth get-key client.userid > cephfs.secret
+----
-If Ceph is installed locally on the PVE cluster, that is, it was set up using
-`pveceph`, this is done automatically.
+If Ceph is installed locally on the {pve} cluster, this is done automatically
+when adding the storage.
Storage Features
~~~~~~~~~~~~~~~~
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index 917926d..4002fd3 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -1,3 +1,4 @@
+:fn-ceph-user-mgmt: footnote:cephusermgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]
[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
@@ -69,22 +70,43 @@ TIP: You can use the `rbd` utility to do low-level management tasks.
Authentication
~~~~~~~~~~~~~~
-If you use `cephx` authentication, you need to copy the keyfile from your
-external Ceph cluster to a Proxmox VE host.
+If you use `cephx` authentication, which is enabled by default, you need to
+provide the keyring from the external Ceph cluster.
-Create the directory `/etc/pve/priv/ceph` with
+To configure the storage via the CLI, you first need to make the file
+containing the keyring available. One way is to copy the file from the external
+Ceph cluster directly to one of the {pve} nodes. The following example will
+copy it to the `/root` directory of the node on which we run it:
- mkdir /etc/pve/priv/ceph
+----
+# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
+----
+
+Then use the `pvesm` CLI tool to configure the external RBD storage. Use the
+`--keyring` parameter, which needs to be a path to the keyring file that you
+copied. For example:
+
+----
+# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
+----
+
+When configuring an external RBD storage via the GUI, you can copy and paste
+the keyring into the appropriate field.
+
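+A keyring file typically looks like the following, where both the client name
+and the key are placeholders that depend on your setup:
+
+----
+[client.userid]
+        key = AQD3VWZmAAAAABAAexamplekeyexamplekeyexample==
+----
+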
+The keyring will be stored at
+
+----
+/etc/pve/priv/ceph/<STORAGE_ID>.keyring
+----
-Then copy the keyring
+If Ceph is installed locally on the {pve} cluster, this is done automatically
+when adding the storage.
- scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring
+TIP: Creating a keyring with only the needed capabilities is recommended when
+connecting to an external cluster. For further information on Ceph user
+management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/operations/user-management/[Ceph User Management]]
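+
+One possible way to create such a keyring on the external cluster is the
+following; the client name, pool and capabilities below are only examples and
+need to be adapted to your setup:
+
+----
+# ceph auth get-or-create client.pve mon 'profile rbd' osd 'profile rbd pool=<pool>'
+----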
-The keyring must be named to match your `<STORAGE_ID>`. Copying the
-keyring generally requires root privileges.
-If Ceph is installed locally on the PVE cluster, this is done automatically by
-'pveceph' or in the GUI.
Storage Features
~~~~~~~~~~~~~~~~
--
2.30.2