From: Fabian Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com, Aaron Lauterer <a.lauterer@proxmox.com>
Subject: Re: [pve-devel] [PATCH docs 2/3] storage: rbd: cephfs: update authentication section
Date: Mon, 24 Jan 2022 14:48:34 +0100	[thread overview]
Message-ID: <33cce4e5-9f53-cba2-6279-f1923d108221@proxmox.com>
In-Reply-To: <20211126164446.2558368-2-a.lauterer@proxmox.com>

On 26.11.21 at 17:44, Aaron Lauterer wrote:
> It is no longer necessary to manually place the keyring/secret file in
> the correct location, as this can now be done with pvesm and the GUI/API.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> 
> Since both sections share the same footnote, I tried to have them share
> it using footnote:<id>[here some text] and footnote:<id>[] to
> reference it, as explained in the asciidoc documentation [0].
> Unfortunately, I did not get it to work, most likely because they are
> both in separate files?
> I'd rather err on having the same footnote twice than miss it in one
> place.
> 
> [0] https://docs.asciidoctor.org/asciidoc/latest/macros/footnote/
> 

Maybe the idea from the "Externalizing a footnote" section, using 
document attributes, works?
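
Untested sketch of how that could look (I am not sure whether our 
asciidoc toolchain handles the footnote macro the same way asciidoctor 
does; attribute name and id are made up), with the attribute defined in 
a file that both documents include:

  :fn-ceph-user-mgmt: footnote:ceph-user-mgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]

and then, in both pve-storage-cephfs.adoc and pve-storage-rbd.adoc:

  For further information on Ceph user management, see the Ceph docs.{fn-ceph-user-mgmt}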

>   pve-storage-cephfs.adoc | 31 ++++++++++++++++++-------------
>   pve-storage-rbd.adoc    | 28 +++++++++++++++++++---------
>   2 files changed, 37 insertions(+), 22 deletions(-)
> 
> diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
> index c67f089..2437859 100644
> --- a/pve-storage-cephfs.adoc
> +++ b/pve-storage-cephfs.adoc
> @@ -71,31 +71,36 @@ disabled.
>   Authentication
>   ~~~~~~~~~~~~~~
>   
> -If you use `cephx` authentication, which is enabled by default, you need to copy
> -the secret from your external Ceph cluster to a Proxmox VE host.
> +If you use `cephx` authentication, which is enabled by default, you need to provide
> +the secret from the external Ceph cluster.
>   
> -Create the directory `/etc/pve/priv/ceph` with
> +The secret file is expected to be located at
>   
> - mkdir /etc/pve/priv/ceph
> + /etc/pve/priv/ceph/<STORAGE_ID>.secret
>   
> -Then copy the secret
> +You can copy the secret with
>   
> - scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret
> + scp <external cephserver>:/etc/ceph/cephfs.secret /local/path/to/<STORAGE_ID>.secret

IMHO this is a bit confusing. We tell the user the explicit path where 
the key should be, and then suggest copying it to some location that 
might or might not be the same as the one already mentioned. After 
reading the next paragraph it becomes clearer, but IMHO the structure 
should be "To add via CLI, do scp + pvesm. To add via GUI, do ...". 
And/or maybe make it clear that pvesm will put the keyring there?
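
For example, the CLI part could then read like this (untested sketch; 
storage ID, monitor addresses and content type are made up, and it 
assumes the new --keyring option from this series):

  scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs-external.secret
  pvesm add cephfs cephfs-external --monhost "10.1.1.20 10.1.1.21" \
      --content backup --keyring /root/cephfs-external.secret

That would also make it clear that pvesm takes care of placing the 
secret at /etc/pve/priv/ceph/<STORAGE_ID>.secret.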

>   
> -The secret must be renamed to match your `<STORAGE_ID>`. Copying the
> -secret generally requires root privileges. The file must only contain the
> -secret key itself, as opposed to the `rbd` backend which also contains a
> -`[client.userid]` section.
> +If you use the `pvesm` CLI tool to configure the external CephFS storage, use the
> +`--keyring` parameter, which needs to be a path to the secret file that you
> +copied.
> +
> +When configuring an external CephFS storage via the GUI, you can copy and paste
> +the secret into the appropriate field.
> +
> +The secret is only the key itself, as opposed to the `rbd` backend which also
> +contains a `[client.userid]` section.
>   
>   A secret can be retrieved from the Ceph cluster (as Ceph admin) by issuing the
>   command below, where `userid` is the client ID that has been configured to
>   access the cluster. For further information on Ceph user management, see the
> -Ceph docs footnote:[Ceph user management
> -{cephdocs-url}/rados/operations/user-management/].
> +Ceph docs.footnote:[Ceph user management
> +{cephdocs-url}/rados/operations/user-management/]
>   
>    ceph auth get-key client.userid > cephfs.secret
>   
> -If Ceph is installed locally on the PVE cluster, that is, it was set up using
> +If Ceph is installed locally on the {pve} cluster, that is, it was set up using
>   `pveceph`, this is done automatically.
>   
>   Storage Features
> diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
> index bbc80e2..1f14b7c 100644
> --- a/pve-storage-rbd.adoc
> +++ b/pve-storage-rbd.adoc
> @@ -69,23 +69,33 @@ TIP: You can use the `rbd` utility to do low-level management tasks.
>   Authentication
>   ~~~~~~~~~~~~~~
>   
> -If you use `cephx` authentication, you need to copy the keyfile from your
> -external Ceph cluster to a Proxmox VE host.
> +If you use `cephx` authentication, which is enabled by default, you need to
> +provide the keyring from the external Ceph cluster.
>   
> -Create the directory `/etc/pve/priv/ceph` with
> +The keyring file is expected to be at

Nit: "to be located at" like above sounds better

>   
> - mkdir /etc/pve/priv/ceph
> + /etc/pve/priv/ceph/<STORAGE_ID>.keyring
>   
> -Then copy the keyring
> +You can copy the keyring with
>   
> - scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring
> + scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /local/path/to/<STORAGE_ID>.keyring

Same as above.

>   
> -The keyring must be named to match your `<STORAGE_ID>`. Copying the
> -keyring generally requires root privileges.
> +If you use the `pvesm` CLI tool to configure the external RBD storage, use the
> +`--keyring` parameter, which needs to be a path to the keyring file that you
> +copied.
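
Maybe spell out the full CLI flow here too, e.g. (untested sketch; 
storage ID, pool and monitor addresses are made up, again assuming the 
new --keyring option from this series):

  scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd-external.keyring
  pvesm add rbd rbd-external --monhost "10.1.1.20 10.1.1.21" \
      --pool rbd --content images --keyring /root/rbd-external.keyring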
>   
> -If Ceph is installed locally on the PVE cluster, this is done automatically by
> +When configuring an external RBD storage via the GUI, you can copy and paste the
> +keyring into the appropriate field.
> +
> +If Ceph is installed locally on the {pve} cluster, this is done automatically by
>   'pveceph' or in the GUI.
>   
> +TIP: Creating a keyring with only the needed capabilities is recommended when
> +connecting to an external cluster. For further information on Ceph user
> +management, see the Ceph docs.footnote:[Ceph user management
> +{cephdocs-url}/rados/operations/user-management/]
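
Maybe add an example for that as well, e.g. (sketch; client ID and pool 
name are made up, the caps follow the rbd profile from the Ceph user 
management docs):

  ceph auth get-or-create client.pve mon 'profile rbd' \
      osd 'profile rbd pool=<pool>' > /root/rbd-external.keyring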
> +
> +
>   Storage Features
>   ~~~~~~~~~~~~~~~~
>   



