From: "Max Carrara" <m.carrara@proxmox.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH docs 1/6] ceph: add anchors for use in troubleshooting section
Date: Mon, 03 Feb 2025 17:19:19 +0100
Message-ID: <D7IY3VJ9GQ4Y.1PVI9Q4QY950A@proxmox.com>
In-Reply-To: <20250203142801.3-1-a.zeidler@proxmox.com>

On Mon Feb 3, 2025 at 3:27 PM CET, Alexander Zeidler wrote:
> Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
> ---

Some high-level feedback (see my comments inline here and in the other
patches):

- The writing style is IMO quite clear and straightforward, nice work!

- In patch 03, the "_disk_health_monitoring" anchor reference seems to
  break my build for some reason (see the sketch right after this list
  for the kind of reference I mean). Does this also happen on your end?
  The single-page docs ("pve-admin-guide.html") otherwise seem to build
  just fine.

- Regarding implicitly / auto-generated anchors, is it fine to break
  those in general or not? See my other comments inline here.

- There are a few tiny style things I personally would correct, but if
  you disagree with them, feel free to leave them as they are.
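
The sketch mentioned above, i.e. roughly how such an anchor and a
reference to it look in AsciiDoc. Note that the heading text, its
placement and the link text are just placeholders here, not the actual
lines from patch 03:

    // an explicit block anchor right before a heading sets that
    // section's ID
    [[_disk_health_monitoring]]
    Disk Health Monitoring
    ~~~~~~~~~~~~~~~~~~~~~~

    // elsewhere, a cross-reference to that ID; if the ID cannot be
    // resolved, the reference (and apparently the build) breaks
    See xref:_disk_health_monitoring[Disk Health Monitoring].
    // equivalent shorthand form
    See <<_disk_health_monitoring,Disk Health Monitoring>>.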

All in all this seems pretty solid; the anchor questions just need to be
clarified first, i.e. whether it's okay to break auto-generated anchors,
and what's up with the one anchor that makes my build fail. Otherwise,
pretty good!

>  pveceph.adoc | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index da39e7f..93c2f8d 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -82,6 +82,7 @@ and vocabulary
>  footnote:[Ceph glossary {cephdocs-url}/glossary].
>  
>  
> +[[pve_ceph_recommendation]]
>  Recommendations for a Healthy Ceph Cluster
>  ------------------------------------------

AsciiDoc already generates an anchor for the heading above
automatically, apparently "_recommendations_for_a_healthy_ceph_cluster".
So, there's no need to provide one explicitly here, since one already
exists; moreover, the explicit anchor replaces the auto-generated ID and
might therefore break old links that refer to the documentation.
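
To illustrate what I mean with a sketch (this is how I understand
AsciiDoc's behavior, anyway):

    // current state: AsciiDoc auto-generates
    // id="_recommendations_for_a_healthy_ceph_cluster" for this heading
    Recommendations for a Healthy Ceph Cluster
    ------------------------------------------

    // with the explicit anchor, that one takes over as the section ID,
    // so the old auto-generated ID (and any link pointing at it) no
    // longer resolves
    [[pve_ceph_recommendation]]
    Recommendations for a Healthy Ceph Cluster
    ------------------------------------------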

Though, perhaps in a separate series, you could look for all implicitly
defined anchors and set them explicitly...? Not sure if that's something
we want, however.

>  
> @@ -95,6 +96,7 @@ NOTE: The recommendations below should be seen as a rough guidance for choosing
>  hardware. Therefore, it is still essential to adapt it to your specific needs.
>  You should test your setup and monitor health and performance continuously.
>  
> +[[pve_ceph_recommendation_cpu]]
>  .CPU
>  Ceph services can be classified into two categories:
>  
> @@ -122,6 +124,7 @@ IOPS load over 100'000 with sub millisecond latency, each OSD can use multiple
>  CPU threads, e.g., four to six CPU threads utilized per NVMe backed OSD is
>  likely for very high performance disks.
>  
> +[[pve_ceph_recommendation_memory]]
>  .Memory
>  Especially in a hyper-converged setup, the memory consumption needs to be
>  carefully planned out and monitored. In addition to the predicted memory usage
> @@ -137,6 +140,7 @@ normal operation, but rather leave some headroom to cope with outages.
>  The OSD service itself will use additional memory. The Ceph BlueStore backend of
>  the daemon requires by default **3-5 GiB of memory** (adjustable).
>  
> +[[pve_ceph_recommendation_network]]
>  .Network
>  We recommend a network bandwidth of at least 10 Gbps, or more, to be used
>  exclusively for Ceph traffic. A meshed network setup
> @@ -172,6 +176,7 @@ high-performance setups:
>  * one medium bandwidth (1 Gbps) exclusive for the latency sensitive corosync
>    cluster communication.
>  
> +[[pve_ceph_recommendation_disk]]
>  .Disks
>  When planning the size of your Ceph cluster, it is important to take the
>  recovery time into consideration. Especially with small clusters, recovery
> @@ -197,6 +202,7 @@ You also need to balance OSD count and single OSD capacity. More capacity
>  allows you to increase storage density, but it also means that a single OSD
>  failure forces Ceph to recover more data at once.
>  
> +[[pve_ceph_recommendation_raid]]
>  .Avoid RAID
>  As Ceph handles data object redundancy and multiple parallel writes to disks
>  (OSDs) on its own, using a RAID controller normally doesn’t improve
> @@ -1018,6 +1024,7 @@ to act as standbys.
>  Ceph maintenance
>  ----------------
>  
> +[[pve_ceph_osd_replace]]
>  Replace OSDs
>  ~~~~~~~~~~~~

This one here is also implicitly defined already, unfortunately.

>  
> @@ -1131,6 +1138,7 @@ ceph osd unset noout
>  You can now start up the guests. Highly available guests will change their state
>  to 'started' when they power on.
>  
> +[[pve_ceph_mon_and_ts]]
>  Ceph Monitoring and Troubleshooting
>  -----------------------------------
>  

So is this one.

Actually, now I do wonder: I think it's better to define them in the
AsciiDoc code directly, but how would we do that with existing anchors?
Just use the automatically generated anchor name? Or are we fine with
breaking links? Would be nice if someone could chime in here.

(Personally, I think it's fine to break these things, but I'm happy to
be corrected if that's a no-go.)
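
To make the options concrete with a sketch: reusing the auto-generated
name as the explicit ID should keep existing links working, at the cost
of the less pretty name:

    // explicit anchor matching what AsciiDoc would generate anyway,
    // so existing deep links keep resolving
    [[_recommendations_for_a_healthy_ceph_cluster]]
    Recommendations for a Healthy Ceph Cluster
    ------------------------------------------

Whereas introducing a new, nicer ID such as [[pve_ceph_recommendation]]
means accepting that old links to the auto-generated ID break.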




Thread overview: 12+ messages
2025-02-03 14:27 Alexander Zeidler
2025-02-03 14:27 ` [pve-devel] [PATCH docs 2/6] ceph: correct heading capitalization Alexander Zeidler
2025-02-03 14:27 ` [pve-devel] [PATCH docs 3/6] ceph: troubleshooting: revise and add frequently needed information Alexander Zeidler
2025-02-03 16:19   ` Max Carrara
2025-02-03 14:27 ` [pve-devel] [PATCH docs 4/6] ceph: osd: revise and expand the section "Destroy OSDs" Alexander Zeidler
2025-02-03 16:19   ` Max Carrara
2025-02-03 14:28 ` [pve-devel] [PATCH docs 5/6] ceph: maintenance: revise and expand section "Replace OSDs" Alexander Zeidler
2025-02-03 14:28 ` [pve-devel] [PATCH docs 6/6] pvecm: remove node: mention Ceph and its steps for safe removal Alexander Zeidler
2025-02-03 16:19 ` Max Carrara [this message]
2025-02-04  9:22   ` [pve-devel] [PATCH docs 1/6] ceph: add anchors for use in troubleshooting section Alexander Zeidler
2025-02-04  9:52     ` Max Carrara
2025-02-05 10:10       ` Alexander Zeidler
