* [pve-devel] [PATCH docs 1/6] ceph: add anchors for use in troubleshooting section
@ 2025-02-03 14:27 Alexander Zeidler
  2025-02-03 14:27 ` [pve-devel] [PATCH docs 2/6] ceph: correct heading capitalization Alexander Zeidler
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Alexander Zeidler @ 2025-02-03 14:27 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
 pveceph.adoc | 8 ++++++++
 1 file changed, 8 insertions(+)
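
For reviewers unfamiliar with the AsciiDoc syntax: [[...]] defines a block
anchor, which other text can then target with a <<id,label>> cross-reference.
As a minimal, illustrative sketch (the wording below is an editor's example,
not part of this series), the troubleshooting section could later link back
to these anchors like so:

----
If OSD daemons are killed under memory pressure, review the
<<pve_ceph_recommendation_memory,memory recommendations>>; for a failed
disk, follow <<pve_ceph_osd_replace,Replace OSDs>>.
----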

diff --git a/pveceph.adoc b/pveceph.adoc
index da39e7f..93c2f8d 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -82,6 +82,7 @@ and vocabulary
 footnote:[Ceph glossary {cephdocs-url}/glossary].
 
 
+[[pve_ceph_recommendation]]
 Recommendations for a Healthy Ceph Cluster
 ------------------------------------------
 
@@ -95,6 +96,7 @@ NOTE: The recommendations below should be seen as a rough guidance for choosing
 hardware. Therefore, it is still essential to adapt them to your specific needs.
 You should test your setup and monitor health and performance continuously.
 
+[[pve_ceph_recommendation_cpu]]
 .CPU
 Ceph services can be classified into two categories:
 
@@ -122,6 +124,7 @@ IOPS load over 100'000 with sub millisecond latency, each OSD can use multiple
 CPU threads, e.g., four to six CPU threads per NVMe-backed OSD are likely
 needed for very high performance disks.
 
+[[pve_ceph_recommendation_memory]]
 .Memory
 Especially in a hyper-converged setup, the memory consumption needs to be
 carefully planned out and monitored. In addition to the predicted memory usage
@@ -137,6 +140,7 @@ normal operation, but rather leave some headroom to cope with outages.
 The OSD service itself will use additional memory. The Ceph BlueStore backend of
 the daemon requires **3-5 GiB of memory** by default (adjustable).
 
+[[pve_ceph_recommendation_network]]
 .Network
 We recommend a network bandwidth of at least 10 Gbps to be used
 exclusively for Ceph traffic. A meshed network setup
@@ -172,6 +176,7 @@ high-performance setups:
 * one medium bandwidth (1 Gbps) exclusively for the latency-sensitive corosync
   cluster communication.
 
+[[pve_ceph_recommendation_disk]]
 .Disks
 When planning the size of your Ceph cluster, it is important to take the
 recovery time into consideration. Especially with small clusters, recovery
@@ -197,6 +202,7 @@ You also need to balance OSD count and single OSD capacity. More capacity
 allows you to increase storage density, but it also means that a single OSD
 failure forces Ceph to recover more data at once.
 
+[[pve_ceph_recommendation_raid]]
 .Avoid RAID
 As Ceph handles data object redundancy and multiple parallel writes to disks
 (OSDs) on its own, using a RAID controller normally doesn’t improve
@@ -1018,6 +1024,7 @@ to act as standbys.
 Ceph maintenance
 ----------------
 
+[[pve_ceph_osd_replace]]
 Replace OSDs
 ~~~~~~~~~~~~
 
@@ -1131,6 +1138,7 @@ ceph osd unset noout
 You can now start up the guests. Highly available guests will change their state
 to 'started' when they power on.
 
+[[pve_ceph_mon_and_ts]]
 Ceph Monitoring and Troubleshooting
 -----------------------------------
 
-- 
2.39.5
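
Context for the last hunk: the new pve_ceph_mon_and_ts anchor lands right
after the cluster shutdown procedure, whose final step (ceph osd unset noout)
appears above as a context line. As a hedged reminder of how that flag is
paired in practice (standard Ceph commands, nothing this patch changes), the
procedure brackets the downtime roughly like:

----
# before powering nodes off: keep OSDs from being marked out
ceph osd set noout
# ...shut down, perform maintenance, boot nodes back up...
ceph osd unset noout
----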




Thread overview: 12+ messages
2025-02-03 14:27 [pve-devel] [PATCH docs 1/6] ceph: add anchors for use in troubleshooting section Alexander Zeidler
2025-02-03 14:27 ` [pve-devel] [PATCH docs 2/6] ceph: correct heading capitalization Alexander Zeidler
2025-02-03 14:27 ` [pve-devel] [PATCH docs 3/6] ceph: troubleshooting: revise and add frequently needed information Alexander Zeidler
2025-02-03 16:19   ` Max Carrara
2025-02-03 14:27 ` [pve-devel] [PATCH docs 4/6] ceph: osd: revise and expand the section "Destroy OSDs" Alexander Zeidler
2025-02-03 16:19   ` Max Carrara
2025-02-03 14:28 ` [pve-devel] [PATCH docs 5/6] ceph: maintenance: revise and expand section "Replace OSDs" Alexander Zeidler
2025-02-03 14:28 ` [pve-devel] [PATCH docs 6/6] pvecm: remove node: mention Ceph and its steps for safe removal Alexander Zeidler
2025-02-03 16:19 ` [pve-devel] [PATCH docs 1/6] ceph: add anchors for use in troubleshooting section Max Carrara
2025-02-04  9:22   ` Alexander Zeidler
2025-02-04  9:52     ` Max Carrara
2025-02-05 10:10       ` Alexander Zeidler
