From: Alexander Zeidler <a.zeidler@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section
Date: Wed, 5 Feb 2025 11:08:45 +0100
Message-ID: <20250205100850.3-1-a.zeidler@proxmox.com>
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
v2:
* add two missing anchors so that they can be referenced via xref
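
As a usage sketch (not part of this patch): once these anchors exist,
other sections of the docs can point at them with standard AsciiDoc
cross-references, e.g. for the new "Replace OSDs" anchor:

  // macro form
  xref:pve_ceph_osd_replace[Replace OSDs]
  // shorthand form
  <<pve_ceph_osd_replace,Replace OSDs>>
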
 pve-disk-health-monitoring.adoc | 1 +
 pveceph.adoc                    | 8 ++++++++
 pvecm.adoc                      | 1 +
 3 files changed, 10 insertions(+)
diff --git a/pve-disk-health-monitoring.adoc b/pve-disk-health-monitoring.adoc
index 8ea9d5f..0109860 100644
--- a/pve-disk-health-monitoring.adoc
+++ b/pve-disk-health-monitoring.adoc
@@ -1,3 +1,4 @@
+[[disk_health_monitoring]]
Disk Health Monitoring
----------------------
ifdef::wiki[]
diff --git a/pveceph.adoc b/pveceph.adoc
index da39e7f..93c2f8d 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -82,6 +82,7 @@ and vocabulary
footnote:[Ceph glossary {cephdocs-url}/glossary].
+[[pve_ceph_recommendation]]
Recommendations for a Healthy Ceph Cluster
------------------------------------------
@@ -95,6 +96,7 @@ NOTE: The recommendations below should be seen as a rough guidance for choosing
hardware. Therefore, it is still essential to adapt it to your specific needs.
You should test your setup and monitor health and performance continuously.
+[[pve_ceph_recommendation_cpu]]
.CPU
Ceph services can be classified into two categories:
@@ -122,6 +124,7 @@ IOPS load over 100'000 with sub millisecond latency, each OSD can use multiple
CPU threads, e.g., four to six CPU threads utilized per NVMe backed OSD is
likely for very high performance disks.
+[[pve_ceph_recommendation_memory]]
.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully planned out and monitored. In addition to the predicted memory usage
@@ -137,6 +140,7 @@ normal operation, but rather leave some headroom to cope with outages.
The OSD service itself will use additional memory. The Ceph BlueStore backend of
the daemon requires by default **3-5 GiB of memory** (adjustable).
+[[pve_ceph_recommendation_network]]
.Network
We recommend a network bandwidth of at least 10 Gbps, or more, to be used
exclusively for Ceph traffic. A meshed network setup
@@ -172,6 +176,7 @@ high-performance setups:
* one medium bandwidth (1 Gbps) exclusive for the latency sensitive corosync
cluster communication.
+[[pve_ceph_recommendation_disk]]
.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
@@ -197,6 +202,7 @@ You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.
+[[pve_ceph_recommendation_raid]]
.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn’t improve
@@ -1018,6 +1024,7 @@ to act as standbys.
Ceph maintenance
----------------
+[[pve_ceph_osd_replace]]
Replace OSDs
~~~~~~~~~~~~
@@ -1131,6 +1138,7 @@ ceph osd unset noout
You can now start up the guests. Highly available guests will change their state
to 'started' when they power on.
+[[pve_ceph_mon_and_ts]]
Ceph Monitoring and Troubleshooting
-----------------------------------
diff --git a/pvecm.adoc b/pvecm.adoc
index 15dda4e..cffea6d 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -506,6 +506,7 @@ if it loses quorum.
NOTE: {pve} assigns a single vote to each node by default.
+[[pvecm_cluster_network]]
Cluster Network
---------------
--
2.39.5