From: Alwin Antreich via pve-devel <pve-devel@lists.proxmox.com>
To: pve-devel@lists.proxmox.com
Cc: Alwin Antreich <alwin@antreich.com>
Subject: [pve-devel] [PATCH docs] pveceph: update OSD memory considerations
Date: Thu, 18 Sep 2025 18:45:49 +0200
Message-ID: <mailman.145.1758214297.390.pve-devel@lists.proxmox.com>
Since BlueStore, OSDs adhere to the osd_memory_target setting, and the
recommended amount of memory has been increased.
See: https://docs.ceph.com/en/reef/start/hardware-recommendations/#ram
Signed-off-by: Alwin Antreich <alwin@antreich.com>
---
pveceph.adoc | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index 17efa4d..a2d71e7 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -131,14 +131,14 @@ carefully planned out and monitored. In addition to the predicted memory usage
of virtual machines and containers, you must also account for having enough
memory available for Ceph to provide excellent and stable performance.
-As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
-by an OSD. While the usage might be less under normal conditions, it will use
-most during critical operations like recovery, re-balancing or backfilling.
-That means that you should avoid maxing out your available memory already on
-normal operation, but rather leave some headroom to cope with outages.
-
-The OSD service itself will use additional memory. The Ceph BlueStore backend of
-the daemon requires by default **3-5 GiB of memory** (adjustable).
+While usage may be lower under normal conditions, an OSD will consume more
+memory during critical operations such as recovery, rebalancing, or
+backfilling. That means you should avoid maxing out your available memory
+during regular operation, but rather leave some headroom to cope with outages.
+
+The current recommendation is to configure at least **8 GiB of memory per OSD
+daemon** for good performance. By default, the OSD daemon targets 4 GiB of
+memory (adjustable via the osd_memory_target option).
[[pve_ceph_recommendation_network]]
.Network
--
2.39.5
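To make the recommendation above concrete, here is an illustrative sketch of the per-node memory budget it implies. The node sizes and variable names are assumptions for the example, not part of the patch; the 8 GiB figure is the per-OSD recommendation from the changed text.

```shell
# Illustrative memory budget for a hyperconverged node (numbers are examples).
PER_OSD_GIB=8          # recommended minimum per OSD daemon
NUM_OSDS=6             # example: node with six OSDs
TOTAL_RAM_GIB=128      # example: node with 128 GiB RAM

OSD_RESERVE_GIB=$((NUM_OSDS * PER_OSD_GIB))
GUEST_GIB=$((TOTAL_RAM_GIB - OSD_RESERVE_GIB))
echo "Reserve ${OSD_RESERVE_GIB} GiB for OSDs; ${GUEST_GIB} GiB left for guests"

# osd_memory_target takes a value in bytes. Raising it cluster-wide could
# look like the following (shown for reference only, do not run blindly):
#   ceph config set osd osd_memory_target $((PER_OSD_GIB * 1024 * 1024 * 1024))
```

Note that the remainder must still cover the host OS, monitors/managers, and recovery headroom, so the guest figure is an upper bound rather than a target.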
Thread overview: 3+ messages
2025-09-18 16:45 Alwin Antreich via pve-devel [this message]
[not found] <20250918164549.3018879-1-alwin@antreich.com>
2025-09-19 12:00 ` Aaron Lauterer
2025-09-20 18:21 ` Alwin Antreich via pve-devel