* [pve-devel] [PATCH docs] pveceph: update OSD memory considerations
From: Alwin Antreich <alwin@antreich.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH docs] pveceph: update OSD memory considerations
Date: Thu, 18 Sep 2025 18:45:49 +0200
Message-ID: <20250918164549.3018879-1-alwin@antreich.com>
Since bluestore, OSDs adhere to the osd_memory_target and the
recommended amount of memory was increased.
See: https://docs.ceph.com/en/reef/start/hardware-recommendations/#ram
Signed-off-by: Alwin Antreich <alwin@antreich.com>
---
pveceph.adoc | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index 17efa4d..a2d71e7 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -131,14 +131,14 @@ carefully planned out and monitored. In addition to the predicted memory usage
of virtual machines and containers, you must also account for having enough
memory available for Ceph to provide excellent and stable performance.
-As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
-by an OSD. While the usage might be less under normal conditions, it will use
-most during critical operations like recovery, re-balancing or backfilling.
-That means that you should avoid maxing out your available memory already on
-normal operation, but rather leave some headroom to cope with outages.
-
-The OSD service itself will use additional memory. The Ceph BlueStore backend of
-the daemon requires by default **3-5 GiB of memory** (adjustable).
+While usage may be less under normal conditions, it will consume more memory
+during critical operations, such as recovery, rebalancing, or backfilling. That
+means you should avoid maxing out your available memory already on regular
+operation, but rather leave some headroom to cope with outages.
+
+The current recommendation is to configure at least **8 GiB of memory per OSD
+daemon** for good performance. The OSD daemon requires, by default, 4 GiB of
+memory.
[[pve_ceph_recommendation_network]]
.Network
--
2.39.5
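As a rough, hypothetical illustration of the updated recommendation (the node size below is an example, not part of the patch): a host running 12 OSDs should budget about 12 x 8 GiB = 96 GiB of memory for the OSD daemons alone, in addition to the memory planned for guests, the monitor and manager daemons, and the host operating system.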
* Re: [pve-devel] [PATCH docs] pveceph: update OSD memory considerations
From: Alwin Antreich <alwin@antreich.com>
To: pve-devel@lists.proxmox.com
Subject: Re: [PATCH docs] pveceph: update OSD memory considerations
Date: Sat, 20 Sep 2025 20:21:15 +0200
Message-ID: <18C016FD-4966-40C9-8E78-F343A68D928A@antreich.com>
On 19 September 2025 14:00:18 CEST, Aaron Lauterer <a.lauterer@proxmox.com> wrote:
>thanks for the patch! see inline for comments
>
>On 2025-09-18 18:45, Alwin Antreich wrote:
>> Since bluestore, OSDs adhere to the osd_memory_target and the
>> recommended amount of memory was increased.
>>
>> See: https://docs.ceph.com/en/reef/start/hardware-recommendations/#ram
>>
>> Signed-off-by: Alwin Antreich <alwin@antreich.com>
>> ---
>> pveceph.adoc | 16 ++++++++--------
>> 1 file changed, 8 insertions(+), 8 deletions(-)
>>
>> diff --git a/pveceph.adoc b/pveceph.adoc
>> index 17efa4d..a2d71e7 100644
>> --- a/pveceph.adoc
>> +++ b/pveceph.adoc
>> @@ -131,14 +131,14 @@ carefully planned out and monitored. In addition to the predicted memory usage
>> of virtual machines and containers, you must also account for having enough
>> memory available for Ceph to provide excellent and stable performance.
>> -As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
>> -by an OSD. While the usage might be less under normal conditions, it will use
>> -most during critical operations like recovery, re-balancing or backfilling.
>> -That means that you should avoid maxing out your available memory already on
>> -normal operation, but rather leave some headroom to cope with outages.
>> -
>> -The OSD service itself will use additional memory. The Ceph BlueStore backend of
>> -the daemon requires by default **3-5 GiB of memory** (adjustable).
>> +While usage may be less under normal conditions, it will consume more memory
>> +during critical operations, such as recovery, rebalancing, or backfilling. That
>> +means you should avoid maxing out your available memory already on regular
>> +operation, but rather leave some headroom to cope with outages.
>> +
>> +The current recommendation is to configure at least **8 GiB of memory per OSD
>> +daemon** for good performance. The OSD daemon requires, by default, 4 GiB of
>> +memory.
>
>Given how the current Ceph docs phrase it [0], I am not sure here. They sound like the default osd_memory_target of 4G is okay, but that OSDs might use more in recovery situations and one should calculate with ~8G.
>
>So unless I understand that wrong, maybe we could phrase it more like the following?
>===
>The current recommendation is to calculate with at least 8 GiB of memory per OSD daemon to give it enough memory if needed. By default, the OSD daemon is set to use up to 4 GiB of memory in normal scenarios.
>===
>
>If I understand it wrong and users should change the osd_memory_target to 8 GiB, we should document how, or maybe even try to make it configurable in the GUI/API/pveceph...
I didn't want to clutter the cluster sizing text with configuration details.
The OSD daemon will adhere to the osd_memory_target. As it isn't a hard limit, the OSD may overshoot it by 10-20%, since buffers (and probably other things) aren't accounted for. Unless auto-tuning is enabled, the memory target should be adjusted to 8 GiB. The experience we have gathered also shows that 8 GiB is worth it, especially when the cluster is degraded.
See inline
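A minimal sketch of how the target could be raised with Ceph's centralized configuration (assuming a recent Ceph release; the 8 GiB value and the OSD ID are examples, not something prescribed by this patch):

    # Show the currently configured target (Ceph defaults to 4 GiB):
    ceph config get osd osd_memory_target

    # Raise the target to 8 GiB (the value is given in bytes) for all OSDs:
    ceph config set osd osd_memory_target 8589934592

    # Or override it for a single OSD only, e.g. osd.0:
    ceph config set osd.0 osd_memory_target 8589934592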
* Re: [pve-devel] [PATCH docs] pveceph: update OSD memory considerations
From: Aaron Lauterer @ 2025-09-19 12:00 UTC
To: Alwin Antreich, pve-devel
thanks for the patch! see inline for comments
On 2025-09-18 18:45, Alwin Antreich wrote:
> Since bluestore, OSDs adhere to the osd_memory_target and the
> recommended amount of memory was increased.
>
> See: https://docs.ceph.com/en/reef/start/hardware-recommendations/#ram
>
> Signed-off-by: Alwin Antreich <alwin@antreich.com>
> ---
> pveceph.adoc | 16 ++++++++--------
> 1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index 17efa4d..a2d71e7 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -131,14 +131,14 @@ carefully planned out and monitored. In addition to the predicted memory usage
> of virtual machines and containers, you must also account for having enough
> memory available for Ceph to provide excellent and stable performance.
>
> -As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
> -by an OSD. While the usage might be less under normal conditions, it will use
> -most during critical operations like recovery, re-balancing or backfilling.
> -That means that you should avoid maxing out your available memory already on
> -normal operation, but rather leave some headroom to cope with outages.
> -
> -The OSD service itself will use additional memory. The Ceph BlueStore backend of
> -the daemon requires by default **3-5 GiB of memory** (adjustable).
> +While usage may be less under normal conditions, it will consume more memory
> +during critical operations, such as recovery, rebalancing, or backfilling. That
> +means you should avoid maxing out your available memory already on regular
> +operation, but rather leave some headroom to cope with outages.
> +
> +The current recommendation is to configure at least **8 GiB of memory per OSD
> +daemon** for good performance. The OSD daemon requires, by default, 4 GiB of
> +memory.
Given how the current Ceph docs phrase it [0], I am not sure here. They
sound like the default osd_memory_target of 4G is okay, but that OSDs
might use more in recovery situations and one should calculate with ~8G.
So unless I understand that wrong, maybe we could phrase it more like
the following?
===
The current recommendation is to calculate with at least 8 GiB of memory
per OSD daemon to give it enough memory if needed. By default, the OSD
daemon is set to use up to 4 GiB of memory in normal scenarios.
===
If I understand it wrong and users should change the osd_memory_target
to 8 GiB, we should document how, or maybe even try to make it
configurable in the GUI/API/pveceph...
[0] https://docs.ceph.com/en/latest/start/hardware-recommendations/#ram
>
> [[pve_ceph_recommendation_network]]
> .Network
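For completeness, a hedged sketch of how the effective value and the actual memory usage of a running OSD could be checked on the node hosting it (osd.0 is just an example ID; the commands use the local admin socket):

    # Ask a running OSD for the memory target it is currently applying:
    ceph daemon osd.0 config get osd_memory_target

    # Show the OSD's actual memory usage, broken down by internal pools:
    ceph daemon osd.0 dump_mempools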