Date: Fri, 19 Sep 2025 14:00:18 +0200
From: Aaron Lauterer
To: Alwin Antreich, pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH docs] pveceph: update OSD memory considerations
In-Reply-To: <20250918164549.3018879-1-alwin@antreich.com>

thanks for the patch!
see inline for comments

On 2025-09-18 18:45, Alwin Antreich wrote:
> Since bluestore, OSDs adhere to the osd_memory_target and the
> recommended amount of memory was increased.
>
> See: https://docs.ceph.com/en/reef/start/hardware-recommendations/#ram
>
> Signed-off-by: Alwin Antreich
> ---
>  pveceph.adoc | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index 17efa4d..a2d71e7 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -131,14 +131,14 @@ carefully planned out and monitored. In addition to the predicted memory usage
>  of virtual machines and containers, you must also account for having enough
>  memory available for Ceph to provide excellent and stable performance.
>
> -As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
> -by an OSD. While the usage might be less under normal conditions, it will use
> -most during critical operations like recovery, re-balancing or backfilling.
> -That means that you should avoid maxing out your available memory already on
> -normal operation, but rather leave some headroom to cope with outages.
> -
> -The OSD service itself will use additional memory. The Ceph BlueStore backend of
> -the daemon requires by default **3-5 GiB of memory** (adjustable).
> +While usage may be less under normal conditions, it will consume more memory
> +during critical operations, such as recovery, rebalancing, or backfilling. That
> +means you should avoid maxing out your available memory already on regular
> +operation, but rather leave some headroom to cope with outages.
> +
> +The current recommendation is to configure at least **8 GiB of memory per OSD
> +daemon** for good performance. The OSD daemon requires, by default, 4 GiB of
> +memory.

given how the current latest Ceph docs phrase it [0], I am not sure here.
They sound like the default osd_memory_target of 4G is okay, but that they
might use more in recovery situations and one should calculate with ~8G.

So unless I understand that wrong, maybe we could phrase it more like the
following?

===
The current recommendation is to calculate with at least 8 GiB of memory per
OSD daemon to give it enough memory if needed. By default, the OSD daemon is
set to use up to 4 GiB of memory in normal scenarios.
===

If I understand it wrong and users should change the osd_memory_target to
8 GiB, we should document how (see the sketch at the end of this mail), or
maybe even try to make it configurable in the GUI/API/pveceph...

[0] https://docs.ceph.com/en/latest/start/hardware-recommendations/#ram

>
>
>  [[pve_ceph_recommendation_network]]
>  .Network
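
In case we do end up documenting it: a minimal sketch of how the limit could
be raised, assuming we simply point users at the Ceph config database rather
than ceph.conf (the 8 GiB figure below is just the example value from above,
given in bytes):

    # set the target for all OSDs cluster-wide; 8 GiB = 8589934592 bytes
    ceph config set osd osd_memory_target 8589934592

    # or only for a single OSD, e.g. osd.0
    ceph config set osd.0 osd_memory_target 8589934592

The default corresponds to 4294967296 bytes (4 GiB).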