From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	Stefan Hanreich <s.hanreich@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-manager v2] api: ceph: improve reporting of ceph OSD memory usage
Date: Fri, 1 Sep 2023 16:00:48 +0200	[thread overview]
Message-ID: <4bd17ca5-0761-4bc2-87fb-ec92f9127d8f@proxmox.com> (raw)
In-Reply-To: <20230816152116.2454321-1-s.hanreich@proxmox.com>

Am 16/08/2023 um 17:21 schrieb Stefan Hanreich:
> Currently we are using the MemoryCurrent property of the OSD service
> to determine the used memory of a Ceph OSD. This includes, among other
> things, the memory used by buffers [1]. Since BlueFS uses buffered
> I/O, this can lead to extremely high values shown in the UI.
> 
> Instead we are now reading the PSS value from the proc filesystem,
> which should more accurately reflect the amount of memory currently
> used by the Ceph OSD.
> 
> We decided on PSS over RSS, since this should give a better idea of

Who's "we"?

> used memory - particularly when using a large amount of OSDs on one
> host, since the OSDs share some of the pages.

fine for me, though I'd hint at that in the UI too, e.g., by using
"Memory (PSS)" as the label.

> 
> [1] https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
> 
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
> 
> Changes from v1:
>  * Now returns 0 instead of null in case of stopped OSDs in order to
>  preserve backwards compatibility
> 
> 
>  PVE/API2/Ceph/OSD.pm | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
> 



> +	    open (my $SMAPS, '<', "/proc/$pid/smaps_rollup")
> +		or die 'Could not open smaps_rollup for Ceph OSD';

The die is missing a trailing newline, which will result in the user
seeing a rather ugly "died at line ... in ..." suffix appended to the
message.

Please also include the actual error message in the die via $! –
otherwise such failures, especially since they are probably rather
rare, are unnecessarily hard to debug.

I would maybe reword the message a bit too; "smaps_rollup" is probably
a bit odd to read for some users ^^

nit: we normally start error messages lower case – we have no hard
style rule for that, so no hard feelings, just mentioning it as it
stuck out to me.

So maybe something like:

    or die "failed to read PSS memory-stat from process - $!\n";

Oh, and I would move that open + parse code into a private local sub,
as it only crowds the API endpoint's code and might be better off in
PVE::ProcFSTools or the like in the future (but we don't use PSS
anywhere else, so it can live in this module for now) – something like:


my sub get_proc_pss_from_pid {
    my ($pid) = @_;

    # ...
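    # e.g. (untested sketch; assumes the usual "Pss:   <value> kB" line
    # format of /proc/<pid>/smaps_rollup):
    open (my $SMAPS, '<', "/proc/$pid/smaps_rollup")
        or die "failed to read PSS memory-stat from process - $!\n";

    my $pss = 0;
    while (defined(my $line = <$SMAPS>)) {
        if ($line =~ m/^Pss:\s+(\d+)\s+kB/) {
            $pss = $1 * 1024; # smaps_rollup reports kB; convert to bytes here
            last;
        }
    }
    close($SMAPS);

    return $pss;
}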



