public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager v3] api: ceph: improve reporting of ceph OSD memory usage
@ 2023-09-04  9:18 Stefan Hanreich
  2023-09-04 13:14 ` [pve-devel] applied: " Thomas Lamprecht
  0 siblings, 1 reply; 3+ messages in thread
From: Stefan Hanreich @ 2023-09-04  9:18 UTC (permalink / raw)
  To: pve-devel

Currently we are using the MemoryCurrent property of the OSD service
to determine the used memory of a Ceph OSD. This includes, among other
things, the memory used by buffers [1]. Since BlueFS uses buffered
I/O, this can lead to extremely high values shown in the UI.
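For reference, the old code path boiled down to querying systemd and parsing the output. A shell sketch against sample output (the OSD id and values are placeholders; MemoryCurrent comes from the unit's cgroup, so it includes page-cache pages):

```shell
# The old implementation effectively ran (hypothetical OSD id 0):
#   systemctl show "ceph-osd@0.service" --property=MainPID,MemoryCurrent
# and parsed lines like the sample below. For stopped units systemd
# reports the literal string "[not set]", which was mapped to 0.
sample='MainPID=1234
MemoryCurrent=[not set]'

memory=$(printf '%s\n' "$sample" | awk -F= '
    $1 == "MemoryCurrent" {
        if ($2 == "[not set]") print 0; else print $2
    }')
echo "memory=$memory"   # prints "memory=0" for the sample above
```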

Instead, we now read the PSS value from the proc filesystem, which
should more accurately reflect the amount of memory currently used by
the Ceph OSD.

Aaron and I decided on PSS over RSS, since it should give a better
idea of the memory actually in use, particularly when running a large
number of OSDs on one host, where the OSDs share some of their pages.
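The proportional accounting can be illustrated with made-up numbers: RSS counts a shared page once per process that maps it, while PSS charges each process only its 1/N share:

```shell
# Hypothetical figures: each OSD has 200 MiB of private memory, and
# 300 MiB of pages are shared equally between 3 OSD processes.
private_kb=$((200 * 1024))
shared_kb=$((300 * 1024))
nprocs=3

rss_kb=$((private_kb + shared_kb))           # shared pages counted in full
pss_kb=$((private_kb + shared_kb / nprocs))  # shared pages split by sharers

echo "RSS=${rss_kb} kB PSS=${pss_kb} kB"
# RSS=512000 kB PSS=307200 kB
```

Summed over the three processes, PSS adds up to exactly private*3 plus the shared 300 MiB, whereas summing RSS triple-counts the shared pages.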

[1] https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
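The parsing the patch implements in Perl can be sketched in shell against a trimmed sample of /proc/<pid>/smaps_rollup (real content requires a running OSD, so the values below are made up):

```shell
# /proc/<pid>/smaps_rollup sums the per-mapping smaps entries; the patch
# reads the cumulative "Pss:" line and converts kB to bytes ($1 * 1024).
sample='Rss:              524288 kB
Pss:              262144 kB
Shared_Clean:     131072 kB'

pss_bytes=$(printf '%s\n' "$sample" | awk '/^Pss:/ { print $2 * 1024; exit }')
echo "pss_bytes=$pss_bytes"   # 262144 kB -> 268435456 bytes
```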

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Tested-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Changes from v2:
* closing the file handle after using it
* improved error message when failing to open proc file
* add hint in the UI that we are using PSS
* mentioned Aaron's involvement in commit message

Changes from v1:
* now returns 0 instead of null in case of stopped OSDs in order to
  preserve backwards compatibility

 PVE/API2/Ceph/OSD.pm            | 21 ++++++++++++++++-----
 www/manager6/ceph/OSDDetails.js |  2 +-
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index ded359904..389802971 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -687,13 +687,10 @@ __PACKAGE__->register_method ({

 	my $raw = '';
 	my $pid;
-	my $memory;
 	my $parser = sub {
 	    my $line = shift;
 	    if ($line =~ m/^MainPID=([0-9]*)$/) {
 		$pid = $1;
-	    } elsif ($line =~ m/^MemoryCurrent=([0-9]*|\[not set\])$/) {
-		$memory = $1 eq "[not set]" ? 0 : $1;
 	    }
 	};

@@ -702,12 +699,26 @@ __PACKAGE__->register_method ({
 	    'show',
 	    "ceph-osd\@${osdid}.service",
 	    '--property',
-	    'MainPID,MemoryCurrent',
+	    'MainPID',
 	];
 	run_command($cmd, errmsg => 'fetching OSD PID and memory usage failed', outfunc => $parser);

 	$pid = defined($pid) ? int($pid) : undef;
-	$memory = defined($memory) ? int($memory) : undef;
+
+	my $memory = 0;
+	if ($pid && $pid > 0) {
+	    open (my $SMAPS, '<', "/proc/$pid/smaps_rollup")
+		or die "failed to read PSS memory-stat from process - $!\n";
+
+	    while (my $line = <$SMAPS>) {
+		if ($line =~ m/^Pss:\s+([0-9]+) kB$/) {
+		    $memory = $1 * 1024;
+		    last;
+		}
+	    }
+
+	    close $SMAPS;
+	}

 	my $data = {
 	    osd => {
diff --git a/www/manager6/ceph/OSDDetails.js b/www/manager6/ceph/OSDDetails.js
index f0765d4fe..3b1c1d9c0 100644
--- a/www/manager6/ceph/OSDDetails.js
+++ b/www/manager6/ceph/OSDDetails.js
@@ -148,7 +148,7 @@ Ext.define('PVE.CephOsdDetails', {
 				{
 				    xtype: 'text',
 				    name: 'mem_usage',
-				    text: gettext('Memory usage'),
+				    text: gettext('Memory usage (PSS)'),
 				    renderer: Proxmox.Utils.render_size,
 				},
 				{
--
2.39.2




