From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH manager 2/5] api ceph osd: add OSD details endpoint
Date: Fri,  1 Jul 2022 16:16:39 +0200
Message-ID: <20220701141642.2743824-3-a.lauterer@proxmox.com>
In-Reply-To: <20220701141642.2743824-1-a.lauterer@proxmox.com>

Add a GET endpoint at .../ceph/osd/<osdid> that returns various
metadata about the OSD, such as:

* process ID
* memory usage
* info about the devices used (bdev/block, db, wal)
    * size
    * disks used (sdX)
    ...
* network addresses and ports used
...

Memory usage and PID are retrieved from systemd, while the rest comes
from the metadata provided by Ceph.
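
The new endpoint can then be queried like any other API path, for
example via pvesh (the node name 'pve1' and OSD ID '0' below are just
placeholders):

    pvesh get /nodes/pve1/ceph/osd/0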

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/API2/Ceph/OSD.pm | 181 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index 93433b3a..33b3fdb6 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -516,6 +516,187 @@ __PACKAGE__->register_method ({
 	return $rpcenv->fork_worker('cephcreateosd', $devs->{dev}->{name},  $authuser, $worker);
     }});
 
+my $OSD_DEV_RETURN_PROPS = {
+    dev_node => {
+	type => 'string',
+	description => 'Device node',
+    },
+    devices => {
+	type => 'string',
+	description => 'Physical disks used',
+    },
+    size => {
+	type => 'integer',
+	description => 'Size in bytes',
+    },
+    support_discard => {
+	type => 'boolean',
+	description => 'Discard support of the physical device',
+    },
+    type => {
+	type => 'string',
+	description => 'Type of device. For example, hdd or ssd',
+    },
+};
+
+__PACKAGE__->register_method ({
+    name => 'osddetails',
+    path => '{osdid}',
+    method => 'GET',
+    description => "Get OSD details",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    osdid => {
+		description => 'OSD ID',
+		type => 'integer',
+	    },
+	},
+    },
+    returns => {
+	type => 'object',
+	properties => {
+	    osd => {
+		type => 'object',
+		description => 'General information about the OSD',
+		properties => {
+		    hostname => {
+			type => 'string',
+			description => 'Name of the host containing the OSD.',
+		    },
+		    id => {
+			type => 'integer',
+			description => 'ID of the OSD.',
+		    },
+		    mem_usage => {
+			type => 'integer',
+			description => 'Memory usage of the OSD service.',
+		    },
+		    osd_data => {
+			type => 'string',
+			description => "Path to the OSD's data directory.",
+		    },
+		    osd_objectstore => {
+			type => 'string',
+			description => 'The type of object store used.',
+		    },
+		    pid => {
+			type => 'integer',
+			description => 'OSD process ID.',
+		    },
+		    version => {
+			type => 'string',
+			description => 'Ceph version of the OSD service.',
+		    },
+		    front_addr => {
+			type => 'string',
+			description => 'Address and port used to talk to clients and monitors.',
+		    },
+		    back_addr => {
+			type => 'string',
+			description => 'Address and port used to talk to other OSDs.',
+		    },
+		    hb_front_addr => {
+			type => 'string',
+			description => 'Heartbeat address and port for clients and monitors.',
+		    },
+		    hb_back_addr => {
+			type => 'string',
+			description => 'Heartbeat address and port for other OSDs.',
+		    },
+		},
+	    },
+	    bdev => {
+		type => 'object',
+		description => 'Data about the OSD block device',
+		properties => $OSD_DEV_RETURN_PROPS,
+	    },
+	    db => {
+		type => 'object',
+		description => 'Data about the DB device (optional)',
+		properties => $OSD_DEV_RETURN_PROPS,
+		optional => 1,
+	    },
+	    wal => {
+		type => 'object',
+		description => 'Data about the WAL device (optional)',
+		properties => $OSD_DEV_RETURN_PROPS,
+		optional => 1,
+	    },
+	}
+    },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_inited();
+
+	my $osdid = $param->{osdid};
+	my $rados = PVE::RADOS->new();
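+	# fetch all OSD metadata in one go via the 'osd metadata' mon command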
+	my $metadata = $rados->mon_command({ prefix => 'osd metadata', id => int($osdid) });
+
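+	# the metadata includes the hostname the OSD runs on; refuse requests for OSDs on other nodes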
+	die "OSD '${osdid}' does not exists on host '${nodename}'\n"
+	    if $nodename ne $metadata->{hostname};
+
+	my $raw = '';
+	my $pid;
+	my $memory;
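+	# query systemd for the service's main PID and current memory usage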
+	my $parser = sub { $raw .= shift };
+	my $cmd = [
+	    '/bin/systemctl',
+	    'show',
+	    "ceph-osd\@${osdid}.service",
+	    '--property=MainPID,MemoryCurrent'
+	];
+	eval { run_command($cmd, errmsg => 'systemctl show error', outfunc => $parser) };
+	die $@ if $@;
+
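+	# run_command's outfunc gets each output line with the newline stripped, so both properties end up concatenated in $raw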
+	if ($raw =~ m/^MainPID=([0-9]*)MemoryCurrent=([0-9]*|\[not set\])$/s) { #untaint
+	    $pid = $1;
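+	    # systemd reports '[not set]' if memory accounting is disabled; treat that as 0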
+	    $memory = $2 eq "[not set]" ? 0 : $2;
+	} else {
+	    die "got unexpected data from systemctl: '${raw}'\n";
+	}
+
+	my $data = {
+	    osd => {
+		hostname => $metadata->{hostname},
+		id => $metadata->{id},
+		mem_usage => int($memory),
+		osd_data => $metadata->{osd_data},
+		osd_objectstore => $metadata->{osd_objectstore},
+		pid => int($pid),
+		version => "$metadata->{ceph_version_short} ($metadata->{ceph_release})",
+		front_addr => $metadata->{front_addr},
+		back_addr => $metadata->{back_addr},
+		hb_front_addr => $metadata->{hb_front_addr},
+		hb_back_addr => $metadata->{hb_back_addr},
+	    },
+	};
+
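+	# helper to copy the per-device metadata fields into the result hash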
+	my $get_data = sub {
+	    my ($dev, $prefix) = @_;
+	    $data->{$dev} = {
+		dev_node => $metadata->{"${prefix}_${dev}_dev_node"},
+		devices => $metadata->{"${prefix}_${dev}_devices"},
+		size => int($metadata->{"${prefix}_${dev}_size"}),
+		support_discard => int($metadata->{"${prefix}_${dev}_support_discard"}),
+		type => $metadata->{"${prefix}_${dev}_type"},
+	    };
+	};
+
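+	# the block device is always present, dedicated DB/WAL devices only if configured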
+	$get_data->("bdev", "bluestore");
+	$get_data->("db", "bluefs") if $metadata->{bluefs_dedicated_db};
+	$get_data->("wal", "bluefs") if $metadata->{bluefs_dedicated_wal};
+
+	return $data;
+    }});
+
 # Check if $osdid belongs to $nodename
 # $tree ... rados osd tree (passing the tree makes it easy to test)
 sub osd_belongs_to_node {
-- 
2.30.2