* [pve-devel] [PATCH manager v4 1/3] api ceph osd: add OSD index, metadata and lv-info
From: Aaron Lauterer @ 2022-12-06 15:47 UTC
To: pve-devel
To get more details for a single OSD, we add two new endpoints:
* nodes/{node}/ceph/osd/{osdid}/metadata
* nodes/{node}/ceph/osd/{osdid}/lv-info
The {osdid} endpoint itself gets a new GET handler to return the index.
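For illustration, the new endpoints can then be queried via pvesh, e.g.
(node name and OSD ID are placeholders):

    # pvesh get /nodes/<node>/ceph/osd/0/metadata --output-format json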
The metadata endpoint provides various metadata regarding the OSD, such as:
* process id
* memory usage
* info about devices used (bdev/block, db, wal)
  * size
  * disks used (sdX)
  ...
* network addresses and ports used
...
Memory usage and PID are retrieved from systemd while the rest can be
retrieved from the metadata provided by Ceph.
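The systemd query used for that looks like this (OSD 0 as a stand-in, the
values shown are illustrative):

    # systemctl show ceph-osd@0.service --property MainPID,MemoryCurrent
    MainPID=1234
    MemoryCurrent=2379145216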
The second one (lv-info) returns the following infos for a logical
volume:
* creation time
* lv name
* lv path
* lv size
* lv uuid
* vg name
Possible volumes are:
* block (default value if not provided)
* db
* wal
'ceph-volume' is used to gather the infos, except for the creation time
of the LV which is retrieved via 'lvs'.
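Roughly, the calls made are (the LV path is a placeholder):

    # ceph-volume lvm list --format json
    # lvs <lv_path> --reportformat json -o lv_time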
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since
v3:
- verify definedness of $pid and $memory after run_command.
Also handle the cast to int there and not in the assignment of
the return values. This way we get a `null` value returned in case we
never got any value.
v2:
- rephrased error msgs on run_command
- reworked systemctl show call and parsing of the results
- expanded error msg if no LV info is found to mention that it could be
because the OSD is a bit older. This will hopefully reduce potential
concerns if users encounter it
- return array of devices instead of optional bdev, db, and wal
- add 'device' to the devices metadata (block, db, wal); this used to be done
in the UI
v1:
- squashed all API commits into one
- moved all new API endpoints into sub endpoints to {osdid}
- {osdid} itself returns the necessary index
- incorporated other code improvements
PVE/API2/Ceph/OSD.pm | 324 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 324 insertions(+)
diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index 93433b3a..8fb2fb78 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -5,6 +5,7 @@ use warnings;
use Cwd qw(abs_path);
use IO::File;
+use JSON;
use UUID;
use PVE::Ceph::Tools;
@@ -516,6 +517,329 @@ __PACKAGE__->register_method ({
return $rpcenv->fork_worker('cephcreateosd', $devs->{dev}->{name}, $authuser, $worker);
}});
+my $OSD_DEV_RETURN_PROPS = {
+ device => {
+ type => 'string',
+ enum => ['block', 'db', 'wal'],
+ description => 'Kind of OSD device',
+ },
+ dev_node => {
+ type => 'string',
+ description => 'Device node',
+ },
+ physical_device => {
+ type => 'string',
+ description => 'Physical disks used',
+ },
+ size => {
+ type => 'integer',
+ description => 'Size in bytes',
+ },
+ support_discard => {
+ type => 'boolean',
+ description => 'Discard support of the physical device',
+ },
+ type => {
+ type => 'string',
+ description => 'Type of device. For example, hdd or ssd',
+ },
+};
+
+__PACKAGE__->register_method ({
+ name => 'osdindex',
+ path => '{osdid}',
+ method => 'GET',
+ permissions => { user => 'all' },
+ description => "OSD index.",
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ osdid => {
+ description => 'OSD ID',
+ type => 'integer',
+ },
+ },
+ },
+ returns => {
+ type => 'array',
+ items => {
+ type => "object",
+ properties => {},
+ },
+ links => [ { rel => 'child', href => "{name}" } ],
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $result = [
+ { name => 'metadata' },
+ { name => 'lv-info' },
+ ];
+
+ return $result;
+ }});
+
+__PACKAGE__->register_method ({
+ name => 'osddetails',
+ path => '{osdid}/metadata',
+ method => 'GET',
+ description => "Get OSD details",
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', [ 'Sys.Audit' ], any => 1],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ osdid => {
+ description => 'OSD ID',
+ type => 'integer',
+ },
+ },
+ },
+ returns => {
+ type => 'object',
+ properties => {
+ osd => {
+ type => 'object',
+ description => 'General information about the OSD',
+ properties => {
+ hostname => {
+ type => 'string',
+ description => 'Name of the host containing the OSD.',
+ },
+ id => {
+ type => 'integer',
+ description => 'ID of the OSD.',
+ },
+ mem_usage => {
+ type => 'integer',
+ description => 'Memory usage of the OSD service.',
+ },
+ osd_data => {
+ type => 'string',
+ description => "Path to the OSD's data directory.",
+ },
+ osd_objectstore => {
+ type => 'string',
+ description => 'The type of object store used.',
+ },
+ pid => {
+ type => 'integer',
+ description => 'OSD process ID.',
+ },
+ version => {
+ type => 'string',
+ description => 'Ceph version of the OSD service.',
+ },
+ front_addr => {
+ type => 'string',
+ description => 'Address and port used to talk to clients and monitors.',
+ },
+ back_addr => {
+ type => 'string',
+ description => 'Address and port used to talk to other OSDs.',
+ },
+ hb_front_addr => {
+ type => 'string',
+ description => 'Heartbeat address and port for clients and monitors.',
+ },
+ hb_back_addr => {
+ type => 'string',
+ description => 'Heartbeat address and port for other OSDs.',
+ },
+ },
+ },
+ devices => {
+ type => 'array',
+ description => 'Array containing data about devices',
+ items => {
+ type => "object",
+ properties => $OSD_DEV_RETURN_PROPS,
+ },
+ }
+ }
+ },
+ code => sub {
+ my ($param) = @_;
+
+ PVE::Ceph::Tools::check_ceph_inited();
+
+ my $osdid = $param->{osdid};
+ my $rados = PVE::RADOS->new();
+ my $metadata = $rados->mon_command({ prefix => 'osd metadata', id => int($osdid) });
+
+ die "OSD '${osdid}' does not exists on host '${nodename}'\n"
+ if $nodename ne $metadata->{hostname};
+
+ my $raw = '';
+ my $pid;
+ my $memory;
+ my $parser = sub {
+ my $line = shift;
+ if ($line =~ m/^MainPID=([0-9]*)$/) {
+ $pid = $1;
+ } elsif ($line =~ m/^MemoryCurrent=([0-9]*|\[not set\])$/) {
+ $memory = $1 eq "[not set]" ? 0 : $1;
+ }
+ };
+
+ my $cmd = [
+ '/bin/systemctl',
+ 'show',
+ "ceph-osd\@${osdid}.service",
+ '--property',
+ 'MainPID,MemoryCurrent',
+ ];
+ run_command($cmd, errmsg => 'fetching OSD PID and memory usage failed', outfunc => $parser);
+
+ $pid = defined($pid) ? int($pid) : undef;
+ $memory = defined($memory) ? int($memory) : undef;
+
+ my $data = {
+ osd => {
+ hostname => $metadata->{hostname},
+ id => $metadata->{id},
+ mem_usage => $memory,
+ osd_data => $metadata->{osd_data},
+ osd_objectstore => $metadata->{osd_objectstore},
+ pid => $pid,
+ version => "$metadata->{ceph_version_short} ($metadata->{ceph_release})",
+ front_addr => $metadata->{front_addr},
+ back_addr => $metadata->{back_addr},
+ hb_front_addr => $metadata->{hb_front_addr},
+ hb_back_addr => $metadata->{hb_back_addr},
+ },
+ };
+
+ $data->{devices} = [];
+
+ my $get_data = sub {
+ my ($dev, $prefix, $device) = @_;
+ push (
+ @{$data->{devices}},
+ {
+ dev_node => $metadata->{"${prefix}_${dev}_dev_node"},
+ physical_device => $metadata->{"${prefix}_${dev}_devices"},
+ size => int($metadata->{"${prefix}_${dev}_size"}),
+ support_discard => int($metadata->{"${prefix}_${dev}_support_discard"}),
+ type => $metadata->{"${prefix}_${dev}_type"},
+ device => $device,
+ }
+ );
+ };
+
+ $get_data->("bdev", "bluestore", "block");
+ $get_data->("db", "bluefs", "db") if $metadata->{bluefs_dedicated_db};
+ $get_data->("wal", "bluefs", "wal") if $metadata->{bluefs_dedicated_wal};
+
+ return $data;
+ }});
+
+__PACKAGE__->register_method ({
+ name => 'osdvolume',
+ path => '{osdid}/lv-info',
+ method => 'GET',
+ description => "Get OSD volume details",
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', [ 'Sys.Audit' ], any => 1],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ osdid => {
+ description => 'OSD ID',
+ type => 'integer',
+ },
+ type => {
+ description => 'OSD device type',
+ type => 'string',
+ enum => ['block', 'db', 'wal'],
+ default => 'block',
+ optional => 1,
+ },
+ },
+ },
+ returns => {
+ type => 'object',
+ properties => {
+ creation_time => {
+ type => 'string',
+ description => "Creation time as reported by `lvs`.",
+ },
+ lv_name => {
+ type => 'string',
+ description => 'Name of the logical volume (LV).',
+ },
+ lv_path => {
+ type => 'string',
+ description => 'Path to the logical volume (LV).',
+ },
+ lv_size => {
+ type => 'integer',
+ description => 'Size of the logical volume (LV).',
+ },
+ lv_uuid => {
+ type => 'string',
+ description => 'UUID of the logical volume (LV).',
+ },
+ vg_name => {
+ type => 'string',
+ description => 'Name of the volume group (VG).',
+ },
+ },
+ },
+ code => sub {
+ my ($param) = @_;
+
+ PVE::Ceph::Tools::check_ceph_inited();
+
+ my $osdid = $param->{osdid};
+ my $type = $param->{type} // 'block';
+
+ my $raw = '';
+ my $parser = sub { $raw .= shift };
+ my $cmd = ['/usr/sbin/ceph-volume', 'lvm', 'list', '--format', 'json'];
+ run_command($cmd, errmsg => 'listing Ceph LVM volumes failed', outfunc => $parser);
+
+ my $result;
+ if ($raw =~ m/^(\{.*\})$/s) { #untaint
+ $result = JSON::decode_json($1);
+ } else {
+ die "got unexpected data from ceph-volume: '${raw}'\n";
+ }
+ if (!$result->{$osdid}) {
+ die "OSD '${osdid}' not found in 'ceph-volume lvm list' on node '${nodename}'.\n"
+ ."Maybe it was created before LVM became the default?\n";
+ }
+
+ my $lv_data = { map { $_->{type} => $_ } @{$result->{$osdid}} };
+ my $volume = $lv_data->{$type} || die "volume type '${type}' not found for OSD ${osdid}\n";
+
+ $raw = '';
+ $cmd = ['/sbin/lvs', $volume->{lv_path}, '--reportformat', 'json', '-o', 'lv_time'];
+ run_command($cmd, errmsg => 'listing logical volumes failed', outfunc => $parser);
+
+ if ($raw =~ m/(\{.*\})$/s) { #untaint, lvs has whitespace at beginning
+ $result = JSON::decode_json($1);
+ } else {
+ die "got unexpected data from lvs: '${raw}'\n";
+ }
+
+ my $data = { map { $_ => $volume->{$_} } qw(lv_name lv_path lv_uuid vg_name) };
+ $data->{lv_size} = int($volume->{lv_size});
+
+ $data->{creation_time} = @{$result->{report}}[0]->{lv}[0]->{lv_time};
+
+ return $data;
+ }});
+
# Check if $osdid belongs to $nodename
# $tree ... rados osd tree (passing the tree makes it easy to test)
sub osd_belongs_to_node {
--
2.30.2
* [pve-devel] [PATCH manager v4 2/3] ui utils: add renderer for ceph osd addresses
From: Aaron Lauterer @ 2022-12-06 15:47 UTC
To: pve-devel
Render the OSD listening addresses a bit nicer and show them one per line.
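For example, an address value like

    [v2:10.10.10.1:6802/2008,v1:10.10.10.1:6803/2008]

gets rendered as (dropping the nonce after the '/'):

    v2: 10.10.10.1:6802
    v1: 10.10.10.1:6803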
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since
v3:
- rebased
v2:
- improve and simplify the first preparation steps
- if regex matching fails, show the raw value
www/manager6/Utils.js | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 8c118fa2..1b245fdf 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1278,6 +1278,21 @@ Ext.define('PVE.Utils', {
return Ext.htmlEncode(first + " " + last);
},
+ // expecting the following format:
+ // [v2:10.10.10.1:6802/2008,v1:10.10.10.1:6803/2008]
+ render_ceph_osd_addr: function(value) {
+ value = value.trim();
+ if (value.startsWith('[') && value.endsWith(']')) {
+ value = value.slice(1, -1); // remove []
+ }
+ value = value.replaceAll(',', '\n'); // split IPs in lines
+ let retVal = '';
+ for (const i of value.matchAll(/^(v[0-9]):(.*):([0-9]*)\/([0-9]*)$/gm)) {
+ retVal += `${i[1]}: ${i[2]}:${i[3]}<br>`;
+ }
+ return retVal.length < 1 ? value : retVal;
+ },
+
windowHostname: function() {
return window.location.hostname.replace(Proxmox.Utils.IP6_bracket_match,
function(m, addr, offset, original) { return addr; });
--
2.30.2
* [pve-devel] [PATCH manager v4 3/3] ui: osd: add details window
From: Aaron Lauterer @ 2022-12-06 15:47 UTC
To: pve-devel
This new window provides more details about an OSD, such as:
* PID
* Memory usage
* various metadata that could be of interest
* list of physical disks used for the main disk, db and wal with
additional infos about the volumes for each
A new 'Details' button is added to the OSD overview and a double click
on an OSD will also open this new window.
The component defines the items in the initComponent instead of
following a fully declarative approach. This is because we need to pass
the same store to multiple Proxmox.ObjectGrids.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since
v3:
- rebased
v2:
- mask tab panel and content on loading, not the complete view/window
- remove building array for devices grid, happens in the API now
- rename 'Name' column to 'Device' in the devices listing
- rename 'Physical Devices' column to 'Physical Device' (will usually be
only one)
- define 'enableTextSelection' in the components directly
v1:
- adapting API urls
- renaming me.url to me.baseUrl
www/manager6/Makefile | 1 +
www/manager6/ceph/OSD.js | 26 +++
www/manager6/ceph/OSDDetails.js | 280 ++++++++++++++++++++++++++++++++
3 files changed, 307 insertions(+)
create mode 100644 www/manager6/ceph/OSDDetails.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 9786337b..0ab34384 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -185,6 +185,7 @@ JSSRC= \
ceph/Log.js \
ceph/Monitor.js \
ceph/OSD.js \
+ ceph/OSDDetails.js \
ceph/Pool.js \
ceph/ServiceList.js \
ceph/Services.js \
diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index 6f7e2159..8770b96a 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -583,6 +583,20 @@ Ext.define('PVE.node.CephOsdTree', {
}
},
+ run_details: function(view, rec) {
+ if (rec.data.host && rec.data.type === 'osd' && rec.data.id >= 0) {
+ this.details();
+ }
+ },
+
+ details: function() {
+ let vm = this.getViewModel();
+ Ext.create('PVE.CephOsdDetails', {
+ nodename: vm.get('osdhost'),
+ osdid: vm.get('osdid'),
+ }).show();
+ },
+
set_selection_status: function(tp, selection) {
if (selection.length < 1) {
return;
@@ -695,6 +709,9 @@ Ext.define('PVE.node.CephOsdTree', {
stateId: 'grid-ceph-osd',
rootVisible: false,
useArrows: true,
+ listeners: {
+ itemdblclick: 'run_details',
+ },
columns: [
{
@@ -835,6 +852,15 @@ Ext.define('PVE.node.CephOsdTree', {
'</tpl>',
],
},
+ {
+ text: gettext('Details'),
+ iconCls: 'fa fa-info-circle',
+ disabled: true,
+ bind: {
+ disabled: '{!isOsd}',
+ },
+ handler: 'details',
+ },
{
text: gettext('Start'),
iconCls: 'fa fa-play',
diff --git a/www/manager6/ceph/OSDDetails.js b/www/manager6/ceph/OSDDetails.js
new file mode 100644
index 00000000..24af8f15
--- /dev/null
+++ b/www/manager6/ceph/OSDDetails.js
@@ -0,0 +1,280 @@
+Ext.define('pve-osd-details-devices', {
+ extend: 'Ext.data.Model',
+ fields: ['device', 'type', 'physical_device', 'size', 'support_discard', 'dev_node'],
+ idProperty: 'device',
+});
+
+Ext.define('PVE.CephOsdDetails', {
+ extend: 'Ext.window.Window',
+ alias: ['widget.pveCephOsdDetails'],
+
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ cbindData: function() {
+ let me = this;
+ me.baseUrl = `/nodes/${me.nodename}/ceph/osd/${me.osdid}`;
+ return {
+ title: `${gettext('Details')}: OSD ${me.osdid}`,
+ };
+ },
+
+ viewModel: {
+ data: {
+ device: '',
+ },
+ },
+
+ modal: true,
+ width: 650,
+ minHeight: 250,
+ resizable: true,
+ cbind: {
+ title: '{title}',
+ },
+
+ layout: {
+ type: 'vbox',
+ align: 'stretch',
+ },
+ defaults: {
+ layout: 'fit',
+ border: false,
+ },
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ reload: function() {
+ let view = this.getView();
+
+ Proxmox.Utils.API2Request({
+ url: `${view.baseUrl}/metadata`,
+ waitMsgTarget: view.lookup('detailsTabs'),
+ method: 'GET',
+ failure: function(response, opts) {
+ Proxmox.Utils.setErrorMask(view.lookup('detailsTabs'), response.htmlStatus);
+ },
+ success: function(response, opts) {
+ let d = response.result.data;
+ let osdData = Object.keys(d.osd).sort().map(x => ({ key: x, value: d.osd[x] }));
+ view.osdStore.loadData(osdData);
+ let devices = view.lookup('devices');
+ let deviceStore = devices.getStore();
+ deviceStore.loadData(d.devices);
+
+ view.lookup('osdGeneral').rstore.fireEvent('load', view.osdStore, osdData, true);
+ view.lookup('osdNetwork').rstore.fireEvent('load', view.osdStore, osdData, true);
+
+ // select 'block' device automatically on first load
+ if (devices.getSelection().length === 0) {
+ devices.setSelection(deviceStore.findRecord('device', 'block'));
+ }
+ },
+ });
+ },
+
+ showDevInfo: function(grid, selected) {
+ let view = this.getView();
+ if (selected[0]) {
+ let device = selected[0].data.device;
+ this.getViewModel().set('device', device);
+
+ let detailStore = view.lookup('volumeDetails');
+ detailStore.rstore.getProxy().setUrl(`api2/json${view.baseUrl}/lv-info`);
+ detailStore.rstore.getProxy().setExtraParams({ 'type': device });
+ detailStore.setLoading();
+ detailStore.rstore.load({ callback: () => detailStore.setLoading(false) });
+ }
+ },
+
+ init: function() {
+ this.reload();
+ },
+
+ control: {
+ 'grid[reference=devices]': {
+ selectionchange: 'showDevInfo',
+ },
+ },
+ },
+ tbar: [
+ {
+ text: gettext('Reload'),
+ iconCls: 'fa fa-refresh',
+ handler: 'reload',
+ },
+ ],
+ initComponent: function() {
+ let me = this;
+
+ me.osdStore = Ext.create('Proxmox.data.ObjectStore');
+
+ Ext.applyIf(me, {
+ items: [
+ {
+ xtype: 'tabpanel',
+ reference: 'detailsTabs',
+ items: [
+ {
+ xtype: 'proxmoxObjectGrid',
+ reference: 'osdGeneral',
+ tooltip: gettext('Various information about the OSD'),
+ rstore: me.osdStore,
+ title: gettext('General'),
+ viewConfig: {
+ enableTextSelection: true,
+ },
+ gridRows: [
+ {
+ xtype: 'text',
+ name: 'version',
+ text: gettext('Version'),
+ },
+ {
+ xtype: 'text',
+ name: 'hostname',
+ text: gettext('Hostname'),
+ },
+ {
+ xtype: 'text',
+ name: 'osd_data',
+ text: gettext('OSD data path'),
+ },
+ {
+ xtype: 'text',
+ name: 'osd_objectstore',
+ text: gettext('OSD object store'),
+ },
+ {
+ xtype: 'text',
+ name: 'mem_usage',
+ text: gettext('Memory usage'),
+ renderer: Proxmox.Utils.render_size,
+ },
+ {
+ xtype: 'text',
+ name: 'pid',
+ text: `${gettext('Process ID')} (PID)`,
+ },
+ ],
+ },
+ {
+ xtype: 'proxmoxObjectGrid',
+ reference: 'osdNetwork',
+ tooltip: gettext('Addresses and ports used by the OSD service'),
+ rstore: me.osdStore,
+ title: gettext('Network'),
+ viewConfig: {
+ enableTextSelection: true,
+ },
+ gridRows: [
+ {
+ xtype: 'text',
+ name: 'front_addr',
+ text: `${gettext('Front Address')}<br>(Client & Monitor)`,
+ renderer: PVE.Utils.render_ceph_osd_addr,
+ },
+ {
+ xtype: 'text',
+ name: 'hb_front_addr',
+ text: gettext('Heartbeat Front Address'),
+ renderer: PVE.Utils.render_ceph_osd_addr,
+ },
+ {
+ xtype: 'text',
+ name: 'back_addr',
+ text: `${gettext('Back Address')}<br>(OSD)`,
+ renderer: PVE.Utils.render_ceph_osd_addr,
+ },
+ {
+ xtype: 'text',
+ name: 'hb_back_addr',
+ text: gettext('Heartbeat Back Address'),
+ renderer: PVE.Utils.render_ceph_osd_addr,
+ },
+ ],
+ },
+ {
+ xtype: 'panel',
+ title: 'Devices',
+ tooltip: gettext('Physical devices used by the OSD'),
+ items: [
+ {
+ xtype: 'grid',
+ border: false,
+ reference: 'devices',
+ store: {
+ model: 'pve-osd-details-devices',
+ },
+ columns: {
+ items: [
+ { text: gettext('Device'), dataIndex: 'device' },
+ { text: gettext('Type'), dataIndex: 'type' },
+ {
+ text: gettext('Physical Device'),
+ dataIndex: 'physical_device',
+ },
+ {
+ text: gettext('Size'),
+ dataIndex: 'size',
+ renderer: Proxmox.Utils.render_size,
+ },
+ {
+ text: 'Discard',
+ dataIndex: 'support_discard',
+ hidden: true,
+ },
+ {
+ text: gettext('Device node'),
+ dataIndex: 'dev_node',
+ hidden: true,
+ },
+ ],
+ defaults: {
+ tdCls: 'pointer',
+ flex: 1,
+ },
+ },
+ },
+ {
+ xtype: 'proxmoxObjectGrid',
+ reference: 'volumeDetails',
+ maskOnLoad: true,
+ viewConfig: {
+ enableTextSelection: true,
+ },
+ bind: {
+ title: Ext.String.format(
+ gettext('Volume Details for {0}'),
+ '{device}',
+ ),
+ },
+ rows: {
+ creation_time: {
+ header: gettext('Creation time'),
+ },
+ lv_name: {
+ header: gettext('LV Name'),
+ },
+ lv_path: {
+ header: gettext('LV Path'),
+ },
+ lv_uuid: {
+ header: gettext('LV UUID'),
+ },
+ vg_name: {
+ header: gettext('VG Name'),
+ },
+ },
+ url: 'nodes/', //placeholder will be set when device is selected
+ },
+ ],
+ },
+ ],
+ },
+ ],
+ });
+
+ me.callParent();
+ },
+});
--
2.30.2
* Re: [pve-devel] [PATCH manager v4 1/3] api ceph osd: add OSD index, metadata and lv-info
From: Aaron Lauterer @ 2022-12-07 13:22 UTC
To: Alwin Antreich, Proxmox VE development discussion
On 12/7/22 12:15, Alwin Antreich wrote:
> Hi,
>
>
> December 6, 2022 4:47 PM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:
>
>> To get more details for a single OSD, we add two new endpoints:
>> * nodes/{node}/ceph/osd/{osdid}/metadata
>> * nodes/{node}/ceph/osd/{osdid}/lv-info
> As an idea for a different name for lv-info, `nodes/{node}/ceph/osd/{osdid}/volume`? :)
>
Could be done, as you would expect to get overall physical volume infos from it,
right? So that the endpoint won't change once the underlying technology changes?
>>
[...]
>>
>> Possible volumes are:
>> * block (default value if not provided)
>> * db
>> * wal
>>
>> 'ceph-volume' is used to gather the infos, except for the creation time
>> of the LV which is retrieved via 'lvs'.
> You could use lvs/vgs directly, the ceph osd relevant infos are in the lv_tags.
IIRC, and I looked at it again, mapping the OSD ID to the associated LV/VG would
be a manual lookup via /var/lib/ceph/osd/ceph-X/block, which is a symlink to the
LV/VG.
So yeah, it would be possible, but I think a bit more fragile should something
change (as unlikely as it is) in comparison to using ceph-volume.
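For illustration, that manual lookup would be along these lines (OSD 0 as a
stand-in, the resolved LV path is illustrative):

    # readlink /var/lib/ceph/osd/ceph-0/block
    /dev/ceph-<vg-uuid>/osd-block-<lv-uuid>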
I don't expect these API endpoints to be run all the time, and am therefore okay
if they are a bit more expensive regarding computation resources.
>
> `lvs -o lv_all,vg_all --reportformat=json`
> `vgs -o vg_all,pv_all --reportformat=json`
>
> Why do you want to expose the lv-info?
Why not? The LVs are the only thing I found for an OSD that contains some hint
as to when it was created. Adding more general infos such as VG and LV for a
specific OSD can help users understand where the actual data is stored, and that
without digging even deeper into how things are handled internally and how it is
mapped.
Cheers,
Aaron
* Re: [pve-devel] [PATCH manager v4 1/3] api ceph osd: add OSD index, metadata and lv-info
From: Aaron Lauterer @ 2022-12-09 14:05 UTC
To: Alwin Antreich; +Cc: pve-devel
On 12/7/22 18:23, Alwin Antreich wrote:
> December 7, 2022 2:22 PM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:
>
>> On 12/7/22 12:15, Alwin Antreich wrote:
>>
[...]
>>
>>>> 'ceph-volume' is used to gather the infos, except for the creation time
>>>> of the LV which is retrieved via 'lvs'.
>>> You could use lvs/vgs directly, the ceph osd relevant infos are in the lv_tags.
>>
>> IIRC, and I looked at it again, mapping the OSD ID to the associated LV/VG would be a manual lookup
>> via /var/lib/ceph/osd/ceph-X/block, which is a symlink to the LV/VG.
>> So yeah, it would be possible, but I think a bit more fragile should something change (as unlikely as
>> it is) in comparison to using ceph-volume.
>
> The lv_tags already show the ID (ceph.osd_id=<id>). And I just see that `ceph-volume lvm list
> <id>` also exists, which is definitely faster than listing all OSDs.
Ok, I see now what you meant with the lv_tags. I'll think about it. Adding the
OSD ID to the ceph-volume call is definitely a good idea in case we stick with it.
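For reference, that narrowed-down call would be something like this (OSD 0 as a
stand-in, with the json format flag the patch already uses):

    # ceph-volume lvm list 0 --format json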
>
>> I don't expect these API endpoints to be run all the time, and am therefore okay if they are a bit
>> more expensive regarding computation resources.
>>
>>> `lvs -o lv_all,vg_all --reportformat=json`
>>> `vgs -o vg_all,pv_all --reportformat=json`
>>> Why do you want to expose the lv-info?
>>
>> Why not? The LVs are the only thing I found for an OSD that contains some hint as to when it was
>> created. Adding more general infos such as VG and LV for a specific OSD can help users understand
>> where the actual data is stored, and that without digging even deeper into how things are handled
>> internally and how it is mapped.
>
> In my experience this data is only useful if you want to handle the OSD on the CLI. Hence my
> question about the use-case. :)
>
> The metadata on the other hand displays all disks, sizes and more of an OSD. Then for example you
> can display DB/WAL devices in the UI and how big the DB partition is.
Did you look at the rest of the patches, or give it a try on a test cluster?
Quite a bit of the metadata for each device is shown, with additional infos
about the underlying volume. I think it is nice, as it can make it a bit easier
to know where to look for the correct volumes when CLI interaction on a deeper
level is needed.
If you see something that should be added as well, let me know :)
And again, the creation time of the LV is the only thing I found where one can
get an idea of how old an OSD is.
>
> Cheers,
> Alwin
>
* Re: [pve-devel] [PATCH manager v4 1/3] api ceph osd: add OSD index, metadata and lv-info
From: Aaron Lauterer @ 2022-12-12 16:10 UTC
To: Alwin Antreich; +Cc: pve-devel
On 12/7/22 18:23, Alwin Antreich wrote:
> December 7, 2022 2:22 PM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:
>
>> On 12/7/22 12:15, Alwin Antreich wrote:
>>
>>> Hi,
>>
>>> December 6, 2022 4:47 PM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:
>>>> To get more details for a single OSD, we add two new endpoints:
>>>>
>>>> * nodes/{node}/ceph/osd/{osdid}/metadata
>>>> * nodes/{node}/ceph/osd/{osdid}/lv-info
>>> As an idea for a different name for lv-info, `nodes/{node}/ceph/osd/{osdid}/volume`? :)
>>
>> Could be done, as you would expect to get overall physical volume infos from it, right? So that the
>> endpoint won't change once the underlying technology changes?
>
> Yes. It sounds more clear to me, as LV could mean something different. :P
>
Thinking about it a bit more, I am hesitant to rename the API endpoint. It is
very specific to LVs. Should a new generation of OSDs use something completely
different in the future, I would rather add a new API call to handle it than
adapt a currently existing one. Changing an established API endpoint is not
something that should be done lightly.