* [pve-devel] [PATCH manager] ceph osd: ui: show PGs per OSD
@ 2023-02-14 8:13 Aaron Lauterer
2023-02-14 13:19 ` Thomas Lamprecht
2023-02-15 11:25 ` [pve-devel] applied: " Thomas Lamprecht
0 siblings, 2 replies; 7+ messages in thread
From: Aaron Lauterer @ 2023-02-14 8:13 UTC (permalink / raw)
To: pve-devel
By switching from 'ceph osd tree' to the 'ceph osd df tree' mon API
equivalent, we get the same data structure with more information per
OSD. One of them is the number of PGs stored on that OSD.
The number of PGs per OSD is an important number, for example when
trying to figure out why the performance is not as good as expected.
Therefore, adding it to the OSD overview visible by default should
reduce the number of times one needs to access the CLI.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
PVE/API2/Ceph/OSD.pm | 4 ++--
www/manager6/ceph/OSD.js | 7 +++++++
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index 18195743..09fb3bba 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -105,7 +105,7 @@ __PACKAGE__->register_method ({
PVE::Ceph::Tools::check_ceph_inited();
my $rados = PVE::RADOS->new();
- my $res = $rados->mon_command({ prefix => 'osd tree' });
+ my $res = $rados->mon_command({ prefix => 'osd df', output_method => 'tree', });
die "no tree nodes found\n" if !($res && $res->{nodes});
@@ -131,7 +131,7 @@ __PACKAGE__->register_method ({
type => $e->{type}
};
- foreach my $opt (qw(status crush_weight reweight device_class)) {
+ foreach my $opt (qw(status crush_weight reweight device_class pgs)) {
$new->{$opt} = $e->{$opt} if defined($e->{$opt});
}
diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index 6f7e2159..ef193a0a 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -797,6 +797,13 @@ Ext.define('PVE.node.CephOsdTree', {
renderer: 'render_osd_latency',
width: 120,
},
+ {
+ text: 'PGs',
+ dataIndex: 'pgs',
+ align: 'right',
+ renderer: 'render_osd_val',
+ width: 90,
+ },
],
--
2.30.2
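For illustration, a minimal sketch of how the result of the new call can be
consumed (the mon_command invocation is the one from the patch above; the
loop printing 'pgs' is illustrative and not part of the patch):

    use PVE::RADOS;

    my $rados = PVE::RADOS->new();
    # same {nodes} layout as 'osd tree', but each OSD entry now also
    # carries per-OSD usage fields such as 'pgs'
    my $res = $rados->mon_command({ prefix => 'osd df', output_method => 'tree' });
    for my $e (@{$res->{nodes} // []}) {
        next if ($e->{type} // '') ne 'osd';
        printf "%s: %s PGs\n", $e->{name}, $e->{pgs} // '-';
    }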
* Re: [pve-devel] [PATCH manager] ceph osd: ui: show PGs per OSD
2023-02-14 8:13 [pve-devel] [PATCH manager] ceph osd: ui: show PGs per OSD Aaron Lauterer
@ 2023-02-14 13:19 ` Thomas Lamprecht
2023-02-14 15:05 ` Aaron Lauterer
2023-02-14 16:14 ` Aaron Lauterer
2023-02-15 11:25 ` [pve-devel] applied: " Thomas Lamprecht
1 sibling, 2 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2023-02-14 13:19 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
On 14/02/2023 09:13, Aaron Lauterer wrote:
> By switching from 'ceph osd tree' to the 'ceph osd df tree' mon API
> equivalent, we get the same data structure with more information per
the change looks almost too neat for using a completely different command,
a bit fishy, but hey, if it works roughly as fast as the other one it's
fine by me.
> OSD. One of them is the number of PGs stored on that OSD.
>
did you benchmark both to compare for any bigger runtime difference?
E.g., a loop with a few thousand rados mon_command calls in Perl for each,
using a HiRes timer to measure total loop time and compare?
I'd not care about a few percent, but it would be good to know if this is
orders of magnitude slower - which I'd not expect, but it's too easy to
check not to, IMO.
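Such a loop could look roughly like this (an illustrative sketch, not code
from the thread; the iteration count is arbitrary):

    use Time::HiRes qw(gettimeofday tv_interval);
    use PVE::RADOS;

    my $rados = PVE::RADOS->new();
    for my $cmd ({ prefix => 'osd tree' }, { prefix => 'osd df', output_method => 'tree' }) {
        my $t0 = [gettimeofday];
        $rados->mon_command($cmd) for 1 .. 5000;
        printf "%s: %.2fs total\n", $cmd->{prefix}, tv_interval($t0);
    }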
* Re: [pve-devel] [PATCH manager] ceph osd: ui: show PGs per OSD
2023-02-14 13:19 ` Thomas Lamprecht
@ 2023-02-14 15:05 ` Aaron Lauterer
2023-02-14 16:14 ` Aaron Lauterer
1 sibling, 0 replies; 7+ messages in thread
From: Aaron Lauterer @ 2023-02-14 15:05 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE development discussion
I compared the returned values, but did not benchmark it.
I'll follow up with the results.
On 2/14/23 14:19, Thomas Lamprecht wrote:
> On 14/02/2023 09:13, Aaron Lauterer wrote:
>> By switching from 'ceph osd tree' to the 'ceph osd df tree' mon API
>> equivalent, we get the same data structure with more information per
>
> the change looks almost too neat for using a completely different command,
> a bit fishy, but hey, if it works roughly as fast as the other one it's
> fine by me.
>
>> OSD. One of them is the number of PGs stored on that OSD.
>>
>
> did you benchmark both to compare for any bigger runtime difference?
>
> E.g., a loop with a few thousand rados mon_command calls in Perl for each,
> using a HiRes timer to measure total loop time and compare?
>
> I'd not care about a few percent, but it would be good to know if this is
> orders of magnitude slower - which I'd not expect, but it's too easy to
> check not to, IMO.
* Re: [pve-devel] [PATCH manager] ceph osd: ui: show PGs per OSD
2023-02-14 13:19 ` Thomas Lamprecht
2023-02-14 15:05 ` Aaron Lauterer
@ 2023-02-14 16:14 ` Aaron Lauterer
2023-02-15 6:18 ` Thomas Lamprecht
1 sibling, 1 reply; 7+ messages in thread
From: Aaron Lauterer @ 2023-02-14 16:14 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE development discussion
Seems like the `osd df tree` call is about 25% slower, give or take.
Tested on our AMD test cluster that is currently set up with 3 nodes with 4 OSDs
each. 50k iterations.
root@jura1:~# ./bench.pl
              Rate osd-df-tree    osd-tree
osd-df-tree 9217/s          --        -27%
osd-tree   12658/s         37%          --
root@jura1:~# ./bench.pl
              Rate osd-df-tree    osd-tree
osd-df-tree 9141/s          --        -25%
osd-tree   12136/s         33%          --
root@jura1:~# ./bench.pl
              Rate osd-df-tree    osd-tree
osd-df-tree 9940/s          --        -23%
osd-tree   12987/s         31%          --
root@jura1:~# ./bench.pl
              Rate osd-df-tree    osd-tree
osd-df-tree 8666/s          --        -20%
osd-tree   10846/s         25%          --
root@jura1:~#
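The Rate/percentage matrix above is the output format of Perl's
Benchmark::cmpthese, so bench.pl was presumably something along these lines
(a reconstruction based on the output shown; the actual script was not
posted):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);
    use PVE::RADOS;

    my $rados = PVE::RADOS->new();
    # compare both mon commands over 50k iterations each
    cmpthese(50_000, {
        'osd-tree'    => sub { $rados->mon_command({ prefix => 'osd tree' }) },
        'osd-df-tree' => sub {
            $rados->mon_command({ prefix => 'osd df', output_method => 'tree' });
        },
    });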
On 2/14/23 14:19, Thomas Lamprecht wrote:
> On 14/02/2023 09:13, Aaron Lauterer wrote:
>> By switching from 'ceph osd tree' to the 'ceph osd df tree' mon API
>> equivalent, we get the same data structure with more information per
>
> the change looks almost too neat for using a completely different command,
> a bit fishy, but hey, if it works roughly as fast as the other one it's
> fine by me.
>
>> OSD. One of them is the number of PGs stored on that OSD.
>>
>
> did you benchmark both to compare for any bigger runtime difference?
>
> E.g., a loop with a few thousand rados mon_command calls in Perl for each,
> using a HiRes timer to measure total loop time and compare?
>
> I'd not care about a few percent, but it would be good to know if this is
> orders of magnitude slower - which I'd not expect, but it's too easy to
> check not to, IMO.
* Re: [pve-devel] [PATCH manager] ceph osd: ui: show PGs per OSD
2023-02-14 16:14 ` Aaron Lauterer
@ 2023-02-15 6:18 ` Thomas Lamprecht
2023-02-15 9:20 ` Aaron Lauterer
0 siblings, 1 reply; 7+ messages in thread
From: Thomas Lamprecht @ 2023-02-15 6:18 UTC (permalink / raw)
To: Aaron Lauterer, Proxmox VE development discussion
On 14/02/2023 at 17:14, Aaron Lauterer wrote:
> Seems like the `osd df tree` call is about 25% slower, give or take.
>
> Tested on our AMD test cluster that is currently set up with 3 nodes with 4 OSDs each. 50k iterations.
>
> root@jura1:~# ./bench.pl
>               Rate osd-df-tree    osd-tree
> osd-df-tree 9217/s          --        -27%
> osd-tree   12658/s         37%          --
> root@jura1:~# ./bench.pl
>               Rate osd-df-tree    osd-tree
> osd-df-tree 9141/s          --        -25%
> osd-tree   12136/s         33%          --
> root@jura1:~# ./bench.pl
>               Rate osd-df-tree    osd-tree
> osd-df-tree 9940/s          --        -23%
> osd-tree   12987/s         31%          --
> root@jura1:~# ./bench.pl
>               Rate osd-df-tree    osd-tree
> osd-df-tree 8666/s          --        -20%
> osd-tree   10846/s         25%          --
> root@jura1:~#
Many thanks for the insight; so significantly more than noise, but far from problematic.
* Re: [pve-devel] [PATCH manager] ceph osd: ui: show PGs per OSD
2023-02-15 6:18 ` Thomas Lamprecht
@ 2023-02-15 9:20 ` Aaron Lauterer
0 siblings, 0 replies; 7+ messages in thread
From: Aaron Lauterer @ 2023-02-15 9:20 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE development discussion
On 2/15/23 07:18, Thomas Lamprecht wrote:
> On 14/02/2023 at 17:14, Aaron Lauterer wrote:
>> Seems like the `osd df tree` call is about 25% slower, give or take.
>>
>> Tested on our AMD test cluster that is currently set up with 3 nodes with 4 OSDs each. 50k iterations.
>>
>> root@jura1:~# ./bench.pl
>>               Rate osd-df-tree    osd-tree
>> osd-df-tree 9217/s          --        -27%
>> osd-tree   12658/s         37%          --
>> root@jura1:~# ./bench.pl
>>               Rate osd-df-tree    osd-tree
>> osd-df-tree 9141/s          --        -25%
>> osd-tree   12136/s         33%          --
>> root@jura1:~# ./bench.pl
>>               Rate osd-df-tree    osd-tree
>> osd-df-tree 9940/s          --        -23%
>> osd-tree   12987/s         31%          --
>> root@jura1:~# ./bench.pl
>>               Rate osd-df-tree    osd-tree
>> osd-df-tree 8666/s          --        -20%
>> osd-tree   10846/s         25%          --
>> root@jura1:~#
>
> Many thanks for the insight; so significantly more than noise, but far from problematic.
Yep, and we don't call it all the time; AFAIU only when opening the OSD panel
and on manual "Reload"s.
I took another look at the Ceph API but couldn't find another call that
also provides that information and might be cheaper to call.
* [pve-devel] applied: [PATCH manager] ceph osd: ui: show PGs per OSD
2023-02-14 8:13 [pve-devel] [PATCH manager] ceph osd: ui: show PGs per OSD Aaron Lauterer
2023-02-14 13:19 ` Thomas Lamprecht
@ 2023-02-15 11:25 ` Thomas Lamprecht
1 sibling, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2023-02-15 11:25 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
On 14/02/2023 at 09:13, Aaron Lauterer wrote:
> By switching from 'ceph osd tree' to the 'ceph osd df tree' mon API
> equivalent, we get the same data structure with more information per
> OSD. One of them is the number of PGs stored on that OSD.
>
> The number of PGs per OSD is an important number, for example when
> trying to figure out why the performance is not as good as expected.
> Therefore, adding it to the OSD overview visible by default should
> reduce the number of times one needs to access the CLI.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> PVE/API2/Ceph/OSD.pm | 4 ++--
> www/manager6/ceph/OSD.js | 7 +++++++
> 2 files changed, 9 insertions(+), 2 deletions(-)
>
>
with parts of your benchmark result massaged into the commit message:
applied, thanks!
Thread overview: 7+ messages
2023-02-14 8:13 [pve-devel] [PATCH manager] ceph osd: ui: show PGs per OSD Aaron Lauterer
2023-02-14 13:19 ` Thomas Lamprecht
2023-02-14 15:05 ` Aaron Lauterer
2023-02-14 16:14 ` Aaron Lauterer
2023-02-15 6:18 ` Thomas Lamprecht
2023-02-15 9:20 ` Aaron Lauterer
2023-02-15 11:25 ` [pve-devel] applied: " Thomas Lamprecht