public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint
@ 2022-10-20  7:17 Stefan Sterz
  2022-10-20  7:17 ` [pve-devel] [PATCH manager v3 2/2] ui: only allow rbd pools to be added as rbd storage Stefan Sterz
  2022-10-20 12:55 ` [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint Thomas Lamprecht
  0 siblings, 2 replies; 10+ messages in thread
From: Stefan Sterz @ 2022-10-20  7:17 UTC (permalink / raw)
  To: pve-devel

since ceph luminous (ceph 12), pools need to be associated with at
least one application. expose this information here too so that clients
of this endpoint can use it

Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
---
v3: add an api viewer entry for the applications object

thanks @ alwin antreich for pointing out that pools have applications!

 PVE/API2/Ceph/Pools.pm | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 6c05250e..f5984c2c 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -125,6 +125,11 @@ __PACKAGE__->register_method ({
 		    title => 'Autoscale Status',
 		    optional => 1,
 		},
+		applications => {
+		    type => 'object',
+		    title => 'Associated Applications',
+		    optional => 1,
+		},
 	    },
 	},
 	links => [ { rel => 'child', href => "{pool_name}" } ],
@@ -167,6 +172,8 @@ __PACKAGE__->register_method ({
 	# pg_autoscaler module is not enabled in Nautilus
 	my $autoscale = eval { $get_autoscale_status->($rados) };
 
+	my $apps = eval { $rados->mon_command({ prefix => 'osd pool application get' }) };
+
 	foreach my $e (@{$res->{pools}}) {
 	    my $d = {};
 	    foreach my $attr (@$attr_list) {
@@ -191,6 +198,10 @@ __PACKAGE__->register_method ({
 		$d->{percent_used} = $s->{percent_used};
 	    }
 
+	    if ($apps) {
+		$d->{applications} = $apps->{$d->{pool_name}} if $apps->{$d->{pool_name}};
+	    }
+
 	    # Cephs numerical pool types are barely documented. Found the following in the Ceph
 	    # codebase: https://github.com/ceph/ceph/blob/ff144995a849407c258bcb763daa3e03cfce5059/src/osd/osd_types.h#L1221-L1233
 	    if ($e->{type} == 1) {
-- 
2.30.2
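
For context: issued without a pool argument, as in the hunk above, the
'osd pool application get' monitor command returns one entry per pool,
keyed by pool name. A rough sketch of the resulting $apps structure
(pool names are hypothetical; as discussed further down in the thread,
ceph reports an empty object for an rbd pool's application by default):

    # hypothetical shape of $apps:
    # {
    #     'vm-pool'     => { rbd => {} },
    #     'cephfs_data' => { cephfs => { data => 'cephfs' } },
    # }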

* [pve-devel] [PATCH manager v3 2/2] ui: only allow rbd pools to be added as rbd storage
  2022-10-20  7:17 [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint Stefan Sterz
@ 2022-10-20  7:17 ` Stefan Sterz
  2022-10-20 13:00   ` Thomas Lamprecht
  2022-10-20 12:55 ` [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint Thomas Lamprecht
  1 sibling, 1 reply; 10+ messages in thread
From: Stefan Sterz @ 2022-10-20  7:17 UTC (permalink / raw)
  To: pve-devel

previously the ui would allow adding all pools (even the default
ceph-mon pools) as storage. this could lead to issues when users
used these pools as storage (e.g. vms missing their disks after a
migration). hence, restrict the pool selector to rbd pools.

fails gracefully by reverting to the previous behavior if a pool has
no application assigned to it.

Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
---
v3: changed the name of the filter function based on alwin antreich's
suggestion

 www/manager6/form/CephPoolSelector.js | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/www/manager6/form/CephPoolSelector.js b/www/manager6/form/CephPoolSelector.js
index 5b96398d..b98feb54 100644
--- a/www/manager6/form/CephPoolSelector.js
+++ b/www/manager6/form/CephPoolSelector.js
@@ -15,9 +15,17 @@ Ext.define('PVE.form.CephPoolSelector', {
 	    throw "no nodename given";
 	}
 
+	let onlyCephRBDPools = (item) => {
+	    let apps = item.data?.applications;
+	    return apps === undefined || apps?.rbd !== undefined;
+	};
+
 	var store = Ext.create('Ext.data.Store', {
 	    fields: ['name'],
 	    sorters: 'name',
+	    filters: [
+		onlyCephRBDPools,
+	    ],
 	    proxy: {
 		type: 'proxmox',
 		url: '/api2/json/nodes/' + me.nodename + '/ceph/pools',
@@ -32,8 +40,10 @@ Ext.define('PVE.form.CephPoolSelector', {
 
 	store.load({
 	    callback: function(rec, op, success) {
-		if (success && rec.length > 0) {
-		    me.select(rec[0]);
+		let filteredRec = rec.filter(onlyCephRBDPools);
+
+		if (success && filteredRec.length > 0) {
+		    me.select(filteredRec[0]);
 		}
 	    },
 	});
-- 
2.30.2
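
To make the filter's fallback behavior concrete, a quick sketch of
which records pass (hypothetical pool data; the first case is the
graceful fallback for an API that does not report applications):

    // passes: no applications reported at all (graceful fallback)
    onlyCephRBDPools({ data: { name: 'a' } });                               // true
    // passes: rbd is among the pool's applications
    onlyCephRBDPools({ data: { name: 'b', applications: { rbd: {} } } });    // true
    // filtered out: applications known, but rbd is not one of them
    onlyCephRBDPools({ data: { name: 'c', applications: { cephfs: {} } } }); // false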

* Re: [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint
  2022-10-20  7:17 [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint Stefan Sterz
  2022-10-20  7:17 ` [pve-devel] [PATCH manager v3 2/2] ui: only allow rbd pools to be added as rbd storage Stefan Sterz
@ 2022-10-20 12:55 ` Thomas Lamprecht
  2022-10-21  6:57   ` Stefan Sterz
  1 sibling, 1 reply; 10+ messages in thread
From: Thomas Lamprecht @ 2022-10-20 12:55 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stefan Sterz

On 20/10/2022 at 09:17, Stefan Sterz wrote:
> since ceph luminous (ceph 12), pools need to be associated with at
> least one application. expose this information here too so that clients
> of this endpoint can use it
> 
> Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
> ---
> v3: add an api viewer entry for the applications object
> 
> thanks @ alwin antreich for pointing out that pools have applications!
> 
>  PVE/API2/Ceph/Pools.pm | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
> index 6c05250e..f5984c2c 100644
> --- a/PVE/API2/Ceph/Pools.pm
> +++ b/PVE/API2/Ceph/Pools.pm
> @@ -125,6 +125,11 @@ __PACKAGE__->register_method ({
>  		    title => 'Autoscale Status',
>  		    optional => 1,
>  		},
> +		applications => {
> +		    type => 'object',
> +		    title => 'Associated Applications',
> +		    optional => 1,
> +		},
>  	    },
>  	},
>  	links => [ { rel => 'child', href => "{pool_name}" } ],
> @@ -167,6 +172,8 @@ __PACKAGE__->register_method ({
>  	# pg_autoscaler module is not enabled in Nautilus
>  	my $autoscale = eval { $get_autoscale_status->($rados) };
>  
> +	my $apps = eval { $rados->mon_command({ prefix => 'osd pool application get' }) };

out of interest: how expensive is this, did you check the overhead?

> +
>  	foreach my $e (@{$res->{pools}}) {
>  	    my $d = {};
>  	    foreach my $attr (@$attr_list) {
> @@ -191,6 +198,10 @@ __PACKAGE__->register_method ({
>  		$d->{percent_used} = $s->{percent_used};
>  	    }
>  
> +	    if ($apps) {
> +		$d->{applications} = $apps->{$d->{pool_name}} if $apps->{$d->{pool_name}};

no nested hashes-in-hashes, pull out $d->{pool_name} earlier and then make it a
one-liner:

$d->{applications} = $apps->{$pool} if defined($apps->{$pool});

> +	    }
> +
>  	    # Cephs numerical pool types are barely documented. Found the following in the Ceph
>  	    # codebase: https://github.com/ceph/ceph/blob/ff144995a849407c258bcb763daa3e03cfce5059/src/osd/osd_types.h#L1221-L1233
>  	    if ($e->{type} == 1) {

* Re: [pve-devel] [PATCH manager v3 2/2] ui: only allow rbd pools to be added as rbd storage
  2022-10-20  7:17 ` [pve-devel] [PATCH manager v3 2/2] ui: only allow rbd pools to be added as rbd storage Stefan Sterz
@ 2022-10-20 13:00   ` Thomas Lamprecht
  2022-10-21  7:02     ` Stefan Sterz
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Lamprecht @ 2022-10-20 13:00 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stefan Sterz

On 20/10/2022 at 09:17, Stefan Sterz wrote:
> previously the ui would allow adding all pools (even the default
> ceph-mon pools) as storage. this could lead to issues when users
> used these pools as storage (e.g. vms missing their disks after a
> migration). hence, restrict the pool selector to rbd pools.
> 
> fails gracefully by reverting to the previous behavior if a pool has
> no application assigned to it.
> 
> Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
> ---
> v3: changed the name of the filter function based on alwin antreich's
> suggestion
> 
>  www/manager6/form/CephPoolSelector.js | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/www/manager6/form/CephPoolSelector.js b/www/manager6/form/CephPoolSelector.js
> index 5b96398d..b98feb54 100644
> --- a/www/manager6/form/CephPoolSelector.js
> +++ b/www/manager6/form/CephPoolSelector.js
> @@ -15,9 +15,17 @@ Ext.define('PVE.form.CephPoolSelector', {
>  	    throw "no nodename given";
>  	}
>  
> +	let onlyCephRBDPools = (item) => {
> +	    let apps = item.data?.applications;
> +	    return apps === undefined || apps?.rbd !== undefined;

couldn't this be a one-liner? and as a nit, I'd maybe drop the Ceph from the name,
it suggests a bit that we got non-ceph rbd pools.

// filter out rgw, metadata, cephfs, ... pools
onlyRBDPools = ({data}) => !!data?.applications?.rbd;


more generally, the application could be also shown in our ceph pool list, could
be useful to see the usage type of each pool, would also reduce confusion for the
a bit odd metadata pool with its single PG.

> +	};
> +
>  	var store = Ext.create('Ext.data.Store', {
>  	    fields: ['name'],
>  	    sorters: 'name',
> +	    filters: [
> +		onlyCephRBDPools,
> +	    ],
>  	    proxy: {
>  		type: 'proxmox',
>  		url: '/api2/json/nodes/' + me.nodename + '/ceph/pools',
> @@ -32,8 +40,10 @@ Ext.define('PVE.form.CephPoolSelector', {
>  
>  	store.load({
>  	    callback: function(rec, op, success) {
> -		if (success && rec.length > 0) {
> -		    me.select(rec[0]);
> +		let filteredRec = rec.filter(onlyCephRBDPools);
> +
> +		if (success && filteredRec.length > 0) {
> +		    me.select(filteredRec[0]);
>  		}
>  	    },
>  	});


* Re: [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint
  2022-10-20 12:55 ` [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint Thomas Lamprecht
@ 2022-10-21  6:57   ` Stefan Sterz
  2022-10-21  7:04     ` Thomas Lamprecht
  0 siblings, 1 reply; 10+ messages in thread
From: Stefan Sterz @ 2022-10-21  6:57 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 10/20/22 14:55, Thomas Lamprecht wrote:
> On 20/10/2022 at 09:17, Stefan Sterz wrote:
>> since ceph luminous (ceph 12), pools need to be associated with at
>> least one application. expose this information here too so that clients
>> of this endpoint can use it
>>
>> Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
>> ---
>> v3: add an api viewer entry for the applications object
>>
>> thanks @ alwin antreich for pointing out that pools have applications!
>>
>>  PVE/API2/Ceph/Pools.pm | 11 +++++++++++
>>  1 file changed, 11 insertions(+)
>>
>> diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
>> index 6c05250e..f5984c2c 100644
>> --- a/PVE/API2/Ceph/Pools.pm
>> +++ b/PVE/API2/Ceph/Pools.pm
>> @@ -125,6 +125,11 @@ __PACKAGE__->register_method ({
>>  		    title => 'Autoscale Status',
>>  		    optional => 1,
>>  		},
>> +		applications => {
>> +		    type => 'object',
>> +		    title => 'Associated Applications',
>> +		    optional => 1,
>> +		},
>>  	    },
>>  	},
>>  	links => [ { rel => 'child', href => "{pool_name}" } ],
>> @@ -167,6 +172,8 @@ __PACKAGE__->register_method ({
>>  	# pg_autoscaler module is not enabled in Nautilus
>>  	my $autoscale = eval { $get_autoscale_status->($rados) };
>>  
>> +	my $apps = eval { $rados->mon_command({ prefix => 'osd pool application get' }) };
> 
> out of interest: how expensive is this, did you check the overhead?
> 

do you want a specific metric? in my (admittedly small) test setup
(three-vm cluster with 4 cores and 4 GiB RAM) it is barely noticeable.
the api call takes between 18 and 25 ms in both cases for me.

>> +
>>  	foreach my $e (@{$res->{pools}}) {
>>  	    my $d = {};
>>  	    foreach my $attr (@$attr_list) {
>> @@ -191,6 +198,10 @@ __PACKAGE__->register_method ({
>>  		$d->{percent_used} = $s->{percent_used};
>>  	    }
>>  
>> +	    if ($apps) {
>> +		$d->{applications} = $apps->{$d->{pool_name}} if $apps->{$d->{pool_name}};
> 
> no nested hashes-in-hashes, pull out $d->{pool_name} earlier and then make it a
> one-liner:
> 
> $d->{applications} = $apps->{$pool} if defined($apps->{$pool});
> 

noted, ill avoid that in the future.

>> +	    }
>> +
>>  	    # Cephs numerical pool types are barely documented. Found the following in the Ceph
>>  	    # codebase: https://github.com/ceph/ceph/blob/ff144995a849407c258bcb763daa3e03cfce5059/src/osd/osd_types.h#L1221-L1233
>>  	    if ($e->{type} == 1) {
> 
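
As an aside, a minimal way to time that extra monitor call (a sketch,
assuming the $rados handle from the patch; Time::HiRes ships with Perl):

    use Time::HiRes qw(gettimeofday tv_interval);

    # take a timestamp, run the mon command, report the elapsed time
    my $t0 = [gettimeofday];
    my $apps = eval { $rados->mon_command({ prefix => 'osd pool application get' }) };
    warn "osd pool application get took " . tv_interval($t0) . "s\n";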

* Re: [pve-devel] [PATCH manager v3 2/2] ui: only allow rbd pools to be added as rbd storage
  2022-10-20 13:00   ` Thomas Lamprecht
@ 2022-10-21  7:02     ` Stefan Sterz
  2022-10-21  7:13       ` Thomas Lamprecht
  0 siblings, 1 reply; 10+ messages in thread
From: Stefan Sterz @ 2022-10-21  7:02 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 10/20/22 15:00, Thomas Lamprecht wrote:
> On 20/10/2022 at 09:17, Stefan Sterz wrote:
>> previously the ui would allow adding all pools (even the default
>> ceph-mon pools) as storage. this could lead to issues when users
>> used these pools as storage (e.g. vms missing their disks after a
>> migration). hence, restrict the pool selector to rbd pools.
>>
>> fails gracefully by reverting to the previous behavior if a pool has
>> no application assigned to it.
>>
>> Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
>> ---
>> v3: changed the name of the filter function based on alwin antreich's
>> suggestion
>>
>>  www/manager6/form/CephPoolSelector.js | 14 ++++++++++++--
>>  1 file changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/www/manager6/form/CephPoolSelector.js b/www/manager6/form/CephPoolSelector.js
>> index 5b96398d..b98feb54 100644
>> --- a/www/manager6/form/CephPoolSelector.js
>> +++ b/www/manager6/form/CephPoolSelector.js
>> @@ -15,9 +15,17 @@ Ext.define('PVE.form.CephPoolSelector', {
>>  	    throw "no nodename given";
>>  	}
>>  
>> +	let onlyCephRBDPools = (item) => {
>> +	    let apps = item.data?.applications;
>> +	    return apps === undefined || apps?.rbd !== undefined;
> 
> couldn't this be a one-liner? and as a nit, I'd maybe drop the Ceph from the name,
> it suggests a bit that we got non-ceph rbd pools.
> 
> // filter out rgw, metadata, cephfs, ... pools
> onlyRBDPools = ({data}) => !!data?.applications?.rbd;
> 
> 

i see your point on variable naming, but this isn't equivalent afaiu.
the idea here was that the ui reverts back to including a pool if there
is no application defined for it (this should help make the ui change
independent from the api change). this wouldn't do that. you could do:

onlyRBDPools = ({ data }) => !data?.applications ||
!!data?.applications?.rbd

imo still pretty long, but maybe you have another trick up your
sleeve :)

if you don't care about keeping the front-end compatible with the
previous behavior we can also use your one-liner. just wanted to err on
the side of caution.

as a sidenote: personally i also prefer checking for undefined-ness to
relying on its falsy nature. e.g. if applications.rbd = null, this would
also return false, even though technically rbd is defined. since ceph
seems to return empty objects for rbd pools by default, this currently
works fine. however, i'd prefer being more explicit. this is just a
personal preference though, so i'll do whatever you prefer :)

there is also the "in" operator, but iirc another patch of mine once
got rejected for using that.
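
(purely for illustration, a hypothetical spelling of that "in"-based
variant, not taken from the thread:)

    let onlyRBDPools = ({ data }) =>
        data?.applications === undefined || 'rbd' in data.applications;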

> more generally, the application could be also shown in our ceph pool list, could
> be useful to see the usage type of each pool, would also reduce confusion for the
> a bit odd metadata pool with its single PG.
> 

sure i suppose that makes sense, i can add that to a v4.

>> +	};
>> +
>>  	var store = Ext.create('Ext.data.Store', {
>>  	    fields: ['name'],
>>  	    sorters: 'name',
>> +	    filters: [
>> +		onlyCephRBDPools,
>> +	    ],
>>  	    proxy: {
>>  		type: 'proxmox',
>>  		url: '/api2/json/nodes/' + me.nodename + '/ceph/pools',
>> @@ -32,8 +40,10 @@ Ext.define('PVE.form.CephPoolSelector', {
>>  
>>  	store.load({
>>  	    callback: function(rec, op, success) {
>> -		if (success && rec.length > 0) {
>> -		    me.select(rec[0]);
>> +		let filteredRec = rec.filter(onlyCephRBDPools);
>> +
>> +		if (success && filteredRec.length > 0) {
>> +		    me.select(filteredRec[0]);
>>  		}
>>  	    },
>>  	});
> 

* Re: [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint
  2022-10-21  6:57   ` Stefan Sterz
@ 2022-10-21  7:04     ` Thomas Lamprecht
  2022-10-21  7:59       ` Stefan Sterz
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Lamprecht @ 2022-10-21  7:04 UTC (permalink / raw)
  To: Stefan Sterz, Proxmox VE development discussion

On 21/10/2022 at 08:57, Stefan Sterz wrote:
>> out of interest: how expensive is this, did you check the overhead?
>>
> do you want a specific metric? in my (admittedly small) test setup
> (three-vm cluster with 4 cores and 4 GiB RAM) it is barely noticeable.
> the api call takes between 18 and 25 ms in both cases for me.
> 

I mean, with a handful of pools you probably won't (or really should not)
see any difference >50 ms, which would make it a hard case to argue.

Just wondered if many pools (e.g., [10, 100, 1000]) actually increase this
O(n) or if that falls in the noise of the rados monitor query overhead.

I don't expect that is significant, just wondered if you already checked
and got some info on that.

* Re: [pve-devel] [PATCH manager v3 2/2] ui: only allow rbd pools to be added as rbd storage
  2022-10-21  7:02     ` Stefan Sterz
@ 2022-10-21  7:13       ` Thomas Lamprecht
  0 siblings, 0 replies; 10+ messages in thread
From: Thomas Lamprecht @ 2022-10-21  7:13 UTC (permalink / raw)
  To: Stefan Sterz, Proxmox VE development discussion

On 21/10/2022 at 09:02, Stefan Sterz wrote:
>> // filter out rgw, metadata, cephfs, ... pools
>> onlyRBDPools = ({data}) => !!data?.applications?.rbd;
>>
>>
> i see your point on variable naming, but this isn't equivalent afaiu.

argh, yeah you're right

> the idea here was that the ui reverts back to including a pool if there
> is no application defined for it (this should help make the ui change
> independent from the api change). this wouldn't do that. you could do:
> 
> onlyRBDPools = ({ data }) => !data?.applications ||
> !!data?.applications?.rbd
> 
> imo still pretty long, but maybe you have another trick up your
> sleeve 😄

maybe (untested):

onlyRBDPools = ({ data }) => (data?.applications ?? { rbd: true }).rbd;


But at that point we're rather heading in the direction of code golf, so just
use whatever you think works well and is understandable enough; I won't
bikeshed the style of this filter fn anymore ;P
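
For reference, the variants discussed in this subthread differ only for
pools where ceph reports no application at all (a sketch over
hypothetical inputs):

    // applications undefined  -> v3 filter: keep, ?? variant: keep, !!rbd only: drop
    // applications == {}      -> v3 filter: drop, ?? variant: drop, !!rbd only: drop
    // applications.rbd == {}  -> v3 filter: keep, ?? variant: keep, !!rbd only: keep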

> 
> if you don't care about keeping the front-end compatible with the
> previous behavior we can also use your one-liner. just wanted to err on
> the side of caution.
> 
> as a sidenote: personally i also prefer checking for undefined-ness to
> relying on its falsy nature. e.g. if applications.rbd = null, this would
> also return false, even though technically rbd is defined. since ceph
> seems to return empty objects for rbd pools by default, this currently
> works fine. however, i'd prefer being more explicit. this is just a
> personal preference though, so i'll do whatever you prefer 😄

in general OK, but given that we're not sure what ceph's defined
behaviour is, or will be, checking falsy can even be a feature, not a bug.
otoh, ceph releases are controlled by us and have a relatively low frequency;
if it ever changes we'll be able to fix it just fine in time.

> 
> there is also the "in" operator, but iirc another patch of mine once
> got rejected for using that.
> 
>> more generally, the application could be also shown in our ceph pool list, could
>> be useful to see the usage type of each pool, would also reduce confusion for the
>> a bit odd metadata pool with its single PG.
>>
> sure i suppose that makes sense, i can add that to a v4.
> 

can, and probably should, be in a separate series; just noticed and
mentioned it before I forgot again ;-)

* Re: [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint
  2022-10-21  7:04     ` Thomas Lamprecht
@ 2022-10-21  7:59       ` Stefan Sterz
  2022-10-21  9:54         ` Stefan Sterz
  0 siblings, 1 reply; 10+ messages in thread
From: Stefan Sterz @ 2022-10-21  7:59 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 10/21/22 09:04, Thomas Lamprecht wrote:
> On 21/10/2022 at 08:57, Stefan Sterz wrote:
>>> out of interest: how expensive is this, did you check the overhead?
>>>
>> do you want a specific metric? in my (admittedly small) test setup
>> (three-vm cluster with 4 cores and 4 GiB RAM) it is barely noticeable.
>> the api call takes between 18 and 25 ms in both cases for me.
>>
> 
> I mean, with a handful of pools you probably won't (or really should not)
> see any difference >50 ms, which would make it a hard case to argue.
> 
> Just wondered if many pools (e.g., [10, 100, 1000]) actually increase this
> O(n) or if that falls in the noise of the rados monitor query overhead.
> 
> I don't expect that is significant, just wondered if you already checked
> and got some info on that.

ok so from what i can tell this is probably O(n) as it iterates once
over all pools, but that info should be in memory and not too bad imo
(and since this is lspools there are some other calls here that are
likely in O(n)).

however, even if this rados command takes too long, it will time out
after 5 seconds and then no applications will be included in the
response, which is just the previous behavior and imo "safe".
some more detail below:

technically, one may argue this call is in O(n*m*k), with n being the
number of pools, m the number of applications and k the number of
metadata keys for the application. but m and k are very likely very small
if not zero (e.g. for a default rbd pool m is one and k would be zero).
so more like O(n).

at least if i am reading this right:
https://github.com/smithfarm/ceph/blob/v17.0.0/src/mon/OSDMonitor.cc#L6876-L6956

* Re: [pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint
  2022-10-21  7:59       ` Stefan Sterz
@ 2022-10-21  9:54         ` Stefan Sterz
  0 siblings, 0 replies; 10+ messages in thread
From: Stefan Sterz @ 2022-10-21  9:54 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 10/21/22 09:59, Stefan Sterz wrote:
> On 10/21/22 09:04, Thomas Lamprecht wrote:
>> On 21/10/2022 at 08:57, Stefan Sterz wrote:
>>>> out of interest: how expensive is this, did you check the overhead?
>>>>
>>> do you want a specific metric? in my (admittedly small) test setup
>>> (three-vm cluster with 4 cores and 4 GiB RAM) it is barely noticeable.
>>> the api call takes between 18 and 25 ms in both cases for me.
>>>
>>
>> I mean, with a handful of pools you probably won't (or really should not)
>> see any difference >50 ms, which would make it a hard case to argue.
>>
>> Just wondered if many pools (e.g., [10, 100, 1000]) actually increase this
>> O(n) or if that falls in the noise of the rados monitor query overhead.
>>
>> I don't expect that is significant, just wondered if you already checked
>> and got some info on that.
> 
> ok so from what i can tell this is probably O(n) as it iterates once
> over all pools, but that info should be in memory and not too bad imo
> (and since this is lspools there are some other calls here that are
> likely in O(n)).
> 
> however, even if this rados command takes too long, it will time out
> after 5 seconds and then no applications will be included in the
> response, which is just the previous behavior and imo "safe".
> 
> some more detail below:
> 
> technically, one may argue this call is in O(n*m*k), with n being the
> number of pools, m the number of applications and k the number of
> metadata keys for the application. but m and k are very likely very small
> if not zero (e.g. for a default rbd pool m is one and k would be zero).
> so more like O(n).
> 
> at least if i am reading this right:
> https://github.com/smithfarm/ceph/blob/v17.0.0/src/mon/OSDMonitor.cc#L6876-L6956

sorry for the many emails, i just figured out that we already dump all
this information in that api call, so we don't need to issue another
rados mon command at all. i'll update this patch series accordingly

> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
