public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager] ui: remove ceph-mgr pools from rbd pool selection
@ 2022-10-12 13:22 Stefan Sterz
       [not found] ` <40F19362-B31D-460A-B995-9F3E366C55F5@antreich.com>
  0 siblings, 1 reply; 2+ messages in thread
From: Stefan Sterz @ 2022-10-12 13:22 UTC (permalink / raw)
  To: pve-devel

when using a hyper-converged cluster it was previously possible to add
the pool used by the ceph-mgr modules (".mgr" since quincy or
"device_health_metrics" previously) as an RBD storage. this would lead
to all kinds of errors when that storage was used (e.g.: VMs missing
their disks after a migration). hence, filter these pools from the
list of available pools.

Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
---
 www/manager6/form/CephPoolSelector.js | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/www/manager6/form/CephPoolSelector.js b/www/manager6/form/CephPoolSelector.js
index 5b96398d..eabb04ef 100644
--- a/www/manager6/form/CephPoolSelector.js
+++ b/www/manager6/form/CephPoolSelector.js
@@ -15,9 +15,17 @@ Ext.define('PVE.form.CephPoolSelector', {
 	    throw "no nodename given";
 	}
 
+	let filterCephMgrPools = (item) => {
+	    let name = item.data.pool_name;
+	    return name !== ".mgr" && name !== "device_health_metrics";
+	};
+
 	var store = Ext.create('Ext.data.Store', {
 	    fields: ['name'],
 	    sorters: 'name',
+	    filters: [
+		filterCephMgrPools,
+	    ],
 	    proxy: {
 		type: 'proxmox',
 		url: '/api2/json/nodes/' + me.nodename + '/ceph/pools',
@@ -32,8 +40,10 @@ Ext.define('PVE.form.CephPoolSelector', {
 
 	store.load({
 	    callback: function(rec, op, success) {
-		if (success && rec.length > 0) {
-		    me.select(rec[0]);
+		let filteredRec = rec.filter(filterCephMgrPools);
+
+		if (success && filteredRec.length > 0) {
+		    me.select(filteredRec[0]);
 		}
 	    },
 	});
-- 
2.30.2

^ permalink raw reply	[flat|nested] 2+ messages in thread

* Re: [pve-devel] [PATCH manager] ui: remove ceph-mgr pools from rbd pool selection
       [not found] ` <40F19362-B31D-460A-B995-9F3E366C55F5@antreich.com>
@ 2022-10-14  8:01   ` Stefan Sterz
  0 siblings, 0 replies; 2+ messages in thread
From: Stefan Sterz @ 2022-10-14  8:01 UTC (permalink / raw)
  To: Alwin Antreich, Proxmox VE development discussion

On 10/13/22 16:11, Alwin Antreich wrote:
> On October 12, 2022 3:22:18 PM GMT+02:00, Stefan Sterz <s.sterz@proxmox.com> wrote:
>> when using a hyper-converged cluster it was previously possible to add
>> the pool used by the ceph-mgr modules (".mgr" since quincy or
>> "device_health_metrics" previously) as an RBD storage. this would lead
>> to all kinds of errors when that storage was used (e.g.: VMs missing
>> their disks after a migration). hence, filter these pools from the
>> list of available pools.
>>
>> Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
>> ---
>> www/manager6/form/CephPoolSelector.js | 14 ++++++++++++--
>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/www/manager6/form/CephPoolSelector.js b/www/manager6/form/CephPoolSelector.js
>> index 5b96398d..eabb04ef 100644
>> --- a/www/manager6/form/CephPoolSelector.js
>> +++ b/www/manager6/form/CephPoolSelector.js
>> @@ -15,9 +15,17 @@ Ext.define('PVE.form.CephPoolSelector', {
>> 	    throw "no nodename given";
>> 	}
>>
>> +	let filterCephMgrPools = (item) => {
>> +	    let name = item.data.pool_name;
>> +	    return name !== ".mgr" && name !== "device_health_metrics";
>> +	};
>> +
>> 	var store = Ext.create('Ext.data.Store', {
>> 	    fields: ['name'],
>> 	    sorters: 'name',
>> +	    filters: [
>> +		filterCephMgrPools,
>> +	    ],
>> 	    proxy: {
>> 		type: 'proxmox',
>> 		url: '/api2/json/nodes/' + me.nodename + '/ceph/pools',
>> @@ -32,8 +40,10 @@ Ext.define('PVE.form.CephPoolSelector', {
>>
>> 	store.load({
>> 	    callback: function(rec, op, success) {
>> -		if (success && rec.length > 0) {
>> -		    me.select(rec[0]);
>> +		let filteredRec = rec.filter(filterCephMgrPools);
>> +
>> +		if (success && filteredRec.length > 0) {
>> +		    me.select(filteredRec[0]);
>> 		}
>> 	    },
>> 	});
> A thought, each ceph pool has an application associated (eg. rbd/cephfs). You could use these to create an inclusion filter. You can see them with `ceph osd pool application get`.

Ah thanks! :) I was looking for something like that. Sadly the API
endpoint used here does not return this information. I'll check how much
work it is to extend it and start working on a v2 of this patch!

> From the voice from the off. :-)
> 
> Cheers,
> Alwin

end of thread, other threads:[~2022-10-14  8:01 UTC | newest]

Thread overview: 2+ messages
2022-10-12 13:22 [pve-devel] [PATCH manager] ui: remove ceph-mgr pools from rbd pool selection Stefan Sterz
     [not found] ` <40F19362-B31D-460A-B995-9F3E366C55F5@antreich.com>
2022-10-14  8:01   ` Stefan Sterz
