public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems
@ 2021-10-25 14:01 Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH storage v2 1/1] cephfs: add support for " Dominik Csapak
                   ` (14 more replies)
  0 siblings, 15 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

this series adds support for multiple cephfs. no single patch fixes the bug
on its own, so it's not mentioned in any commit subject... (feel free to amend
a commit subject when applying if you find one patch most appropriate)

a user can already create multiple cephfs via 'pveceph' (or manually
with the ceph tools), but the ui does not support it properly

the storage patch can be applied independently; it only adds a new parameter
that does nothing if not set.

changes from v1:
* moved 'destroyfs' from api to cli only
* removed 'destroy cephfs' from the gui
* added docs patch to document the exact steps on how to remove a cephfs
* added 'disable' check on remove-storages
* change 'is mds active' check to check for specific fs_name

pve-storage:

Dominik Csapak (1):
  cephfs: add support for multiple ceph filesystems

 PVE/Storage/CephFSPlugin.pm | 8 ++++++++
 1 file changed, 8 insertions(+)

pve-manager:

Dominik Csapak (11):
  api: ceph-mds: get mds state when multiple ceph filesystems exist
  ui: ceph: catch missing version for service list
  api: cephfs: refactor {ls,create}_fs
  api: cephfs: more checks on fs create
  api: cephfs: add fs_name to 'is mds active' check
  ui: ceph/ServiceList: refactor controller out
  ui: ceph/fs: show fs for active mds
  api: cephfs: add 'fs-name' for cephfs storage
  ui: storage/cephfs: make ceph fs selectable
  ui: ceph/fs: allow creating multiple cephfs
  pveceph: add 'fs destroy' command

 PVE/API2/Ceph/FS.pm                 |  31 ++-
 PVE/CLI/pveceph.pm                  | 120 +++++++++++
 PVE/Ceph/Services.pm                |  33 +--
 PVE/Ceph/Tools.pm                   |  51 +++++
 www/manager6/Makefile               |   1 +
 www/manager6/Utils.js               |   1 +
 www/manager6/ceph/FS.js             |  24 +--
 www/manager6/ceph/ServiceList.js    | 313 +++++++++++++++-------------
 www/manager6/form/CephFSSelector.js |  42 ++++
 www/manager6/storage/CephFSEdit.js  |  25 +++
 10 files changed, 449 insertions(+), 192 deletions(-)
 create mode 100644 www/manager6/form/CephFSSelector.js

pve-docs:

Dominik Csapak (1):
  pveceph: improve documentation for destroying cephfs

 pveceph.adoc | 49 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 37 insertions(+), 12 deletions(-)

-- 
2.30.2






* [pve-devel] [PATCH storage v2 1/1] cephfs: add support for multiple ceph filesystems
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-11-05 12:54   ` [pve-devel] applied: " Thomas Lamprecht
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 01/11] api: ceph-mds: get mds state when multiple ceph filesystems exist Dominik Csapak
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

by optionally saving the name of the cephfs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/Storage/CephFSPlugin.pm | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
index 3b9a791..f587db7 100644
--- a/PVE/Storage/CephFSPlugin.pm
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -87,6 +87,7 @@ sub cephfs_mount {
     my $secretfile = $cmd_option->{keyring};
     my $server = $cmd_option->{mon_host} // PVE::CephConfig::get_monaddr_list($configfile);
     my $type = 'ceph';
+    my $fs_name = $scfg->{'fs-name'};
 
     my @opts = ();
     if ($scfg->{fuse}) {
@@ -94,10 +95,12 @@ sub cephfs_mount {
 	push @opts, "ceph.id=$cmd_option->{userid}";
 	push @opts, "ceph.keyfile=$secretfile" if defined($secretfile);
 	push @opts, "ceph.conf=$configfile" if defined($configfile);
+	push @opts, "ceph.client_fs=$fs_name" if defined($fs_name);
     } else {
 	push @opts, "name=$cmd_option->{userid}";
 	push @opts, "secretfile=$secretfile" if defined($secretfile);
 	push @opts, "conf=$configfile" if defined($configfile);
+	push @opts, "fs=$fs_name" if defined($fs_name);
     }
 
     push @opts, $scfg->{options} if $scfg->{options};
@@ -128,6 +131,10 @@ sub properties {
 	    description => "Subdir to mount.",
 	    type => 'string', format => 'pve-storage-path',
 	},
+	'fs-name' => {
+	    description => "The Ceph filesystem name.",
+	    type => 'string', format => 'pve-configid',
+	},
     };
 }
 
@@ -148,6 +155,7 @@ sub options {
 	maxfiles => { optional => 1 },
 	keyring => { optional => 1 },
 	'prune-backups' => { optional => 1 },
+	'fs-name' => { optional => 1 },
     };
 }
 
-- 
2.30.2
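
A minimal standalone sketch of what the new optional setting does to the
generated mount options (the example values below are made up; the
keyring/conf/monitor handling of the real plugin is omitted):

    use strict;
    use warnings;

    my $scfg = { fuse => 0, 'fs-name' => 'cephfs2' };   # hypothetical storage config
    my $cmd_option = { userid => 'admin' };

    my $fs_name = $scfg->{'fs-name'};
    my @opts = ();
    if ($scfg->{fuse}) {
        push @opts, "ceph.id=$cmd_option->{userid}";
        push @opts, "ceph.client_fs=$fs_name" if defined($fs_name);
    } else {
        push @opts, "name=$cmd_option->{userid}";
        push @opts, "fs=$fs_name" if defined($fs_name);
    }
    print join(',', @opts), "\n";    # name=admin,fs=cephfs2

Without 'fs-name' set, no fs=/ceph.client_fs= option is emitted, so existing
storage entries keep mounting the default filesystem.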






* [pve-devel] [PATCH manager v2 01/11] api: ceph-mds: get mds state when multiple ceph filesystems exist
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH storage v2 1/1] cephfs: add support for " Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 02/11] ui: ceph: catch missing version for service list Dominik Csapak
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

by iterating over all of them and saving the filesystem name for the active
ones. this fixes the issue that an mds assigned to any filesystem other than
the first one in the list was wrongly shown as offline

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/Ceph/Services.pm | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/PVE/Ceph/Services.pm b/PVE/Ceph/Services.pm
index ad41dbe4..e8cc23dc 100644
--- a/PVE/Ceph/Services.pm
+++ b/PVE/Ceph/Services.pm
@@ -181,13 +181,14 @@ sub get_cluster_mds_state {
     }
 
     my $add_state = sub {
-	my ($mds) = @_;
+	my ($mds, $fsname) = @_;
 
 	my $state = {};
 	$state->{addr} = $mds->{addr};
 	$state->{rank} = $mds->{rank};
 	$state->{standby_replay} = $mds->{standby_replay} ? 1 : 0;
 	$state->{state} = $mds->{state};
+	$state->{fs_name} = $fsname;
 
 	$mds_state->{$mds->{name}} = $state;
     };
@@ -200,13 +201,14 @@ sub get_cluster_mds_state {
 	$add_state->($mds);
     }
 
-    my $fs_info = $fsmap->{filesystems}->[0];
-    my $active_mds = $fs_info->{mdsmap}->{info};
+    for my $fs_info (@{$fsmap->{filesystems}}) {
+	my $active_mds = $fs_info->{mdsmap}->{info};
 
-    # normally there's only one active MDS, but we can have multiple active for
-    # different ranks (e.g., different cephs path hierarchy). So just add all.
-    foreach my $mds (values %$active_mds) {
-	$add_state->($mds);
+	# normally there's only one active MDS, but we can have multiple active for
+	# different ranks (e.g., different cephs path hierarchy). So just add all.
+	foreach my $mds (values %$active_mds) {
+	    $add_state->($mds, $fs_info->{mdsmap}->{fs_name});
+	}
     }
 
     return $mds_state;
-- 
2.30.2
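
A condensed, standalone sketch of the new per-filesystem iteration (the fsmap
below is made-up sample data; the real code fetches it via RADOS and also
records addr, rank and the standby daemons):

    use strict;
    use warnings;

    # two filesystems, each with one active MDS (hypothetical names)
    my $fsmap = {
        filesystems => [
            { mdsmap => { fs_name => 'cephfs',  info => { gid_1 => { name => 'mds-a', state => 'up:active' } } } },
            { mdsmap => { fs_name => 'cephfs2', info => { gid_2 => { name => 'mds-b', state => 'up:active' } } } },
        ],
    };

    my $mds_state = {};
    for my $fs_info (@{$fsmap->{filesystems}}) {
        my $active_mds = $fs_info->{mdsmap}->{info};
        for my $mds (values %$active_mds) {
            # remember which fs the daemon is active for
            $mds_state->{$mds->{name}} = {
                state   => $mds->{state},
                fs_name => $fs_info->{mdsmap}->{fs_name},
            };
        }
    }

    for my $name (sort keys %$mds_state) {
        print "$name: $mds_state->{$name}->{state} ($mds_state->{$name}->{fs_name})\n";
    }

With only the first array entry considered, as before this patch, 'mds-b'
would never show up as active here.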






* [pve-devel] [PATCH manager v2 02/11] ui: ceph: catch missing version for service list
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH storage v2 1/1] cephfs: add support for " Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 01/11] api: ceph-mds: get mds state when multiple ceph filesystems exist Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 03/11] api: cephfs: refactor {ls, create}_fs Dominik Csapak
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

when a daemon is stopped, the version here is 'undefined'. catch that
instead of letting the template renderer run into an error.
this fixes the rendering of the grid backgrounds

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/ceph/ServiceList.js | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
index 971de635..86cdcc8f 100644
--- a/www/manager6/ceph/ServiceList.js
+++ b/www/manager6/ceph/ServiceList.js
@@ -63,6 +63,9 @@ Ext.define('PVE.node.CephServiceList', {
 	xclass: 'Ext.app.ViewController',
 
 	render_version: function(value, metadata, rec) {
+	    if (value === undefined) {
+		return '';
+	    }
 	    let view = this.getView();
 	    let host = rec.data.host, nodev = [0];
 	    if (view.nodeversions[host] !== undefined) {
-- 
2.30.2






* [pve-devel] [PATCH manager v2 03/11] api: cephfs: refactor {ls, create}_fs
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (2 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 02/11] ui: ceph: catch missing version for service list Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 04/11] api: cephfs: more checks on fs create Dominik Csapak
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

no functional change intended

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/API2/Ceph/FS.pm | 22 ++++++----------------
 PVE/Ceph/Tools.pm   | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index 82b5d616..cdced31a 100644
--- a/PVE/API2/Ceph/FS.pm
+++ b/PVE/API2/Ceph/FS.pm
@@ -59,15 +59,7 @@ __PACKAGE__->register_method ({
 
 	my $rados = PVE::RADOS->new();
 
-	my $cephfs_list = $rados->mon_command({ prefix => "fs ls" });
-	# we get something like:
-	#{
-	#   'metadata_pool_id' => 2,
-	#   'data_pool_ids' => [ 1 ],
-	#   'metadata_pool' => 'cephfs_metadata',
-	#   'data_pools' => [ 'cephfs_data' ],
-	#   'name' => 'cephfs',
-	#}
+	my $cephfs_list = PVE::Ceph::Tools::ls_fs($rados);
 
 	my $res = [
 	    map {{
@@ -161,13 +153,11 @@ __PACKAGE__->register_method ({
 		push @created_pools, $pool_metadata;
 
 		print "configuring new CephFS '$fs_name'\n";
-		$rados->mon_command({
-		    prefix => "fs new",
-		    fs_name => $fs_name,
-		    metadata => $pool_metadata,
-		    data => $pool_data,
-		    format => 'plain',
-		});
+		my $param = {
+		    pool_metadata => $pool_metadata,
+		    pool_data => $pool_data,
+		};
+		PVE::Ceph::Tools::create_fs($fs_name, $param, $rados);
 	    };
 	    if (my $err = $@) {
 		$@ = undef;
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 74ead6f7..2f818276 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -304,6 +304,42 @@ sub destroy_pool {
     });
 }
 
+# we get something like:
+#[{
+#   'metadata_pool_id' => 2,
+#   'data_pool_ids' => [ 1 ],
+#   'metadata_pool' => 'cephfs_metadata',
+#   'data_pools' => [ 'cephfs_data' ],
+#   'name' => 'cephfs',
+#}]
+sub ls_fs {
+    my ($rados) = @_;
+
+    if (!defined($rados)) {
+	$rados = PVE::RADOS->new();
+    }
+
+    my $res = $rados->mon_command({ prefix => "fs ls" });
+
+    return $res;
+}
+
+sub create_fs {
+    my ($fs, $param, $rados) = @_;
+
+    if (!defined($rados)) {
+	$rados = PVE::RADOS->new();
+    }
+
+    $rados->mon_command({
+	prefix => "fs new",
+	fs_name => $fs,
+	metadata => $param->{pool_metadata},
+	data => $param->{pool_data},
+	format => 'plain',
+    });
+}
+
 sub setup_pve_symlinks {
     # fail if we find a real file instead of a link
     if (-f $ceph_cfgpath) {
-- 
2.30.2
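
A rough usage sketch for the two new helpers, assuming the patch is applied
and the code runs on a PVE node with an initialized Ceph cluster; the
filesystem and pool names are hypothetical and the pools have to exist
already:

    use strict;
    use warnings;

    use PVE::RADOS;
    use PVE::Ceph::Tools;

    my $rados = PVE::RADOS->new();

    # list the currently configured filesystems
    my $cephfs_list = PVE::Ceph::Tools::ls_fs($rados);
    print "existing fs: $_->{name}\n" for @$cephfs_list;

    # create a new filesystem from two pre-created pools
    PVE::Ceph::Tools::create_fs('backupfs', {
        pool_metadata => 'backupfs_metadata',
        pool_data     => 'backupfs_data',
    }, $rados);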






* [pve-devel] [PATCH manager v2 04/11] api: cephfs: more checks on fs create
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (3 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 03/11] api: cephfs: refactor {ls, create}_fs Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 05/11] api: cephfs: add fs_name to 'is mds active' check Dominik Csapak
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

namely whether the fs already exists, and whether there currently is a
standby mds that can be used for the new fs.
previously, only one cephfs was possible, so these checks were not
necessary. now with pacific, it is possible to have multiple cephfs
instances, so we should check for those.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/API2/Ceph/FS.pm | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index cdced31a..845c4fbd 100644
--- a/PVE/API2/Ceph/FS.pm
+++ b/PVE/API2/Ceph/FS.pm
@@ -128,8 +128,14 @@ __PACKAGE__->register_method ({
 	die "ceph pools '$pool_data' and/or '$pool_metadata' already exist\n"
 	    if $existing_pools->{$pool_data} || $existing_pools->{$pool_metadata};
 
+	my $fs = PVE::Ceph::Tools::ls_fs($rados);
+	die "ceph fs '$fs_name' already exists\n"
+	    if (grep { $_->{name} eq $fs_name } @$fs);
+
 	my $running_mds = PVE::Ceph::Services::get_cluster_mds_state($rados);
 	die "no running Metadata Server (MDS) found!\n" if !scalar(keys %$running_mds);
+	die "no standby Metadata Server (MDS) found!\n"
+	    if !grep { $_->{state} eq 'up:standby' } values(%$running_mds);
 
 	PVE::Storage::assert_sid_unused($fs_name) if $param->{add_storage};
 
-- 
2.30.2
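
The added checks boil down to two greps; below is a standalone sketch with
made-up data (in the real code, $fs comes from ls_fs() and $running_mds from
get_cluster_mds_state()):

    use strict;
    use warnings;

    my $fs_name = 'cephfs2';                     # name requested by the user
    my $fs = [ { name => 'cephfs' } ];           # already existing filesystems
    my $running_mds = {
        'mds-a' => { state => 'up:active' },
        'mds-b' => { state => 'up:standby' },
    };

    die "ceph fs '$fs_name' already exists\n"
        if grep { $_->{name} eq $fs_name } @$fs;

    die "no running Metadata Server (MDS) found!\n" if !scalar(keys %$running_mds);
    die "no standby Metadata Server (MDS) found!\n"
        if !grep { $_->{state} eq 'up:standby' } values(%$running_mds);

    print "checks passed, '$fs_name' can be created\n";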






* [pve-devel] [PATCH manager v2 05/11] api: cephfs: add fs_name to 'is mds active' check
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (4 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 04/11] api: cephfs: more checks on fs create Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 06/11] ui: ceph/ServiceList: refactor controller out Dominik Csapak
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

so that we check for an active mds of the correct cephfs, namely the one we just added

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/API2/Ceph/FS.pm  |  2 +-
 PVE/Ceph/Services.pm | 17 ++++++++++-------
 2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index 845c4fbd..9b2a8d70 100644
--- a/PVE/API2/Ceph/FS.pm
+++ b/PVE/API2/Ceph/FS.pm
@@ -187,7 +187,7 @@ __PACKAGE__->register_method ({
 		print "Adding '$fs_name' to storage configuration...\n";
 
 		my $waittime = 0;
-		while (!PVE::Ceph::Services::is_any_mds_active($rados)) {
+		while (!PVE::Ceph::Services::is_mds_active($rados, $fs_name)) {
 		    if ($waittime >= 10) {
 			die "Need MDS to add storage, but none got active!\n";
 		    }
diff --git a/PVE/Ceph/Services.pm b/PVE/Ceph/Services.pm
index e8cc23dc..abe524e0 100644
--- a/PVE/Ceph/Services.pm
+++ b/PVE/Ceph/Services.pm
@@ -214,23 +214,26 @@ sub get_cluster_mds_state {
     return $mds_state;
 }
 
-sub is_any_mds_active {
-    my ($rados) = @_;
+sub is_mds_active {
+    my ($rados, $fs_name) = @_;
 
     if (!defined($rados)) {
 	$rados = PVE::RADOS->new();
     }
 
     my $mds_dump = $rados->mon_command({ prefix => 'mds stat' });
-    my $fs = $mds_dump->{fsmap}->{filesystems};
+    my $fsmap = $mds_dump->{fsmap}->{filesystems};
 
-    if (!($fs && scalar(@$fs) > 0)) {
+    if (!($fsmap && scalar(@$fsmap) > 0)) {
 	return undef;
     }
-    my $active_mds = $fs->[0]->{mdsmap}->{info};
+    for my $fs (@$fsmap) {
+	next if defined($fs_name) && $fs->{mdsmap}->{fs_name} ne $fs_name;
 
-    for my $mds (values %$active_mds) {
-	return 1 if $mds->{state} eq 'up:active';
+	my $active_mds = $fs->{mdsmap}->{info};
+	for my $mds (values %$active_mds) {
+	    return 1 if $mds->{state} eq 'up:active';
+	}
     }
 
     return 0;
-- 
2.30.2
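
A standalone sketch mirroring the new fs_name-aware check, with the 'mds stat'
mon command replaced by made-up sample data:

    use strict;
    use warnings;

    sub is_mds_active_for {
        my ($fsmap, $fs_name) = @_;

        return undef if !($fsmap && scalar(@$fsmap) > 0);

        for my $fs (@$fsmap) {
            # only look at the filesystem we are interested in, if given
            next if defined($fs_name) && $fs->{mdsmap}->{fs_name} ne $fs_name;
            return 1 if grep { $_->{state} eq 'up:active' } values %{$fs->{mdsmap}->{info}};
        }
        return 0;
    }

    my $fsmap = [
        { mdsmap => { fs_name => 'cephfs',  info => { g1 => { state => 'up:active' } } } },
        { mdsmap => { fs_name => 'cephfs2', info => { g2 => { state => 'up:standby' } } } },
    ];

    print "cephfs:  ", (is_mds_active_for($fsmap, 'cephfs')  ? "active" : "not active"), "\n";
    print "cephfs2: ", (is_mds_active_for($fsmap, 'cephfs2') ? "active" : "not active"), "\n";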






* [pve-devel] [PATCH manager v2 06/11] ui: ceph/ServiceList: refactor controller out
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (5 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 05/11] api: cephfs: add fs_name to 'is mds active' check Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 07/11] ui: ceph/fs: show fs for active mds Dominik Csapak
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

we want to reuse that controller type by overriding some functionality
in the future, so factor it out.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/ceph/ServiceList.js | 302 ++++++++++++++++---------------
 1 file changed, 153 insertions(+), 149 deletions(-)

diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
index 86cdcc8f..d5ba2efa 100644
--- a/www/manager6/ceph/ServiceList.js
+++ b/www/manager6/ceph/ServiceList.js
@@ -44,172 +44,176 @@ Ext.define('PVE.CephCreateService', {
     },
 });
 
-Ext.define('PVE.node.CephServiceList', {
-    extend: 'Ext.grid.GridPanel',
-    xtype: 'pveNodeCephServiceList',
-
-    onlineHelp: 'chapter_pveceph',
-    emptyText: gettext('No such service configured.'),
-
-    stateful: true,
+Ext.define('PVE.node.CephServiceController', {
+    extend: 'Ext.app.ViewController',
+    alias: 'controller.CephServiceList',
 
-    // will be called when the store loads
-    storeLoadCallback: Ext.emptyFn,
 
-    // if set to true, does shows the ceph install mask if needed
-    showCephInstallMask: false,
+    render_version: function(value, metadata, rec) {
+	if (value === undefined) {
+	    return '';
+	}
+	let view = this.getView();
+	let host = rec.data.host, nodev = [0];
+	if (view.nodeversions[host] !== undefined) {
+	    nodev = view.nodeversions[host].version.parts;
+	}
 
-    controller: {
-	xclass: 'Ext.app.ViewController',
+	let icon = '';
+	if (PVE.Utils.compare_ceph_versions(view.maxversion, nodev) > 0) {
+	    icon = PVE.Utils.get_ceph_icon_html('HEALTH_UPGRADE');
+	} else if (PVE.Utils.compare_ceph_versions(nodev, value) > 0) {
+	    icon = PVE.Utils.get_ceph_icon_html('HEALTH_OLD');
+	} else if (view.mixedversions) {
+	    icon = PVE.Utils.get_ceph_icon_html('HEALTH_OK');
+	}
+	return icon + value;
+    },
 
-	render_version: function(value, metadata, rec) {
-	    if (value === undefined) {
-		return '';
+    getMaxVersions: function(store, records, success) {
+	if (!success || records.length < 1) {
+	    return;
+	}
+	let me = this;
+	let view = me.getView();
+
+	view.nodeversions = records[0].data.node;
+	view.maxversion = [];
+	view.mixedversions = false;
+	for (const [_nodename, data] of Object.entries(view.nodeversions)) {
+	    let res = PVE.Utils.compare_ceph_versions(data.version.parts, view.maxversion);
+	    if (res !== 0 && view.maxversion.length > 0) {
+		view.mixedversions = true;
 	    }
-	    let view = this.getView();
-	    let host = rec.data.host, nodev = [0];
-	    if (view.nodeversions[host] !== undefined) {
-		nodev = view.nodeversions[host].version.parts;
+	    if (res > 0) {
+		view.maxversion = data.version.parts;
 	    }
+	}
+    },
 
-	    let icon = '';
-	    if (PVE.Utils.compare_ceph_versions(view.maxversion, nodev) > 0) {
-		icon = PVE.Utils.get_ceph_icon_html('HEALTH_UPGRADE');
-	    } else if (PVE.Utils.compare_ceph_versions(nodev, value) > 0) {
-		icon = PVE.Utils.get_ceph_icon_html('HEALTH_OLD');
-	    } else if (view.mixedversions) {
-		icon = PVE.Utils.get_ceph_icon_html('HEALTH_OK');
-	    }
-	    return icon + value;
-	},
+    init: function(view) {
+	if (view.pveSelNode) {
+	    view.nodename = view.pveSelNode.data.node;
+	}
+	if (!view.nodename) {
+	    throw "no node name specified";
+	}
 
-	getMaxVersions: function(store, records, success) {
-	    if (!success || records.length < 1) {
-		return;
-	    }
-	    let me = this;
-	    let view = me.getView();
-
-	    view.nodeversions = records[0].data.node;
-	    view.maxversion = [];
-	    view.mixedversions = false;
-	    for (const [_nodename, data] of Object.entries(view.nodeversions)) {
-		let res = PVE.Utils.compare_ceph_versions(data.version.parts, view.maxversion);
-		if (res !== 0 && view.maxversion.length > 0) {
-		    view.mixedversions = true;
-		}
-		if (res > 0) {
-		    view.maxversion = data.version.parts;
-		}
-	    }
-	},
+	if (!view.type) {
+	    throw "no type specified";
+	}
 
-	init: function(view) {
-	    if (view.pveSelNode) {
-		view.nodename = view.pveSelNode.data.node;
-	    }
-	    if (!view.nodename) {
-		throw "no node name specified";
-	    }
+	view.versionsstore = Ext.create('Proxmox.data.UpdateStore', {
+	    autoStart: true,
+	    interval: 10000,
+	    storeid: `ceph-versions-${view.type}-list${view.nodename}`,
+	    proxy: {
+		type: 'proxmox',
+		url: "/api2/json/cluster/ceph/metadata?scope=versions",
+	    },
+	});
+	view.versionsstore.on('load', this.getMaxVersions, this);
+	view.on('destroy', view.versionsstore.stopUpdate);
+
+	view.rstore = Ext.create('Proxmox.data.UpdateStore', {
+	    autoStart: true,
+	    interval: 3000,
+	    storeid: `ceph-${view.type}-list${view.nodename}`,
+	    model: 'ceph-service-list',
+	    proxy: {
+		type: 'proxmox',
+		url: `/api2/json/nodes/${view.nodename}/ceph/${view.type}`,
+	    },
+	});
 
-	    if (!view.type) {
-		throw "no type specified";
-	    }
+	view.setStore(Ext.create('Proxmox.data.DiffStore', {
+	    rstore: view.rstore,
+	    sorters: [{ property: 'name' }],
+	}));
 
-	    view.versionsstore = Ext.create('Proxmox.data.UpdateStore', {
-		autoStart: true,
-		interval: 10000,
-		storeid: `ceph-versions-${view.type}-list${view.nodename}`,
-		proxy: {
-		    type: 'proxmox',
-		    url: "/api2/json/cluster/ceph/metadata?scope=versions",
-		},
-	    });
-	    view.versionsstore.on('load', this.getMaxVersions, this);
-	    view.on('destroy', view.versionsstore.stopUpdate);
-
-	    view.rstore = Ext.create('Proxmox.data.UpdateStore', {
-		autoStart: true,
-		interval: 3000,
-		storeid: `ceph-${view.type}-list${view.nodename}`,
-		model: 'ceph-service-list',
-		proxy: {
-		    type: 'proxmox',
-		    url: `/api2/json/nodes/${view.nodename}/ceph/${view.type}`,
-		},
-	    });
+	if (view.storeLoadCallback) {
+	    view.rstore.on('load', view.storeLoadCallback, this);
+	}
+	view.on('destroy', view.rstore.stopUpdate);
 
-	    view.setStore(Ext.create('Proxmox.data.DiffStore', {
-		rstore: view.rstore,
-		sorters: [{ property: 'name' }],
-	    }));
+	if (view.showCephInstallMask) {
+	    PVE.Utils.monitor_ceph_installed(view, view.rstore, view.nodename, true);
+	}
+    },
 
-	    if (view.storeLoadCallback) {
-		view.rstore.on('load', view.storeLoadCallback, this);
-	    }
-	    view.on('destroy', view.rstore.stopUpdate);
+    service_cmd: function(rec, cmd) {
+	let view = this.getView();
+	if (!rec.data.host) {
+	    Ext.Msg.alert(gettext('Error'), "entry has no host");
+	    return;
+	}
+	Proxmox.Utils.API2Request({
+				  url: `/nodes/${rec.data.host}/ceph/${cmd}`,
+				  method: 'POST',
+				  params: { service: view.type + '.' + rec.data.name },
+				  success: function(response, options) {
+				      Ext.create('Proxmox.window.TaskProgress', {
+					  autoShow: true,
+					  upid: response.result.data,
+					  taskDone: () => view.rstore.load(),
+				      });
+				  },
+				  failure: (response, _opts) => Ext.Msg.alert(gettext('Error'), response.htmlStatus),
+	});
+    },
+    onChangeService: function(button) {
+	let me = this;
+	let record = me.getView().getSelection()[0];
+	me.service_cmd(record, button.action);
+    },
 
-	    if (view.showCephInstallMask) {
-		PVE.Utils.monitor_ceph_installed(view, view.rstore, view.nodename, true);
-	    }
-	},
+    showSyslog: function() {
+	let view = this.getView();
+	let rec = view.getSelection()[0];
+	let service = `ceph-${view.type}@${rec.data.name}`;
+	Ext.create('Ext.window.Window', {
+	    title: `${gettext('Syslog')}: ${service}`,
+	    autoShow: true,
+	    modal: true,
+	    width: 800,
+	    height: 400,
+	    layout: 'fit',
+	    items: [{
+		xtype: 'proxmoxLogView',
+		url: `/api2/extjs/nodes/${rec.data.host}/syslog?service=${encodeURIComponent(service)}`,
+		log_select_timespan: 1,
+	    }],
+	});
+    },
 
-	service_cmd: function(rec, cmd) {
-	    let view = this.getView();
-	    if (!rec.data.host) {
-		Ext.Msg.alert(gettext('Error'), "entry has no host");
-		return;
-	    }
-	    Proxmox.Utils.API2Request({
-		url: `/nodes/${rec.data.host}/ceph/${cmd}`,
-		method: 'POST',
-		params: { service: view.type + '.' + rec.data.name },
-		success: function(response, options) {
-		    Ext.create('Proxmox.window.TaskProgress', {
-			autoShow: true,
-			upid: response.result.data,
-			taskDone: () => view.rstore.load(),
-		    });
-		},
-		failure: (response, _opts) => Ext.Msg.alert(gettext('Error'), response.htmlStatus),
-	    });
-	},
-	onChangeService: function(button) {
-	    let me = this;
-	    let record = me.getView().getSelection()[0];
-	    me.service_cmd(record, button.action);
-	},
+    onCreate: function() {
+	let view = this.getView();
+	Ext.create('PVE.CephCreateService', {
+	    autoShow: true,
+	    nodename: view.nodename,
+	    subject: view.getTitle(),
+	    type: view.type,
+	    taskDone: () => view.rstore.load(),
+	});
+    },
+});
 
-	showSyslog: function() {
-	    let view = this.getView();
-	    let rec = view.getSelection()[0];
-	    let service = `ceph-${view.type}@${rec.data.name}`;
-	    Ext.create('Ext.window.Window', {
-		title: `${gettext('Syslog')}: ${service}`,
-		autoShow: true,
-		modal: true,
-		width: 800,
-		height: 400,
-		layout: 'fit',
-		items: [{
-		    xtype: 'proxmoxLogView',
-		    url: `/api2/extjs/nodes/${rec.data.host}/syslog?service=${encodeURIComponent(service)}`,
-		    log_select_timespan: 1,
-		}],
-	    });
-	},
+Ext.define('PVE.node.CephServiceList', {
+    extend: 'Ext.grid.GridPanel',
+    xtype: 'pveNodeCephServiceList',
 
-	onCreate: function() {
-	    let view = this.getView();
-	    Ext.create('PVE.CephCreateService', {
-		autoShow: true,
-		nodename: view.nodename,
-		subject: view.getTitle(),
-		type: view.type,
-		taskDone: () => view.rstore.load(),
-	    });
-	},
-    },
+    onlineHelp: 'chapter_pveceph',
+    emptyText: gettext('No such service configured.'),
+
+    stateful: true,
+
+    // will be called when the store loads
+    storeLoadCallback: Ext.emptyFn,
+
+    // if set to true, does shows the ceph install mask if needed
+    showCephInstallMask: false,
+
+    controller: 'CephServiceList',
 
     tbar: [
 	{
-- 
2.30.2






* [pve-devel] [PATCH manager v2 07/11] ui: ceph/fs: show fs for active mds
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (6 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 06/11] ui: ceph/ServiceList: refactor controller out Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 08/11] api: cephfs: add 'fs-name' for cephfs storage Dominik Csapak
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

so that the user can see which mds is responsible for which cephfs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/ceph/FS.js          |  2 +-
 www/manager6/ceph/ServiceList.js | 14 ++++++++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/www/manager6/ceph/FS.js b/www/manager6/ceph/FS.js
index 90362586..c620ec6e 100644
--- a/www/manager6/ceph/FS.js
+++ b/www/manager6/ceph/FS.js
@@ -183,7 +183,7 @@ Ext.define('PVE.NodeCephFSPanel', {
 	    },
 	},
 	{
-	    xtype: 'pveNodeCephServiceList',
+	    xtype: 'pveNodeCephMDSList',
 	    title: gettext('Metadata Servers'),
 	    stateId: 'grid-ceph-mds',
 	    type: 'mds',
diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
index d5ba2efa..f2b2cbbd 100644
--- a/www/manager6/ceph/ServiceList.js
+++ b/www/manager6/ceph/ServiceList.js
@@ -48,6 +48,7 @@ Ext.define('PVE.node.CephServiceController', {
     extend: 'Ext.app.ViewController',
     alias: 'controller.CephServiceList',
 
+    render_status: (value, metadata, rec) => value,
 
     render_version: function(value, metadata, rec) {
 	if (value === undefined) {
@@ -305,6 +306,7 @@ Ext.define('PVE.node.CephServiceList', {
 	    header: gettext('Status'),
 	    flex: 1,
 	    sortable: false,
+	    renderer: 'render_status',
 	    dataIndex: 'state',
 	},
 	{
@@ -341,6 +343,7 @@ Ext.define('PVE.node.CephServiceList', {
 	fields: [
 	    'addr',
 	    'name',
+	    'fs_name',
 	    'rank',
 	    'host',
 	    'quorum',
@@ -356,3 +359,14 @@ Ext.define('PVE.node.CephServiceList', {
 	idProperty: 'name',
     });
 });
+
+Ext.define('PVE.node.CephMDSList', {
+    extend: 'PVE.node.CephServiceList',
+    xtype: 'pveNodeCephMDSList',
+
+    controller: {
+	type: 'CephServiceList',
+	render_status: (value, mD, rec) => rec.data.fs_name ? `${value} (${rec.data.fs_name})` : value,
+    },
+});
+
-- 
2.30.2






* [pve-devel] [PATCH manager v2 08/11] api: cephfs: add 'fs-name' for cephfs storage
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (7 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 07/11] ui: ceph/fs: show fs for active mds Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 09/11] ui: storage/cephfs: make ceph fs selectable Dominik Csapak
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

so that we can uniquely identify the cephfs (in case of multiple)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/API2/Ceph/FS.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index 9b2a8d70..8e740115 100644
--- a/PVE/API2/Ceph/FS.pm
+++ b/PVE/API2/Ceph/FS.pm
@@ -202,6 +202,7 @@ __PACKAGE__->register_method ({
 			type => 'cephfs',
 			storage => $fs_name,
 			content => 'backup,iso,vztmpl',
+			'fs-name' => $fs_name,
 		    })
 		};
 		die "adding storage for CephFS '$fs_name' failed, check log ".
-- 
2.30.2






* [pve-devel] [PATCH manager v2 09/11] ui: storage/cephfs: make ceph fs selectable
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (8 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 08/11] api: cephfs: add 'fs-name' for cephfs storage Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 10/11] ui: ceph/fs: allow creating multiple cephfs Dominik Csapak
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

by adding a CephFSSelector and using it in the CephFSEdit window
(similar to the poolselector/textfield)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/Makefile               |  1 +
 www/manager6/form/CephFSSelector.js | 42 +++++++++++++++++++++++++++++
 www/manager6/storage/CephFSEdit.js  | 25 +++++++++++++++++
 3 files changed, 68 insertions(+)
 create mode 100644 www/manager6/form/CephFSSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index e5e85aed..c3e4056b 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -26,6 +26,7 @@ JSSRC= 							\
 	form/CacheTypeSelector.js			\
 	form/CalendarEvent.js				\
 	form/CephPoolSelector.js			\
+	form/CephFSSelector.js				\
 	form/CompressionSelector.js			\
 	form/ContentTypeSelector.js			\
 	form/ControllerSelector.js			\
diff --git a/www/manager6/form/CephFSSelector.js b/www/manager6/form/CephFSSelector.js
new file mode 100644
index 00000000..3c86e3cf
--- /dev/null
+++ b/www/manager6/form/CephFSSelector.js
@@ -0,0 +1,42 @@
+Ext.define('PVE.form.CephFSSelector', {
+    extend: 'Ext.form.field.ComboBox',
+    alias: 'widget.pveCephFSSelector',
+
+    allowBlank: false,
+    valueField: 'name',
+    displayField: 'name',
+    editable: false,
+    queryMode: 'local',
+
+    initComponent: function() {
+	var me = this;
+
+	if (!me.nodename) {
+	    throw "no nodename given";
+	}
+
+	var store = Ext.create('Ext.data.Store', {
+	    fields: ['name'],
+	    sorters: 'name',
+	    proxy: {
+		type: 'proxmox',
+		url: '/api2/json/nodes/' + me.nodename + '/ceph/fs',
+	    },
+	});
+
+	Ext.apply(me, {
+	    store: store,
+	});
+
+        me.callParent();
+
+	store.load({
+	    callback: function(rec, op, success) {
+		if (success && rec.length > 0) {
+		    me.select(rec[0]);
+		}
+	    },
+	});
+    },
+
+});
diff --git a/www/manager6/storage/CephFSEdit.js b/www/manager6/storage/CephFSEdit.js
index 1f5246cd..92fdfe63 100644
--- a/www/manager6/storage/CephFSEdit.js
+++ b/www/manager6/storage/CephFSEdit.js
@@ -64,6 +64,31 @@ Ext.define('PVE.storage.CephFSInputPanel', {
 	    },
 	);
 
+	if (me.isCreate) {
+	    me.column1.push({
+		xtype: 'pveCephFSSelector',
+		nodename: me.nodename,
+		name: 'fs-name',
+		bind: {
+		    disabled: '{!pveceph}',
+		    submitValue: '{pveceph}',
+		    hidden: '{!pveceph}',
+		},
+		fieldLabel: gettext('FS Name'),
+		allowBlank: false,
+	    }, {
+		xtype: 'textfield',
+		nodename: me.nodename,
+		name: 'fs-name',
+		bind: {
+		    disabled: '{pveceph}',
+		    submitValue: '{!pveceph}',
+		    hidden: '{pveceph}',
+		},
+		fieldLabel: gettext('FS Name'),
+	    });
+	}
+
 	me.column2 = [
 	    {
 		xtype: 'pveContentTypeSelector',
-- 
2.30.2






* [pve-devel] [PATCH manager v2 10/11] ui: ceph/fs: allow creating multiple cephfs
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (9 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 09/11] ui: storage/cephfs: make ceph fs selectable Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 11/11] pveceph: add 'fs destroy' command Dominik Csapak
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

but only if there are any standby mds

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/ceph/FS.js | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/www/manager6/ceph/FS.js b/www/manager6/ceph/FS.js
index c620ec6e..1af5e6cc 100644
--- a/www/manager6/ceph/FS.js
+++ b/www/manager6/ceph/FS.js
@@ -86,12 +86,11 @@ Ext.define('PVE.NodeCephFSPanel', {
     viewModel: {
 	parent: null,
 	data: {
-	    cephfsConfigured: false,
 	    mdsCount: 0,
 	},
 	formulas: {
 	    canCreateFS: function(get) {
-		return !get('cephfsConfigured') && get('mdsCount') > 0;
+		return get('mdsCount') > 0;
 	    },
 	},
     },
@@ -125,7 +124,6 @@ Ext.define('PVE.NodeCephFSPanel', {
 		    }));
 		    // manages the "install ceph?" overlay
 		    PVE.Utils.monitor_ceph_installed(view, view.rstore, view.nodename, true);
-		    view.rstore.on('load', this.onLoad, this);
 		    view.on('destroy', () => view.rstore.stopUpdate());
 		},
 
@@ -140,15 +138,6 @@ Ext.define('PVE.NodeCephFSPanel', {
 			},
 		    });
 		},
-
-		onLoad: function(store, records, success) {
-		    var vm = this.getViewModel();
-		    if (!(success && records && records.length > 0)) {
-			vm.set('cephfsConfigured', false);
-			return;
-		    }
-		    vm.set('cephfsConfigured', true);
-		},
 	    },
 	    tbar: [
 		{
@@ -156,7 +145,6 @@ Ext.define('PVE.NodeCephFSPanel', {
 		    reference: 'createButton',
 		    handler: 'onCreate',
 		    bind: {
-			// only one CephFS per Ceph cluster makes sense for now
 			disabled: '{!canCreateFS}',
 		    },
 		},
@@ -193,7 +181,13 @@ Ext.define('PVE.NodeCephFSPanel', {
 		    vm.set('mdsCount', 0);
 		    return;
 		}
-		vm.set('mdsCount', records.length);
+		let count = 0;
+		for (const mds of records) {
+		    if (mds.data.state === 'up:standby') {
+			count++;
+		    }
+		}
+		vm.set('mdsCount', count);
 	    },
 	    cbind: {
 		nodename: '{nodename}',
-- 
2.30.2






* [pve-devel] [PATCH manager v2 11/11] pveceph: add 'fs destroy' command
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (10 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 10/11] ui: ceph/fs: allow creating multiple cephfs Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-25 14:01 ` [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs Dominik Csapak
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

with 'remove-storages' and 'remove-pools' as optional parameters

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/CLI/pveceph.pm    | 120 ++++++++++++++++++++++++++++++++++++++++++
 PVE/Ceph/Tools.pm     |  15 ++++++
 www/manager6/Utils.js |   1 +
 3 files changed, 136 insertions(+)

diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index b04d1346..995cfcd5 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -221,6 +221,125 @@ __PACKAGE__->register_method ({
 	return undef;
     }});
 
+my $get_storages = sub {
+    my ($fs, $is_default) = @_;
+
+    my $cfg = PVE::Storage::config();
+
+    my $storages = $cfg->{ids};
+    my $res = {};
+    foreach my $storeid (keys %$storages) {
+	my $curr = $storages->{$storeid};
+	next if $curr->{type} ne 'cephfs';
+	my $cur_fs = $curr->{'fs-name'};
+	$res->{$storeid} = $storages->{$storeid}
+	    if (!defined($cur_fs) && $is_default) || (defined($cur_fs) && $fs eq $cur_fs);
+    }
+
+    return $res;
+};
+
+__PACKAGE__->register_method ({
+    name => 'destroyfs',
+    path => 'destroyfs',
+    method => 'DELETE',
+    description => "Destroy a Ceph filesystem",
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    name => {
+		description => "The ceph filesystem name.",
+		type => 'string',
+	    },
+	    'remove-storages' => {
+		description => "Remove all pveceph-managed storages configured for this fs.",
+		type => 'boolean',
+		optional => 1,
+		default => 0,
+	    },
+	    'remove-pools' => {
+		description => "Remove data and metadata pools configured for this fs.",
+		type => 'boolean',
+		optional => 1,
+		default => 0,
+	    },
+	},
+    },
+    returns => { type => 'string' },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_inited();
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $user = $rpcenv->get_user();
+
+	my $fs_name = $param->{name};
+
+	my $fs;
+	my $fs_list = PVE::Ceph::Tools::ls_fs();
+	for my $entry (@$fs_list) {
+	    next if $entry->{name} ne $fs_name;
+	    $fs = $entry;
+	    last;
+	}
+	die "no such cephfs '$fs_name'\n" if !$fs;
+
+	my $worker = sub {
+	    my $rados = PVE::RADOS->new();
+
+	    if ($param->{'remove-storages'}) {
+		my $defaultfs;
+		my $fs_dump = $rados->mon_command({ prefix => "fs dump" });
+		for my $fs ($fs_dump->{filesystems}->@*) {
+		    next if $fs->{id} != $fs_dump->{default_fscid};
+		    $defaultfs = $fs->{mdsmap}->{fs_name};
+		}
+		warn "no default fs found, maybe not all relevant storages are removed\n"
+		    if !defined($defaultfs);
+
+		my $storages = $get_storages->($fs_name, $fs_name eq ($defaultfs // ''));
+		for my $storeid (keys %$storages) {
+		    my $store = $storages->{$storeid};
+		    if (!$store->{disable}) {
+			die "storage '$storeid' is not disabled, make sure to disable ".
+			    "and unmount the storage first\n";
+		    }
+		}
+
+		my $err;
+		for my $storeid (keys %$storages) {
+		    # skip external clusters, not managed by pveceph
+		    next if $storages->{$storeid}->{monhost};
+		    eval { PVE::API2::Storage::Config->delete({storage => $storeid}) };
+		    if ($@) {
+			warn "failed to remove storage '$storeid': $@\n";
+			$err = 1;
+		    }
+		}
+		die "failed to remove (some) storages - check log and remove manually!\n"
+		    if $err;
+	    }
+
+	    PVE::Ceph::Tools::destroy_fs($fs_name, $rados);
+
+	    if ($param->{'remove-pools'}) {
+		warn "removing metadata pool '$fs->{metadata_pool}'\n";
+		eval { PVE::Ceph::Tools::destroy_pool($fs->{metadata_pool}, $rados) };
+		warn "$@\n" if $@;
+
+		foreach my $pool ($fs->{data_pools}->@*) {
+		    warn "removing data pool '$pool'\n";
+		    eval { PVE::Ceph::Tools::destroy_pool($pool, $rados) };
+		    warn "$@\n" if $@;
+		}
+	    }
+
+	};
+	return $rpcenv->fork_worker('cephdestroyfs', $fs_name,  $user, $worker);
+    }});
+
 our $cmddef = {
     init => [ 'PVE::API2::Ceph', 'init', [], { node => $nodename } ],
     pool => {
@@ -256,6 +375,7 @@ our $cmddef = {
     destroypool => { alias => 'pool destroy' },
     fs => {
 	create => [ 'PVE::API2::Ceph::FS', 'createfs', [], { node => $nodename }],
+	destroy => [ __PACKAGE__, 'destroyfs', ['name'], { node => $nodename }],
     },
     osd => {
 	create => [ 'PVE::API2::Ceph::OSD', 'createosd', ['dev'], { node => $nodename }, $upid_exit],
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 2f818276..36d7788a 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -340,6 +340,21 @@ sub create_fs {
     });
 }
 
+sub destroy_fs {
+    my ($fs, $rados) = @_;
+
+    if (!defined($rados)) {
+	$rados = PVE::RADOS->new();
+    }
+
+    $rados->mon_command({
+	prefix => "fs rm",
+	fs_name => $fs,
+	'yes_i_really_mean_it' => JSON::true,
+	format => 'plain',
+    });
+}
+
 sub setup_pve_symlinks {
     # fail if we find a real file instead of a link
     if (-f $ceph_cfgpath) {
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 274d4db2..38615c30 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1831,6 +1831,7 @@ Ext.define('PVE.Utils', {
 	    cephdestroymon: ['Ceph Monitor', gettext('Destroy')],
 	    cephdestroyosd: ['Ceph OSD', gettext('Destroy')],
 	    cephdestroypool: ['Ceph Pool', gettext('Destroy')],
+	    cephdestroyfs: ['CephFS', gettext('Destroy')],
 	    cephfscreate: ['CephFS', gettext('Create')],
 	    cephsetpool: ['Ceph Pool', gettext('Edit')],
 	    cephsetflags: ['', gettext('Change global Ceph flags')],
-- 
2.30.2
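
To illustrate how 'remove-storages' decides which storage entries belong to a
given filesystem, here is a standalone sketch of the matching logic with a
made-up config hash (the real code reads PVE::Storage::config() and
additionally requires the matched storages to be disabled):

    use strict;
    use warnings;

    # hypothetical storage.cfg contents as a hash
    my $ids = {
        'cephfs'   => { type => 'cephfs' },                           # no fs-name -> default fs
        'backupfs' => { type => 'cephfs', 'fs-name' => 'backupfs' },
        'local'    => { type => 'dir' },
    };

    sub get_cephfs_storages {
        my ($storages, $fs, $is_default) = @_;

        my $res = {};
        for my $storeid (keys %$storages) {
            my $curr = $storages->{$storeid};
            next if $curr->{type} ne 'cephfs';
            my $cur_fs = $curr->{'fs-name'};
            $res->{$storeid} = $curr
                if (!defined($cur_fs) && $is_default) || (defined($cur_fs) && $fs eq $cur_fs);
        }
        return $res;
    }

    my $matches = get_cephfs_storages($ids, 'backupfs', 0);
    print "storages to remove: ", join(', ', sort keys %$matches), "\n";    # backupfs

Entries without 'fs-name' are only matched when the filesystem being destroyed
is the cluster's default fs.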






* [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (11 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 11/11] pveceph: add 'fs destroy' command Dominik Csapak
@ 2021-10-25 14:01 ` Dominik Csapak
  2021-10-27 10:15   ` Aaron Lauterer
  2021-10-27 10:48 ` [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Aaron Lauterer
  2021-11-11 17:04 ` [pve-devel] applied-series: " Thomas Lamprecht
  14 siblings, 1 reply; 18+ messages in thread
From: Dominik Csapak @ 2021-10-25 14:01 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 pveceph.adoc | 49 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 37 insertions(+), 12 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index aa7a20f..cceb1ca 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -809,28 +809,53 @@ Destroy CephFS
 WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
 undone!
 
-If you really want to destroy an existing CephFS, you first need to stop or
-destroy all metadata servers (`M̀DS`). You can destroy them either via the web
-interface or via the command line interface, by issuing
+To completely an cleanly remove a CephFS, the following steps are necessary:
 
+* Disconnect every non-{PVE} client (e.g. unmount the CephFS in guests).
+* Disable all related CephFS {PVE} storage entries (to prevent it from being
+  automatically mounted).
+* Remove all used resources from guests (e.g. ISOs) that are on the CephFS you
+  want to destroy.
+* Unmount the CephFS storages on all cluster nodes manually with
++
 ----
-pveceph mds destroy NAME
+umount /mnt/pve/<STORAGE-NAME>
 ----
-on each {pve} node hosting an MDS daemon.
-
-Then, you can remove (destroy) the CephFS by issuing
++
+Where `<STORAGE-NAME>` is the name of the CephFS storage in your {PVE}.
 
+* Now make sure that no metadata server (`MDS`) is running for that CephFS,
+  either by stopping or destroying them. This can be done either via the web
+  interface or via the command line interface, by issuing:
++
+----
+pveceph stop --service mds.NAME
 ----
-ceph fs rm NAME --yes-i-really-mean-it
++
+to stop them, or
++
+----
+pveceph mds destroy NAME
 ----
-on a single node hosting Ceph. After this, you may want to remove the created
-data and metadata pools, this can be done either over the Web GUI or the CLI
-with:
++
+to destroy them.
++
+Note that standby servers will automatically be promoted to active when an
+active `MDS` is stopped or removed, so it is best to first stop all standby
+servers.
 
+* Now you can destroy the CephFS with
++
 ----
-pveceph pool destroy NAME
+pveceph fs destroy NAME --remove-storages --remove-pools
 ----
++
+This will automatically destroy the underlying ceph pools as well as remove
+the storages from pve config.
 
+After these steps, the CephFS should be completely removed and if you have
+other CephFS instances, the stopped metadata servers can be started again
+to act as standbys.
 
 Ceph maintenance
 ----------------
-- 
2.30.2






* Re: [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs
  2021-10-25 14:01 ` [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs Dominik Csapak
@ 2021-10-27 10:15   ` Aaron Lauterer
  0 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-10-27 10:15 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

a few small things inline

On 10/25/21 16:01, Dominik Csapak wrote:
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>   pveceph.adoc | 49 +++++++++++++++++++++++++++++++++++++------------
>   1 file changed, 37 insertions(+), 12 deletions(-)
> 
> diff --git a/pveceph.adoc b/pveceph.adoc
> index aa7a20f..cceb1ca 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -809,28 +809,53 @@ Destroy CephFS
>   WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
>   undone!
>   
> -If you really want to destroy an existing CephFS, you first need to stop or
> -destroy all metadata servers (`M̀DS`). You can destroy them either via the web
> -interface or via the command line interface, by issuing
> +To completely an cleanly remove a CephFS, the following steps are necessary:
s/an/and/
>   
> +* Disconnect every non-{PVE} client (e.g. unmount the CephFS in guests).
> +* Disable all related CephFS {PVE} storage entries (to prevent it from being
> +  automatically mounted).
> +* Remove all used resources from guests (e.g. ISOs) that are on the CephFS you
> +  want to destroy.
> +* Unmount the CephFS storages on all cluster nodes manually with
> ++
>   ----
> -pveceph mds destroy NAME
> +umount /mnt/pve/<STORAGE-NAME>
>   ----
> -on each {pve} node hosting an MDS daemon.
> -
> -Then, you can remove (destroy) the CephFS by issuing
> ++
> +Where `<STORAGE-NAME>` is the name of the CephFS storage in your {PVE}.
>   
> +* Now make sure that no metadata server (`MDS`) is running for that CephFS,
> +  either by stopping or destroying them. This can be done either via the web
s/either via/via/

to avoid close repetition of `either`
> +  interface or via the command line interface, by issuing:
> ++
> +----
> +pveceph stop --service mds.NAME
>   ----
> -ceph fs rm NAME --yes-i-really-mean-it
> ++
> +to stop them, or
> ++
> +----
> +pveceph mds destroy NAME
>   ----
> -on a single node hosting Ceph. After this, you may want to remove the created
> -data and metadata pools, this can be done either over the Web GUI or the CLI
> -with:
> ++
> +to destroy them.
> ++
> +Note that standby servers will automatically be promoted to active when an
> +active `MDS` is stopped or removed, so it is best to first stop all standby
> +servers.
>   
> +* Now you can destroy the CephFS with
> ++
>   ----
> -pveceph pool destroy NAME
> +pveceph fs destroy NAME --remove-storages --remove-pools
>   ----
> ++
> +This will automatically destroy the underlying ceph pools as well as remove
> +the storages from pve config.
>   
> +After these steps, the CephFS should be completely removed and if you have
> +other CephFS instances, the stopped metadata servers can be started again
> +to act as standbys.
>   
>   Ceph maintenance
>   ----------------
> 





* Re: [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (12 preceding siblings ...)
  2021-10-25 14:01 ` [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs Dominik Csapak
@ 2021-10-27 10:48 ` Aaron Lauterer
  2021-11-11 17:04 ` [pve-devel] applied-series: " Thomas Lamprecht
  14 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-10-27 10:48 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Works like expected.

The one big problem I had previously, with the MDS not being ready, is gone: as one can now see in the task log, it waits for the MDS to become active before continuing.

Tested the removal procedure outlined in the docs patch and as long as the storage is still active, it gives a nice warning.

Of course, being lazy I did not unmount it on all the nodes first and once it is removed, the nodes and child items all show question marks in the tree view. Once it gets unmounted with -f -l (force & lazy) parameters, the nodes are shown as okay again in the GUI.

Not sure if we want to check against it not being mounted on all nodes before proceeding or if that is the responsibility of the admin.

Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>


On 10/25/21 16:01, Dominik Csapak wrote:
> this series adds support for multiple cephfs. no single patch fixes the bug
> on its own, so it's not mentioned in any commit subject... (feel free to amend
> a commit subject when applying if you find one patch most appropriate)
> 
> a user can already create multiple cephfs via 'pveceph' (or manually
> with the ceph tools), but the ui does not support it properly
> 
> the storage patch can be applied independently; it only adds a new parameter
> that does nothing if not set.
> 
> changes from v1:
> * moved 'destroyfs' from api to cli only
> * removed 'destroy cephfs' from the gui
> * added docs patch to document the exact steps on how to remove a cephfs
> * added 'disable' check on remove-storages
> * change 'is mds active' check to check for specific fs_name
> 
> pve-storage:
> 
> Dominik Csapak (1):
>    cephfs: add support for multiple ceph filesystems
> 
>   PVE/Storage/CephFSPlugin.pm | 8 ++++++++
>   1 file changed, 8 insertions(+)
> 
> pve-manager:
> 
> Dominik Csapak (11):
>    api: ceph-mds: get mds state when multiple ceph filesystems exist
>    ui: ceph: catch missing version for service list
>    api: cephfs: refactor {ls,create}_fs
>    api: cephfs: more checks on fs create
>    api: cephfs: add fs_name to 'is mds active' check
>    ui: ceph/ServiceList: refactor controller out
>    ui: ceph/fs: show fs for active mds
>    api: cephfs: add 'fs-name' for cephfs storage
>    ui: storage/cephfs: make ceph fs selectable
>    ui: ceph/fs: allow creating multiple cephfs
>    pveceph: add 'fs destroy' command
> 
>   PVE/API2/Ceph/FS.pm                 |  31 ++-
>   PVE/CLI/pveceph.pm                  | 120 +++++++++++
>   PVE/Ceph/Services.pm                |  33 +--
>   PVE/Ceph/Tools.pm                   |  51 +++++
>   www/manager6/Makefile               |   1 +
>   www/manager6/Utils.js               |   1 +
>   www/manager6/ceph/FS.js             |  24 +--
>   www/manager6/ceph/ServiceList.js    | 313 +++++++++++++++-------------
>   www/manager6/form/CephFSSelector.js |  42 ++++
>   www/manager6/storage/CephFSEdit.js  |  25 +++
>   10 files changed, 449 insertions(+), 192 deletions(-)
>   create mode 100644 www/manager6/form/CephFSSelector.js
> 
> pve-docs:
> 
> Dominik Csapak (1):
>    pveceph: improve documentation for destroying cephfs
> 
>   pveceph.adoc | 49 +++++++++++++++++++++++++++++++++++++------------
>   1 file changed, 37 insertions(+), 12 deletions(-)
> 





* [pve-devel] applied: [PATCH storage v2 1/1] cephfs: add support for multiple ceph filesystems
  2021-10-25 14:01 ` [pve-devel] [PATCH storage v2 1/1] cephfs: add support for " Dominik Csapak
@ 2021-11-05 12:54   ` Thomas Lamprecht
  0 siblings, 0 replies; 18+ messages in thread
From: Thomas Lamprecht @ 2021-11-05 12:54 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

On 25.10.21 16:01, Dominik Csapak wrote:
> by optionally saving the name of the cephfs
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  PVE/Storage/CephFSPlugin.pm | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
>

applied, thanks!





* [pve-devel] applied-series: [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems
  2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
                   ` (13 preceding siblings ...)
  2021-10-27 10:48 ` [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Aaron Lauterer
@ 2021-11-11 17:04 ` Thomas Lamprecht
  14 siblings, 0 replies; 18+ messages in thread
From: Thomas Lamprecht @ 2021-11-11 17:04 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

On 25.10.21 16:01, Dominik Csapak wrote:
> pve-manager:
> 
> Dominik Csapak (11):
>   api: ceph-mds: get mds state when multiple ceph filesystems exist
>   ui: ceph: catch missing version for service list
>   api: cephfs: refactor {ls,create}_fs
>   api: cephfs: more checks on fs create
>   api: cephfs: add fs_name to 'is mds active' check
>   ui: ceph/ServiceList: refactor controller out
>   ui: ceph/fs: show fs for active mds
>   api: cephfs: add 'fs-name' for cephfs storage
>   ui: storage/cephfs: make ceph fs selectable
>   ui: ceph/fs: allow creating multiple cephfs
>   pveceph: add 'fs destroy' command
> 
>  PVE/API2/Ceph/FS.pm                 |  31 ++-
>  PVE/CLI/pveceph.pm                  | 120 +++++++++++
>  PVE/Ceph/Services.pm                |  33 +--
>  PVE/Ceph/Tools.pm                   |  51 +++++
>  www/manager6/Makefile               |   1 +
>  www/manager6/Utils.js               |   1 +
>  www/manager6/ceph/FS.js             |  24 +--
>  www/manager6/ceph/ServiceList.js    | 313 +++++++++++++++-------------
>  www/manager6/form/CephFSSelector.js |  42 ++++
>  www/manager6/storage/CephFSEdit.js  |  25 +++
>  10 files changed, 449 insertions(+), 192 deletions(-)
>  create mode 100644 www/manager6/form/CephFSSelector.js
> 
> pve-docs:
> 
> Dominik Csapak (1):
>   pveceph: improve documentation for destroying cephfs
> 
>  pveceph.adoc | 49 +++++++++++++++++++++++++++++++++++++------------
>  1 file changed, 37 insertions(+), 12 deletions(-)
> 

applied remaining patches of the series + some followup to the docs one regarding
Aaron's feedback, thanks!





Thread overview: 18+ messages
2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH storage v2 1/1] cephfs: add support for " Dominik Csapak
2021-11-05 12:54   ` [pve-devel] applied: " Thomas Lamprecht
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 01/11] api: ceph-mds: get mds state when multiple ceph filesystems exist Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 02/11] ui: ceph: catch missing version for service list Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 03/11] api: cephfs: refactor {ls, create}_fs Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 04/11] api: cephfs: more checks on fs create Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 05/11] api: cephfs: add fs_name to 'is mds active' check Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 06/11] ui: ceph/ServiceList: refactor controller out Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 07/11] ui: ceph/fs: show fs for active mds Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 08/11] api: cephfs: add 'fs-name' for cephfs storage Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 09/11] ui: storage/cephfs: make ceph fs selectable Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 10/11] ui: ceph/fs: allow creating multiple cephfs Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 11/11] pveceph: add 'fs destroy' command Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs Dominik Csapak
2021-10-27 10:15   ` Aaron Lauterer
2021-10-27 10:48 ` [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Aaron Lauterer
2021-11-11 17:04 ` [pve-devel] applied-series: " Thomas Lamprecht
