* [pve-devel] [PATCH storage/manager] fix #3616: support multiple ceph filesystems
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
this series adds support for multiple cephfs. no single patch fixes the bug
on its own, so the bug number is in no commit subject... (feel free to add it
to whichever commit subject you find most appropriate when applying)
a user can already create multiple cephfs via 'pveceph' (or manually
with the ceph tools), but the ui does not support them properly
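for reference, creating an additional cephfs on the cli already works roughly
like the following (illustrative sketch, exact option names may vary):
  # create a second cephfs named 'cephfs2' and add it as a storage
  pveceph fs create --name cephfs2 --add-storage
  # or manually with the ceph tools, given two pre-created pools:
  # ceph fs new cephfs2 cephfs2_metadata cephfs2_data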
the storage patch can be applied independently; it only adds a new parameter
that does nothing if not set.
manager:
patches 1,2 enable basic gui support for showing correct info
for multiple cephfs
patches 3,4,5 are mostly preparation for the following patches
(though 4 enables some additional checks that should not hurt either way)
patch 6 enables additional gui support for multiple fs
patches 7,8 depend on the storage patch
patches 9,10,11 are for actually creating multiple cephfs via the gui,
so those can be left out if we do not want to support that
---
so if we only want to support basic display functionality, we could apply
only manager 1,2 & maybe 5+6
for being able to configure multiple cephfs on a ceph cluster, we'd need
storage 1/1 and manager 7,8
sorry that it's so complicated; if wanted, i can of course reorder the patches
or send them as multiple series
pve-storage:
Dominik Csapak (1):
cephfs: add support for multiple ceph filesystems
PVE/Storage/CephFSPlugin.pm | 8 ++++++++
1 file changed, 8 insertions(+)
pve-manager:
Dominik Csapak (11):
api: ceph-mds: get mds state when multiple ceph filesystems exist
ui: ceph: catch missing version for service list
api: cephfs: refactor {ls,create}_fs
api: cephfs: more checks on fs create
ui: ceph/ServiceList: refactor controller out
ui: ceph/fs: show fs for active mds
api: cephfs: add 'fs-name' for cephfs storage
ui: storage/cephfs: make ceph fs selectable
ui: ceph/fs: allow creating multiple cephfs
api: cephfs: add destroy cephfs api call
ui: ceph/fs: allow destroying cephfs
PVE/API2/Ceph/FS.pm | 148 +++++++++--
PVE/Ceph/Services.pm | 16 +-
PVE/Ceph/Tools.pm | 51 ++++
www/manager6/Makefile | 2 +
www/manager6/Utils.js | 1 +
www/manager6/ceph/FS.js | 52 +++-
www/manager6/ceph/ServiceList.js | 313 ++++++++++++-----------
www/manager6/form/CephFSSelector.js | 42 +++
www/manager6/storage/CephFSEdit.js | 25 ++
www/manager6/window/SafeDestroyCephFS.js | 22 ++
10 files changed, 492 insertions(+), 180 deletions(-)
create mode 100644 www/manager6/form/CephFSSelector.js
create mode 100644 www/manager6/window/SafeDestroyCephFS.js
--
2.30.2
* [pve-devel] [PATCH storage 1/1] cephfs: add support for multiple ceph filesystems
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
by optionally saving the name of the cephfs to mount in the storage configuration
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/Storage/CephFSPlugin.pm | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
index 3b9a791..f587db7 100644
--- a/PVE/Storage/CephFSPlugin.pm
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -87,6 +87,7 @@ sub cephfs_mount {
my $secretfile = $cmd_option->{keyring};
my $server = $cmd_option->{mon_host} // PVE::CephConfig::get_monaddr_list($configfile);
my $type = 'ceph';
+ my $fs_name = $scfg->{'fs-name'};
my @opts = ();
if ($scfg->{fuse}) {
@@ -94,10 +95,12 @@ sub cephfs_mount {
push @opts, "ceph.id=$cmd_option->{userid}";
push @opts, "ceph.keyfile=$secretfile" if defined($secretfile);
push @opts, "ceph.conf=$configfile" if defined($configfile);
+ push @opts, "ceph.client_fs=$fs_name" if defined($fs_name);
} else {
push @opts, "name=$cmd_option->{userid}";
push @opts, "secretfile=$secretfile" if defined($secretfile);
push @opts, "conf=$configfile" if defined($configfile);
+ push @opts, "fs=$fs_name" if defined($fs_name);
}
push @opts, $scfg->{options} if $scfg->{options};
@@ -128,6 +131,10 @@ sub properties {
description => "Subdir to mount.",
type => 'string', format => 'pve-storage-path',
},
+ 'fs-name' => {
+ description => "The Ceph filesystem name.",
+ type => 'string', format => 'pve-configid',
+ },
};
}
@@ -148,6 +155,7 @@ sub options {
maxfiles => { optional => 1 },
keyring => { optional => 1 },
'prune-backups' => { optional => 1 },
+ 'fs-name' => { optional => 1 },
};
}
--
2.30.2
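To illustrate the effect (not part of the patch; monitor addresses, paths and
names are made up): with 'fs-name' set, the kernel mount gains an additional
'fs=<name>' option and the fuse mount a 'ceph.client_fs=<name>' option, so a
resulting kernel mount would roughly look like:
  mount -t ceph 10.0.0.1,10.0.0.2,10.0.0.3:/ /mnt/pve/cephfs2 \
    -o name=admin,secretfile=/etc/pve/priv/ceph/cephfs2.secret,conf=/etc/pve/ceph.conf,fs=cephfs2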
* [pve-devel] [PATCH manager 01/11] api: ceph-mds: get mds state when multiple ceph filesystems exist
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
by iterating over all of them and saving the filesystem name for the active ones.
this fixes the issue that an mds assigned to any filesystem other than the
first one in the list was wrongly shown as offline
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/Ceph/Services.pm | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/PVE/Ceph/Services.pm b/PVE/Ceph/Services.pm
index 0f557360..362e479b 100644
--- a/PVE/Ceph/Services.pm
+++ b/PVE/Ceph/Services.pm
@@ -179,13 +179,14 @@ sub get_cluster_mds_state {
}
my $add_state = sub {
- my ($mds) = @_;
+ my ($mds, $fsname) = @_;
my $state = {};
$state->{addr} = $mds->{addr};
$state->{rank} = $mds->{rank};
$state->{standby_replay} = $mds->{standby_replay} ? 1 : 0;
$state->{state} = $mds->{state};
+ $state->{fs_name} = $fsname;
$mds_state->{$mds->{name}} = $state;
};
@@ -198,13 +199,14 @@ sub get_cluster_mds_state {
$add_state->($mds);
}
- my $fs_info = $fsmap->{filesystems}->[0];
- my $active_mds = $fs_info->{mdsmap}->{info};
+ for my $fs_info (@{$fsmap->{filesystems}}) {
+ my $active_mds = $fs_info->{mdsmap}->{info};
- # normally there's only one active MDS, but we can have multiple active for
- # different ranks (e.g., different cephs path hierarchy). So just add all.
- foreach my $mds (values %$active_mds) {
- $add_state->($mds);
+ # normally there's only one active MDS, but we can have multiple active for
+ # different ranks (e.g., different cephs path hierarchy). So just add all.
+ foreach my $mds (values %$active_mds) {
+ $add_state->($mds, $fs_info->{mdsmap}->{fs_name});
+ }
}
return $mds_state;
--
2.30.2
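For reference, the resulting mds state hash now looks roughly like this for an
active daemon (name and address are made up; standby daemons are added as
before, with fs_name left undef):
  my $mds_state = {
      'mds-a' => {
          addr => '10.0.0.1:6801/1234567',
          rank => 0,
          standby_replay => 0,
          state => 'up:active',
          fs_name => 'cephfs', # new: the fs this active mds serves
      },
  };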
* [pve-devel] [PATCH manager 02/11] ui: ceph: catch missing version for service list
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
when a daemon is stopped, the version here is 'undefined'. catch that
instead of letting the template renderer run into an error.
this fixes the rendering of the grid backgrounds
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/ceph/ServiceList.js | 3 +++
1 file changed, 3 insertions(+)
diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
index 971de635..86cdcc8f 100644
--- a/www/manager6/ceph/ServiceList.js
+++ b/www/manager6/ceph/ServiceList.js
@@ -63,6 +63,9 @@ Ext.define('PVE.node.CephServiceList', {
xclass: 'Ext.app.ViewController',
render_version: function(value, metadata, rec) {
+ if (value === undefined) {
+ return '';
+ }
let view = this.getView();
let host = rec.data.host, nodev = [0];
if (view.nodeversions[host] !== undefined) {
--
2.30.2
* [pve-devel] [PATCH manager 03/11] api: cephfs: refactor {ls,create}_fs
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
no functional change intended
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Ceph/FS.pm | 22 ++++++----------------
PVE/Ceph/Tools.pm | 36 ++++++++++++++++++++++++++++++++++++
2 files changed, 42 insertions(+), 16 deletions(-)
diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index 82b5d616..cdced31a 100644
--- a/PVE/API2/Ceph/FS.pm
+++ b/PVE/API2/Ceph/FS.pm
@@ -59,15 +59,7 @@ __PACKAGE__->register_method ({
my $rados = PVE::RADOS->new();
- my $cephfs_list = $rados->mon_command({ prefix => "fs ls" });
- # we get something like:
- #{
- # 'metadata_pool_id' => 2,
- # 'data_pool_ids' => [ 1 ],
- # 'metadata_pool' => 'cephfs_metadata',
- # 'data_pools' => [ 'cephfs_data' ],
- # 'name' => 'cephfs',
- #}
+ my $cephfs_list = PVE::Ceph::Tools::ls_fs($rados);
my $res = [
map {{
@@ -161,13 +153,11 @@ __PACKAGE__->register_method ({
push @created_pools, $pool_metadata;
print "configuring new CephFS '$fs_name'\n";
- $rados->mon_command({
- prefix => "fs new",
- fs_name => $fs_name,
- metadata => $pool_metadata,
- data => $pool_data,
- format => 'plain',
- });
+ my $param = {
+ pool_metadata => $pool_metadata,
+ pool_data => $pool_data,
+ };
+ PVE::Ceph::Tools::create_fs($fs_name, $param, $rados);
};
if (my $err = $@) {
$@ = undef;
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 74ead6f7..2f818276 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -304,6 +304,42 @@ sub destroy_pool {
});
}
+# we get something like:
+#[{
+# 'metadata_pool_id' => 2,
+# 'data_pool_ids' => [ 1 ],
+# 'metadata_pool' => 'cephfs_metadata',
+# 'data_pools' => [ 'cephfs_data' ],
+# 'name' => 'cephfs',
+#}]
+sub ls_fs {
+ my ($rados) = @_;
+
+ if (!defined($rados)) {
+ $rados = PVE::RADOS->new();
+ }
+
+ my $res = $rados->mon_command({ prefix => "fs ls" });
+
+ return $res;
+}
+
+sub create_fs {
+ my ($fs, $param, $rados) = @_;
+
+ if (!defined($rados)) {
+ $rados = PVE::RADOS->new();
+ }
+
+ $rados->mon_command({
+ prefix => "fs new",
+ fs_name => $fs,
+ metadata => $param->{pool_metadata},
+ data => $param->{pool_data},
+ format => 'plain',
+ });
+}
+
sub setup_pve_symlinks {
# fail if we find a real file instead of a link
if (-f $ceph_cfgpath) {
--
2.30.2
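As a usage sketch for the two new helpers (pool and fs names are just examples):
  my $rados = PVE::RADOS->new();

  # list all existing ceph filesystems
  my $fs_list = PVE::Ceph::Tools::ls_fs($rados);

  # create a new one from two pre-created pools
  PVE::Ceph::Tools::create_fs('cephfs2', {
      pool_metadata => 'cephfs2_metadata',
      pool_data => 'cephfs2_data',
  }, $rados);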
* [pve-devel] [PATCH manager 04/11] api: cephfs: more checks on fs create
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
namely whether the fs already exists, and whether there is currently a
standby mds that can be used for the new fs.
previously, only one cephfs was possible, so these checks were not
necessary. now with pacific, it is possible to have multiple cephfs,
and we should check for that.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Ceph/FS.pm | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index cdced31a..845c4fbd 100644
--- a/PVE/API2/Ceph/FS.pm
+++ b/PVE/API2/Ceph/FS.pm
@@ -128,8 +128,14 @@ __PACKAGE__->register_method ({
die "ceph pools '$pool_data' and/or '$pool_metadata' already exist\n"
if $existing_pools->{$pool_data} || $existing_pools->{$pool_metadata};
+ my $fs = PVE::Ceph::Tools::ls_fs($rados);
+ die "ceph fs '$fs_name' already exists\n"
+ if (grep { $_->{name} eq $fs_name } @$fs);
+
my $running_mds = PVE::Ceph::Services::get_cluster_mds_state($rados);
die "no running Metadata Server (MDS) found!\n" if !scalar(keys %$running_mds);
+ die "no standby Metadata Server (MDS) found!\n"
+ if !grep { $_->{state} eq 'up:standby' } values(%$running_mds);
PVE::Storage::assert_sid_unused($fs_name) if $param->{add_storage};
--
2.30.2
* [pve-devel] [PATCH manager 05/11] ui: ceph/ServiceList: refactor controller out
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
we want to reuse that controller type by overriding some functionality
in the future, so factor it out.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/ceph/ServiceList.js | 302 ++++++++++++++++---------------
1 file changed, 153 insertions(+), 149 deletions(-)
diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
index 86cdcc8f..d5ba2efa 100644
--- a/www/manager6/ceph/ServiceList.js
+++ b/www/manager6/ceph/ServiceList.js
@@ -44,172 +44,176 @@ Ext.define('PVE.CephCreateService', {
},
});
-Ext.define('PVE.node.CephServiceList', {
- extend: 'Ext.grid.GridPanel',
- xtype: 'pveNodeCephServiceList',
-
- onlineHelp: 'chapter_pveceph',
- emptyText: gettext('No such service configured.'),
-
- stateful: true,
+Ext.define('PVE.node.CephServiceController', {
+ extend: 'Ext.app.ViewController',
+ alias: 'controller.CephServiceList',
- // will be called when the store loads
- storeLoadCallback: Ext.emptyFn,
- // if set to true, does shows the ceph install mask if needed
- showCephInstallMask: false,
+ render_version: function(value, metadata, rec) {
+ if (value === undefined) {
+ return '';
+ }
+ let view = this.getView();
+ let host = rec.data.host, nodev = [0];
+ if (view.nodeversions[host] !== undefined) {
+ nodev = view.nodeversions[host].version.parts;
+ }
- controller: {
- xclass: 'Ext.app.ViewController',
+ let icon = '';
+ if (PVE.Utils.compare_ceph_versions(view.maxversion, nodev) > 0) {
+ icon = PVE.Utils.get_ceph_icon_html('HEALTH_UPGRADE');
+ } else if (PVE.Utils.compare_ceph_versions(nodev, value) > 0) {
+ icon = PVE.Utils.get_ceph_icon_html('HEALTH_OLD');
+ } else if (view.mixedversions) {
+ icon = PVE.Utils.get_ceph_icon_html('HEALTH_OK');
+ }
+ return icon + value;
+ },
- render_version: function(value, metadata, rec) {
- if (value === undefined) {
- return '';
+ getMaxVersions: function(store, records, success) {
+ if (!success || records.length < 1) {
+ return;
+ }
+ let me = this;
+ let view = me.getView();
+
+ view.nodeversions = records[0].data.node;
+ view.maxversion = [];
+ view.mixedversions = false;
+ for (const [_nodename, data] of Object.entries(view.nodeversions)) {
+ let res = PVE.Utils.compare_ceph_versions(data.version.parts, view.maxversion);
+ if (res !== 0 && view.maxversion.length > 0) {
+ view.mixedversions = true;
}
- let view = this.getView();
- let host = rec.data.host, nodev = [0];
- if (view.nodeversions[host] !== undefined) {
- nodev = view.nodeversions[host].version.parts;
+ if (res > 0) {
+ view.maxversion = data.version.parts;
}
+ }
+ },
- let icon = '';
- if (PVE.Utils.compare_ceph_versions(view.maxversion, nodev) > 0) {
- icon = PVE.Utils.get_ceph_icon_html('HEALTH_UPGRADE');
- } else if (PVE.Utils.compare_ceph_versions(nodev, value) > 0) {
- icon = PVE.Utils.get_ceph_icon_html('HEALTH_OLD');
- } else if (view.mixedversions) {
- icon = PVE.Utils.get_ceph_icon_html('HEALTH_OK');
- }
- return icon + value;
- },
+ init: function(view) {
+ if (view.pveSelNode) {
+ view.nodename = view.pveSelNode.data.node;
+ }
+ if (!view.nodename) {
+ throw "no node name specified";
+ }
- getMaxVersions: function(store, records, success) {
- if (!success || records.length < 1) {
- return;
- }
- let me = this;
- let view = me.getView();
-
- view.nodeversions = records[0].data.node;
- view.maxversion = [];
- view.mixedversions = false;
- for (const [_nodename, data] of Object.entries(view.nodeversions)) {
- let res = PVE.Utils.compare_ceph_versions(data.version.parts, view.maxversion);
- if (res !== 0 && view.maxversion.length > 0) {
- view.mixedversions = true;
- }
- if (res > 0) {
- view.maxversion = data.version.parts;
- }
- }
- },
+ if (!view.type) {
+ throw "no type specified";
+ }
- init: function(view) {
- if (view.pveSelNode) {
- view.nodename = view.pveSelNode.data.node;
- }
- if (!view.nodename) {
- throw "no node name specified";
- }
+ view.versionsstore = Ext.create('Proxmox.data.UpdateStore', {
+ autoStart: true,
+ interval: 10000,
+ storeid: `ceph-versions-${view.type}-list${view.nodename}`,
+ proxy: {
+ type: 'proxmox',
+ url: "/api2/json/cluster/ceph/metadata?scope=versions",
+ },
+ });
+ view.versionsstore.on('load', this.getMaxVersions, this);
+ view.on('destroy', view.versionsstore.stopUpdate);
+
+ view.rstore = Ext.create('Proxmox.data.UpdateStore', {
+ autoStart: true,
+ interval: 3000,
+ storeid: `ceph-${view.type}-list${view.nodename}`,
+ model: 'ceph-service-list',
+ proxy: {
+ type: 'proxmox',
+ url: `/api2/json/nodes/${view.nodename}/ceph/${view.type}`,
+ },
+ });
- if (!view.type) {
- throw "no type specified";
- }
+ view.setStore(Ext.create('Proxmox.data.DiffStore', {
+ rstore: view.rstore,
+ sorters: [{ property: 'name' }],
+ }));
- view.versionsstore = Ext.create('Proxmox.data.UpdateStore', {
- autoStart: true,
- interval: 10000,
- storeid: `ceph-versions-${view.type}-list${view.nodename}`,
- proxy: {
- type: 'proxmox',
- url: "/api2/json/cluster/ceph/metadata?scope=versions",
- },
- });
- view.versionsstore.on('load', this.getMaxVersions, this);
- view.on('destroy', view.versionsstore.stopUpdate);
-
- view.rstore = Ext.create('Proxmox.data.UpdateStore', {
- autoStart: true,
- interval: 3000,
- storeid: `ceph-${view.type}-list${view.nodename}`,
- model: 'ceph-service-list',
- proxy: {
- type: 'proxmox',
- url: `/api2/json/nodes/${view.nodename}/ceph/${view.type}`,
- },
- });
+ if (view.storeLoadCallback) {
+ view.rstore.on('load', view.storeLoadCallback, this);
+ }
+ view.on('destroy', view.rstore.stopUpdate);
- view.setStore(Ext.create('Proxmox.data.DiffStore', {
- rstore: view.rstore,
- sorters: [{ property: 'name' }],
- }));
+ if (view.showCephInstallMask) {
+ PVE.Utils.monitor_ceph_installed(view, view.rstore, view.nodename, true);
+ }
+ },
- if (view.storeLoadCallback) {
- view.rstore.on('load', view.storeLoadCallback, this);
- }
- view.on('destroy', view.rstore.stopUpdate);
+ service_cmd: function(rec, cmd) {
+ let view = this.getView();
+ if (!rec.data.host) {
+ Ext.Msg.alert(gettext('Error'), "entry has no host");
+ return;
+ }
+ Proxmox.Utils.API2Request({
+ url: `/nodes/${rec.data.host}/ceph/${cmd}`,
+ method: 'POST',
+ params: { service: view.type + '.' + rec.data.name },
+ success: function(response, options) {
+ Ext.create('Proxmox.window.TaskProgress', {
+ autoShow: true,
+ upid: response.result.data,
+ taskDone: () => view.rstore.load(),
+ });
+ },
+ failure: (response, _opts) => Ext.Msg.alert(gettext('Error'), response.htmlStatus),
+ });
+ },
+ onChangeService: function(button) {
+ let me = this;
+ let record = me.getView().getSelection()[0];
+ me.service_cmd(record, button.action);
+ },
- if (view.showCephInstallMask) {
- PVE.Utils.monitor_ceph_installed(view, view.rstore, view.nodename, true);
- }
- },
+ showSyslog: function() {
+ let view = this.getView();
+ let rec = view.getSelection()[0];
+ let service = `ceph-${view.type}@${rec.data.name}`;
+ Ext.create('Ext.window.Window', {
+ title: `${gettext('Syslog')}: ${service}`,
+ autoShow: true,
+ modal: true,
+ width: 800,
+ height: 400,
+ layout: 'fit',
+ items: [{
+ xtype: 'proxmoxLogView',
+ url: `/api2/extjs/nodes/${rec.data.host}/syslog?service=${encodeURIComponent(service)}`,
+ log_select_timespan: 1,
+ }],
+ });
+ },
- service_cmd: function(rec, cmd) {
- let view = this.getView();
- if (!rec.data.host) {
- Ext.Msg.alert(gettext('Error'), "entry has no host");
- return;
- }
- Proxmox.Utils.API2Request({
- url: `/nodes/${rec.data.host}/ceph/${cmd}`,
- method: 'POST',
- params: { service: view.type + '.' + rec.data.name },
- success: function(response, options) {
- Ext.create('Proxmox.window.TaskProgress', {
- autoShow: true,
- upid: response.result.data,
- taskDone: () => view.rstore.load(),
- });
- },
- failure: (response, _opts) => Ext.Msg.alert(gettext('Error'), response.htmlStatus),
- });
- },
- onChangeService: function(button) {
- let me = this;
- let record = me.getView().getSelection()[0];
- me.service_cmd(record, button.action);
- },
+ onCreate: function() {
+ let view = this.getView();
+ Ext.create('PVE.CephCreateService', {
+ autoShow: true,
+ nodename: view.nodename,
+ subject: view.getTitle(),
+ type: view.type,
+ taskDone: () => view.rstore.load(),
+ });
+ },
+});
- showSyslog: function() {
- let view = this.getView();
- let rec = view.getSelection()[0];
- let service = `ceph-${view.type}@${rec.data.name}`;
- Ext.create('Ext.window.Window', {
- title: `${gettext('Syslog')}: ${service}`,
- autoShow: true,
- modal: true,
- width: 800,
- height: 400,
- layout: 'fit',
- items: [{
- xtype: 'proxmoxLogView',
- url: `/api2/extjs/nodes/${rec.data.host}/syslog?service=${encodeURIComponent(service)}`,
- log_select_timespan: 1,
- }],
- });
- },
+Ext.define('PVE.node.CephServiceList', {
+ extend: 'Ext.grid.GridPanel',
+ xtype: 'pveNodeCephServiceList',
- onCreate: function() {
- let view = this.getView();
- Ext.create('PVE.CephCreateService', {
- autoShow: true,
- nodename: view.nodename,
- subject: view.getTitle(),
- type: view.type,
- taskDone: () => view.rstore.load(),
- });
- },
- },
+ onlineHelp: 'chapter_pveceph',
+ emptyText: gettext('No such service configured.'),
+
+ stateful: true,
+
+ // will be called when the store loads
+ storeLoadCallback: Ext.emptyFn,
+
+ // if set to true, does shows the ceph install mask if needed
+ showCephInstallMask: false,
+
+ controller: 'CephServiceList',
tbar: [
{
--
2.30.2
* [pve-devel] [PATCH manager 06/11] ui: ceph/fs: show fs for active mds
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
so that the user can see which mds is responsible for which cephfs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/ceph/FS.js | 2 +-
www/manager6/ceph/ServiceList.js | 14 ++++++++++++++
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/www/manager6/ceph/FS.js b/www/manager6/ceph/FS.js
index 90362586..c620ec6e 100644
--- a/www/manager6/ceph/FS.js
+++ b/www/manager6/ceph/FS.js
@@ -183,7 +183,7 @@ Ext.define('PVE.NodeCephFSPanel', {
},
},
{
- xtype: 'pveNodeCephServiceList',
+ xtype: 'pveNodeCephMDSList',
title: gettext('Metadata Servers'),
stateId: 'grid-ceph-mds',
type: 'mds',
diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
index d5ba2efa..f2b2cbbd 100644
--- a/www/manager6/ceph/ServiceList.js
+++ b/www/manager6/ceph/ServiceList.js
@@ -48,6 +48,7 @@ Ext.define('PVE.node.CephServiceController', {
extend: 'Ext.app.ViewController',
alias: 'controller.CephServiceList',
+ render_status: (value, metadata, rec) => value,
render_version: function(value, metadata, rec) {
if (value === undefined) {
@@ -305,6 +306,7 @@ Ext.define('PVE.node.CephServiceList', {
header: gettext('Status'),
flex: 1,
sortable: false,
+ renderer: 'render_status',
dataIndex: 'state',
},
{
@@ -341,6 +343,7 @@ Ext.define('PVE.node.CephServiceList', {
fields: [
'addr',
'name',
+ 'fs_name',
'rank',
'host',
'quorum',
@@ -356,3 +359,14 @@ Ext.define('PVE.node.CephServiceList', {
idProperty: 'name',
});
});
+
+Ext.define('PVE.node.CephMDSList', {
+ extend: 'PVE.node.CephServiceList',
+ xtype: 'pveNodeCephMDSList',
+
+ controller: {
+ type: 'CephServiceList',
+ render_status: (value, mD, rec) => rec.data.fs_name ? `${value} (${rec.data.fs_name})` : value,
+ },
+});
+
--
2.30.2
* [pve-devel] [PATCH manager 07/11] api: cephfs: add 'fs-name' for cephfs storage
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
so that we can uniquely identify the cephfs (in case of multiple)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Ceph/FS.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index 845c4fbd..8bf71524 100644
--- a/PVE/API2/Ceph/FS.pm
+++ b/PVE/API2/Ceph/FS.pm
@@ -202,6 +202,7 @@ __PACKAGE__->register_method ({
type => 'cephfs',
storage => $fs_name,
content => 'backup,iso,vztmpl',
+ 'fs-name' => $fs_name,
})
};
die "adding storage for CephFS '$fs_name' failed, check log ".
--
2.30.2
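For illustration, the storage entry created with 'add_storage' would then end
up in the storage configuration roughly like this (the storage/fs name is just
an example):
  cephfs: cephfs2
        content backup,iso,vztmpl
        fs-name cephfs2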
* [pve-devel] [PATCH manager 08/11] ui: storage/cephfs: make ceph fs selectable
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
by adding a CephFSSelector and using it in the CephFSEdit window
(similar to the poolselector/textfield)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/form/CephFSSelector.js | 42 +++++++++++++++++++++++++++++
www/manager6/storage/CephFSEdit.js | 25 +++++++++++++++++
3 files changed, 68 insertions(+)
create mode 100644 www/manager6/form/CephFSSelector.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 3d1778c2..a759db87 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -26,6 +26,7 @@ JSSRC= \
form/CacheTypeSelector.js \
form/CalendarEvent.js \
form/CephPoolSelector.js \
+ form/CephFSSelector.js \
form/CompressionSelector.js \
form/ContentTypeSelector.js \
form/ControllerSelector.js \
diff --git a/www/manager6/form/CephFSSelector.js b/www/manager6/form/CephFSSelector.js
new file mode 100644
index 00000000..3c86e3cf
--- /dev/null
+++ b/www/manager6/form/CephFSSelector.js
@@ -0,0 +1,42 @@
+Ext.define('PVE.form.CephFSSelector', {
+ extend: 'Ext.form.field.ComboBox',
+ alias: 'widget.pveCephFSSelector',
+
+ allowBlank: false,
+ valueField: 'name',
+ displayField: 'name',
+ editable: false,
+ queryMode: 'local',
+
+ initComponent: function() {
+ var me = this;
+
+ if (!me.nodename) {
+ throw "no nodename given";
+ }
+
+ var store = Ext.create('Ext.data.Store', {
+ fields: ['name'],
+ sorters: 'name',
+ proxy: {
+ type: 'proxmox',
+ url: '/api2/json/nodes/' + me.nodename + '/ceph/fs',
+ },
+ });
+
+ Ext.apply(me, {
+ store: store,
+ });
+
+ me.callParent();
+
+ store.load({
+ callback: function(rec, op, success) {
+ if (success && rec.length > 0) {
+ me.select(rec[0]);
+ }
+ },
+ });
+ },
+
+});
diff --git a/www/manager6/storage/CephFSEdit.js b/www/manager6/storage/CephFSEdit.js
index 1f5246cd..92fdfe63 100644
--- a/www/manager6/storage/CephFSEdit.js
+++ b/www/manager6/storage/CephFSEdit.js
@@ -64,6 +64,31 @@ Ext.define('PVE.storage.CephFSInputPanel', {
},
);
+ if (me.isCreate) {
+ me.column1.push({
+ xtype: 'pveCephFSSelector',
+ nodename: me.nodename,
+ name: 'fs-name',
+ bind: {
+ disabled: '{!pveceph}',
+ submitValue: '{pveceph}',
+ hidden: '{!pveceph}',
+ },
+ fieldLabel: gettext('FS Name'),
+ allowBlank: false,
+ }, {
+ xtype: 'textfield',
+ nodename: me.nodename,
+ name: 'fs-name',
+ bind: {
+ disabled: '{pveceph}',
+ submitValue: '{!pveceph}',
+ hidden: '{pveceph}',
+ },
+ fieldLabel: gettext('FS Name'),
+ });
+ }
+
me.column2 = [
{
xtype: 'pveContentTypeSelector',
--
2.30.2
* [pve-devel] [PATCH manager 09/11] ui: ceph/fs: allow creating multiple cephfs
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
but only if there is at least one standby mds
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/ceph/FS.js | 21 ++++++++-------------
1 file changed, 8 insertions(+), 13 deletions(-)
diff --git a/www/manager6/ceph/FS.js b/www/manager6/ceph/FS.js
index c620ec6e..a3fa3672 100644
--- a/www/manager6/ceph/FS.js
+++ b/www/manager6/ceph/FS.js
@@ -86,12 +86,11 @@ Ext.define('PVE.NodeCephFSPanel', {
viewModel: {
parent: null,
data: {
- cephfsConfigured: false,
mdsCount: 0,
},
formulas: {
canCreateFS: function(get) {
- return !get('cephfsConfigured') && get('mdsCount') > 0;
+ return get('mdsCount') > 0;
},
},
},
@@ -125,7 +124,6 @@ Ext.define('PVE.NodeCephFSPanel', {
}));
// manages the "install ceph?" overlay
PVE.Utils.monitor_ceph_installed(view, view.rstore, view.nodename, true);
- view.rstore.on('load', this.onLoad, this);
view.on('destroy', () => view.rstore.stopUpdate());
},
@@ -141,14 +139,6 @@ Ext.define('PVE.NodeCephFSPanel', {
});
},
- onLoad: function(store, records, success) {
- var vm = this.getViewModel();
- if (!(success && records && records.length > 0)) {
- vm.set('cephfsConfigured', false);
- return;
- }
- vm.set('cephfsConfigured', true);
- },
},
tbar: [
{
@@ -156,7 +146,6 @@ Ext.define('PVE.NodeCephFSPanel', {
reference: 'createButton',
handler: 'onCreate',
bind: {
- // only one CephFS per Ceph cluster makes sense for now
disabled: '{!canCreateFS}',
},
},
@@ -193,7 +182,13 @@ Ext.define('PVE.NodeCephFSPanel', {
vm.set('mdsCount', 0);
return;
}
- vm.set('mdsCount', records.length);
+ let count = 0;
+ for (const mds of records) {
+ if (mds.data.state === 'up:standby') {
+ count++;
+ }
+ }
+ vm.set('mdsCount', count);
},
cbind: {
nodename: '{nodename}',
--
2.30.2
* [pve-devel] [PATCH manager 10/11] api: cephfs: add destroy cephfs api call
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
with 'remove-storages' and 'remove-pools' as optional parameters
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Ceph/FS.pm | 119 ++++++++++++++++++++++++++++++++++++++++++
PVE/Ceph/Tools.pm | 15 ++++++
www/manager6/Utils.js | 1 +
3 files changed, 135 insertions(+)
diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index 8bf71524..a325c4bd 100644
--- a/PVE/API2/Ceph/FS.pm
+++ b/PVE/API2/Ceph/FS.pm
@@ -217,4 +217,123 @@ __PACKAGE__->register_method ({
}
});
+my $get_storages = sub {
+ my ($fs, $is_default) = @_;
+
+ my $cfg = PVE::Storage::config();
+
+ my $storages = $cfg->{ids};
+ my $res = {};
+ foreach my $storeid (keys %$storages) {
+ my $curr = $storages->{$storeid};
+ next if $curr->{type} ne 'cephfs';
+ my $cur_fs = $curr->{'fs-name'};
+ $res->{$storeid} = $storages->{$storeid}
+ if (!defined($cur_fs) && $is_default) || (defined($cur_fs) && $fs eq $cur_fs);
+ }
+
+ return $res;
+};
+
+__PACKAGE__->register_method ({
+ name => 'destroyfs',
+ path => '{name}',
+ method => 'DELETE',
+ description => "Destroy a Ceph filesystem",
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', [ 'Sys.Modify' ]],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ name => {
+ description => "The ceph filesystem name.",
+ type => 'string',
+ },
+ 'remove-storages' => {
+ description => "Remove all pveceph-managed storages configured for this fs.",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
+ 'remove-pools' => {
+ description => "Remove data and metadata pools configured for this fs.",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
+ },
+ },
+ returns => { type => 'string' },
+ code => sub {
+ my ($param) = @_;
+
+ PVE::Ceph::Tools::check_ceph_inited();
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $user = $rpcenv->get_user();
+ $rpcenv->check($user, '/storage', ['Datastore.Allocate'])
+ if $param->{remove_storages};
+
+ my $fs_name = $param->{name};
+
+ my $fs;
+ my $fs_list = PVE::Ceph::Tools::ls_fs();
+ for my $entry (@$fs_list) {
+ next if $entry->{name} ne $fs_name;
+ $fs = $entry;
+ last;
+ }
+ die "no such cephfs '$fs_name'\n" if !$fs;
+
+ my $worker = sub {
+ my $rados = PVE::RADOS->new();
+
+ my $defaultfs;
+ if ($param->{'remove-storages'}) {
+ my $fs_dump = $rados->mon_command({ prefix => "fs dump" });
+ for my $fs ($fs_dump->{filesystems}->@*) {
+ next if $fs->{id} != $fs_dump->{default_fscid};
+ $defaultfs = $fs->{mdsmap}->{fs_name};
+ }
+ warn "no default fs found, maybe not all relevant storages are removed\n"
+ if !defined($defaultfs);
+ }
+
+ PVE::Ceph::Tools::destroy_fs($fs_name, $rados);
+
+ if ($param->{'remove-pools'}) {
+ warn "removing metadata pool '$fs->{metadata_pool}'\n";
+ eval { PVE::Ceph::Tools::destroy_pool($fs->{metadata_pool}, $rados) };
+ warn "$@\n" if $@;
+
+ foreach my $pool ($fs->{data_pools}->@*) {
+ warn "removing data pool '$pool'\n";
+ eval { PVE::Ceph::Tools::destroy_pool($pool, $rados) };
+ warn "$@\n" if $@;
+ }
+ }
+
+ if ($param->{'remove-storages'}) {
+ my $storages = $get_storages->($fs_name, $fs_name eq ($defaultfs // ''));
+ my $err;
+ foreach my $storeid (keys %$storages) {
+ # skip external clusters, not managed by pveceph
+ next if $storages->{$storeid}->{monhost};
+ eval { PVE::API2::Storage::Config->delete({storage => $storeid}) };
+ if ($@) {
+ warn "failed to remove storage '$storeid': $@\n";
+ $err = 1;
+ }
+ }
+ die "failed to remove (some) storages - check log and remove manually!\n"
+ if $err;
+ }
+ };
+ return $rpcenv->fork_worker('cephdestroyfs', $fs_name, $user, $worker);
+ }});
+
1;
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 2f818276..36d7788a 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -340,6 +340,21 @@ sub create_fs {
});
}
+sub destroy_fs {
+ my ($fs, $rados) = @_;
+
+ if (!defined($rados)) {
+ $rados = PVE::RADOS->new();
+ }
+
+ $rados->mon_command({
+ prefix => "fs rm",
+ fs_name => $fs,
+ 'yes_i_really_mean_it' => JSON::true,
+ format => 'plain',
+ });
+}
+
sub setup_pve_symlinks {
# fail if we find a real file instead of a link
if (-f $ceph_cfgpath) {
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index ee92cd43..3c385ea9 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1830,6 +1830,7 @@ Ext.define('PVE.Utils', {
cephdestroymon: ['Ceph Monitor', gettext('Destroy')],
cephdestroyosd: ['Ceph OSD', gettext('Destroy')],
cephdestroypool: ['Ceph Pool', gettext('Destroy')],
+ cephdestroyfs: ['CephFS', gettext('Destroy')],
cephfscreate: ['CephFS', gettext('Create')],
cephsetpool: ['Ceph Pool', gettext('Edit')],
cephsetflags: ['', gettext('Change global Ceph flags')],
--
2.30.2
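For illustration, the new api call could then be triggered like this (node and
fs name are examples; the two parameters are booleans):
  pvesh delete /nodes/pveceph1/ceph/fs/cephfs2 --remove-storages 1 --remove-pools 1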
* [pve-devel] [PATCH manager 11/11] ui: ceph/fs: allow destroying cephfs
From: Dominik Csapak @ 2021-10-19 9:33 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/ceph/FS.js | 35 ++++++++++++++++++++++++
www/manager6/window/SafeDestroyCephFS.js | 22 +++++++++++++++
3 files changed, 58 insertions(+)
create mode 100644 www/manager6/window/SafeDestroyCephFS.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index a759db87..ccf9872b 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -104,6 +104,7 @@ JSSRC= \
window/Prune.js \
window/Restore.js \
window/SafeDestroyGuest.js \
+ window/SafeDestroyCephFS.js \
window/Settings.js \
window/Snapshot.js \
window/StartupEdit.js \
diff --git a/www/manager6/ceph/FS.js b/www/manager6/ceph/FS.js
index a3fa3672..1a8eca26 100644
--- a/www/manager6/ceph/FS.js
+++ b/www/manager6/ceph/FS.js
@@ -139,6 +139,34 @@ Ext.define('PVE.NodeCephFSPanel', {
});
},
+ onRemove: function() {
+ let me = this;
+ let view = me.getView();
+ let selection = view.getSelection();
+ if (!selection || selection.length < 1) {
+ return;
+ }
+
+ let rec = selection[0];
+ let fsname = rec.data.name;
+
+ Ext.create('PVE.window.SafeDestroyCephFS', {
+ showProgress: true,
+ url: `/nodes/${view.nodename}/ceph/fs/${fsname}`,
+ params: {
+ 'remove-storages': 1,
+ },
+ item: {
+ type: 'CephFS',
+ id: fsname,
+ },
+ taskName: 'cephdestroyfs',
+ autoShow: true,
+ listeners: {
+ destroy: () => view.rstore.load(),
+ },
+ });
+ },
},
tbar: [
{
@@ -149,6 +177,13 @@ Ext.define('PVE.NodeCephFSPanel', {
disabled: '{!canCreateFS}',
},
},
+ {
+ xtype: 'proxmoxButton',
+ text: gettext('Remove CephFS'),
+ reference: 'removeButton',
+ disabled: true,
+ handler: 'onRemove',
+ },
],
columns: [
{
diff --git a/www/manager6/window/SafeDestroyCephFS.js b/www/manager6/window/SafeDestroyCephFS.js
new file mode 100644
index 00000000..08e910ac
--- /dev/null
+++ b/www/manager6/window/SafeDestroyCephFS.js
@@ -0,0 +1,22 @@
+Ext.define('PVE.window.SafeDestroyCephFS', {
+ extend: 'Proxmox.window.SafeDestroy',
+ alias: 'widget.pveSafeDestroyCephFS',
+
+ additionalItems: [
+ {
+ xtype: 'proxmoxcheckbox',
+ name: 'remove-pools',
+ reference: 'poolsCheckbox',
+ boxLabel: gettext('Remove associated pools.'),
+ checked: true,
+ },
+ ],
+
+ getParams: function() {
+ let me = this;
+
+ me.params['remove-pools'] = me.lookup('poolsCheckbox').checked ? 1 : 0;
+
+ return me.callParent();
+ },
+});
--
2.30.2
* Re: [pve-devel] [PATCH storage/manager] fix #3616: support multiple ceph filesystems
From: Aaron Lauterer @ 2021-10-20 14:40 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On my test cluster, when creating the 2nd or 3rd Ceph FS, I ran into the problem that the actual mounting and adding to the PVE storage config failed with the following in the task log:
-----
creating data pool 'foobar_data'...
pool foobar_data: applying application = cephfs
pool foobar_data: applying pg_num = 32
creating metadata pool 'foobar_metadata'...
pool foobar_metadata: applying pg_num = 8
configuring new CephFS 'foobar'
Successfully create CephFS 'foobar'
Adding 'foobar' to storage configuration...
TASK ERROR: adding storage for CephFS 'foobar' failed, check log and add manually! create storage failed: mount error: Job failed. See "journalctl -xe" for details.
------
The matching syslog:
------
Oct 20 15:20:04 cephtest1 systemd[1]: Mounting /mnt/pve/foobar...
Oct 20 15:20:04 cephtest1 mount[45484]: mount error: no mds server is up or the cluster is laggy
Oct 20 15:20:04 cephtest1 systemd[1]: mnt-pve-foobar.mount: Mount process exited, code=exited, status=32/n/a
Oct 20 15:20:04 cephtest1 systemd[1]: mnt-pve-foobar.mount: Failed with result 'exit-code'.
Oct 20 15:20:04 cephtest1 systemd[1]: Failed to mount /mnt/pve/foobar.
------
Adding the storage manually right after this worked fine. Seems like the MDS might not be fast enough all the time.
Regarding the removal of a Ceph FS, we had an off-list discussion which resulted in the following (I hope I am not forgetting something):
The process needs a few manual steps that are hard to automate:
- disable storage (so pvestatd does not auto mount it again)
- unmount on all nodes
- stop standby and active (for this storage) MDS
At this point, any still existing mount will be hanging
- remove storage cfg and pools
Since at least some of those steps need to be done manually on the CLI, it might not even be worth having a "remove" button in the GUI; a well-documented procedure in the manual, with the actual removal as part of `pveceph`, might be the better option.
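A rough sketch of what such a documented manual procedure could look like
(storage, fs, pool and mds names are examples, not a tested recipe):
  # 1) disable the storage so pvestatd does not mount it again
  pvesm set cephfs2 --disable 1
  # 2) unmount it on all nodes
  umount /mnt/pve/cephfs2
  # 3) stop the standby and active MDS for this fs (remaining mounts will hang now)
  systemctl stop ceph-mds@mds-a.service
  # 4) remove the ceph fs, its pools and the storage configuration
  ceph fs rm cephfs2 --yes-i-really-mean-it
  pveceph pool destroy cephfs2_data
  pveceph pool destroy cephfs2_metadata
  pvesm remove cephfs2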