Subject: Re: [pve-devel] [PATCH storage/manager] fix #3616: support multiple ceph filesystems
From: Aaron Lauterer
To: Proxmox VE development discussion, Dominik Csapak
Date: Wed, 20 Oct 2021 16:40:05 +0200
Message-ID: <5af97692-1579-9ea9-7ad1-ddb79081d89b@proxmox.com>
In-Reply-To: <20211019093353.2451987-1-d.csapak@proxmox.com>

On my test cluster, when creating the second or third Ceph FS, I ran into
the problem that the actual mounting and adding to the PVE storage config
failed, with the following in the task log:

-----
creating data pool 'foobar_data'...
pool foobar_data: applying application = cephfs
pool foobar_data: applying pg_num = 32
creating metadata pool 'foobar_metadata'...
pool foobar_metadata: applying pg_num = 8
configuring new CephFS 'foobar'
Successfully create CephFS 'foobar'
Adding 'foobar' to storage configuration...
TASK ERROR: adding storage for CephFS 'foobar' failed, check log and add manually! create storage failed: mount error: Job failed. See "journalctl -xe" for details.
------

The matching syslog:

------
Oct 20 15:20:04 cephtest1 systemd[1]: Mounting /mnt/pve/foobar...
Oct 20 15:20:04 cephtest1 mount[45484]: mount error: no mds server is up or the cluster is laggy
Oct 20 15:20:04 cephtest1 systemd[1]: mnt-pve-foobar.mount: Mount process exited, code=exited, status=32/n/a
Oct 20 15:20:04 cephtest1 systemd[1]: mnt-pve-foobar.mount: Failed with result 'exit-code'.
Oct 20 15:20:04 cephtest1 systemd[1]: Failed to mount /mnt/pve/foobar.
------

Adding the storage manually right after this worked fine. Seems like the MDS
might not be fast enough all the time.
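
Not sure what the cleanest fix is, but conceptually something like the
following could avoid that race: poll the new filesystem until its MDS map
reports a daemon in 'up:active' before attempting the mount / adding the
storage. Rough, untested sketch only; the helper name is made up and the
JSON layout of `ceph fs get` is from memory:

use JSON;

# rough sketch: wait until the given CephFS reports at least one MDS in
# state 'up:active', or give up after $timeout seconds
sub wait_for_active_mds {
    my ($fsname, $timeout) = @_;
    $timeout //= 30;

    for (1 .. $timeout) {
        # 'ceph fs get <fs> -f json' includes the mdsmap with per-daemon states
        my $json = `ceph fs get $fsname -f json 2>/dev/null`;
        if ($json) {
            my $data = eval { decode_json($json) };
            if ($data) {
                my $info = $data->{mdsmap}->{info} // {};
                for my $mds (values %$info) {
                    return 1 if ($mds->{state} // '') eq 'up:active';
                }
            }
        }
        sleep(1);
    }
    die "no active MDS for CephFS '$fsname' after ${timeout}s\n";
}

Alternatively, simply retrying the mount a few times with a short sleep
before giving up would probably also do.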
Regarding the removal of a Ceph FS, we had an off-list discussion which
resulted in the following (I hope I am not forgetting something):

The process needs a few manual steps that are hard to automate:
- disable the storage (so pvestatd does not auto-mount it again)
- unmount on all nodes
- stop the standby and active MDS (for this storage)
  At this point, any still existing mount will be hanging.
- remove the storage config and the pools

Since at least some of those need to be done manually on the CLI, it might
not even be worth it to have a "remove button" in the GUI, but rather a
well-documented procedure in the manual and the actual removal as part of
`pveceph`.
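
By hand, that procedure would currently look roughly like this (untested,
assuming the fs and the storage are both named 'foobar' and the MDS
instances use the node name as id; adapt as needed):

# disable the storage so pvestatd stops re-mounting it
pvesm set foobar --disable 1

# on every node: unmount it
umount /mnt/pve/foobar

# on the nodes running the active/standby MDS for this fs
systemctl stop ceph-mds@<nodename>.service

# take the fs down and remove it together with its pools
ceph fs fail foobar
ceph fs rm foobar --yes-i-really-mean-it
pveceph pool destroy foobar_data
pveceph pool destroy foobar_metadata

# finally drop the storage definition
pvesm remove foobar

The `pveceph` part could then cover at least the last block, with the
disabling/unmounting documented in the manual.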
On 10/19/21 11:33, Dominik Csapak wrote:
> this series adds support for multiple cephfs. no single patch fixes the
> bug, so it's in no commit subject... (feel free to change the commit
> subject when applying if you find one patch most appropriate?)
> 
> a user already can create multiple cephfs via 'pveceph' (or manually
> with the ceph tools), but the ui does not support it properly
> 
> the storage patch can be applied independently, it only adds a new
> parameter that does nothing if not set.
> 
> manager:
> 
> patches 1,2 enable basic gui support for showing correct info
> for multiple cephfs
> 
> patches 3,4,5 are mostly preparation for the following patches
> (though 4 enables some additional checks that should not hurt either way)
> 
> patch 6 enables additional gui support for multiple fs
> 
> patches 7,8 depend on the storage patch
> 
> patches 9,10,11 are for actually creating multiple cephfs via the gui,
> so those can be left out if we do not want to support that
> 
> ---
> so if we only want to support basic display functionality, we could only
> apply manager 1,2 & maybe 5+6
> 
> for being able to configure multiple cephfs on a ceph cluster, we'd need
> storage 1/1 and manager 7,8
> 
> sorry that it's so complicated, if wanted, i can ofc reorder the patches
> or send it in multiple series
> 
> pve-storage:
> 
> Dominik Csapak (1):
>   cephfs: add support for multiple ceph filesystems
> 
>  PVE/Storage/CephFSPlugin.pm | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> pve-manager:
> 
> Dominik Csapak (11):
>   api: ceph-mds: get mds state when multiple ceph filesystems exist
>   ui: ceph: catch missing version for service list
>   api: cephfs: refactor {ls,create}_fs
>   api: cephfs: more checks on fs create
>   ui: ceph/ServiceList: refactor controller out
>   ui: ceph/fs: show fs for active mds
>   api: cephfs: add 'fs-name' for cephfs storage
>   ui: storage/cephfs: make ceph fs selectable
>   ui: ceph/fs: allow creating multiple cephfs
>   api: cephfs: add destroy cephfs api call
>   ui: ceph/fs: allow destroying cephfs
> 
>  PVE/API2/Ceph/FS.pm                      | 148 +++++++++--
>  PVE/Ceph/Services.pm                     |  16 +-
>  PVE/Ceph/Tools.pm                        |  51 ++++
>  www/manager6/Makefile                    |   2 +
>  www/manager6/Utils.js                    |   1 +
>  www/manager6/ceph/FS.js                  |  52 +++-
>  www/manager6/ceph/ServiceList.js         | 313 ++++++++++++-----------
>  www/manager6/form/CephFSSelector.js      |  42 +++
>  www/manager6/storage/CephFSEdit.js       |  25 ++
>  www/manager6/window/SafeDestroyCephFS.js |  22 ++
>  10 files changed, 492 insertions(+), 180 deletions(-)
>  create mode 100644 www/manager6/form/CephFSSelector.js
>  create mode 100644 www/manager6/window/SafeDestroyCephFS.js
> 
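
One more note on the 'fs-name' option from the pve-storage patch: if I read
it right, a storage entry for a second fs in /etc/pve/storage.cfg would then
look something like this (illustration only, the option spelling is taken
from the patch subject and the content types are just an example):

cephfs: foobar
	path /mnt/pve/foobar
	content backup,iso,vztmpl
	fs-name foobar

and without fs-name set the plugin behaves as before, i.e. mounts the
default fs.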