From: Aaron Lauterer
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>, Dominik Csapak <d.csapak@proxmox.com>
Date: Wed, 27 Oct 2021 12:15:59 +0200
Subject: Re: [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs
Message-ID: <502326ed-cfae-b2b7-9a03-49dfb60d7869@proxmox.com>
In-Reply-To: <20211025140139.2015470-14-d.csapak@proxmox.com>
References: <20211025140139.2015470-1-d.csapak@proxmox.com> <20211025140139.2015470-14-d.csapak@proxmox.com>

a few small things inline

On 10/25/21 16:01, Dominik Csapak wrote:
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  pveceph.adoc | 49 +++++++++++++++++++++++++++++++++++++------------
>  1 file changed, 37 insertions(+), 12 deletions(-)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index aa7a20f..cceb1ca 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -809,28 +809,53 @@ Destroy CephFS
>  WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
>  undone!
>
> -If you really want to destroy an existing CephFS, you first need to stop or
> -destroy all metadata servers (`MDS`). You can destroy them either via the web
> -interface or via the command line interface, by issuing
> +To completely an cleanly remove a CephFS, the following steps are necessary:

s/an/and/

>
> +* Disconnect every non-{PVE} client (e.g. unmount the CephFS in guests).
> +* Disable all related CephFS {PVE} storage entries (to prevent it from being
> +  automatically mounted).
> +* Remove all used resources from guests (e.g. ISOs) that are on the CephFS you
> +  want to destroy.
> +* Unmount the CephFS storages on all cluster nodes manually with
> ++
> ----
> -pveceph mds destroy NAME
> +umount /mnt/pve/<STORAGE-NAME>
> ----
> -on each {pve} node hosting an MDS daemon.
> -
> -Then, you can remove (destroy) the CephFS by issuing
> ++
> +Where `<STORAGE-NAME>` is the name of the CephFS storage in your {PVE}.
>
> +* Now make sure that no metadata server (`MDS`) is running for that CephFS,
> +  either by stopping or destroying them. This can be done either via the web

s/either via/via/

to avoid close repetition of `either`

> +  interface or via the command line interface, by issuing:
> ++
> +----
> +pveceph stop --service mds.NAME
> ----
> -ceph fs rm NAME --yes-i-really-mean-it
> ++
> +to stop them, or
> ++
> +----
> +pveceph mds destroy NAME
> ----
> -on a single node hosting Ceph. After this, you may want to remove the created
> -data and metadata pools, this can be done either over the Web GUI or the CLI
> -with:
> ++
> +to destroy them.
> ++
> +Note that standby servers will automatically be promoted to active when an
> +active `MDS` is stopped or removed, so it is best to first stop all standby
> +servers.
>
> +* Now you can destroy the CephFS with
> ++
> ----
> -pveceph pool destroy NAME
> +pveceph fs destroy NAME --remove-storages --remove-pools
> ----
> ++
> +This will automatically destroy the underlying ceph pools as well as remove
> +the storages from pve config.
>
> +After these steps, the CephFS should be completely removed and if you have
> +other CephFS instances, the stopped metadata servers can be started again
> +to act as standbys.
>
>  Ceph maintenance
>  ----------------
>
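
For anyone skimming the thread later: condensed into plain CLI calls, the
procedure from the patch boils down to roughly the following sketch. The
storage name `cephfs` and the MDS id `pve01` are only placeholders here, not
taken from the patch.

----
# run on every cluster node once the storage entry is disabled
umount /mnt/pve/cephfs

# stop (or destroy) each metadata server, standbys first
pveceph stop --service mds.pve01
# or: pveceph mds destroy pve01

# remove the CephFS together with its pools and storage entries
pveceph fs destroy cephfs --remove-storages --remove-pools
----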