From: Dominik Csapak <d.csapak@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Mon, 25 Oct 2021 16:01:39 +0200
Message-Id: <20211025140139.2015470-14-d.csapak@proxmox.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211025140139.2015470-1-d.csapak@proxmox.com>
References: <20211025140139.2015470-1-d.csapak@proxmox.com>
Subject: [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 pveceph.adoc | 49 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 37 insertions(+), 12 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index aa7a20f..cceb1ca 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -809,28 +809,53 @@ Destroy CephFS
 WARNING: Destroying a CephFS will render all of its data unusable. This cannot
 be undone!
 
-If you really want to destroy an existing CephFS, you first need to stop or
-destroy all metadata servers (`MDS`). You can destroy them either via the web
-interface or via the command line interface, by issuing
+To completely and cleanly remove a CephFS, the following steps are necessary:
 
+* Disconnect every non-{PVE} client (e.g. unmount the CephFS in guests).
+* Disable all related CephFS {PVE} storage entries (to prevent it from being
+  automatically mounted).
+* Remove all used resources from guests (e.g. ISOs) that are on the CephFS you
+  want to destroy.
+* Unmount the CephFS storages on all cluster nodes manually with
++
 ----
-pveceph mds destroy NAME
+umount /mnt/pve/<STORAGE-NAME>
 ----
-on each {pve} node hosting an MDS daemon.
-
-Then, you can remove (destroy) the CephFS by issuing
++
+Where `<STORAGE-NAME>` is the name of the CephFS storage in your {PVE}.
 
+* Now make sure that no metadata server (`MDS`) is running for that CephFS,
+  either by stopping or destroying them. This can be done either via the web
+  interface or via the command line interface, by issuing:
++
+----
+pveceph stop --service mds.NAME
 ----
-ceph fs rm NAME --yes-i-really-mean-it
++
+to stop them, or
++
+----
+pveceph mds destroy NAME
 ----
-on a single node hosting Ceph. After this, you may want to remove the created
-data and metadata pools, this can be done either over the Web GUI or the CLI
-with:
++
+to destroy them.
++
+Note that standby servers will automatically be promoted to active when an
+active `MDS` is stopped or removed, so it is best to first stop all standby
+servers.
 
+* Now you can destroy the CephFS with
++
 ----
-pveceph pool destroy NAME
+pveceph fs destroy NAME --remove-storages --remove-pools
 ----
++
+This will automatically destroy the underlying Ceph pools as well as remove
+the storages from the {PVE} config.
 
+After these steps, the CephFS should be completely removed and, if you have
+other CephFS instances, the stopped metadata servers can be started again
+to act as standbys.
 
 Ceph maintenance
 ----------------
-- 
2.30.2
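For reference, the full removal sequence described by the patch could look as
follows on an actual cluster. This is only a sketch: the storage name `cephfs`
and the MDS names `mds.nodeA`/`mds.nodeB` are made-up examples, not values
from the patch.

----
# Hypothetical run; "cephfs", "nodeA" and "nodeB" are assumed names.
pvesm set cephfs --disable 1          # disable the PVE storage entry
umount /mnt/pve/cephfs                # repeat on every cluster node
pveceph stop --service mds.nodeB      # stop the standby MDS first ...
pveceph stop --service mds.nodeA      # ... then the active one
pveceph fs destroy cephfs --remove-storages --remove-pools
----

Stopping the standby first avoids it being promoted to active when the active
MDS goes down, matching the note in the patch.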