From: Dominik Csapak <d.csapak@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs
Date: Mon, 25 Oct 2021 16:01:39 +0200
Message-ID: <20211025140139.2015470-14-d.csapak@proxmox.com>
In-Reply-To: <20211025140139.2015470-1-d.csapak@proxmox.com>

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
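A few CLI sketches for the steps below; notes only, not meant for the
commit message.

For disabling the related CephFS storage entries, something like the
following should work ("cephfs" is a placeholder for the actual storage
name, `--disable` being the regular storage config option):

----
# mark the storage entry as disabled, so no node mounts it automatically
pvesm set cephfs --disable 1
----
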
 pveceph.adoc | 49 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 37 insertions(+), 12 deletions(-)
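
To check that no MDS is still active for the filesystem before destroying
it, something like this should do (`<FS-NAME>` being a placeholder for the
filesystem name):

----
# show active and standby MDS daemons for the given filesystem
ceph fs status <FS-NAME>
----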

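To bring the stopped metadata servers back as standbys afterwards, the
counterpart of the `pveceph stop` call from the patch should work, e.g.:

----
# start a previously stopped MDS again (NAME as in the stop example)
pveceph start --service mds.NAME
----
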
diff --git a/pveceph.adoc b/pveceph.adoc
index aa7a20f..cceb1ca 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -809,28 +809,53 @@ Destroy CephFS
 WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
 undone!
 
-If you really want to destroy an existing CephFS, you first need to stop or
-destroy all metadata servers (`MDS`). You can destroy them either via the web
-interface or via the command line interface, by issuing
+To completely and cleanly remove a CephFS, the following steps are necessary:
 
+* Disconnect every non-{PVE} client (e.g. unmount the CephFS in guests).
+* Disable all related CephFS {PVE} storage entries (to prevent the CephFS
+  from being mounted automatically).
+* Remove all resources used by guests (e.g. ISOs) that reside on the CephFS
+  you want to destroy.
+* Unmount the CephFS storages on all cluster nodes manually with
++
 ----
-pveceph mds destroy NAME
+umount /mnt/pve/<STORAGE-NAME>
 ----
-on each {pve} node hosting an MDS daemon.
-
-Then, you can remove (destroy) the CephFS by issuing
++
+Where `<STORAGE-NAME>` is the name of the CephFS storage in your {PVE} cluster.
 
+* Now make sure that no metadata servers (`MDS`) are running for that CephFS,
+  by either stopping or destroying them. This can be done via the web
+  interface or via the command line interface, by issuing:
++
+----
+pveceph stop --service mds.NAME
 ----
-ceph fs rm NAME --yes-i-really-mean-it
++
+to stop them, or
++
+----
+pveceph mds destroy NAME
 ----
-on a single node hosting Ceph. After this, you may want to remove the created
-data and metadata pools, this can be done either over the Web GUI or the CLI
-with:
++
+to destroy them.
++
+Note that standby servers will automatically be promoted to active when an
+active `MDS` is stopped or removed, so it is best to first stop all standby
+servers.
 
+* Now you can destroy the CephFS with
++
 ----
-pveceph pool destroy NAME
+pveceph fs destroy NAME --remove-storages --remove-pools
 ----
++
+This will automatically destroy the underlying Ceph pools as well as remove
+the storages from the {PVE} configuration.
 
+After these steps, the CephFS should be completely removed and, if you have
+other CephFS instances, the stopped metadata servers can be started again
+to act as standbys.
 
 Ceph maintenance
 ----------------
-- 
2.30.2


Thread overview: 18+ messages
2021-10-25 14:01 [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH storage v2 1/1] cephfs: add support for " Dominik Csapak
2021-11-05 12:54   ` [pve-devel] applied: " Thomas Lamprecht
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 01/11] api: ceph-mds: get mds state when multple ceph filesystems exist Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 02/11] ui: ceph: catch missing version for service list Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 03/11] api: cephfs: refactor {ls, create}_fs Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 04/11] api: cephfs: more checks on fs create Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 05/11] api: cephfs: add fs_name to 'is mds active' check Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 06/11] ui: ceph/ServiceList: refactor controller out Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 07/11] ui: ceph/fs: show fs for active mds Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 08/11] api: cephfs: add 'fs-name' for cephfs storage Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 09/11] ui: storage/cephfs: make ceph fs selectable Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 10/11] ui: ceph/fs: allow creating multiple cephfs Dominik Csapak
2021-10-25 14:01 ` [pve-devel] [PATCH manager v2 11/11] pveceph: add 'fs destroy' command Dominik Csapak
2021-10-25 14:01 ` Dominik Csapak [this message]
2021-10-27 10:15   ` [pve-devel] [PATCH docs v2 1/1] pveceph: improve documentation for destroying cephfs Aaron Lauterer
2021-10-27 10:48 ` [pve-devel] [PATCH storage/manager/docs v2] fix #3616: support multiple ceph filesystems Aaron Lauterer
2021-11-11 17:04 ` [pve-devel] applied-series: " Thomas Lamprecht
