From: Alexander Zeidler <a.zeidler@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH docs 6/6] pvecm: remove node: mention Ceph and its steps for safe removal
Date: Mon,  3 Feb 2025 15:28:01 +0100
Message-ID: <20250203142801.3-6-a.zeidler@proxmox.com>
In-Reply-To: <20250203142801.3-1-a.zeidler@proxmox.com>

as this has been missed in the past, or the proper procedure was not
known.

Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
 pvecm.adoc | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/pvecm.adoc b/pvecm.adoc
index 15dda4e..8026de4 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -320,6 +320,53 @@ replication automatically switches direction if a replicated VM is migrated, so
 by migrating a replicated VM from a node to be deleted, replication jobs will be
 set up to that node automatically.
 
+If the node to be removed has been configured for
+xref:chapter_pveceph[Ceph]:
+
+. Ensure that enough {pve} nodes with running OSDs (`up` and `in`)
+remain in the cluster after the removal.
++
+NOTE: By default, Ceph pools have a `size/min_size` of `3/2` and use a
+full node as the `failure domain` in the placement algorithm
+xref:pve_ceph_device_classes[CRUSH]. So if fewer than `size` (`3`)
+nodes with running OSDs are online, data redundancy is degraded. If
+fewer than `min_size` (`2`) nodes are online, pool I/O is blocked and
+affected guests may crash.
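++
+The OSD distribution across nodes as well as the pool settings can be
+checked, for example, with the following commands, where `<pool name>`
+is a placeholder for one of the existing pools:
++
+----
+# ceph osd tree
+# ceph osd pool get <pool name> size
+# ceph osd pool get <pool name> min_size
+----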
+
+. Ensure that sufficient xref:pve_ceph_monitors[monitors],
+xref:pve_ceph_manager[managers] and, if using CephFS,
+xref:pveceph_fs_mds[metadata servers] remain available.
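++
+The currently available monitors, managers and, if configured,
+metadata servers can be listed, for example, with:
++
+----
+# ceph mon stat
+# ceph mgr stat
+# ceph fs status
+----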
+
+. To maintain data redundancy, destroying an OSD, and especially the
+last one on a node, triggers a data rebalance. Therefore, ensure that
+the OSDs on the remaining nodes have enough free space left.
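++
+The current usage and free space of all OSDs can be reviewed, for
+example, with:
++
+----
+# ceph osd df tree
+----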
+
+. To remove Ceph from the node to be deleted, start by
+xref:pve_ceph_osd_destroy[destroying] its OSDs, one after the other.
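++
+Before destroying an OSD, it can be checked whether this is currently
+safe, for example with:
++
+----
+# ceph osd safe-to-destroy osd.<id>
+----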
+
+. Once the xref:pve_ceph_mon_and_ts[Ceph status] is `HEALTH_OK` again,
+proceed by:
+
+[arabic]
+.. destroying its xref:pveceph_fs_mds[metadata server] via the web
+interface at __Ceph -> CephFS__ or by running:
++
+----
+# pveceph mds destroy <local hostname>
+----
+
+.. xref:pveceph_destroy_mon[destroying its monitor]
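++
+For example, via the CLI (see the linked section for details):
++
+----
+# pveceph mon destroy <mon ID>
+----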
+
+.. xref:pveceph_destroy_mgr[destroying its manager]
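++
+Likewise via the CLI, for example:
++
+----
+# pveceph mgr destroy <mgr ID>
+----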
+
+. Finally, remove the now-empty bucket (the {pve} node to be removed)
+from the CRUSH hierarchy by running:
++
+----
+# ceph osd crush remove <hostname>
+----
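++
+Afterwards, the removed node should no longer appear in the CRUSH
+hierarchy, which can be verified, for example, with:
++
+----
+# ceph osd crush tree
+----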
+
 In the following example, we will remove the node hp4 from the cluster.
 
 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
-- 
2.39.5


