From: Alexander Zeidler <a.zeidler@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH docs v2 6/6] pvecm: remove node: mention Ceph and its steps for safe removal
Date: Wed, 5 Feb 2025 11:08:50 +0100
Message-ID: <20250205100850.3-6-a.zeidler@proxmox.com>
In-Reply-To: <20250205100850.3-1-a.zeidler@proxmox.com>
as this has been missed in the past, or the proper procedure was not
known.
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
v2:
* no changes
pvecm.adoc | 47 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 47 insertions(+)
diff --git a/pvecm.adoc b/pvecm.adoc
index cffea6d..a65736d 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -320,6 +320,53 @@ replication automatically switches direction if a replicated VM is migrated, so
by migrating a replicated VM from a node to be deleted, replication jobs will be
set up to that node automatically.
+If the node to be removed has been configured for
+xref:chapter_pveceph[Ceph]:
+
+. Ensure that enough {pve} nodes with running OSDs (`up` and `in`)
+remain in the cluster.
++
+NOTE: By default, Ceph pools have a `size/min_size` of `3/2` and use
+a full node as `failure domain` in the
+xref:pve_ceph_device_classes[CRUSH] map. So if fewer than `size` (`3`)
+nodes with running OSDs are online, data redundancy will be degraded.
+If fewer than `min_size` (`2`) nodes are online, pool I/O will be
+blocked and affected guests may crash.
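++
+The status of all OSDs, grouped by node, can be checked, for example,
+with:
++
+----
+# ceph osd tree
+----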
+
+. Ensure that sufficient xref:pve_ceph_monitors[monitors],
+xref:pve_ceph_manager[managers] and, if using CephFS,
+xref:pveceph_fs_mds[metadata servers] remain available.
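++
+The currently available monitors, managers and metadata servers are
+listed, for example, in the output of:
++
+----
+# ceph -s
+----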
+
+. Destroying an OSD, especially the last one on a node, triggers a
+data rebalance to restore data redundancy. Therefore, ensure that the
+OSDs on the remaining nodes have sufficient free space left.
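++
+The utilization and free space of all OSDs can be checked, for
+example, with:
++
+----
+# ceph osd df tree
+----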
+
+. To remove Ceph from the node to be deleted, start by
+xref:pve_ceph_osd_destroy[destroying] its OSDs, one after the other.
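++
+For example, to take out, stop and finally destroy a single OSD
+(replace `<id>` with its OSD number, and wait for the rebalance to
+finish before stopping it):
++
+----
+# ceph osd out <id>
+# systemctl stop ceph-osd@<id>.service
+# pveceph osd destroy <id>
+----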
+
+. Once the xref:pve_ceph_mon_and_ts[Ceph status] is `HEALTH_OK` again,
+proceed by:
+
+[arabic]
+.. destroying its xref:pveceph_fs_mds[metadata server] via the web
+interface at __Ceph -> CephFS__ or by running:
++
+----
+# pveceph mds destroy <local hostname>
+----
+
+.. xref:pveceph_destroy_mon[destroying its monitor]
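++
+For example, by running (replace `<monid>` with the monitor ID, which
+on {pve} typically matches the hostname):
++
+----
+# pveceph mon destroy <monid>
+----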
+
+.. xref:pveceph_destroy_mgr[destroying its manager]
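++
+For example, by running (replace `<id>` with the manager ID, which on
+{pve} typically matches the hostname):
++
+----
+# pveceph mgr destroy <id>
+----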
+
+. Finally, remove the now empty bucket (the {pve} node to be removed)
+from the CRUSH hierarchy by running:
++
+----
+# ceph osd crush remove <hostname>
+----
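++
+The remaining CRUSH hierarchy can then be reviewed, for example, with:
++
+----
+# ceph osd crush tree
+----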
+
In the following example, we will remove the node hp4 from the cluster.
Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
--
2.39.5