From: Alexander Zeidler <a.zeidler@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Wed, 5 Feb 2025 11:08:50 +0100
Message-Id: <20250205100850.3-6-a.zeidler@proxmox.com>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20250205100850.3-1-a.zeidler@proxmox.com>
References: <20250205100850.3-1-a.zeidler@proxmox.com>
Subject: [pve-devel] [PATCH docs v2 6/6] pvecm: remove node: mention Ceph and its steps for safe removal

as it has already been missed in the past, or the proper procedure was
not known.

Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
v2:
* no changes

 pvecm.adoc | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/pvecm.adoc b/pvecm.adoc
index cffea6d..a65736d 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -320,6 +320,53 @@ replication automatically switches direction if a replicated VM is
 migrated, so by migrating a replicated VM from a node to be deleted,
 replication jobs will be set up to that node automatically.
 
+If the node to be removed has been configured for
+xref:chapter_pveceph[Ceph]:
+
+. Ensure that sufficient {pve} nodes with running OSDs (`up` and `in`)
+continue to exist.
++
+NOTE: By default, Ceph pools have a `size/min_size` of `3/2` and a
+full node as the `failure domain` in the object balancer
+xref:pve_ceph_device_classes[CRUSH]. So if fewer than `size` (`3`)
+nodes with running OSDs are online, data redundancy will be degraded.
+If fewer than `min_size` nodes are online, pool I/O will be blocked
+and affected guests may crash.
+
+. Ensure that sufficient xref:pve_ceph_monitors[monitors],
+xref:pve_ceph_manager[managers] and, if using CephFS,
+xref:pveceph_fs_mds[metadata servers] remain available.
+
+. To maintain data redundancy, Ceph triggers a data rebalance after
+each destruction of an OSD, especially after the last one on a node
+is destroyed. Therefore, ensure that the OSDs on the remaining nodes
+have sufficient free space left.
+
+. To remove Ceph from the node to be deleted, start by
+xref:pve_ceph_osd_destroy[destroying] its OSDs, one after the other.
+
+. Once the xref:pve_ceph_mon_and_ts[Ceph status] is `HEALTH_OK` again,
+proceed by:
+
+[arabic]
+.. destroying its xref:pveceph_fs_mds[metadata server] via the web
+interface at __Ceph -> CephFS__ or by running:
++
+----
+# pveceph mds destroy <local hostname>
+----
+
+.. xref:pveceph_destroy_mon[destroying its monitor]
+
+.. xref:pveceph_destroy_mgr[destroying its manager]
+
+. Finally, remove the now empty bucket ({pve} node to be removed) from
+the CRUSH hierarchy by running:
++
+----
+# ceph osd crush remove <hostname>
+----
+
 In the following example, we will remove the node hp4 from the cluster.
 
 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
-- 
2.39.5
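
Not part of the patch itself, but as a rough sketch of how the checks described
in the added steps could be performed: the following standard Ceph commands
(run as root on any remaining node; output and relevant pools depend on the
actual cluster) show pool replication settings, OSD availability and free
space, overall health, and the CRUSH hierarchy before and after the removal.

----
# Overall cluster health and daemon availability; the steps above
# expect HEALTH_OK before monitors, managers and metadata servers
# are destroyed:
ceph -s

# Replication settings (size/min_size) of each pool and how many OSDs
# are currently up and in:
ceph osd pool ls detail
ceph osd stat

# Utilization and free space of the OSDs on the remaining nodes:
ceph osd df tree

# CRUSH hierarchy; the bucket of the removed node should be gone after
# the final "ceph osd crush remove" step:
ceph osd crush tree
----

If `ceph -s` reports ongoing backfill or degraded placement groups after
destroying an OSD, it is prudent to wait for the rebalance to finish before
destroying the next one.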