From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
	by lore.proxmox.com (Postfix) with ESMTPS id A11051FF38E
	for ; Tue, 28 May 2024 13:53:59 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
	by firstgate.proxmox.com (Proxmox) with ESMTP id 3899E15CED;
	Tue, 28 May 2024 13:54:23 +0200 (CEST)
From: Aaron Lauterer
To: pve-devel@lists.proxmox.com
Date: Tue, 28 May 2024 13:54:20 +0200
Message-Id: <20240528115420.167342-1-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
X-SPAM-LEVEL: Spam detection results: 0
	AWL -0.037 Adjusted score from AWL reputation of From: address
	BAYES_00 -1.9 Bayes spam probability is 0 to 1%
	DMARC_MISSING 0.1 Missing DMARC policy
	KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
	SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record
	SPF_PASS -0.001 SPF: sender matches SPF record
	T_SCC_BODY_TEXT_LINE -0.01 -
Subject: [pve-devel] [PATCH docs v3] pveceph: document cluster shutdown
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Reply-To: Proxmox VE development discussion
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: pve-devel-bounces@lists.proxmox.com
Sender: "pve-devel" 

Signed-off-by: Aaron Lauterer
---
changes: incorporated additional feedback regarding phrasing, structure and
spelling

 pveceph.adoc | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/pveceph.adoc b/pveceph.adoc
index 089ac80..9101ba5 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -1080,6 +1080,56 @@ scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-re
 are executed. 
+[[pveceph_shutdown]]
+Shutdown {pve} + Ceph HCI cluster
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To shut down the whole {pve} + Ceph cluster, first stop all Ceph clients. These
+will mainly be VMs and containers. If you have additional clients that might
+access a Ceph FS or an installed RADOS GW, stop these as well.
+Highly available guests will switch their state to 'stopped' when powered down
+via the {pve} tooling.
+
+Once all clients, VMs and containers are off or no longer accessing the Ceph
+cluster, verify that the Ceph cluster is in a healthy state, either via the web
+UI or the CLI:
+
+----
+ceph -s
+----
+
+To disable all self-healing actions and to pause any client IO in the Ceph
+cluster, enable the following OSD flags in the **Ceph -> OSD** panel or via the
+CLI:
+
+----
+ceph osd set noout
+ceph osd set norecover
+ceph osd set norebalance
+ceph osd set nobackfill
+ceph osd set nodown
+ceph osd set pause
+----
+
+Start powering down the nodes without a monitor (MON). After these nodes are
+down, continue by shutting down the nodes with monitors on them.
+
+When powering on the cluster, start the nodes with monitors (MONs) first. Once
+all nodes are up and running, confirm that all Ceph services are operational
+before you unset the OSD flags again:
+
+----
+ceph osd unset pause
+ceph osd unset nodown
+ceph osd unset nobackfill
+ceph osd unset norebalance
+ceph osd unset norecover
+ceph osd unset noout
+----
+
+You can now start up the guests. Highly available guests will change their state
+to 'started' when they power on.
+
 Ceph Monitoring and Troubleshooting
 -----------------------------------
-- 
2.39.2


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
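The set/unset sequence the patch documents can be sketched as a small shell helper. This is a hypothetical illustration, not part of the patch: `ceph_cmd` merely echoes the commands for a dry run; on a real cluster it would invoke the actual `ceph` CLI on a MON node instead.

```shell
#!/bin/sh
# FLAGS lists the OSD flags in the order the documentation sets them
# before shutdown; unsetting happens in the reverse order after power-on.
FLAGS="noout norecover norebalance nobackfill nodown pause"

# Hypothetical dry-run wrapper: prints the command instead of running it.
ceph_cmd() {
    echo "ceph $*"
}

# Before shutdown: set the flags in the documented order.
set_flags() {
    for f in $FLAGS; do ceph_cmd osd set "$f"; done
}

# After power-on, once all Ceph services are healthy: unset the flags
# in reverse order, matching the patch.
unset_flags() {
    rev=""
    for f in $FLAGS; do rev="$f $rev"; done
    for f in $rev; do ceph_cmd osd unset "$f"; done
}

set_flags
unset_flags
```

Note that the unset sequence is simply the set sequence reversed, so `pause` is lifted first and `noout` last, as in the patched documentation.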