Date: Tue, 19 Mar 2024 17:48:06 +0100
From: "Stefan Sterz"
To: "Proxmox VE development discussion"
Subject: Re: [pve-devel] [PATCH docs] pveceph: document cluster shutdown
In-Reply-To: <20240319150015.109714-1-a.lauterer@proxmox.com>
References: <20240319150015.109714-1-a.lauterer@proxmox.com>

On Tue Mar 19, 2024 at 4:00 PM CET, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer
> ---
>  pveceph.adoc | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index 089ac80..7b493c5 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -1080,6 +1080,56 @@ scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-re
>  are executed.
>
>
> +[[pveceph_shutdown]]
> +Shutdown {pve} + Ceph HCI cluster
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +To shut down the whole {pve} + Ceph cluster, first stop all Ceph clients. This
> +will mainly be VMs and containers. If you have additional clients that might
> +access a Ceph FS or an installed RADOS GW, stop these as well.
> +High available guests will switch their state to 'stopped' when powered down

I think this should be "Highly available" or "High availability".

> +via the {pve} tooling.
> +
> +Once all clients, VMs and containers are off or not accessing the Ceph cluster
> +anymore, verify that the Ceph cluster is in a healthy state. Either via the Web UI
> +or the CLI:
> +
> +----
> +ceph -s
> +----
> +
> +Then enable the following OSD flags in the Ceph -> OSD panel or the CLI:
> +
> +----
> +ceph osd set noout
> +ceph osd set norecover
> +ceph osd set norebalance
> +ceph osd set nobackfill
> +ceph osd set nodown
> +ceph osd set pause
> +----
> +
> +This will halt all self-healing actions for Ceph and the 'pause' will stop any client IO.
> +
> +Start powering down the nodes one node at a time. Power down nodes with a
> +Monitor (MON) last.

Might benefit from re-phrasing, so that readers don't find out that the
MON nodes go last only once they are already in the middle of shutting
down. Something like:

Start by powering down the nodes without a monitor (MON). After these
nodes are down, also shut down the hosts with monitors.

> +
> +When powering on the cluster, start the nodes with Monitors (MONs) first. Once
> +all nodes are up and running, confirm that all Ceph services are up and running
> +before you unset the OSD flags:
> +
> +----
> +ceph osd unset noout
> +ceph osd unset norecover
> +ceph osd unset norebalance
> +ceph osd unset nobackfill
> +ceph osd unset nodown
> +ceph osd unset pause
> +----
> +
> +You can now start up the guests. High available guests will change their state

see above

> +to 'started' when they power on.
> +
>  Ceph Monitoring and Troubleshooting
>  -----------------------------------
>
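
One more thought, not a blocker: since the same six flags are set before
the shutdown and unset afterwards, the docs could optionally show a more
compact way to apply them, plus a quick check that they are really active
before any node is powered off. Rough, untested sketch, flag names taken
from the patch above:

----
# set all maintenance flags in one go; swap "set" for "unset" when
# powering the cluster back on
for flag in noout norecover norebalance nobackfill nodown pause; do
    ceph osd set "$flag"
done

# confirm the flags are active before shutting down any node
ceph osd dump | grep flags
----

Spelling the commands out one by one is probably clearer for the docs,
though, so feel free to ignore this.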