Date: Tue, 19 Mar 2024 17:48:06 +0100
Message-Id: <CZXVP198MJ54.3D25DPCW4C9M4@proxmox.com>
From: "Stefan Sterz" <s.sterz@proxmox.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>
References: <20240319150015.109714-1-a.lauterer@proxmox.com>
In-Reply-To: <20240319150015.109714-1-a.lauterer@proxmox.com>
Subject: Re: [pve-devel] [PATCH docs] pveceph: document cluster shutdown

On Tue Mar 19, 2024 at 4:00 PM CET, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>  pveceph.adoc | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index 089ac80..7b493c5 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -1080,6 +1080,56 @@ scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-re
>  are executed.
>
>
> +[[pveceph_shutdown]]
> +Shutdown {pve} + Ceph HCI cluster
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +To shut down the whole {pve} + Ceph cluster, first stop all Ceph clients. This
> +will mainly be VMs and containers. If you have additional clients that might
> +access a Ceph FS or an installed RADOS GW, stop these as well.
> +High available guests will switch their state to 'stopped' when powered down

I think this should be "Highly available" or "High availability".

> +via the {pve} tooling.
> +
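
Since the section is a how-to, maybe also mention the commands for
shutting down guests manually. A rough sketch (VMID 100 is just a
placeholder here):

----
qm shutdown 100   # cleanly shut down VM 100
pct shutdown 100  # cleanly shut down container 100
----
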
> +Once all clients, VMs and containers are off or not accessing the Ceph cluster
> +anymore, verify that the Ceph cluster is in a healthy state. Either via the Web UI
> +or the CLI:
> +
> +----
> +ceph -s
> +----
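
Small aside: if someone wants to script this, `ceph health` prints just
the status string, so a wrapper could bail out early on anything but
HEALTH_OK. Rough sketch:

----
if [ "$(ceph health)" != "HEALTH_OK" ]; then
    echo "ceph cluster not healthy, aborting shutdown" >&2
    exit 1
fi
----
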
> +
> +Then enable the following OSD flags in the Ceph -> OSD panel or the CLI:
> +
> +----
> +ceph osd set noout
> +ceph osd set norecover
> +ceph osd set norebalance
> +ceph osd set nobackfill
> +ceph osd set nodown
> +ceph osd set pause
> +----
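
nit: if this is ever scripted, the flags could also be set in a loop to
avoid repetition:

----
for flag in noout norecover norebalance nobackfill nodown pause; do
    ceph osd set "$flag"
done
----
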
> +
> +This will halt all self-healing actions for Ceph and the 'pause' will stop any client IO.
> +
> +Start powering down the nodes one node at a time. Power down nodes with a
> +Monitor (MON) last.

Might benefit from re-phrasing, so that people don't read about the MON
ordering only when they are already in the middle of shutting down
nodes:

Start powering down your nodes without a monitor (MON). After these
nodes are down, also shut down hosts with monitors.
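
If someone is unsure which hosts are (still) running a MON, `ceph mon
stat` lists the monitor hosts and the current quorum:

----
ceph mon stat
----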

> +
> +When powering on the cluster, start the nodes with Monitors (MONs) first. Once
> +all nodes are up and running, confirm that all Ceph services are up and running
> +before you unset the OSD flags:
> +
> +----
> +ceph osd unset noout
> +ceph osd unset norecover
> +ceph osd unset norebalance
> +ceph osd unset nobackfill
> +ceph osd unset nodown
> +ceph osd unset pause
> +----
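
Maybe spell out how to confirm that the services are up? Assuming
`ceph -s` is the intended check here, something like:

----
ceph -s         # overall status, including MON quorum and OSD summary
ceph osd stat   # short OSD summary: "X osds: Y up, Z in"
----
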
> +
> +You can now start up the guests. High available guests will change their state

see above

> +to 'started' when they power on.
> +
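
For HA-managed guests, iirc the start can also be requested explicitly
via the HA stack, e.g. (vm:100 being a placeholder sid):

----
ha-manager set vm:100 --state started
----
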
>  Ceph Monitoring and Troubleshooting
>  -----------------------------------
>