public inbox for pve-devel@lists.proxmox.com
 help / color / mirror / Atom feed
From: "Stefan Sterz" <s.sterz@proxmox.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH docs] pveceph: document cluster shutdown
Date: Tue, 19 Mar 2024 17:48:06 +0100	[thread overview]
Message-ID: <CZXVP198MJ54.3D25DPCW4C9M4@proxmox.com> (raw)
In-Reply-To: <20240319150015.109714-1-a.lauterer@proxmox.com>

On Tue Mar 19, 2024 at 4:00 PM CET, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>  pveceph.adoc | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
>
> diff --git a/pveceph.adoc b/pveceph.adoc
> index 089ac80..7b493c5 100644
> --- a/pveceph.adoc
> +++ b/pveceph.adoc
> @@ -1080,6 +1080,56 @@ scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-re
>  are executed.
>
>
> +[[pveceph_shutdown]]
> +Shutdown {pve} + Ceph HCI cluster
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +To shut down the whole {pve} + Ceph cluster, first stop all Ceph clients. This
> +will mainly be VMs and containers. If you have additional clients that might
> +access a Ceph FS or an installed RADOS GW, stop these as well.
> +High available guests will switch their state to 'stopped' when powered down

I think this should be "Highly available" or "High availability".

> +via the {pve} tooling.
> +
> +Once all clients, VMs and containers are off or not accessing the Ceph cluster
> +anymore, verify that the Ceph cluster is in a healthy state. Either via the Web UI
> +or the CLI:
> +
> +----
> +ceph -s
> +----
> +
> +Then enable the following OSD flags in the Ceph -> OSD panel or the CLI:
> +
> +----
> +ceph osd set noout
> +ceph osd set norecover
> +ceph osd set norebalance
> +ceph osd set nobackfill
> +ceph osd set nodown
> +ceph osd set pause
> +----
> +
> +This will halt all self-healing actions for Ceph and the 'pause' will stop any client IO.
> +
> +Start powering down the nodes one node at a time. Power down nodes with a
> +Monitor (MON) last.

Might benefit from re-phrasing to avoid people only reading this while
already in the middle of shutting down:

Start powering down your nodes without a monitor (MON). After these
nodes are down, also shut down hosts with monitors.

> +
> +When powering on the cluster, start the nodes with Monitors (MONs) first. Once
> +all nodes are up and running, confirm that all Ceph services are up and running
> +before you unset the OSD flags:
> +
> +----
> +ceph osd unset noout
> +ceph osd unset norecover
> +ceph osd unset norebalance
> +ceph osd unset nobackfill
> +ceph osd unset nodown
> +ceph osd unset pause
> +----
> +
> +You can now start up the guests. High available guests will change their state

see above

> +to 'started' when they power on.
> +
>  Ceph Monitoring and Troubleshooting
>  -----------------------------------
>





      reply	other threads:[~2024-03-19 16:48 UTC|newest]

Thread overview: 2+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2024-03-19 15:00 Aaron Lauterer
2024-03-19 16:48 ` Stefan Sterz [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CZXVP198MJ54.3D25DPCW4C9M4@proxmox.com \
    --to=s.sterz@proxmox.com \
    --cc=pve-devel@lists.proxmox.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal