public inbox for pve-devel@lists.proxmox.com
* [PATCH docs 1/2] pvecm: node removal: rephrase warning about number of nodes
@ 2026-02-06 15:23 Aaron Lauterer
  2026-02-06 15:23 ` [PATCH docs 2/2] pvecm: node removal: indent notes on prerequisites Aaron Lauterer
  0 siblings, 1 reply; 2+ messages in thread
From: Aaron Lauterer @ 2026-02-06 15:23 UTC (permalink / raw)
  To: pve-devel

This way it is more readable, and the most important thing to look out
for is mentioned at the beginning.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 pvecm.adoc | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 0ed1bd2..6d15e99 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -343,12 +343,12 @@ node automatically.
   of any OSD, especially the last one on a node, will trigger a data
   rebalance in Ceph.
 
-NOTE: By default, Ceph pools have a `size/min_size` of `3/2` and a
-full node as `failure domain` at the object balancer
-xref:pve_ceph_device_classes[CRUSH]. So if less than `size` (`3`)
-nodes with running OSDs are online, data redundancy will be degraded.
-If less than `min_size` are online, pool I/O will be blocked and
-affected guests may crash.
+NOTE: Make sure that there are still enough nodes with OSDs available to satisfy
+the `size/min_size` parameters configured for the Ceph pools. If there are fewer
+than `size` (default: 3) nodes available, data redundancy will be degraded. If
+there are fewer than `min_size` (default: 2) nodes available, the I/O of the
+pools will be blocked until there are enough replicas available. Affected guests
+may crash if their I/O is blocked.
 
 * Ensure that sufficient xref:pve_ceph_monitors[monitors],
   xref:pve_ceph_manager[managers] and, if using CephFS,
-- 
2.47.3
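
For context on the note above: the `size`/`min_size` values configured for
each Ceph pool can be checked before removing a node, for example (a minimal
sketch; `vm-pool` stands in for an actual pool name):

    # list all pools together with their replication settings
    ceph osd pool ls detail
    # or query a single pool directly
    ceph osd pool get vm-pool size
    ceph osd pool get vm-pool min_size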

* [PATCH docs 2/2] pvecm: node removal: indent notes on prerequisites
  2026-02-06 15:23 [PATCH docs 1/2] pvecm: node removal: rephrase warning about number of nodes Aaron Lauterer
@ 2026-02-06 15:23 ` Aaron Lauterer
  0 siblings, 0 replies; 2+ messages in thread
From: Aaron Lauterer @ 2026-02-06 15:23 UTC (permalink / raw)
  To: pve-devel

This way it is clearer that the notes are part of the list item.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
I consider this patch optional, but from a readability POV I prefer the
indented notes.

 pvecm.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 6d15e99..0636b12 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -330,7 +330,7 @@ Prerequisites
 * Back up all local data on the node to be deleted.
 * Make sure the node to be deleted is not part of any replication job
   anymore.
-
++
 CAUTION: If you fail to remove replication jobs from a node before
 removing the node itself, the replication job will become irremovable.
 Note that replication automatically switches direction when a
@@ -342,7 +342,7 @@ node automatically.
   and that the OSDs are running (i.e. `up` and `in`). The destruction
   of any OSD, especially the last one on a node, will trigger a data
   rebalance in Ceph.
-
++
 NOTE: Make sure that there are still enough nodes with OSDs available to satisfy
 the `size/min_size` parameters configured for the Ceph pools. If there are fewer
 than `size` (default: 3) nodes available, data redundancy will be degraded. If
-- 
2.47.3
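
For readers unfamiliar with the AsciiDoc syntax: the `+` added by this patch
is the list-continuation marker, which attaches the following block to the
preceding list item instead of starting a new, detached block. A minimal
sketch:

    * Make sure the node to be deleted is not part of any replication job.
    +
    NOTE: Thanks to the `+` continuation, this note is rendered as part of
    the list item above.

Regarding the CAUTION about replication jobs in the hunk context above:
replication jobs can be listed and removed with the `pvesr` CLI before
deleting the node, for example (a sketch; `100-0` is a hypothetical job ID):

    # list configured replication jobs
    pvesr list
    # remove a job by its ID
    pvesr delete 100-0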