From mboxrd@z Thu Jan  1 00:00:00 1970
From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH docs 1/2] pvecm: node removal: rephrase warning about number of nodes
Date: Fri, 6 Feb 2026 16:23:49 +0100
Message-ID: <20260206152350.1261246-1-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.47.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: Proxmox VE development discussion

This way it is more readable and the important thing to look out for is
mentioned at the beginning.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 pvecm.adoc | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 0ed1bd2..6d15e99 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -343,12 +343,12 @@ node automatically.
 of any OSD, especially the last one on a node, will trigger a data
 rebalance in Ceph.
 
-NOTE: By default, Ceph pools have a `size/min_size` of `3/2` and a
-full node as `failure domain` at the object balancer
-xref:pve_ceph_device_classes[CRUSH]. So if less than `size` (`3`)
-nodes with running OSDs are online, data redundancy will be degraded.
-If less than `min_size` are online, pool I/O will be blocked and
-affected guests may crash.
+NOTE: Make sure that there are still enough nodes with OSDs available to satisfy
+the `size/min_size` parameters configured for the Ceph pools. If there are fewer
+than `size` (default: 3) nodes available, data redundancy will be degraded. If
+there are fewer than `min_size` (default: 2) nodes available, the I/O of the
+pools will be blocked until there are enough replicas available. Affected guests
+may crash if their I/O is blocked.
 
 * Ensure that sufficient xref:pve_ceph_monitors[monitors],
   xref:pve_ceph_manager[managers] and, if using CephFS,
-- 
2.47.3
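
For reference, the thresholds mentioned in the reworded note can be checked
with standard tooling before a node is removed; a minimal sketch, assuming the
`pveceph` and `ceph` CLIs are available on a cluster node:

    # List pools with their configured size/min_size via the PVE tooling
    pveceph pool ls

    # Or query Ceph directly for the same values
    ceph osd pool ls detail

    # Show which hosts still provide running OSDs
    ceph osd tree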