From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 by lore.proxmox.com (Postfix) with ESMTPS id ABE721FF143
 for ; Sat, 11 Apr 2026 14:50:25 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 8EEAF6945;
 Sat, 11 Apr 2026 14:51:11 +0200 (CEST)
From: Kefu Chai
To: pve-devel@lists.proxmox.com
Subject: [PATCH docs 3/4] pveceph: various small language fixes
Date: Sat, 11 Apr 2026 20:50:35 +0800
Message-ID: <20260411125036.778122-4-k.chai@proxmox.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260411125036.778122-1-k.chai@proxmox.com>
References: <20260411125036.778122-1-k.chai@proxmox.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Bm-Milter-Handled: 55990f41-d878-4baa-be0a-ee34c49e34d2
X-Bm-Transport-Timestamp: 1775911788474
X-SPAM-LEVEL: Spam detection results: 0
 AWL 0.378 Adjusted score from AWL reputation of From: address
 BAYES_00 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING 0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS -0.001 SPF: sender matches SPF record
 URIBL_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to URIBL was blocked. See
 http://wiki.apache.org/spamassassin/DnsBlocklists#dnsbl-block for more
 information.
[ceph-osd.target,ceph-mgr.target,ceph-mds.target]
Message-ID-Hash: KHXXSSBI7MB7MUNBPZEURIFF4KSXFLHH
X-Message-ID-Hash: KHXXSSBI7MB7MUNBPZEURIFF4KSXFLHH
X-MailFrom: k.chai@proxmox.com
X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; loop;
 banned-address; emergency; member-moderation; nonmember-moderation;
 administrivia; implicit-dest; max-recipients; max-size; news-moderation;
 no-subject; digests; suspicious-header
X-Mailman-Version: 3.3.10
Precedence: list
List-Id: Proxmox VE development discussion
List-Help:
List-Owner:
List-Post:
List-Subscribe:
List-Unsubscribe:

Subject/verb agreement, article agreement, apostrophe misuse, duplicated
"the", and a missing auxiliary verb found during a proofreading pass. No
functional changes.

Signed-off-by: Kefu Chai
---
 pveceph.adoc | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index 5c1500b..fc8a072 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -116,10 +116,10 @@ For example, if you plan to run a Ceph monitor, a Ceph manager and 6 Ceph OSDs
 services on a node you should reserve 8 CPU cores purely for Ceph when
 targeting basic and stable performance.
 
-Note that OSDs CPU usage depend mostly from the disks performance. The higher
-the possible IOPS (**IO** **O**perations per **S**econd) of a disk, the more CPU
-can be utilized by a OSD service.
-For modern enterprise SSD disks, like NVMe's that can permanently sustain a high
+Note that an OSD's CPU usage depends mostly on the disk's performance. The
+higher the possible IOPS (**IO** **O**perations per **S**econd) of a disk, the
+more CPU can be utilized by an OSD service.
+For modern enterprise SSD disks, like NVMes that can permanently sustain a high
 IOPS load over 100'000 with sub millisecond latency, each OSD can use multiple
 CPU threads, e.g., four to six CPU threads utilized per NVMe backed OSD is
 likely for very high performance disks.
@@ -211,7 +211,7 @@ failure forces Ceph to recover more data at once.
 As Ceph handles data object redundancy and multiple parallel writes to disks
 (OSDs) on its own, using a RAID controller normally doesn’t improve
 performance or availability. On the contrary, Ceph is designed to handle whole
-disks on it's own, without any abstraction in between. RAID controllers are not
+disks on its own, without any abstraction in between. RAID controllers are not
 designed for the Ceph workload and may complicate things and sometimes even
 reduce performance, as their write and caching algorithms may interfere with
 the ones from Ceph.
@@ -235,7 +235,7 @@ prompt offering to do so.
 The wizard is divided into multiple sections, where each needs to finish
 successfully, in order to use Ceph.
 
-First you need to chose which Ceph version you want to install. Prefer the one
+First you need to choose which Ceph version you want to install. Prefer the one
 from your other nodes, or the newest if this is the first node you install
 Ceph.
@@ -275,7 +275,7 @@ The configuration step includes the following settings:
   separated network at a later time.
 
 You have two more options which are considered advanced and therefore should
-only changed if you know what you are doing.
+only be changed if you know what you are doing.
 
 * *Number of replicas*: Defines how often an object is replicated.
 * *Minimum replicas*: Defines the minimum number of required replicas for I/O to
@@ -297,7 +297,7 @@ new Ceph cluster.
 CLI Installation of Ceph Packages
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Alternatively to the the recommended {pve} Ceph installation wizard available
+Alternatively to the recommended {pve} Ceph installation wizard available
 in the web interface, you can use the following CLI command on each node:
 
 [source,bash]
@@ -1290,7 +1290,7 @@ systemctl restart ceph-osd.target
 systemctl restart ceph-mgr.target
 systemctl restart ceph-mds.target
 ----
-NOTE: You will only have MDS' (Metadata Server) if you use CephFS.
+NOTE: You will only have MDS daemons (Metadata Servers) if you use CephFS.
 
 NOTE: After the first OSD service got restarted, the GUI will complain that
 the OSD is not reachable anymore. This is not an issue; VMs can still reach
-- 
2.47.3