* [PATCH docs 1/4] pveceph: fix Ceph Manager abbreviation
2026-04-11 12:50 [PATCH docs 0/4] ceph: audit of Ceph-related chapters Kefu Chai
@ 2026-04-11 12:50 ` Kefu Chai
2026-04-11 12:50 ` [PATCH docs 2/4] pve-storage-cephfs: describe CephFS, not RBD, in external cluster setup Kefu Chai
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Kefu Chai @ 2026-04-11 12:50 UTC (permalink / raw)
To: pve-devel
The Ceph Manager daemon is abbreviated as MGR, not MGS. The rest of
the file already uses MGR correctly, so this was an internal
inconsistency.
Signed-off-by: Kefu Chai <k.chai@proxmox.com>
---
pveceph.adoc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index 2aae6d6..5c1500b 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -70,7 +70,7 @@ Terminology
// TODO: extend and also describe basic architecture here.
.Ceph consists of multiple Daemons, for use as an RBD storage:
- Ceph Monitor (ceph-mon, or MON)
-- Ceph Manager (ceph-mgr, or MGS)
+- Ceph Manager (ceph-mgr, or MGR)
- Ceph Metadata Service (ceph-mds, or MDS)
- Ceph Object Storage Daemon (ceph-osd, or OSD)
--
2.47.3
^ permalink raw reply [flat|nested] 5+ messages in thread

* [PATCH docs 2/4] pve-storage-cephfs: describe CephFS, not RBD, in external cluster setup
2026-04-11 12:50 [PATCH docs 0/4] ceph: audit of Ceph-related chapters Kefu Chai
2026-04-11 12:50 ` [PATCH docs 1/4] pveceph: fix Ceph Manager abbreviation Kefu Chai
@ 2026-04-11 12:50 ` Kefu Chai
2026-04-11 12:50 ` [PATCH docs 3/4] pveceph: various small language fixes Kefu Chai
2026-04-11 12:50 ` [PATCH docs 4/4] docs: various small grammar fixes Kefu Chai
3 siblings, 0 replies; 5+ messages in thread
From: Kefu Chai @ 2026-04-11 12:50 UTC (permalink / raw)
To: pve-devel
The "external Ceph cluster" section of the CephFS chapter describes
configuring "external RBD storage" in both the CLI and the GUI
walk-throughs. Refer to CephFS there instead; this was likely a
copy-paste leftover from the RBD chapter.
Signed-off-by: Kefu Chai <k.chai@proxmox.com>
---
pve-storage-cephfs.adoc | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
index 8d36246..ab8d850 100644
--- a/pve-storage-cephfs.adoc
+++ b/pve-storage-cephfs.adoc
@@ -91,16 +91,16 @@ copy it to the `/root` directory of the node on which we run it:
# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
----
-Then use the `pvesm` CLI tool to configure the external RBD storage, use the
-`--keyring` parameter, which needs to be a path to the secret file that you
-copied. For example:
+Then use the `pvesm` CLI tool to configure the external CephFS storage, use
+the `--keyring` parameter, which needs to be a path to the secret file that
+you copied. For example:
----
# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
----
-When configuring an external RBD storage via the GUI, you can copy and paste
-the secret into the appropriate field.
+When configuring an external CephFS storage via the GUI, you can copy and
+paste the secret into the appropriate field.
The secret is only the key itself, as opposed to the `rbd` backend which also
contains a `[client.userid]` section.
--
2.47.3
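The distinction this patch's final hunk draws between the CephFS secret and an
`rbd` keyring can be illustrated with two sketched file contents. The key shown
is a made-up placeholder, not a real credential:

----
# cephfs.secret -- the CephFS secret file is only the base64 key itself:
AQBExampleKeyOnlyPlaceholderAAAAAAAAAAAA==

# <STORAGE_ID>.keyring -- the rbd backend instead expects an INI-style
# keyring with a [client.userid] section:
[client.userid]
        key = AQBExampleKeyOnlyPlaceholderAAAAAAAAAAAA==
----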
* [PATCH docs 3/4] pveceph: various small language fixes
2026-04-11 12:50 [PATCH docs 0/4] ceph: audit of Ceph-related chapters Kefu Chai
2026-04-11 12:50 ` [PATCH docs 1/4] pveceph: fix Ceph Manager abbreviation Kefu Chai
2026-04-11 12:50 ` [PATCH docs 2/4] pve-storage-cephfs: describe CephFS, not RBD, in external cluster setup Kefu Chai
@ 2026-04-11 12:50 ` Kefu Chai
2026-04-11 12:50 ` [PATCH docs 4/4] docs: various small grammar fixes Kefu Chai
3 siblings, 0 replies; 5+ messages in thread
From: Kefu Chai @ 2026-04-11 12:50 UTC (permalink / raw)
To: pve-devel
Fix subject/verb agreement, article usage, an apostrophe misuse, a
duplicated "the", and a missing auxiliary verb found during a
proofreading pass. No functional changes.
Signed-off-by: Kefu Chai <k.chai@proxmox.com>
---
pveceph.adoc | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index 5c1500b..fc8a072 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -116,10 +116,10 @@ For example, if you plan to run a Ceph monitor, a Ceph manager and 6 Ceph OSDs
services on a node you should reserve 8 CPU cores purely for Ceph when targeting
basic and stable performance.
-Note that OSDs CPU usage depend mostly from the disks performance. The higher
-the possible IOPS (**IO** **O**perations per **S**econd) of a disk, the more CPU
-can be utilized by a OSD service.
-For modern enterprise SSD disks, like NVMe's that can permanently sustain a high
+Note that an OSD's CPU usage depends mostly on the disk's performance. The
+higher the possible IOPS (**IO** **O**perations per **S**econd) of a disk, the
+more CPU can be utilized by an OSD service.
+For modern enterprise SSD disks, like NVMes that can permanently sustain a high
IOPS load over 100'000 with sub millisecond latency, each OSD can use multiple
CPU threads, e.g., four to six CPU threads utilized per NVMe backed OSD is
likely for very high performance disks.
@@ -211,7 +211,7 @@ failure forces Ceph to recover more data at once.
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn’t improve
performance or availability. On the contrary, Ceph is designed to handle whole
-disks on it's own, without any abstraction in between. RAID controllers are not
+disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.
@@ -235,7 +235,7 @@ prompt offering to do so.
The wizard is divided into multiple sections, where each needs to
finish successfully, in order to use Ceph.
-First you need to chose which Ceph version you want to install. Prefer the one
+First you need to choose which Ceph version you want to install. Prefer the one
from your other nodes, or the newest if this is the first node you install
Ceph.
@@ -275,7 +275,7 @@ The configuration step includes the following settings:
separated network at a later time.
You have two more options which are considered advanced and therefore should
-only changed if you know what you are doing.
+only be changed if you know what you are doing.
* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas for I/O to
@@ -297,7 +297,7 @@ new Ceph cluster.
CLI Installation of Ceph Packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Alternatively to the the recommended {pve} Ceph installation wizard available
+Alternatively to the recommended {pve} Ceph installation wizard available
in the web interface, you can use the following CLI command on each node:
[source,bash]
@@ -1290,7 +1290,7 @@ systemctl restart ceph-osd.target
systemctl restart ceph-mgr.target
systemctl restart ceph-mds.target
----
-NOTE: You will only have MDS' (Metadata Server) if you use CephFS.
+NOTE: You will only have MDS daemons (Metadata Servers) if you use CephFS.
NOTE: After the first OSD service got restarted, the GUI will complain that
the OSD is not reachable anymore. This is not an issue; VMs can still reach
--
2.47.3
* [PATCH docs 4/4] docs: various small grammar fixes
2026-04-11 12:50 [PATCH docs 0/4] ceph: audit of Ceph-related chapters Kefu Chai
` (2 preceding siblings ...)
2026-04-11 12:50 ` [PATCH docs 3/4] pveceph: various small language fixes Kefu Chai
@ 2026-04-11 12:50 ` Kefu Chai
3 siblings, 0 replies; 5+ messages in thread
From: Kefu Chai @ 2026-04-11 12:50 UTC (permalink / raw)
To: pve-devel
Small grammar fixes spotted during a proofreading pass of the
Ceph-related chapters:
* hyper-converged-infrastructure: drop stray article in "a both
self-healing..." and split "Checkout" into "Check out".
* pve-package-repos: "an selected update" -> "a selected update".
* pve-storage-rbd: "is recommend" -> "is recommended".
Signed-off-by: Kefu Chai <k.chai@proxmox.com>
---
hyper-converged-infrastructure.adoc | 4 ++--
pve-package-repos.adoc | 2 +-
pve-storage-rbd.adoc | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/hyper-converged-infrastructure.adoc b/hyper-converged-infrastructure.adoc
index 4616392..cc6199c 100644
--- a/hyper-converged-infrastructure.adoc
+++ b/hyper-converged-infrastructure.adoc
@@ -48,8 +48,8 @@ Hyper-Converged Infrastructure: Storage
infrastructure. You can, for example, deploy and manage the following two
storage technologies by using the web interface only:
-- *Ceph*: a both self-healing and self-managing shared, reliable and highly
- scalable storage system. Checkout
+- *Ceph*: a self-healing and self-managing shared, reliable and highly
+ scalable storage system. Check out
xref:chapter_pveceph[how to manage Ceph services on {pve} nodes]
- *ZFS*: a combined file system and logical volume manager with extensive
diff --git a/pve-package-repos.adoc b/pve-package-repos.adoc
index a2882a8..2091ff6 100644
--- a/pve-package-repos.adoc
+++ b/pve-package-repos.adoc
@@ -10,7 +10,7 @@ package management tool like any other Debian-based system.
{pve} automatically checks for package updates on a daily basis. The `root@pam`
user is notified via email about available updates. From the GUI, the
-'Changelog' button can be used to see more details about an selected update.
+'Changelog' button can be used to see more details about a selected update.
Repositories in {pve}
~~~~~~~~~~~~~~~~~~~~~
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index 5fe558a..9a52974 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -102,7 +102,7 @@ The keyring will be stored at
# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
----
-TIP: Creating a keyring with only the needed capabilities is recommend when
+TIP: Creating a keyring with only the needed capabilities is recommended when
connecting to an external cluster. For further information on Ceph user
management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/operations/user-management/[Ceph User Management]]
--
2.47.3
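The TIP touched by the pve-storage-rbd hunk can be made concrete with a sketch
of a restricted keyring. The client name `client.pve-external` and the pool
name `vm-pool` are illustrative assumptions, not anything the patch prescribes,
and the key is a placeholder:

----
# Hypothetical keyring with capabilities limited to RBD access on a single
# pool, as produced on the external cluster by e.g.:
#   ceph auth get-or-create client.pve-external \
#       mon 'profile rbd' osd 'profile rbd pool=vm-pool'
[client.pve-external]
        key = AQBExampleKeyOnlyPlaceholderAAAAAAAAAAAA==
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=vm-pool"
----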