From: Matthias Heiserer <m.heiserer@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Wed,  9 Nov 2022 12:58:21 +0100
Message-Id: <20221109115828.137770-4-m.heiserer@proxmox.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20221109115828.137770-1-m.heiserer@proxmox.com>
References: <20221109115828.137770-1-m.heiserer@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH docs 03/10] consistently capitalize Ceph

Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
---
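Note for reviewers (not part of the commit message): only occurrences that
refer to Ceph as a product are capitalized; literals such as `ceph.conf` and
the `ceph`/`pveceph` commands stay lowercase. Something like the following
sketch can be used to re-check for leftovers; the exclusion pattern is only
illustrative and assumes those literals are the ones to skip:

  # sketch only: the exclusion list is illustrative, not exhaustive
  git grep -nw ceph -- '*.adoc' | grep -vE 'ceph\.conf|ceph (osd|auth|fs|config)'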
 hyper-converged-infrastructure.adoc | 4 ++--
 pve-storage-rbd.adoc                | 4 ++--
 pveceph.adoc                        | 6 +++---
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/hyper-converged-infrastructure.adoc b/hyper-converged-infrastructure.adoc
index ee9f185..4616392 100644
--- a/hyper-converged-infrastructure.adoc
+++ b/hyper-converged-infrastructure.adoc
@@ -48,9 +48,9 @@ Hyper-Converged Infrastructure: Storage
 infrastructure. You can, for example, deploy and manage the following two
 storage technologies by using the web interface only:
 
-- *ceph*: a both self-healing and self-managing shared, reliable and highly
+- *Ceph*: a both self-healing and self-managing shared, reliable and highly
   scalable storage system. Checkout
-  xref:chapter_pveceph[how to manage ceph services on {pve} nodes]
+  xref:chapter_pveceph[how to manage Ceph services on {pve} nodes]
 
 - *ZFS*: a combined file system and logical volume manager with extensive
   protection against data corruption, various RAID modes, fast and cheap
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index 5f8619a..5fe558a 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -109,9 +109,9 @@ management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/ope
 Ceph client configuration (optional)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Connecting to an external ceph storage doesn't always allow setting
+Connecting to an external Ceph storage doesn't always allow setting
 client-specific options in the config DB on the external cluster. You can add a
-`ceph.conf` beside the ceph keyring to change the ceph client configuration for
+`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
 the storage.
 
 The ceph.conf needs to have the same name as the storage.
diff --git a/pveceph.adoc b/pveceph.adoc
index 54fb214..fdd4cf6 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -636,7 +636,7 @@ pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
 ----
 
 TIP: Do not forget to add the `keyring` and `monhost` option for any external
-ceph clusters, not managed by the local {pve} cluster.
+Ceph clusters, not managed by the local {pve} cluster.
 
 Destroy Pools
 ~~~~~~~~~~~~~
@@ -761,7 +761,7 @@ ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class
 [frame="none",grid="none", align="left", cols="30%,70%"]
 |===
 |<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
-|<root>|which crush root it should belong to (default ceph root "default")
+|<root>|which crush root it should belong to (default Ceph root "default")
 |<failure-domain>|at which failure-domain the objects should be distributed (usually host)
 |<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
 |===
@@ -943,7 +943,7 @@ servers.
 pveceph fs destroy NAME --remove-storages --remove-pools
 ----
 +
-This will automatically destroy the underlying ceph pools as well as remove
+This will automatically destroy the underlying Ceph pools as well as remove
 the storages from pve config.
 
 After these steps, the CephFS should be completely removed and if you have
-- 
2.30.2