public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH docs 1/2] update docs link for cephfs
@ 2020-09-24  7:27 Alwin Antreich
  2020-09-24  7:27 ` [pve-devel] [PATCH docs 2/2] update links to docs.ceph.com Alwin Antreich
  0 siblings, 1 reply; 3+ messages in thread
From: Alwin Antreich @ 2020-09-24  7:27 UTC (permalink / raw)
  To: pve-devel

* use the {ceph_codename} attribute instead of a hardcoded release name
* Ceph migrated its documentation to Read the Docs, with a minor URI change
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/
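
For illustration, with the `ceph_codename` attribute set to, say, `nautilus`
(value assumed here), the footnote link renders as

 https://docs.ceph.com/en/nautilus/rados/operations/user-management/

instead of the previously hardcoded

 http://docs.ceph.com/docs/luminous/rados/operations/user-management/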

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 pve-storage-cephfs.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
index 45933f0..a942709 100644
--- a/pve-storage-cephfs.adoc
+++ b/pve-storage-cephfs.adoc
@@ -90,7 +90,7 @@ secret key itself, opposed to the `rbd` backend which also contains a
 A secret can be received from the ceph cluster (as ceph admin) by issuing the
 following command. Replace the `userid` with the actual client ID configured to
 access the cluster. For further ceph user management see the Ceph docs
-footnote:[Ceph user management http://docs.ceph.com/docs/luminous/rados/operations/user-management/].
+footnote:[Ceph user management http://docs.ceph.com/en/{ceph_codename}/rados/operations/user-management/].
 
  ceph auth get-key client.userid > cephfs.secret
 
-- 
2.27.0






* [pve-devel] [PATCH docs 2/2] update links to docs.ceph.com
  2020-09-24  7:27 [pve-devel] [PATCH docs 1/2] update docs link for cephfs Alwin Antreich
@ 2020-09-24  7:27 ` Alwin Antreich
  2020-09-25  7:14   ` Thomas Lamprecht
  0 siblings, 1 reply; 3+ messages in thread
From: Alwin Antreich @ 2020-09-24  7:27 UTC (permalink / raw)
  To: pve-devel

Ceph migrated its documentation to Read the Docs, with a minor URI change
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/
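
In short, links of the form

 https://docs.ceph.com/docs/{ceph_codename}/...

become

 https://docs.ceph.com/en/{ceph_codename}/...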

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 pveceph.adoc | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index baf0988..6771b84 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -58,15 +58,15 @@ and VMs on the same node is possible.
 To simplify management, we provide 'pveceph' - a tool to install and
 manage {ceph} services on {pve} nodes.
 
-.Ceph consists of a couple of Daemons footnote:[Ceph intro https://docs.ceph.com/docs/{ceph_codename}/start/intro/], for use as a RBD storage:
+.Ceph consists of a couple of Daemons footnote:[Ceph intro https://docs.ceph.com/en/{ceph_codename}/start/intro/], for use as a RBD storage:
 - Ceph Monitor (ceph-mon)
 - Ceph Manager (ceph-mgr)
 - Ceph OSD (ceph-osd; Object Storage Daemon)
 
 TIP: We highly recommend to get familiar with Ceph's architecture
-footnote:[Ceph architecture https://docs.ceph.com/docs/{ceph_codename}/architecture/]
+footnote:[Ceph architecture https://docs.ceph.com/en/{ceph_codename}/architecture/]
 and vocabulary
-footnote:[Ceph glossary https://docs.ceph.com/docs/{ceph_codename}/glossary].
+footnote:[Ceph glossary https://docs.ceph.com/en/{ceph_codename}/glossary].
 
 
 Precondition
@@ -76,7 +76,7 @@ To build a hyper-converged Proxmox + Ceph Cluster there should be at least
 three (preferably) identical servers for the setup.
 
 Check also the recommendations from
-https://docs.ceph.com/docs/{ceph_codename}/start/hardware-recommendations/[Ceph's website].
+https://docs.ceph.com/en/{ceph_codename}/start/hardware-recommendations/[Ceph's website].
 
 .CPU
 Higher CPU core frequency reduce latency and should be preferred. As a simple
@@ -244,7 +244,7 @@ configuration file.
 Ceph Monitor
 -----------
 The Ceph Monitor (MON)
-footnote:[Ceph Monitor https://docs.ceph.com/docs/{ceph_codename}/start/intro/]
+footnote:[Ceph Monitor https://docs.ceph.com/en/{ceph_codename}/start/intro/]
 maintains a master copy of the cluster map. For high availability you need to
 have at least 3 monitors. One monitor will already be installed if you
 used the installation wizard. You won't need more than 3 monitors as long
@@ -290,7 +290,7 @@ Ceph Manager
 ------------
 The Manager daemon runs alongside the monitors. It provides an interface to
 monitor the cluster. Since the Ceph luminous release at least one ceph-mgr
-footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon is
+footnote:[Ceph Manager https://docs.ceph.com/en/{ceph_codename}/mgr/] daemon is
 required.
 
 [[pveceph_create_mgr]]
@@ -464,7 +464,7 @@ It is advised to calculate the PG number depending on your setup, you can find
 the formula and the PG calculator footnote:[PG calculator
 https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
 increase and decrease the number of PGs later on footnote:[Placement Groups
-https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
+https://docs.ceph.com/en/{ceph_codename}/rados/operations/placement-groups/].
 
 
 You can create pools through command line or on the GUI on each PVE host under
@@ -481,7 +481,7 @@ mark the checkbox "Add storages" in the GUI or use the command line option
 
 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
-https://docs.ceph.com/docs/{ceph_codename}/rados/operations/pools/]
+https://docs.ceph.com/en/{ceph_codename}/rados/operations/pools/]
 manual.
 
 
@@ -514,7 +514,7 @@ advantage that no central index service is needed. CRUSH works with a map of
 OSDs, buckets (device locations) and rulesets (data replication) for pools.
 
 NOTE: Further information can be found in the Ceph documentation, under the
-section CRUSH map footnote:[CRUSH map https://docs.ceph.com/docs/{ceph_codename}/rados/operations/crush-map/].
+section CRUSH map footnote:[CRUSH map https://docs.ceph.com/en/{ceph_codename}/rados/operations/crush-map/].
 
 This map can be altered to reflect different replication hierarchies. The object
 replicas can be separated (eg. failure domains), while maintaining the desired
@@ -660,7 +660,7 @@ Since Luminous (12.2.x) you can also have multiple active metadata servers
 running, but this is normally only useful for a high count on parallel clients,
 as else the `MDS` seldom is the bottleneck. If you want to set this up please
 refer to the ceph documentation. footnote:[Configuring multiple active MDS
-daemons https://docs.ceph.com/docs/{ceph_codename}/cephfs/multimds/]
+daemons https://docs.ceph.com/en/{ceph_codename}/cephfs/multimds/]
 
 [[pveceph_fs_create]]
 Create CephFS
@@ -692,7 +692,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
 number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
+https://docs.ceph.com/en/{ceph_codename}/rados/operations/placement-groups/].
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
 storage configuration after it was created successfully.
 
@@ -782,7 +782,7 @@ object in a PG for its health. There are two forms of Scrubbing, daily
 cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
 the objects and uses checksums to ensure data integrity. If a running scrub
 interferes with business (performance) needs, you can adjust the time when
-scrubs footnote:[Ceph scrubbing https://docs.ceph.com/docs/{ceph_codename}/rados/configuration/osd-config-ref/#scrubbing]
+scrubs footnote:[Ceph scrubbing https://docs.ceph.com/en/{ceph_codename}/rados/configuration/osd-config-ref/#scrubbing]
 are executed.
 
 
@@ -806,10 +806,10 @@ pve# ceph -w
 
 To get a more detailed view, every ceph service has a log file under
 `/var/log/ceph/` and if there is not enough detail, the log level can be
-adjusted footnote:[Ceph log and debugging https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/log-and-debug/].
+adjusted footnote:[Ceph log and debugging https://docs.ceph.com/en/{ceph_codename}/rados/troubleshooting/log-and-debug/].
 
 You can find more information about troubleshooting
-footnote:[Ceph troubleshooting https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/]
+footnote:[Ceph troubleshooting https://docs.ceph.com/en/{ceph_codename}/rados/troubleshooting/]
 a Ceph cluster on the official website.
 
 
-- 
2.27.0






* Re: [pve-devel] [PATCH docs 2/2] update links to docs.ceph.com
  2020-09-24  7:27 ` [pve-devel] [PATCH docs 2/2] update links to docs.ceph.com Alwin Antreich
@ 2020-09-25  7:14   ` Thomas Lamprecht
  0 siblings, 0 replies; 3+ messages in thread
From: Thomas Lamprecht @ 2020-09-25  7:14 UTC (permalink / raw)
  To: Proxmox VE development discussion, Alwin Antreich

On 24.09.20 09:27, Alwin Antreich wrote:
> Ceph migrated its documentation to Read the Docs, with a minor URI change
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/
> 
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
>  pveceph.adoc | 28 ++++++++++++++--------------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 

It could be worth adding a {ceph_docs} replacement and using that for
"https://docs.ceph.com/en/{ceph_codename}"?

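A minimal sketch of what such a replacement could look like, using AsciiDoc
attribute-entry syntax ({ceph_codename} is the attribute already used in the
patch; where pve-docs defines its global attributes is assumed here, not shown):

 :ceph_docs: https://docs.ceph.com/en/{ceph_codename}

A footnote would then shorten to something like

 footnote:[Ceph architecture {ceph_docs}/architecture/]

so a future change of host or path only needs a single edit.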






Thread overview: 3+ messages
2020-09-24  7:27 [pve-devel] [PATCH docs 1/2] update docs link for cephfs Alwin Antreich
2020-09-24  7:27 ` [pve-devel] [PATCH docs 2/2] update links to docs.ceph.com Alwin Antreich
2020-09-25  7:14   ` Thomas Lamprecht
