From: Alwin Antreich
To: pve-devel@lists.proxmox.com
Date: Fri, 25 Sep 2020 14:51:46 +0200
Message-Id: <20200925125146.1945521-2-a.antreich@proxmox.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200925125146.1945521-1-a.antreich@proxmox.com>
References: <20200925125146.1945521-1-a.antreich@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH docs 2/2] update links to the ceph docs

* use a variable instead of hardcoded url+release name
* ceph migrated to readthedocs with a minor uri change
  https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/

Signed-off-by: Alwin Antreich
---
 pve-storage-cephfs.adoc    |  2 +-
 pveceph.adoc               | 28 ++++++++++++++--------------
 asciidoc/asciidoc-pve.conf |  1 +
 3 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
index 45933f0..c8615a9 100644
--- a/pve-storage-cephfs.adoc
+++ b/pve-storage-cephfs.adoc
@@ -90,7 +90,7 @@ secret key itself, opposed to the `rbd` backend which also contains a
 A secret can be received from the ceph cluster (as ceph admin) by issuing the
 following command. Replace the `userid` with the actual client ID configured to
 access the cluster. For further ceph user management see the Ceph docs
-footnote:[Ceph user management http://docs.ceph.com/docs/luminous/rados/operations/user-management/].
+footnote:[Ceph user management {cephdocs-url}/rados/operations/user-management/].
 
 ----
 ceph auth get-key client.userid > cephfs.secret

diff --git a/pveceph.adoc b/pveceph.adoc
index 0f94b97..84a45d5 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -64,11 +64,11 @@ manage {ceph} services on {pve} nodes.
 - Ceph OSD (ceph-osd; Object Storage Daemon)
 
 TIP: We highly recommend to get familiar with Ceph
-footnote:[Ceph intro https://docs.ceph.com/docs/{ceph_codename}/start/intro/],
+footnote:[Ceph intro {cephdocs-url}/start/intro/],
 its architecture
-footnote:[Ceph architecture https://docs.ceph.com/docs/{ceph_codename}/architecture/]
+footnote:[Ceph architecture {cephdocs-url}/architecture/]
 and vocabulary
-footnote:[Ceph glossary https://docs.ceph.com/docs/{ceph_codename}/glossary].
+footnote:[Ceph glossary {cephdocs-url}/glossary].
 
 
 Precondition
@@ -78,7 +78,7 @@ To build a hyper-converged Proxmox + Ceph Cluster there should be at least
 three (preferably) identical servers for the setup.
 
 Check also the recommendations from
-https://docs.ceph.com/docs/{ceph_codename}/start/hardware-recommendations/[Ceph's website].
+{cephdocs-url}/start/hardware-recommendations/[Ceph's website].
 
 .CPU
 Higher CPU core frequency reduce latency and should be preferred. As a simple
@@ -246,7 +246,7 @@ configuration file.
 Ceph Monitor
 -----------
 The Ceph Monitor (MON)
-footnote:[Ceph Monitor https://docs.ceph.com/docs/{ceph_codename}/start/intro/]
+footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
 maintains a master copy of the cluster map. For high availability you need to
 have at least 3 monitors. One monitor will already be installed if you
 used the installation wizard. You won't need more than 3 monitors as long
@@ -292,7 +292,7 @@ Ceph Manager
 ------------
 The Manager daemon runs alongside the monitors. It provides an interface to
 monitor the cluster. Since the Ceph luminous release at least one ceph-mgr
-footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon is
+footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
 required.
 
 [[pveceph_create_mgr]]
@@ -466,7 +466,7 @@ It is advised to calculate the PG number depending on your setup, you can find
 the formula and the PG calculator footnote:[PG calculator
 https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
 increase and decrease the number of PGs later on footnote:[Placement Groups
-https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
+{cephdocs-url}/rados/operations/placement-groups/].
 
 You can create pools through command line or on the GUI on each PVE host under
@@ -483,7 +483,7 @@ mark the checkbox "Add storages" in the GUI or use the command line option
 
 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
-https://docs.ceph.com/docs/{ceph_codename}/rados/operations/pools/]
+{cephdocs-url}/rados/operations/pools/]
 manual.
 
@@ -516,7 +516,7 @@ advantage that no central index service is needed. CRUSH works with a map of
 OSDs, buckets (device locations) and rulesets (data replication) for pools.
 
 NOTE: Further information can be found in the Ceph documentation, under the
-section CRUSH map footnote:[CRUSH map https://docs.ceph.com/docs/{ceph_codename}/rados/operations/crush-map/].
+section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].
 
 This map can be altered to reflect different replication hierarchies. The object
 replicas can be separated (eg. failure domains), while maintaining the desired
@@ -662,7 +662,7 @@ Since Luminous (12.2.x) you can also have multiple active metadata servers
 running, but this is normally only useful for a high count on parallel clients,
 as else the `MDS` seldom is the bottleneck. If you want to set this up please
 refer to the ceph documentation. footnote:[Configuring multiple active MDS
-daemons https://docs.ceph.com/docs/{ceph_codename}/cephfs/multimds/]
+daemons {cephdocs-url}/cephfs/multimds/]
 
 [[pveceph_fs_create]]
 Create CephFS
@@ -694,7 +694,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
 number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
+{cephdocs-url}/rados/operations/placement-groups/].
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
 storage configuration after it was created successfully.
@@ -784,7 +784,7 @@ object in a PG for its health. There are two forms of Scrubbing, daily
 cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
 the objects and uses checksums to ensure data integrity. If a running scrub
 interferes with business (performance) needs, you can adjust the time when
-scrubs footnote:[Ceph scrubbing https://docs.ceph.com/docs/{ceph_codename}/rados/configuration/osd-config-ref/#scrubbing]
+scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
 are executed.
 
@@ -808,10 +808,10 @@ pve# ceph -w
 ----
 
 To get a more detailed view, every ceph service has a log file under
 `/var/log/ceph/` and if there is not enough detail, the log level can be
-adjusted footnote:[Ceph log and debugging https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/log-and-debug/].
+adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].
 
 You can find more information about troubleshooting
-footnote:[Ceph troubleshooting https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/]
+footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
 a Ceph cluster on the official website.
 
diff --git a/asciidoc/asciidoc-pve.conf b/asciidoc/asciidoc-pve.conf
index 2e355cc..8b02627 100644
--- a/asciidoc/asciidoc-pve.conf
+++ b/asciidoc/asciidoc-pve.conf
@@ -16,4 +16,5 @@ email=support@proxmox.com
 endif::docinfo1[]
 ceph=http://ceph.com[Ceph]
 ceph_codename=nautilus
+cephdocs-url=https://docs.ceph.com/en/nautilus
-- 
2.27.0
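
For readers following along: the new attribute is defined in the asciidoc
configuration and then referenced from the .adoc sources. A minimal sketch of
the intended substitution, using the glossary footnote from this patch as the
example (the expanded line at the end is an illustration of asciidoc attribute
expansion, not output taken from an actual pve-docs build):

    # asciidoc/asciidoc-pve.conf -- attribute added by this patch
    cephdocs-url=https://docs.ceph.com/en/nautilus

    # pveceph.adoc -- usage after this patch
    footnote:[Ceph glossary {cephdocs-url}/glossary].

    # after attribute substitution the footnote should point to
    # https://docs.ceph.com/en/nautilus/glossary

Bumping the Ceph release in the docs then only means touching the single
cephdocs-url definition instead of every hardcoded link.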