From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alwin Antreich <a.antreich@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Thu, 24 Sep 2020 09:27:10 +0200
Message-Id: <20200924072710.2195707-2-a.antreich@proxmox.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200924072710.2195707-1-a.antreich@proxmox.com>
References: <20200924072710.2195707-1-a.antreich@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH docs 2/2] update links to docs.ceph.com

Ceph migrated their documentation to Read the Docs, with a minor URI change
(the /docs/ path component becomes /en/):
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 pveceph.adoc | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index baf0988..6771b84 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -58,15 +58,15 @@ and VMs on the same node is possible. To simplify management, we provide
 'pveceph' - a tool to install and manage {ceph} services on {pve} nodes.

-.Ceph consists of a couple of Daemons footnote:[Ceph intro https://docs.ceph.com/docs/{ceph_codename}/start/intro/], for use as a RBD storage:
+.Ceph consists of a couple of Daemons footnote:[Ceph intro https://docs.ceph.com/en/{ceph_codename}/start/intro/], for use as a RBD storage:
 - Ceph Monitor (ceph-mon)
 - Ceph Manager (ceph-mgr)
 - Ceph OSD (ceph-osd; Object Storage Daemon)

 TIP: We highly recommend to get familiar with Ceph's architecture
-footnote:[Ceph architecture https://docs.ceph.com/docs/{ceph_codename}/architecture/]
+footnote:[Ceph architecture https://docs.ceph.com/en/{ceph_codename}/architecture/]
 and vocabulary
-footnote:[Ceph glossary https://docs.ceph.com/docs/{ceph_codename}/glossary].
+footnote:[Ceph glossary https://docs.ceph.com/en/{ceph_codename}/glossary].


 Precondition
@@ -76,7 +76,7 @@ To build a hyper-converged Proxmox + Ceph Cluster there should be at least
 three (preferably) identical servers for the setup.

 Check also the recommendations from
-https://docs.ceph.com/docs/{ceph_codename}/start/hardware-recommendations/[Ceph's website].
+https://docs.ceph.com/en/{ceph_codename}/start/hardware-recommendations/[Ceph's website].

 .CPU
 Higher CPU core frequency reduce latency and should be preferred. As a simple
@@ -244,7 +244,7 @@ configuration file.
 Ceph Monitor
 -----------
 The Ceph Monitor (MON)
-footnote:[Ceph Monitor https://docs.ceph.com/docs/{ceph_codename}/start/intro/]
+footnote:[Ceph Monitor https://docs.ceph.com/en/{ceph_codename}/start/intro/]
 maintains a master copy of the cluster map. For high availability you need to
 have at least 3 monitors. One monitor will already be installed if you used
 the installation wizard. You won't need more than 3 monitors as long
@@ -290,7 +290,7 @@ Ceph Manager
 ------------
 The Manager daemon runs alongside the monitors. It provides an interface to
 monitor the cluster. Since the Ceph luminous release at least one ceph-mgr
-footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon is
+footnote:[Ceph Manager https://docs.ceph.com/en/{ceph_codename}/mgr/] daemon is
 required.

 [[pveceph_create_mgr]]
@@ -464,7 +464,7 @@ It is advised to calculate the PG number depending on your setup, you can
 find the formula and the PG calculator footnote:[PG calculator
 https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
 increase and decrease the number of PGs later on footnote:[Placement Groups
-https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
+https://docs.ceph.com/en/{ceph_codename}/rados/operations/placement-groups/].

 You can create pools through command line or on the GUI on each PVE host under
@@ -481,7 +481,7 @@ mark the checkbox "Add storages" in the GUI or use the command line option

 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
-https://docs.ceph.com/docs/{ceph_codename}/rados/operations/pools/]
+https://docs.ceph.com/en/{ceph_codename}/rados/operations/pools/]
 manual.


@@ -514,7 +514,7 @@ advantage that no central index service is needed. CRUSH works with a map of
 OSDs, buckets (device locations) and rulesets (data replication) for pools.

 NOTE: Further information can be found in the Ceph documentation, under the
-section CRUSH map footnote:[CRUSH map https://docs.ceph.com/docs/{ceph_codename}/rados/operations/crush-map/].
+section CRUSH map footnote:[CRUSH map https://docs.ceph.com/en/{ceph_codename}/rados/operations/crush-map/].

 This map can be altered to reflect different replication hierarchies. The object
 replicas can be separated (eg. failure domains), while maintaining the desired
@@ -660,7 +660,7 @@ Since Luminous (12.2.x) you can also have multiple active metadata servers
 running, but this is normally only useful for a high count on parallel clients,
 as else the `MDS` seldom is the bottleneck. If you want to set this up please
 refer to the ceph documentation. footnote:[Configuring multiple active MDS
-daemons https://docs.ceph.com/docs/{ceph_codename}/cephfs/multimds/]
+daemons https://docs.ceph.com/en/{ceph_codename}/cephfs/multimds/]

 [[pveceph_fs_create]]
 Create CephFS
@@ -692,7 +692,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
 number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
+https://docs.ceph.com/en/{ceph_codename}/rados/operations/placement-groups/].

 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
 storage configuration after it was created successfully.
@@ -782,7 +782,7 @@ object in a PG for its health. There are two forms of Scrubbing, daily
 cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
 the objects and uses checksums to ensure data integrity. If a running scrub
 interferes with business (performance) needs, you can adjust the time when
-scrubs footnote:[Ceph scrubbing https://docs.ceph.com/docs/{ceph_codename}/rados/configuration/osd-config-ref/#scrubbing]
+scrubs footnote:[Ceph scrubbing https://docs.ceph.com/en/{ceph_codename}/rados/configuration/osd-config-ref/#scrubbing]
 are executed.


@@ -806,10 +806,10 @@ pve# ceph -w

 To get a more detailed view, every ceph service has a log file under
 `/var/log/ceph/` and if there is not enough detail, the log level can be
-adjusted footnote:[Ceph log and debugging https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/log-and-debug/].
+adjusted footnote:[Ceph log and debugging https://docs.ceph.com/en/{ceph_codename}/rados/troubleshooting/log-and-debug/].

 You can find more information about troubleshooting
-footnote:[Ceph troubleshooting https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/]
+footnote:[Ceph troubleshooting https://docs.ceph.com/en/{ceph_codename}/rados/troubleshooting/]
 a Ceph cluster on the official website.
-- 
2.27.0
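
Not part of the patch itself: for anyone carrying the same rename in other
AsciiDoc files, the whole change boils down to one prefix substitution,
https://docs.ceph.com/docs/... -> https://docs.ceph.com/en/... . Below is a
minimal Python sketch of that substitution; the script name and the
pveceph.adoc default argument are only illustrative, nothing shipped with
pve-docs.

#!/usr/bin/env python3
"""Sketch: rewrite old docs.ceph.com /docs/ links to the new /en/ scheme."""

import re
import sys
from pathlib import Path

# Old path prefix, e.g. https://docs.ceph.com/docs/{ceph_codename}/start/intro/
OLD_PREFIX = re.compile(r"https://docs\.ceph\.com/docs/")
NEW_PREFIX = "https://docs.ceph.com/en/"


def rewrite_links(path: Path) -> int:
    """Rewrite old-style links in place and return the number of replacements."""
    text = path.read_text(encoding="utf-8")
    new_text, count = OLD_PREFIX.subn(NEW_PREFIX, text)
    if count:
        path.write_text(new_text, encoding="utf-8")
    return count


if __name__ == "__main__":
    # Example invocation: python3 rewrite_ceph_links.py pveceph.adoc
    target = Path(sys.argv[1] if len(sys.argv) > 1 else "pveceph.adoc")
    print(f"{rewrite_links(target)} link(s) rewritten in {target}")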