From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH docs 3/3] pveceph: expand on public/cluster networks
Date: Mon, 20 Nov 2023 16:48:30 +0100 [thread overview]
Message-ID: <20231120154830.2640139-3-a.lauterer@proxmox.com> (raw)
In-Reply-To: <20231120154830.2640139-1-a.lauterer@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
pveceph.adoc | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index 56d745a..0720941 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -241,22 +241,26 @@ The configuration step includes the following settings:
[[pve_ceph_wizard_networks]]
* *Public Network:* This network will be used for public storage communication
- (e.g., for virtual machines using a Ceph RBD backed disk, or a CephFS mount).
- This setting is required.
+ (e.g., for virtual machines using a Ceph RBD backed disk, or a CephFS mount),
+ and communication between the different Ceph services. This setting is
+ required.
+
- Separating your Ceph traffic from cluster communication, and possible the
- front-facing (public) networks of your virtual gusts, is highly recommended.
- Otherwise, Ceph's high-bandwidth IO-traffic could cause interference with
- other low-latency dependent services.
+ Separating your Ceph traffic from the {pve} cluster communication (corosync),
+ and possibly the front-facing (public) networks of your virtual guests, is
+ highly recommended. Otherwise, Ceph's high-bandwidth IO-traffic could cause
+ interference with other low-latency dependent services.
[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]
* *Cluster Network:* Specify to separate the xref:pve_ceph_osds[OSD] replication
- and heartbeat traffic as well.
+ and heartbeat traffic as well. This setting is optional.
+
Using a physically separated network is recommended, as it will relieve the
Ceph public and the virtual guests network, while also providing a significant
Ceph performance improvement.
+ +
+ The Ceph cluster network can be configured and moved to another physically
+ separated network at a later time.
You have two more options which are considered advanced and therefore should
only be changed if you know what you are doing.
--
2.39.2
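
For context, the two networks configured in this wizard step correspond to the
public_network and cluster_network options in /etc/pve/ceph.conf. A minimal
sketch of the equivalent command-line setup, assuming the hypothetical subnets
10.10.10.0/24 (public) and 10.10.20.0/24 (cluster):

    # subnets are placeholders, substitute the networks of your own setup
    pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24

which should end up in /etc/pve/ceph.conf roughly as:

    [global]
         # other entries omitted
         public_network = 10.10.10.0/24
         cluster_network = 10.10.20.0/24

Moving the cluster network later, as the new paragraph above mentions, should
boil down to adjusting cluster_network in that file and restarting the OSDs so
they bind to the new network.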