public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH docs, manager 0/7] add and expand on cluster recommendations
@ 2025-04-29 13:57 Aaron Lauterer
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 1/7] pvecm: drop notes about old version incompatibilities Aaron Lauterer
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: Aaron Lauterer @ 2025-04-29 13:57 UTC (permalink / raw)
  To: pve-devel

by expanding on it in the docs and the UI.

The doc patches include some cleanup and new anchor patches.
The second UI patch fixes an external link that I noticed and is just a
drive-by patch.

docs: Aaron Lauterer (5):
  pvecm: drop notes about old version incompatibilities
  pvecm: add anchor for cluster requirements
  pvecm: add anchor for corosync external vote support
  pvecm: extend cluster Requirements
  ha-manager: expand requirements

 ha-manager.adoc |  7 ++++++-
 pvecm.adoc      | 34 +++++++++++++++++++---------------
 2 files changed, 25 insertions(+), 16 deletions(-)

manager: Aaron Lauterer (2):
  ui: cluster create: add recommendations for cluster networks
  ui: guest import: make sure an external link has target _blank

 www/manager6/dc/ClusterEdit.js     | 15 ++++++++++++++-
 www/manager6/window/GuestImport.js |  2 +-
 2 files changed, 15 insertions(+), 2 deletions(-)

-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



* [pve-devel] [PATCH docs 1/7] pvecm: drop notes about old version incompatibilities
  2025-04-29 13:57 [pve-devel] [PATCH docs, manager 0/7] add and expand on cluster recommendations Aaron Lauterer
@ 2025-04-29 13:57 ` Aaron Lauterer
  2025-05-07 15:22   ` Kevin Schneider
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 2/7] pvecm: add anchor for cluster requirements Aaron Lauterer
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: Aaron Lauterer @ 2025-04-29 13:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 pvecm.adoc | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 18f7389..47e42e2 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -77,18 +77,6 @@ Requirements
 * Online migration of virtual machines is only supported when nodes have CPUs
   from the same vendor. It might work otherwise, but this is never guaranteed.
 
-NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
-nodes.
-
-NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
-not supported as a production configuration and should only be done temporarily,
-during an upgrade of the whole cluster from one major version to another.
-
-NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
-cluster protocol (corosync) between {pve} 6.x and earlier versions changed
-fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
-upgrade procedure to {pve} 6.0.
-
 
 Preparing Nodes
 ---------------
-- 
2.39.5




* [pve-devel] [PATCH docs 2/7] pvecm: add anchor for cluster requirements
  2025-04-29 13:57 [pve-devel] [PATCH docs, manager 0/7] add and expand on cluster recommendations Aaron Lauterer
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 1/7] pvecm: drop notes about old version incompatibilities Aaron Lauterer
@ 2025-04-29 13:57 ` Aaron Lauterer
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 3/7] pvecm: add anchor for corosync external vote support Aaron Lauterer
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Aaron Lauterer @ 2025-04-29 13:57 UTC (permalink / raw)
  To: pve-devel

so we can link help buttons to it

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 pvecm.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 47e42e2..a38351c 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -54,7 +54,7 @@ Grouping nodes into a cluster has the following advantages:
 
 * Cluster-wide services like firewall and HA
 
-
+[[pvecm_cluster_requirements]]
 Requirements
 ------------
 
-- 
2.39.5




* [pve-devel] [PATCH docs 3/7] pvecm: add anchor for corosync external vote support
  2025-04-29 13:57 [pve-devel] [PATCH docs, manager 0/7] add and expand on cluster recommendations Aaron Lauterer
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 1/7] pvecm: drop notes about old version incompatibilities Aaron Lauterer
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 2/7] pvecm: add anchor for cluster requirements Aaron Lauterer
@ 2025-04-29 13:57 ` Aaron Lauterer
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 4/7] pvecm: extend cluster Requirements Aaron Lauterer
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Aaron Lauterer @ 2025-04-29 13:57 UTC (permalink / raw)
  To: pve-devel

so we can reference the chapter. Manually set the automatically
generated one to avoid breaking existing deep links.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 pvecm.adoc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/pvecm.adoc b/pvecm.adoc
index a38351c..3b9cfc4 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -999,6 +999,7 @@ case $- in
 esac
 ----
 
+[[_corosync_external_vote_support]]
 Corosync External Vote Support
 ------------------------------
 
-- 
2.39.5




* [pve-devel] [PATCH docs 4/7] pvecm: extend cluster Requirements
  2025-04-29 13:57 [pve-devel] [PATCH docs, manager 0/7] add and expand on cluster recommendations Aaron Lauterer
                   ` (2 preceding siblings ...)
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 3/7] pvecm: add anchor for corosync external vote support Aaron Lauterer
@ 2025-04-29 13:57 ` Aaron Lauterer
  2025-05-07 15:22   ` Kevin Schneider
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 5/7] ha-manager: expand requirements Aaron Lauterer
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: Aaron Lauterer @ 2025-04-29 13:57 UTC (permalink / raw)
  To: pve-devel

by expanding on best practices with background information as to how and
why.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 pvecm.adoc | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 3b9cfc4..c50ec15 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -68,9 +68,24 @@ Requirements
 * If you are interested in High Availability, you need to have at
   least three nodes for reliable quorum. All nodes should have the
   same version.
++
+NOTE: For smaller 2-node clusters, the xref:_corosync_external_vote_support[QDevice]
+can be used to provide a 3rd vote.
 
-* We recommend a dedicated NIC for the cluster traffic, especially if
-  you use shared storage.
+* We recommend a dedicated physical NIC for the cluster traffic.
++
+NOTE: The {pve} cluster communication uses the Corosync protocol. It needs consistent
+low latency but not a lot of bandwidth. A dedicated 1 Gbit NIC is enough in
+most situations. It helps to avoid situations where other services can use up
+all the available bandwidth. Which in turn would increase the latency for the
+Corosync packets.
+
+* Configuring additional links for the cluster traffic can help in situations where
+  the dedicated network is down.
++
+NOTE: The Corosync protocol used for the {pve} cluster communication can
+switch between multiple networks by itself and does not rely on a bonded network
+for redundancy. You can configure up to 8 networks to be used by Corosync.
 
 * The root password of a cluster node is required for adding nodes.
 
-- 
2.39.5
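[Editor's sketch: the multi-link setup recommended in the patch above looks roughly like this on the command line. The cluster name and addresses are placeholders, and `man pvecm` remains the authoritative reference for the options.]

```
# Create a cluster with a dedicated primary link and a second link for redundancy.
pvecm create demo-cluster --link0 10.10.10.1 --link1 10.20.20.1

# On a joining node, pass that node's own addresses for both links.
pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.20.20.2

# For a 2-node cluster, a QDevice can provide the third vote.
pvecm qdevice setup 10.10.10.100
```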




* [pve-devel] [PATCH docs 5/7] ha-manager: expand requirements
  2025-04-29 13:57 [pve-devel] [PATCH docs, manager 0/7] add and expand on cluster recommendations Aaron Lauterer
                   ` (3 preceding siblings ...)
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 4/7] pvecm: extend cluster Requirements Aaron Lauterer
@ 2025-04-29 13:57 ` Aaron Lauterer
  2025-04-29 13:57 ` [pve-devel] [PATCH manager 6/7] ui: cluster create: add recommendations for cluster networks Aaron Lauterer
  2025-04-29 13:57 ` [pve-devel] [PATCH manager 7/7] ui: guest import: make sure an external link has target _blank Aaron Lauterer
  6 siblings, 0 replies; 12+ messages in thread
From: Aaron Lauterer @ 2025-04-29 13:57 UTC (permalink / raw)
  To: pve-devel

* make it clear that the corosync/cluster communication is important
* mark hardware watchdogs as optional

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 ha-manager.adoc | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/ha-manager.adoc b/ha-manager.adoc
index 3d6fc4a..592ef96 100644
--- a/ha-manager.adoc
+++ b/ha-manager.adoc
@@ -111,6 +111,11 @@ Requirements
 You must meet the following requirements before you start with HA:
 
 * at least three cluster nodes (to get reliable quorum)
++
+NOTE: A stable {pve} cluster communication is the foundation for the high
+availability feature of {pve}. Follow the recommendations in
+xref:pvecm_cluster_requirements[{pve} cluster requirements] to avoid issues
+like nodes fencing themselves, due to an unreliable cluster communication.
 
 * shared storage for VMs and containers
 
@@ -118,7 +123,7 @@ You must meet the following requirements before you start with HA:
 
 * use reliable “server” components
 
-* hardware watchdog - if not available we fall back to the
+* optional hardware watchdog - if not available we fall back to the
   linux kernel software watchdog (`softdog`)
 
 * optional hardware fencing devices
-- 
2.39.5
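[Editor's sketch: the "at least three cluster nodes (to get reliable quorum)" bullet above boils down to Corosync's strict-majority rule. A minimal Python illustration follows; the function names are illustrative only, not part of any {pve} or corosync API.]

```python
def quorum_threshold(total_votes: int) -> int:
    # Corosync's votequorum grants quorum to a partition holding a
    # strict majority of all configured votes.
    return total_votes // 2 + 1

def has_quorum(votes_present: int, total_votes: int) -> bool:
    return votes_present >= quorum_threshold(total_votes)

# Two-node cluster: losing either node loses quorum.
assert not has_quorum(1, 2)

# Adding a QDevice vote (total 3): one node plus the QDevice keep quorum.
# Likewise, any two nodes of a three-node cluster keep quorum.
assert has_quorum(2, 3)
```

This is why a plain 2-node cluster cannot tolerate any failure, while three votes (three nodes, or two nodes plus a QDevice) tolerate one.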




* [pve-devel] [PATCH manager 6/7] ui: cluster create: add recommendations for cluster networks
  2025-04-29 13:57 [pve-devel] [PATCH docs, manager 0/7] add and expand on cluster recommendations Aaron Lauterer
                   ` (4 preceding siblings ...)
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 5/7] ha-manager: expand requirements Aaron Lauterer
@ 2025-04-29 13:57 ` Aaron Lauterer
  2025-04-29 13:57 ` [pve-devel] [PATCH manager 7/7] ui: guest import: make sure an external link has target _blank Aaron Lauterer
  6 siblings, 0 replies; 12+ messages in thread
From: Aaron Lauterer @ 2025-04-29 13:57 UTC (permalink / raw)
  To: pve-devel

Adding a short list of recommendations regarding the cluster network
right where the cluster is created will hopefully reduce the number of
clusters that don't follow best practices.

We also point the help button at the requirements section, a bit
earlier in the docs than it used to be.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 www/manager6/dc/ClusterEdit.js | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index b56df713..399d2393 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -12,7 +12,7 @@ Ext.define('PVE.ClusterCreateWindow', {
     subject: gettext('Cluster'),
     showTaskViewer: true,
 
-    onlineHelp: 'pvecm_create_cluster',
+    onlineHelp: 'pvecm_cluster_requirements',
 
     items: {
 	xtype: 'inputpanel',
@@ -33,6 +33,19 @@ Ext.define('PVE.ClusterCreateWindow', {
 		    name: 'links',
 		},
 	    ],
+	},
+	{
+	    xtype: 'box',
+	    html: `<ul><li>
+		${gettext('Use a dedicated physical network for the first Corosync link.')}
+	    </li><li>
+		${gettext('Configuring multiple links is recommended for redundancy.')}
+	    </li><li>
+		${Ext.String.format(
+		    gettext('For more information, check the <a target="_blank" href="{0}">reference documentation</a>.'),
+		    Proxmox.Utils.get_help_link('pvecm_cluster_requirements'))}
+	    </li></ul>`,
+	    border: 0,
 	}],
     },
 });
-- 
2.39.5




* [pve-devel] [PATCH manager 7/7] ui: guest import: make sure an external link has target _blank
  2025-04-29 13:57 [pve-devel] [PATCH docs, manager 0/7] add and expand on cluster recommendations Aaron Lauterer
                   ` (5 preceding siblings ...)
  2025-04-29 13:57 ` [pve-devel] [PATCH manager 6/7] ui: cluster create: add recommendations for cluster networks Aaron Lauterer
@ 2025-04-29 13:57 ` Aaron Lauterer
  6 siblings, 0 replies; 12+ messages in thread
From: Aaron Lauterer @ 2025-04-29 13:57 UTC (permalink / raw)
  To: pve-devel

otherwise it will most likely open in the current tab and not in a new
one.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 www/manager6/window/GuestImport.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/window/GuestImport.js b/www/manager6/window/GuestImport.js
index 107afa51..b3421740 100644
--- a/www/manager6/window/GuestImport.js
+++ b/www/manager6/window/GuestImport.js
@@ -961,7 +961,7 @@ Ext.define('PVE.window.GuestImport', {
 		'guest-is-running': gettext('Virtual guest seems to be running on source host. Import might fail or have inconsistent state!'),
 		'efi-state-lost': Ext.String.format(
 		    gettext('EFI state cannot be imported, you may need to reconfigure the boot order (see {0})'),
-		    '<a href="https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries">OVMF/UEFI Boot Entries</a>',
+		    '<a target="_blank" href="https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries">OVMF/UEFI Boot Entries</a>',
 		),
 		'ova-needs-extracting': gettext('Importing an OVA temporarily requires extra space on the working storage while extracting the contained disks for further processing.'),
 	    };
-- 
2.39.5




* Re: [pve-devel] [PATCH docs 1/7] pvecm: drop notes about old version incompatibilities
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 1/7] pvecm: drop notes about old version incompatibilities Aaron Lauterer
@ 2025-05-07 15:22   ` Kevin Schneider
  2025-05-12 12:01     ` Thomas Lamprecht
  0 siblings, 1 reply; 12+ messages in thread
From: Kevin Schneider @ 2025-05-07 15:22 UTC (permalink / raw)
  To: pve-devel

On 29.04.25 15:57, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>   pvecm.adoc | 12 ------------
>   1 file changed, 12 deletions(-)
>
> diff --git a/pvecm.adoc b/pvecm.adoc
> index 18f7389..47e42e2 100644
> --- a/pvecm.adoc
> +++ b/pvecm.adoc
> @@ -77,18 +77,6 @@ Requirements
>   * Online migration of virtual machines is only supported when nodes have CPUs
>     from the same vendor. It might work otherwise, but this is never guaranteed.
>   
> -NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
> -nodes.
> -
> -NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
> -not supported as a production configuration and should only be done temporarily,
> -during an upgrade of the whole cluster from one major version to another.
> -
> -NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
> -cluster protocol (corosync) between {pve} 6.x and earlier versions changed
> -fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
> -upgrade procedure to {pve} 6.0.
> -
>   
>   Preparing Nodes
>   ---------------

In 8.2 we modernized the handling of host keys for SSH connections
between cluster nodes by moving them onto the cluster filesystem. We
also introduced symlinks for the ceph.client.admin.keyring and
ceph.conf files. So <=8.1 and >=8.3 are not compatible.

For the documentation it would probably be best to either include
recent limitations or clearly state that nodes should only have a
single dot release difference between each other, and give examples like:

Perfect: all nodes are up to date

Good: 8.1 and 8.2

Bad: 8.1 and 8.3; 8.1 and 8.2 and 8.3




* Re: [pve-devel] [PATCH docs 4/7] pvecm: extend cluster Requirements
  2025-04-29 13:57 ` [pve-devel] [PATCH docs 4/7] pvecm: extend cluster Requirements Aaron Lauterer
@ 2025-05-07 15:22   ` Kevin Schneider
  2025-05-08 11:54     ` Robin Christ
  0 siblings, 1 reply; 12+ messages in thread
From: Kevin Schneider @ 2025-05-07 15:22 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

On 29.04.25 15:57, Aaron Lauterer wrote:

> It helps to avoid situations where other services can use up
> +all the available bandwidth. Which in turn would increase the latency for the
> +Corosync packets.
> +
> +* Configuring additional links for the cluster traffic can help in situations where
> +  the dedicated network is down.
> ++

Might be a nitpick, but this can be more direct and shorter. I suggest:

It avoids situations where other services use up the available
bandwidth, thus increasing the latency of the Corosync packets.

Additional links for cluster traffic offer redundancy in case the
dedicated network is down.

> +NOTE: The Corosync protocol used for the {pve} cluster communication can
> +switch between multiple networks by itself and does not rely on a bonded network
> +for redundancy. You can configure up to 8 networks to be used by Corosync.

IMO this isn't strict enough and we should emphasize the importance
of the problem. I would go for:

To ensure reliable Corosync redundancy, it's essential to use at least
two separate physical and logical networks. A single bonded interface
does not provide Corosync redundancy. When a bonded interface fails
without redundancy, it can lead to asymmetric communication, causing
all nodes to lose quorum, even if more than half of them can still
communicate with each other.




* Re: [pve-devel] [PATCH docs 4/7] pvecm: extend cluster Requirements
  2025-05-07 15:22   ` Kevin Schneider
@ 2025-05-08 11:54     ` Robin Christ
  0 siblings, 0 replies; 12+ messages in thread
From: Robin Christ @ 2025-05-08 11:54 UTC (permalink / raw)
  To: pve-devel


On 07.05.25 17:22, Kevin Schneider wrote:
> IMO this isn't strict enough and we should empathize on the importance 
> of the problem. I would go for
> 
> To ensure reliable Corosync redundancy, it's essential to use at least 
> two separate physical and logical networks. Single bonded interfaces do 
> not provide Corosync redundancy. When a bonded interface fails without 
> redundancy, it can lead to asymmetric communication, causing all nodes 
> to lose quorum—even if more than half of them can still communicate with 
> each other.


A bond on the interface together with MLAG'd switches CAN provide
further resiliency in case of switch or single NIC PHY failure, though
it does not protect against total failure of the NIC, of course.


I think adding a "typical topologies" or "example topologies" section
to the docs might be a good idea?


Below is my personal, opinionated recommendation after deploying a
good number of Proxmox clusters. Of course I don't expect everyone to
agree with this... But hopefully it can serve as a starting point?


Typical topologies:

In most cases, a server for a Proxmox cluster will have at least two 
physical NICs. One is usually a low or medium speed dual-port onboard 
NIC (1GBase-T or 10GBase-T). The other one is typically a medium or high 
speed add-in PCIe NIC (e.g. 10G SFP+, 40G QSFP+, 25G SFP28, 100G 
QSFP28). There may be more NICs depending on the specific use case, e.g. 
a separate NIC for Ceph Cluster (private, replication, back-side) traffic.

In such a setup, it is recommended to reserve the low or medium speed 
onboard NICs for cluster traffic (and potentially management purposes). 
These NICs should be connected using a switch.
Although for very small clusters (3 nodes) and a dual-port NIC a ring 
topology could be used to connect the nodes together, this is not 
recommended as it makes later expansion more troublesome.

It is recommended to use a physically separate switch just for the 
cluster network. If your main switch is the only way for nodes to 
communicate, failure of this switch will take out your entire cluster 
with potentially catastrophic consequences.

For single-port onboard NICs there are no further design decisions to 
make. However, onboard NICs are almost always dual port, which allows 
some more freedom in the design of the cluster network.

Design of the dedicated cluster network:

a) Two separate cluster switches, switches support MLAG or Stacking / 
Virtual Chassis
This is an ideal scenario, in which you deploy two managed switches in 
an MLAG or Stacking / Virtual Chassis configuration. MLAG or Stacking / 
Virtual Chassis requires the switches to have a link between them, 
called IPL ("Inter Peer Link"). MLAG or Stacking / Virtual Chassis makes 
two switches behave as if they were one, but if one switch fails, the 
remaining one will still work and take over seamlessly!

Each cluster node is connected to both switches. Both NIC ports on each 
node are bonded together (LACP recommended).

This topology provides a very good degree of resiliency.

The bond is configured as Ring0 for corosync.


b) Two separate cluster switches, switches DO NOT support MLAG or 
Stacking / Virtual Chassis

In this scenario you deploy two separate switches (potentially 
unmanaged). There should not be a link between the switches, as this can 
easily lead to loops and makes the entire configuration more complex.

Each cluster node is connected to both switches, but the NIC ports are 
not bonded together. Typically, both NIC ports will be in separate IP 
subnets.

This topology provides a slightly smaller degree of resiliency compared 
to MLAG.

One switch / broadcast domain is configured as Ring0 for corosync, the 
other one is configured as Ring1.


c) Single separate cluster switch

If you only want to deploy a single switch that is reserved for cluster 
traffic, you can either use a single NIC port on each node, or both 
bonded together. It will not make much of a difference, as bonding will 
only protect against single PHY / port failure.

The interface is configured as Ring0 for corosync.


Usage of the other NICs for redundancy purposes:
It is recommended to add the other NICs / networks in the system as
backup links / additional rings to corosync. Bad connectivity over a
potentially congested storage network is better than no connectivity
at all when the dedicated cluster network has failed and there is no
backup.
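[Editor's sketch: topology b) above maps onto corosync's knet links roughly as in the fragment below. Node names and subnets are invented for illustration; corosync.conf(5) is the real reference.]

```
nodelist {
  node {
    name: node1
    nodeid: 1
    ring0_addr: 10.10.10.1   # first switch / subnet (Ring0)
    ring1_addr: 10.20.20.1   # second switch / subnet (Ring1)
  }
  node {
    name: node2
    nodeid: 2
    ring0_addr: 10.10.10.2
    ring1_addr: 10.20.20.2
  }
}
```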




* Re: [pve-devel] [PATCH docs 1/7] pvecm: drop notes about old version incompatibilities
  2025-05-07 15:22   ` Kevin Schneider
@ 2025-05-12 12:01     ` Thomas Lamprecht
  0 siblings, 0 replies; 12+ messages in thread
From: Thomas Lamprecht @ 2025-05-12 12:01 UTC (permalink / raw)
  To: Proxmox VE development discussion, Kevin Schneider

Am 07.05.25 um 17:22 schrieb Kevin Schneider:
> On 29.04.25 15:57, Aaron Lauterer wrote:
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>>   pvecm.adoc | 12 ------------
>>   1 file changed, 12 deletions(-)
>>
>> diff --git a/pvecm.adoc b/pvecm.adoc
>> index 18f7389..47e42e2 100644
>> --- a/pvecm.adoc
>> +++ b/pvecm.adoc
>> @@ -77,18 +77,6 @@ Requirements
>>   * Online migration of virtual machines is only supported when nodes have CPUs
>>     from the same vendor. It might work otherwise, but this is never guaranteed.
>>   
>> -NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
>> -nodes.
>> -
>> -NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
>> -not supported as a production configuration and should only be done temporarily,
>> -during an upgrade of the whole cluster from one major version to another.
>> -
>> -NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
>> -cluster protocol (corosync) between {pve} 6.x and earlier versions changed
>> -fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
>> -upgrade procedure to {pve} 6.0.
>> -
>>   
>>   Preparing Nodes
>>   ---------------
> 
>   In 8.2 we modernized the handling of host keys for SSH connections 
> between cluster by moving them onto the cluster filesystem. We also 
> introduced symlinks for the ceph.client.admin.keyring and ceph.conf 
> files. So =<8.1 and =>8.3 are not compatible.
> 
> For the documentation it would probably the best to either include 
> recent limitations or clearly state, that nodes should only have a 
> single dot release difference between each other  and give examples like:
> 
> Perfect: All Nodes are up to date
> 
> Good: 8.1 and 8.2
> 
> Bad: 8.1 and 8.3 ; 8.1 and 8.2 and 8.3
That's not generally true, though; the following is:
- we try hard to always allow forward-upgrades.
- in the end all nodes should run the same version.

That means admins should upgrade all nodes, one after another, and if
they (for whatever reason) did not do that, they cannot expect full
compatibility no matter what the version difference is, but they can
expect that upgrading the older node(s) will work and will make the
cluster fully compatible again.


