public inbox for pve-devel@lists.proxmox.com
* [PATCH docs] qdevice: correct qdevice partition tie breaking section.
@ 2026-02-20 12:57 Manuel Federanko
  2026-02-20 13:27 ` Maximiliano Sandoval
  0 siblings, 1 reply; 2+ messages in thread
From: Manuel Federanko @ 2026-02-20 12:57 UTC (permalink / raw)
  To: pve-devel

The partition that receives the QDevice's vote is not chosen randomly,
but depends on the configuration of the qdevice in corosync.conf.

A partner asked about predictable vote-casting behavior of the QDevice,
and our documentation did not align with corosync's.

The `tie_breaker` option controls which partition is chosen for the
vote; this was verified on a 2-node test cluster.

Signed-off-by: Manuel Federanko <m.federanko@proxmox.com>
---
 pvecm.adoc | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 0ed1bd2..899a4de 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -1266,8 +1266,10 @@ Tie Breaking
 ^^^^^^^^^^^^
 
 In case of a tie, where two same-sized cluster partitions cannot see each other
-but can see the QDevice, the QDevice chooses one of those partitions randomly
-and provides a vote to it.
+but can see the QDevice, the QDevice chooses the partition containing the node
+with the lowest node id and provides its vote to it. This behavior can be tuned
+with the configuration option `tie_breaker` (see `man corosync-qdevice`) and
+requires a restart of `corosync-qdevice.service` on all nodes.
 
 Possible Negative Implications
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-- 
2.47.3
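
For context, a minimal sketch of the relevant `quorum` section in
corosync.conf, assuming a hypothetical QNetd server address (10.0.0.100); the
`tie_breaker` values `lowest`, `highest`, or a specific node id are documented
in `man corosync-qdevice`:

```
quorum {
    provider: corosync_votequorum
    device {
        model: net
        votes: 1
        net {
            # hypothetical QNetd server address
            host: 10.0.0.100
            algorithm: ffsplit
            # which partition wins a tie: lowest (default), highest,
            # or an explicit node id
            tie_breaker: lowest
        }
    }
}
```

As the patch notes, changing this option only takes effect after restarting
`corosync-qdevice.service` on all nodes.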



