From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <aderumier@odiso.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 7DB5F61567
 for <pve-devel@pve.proxmox.com>; Sun,  6 Sep 2020 07:36:43 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 73C7729C52
 for <pve-devel@pve.proxmox.com>; Sun,  6 Sep 2020 07:36:13 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [89.248.211.110])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id 9FEC329C3F
 for <pve-devel@pve.proxmox.com>; Sun,  6 Sep 2020 07:36:12 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id BB12918984B3;
 Sun,  6 Sep 2020 07:36:10 +0200 (CEST)
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id NuXiHa9JwAbd; Sun,  6 Sep 2020 07:36:10 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id 9F99718984BA;
 Sun,  6 Sep 2020 07:36:10 +0200 (CEST)
X-Virus-Scanned: amavisd-new at mailpro.odiso.com
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id vrBplpMRwbRf; Sun,  6 Sep 2020 07:36:10 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [10.1.31.111])
 by mailpro.odiso.net (Postfix) with ESMTP id 89E9318984B3;
 Sun,  6 Sep 2020 07:36:10 +0200 (CEST)
Date: Sun, 6 Sep 2020 07:36:10 +0200 (CEST)
From: Alexandre DERUMIER <aderumier@odiso.com>
To: dietmar <dietmar@proxmox.com>
Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>, 
 pve-devel <pve-devel@pve.proxmox.com>
Message-ID: <570223166.391607.1599370570342.JavaMail.zimbra@odiso.com>
In-Reply-To: <469910091.758.1599366116137@webmail.proxmox.com>
References: <216436814.339545.1599142316781.JavaMail.zimbra@odiso.com>
 <1044807310.366666.1599222580644.JavaMail.zimbra@odiso.com>
 <481953113.753.1599234165778@webmail.proxmox.com>
 <1667839988.383835.1599312761359.JavaMail.zimbra@odiso.com>
 <665305060.757.1599319409105@webmail.proxmox.com>
 <1710924670.385348.1599327014568.JavaMail.zimbra@odiso.com>
 <469910091.758.1599366116137@webmail.proxmox.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Mailer: Zimbra 8.8.12_GA_3866 (ZimbraWebClient - GC83 (Linux)/8.8.12_GA_3844)
Thread-Topic: corosync bug: cluster break after 1 node clean shutdown
Thread-Index: UtQFHu/PWBabUjDhVoQhub8d5y1/Jw==
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.029 Adjusted score from AWL reputation of From: address
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 RCVD_IN_DNSWL_NONE     -0.0001 Sender listed at https://www.dnswl.org/,
 no trust
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean
 shutdown
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Sun, 06 Sep 2020 05:36:43 -0000

>>But the pve logs look ok, and there is no indication
>>that we stopped updating the watchdog. So why did the
>>watchdog trigger? Maybe an IPMI bug?

do you mean an IPMI bug on all 13 servers at the same time?
(I also have 2 Supermicro servers in this cluster, but they use the same IPMI watchdog driver (ipmi_watchdog).)



I had the same kind of bug once (when stopping a server) on another cluster, 6 months ago.
That was without HA and with a different version of corosync, and that time I really did see a quorum split in the corosync logs of the servers.


I'll try to reproduce it with a virtual cluster of 14 nodes (I don't have enough hardware).


Could it be a bug in the Proxmox HA code, where the watchdog is no longer reset by the LRM?
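
(For reference, a minimal sketch in C of the mechanism I mean; this is not the actual pve-ha-lrm or watchdog-mux code, and the device path and 5-second interval are only illustrative assumptions. The point is that some process has to keep writing to the watchdog device before the timeout expires, otherwise the hardware resets the node.)

/* Illustrative sketch only (NOT the real pve-ha-lrm / watchdog-mux code):
 * opening a Linux watchdog device arms it, and the daemon must then keep
 * "petting" it. If it ever stops writing within the timeout, the hardware
 * (here the IPMI/iDRAC watchdog behind ipmi_watchdog) resets the node. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);  /* opening the device arms the watchdog */
    if (fd < 0)
        return 1;
    for (;;) {
        write(fd, "\0", 1);  /* reset ("pet") the watchdog timer */
        sleep(5);            /* must stay well below the configured timeout */
    }
}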

----- Original Message -----
From: "dietmar" <dietmar@proxmox.com>
To: "aderumier" <aderumier@odiso.com>
Cc: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Sunday, 6 September 2020 06:21:55
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

> >>So you are using ipmi hardware watchdog?
>
> yes, I'm using dell idrac ipmi card watchdog

But the pve logs look ok, and there is no indication
that we stopped updating the watchdog. So why did the
watchdog trigger? Maybe an IPMI bug?