From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <aderumier@odiso.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 379496162A
 for <pve-devel@pve.proxmox.com>; Sun,  6 Sep 2020 10:44:10 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 26F882A71D
 for <pve-devel@pve.proxmox.com>; Sun,  6 Sep 2020 10:43:40 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [89.248.211.110])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id B52242A706
 for <pve-devel@pve.proxmox.com>; Sun,  6 Sep 2020 10:43:37 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id 681F4189A0F5;
 Sun,  6 Sep 2020 10:43:37 +0200 (CEST)
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id YBhIJWpRkEz0; Sun,  6 Sep 2020 10:43:37 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id 4DCE8189A0F9;
 Sun,  6 Sep 2020 10:43:37 +0200 (CEST)
X-Virus-Scanned: amavisd-new at mailpro.odiso.com
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id oJndjNFMpG6B; Sun,  6 Sep 2020 10:43:37 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [10.1.31.111])
 by mailpro.odiso.net (Postfix) with ESMTP id 39624189A0F5;
 Sun,  6 Sep 2020 10:43:37 +0200 (CEST)
Date: Sun, 6 Sep 2020 10:43:36 +0200 (CEST)
From: Alexandre DERUMIER <aderumier@odiso.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: dietmar <dietmar@proxmox.com>, pve-devel <pve-devel@pve.proxmox.com>
Message-ID: <1059698258.392627.1599381816979.JavaMail.zimbra@odiso.com>
In-Reply-To: <570223166.391607.1599370570342.JavaMail.zimbra@odiso.com>
References: <216436814.339545.1599142316781.JavaMail.zimbra@odiso.com>
 <1044807310.366666.1599222580644.JavaMail.zimbra@odiso.com>
 <481953113.753.1599234165778@webmail.proxmox.com>
 <1667839988.383835.1599312761359.JavaMail.zimbra@odiso.com>
 <665305060.757.1599319409105@webmail.proxmox.com>
 <1710924670.385348.1599327014568.JavaMail.zimbra@odiso.com>
 <469910091.758.1599366116137@webmail.proxmox.com>
 <570223166.391607.1599370570342.JavaMail.zimbra@odiso.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Mailer: Zimbra 8.8.12_GA_3866 (ZimbraWebClient - GC83 (Linux)/8.8.12_GA_3844)
Thread-Topic: corosync bug: cluster break after 1 node clean shutdown
Thread-Index: UtQFHu/PWBabUjDhVoQhub8d5y1/Jxgnhekc
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.024 Adjusted score from AWL reputation of From: address
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 RCVD_IN_DNSWL_NONE     -0.0001 Sender listed at https://www.dnswl.org/,
 no trust
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean
 shutdown
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Sun, 06 Sep 2020 08:44:10 -0000

Maybe something interesting: the only surviving node was node7, and it was the CRM master.

I'm also seeing the CRM disabling the watchdog, and also some "loop take too long" messages.



(some migration logs from node2 to node1 before the maintenance)
Sep  3 10:36:29 m6kvm7 pve-ha-crm[16196]: service 'vm:992': state changed from 'migrate' to 'started'  (node = m6kvm1)
Sep  3 10:36:29 m6kvm7 pve-ha-crm[16196]: service 'vm:993': state changed from 'migrate' to 'started'  (node = m6kvm1)
Sep  3 10:36:29 m6kvm7 pve-ha-crm[16196]: service 'vm:997': state changed from 'migrate' to 'started'  (node = m6kvm1)
....

Sep  3 10:40:41 m6kvm7 pve-ha-crm[16196]: node 'm6kvm2': state changed from 'online' => 'unknown'
Sep  3 10:40:50 m6kvm7 pve-ha-crm[16196]: got unexpected error - error during cfs-locked 'domain-ha' operation: no quorum!
Sep  3 10:40:51 m6kvm7 pve-ha-lrm[16140]: loop take too long (87 seconds)
Sep  3 10:40:51 m6kvm7 pve-ha-crm[16196]: loop take too long (92 seconds)
Sep  3 10:40:51 m6kvm7 pve-ha-crm[16196]: lost lock 'ha_manager_lock - cfs lock update failed - Permission denied
Sep  3 10:40:51 m6kvm7 pve-ha-lrm[16140]: lost lock 'ha_agent_m6kvm7_lock - cfs lock update failed - Permission denied
Sep  3 10:40:56 m6kvm7 pve-ha-lrm[16140]: status change active => lost_agent_lock
Sep  3 10:40:56 m6kvm7 pve-ha-crm[16196]: status change master => lost_manager_lock
Sep  3 10:40:56 m6kvm7 pve-ha-crm[16196]: watchdog closed (disabled)
Sep  3 10:40:56 m6kvm7 pve-ha-crm[16196]: status change lost_manager_lock => wait_for_quorum
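
Something like the script below could be used to pull these pve-ha-crm / pve-ha-lrm events out of each node's syslog and interleave them by timestamp, to compare the sequence across all the nodes. It's only a rough sketch: the ./logs/m6kvmN.log layout is an assumption (wherever the per-node syslogs get copied), and the patterns are just the messages visible in the excerpt above.

import glob
import re

# Messages of interest from pve-ha-crm / pve-ha-lrm (taken from the excerpt above)
EVENT_RE = re.compile(
    r"pve-ha-(crm|lrm)\[\d+\]: ("
    r"loop take too long|lost lock|status change|watchdog|"
    r"node '[^']*': state changed|got unexpected error)"
)

events = []
# Assumed layout: one syslog copy per node, e.g. ./logs/m6kvm1.log ... ./logs/m6kvm14.log
for path in sorted(glob.glob("./logs/m6kvm*.log")):
    node = path.rsplit("/", 1)[-1].removesuffix(".log")
    with open(path, errors="replace") as f:
        for line in f:
            if EVENT_RE.search(line):
                events.append((line.rstrip(), node))

# Syslog lines start with "Sep  3 10:40:41 ...", so a plain lexical sort is
# good enough to interleave the nodes chronologically within a single day.
for line, node in sorted(events):
    print(f"{node}: {line}")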



Other nodes' timing
--------------------

10:39:16 -> node2 shutdown, leaves corosync

10:40:25 -> other nodes rebooted by watchdog
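
Just to put numbers on it: that is a 69 second gap between node2 leaving corosync and the other nodes rebooting, which would roughly fit a ~60 second watchdog timeout plus one LRM/CRM loop interval (assuming the default HA watchdog timeout; the configured value would need checking). Back-of-the-envelope check:

from datetime import datetime

# timestamps from the timeline above (same day as the Sep 3 logs)
node2_leaves   = datetime(2020, 9, 3, 10, 39, 16)  # node2 shutdown, leaves corosync
nodes_rebooted = datetime(2020, 9, 3, 10, 40, 25)  # other nodes rebooted by watchdog

print((nodes_rebooted - node2_leaves).total_seconds())  # -> 69.0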


----- Original Message -----
From: "aderumier" <aderumier@odiso.com>
To: "dietmar" <dietmar@proxmox.com>
Cc: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Sunday, 6 September 2020 07:36:10
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

>>But the pve logs look ok, and there is no indication
>>that we stopped updating the watchdog. So why did the
>>watchdog trigger? Maybe an IPMI bug?

Do you mean an IPMI bug on all 13 servers at the same time?
(I also have 2 Supermicro servers in this cluster, but they use the same IPMI watchdog driver (ipmi_watchdog).)



I had the same kind of bug once (when stopping a server) on another cluster, 6 months ago.
That was without HA, but with a different version of corosync, and that time I was really seeing a quorum split in the corosync logs of the servers.


I'll try to reproduce it with a virtual cluster of 14 nodes (I don't have enough hardware).


Could it be a bug in the Proxmox HA code, where the watchdog is not reset by the LRM anymore?

----- Original Message -----
From: "dietmar" <dietmar@proxmox.com>
To: "aderumier" <aderumier@odiso.com>
Cc: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Sunday, 6 September 2020 06:21:55
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

> >>So you are using ipmi hardware watchdog?
>
> yes, I'm using dell idrac ipmi card watchdog

But the pve logs look ok, and there is no indication
that we stopped updating the watchdog. So why did the
watchdog trigger? Maybe an IPMI bug?


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel