From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <aderumier@odiso.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id B62CF6183B
 for <pve-devel@lists.proxmox.com>; Mon, 14 Sep 2020 17:45:09 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id A810512509
 for <pve-devel@lists.proxmox.com>; Mon, 14 Sep 2020 17:45:09 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [89.248.211.110])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id D3ABA124FC
 for <pve-devel@lists.proxmox.com>; Mon, 14 Sep 2020 17:45:06 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id 60BEF17BFBB1;
 Mon, 14 Sep 2020 17:45:06 +0200 (CEST)
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id KCs22xTE0bMm; Mon, 14 Sep 2020 17:45:06 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id 44CA217BFBB8;
 Mon, 14 Sep 2020 17:45:06 +0200 (CEST)
X-Virus-Scanned: amavisd-new at mailpro.odiso.com
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 8zpIlodgbJMz; Mon, 14 Sep 2020 17:45:06 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [10.1.31.111])
 by mailpro.odiso.net (Postfix) with ESMTP id 2A3B117BFBB1;
 Mon, 14 Sep 2020 17:45:06 +0200 (CEST)
Date: Mon, 14 Sep 2020 17:45:05 +0200 (CEST)
From: Alexandre DERUMIER <aderumier@odiso.com>
To: Thomas Lamprecht <t.lamprecht@proxmox.com>
Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>, 
 dietmar <dietmar@proxmox.com>
Message-ID: <1775665592.735772.1600098305930.JavaMail.zimbra@odiso.com>
In-Reply-To: <88fe5075-870d-9197-7c84-71ae8a25e9dd@proxmox.com>
References: <216436814.339545.1599142316781.JavaMail.zimbra@odiso.com>
 <3ee5d9cf-19be-1067-3931-1c54f1c6043a@proxmox.com>
 <1245358354.508169.1599737684557.JavaMail.zimbra@odiso.com>
 <9e2974b8-3c39-0fda-6f73-6677e3d796f4@proxmox.com>
 <1928266603.714059.1600059280338.JavaMail.zimbra@odiso.com>
 <803983196.1499.1600067690947@webmail.proxmox.com>
 <2093781647.723563.1600072074707.JavaMail.zimbra@odiso.com>
 <88fe5075-870d-9197-7c84-71ae8a25e9dd@proxmox.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Mailer: Zimbra 8.8.12_GA_3866 (ZimbraWebClient - GC83 (Linux)/8.8.12_GA_3844)
Thread-Topic: corosync bug: cluster break after 1 node clean shutdown
Thread-Index: Sne6bbvfuCLM2Bp5iG39LUqWfcqWfQ==
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.036 Adjusted score from AWL reputation of From: address
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 RCVD_IN_DNSWL_NONE     -0.0001 Sender listed at https://www.dnswl.org/,
 no trust
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean
 shutdown
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Mon, 14 Sep 2020 15:45:09 -0000

>>Did you get in contact with knet/corosync devs about this?
>>Because, it may well be something their stack is better at handling it, maybe
>>there's also really still a bug, or bad behaviour on some edge cases...

Not yet; I would like to have more info to submit, because for now I'm in the dark.
I have enabled debug logs on my whole cluster in case it happens again.
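
(For reference, by "debug logs" I mean something like this in the logging section of corosync.conf -- syntax written from memory, so please double-check it against the corosync.conf(5) man page:

logging {
  # keep sending to syslog, but at debug level
  to_syslog: yes
  debug: on
}

On PVE that would go into /etc/pve/corosync.conf with config_version bumped, so it gets distributed to all nodes.)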


BTW,
I have noticed something:

corosync is stopped after syslog stops, so at shutdown we never get any corosync logs.


I have edited corosync.service:

- After=network-online.target
+ After=network-online.target syslog.target

and now it's logging correctly.
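
(Side note: to keep that change across package updates, I guess the same could be done with a drop-in instead of editing the unit file directly -- rough sketch, not tested:

# systemctl edit corosync.service
# -> creates /etc/systemd/system/corosync.service.d/override.conf containing:
[Unit]
# After= is additive in drop-ins, so this only appends syslog.target
After=syslog.target

plus a systemctl daemon-reload if needed.)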



Now that logging works, I'm also seeing pmxcfs errors when corosync is stopping.
(But no pmxcfs shutdown log.)

Do you think it would be possible to shut down pmxcfs cleanly first, before stopping corosync?


"
Sep 14 17:23:49 pve corosync[1346]:   [MAIN  ] Node was shut down by a sign=
al
Sep 14 17:23:49 pve systemd[1]: Stopping Corosync Cluster Engine...
Sep 14 17:23:49 pve corosync[1346]:   [SERV  ] Unloading all Corosync servi=
ce engines.
Sep 14 17:23:49 pve corosync[1346]:   [QB    ] withdrawing server sockets
Sep 14 17:23:49 pve corosync[1346]:   [SERV  ] Service engine unloaded: cor=
osync vote quorum service v1.0
Sep 14 17:23:49 pve pmxcfs[1132]: [confdb] crit: cmap_dispatch failed: 2
Sep 14 17:23:49 pve corosync[1346]:   [QB    ] withdrawing server sockets
Sep 14 17:23:49 pve corosync[1346]:   [SERV  ] Service engine unloaded: cor=
osync configuration map access
Sep 14 17:23:49 pve corosync[1346]:   [QB    ] withdrawing server sockets
Sep 14 17:23:49 pve corosync[1346]:   [SERV  ] Service engine unloaded: cor=
osync configuration service
Sep 14 17:23:49 pve pmxcfs[1132]: [status] crit: cpg_dispatch failed: 2
Sep 14 17:23:49 pve pmxcfs[1132]: [status] crit: cpg_leave failed: 2
Sep 14 17:23:49 pve pmxcfs[1132]: [dcdb] crit: cpg_dispatch failed: 2
Sep 14 17:23:49 pve pmxcfs[1132]: [dcdb] crit: cpg_leave failed: 2
Sep 14 17:23:49 pve corosync[1346]:   [QB    ] withdrawing server sockets
Sep 14 17:23:49 pve corosync[1346]:   [SERV  ] Service engine unloaded: cor=
osync cluster quorum service v0.1
Sep 14 17:23:49 pve pmxcfs[1132]: [quorum] crit: quorum_dispatch failed: 2
Sep 14 17:23:49 pve pmxcfs[1132]: [status] notice: node lost quorum
Sep 14 17:23:49 pve corosync[1346]:   [SERV  ] Service engine unloaded: cor=
osync profile loading service
Sep 14 17:23:49 pve corosync[1346]:   [SERV  ] Service engine unloaded: cor=
osync resource monitoring service
Sep 14 17:23:49 pve corosync[1346]:   [SERV  ] Service engine unloaded: cor=
osync watchdog service
Sep 14 17:23:49 pve pmxcfs[1132]: [quorum] crit: quorum_initialize failed: =
2
Sep 14 17:23:49 pve pmxcfs[1132]: [quorum] crit: can't initialize service
Sep 14 17:23:49 pve pmxcfs[1132]: [confdb] crit: cmap_initialize failed: 2
Sep 14 17:23:49 pve pmxcfs[1132]: [confdb] crit: can't initialize service
Sep 14 17:23:49 pve pmxcfs[1132]: [dcdb] notice: start cluster connection
Sep 14 17:23:49 pve pmxcfs[1132]: [dcdb] crit: cpg_initialize failed: 2
Sep 14 17:23:49 pve pmxcfs[1132]: [dcdb] crit: can't initialize service
Sep 14 17:23:49 pve pmxcfs[1132]: [status] notice: start cluster connection
Sep 14 17:23:49 pve pmxcfs[1132]: [status] crit: cpg_initialize failed: 2
Sep 14 17:23:49 pve pmxcfs[1132]: [status] crit: can't initialize service
Sep 14 17:23:50 pve corosync[1346]:   [MAIN  ] Corosync Cluster Engine exit=
ing normally
"



----- Original Message -----
From: "Thomas Lamprecht" <t.lamprecht@proxmox.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, "aderumier" <aderumier@odiso.com>, "dietmar" <dietmar@proxmox.com>
Sent: Monday, September 14, 2020 10:51:03
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

On 9/14/20 10:27 AM, Alexandre DERUMIER wrote:
>> I wonder if something like pacemaker sbd could be implemented in proxmox as an extra layer of protection ?
>
>>> AFAIK Thomas already has patches to implement active fencing.
>
>>> But IMHO this will not solve the corosync problems..
>
> Yes, sure. I'd really like to have 2 different sources of verification, with different paths/software, to avoid this kind of bug.
> (shit happens, Murphy's law ;)

would then need at least three, and if one has a bug flooding the network in
a lot of setups (not having beefy switches like you ;) the other two will be
taken down also, either as memory or the system stack gets overloaded.

>
> as we say in French, "ceinture et bretelles" -> "belt and braces"
>
>
> BTW,
> a user has reported a new corosync problem here:
> https://forum.proxmox.com/threads/proxmox-6-2-corosync-3-rare-and-spontaneous-disruptive-udp-5405-storm-flood.75871
> (Sounds like the bug I had 6 months ago, with corosync flooding a lot of udp packets, but not the same bug as I have here)

Did you get in contact with knet/corosync devs about this?

Because, it may well be something their stack is better at handling it, maybe
there's also really still a bug, or bad behaviour on some edge cases...