From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <aderumier@odiso.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id C0B84632A9
 for <pve-devel@lists.proxmox.com>; Mon, 21 Sep 2020 01:55:09 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id A9B1720A9C
 for <pve-devel@lists.proxmox.com>; Mon, 21 Sep 2020 01:55:09 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [89.248.211.110])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id 2AE4B20A8F
 for <pve-devel@lists.proxmox.com>; Mon, 21 Sep 2020 01:55:07 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id DEF2E1B83BE5;
 Mon, 21 Sep 2020 01:54:59 +0200 (CEST)
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 1-ZyC-z83esp; Mon, 21 Sep 2020 01:54:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id C489A1B83E0F;
 Mon, 21 Sep 2020 01:54:59 +0200 (CEST)
X-Virus-Scanned: amavisd-new at mailpro.odiso.com
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Z8v7trmqyE7u; Mon, 21 Sep 2020 01:54:59 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [10.1.31.111])
 by mailpro.odiso.net (Postfix) with ESMTP id B0C2E1B83BE5;
 Mon, 21 Sep 2020 01:54:59 +0200 (CEST)
Date: Mon, 21 Sep 2020 01:54:59 +0200 (CEST)
From: Alexandre DERUMIER <aderumier@odiso.com>
To: Thomas Lamprecht <t.lamprecht@proxmox.com>
Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Message-ID: <335862527.964527.1600646099489.JavaMail.zimbra@odiso.com>
In-Reply-To: <501f031f-3f1b-0633-fab3-7fcb7fdddaf5@proxmox.com>
References: <216436814.339545.1599142316781.JavaMail.zimbra@odiso.com>
 <2054513461.868164.1600262132255.JavaMail.zimbra@odiso.com>
 <2bdde345-b966-d393-44d1-e5385821fbad@proxmox.com>
 <65105078.871552.1600269422383.JavaMail.zimbra@odiso.com>
 <1600333910.bmtyynl8cl.astroid@nora.none>
 <475756962.894651.1600336772315.JavaMail.zimbra@odiso.com>
 <86855479.894870.1600336947072.JavaMail.zimbra@odiso.com>
 <501f031f-3f1b-0633-fab3-7fcb7fdddaf5@proxmox.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Mailer: Zimbra 8.8.12_GA_3866 (ZimbraWebClient - GC83 (Linux)/8.8.12_GA_3844)
Thread-Topic: corosync bug: cluster break after 1 node clean shutdown
Thread-Index: xPNtozNbu5qmgHM5Di8DkzaYnky2hw==
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.076 Adjusted score from AWL reputation of From: address
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 RCVD_IN_DNSWL_NONE     -0.0001 Sender listed at https://www.dnswl.org/,
 no trust
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean
 shutdown
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Sun, 20 Sep 2020 23:55:09 -0000

Hi,

I have done a new test, this time with "systemctl stop corosync", wait 15s, "systemctl start corosync", wait 15s.

I was able to reproduce it at corosync stop on node1: one second later, /etc/pve was locked on all other nodes.


I started corosync again 10 minutes later on node1, and /etc/pve became writeable again on all nodes.
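(To see when /etc/pve blocks on the other nodes, a timed write loop along these lines can be used; this is only a sketch, the 5s timeout is an arbitrary threshold, and the real test scripts are quoted below.)

#!/bin/bash
# sketch: log when a write to /etc/pve no longer completes within 5s
while true; do
    if ! timeout 5 sh -c 'echo test > /etc/pve/test'; then
        echo "$(date '+%T') /etc/pve write blocked"
    fi
    sleep 1
done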



node1: corosync stop: 01:26:50
node2: /etc/pve locked: 01:26:51

http://odisoweb1.odiso.net/corosync-stop.log


pmxcfs: bt full of all threads:

https://gist.github.com/aderumier/c45af4ee73b80330367e416af858bc65

pmxcfs: coredump: http://odisoweb1.odiso.net/core.17995.gz
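(For reference, this kind of dump can be gathered roughly as below, assuming gdb and the pmxcfs debug symbols are installed; it is a sketch, not necessarily the exact commands I used.)

# attach to the running pmxcfs, dump all thread backtraces and write a core file
gdb -p "$(pidof pmxcfs)" -batch \
    -ex 'thread apply all bt full' \
    -ex 'gcore /tmp/core.pmxcfs' \
    -ex detach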


node1: corosync start: 01:35:36
http://odisoweb1.odiso.net/corosync-start.log





BTW, I have been contacted by PM on the forum by a user following this mailing-list
thread, and he had exactly the same problem with a 7-node cluster recently
(shutting down 1 node, /etc/pve was locked until the node was restarted).



----- Original Message -----
From: "Thomas Lamprecht" <t.lamprecht@proxmox.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, "aderumier" <aderumier@odiso.com>
Sent: Thursday, 17 September 2020 13:35:55
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

On 9/17/20 12:02 PM, Alexandre DERUMIER wrote:
> if needed, here is my test script to reproduce it

thanks, I'm now using this specific one; I had a similar one (but with all nodes
writing) running here for ~ two hours without luck yet, let's see how this behaves.

>
> node1 (restart corosync until node2 doesn't send the timestamp anymore)
> -----
>
> #!/bin/bash
>
> for i in $(seq 10000); do
>     now=$(date +"%T")
>     echo "restart corosync : $now"
>     systemctl restart corosync
>     for j in {1..59}; do
>         last=$(cat /tmp/timestamp)
>         curr=$(date '+%s')
>         diff=$((curr - last))
>         if [ $diff -gt 20 ]; then
>             echo "too old"
>             exit 0
>         fi
>         sleep 1
>     done
> done
>
>
> node2 (write to /etc/pve/test each second, then send the last timestamp to node1)
> -----
> #!/bin/bash
> for i in {1..10000}; do
>     now=$(date +"%T")
>     echo "Current time : $now"
>     curr=$(date '+%s')
>     ssh root@node1 "echo $curr > /tmp/timestamp"
>     echo "test" > /etc/pve/test
>     sleep 1
> done
>