Date: Tue, 22 Sep 2020 07:43:53 +0200 (CEST)
From: Alexandre DERUMIER <aderumier@odiso.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: Thomas Lamprecht <t.lamprecht@proxmox.com>
Message-ID: <7286111.1004215.1600753433567.JavaMail.zimbra@odiso.com>
In-Reply-To: <335862527.964527.1600646099489.JavaMail.zimbra@odiso.com>
References: <216436814.339545.1599142316781.JavaMail.zimbra@odiso.com>
 <2bdde345-b966-d393-44d1-e5385821fbad@proxmox.com>
 <65105078.871552.1600269422383.JavaMail.zimbra@odiso.com>
 <1600333910.bmtyynl8cl.astroid@nora.none>
 <475756962.894651.1600336772315.JavaMail.zimbra@odiso.com>
 <86855479.894870.1600336947072.JavaMail.zimbra@odiso.com>
 <501f031f-3f1b-0633-fab3-7fcb7fdddaf5@proxmox.com>
 <335862527.964527.1600646099489.JavaMail.zimbra@odiso.com>
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

I have done a test with "kill -9 <pid of corosync>", and I see around a 20s
hang on the other nodes, but after that /etc/pve becomes available again.
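
Roughly, the test was the following (a sketch, not the exact commands I ran):

# on node1: hard-kill corosync so its normal shutdown path never runs
kill -9 $(pidof corosync)

# on node2: probe /etc/pve writability once per second; the timeout catches
# the hang, since a write simply blocks while pmxcfs is stuck
while true; do
    date
    timeout 5 sh -c 'echo test > /etc/pve/test' || echo "/etc/pve locked"
    sleep 1
done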


So it's really something that happens while corosync is in its shutdown phase
and pmxcfs is still running.

So, for now, as a workaround, I have changed

/lib/systemd/system/pve-cluster.service

#Wants=corosync.service
#Before=corosync.service
Requires=corosync.service
After=corosync.service


This way, at shutdown, pve-cluster is stopped before corosync, and if I
restart corosync, pve-cluster is stopped first as well.
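
Note that an edit under /lib/systemd/system is lost on package upgrades; to
keep the workaround across updates, shadowing the unit from /etc should work
(untested sketch):

# units in /etc/systemd/system shadow those shipped in /lib/systemd/system,
# so a copy there survives pve-cluster package upgrades
cp /lib/systemd/system/pve-cluster.service /etc/systemd/system/
# apply the Requires=/After= change in the copy, then reload unit files
systemctl daemon-reload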




----- Original Message -----
From: "aderumier" <aderumier@odiso.com>
To: "Thomas Lamprecht" <t.lamprecht@proxmox.com>
Cc: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>
Sent: Monday, 21 September 2020 01:54:59
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

Hi,

I have done a new test, this time with "systemctl stop corosync", waiting 15s,
then "systemctl start corosync", waiting 15s.

I was able to reproduce it: at corosync stop on node1, one second later
/etc/pve was locked on all the other nodes.


I started corosync again 10 minutes later on node1, and /etc/pve became
writable again on all nodes.
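
For clarity, the sequence on node1 was essentially (sketch):

# stop corosync, give the cluster time to react, then start it again
systemctl stop corosync
sleep 15
systemctl start corosync
sleep 15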



node1: corosync stop: 01:26:50
node2: /etc/pve locked: 01:26:51

http://odisoweb1.odiso.net/corosync-stop.log


pmxcfs, bt full of all threads:

https://gist.github.com/aderumier/c45af4ee73b80330367e416af858bc65

pmxcfs coredump: http://odisoweb1.odiso.net/core.17995.gz
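
For anyone who wants to inspect the core themselves, something like this
should reproduce the backtrace (the binary path is the usual install
location, adjust if needed):

# unpack the core, then dump all thread backtraces against the pmxcfs binary
gunzip core.17995.gz
gdb -batch -ex "thread apply all bt full" /usr/bin/pmxcfs core.17995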


node1: corosync start: 01:35:36
http://odisoweb1.odiso.net/corosync-start.log





BTW, I have been contacted by PM on the forum by a user following this
mailing list thread, and he recently had exactly the same problem with a
7-node cluster (after shutting down one node, /etc/pve was locked until that
node was restarted).



----- Original Message -----
From: "Thomas Lamprecht" <t.lamprecht@proxmox.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, "aderumier" <aderumier@odiso.com>
Sent: Thursday, 17 September 2020 13:35:55
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

On 9/17/20 12:02 PM, Alexandre DERUMIER wrote:
> if needed, here is my test script to reproduce it

thanks, I'm now using this specific one; I had a similar one (but with all
nodes writing) running here for ~ two hours without luck yet, let's see how
this behaves.

>
> node1 (restart corosync until node2 stops sending the timestamp):
> -----
>
> #!/bin/bash
>
> for i in $(seq 10000); do
>     now=$(date +"%T")
>     echo "restart corosync : $now"
>     systemctl restart corosync
>     # give node2 up to ~60s; it refreshes /tmp/timestamp every second
>     for j in {1..59}; do
>         last=$(cat /tmp/timestamp)
>         curr=$(date '+%s')
>         diff=$((curr - last))
>         # no update for >20s means node2's write to /etc/pve is stuck
>         if [ "$diff" -gt 20 ]; then
>             echo "too old"
>             exit 0
>         fi
>         sleep 1
>     done
> done
>
>
> node2 (write to /etc/pve/test each second, then send the last timestamp to node1):
> -----
> #!/bin/bash
> for i in {1..10000}; do
>     now=$(date +"%T")
>     echo "Current time : $now"
>     curr=$(date '+%s')
>     # push the current epoch to node1 so it can detect when we stall
>     ssh root@node1 "echo $curr > /tmp/timestamp"
>     # this write blocks when /etc/pve is locked, so the updates stop
>     echo "test" > /etc/pve/test
>     sleep 1
> done
>


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel