From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 17 Sep 2020 12:02:27 +0200 (CEST)
From: Alexandre DERUMIER
To: Proxmox VE development discussion
Cc: Thomas Lamprecht
Message-ID: <86855479.894870.1600336947072.JavaMail.zimbra@odiso.com>
In-Reply-To: <475756962.894651.1600336772315.JavaMail.zimbra@odiso.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown
List-Id: Proxmox VE development discussion

if needed, here are my test scripts to reproduce it:

node1 (restart corosync until node2 stops sending its timestamp)
-----
#!/bin/bash

for i in `seq 10000`; do
    now=$(date +"%T")
    echo "restart corosync : $now"
    systemctl restart corosync
    # watch for up to ~60s; give up once node2's last timestamp is >20s old
    for j in {1..59}; do
        last=$(cat /tmp/timestamp)
        curr=`date '+%s'`
        diff=$(($curr - $last))
        if [ $diff -gt 20 ]; then
            echo "too old"
            exit 0
        fi
        sleep 1
    done
done
-----

node2 (write to /etc/pve/test each second, then send the current timestamp to node1)
-----
#!/bin/bash

for i in {1..10000}; do
    now=$(date +"%T")
    echo "Current time : $now"
    curr=`date '+%s'`
    ssh root@node1 "echo $curr > /tmp/timestamp"
    echo "test" > /etc/pve/test
    sleep 1
done
-----
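
optional node1 add-on (just a sketch, not part of the scripts above: dump some
diagnostics the moment the timestamp goes stale, assuming gdb and the corosync
CLI tools are installed; the output directory is arbitrary)
-----
#!/bin/bash

# hypothetical helper: call it from node1's "too old" branch before exiting,
# so every reproduction leaves a pmxcfs backtrace and the corosync state behind

outdir=/tmp/pmxcfs-hang-$(date '+%s')
mkdir -p "$outdir"

# full backtrace of all pmxcfs threads (same info as a manual "bt full")
gdb --batch -p "$(pidof pmxcfs)" \
    -ex 'thread apply all bt full' > "$outdir/pmxcfs-bt.txt" 2>&1

# corosync quorum/membership and ring status at hang time
corosync-quorumtool -s > "$outdir/quorumtool.txt" 2>&1
corosync-cfgtool -s > "$outdir/cfgtool.txt" 2>&1

# recent corosync and pmxcfs (pve-cluster) logs
journalctl -u corosync -u pve-cluster --since "10 minutes ago" > "$outdir/journal.txt" 2>&1

echo "diagnostics written to $outdir"
-----
the backtrace taken at hang time could then be compared with the coredump
analysis quoted below.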

----- Original Message -----
From: "aderumier"
To: "Proxmox VE development discussion"
Cc: "Thomas Lamprecht"
Sent: Thursday, 17 September 2020 11:59:32
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

Thanks for the update.

>> if we can't reproduce it, we'll have to send you patches/patched debs
>> with increased logging to narrow down what is going on. if we can,
>> then we can hopefully find and fix the issue fast.

No problem, I can install the patched deb if needed.

----- Original Message -----
From: "Fabian Grünbichler"
To: "Proxmox VE development discussion", "Thomas Lamprecht"
Sent: Thursday, 17 September 2020 11:21:45
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

On September 16, 2020 5:17 pm, Alexandre DERUMIER wrote:
> I have reproduced it again, with the coredump this time
>
> restart corosync : 17:05:27
>
> http://odisoweb1.odiso.net/pmxcfs-corosync2.log
>
> bt full
>
> https://gist.github.com/aderumier/466dcc4aedb795aaf0f308de0d1c652b
>
> coredump
>
> http://odisoweb1.odiso.net/core.7761.gz

just a short update on this:

dcdb is stuck in START_SYNC mode, but nodeid 13 hasn't sent a STATE msg
(yet). this looks like either the START_SYNC message to node 13, or the
STATE response from it, got lost or was processed wrong. until the mode
switches to SYNCED (after all states have been received and the state
update went through), regular/normal messages can still be sent, but
incoming normal messages are queued and not processed. this is why the
fuse access blocks: it sends the request out, but the response ends up
in the queue.

status (the other thing running on top of dfsm) got correctly synced up
at the same time, so it's either a dcdb-specific bug, or just bad luck
that one was affected and the other wasn't.

unfortunately, even with debug enabled, the logs don't contain much
information that would help (e.g., we don't log sending/receiving STATE
messages except when they look 'wrong'), so Thomas is trying to
reproduce this using your scenario here to improve turnaround time. if
we can't reproduce it, we'll have to send you patches/patched debs with
increased logging to narrow down what is going on. if we can, then we
can hopefully find and fix the issue fast.

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel