Date: Tue, 29 Sep 2020 10:51:32 +0200
From: Fabian Grünbichler <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
References: <216436814.339545.1599142316781.JavaMail.zimbra@odiso.com>
 <1264529857.1248647.1601018149719.JavaMail.zimbra@odiso.com>
 <1601024991.2yoxd1np1v.astroid@nora.none>
 <1157671072.1253096.1601027205997.JavaMail.zimbra@odiso.com>
 <1601037918.lwca57m6tz.astroid@nora.none>
 <2049133658.1264587.1601051377789.JavaMail.zimbra@odiso.com>
 <1601282139.yqoafefp96.astroid@nora.none>
 <936571798.1335244.1601285700689.JavaMail.zimbra@odiso.com>
 <260722331.1517115.1601308760989.JavaMail.zimbra@odiso.com>
In-Reply-To: <260722331.1517115.1601308760989.JavaMail.zimbra@odiso.com>
MIME-Version: 1.0
User-Agent: astroid/0.15.0 (https://github.com/astroidmail/astroid)
Message-Id: <1601368526.gv9th0ekl0.astroid@nora.none>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

On September 28, 2020 5:59 pm, Alexandre DERUMIER wrote:
> Here is a new test: http://odisoweb1.odiso.net/test5
>
> This occurred at corosync start
>
>
> node1:
> -----
> start corosync: 17:30:19
>
>
> node2: /etc/pve locked
> --------------
> Current time: 17:30:24
>
>
> I have done a backtrace of all nodes at the same time with parallel ssh at 17:35:22
>
> and a coredump of all nodes at the same time with parallel ssh at 17:42:26
>
>
> (Note that this time, /etc/pve was still locked after backtrace/coredump)

okay, so this time two more log lines got printed on the (again) problem-causing
node #13, but it still stops logging at a point where that makes no sense.

I rebuilt the packages:

f318f12e5983cb09d186c2ee37743203f599d103b6abb2d00c78d312b4f12df942d8ed1ff5de6e6c194785d0a81eb881e80f7bbfd4865ca1a5a509acd40f64aa  pve-cluster_6.1-8_amd64.deb
b220ee95303e22704793412e83ac5191ba0e53c2f41d85358a247c248d2a6856e5b791b1d12c36007a297056388224acf4e5a1250ef1dd019aee97e8ac4bcac7  pve-cluster-dbgsym_6.1-8_amd64.deb

with a change to how the logging is set up (I now suspect that some
messages might get dropped if the logging throughput is high enough).
let's hope this gets us the information we need. please repeat test5
with these packages.
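
to double-check the downloads before installing, something along these
lines should work (paths are just examples, adjust to wherever you put
the .debs):

  # compare the output against the sha512 sums above
  sha512sum pve-cluster_6.1-8_amd64.deb pve-cluster-dbgsym_6.1-8_amd64.deb
  # then install both packages
  apt install ./pve-cluster_6.1-8_amd64.deb ./pve-cluster-dbgsym_6.1-8_amd64.deb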

is there anything special about node 13? network topology, slower
hardware, ...?
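
if it helps for the next run, the simultaneous backtraces could again be
collected with something roughly like this (untested sketch - it assumes
pmxcfs is the process we care about and that parallel-ssh is set up with
a hosts file; "nodes.txt" is just a placeholder):

  # grab a full backtrace of pmxcfs on all nodes at (roughly) the same time
  parallel-ssh -h nodes.txt -t 0 -i \
    'gdb --batch -p $(pidof pmxcfs) -ex "thread apply all bt full" > /tmp/pmxcfs-bt-$(hostname).txt 2>&1'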