From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 29 Sep 2020 13:43:08 +0200 (CEST)
From: Alexandre DERUMIER
To: Proxmox VE development discussion
Message-ID: <596957573.1989195.1601379788390.JavaMail.zimbra@odiso.com>
In-Reply-To: <1740926248.1973796.1601376764334.JavaMail.zimbra@odiso.com>
References: <216436814.339545.1599142316781.JavaMail.zimbra@odiso.com> <2049133658.1264587.1601051377789.JavaMail.zimbra@odiso.com> <1601282139.yqoafefp96.astroid@nora.none> <936571798.1335244.1601285700689.JavaMail.zimbra@odiso.com> <260722331.1517115.1601308760989.JavaMail.zimbra@odiso.com> <1601368526.gv9th0ekl0.astroid@nora.none> <1140250655.1944706.1601372261446.JavaMail.zimbra@odiso.com> <1740926248.1973796.1601376764334.JavaMail.zimbra@odiso.com>
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

>> node1 (stop corosync : unlock /etc/pve)
>> -----
>> 12:28:11 : systemctl stop corosync

sorry, this was wrong: I need to start corosync after the stop to get it working again.

I'll re-upload these logs.

----- Original message -----
From: "aderumier"
To: "Proxmox VE development discussion"
Sent: Tuesday, 29 September 2020 12:52:44
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

here's a new test:
http://odisoweb1.odiso.net/test6/

node1
-----
start corosync : 12:08:33

node2 (/etc/pve lock)
-----
Current time : 12:08:39

node1 (stop corosync : unlock /etc/pve)
-----
12:28:11 : systemctl stop corosync

backtraces : 12:26:30
coredump : 12:27:21

----- Original message -----
From: "aderumier"
To: "Proxmox VE development discussion"
Sent: Tuesday, 29 September 2020 11:37:41
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

>> with a change of how the logging is set up (I now suspect that some
>> messages might get dropped if the logging throughput is high enough),
>> let's hope this gets us the information we need. please repeat test5
>> again with these packages.

I'll test this afternoon.

>> is there anything special about node 13? network topology, slower hardware, ...?
no, nothing special: all nodes have exactly the same hardware, CPU (24 cores / 48 threads, 3 GHz), memory, and disk.
this node is at around 10% CPU usage; load is around 5.

----- Original message -----
From: "Fabian Grünbichler"
To: "Proxmox VE development discussion"
Sent: Tuesday, 29 September 2020 10:51:32
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

On September 28, 2020 5:59 pm, Alexandre DERUMIER wrote:
> Here's a new test: http://odisoweb1.odiso.net/test5
>
> This occurred at corosync start
>
> node1:
> -----
> start corosync : 17:30:19
>
> node2: /etc/pve locked
> --------------
> Current time : 17:30:24
>
> I have done a backtrace of all nodes at the same time with parallel ssh at 17:35:22
>
> and a coredump of all nodes at the same time with parallel ssh at 17:42:26
>
> (Note that this time, /etc/pve was still locked after backtrace/coredump)

okay, so this time two more log lines got printed on the (again) problem-causing node #13, but it still stops logging at a point where this makes no sense.

I rebuilt the packages:

f318f12e5983cb09d186c2ee37743203f599d103b6abb2d00c78d312b4f12df942d8ed1ff5de6e6c194785d0a81eb881e80f7bbfd4865ca1a5a509acd40f64aa pve-cluster_6.1-8_amd64.deb
b220ee95303e22704793412e83ac5191ba0e53c2f41d85358a247c248d2a6856e5b791b1d12c36007a297056388224acf4e5a1250ef1dd019aee97e8ac4bcac7 pve-cluster-dbgsym_6.1-8_amd64.deb

with a change of how the logging is set up (I now suspect that some messages might get dropped if the logging throughput is high enough); let's hope this gets us the information we need. please repeat test5 again with these packages.

is there anything special about node 13? network topology, slower hardware, ...?
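For anyone reproducing this, the "backtrace of all nodes at the same time with parallel ssh" step can be sketched as below. The node list, output paths, and the exact pssh/gdb/gcore invocations are illustrative assumptions, not details from the thread:

```shell
#!/bin/sh
# Sketch: grab a gdb backtrace and a core dump of the pmxcfs daemon on
# all nodes at (roughly) the same moment via parallel ssh.
# NODES, the output paths, and the pssh/gdb/gcore calls are assumptions.

NODES="node1 node2 node13"   # hypothetical node list

# Build the "-H host" arguments that pssh expects, one pair per node.
host_args() {
    args=""
    for n in $NODES; do
        args="$args${args:+ }-H $n"
    done
    echo "$args"
}

# Remote command: full backtrace of every thread of the running pmxcfs.
# $(pidof pmxcfs) and $(hostname) must expand on the remote node, hence
# the single quotes around the whole command string.
BT_CMD='gdb -p "$(pidof pmxcfs)" --batch -ex "thread apply all bt full" > /tmp/pmxcfs-bt-$(hostname).txt 2>&1'
# Remote command: core dump without killing the process (gcore detaches).
CORE_CMD='gcore -o /tmp/pmxcfs-core-$(hostname) "$(pidof pmxcfs)"'

if [ "${DRY_RUN:-1}" = "0" ]; then
    pssh $(host_args) -i "$BT_CMD"    # backtraces first...
    pssh $(host_args) -i "$CORE_CMD"  # ...then the core dumps
else
    echo "would run: pssh $(host_args) -i '$BT_CMD'"
    echo "would run: pssh $(host_args) -i '$CORE_CMD'"
fi
```

Note that attaching gdb briefly stops pmxcfs on every node, so with DRY_RUN=0 this is best run exactly once, at an agreed timestamp, as was done in the tests above.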
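The two hashes above are 128 hex characters each, i.e. SHA-512 digests of the rebuilt .deb files. They can be checked before installation along these lines (the checkfile layout is the standard `sha512sum -c` format; the guard around the check is illustrative):

```shell
#!/bin/sh
# Verify the rebuilt pve-cluster packages against the SHA-512 sums from
# the mail before installing. Checkfile lines are "<digest>  <filename>".

cat > SHA512SUMS <<'EOF'
f318f12e5983cb09d186c2ee37743203f599d103b6abb2d00c78d312b4f12df942d8ed1ff5de6e6c194785d0a81eb881e80f7bbfd4865ca1a5a509acd40f64aa  pve-cluster_6.1-8_amd64.deb
b220ee95303e22704793412e83ac5191ba0e53c2f41d85358a247c248d2a6856e5b791b1d12c36007a297056388224acf4e5a1250ef1dd019aee97e8ac4bcac7  pve-cluster-dbgsym_6.1-8_amd64.deb
EOF

# Only run the check when the debs are actually in the current directory;
# sha512sum -c exits non-zero on any mismatch or missing file.
if [ -e pve-cluster_6.1-8_amd64.deb ]; then
    sha512sum -c SHA512SUMS
else
    echo "debs not present in $(pwd); skipping verification"
fi
```

On a match, sha512sum prints one "filename: OK" line per package.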
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel