Date: Thu, 10 Sep 2020 13:34:44 +0200 (CEST)
From: Alexandre DERUMIER
To: Thomas Lamprecht
Cc: Proxmox VE development discussion
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

>> as said, if the other nodes were not using HA, the watchdog-mux had no
>> client which could expire.
Sorry, maybe I explained it badly, but all my nodes had HA enabled. I have
double-checked the lrm_status JSON files from my morning backup, taken 2h
before the problem, and they were all in the "active" state
("state":"active","mode":"active").

I don't know why node7 didn't reboot; the only difference is that it was the
CRM master. (I think the CRM also resets the watchdog counter? Maybe the
behaviour is different from the LRM?)

>> above lines also indicate very high load.
>> Do you have some monitoring which shows the CPU/IO load before/during this
>> event?

The load average (1, 5, 15) was 6 (for 48 cores), CPU usage 23%, and there was
no iowait on disk (the VMs are on a remote Ceph cluster; only the Proxmox
services run on the local SSD disk).

So nothing strange here :/

----- Original message -----
From: "Thomas Lamprecht"
To: "Proxmox VE development discussion", "Alexandre Derumier"
Sent: Thursday, 10 September 2020 10:21:48
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

On 10.09.20 06:58, Alexandre DERUMIER wrote:
> Thanks Thomas for the investigations.
>
> I'm still trying to reproduce...
> I think I have some special case here, because the user on the forum with 30
> nodes had a corosync cluster split. (Note that I had this bug 6 months ago
> too, when shutting down a node, and the only way out was to fully stop
> corosync on all nodes and then start corosync again on all nodes.)
>
> But this time the corosync logs look fine (every node correctly sees node2
> go down and sees the remaining nodes).
>
> The surviving node7 was the only node with HA, and the LRM didn't have the
> watchdog enabled (I haven't found any log like "pve-ha-lrm: watchdog active"
> for the last 6 months on this node).
>
> So, the timing was:
>
> 10:39:05: the "halt" command is sent to node2
> 10:39:16: node2 leaves corosync / halts -> every node sees it and correctly
> forms a new membership with the 13 remaining nodes
>
> ... I don't see any special logs (corosync, pmxcfs, pve-ha-crm, pve-ha-lrm)
> after node2 left. But there is still activity on the server: pve-firewall is
> still logging and the VMs are running fine.
>
> Between 10:40:25 and 10:40:34: the watchdog resets the nodes, but not node7.
>
> -> So between 70s and 80s after node2 was gone; I think watchdog-mux was
> still running fine until then. (That sounds like the LRM was stuck and
> client_watchdog_timeout expired in watchdog-mux.)

as said, if the other nodes were not using HA, the watchdog-mux had no
client which could expire.

> 10:40:41: node7 loses quorum (as all the other nodes have reset).
> 10:40:50: node7 crm/lrm finally log.
>
> Sep 3 10:40:50 m6kvm7 pve-ha-crm[16196]: got unexpected error - error during cfs-locked 'domain-ha' operation: no quorum!
> Sep 3 10:40:51 m6kvm7 pve-ha-lrm[16140]: loop take too long (87 seconds)
> Sep 3 10:40:51 m6kvm7 pve-ha-crm[16196]: loop take too long (92 seconds)

above lines also indicate very high load.
Do you have some monitoring which shows the CPU/IO load before/during this
event?

> Sep 3 10:40:51 m6kvm7 pve-ha-crm[16196]: lost lock 'ha_manager_lock - cfs lock update failed - Permission denied
> Sep 3 10:40:51 m6kvm7 pve-ha-lrm[16140]: lost lock 'ha_agent_m6kvm7_lock - cfs lock update failed - Permission denied
>
> So, I really think that something got the lrm/crm loop stuck, and the
> watchdog was not reset because of that.
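
P.S.: for reference, a minimal sketch of the kind of check I did on the
backed-up lrm_status files (the JSON files pve-ha-lrm writes, normally under
/etc/pve/nodes/<node>/lrm_status). The backup/<node>/lrm_status layout and the
script itself are only illustrative, not something shipped with Proxmox:

#!/usr/bin/env python3
# Illustrative sketch: print the LRM state/mode of every node found in a
# backup tree laid out as <backup_root>/<node>/lrm_status (example layout).
import json
import sys
from pathlib import Path

backup_root = Path(sys.argv[1] if len(sys.argv) > 1 else "backup")

for status_file in sorted(backup_root.glob("*/lrm_status")):
    node = status_file.parent.name
    try:
        status = json.loads(status_file.read_text())
    except (OSError, json.JSONDecodeError) as err:
        print(f"{node}: cannot read lrm_status ({err})")
        continue
    state = status.get("state", "unknown")
    mode = status.get("mode", "unknown")
    marker = "" if (state, mode) == ("active", "active") else "  <-- not active!"
    print(f"{node}: state={state} mode={mode}{marker}")

With HA enabled on all nodes, every line should show state=active mode=active,
which is what I saw in the backup from 2h before the problem.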