From: Thomas Lamprecht
To: Proxmox VE development discussion, Alexandre DERUMIER
Date: Thu, 10 Sep 2020 10:21:48 +0200
Message-ID: <3ee5d9cf-19be-1067-3931-1c54f1c6043a@proxmox.com>
In-Reply-To: <761694744.496919.1599713892772.JavaMail.zimbra@odiso.com>
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

On 10.09.20 06:58, Alexandre DERUMIER wrote:
> Thanks Thomas for the investigations.
>
> I'm still trying to reproduce...
> I think I have a special case here, because the forum user with 30 nodes had a
> corosync cluster split. (Note that I had this bug 6 months ago when shutting
> down a node too, and the only way out was a full stop of corosync on all
> nodes, then starting corosync again on all nodes.)
>
> But this time, the corosync logs look fine.
> (every node correctly sees node2 down, and sees the remaining nodes)
>
> The surviving node7 was the only node with HA, and its LRM did not have the
> watchdog enabled (I haven't found any log like "pve-ha-lrm: watchdog active"
> for the last 6 months on this node).
>
> So, the timing was:
>
> 10:39:05: the "halt" command is sent to node2
> 10:39:16: node2 leaves corosync / halts -> every node sees it and correctly
>           forms a new membership with the 13 remaining nodes
>
> ... I don't see any special logs (corosync, pmxcfs, pve-ha-crm, pve-ha-lrm)
> after node2 left.
> But there is still activity on the server, pve-firewall is still logging, and
> the VMs are running fine.
>
> between 10:40:25 - 10:40:34: the watchdog resets the nodes, but not node7.
>
> -> so roughly 70-80s after node2 went down, so I think watchdog-mux was still
> running fine until then.
> (That sounds like the LRM was stuck, and client_watchdog_timeout had expired
> in watchdog-mux)

as said, if the other nodes were not using HA, the watchdog-mux had no client
which could expire.

> 10:40:41: node7 loses quorum (as all the other nodes have reset),
> 10:40:50: node7's crm/lrm finally log.
>
> Sep 3 10:40:50 m6kvm7 pve-ha-crm[16196]: got unexpected error - error during cfs-locked 'domain-ha' operation: no quorum!
> Sep 3 10:40:51 m6kvm7 pve-ha-lrm[16140]: loop take too long (87 seconds)
> Sep 3 10:40:51 m6kvm7 pve-ha-crm[16196]: loop take too long (92 seconds)

the above lines also indicate very high load. Do you have some monitoring which
shows the CPU/IO load before/during this event?

> Sep 3 10:40:51 m6kvm7 pve-ha-crm[16196]: lost lock 'ha_manager_lock - cfs lock update failed - Permission denied
> Sep 3 10:40:51 m6kvm7 pve-ha-lrm[16140]: lost lock 'ha_agent_m6kvm7_lock - cfs lock update failed - Permission denied
>
> So, I really think that something stalled the lrm/crm loop, and the watchdog
> was not reset because of that.
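
For illustration, here is a minimal, hypothetical C sketch of the client-expiry
idea discussed above: the hardware watchdog is only left to fire when a
registered client (such as the LRM) misses its keep-alive timeout. All names
and the 60-second CLIENT_WATCHDOG_TIMEOUT are assumptions made for this sketch,
not the actual watchdog-mux source.

/*
 * Hypothetical sketch, NOT the real watchdog-mux code: keep the hardware
 * watchdog alive as long as every registered client updated within its
 * timeout; with no registered clients there is nothing that can expire.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define CLIENT_WATCHDOG_TIMEOUT 60 /* seconds, assumed value */

struct wd_client {
    time_t last_update; /* last keep-alive received from this client */
    bool   active;      /* client registered for watchdog protection */
};

/* Returns true if the hardware watchdog should still be kept alive. */
static bool keep_hardware_watchdog_alive(const struct wd_client *clients, int nclients)
{
    time_t now = time(NULL);

    for (int i = 0; i < nclients; i++) {
        if (!clients[i].active)
            continue; /* no HA services -> no client that can expire */
        if (now - clients[i].last_update > CLIENT_WATCHDOG_TIMEOUT)
            return false; /* stuck client (e.g. blocked LRM loop) -> node resets */
    }
    return true; /* all active clients updated in time, or none registered */
}

int main(void)
{
    /* An LRM loop stalled for 87s, as in the log above, would exceed the
     * assumed 60s client timeout and let the watchdog fire. */
    struct wd_client lrm = { .last_update = time(NULL) - 87, .active = true };

    printf("keep hardware watchdog alive: %s\n",
           keep_hardware_watchdog_alive(&lrm, 1) ? "yes" : "no");
    return 0;
}

Under that assumption, a node without any HA services registered would never be
reset by watchdog-mux, while node7's 87-second stalled LRM loop would be more
than enough to let the watchdog expire.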