Message-ID: <82c69808-1032-ff32-1d23-ceacdc0a11eb@proxmox.com>
Date: Wed, 16 Nov 2022 10:37:18 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.3.0
Content-Language: en-US
To: Thomas Lamprecht <t.lamprecht@proxmox.com>,
 Proxmox VE development discussion <pve-devel@lists.proxmox.com>
References: <20221110143800.98047-1-f.ebner@proxmox.com>
 <20221110143800.98047-19-f.ebner@proxmox.com>
 <5aa83262-a8f4-8f8e-0612-55fbf9cfd7d9@proxmox.com>
 <ff380260-8de5-57d8-321e-7a1e0b8893cf@proxmox.com>
From: Fiona Ebner <f.ebner@proxmox.com>
In-Reply-To: <ff380260-8de5-57d8-321e-7a1e0b8893cf@proxmox.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Subject: Re: [pve-devel] [PATCH ha-manager 09/11] manager: use static
 resource scheduler when configured

On 16.11.22 at 08:14, Thomas Lamprecht wrote:
> On 11/11/2022 at 10:28, Fiona Ebner wrote:
>> On 10.11.22 at 15:37, Fiona Ebner wrote:
>>> @@ -206,11 +207,30 @@ my $valid_service_states = {
>>>  sub recompute_online_node_usage {
>> So I was a bit worried that recompute_online_node_usage() would become
>> too inefficient with the new add_service_usage_to_node() overhead from
>> needing to read the guest configs. I now tested it with ~300 HA services
>> (minimal containers) running on my virtual test cluster.
>>
>> Timings with 'basic' mode were between 0.0004 - 0.001 seconds
>> Timings with 'static' mode were between 0.007 - 0.012 seconds
>>
>> While that's about a 10-fold increase, it's not too dramatic at least. I guess
>> that's what the caching of cfs files is for :)
>>
>> Still, the function is currently not only called in the main loop in
>> manage(), but also in next_state_recovery() and change_service_state().
>>
>> With, say, 400 HA services on each of 5 nodes, if a node fails there are
>> 400 calls from changing to freeze
> 
> huh, freeze should only happen on graceful shutdown of a node, not
> if it fails?

Sorry, I meant fence, not freeze.

> 
>> 400 calls from changing to recovery
>> 400 calls in next_state_recovery
>> 400 calls from changing to started
>> If we take a generous estimate that each call takes 0.1 seconds (there are
>> 2000 services in total), that's 40+80+40 seconds in 3 bursts during the
>> fencing and recovery period.
> 
> doesn't that lead to overly long run windows between watchdog updates?
> 
>>
>> Is that acceptable? Should I try to optimize how often the function is
>> called?
>>
> 
> hmm, a quick look wouldn't hurt, but not required for now IMO - if it can
> interfere with watchdog updates I'd sneak in updating it once in between
> though.
> 

Yes, from a quick look that might become a problem, exactly because the
delays happen in bursts (all services change state in a single manage()
run).
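
To put numbers on it (the per-call estimate is the one from above; the
roughly 60 second watchdog timeout is an assumption about the usual
watchdog-mux setup), the recovery burst alone would be

  400 calls * 0.1 s (change to recovery)
+ 400 calls * 0.1 s (next_state_recovery)
= 80 s in a single manage() run

which is longer than the whole watchdog window if there is no update in
between.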

I'm not sure how you would trigger the update, because that would need
to happen in the CRM, AFAIU.

There is a fixme comment in CRM.pm's work() about setting an alarm
timer to enforce working for at most $max_time seconds. That would of
course help here.
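
For illustration, something along these lines might work (just a
sketch; apart from $max_time from the fixme comment, the names here
are made up):

my $max_time = 5;

eval {
    local $SIG{ALRM} = sub { die "got timeout\n" };
    alarm($max_time);
    do_work(); # placeholder for the long-running part
    alarm(0);
};
my $err = $@;
alarm(0); # also clear the timer on the error path

if ($err) {
    die $err if $err ne "got timeout\n";
    # timed out: return to the main loop, so the watchdog gets updated,
    # and pick up the remaining work in the next round
}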

Getting rid of superfluous recompute_online_node_usage() calls should
also be doable. We'd need to ensure that we add service usage (that is
already done in recovery and next_state_started) and remove service
usage (removal is not implemented right now) whenever a service changes
node or state. Then it'd be enough to call recompute_online_node_usage()
once per cycle, which would be a huge improvement compared to now.
Additionally, we could call it again after iterating over a certain
number of services, just to be sure.
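
Roughly along these lines (hand-wavy sketch; a helper like
remove_service_usage_from_node() does not exist yet and the signatures
are only illustrative):

# called from change_service_state() and wherever the node changes
sub move_service_usage {
    my ($self, $sid, $old_node, $new_node) = @_;

    my $usage = $self->{online_node_usage};
    $usage->remove_service_usage_from_node($old_node, $sid)
        if defined($old_node);
    $usage->add_service_usage_to_node($new_node, $sid)
        if defined($new_node);
}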

> 
> ps. maybe you can have some of that info/stats here in the commit message
> of this patch.

Sure.