From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <t.lamprecht@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 8A0068797
 for <pve-devel@lists.proxmox.com>; Wed, 16 Nov 2022 08:15:23 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 63A851B0B8
 for <pve-devel@lists.proxmox.com>; Wed, 16 Nov 2022 08:14:53 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-devel@lists.proxmox.com>; Wed, 16 Nov 2022 08:14:52 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 8475043E26
 for <pve-devel@lists.proxmox.com>; Wed, 16 Nov 2022 08:14:52 +0100 (CET)
Message-ID: <ff380260-8de5-57d8-321e-7a1e0b8893cf@proxmox.com>
Date: Wed, 16 Nov 2022 08:14:51 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:107.0) Gecko/20100101
 Thunderbird/107.0
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Fiona Ebner <f.ebner@proxmox.com>
References: <20221110143800.98047-1-f.ebner@proxmox.com>
 <20221110143800.98047-19-f.ebner@proxmox.com>
 <5aa83262-a8f4-8f8e-0612-55fbf9cfd7d9@proxmox.com>
Content-Language: en-GB
In-Reply-To: <5aa83262-a8f4-8f8e-0612-55fbf9cfd7d9@proxmox.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-SPAM-LEVEL: Spam detection results: =?UTF-8?Q?0=0A=09?=AWL -0.032 Adjusted
 score from AWL reputation of From: =?UTF-8?Q?address=0A=09?=BAYES_00 -1.9
 Bayes spam probability is 0 to 1%
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict
 =?UTF-8?Q?Alignment=0A=09?=NICE_REPLY_A -0.001 Looks like a legit reply (A)
 SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF
 =?UTF-8?Q?Record=0A=09?=SPF_PASS -0.001 SPF: sender matches SPF record
Subject: Re: [pve-devel] [PATCH ha-manager 09/11] manager: use static
 resource scheduler when configured
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Wed, 16 Nov 2022 07:15:23 -0000

On 11/11/2022 at 10:28, Fiona Ebner wrote:
> On 10.11.22 at 15:37, Fiona Ebner wrote:
>> @@ -206,11 +207,30 @@ my $valid_service_states = {
>>  sub recompute_online_node_usage {
> So I was a bit worried that recompute_online_node_usage() would become
> too inefficient with the new add_service_usage_to_node() overhead from
> needing to read the guest configs. I now tested it with ~300 HA services
> (minimal containers) running on my virtual test cluster.
> 
> Timings with 'basic' mode were between 0.0004 and 0.001 seconds
> Timings with 'static' mode were between 0.007 and 0.012 seconds
> 
> While that's about a 10-fold increase, it's not too dramatic at least. I guess
> that's what the caching of cfs files is for :)
> 
> Still, the function is currently not only called in the main loop in
> manage(), but also in next_state_recovery() and change_service_state().
> 
> With, say, 400 HA services on each of 5 nodes, if a node fails there are
> 400 calls from changing to freeze

huh, freeze should only happen on graceful shutdown of a node, not
if it fails?

> 400 calls from changing to recovery
> 400 calls in next_state_recovery
> 400 calls from changing to started
> If we take a generous estimate that each call takes 0.1 seconds (there are
> 2000 services in total), that's 40+80+40 seconds in 3 bursts during the
> fencing and recovery period.
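
just to map the 40+80+40 onto the calls above (taking the 0.1s per-call
estimate at face value, 400 services on the failed node):

  400 calls * 0.1s = 40s  -> change to freeze
  800 calls * 0.1s = 80s  -> change to recovery + next_state_recovery
  400 calls * 0.1s = 40s  -> change to started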

doesn't that lead to overly long run windows between watchdog updates?

> 
> Is that acceptable? Should I try to optimize how often the function is
> called?
> 

hmm, a quick look wouldn't hurt, but it's not required for now IMO - if it
can interfere with watchdog updates I'd sneak in a watchdog update once in
between, though.
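
Completely untested sketch, just to illustrate what I mean by "in between" -
all names are made up, watchdog_update() only stands in for whatever actually
feeds the watchdog around the manage() loop:

    # untested sketch - made-up names; watchdog_update() is a placeholder
    # for the real "feed the watchdog" hook, change_state() for the
    # expensive per-service state change that triggers the recomputation
    my $done = 0;
    for my $sid (@services_to_recover) {
        change_state($sid, 'recovery');    # expensive with 'static' mode

        # every 100 services, make sure the watchdog doesn't starve
        watchdog_update() if (++$done % 100) == 0;
    }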


PS: maybe you can include some of that info/stats in the commit message
of this patch.