Date: Thu, 16 Oct 2025 11:07:47 +0200
From: "Daniel Kral"
To: "Fiona Ebner", "Proxmox VE development discussion"
Subject: Re: [pve-devel] [PATCH qemu-server 1/1] config: only fetch necessary default values in get_derived_property helper
In-Reply-To: <04c02890-b538-407a-bcf8-f35f5912e4ab@proxmox.com>
References: <20250930142021.366529-1-d.kral@proxmox.com>
 <20250930142021.366529-2-d.kral@proxmox.com>
 <04c02890-b538-407a-bcf8-f35f5912e4ab@proxmox.com>

On Wed Oct 15, 2025 at 4:31 PM CEST, Fiona Ebner wrote:
> On 30.09.25 at 4:20 PM, Daniel Kral wrote:
>> get_derived_property(...) is called in the semi-hot path of the HA
>> Manager's static load scheduler to retrieve the static stats of each
>> VM. As the defaults are only needed in certain cases and for a very
>> small subset of properties in the VM config, get those separately
>> when needed.
>>
>> Signed-off-by: Daniel Kral
>> ---
>> get_current_memory(...) is still quite costly here, because it calls
>> parse_memory(...), which calls
>> PVE::JSONSchema::parse_property_string(...), which adds up for many
>> guest configurations parsed in every manage(...) call, but this
>> already helps quite a lot.
>
> If this really is a problem, we could do our own parsing, i.e.
> returning the value if the property string starts with \d+ or
> current=\d+ and falling back to get_current_memory() if it doesn't.
> Of course also using the default from $memory_fmt if not set.
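
Just to make sure I understand you correctly, you mean something along
the lines of the following? Untested sketch, the helper name is made up,
and I'm assuming it would live next to get_current_memory() in
PVE::QemuServer::Memory so that $memory_fmt is in scope:

# untested sketch - answer the common cases directly and only fall back
# to the full property string parser when necessary
sub get_current_memory_fast {
    my ($memory) = @_;

    # not set at all -> default from $memory_fmt
    return $memory_fmt->{current}->{default} if !defined($memory);

    # plain "<size>" or "current=<size>[,...]" can be answered directly
    return $1 if $memory =~ m/^(?:current=)?(\d+)(?:,|$)/;

    # anything else still goes through parse_memory() as before
    return get_current_memory($memory);
}
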
>
>>  src/PVE/QemuConfig.pm | 8 +++-----
>>  src/PVE/QemuServer.pm | 6 ++++++
>>  2 files changed, 9 insertions(+), 5 deletions(-)
>>
>> diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
>> index d0844c4c..078c87e0 100644
>> --- a/src/PVE/QemuConfig.pm
>> +++ b/src/PVE/QemuConfig.pm
>> @@ -582,12 +582,10 @@ sub load_current_config {
>
> We could go a step further and save the three defaults we are
> interested in during module load into variables. Then you also save
> the hash accesses into $confdesc and $memory_fmt.
>
>>  sub get_derived_property {
>>      my ($class, $conf, $name) = @_;
>>
>> -    my $defaults = PVE::QemuServer::load_defaults();
>> -
>>      if ($name eq 'max-cpu') {
>> -        my $cpus =
>> -            ($conf->{sockets} || $defaults->{sockets}) * ($conf->{cores} || $defaults->{cores});
>> -        return $conf->{vcpus} || $cpus;
>> +        my $sockets = $conf->{sockets} || PVE::QemuServer::get_default_property_value('sockets');
>> +        my $cores = $conf->{cores} || PVE::QemuServer::get_default_property_value('cores');
>> +        return $conf->{vcpus} || ($sockets * $cores);
>>      } elsif ($name eq 'max-memory') { # current usage maximum, not maximum hotpluggable
>>          return get_current_memory($conf->{memory}) * 1024 * 1024;
>>      } else {
>
> Question is, how much do we really need to optimize the function here?
> I'm not against it, but just want to note that looking at the static
> usage will always have its limitations in practice (independent of
> performance). With PSI-based usage, we should only need the static
> information for to-be-started or to-be-recovered guests rather than
> all, so performance of get_derived_property() becomes much less
> relevant. Or what do you think?

That's a good point; in that regard I definitely favor maintainability
over minor performance improvements here.

The (one-time) profile of 9,000 HA resources being balanced on start
with all patches applied (except ha #9) shows the following runtimes:

+---------------------------------------------+------------+------------+
| Function                                    | Excl. time | Incl. time |
+---------------------------------------------+------------+------------+
| PVE::HA::Resources::PVEVM::get_static_stats | 55 ms      | 437 ms     |
| PVE::QemuConfig::get_derived_property       | 40 ms      | 363 ms     |
| PVE::QemuServer::Memory::get_current_memory | 23 ms      | 340 ms     |
| PVE::QemuServer::Memory::parse_memory       | 22 ms      | 317 ms     |
| PVE::JSONSchema::parse_property_string      | 94 ms      | 223 ms     |
+---------------------------------------------+------------+------------+

parse_property_string does show up as the function with the 6th-highest
exclusive runtime, but my main objective was that manage(...) should
finish in under ~10 seconds for ~10,000 HA resources. That should be
doable now, especially as the number of calls to get_static_usage(...)
is now constant per manage(...) call instead of proportional to the
number of HA resource state changes. So let's stay with the current
patch unless people actually run into this.

By the way, the highest-exclusive-runtime function in that profile is
Sys::Syslog::syslog now (excluding CORE::sleep of course) ;). But since
that should happen only in very special cases, I don't think that's an
actual performance problem either.
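
Regarding caching the defaults at module load: if we do want to go that
route at some point, I'd imagine it roughly like the following untested
sketch, assuming the schema defaults are already available when
PVE::QemuConfig is loaded:

# untested sketch - resolve the defaults once at module load time
my $default_sockets = PVE::QemuServer::get_default_property_value('sockets');
my $default_cores = PVE::QemuServer::get_default_property_value('cores');

sub get_derived_property {
    my ($class, $conf, $name) = @_;

    if ($name eq 'max-cpu') {
        my $sockets = $conf->{sockets} || $default_sockets;
        my $cores = $conf->{cores} || $default_cores;
        return $conf->{vcpus} || ($sockets * $cores);
    }
    # rest as in the patch; the memory default could be cached the same way
    ...
}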