public inbox for pve-devel@lists.proxmox.com
From: "Daniel Kral" <d.kral@proxmox.com>
To: "Thomas Lamprecht" <t.lamprecht@proxmox.com>,
	"Proxmox VE development discussion" <pve-devel@lists.proxmox.com>,
	"Daniel Kral" <d.kral@proxmox.com>
Subject: Re: [pve-devel] [PATCH perl-rs v3 1/2] pve-rs: resource_scheduling: allow granular usage changes
Date: Thu, 13 Nov 2025 10:30:17 +0100	[thread overview]
Message-ID: <DE7GMV8BLU0Q.2YW1KSZVJMGNV@proxmox.com> (raw)
In-Reply-To: <9b51dec9-a6e3-429f-8f5f-4138f658c381@proxmox.com>

On Wed Nov 12, 2025 at 11:49 AM CET, Thomas Lamprecht wrote:
> nicer commit subject would be:
>
> pve resource scheduling: allow granular usage changes
>
> Am 27.10.25 um 17:46 schrieb Daniel Kral:
>> Implements a simple bidirectional map to track which service usages have
>> been added to nodes, so that these can be removed later individually.
>> 
>> The `StaticNodeUsage` is newly initialized on every invocation of
>> score_nodes_to_start_service(...) instead of updating the values on
>> every call of `add_service_usage_to_node(...)` to reduce the likelihood
>> of introducing numerical instability caused by floating-point operations
>> done on the `cpu` field.
>> 
>> The StaticServiceUsage is added to the HashMap<> in StaticNodeInfo to
>> reduce unnecessary indirection when summing these values in
>> score_nodes_to_start_service(...).
>> 
>> Signed-off-by: Daniel Kral <d.kral@proxmox.com>
>> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
>> ---
>> Needs a build dependency bump for
>> librust-proxmox-resource-scheduling-dev and a versioned breaks for
>> pve-ha-manager.
>
> The latter is only due to the (signature) change to add_service_usage_to_node, right?
> While versioned breaks can be done, if it's somewhat easy to avoid them, it's
> always better to do so, especially as it makes downgrades much easier.
>
> Can we keep backward compat without having to bend too
> much backwards? E.g. adding a new method for the new, more granular way while
> keeping add_service_usage_to_node as is, like "record_service_usage_for_node" (or
> just slapping a 2 at the end of the method name is also a simple trick that, while not
> beautiful, works and avoids bikeshedding).

I'd go for that route, but the new `sid` parameter is needed to track
which node a service puts its load on, i.e. where we need to remove it
later. That's information we didn't get before, and we unfortunately
cannot make it optional, e.g. by shoving it into the already existing
`StaticNodeUsage` as an `Option<_>` field. (I haven't tested it yet, but
this shouldn't be an API break for perlmod?)
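To make the above concrete, here is a minimal sketch of the bidirectional tracking the commit message describes, i.e. why the `sid` is needed at add time so a single service's usage can be removed again later. All names here (`Scheduler`, `record_service_usage_for_node`, `remove_service_usage`, `services_on`) are illustrative only, not the actual pve-rs API:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch: a simple bidirectional map between services and nodes,
// so that a service's usage can be removed individually later on.
#[derive(Default)]
struct Scheduler {
    // service id -> node its usage was recorded on
    service_node: HashMap<String, String>,
    // node -> set of service ids recorded on it
    node_services: HashMap<String, HashSet<String>>,
}

impl Scheduler {
    /// Granular variant: needs the `sid` so the usage can be removed later.
    fn record_service_usage_for_node(&mut self, node: &str, sid: &str) {
        self.service_node.insert(sid.to_string(), node.to_string());
        self.node_services
            .entry(node.to_string())
            .or_default()
            .insert(sid.to_string());
    }

    /// Remove a single service's usage again, wherever it was recorded.
    fn remove_service_usage(&mut self, sid: &str) {
        if let Some(node) = self.service_node.remove(sid) {
            if let Some(set) = self.node_services.get_mut(&node) {
                set.remove(sid);
            }
        }
    }

    /// Number of services currently recorded on a node.
    fn services_on(&self, node: &str) -> usize {
        self.node_services.get(node).map_or(0, |s| s.len())
    }
}

fn main() {
    let mut s = Scheduler::default();
    s.record_service_usage_for_node("node1", "vm:100");
    s.record_service_usage_for_node("node1", "vm:101");
    s.remove_service_usage("vm:100");
    assert_eq!(s.services_on("node1"), 1);
}
```

Without the reverse map (or at least the `sid` recorded per node), a later "remove vm:100" call has no way of knowing which node to subtract the usage from.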

Another approach would be to let the non-granular and granular behavior
coexist, at least for the 9.x series; even though it might be a little
ugly, it's probably not too bad.
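As an aside, the commit message's rationale for recomputing `StaticNodeUsage` on every invocation rather than updating it incrementally can be illustrated with a minimal sketch (the load values are made up for illustration):

```rust
fn main() {
    // Incrementally add three service cpu loads, then remove them all again,
    // mimicking repeated add/remove bookkeeping on a single f64 field.
    let loads = [0.1_f64, 0.2, 0.3];
    let mut cpu = 0.0_f64;
    for l in &loads {
        cpu += l; // incremental add, as on each add_service_usage call
    }
    for l in &loads {
        cpu -= l; // incremental remove of the same usages
    }
    // A tiny non-zero residue remains due to floating-point rounding:
    assert!(cpu != 0.0);

    // Recomputing from the currently recorded usages (here: none) is exact.
    let current: Vec<f64> = Vec::new();
    let recomputed: f64 = current.iter().sum();
    assert_eq!(recomputed, 0.0);
}
```

Summing the recorded usages fresh on each `score_nodes_to_start_service(...)` call keeps the error bounded by a single summation instead of letting it accumulate over many add/remove cycles.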

What do you think?


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Thread overview: 20+ messages
2025-10-27 16:43 [pve-devel] [PATCH ha-manager/perl-rs/proxmox v3 00/11] Granular online_node_usage accounting Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH proxmox v3 1/1] resource-scheduling: change score_nodes_to_start_service signature Daniel Kral
2025-11-12 11:01   ` Thomas Lamprecht
2025-11-13  8:58     ` Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH perl-rs v3 1/2] pve-rs: resource_scheduling: allow granular usage changes Daniel Kral
2025-11-12 10:49   ` Thomas Lamprecht
2025-11-13  9:30     ` Daniel Kral [this message]
2025-11-14 10:12       ` Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH perl-rs v3 2/2] test: resource_scheduling: use score_nodes helper to imitate HA Manager Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH ha-manager v3 1/8] manager: remove redundant recompute_online_node_usage from next_state_recovery Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH ha-manager v3 2/8] manager: remove redundant add_service_usage_to_node " Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH ha-manager v3 3/8] manager: remove redundant add_service_usage_to_node from next_state_started Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH ha-manager v3 4/8] rules: resource affinity: decouple get_resource_affinity helper from Usage class Daniel Kral
2025-11-03 15:50   ` Michael Köppl
2025-11-03 16:01     ` Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH ha-manager v3 5/8] manager: make recompute_online_node_usage use add_service_usage helper Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH ha-manager v3 6/8] usage: allow granular changes to Usage implementations Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH ha-manager v3 7/8] manager: make online node usage computation granular Daniel Kral
2025-10-27 16:43 ` [pve-devel] [PATCH ha-manager v3 8/8] implement static service stats cache Daniel Kral
2025-11-14 10:13 ` [pve-devel] superseded: [PATCH ha-manager/perl-rs/proxmox v3 00/11] Granular online_node_usage accounting Daniel Kral
