From: Wolfgang Bumiller <w.bumiller@proxmox.com>
To: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Cc: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH manager 2/4] pvestatd: collect and broadcast pool usage
Date: Thu, 11 Apr 2024 11:32:16 +0200
Message-ID: <ydtgpstjnnzm2cwlhx2q63gtnk625lklvxaudpmwejxclss2ox@wwv2uzkejcuw>
In-Reply-To: <20240410131316.1208679-12-f.gruenbichler@proxmox.com>

On Wed, Apr 10, 2024 at 03:13:08PM +0200, Fabian Grünbichler wrote:
> so that other nodes can query it, and both block changes that would
> violate the limits and mark pools that currently violate them accordingly.
> 
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
>  PVE/Service/pvestatd.pm | 59 ++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 55 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
> index 2515120c6..d7e4755e9 100755
> --- a/PVE/Service/pvestatd.pm
> +++ b/PVE/Service/pvestatd.pm
> @@ -231,7 +231,7 @@ sub auto_balloning {
>  }
>  
>  sub update_qemu_status {
> -    my ($status_cfg) = @_;
> +    my ($status_cfg, $pool_membership, $pool_usage) = @_;
>  
>      my $ctime = time();
>      my $vmstatus = PVE::QemuServer::vmstatus(undef, 1);
> @@ -242,6 +242,21 @@ sub update_qemu_status {
>      my $transactions = PVE::ExtMetric::transactions_start($status_cfg);
>      foreach my $vmid (keys %$vmstatus) {
>  	my $d = $vmstatus->{$vmid};
> +
> +	if (my $pool = $pool_membership->{$vmid}) {
> +	    $pool_usage->{$pool}->{$vmid} = {
> +		cpu => {
> +		    config => ($d->{confcpus} // 0),
> +		    run => ($d->{runcpus} // 0),
> +		},
> +		mem => {
> +		    config => ($d->{confmem} // 0),
> +		    run => ($d->{runmem} // 0),
> +		},

I feel like it should be possible to build this hash generically from
the `keys` of the limit hash... the `cpu-run/config` vs `{cpu}->{run}`
vs `runcpus` naming inconsistency feels a bit awkward to me.
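
Untested sketch of what I have in mind (the suffix map is only needed
because of the `cpus` vs `mem` plural mismatch in the current vmstatus
field names):

    my %suffix = (cpu => 'cpus', mem => 'mem'); # current vmstatus suffixes
    my $entry = { running => $d->{pid} ? 1 : 0 };
    for my $res (sort keys %suffix) {
        $entry->{$res} = {
            config => ($d->{"conf$suffix{$res}"} // 0),
            run => ($d->{"run$suffix{$res}"} // 0),
        };
    }
    $pool_usage->{$pool}->{$vmid} = $entry;

If the vmstatus fields used the same names as the limit keys, the
`%suffix` map could just be the `keys` of the limit hash.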

> +		running => $d->{pid} ? 1 : 0,
> +	    };
> +	}
> +
>  	my $data;
>  	my $status = $d->{qmpstatus} || $d->{status} || 'stopped';
>  	my $template = $d->{template} ? $d->{template} : "0";
> @@ -263,6 +278,17 @@ sub update_qemu_status {
>      PVE::ExtMetric::transactions_finish($transactions);
>  }
>  
> +sub update_pool_usage {
> +    my ($usage) = @_;
> +
> +    my $ctime = time();
> +
> +    # TODO? RRD and ExtMetric support here?
> +
> +    my $new = { data => $usage, timestamp => $ctime };
> +    PVE::Cluster::broadcast_node_kv('pool-usage', encode_json($new));
> +}
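
Side note, since it is not visible in this patch: I assume consumers
read this back via `get_node_kv` and have to handle staleness
themselves, roughly like this (untested, the 5 minute cutoff is an
arbitrary example):

    my $raw = PVE::Cluster::get_node_kv('pool-usage');
    for my $node (keys %$raw) {
        my $broadcast = eval { decode_json($raw->{$node}) };
        next if !$broadcast || !$broadcast->{timestamp};
        next if (time() - $broadcast->{timestamp}) > 5 * 60; # skip stale nodes
        my $usage = $broadcast->{data}; # pool => vmid => usage entry
        # aggregate $usage across nodes here
    }

Including the timestamp in the payload is good, it lets readers tell a
node that stopped reporting apart from one reporting empty usage.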
> +
>  sub remove_stale_lxc_consoles {
>  
>      my $vmstatus = PVE::LXC::vmstatus();
> @@ -440,7 +466,7 @@ sub rebalance_lxc_containers {
>  }
>  
>  sub update_lxc_status {
> -    my ($status_cfg) = @_;
> +    my ($status_cfg, $pool_membership, $pool_usage) = @_;
>  
>      my $ctime = time();
>      my $vmstatus = PVE::LXC::vmstatus();
> @@ -449,6 +475,21 @@ sub update_lxc_status {
>  
>      foreach my $vmid (keys %$vmstatus) {
>  	my $d = $vmstatus->{$vmid};
> +
> +	if (my $pool = $pool_membership->{$vmid}) {
> +	    $pool_usage->{$pool}->{$vmid} = {
> +		cpu => {
> +		    config => ($d->{confcpus} // 0),
> +		    run => ($d->{runcpus} // 0),
> +		},
> +		mem => {
> +		    config => ($d->{confmem} // 0),
> +		    run => ($d->{runmem} // 0),
> +		},
> +		running => $d->{status} eq 'running' ? 1 : 0,
> +	    };
> +	}
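
This block is a verbatim copy of the qemu one except for the `running`
check, so it might be nicer to factor it into a small shared helper,
e.g. (untested):

    sub pool_usage_entry {
        my ($d, $running) = @_;
        return {
            cpu => {
                config => ($d->{confcpus} // 0),
                run => ($d->{runcpus} // 0),
            },
            mem => {
                config => ($d->{confmem} // 0),
                run => ($d->{runmem} // 0),
            },
            running => $running ? 1 : 0,
        };
    }

called as `pool_usage_entry($d, $d->{pid})` on the qemu side and as
`pool_usage_entry($d, $d->{status} eq 'running')` here.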
> +
>  	my $template = $d->{template} ? $d->{template} : "0";
>  	my $data;
>  	if ($d->{status} eq 'running') { # running
> @@ -540,6 +581,10 @@ sub update_status {
>      syslog('err', $err) if $err;
>  
>      my $status_cfg = PVE::Cluster::cfs_read_file('status.cfg');
> +    my $user_cfg = PVE::Cluster::cfs_read_file('user.cfg');
> +    my $pools = $user_cfg->{pools};
> +    my $pool_membership = $user_cfg->{vms};
> +    my $pool_usage = {};
>  
>      eval {
>  	update_node_status($status_cfg);
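
For reference while reviewing the rest: with the above, the structure
each node ends up broadcasting looks roughly like this (illustrative
values only):

    {
        poolA => {
            100 => {
                cpu => { config => 4, run => 4 },
                mem => { config => 4294967296, run => 4294967296 },
                running => 1,
            },
        },
    }

keyed by pool first, then vmid, and wrapped in
`{ data => ..., timestamp => ... }` by update_pool_usage().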
> @@ -548,17 +593,23 @@ sub update_status {
>      syslog('err', "node status update error: $err") if $err;
>  
>      eval {
> -	update_qemu_status($status_cfg);
> +	update_qemu_status($status_cfg, $pool_membership, $pool_usage);
>      };
>      $err = $@;
>      syslog('err', "qemu status update error: $err") if $err;
>  
>      eval {
> -	update_lxc_status($status_cfg);
> +	update_lxc_status($status_cfg, $pool_membership, $pool_usage);
>      };
>      $err = $@;
>      syslog('err', "lxc status update error: $err") if $err;
>  
> +    eval {
> +	update_pool_usage($pool_usage);
> +    };
> +    $err = $@;
> +    syslog('err', "pool usage status update error: $err") if $err;
> +
>      eval {
>  	rebalance_lxc_containers();
>      };
> -- 
> 2.39.2

Thread overview: 30+ messages
2024-04-10 13:12 [pve-devel] [RFC qemu-server/pve-container/.. 0/19] pool resource limits Fabian Grünbichler
2024-04-10 13:12 ` [pve-devel] [PATCH access-control 1/1] pools: define " Fabian Grünbichler
2024-04-10 13:12 ` [pve-devel] [PATCH container 1/7] config: add pool usage helper Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH container 2/7] status: add pool usage fields Fabian Grünbichler
2024-04-11  9:28   ` Wolfgang Bumiller
2024-04-15  9:32     ` Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH container 3/7] create/restore/clone: handle pool limits Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH container 4/7] start: " Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH container 5/7] hotplug: " Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH container 6/7] rollback: " Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH container 7/7] update: " Fabian Grünbichler
2024-04-11  7:23   ` Fabian Grünbichler
2024-04-11 10:03     ` Wolfgang Bumiller
2024-04-15  9:35       ` Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH guest-common 1/1] helpers: add pool limit/usage helpers Fabian Grünbichler
2024-04-11  9:17   ` Wolfgang Bumiller
2024-04-15  9:38     ` Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH manager 1/4] api: pools: add limits management Fabian Grünbichler
2024-04-11  9:24   ` Wolfgang Bumiller
2024-04-10 13:13 ` [pve-devel] [PATCH manager 2/4] pvestatd: collect and broadcast pool usage Fabian Grünbichler
2024-04-11  9:32   ` Wolfgang Bumiller [this message]
2024-04-15 12:36     ` Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH manager 3/4] api: return pool usage when queried Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH manager 4/4] ui: add pool limits and usage Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH qemu-server 1/6] config: add pool usage helper Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH qemu-server 2/6] vmstatus: add usage values for pool limits Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH qemu-server 3/6] create/restore/clone: handle " Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH qemu-server 4/6] update/hotplug: " Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH qemu-server 5/6] start: " Fabian Grünbichler
2024-04-10 13:13 ` [pve-devel] [PATCH qemu-server 6/6] rollback: " Fabian Grünbichler
