public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits
@ 2024-04-16 12:20 Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 access-control 1/1] pools: define " Fabian Grünbichler
                   ` (18 more replies)
  0 siblings, 19 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

high level description:

VM/CT vmstatus returns new fields for configured and running "usage"
values; these are then broadcast by pvestatd on each node via KV.

helpers in guest-common to check those limits
the pool API returns limits and usage, and allows setting the limits

qemu-server/pve-container try to check actions against those limits.

since the broadcast is async, there is always an opportunity to cheat by
racing against the broadcast. this is unavoidable unless we want to
serialize all usage-affecting tasks across the cluster.
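
the race can be made concrete with a minimal sketch (Python for
illustration only; the helper name and the numbers are hypothetical): two
tasks that validate against the same stale broadcast snapshot can each pass
a check that their combined result violates.

```python
# Minimal illustration of the broadcast race (hypothetical names/numbers):
# limit checks read the last asynchronously broadcast usage snapshot, so two
# concurrent actions can each pass a check their combined result violates.

def check_limit(snapshot_usage, limit, requested):
    """Check a requested allocation against the last broadcast usage (MB)."""
    return snapshot_usage + requested <= limit

snapshot = 2048  # stale broadcast value: 2048 MB in use
limit = 4096     # pool limit: 4096 MB

# two concurrent starts, each wanting 1536 MB, read the same stale snapshot
ok_a = check_limit(snapshot, limit, 1536)
ok_b = check_limit(snapshot, limit, 1536)

# both checks pass, yet the real combined usage ends up over the limit:
# 2048 + 1536 + 1536 = 5120 > 4096
```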

changelog since v1/RFC:
- GUI can edit limits now
- incorporated most feedback from Wolfgang and Dominik

potential follow-ups:
- disk limits/usage
- RRD/metrics support (or even switching entirely to RRD based
  broadcasting instead of KV)
- timeout of broadcasted data if node goes offline/stops broadcasting
- better GUI
- ... ?

pve-access-control:

Fabian Grünbichler (1):
  pools: define resource limits

 src/PVE/AccessControl.pm  | 42 +++++++++++++++++++++++++++++++++++++--
 src/test/parser_writer.pl | 14 ++++++-------
 2 files changed, 47 insertions(+), 9 deletions(-)

pve-container:

Fabian Grünbichler (7):
  config: add pool usage helper
  status: add pool usage fields
  create/restore/clone: handle pool limits
  start: handle pool limits
  hotplug: handle pool limits
  rollback: handle pool limits
  update: handle pool limits

 src/PVE/API2/LXC.pm          | 25 +++++++++++++++++++
 src/PVE/API2/LXC/Config.pm   | 21 ++++++++++++++++
 src/PVE/API2/LXC/Snapshot.pm |  7 ++++++
 src/PVE/LXC.pm               | 37 +++++++++++++++++++++++++++
 src/PVE/LXC/Config.pm        | 48 ++++++++++++++++++++++++++++++++++++
 5 files changed, 138 insertions(+)

pve-guest-common:

Fabian Grünbichler (1):
  helpers: add pool limit/usage helpers

 src/PVE/GuestHelpers.pm | 183 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 183 insertions(+)

pve-manager:

Fabian Grünbichler (4):
  api: pools: add limits management
  pvestatd: collect and broadcast pool usage
  api: return pool usage when queried
  ui: add pool limits and usage

 PVE/API2/Pool.pm                |  65 ++++++++++++++-
 PVE/Service/pvestatd.pm         |  59 ++++++++++++-
 www/manager6/pool/StatusView.js | 141 +++++++++++++++++++++++++++++++-
 3 files changed, 256 insertions(+), 9 deletions(-)

qemu-server:

Fabian Grünbichler (6):
  config: add pool usage helper
  vmstatus: add usage values for pool limits
  create/restore/clone: handle pool limits
  update/hotplug: handle pool limits
  start: handle pool limits
  rollback: handle pool limits

 PVE/API2/Qemu.pm         | 54 ++++++++++++++++++++++++++++++++++++++++
 PVE/QemuConfig.pm        | 30 ++++++++++++++++++++++
 PVE/QemuServer.pm        | 49 ++++++++++++++++++++++++++++++++++++
 PVE/QemuServer/Memory.pm |  6 +++++
 4 files changed, 139 insertions(+)




^ permalink raw reply	[flat|nested] 20+ messages in thread

* [pve-devel] [PATCH v2 access-control 1/1] pools: define resource limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 1/7] config: add pool usage helper Fabian Grünbichler
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

and handle them when parsing/writing user.cfg
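
the extended pool line appends the limits as a trailing property-string
field. A rough sketch of the resulting format (Python for illustration;
`parse_pool_line` is a hypothetical name, the real code uses
PVE::JSONSchema::parse_property_string):

```python
# Sketch of parsing the extended user.cfg pool line (illustrative only):
#   pool:<name>:<comment>:<vmlist>:<storelist>:<limits>:

def parse_pool_line(line):
    fields = line.split(':')
    assert fields[0] == 'pool'
    name, comment, vmlist, storelist, limits = fields[1:6]
    parsed_limits = {}
    if limits:
        for item in limits.split(','):
            key, value = item.split('=')
            parsed_limits[key] = int(value)
    return {
        'pool': name,
        'comment': comment,
        'vms': [v for v in vmlist.split(',') if v],
        'storage': [s for s in storelist.split(',') if s],
        'limits': parsed_limits,
    }

entry = parse_pool_line('pool:testpool::123,1234:local,local-zfs:mem-run=1024,cpu-run=4:')
```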

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---

Notes:
    - make limit schema public for pve-guest-common usage

 src/PVE/AccessControl.pm  | 42 +++++++++++++++++++++++++++++++++++++--
 src/test/parser_writer.pl | 14 ++++++-------
 2 files changed, 47 insertions(+), 9 deletions(-)

diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
index 21f93ff..f1863c8 100644
--- a/src/PVE/AccessControl.pm
+++ b/src/PVE/AccessControl.pm
@@ -72,6 +72,36 @@ sub pve_verify_realm {
     PVE::Auth::Plugin::pve_verify_realm(@_);
 }
 
+our $pool_limits_desc = {
+    "mem-config" => {
+	type => 'integer',
+	description => "Sum of memory (in MB) guests in this pool can be configured with.",
+	optional => 1,
+    },
+    "mem-run" => {
+	type => 'integer',
+	description => "Sum of memory (in MB) guests in this pool can be started with.",
+	optional => 1,
+    },
+    "cpu-config" => {
+	type => 'integer',
+	description => "Sum of (virtual) cores guests in this pool can be configured with.",
+	optional => 1,
+    },
+    "cpu-run" => {
+	type => 'integer',
+	description => "Sum of (virtual) cores guests in this pool can be started with.",
+	optional => 1,
+    },
+};
+
+PVE::JSONSchema::register_format('pve-pool-limits', $pool_limits_desc);
+PVE::JSONSchema::register_standard_option('pve-pool-limits', {
+    type => 'string',
+    format => $pool_limits_desc,
+    optional => 1,
+});
+
 # Locking both config files together is only ever allowed in one order:
 #  1) tfa config
 #  2) user config
@@ -1524,7 +1554,7 @@ sub parse_user_config {
 		warn "user config - ignore invalid path in acl '$pathtxt'\n";
 	    }
 	} elsif ($et eq 'pool') {
-	    my ($pool, $comment, $vmlist, $storelist) = @data;
+	    my ($pool, $comment, $vmlist, $storelist, $limits) = @data;
 
 	    if (!verify_poolname($pool, 1)) {
 		warn "user config - ignore pool '$pool' - invalid characters in pool name\n";
@@ -1575,6 +1605,13 @@ sub parse_user_config {
 		}
 		$cfg->{pools}->{$pool}->{storage}->{$storeid} = 1;
 	    }
+
+	    if ($limits) {
+		my $parsed_limits = eval { PVE::JSONSchema::parse_property_string($pool_limits_desc, $limits) };
+		warn "Failed to parse pool limits for '$pool' - $@\n" if $@;
+
+		$cfg->{pools}->{$pool}->{limits} = $parsed_limits;
+	    }
 	} elsif ($et eq 'token') {
 	    my ($tokenid, $expire, $privsep, $comment) = @data;
 
@@ -1656,7 +1693,8 @@ sub write_user_config {
 	my $vmlist = join (',', sort keys %{$d->{vms}});
 	my $storelist = join (',', sort keys %{$d->{storage}});
 	my $comment = $d->{comment} ? PVE::Tools::encode_text($d->{comment}) : '';
-	$data .= "pool:$pool:$comment:$vmlist:$storelist:\n";
+	my $limits = $d->{limits} ? PVE::JSONSchema::print_property_string($d->{limits}, $pool_limits_desc) : '';
+	$data .= "pool:$pool:$comment:$vmlist:$storelist:$limits:\n";
     }
 
     $data .= "\n";
diff --git a/src/test/parser_writer.pl b/src/test/parser_writer.pl
index 80c346b..2e6eb61 100755
--- a/src/test/parser_writer.pl
+++ b/src/test/parser_writer.pl
@@ -431,12 +431,12 @@ my $default_raw = {
 	'test_role_privs_invalid' => 'role:testrole:VM.Invalid,Datastore.Audit,VM.Allocate:',
     },
     pools => {
-	'test_pool_empty' => 'pool:testpool::::',
-	'test_pool_invalid' => 'pool:testpool::non-numeric:inval!d:',
-	'test_pool_members' => 'pool:testpool::123,1234:local,local-zfs:',
-	'test_pool_duplicate_vms' => 'pool:test_duplicate_vms::123,1234::',
-	'test_pool_duplicate_vms_expected' => 'pool:test_duplicate_vms::::',
-	'test_pool_duplicate_storages' => 'pool:test_duplicate_storages:::local,local-zfs:',
+	'test_pool_empty' => 'pool:testpool:::::',
+	'test_pool_invalid' => 'pool:testpool::non-numeric:inval!d::',
+	'test_pool_members' => 'pool:testpool::123,1234:local,local-zfs::',
+	'test_pool_duplicate_vms' => 'pool:test_duplicate_vms::123,1234:::',
+	'test_pool_duplicate_vms_expected' => 'pool:test_duplicate_vms:::::',
+	'test_pool_duplicate_storages' => 'pool:test_duplicate_storages:::local,local-zfs::',
     },
     acl => {
 	'acl_simple_user' => 'acl:1:/:test@pam:PVEVMAdmin:',
@@ -1018,7 +1018,7 @@ my $tests = [
 	       'user:test@pam:0:0::::::'."\n".
 	       'token:test@pam!test:0:0::'."\n\n".
 	       'group:testgroup:::'."\n\n".
-	       'pool:testpool::::'."\n\n".
+	       'pool:testpool:::::'."\n\n".
 	       'role:testrole::'."\n\n",
     },
 ];
-- 
2.39.2






* [pve-devel] [PATCH v2 container 1/7] config: add pool usage helper
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 access-control 1/1] pools: define " Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 2/7] status: add pool usage fields Fabian Grünbichler
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

to avoid repeating those calculations all over the place.
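
the helper's arithmetic can be sketched as follows (Python for
illustration; the real implementation is the Perl get_pool_usage in this
patch): the higher of pending and current config wins, memory and swap are
added together, and the result is in bytes.

```python
# Python sketch of the pool usage helper (illustrative only): pending values
# win when higher, a missing cores limit falls back to all host CPUs, and a
# missing memory config falls back to the 512 MB default.

def get_pool_usage(conf, host_cpus=8, default_mem_mb=512):
    def get_max(*values):
        return max((v for v in values if v is not None), default=0)

    pending = conf.get('pending', {})

    cpu = get_max(pending.get('cores'), conf.get('cores'))
    if not cpu:
        cpu = host_cpus  # no cores limit: the CT may use all host CPUs

    swap = get_max(pending.get('swap'), conf.get('swap'))
    mem = get_max(pending.get('memory'), conf.get('memory'), default_mem_mb)

    return {'cpu': cpu, 'mem': (mem + swap) * 1024 * 1024}  # mem in bytes

usage = get_pool_usage({'cores': 2, 'memory': 1024, 'pending': {'memory': 2048}})
unlimited = get_pool_usage({})
```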

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/PVE/LXC/Config.pm | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 1664a35..a6baccd 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -11,6 +11,7 @@ use PVE::DataCenterConfig;
 use PVE::GuestHelpers;
 use PVE::INotify;
 use PVE::JSONSchema qw(get_standard_option);
+use PVE::ProcFSTools;
 use PVE::Tools;
 
 use PVE::LXC;
@@ -1886,4 +1887,38 @@ sub foreach_passthrough_device {
     }
 }
 
+my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
+
+# for determining pool usage vs limits
+#
+# this gives the higher of pending or currently configured
+sub get_pool_usage {
+    my ($class, $conf) = @_;
+
+    my $usage = {};
+
+    my $get_max = sub {
+	my $max = 0;
+
+	for my $curr (@_) {
+	    $max = $curr if defined($curr) && $curr > $max;
+	}
+
+	return $max;
+    };
+
+    $usage->{cpu} = $get_max->(
+	$conf->{pending}->{cores},
+	$conf->{cores},
+    );
+    $usage->{cpu} = $cpuinfo->{cpus} if !$usage->{cpu};
+
+    my $swap = $get_max->($conf->{pending}->{swap}, $conf->{swap});
+    my $mem = $get_max->($conf->{pending}->{memory}, $conf->{memory}, 512);
+    $usage->{mem} = $mem+$swap;
+    $usage->{mem} *= 1024*1024;
+
+    return $usage;
+}
+
 1;
-- 
2.39.2






* [pve-devel] [PATCH v2 container 2/7] status: add pool usage fields
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 access-control 1/1] pools: define " Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 1/7] config: add pool usage helper Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 3/7] create/restore/clone: handle pool limits Fabian Grünbichler
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

these are similar to existing ones, but with slightly different semantics.
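
roughly, the difference in semantics (a Python sketch; the values are
illustrative, and the assumption here is that pending changes are what makes
'confmem' diverge from the existing fields):

```python
# Illustrative sketch: maxmem/maxswap reflect only the current config, while
# confmem also honours pending changes, so it can be higher than what the
# running container actually gets (runmem).
MB = 1024 * 1024

conf = {'memory': 1024, 'swap': 512, 'pending': {'memory': 4096}}

maxmem = conf.get('memory', 512) * MB
maxswap = conf.get('swap', 0) * MB

confmem = (max(conf['pending'].get('memory', 0), conf.get('memory', 512))
           + conf.get('swap', 0)) * MB
runmem = maxmem + maxswap
```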

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/PVE/LXC.pm | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index e688ea6..9b03a97 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -138,6 +138,18 @@ our $vmstatus_return_properties = {
 	optional => 1,
 	renderer => 'bytes',
     },
+    confmem => {
+	description => "Configured amount of memory (inc. swap), might be higher than 'maxmem'.",
+	type => 'integer',
+	optional => 1,
+	renderer => 'bytes',
+    },
+    runmem => {
+	description => "Currently configured amount of memory (inc. swap).",
+	type => 'integer',
+	optional => 1,
+	renderer => 'bytes',
+    },
     maxdisk => {
 	description => "Root disk size in bytes.",
 	type => 'integer',
@@ -160,6 +172,16 @@ our $vmstatus_return_properties = {
 	type => 'number',
 	optional => 1,
     },
+    confcpus => {
+	description => "Configured number of CPUs, might be higher than 'cpus'.",
+	type => 'integer',
+	optional => 1,
+    },
+    runcpus => {
+	description => "Currently used number of CPUs.",
+	type => 'integer',
+	optional => 1,
+    },
     lock => {
 	description => "The current config lock, if any.",
 	type => 'string',
@@ -200,6 +222,7 @@ sub vmstatus {
 	my $conf = PVE::Cluster::cfs_read_file($cfspath) || {};
 
 	$unprivileged->{$vmid} = $conf->{unprivileged};
+	my $usage = PVE::LXC::Config->get_pool_usage($conf);
 
 	$d->{name} = $conf->{'hostname'} || "CT$vmid";
 	$d->{name} =~ s/[\s]//g;
@@ -207,6 +230,9 @@ sub vmstatus {
 	$d->{cpus} = $conf->{cores} || $conf->{cpulimit};
 	$d->{cpus} = $cpucount if !$d->{cpus};
 
+	$d->{confcpus} = $usage->{cpu};
+	$d->{runcpus} = $conf->{cores} || $cpucount;
+
 	$d->{tags} = $conf->{tags} if defined($conf->{tags});
 
 	if ($d->{pid}) {
@@ -229,6 +255,9 @@ sub vmstatus {
 	$d->{maxmem} = ($conf->{memory}||512)*1024*1024;
 	$d->{maxswap} = ($conf->{swap}//0)*1024*1024;
 
+	$d->{confmem} = $usage->{mem};
+	$d->{runmem} = $d->{maxmem} + $d->{maxswap};
+
 	$d->{uptime} = 0;
 	$d->{cpu} = 0;
 
-- 
2.39.2






* [pve-devel] [PATCH v2 container 3/7] create/restore/clone: handle pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (2 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 2/7] status: add pool usage fields Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 4/7] start: " Fabian Grünbichler
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

check early if possible, to avoid big cleanups caused by limit exhaustion.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/PVE/API2/LXC.pm | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index fd42ccf..80bac3d 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -4,6 +4,7 @@ use strict;
 use warnings;
 
 use IO::Socket::UNIX;
+use Storable qw(dclone);
 use Socket qw(SOCK_STREAM);
 
 use PVE::SafeSyslog;
@@ -51,6 +52,23 @@ my $check_storage_access_migrate = sub {
 	if !$scfg->{content}->{rootdir};
 };
 
+my $check_pool_limits_create = sub {
+    my ($vmid, $conf, $running, $pool) = @_;
+    if ($pool) {
+	my $usage = PVE::LXC::Config->get_pool_usage($conf);
+	my $changes = {
+	    cpu => $usage->{cpu},
+	    mem => $usage->{mem},
+	    absolute => 1, # in case this is an in-place restore
+	};
+	PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes, $pool);
+	if ($running) {
+	    $changes->{running} = 1;
+	    PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes, $pool);
+	}
+    }
+};
+
 __PACKAGE__->register_method ({
     subclass => "PVE::API2::LXC::Config",
     path => '{vmid}/config',
@@ -412,6 +430,12 @@ __PACKAGE__->register_method({
 			if ($old_conf->{unprivileged} && !$conf->{unprivileged}) {
 			    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Allocate']);
 			}
+
+			my $merged = dclone($conf);
+			PVE::LXC::Create::sanitize_and_merge_config($merged, $orig_conf, 0, 0);
+			$check_pool_limits_create->($vmid, $merged, 0, $pool);
+		    } else {
+			$check_pool_limits_create->($vmid, $conf, 0, $pool);
 		    }
 		}
 		if ($storage_only_mode) {
@@ -1748,6 +1772,7 @@ __PACKAGE__->register_method({
 
 	    # Replace the 'disk' lock with a 'create' lock.
 	    $newconf->{lock} = 'create';
+	    $check_pool_limits_create->($newid, $newconf, 0, $pool);
 
 	    # delete all snapshot related config options
 	    delete $newconf->@{qw(snapshots parent snaptime snapstate)};
-- 
2.39.2






* [pve-devel] [PATCH v2 container 4/7] start: handle pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (3 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 3/7] create/restore/clone: handle pool limits Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 5/7] hotplug: " Fabian Grünbichler
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/PVE/LXC.pm | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 9b03a97..1856b6a 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -2577,6 +2577,14 @@ sub vm_start {
 
     update_lxc_config($vmid, $conf);
 
+    my $usage = PVE::LXC::Config->get_pool_usage($conf);
+    PVE::GuestHelpers::check_guest_pool_limit($vmid, {
+	absolute => 1,
+	running => 1,
+	mem => $usage->{mem},
+	cpu => $usage->{cpu},
+    });
+
     eval {
 	my ($id_map, undef, undef) = PVE::LXC::parse_id_maps($conf);
 	PVE::LXC::validate_id_maps($id_map);
-- 
2.39.2






* [pve-devel] [PATCH v2 container 5/7] hotplug: handle pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (4 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 4/7] start: " Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 6/7] rollback: " Fabian Grünbichler
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

by checking the new values against the running limits.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/PVE/LXC/Config.pm | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index a6baccd..248100e 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1434,6 +1434,13 @@ sub vmconfig_hotplug_pending {
 
     foreach my $opt (sort keys %{$conf->{pending}}) { # add/change
 	next if $selection && !$selection->{$opt};
+	if ($opt eq 'cores') {
+	    PVE::GuestHelpers::check_guest_pool_limit($vmid, {
+		running => 1,
+		absolute => 1,
+		cpu => $conf->{pending}->{cores},
+	    })
+	}
 	if ($LXC_FASTPLUG_OPTIONS->{$opt}) {
 	    $conf->{$opt} = delete $conf->{pending}->{$opt};
 	}
@@ -1447,6 +1454,12 @@ sub vmconfig_hotplug_pending {
     my $hotplug_memory = sub {
 	my ($new_memory, $new_swap) = @_;
 
+	PVE::GuestHelpers::check_guest_pool_limit($vmid, {
+	    running => 1,
+	    absolute => 1,
+	    mem => ($new_memory + $new_swap)*1024*1024,
+	});
+
 	($new_memory, my $new_memory_high) = calculate_memory_constraints($new_memory);
 	$new_swap = int($new_swap * 1024 * 1024) if defined($new_swap);
 	$cgroup->change_memory_limit($new_memory, $new_swap, $new_memory_high);
-- 
2.39.2






* [pve-devel] [PATCH v2 container 6/7] rollback: handle pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (5 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 5/7] hotplug: " Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 7/7] update: " Fabian Grünbichler
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

by checking the snapshot conf values as if the CT was newly created.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/PVE/API2/LXC/Snapshot.pm | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/src/PVE/API2/LXC/Snapshot.pm b/src/PVE/API2/LXC/Snapshot.pm
index 0999fbc..37a02a6 100644
--- a/src/PVE/API2/LXC/Snapshot.pm
+++ b/src/PVE/API2/LXC/Snapshot.pm
@@ -299,6 +299,13 @@ __PACKAGE__->register_method({
 
 	my $realcmd = sub {
 	    PVE::Cluster::log_msg('info', $authuser, "rollback snapshot LXC $vmid: $snapname");
+	    my $snap_conf = PVE::LXC::Config->load_config($vmid)->{snapshots}->{$snapname};
+	    my $snap_usage = PVE::LXC::Config->get_pool_usage($snap_conf);
+	    my $changes = {
+		absolute => 1,
+		mem => $snap_usage->{mem},
+		cpu => $snap_usage->{cpu},
+	    };
 	    PVE::LXC::Config->snapshot_rollback($vmid, $snapname);
 
 	    if ($param->{start}) {
-- 
2.39.2






* [pve-devel] [PATCH v2 container 7/7] update: handle pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (6 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 6/7] rollback: " Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 guest-common 1/1] helpers: add pool limit/usage helpers Fabian Grünbichler
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---

Notes:
    v2:
    - don't multiply mem+swap, but add them up (thanks Dominik)
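
    the intended relative check can be sketched like this (Python for
    illustration; `mem_change_bytes` is a hypothetical name):

```python
# Sketch of the relative limit check on config updates (illustrative):
# memory and swap are added together, and only growth needs checking.
MB = 1024 * 1024

def mem_change_bytes(old_mem_mb, old_swap_mb, new_mem_mb, new_swap_mb):
    old = old_mem_mb + old_swap_mb
    new = new_mem_mb + new_swap_mb
    return (new - old) * MB if new > old else 0  # shrinking needs no check

grow = mem_change_bytes(1024, 512, 2048, 512)    # grows by 1024 MB
shrink = mem_change_bytes(1024, 512, 512, 512)   # shrinks: nothing to check
```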

 src/PVE/API2/LXC/Config.pm | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm
index e6c0980..15fab61 100644
--- a/src/PVE/API2/LXC/Config.pm
+++ b/src/PVE/API2/LXC/Config.pm
@@ -208,6 +208,27 @@ __PACKAGE__->register_method({
 
 	    my $running = PVE::LXC::check_running($vmid);
 
+	    my $usage = PVE::LXC::Config->get_pool_usage($conf);
+	    if (defined($param->{memory}) || defined($param->{swap})) {
+		my $old = $usage->{mem} / (1024*1024);
+		my $new = $param->{memory} || $conf->{memory} || 512;
+		$new += ($param->{swap} // $conf->{swap} // 0);
+
+		if ($new > $old) {
+		    my $change = { mem => ($new - $old) * 1024 * 1024 };
+		    PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
+		}
+	    }
+	    if (defined($param->{cores})) {
+		my $old = $usage->{cpu};
+		my $new = $param->{cores};
+
+		if ($new > $old) {
+		    my $change = { cpu => ($new - $old) };
+		    PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
+		}
+	    }
+
 	    my $errors = PVE::LXC::Config->update_pct_config($vmid, $conf, $running, $param, \@delete, \@revert);
 	    PVE::LXC::Config->write_config($vmid, $conf);
 	    $conf = PVE::LXC::Config->load_config($vmid);
-- 
2.39.2






* [pve-devel] [PATCH v2 guest-common 1/1] helpers: add pool limit/usage helpers
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (7 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 7/7] update: " Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 1/4] api: pools: add limits management Fabian Grünbichler
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

one for combining the per-node broadcasted values, one for checking a pool's
limit, and one specific helper for checking guest-related actions such as
starting a VM.
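
the key naming convention the helpers rely on can be sketched as (Python
for illustration, mirroring the Perl pool_limit_to_usage/pool_default_usage
in this patch):

```python
# Python sketch of the limit-key convention: a key like 'mem-run' encodes the
# resource, the limit kind, and whether it applies to running guests.

def pool_limit_to_usage(limit_key):
    resource, kind = limit_key.split('-', 1)
    return resource, kind, kind == 'run'

def pool_default_usage(limit_keys=('mem-config', 'mem-run', 'cpu-config', 'cpu-run')):
    default = {}
    for key in limit_keys:
        resource, kind, _running = pool_limit_to_usage(key)
        default.setdefault(resource, {})[kind] = 0
    return default
```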

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---

Notes:
    v2:
    - style
    - introduce new helper for mapping limit key to usage hash
    - introduce new helper for default usage hash
    - avoid hard-coding cpu/mem and run/config where sensible

 src/PVE/GuestHelpers.pm | 183 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 183 insertions(+)

diff --git a/src/PVE/GuestHelpers.pm b/src/PVE/GuestHelpers.pm
index 961a7b8..e52eaf0 100644
--- a/src/PVE/GuestHelpers.pm
+++ b/src/PVE/GuestHelpers.pm
@@ -416,4 +416,187 @@ sub check_vnet_access {
 	if !($tag || $trunks);
 }
 
+sub pool_limit_to_usage {
+    my ($limit_key) = @_;
+
+    my ($resource, $kind) = split(/-/, $limit_key, 2);
+
+    return ($resource, $kind, $kind eq 'run' ? 1 : 0);
+}
+
+sub pool_default_usage {
+    my $default = {};
+
+    for my $limit (keys $PVE::AccessControl::pool_limits_desc->%*) {
+	my ($resource, $kind) = pool_limit_to_usage($limit);
+	$default->{$resource}->{$kind} = 0;
+    }
+
+    return $default;
+}
+
+# combines the broadcasted pool usage information to get per-pool stats
+#
+# $pools parsed pool info from user.cfg
+# $usage broadcasted KV hash
+# $pool filter for specific pool
+# $skip skip a certain guest to ignore its current usage
+#
+# returns usage hash:
+# pool -> cpu/mem/.. -> run/config -> $usage
+sub get_pool_usage {
+    my ($pools, $usage, $pool, $skip) = @_;
+
+    my $res = {};
+    my $included_guests = {};
+    for my $node (keys $usage->%*) {
+	my $node_usage = JSON::decode_json($usage->{$node} // '');
+
+	# long IDs first, so we can add children to their parents right away
+	for my $poolid (sort {$b cmp $a} keys $pools->%*) {
+	    if (
+		defined($pool)
+		&& !($pool eq $poolid || $poolid =~ m!^$pool/! || $pool =~ m!^$poolid/!)
+	    ) {
+		next;
+	    }
+
+	    my $d = $res->{$poolid} //= pool_default_usage();
+
+	    my $pool_usage = $node_usage->{data}->{$poolid} // {};
+	    for my $vmid (keys $pool_usage->%*) {
+		# only include once in case of migration between broadcast
+		next if $included_guests->{$vmid};
+		next if $skip && $skip->{$vmid};
+		$included_guests->{$vmid} = 1;
+
+		my $vm_data = $pool_usage->{$vmid};
+		for my $key (keys $vm_data->%*) {
+		    next if $key eq 'running';
+		    $d->{$key}->{run} += $vm_data->{$key}->{run} if $vm_data->{running};
+		    $d->{$key}->{config} += $vm_data->{$key}->{config};
+		}
+	    }
+
+	    if (my $parent = $pools->{$poolid}->{parent}) {
+		$res->{$parent} //= pool_default_usage();
+		for my $key (keys $d->%*) {
+		    for my $kind (keys $d->{$key}->%*) {
+			$res->{$parent}->{$key}->{$kind} = $d->{$key}->{$kind};
+		    }
+		}
+	    }
+	}
+    }
+
+    return $res;
+}
+
+# checks whether a pool is (or would be) over its resource limits
+#
+# $changes is for checking limits for config/state changes like VM starts, if
+# set, only the limits with changes are checked (see check_guest_pool_limit)
+#
+# return value indicates whether any limit was overstepped or not (if $noerr is set)
+sub check_pool_limits {
+    my ($usage, $limits, $noerr, $changes) = @_;
+
+    my $over = {};
+    my $only_changed = defined($changes);
+
+    my $check_limit = sub {
+	my ($key, $running, $limit, $change) = @_;
+
+	return if $only_changed && $change == 0;
+
+	my $kind = $running ? 'run' : 'config';
+
+	my $value = $usage->{$key}->{$kind};
+	$value = int($value);
+	$value += $change;
+	$value = $value / (1024*1024) if $key eq 'mem';
+	if ($limit < $value) {
+	    $over->{$key}->{$kind}->{change} = $change if $change;
+	    $over->{$key}->{$kind}->{over} = 1;
+	}
+    };
+
+    my $get_change = sub {
+	my ($key, $running) = @_;
+
+	return 0 if !defined($changes);
+
+	my $check_running = defined($changes->{running}) && $changes->{running} ? 1 : 0;
+
+	if ($running == $check_running) {
+	    return $changes->{$key} // 0;
+	} else {
+	    return 0;
+	}
+    };
+
+    while (my ($key, $limit) = each $limits->%*) {
+	my ($resource, $kind, $running) = pool_limit_to_usage($key);
+	my $change = $get_change->($resource, $running);
+	$check_limit->($resource, $running, $limit, $change);
+    }
+
+    if (!$noerr) {
+	my $msg = '';
+	for my $key (keys $over->%*) {
+	    for my $kind (keys $over->{$key}->%*) {
+		my $value = $usage->{$key}->{$kind};
+		$value = $value / (1024*1024) if $key eq 'mem';
+		my $change = $over->{$key}->{$kind}->{change};
+		if ($change) {
+		    $change = $change / (1024*1024) if $key eq 'mem';
+		    $value = "$value + $change" if $change;
+		}
+		my $limit = $limits->{"$key-$kind"};
+		$msg .= "($kind) $key: $value over $limit, ";
+	    }
+	}
+	if ($msg) {
+	    $msg =~ s/, $//;
+	    die "pool limits exhausted: $msg\n";
+	}
+    }
+
+    return $over->%* ? 1 : 0;
+}
+
+# checks whether the given changes for a certain guest would overstep a pool limit
+#
+# $changes is an optional hash containing
+# - absolute: flag whether changes are relative or absolute
+# - running: flag whether the config or running limits should be checked
+# - cpu: change value for cpu limit
+# - mem: change value for mem limit
+# all elements are optional
+#
+# if no $changes is provided, the limits are checked against the current usage
+#
+# $poolid allows overriding the guest's pool membership, for example in case it
+# is not yet properly set when creating the guest
+sub check_guest_pool_limit {
+    my ($vmid, $changes, $poolid) = @_;
+
+    my $user_cfg = PVE::Cluster::cfs_read_file("user.cfg");
+
+    $poolid = $user_cfg->{vms}->{$vmid} if !defined($poolid);
+    if ($poolid) {
+	my $pool = $user_cfg->{pools}->{$poolid};
+
+	my $limits = $pool->{limits};
+	return if !$limits;
+
+	my $skip = {};
+	$skip->{$vmid} = 1 if $changes && $changes->{absolute};
+	my $usage = PVE::Cluster::get_node_kv('pool-usage');
+
+	$usage = get_pool_usage($user_cfg->{pools}, $usage, $poolid, $skip);
+	check_pool_limits($usage->{$poolid}, $limits, 0, $changes);
+    }
+}
+
 1;
-- 
2.39.2






* [pve-devel] [PATCH v2 manager 1/4] api: pools: add limits management
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (8 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 guest-common 1/1] helpers: add pool limit/usage helpers Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 2/4] pvestatd: collect and broadcast pool usage Fabian Grünbichler
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

allow setting/updating limits, and return them when querying individual pools.
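
the run-vs-config consistency rule can be sketched as (Python for
illustration, mirroring $check_run_vs_config_limits in this patch): a 'run'
limit above its 'config' counterpart could never be reached, so it is
rejected.

```python
# Sketch of the run-vs-config consistency check (illustrative only): for each
# '<resource>-run' limit, the matching '<resource>-config' limit, if set,
# must be at least as large.

def check_run_vs_config(limits):
    errors = {}
    for key, value in limits.items():
        if not key.endswith('-run'):
            continue
        config_key = key[: -len('-run')] + '-config'
        if config_key in limits and limits[config_key] < value:
            errors[key] = 'run limit must not be above config limit'
    return errors

ok = check_run_vs_config({'mem-run': 1024, 'mem-config': 2048})
bad = check_run_vs_config({'cpu-run': 8, 'cpu-config': 4})
```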

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---

Notes:
    requires bumped pve-access-control
    
    v2:
    - unify run vs config limit checks into helper
    - avoid hard-coding resource kinds

 PVE/API2/Pool.pm | 50 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/PVE/API2/Pool.pm b/PVE/API2/Pool.pm
index 54e744558..031f0160f 100644
--- a/PVE/API2/Pool.pm
+++ b/PVE/API2/Pool.pm
@@ -7,6 +7,7 @@ use PVE::AccessControl;
 use PVE::Cluster qw (cfs_read_file cfs_write_file);
 use PVE::Exception qw(raise_param_exc);
 use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string print_property_string);
 use PVE::Storage;
 
 use PVE::SafeSyslog;
@@ -47,10 +48,7 @@ __PACKAGE__->register_method ({
 	    type => "object",
 	    properties => {
 		poolid => { type => 'string' },
-		comment => {
-		    type => 'string',
-		    optional => 1,
-		},
+		comment => { type => 'string', optional => 1 },
 		members => {
 		    type => 'array',
 		    optional => 1,
@@ -79,6 +77,7 @@ __PACKAGE__->register_method ({
 			},
 		    },
 		},
+		limits => { type => 'string', optional => 1 },
 	    },
 	},
 	links => [ { rel => 'child', href => "{poolid}" } ],
@@ -136,6 +135,8 @@ __PACKAGE__->register_method ({
 		members => $members,
 	    };
 	    $pool_info->{comment} = $pool_config->{comment} if defined($pool_config->{comment});
+	    $pool_info->{limits} = print_property_string($pool_config->{limits}, 'pve-pool-limits')
+		if defined($pool_config->{limits});
 	    $pool_info->{poolid} = $poolid;
 
 	    push @$res, $pool_info;
@@ -153,6 +154,26 @@ __PACKAGE__->register_method ({
 	return $res;
     }});
 
+my $check_run_vs_config_limits = sub {
+    my ($limits) = @_;
+
+    my $exception;
+
+    for my $resource (keys $limits->%*) {
+	next if $resource !~ m/-run$/;
+
+	my $config = $resource;
+	$config =~ s/-run$/-config/;
+
+	if (defined($limits->{$config}) && $limits->{$config} < $limits->{$resource}) {
+	    my $msg = "run limit must not be higher than config limit!";
+	    $exception->{$resource} = $msg;
+	}
+    }
+
+    raise_param_exc($exception) if $exception;
+};
+
 __PACKAGE__->register_method ({
     name => 'create_pool',
     protected => 1,
@@ -173,6 +194,7 @@ __PACKAGE__->register_method ({
 		type => 'string',
 		optional => 1,
 	    },
+	    limits => get_standard_option('pve-pool-limits'),
 	},
     },
     returns => { type => 'null' },
@@ -196,6 +218,10 @@ __PACKAGE__->register_method ({
 	    };
 
 	    $usercfg->{pools}->{$pool}->{comment} = $param->{comment} if $param->{comment};
+	    if (my $limits = parse_property_string('pve-pool-limits', $param->{limits} // '')) {
+		$check_run_vs_config_limits->($limits);
+		$usercfg->{pools}->{$pool}->{limits} = $limits;
+	    }
 
 	    cfs_write_file("user.cfg", $usercfg);
 	}, "create pool failed");
@@ -288,6 +314,9 @@ __PACKAGE__->register_method ({
 		optional => 1,
 		default => 0,
 	    },
+	    limits => get_standard_option("pve-pool-limits", {
+		description => "Update pool limits. Passing '0' for a specific limit will remove any currently configured value.",
+	    }),
 	},
     },
     returns => { type => 'null' },
@@ -346,6 +375,18 @@ __PACKAGE__->register_method ({
 		}
 	    }
 
+	    if (my $update = parse_property_string('pve-pool-limits', $param->{limits} // '')) {
+		my $limits = $pool_config->{limits};
+		for my $kind (sort keys $update->%*) {
+		    if ($update->{$kind} == 0) {
+			delete $limits->{$kind};
+		    } else {
+			$limits->{$kind} = $update->{$kind};
+		    }
+		}
+		$check_run_vs_config_limits->($limits);
+	    }
+
 	    cfs_write_file("user.cfg", $usercfg);
 	}, "update pools failed");
 
@@ -409,6 +450,7 @@ __PACKAGE__->register_method ({
 		    },
 		},
 	    },
+	    limits => get_standard_option("pve-pool-limits"),
 	},
     },
     code => sub {
-- 
2.39.2

* [pve-devel] [PATCH v2 manager 2/4] pvestatd: collect and broadcast pool usage
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (9 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 1/4] api: pools: add limits management Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 3/4] api: return pool usage when queried Fabian Grünbichler
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

so that other nodes can query it, both to block changes that would violate the
limits and to mark pools that currently violate them.
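The per-guest records broadcast via the KV store can be summed into per-pool
totals by any consumer; a rough Python sketch of that aggregation (illustrative
only, not the actual Perl helper in PVE::GuestHelpers):

```python
# Aggregate broadcast per-guest usage records into per-pool totals.
# The record shape mirrors what update_qemu_status/update_lxc_status
# collect; the aggregation function itself is a hypothetical stand-in.
def aggregate_pool_usage(pool_usage):
    totals = {}
    for pool, guests in pool_usage.items():
        t = totals.setdefault(pool, {
            "cpu-config": 0, "cpu-run": 0,
            "mem-config": 0, "mem-run": 0,
        })
        for vmid, rec in guests.items():
            t["cpu-config"] += rec["cpu"]["config"]
            t["mem-config"] += rec["mem"]["config"]
            if rec["running"]:  # only running guests count towards run limits
                t["cpu-run"] += rec["cpu"]["run"]
                t["mem-run"] += rec["mem"]["run"]
    return totals
```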

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 PVE/Service/pvestatd.pm | 59 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 4 deletions(-)

diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index 2515120c6..d7e4755e9 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -231,7 +231,7 @@ sub auto_balloning {
 }
 
 sub update_qemu_status {
-    my ($status_cfg) = @_;
+    my ($status_cfg, $pool_membership, $pool_usage) = @_;
 
     my $ctime = time();
     my $vmstatus = PVE::QemuServer::vmstatus(undef, 1);
@@ -242,6 +242,21 @@ sub update_qemu_status {
     my $transactions = PVE::ExtMetric::transactions_start($status_cfg);
     foreach my $vmid (keys %$vmstatus) {
 	my $d = $vmstatus->{$vmid};
+
+	if (my $pool = $pool_membership->{$vmid}) {
+	    $pool_usage->{$pool}->{$vmid} = {
+		cpu => {
+		    config => ($d->{confcpus} // 0),
+		    run => ($d->{runcpus} // 0),
+		},
+		mem => {
+		    config => ($d->{confmem} // 0),
+		    run => ($d->{runmem} // 0),
+		},
+		running => $d->{pid} ? 1 : 0,
+	    };
+	}
+
 	my $data;
 	my $status = $d->{qmpstatus} || $d->{status} || 'stopped';
 	my $template = $d->{template} ? $d->{template} : "0";
@@ -263,6 +278,17 @@ sub update_qemu_status {
     PVE::ExtMetric::transactions_finish($transactions);
 }
 
+sub update_pool_usage {
+    my ($usage) = @_;
+
+    my $ctime = time();
+
+    # TODO? RRD and ExtMetric support here?
+
+    my $new = { data => $usage, timestamp => $ctime };
+    PVE::Cluster::broadcast_node_kv('pool-usage', encode_json($new));
+}
+
 sub remove_stale_lxc_consoles {
 
     my $vmstatus = PVE::LXC::vmstatus();
@@ -440,7 +466,7 @@ sub rebalance_lxc_containers {
 }
 
 sub update_lxc_status {
-    my ($status_cfg) = @_;
+    my ($status_cfg, $pool_membership, $pool_usage) = @_;
 
     my $ctime = time();
     my $vmstatus = PVE::LXC::vmstatus();
@@ -449,6 +475,21 @@ sub update_lxc_status {
 
     foreach my $vmid (keys %$vmstatus) {
 	my $d = $vmstatus->{$vmid};
+
+	if (my $pool = $pool_membership->{$vmid}) {
+	    $pool_usage->{$pool}->{$vmid} = {
+		cpu => {
+		    config => ($d->{confcpus} // 0),
+		    run => ($d->{runcpus} // 0),
+		},
+		mem => {
+		    config => ($d->{confmem} // 0),
+		    run => ($d->{runmem} // 0),
+		},
+		running => $d->{status} eq 'running' ? 1 : 0,
+	    };
+	}
+
 	my $template = $d->{template} ? $d->{template} : "0";
 	my $data;
 	if ($d->{status} eq 'running') { # running
@@ -540,6 +581,10 @@ sub update_status {
     syslog('err', $err) if $err;
 
     my $status_cfg = PVE::Cluster::cfs_read_file('status.cfg');
+    my $user_cfg = PVE::Cluster::cfs_read_file('user.cfg');
+    my $pools = $user_cfg->{pools};
+    my $pool_membership = $user_cfg->{vms};
+    my $pool_usage = {};
 
     eval {
 	update_node_status($status_cfg);
@@ -548,17 +593,23 @@ sub update_status {
     syslog('err', "node status update error: $err") if $err;
 
     eval {
-	update_qemu_status($status_cfg);
+	update_qemu_status($status_cfg, $pool_membership, $pool_usage);
     };
     $err = $@;
     syslog('err', "qemu status update error: $err") if $err;
 
     eval {
-	update_lxc_status($status_cfg);
+	update_lxc_status($status_cfg, $pool_membership, $pool_usage);
     };
     $err = $@;
     syslog('err', "lxc status update error: $err") if $err;
 
+    eval {
+	update_pool_usage($pool_usage);
+    };
+    $err = $@;
+    syslog('err', "pool usage status update error: $err") if $err;
+
     eval {
 	rebalance_lxc_containers();
     };
-- 
2.39.2

* [pve-devel] [PATCH v2 manager 3/4] api: return pool usage when queried
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (10 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 2/4] pvestatd: collect and broadcast pool usage Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 4/4] ui: add pool limits and usage Fabian Grünbichler
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 PVE/API2/Pool.pm | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Pool.pm b/PVE/API2/Pool.pm
index 031f0160f..2d83ccc67 100644
--- a/PVE/API2/Pool.pm
+++ b/PVE/API2/Pool.pm
@@ -6,6 +6,7 @@ use warnings;
 use PVE::AccessControl;
 use PVE::Cluster qw (cfs_read_file cfs_write_file);
 use PVE::Exception qw(raise_param_exc);
+use PVE::GuestHelpers;
 use PVE::INotify;
 use PVE::JSONSchema qw(get_standard_option parse_property_string print_property_string);
 use PVE::Storage;
@@ -78,6 +79,7 @@ __PACKAGE__->register_method ({
 		    },
 		},
 		limits => { type => 'string', optional => 1 },
+		usage => { type => 'object', optional => 1 },
 	    },
 	},
 	links => [ { rel => 'child', href => "{poolid}" } ],
@@ -135,17 +137,30 @@ __PACKAGE__->register_method ({
 		members => $members,
 	    };
 	    $pool_info->{comment} = $pool_config->{comment} if defined($pool_config->{comment});
-	    $pool_info->{limits} = print_property_string($pool_config->{limits}, 'pve-pool-limits')
-		if defined($pool_config->{limits});
 	    $pool_info->{poolid} = $poolid;
 
+	    my $usage = PVE::Cluster::get_node_kv('pool-usage');
+	    $usage = PVE::GuestHelpers::get_pool_usage($usercfg->{pools}, $usage, $poolid);
+	    if (my $limits = $pool_config->{limits}) {
+		$pool_info->{limits} = print_property_string($limits, 'pve-pool-limits');
+		$usage->{$poolid}->{over} = PVE::GuestHelpers::check_pool_limits($usage->{$poolid}, $limits, 1);
+	    }
+	    $pool_info->{usage} = $usage->{$poolid};
+
 	    push @$res, $pool_info;
 	} else {
+	    my $usage = PVE::Cluster::get_node_kv('pool-usage');
+	    $usage = PVE::GuestHelpers::get_pool_usage($usercfg->{pools}, $usage);
 	    for my $pool (sort keys %{$usercfg->{pools}}) {
 		next if !$rpcenv->check($authuser, "/pool/$pool", [ 'Pool.Audit' ], 1);
 
 		my $entry = { poolid => $pool };
 		my $pool_config = $usercfg->{pools}->{$pool};
+		if (my $limits = $pool_config->{limits}) {
+		    $usage->{$pool}->{over} = PVE::GuestHelpers::check_pool_limits($usage->{$pool}, $limits, 1);
+		    $entry->{limits} = print_property_string($limits, 'pve-pool-limits');
+		}
+		$entry->{usage} = $usage->{$pool};
 		$entry->{comment} = $pool_config->{comment} if defined($pool_config->{comment});
 		push @$res, $entry;
 	    }
-- 
2.39.2

* [pve-devel] [PATCH v2 manager 4/4] ui: add pool limits and usage
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (11 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 3/4] api: return pool usage when queried Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 1/6] config: add pool usage helper Fabian Grünbichler
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---

Notes:
    this is very "bare", obviously we'd want
    - a nicer grid/.. display of usage
    - a way to edit the limits
    
    I am not yet sure how to integrate this nicely, and wanted to get feedback on the rest first.
    
    v2:
    - fold in change I forgot to include in the patch
      (== vs ===, ? 1 : 0 vs just using the comparison result)
    - add edit window

 www/manager6/pool/StatusView.js | 141 +++++++++++++++++++++++++++++++-
 1 file changed, 140 insertions(+), 1 deletion(-)

diff --git a/www/manager6/pool/StatusView.js b/www/manager6/pool/StatusView.js
index 3d46b3b1a..293ee1443 100644
--- a/www/manager6/pool/StatusView.js
+++ b/www/manager6/pool/StatusView.js
@@ -1,12 +1,103 @@
+var labelWidth = 200;
+
+Ext.define('PVE.pool.LimitInputPanel', {
+    extend: 'Proxmox.panel.InputPanel',
+    alias: 'widget.pvePoolLimitInputPanel',
+
+    onGetValues: function(values) {
+	let ret = PVE.Parser.printPropertyString(values, 'type');
+	if (ret === '') {
+	    return { 'delete': 'limits' };
+	}
+	return { limits: ret };
+    },
+
+    items: [
+	    {
+		xtype: 'proxmoxintegerfield',
+		name: 'mem-config',
+		minValue: 0,
+		value: '0',
+		step: 32,
+		fieldLabel: gettext('Memory Config Limit') + ' (MiB)',
+		labelWidth: labelWidth,
+		allowBlank: false,
+	    },
+	    {
+		xtype: 'proxmoxintegerfield',
+		name: 'mem-run',
+		minValue: 0,
+		value: '0',
+		step: 32,
+		fieldLabel: gettext('Memory Running Limit') + ' (MiB)',
+		labelWidth: labelWidth,
+		allowBlank: false,
+	    },
+	    {
+		xtype: 'proxmoxintegerfield',
+		name: 'cpu-config',
+		minValue: 0,
+		value: '0',
+		fieldLabel: gettext('CPU Config Limit'),
+		labelWidth: labelWidth,
+		allowBlank: false,
+	    },
+	    {
+		xtype: 'proxmoxintegerfield',
+		name: 'cpu-run',
+		minValue: 0,
+		value: '0',
+		fieldLabel: gettext('CPU Running Limit'),
+		labelWidth: labelWidth,
+		allowBlank: false,
+	    },
+	],
+});
+
+Ext.define('PVE.pool.EditLimits', {
+    extend: 'Proxmox.window.Edit',
+
+    width: 640,
+    height: 480,
+
+    initComponent: function() {
+	var me = this;
+
+	if (!me.pool) {
+	    throw "no pool specified";
+	}
+
+	me.url = '/pools/';
+	me.method = 'PUT';
+	me.extraRequestParams = { poolid: me.pool };
+	me.autoLoad = true;
+
+	Ext.apply(me, {
+	    subject: gettext('Pool Limits'),
+	    items: Ext.create('PVE.pool.LimitInputPanel'),
+	});
+
+	me.callParent();
+
+	me.load({
+	    success: function(response) {
+		me.poolconfig = response.result.data[0];
+		let limits = me.poolconfig.limits;
+		me.setValues(PVE.Parser.parsePropertyString(limits));
+	    },
+	});
+    },
+});
+
 Ext.define('PVE.pool.StatusView', {
     extend: 'Proxmox.grid.ObjectGrid',
     alias: ['widget.pvePoolStatusView'],
-    disabled: true,
 
     title: gettext('Status'),
     cwidth1: 150,
     interval: 30000,
     //height: 195,
+
     initComponent: function() {
 	var me = this;
 
@@ -15,17 +106,65 @@ Ext.define('PVE.pool.StatusView', {
 	    throw "no pool specified";
 	}
 
+	var reload = function() {
+	    me.rstore.load();
+	};
+
 	var rows = {
 	    comment: {
 		header: gettext('Comment'),
 		renderer: Ext.String.htmlEncode,
 		required: true,
 	    },
+	    usage: {
+		header: gettext('Usage'),
+		required: false,
+		renderer: value => {
+		    if (value === null) {
+			return '-';
+		    } else {
+			let rendered = '';
+			let over = false;
+			for (const [first, second] of Object.entries(value)) {
+			    if (first === 'over') {
+				over = second === 1;
+			    } else {
+				for (const [kind, usage] of Object.entries(second)) {
+				    if (rendered === '') {
+				        rendered = `${first}-${kind}=${usage}`;
+				    } else {
+				        rendered = rendered + `, ${first}-${kind}=${usage}`;
+				    }
+				}
+			    }
+			}
+			if (over) {
+			    rendered = rendered + "\nover limit";
+			}
+			return rendered;
+		    }
+		},
+	    },
+	    limits: {
+		header: gettext('Resource Limits'),
+		required: false,
+	    },
 	};
 
 	Ext.apply(me, {
 	    url: "/api2/json/pools/?poolid=" + pool,
 	    rows: rows,
+	    tbar: [
+		{
+		    text: gettext('Edit Limits'),
+		    iconCls: 'pve-itype-icon-qemu',
+		    handler: function() {
+		        var win = Ext.create('PVE.pool.EditLimits', { pool: pool });
+		        win.on('destroy', reload);
+		        win.show();
+		    },
+		},
+	    ],
 	});
 
 	me.callParent();
-- 
2.39.2

* [pve-devel] [PATCH v2 qemu-server 1/6] config: add pool usage helper
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (12 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 4/4] ui: add pool limits and usage Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 2/6] vmstatus: add usage values for pool limits Fabian Grünbichler
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

determine the usage values for the current config. pending values are only
taken into account if they are higher than the current value, since otherwise
it would be easy to circumvent config limits by setting non-hotpluggable
pending values.
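The "higher of pending or current" rule described above can be sketched outside
of Perl; the field names follow the patch, but the helper itself is
illustrative, not the real PVE::QemuConfig API:

```python
# Sketch of the get_pool_usage rule: a pending value only counts if it
# exceeds the currently configured one (function names hypothetical).
def get_max(*values):
    return max((v for v in values if v is not None), default=0)

def pool_usage(conf):
    pending = conf.get("pending", {})
    sockets = get_max(pending.get("sockets"), conf.get("sockets"), 1)
    cores = get_max(pending.get("cores"), conf.get("cores"), 1)
    mem_mb = get_max(pending.get("memory"), conf.get("memory"))
    return {
        "cpu": sockets * cores,
        "mem": mem_mb * 1024 * 1024,  # config stores MiB, usage is bytes
    }
```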

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 PVE/QemuConfig.pm | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 8e8a7828..1410bf14 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -573,4 +573,34 @@ sub has_cloudinit {
     return $found;
 }
 
+# for determining pool usage vs limits
+#
+# this gives the higher of pending or currently configured
+sub get_pool_usage {
+    my ($class, $conf) = @_;
+
+    my $usage = {};
+
+    my $get_max = sub {
+	my $max = 0;
+
+	for my $curr (@_) {
+	    $max = $curr if defined($curr) && $curr > $max;
+	}
+
+	return $max;
+    };
+
+    $usage->{sockets} = $get_max->($conf->{pending}->{sockets}, $conf->{sockets}, 1);
+    $usage->{cores} = $get_max->($conf->{pending}->{cores}, $conf->{cores}, 1);
+    $usage->{cpu} = $usage->{sockets} * $usage->{cores};
+    $usage->{mem} = $get_max->(
+	get_current_memory($conf->{pending}->{memory}),
+	get_current_memory($conf->{memory})
+    );
+    $usage->{mem} *= 1024*1024;
+
+    return $usage;
+}
+
 1;
-- 
2.39.2

* [pve-devel] [PATCH v2 qemu-server 2/6] vmstatus: add usage values for pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (13 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 1/6] config: add pool usage helper Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 3/6] create/restore/clone: handle " Fabian Grünbichler
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

these are separate from the existing ones to allow changes on either end
without side-effects, since the semantics are not quite the same.

the conf values incorporate pending values (if higher than the current config
value), and avoid clamping.

the run values are currently identical to the regular ones.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 PVE/QemuServer.pm | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 661613df..93eaaec5 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2755,6 +2755,18 @@ our $vmstatus_return_properties = {
 	optional => 1,
 	renderer => 'bytes',
     },
+    confmem => {
+	description => "Configured memory in bytes, might be higher than 'maxmem'",
+	type => 'integer',
+	optional => 1,
+	renderer => 'bytes',
+    },
+    runmem => {
+	description => "Currently plugged memory in bytes, if running.",
+	type => 'integer',
+	optional => 1,
+	renderer => 'bytes',
+    },
     maxdisk => {
 	description => "Root disk size in bytes.",
 	type => 'integer',
@@ -2787,6 +2799,16 @@ our $vmstatus_return_properties = {
 	type => 'number',
 	optional => 1,
     },
+    confcpus => {
+	description => "Configured amount of CPUs, might be higher than 'cpus'.",
+	type => 'number',
+	optional => 1,
+    },
+    runcpus => {
+	description => "Currently used amount of CPUs.",
+	type => 'number',
+	optional => 1,
+    },
     lock => {
 	description => "The current config lock, if any.",
 	type => 'string',
@@ -2839,6 +2861,8 @@ sub vmstatus {
 	# fixme: better status?
 	$d->{status} = $list->{$vmid}->{pid} ? 'running' : 'stopped';
 
+	my $usage = PVE::QemuConfig->get_pool_usage($conf);
+
 	my $size = PVE::QemuServer::Drive::bootdisk_size($storecfg, $conf);
 	if (defined($size)) {
 	    $d->{disk} = 0; # no info available
@@ -2850,11 +2874,16 @@ sub vmstatus {
 
 	$d->{cpus} = ($conf->{sockets} || $defaults->{sockets})
 	    * ($conf->{cores} || $defaults->{cores});
+	
 	$d->{cpus} = $cpucount if $d->{cpus} > $cpucount;
 	$d->{cpus} = $conf->{vcpus} if $conf->{vcpus};
+	$d->{confcpus} = $usage->{cpu};
+	$d->{runcpus} = $d->{cpus};
 
 	$d->{name} = $conf->{name} || "VM $vmid";
 	$d->{maxmem} = get_current_memory($conf->{memory})*(1024*1024);
+	$d->{confmem} = $usage->{mem};
+	$d->{runmem} = $d->{maxmem};
 
 	if ($conf->{balloon}) {
 	    $d->{balloon_min} = $conf->{balloon}*(1024*1024);
-- 
2.39.2

* [pve-devel] [PATCH v2 qemu-server 3/6] create/restore/clone: handle pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (14 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 2/6] vmstatus: add usage values for pool limits Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 4/6] update/hotplug: " Fabian Grünbichler
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

as early as possible, to avoid having to undo expensive work or leaving a
window for limit exhaustion.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 PVE/API2/Qemu.pm | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 78f2236c..f0ff785c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -229,6 +229,26 @@ my $check_storage_access_migrate = sub {
 	if !$scfg->{content}->{images};
 };
 
+my $check_pool_limits_create = sub {
+    my ($vmid, $conf, $running, $pool) = @_;
+    if ($pool) {
+	my $usage = PVE::QemuConfig->get_pool_usage($conf);
+	my $changes = {
+	    cpu => $usage->{cpu},
+	    mem => $usage->{mem},
+	    absolute => 1, # in case this is an in-place restore
+	};
+	PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes, $pool);
+	if ($running) {
+	    $changes->{running} = 1;
+	    $changes->{cpu} = $conf->{vcpus}
+		if $conf->{vcpus} && $conf->{vcpus} < $changes->{cpu};
+	    PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes, $pool);
+	}
+    }
+};
+
+
 my $import_from_volid = sub {
     my ($storecfg, $src_volid, $dest_info, $vollist) = @_;
 
@@ -1041,6 +1061,7 @@ __PACKAGE__->register_method({
 			warn "Skipping early checks!\n";
 		    } else {
 			PVE::QemuServer::check_restore_permissions($rpcenv, $authuser, $merged);
+			$check_pool_limits_create->($vmid, $merged, $live_restore || $start_after_create, $pool);
 		    }
 		}
 		if ($archive->{type} eq 'file' || $archive->{type} eq 'pipe') {
@@ -1085,6 +1106,7 @@ __PACKAGE__->register_method({
 
 	    my $realcmd = sub {
 		my $conf = $param;
+		$check_pool_limits_create->($vmid, $conf, $live_restore || $start_after_create, $pool);
 		my $arch = PVE::QemuServer::get_vm_arch($conf);
 
 		for my $opt (sort keys $param->%*) {
@@ -3793,6 +3815,8 @@ __PACKAGE__->register_method({
 	    my ($conffile, $newconf, $oldconf, $vollist, $drives, $fullclone) = $load_and_check->();
 	    my $storecfg = PVE::Storage::config();
 
+	    $check_pool_limits_create->($newid, $newconf, 0, $pool);
+
 	    # auto generate a new uuid
 	    my $smbios1 = PVE::QemuServer::parse_smbios1($newconf->{smbios1} || '');
 	    $smbios1->{uuid} = PVE::QemuServer::generate_uuid();
-- 
2.39.2

* [pve-devel] [PATCH v2 qemu-server 4/6] update/hotplug: handle pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (15 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 3/6] create/restore/clone: handle " Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 5/6] start: " Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 6/6] rollback: " Fabian Grünbichler
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

if the new value is higher than the old one, check against the limits. if the
old one is higher, the change is always okay, to support reducing usage in
steps spread over multiple guests.
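The rule boils down to a delta check; a minimal sketch, with `check_limit`
standing in for the real PVE::GuestHelpers::check_guest_pool_limit and the
return value being a hypothetical addition for illustration:

```python
# Only check pool limits when a value grows; shrinking is always allowed
# so usage can be reduced in steps across multiple guests.
def check_increase(old, new, check_limit):
    if new > old:
        check_limit({"cpu": new - old})  # pass only the delta to the checker
        return True   # a limit check was performed
    return False      # decrease or no change: nothing to check
```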

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 PVE/API2/Qemu.pm         | 22 ++++++++++++++++++++++
 PVE/QemuServer.pm        |  7 +++++++
 PVE/QemuServer/Memory.pm |  6 ++++++
 3 files changed, 35 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f0ff785c..adbc6557 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1863,6 +1863,28 @@ my $update_vm_api  = sub {
 		}
 	    };
 
+	    # check pool limits, but only if values increase, ignoring
+	    # deletions and pending values
+	    my $usage = PVE::QemuConfig->get_pool_usage($conf);
+	    if (defined($param->{sockets}) || defined($param->{cores})) {
+		my $old = $usage->{cpu};
+		my $new = $param->{sockets} || $usage->{sockets};
+		$new *= ($param->{cores} || $usage->{cores});
+
+		if ($new > $old) {
+		    my $change = { cpu => $new - $old };
+		    PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
+		}
+	    } elsif (defined($param->{memory})) {
+		my $old = $usage->{mem};
+		my $new = PVE::QemuServer::Memory::get_current_memory($param->{memory})*1024*1024;
+
+		if ($new > $old) {
+		    my $change = { mem => $new - $old };
+		    PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
+		}
+	    }
+
 	    foreach my $opt (@delete) {
 		$modified->{$opt} = 1;
 		$conf = PVE::QemuConfig->load_config($vmid); # update/reload
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 93eaaec5..be937ec1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4617,6 +4617,13 @@ sub qemu_cpu_hotplug {
 	}
 
 	return;
+    } else {
+	my $changes = {
+	    absolute => 1,
+	    running => 1,
+	    cpu => $vcpus,
+	};
+	PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
     }
 
     my $currentrunningvcpus = mon_cmd($vmid, "query-cpus-fast");
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index f365f2d1..b27b8b2b 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -234,6 +234,12 @@ sub qemu_memory_hotplug {
     die "you cannot add more memory than max mem $MAX_MEM MB!\n" if $value > $MAX_MEM;
 
     if ($value > $memory) {
+	my $changes = {
+	    absolute => 1,
+	    running => 1,
+	    mem => $value,
+	};
+	PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
 
 	my $numa_hostmap;
 
-- 
2.39.2

* [pve-devel] [PATCH v2 qemu-server 5/6] start: handle pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (16 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 4/6] update/hotplug: " Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 6/6] rollback: " Fabian Grünbichler
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

if the start is not part of an incoming migration, check the VM against its
pool's run limit.
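The CPU count used for the run-limit check mirrors the patch's $cpu_count
computation: sockets times cores, clamped by vcpus if a lower value is set.
A standalone sketch (illustrative Python, not the Perl code):

```python
# Effective CPU count at VM start: sockets * cores, but a lower vcpus
# value wins, matching the $cpu_count logic in vm_start_nolock.
def run_cpu_count(conf):
    count = (conf.get("sockets") or 1) * (conf.get("cores") or 1)
    vcpus = conf.get("vcpus")
    if vcpus and vcpus < count:
        count = vcpus
    return count
```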

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 PVE/QemuServer.pm | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index be937ec1..78b6ff96 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5704,6 +5704,19 @@ sub vm_start_nolock {
 	    # But we do it above, so for now let's be consistent.
 	    $conf = PVE::QemuConfig->load_config($vmid); # update/reload
 	}
+
+	my $cpu_count = 1;
+	$cpu_count = $conf->{sockets} if $conf->{sockets};
+	$cpu_count *= ($conf->{cores} || 1);
+	$cpu_count = $conf->{vcpus} if $conf->{vcpus} && $conf->{vcpus} < $cpu_count;
+
+	my $changes = {
+	    absolute => 1,
+	    running => 1,
+	    mem => get_current_memory($conf->{memory}),
+	    cpu => $cpu_count,
+	};
+	PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
     }
 
     # override offline migrated volumes, conf is out of date still
-- 
2.39.2

* [pve-devel] [PATCH v2 qemu-server 6/6] rollback: handle pool limits
  2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
                   ` (17 preceding siblings ...)
  2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 5/6] start: " Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
  18 siblings, 0 replies; 20+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
  To: pve-devel

by checking the snapshot conf values as if the VM was newly created.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 PVE/API2/Qemu.pm | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index adbc6557..eca0ab79 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -5403,6 +5403,14 @@ __PACKAGE__->register_method({
 
 	my $realcmd = sub {
 	    PVE::Cluster::log_msg('info', $authuser, "rollback snapshot VM $vmid: $snapname");
+	    my $snap_conf = PVE::QemuConfig->load_config($vmid)->{snapshots}->{$snapname};
+	    my $snap_usage = PVE::QemuConfig->get_pool_usage($snap_conf);
+	    my $changes = {
+		absolute => 1,
+		mem => $snap_usage->{mem},
+		cpu => $snap_usage->{cpu},
+	    };
+	    PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
 	    PVE::QemuConfig->snapshot_rollback($vmid, $snapname);
 
 	    if ($param->{start} && !PVE::QemuServer::Helpers::vm_running_locally($vmid)) {
-- 
2.39.2

end of thread, other threads:[~2024-04-16 12:21 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 access-control 1/1] pools: define " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 1/7] config: add pool usage helper Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 2/7] status: add pool usage fields Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 3/7] create/restore/clone: handle pool limits Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 4/7] start: " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 5/7] hotplug: " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 6/7] rollback: " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 7/7] update: " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 guest-common 1/1] helpers: add pool limit/usage helpers Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 1/4] api: pools: add limits management Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 2/4] pvestatd: collect and broadcast pool usage Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 3/4] api: return pool usage when queried Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 4/4] ui: add pool limits and usage Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 1/6] config: add pool usage helper Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 2/6] vmstatus: add usage values for pool limits Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 3/6] create/restore/clone: handle " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 4/6] update/hotplug: " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 5/6] start: " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 6/6] rollback: " Fabian Grünbichler

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox