* [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits
@ 2024-04-16 12:20 Fabian Grünbichler
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
high level description:
VM/CT vmstatus returns new fields for configured and running "usage"
values; these are then broadcast by pvestatd on each node via KV.
guest-common gains helpers to check those limits.
the pool API returns limits and usage, and allows setting the limits.
qemu-server/pve-container check actions against those limits.
since the broadcast is async, there is always an opportunity to cheat by
racing against the broadcast. this is unavoidable unless we want to
serialize all usage-affecting tasks across the cluster.
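for reference, limits are expressed as a property string on the pool
(see the access-control patch), for example (made-up values):

    cpu-config=16,cpu-run=8,mem-config=32768,mem-run=16384

the mem values are in MB, the cpu values are (virtual) core counts.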
changelog since v1/RFC:
- GUI can edit limits now
- incorporated most feedback from Wolfgang and Dominik
potential follow-ups:
- disk limits/usage
- RRD/metrics support (or even switching entirely to RRD based
broadcasting instead of KV)
- timeout of broadcasted data if node goes offline/stops broadcasting
- better GUI
- ... ?
pve-access-control:
Fabian Grünbichler (1):
pools: define resource limits
src/PVE/AccessControl.pm | 42 +++++++++++++++++++++++++++++++++++++--
src/test/parser_writer.pl | 14 ++++++-------
2 files changed, 47 insertions(+), 9 deletions(-)
pve-container:
Fabian Grünbichler (7):
config: add pool usage helper
status: add pool usage fields
create/restore/clone: handle pool limits
start: handle pool limits
hotplug: handle pool limits
rollback: handle pool limits
update: handle pool limits
src/PVE/API2/LXC.pm | 25 +++++++++++++++++++
src/PVE/API2/LXC/Config.pm | 21 ++++++++++++++++
src/PVE/API2/LXC/Snapshot.pm | 7 ++++++
src/PVE/LXC.pm | 37 +++++++++++++++++++++++++++
src/PVE/LXC/Config.pm | 48 ++++++++++++++++++++++++++++++++++++
5 files changed, 138 insertions(+)
pve-guest-common:
Fabian Grünbichler (1):
helpers: add pool limit/usage helpers
src/PVE/GuestHelpers.pm | 183 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 183 insertions(+)
pve-manager:
Fabian Grünbichler (4):
api: pools: add limits management
pvestatd: collect and broadcast pool usage
api: return pool usage when queried
ui: add pool limits and usage
PVE/API2/Pool.pm | 65 ++++++++++++++-
PVE/Service/pvestatd.pm | 59 ++++++++++++-
www/manager6/pool/StatusView.js | 141 +++++++++++++++++++++++++++++++-
3 files changed, 256 insertions(+), 9 deletions(-)
qemu-server:
Fabian Grünbichler (6):
config: add pool usage helper
vmstatus: add usage values for pool limits
create/restore/clone: handle pool limits
update/hotplug: handle pool limits
start: handle pool limits
rollback: handle pool limits
PVE/API2/Qemu.pm | 54 ++++++++++++++++++++++++++++++++++++++++
PVE/QemuConfig.pm | 30 ++++++++++++++++++++++
PVE/QemuServer.pm | 49 ++++++++++++++++++++++++++++++++++++
PVE/QemuServer/Memory.pm | 6 +++++
4 files changed, 139 insertions(+)
* [pve-devel] [PATCH v2 access-control 1/1] pools: define resource limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
and handle them when parsing/writing user.cfg
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
- make limit schema public for pve-guest-common usage
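a pool entry in user.cfg with limits set then looks like this
(illustrative values):

    pool:mypool:dev:100,101:local-zfs:cpu-run=8,mem-run=16384: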
src/PVE/AccessControl.pm | 42 +++++++++++++++++++++++++++++++++++++--
src/test/parser_writer.pl | 14 ++++++-------
2 files changed, 47 insertions(+), 9 deletions(-)
diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
index 21f93ff..f1863c8 100644
--- a/src/PVE/AccessControl.pm
+++ b/src/PVE/AccessControl.pm
@@ -72,6 +72,36 @@ sub pve_verify_realm {
PVE::Auth::Plugin::pve_verify_realm(@_);
}
+our $pool_limits_desc = {
+ "mem-config" => {
+ type => 'integer',
+ description => "Sum of memory (in MB) guests in this pools can be configured with.",
+ optional => 1,
+ },
+ "mem-run" => {
+ type => 'integer',
+ description => "Sum of memory (in MB) guests in this pools can be started with.",
+ optional => 1,
+ },
+ "cpu-config" => {
+ type => 'integer',
+ description => "Sum of (virtual) cores guests in this pools can be configured with.",
+ optional => 1,
+ },
+ "cpu-run" => {
+ type => 'integer',
+ description => "Sum of (virtual) cores guests in this pools can be started with.",
+ optional => 1,
+ },
+};
+
+PVE::JSONSchema::register_format('pve-pool-limits', $pool_limits_desc);
+PVE::JSONSchema::register_standard_option('pve-pool-limits', {
+ type => 'string',
+ format => $pool_limits_desc,
+ optional => 1,
+});
+
# Locking both config files together is only ever allowed in one order:
# 1) tfa config
# 2) user config
@@ -1524,7 +1554,7 @@ sub parse_user_config {
warn "user config - ignore invalid path in acl '$pathtxt'\n";
}
} elsif ($et eq 'pool') {
- my ($pool, $comment, $vmlist, $storelist) = @data;
+ my ($pool, $comment, $vmlist, $storelist, $limits) = @data;
if (!verify_poolname($pool, 1)) {
warn "user config - ignore pool '$pool' - invalid characters in pool name\n";
@@ -1575,6 +1605,13 @@ sub parse_user_config {
}
$cfg->{pools}->{$pool}->{storage}->{$storeid} = 1;
}
+
+ if ($limits) {
+ my $parsed_limits = eval { PVE::JSONSchema::parse_property_string($pool_limits_desc, $limits) };
+ warn "Failed to parse pool limits for '$pool' - $@\n" if $@;
+
+ $cfg->{pools}->{$pool}->{limits} = $parsed_limits;
+ }
} elsif ($et eq 'token') {
my ($tokenid, $expire, $privsep, $comment) = @data;
@@ -1656,7 +1693,8 @@ sub write_user_config {
my $vmlist = join (',', sort keys %{$d->{vms}});
my $storelist = join (',', sort keys %{$d->{storage}});
my $comment = $d->{comment} ? PVE::Tools::encode_text($d->{comment}) : '';
- $data .= "pool:$pool:$comment:$vmlist:$storelist:\n";
+ my $limits = $d->{limits} ? PVE::JSONSchema::print_property_string($d->{limits}, $pool_limits_desc) : '';
+ $data .= "pool:$pool:$comment:$vmlist:$storelist:$limits:\n";
}
$data .= "\n";
diff --git a/src/test/parser_writer.pl b/src/test/parser_writer.pl
index 80c346b..2e6eb61 100755
--- a/src/test/parser_writer.pl
+++ b/src/test/parser_writer.pl
@@ -431,12 +431,12 @@ my $default_raw = {
'test_role_privs_invalid' => 'role:testrole:VM.Invalid,Datastore.Audit,VM.Allocate:',
},
pools => {
- 'test_pool_empty' => 'pool:testpool::::',
- 'test_pool_invalid' => 'pool:testpool::non-numeric:inval!d:',
- 'test_pool_members' => 'pool:testpool::123,1234:local,local-zfs:',
- 'test_pool_duplicate_vms' => 'pool:test_duplicate_vms::123,1234::',
- 'test_pool_duplicate_vms_expected' => 'pool:test_duplicate_vms::::',
- 'test_pool_duplicate_storages' => 'pool:test_duplicate_storages:::local,local-zfs:',
+ 'test_pool_empty' => 'pool:testpool:::::',
+ 'test_pool_invalid' => 'pool:testpool::non-numeric:inval!d::',
+ 'test_pool_members' => 'pool:testpool::123,1234:local,local-zfs::',
+ 'test_pool_duplicate_vms' => 'pool:test_duplicate_vms::123,1234:::',
+ 'test_pool_duplicate_vms_expected' => 'pool:test_duplicate_vms:::::',
+ 'test_pool_duplicate_storages' => 'pool:test_duplicate_storages:::local,local-zfs::',
},
acl => {
'acl_simple_user' => 'acl:1:/:test@pam:PVEVMAdmin:',
@@ -1018,7 +1018,7 @@ my $tests = [
'user:test@pam:0:0::::::'."\n".
'token:test@pam!test:0:0::'."\n\n".
'group:testgroup:::'."\n\n".
- 'pool:testpool::::'."\n\n".
+ 'pool:testpool:::::'."\n\n".
'role:testrole::'."\n\n",
},
];
--
2.39.2
* [pve-devel] [PATCH v2 container 1/7] config: add pool usage helper
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
to avoid repeating those calculations all over the place.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
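a worked example with made-up values: cores=2 with a pending cores=4
yields cpu = 4; memory=1024 and swap=512 yield
mem = (1024 + 512) * 1024 * 1024 bytes. a CT without a cores setting
falls back to the host's core count.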
src/PVE/LXC/Config.pm | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 1664a35..a6baccd 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -11,6 +11,7 @@ use PVE::DataCenterConfig;
use PVE::GuestHelpers;
use PVE::INotify;
use PVE::JSONSchema qw(get_standard_option);
+use PVE::ProcFSTools;
use PVE::Tools;
use PVE::LXC;
@@ -1886,4 +1887,38 @@ sub foreach_passthrough_device {
}
}
+my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
+
+# for determining pool usage vs limits
+#
+# this gives the higher of pending or currently configured
+sub get_pool_usage {
+ my ($class, $conf) = @_;
+
+ my $usage = {};
+
+ my $get_max = sub {
+ my $max = 0;
+
+ for my $curr (@_) {
+ $max = $curr if defined($curr) && $curr > $max;
+ }
+
+ return $max;
+ };
+
+ $usage->{cpu} = $get_max->(
+ $conf->{pending}->{cores},
+ $conf->{cores},
+ );
+ $usage->{cpu} = $cpuinfo->{cpus} if !$usage->{cpu};
+
+ my $swap = $get_max->($conf->{pending}->{swap}, $conf->{swap});
+ my $mem = $get_max->($conf->{pending}->{memory}, $conf->{memory}, 512);
+ $usage->{mem} = $mem+$swap;
+ $usage->{mem} *= 1024*1024;
+
+ return $usage;
+}
+
1;
--
2.39.2
* [pve-devel] [PATCH v2 container 2/7] status: add pool usage fields
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
these are similar to existing ones, but with slightly different semantics.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/PVE/LXC.pm | 29 +++++++++++++++++++++++++++++
1 file changed, 29 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index e688ea6..9b03a97 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -138,6 +138,18 @@ our $vmstatus_return_properties = {
optional => 1,
renderer => 'bytes',
},
+ confmem => {
+ description => "Configured amount of memory (inc. swap), might be higher than 'maxmem'.",
+ type => 'integer',
+ optional => 1,
+ renderer => 'bytes',
+ },
+ runmem => {
+ description => "Currently configured amount of memory (inc. swap).",
+ type => 'integer',
+ optional => 1,
+ renderer => 'bytes',
+ },
maxdisk => {
description => "Root disk size in bytes.",
type => 'integer',
@@ -160,6 +172,16 @@ our $vmstatus_return_properties = {
type => 'number',
optional => 1,
},
+ confcpus => {
+ description => "Configured amount of CPUs, might be higher than 'cpus'.",
+ type => 'integer',
+ optional => 1,
+ },
+ runcpus => {
+ description => "Currently used amount of CPUs.",
+ type => 'integer',
+ optional => 1,
+ },
lock => {
description => "The current config lock, if any.",
type => 'string',
@@ -200,6 +222,7 @@ sub vmstatus {
my $conf = PVE::Cluster::cfs_read_file($cfspath) || {};
$unprivileged->{$vmid} = $conf->{unprivileged};
+ my $usage = PVE::LXC::Config->get_pool_usage($conf);
$d->{name} = $conf->{'hostname'} || "CT$vmid";
$d->{name} =~ s/[\s]//g;
@@ -207,6 +230,9 @@ sub vmstatus {
$d->{cpus} = $conf->{cores} || $conf->{cpulimit};
$d->{cpus} = $cpucount if !$d->{cpus};
+ $d->{confcpus} = $usage->{cpu};
+ $d->{runcpus} = $conf->{cores} || $cpucount;
+
$d->{tags} = $conf->{tags} if defined($conf->{tags});
if ($d->{pid}) {
@@ -229,6 +255,9 @@ sub vmstatus {
$d->{maxmem} = ($conf->{memory}||512)*1024*1024;
$d->{maxswap} = ($conf->{swap}//0)*1024*1024;
+ $d->{confmem} = $usage->{mem};
+ $d->{runmem} = $d->{maxmem} + $d->{maxswap};
+
$d->{uptime} = 0;
$d->{cpu} = 0;
--
2.39.2
* [pve-devel] [PATCH v2 container 3/7] create/restore/clone: handle pool limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
as early as possible, to avoid big cleanups because of limit exhaustion.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/PVE/API2/LXC.pm | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index fd42ccf..80bac3d 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -4,6 +4,7 @@ use strict;
use warnings;
use IO::Socket::UNIX;
+use Storable qw(dclone);
use Socket qw(SOCK_STREAM);
use PVE::SafeSyslog;
@@ -51,6 +52,23 @@ my $check_storage_access_migrate = sub {
if !$scfg->{content}->{rootdir};
};
+my $check_pool_limits_create = sub {
+ my ($vmid, $conf, $running, $pool) = @_;
+ if ($pool) {
+ my $usage = PVE::LXC::Config->get_pool_usage($conf);
+ my $changes = {
+ cpu => $usage->{cpu},
+ mem => $usage->{mem},
+ absolute => 1, # in case this is an in-place restore
+ };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes, $pool);
+ if ($running) {
+ $changes->{running} = 1;
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes, $pool);
+ }
+ }
+};
+
__PACKAGE__->register_method ({
subclass => "PVE::API2::LXC::Config",
path => '{vmid}/config',
@@ -412,6 +430,12 @@ __PACKAGE__->register_method({
if ($old_conf->{unprivileged} && !$conf->{unprivileged}) {
$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Allocate']);
}
+
+ my $merged = dclone($conf);
+ PVE::LXC::Create::sanitize_and_merge_config($merged, $orig_conf, 0, 0);
+ $check_pool_limits_create->($vmid, $merged, 0, $pool);
+ } else {
+ $check_pool_limits_create->($vmid, $conf, 0, $pool);
}
}
if ($storage_only_mode) {
@@ -1748,6 +1772,7 @@ __PACKAGE__->register_method({
# Replace the 'disk' lock with a 'create' lock.
$newconf->{lock} = 'create';
+ $check_pool_limits_create->($newid, $newconf, 0, $pool);
# delete all snapshot related config options
delete $newconf->@{qw(snapshots parent snaptime snapstate)};
--
2.39.2
* [pve-devel] [PATCH v2 container 4/7] start: handle pool limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/PVE/LXC.pm | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 9b03a97..1856b6a 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -2577,6 +2577,14 @@ sub vm_start {
update_lxc_config($vmid, $conf);
+ my $usage = PVE::LXC::Config->get_pool_usage($conf);
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, {
+ absolute => 1,
+ running => 1,
+ mem => $usage->{mem},
+ cpu => $usage->{cpu},
+ });
+
eval {
my ($id_map, undef, undef) = PVE::LXC::parse_id_maps($conf);
PVE::LXC::validate_id_maps($id_map);
--
2.39.2
* [pve-devel] [PATCH v2 container 5/7] hotplug: handle pool limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
by checking the new values against the running limits.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/PVE/LXC/Config.pm | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index a6baccd..248100e 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1434,6 +1434,13 @@ sub vmconfig_hotplug_pending {
foreach my $opt (sort keys %{$conf->{pending}}) { # add/change
next if $selection && !$selection->{$opt};
+ if ($opt eq 'cores') {
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, {
+ running => 1,
+ absolute => 1,
+ cpu => $conf->{pending}->{cores},
+ })
+ }
if ($LXC_FASTPLUG_OPTIONS->{$opt}) {
$conf->{$opt} = delete $conf->{pending}->{$opt};
}
@@ -1447,6 +1454,12 @@ sub vmconfig_hotplug_pending {
my $hotplug_memory = sub {
my ($new_memory, $new_swap) = @_;
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, {
+ running => 1,
+ absolute => 1,
+ mem => ($new_memory + ($new_swap // 0))*1024*1024,
+ });
+
($new_memory, my $new_memory_high) = calculate_memory_constraints($new_memory);
$new_swap = int($new_swap * 1024 * 1024) if defined($new_swap);
$cgroup->change_memory_limit($new_memory, $new_swap, $new_memory_high);
--
2.39.2
* [pve-devel] [PATCH v2 container 6/7] rollback: handle pool limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
by checking the snapshot conf values as if the CT was newly created.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/PVE/API2/LXC/Snapshot.pm | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/src/PVE/API2/LXC/Snapshot.pm b/src/PVE/API2/LXC/Snapshot.pm
index 0999fbc..37a02a6 100644
--- a/src/PVE/API2/LXC/Snapshot.pm
+++ b/src/PVE/API2/LXC/Snapshot.pm
@@ -299,6 +299,13 @@ __PACKAGE__->register_method({
my $realcmd = sub {
PVE::Cluster::log_msg('info', $authuser, "rollback snapshot LXC $vmid: $snapname");
+ my $snap_conf = PVE::LXC::Config->load_config($vmid)->{snapshots}->{$snapname};
+ my $snap_usage = PVE::LXC::Config->get_pool_usage($snap_conf);
+ my $changes = {
+ absolute => 1,
+ mem => $snap_usage->{mem},
+ cpu => $snap_usage->{cpu},
+ };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
PVE::LXC::Config->snapshot_rollback($vmid, $snapname);
if ($param->{start}) {
--
2.39.2
* [pve-devel] [PATCH v2 container 7/7] update: handle pool limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
v2:
- don't multiply mem+swap, but add them up (thanks Dominik)
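a worked example with made-up values: with memory=1024 and swap=512
configured, updating to memory=2048 amounts to a relative change of
{ mem => 1024 * 1024 * 1024 }, i.e. only the increase is checked;
lowering or keeping the values never triggers a limit error.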
src/PVE/API2/LXC/Config.pm | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm
index e6c0980..15fab61 100644
--- a/src/PVE/API2/LXC/Config.pm
+++ b/src/PVE/API2/LXC/Config.pm
@@ -208,6 +208,27 @@ __PACKAGE__->register_method({
my $running = PVE::LXC::check_running($vmid);
+ my $usage = PVE::LXC::Config->get_pool_usage($conf);
+ if (defined($param->{memory}) || defined($param->{swap})) {
+ my $old = $usage->{mem} / (1024 * 1024);
+ my $new = $param->{memory} || $conf->{memory} || 512;
+ $new += ($param->{swap} // $conf->{swap} // 0);
+
+ if ($new > $old) {
+ my $change = { mem => ($new - $old) * 1024 * 1024 };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
+ }
+ }
+ if (defined($param->{cores})) {
+ my $old = $usage->{cpu};
+ my $new = $param->{cores};
+
+ if ($new > $old) {
+ my $change = { cpu => ($new - $old) };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
+ }
+ }
+
my $errors = PVE::LXC::Config->update_pct_config($vmid, $conf, $running, $param, \@delete, \@revert);
PVE::LXC::Config->write_config($vmid, $conf);
$conf = PVE::LXC::Config->load_config($vmid);
--
2.39.2
* [pve-devel] [PATCH v2 guest-common 1/1] helpers: add pool limit/usage helpers
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
one for combining the per-node broadcasted values, one for checking a pool's
limit, and one specific helper for checking guest-related actions such as
starting a VM.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
v2:
- style
- introduce new helper for mapping limit key to usage hash
- introduce new helper for default usage hash
- avoid hard-coding cpu/mem and run/config where sensible
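an example of the resulting error message (made-up numbers): starting a
guest that would push a pool over its 'mem-run' limit of 16384 MB fails
with

    pool limits exhausted: (run) mem: 16384 + 4096 over 16384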
src/PVE/GuestHelpers.pm | 183 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 183 insertions(+)
diff --git a/src/PVE/GuestHelpers.pm b/src/PVE/GuestHelpers.pm
index 961a7b8..e52eaf0 100644
--- a/src/PVE/GuestHelpers.pm
+++ b/src/PVE/GuestHelpers.pm
@@ -416,4 +416,187 @@ sub check_vnet_access {
if !($tag || $trunks);
}
+sub pool_limit_to_usage {
+ my ($limit_key) = @_;
+
+ my ($resource, $kind) = split(/-/, $limit_key, 2);
+
+ return ($resource, $kind, $kind eq 'run' ? 1 : 0);
+}
+
+sub pool_default_usage {
+ my $default = {};
+
+ for my $limit (keys $PVE::AccessControl::pool_limits_desc->%*) {
+ my ($resource, $kind) = pool_limit_to_usage($limit);
+ $default->{$resource}->{$kind} = 0;
+ }
+
+ return $default;
+}
+
+# combines the broadcasted pool usage information to get per-pool stats
+#
+# $pools parsed pool info from user.cfg
+# $usage broadcasted KV hash
+# $pool filter for specific pool
+# $skip skip a certain guest to ignore its current usage
+#
+# returns usage hash:
+# pool -> cpu/mem/.. -> run/config -> $usage
+sub get_pool_usage {
+ my ($pools, $usage, $pool, $skip) = @_;
+
+ my $res = {};
+ my $included_guests = {};
+ for my $node (keys $usage->%*) {
+ my $node_usage = eval { JSON::decode_json($usage->{$node}) } // {};
+
+ # long IDs first, so we can add children to their parents right away
+ for my $poolid (sort {$b cmp $a} keys $pools->%*) {
+ if (
+ defined($pool)
+ && !($pool eq $poolid || $poolid =~ m!^$pool/! || $pool =~ m!^$poolid/!)
+ ) {
+ next;
+ }
+
+ my $d = $res->{$poolid} //= pool_default_usage();
+
+ my $pool_usage = $node_usage->{data}->{$poolid} // {};
+ for my $vmid (keys $pool_usage->%*) {
+ # only include once in case of migration between broadcast
+ next if $included_guests->{$vmid};
+ next if $skip && $skip->{$vmid};
+ $included_guests->{$vmid} = 1;
+
+ my $vm_data = $pool_usage->{$vmid};
+ for my $key (keys $vm_data->%*) {
+ next if $key eq 'running';
+ $d->{$key}->{run} += $vm_data->{$key}->{run} if $vm_data->{running};
+ $d->{$key}->{config} += $vm_data->{$key}->{config};
+ }
+ }
+
+ if (my $parent = $pools->{$poolid}->{parent}) {
+ $res->{$parent} //= pool_default_usage();
+ for my $key (keys $d->%*) {
+ for my $kind (keys $d->{$key}->%*) {
+ $res->{$parent}->{$key}->{$kind} = $d->{$key}->{$kind};
+ }
+ }
+ }
+ }
+ }
+
+ return $res;
+}
+
+# checks whether a pool is (or would be) over its resource limits
+#
+# $changes is for checking limits for config/state changes like VM starts, if
+# set, only the limits with changes are checked (see check_guest_pool_limit)
+#
+# return value indicates whether any limit was overstepped or not (if $noerr is set)
+sub check_pool_limits {
+ my ($usage, $limits, $noerr, $changes) = @_;
+
+ my $over = {};
+ my $only_changed = defined($changes);
+
+ my $check_limit = sub {
+ my ($key, $running, $limit, $change) = @_;
+
+ return if $only_changed && $change == 0;
+
+ my $kind = $running ? 'run' : 'config';
+
+ my $value = $usage->{$key}->{$kind};
+ $value = int($value);
+ $value += $change;
+ $value = $value / (1024*1024) if $key eq 'mem';
+ if ($limit < $value) {
+ $over->{$key}->{$kind}->{change} = $change if $change;
+ $over->{$key}->{$kind}->{over} = 1;
+ }
+ };
+
+ my $get_change = sub {
+ my ($key, $running) = @_;
+
+ return 0 if !defined($changes);
+
+ my $check_running = defined($changes->{running}) && $changes->{running} ? 1 : 0;
+
+ if ($running == $check_running) {
+ return $changes->{$key} // 0;
+ } else {
+ return 0;
+ }
+ };
+
+ while (my ($key, $limit) = each $limits->%*) {
+ my ($resource, $kind, $running) = pool_limit_to_usage($key);
+ my $change = $get_change->($resource, $running);
+ $check_limit->($resource, $running, $limit, $change);
+ }
+
+ if (!$noerr) {
+ my $msg = '';
+ for my $key (keys $over->%*) {
+ for my $kind (keys $over->{$key}->%*) {
+ my $value = $usage->{$key}->{$kind};
+ $value = $value / (1024*1024) if $key eq 'mem';
+ my $change = $over->{$key}->{$kind}->{change};
+ if ($change) {
+ $change = $change / (1024*1024) if $key eq 'mem';
+ $value = "$value + $change" if $change;
+ }
+ my $limit = $limits->{"$key-$kind"};
+ $msg .= "($kind) $key: $value over $limit, ";
+ }
+ }
+ if ($msg) {
+ $msg =~ s/, $//;
+ die "pool limits exhausted: $msg\n";
+ }
+ }
+
+ return $over->%* ? 1 : 0;
+}
+
+# checks whether the given changes for a certain guest would overstep a pool limit
+#
+# $changes is an optional hash containing
+# - absolute: flag whether changes are relative or absolute
+# - running: flag whether the config or running limits should be checked
+# - cpu: change value for cpu limit
+# - mem: change value for mem limit
+# all elements are optional
+#
+# if no $changes is provided, the limits are checked against the current usage
+#
+# $poolid allows overriding the guest's pool membership, for example in case it
+# is not yet properly set when creating the guest
+sub check_guest_pool_limit {
+ my ($vmid, $changes, $poolid) = @_;
+
+ my $user_cfg = PVE::Cluster::cfs_read_file("user.cfg");
+
+ $poolid = $user_cfg->{vms}->{$vmid} if !defined($poolid);
+ if ($poolid) {
+ my $pool = $user_cfg->{pools}->{$poolid};
+
+ my $limits = $pool->{limits};
+ return if !$limits;
+
+ my $skip = {};
+ $skip->{$vmid} = 1 if $changes && $changes->{absolute};
+ my $usage = PVE::Cluster::get_node_kv('pool-usage');
+
+ $usage = get_pool_usage($user_cfg->{pools}, $usage, $poolid, $skip);
+ check_pool_limits($usage->{$poolid}, $limits, 0, $changes);
+ }
+}
+
1;
--
2.39.2
* [pve-devel] [PATCH v2 manager 1/4] api: pools: add limits management
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
allow setting/updating limits, and return them when querying individual pools.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
requires bumped pve-access-control
v2:
- unify run vs config limit checks into helper
- avoid hard-coding resource kinds
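setting limits via the CLI might then look like this (illustrative
values, using pvesh against the endpoint below):

    pvesh set /pools --poolid mypool --limits cpu-run=8,mem-config=32768

passing '0' for a specific limit removes it again.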
PVE/API2/Pool.pm | 50 ++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 46 insertions(+), 4 deletions(-)
diff --git a/PVE/API2/Pool.pm b/PVE/API2/Pool.pm
index 54e744558..031f0160f 100644
--- a/PVE/API2/Pool.pm
+++ b/PVE/API2/Pool.pm
@@ -7,6 +7,7 @@ use PVE::AccessControl;
use PVE::Cluster qw (cfs_read_file cfs_write_file);
use PVE::Exception qw(raise_param_exc);
use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string print_property_string);
use PVE::Storage;
use PVE::SafeSyslog;
@@ -47,10 +48,7 @@ __PACKAGE__->register_method ({
type => "object",
properties => {
poolid => { type => 'string' },
- comment => {
- type => 'string',
- optional => 1,
- },
+ comment => { type => 'string', optional => 1 },
members => {
type => 'array',
optional => 1,
@@ -79,6 +77,7 @@ __PACKAGE__->register_method ({
},
},
},
+ limits => { type => 'string', optional => 1 },
},
},
links => [ { rel => 'child', href => "{poolid}" } ],
@@ -136,6 +135,8 @@ __PACKAGE__->register_method ({
members => $members,
};
$pool_info->{comment} = $pool_config->{comment} if defined($pool_config->{comment});
+ $pool_info->{limits} = print_property_string($pool_config->{limits}, 'pve-pool-limits')
+ if defined($pool_config->{limits});
$pool_info->{poolid} = $poolid;
push @$res, $pool_info;
@@ -153,6 +154,26 @@ __PACKAGE__->register_method ({
return $res;
}});
+my $check_run_vs_config_limits = sub {
+ my ($limits) = @_;
+
+ my $exception;
+
+ for my $resource (keys $limits->%*) {
+ next if $resource !~ m/-run$/;
+
+ my $config = $resource;
+ $config =~ s/-run$/-config/;
+
+ if (defined($limits->{$config}) && $limits->{$config} < $limits->{$resource}) {
+ my $msg = "run limit must be below config limit!";
+ $exception->{$resource} = $msg;
+ }
+ }
+
+ raise_param_exc($exception) if $exception;
+};
+
__PACKAGE__->register_method ({
name => 'create_pool',
protected => 1,
@@ -173,6 +194,7 @@ __PACKAGE__->register_method ({
type => 'string',
optional => 1,
},
+ limits => get_standard_option('pve-pool-limits'),
},
},
returns => { type => 'null' },
@@ -196,6 +218,10 @@ __PACKAGE__->register_method ({
};
$usercfg->{pools}->{$pool}->{comment} = $param->{comment} if $param->{comment};
+ if (my $limits = parse_property_string('pve-pool-limits', $param->{limits} // '')) {
+ $check_run_vs_config_limits->($limits);
+ $usercfg->{pools}->{$pool}->{limits} = $limits;
+ }
cfs_write_file("user.cfg", $usercfg);
}, "create pool failed");
@@ -288,6 +314,9 @@ __PACKAGE__->register_method ({
optional => 1,
default => 0,
},
+ limits => get_standard_option("pve-pool-limits", {
+ description => "Update pool limits. Passing '0' for a specific limit will remove any currently configured value.",
+ }),
},
},
returns => { type => 'null' },
@@ -346,6 +375,18 @@ __PACKAGE__->register_method ({
}
}
+ if (my $update = parse_property_string('pve-pool-limits', $param->{limits} // '')) {
+ my $limits = $pool_config->{limits};
+ for my $kind (sort keys $update->%*) {
+ if ($update->{$kind} == 0) {
+ delete $limits->{$kind};
+ } else {
+ $limits->{$kind} = $update->{$kind};
+ }
+ }
+ $check_run_vs_config_limits->($limits);
+ }
+
cfs_write_file("user.cfg", $usercfg);
}, "update pools failed");
@@ -409,6 +450,7 @@ __PACKAGE__->register_method ({
},
},
},
+ limits => get_standard_option("pve-pool-limits"),
},
},
code => sub {
--
2.39.2
* [pve-devel] [PATCH v2 manager 2/4] pvestatd: collect and broadcast pool usage
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
so that other nodes can query it, both to block changes that would
violate the limits and to mark pools that currently violate them.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
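the value broadcast per node is a JSON blob roughly shaped like this
(illustrative values):

    { "timestamp": 1713270000,
      "data": {
        "mypool": {
          "100": {
            "running": 1,
            "cpu": { "config": 4, "run": 2 },
            "mem": { "config": 4294967296, "run": 2147483648 }
          }
        }
      }
    }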
PVE/Service/pvestatd.pm | 59 ++++++++++++++++++++++++++++++++++++++---
1 file changed, 55 insertions(+), 4 deletions(-)
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index 2515120c6..d7e4755e9 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -231,7 +231,7 @@ sub auto_balloning {
}
sub update_qemu_status {
- my ($status_cfg) = @_;
+ my ($status_cfg, $pool_membership, $pool_usage) = @_;
my $ctime = time();
my $vmstatus = PVE::QemuServer::vmstatus(undef, 1);
@@ -242,6 +242,21 @@ sub update_qemu_status {
my $transactions = PVE::ExtMetric::transactions_start($status_cfg);
foreach my $vmid (keys %$vmstatus) {
my $d = $vmstatus->{$vmid};
+
+ if (my $pool = $pool_membership->{$vmid}) {
+ $pool_usage->{$pool}->{$vmid} = {
+ cpu => {
+ config => ($d->{confcpus} // 0),
+ run => ($d->{runcpus} // 0),
+ },
+ mem => {
+ config => ($d->{confmem} // 0),
+ run => ($d->{runmem} // 0),
+ },
+ running => $d->{pid} ? 1 : 0,
+ };
+ }
+
my $data;
my $status = $d->{qmpstatus} || $d->{status} || 'stopped';
my $template = $d->{template} ? $d->{template} : "0";
@@ -263,6 +278,17 @@ sub update_qemu_status {
PVE::ExtMetric::transactions_finish($transactions);
}
+sub update_pool_usage {
+ my ($usage) = @_;
+
+ my $ctime = time();
+
+ # TODO? RRD and ExtMetric support here?
+
+ my $new = { data => $usage, timestamp => $ctime };
+ PVE::Cluster::broadcast_node_kv('pool-usage', encode_json($new));
+}
+
sub remove_stale_lxc_consoles {
my $vmstatus = PVE::LXC::vmstatus();
@@ -440,7 +466,7 @@ sub rebalance_lxc_containers {
}
sub update_lxc_status {
- my ($status_cfg) = @_;
+ my ($status_cfg, $pool_membership, $pool_usage) = @_;
my $ctime = time();
my $vmstatus = PVE::LXC::vmstatus();
@@ -449,6 +475,21 @@ sub update_lxc_status {
foreach my $vmid (keys %$vmstatus) {
my $d = $vmstatus->{$vmid};
+
+ if (my $pool = $pool_membership->{$vmid}) {
+ $pool_usage->{$pool}->{$vmid} = {
+ cpu => {
+ config => ($d->{confcpus} // 0),
+ run => ($d->{runcpus} // 0),
+ },
+ mem => {
+ config => ($d->{confmem} // 0),
+ run => ($d->{runmem} // 0),
+ },
+ running => $d->{status} eq 'running' ? 1 : 0,
+ };
+ }
+
my $template = $d->{template} ? $d->{template} : "0";
my $data;
if ($d->{status} eq 'running') { # running
@@ -540,6 +581,10 @@ sub update_status {
syslog('err', $err) if $err;
my $status_cfg = PVE::Cluster::cfs_read_file('status.cfg');
+ my $user_cfg = PVE::Cluster::cfs_read_file('user.cfg');
+ my $pools = $user_cfg->{pools};
+ my $pool_membership = $user_cfg->{vms};
+ my $pool_usage = {};
eval {
update_node_status($status_cfg);
@@ -548,17 +593,23 @@ sub update_status {
syslog('err', "node status update error: $err") if $err;
eval {
- update_qemu_status($status_cfg);
+ update_qemu_status($status_cfg, $pool_membership, $pool_usage);
};
$err = $@;
syslog('err', "qemu status update error: $err") if $err;
eval {
- update_lxc_status($status_cfg);
+ update_lxc_status($status_cfg, $pool_membership, $pool_usage);
};
$err = $@;
syslog('err', "lxc status update error: $err") if $err;
+ eval {
+ update_pool_usage($pool_usage);
+ };
+ $err = $@;
+ syslog('err', "pool usage status update error: $err") if $err;
+
eval {
rebalance_lxc_containers();
};
--
2.39.2
* [pve-devel] [PATCH v2 manager 3/4] api: return pool usage when queried
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
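the new 'usage' member contains the aggregated values from
PVE::GuestHelpers::get_pool_usage, e.g. (illustrative values; 'over' is
only set for pools with configured limits):

    { "cpu": { "config": 8, "run": 4 },
      "mem": { "config": 17179869184, "run": 8589934592 },
      "over": 0 }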
PVE/API2/Pool.pm | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Pool.pm b/PVE/API2/Pool.pm
index 031f0160f..2d83ccc67 100644
--- a/PVE/API2/Pool.pm
+++ b/PVE/API2/Pool.pm
@@ -6,6 +6,7 @@ use warnings;
use PVE::AccessControl;
use PVE::Cluster qw (cfs_read_file cfs_write_file);
use PVE::Exception qw(raise_param_exc);
+use PVE::GuestHelpers;
use PVE::INotify;
use PVE::JSONSchema qw(get_standard_option parse_property_string print_property_string);
use PVE::Storage;
@@ -78,6 +79,7 @@ __PACKAGE__->register_method ({
},
},
limits => { type => 'string', optional => 1 },
+ usage => { type => 'object', optional => 1 },
},
},
links => [ { rel => 'child', href => "{poolid}" } ],
@@ -135,17 +137,30 @@ __PACKAGE__->register_method ({
members => $members,
};
$pool_info->{comment} = $pool_config->{comment} if defined($pool_config->{comment});
- $pool_info->{limits} = print_property_string($pool_config->{limits}, 'pve-pool-limits')
- if defined($pool_config->{limits});
$pool_info->{poolid} = $poolid;
+ my $usage = PVE::Cluster::get_node_kv('pool-usage');
+ $usage = PVE::GuestHelpers::get_pool_usage($usercfg->{pools}, $usage, $poolid);
+ if (my $limits = $pool_config->{limits}) {
+ $pool_info->{limits} = print_property_string($limits, 'pve-pool-limits');
+ $usage->{$poolid}->{over} = PVE::GuestHelpers::check_pool_limits($usage->{$poolid}, $limits, 1);
+ }
+ $pool_info->{usage} = $usage->{$poolid};
+
push @$res, $pool_info;
} else {
+ my $usage = PVE::Cluster::get_node_kv('pool-usage');
+ $usage = PVE::GuestHelpers::get_pool_usage($usercfg->{pools}, $usage);
for my $pool (sort keys %{$usercfg->{pools}}) {
next if !$rpcenv->check($authuser, "/pool/$pool", [ 'Pool.Audit' ], 1);
my $entry = { poolid => $pool };
my $pool_config = $usercfg->{pools}->{$pool};
+ if (my $limits = $pool_config->{limits}) {
+ $usage->{$pool}->{over} = PVE::GuestHelpers::check_pool_limits($usage->{$pool}, $limits, 1);
+ $entry->{limits} = print_property_string($limits, 'pve-pool-limits');
+ }
+ $entry->{usage} = $usage->{$pool};
$entry->{comment} = $pool_config->{comment} if defined($pool_config->{comment});
push @$res, $entry;
}
--
2.39.2
* [pve-devel] [PATCH v2 manager 4/4] ui: add pool limits and usage
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
this is very "bare", obviously we'd want
- a nicer grid/.. display of usage
- a way to edit the limits
I am not yet sure how to integrate this nicely, and wanted to get feedback on the rest first.
v2:
- fold in change I forgot to include in patch:
(== vs ===, ? 1 : 0 vs just using the comparison result)
add edit window
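the usage row currently renders the flattened usage hash, e.g.
(illustrative values, mem shown in raw bytes):

    cpu-config=8, cpu-run=4, mem-config=17179869184, mem-run=8589934592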
www/manager6/pool/StatusView.js | 141 +++++++++++++++++++++++++++++++-
1 file changed, 140 insertions(+), 1 deletion(-)
diff --git a/www/manager6/pool/StatusView.js b/www/manager6/pool/StatusView.js
index 3d46b3b1a..293ee1443 100644
--- a/www/manager6/pool/StatusView.js
+++ b/www/manager6/pool/StatusView.js
@@ -1,12 +1,103 @@
+var labelWidth = 200;
+
+Ext.define('PVE.pool.LimitInputPanel', {
+ extend: 'Proxmox.panel.InputPanel',
+ alias: 'widget.pvePoolLimitInputPanel',
+
+ onGetValues: function(values) {
+ let ret = PVE.Parser.printPropertyString(values, 'type');
+ if (ret === '') {
+ return { 'delete': 'limits' };
+ }
+ return { limits: ret };
+ },
+
+ items: [
+ {
+ xtype: 'proxmoxintegerfield',
+ name: 'mem-config',
+ minValue: 0,
+ value: '0',
+ step: 32,
+ fieldLabel: gettext('Memory Config Limit') + ' (MiB)',
+ labelWidth: labelWidth,
+ allowBlank: false,
+ },
+ {
+ xtype: 'proxmoxintegerfield',
+ name: 'mem-run',
+ minValue: 0,
+ value: '0',
+ step: 32,
+ fieldLabel: gettext('Memory Running Limit') + ' (MiB)',
+ labelWidth: labelWidth,
+ allowBlank: false,
+ },
+ {
+ xtype: 'proxmoxintegerfield',
+ name: 'cpu-config',
+ minValue: 0,
+ value: '0',
+ fieldLabel: gettext('CPU Config Limit'),
+ labelWidth: labelWidth,
+ allowBlank: false,
+ },
+ {
+ xtype: 'proxmoxintegerfield',
+ name: 'cpu-run',
+ minValue: 0,
+ value: '0',
+ fieldLabel: gettext('CPU Running Limit'),
+ labelWidth: labelWidth,
+ allowBlank: false,
+ },
+ ],
+});
+
+Ext.define('PVE.pool.EditLimits', {
+ extend: 'Proxmox.window.Edit',
+
+ width: 640,
+ height: 480,
+
+ initComponent: function() {
+ var me = this;
+
+ if (!me.pool) {
+ throw "no pool specified";
+ }
+
+ me.url = '/pools/';
+ me.method = 'PUT';
+ me.extraRequestParams = { poolid: me.pool };
+ me.autoLoad = true;
+
+ Ext.apply(me, {
+ subject: gettext('Pool Limits'),
+ items: Ext.create('PVE.pool.LimitInputPanel'),
+ });
+
+ me.callParent();
+
+ me.load({
+ success: function(response) {
+ me.poolconfig = response.result.data[0];
+ let limits = me.poolconfig.limits;
+ me.setValues(PVE.Parser.parsePropertyString(limits));
+ },
+ });
+ },
+});
+
Ext.define('PVE.pool.StatusView', {
extend: 'Proxmox.grid.ObjectGrid',
alias: ['widget.pvePoolStatusView'],
- disabled: true,
title: gettext('Status'),
cwidth1: 150,
interval: 30000,
//height: 195,
+
initComponent: function() {
var me = this;
@@ -15,17 +106,65 @@ Ext.define('PVE.pool.StatusView', {
throw "no pool specified";
}
+ var reload = function() {
+ me.rstore.load();
+ };
+
var rows = {
comment: {
header: gettext('Comment'),
renderer: Ext.String.htmlEncode,
required: true,
},
+ usage: {
+ header: gettext('Usage'),
+ required: false,
+ renderer: value => {
+ if (value === null) {
+ return '-';
+ } else {
+ let rendered = '';
+ let over = false;
+ for (const [first, second] of Object.entries(value)) {
+ if (first === 'over') {
+ over = second === 1;
+ } else {
+ for (const [kind, usage] of Object.entries(second)) {
+ if (rendered === '') {
+ rendered = `${first}-${kind}=${usage}`;
+ } else {
+ rendered = rendered + `, ${first}-${kind}=${usage}`;
+ }
+ }
+ }
+ }
+ if (over) {
+ rendered = rendered + "\nover limit";
+ }
+ return rendered;
+ }
+ },
+ },
+ limits: {
+ header: gettext('Resource Limits'),
+ required: false,
+ },
};
Ext.apply(me, {
url: "/api2/json/pools/?poolid=" + pool,
rows: rows,
+ tbar: [
+ {
+ text: gettext('Edit Limits'),
+ iconCls: 'pve-itype-icon-qemu',
+ handler: function() {
+ var win = Ext.create('PVE.pool.EditLimits', { pool: pool });
+ win.on('destroy', reload);
+ win.show();
+ },
+ },
+ ],
});
me.callParent();
--
2.39.2
* [pve-devel] [PATCH v2 qemu-server 1/6] config: add pool usage helper
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
determining the usage values for the current config. pending values are
only taken into account if they are higher than the current value,
since otherwise it would be easy to circumvent config limits by setting
non-hotpluggable pending values.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
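a worked example with made-up values: sockets=2, cores=4 and a pending
cores=8 yield cpu = 2 * 8 = 16; memory=4096 without pending changes
yields mem = 4096 * 1024 * 1024 bytes.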
PVE/QemuConfig.pm | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 8e8a7828..1410bf14 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -573,4 +573,34 @@ sub has_cloudinit {
return $found;
}
+# for determining pool usage vs limits
+#
+# this gives the higher of pending or currently configured
+sub get_pool_usage {
+ my ($class, $conf) = @_;
+
+ my $usage = {};
+
+ my $get_max = sub {
+ my $max = 0;
+
+ for my $curr (@_) {
+ $max = $curr if defined($curr) && $curr > $max;
+ }
+
+ return $max;
+ };
+
+ $usage->{sockets} = $get_max->($conf->{pending}->{sockets}, $conf->{sockets}, 1);
+ $usage->{cores} = $get_max->($conf->{pending}->{cores}, $conf->{cores}, 1);
+ $usage->{cpu} = $usage->{sockets} * $usage->{cores};
+ $usage->{mem} = $get_max->(
+ get_current_memory($conf->{pending}->{memory}),
+ get_current_memory($conf->{memory})
+ );
+ $usage->{mem} *= 1024*1024;
+
+ return $usage;
+}
+
1;
--
2.39.2
* [pve-devel] [PATCH v2 qemu-server 2/6] vmstatus: add usage values for pool limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
these are separate from the existing ones to allow changes on either end
without side-effects, since the semantics are not quite the same.
the conf values incorporate pending values (if higher than the current config
value), and avoid clamping.
the run values are currently identical to the regular ones.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
PVE/QemuServer.pm | 29 +++++++++++++++++++++++++++++
1 file changed, 29 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 661613df..93eaaec5 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2755,6 +2755,18 @@ our $vmstatus_return_properties = {
optional => 1,
renderer => 'bytes',
},
+ confmem => {
+ description => "Configured memory in bytes, might be higher than 'maxmem'",
+ type => 'integer',
+ optional => 1,
+ renderer => 'bytes',
+ },
+ runmem => {
+ description => "Currently plugged memory in bytes, if running.",
+ type => 'integer',
+ optional => 1,
+ renderer => 'bytes',
+ },
maxdisk => {
description => "Root disk size in bytes.",
type => 'integer',
@@ -2787,6 +2799,16 @@ our $vmstatus_return_properties = {
type => 'number',
optional => 1,
},
+ confcpus => {
+ description => "Configured amount of CPUs, might be higher than 'cpus'.",
+ type => 'number',
+ optional => 1,
+ },
+ runcpus => {
+ description => "Currently used amount of CPUs.",
+ type => 'number',
+ optional => 1,
+ },
lock => {
description => "The current config lock, if any.",
type => 'string',
@@ -2839,6 +2861,8 @@ sub vmstatus {
# fixme: better status?
$d->{status} = $list->{$vmid}->{pid} ? 'running' : 'stopped';
+ my $usage = PVE::QemuConfig->get_pool_usage($conf);
+
my $size = PVE::QemuServer::Drive::bootdisk_size($storecfg, $conf);
if (defined($size)) {
$d->{disk} = 0; # no info available
@@ -2850,11 +2874,16 @@ sub vmstatus {
$d->{cpus} = ($conf->{sockets} || $defaults->{sockets})
* ($conf->{cores} || $defaults->{cores});
+
$d->{cpus} = $cpucount if $d->{cpus} > $cpucount;
$d->{cpus} = $conf->{vcpus} if $conf->{vcpus};
+ $d->{confcpus} = $usage->{cpu};
+ $d->{runcpus} = $d->{cpus};
$d->{name} = $conf->{name} || "VM $vmid";
$d->{maxmem} = get_current_memory($conf->{memory})*(1024*1024);
+ $d->{confmem} = $usage->{mem};
+ $d->{runmem} = $d->{maxmem};
if ($conf->{balloon}) {
$d->{balloon_min} = $conf->{balloon}*(1024*1024);
--
2.39.2
* [pve-devel] [PATCH v2 qemu-server 3/6] create/restore/clone: handle pool limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
as early as possible, to avoid having to undo expensive work or
leaving a window for limit exhaustion.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
PVE/API2/Qemu.pm | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 78f2236c..f0ff785c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -229,6 +229,26 @@ my $check_storage_access_migrate = sub {
if !$scfg->{content}->{images};
};
+my $check_pool_limits_create = sub {
+ my ($vmid, $conf, $running, $pool) = @_;
+ if ($pool) {
+ my $usage = PVE::QemuConfig->get_pool_usage($conf);
+ my $changes = {
+ cpu => $usage->{cpu},
+ mem => $usage->{mem},
+ absolute => 1, # in case this is an in-place restore
+ };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes, $pool);
+ if ($running) {
+ $changes->{running} = 1;
+ $changes->{cpu} = $conf->{vcpus}
+ if $conf->{vcpus} && $conf->{vcpus} < $changes->{cpu};
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes, $pool);
+ }
+ }
+};
+
+
my $import_from_volid = sub {
my ($storecfg, $src_volid, $dest_info, $vollist) = @_;
@@ -1041,6 +1061,7 @@ __PACKAGE__->register_method({
warn "Skipping early checks!\n";
} else {
PVE::QemuServer::check_restore_permissions($rpcenv, $authuser, $merged);
+ $check_pool_limits_create->($vmid, $merged, $live_restore || $start_after_create, $pool);
}
}
if ($archive->{type} eq 'file' || $archive->{type} eq 'pipe') {
@@ -1085,6 +1106,7 @@ __PACKAGE__->register_method({
my $realcmd = sub {
my $conf = $param;
+ $check_pool_limits_create->($vmid, $conf, $live_restore || $start_after_create, $pool);
my $arch = PVE::QemuServer::get_vm_arch($conf);
for my $opt (sort keys $param->%*) {
@@ -3793,6 +3815,8 @@ __PACKAGE__->register_method({
my ($conffile, $newconf, $oldconf, $vollist, $drives, $fullclone) = $load_and_check->();
my $storecfg = PVE::Storage::config();
+ $check_pool_limits_create->($newid, $newconf, 0, $pool);
+
# auto generate a new uuid
my $smbios1 = PVE::QemuServer::parse_smbios1($newconf->{smbios1} || '');
$smbios1->{uuid} = PVE::QemuServer::generate_uuid();
--
2.39.2
* [pve-devel] [PATCH v2 qemu-server 4/6] update/hotplug: handle pool limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
if the new value is higher than the old one, check against limits. if the old
one is higher, then the change is always okay, to support reducing the usage in
steps spread over multiple guests.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
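a worked example with made-up values: going from sockets=2/cores=2 to
cores=4 with unchanged sockets is a relative change of { cpu => 4 }
(2*4 - 2*2); lowering cores instead skips the check entirely.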
PVE/API2/Qemu.pm | 22 ++++++++++++++++++++++
PVE/QemuServer.pm | 7 +++++++
PVE/QemuServer/Memory.pm | 6 ++++++
3 files changed, 35 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f0ff785c..adbc6557 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1863,6 +1863,28 @@ my $update_vm_api = sub {
}
};
+ # check pool limits, but only if values increase, ignoring
+ # deletions and pending values
+ my $usage = PVE::QemuConfig->get_pool_usage($conf);
+ if (defined($param->{sockets}) || defined($param->{cores})) {
+ my $old = $usage->{cpu};
+ my $new = $param->{sockets} || $usage->{sockets};
+ $new *= ($param->{cores} || $usage->{cores});
+
+ if ($new > $old) {
+ my $change = { cpu => $new - $old };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
+ }
+ } elsif (defined($param->{memory})) {
+ my $old = $usage->{mem};
+ my $new = PVE::QemuServer::Memory::get_current_memory($param->{memory})*1024*1024;
+
+ if ($new > $old) {
+ my $change = { mem => $new - $old };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
+ }
+ }
+
foreach my $opt (@delete) {
$modified->{$opt} = 1;
$conf = PVE::QemuConfig->load_config($vmid); # update/reload
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 93eaaec5..be937ec1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4617,6 +4617,13 @@ sub qemu_cpu_hotplug {
}
return;
+ } else {
+ my $changes = {
+ absolute => 1,
+ running => 1,
+ cpu => $vcpus,
+ };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
}
my $currentrunningvcpus = mon_cmd($vmid, "query-cpus-fast");
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index f365f2d1..b27b8b2b 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -234,6 +234,12 @@ sub qemu_memory_hotplug {
die "you cannot add more memory than max mem $MAX_MEM MB!\n" if $value > $MAX_MEM;
if ($value > $memory) {
+ my $changes = {
+ absolute => 1,
+ running => 1,
+ mem => $value*1024*1024,
+ };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
my $numa_hostmap;
--
2.39.2
* [pve-devel] [PATCH v2 qemu-server 5/6] start: handle pool limits
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
if the start is not part of an incoming migration, check the VM against its
pool's run limit.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
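a worked example with made-up values: sockets=2, cores=4 and vcpus=6
result in a check with cpu => 6, since 'vcpus' caps the initially
plugged CPUs; without 'vcpus' set, cpu => 8 would be checked.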
PVE/QemuServer.pm | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index be937ec1..78b6ff96 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5704,6 +5704,19 @@ sub vm_start_nolock {
# But we do it above, so for now let's be consistent.
$conf = PVE::QemuConfig->load_config($vmid); # update/reload
}
+
+ my $cpu_count = 1;
+ $cpu_count = $conf->{sockets} if $conf->{sockets};
+ $cpu_count *= ($conf->{cores} || 1);
+ $cpu_count = $conf->{vcpus} if $conf->{vcpus} && $conf->{vcpus} < $cpu_count;
+
+ my $changes = {
+ absolute => 1,
+ running => 1,
+ mem => get_current_memory($conf->{memory}),
+ cpu => $cpu_count,
+ };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
}
# override offline migrated volumes, conf is out of date still
--
2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* [pve-devel] [PATCH v2 qemu-server 6/6] rollback: handle pool limits
2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
` (17 preceding siblings ...)
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 5/6] start: " Fabian Grünbichler
@ 2024-04-16 12:20 ` Fabian Grünbichler
2024-12-19 15:59 ` [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Daniel Kral
19 siblings, 0 replies; 34+ messages in thread
From: Fabian Grünbichler @ 2024-04-16 12:20 UTC (permalink / raw)
To: pve-devel
by checking the snapshot conf values as if the VM was newly created.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
PVE/API2/Qemu.pm | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index adbc6557..eca0ab79 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -5403,6 +5403,14 @@ __PACKAGE__->register_method({
my $realcmd = sub {
PVE::Cluster::log_msg('info', $authuser, "rollback snapshot VM $vmid: $snapname");
+ my $snap_conf = PVE::QemuConfig->load_config($vmid)->{snapshots}->{$snapname};
+ my $snap_usage = PVE::QemuConfig->get_pool_usage($snap_conf);
+ my $changes = {
+ absolute => 1,
+ mem => $snap_usage->{mem},
+ cpu => $snap_usage->{cpu},
+ };
+ PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
PVE::QemuConfig->snapshot_rollback($vmid, $snapname);
if ($param->{start} && !PVE::QemuServer::Helpers::vm_running_locally($vmid)) {
--
2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits
2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
` (18 preceding siblings ...)
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 6/6] rollback: " Fabian Grünbichler
@ 2024-12-19 15:59 ` Daniel Kral
19 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 15:59 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> high level description:
>
> VM/CT vmstatus returns new fields for configured and running "usage"
> values, these are then broadcasted by pvestatd on each node via KV.
>
> helpers in guest-common to check those limits
> pool API returns limits and usage, them and allows setting the limits
>
> qemu-server/pve-container try to check actions against those limits.
>
> since the broadcast is async, there is always an opportunity to cheat by
> racing against the broadcast. this is unavoidable unless we want to
> serialize all usage affecting tasks across the cluster..
>
> changelog since v1/RFC:
> - GUI can edit limits now
> - incorporated most feedback from Wolfgang and Dominik
>
> potential follow-ups:
> - disk limits/usage
> - RRD/metrics support (or even switching entirely to RRD based
> broadcasting instead of KV)
> - timeout of broadcasted data if node goes offline/stops broadcasting
> - better GUI
> - ... ?
I've tried my hand at this patch series. I've added a few notes around
some patches and inlined them directly. Hope the notes are helpful, as
the feature seems very promising, especially when an administrator wants
to be able to allocate specific resources to different users' guests.
---
I've tested this patch series by creating a few resource pools,
adding/removing some VMs and containers to/from them, and trying out
different ways to start them (start on boot, file-restore, normal start)
to see whether these are correctly limited.
It worked fine for containers as far as I have tested, but I've had some
trouble with VMs, because it seems that their memory is not correctly
calculated; I've mentioned that inline. I've also tried the same thing
with different cores / sockets values: when a start would have exceeded
the limit for the cpu count, the start failed as expected.
Another thing I took a closer look at was hotplugging memory and cpu cores,
which also worked as expected, even though I wasn't sure how to hit all
new codepaths correctly, as I pointed out inline.
All in all, with the two bug fixes I mentioned inside, this patch series
looks good to me so far! I've added some notes about applying below.
---
Since this patch series is a few months old, here's a quick overview of
the base commits I applied each part onto; everything went fine except
for a single conflict that was very easy to fix:
=== applying patches ===
01-01: pve-access-control
-------------------------
01 applied cleanly on 4465786 ("api, auth: fix two typos in user-visible description")
02-08: pve-container
--------------------
02 applied cleanly on a39c5b0 ("bump version to 5.2.2")
03 applied with fuzz:
Patch failed at 0002 status: add pool usage fields
patching file src/PVE/LXC.pm
Hunk #1 succeeded at 146 with fuzz 2 (offset 8 lines).
Hunk #2 succeeded at 208 (offset 36 lines).
Hunk #3 succeeded at 264 (offset 42 lines).
Hunk #4 succeeded at 272 (offset 42 lines).
Hunk #5 succeeded at 297 (offset 42 lines).
04-07 applied cleanly.
08 applied with fuzz:
Patch failed at 0007 update: handle pool limits
patching file src/PVE/API2/LXC/Config.pm
Hunk #1 succeeded at 208 with fuzz 2.
09-09: pve-guest-common
-----------------------
09 applied with fuzz:
Patch failed at 0001 helpers: add pool limit/usage helpers
patching file src/PVE/GuestHelpers.pm
Hunk #1 succeeded at 450 with fuzz 1 (offset 34 lines).
10-13: pve-manager
------------------
10 applied cleanly on ead665d5 ("ui: ceph pool: add columns for application")
11 applied partially with fuzz:
Patch failed at 0002 pvestatd: collect and broadcast pool usage
patching file PVE/Service/pvestatd.pm
Hunk #1 FAILED at 231.
Hunk #2 succeeded at 245 (offset 3 lines).
Hunk #3 succeeded at 283 with fuzz 1 (offset 5 lines).
Hunk #4 FAILED at 466.
Hunk #5 succeeded at 480 (offset 5 lines).
Hunk #6 succeeded at 592 with fuzz 1 (offset 11 lines).
Hunk #7 FAILED at 593.
Merged rejected changes of 11 to:
```gitdiff
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index c1d5bda9..fac2c37a 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -234,7 +234,7 @@ sub auto_balloning {
}
sub update_qemu_status {
- my ($status_cfg, $pull_txn) = @_;
+ my ($status_cfg, $pull_txn, $pool_membership, $pool_usage) = @_;
my $ctime = time();
my $vmstatus = PVE::QemuServer::vmstatus(undef, 1);
@@ -471,7 +471,7 @@ sub rebalance_lxc_containers {
}
sub update_lxc_status {
- my ($status_cfg, $pull_txn) = @_;
+ my ($status_cfg, $pull_txn, $pool_membership, $pool_usage) = @_;
my $ctime = time();
my $vmstatus = PVE::LXC::vmstatus();
@@ -604,17 +604,23 @@ sub update_status {
syslog('err', "node status update error: $err") if $err;
eval {
- update_qemu_status($status_cfg, $pull_txn);
+ update_qemu_status($status_cfg, $pull_txn, $pool_membership, $pool_usage);
};
$err = $@;
syslog('err', "qemu status update error: $err") if $err;
eval {
- update_lxc_status($status_cfg, $pull_txn);
+ update_lxc_status($status_cfg, $pull_txn, $pool_membership, $pool_usage);
};
$err = $@;
syslog('err', "lxc status update error: $err") if $err;
+ eval {
+ update_pool_usage($pool_usage);
+ };
+ $err =$@;
+ syslog('err', "pool usage status update error: $err") if $err;
+
eval {
rebalance_lxc_containers();
};
```
12-13 applied cleanly.
14-19: qemu-server
------------------
14 applied cleanly on 45fedd47 ("fix #5980: importdisk: fix spurious warning")
15 applied with warning:
Applying: vmstatus: add usage values for pool limits
.git/rebase-apply/patch:58: trailing whitespace.
warning: 1 line adds whitespace errors.
16 applied cleanly.
17 applied with warning:
Applying: update/hotplug: handle pool limits
.git/rebase-apply/patch:19: space before tab in indent.
my $old = $usage->{cpu};
warning: 1 line adds whitespace errors.
18-19 applied cleanly.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 access-control 1/1] pools: define resource limits
2024-04-16 12:20 ` [pve-devel] [PATCH v2 access-control 1/1] pools: define " Fabian Grünbichler
@ 2024-12-19 16:01 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:01 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> and handle them when parsing/writing user.cfg
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
>
> Notes:
> - make limit schema public for pve-guest-common usage
>
> src/PVE/AccessControl.pm | 42 +++++++++++++++++++++++++++++++++++++--
> src/test/parser_writer.pl | 14 ++++++-------
> 2 files changed, 47 insertions(+), 9 deletions(-)
>
> diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
> index 21f93ff..f1863c8 100644
> --- a/src/PVE/AccessControl.pm
> +++ b/src/PVE/AccessControl.pm
> @@ -72,6 +72,36 @@ sub pve_verify_realm {
> PVE::Auth::Plugin::pve_verify_realm(@_);
> }
>
> +our $pool_limits_desc = {
> + "mem-config" => {
> + type => 'integer',
> + description => "Sum of memory (in MB) guests in this pools can be configured with.",
I think this should be in MiB.
Also, I think it would be a little more readable if we rephrased it to
use either "maximum amount of" or "upper limit for", e.g.:
"The maximum amount of memory (in MiB), which can be configured for all
guests in this pool."
> + optional => 1,
> + },
> + "mem-run" => {
> + type => 'integer',
> + description => "Sum of memory (in MB) guests in this pools can be started with.",
I think this should be in MiB.
Similar changes as to 'mem-config', e.g.:
"The maximum amount of memory (in MiB), which can be configured for
running guests in this pool at the same time."
And maybe append something like:
"This amount must be lower than 'mem-config'."
I thought about using "allocated to" instead of "configured for", but
this would likely cause readers to believe that it's the actual
allocated amount of memory.
> + optional => 1,
> + },
> + "cpu-config" => {
> + type => 'integer',
> + description => "Sum of (virtual) cores guests in this pools can be configured with.",
Similar to 'mem-config':
"The maximum amount of virtual CPU cores, which can be configured for
all guests in this pool."
> + optional => 1,
> + },
> + "cpu-run" => {
> + type => 'integer',
> + description => "Sum of (virtual) cores guests in this pools can be started with.",
Similar to 'mem-run':
"The maximum amount of virtual CPU cores, which can be configured for
running guests in this pool at the same time. This amount must be lower
than 'cpu-config'."
> + optional => 1,
> + },
> +};
> +
> +PVE::JSONSchema::register_format('pve-pool-limits', $pool_limits_desc);
> +PVE::JSONSchema::register_standard_option('pve-pool-limits', {
> + type => 'string',
> + format => $pool_limits_desc,
> + optional => 1,
> +});
> +
> # Locking both config files together is only ever allowed in one order:
> # 1) tfa config
> # 2) user config
> @@ -1524,7 +1554,7 @@ sub parse_user_config {
> warn "user config - ignore invalid path in acl '$pathtxt'\n";
> }
> } elsif ($et eq 'pool') {
> - my ($pool, $comment, $vmlist, $storelist) = @data;
> + my ($pool, $comment, $vmlist, $storelist, $limits) = @data;
>
> if (!verify_poolname($pool, 1)) {
> warn "user config - ignore pool '$pool' - invalid characters in pool name\n";
> @@ -1575,6 +1605,13 @@ sub parse_user_config {
> }
> $cfg->{pools}->{$pool}->{storage}->{$storeid} = 1;
> }
> +
> + if ($limits) {
> + my $parsed_limits = eval { PVE::JSONSchema::parse_property_string($pool_limits_desc, $limits) };
> + warn "Failed to parse pool limits for '$pool' - $@\n" if $@;
> +
> + $cfg->{pools}->{$pool}->{limits} = $parsed_limits;
> + }
> } elsif ($et eq 'token') {
> my ($tokenid, $expire, $privsep, $comment) = @data;
>
> @@ -1656,7 +1693,8 @@ sub write_user_config {
> my $vmlist = join (',', sort keys %{$d->{vms}});
> my $storelist = join (',', sort keys %{$d->{storage}});
> my $comment = $d->{comment} ? PVE::Tools::encode_text($d->{comment}) : '';
> - $data .= "pool:$pool:$comment:$vmlist:$storelist:\n";
> + my $limits = $d->{limits} ? PVE::JSONSchema::print_property_string($d->{limits}, $pool_limits_desc) : '';
> + $data .= "pool:$pool:$comment:$vmlist:$storelist:$limits:\n";
> }
>
> $data .= "\n";
> diff --git a/src/test/parser_writer.pl b/src/test/parser_writer.pl
> index 80c346b..2e6eb61 100755
> --- a/src/test/parser_writer.pl
> +++ b/src/test/parser_writer.pl
> @@ -431,12 +431,12 @@ my $default_raw = {
> 'test_role_privs_invalid' => 'role:testrole:VM.Invalid,Datastore.Audit,VM.Allocate:',
> },
> pools => {
> - 'test_pool_empty' => 'pool:testpool::::',
> - 'test_pool_invalid' => 'pool:testpool::non-numeric:inval!d:',
> - 'test_pool_members' => 'pool:testpool::123,1234:local,local-zfs:',
> - 'test_pool_duplicate_vms' => 'pool:test_duplicate_vms::123,1234::',
> - 'test_pool_duplicate_vms_expected' => 'pool:test_duplicate_vms::::',
> - 'test_pool_duplicate_storages' => 'pool:test_duplicate_storages:::local,local-zfs:',
> + 'test_pool_empty' => 'pool:testpool:::::',
> + 'test_pool_invalid' => 'pool:testpool::non-numeric:inval!d::',
> + 'test_pool_members' => 'pool:testpool::123,1234:local,local-zfs::',
> + 'test_pool_duplicate_vms' => 'pool:test_duplicate_vms::123,1234:::',
> + 'test_pool_duplicate_vms_expected' => 'pool:test_duplicate_vms:::::',
> + 'test_pool_duplicate_storages' => 'pool:test_duplicate_storages:::local,local-zfs::',
> },
> acl => {
> 'acl_simple_user' => 'acl:1:/:test@pam:PVEVMAdmin:',
> @@ -1018,7 +1018,7 @@ my $tests = [
> 'user:test@pam:0:0::::::'."\n".
> 'token:test@pam!test:0:0::'."\n\n".
> 'group:testgroup:::'."\n\n".
> - 'pool:testpool::::'."\n\n".
> + 'pool:testpool:::::'."\n\n".
> 'role:testrole::'."\n\n",
> },
> ];
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 container 1/7] config: add pool usage helper
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 1/7] config: add pool usage helper Fabian Grünbichler
@ 2024-12-19 16:01 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:01 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> to avoid repeating those calculations all over the place.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
> src/PVE/LXC/Config.pm | 35 +++++++++++++++++++++++++++++++++++
> 1 file changed, 35 insertions(+)
>
> diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
> index 1664a35..a6baccd 100644
> --- a/src/PVE/LXC/Config.pm
> +++ b/src/PVE/LXC/Config.pm
> @@ -11,6 +11,7 @@ use PVE::DataCenterConfig;
> use PVE::GuestHelpers;
> use PVE::INotify;
> use PVE::JSONSchema qw(get_standard_option);
> +use PVE::ProcFSTools;
> use PVE::Tools;
>
> use PVE::LXC;
> @@ -1886,4 +1887,38 @@ sub foreach_passthrough_device {
> }
> }
>
> +my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
> +
> +# for determining pool usage vs limits
> +#
> +# this gives the higher of pending or currently configured
> +sub get_pool_usage {
hm, would it make sense to rename this sub to `get_configured_lxc_usage`
or something similar, to make it clear that it covers the currently
configured resources and not the running ones? It was a little confusing
to me at first that `$usage` is used for the configuration in 'confmem'
and not for 'runmem'. I also dropped the `_pool` in my suggestion because
the helper could be used independently of pools in the future.
> + my ($class, $conf) = @_;
> +
> + my $usage = {};
> +
> + my $get_max = sub {
> + my $max = 0;
> +
> + for my $curr (@_) {
> + $max = $curr if defined($curr) && $curr > $max;
> + }
> +
> + return $max;
> + };
> +
> + $usage->{cpu} = $get_max->(
> + $conf->{pending}->{cores},
> + $conf->{cores},
> + );
nit: this could be written in one line like for `$swap` and `$mem`.
> + $usage->{cpu} = $cpuinfo->{cpus} if !$usage->{cpu};
nit: we could name this 'cpus' just for consistency's sake (as 'cpu'
holds the cpu usage % and 'cpus' the amount of cores).
> +
> + my $swap = $get_max->($conf->{pending}->{swap}, $conf->{swap});
> + my $mem = $get_max->($conf->{pending}->{memory}, $conf->{memory}, 512);
> + $usage->{mem} = $mem+$swap;
> + $usage->{mem} *= 1024*1024;
> +
> + return $usage;
> +}
> +
> 1;
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 container 2/7] status: add pool usage fields
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 2/7] status: add pool usage fields Fabian Grünbichler
@ 2024-12-19 16:02 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:02 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> these are similar to existing ones, but with slightly different semantics.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
> src/PVE/LXC.pm | 29 +++++++++++++++++++++++++++++
> 1 file changed, 29 insertions(+)
>
> diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
> index e688ea6..9b03a97 100644
> --- a/src/PVE/LXC.pm
> +++ b/src/PVE/LXC.pm
> @@ -138,6 +138,18 @@ our $vmstatus_return_properties = {
> optional => 1,
> renderer => 'bytes',
> },
> + confmem => {
> + description => "Configured amount of memory (inc. swap), might be higher than 'maxmem'.",
nit: a little redundant, but other descriptions in this return schema
append an 'in bytes', i.e.
"Configured amount of memory in bytes [...]"
> + type => 'integer',
> + optional => 1,
> + renderer => 'bytes',
> + },
> + runmem => {
> + description => "Currently configured amount of memory (inc. swap).",
same nit as for "confmem": append 'in bytes'.
> + type => 'integer',
> + optional => 1,
> + renderer => 'bytes',
> + },
> maxdisk => {
> description => "Root disk size in bytes.",
> type => 'integer',
> @@ -160,6 +172,16 @@ our $vmstatus_return_properties = {
> type => 'number',
> optional => 1,
> },
> + confcpus => {
> + description => "Configured amount of CPUs, might be higher than 'cpus'.",
> + type => 'integer',
> + optional => 1,
> + },
> + runcpus => {
> + description => "Currently used amount of CPUs.",
I think 'configured' instead of 'used' would make more sense for
consistency with the description of 'runmem', and also to underline that
this value is only what is configured as visible to the container, not
the actual amount of cpus in use at the moment (or am I wrong about this?).
> + type => 'integer',
> + optional => 1,
> + },
> lock => {
> description => "The current config lock, if any.",
> type => 'string',
> @@ -200,6 +222,7 @@ sub vmstatus {
> my $conf = PVE::Cluster::cfs_read_file($cfspath) || {};
>
> $unprivileged->{$vmid} = $conf->{unprivileged};
> + my $usage = PVE::LXC::Config->get_pool_usage($conf);
>
> $d->{name} = $conf->{'hostname'} || "CT$vmid";
> $d->{name} =~ s/[\s]//g;
> @@ -207,6 +230,9 @@ sub vmstatus {
> $d->{cpus} = $conf->{cores} || $conf->{cpulimit};
> $d->{cpus} = $cpucount if !$d->{cpus};
>
> + $d->{confcpus} = $usage->{cpu};
> + $d->{runcpus} = $conf->{cores} || $cpucount;
> +
> $d->{tags} = $conf->{tags} if defined($conf->{tags});
>
> if ($d->{pid}) {
> @@ -229,6 +255,9 @@ sub vmstatus {
> $d->{maxmem} = ($conf->{memory}||512)*1024*1024;
> $d->{maxswap} = ($conf->{swap}//0)*1024*1024;
>
> + $d->{confmem} = $usage->{mem};
> + $d->{runmem} = $conf->{maxmem} + $conf->{swap};
Shouldn't the `$conf->{maxmem}` be a `$d->{maxmem}`?
Perl complains about this line for me in the journal logs:
Use of uninitialized value in addition (+) at
/usr/share/perl5/PVE/LXC.pm line 301.
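A minimal sketch of the fix I have in mind, assuming 'runmem' is meant
to be in bytes like 'maxmem'/'maxswap' (note `$conf->{swap}` would also
still be in MiB here, so reusing the derived values sidesteps both
problems):
```perl
# reuse the byte values computed just above; both are always defined,
# unlike the raw $conf->{maxmem} / $conf->{swap} keys
$d->{runmem} = $d->{maxmem} + $d->{maxswap};
```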
> +
> $d->{uptime} = 0;
> $d->{cpu} = 0;
>
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 container 5/7] hotplug: handle pool limits
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 5/7] hotplug: " Fabian Grünbichler
@ 2024-12-19 16:03 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:03 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> by checking the new values against the running limits.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
> src/PVE/LXC/Config.pm | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
> index a6baccd..248100e 100644
> --- a/src/PVE/LXC/Config.pm
> +++ b/src/PVE/LXC/Config.pm
> @@ -1434,6 +1434,13 @@ sub vmconfig_hotplug_pending {
>
> foreach my $opt (sort keys %{$conf->{pending}}) { # add/change
> next if $selection && !$selection->{$opt};
> + if ($opt eq 'cores') {
> + PVE::GuestHelpers::check_guest_pool_limit($vmid, {
> + running => 1,
> + absolute => 1,
> + cpu => $conf->{pending}->{cores},
> + })
This seems to run fine, but shouldn't there be a semicolon?
> + }
> if ($LXC_FASTPLUG_OPTIONS->{$opt}) {
> $conf->{$opt} = delete $conf->{pending}->{$opt};
> }
> @@ -1447,6 +1454,12 @@ sub vmconfig_hotplug_pending {
> my $hotplug_memory = sub {
> my ($new_memory, $new_swap) = @_;
>
> + PVE::GuestHelpers::check_guest_pool_limit($vmid, {
> + running => 1,
> + absolute => 1,
> + mem => ($new_memory + $new_swap)*1024*1024,
> + });
> +
> ($new_memory, my $new_memory_high) = calculate_memory_constraints($new_memory);
> $new_swap = int($new_swap * 1024 * 1024) if defined($new_swap);
> $cgroup->change_memory_limit($new_memory, $new_swap, $new_memory_high);
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 container 7/7] update: handle pool limits
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 7/7] update: " Fabian Grünbichler
@ 2024-12-19 16:04 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:04 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
>
> Notes:
> v2:
> - don't multiply mem+swap, but add them up (thanks Dominik)
>
> src/PVE/API2/LXC/Config.pm | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm
> index e6c0980..15fab61 100644
> --- a/src/PVE/API2/LXC/Config.pm
> +++ b/src/PVE/API2/LXC/Config.pm
> @@ -208,6 +208,27 @@ __PACKAGE__->register_method({
>
> my $running = PVE::LXC::check_running($vmid);
>
> + my $usage = PVE::LXC::Config->get_pool_usage($conf);
> + if (defined($param->{memory}) || defined($param->{swap})) {
> + my $old = $usage->{mem};
> + my $new = $param->{memory} || $usage->{memory};
> + $new += ($param->{swap} || $usage->{swap});
> +
> + if ($new > $old) {
> + my $change = { mem => ($new - $old) * 1024 * 1024 };
I haven't tested this exactly, but aren't $new and $old in different
magnitudes, i.e. bytes and MiB?
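If so, a rough sketch of the unit handling I'd expect, under the
assumption that `$usage->{mem}` is in bytes (as built by
`get_pool_usage`) while the `memory`/`swap` parameters are in MiB:
```perl
# compare in MiB, then convert the increase back to bytes for the check
my $old = $usage->{mem} / (1024 * 1024); # bytes -> MiB
my $new = $param->{memory} || $usage->{memory}; # fallbacks kept as in the patch
$new += ($param->{swap} || $usage->{swap});

if ($new > $old) {
    my $change = { mem => ($new - $old) * 1024 * 1024 }; # MiB -> bytes
    PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
}
```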
> + PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
> + }
> + }
> + if (defined($param->{cores})) {
> + my $old = $usage->{cpu};
> + my $new = $param->{cores};
> +
> + if ($new > $old) {
> + my $change = { cpu => ($new - $old) };
> + PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
> + }
> + }
> +
> my $errors = PVE::LXC::Config->update_pct_config($vmid, $conf, $running, $param, \@delete, \@revert);
> PVE::LXC::Config->write_config($vmid, $conf);
> $conf = PVE::LXC::Config->load_config($vmid);
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 guest-common 1/1] helpers: add pool limit/usage helpers
2024-04-16 12:20 ` [pve-devel] [PATCH v2 guest-common 1/1] helpers: add pool limit/usage helpers Fabian Grünbichler
@ 2024-12-19 16:04 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:04 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> one for combining the per-node broadcasted values, one for checking a pool's
> limit, and one specific helper for checking guest-related actions such as
> starting a VM.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
>
> Notes:
> v2:
> - style
> - introduce new helper for mapping limit key to usage hash
> - introduce new helper for default usage hash
> - avoid hard-coding cpu/mem and run/config where sensible
>
> src/PVE/GuestHelpers.pm | 183 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 183 insertions(+)
>
> diff --git a/src/PVE/GuestHelpers.pm b/src/PVE/GuestHelpers.pm
> index 961a7b8..e52eaf0 100644
> --- a/src/PVE/GuestHelpers.pm
> +++ b/src/PVE/GuestHelpers.pm
> @@ -416,4 +416,187 @@ sub check_vnet_access {
> if !($tag || $trunks);
> }
>
> +sub pool_limit_to_usage {
> + my ($limit_key) = @_;
> +
> + my ($resource, $kind) = split(/-/, $limit_key, 2);
> +
> + return ($resource, $kind, $kind eq 'run' ? 1 : 0);
> +}
> +
> +sub pool_default_usage {
> + my $default = {};
> +
> + for my $limit (keys $PVE::AccessControl::pool_limits_desc->%*) {
Perlcritic complains here about the direct access to
`pool_limits_desc`. I haven't found any other occurrence where we
reference a variable across packages directly. I can see that it is only
used here, but would it make sense to move these helpers (which are
unrelated to guests themselves) to the `PVE::AccessControl` package?
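If the helpers stay where they are, a minimal sketch of an alternative
that keeps Perlcritic quiet, assuming a small (hypothetical) accessor in
PVE::AccessControl:
```perl
# in PVE::AccessControl - hypothetical accessor instead of exposing the
# package variable directly
sub get_pool_limits_desc { return $pool_limits_desc; }

# in PVE::GuestHelpers - iterate via the accessor
my $desc = PVE::AccessControl::get_pool_limits_desc();
for my $limit (keys $desc->%*) {
    my ($resource, $kind) = pool_limit_to_usage($limit);
    $default->{$resource}->{$kind} = 0;
}
```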
> + my ($resource, $kind) = pool_limit_to_usage($limit);
> + $default->{$resource}->{$kind} = 0;
> + }
> +
> + return $default;
> +}
> +
> +# combines the broadcasted pool usage information to get per-pool stats
> +#
> +# $pools parsed pool info from user.cfg
> +# $usage broadcasted KV hash
> +# $pool filter for specific pool
> +# $skip skip a certain guest to ignore its current usage
> +#
> +# returns usage hash:
> +# pool -> cpu/mem/.. -> run/config -> $usage
> +sub get_pool_usage {
> + my ($pools, $usage, $pool, $skip) = @_;
> +
> + my $res = {};
> + my $included_guests = {};
> + for my $node (keys $usage->%*) {
> + my $node_usage = JSON::decode_json($usage->{$node} // '');
As pointed out in the pve-manager ui patch, `encode_json` and
`decode_json` make no guarantees about preserving key order. If these
values stay user-visible, we should sort the keys here afterwards, so
users can rely on a stable order.
> +
> + # long IDs first, so we can add children to their parents right away
> + for my $poolid (sort {$b cmp $a} keys $pools->%*) {
> + if (
> + defined($pool)
> + && !($pool eq $poolid || $poolid =~ m!^$pool/! || $pool =~ m!^$poolid/!)
> + ) {
> + next;
> + }
> +
> + my $d = $res->{$poolid} //= pool_default_usage();
> +
> + my $pool_usage = $node_usage->{data}->{$poolid} // {};
> + for my $vmid (keys $pool_usage->%*) {
> + # only include once in case of migration between broadcast
> + next if $included_guests->{$vmid};
> + next if $skip && $skip->{$vmid};
> + $included_guests->{$vmid} = 1;
> +
> + my $vm_data = $pool_usage->{$vmid};
> + for my $key (keys $vm_data->%*) {
> + next if $key eq 'running';
> + $d->{$key}->{run} += $vm_data->{$key}->{run} if $vm_data->{running};
> + $d->{$key}->{config} += $vm_data->{$key}->{config};
> + }
> + }
> +
> + if (my $parent = $pools->{$poolid}->{parent}) {
> + $res->{$parent} //= pool_default_usage();
> + for my $key (keys $d->%*) {
> + for my $kind (keys $d->{$key}->%*) {
> + $res->{$parent}->{$key}->{$kind} = $d->{$key}->{$kind};
> + }
> + }
> + }
> + }
> + }
> +
> + return $res;
> +}
> +
> +# checks whether a pool is (or would be) over its resource limits
> +#
> +# $changes is for checking limits for config/state changes like VM starts, if
> +# set, only the limits with changes are checked (see check_guest_pool_limit)
> +#
> +# return value indicates whether any limit was overstepped or not (if $noerr is set)
> +sub check_pool_limits {
> + my ($usage, $limits, $noerr, $changes) = @_;
> +
> + my $over = {};
> + my $only_changed = defined($changes);
> +
> + my $check_limit = sub {
> + my ($key, $running, $limit, $change) = @_;
> +
> + return if $only_changed && $change == 0;
> +
> + my $kind = $running ? 'run' : 'config';
> +
> + my $value = $usage->{$key}->{$kind};
> + $value = int($value);
> + $value += $change;
> + $value = $value / (1024*1024) if $key eq 'mem';
> + if ($limit < $value) {
> + $over->{$key}->{$kind}->{change} = $change if $change;
> + $over->{$key}->{$kind}->{over} = 1;
> + }
> + };
> +
> + my $get_change = sub {
> + my ($key, $running) = @_;
> +
> + return 0 if !defined($changes);
> +
> + my $check_running = defined($changes->{running}) && $changes->{running} ? 1 : 0;
> +
> + if ($running == $check_running) {
> + return $changes->{$key} // 0;
> + } else {
> + return 0;
> + }
> + };
> +
> + while (my ($key, $limit) = each $limits->%*) {
> + my ($resource, $kind, $running) = pool_limit_to_usage($key);
> + my $change = $get_change->($resource, $running);
> + $check_limit->($resource, $running, $limit, $change);
> + }
> +
> + if (!$noerr) {
> + my $msg = '';
> + for my $key (keys $over->%*) {
> + for my $kind (keys $over->{$key}->%*) {
> + my $value = $usage->{$key}->{$kind};
> + $value = $value / (1024*1024) if $key eq 'mem';
> + my $change = $over->{$key}->{$kind}->{change};
> + if ($change) {
> + $change = $change / (1024*1024) if $key eq 'mem';
> + $value = "$value + $change" if $change;
> + }
> + my $limit = $limits->{"$key-$kind"};
> + $msg .= "($kind) $key: $value over $limit, ";
> + }
> + }
> + if ($msg) {
> + $msg =~ s/, $//;
> + die "pool limits exhausted: $msg\n";
> + }
> + }
> +
> + return $over->%* ? 1 : 0;
> +}
> +
> +# checks whether the given changes for a certain guest would overstep a pool limit
> +#
> +# $changes is an optional hash containing
> +# - absolute: flag whether changes are relative or absolute
> +# - running: flag whether the config or running limits should be checked
> +# - cpu: change value for cpu limit
> +# - mem: change value for mem limit
the description could benefit from a unit, i.e. "in bytes".
FWIW, we could use `PVE::Tools::convert_size` more often to make the
input values more agnostic to prefix (e.g. Mega vs Giga) and base unit
(bit vs byte), and to reduce the cognitive load across pve-container and
qemu-server about which unit to use where. We could also use that
information in the error message above to include units (i.e. "$value
MiB over $limit"). But that would involve much more effort and I'm
unsure it's worth it.
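A small sketch of the idea, assuming `PVE::Tools::convert_size` with
its byte-based units ('b', 'kb', 'mb', ... with 1024 semantics):
```perl
# hypothetical reuse in the error path above: derive the displayed
# value and its unit in one place instead of dividing by 1024*1024 by hand
my $shown = $key eq 'mem'
    ? PVE::Tools::convert_size($value, 'b' => 'mb') . " MiB"
    : $value;
$msg .= "($kind) $key: $shown over $limit, ";
```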
> +# all elements are optional
> +#
> +# if no $changes is provided, the limits are checked against the current usage
> +#
> +# $poolid allows overriding the guest's pool membership, for example in case it
> +# is not yet properly set when creating the guest
> +sub check_guest_pool_limit {
> + my ($vmid, $changes, $poolid) = @_;
> +
> + my $user_cfg = PVE::Cluster::cfs_read_file("user.cfg");
> +
> + $poolid = $user_cfg->{vms}->{$vmid} if !defined($poolid);
> + if ($poolid) {
> + my $pool = $user_cfg->{pools}->{$poolid};
> +
> + my $limits = $pool->{limits};
> + return if !$limits;
> +
> + my $skip = {};
> + $skip->{$vmid} = 1 if $changes && $changes->{absolute};
> + my $usage = PVE::Cluster::get_node_kv('pool-usage');
> +
> + $usage = get_pool_usage($user_cfg->{pools}, $usage, $poolid, $skip);
> + check_pool_limits($usage->{$poolid}, $limits, 0, $changes);
> + }
> +}
> +
> 1;
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 manager 1/4] api: pools: add limits management
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 1/4] api: pools: add limits management Fabian Grünbichler
@ 2024-12-19 16:05 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:05 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> allow to set/update limits, and return them when querying individual pools.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
>
> Notes:
> requires bumped pve-access-control
>
> v2:
> - unify run vs config limit checks into helper
> - avoid hard-coding resource kinds
>
> PVE/API2/Pool.pm | 50 ++++++++++++++++++++++++++++++++++++++++++++----
> 1 file changed, 46 insertions(+), 4 deletions(-)
>
> diff --git a/PVE/API2/Pool.pm b/PVE/API2/Pool.pm
> index 54e744558..031f0160f 100644
> --- a/PVE/API2/Pool.pm
> +++ b/PVE/API2/Pool.pm
> @@ -7,6 +7,7 @@ use PVE::AccessControl;
> use PVE::Cluster qw (cfs_read_file cfs_write_file);
> use PVE::Exception qw(raise_param_exc);
> use PVE::INotify;
> +use PVE::JSONSchema qw(get_standard_option parse_property_string print_property_string);
> use PVE::Storage;
>
> use PVE::SafeSyslog;
> @@ -47,10 +48,7 @@ __PACKAGE__->register_method ({
> type => "object",
> properties => {
> poolid => { type => 'string' },
> - comment => {
> - type => 'string',
> - optional => 1,
> - },
> + comment => { type => 'string', optional => 1 },
> members => {
> type => 'array',
> optional => 1,
> @@ -79,6 +77,7 @@ __PACKAGE__->register_method ({
> },
> },
> },
> + limits => { type => 'string', optional => 1 },
> },
> },
> links => [ { rel => 'child', href => "{poolid}" } ],
> @@ -136,6 +135,8 @@ __PACKAGE__->register_method ({
> members => $members,
> };
> $pool_info->{comment} = $pool_config->{comment} if defined($pool_config->{comment});
> + $pool_info->{limits} = print_property_string($pool_config->{limits}, 'pve-pool-limits')
> + if defined($pool_config->{limits});
> $pool_info->{poolid} = $poolid;
>
> push @$res, $pool_info;
> @@ -153,6 +154,26 @@ __PACKAGE__->register_method ({
> return $res;
> }});
>
> +my $check_run_vs_config_limits = sub {
> + my ($limits) = @_;
> +
> + my $exception;
> +
> + for my $resource (keys $limits->%*) {
> + next if $resource !~ m/-run$/;
> +
> + my $config = $resource;
> + $config =~ s/-run$/-config/;
> +
> + if (defined($limits->{$config}) && $limits->{$config} < $limits->{$resource}) {
> + my $msg = "run limit must be below config limit!";
> + $exception->{$resource} = $msg;
> + }
> + }
> +
> + raise_param_exc($exception) if $exception;
> +};
> +
> __PACKAGE__->register_method ({
> name => 'create_pool',
> protected => 1,
> @@ -173,6 +194,7 @@ __PACKAGE__->register_method ({
> type => 'string',
> optional => 1,
> },
> + limits => get_standard_option('pve-pool-limits'),
> },
> },
> returns => { type => 'null' },
> @@ -196,6 +218,10 @@ __PACKAGE__->register_method ({
> };
>
> $usercfg->{pools}->{$pool}->{comment} = $param->{comment} if $param->{comment};
> + if (my $limits = parse_property_string('pve-pool-limits', $param->{limits} // '')) {
> + $check_run_vs_config_limits->($limits);
> + $usercfg->{pools}->{$pool}->{limits} = $limits;
> + }
>
> cfs_write_file("user.cfg", $usercfg);
> }, "create pool failed");
> @@ -288,6 +314,9 @@ __PACKAGE__->register_method ({
> optional => 1,
> default => 0,
> },
> + limits => get_standard_option("pve-pool-limits", {
> + description => "Update pool limits. Passing '0' for a specific limit will remove any currently configured value.",
> + }),
> },
> },
> returns => { type => 'null' },
> @@ -346,6 +375,18 @@ __PACKAGE__->register_method ({
> }
> }
>
> + if (my $update = parse_property_string('pve-pool-limits', $param->{limits} // '')) {
Bug: Something like a
`$pool_config->{limits} = {} if !defined($pool_config->{limits});`
is missing here. Without this, a pool created without any pool usage
limits set will never be able to gain them, as the parsed limits are
only written to the local `$limits` variable and never end up in
`$pool_config->{limits}`. Since we already default everywhere to the
values from `pool_default_usage`, an empty hash would not overwrite any
settings that weren't set yet.
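A minimal sketch of the fix, attaching a fresh hashref to the pool
config before the per-key update loop runs:
```perl
if (my $update = parse_property_string('pve-pool-limits', $param->{limits} // '')) {
    # make sure the hashref exists in the pool config, so the updates
    # below actually end up in user.cfg
    my $limits = $pool_config->{limits} //= {};
    for my $kind (sort keys $update->%*) {
        if ($update->{$kind} == 0) {
            delete $limits->{$kind};
        } else {
            $limits->{$kind} = $update->{$kind};
        }
    }
    $check_run_vs_config_limits->($limits);
}
```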
> + my $limits = $pool_config->{limits};
> + for my $kind (sort keys $update->%*) {
> + if ($update->{$kind} == 0) {
> + delete $limits->{$kind};
> + } else {
> + $limits->{$kind} = $update->{$kind};
> + }
> + }
> + $check_run_vs_config_limits->($limits);
> + }
> +
> cfs_write_file("user.cfg", $usercfg);
> }, "update pools failed");
>
> @@ -409,6 +450,7 @@ __PACKAGE__->register_method ({
> },
> },
> },
> + limits => get_standard_option("pve-pool-limits"),
> },
> },
> code => sub {
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 manager 2/4] pvestatd: collect and broadcast pool usage
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 2/4] pvestatd: collect and broadcast pool usage Fabian Grünbichler
@ 2024-12-19 16:06 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:06 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
This could only be partially applied because of changes to
`update_{qemu,lxc}_status` in the meantime:
Patch failed at 0002 pvestatd: collect and broadcast pool usage
patching file PVE/Service/pvestatd.pm
Hunk #1 FAILED at 231.
Hunk #2 succeeded at 245 (offset 3 lines).
Hunk #3 succeeded at 283 with fuzz 1 (offset 5 lines).
Hunk #4 FAILED at 466.
Hunk #5 succeeded at 480 (offset 5 lines).
Hunk #6 succeeded at 592 with fuzz 1 (offset 11 lines).
Hunk #7 FAILED at 593.
Inlined the changes to resolve conflicts when applying onto ead665d5
("ui: ceph pool: add columns for application").
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> so that other nodes can query it and both block changes that would violate the
> limits, and mark pools which are violating it currently accordingly.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
> PVE/Service/pvestatd.pm | 59 ++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 55 insertions(+), 4 deletions(-)
>
> diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
> index 2515120c6..d7e4755e9 100755
> --- a/PVE/Service/pvestatd.pm
> +++ b/PVE/Service/pvestatd.pm
> @@ -231,7 +231,7 @@ sub auto_balloning {
> }
>
> sub update_qemu_status {
> - my ($status_cfg) = @_;
> + my ($status_cfg, $pool_membership, $pool_usage) = @_;
This should be
```
- my ($status_cfg, $pull_txn) = @_;
+ my ($status_cfg, $pull_txn, $pool_membership, $pool_usage) = @_;
```
as of ead665d5 ("ui: ceph pool: add columns for application").
>
> my $ctime = time();
> my $vmstatus = PVE::QemuServer::vmstatus(undef, 1);
> @@ -242,6 +242,21 @@ sub update_qemu_status {
> my $transactions = PVE::ExtMetric::transactions_start($status_cfg);
> foreach my $vmid (keys %$vmstatus) {
> my $d = $vmstatus->{$vmid};
> +
> + if (my $pool = $pool_membership->{$vmid}) {
> + $pool_usage->{$pool}->{$vmid} = {
> + cpu => {
> + config => ($d->{confcpus} // 0),
> + run => ($d->{runcpus} // 0),
> + },
> + mem => {
> + config => ($d->{confmem} // 0),
> + run => ($d->{runmem} // 0),
> + },
> + running => $d->{pid} ? 1 : 0,
> + };
> + }
> +
> my $data;
> my $status = $d->{qmpstatus} || $d->{status} || 'stopped';
> my $template = $d->{template} ? $d->{template} : "0";
> @@ -263,6 +278,17 @@ sub update_qemu_status {
> PVE::ExtMetric::transactions_finish($transactions);
> }
>
> +sub update_pool_usage {
> + my ($usage) = @_;
> +
> + my $ctime = time();
> +
> + # TODO? RRD and ExtMetric support here?
> +
> + my $new = { data => $usage, timestamp => $ctime };
> + PVE::Cluster::broadcast_node_kv('pool-usage', encode_json($new));
> +}
> +
> sub remove_stale_lxc_consoles {
>
> my $vmstatus = PVE::LXC::vmstatus();
> @@ -440,7 +466,7 @@ sub rebalance_lxc_containers {
> }
>
> sub update_lxc_status {
> - my ($status_cfg) = @_;
> + my ($status_cfg, $pool_membership, $pool_usage) = @_;
This should be
```
- my ($status_cfg, $pull_txn) = @_;
+ my ($status_cfg, $pull_txn, $pool_membership, $pool_usage) = @_;
```
>
> my $ctime = time();
> my $vmstatus = PVE::LXC::vmstatus();
> @@ -449,6 +475,21 @@ sub update_lxc_status {
>
> foreach my $vmid (keys %$vmstatus) {
> my $d = $vmstatus->{$vmid};
> +
> + if (my $pool = $pool_membership->{$vmid}) {
> + $pool_usage->{$pool}->{$vmid} = {
> + cpu => {
> + config => ($d->{confcpus} // 0),
> + run => ($d->{runcpus} // 0),
> + },
> + mem => {
> + config => ($d->{confmem} // 0),
> + run => ($d->{runmem} // 0),
> + },
> + running => $d->{status} eq 'running' ? 1 : 0,
> + };
> + }
> +
> my $template = $d->{template} ? $d->{template} : "0";
> my $data;
> if ($d->{status} eq 'running') { # running
> @@ -540,6 +581,10 @@ sub update_status {
> syslog('err', $err) if $err;
>
> my $status_cfg = PVE::Cluster::cfs_read_file('status.cfg');
> + my $user_cfg = PVE::Cluster::cfs_read_file('user.cfg');
> + my $pools = $user_cfg->{pools};
> + my $pool_membership = $user_cfg->{vms};
> + my $pool_usage = {};
>
> eval {
> update_node_status($status_cfg);
> @@ -548,17 +593,23 @@ sub update_status {
> syslog('err', "node status update error: $err") if $err;
>
> eval {
> - update_qemu_status($status_cfg);
> + update_qemu_status($status_cfg, $pool_membership, $pool_usage);
This should be
```
- update_qemu_status($status_cfg, $pull_txn);
+ update_qemu_status($status_cfg, $pull_txn, $pool_membership, $pool_usage);
```
> };
> $err = $@;
> syslog('err', "qemu status update error: $err") if $err;
>
> eval {
> - update_lxc_status($status_cfg);
> + update_lxc_status($status_cfg, $pool_membership, $pool_usage);
```
- update_lxc_status($status_cfg, $pull_txn);
+ update_lxc_status($status_cfg, $pull_txn, $pool_membership, $pool_usage);
```
> };
> $err = $@;
> syslog('err', "lxc status update error: $err") if $err;
>
> + eval {
> + update_pool_usage($pool_usage);
> + };
> + $err =$@;
nit: missing whitespace between `=` and `$@`.
> + syslog('err', "pool usage status update error: $err") if $err;
> +
> eval {
> rebalance_lxc_containers();
> };
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 manager 4/4] ui: add pool limits and usage
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 4/4] ui: add pool limits and usage Fabian Grünbichler
@ 2024-12-19 16:07 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:07 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
>
> Notes:
> this is very "bare", obviously we'd want
> - a nicer grid/.. display of usage
> - a way to edit the limits
>
> I am not yet sure how to integrate this nicely, and wanted to get feedback on the rest first.
>
> v2:
> - fold in change I forgot to include in patch:
> (== vs ===, ? 1 : 0 vs just using the comparison result)
>
> add edit window
>
> www/manager6/pool/StatusView.js | 141 +++++++++++++++++++++++++++++++-
> 1 file changed, 140 insertions(+), 1 deletion(-)
>
> diff --git a/www/manager6/pool/StatusView.js b/www/manager6/pool/StatusView.js
> index 3d46b3b1a..293ee1443 100644
> --- a/www/manager6/pool/StatusView.js
> +++ b/www/manager6/pool/StatusView.js
> @@ -1,12 +1,103 @@
> +var labelWidth = 200;
> +
> +Ext.define('PVE.pool.LimitInputPanel', {
> + extend: 'Proxmox.panel.InputPanel',
> + alias: 'widget.pvePoolLimitInputPanel',
> +
> + onGetValues: function(values) {
> + let ret = PVE.Parser.printPropertyString(values, 'type');
> + if (ret === '') {
> + return { 'delete': 'limits' };
> + }
> + return { limits: ret };
> + },
> +
> + items: [
> + {
> + xtype: 'proxmoxintegerfield',
> + name: 'mem-config',
> + minValue: 0,
> + value: '0',
> + step: 32,
> + fieldLabel: gettext('Memory Config Limit') + ' (MiB)',
> + labelWidth: labelWidth,
> + allowBlank: false,
> + },
> + {
> + xtype: 'proxmoxintegerfield',
> + name: 'mem-run',
> + minValue: 0,
> + value: '0',
> + step: 32,
> + fieldLabel: gettext('Memory Running Limit') + ' (MiB)',
> + labelWidth: labelWidth,
> + allowBlank: false,
> + },
> + {
> + xtype: 'proxmoxintegerfield',
> + name: 'cpu-config',
> + minValue: 0,
> + value: '0',
> + fieldLabel: gettext('CPU Config Limit'),
> + labelWidth: labelWidth,
> + allowBlank: false,
> + },
> + {
> + xtype: 'proxmoxintegerfield',
> + name: 'cpu-run',
> + minValue: 0,
> + value: '0',
> + fieldLabel: gettext('CPU Running Limit'),
> + labelWidth: labelWidth,
> + allowBlank: false,
> + },
> + ],
> +});
> +
> +Ext.define('PVE.pool.EditLimits', {
> + extend: 'Proxmox.window.Edit',
> +
> + width: 640,
> + height: 480,
> +
> + initComponent: function() {
> + var me = this;
> +
> + if (!me.pool) {
> + throw "no pool specified";
> + }
> +
> + me.url = '/pools/';
> + me.method = 'PUT';
> + me.extraRequestParams = { poolid: me.pool };
> + me.autoLoad = true;
> +
> + Ext.apply(me, {
> + subject: gettext('Pool Limits'),
> + items: Ext.create('PVE.pool.LimitInputPanel'),
> + });
> +
> + me.callParent();
> +
> + me.load({
> + success: function(response) {
> + me.poolconfig = response.result.data[0];
> + let limits = me.poolconfig.limits;
> + me.setValues(PVE.Parser.parsePropertyString(limits));
> + },
> + });
> + },
> +});
> +
> Ext.define('PVE.pool.StatusView', {
> extend: 'Proxmox.grid.ObjectGrid',
> alias: ['widget.pvePoolStatusView'],
> - disabled: true,
>
> title: gettext('Status'),
> cwidth1: 150,
> interval: 30000,
> //height: 195,
> +
> initComponent: function() {
> var me = this;
>
> @@ -15,17 +106,65 @@ Ext.define('PVE.pool.StatusView', {
> throw "no pool specified";
> }
>
> + var reload = function() {
> + me.rstore.load();
> + };
> +
> var rows = {
> comment: {
> header: gettext('Comment'),
> renderer: Ext.String.htmlEncode,
> required: true,
> },
> + usage: {
> + header: gettext('Usage'),
> + required: false,
> + renderer: value => {
> + if (value === null) {
> + return '-';
> + } else {
> + let rendered = '';
> + let over = false;
> + for (const [first, second] of Object.entries(value)) {
We could append a `.sort()` to `Object.entries(value)` here...
> + if (first === 'over') {
> + over = second === 1;
> + } else {
> + for (const [kind, usage] of Object.entries(second)) {
...and here, as the `encode_json` / `decode_json` functions in Perl make
no guarantee about preserving the order of the sent keys, which made the
rendered key-value pairs jump around in the WebGUI.
Better yet, it could be sorted at the API endpoint instead, as we use
these values for user-facing displays and those should be in the same
order every time.
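One way to sketch the producing side (against `update_pool_usage` from
patch 2/4), assuming the JSON module in use supports `->canonical`
(JSON::XS and JSON::PP both do); this would at least make the broadcast
itself deterministic, though the API response would still need the same
treatment or a sort when rendering:
```perl
# encode with sorted keys instead of the order-randomizing encode_json
# (requires 'use JSON;' or an equivalent canonical-capable encoder)
my $new = { data => $usage, timestamp => $ctime };
PVE::Cluster::broadcast_node_kv('pool-usage', JSON->new->canonical(1)->encode($new));
```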
> + if (rendered === '') {
> + rendered = `${first}-${kind}=${usage}`;
> + } else {
> + rendered = rendered + `, ${first}-${kind}=${usage}`;
> + }
> + }
> + }
> + }
> + if (over) {
> + rendered = rendered + "\nover limit";
> + }
> + return rendered;
> + }
> + },
> + },
> + limits: {
> + header: gettext('Resource Limits'),
> + required: false,
> + },
> };
>
> Ext.apply(me, {
> url: "/api2/json/pools/?poolid=" + pool,
> rows: rows,
> + tbar: [
> + {
> + text: gettext('Edit Limits'),
> + iconCls: 'pve-itype-icon-qemu',
> + handler: function() {
> + var win = Ext.create('PVE.pool.EditLimits', { pool: pool });
> + win.on('destroy', reload);
> + win.show();
> + },
> + },
> + ],
> });
>
> me.callParent();
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 qemu-server 1/6] config: add pool usage helper
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 1/6] config: add pool usage helper Fabian Grünbichler
@ 2024-12-19 16:08 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:08 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> determining the usage values for the current config. pending values are taken
> into account if they are higher than the current value only, else it would be
> possible to easily circumvent config limits by setting non-hotpluggable pending
> values.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
> PVE/QemuConfig.pm | 30 ++++++++++++++++++++++++++++++
> 1 file changed, 30 insertions(+)
>
> diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> index 8e8a7828..1410bf14 100644
> --- a/PVE/QemuConfig.pm
> +++ b/PVE/QemuConfig.pm
> @@ -573,4 +573,34 @@ sub has_cloudinit {
> return $found;
> }
>
> +# for determining pool usage vs limits
> +#
> +# this gives the higher of pending or currently configured
> +sub get_pool_usage {
same comment as for pve-container: would it make sense to rename
this sub to `get_configured_vm_usage`?
> + my ($class, $conf) = @_;
> +
> + my $usage = {};
> +
> + my $get_max = sub {
> + my $max = 0;
> +
> + for my $curr (@_) {
> + $max = $curr if defined($curr) && $curr > $max;
> + }
> +
> + return $max;
> + };
> +
> + $usage->{sockets} = $get_max->($conf->{pending}->{sockets}, $conf->{sockets}, 1);
> + $usage->{cores} = $get_max->($conf->{pending}->{cores}, $conf->{cores}, 1);
> + $usage->{cpu} = $usage->{sockets} * $usage->{cores};
nit: same as for pve-container, we could name this 'cpus' just for
consistency's sake (as 'cpu' holds the cpu usage % and 'cpus' the
amount of cores).
> + $usage->{mem} = $get_max->(
> + get_current_memory($conf->{pending}->{memory}),
> + get_current_memory($conf->{memory})
> + );
> + $usage->{mem} *= 1024*1024;
> +
> + return $usage;
> +}
> +
> 1;
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 qemu-server 2/6] vmstatus: add usage values for pool limits
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 2/6] vmstatus: add usage values for pool limits Fabian Grünbichler
@ 2024-12-19 16:08 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:08 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> these are separate from the existing ones to allow changes on either end
> without side-effects, since the semantics are not quite the same.
>
> the conf values incorporate pending values (if higher than the current config
> value), and avoid clamping.
>
> the run values are currently identical to the regular ones.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
> PVE/QemuServer.pm | 29 +++++++++++++++++++++++++++++
> 1 file changed, 29 insertions(+)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 661613df..93eaaec5 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -2755,6 +2755,18 @@ our $vmstatus_return_properties = {
> optional => 1,
> renderer => 'bytes',
> },
> + confmem => {
> + description => "Configured memory in bytes, might be higher than 'maxmem'",
nit: consistency with "amount of memory"
> + type => 'integer',
> + optional => 1,
> + renderer => 'bytes',
> + },
> + runmem => {
> + description => "Currently plugged memory in bytes, if running.",
for consistency's sake we could also use "configured" instead of
"plugged" memory here (similar to pve-container).
nit: consistency with "amount of memory"
> + type => 'integer',
> + optional => 1,
> + renderer => 'bytes',
> + },
> maxdisk => {
> description => "Root disk size in bytes.",
> type => 'integer',
> @@ -2787,6 +2799,16 @@ our $vmstatus_return_properties = {
> type => 'number',
> optional => 1,
> },
> + confcpus => {
> + description => "Configured amount of CPUs, might be higher than 'cpus'.",
> + type => 'number',
> + optional => 1,
> + },
> + runcpus => {
> + description => "Currently used amount of CPUs.",
same comment as for the pve-container: "configured" instead of "used".
> + type => 'number',
> + optional => 1,
> + },
> lock => {
> description => "The current config lock, if any.",
> type => 'string',
> @@ -2839,6 +2861,8 @@ sub vmstatus {
> # fixme: better status?
> $d->{status} = $list->{$vmid}->{pid} ? 'running' : 'stopped';
>
> + my $usage = PVE::QemuConfig->get_pool_usage($conf);
> +
> my $size = PVE::QemuServer::Drive::bootdisk_size($storecfg, $conf);
> if (defined($size)) {
> $d->{disk} = 0; # no info available
> @@ -2850,11 +2874,16 @@ sub vmstatus {
>
> $d->{cpus} = ($conf->{sockets} || $defaults->{sockets})
> * ($conf->{cores} || $defaults->{cores});
> +
Git complained about this line:
Applying: vmstatus: add usage values for pool limits
.git/rebase-apply/patch:58: trailing whitespace.
warning: 1 line adds whitespace errors.
> $d->{cpus} = $cpucount if $d->{cpus} > $cpucount;
> $d->{cpus} = $conf->{vcpus} if $conf->{vcpus};
> + $d->{confcpus} = $usage->{cpu};
> + $d->{runcpus} = $d->{cpus};
>
> $d->{name} = $conf->{name} || "VM $vmid";
> $d->{maxmem} = get_current_memory($conf->{memory})*(1024*1024);
> + $d->{confmem} = $usage->{mem};
> + $d->{runmem} = $d->{maxmem};
>
> if ($conf->{balloon}) {
> $d->{balloon_min} = $conf->{balloon}*(1024*1024);
> --
> 2.39.2
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [pve-devel] [PATCH v2 qemu-server 4/6] update/hotplug: handle pool limits
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 4/6] update/hotplug: " Fabian Grünbichler
@ 2024-12-19 16:09 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:09 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> if the new value is higher than the old one, check against limits. if the old
> one is higher, then the change is always okay, to support reducing the usage in
> steps spread over multiple guests.
>
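If I read the intent correctly, a concrete example: with a pool CPU
limit of 8 and two guests at 6 cores each (say, after the limit was
lowered), shrinking one guest from 6 to 4 cores is still accepted even
though the pool total (10) stays above the limit, since only increases
are checked.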
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
> PVE/API2/Qemu.pm | 22 ++++++++++++++++++++++
> PVE/QemuServer.pm | 7 +++++++
> PVE/QemuServer/Memory.pm | 6 ++++++
> 3 files changed, 35 insertions(+)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index f0ff785c..adbc6557 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -1863,6 +1863,28 @@ my $update_vm_api = sub {
> }
> };
>
> + # check pool limits, but only if values increase, ignoring
> + # deletions and pending values
> + my $usage = PVE::QemuConfig->get_pool_usage($conf);
> + if (defined($param->{sockets}) || defined($param->{cores})) {
> + my $old = $usage->{cpu};
Git complained about this line:
Applying: update/hotplug: handle pool limits
.git/rebase-apply/patch:19: space before tab in indent.
my $old = $usage->{cpu};
warning: 1 line adds whitespace errors.
> + my $new = $param->{sockets} || $usage->{sockets};
> + $new *= ($param->{cores} || $usage->{cores});
> +
> + if ($new > $old) {
> + my $change = { cpu => $new - $old };
> + PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
> + }
> + } elsif (defined($param->{memory})) {
> + my $old = $usage->{mem};
> + my $new = PVE::QemuServer::Memory::get_current_memory($param->{memory})*1024*1024;
> +
> + if ($new > $old) {
> + my $change = { mem => $new - $old };
> + PVE::GuestHelpers::check_guest_pool_limit($vmid, $change);
> + }
> + }
> +
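Following the arithmetic above with concrete numbers (assuming
`$usage->{cpu}` is the old sockets * cores product): a VM at 2 sockets
x 2 cores (old usage 4) updated with `cores=4` gives new = 2 * 4 = 8,
so the checked delta is `cpu => 4`; a pure reduction triggers no check
at all.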
> foreach my $opt (@delete) {
> $modified->{$opt} = 1;
> $conf = PVE::QemuConfig->load_config($vmid); # update/reload
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 93eaaec5..be937ec1 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -4617,6 +4617,13 @@ sub qemu_cpu_hotplug {
> }
>
> return;
> + } else {
> + my $changes = {
> + absolute => 1,
> + running => 1,
> + cpu => $vcpus,
> + };
> + PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
> }
>
> my $currentrunningvcpus = mon_cmd($vmid, "query-cpus-fast");
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index f365f2d1..b27b8b2b 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -234,6 +234,12 @@ sub qemu_memory_hotplug {
> die "you cannot add more memory than max mem $MAX_MEM MB!\n" if $value > $MAX_MEM;
>
> if ($value > $memory) {
> + my $changes = {
> + absolute => 1,
Shouldn't this be relative, as the memory value is calculated as...
> + running => 1,
> + mem => $memory - $value,
the difference between the old and new value? (Also, with `$value >
$memory` in this branch, `$memory - $value` comes out negative, so the
operands look swapped.)
Also, I can only guess from the context that these values are in MiB,
which would mean that this should be converted to bytes, but I could be
totally wrong here.
Unfortunately, I'm unsure how to trigger this hotplug path without
changing the code too much: it only seems to be reached via
`vmconfig_hotplug_pending`, which is called from `do_import` and from
`update_vm_api` after another pool limit check, so that earlier check
always fails first. I'd be happy to know when this code actually gets
executed.
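For illustration, the shape I would have expected here (a sketch based
on my reading of `check_guest_pool_limit`, so take it with a grain of
salt):

```
# relative change: new minus old, converted from MiB to bytes
my $changes = {
    running => 1,
    mem => ($value - $memory) * 1024 * 1024,
};
PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
```

Note the dropped `absolute => 1`, since the delta is relative.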
> + };
> + PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
>
> my $numa_hostmap;
>
> --
> 2.39.2
* Re: [pve-devel] [PATCH v2 qemu-server 5/6] start: handle pool limits
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 5/6] start: " Fabian Grünbichler
@ 2024-12-19 16:09 ` Daniel Kral
0 siblings, 0 replies; 34+ messages in thread
From: Daniel Kral @ 2024-12-19 16:09 UTC (permalink / raw)
To: f.gruenbichler; +Cc: pve-devel
On 16/04/2024 14:20, Fabian Grünbichler wrote:
> if the start is not part of an incoming migration, check the VM against its
> pool's run limit.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
> PVE/QemuServer.pm | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index be937ec1..78b6ff96 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -5704,6 +5704,19 @@ sub vm_start_nolock {
> # But we do it above, so for now let's be consistent.
> $conf = PVE::QemuConfig->load_config($vmid); # update/reload
> }
> +
> + my $cpu_count = 1;
> + $cpu_count = $conf->{sockets} if $conf->{sockets};
> + $cpu_count *= ($conf->{cores} || 1);
> + $cpu_count = $conf->{vcpus} if $conf->{vcpus} && $conf->{vcpus} < $cpu_count;
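(Concretely: `sockets=2, cores=4` gives 8, and `vcpus=6` lowers that to
6; a `vcpus` value above sockets * cores is ignored here.)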
> +
> + my $changes = {
> + absolute => 1,
> + running => 1,
> + mem => get_current_memory($conf->{memory}),
Bug: the memory value here should be converted from MiB to bytes,
otherwise the pool limit is not enforced, since `check_guest_pool_limit`
expects the value in bytes, i.e. `get_current_memory(...) * 1024 * 1024`.
Spotted it by starting a second VM (which should have pushed the memory
usage over the limit) and wondering why it didn't fail. For the third VM
I got:
```
pool limits exhausted: (run) mem: 4096 + 0.001953125 over 3072
```
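The stray 0.001953125 fits that theory: it looks like a raw MiB value
(2048) being treated as bytes and rendered back in MiB (2048 / 1024^2 =
0.001953125). With the conversion applied, the hunk would presumably
read:

```
my $changes = {
    absolute => 1,
    running => 1,
    # MiB to bytes, as check_guest_pool_limit expects
    mem => get_current_memory($conf->{memory}) * 1024 * 1024,
    cpu => $cpu_count,
};
```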
> + cpu => $cpu_count,
> + };
> + PVE::GuestHelpers::check_guest_pool_limit($vmid, $changes);
> }
>
> # override offline migrated volumes, conf is out of date still
> --
> 2.39.2
Thread overview: 34+ messages
2024-04-16 12:20 [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 access-control 1/1] pools: define " Fabian Grünbichler
2024-12-19 16:01 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 1/7] config: add pool usage helper Fabian Grünbichler
2024-12-19 16:01 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 2/7] status: add pool usage fields Fabian Grünbichler
2024-12-19 16:02 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 3/7] create/restore/clone: handle pool limits Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 4/7] start: " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 5/7] hotplug: " Fabian Grünbichler
2024-12-19 16:03 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 6/7] rollback: " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 container 7/7] update: " Fabian Grünbichler
2024-12-19 16:04 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 guest-common 1/1] helpers: add pool limit/usage helpers Fabian Grünbichler
2024-12-19 16:04 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 1/4] api: pools: add limits management Fabian Grünbichler
2024-12-19 16:05 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 2/4] pvestatd: collect and broadcast pool usage Fabian Grünbichler
2024-12-19 16:06 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 3/4] api: return pool usage when queried Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 manager 4/4] ui: add pool limits and usage Fabian Grünbichler
2024-12-19 16:07 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 1/6] config: add pool usage helper Fabian Grünbichler
2024-12-19 16:08 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 2/6] vmstatus: add usage values for pool limits Fabian Grünbichler
2024-12-19 16:08 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 3/6] create/restore/clone: handle " Fabian Grünbichler
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 4/6] update/hotplug: " Fabian Grünbichler
2024-12-19 16:09 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 5/6] start: " Fabian Grünbichler
2024-12-19 16:09 ` Daniel Kral
2024-04-16 12:20 ` [pve-devel] [PATCH v2 qemu-server 6/6] rollback: " Fabian Grünbichler
2024-12-19 15:59 ` [pve-devel] [PATCH v2 qemu-server/pve-container 0/19] pool resource limits Daniel Kral