* [pve-devel] [PATCH qemu-server 1/5] move parse_number_sets() helper to helpers module
2023-09-04 11:39 [pve-devel] [PATCH-SERIES qemu-server] avoid cyclic use of main module in memory module Fiona Ebner
@ 2023-09-04 11:39 ` Fiona Ebner
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 2/5] move NUMA-related code into memory module Fiona Ebner
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Fiona Ebner @ 2023-09-04 11:39 UTC (permalink / raw)
To: pve-devel
In preparation to move parse_numa() to the memory module.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
PVE/QemuServer.pm | 16 +---------------
PVE/QemuServer/Helpers.pm | 15 +++++++++++++++
2 files changed, 16 insertions(+), 15 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bf1de179..ed604a80 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -49,7 +49,7 @@ use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_for
use PVE::QMPClient;
use PVE::QemuConfig;
-use PVE::QemuServer::Helpers qw(min_version config_aware_timeout windows_version);
+use PVE::QemuServer::Helpers qw(config_aware_timeout min_version parse_number_sets windows_version);
use PVE::QemuServer::Cloudinit;
use PVE::QemuServer::CGroup;
use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
@@ -1930,20 +1930,6 @@ sub print_vga_device {
return "$type,id=${vgaid}${memory}${max_outputs}${pciaddr}${edidoff}";
}
-sub parse_number_sets {
- my ($set) = @_;
- my $res = [];
- foreach my $part (split(/;/, $set)) {
- if ($part =~ /^\s*(\d+)(?:-(\d+))?\s*$/) {
- die "invalid range: $part ($2 < $1)\n" if defined($2) && $2 < $1;
- push @$res, [ $1, $2 ];
- } else {
- die "invalid range: $part\n";
- }
- }
- return $res;
-}
-
sub parse_numa {
my ($data) = @_;
diff --git a/PVE/QemuServer/Helpers.pm b/PVE/QemuServer/Helpers.pm
index ee3979df..99eb33a0 100644
--- a/PVE/QemuServer/Helpers.pm
+++ b/PVE/QemuServer/Helpers.pm
@@ -13,6 +13,7 @@ use base 'Exporter';
our @EXPORT_OK = qw(
min_version
config_aware_timeout
+parse_number_sets
windows_version
);
@@ -186,6 +187,20 @@ sub pvecfg_min_version {
die "internal error: cannot check version of invalid string '$verstr'";
}
+sub parse_number_sets {
+ my ($set) = @_;
+ my $res = [];
+ foreach my $part (split(/;/, $set)) {
+ if ($part =~ /^\s*(\d+)(?:-(\d+))?\s*$/) {
+ die "invalid range: $part ($2 < $1)\n" if defined($2) && $2 < $1;
+ push @$res, [ $1, $2 ];
+ } else {
+ die "invalid range: $part\n";
+ }
+ }
+ return $res;
+}
+
sub windows_version {
my ($ostype) = @_;
--
2.39.2
^ permalink raw reply [flat|nested] 7+ messages in thread
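For readers less familiar with the helper being moved: parse_number_sets() turns a semicolon-separated list of numbers and ranges such as `0-3;7` into an array of [start, end] pairs, with the end left undefined for single numbers. The following Python re-expression of that logic is an illustration only (the function name and return shape mirror the Perl above, but this is not code from the patch):

```python
import re

def parse_number_sets(s):
    """Parse "0-3;7"-style sets into [start, end] pairs, like the Perl helper."""
    res = []
    for part in s.split(";"):
        m = re.fullmatch(r"\s*(\d+)(?:-(\d+))?\s*", part)
        if not m:
            raise ValueError(f"invalid range: {part}")
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) is not None else None
        if end is not None and end < start:
            raise ValueError(f"invalid range: {part} ({end} < {start})")
        res.append([start, end])
    return res
```

As in the Perl, a single number yields a pair with an undefined upper bound, and a descending range like `3-1` is rejected with a die/exception.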
* [pve-devel] [PATCH qemu-server 2/5] move NUMA-related code into memory module
2023-09-04 11:39 [pve-devel] [PATCH-SERIES qemu-server] avoid cyclic use of main module in memory module Fiona Ebner
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 1/5] move parse_number_sets() helper to helpers module Fiona Ebner
@ 2023-09-04 11:39 ` Fiona Ebner
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 3/5] memory: replace deprecated check_running() call Fiona Ebner
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Fiona Ebner @ 2023-09-04 11:39 UTC (permalink / raw)
To: pve-devel
which is the only user of the parse_numa() helper. While at it, avoid
the duplication of MAX_NUMA.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
PVE/QemuServer.pm | 50 +++--------------------------------
PVE/QemuServer/Memory.pm | 56 ++++++++++++++++++++++++++++++++++++----
2 files changed, 54 insertions(+), 52 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ed604a80..61409a58 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -49,7 +49,7 @@ use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_for
use PVE::QMPClient;
use PVE::QemuConfig;
-use PVE::QemuServer::Helpers qw(config_aware_timeout min_version parse_number_sets windows_version);
+use PVE::QemuServer::Helpers qw(config_aware_timeout min_version windows_version);
use PVE::QemuServer::Cloudinit;
use PVE::QemuServer::CGroup;
use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
@@ -842,44 +842,9 @@ while (my ($k, $v) = each %$confdesc) {
my $MAX_NETS = 32;
my $MAX_SERIAL_PORTS = 4;
my $MAX_PARALLEL_PORTS = 3;
-my $MAX_NUMA = 8;
-my $numa_fmt = {
- cpus => {
- type => "string",
- pattern => qr/\d+(?:-\d+)?(?:;\d+(?:-\d+)?)*/,
- description => "CPUs accessing this NUMA node.",
- format_description => "id[-id];...",
- },
- memory => {
- type => "number",
- description => "Amount of memory this NUMA node provides.",
- optional => 1,
- },
- hostnodes => {
- type => "string",
- pattern => qr/\d+(?:-\d+)?(?:;\d+(?:-\d+)?)*/,
- description => "Host NUMA nodes to use.",
- format_description => "id[-id];...",
- optional => 1,
- },
- policy => {
- type => 'string',
- enum => [qw(preferred bind interleave)],
- description => "NUMA allocation policy.",
- optional => 1,
- },
-};
-PVE::JSONSchema::register_format('pve-qm-numanode', $numa_fmt);
-my $numadesc = {
- optional => 1,
- type => 'string', format => $numa_fmt,
- description => "NUMA topology.",
-};
-PVE::JSONSchema::register_standard_option("pve-qm-numanode", $numadesc);
-
-for (my $i = 0; $i < $MAX_NUMA; $i++) {
- $confdesc->{"numa$i"} = $numadesc;
+for (my $i = 0; $i < $PVE::QemuServer::Memory::MAX_NUMA; $i++) {
+ $confdesc->{"numa$i"} = $PVE::QemuServer::Memory::numadesc;
}
my $nic_model_list = [
@@ -1930,15 +1895,6 @@ sub print_vga_device {
return "$type,id=${vgaid}${memory}${max_outputs}${pciaddr}${edidoff}";
}
-sub parse_numa {
- my ($data) = @_;
-
- my $res = parse_property_string($numa_fmt, $data);
- $res->{cpus} = parse_number_sets($res->{cpus}) if defined($res->{cpus});
- $res->{hostnodes} = parse_number_sets($res->{hostnodes}) if defined($res->{hostnodes});
- return $res;
-}
-
# netX: e1000=XX:XX:XX:XX:XX:XX,bridge=vmbr0,rate=<mbps>
sub parse_net {
my ($data, $disable_mac_autogen) = @_;
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index d459a08d..c5c3a0a6 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -3,13 +3,59 @@ package PVE::QemuServer::Memory;
use strict;
use warnings;
+use PVE::JSONSchema qw(parse_property_string);
use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline dir_glob_foreach);
use PVE::Exception qw(raise raise_param_exc);
use PVE::QemuServer;
+use PVE::QemuServer::Helpers qw(parse_number_sets);
use PVE::QemuServer::Monitor qw(mon_cmd);
-my $MAX_NUMA = 8;
+our $MAX_NUMA = 8;
+
+my $numa_fmt = {
+ cpus => {
+ type => "string",
+ pattern => qr/\d+(?:-\d+)?(?:;\d+(?:-\d+)?)*/,
+ description => "CPUs accessing this NUMA node.",
+ format_description => "id[-id];...",
+ },
+ memory => {
+ type => "number",
+ description => "Amount of memory this NUMA node provides.",
+ optional => 1,
+ },
+ hostnodes => {
+ type => "string",
+ pattern => qr/\d+(?:-\d+)?(?:;\d+(?:-\d+)?)*/,
+ description => "Host NUMA nodes to use.",
+ format_description => "id[-id];...",
+ optional => 1,
+ },
+ policy => {
+ type => 'string',
+ enum => [qw(preferred bind interleave)],
+ description => "NUMA allocation policy.",
+ optional => 1,
+ },
+};
+PVE::JSONSchema::register_format('pve-qm-numanode', $numa_fmt);
+our $numadesc = {
+ optional => 1,
+ type => 'string', format => $numa_fmt,
+ description => "NUMA topology.",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-numanode", $numadesc);
+
+sub parse_numa {
+ my ($data) = @_;
+
+ my $res = parse_property_string($numa_fmt, $data);
+ $res->{cpus} = parse_number_sets($res->{cpus}) if defined($res->{cpus});
+ $res->{hostnodes} = parse_number_sets($res->{hostnodes}) if defined($res->{hostnodes});
+ return $res;
+}
+
my $STATICMEM = 1024;
my $_host_bits;
@@ -68,7 +114,7 @@ sub get_numa_node_list {
my @numa_map;
for (my $i = 0; $i < $MAX_NUMA; $i++) {
my $entry = $conf->{"numa$i"} or next;
- my $numa = PVE::QemuServer::parse_numa($entry) or next;
+ my $numa = parse_numa($entry) or next;
push @numa_map, $i;
}
return @numa_map if @numa_map;
@@ -88,7 +134,7 @@ sub get_numa_guest_to_host_map {
my $map = {};
for (my $i = 0; $i < $MAX_NUMA; $i++) {
my $entry = $conf->{"numa$i"} or next;
- my $numa = PVE::QemuServer::parse_numa($entry) or next;
+ my $numa = parse_numa($entry) or next;
$map->{$i} = print_numa_hostnodes($numa->{hostnodes});
}
return $map if %$map;
@@ -281,7 +327,7 @@ sub config {
my $numa_totalmemory = undef;
for (my $i = 0; $i < $MAX_NUMA; $i++) {
next if !$conf->{"numa$i"};
- my $numa = PVE::QemuServer::parse_numa($conf->{"numa$i"});
+ my $numa = parse_numa($conf->{"numa$i"});
next if !$numa;
# memory
die "missing NUMA node$i memory value\n" if !$numa->{memory};
@@ -484,7 +530,7 @@ sub hugepages_topology {
#custom numa topology
for (my $i = 0; $i < $MAX_NUMA; $i++) {
next if !$conf->{"numa$i"};
- my $numa = PVE::QemuServer::parse_numa($conf->{"numa$i"});
+ my $numa = parse_numa($conf->{"numa$i"});
next if !$numa;
$numa_custom_topology = 1;
--
2.39.2
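The relocated parse_numa() builds on parse_number_sets(): the property string is first split into key/value pairs (schema-driven in the real code via PVE::JSONSchema::parse_property_string), then the cpus and hostnodes values are expanded into number sets. A rough Python sketch, under the simplifying assumption that a property string is plain comma-separated key=value pairs (the real parser validates against $numa_fmt):

```python
import re

def parse_number_sets(s):
    # same semantics as the helper moved in patch 1: "0-3;7" -> [[0, 3], [7, None]]
    res = []
    for part in s.split(";"):
        m = re.fullmatch(r"\s*(\d+)(?:-(\d+))?\s*", part)
        if not m:
            raise ValueError(f"invalid range: {part}")
        lo, hi = int(m.group(1)), (int(m.group(2)) if m.group(2) else None)
        if hi is not None and hi < lo:
            raise ValueError(f"invalid range: {part}")
        res.append([lo, hi])
    return res

def parse_numa(data):
    # crude stand-in for PVE::JSONSchema::parse_property_string:
    # split "key=value,key=value" on commas (the real parser is schema-aware)
    res = dict(kv.split("=", 1) for kv in data.split(","))
    for key in ("cpus", "hostnodes"):
        if key in res:
            res[key] = parse_number_sets(res[key])
    return res
```

Note how the semicolon-separated number sets nest inside the comma-separated property string, e.g. `cpus=0-1;4,memory=1024,policy=bind`.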
* [pve-devel] [PATCH qemu-server 3/5] memory: replace deprecated check_running() call
2023-09-04 11:39 [pve-devel] [PATCH-SERIES qemu-server] avoid cyclic use of main module in memory module Fiona Ebner
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 1/5] move parse_number_sets() helper to helpers module Fiona Ebner
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 2/5] move NUMA-related code into memory module Fiona Ebner
@ 2023-09-04 11:39 ` Fiona Ebner
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 4/5] introduce QMPHelpers module Fiona Ebner
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Fiona Ebner @ 2023-09-04 11:39 UTC (permalink / raw)
To: pve-devel
PVE::QemuServer::check_running() does both
PVE::QemuConfig::assert_config_exists_on_node() and
PVE::QemuServer::Helpers::vm_running_locally().
The former isn't needed here when doing hotplug, because the API
already asserts that the VM config exists. It would also introduce a
new cyclic dependency between PVE::QemuServer::Memory <->
PVE::QemuConfig with the proposed virtio-mem patch set.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
PVE/QemuServer/Memory.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index c5c3a0a6..6ec5ceec 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -175,7 +175,7 @@ sub foreach_dimm{
sub qemu_memory_hotplug {
my ($vmid, $conf, $defaults, $value) = @_;
- return $value if !PVE::QemuServer::check_running($vmid);
+ return $value if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
my $sockets = $conf->{sockets} || 1;
--
2.39.2
* [pve-devel] [PATCH qemu-server 4/5] introduce QMPHelpers module
2023-09-04 11:39 [pve-devel] [PATCH-SERIES qemu-server] avoid cyclic use of main module in memory module Fiona Ebner
` (2 preceding siblings ...)
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 3/5] memory: replace deprecated check_running() call Fiona Ebner
@ 2023-09-04 11:39 ` Fiona Ebner
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 5/5] add memory parser Fiona Ebner
2023-09-18 15:12 ` [pve-devel] applied-series: [PATCH-SERIES qemu-server] avoid cyclic use of main module in memory module Thomas Lamprecht
5 siblings, 0 replies; 7+ messages in thread
From: Fiona Ebner @ 2023-09-04 11:39 UTC (permalink / raw)
To: pve-devel
moving qemu_{device,object}{add,del} helpers there for now.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module and generally for better modularity in the future.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
PVE/QemuServer.pm | 32 +-----------------------
PVE/QemuServer/Makefile | 1 +
PVE/QemuServer/Memory.pm | 9 ++++---
PVE/QemuServer/QMPHelpers.pm | 48 ++++++++++++++++++++++++++++++++++++
4 files changed, 55 insertions(+), 35 deletions(-)
create mode 100644 PVE/QemuServer/QMPHelpers.pm
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 61409a58..af63987a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -58,6 +58,7 @@ use PVE::QemuServer::Machine;
use PVE::QemuServer::Memory;
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
+use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
use PVE::QemuServer::USB;
my $have_sdn;
@@ -4377,21 +4378,6 @@ sub qemu_spice_usbredir_chardev_add {
));
}
-sub qemu_deviceadd {
- my ($vmid, $devicefull) = @_;
-
- $devicefull = "driver=".$devicefull;
- my %options = split(/[=,]/, $devicefull);
-
- mon_cmd($vmid, "device_add" , %options);
-}
-
-sub qemu_devicedel {
- my ($vmid, $deviceid) = @_;
-
- my $ret = mon_cmd($vmid, "device_del", id => $deviceid);
-}
-
sub qemu_iothread_add {
my ($vmid, $deviceid, $device) = @_;
@@ -4410,22 +4396,6 @@ sub qemu_iothread_del {
}
}
-sub qemu_objectadd {
- my ($vmid, $objectid, $qomtype) = @_;
-
- mon_cmd($vmid, "object-add", id => $objectid, "qom-type" => $qomtype);
-
- return 1;
-}
-
-sub qemu_objectdel {
- my ($vmid, $objectid) = @_;
-
- mon_cmd($vmid, "object-del", id => $objectid);
-
- return 1;
-}
-
sub qemu_driveadd {
my ($storecfg, $vmid, $device) = @_;
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index e4ed184c..ac26e56f 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -11,6 +11,7 @@ SOURCES=PCI.pm \
CPUConfig.pm \
CGroup.pm \
Drive.pm \
+ QMPHelpers.pm
.PHONY: install
install: ${SOURCES}
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index 6ec5ceec..023d1a91 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -10,6 +10,7 @@ use PVE::Exception qw(raise raise_param_exc);
use PVE::QemuServer;
use PVE::QemuServer::Helpers qw(parse_number_sets);
use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::QMPHelpers qw(qemu_devicedel qemu_objectdel);
our $MAX_NUMA = 8;
@@ -226,13 +227,13 @@ sub qemu_memory_hotplug {
}
if (my $err = $@) {
- eval { PVE::QemuServer::qemu_objectdel($vmid, "mem-$name"); };
+ eval { qemu_objectdel($vmid, "mem-$name"); };
die $err;
}
eval { mon_cmd($vmid, "device_add", driver => "pc-dimm", id => "$name", memdev => "mem-$name", node => $numanode) };
if (my $err = $@) {
- eval { PVE::QemuServer::qemu_objectdel($vmid, "mem-$name"); };
+ eval { qemu_objectdel($vmid, "mem-$name"); };
die $err;
}
#update conf after each succesful module hotplug
@@ -255,7 +256,7 @@ sub qemu_memory_hotplug {
my $retry = 0;
while (1) {
- eval { PVE::QemuServer::qemu_devicedel($vmid, $name) };
+ eval { qemu_devicedel($vmid, $name) };
sleep 3;
my $dimm_list = qemu_memdevices_list($vmid, 'dimm');
last if !$dimm_list->{$name};
@@ -266,7 +267,7 @@ sub qemu_memory_hotplug {
#update conf after each succesful module unplug
$conf->{memory} = $current_size;
- eval { PVE::QemuServer::qemu_objectdel($vmid, "mem-$name"); };
+ eval { qemu_objectdel($vmid, "mem-$name"); };
PVE::QemuConfig->write_config($vmid, $conf);
}
}
diff --git a/PVE/QemuServer/QMPHelpers.pm b/PVE/QemuServer/QMPHelpers.pm
new file mode 100644
index 00000000..d3a52327
--- /dev/null
+++ b/PVE/QemuServer/QMPHelpers.pm
@@ -0,0 +1,48 @@
+package PVE::QemuServer::QMPHelpers;
+
+use warnings;
+use strict;
+
+use PVE::QemuServer::Monitor qw(mon_cmd);
+
+use base 'Exporter';
+
+our @EXPORT_OK = qw(
+qemu_deviceadd
+qemu_devicedel
+qemu_objectadd
+qemu_objectdel
+);
+
+sub qemu_deviceadd {
+ my ($vmid, $devicefull) = @_;
+
+ $devicefull = "driver=".$devicefull;
+ my %options = split(/[=,]/, $devicefull);
+
+ mon_cmd($vmid, "device_add" , %options);
+}
+
+sub qemu_devicedel {
+ my ($vmid, $deviceid) = @_;
+
+ my $ret = mon_cmd($vmid, "device_del", id => $deviceid);
+}
+
+sub qemu_objectadd {
+ my ($vmid, $objectid, $qomtype) = @_;
+
+ mon_cmd($vmid, "object-add", id => $objectid, "qom-type" => $qomtype);
+
+ return 1;
+}
+
+sub qemu_objectdel {
+ my ($vmid, $objectid) = @_;
+
+ mon_cmd($vmid, "object-del", id => $objectid);
+
+ return 1;
+}
+
+1;
--
2.39.2
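The moved qemu_deviceadd() builds its QMP device_add arguments by prepending "driver=" and splitting the whole string on both '=' and ',', pairing the results into a hash. A hypothetical Python equivalent of just that splitting step (the function name is mine, not from the patch):

```python
import re

def device_options(devicefull):
    # mirrors the Perl: "driver=" is prepended, the whole string is split
    # on '=' and ',' alike, and the resulting list is paired into a dict
    parts = re.split(r"[=,]", "driver=" + devicefull)
    return dict(zip(parts[::2], parts[1::2]))
```

This carries the same limitation as the Perl list-to-hash assignment: a value containing '=' or ',' would misalign the key/value pairs.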
* [pve-devel] [PATCH qemu-server 5/5] add memory parser
2023-09-04 11:39 [pve-devel] [PATCH-SERIES qemu-server] avoid cyclic use of main module in memory module Fiona Ebner
` (3 preceding siblings ...)
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 4/5] introduce QMPHelpers module Fiona Ebner
@ 2023-09-04 11:39 ` Fiona Ebner
2023-09-18 15:12 ` [pve-devel] applied-series: [PATCH-SERIES qemu-server] avoid cyclic use of main module in memory module Thomas Lamprecht
5 siblings, 0 replies; 7+ messages in thread
From: Fiona Ebner @ 2023-09-04 11:39 UTC (permalink / raw)
To: pve-devel
From: Alexandre Derumier <aderumier@odiso.com>
In preparation for adding more properties to the memory configuration,
like the maximum amount of hotpluggable memory and whether virtio-mem
devices should be used.
This also allows getting rid of the cyclic include of PVE::QemuServer
in the memory module.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[FE: also convert new usage in get_derived_property
remove cyclic include of PVE::QemuServer
add commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
A versioned Breaks on pve-ha-manager < 4.0.1 is needed.
PVE/API2/Qemu.pm | 7 ++--
PVE/QemuConfig.pm | 8 ++---
PVE/QemuMigrate.pm | 6 ++--
PVE/QemuServer.pm | 27 +++++++--------
PVE/QemuServer/Helpers.pm | 3 +-
PVE/QemuServer/Memory.pm | 71 +++++++++++++++++++++++++++++++--------
6 files changed, 83 insertions(+), 39 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9606e72e..774b0c71 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -32,6 +32,7 @@ use PVE::QemuServer::Drive;
use PVE::QemuServer::ImportDisk;
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::Machine;
+use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::PCI;
use PVE::QemuServer::USB;
use PVE::QemuMigrate;
@@ -1573,8 +1574,6 @@ my $update_vm_api = sub {
my $storecfg = PVE::Storage::config();
- my $defaults = PVE::QemuServer::load_defaults();
-
&$resolve_cdrom_alias($param);
# now try to verify all parameters
@@ -1694,7 +1693,9 @@ my $update_vm_api = sub {
}
if ($param->{memory} || defined($param->{balloon})) {
- my $maxmem = $param->{memory} || $conf->{pending}->{memory} || $conf->{memory} || $defaults->{memory};
+
+ my $memory = $param->{memory} || $conf->{pending}->{memory} || $conf->{memory};
+ my $maxmem = get_current_memory($memory);
my $balloon = defined($param->{balloon}) ? $param->{balloon} : $conf->{pending}->{balloon} || $conf->{balloon};
die "balloon value too large (must be smaller than assigned memory)\n"
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 10e69299..d49d8823 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -12,6 +12,7 @@ use PVE::QemuServer::Helpers;
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer;
use PVE::QemuServer::Machine;
+use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::Storage;
use PVE::Tools;
use PVE::Format qw(render_bytes render_duration);
@@ -208,8 +209,7 @@ sub __snapshot_save_vmstate {
$target = PVE::QemuServer::find_vmstate_storage($conf, $storecfg);
}
- my $defaults = PVE::QemuServer::load_defaults();
- my $mem_size = $conf->{memory} // $defaults->{memory};
+ my $mem_size = get_current_memory($conf->{memory});
my $driver_state_size = 500; # assume 500MB is enough to safe all driver state;
# our savevm-start does live-save of the memory until the space left in the
# volume is just enough for the remaining memory content + internal state
@@ -549,8 +549,8 @@ sub get_derived_property {
my $cpus =
($conf->{sockets} || $defaults->{sockets}) * ($conf->{cores} || $defaults->{cores});
return $conf->{vcpus} || $cpus;
- } elsif ($name eq 'max-memory') {
- return ($conf->{memory} || $defaults->{memory}) * 1024 * 1024;
+ } elsif ($name eq 'max-memory') { # current usage maximum, not maximum hotpluggable
+ return get_current_memory($conf->{memory}) * 1024 * 1024;
} else {
die "unknown derived property - $name\n";
}
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 506911e2..f41c61f7 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -26,6 +26,7 @@ use PVE::QemuServer::Drive;
use PVE::QemuServer::Helpers qw(min_version);
use PVE::QemuServer::Machine;
use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer;
use PVE::AbstractMigrate;
@@ -1023,7 +1024,8 @@ sub phase2_start_remote_cluster {
my $remote_vmid = $self->{opts}->{remote}->{vmid};
# like regular start but with some overhead accounted for
- my $timeout = PVE::QemuServer::Helpers::config_aware_timeout($self->{vmconf}) + 10;
+ my $memory = get_current_memory($self->{vmconf}->{memory});
+ my $timeout = PVE::QemuServer::Helpers::config_aware_timeout($self->{vmconf}, $memory) + 10;
my $res = PVE::Tunnel::write_tunnel($self->{tunnel}, $timeout, "start", $params);
@@ -1178,7 +1180,7 @@ sub phase2 {
$qemu_migrate_params->{'downtime-limit'} = int($migrate_downtime);
# set cachesize to 10% of the total memory
- my $memory = $conf->{memory} || $defaults->{memory};
+ my $memory = get_current_memory($conf->{memory});
my $cachesize = int($memory * 1048576 / 10);
$cachesize = round_powerof2($cachesize);
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index af63987a..8c39cdfb 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -55,7 +55,7 @@ use PVE::QemuServer::CGroup;
use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit drive_is_cdrom drive_is_read_only parse_drive print_drive);
use PVE::QemuServer::Machine;
-use PVE::QemuServer::Memory;
+use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
@@ -350,11 +350,9 @@ my $confdesc = {
},
memory => {
optional => 1,
- type => 'integer',
- description => "Amount of RAM for the VM in MiB. This is the maximum available memory when"
- ." you use the balloon device.",
- minimum => 16,
- default => 512,
+ type => 'string',
+ description => "Memory properties.",
+ format => $PVE::QemuServer::Memory::memory_fmt
},
balloon => {
optional => 1,
@@ -2903,8 +2901,7 @@ sub vmstatus {
$d->{cpus} = $conf->{vcpus} if $conf->{vcpus};
$d->{name} = $conf->{name} || "VM $vmid";
- $d->{maxmem} = $conf->{memory} ? $conf->{memory}*(1024*1024)
- : $defaults->{memory}*(1024*1024);
+ $d->{maxmem} = get_current_memory($conf->{memory})*(1024*1024);
if ($conf->{balloon}) {
$d->{balloon_min} = $conf->{balloon}*(1024*1024);
@@ -3849,7 +3846,7 @@ sub config_to_command {
}
PVE::QemuServer::Memory::config(
- $conf, $vmid, $sockets, $cores, $defaults, $hotplug_features->{memory}, $cmd);
+ $conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd);
push @$cmd, '-S' if $conf->{freeze};
@@ -4987,7 +4984,7 @@ sub vmconfig_hotplug_pending {
# enable balloon device is not hotpluggable
die "skip\n" if defined($conf->{balloon}) && $conf->{balloon} == 0;
# here we reset the ballooning value to memory
- my $balloon = $conf->{memory} || $defaults->{memory};
+ my $balloon = get_current_memory($conf->{memory});
mon_cmd($vmid, "balloon", value => $balloon*1024*1024);
} elsif ($fast_plug_option->{$opt}) {
# do nothing
@@ -5000,7 +4997,7 @@ sub vmconfig_hotplug_pending {
vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, $force);
} elsif ($opt =~ m/^memory$/) {
die "skip\n" if !$hotplug_features->{memory};
- PVE::QemuServer::Memory::qemu_memory_hotplug($vmid, $conf, $defaults);
+ PVE::QemuServer::Memory::qemu_memory_hotplug($vmid, $conf);
} elsif ($opt eq 'cpuunits') {
$cgroup->change_cpu_shares(undef);
} elsif ($opt eq 'cpulimit') {
@@ -5055,7 +5052,8 @@ sub vmconfig_hotplug_pending {
# allow manual ballooning if shares is set to zero
if ((defined($conf->{shares}) && ($conf->{shares} == 0))) {
- my $balloon = $conf->{pending}->{balloon} || $conf->{memory} || $defaults->{memory};
+ my $memory = get_current_memory($conf->{memory});
+ my $balloon = $conf->{pending}->{balloon} || $memory;
mon_cmd($vmid, "balloon", value => $balloon*1024*1024);
}
} elsif ($opt =~ m/^net(\d+)$/) {
@@ -5075,7 +5073,7 @@ sub vmconfig_hotplug_pending {
$vmid, $opt, $value, $arch, $machine_type);
} elsif ($opt =~ m/^memory$/) { #dimms
die "skip\n" if !$hotplug_features->{memory};
- $value = PVE::QemuServer::Memory::qemu_memory_hotplug($vmid, $conf, $defaults, $value);
+ $value = PVE::QemuServer::Memory::qemu_memory_hotplug($vmid, $conf, $value);
} elsif ($opt eq 'cpuunits') {
my $new_cpuunits = PVE::CGroup::clamp_cpu_shares($conf->{pending}->{$opt}); #clamp
$cgroup->change_cpu_shares($new_cpuunits);
@@ -5740,7 +5738,8 @@ sub vm_start_nolock {
push @$cmd, '-S';
}
- my $start_timeout = $params->{timeout} // config_aware_timeout($conf, $resume);
+ my $memory = get_current_memory($conf->{memory});
+ my $start_timeout = $params->{timeout} // config_aware_timeout($conf, $memory, $resume);
my $pci_reserve_list = [];
for my $device (values $pci_devices->%*) {
diff --git a/PVE/QemuServer/Helpers.pm b/PVE/QemuServer/Helpers.pm
index 99eb33a0..8817427a 100644
--- a/PVE/QemuServer/Helpers.pm
+++ b/PVE/QemuServer/Helpers.pm
@@ -144,8 +144,7 @@ sub version_cmp {
}
sub config_aware_timeout {
- my ($config, $is_suspended) = @_;
- my $memory = $config->{memory};
+ my ($config, $memory, $is_suspended) = @_;
my $timeout = 30;
# Based on user reported startup time for vm with 512GiB @ 4-5 minutes
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index 023d1a91..f365f2d1 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -7,11 +7,16 @@ use PVE::JSONSchema qw(parse_property_string);
use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline dir_glob_foreach);
use PVE::Exception qw(raise raise_param_exc);
-use PVE::QemuServer;
use PVE::QemuServer::Helpers qw(parse_number_sets);
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::QMPHelpers qw(qemu_devicedel qemu_objectdel);
+use base qw(Exporter);
+
+our @EXPORT_OK = qw(
+get_current_memory
+);
+
our $MAX_NUMA = 8;
my $numa_fmt = {
@@ -59,6 +64,33 @@ sub parse_numa {
my $STATICMEM = 1024;
+our $memory_fmt = {
+ current => {
+ description => "Current amount of online RAM for the VM in MiB. This is the maximum available memory when"
+ ." you use the balloon device.",
+ type => 'integer',
+ default_key => 1,
+ minimum => 16,
+ default => 512,
+ },
+};
+
+sub print_memory {
+ my $memory = shift;
+
+ return PVE::JSONSchema::print_property_string($memory, $memory_fmt);
+}
+
+sub parse_memory {
+ my ($value) = @_;
+
+ return { current => $memory_fmt->{current}->{default} } if !defined($value);
+
+ my $res = PVE::JSONSchema::parse_property_string($memory_fmt, $value);
+
+ return $res;
+}
+
my $_host_bits;
sub get_host_phys_address_bits {
return $_host_bits if defined($_host_bits);
@@ -110,6 +142,13 @@ my sub get_max_mem {
return $bits_to_max_mem > 4*1024*1024 ? 4*1024*1024 : $bits_to_max_mem;
}
+sub get_current_memory {
+ my ($value) = @_;
+
+ my $memory = parse_memory($value);
+ return $memory->{current};
+}
+
sub get_numa_node_list {
my ($conf) = @_;
my @numa_map;
@@ -174,16 +213,19 @@ sub foreach_dimm{
}
sub qemu_memory_hotplug {
- my ($vmid, $conf, $defaults, $value) = @_;
+ my ($vmid, $conf, $value) = @_;
return $value if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
+ my $oldmem = parse_memory($conf->{memory});
+ my $newmem = parse_memory($value);
+
+ return $value if $newmem->{current} == $oldmem->{current};
+
+ my $memory = $oldmem->{current};
+ $value = $newmem->{current};
+
my $sockets = $conf->{sockets} || 1;
-
- my $memory = $conf->{memory} || $defaults->{memory};
- $value = $defaults->{memory} if !$value;
- return $value if $value == $memory;
-
my $static_memory = $STATICMEM;
$static_memory = $static_memory * $sockets if ($conf->{hugepages} && $conf->{hugepages} == 1024);
@@ -198,7 +240,7 @@ sub qemu_memory_hotplug {
foreach_dimm($conf, $vmid, $value, $static_memory, sub {
my ($conf, $vmid, $name, $dimm_size, $numanode, $current_size, $memory) = @_;
- return if $current_size <= $conf->{memory};
+ return if $current_size <= get_current_memory($conf->{memory});
if ($conf->{hugepages}) {
$numa_hostmap = get_numa_guest_to_host_map($conf) if !$numa_hostmap;
@@ -237,7 +279,8 @@ sub qemu_memory_hotplug {
die $err;
}
#update conf after each succesful module hotplug
- $conf->{memory} = $current_size;
+ $newmem->{current} = $current_size;
+ $conf->{memory} = print_memory($newmem);
PVE::QemuConfig->write_config($vmid, $conf);
});
@@ -265,7 +308,8 @@ sub qemu_memory_hotplug {
}
$current_size -= $dimm_size;
#update conf after each succesful module unplug
- $conf->{memory} = $current_size;
+ $newmem->{current} = $current_size;
+ $conf->{memory} = print_memory($newmem);
eval { qemu_objectdel($vmid, "mem-$name"); };
PVE::QemuConfig->write_config($vmid, $conf);
@@ -292,9 +336,9 @@ sub qemu_memdevices_list {
}
sub config {
- my ($conf, $vmid, $sockets, $cores, $defaults, $hotplug, $cmd) = @_;
+ my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
- my $memory = $conf->{memory} || $defaults->{memory};
+ my $memory = get_current_memory($conf->{memory});
my $static_memory = 0;
if ($hotplug) {
@@ -515,8 +559,7 @@ sub hugepages_topology {
return if !$conf->{numa};
- my $defaults = PVE::QemuServer::load_defaults();
- my $memory = $conf->{memory} || $defaults->{memory};
+ my $memory = get_current_memory($conf->{memory});
my $static_memory = 0;
my $sockets = $conf->{sockets} || 1;
my $numa_custom_topology = undef;
--
2.39.2
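The new memory format makes `current` the default_key, so a plain number like "2048" and an explicit "current=2048" parse identically, and an unset value falls back to the schema default of 512. A simplified Python sketch of the parse_memory()/get_current_memory() behavior, again assuming a property string is plain comma-separated key=value pairs:

```python
MEMORY_DEFAULT = 512  # schema default for 'current', in MiB

def parse_memory(value):
    # 'current' is the default_key, so a bare value means current=<value>
    if value is None:
        return {"current": MEMORY_DEFAULT}
    res = {}
    for kv in str(value).split(","):
        if "=" in kv:
            k, v = kv.split("=", 1)
        else:
            k, v = "current", kv
        res[k] = int(v) if k == "current" else v
    return res

def get_current_memory(value):
    return parse_memory(value)["current"]
```

This is why old configs keep working: a legacy integer `memory: 2048` is still a valid property string, while callers such as config_aware_timeout() now receive the already-extracted current amount.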
* [pve-devel] applied-series: [PATCH-SERIES qemu-server] avoid cyclic use of main module in memory module
2023-09-04 11:39 [pve-devel] [PATCH-SERIES qemu-server] avoid cyclic use of main module in memory module Fiona Ebner
` (4 preceding siblings ...)
2023-09-04 11:39 ` [pve-devel] [PATCH qemu-server 5/5] add memory parser Fiona Ebner
@ 2023-09-18 15:12 ` Thomas Lamprecht
5 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2023-09-18 15:12 UTC (permalink / raw)
To: Proxmox VE development discussion, Fiona Ebner
On 04/09/2023 at 13:39, Fiona Ebner wrote:
> Mostly cleanup, moving some code out of the main module and as
> indirect preparation for Alexandre's series reworking memory hotplug
> [0]. At one point during development, the cyclic use caused actual
> problems [1]. Let's avoid any future surprises caused by the cyclic
> use and get rid of it.
>
> Also includes the first patch of Alexandre's series, because that
> gets rid of the PVE::QemuServer::defaults() call.
>
> Because of that patch a versioned Breaks for pve-ha-manager < 4.0.1 is
> needed, because earlier versions don't expect a property string.
>
> [0]: https://lists.proxmox.com/pipermail/pve-devel/2023-June/057690.html
> [1]: https://lists.proxmox.com/pipermail/pve-devel/2023-January/055356.html
>
>
> Alexandre Derumier (1):
> add memory parser
>
> Fiona Ebner (4):
> move parse_number_sets() helper to helpers module
> move NUMA-related code into memory module
> memory: replace deprecated check_running() call
> introduce QMPHelpers module
>
> PVE/API2/Qemu.pm | 7 +-
> PVE/QemuConfig.pm | 8 +-
> PVE/QemuMigrate.pm | 6 +-
> PVE/QemuServer.pm | 123 +++++--------------------------
> PVE/QemuServer/Helpers.pm | 18 ++++-
> PVE/QemuServer/Makefile | 1 +
> PVE/QemuServer/Memory.pm | 138 +++++++++++++++++++++++++++++------
> PVE/QemuServer/QMPHelpers.pm | 48 ++++++++++++
> 8 files changed, 208 insertions(+), 141 deletions(-)
> create mode 100644 PVE/QemuServer/QMPHelpers.pm
>
applied series, thanks!