* [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three
@ 2025-06-25 15:56 Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 01/31] print ovmf commandline: collect hardware parameters into hash argument Fiona Ebner
` (31 more replies)
0 siblings, 32 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Changes to the OVMF patches (left over from part two):
* 01/31 is new
* keep get_efivars_size() as a wrapper in QemuServer module
* keep early check for CPU bitness in QemuServer module
* use read-only flag for OVMF code
* collect some parameters into $hw_info hash, avoid querying AMD-SEV
type inside the OVMF module
Splits out a Network module, qga_check_running(),
find_vmstate_storage(), QemuMigrate::Helpers, and a RunState module,
with the goal of splitting out a BlockJob module, where
blockdev_mirror() will be added.
I need some more time to make zeroinit work properly; I have an
initial QEMU patch locally, but it still needs to be finalized. I also
need to check why exactly block-export-add fails without Alexandre's
patch, since we do query the node name there. We shouldn't use the top
node there in any case, because we don't want to be limited by
throttling limits intended for the guest during migration.
Therefore, the patches from 24/31 onwards are RFC; they are not
finalized and are included just for context and easier testing for
reviewers.
Fiona Ebner (31):
print ovmf commandline: collect hardware parameters into hash argument
introduce OVMF module
ovmf: add support for using blockdev
cfg2cmd: ovmf: support print_ovmf_commandline() returning machine
flags
assume that SDN is available
schema: remove unused pve-qm-ipconfig standard option
remove unused $nic_model_list_txt variable
introduce Network module
agent: drop unused $noerr argument from helpers
agent: code style: order module imports according to style guide
agent: avoid dependency on QemuConfig module
agent: avoid use of deprecated check_running() function
agent: move qga_check_running() to agent module
move find_vmstate_storage() helper to QemuConfig module
introduce QemuMigrate::Helpers module
introduce RunState module
code cleanup: drive mirror: do not prefix calls to function in the
same module
introduce BlockJob module
drive: die in get_drive_id() if argument misses relevant members
block job: add and use wrapper for mirror
drive mirror: add variable for device ID and make name for drive ID
precise
test: migration: factor out common mocking for mirror
block job: factor out helper for common mirror QMP options
block job: add blockdev mirror
blockdev: support using zeroinit filter
blockdev: make some functions private
clone disk: skip check for aio=default (io_uring) compatibility
starting with machine version 10.0
print drive device: don't reference any drive for 'none' starting with
machine version 10.0
blockdev: add support for NBD paths
command line: switch to blockdev starting with machine version 10.0
test: migration: update running machine to 10.0
src/PVE/API2/Qemu.pm | 44 +-
src/PVE/API2/Qemu/Agent.pm | 28 +-
src/PVE/CLI/qm.pm | 11 +-
src/PVE/Makefile | 1 +
src/PVE/QemuConfig.pm | 60 +-
src/PVE/QemuMigrate.pm | 43 +-
src/PVE/QemuMigrate/Helpers.pm | 146 ++
src/PVE/QemuMigrate/Makefile | 9 +
src/PVE/QemuServer.pm | 1537 +++--------------
src/PVE/QemuServer/Agent.pm | 57 +-
src/PVE/QemuServer/BlockJob.pm | 529 ++++++
src/PVE/QemuServer/Blockdev.pm | 40 +-
src/PVE/QemuServer/Cloudinit.pm | 18 +-
src/PVE/QemuServer/Drive.pm | 4 +
src/PVE/QemuServer/Makefile | 4 +
src/PVE/QemuServer/Network.pm | 324 ++++
src/PVE/QemuServer/OVMF.pm | 233 +++
src/PVE/QemuServer/RunState.pm | 185 ++
src/PVE/VZDump/QemuServer.pm | 3 +-
src/test/MigrationTest/QemuMigrateMock.pm | 58 +-
src/test/MigrationTest/QmMock.pm | 3 -
src/test/MigrationTest/Shared.pm | 11 +
src/test/cfg2cmd/aio.conf.cmd | 42 +-
src/test/cfg2cmd/bootorder-empty.conf.cmd | 13 +-
src/test/cfg2cmd/bootorder-legacy.conf.cmd | 13 +-
src/test/cfg2cmd/bootorder.conf.cmd | 13 +-
...putype-icelake-client-deprecation.conf.cmd | 7 +-
src/test/cfg2cmd/efi-raw-template.conf.cmd | 7 +-
src/test/cfg2cmd/efi-raw.conf.cmd | 7 +-
.../cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd | 7 +-
src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd | 7 +-
src/test/cfg2cmd/efidisk-on-rbd.conf.cmd | 7 +-
src/test/cfg2cmd/ide.conf.cmd | 15 +-
src/test/cfg2cmd/q35-ide.conf.cmd | 15 +-
.../q35-linux-hostpci-mapping.conf.cmd | 7 +-
.../q35-linux-hostpci-multifunction.conf.cmd | 7 +-
.../q35-linux-hostpci-template.conf.cmd | 10 +-
...q35-linux-hostpci-x-pci-overrides.conf.cmd | 7 +-
src/test/cfg2cmd/q35-linux-hostpci.conf.cmd | 7 +-
src/test/cfg2cmd/q35-simple.conf.cmd | 7 +-
src/test/cfg2cmd/seabios_serial.conf.cmd | 7 +-
src/test/cfg2cmd/sev-es.conf.cmd | 7 +-
src/test/cfg2cmd/sev-std.conf.cmd | 7 +-
src/test/cfg2cmd/simple-btrfs.conf.cmd | 16 +-
src/test/cfg2cmd/simple-cifs.conf.cmd | 16 +-
.../cfg2cmd/simple-disk-passthrough.conf.cmd | 9 +-
src/test/cfg2cmd/simple-lvm.conf.cmd | 12 +-
src/test/cfg2cmd/simple-lvmthin.conf.cmd | 12 +-
src/test/cfg2cmd/simple-rbd.conf.cmd | 28 +-
src/test/cfg2cmd/simple-virtio-blk.conf.cmd | 7 +-
.../cfg2cmd/simple-zfs-over-iscsi.conf.cmd | 16 +-
src/test/cfg2cmd/simple1-template.conf.cmd | 10 +-
src/test/cfg2cmd/simple1.conf.cmd | 7 +-
src/test/run_config2command_tests.pl | 22 +
src/test/run_qemu_migrate_tests.pl | 16 +-
src/test/snapshot-test.pm | 11 +-
src/usr/pve-bridge | 24 +-
57 files changed, 2213 insertions(+), 1560 deletions(-)
create mode 100644 src/PVE/QemuMigrate/Helpers.pm
create mode 100644 src/PVE/QemuMigrate/Makefile
create mode 100644 src/PVE/QemuServer/BlockJob.pm
create mode 100644 src/PVE/QemuServer/Network.pm
create mode 100644 src/PVE/QemuServer/OVMF.pm
create mode 100644 src/PVE/QemuServer/RunState.pm
--
2.47.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH qemu-server 01/31] print ovmf commandline: collect hardware parameters into hash argument
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Also avoids querying the AMD-SEV type again inside the function.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 63b4d469..6926182b 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -3465,11 +3465,12 @@ my sub should_disable_smm {
}
my sub print_ovmf_drive_commandlines {
- my ($conf, $storecfg, $vmid, $arch, $q35, $version_guard) = @_;
+ my ($conf, $storecfg, $vmid, $hw_info, $version_guard) = @_;
+
+ my ($amd_sev_type, $arch, $q35) = $hw_info->@{qw(amd-sev-type arch q35)};
my $d = $conf->{efidisk0} ? parse_drive('efidisk0', $conf->{efidisk0}) : undef;
- my $amd_sev_type = get_amd_sev_type($conf);
die "Attempting to configure SEV-SNP with pflash devices instead of using `-bios`\n"
if $amd_sev_type && $amd_sev_type eq 'snp';
@@ -3690,8 +3691,13 @@ sub config_to_command {
}
push $cmd->@*, '-bios', get_ovmf_files($arch, undef, undef, $amd_sev_type);
} else {
+ my $hw_info = {
+ 'amd-sev-type' => $amd_sev_type,
+ arch => $arch,
+ q35 => $q35,
+ };
my ($code_drive_str, $var_drive_str) =
- print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $arch, $q35, $version_guard);
+ print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $hw_info, $version_guard);
push $cmd->@*, '-drive', $code_drive_str;
push $cmd->@*, '-drive', $var_drive_str;
}
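
The hash-slice unpacking used by the reworked function can be sketched
in isolation (a minimal standalone example; the values assigned to the
$hw_info keys below are hypothetical, only the key names follow the
diff above):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Collect loosely related hardware parameters into a single hash
# argument, mirroring the $hw_info structure introduced above.
my $hw_info = {
    'amd-sev-type' => undef, # hypothetical: no AMD-SEV configured
    arch => 'x86_64',
    q35 => 1,
};

# Unpack via a postfix-dereference hash slice, as
# print_ovmf_drive_commandlines() now does (requires perl >= 5.24).
my ($amd_sev_type, $arch, $q35) = $hw_info->@{qw(amd-sev-type arch q35)};

print "arch=$arch q35=$q35\n";
```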
--
2.47.2
* [pve-devel] [PATCH qemu-server 02/31] introduce OVMF module
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Changes since previous series:
* keep get_efivars_size() as a wrapper in QemuServer module
* keep early check for CPU bitness in QemuServer module
src/PVE/API2/Qemu.pm | 3 +-
src/PVE/QemuServer.pm | 155 +++--------------------------
src/PVE/QemuServer/Makefile | 1 +
src/PVE/QemuServer/OVMF.pm | 166 +++++++++++++++++++++++++++++++
src/test/MigrationTest/Shared.pm | 4 +
5 files changed, 185 insertions(+), 144 deletions(-)
create mode 100644 src/PVE/QemuServer/OVMF.pm
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index ce6f362d..6830ea1e 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -36,6 +36,7 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::Machine;
use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::MetaInfo;
+use PVE::QemuServer::OVMF;
use PVE::QemuServer::PCI;
use PVE::QemuServer::QMPHelpers;
use PVE::QemuServer::RNG;
@@ -612,7 +613,7 @@ my sub create_disks : prototype($$$$$$$$$$$) {
"SEV-SNP uses consolidated read-only firmware and does not require an EFI disk\n"
if $amd_sev_type && $amd_sev_type eq 'snp';
- ($volid, $size) = PVE::QemuServer::create_efidisk(
+ ($volid, $size) = PVE::QemuServer::OVMF::create_efidisk(
$storecfg, $storeid, $vmid, $fmt, $arch, $disk, $smm, $amd_sev_type,
);
} elsif ($ds eq 'tpmstate0') {
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 6926182b..bb10f116 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -71,6 +71,7 @@ use PVE::QemuServer::Machine;
use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::MetaInfo;
use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::OVMF;
use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
use PVE::QemuServer::QemuImage;
use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
@@ -98,41 +99,6 @@ my sub vm_is_ha_managed {
return PVE::HA::Config::vm_is_ha_managed($vmid);
}
-my $EDK2_FW_BASE = '/usr/share/pve-edk2-firmware/';
-my $OVMF = {
- x86_64 => {
- '4m-no-smm' => [
- "$EDK2_FW_BASE/OVMF_CODE_4M.fd", "$EDK2_FW_BASE/OVMF_VARS_4M.fd",
- ],
- '4m-no-smm-ms' => [
- "$EDK2_FW_BASE/OVMF_CODE_4M.fd", "$EDK2_FW_BASE/OVMF_VARS_4M.ms.fd",
- ],
- '4m' => [
- "$EDK2_FW_BASE/OVMF_CODE_4M.secboot.fd", "$EDK2_FW_BASE/OVMF_VARS_4M.fd",
- ],
- '4m-ms' => [
- "$EDK2_FW_BASE/OVMF_CODE_4M.secboot.fd", "$EDK2_FW_BASE/OVMF_VARS_4M.ms.fd",
- ],
- '4m-sev' => [
- "$EDK2_FW_BASE/OVMF_CVM_CODE_4M.fd", "$EDK2_FW_BASE/OVMF_CVM_VARS_4M.fd",
- ],
- '4m-snp' => [
- "$EDK2_FW_BASE/OVMF_CVM_4M.fd",
- ],
- # FIXME: These are legacy 2MB-sized images that modern OVMF doesn't supports to build
- # anymore. how can we deperacate this sanely without breaking existing instances, or using
- # older backups and snapshot?
- default => [
- "$EDK2_FW_BASE/OVMF_CODE.fd", "$EDK2_FW_BASE/OVMF_VARS.fd",
- ],
- },
- aarch64 => {
- default => [
- "$EDK2_FW_BASE/AAVMF_CODE.fd", "$EDK2_FW_BASE/AAVMF_VARS.fd",
- ],
- },
-};
-
my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
# Note about locking: we use flock on the config file protect against concurrent actions.
@@ -3293,36 +3259,6 @@ sub vga_conf_has_spice {
return $1 || 1;
}
-sub get_ovmf_files($$$$) {
- my ($arch, $efidisk, $smm, $amd_sev_type) = @_;
-
- my $types = $OVMF->{$arch}
- or die "no OVMF images known for architecture '$arch'\n";
-
- my $type = 'default';
- if ($arch eq 'x86_64') {
- if ($amd_sev_type && $amd_sev_type eq 'snp') {
- $type = "4m-snp";
- my ($ovmf) = $types->{$type}->@*;
- die "EFI base image '$ovmf' not found\n" if !-f $ovmf;
- return ($ovmf);
- } elsif ($amd_sev_type) {
- $type = "4m-sev";
- } elsif (defined($efidisk->{efitype}) && $efidisk->{efitype} eq '4m') {
- $type = $smm ? "4m" : "4m-no-smm";
- $type .= '-ms' if $efidisk->{'pre-enrolled-keys'};
- } else {
- # TODO: log_warn about use of legacy images for x86_64 with Promxox VE 9
- }
- }
-
- my ($ovmf_code, $ovmf_vars) = $types->{$type}->@*;
- die "EFI base image '$ovmf_code' not found\n" if !-f $ovmf_code;
- die "EFI vars image '$ovmf_vars' not found\n" if !-f $ovmf_vars;
-
- return ($ovmf_code, $ovmf_vars);
-}
-
# To use query_supported_cpu_flags and query_understood_cpu_flags to get flags
# to use in a QEMU command line (-cpu element), first array_intersect the result
# of query_supported_ with query_understood_. This is necessary because:
@@ -3464,49 +3400,6 @@ my sub should_disable_smm {
&& $vga->{type} =~ m/^(serial\d+|none)$/;
}
-my sub print_ovmf_drive_commandlines {
- my ($conf, $storecfg, $vmid, $hw_info, $version_guard) = @_;
-
- my ($amd_sev_type, $arch, $q35) = $hw_info->@{qw(amd-sev-type arch q35)};
-
- my $d = $conf->{efidisk0} ? parse_drive('efidisk0', $conf->{efidisk0}) : undef;
-
- die "Attempting to configure SEV-SNP with pflash devices instead of using `-bios`\n"
- if $amd_sev_type && $amd_sev_type eq 'snp';
-
- my ($ovmf_code, $ovmf_vars) = get_ovmf_files($arch, $d, $q35, $amd_sev_type);
-
- my $var_drive_str = "if=pflash,unit=1,id=drive-efidisk0";
- if ($d) {
- my ($storeid, $volname) = PVE::Storage::parse_volume_id($d->{file}, 1);
- my ($path, $format) = $d->@{ 'file', 'format' };
- if ($storeid) {
- $path = PVE::Storage::path($storecfg, $d->{file});
- $format //= checked_volume_format($storecfg, $d->{file});
- } elsif (!defined($format)) {
- die "efidisk format must be specified\n";
- }
- # SPI flash does lots of read-modify-write OPs, without writeback this gets really slow #3329
- if ($path =~ m/^rbd:/) {
- $var_drive_str .= ',cache=writeback';
- $path .= ':rbd_cache_policy=writeback'; # avoid write-around, we *need* to cache writes too
- }
- $var_drive_str .= ",format=$format,file=$path";
-
- $var_drive_str .= ",size=" . (-s $ovmf_vars)
- if $format eq 'raw' && $version_guard->(4, 1, 2);
- $var_drive_str .= ',readonly=on' if drive_is_read_only($conf, $d);
- } else {
- log_warn("no efidisk configured! Using temporary efivars disk.");
- my $path = "/tmp/$vmid-ovmf.fd";
- PVE::Tools::file_copy($ovmf_vars, $path, -s $ovmf_vars);
- $var_drive_str .= ",format=raw,file=$path";
- $var_drive_str .= ",size=" . (-s $ovmf_vars) if $version_guard->(4, 1, 2);
- }
-
- return ("if=pflash,unit=0,format=raw,readonly=on,file=$ovmf_code", $var_drive_str);
-}
-
my sub get_vga_properties {
my ($conf, $arch, $machine_version, $winversion) = @_;
@@ -3684,23 +3577,15 @@ sub config_to_command {
die "OVMF (UEFI) BIOS is not supported on 32-bit CPU types\n"
if !$forcecpu && get_cpu_bitness($conf->{cpu}, $arch) == 32;
- my $amd_sev_type = get_amd_sev_type($conf);
- if ($amd_sev_type && $amd_sev_type eq 'snp') {
- if (defined($conf->{efidisk0})) {
- log_warn("EFI disks are not supported with SEV-SNP and will be ignored");
- }
- push $cmd->@*, '-bios', get_ovmf_files($arch, undef, undef, $amd_sev_type);
- } else {
- my $hw_info = {
- 'amd-sev-type' => $amd_sev_type,
- arch => $arch,
- q35 => $q35,
- };
- my ($code_drive_str, $var_drive_str) =
- print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $hw_info, $version_guard);
- push $cmd->@*, '-drive', $code_drive_str;
- push $cmd->@*, '-drive', $var_drive_str;
- }
+ my $hw_info = {
+ 'amd-sev-type' => get_amd_sev_type($conf),
+ arch => $arch,
+ q35 => $q35,
+ };
+ my $ovmf_cmd = PVE::QemuServer::OVMF::print_ovmf_commandline(
+ $conf, $storecfg, $vmid, $hw_info, $version_guard,
+ );
+ push $cmd->@*, $ovmf_cmd->@*;
}
if ($q35) { # tell QEMU to load q35 config early
@@ -8866,8 +8751,8 @@ sub get_efivars_size {
$efidisk //= $conf->{efidisk0} ? parse_drive('efidisk0', $conf->{efidisk0}) : undef;
my $smm = PVE::QemuServer::Machine::machine_type_is_q35($conf);
my $amd_sev_type = get_amd_sev_type($conf);
- my (undef, $ovmf_vars) = get_ovmf_files($arch, $efidisk, $smm, $amd_sev_type);
- return -s $ovmf_vars;
+
+ return PVE::QemuServer::OVMF::get_efivars_size($arch, $efidisk, $smm, $amd_sev_type);
}
sub update_efidisk_size {
@@ -8890,22 +8775,6 @@ sub update_tpmstate_size {
$conf->{tpmstate0} = print_drive($disk);
}
-sub create_efidisk($$$$$$$$) {
- my ($storecfg, $storeid, $vmid, $fmt, $arch, $efidisk, $smm, $amd_sev_type) = @_;
-
- my (undef, $ovmf_vars) = get_ovmf_files($arch, $efidisk, $smm, $amd_sev_type);
-
- my $vars_size_b = -s $ovmf_vars;
- my $vars_size = PVE::Tools::convert_size($vars_size_b, 'b' => 'kb');
- my $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $vars_size);
- PVE::Storage::activate_volumes($storecfg, [$volid]);
-
- PVE::QemuServer::QemuImage::convert($ovmf_vars, $volid, $vars_size_b);
- my $size = PVE::Storage::volume_size_info($storecfg, $volid, 3);
-
- return ($volid, $size / 1024);
-}
-
sub vm_iothreads_list {
my ($vmid) = @_;
diff --git a/src/PVE/QemuServer/Makefile b/src/PVE/QemuServer/Makefile
index a34ec83b..dd6fe505 100644
--- a/src/PVE/QemuServer/Makefile
+++ b/src/PVE/QemuServer/Makefile
@@ -14,6 +14,7 @@ SOURCES=Agent.pm \
Memory.pm \
MetaInfo.pm \
Monitor.pm \
+ OVMF.pm \
PCI.pm \
QemuImage.pm \
QMPHelpers.pm \
diff --git a/src/PVE/QemuServer/OVMF.pm b/src/PVE/QemuServer/OVMF.pm
new file mode 100644
index 00000000..66da21ce
--- /dev/null
+++ b/src/PVE/QemuServer/OVMF.pm
@@ -0,0 +1,166 @@
+package PVE::QemuServer::OVMF;
+
+use strict;
+use warnings;
+
+use PVE::RESTEnvironment qw(log_warn);
+use PVE::Storage;
+use PVE::Tools;
+
+use PVE::QemuServer::Drive qw(checked_volume_format drive_is_read_only parse_drive print_drive);
+use PVE::QemuServer::QemuImage;
+
+my $EDK2_FW_BASE = '/usr/share/pve-edk2-firmware/';
+my $OVMF = {
+ x86_64 => {
+ '4m-no-smm' => [
+ "$EDK2_FW_BASE/OVMF_CODE_4M.fd", "$EDK2_FW_BASE/OVMF_VARS_4M.fd",
+ ],
+ '4m-no-smm-ms' => [
+ "$EDK2_FW_BASE/OVMF_CODE_4M.fd", "$EDK2_FW_BASE/OVMF_VARS_4M.ms.fd",
+ ],
+ '4m' => [
+ "$EDK2_FW_BASE/OVMF_CODE_4M.secboot.fd", "$EDK2_FW_BASE/OVMF_VARS_4M.fd",
+ ],
+ '4m-ms' => [
+ "$EDK2_FW_BASE/OVMF_CODE_4M.secboot.fd", "$EDK2_FW_BASE/OVMF_VARS_4M.ms.fd",
+ ],
+ '4m-sev' => [
+ "$EDK2_FW_BASE/OVMF_CVM_CODE_4M.fd", "$EDK2_FW_BASE/OVMF_CVM_VARS_4M.fd",
+ ],
+ '4m-snp' => [
+ "$EDK2_FW_BASE/OVMF_CVM_4M.fd",
+ ],
+        # FIXME: These are legacy 2MB-sized images that modern OVMF no longer supports building.
+        # How can we deprecate these sanely without breaking existing instances or the use of
+        # older backups and snapshots?
+ default => [
+ "$EDK2_FW_BASE/OVMF_CODE.fd", "$EDK2_FW_BASE/OVMF_VARS.fd",
+ ],
+ },
+ aarch64 => {
+ default => [
+ "$EDK2_FW_BASE/AAVMF_CODE.fd", "$EDK2_FW_BASE/AAVMF_VARS.fd",
+ ],
+ },
+};
+
+my sub get_ovmf_files($$$$) {
+ my ($arch, $efidisk, $smm, $amd_sev_type) = @_;
+
+ my $types = $OVMF->{$arch}
+ or die "no OVMF images known for architecture '$arch'\n";
+
+ my $type = 'default';
+ if ($arch eq 'x86_64') {
+ if ($amd_sev_type && $amd_sev_type eq 'snp') {
+ $type = "4m-snp";
+ my ($ovmf) = $types->{$type}->@*;
+ die "EFI base image '$ovmf' not found\n" if !-f $ovmf;
+ return ($ovmf);
+ } elsif ($amd_sev_type) {
+ $type = "4m-sev";
+ } elsif (defined($efidisk->{efitype}) && $efidisk->{efitype} eq '4m') {
+ $type = $smm ? "4m" : "4m-no-smm";
+ $type .= '-ms' if $efidisk->{'pre-enrolled-keys'};
+ } else {
+            # TODO: log_warn about use of legacy images for x86_64 with Proxmox VE 9
+ }
+ }
+
+ my ($ovmf_code, $ovmf_vars) = $types->{$type}->@*;
+ die "EFI base image '$ovmf_code' not found\n" if !-f $ovmf_code;
+ die "EFI vars image '$ovmf_vars' not found\n" if !-f $ovmf_vars;
+
+ return ($ovmf_code, $ovmf_vars);
+}
+
+my sub print_ovmf_drive_commandlines {
+ my ($conf, $storecfg, $vmid, $hw_info, $version_guard) = @_;
+
+ my ($amd_sev_type, $arch, $q35) = $hw_info->@{qw(amd-sev-type arch q35)};
+
+ my $d = $conf->{efidisk0} ? parse_drive('efidisk0', $conf->{efidisk0}) : undef;
+
+ die "Attempting to configure SEV-SNP with pflash devices instead of using `-bios`\n"
+ if $amd_sev_type && $amd_sev_type eq 'snp';
+
+ my ($ovmf_code, $ovmf_vars) = get_ovmf_files($arch, $d, $q35, $amd_sev_type);
+
+ my $var_drive_str = "if=pflash,unit=1,id=drive-efidisk0";
+ if ($d) {
+ my ($storeid, $volname) = PVE::Storage::parse_volume_id($d->{file}, 1);
+ my ($path, $format) = $d->@{ 'file', 'format' };
+ if ($storeid) {
+ $path = PVE::Storage::path($storecfg, $d->{file});
+ $format //= checked_volume_format($storecfg, $d->{file});
+ } elsif (!defined($format)) {
+ die "efidisk format must be specified\n";
+ }
+ # SPI flash does lots of read-modify-write OPs, without writeback this gets really slow #3329
+ if ($path =~ m/^rbd:/) {
+ $var_drive_str .= ',cache=writeback';
+ $path .= ':rbd_cache_policy=writeback'; # avoid write-around, we *need* to cache writes too
+ }
+ $var_drive_str .= ",format=$format,file=$path";
+
+ $var_drive_str .= ",size=" . (-s $ovmf_vars)
+ if $format eq 'raw' && $version_guard->(4, 1, 2);
+ $var_drive_str .= ',readonly=on' if drive_is_read_only($conf, $d);
+ } else {
+ log_warn("no efidisk configured! Using temporary efivars disk.");
+ my $path = "/tmp/$vmid-ovmf.fd";
+ PVE::Tools::file_copy($ovmf_vars, $path, -s $ovmf_vars);
+ $var_drive_str .= ",format=raw,file=$path";
+ $var_drive_str .= ",size=" . (-s $ovmf_vars) if $version_guard->(4, 1, 2);
+ }
+
+ return ("if=pflash,unit=0,format=raw,readonly=on,file=$ovmf_code", $var_drive_str);
+}
+
+sub get_efivars_size {
+ my ($arch, $efidisk, $smm, $amd_sev_type) = @_;
+
+ my (undef, $ovmf_vars) = get_ovmf_files($arch, $efidisk, $smm, $amd_sev_type);
+ return -s $ovmf_vars;
+}
+
+sub create_efidisk($$$$$$$$) {
+ my ($storecfg, $storeid, $vmid, $fmt, $arch, $efidisk, $smm, $amd_sev_type) = @_;
+
+ my (undef, $ovmf_vars) = get_ovmf_files($arch, $efidisk, $smm, $amd_sev_type);
+
+ my $vars_size_b = -s $ovmf_vars;
+ my $vars_size = PVE::Tools::convert_size($vars_size_b, 'b' => 'kb');
+ my $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $vars_size);
+ PVE::Storage::activate_volumes($storecfg, [$volid]);
+
+ PVE::QemuServer::QemuImage::convert($ovmf_vars, $volid, $vars_size_b);
+ my $size = PVE::Storage::volume_size_info($storecfg, $volid, 3);
+
+ return ($volid, $size / 1024);
+}
+
+sub print_ovmf_commandline {
+ my ($conf, $storecfg, $vmid, $hw_info, $version_guard) = @_;
+
+ my $amd_sev_type = $hw_info->{'amd-sev-type'};
+
+ my $cmd = [];
+
+ if ($amd_sev_type && $amd_sev_type eq 'snp') {
+ if (defined($conf->{efidisk0})) {
+ log_warn("EFI disks are not supported with SEV-SNP and will be ignored");
+ }
+ push $cmd->@*, '-bios', get_ovmf_files($hw_info->{arch}, undef, undef, $amd_sev_type);
+ } else {
+ my ($code_drive_str, $var_drive_str) =
+ print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $hw_info, $version_guard);
+ push $cmd->@*, '-drive', $code_drive_str;
+ push $cmd->@*, '-drive', $var_drive_str;
+ }
+
+ return $cmd;
+}
+
+1;
diff --git a/src/test/MigrationTest/Shared.pm b/src/test/MigrationTest/Shared.pm
index 0b1ac7a0..e29cd1df 100644
--- a/src/test/MigrationTest/Shared.pm
+++ b/src/test/MigrationTest/Shared.pm
@@ -150,6 +150,10 @@ $qemu_server_module->mock(
vm_stop_cleanup => sub {
return;
},
+);
+
+our $qemu_server_ovmf_module = Test::MockModule->new("PVE::QemuServer::OVMF");
+$qemu_server_ovmf_module->mock(
get_efivars_size => sub {
return 128 * 1024;
},
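
The firmware-variant selection that moved into get_ovmf_files() can be
re-sketched standalone (logic paraphrased from the diff above; this
returns only the type key, not the file paths, so it runs without the
module — it is an illustration, not the actual code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Paraphrase of the OVMF image-type selection in get_ovmf_files():
# pick the firmware variant key from arch, efidisk options, SMM and
# AMD-SEV type.
sub ovmf_type {
    my ($arch, $efidisk, $smm, $amd_sev_type) = @_;
    my $type = 'default';
    if ($arch eq 'x86_64') {
        if ($amd_sev_type && $amd_sev_type eq 'snp') {
            return '4m-snp'; # consolidated image, no separate vars
        } elsif ($amd_sev_type) {
            return '4m-sev';
        } elsif (defined($efidisk->{efitype}) && $efidisk->{efitype} eq '4m') {
            $type = $smm ? '4m' : '4m-no-smm';
            $type .= '-ms' if $efidisk->{'pre-enrolled-keys'};
        }
    }
    return $type;
}

print ovmf_type('x86_64', { efitype => '4m', 'pre-enrolled-keys' => 1 }, 1, undef), "\n";
```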
--
2.47.2
* [pve-devel] [PATCH qemu-server 03/31] ovmf: add support for using blockdev
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Changes since previous series:
* use read-only flag for OVMF code
* collect some parameters into $hw_info hash, avoid querying AMD-SEV
type inside the OVMF module
src/PVE/QemuServer/OVMF.pm | 55 ++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/src/PVE/QemuServer/OVMF.pm b/src/PVE/QemuServer/OVMF.pm
index 66da21ce..ae2f6fab 100644
--- a/src/PVE/QemuServer/OVMF.pm
+++ b/src/PVE/QemuServer/OVMF.pm
@@ -3,10 +3,13 @@ package PVE::QemuServer::OVMF;
use strict;
use warnings;
+use JSON;
+
use PVE::RESTEnvironment qw(log_warn);
use PVE::Storage;
use PVE::Tools;
+use PVE::QemuServer::Blockdev;
use PVE::QemuServer::Drive qw(checked_volume_format drive_is_read_only parse_drive print_drive);
use PVE::QemuServer::QemuImage;
@@ -141,6 +144,58 @@ sub create_efidisk($$$$$$$$) {
return ($volid, $size / 1024);
}
+my sub generate_ovmf_blockdev {
+ my ($conf, $storecfg, $vmid, $hw_info) = @_;
+
+ my ($amd_sev_type, $arch, $q35) = $hw_info->@{qw(amd-sev-type arch q35)};
+
+ my $drive = $conf->{efidisk0} ? parse_drive('efidisk0', $conf->{efidisk0}) : undef;
+
+ die "Attempting to configure SEV-SNP with pflash devices instead of using `-bios`\n"
+ if $amd_sev_type && $amd_sev_type eq 'snp';
+
+ my ($ovmf_code, $ovmf_vars) = get_ovmf_files($arch, $drive, $q35, $amd_sev_type);
+
+ my $ovmf_code_blockdev = {
+ driver => 'raw',
+ file => { driver => 'file', filename => "$ovmf_code" },
+ 'node-name' => 'pflash0',
+ 'read-only' => JSON::true,
+ };
+
+ my $format;
+
+ if ($drive) {
+ my ($storeid, $volname) = PVE::Storage::parse_volume_id($drive->{file}, 1);
+ $format = $drive->{format};
+ if ($storeid) {
+ $format //= checked_volume_format($storecfg, $drive->{file});
+ } elsif (!defined($format)) {
+ die "efidisk format must be specified\n";
+ }
+ } else {
+ log_warn("no efidisk configured! Using temporary efivars disk.");
+ my $path = "/tmp/$vmid-ovmf.fd";
+ PVE::Tools::file_copy($ovmf_vars, $path, -s $ovmf_vars);
+ $drive = { file => $path };
+ $format = 'raw';
+ }
+
+ my $extra_blockdev_options = {};
+ # extra protection for templates, but SATA and IDE don't support it..
+ $extra_blockdev_options->{'read-only'} = 1 if drive_is_read_only($conf, $drive);
+
+ $extra_blockdev_options->{size} = -s $ovmf_vars if $format eq 'raw';
+
+ my $throttle_group = PVE::QemuServer::Blockdev::generate_throttle_group($drive);
+
+ my $ovmf_vars_blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
+ $storecfg, $drive, $extra_blockdev_options,
+ );
+
+ return ($ovmf_code_blockdev, $ovmf_vars_blockdev, $throttle_group);
+}
+
sub print_ovmf_commandline {
my ($conf, $storecfg, $vmid, $hw_info, $version_guard) = @_;
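
The read-only pflash node built above is plain structured data; its
shape can be shown in a minimal standalone sketch (node name and keys
as in the diff, the firmware path is a placeholder):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;

# Shape of the read-only OVMF code blockdev constructed in
# generate_ovmf_blockdev() above; the filename is a placeholder.
my $ovmf_code_blockdev = {
    driver => 'raw',
    file => { driver => 'file', filename => '/usr/share/pve-edk2-firmware/OVMF_CODE_4M.fd' },
    'node-name' => 'pflash0',
    'read-only' => JSON::PP::true,
};

# Serialized with sorted keys, roughly as it would be passed to QEMU
# via -blockdev.
print JSON::PP->new->canonical->encode($ovmf_code_blockdev), "\n";
```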
--
2.47.2
* [pve-devel] [PATCH qemu-server 04/31] cfg2cmd: ovmf: support print_ovmf_commandline() returning machine flags
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
This is in preparation for the switch to -blockdev, where it will be
necessary to specify the 'pflash0' and 'pflash1' machine flags.
Suggested-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes since previous version.
src/PVE/QemuServer.pm | 3 ++-
src/PVE/QemuServer/OVMF.pm | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index bb10f116..513652d6 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -3582,10 +3582,11 @@ sub config_to_command {
arch => $arch,
q35 => $q35,
};
- my $ovmf_cmd = PVE::QemuServer::OVMF::print_ovmf_commandline(
+ my ($ovmf_cmd, $ovmf_machine_flags) = PVE::QemuServer::OVMF::print_ovmf_commandline(
$conf, $storecfg, $vmid, $hw_info, $version_guard,
);
push $cmd->@*, $ovmf_cmd->@*;
+ push $machineFlags->@*, $ovmf_machine_flags->@*;
}
if ($q35) { # tell QEMU to load q35 config early
diff --git a/src/PVE/QemuServer/OVMF.pm b/src/PVE/QemuServer/OVMF.pm
index ae2f6fab..dde81eb7 100644
--- a/src/PVE/QemuServer/OVMF.pm
+++ b/src/PVE/QemuServer/OVMF.pm
@@ -202,6 +202,7 @@ sub print_ovmf_commandline {
my $amd_sev_type = $hw_info->{'amd-sev-type'};
my $cmd = [];
+ my $machine_flags = [];
if ($amd_sev_type && $amd_sev_type eq 'snp') {
if (defined($conf->{efidisk0})) {
@@ -215,7 +216,7 @@ sub print_ovmf_commandline {
push $cmd->@*, '-drive', $var_drive_str;
}
- return $cmd;
+ return ($cmd, $machine_flags);
}
1;
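
How the extra return value gets merged into the final -machine
argument can be sketched as follows (a hypothetical example: the
machine type and the pflash flag values are made up for illustration,
only the push/join pattern follows the diff above):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: machine flags returned by print_ovmf_commandline() are
# appended to the existing $machineFlags array and later joined into
# one -machine argument (all values here are hypothetical).
my $machine_flags = ['type=pc-q35-10.0'];
my $ovmf_machine_flags = [ 'pflash0=pflash0', 'pflash1=drive-efidisk0' ];

push $machine_flags->@*, $ovmf_machine_flags->@*;

print '-machine ', join(',', $machine_flags->@*), "\n";
```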
--
2.47.2
* [pve-devel] [PATCH qemu-server 05/31] assume that SDN is available
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
pve-manager >= 8.2.10 has a hard dependency on libpve-network-perl,
which includes the required modules.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 143 +++++++++++++++---------------------------
src/usr/pve-bridge | 21 ++-----
2 files changed, 56 insertions(+), 108 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 513652d6..97a9ad5a 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -35,6 +35,8 @@ use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
use PVE::Mapping::Dir;
use PVE::Mapping::PCI;
use PVE::Mapping::USB;
+use PVE::Network::SDN::Vnets;
+use PVE::Network::SDN::Zones;
use PVE::INotify;
use PVE::JSONSchema qw(get_standard_option parse_property_string);
use PVE::ProcFSTools;
@@ -80,13 +82,6 @@ use PVE::QemuServer::StateFile;
use PVE::QemuServer::USB;
use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
-my $have_sdn;
-eval {
- require PVE::Network::SDN::Zones;
- require PVE::Network::SDN::Vnets;
- $have_sdn = 1;
-};
-
my $have_ha_config;
eval {
require PVE::HA::Config;
@@ -5011,14 +5006,12 @@ sub vmconfig_hotplug_pending {
} elsif ($opt =~ m/^net(\d+)$/) {
die "skip\n" if !$hotplug_features->{network};
vm_deviceunplug($vmid, $conf, $opt);
- if ($have_sdn) {
- my $net = PVE::QemuServer::parse_net($conf->{$opt});
- PVE::Network::SDN::Vnets::del_ips_from_mac(
- $net->{bridge},
- $net->{macaddr},
- $conf->{name},
- );
- }
+ my $net = PVE::QemuServer::parse_net($conf->{$opt});
+ PVE::Network::SDN::Vnets::del_ips_from_mac(
+ $net->{bridge},
+ $net->{macaddr},
+ $conf->{name},
+ );
} elsif (is_valid_drivename($opt)) {
die "skip\n"
if !$hotplug_features->{disk} || $opt =~ m/(efidisk|ide|sata|tpmstate)(\d+)/;
@@ -5252,17 +5245,15 @@ sub vmconfig_apply_pending {
} elsif (defined($conf->{$opt}) && is_valid_drivename($opt)) {
vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, $force);
} elsif (defined($conf->{$opt}) && $opt =~ m/^net\d+$/) {
- if ($have_sdn) {
- my $net = PVE::QemuServer::parse_net($conf->{$opt});
- eval {
- PVE::Network::SDN::Vnets::del_ips_from_mac(
- $net->{bridge},
- $net->{macaddr},
- $conf->{name},
- );
- };
- warn if $@;
- }
+ my $net = PVE::QemuServer::parse_net($conf->{$opt});
+ eval {
+ PVE::Network::SDN::Vnets::del_ips_from_mac(
+ $net->{bridge},
+ $net->{macaddr},
+ $conf->{name},
+ );
+ };
+ warn $@ if $@;
}
};
if (my $err = $@) {
@@ -5288,8 +5279,6 @@ sub vmconfig_apply_pending {
parse_drive($opt, $conf->{$opt}),
);
} elsif (defined($conf->{pending}->{$opt}) && $opt =~ m/^net\d+$/) {
- return if !$have_sdn; # return from eval if SDN is not available
-
my $new_net = PVE::QemuServer::parse_net($conf->{pending}->{$opt});
if ($conf->{$opt}) {
my $old_net = PVE::QemuServer::parse_net($conf->{$opt});
@@ -5370,14 +5359,11 @@ sub vmconfig_update_net {
die "skip\n" if !$hotplug;
vm_deviceunplug($vmid, $conf, $opt);
- if ($have_sdn) {
- PVE::Network::SDN::Vnets::del_ips_from_mac(
- $oldnet->{bridge},
- $oldnet->{macaddr},
- $conf->{name},
- );
- }
-
+ PVE::Network::SDN::Vnets::del_ips_from_mac(
+ $oldnet->{bridge},
+ $oldnet->{macaddr},
+ $conf->{name},
+ );
} else {
die "internal error" if $opt !~ m/net(\d+)/;
@@ -5400,42 +5386,29 @@ sub vmconfig_update_net {
}
if (safe_string_ne($oldnet->{bridge}, $newnet->{bridge})) {
- if ($have_sdn) {
- PVE::Network::SDN::Vnets::del_ips_from_mac(
- $oldnet->{bridge},
- $oldnet->{macaddr},
- $conf->{name},
- );
- PVE::Network::SDN::Vnets::add_next_free_cidr(
- $newnet->{bridge},
- $conf->{name},
- $newnet->{macaddr},
- $vmid,
- undef,
- 1,
- );
- }
+ PVE::Network::SDN::Vnets::del_ips_from_mac(
+ $oldnet->{bridge},
+ $oldnet->{macaddr},
+ $conf->{name},
+ );
+ PVE::Network::SDN::Vnets::add_next_free_cidr(
+ $newnet->{bridge},
+ $conf->{name},
+ $newnet->{macaddr},
+ $vmid,
+ undef,
+ 1,
+ );
}
- if ($have_sdn) {
- PVE::Network::SDN::Zones::tap_plug(
- $iface,
- $newnet->{bridge},
- $newnet->{tag},
- $newnet->{firewall},
- $newnet->{trunks},
- $newnet->{rate},
- );
- } else {
- PVE::Network::tap_plug(
- $iface,
- $newnet->{bridge},
- $newnet->{tag},
- $newnet->{firewall},
- $newnet->{trunks},
- $newnet->{rate},
- );
- }
+ PVE::Network::SDN::Zones::tap_plug(
+ $iface,
+ $newnet->{bridge},
+ $newnet->{tag},
+ $newnet->{firewall},
+ $newnet->{trunks},
+ $newnet->{rate},
+ );
} elsif (safe_num_ne($oldnet->{rate}, $newnet->{rate})) {
# Rate can be applied on its own but any change above needs to
@@ -5458,14 +5431,12 @@ sub vmconfig_update_net {
}
if ($hotplug) {
- if ($have_sdn) {
- PVE::Network::SDN::Vnets::add_next_free_cidr(
- $newnet->{bridge}, $conf->{name}, $newnet->{macaddr}, $vmid, undef, 1,
- );
- PVE::Network::SDN::Vnets::add_dhcp_mapping(
- $newnet->{bridge}, $newnet->{macaddr}, $vmid, $conf->{name},
- );
- }
+ PVE::Network::SDN::Vnets::add_next_free_cidr(
+ $newnet->{bridge}, $conf->{name}, $newnet->{macaddr}, $vmid, undef, 1,
+ );
+ PVE::Network::SDN::Vnets::add_dhcp_mapping(
+ $newnet->{bridge}, $newnet->{macaddr}, $vmid, $conf->{name},
+ );
vm_deviceplug($storecfg, $conf, $vmid, $opt, $newnet, $arch, $machine_type);
} else {
die "skip\n";
@@ -9147,11 +9118,7 @@ sub add_nets_bridge_fdb {
log_warn("Interface '$iface' not attached to any bridge.");
next;
}
- if ($have_sdn) {
- PVE::Network::SDN::Zones::add_bridge_fdb($iface, $mac, $bridge);
- } elsif (-d "/sys/class/net/$bridge/bridge") { # avoid fdb management with OVS for now
- PVE::Network::add_bridge_fdb($iface, $mac);
- }
+ PVE::Network::SDN::Zones::add_bridge_fdb($iface, $mac, $bridge);
}
}
@@ -9166,19 +9133,13 @@ sub del_nets_bridge_fdb {
my $mac = $net->{macaddr} or next;
my $bridge = $net->{bridge};
- if ($have_sdn) {
- PVE::Network::SDN::Zones::del_bridge_fdb($iface, $mac, $bridge);
- } elsif (-d "/sys/class/net/$bridge/bridge") { # avoid fdb management with OVS for now
- PVE::Network::del_bridge_fdb($iface, $mac);
- }
+ PVE::Network::SDN::Zones::del_bridge_fdb($iface, $mac, $bridge);
}
}
sub create_ifaces_ipams_ips {
my ($conf, $vmid) = @_;
- return if !$have_sdn;
-
foreach my $opt (keys %$conf) {
if ($opt =~ m/^net(\d+)$/) {
my $value = $conf->{$opt};
@@ -9196,8 +9157,6 @@ sub create_ifaces_ipams_ips {
sub delete_ifaces_ipams_ips {
my ($conf, $vmid) = @_;
- return if !$have_sdn;
-
foreach my $opt (keys %$conf) {
if ($opt =~ m/^net(\d+)$/) {
my $net = PVE::QemuServer::parse_net($conf->{$opt});
diff --git a/src/usr/pve-bridge b/src/usr/pve-bridge
index 299be1f3..2608e1a0 100755
--- a/src/usr/pve-bridge
+++ b/src/usr/pve-bridge
@@ -5,16 +5,10 @@ use warnings;
use PVE::QemuServer;
use PVE::Tools qw(run_command);
-use PVE::Network;
+use PVE::Network::SDN::Vnets;
+use PVE::Network::SDN::Zones;
use PVE::Firewall;
-my $have_sdn;
-eval {
- require PVE::Network::SDN::Zones;
- require PVE::Network::SDN::Vnets;
- $have_sdn = 1;
-};
-
my $iface = shift;
my $hotplug = 0;
@@ -48,13 +42,8 @@ die "unable to parse network config '$netid'\n" if !$net;
# The nftables-based implementation from the newer proxmox-firewall does not require FW bridges
my $create_firewall_bridges = $net->{firewall} && !PVE::Firewall::is_nftables();
-if ($have_sdn) {
- PVE::Network::SDN::Vnets::add_dhcp_mapping($net->{bridge}, $net->{macaddr}, $vmid, $conf->{name});
- PVE::Network::SDN::Zones::tap_create($iface, $net->{bridge});
- PVE::Network::SDN::Zones::tap_plug($iface, $net->{bridge}, $net->{tag}, $create_firewall_bridges, $net->{trunks}, $net->{rate});
-} else {
- PVE::Network::tap_create($iface, $net->{bridge});
- PVE::Network::tap_plug($iface, $net->{bridge}, $net->{tag}, $create_firewall_bridges, $net->{trunks}, $net->{rate});
-}
+PVE::Network::SDN::Vnets::add_dhcp_mapping($net->{bridge}, $net->{macaddr}, $vmid, $conf->{name});
+PVE::Network::SDN::Zones::tap_create($iface, $net->{bridge});
+PVE::Network::SDN::Zones::tap_plug($iface, $net->{bridge}, $net->{tag}, $create_firewall_bridges, $net->{trunks}, $net->{rate});
exit 0;
--
2.47.2
* [pve-devel] [PATCH qemu-server 06/31] schema: remove unused pve-qm-ipconfig standard option
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (4 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 05/31] assume that SDN is available Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 07/31] remove unused $nic_model_list_txt variable Fiona Ebner
` (25 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Grepping in /usr/share/perl5/PVE shows that the standard option is never
used; it was accidentally assigned $netdesc rather than $ipconfigdesc,
as can be seen in commit 0c9a7596 ("implement cloudinit").
It can still be added correctly later if the need arises.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
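A minimal sketch of how such a mix-up can go unnoticed: a hash is registered
under one name while the hash meant for that name is never used. Plain hashes
stand in here for the real PVE::JSONSchema registry; the names are illustrative
only:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy registry standing in for PVE::JSONSchema::register_standard_option().
my %standard_options;
sub register_standard_option {
    my ($name, $desc) = @_;
    $standard_options{$name} = $desc;
}

my $netdesc      = { format => 'net-fmt',         description => 'network device' };
my $ipconfigdesc = { format => 'pve-qm-ipconfig', description => 'cloud-init IPs' };

# The bug: the ipconfig name gets the *network* description hash.
register_standard_option('pve-qm-ipconfig', $netdesc);

# Nothing fails at registration time; the mismatch would only surface if
# the option were ever looked up, which it never was.
print $standard_options{'pve-qm-ipconfig'}{format}, "\n";
```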
src/PVE/QemuServer.pm | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 97a9ad5a..792ed1cc 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -1026,7 +1026,6 @@ If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, i
dhcp on IPv4.
EODESCR
};
-PVE::JSONSchema::register_standard_option("pve-qm-ipconfig", $netdesc);
for (my $i = 0; $i < $MAX_NETS; $i++) {
$confdesc->{"net$i"} = $netdesc;
--
2.47.2
* [pve-devel] [PATCH qemu-server 07/31] remove unused $nic_model_list_txt variable
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (5 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 06/31] schema: remove unused pve-qm-ipconfig standard option Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 08/31] introduce Network module Fiona Ebner
` (24 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
The last usage of this was removed by commit 52261945 ("improve
documentation").
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 792ed1cc..2335703b 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -871,7 +871,6 @@ my $nic_model_list = [
'virtio',
'vmxnet3',
];
-my $nic_model_list_txt = join(' ', sort @$nic_model_list);
my $net_fmt_bridge_descr = <<__EOD__;
Bridge to attach the network device to. The Proxmox VE standard bridge
--
2.47.2
* [pve-devel] [PATCH qemu-server 08/31] introduce Network module
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (6 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 07/31] remove unused $nic_model_list_txt variable Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 09/31] agent: drop unused $noerr argument from helpers Fiona Ebner
` (23 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
This also gets rid of a cyclic dependency between the main QemuServer
module and the Cloudinit module.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
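The shape of the fix, sketched with hypothetical package names: the shared
network-parsing helpers move into a leaf module with no dependency back on
the main module, so both former cycle members can depend on it instead of on
each other. The trivial parser below only stands in for the real
property-string handling:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Leaf module (stands in for PVE::QemuServer::Network): no upward deps.
package My::Network;

sub parse_net {
    my ($value) = @_;
    # Trivial stand-in for the real property-string parser.
    my %net = map { split /=/, $_, 2 } split /,/, $value;
    return \%net;
}

# Former cycle member (stands in for PVE::QemuServer::Cloudinit):
# it now uses only the leaf module, not the main QemuServer module.
package My::Cloudinit;

sub bridge_of {
    my ($value) = @_;
    return My::Network::parse_net($value)->{bridge};
}

package main;
print My::Cloudinit::bridge_of('model=virtio,bridge=vmbr0'), "\n"; # vmbr0
```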
src/PVE/API2/Qemu.pm | 18 +-
src/PVE/QemuMigrate.pm | 7 +-
src/PVE/QemuServer.pm | 347 ++--------------------
src/PVE/QemuServer/Cloudinit.pm | 18 +-
src/PVE/QemuServer/Makefile | 1 +
src/PVE/QemuServer/Network.pm | 324 ++++++++++++++++++++
src/test/MigrationTest/QemuMigrateMock.pm | 6 +-
src/usr/pve-bridge | 5 +-
8 files changed, 375 insertions(+), 351 deletions(-)
create mode 100644 src/PVE/QemuServer/Network.pm
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 6830ea1e..9600cf8d 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -36,6 +36,7 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::Machine;
use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::MetaInfo;
+use PVE::QemuServer::Network;
use PVE::QemuServer::OVMF;
use PVE::QemuServer::PCI;
use PVE::QemuServer::QMPHelpers;
@@ -1277,7 +1278,7 @@ __PACKAGE__->register_method({
$check_drive_param->($param, $storecfg);
- PVE::QemuServer::add_random_macs($param);
+ PVE::QemuServer::Network::add_random_macs($param);
}
my $emsg = $is_restore ? "unable to restore VM $vmid -" : "unable to create VM $vmid -";
@@ -1354,7 +1355,8 @@ __PACKAGE__->register_method({
warn $@ if $@;
}
- PVE::QemuServer::create_ifaces_ipams_ips($restored_conf, $vmid) if $unique;
+ PVE::QemuServer::Network::create_ifaces_ipams_ips($restored_conf, $vmid)
+ if $unique;
};
# ensure no old replication state exists
@@ -1445,7 +1447,7 @@ __PACKAGE__->register_method({
PVE::AccessControl::add_vm_to_pool($vmid, $pool) if $pool;
- PVE::QemuServer::create_ifaces_ipams_ips($conf, $vmid);
+ PVE::QemuServer::Network::create_ifaces_ipams_ips($conf, $vmid);
};
PVE::QemuConfig->lock_config_full($vmid, 1, $realcmd);
@@ -2062,8 +2064,8 @@ my $update_vm_api = sub {
foreach my $opt (keys %$param) {
if ($opt =~ m/^net(\d+)$/) {
# add macaddr
- my $net = PVE::QemuServer::parse_net($param->{$opt});
- $param->{$opt} = PVE::QemuServer::print_net($net);
+ my $net = PVE::QemuServer::Network::parse_net($param->{$opt});
+ $param->{$opt} = PVE::QemuServer::Network::print_net($net);
} elsif ($opt eq 'vmgenid') {
if ($param->{$opt} eq '1') {
$param->{$opt} = PVE::QemuServer::generate_uuid();
@@ -4332,10 +4334,10 @@ __PACKAGE__->register_method({
# always change MAC! address
if ($opt =~ m/^net(\d+)$/) {
- my $net = PVE::QemuServer::parse_net($value);
+ my $net = PVE::QemuServer::Network::parse_net($value);
my $dc = PVE::Cluster::cfs_read_file('datacenter.cfg');
$net->{macaddr} = PVE::Tools::random_ether_addr($dc->{mac_prefix});
- $newconf->{$opt} = PVE::QemuServer::print_net($net);
+ $newconf->{$opt} = PVE::QemuServer::Network::print_net($net);
} elsif (PVE::QemuServer::is_valid_drivename($opt)) {
my $drive = PVE::QemuServer::parse_drive($opt, $value);
die "unable to parse drive options for '$opt'\n" if !$drive;
@@ -4488,7 +4490,7 @@ __PACKAGE__->register_method({
PVE::QemuConfig->write_config($newid, $newconf);
- PVE::QemuServer::create_ifaces_ipams_ips($newconf, $newid);
+ PVE::QemuServer::Network::create_ifaces_ipams_ips($newconf, $newid);
if ($target) {
if (!$running) {
diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 28d7ac56..934d4350 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -31,6 +31,7 @@ use PVE::QemuServer::Helpers qw(min_version);
use PVE::QemuServer::Machine;
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::Memory qw(get_current_memory);
+use PVE::QemuServer::Network;
use PVE::QemuServer::QMPHelpers;
use PVE::QemuServer;
@@ -809,7 +810,7 @@ sub map_bridges {
next if $opt !~ m/^net\d+$/;
next if !$conf->{$opt};
- my $d = PVE::QemuServer::parse_net($conf->{$opt});
+ my $d = PVE::QemuServer::Network::parse_net($conf->{$opt});
next if !$d || !$d->{bridge};
my $target_bridge = PVE::JSONSchema::map_id($map, $d->{bridge});
@@ -818,7 +819,7 @@ sub map_bridges {
next if $scan_only;
$d->{bridge} = $target_bridge;
- $conf->{$opt} = PVE::QemuServer::print_net($d);
+ $conf->{$opt} = PVE::QemuServer::Network::print_net($d);
}
return $bridges;
@@ -1623,7 +1624,7 @@ sub phase3_cleanup {
}
# deletes local FDB entries if learning is disabled, they'll be re-added on target on resume
- PVE::QemuServer::del_nets_bridge_fdb($conf, $vmid);
+ PVE::QemuServer::Network::del_nets_bridge_fdb($conf, $vmid);
if (!$self->{vm_was_paused}) {
# config moved and nbd server stopped - now we can resume vm on target
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 2335703b..59958dc0 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -73,6 +73,7 @@ use PVE::QemuServer::Machine;
use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::MetaInfo;
use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Network;
use PVE::QemuServer::OVMF;
use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
use PVE::QemuServer::QemuImage;
@@ -855,180 +856,13 @@ for (my $i = 0; $i < $PVE::QemuServer::Memory::MAX_NUMA; $i++) {
$confdesc->{"numa$i"} = $PVE::QemuServer::Memory::numadesc;
}
-my $nic_model_list = [
- 'e1000',
- 'e1000-82540em',
- 'e1000-82544gc',
- 'e1000-82545em',
- 'e1000e',
- 'i82551',
- 'i82557b',
- 'i82559er',
- 'ne2k_isa',
- 'ne2k_pci',
- 'pcnet',
- 'rtl8139',
- 'virtio',
- 'vmxnet3',
-];
-
-my $net_fmt_bridge_descr = <<__EOD__;
-Bridge to attach the network device to. The Proxmox VE standard bridge
-is called 'vmbr0'.
-
-If you do not specify a bridge, we create a kvm user (NATed) network
-device, which provides DHCP and DNS services. The following addresses
-are used:
-
- 10.0.2.2 Gateway
- 10.0.2.3 DNS Server
- 10.0.2.4 SMB Server
-
-The DHCP server assign addresses to the guest starting from 10.0.2.15.
-__EOD__
-
-my $net_fmt = {
- macaddr => get_standard_option(
- 'mac-addr',
- {
- description =>
- "MAC address. That address must be unique within your network. This is"
- . " automatically generated if not specified.",
- },
- ),
- model => {
- type => 'string',
- description =>
- "Network Card Model. The 'virtio' model provides the best performance with"
- . " very low CPU overhead. If your guest does not support this driver, it is usually"
- . " best to use 'e1000'.",
- enum => $nic_model_list,
- default_key => 1,
- },
- (map { $_ => { keyAlias => 'model', alias => 'macaddr' } } @$nic_model_list),
- bridge => get_standard_option(
- 'pve-bridge-id',
- {
- description => $net_fmt_bridge_descr,
- optional => 1,
- },
- ),
- queues => {
- type => 'integer',
- minimum => 0,
- maximum => 64,
- description => 'Number of packet queues to be used on the device.',
- optional => 1,
- },
- rate => {
- type => 'number',
- minimum => 0,
- description => "Rate limit in mbps (megabytes per second) as floating point number.",
- optional => 1,
- },
- tag => {
- type => 'integer',
- minimum => 1,
- maximum => 4094,
- description => 'VLAN tag to apply to packets on this interface.',
- optional => 1,
- },
- trunks => {
- type => 'string',
- pattern => qr/\d+(?:-\d+)?(?:;\d+(?:-\d+)?)*/,
- description => 'VLAN trunks to pass through this interface.',
- format_description => 'vlanid[;vlanid...]',
- optional => 1,
- },
- firewall => {
- type => 'boolean',
- description => 'Whether this interface should be protected by the firewall.',
- optional => 1,
- },
- link_down => {
- type => 'boolean',
- description => 'Whether this interface should be disconnected (like pulling the plug).',
- optional => 1,
- },
- mtu => {
- type => 'integer',
- minimum => 1,
- maximum => 65520,
- description => "Force MTU, for VirtIO only. Set to '1' to use the bridge MTU",
- optional => 1,
- },
-};
-
-my $netdesc = {
- optional => 1,
- type => 'string',
- format => $net_fmt,
- description => "Specify network devices.",
-};
-
-PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
-
for (my $i = 0; $i < max_virtiofs(); $i++) {
$confdesc->{"virtiofs$i"} = get_standard_option('pve-qm-virtiofs');
}
-my $ipconfig_fmt = {
- ip => {
- type => 'string',
- format => 'pve-ipv4-config',
- format_description => 'IPv4Format/CIDR',
- description => 'IPv4 address in CIDR format.',
- optional => 1,
- default => 'dhcp',
- },
- gw => {
- type => 'string',
- format => 'ipv4',
- format_description => 'GatewayIPv4',
- description => 'Default gateway for IPv4 traffic.',
- optional => 1,
- requires => 'ip',
- },
- ip6 => {
- type => 'string',
- format => 'pve-ipv6-config',
- format_description => 'IPv6Format/CIDR',
- description => 'IPv6 address in CIDR format.',
- optional => 1,
- default => 'dhcp',
- },
- gw6 => {
- type => 'string',
- format => 'ipv6',
- format_description => 'GatewayIPv6',
- description => 'Default gateway for IPv6 traffic.',
- optional => 1,
- requires => 'ip6',
- },
-};
-PVE::JSONSchema::register_format('pve-qm-ipconfig', $ipconfig_fmt);
-my $ipconfigdesc = {
- optional => 1,
- type => 'string',
- format => 'pve-qm-ipconfig',
- description => <<'EODESCR',
-cloud-init: Specify IP addresses and gateways for the corresponding interface.
-
-IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified.
-
-The special string 'dhcp' can be used for IP addresses to use DHCP, in which case no explicit
-gateway should be provided.
-For IPv6 the special string 'auto' can be used to use stateless autoconfiguration. This requires
-cloud-init 19.4 or newer.
-
-If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using
-dhcp on IPv4.
-EODESCR
-};
-
for (my $i = 0; $i < $MAX_NETS; $i++) {
- $confdesc->{"net$i"} = $netdesc;
- $confdesc_cloudinit->{"ipconfig$i"} = $ipconfigdesc;
+ $confdesc->{"net$i"} = $PVE::QemuServer::Network::netdesc;
+ $confdesc_cloudinit->{"ipconfig$i"} = $PVE::QemuServer::Network::ipconfigdesc;
}
foreach my $key (keys %$confdesc_cloudinit) {
@@ -1755,74 +1589,6 @@ sub print_vga_device {
return "$type,id=${vgaid}${memory}${max_outputs}${pciaddr}${edidoff}";
}
-# netX: e1000=XX:XX:XX:XX:XX:XX,bridge=vmbr0,rate=<mbps>
-sub parse_net {
- my ($data, $disable_mac_autogen) = @_;
-
- my $res = eval { parse_property_string($net_fmt, $data) };
- if ($@) {
- warn $@;
- return;
- }
- if (!defined($res->{macaddr}) && !$disable_mac_autogen) {
- my $dc = PVE::Cluster::cfs_read_file('datacenter.cfg');
- $res->{macaddr} = PVE::Tools::random_ether_addr($dc->{mac_prefix});
- }
- return $res;
-}
-
-# ipconfigX ip=cidr,gw=ip,ip6=cidr,gw6=ip
-sub parse_ipconfig {
- my ($data) = @_;
-
- my $res = eval { parse_property_string($ipconfig_fmt, $data) };
- if ($@) {
- warn $@;
- return;
- }
-
- if ($res->{gw} && !$res->{ip}) {
- warn 'gateway specified without specifying an IP address';
- return;
- }
- if ($res->{gw6} && !$res->{ip6}) {
- warn 'IPv6 gateway specified without specifying an IPv6 address';
- return;
- }
- if ($res->{gw} && $res->{ip} eq 'dhcp') {
- warn 'gateway specified together with DHCP';
- return;
- }
- if ($res->{gw6} && $res->{ip6} !~ /^$IPV6RE/) {
- # gw6 + auto/dhcp
- warn "IPv6 gateway specified together with $res->{ip6} address";
- return;
- }
-
- if (!$res->{ip} && !$res->{ip6}) {
- return { ip => 'dhcp', ip6 => 'dhcp' };
- }
-
- return $res;
-}
-
-sub print_net {
- my $net = shift;
-
- return PVE::JSONSchema::print_property_string($net, $net_fmt);
-}
-
-sub add_random_macs {
- my ($settings) = @_;
-
- foreach my $opt (keys %$settings) {
- next if $opt !~ m/^net(\d+)$/;
- my $net = parse_net($settings->{$opt});
- next if !$net;
- $settings->{$opt} = print_net($net);
- }
-}
-
sub vm_is_volid_owner {
my ($storecfg, $vmid, $volid) = @_;
@@ -2179,7 +1945,7 @@ sub destroy_vm {
);
}
- eval { delete_ifaces_ipams_ips($conf, $vmid) };
+ eval { PVE::QemuServer::Network::delete_ifaces_ipams_ips($conf, $vmid) };
warn $@ if $@;
if (defined $replacement_conf) {
@@ -3979,7 +3745,7 @@ sub config_to_command {
my $netname = "net$i";
next if !$conf->{$netname};
- my $d = parse_net($conf->{$netname});
+ my $d = PVE::QemuServer::Network::parse_net($conf->{$netname});
next if !$d;
# save the MAC addr here (could be auto-gen. in some odd setups) for FDB registering later?
@@ -5004,7 +4770,7 @@ sub vmconfig_hotplug_pending {
} elsif ($opt =~ m/^net(\d+)$/) {
die "skip\n" if !$hotplug_features->{network};
vm_deviceunplug($vmid, $conf, $opt);
- my $net = PVE::QemuServer::parse_net($conf->{$opt});
+ my $net = PVE::QemuServer::Network::parse_net($conf->{$opt});
PVE::Network::SDN::Vnets::del_ips_from_mac(
$net->{bridge},
$net->{macaddr},
@@ -5243,7 +5009,7 @@ sub vmconfig_apply_pending {
} elsif (defined($conf->{$opt}) && is_valid_drivename($opt)) {
vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, $force);
} elsif (defined($conf->{$opt}) && $opt =~ m/^net\d+$/) {
- my $net = PVE::QemuServer::parse_net($conf->{$opt});
+ my $net = PVE::QemuServer::Network::parse_net($conf->{$opt});
eval {
PVE::Network::SDN::Vnets::del_ips_from_mac(
$net->{bridge},
@@ -5277,9 +5043,9 @@ sub vmconfig_apply_pending {
parse_drive($opt, $conf->{$opt}),
);
} elsif (defined($conf->{pending}->{$opt}) && $opt =~ m/^net\d+$/) {
- my $new_net = PVE::QemuServer::parse_net($conf->{pending}->{$opt});
+ my $new_net = PVE::QemuServer::Network::parse_net($conf->{pending}->{$opt});
if ($conf->{$opt}) {
- my $old_net = PVE::QemuServer::parse_net($conf->{$opt});
+ my $old_net = PVE::QemuServer::Network::parse_net($conf->{$opt});
if (
defined($old_net->{bridge})
@@ -5340,10 +5106,10 @@ sub vmconfig_apply_pending {
sub vmconfig_update_net {
my ($storecfg, $conf, $hotplug, $vmid, $opt, $value, $arch, $machine_type) = @_;
- my $newnet = parse_net($value);
+ my $newnet = PVE::QemuServer::Network::parse_net($value);
if ($conf->{$opt}) {
- my $oldnet = parse_net($conf->{$opt});
+ my $oldnet = PVE::QemuServer::Network::parse_net($conf->{$opt});
if (
safe_string_ne($oldnet->{model}, $newnet->{model})
@@ -6148,10 +5914,10 @@ sub vm_start_nolock {
foreach my $opt (keys %$conf) {
next if $opt !~ m/^net\d+$/;
- my $nicconf = parse_net($conf->{$opt});
+ my $nicconf = PVE::QemuServer::Network::parse_net($conf->{$opt});
qemu_set_link_status($vmid, $opt, 0) if $nicconf->{link_down};
}
- add_nets_bridge_fdb($conf, $vmid);
+ PVE::QemuServer::Network::add_nets_bridge_fdb($conf, $vmid);
}
if (!defined($conf->{balloon}) || $conf->{balloon}) {
@@ -6686,7 +6452,8 @@ sub vm_resume {
mon_cmd($vmid, "system_reset");
}
- add_nets_bridge_fdb($conf, $vmid) if $resume_cmd eq 'cont';
+ PVE::QemuServer::Network::add_nets_bridge_fdb($conf, $vmid)
+ if $resume_cmd eq 'cont';
mon_cmd($vmid, $resume_cmd);
},
@@ -6716,7 +6483,7 @@ sub check_bridge_access {
for my $opt (sort keys $conf->%*) {
next if $opt !~ m/^net\d+$/;
- my $net = parse_net($conf->{$opt});
+ my $net = PVE::QemuServer::Network::parse_net($conf->{$opt});
my ($bridge, $tag, $trunks) = $net->@{ 'bridge', 'tag', 'trunks' };
PVE::GuestHelpers::check_vnet_access($rpcenv, $authuser, $bridge, $tag, $trunks);
}
@@ -7017,16 +6784,16 @@ sub restore_update_config_line {
bridge => "vmbr$ind",
macaddr => $macaddr,
};
- my $netstr = print_net($net);
+ my $netstr = PVE::QemuServer::Network::print_net($net);
$res .= "net$cookie->{netcount}: $netstr\n";
$cookie->{netcount}++;
}
} elsif (($line =~ m/^(net\d+):\s*(\S+)\s*$/) && $unique) {
my ($id, $netstr) = ($1, $2);
- my $net = parse_net($netstr);
+ my $net = PVE::QemuServer::Network::parse_net($netstr);
$net->{macaddr} = PVE::Tools::random_ether_addr($dc->{mac_prefix}) if $net->{macaddr};
- $netstr = print_net($net);
+ $netstr = PVE::QemuServer::Network::print_net($net);
$res .= "$id: $netstr\n";
} elsif ($line =~ m/^((ide|scsi|virtio|sata|efidisk|tpmstate)\d+):\s*(\S+)\s*$/) {
my $virtdev = $1;
@@ -9094,80 +8861,4 @@ sub check_volume_storage_type {
return 1;
}
-sub add_nets_bridge_fdb {
- my ($conf, $vmid) = @_;
-
- for my $opt (keys %$conf) {
- next if $opt !~ m/^net(\d+)$/;
- my $iface = "tap${vmid}i$1";
- # NOTE: expect setups with learning off to *not* use auto-random-generation of MAC on start
- my $net = parse_net($conf->{$opt}, 1) or next;
-
- my $mac = $net->{macaddr};
- if (!$mac) {
- log_warn(
- "MAC learning disabled, but vNIC '$iface' has no static MAC to add to forwarding DB!"
- ) if !file_read_firstline("/sys/class/net/$iface/brport/learning");
- next;
- }
-
- my $bridge = $net->{bridge};
- if (!$bridge) {
- log_warn("Interface '$iface' not attached to any bridge.");
- next;
- }
- PVE::Network::SDN::Zones::add_bridge_fdb($iface, $mac, $bridge);
- }
-}
-
-sub del_nets_bridge_fdb {
- my ($conf, $vmid) = @_;
-
- for my $opt (keys %$conf) {
- next if $opt !~ m/^net(\d+)$/;
- my $iface = "tap${vmid}i$1";
-
- my $net = parse_net($conf->{$opt}) or next;
- my $mac = $net->{macaddr} or next;
-
- my $bridge = $net->{bridge};
- PVE::Network::SDN::Zones::del_bridge_fdb($iface, $mac, $bridge);
- }
-}
-
-sub create_ifaces_ipams_ips {
- my ($conf, $vmid) = @_;
-
- foreach my $opt (keys %$conf) {
- if ($opt =~ m/^net(\d+)$/) {
- my $value = $conf->{$opt};
- my $net = PVE::QemuServer::parse_net($value);
- eval {
- PVE::Network::SDN::Vnets::add_next_free_cidr(
- $net->{bridge}, $conf->{name}, $net->{macaddr}, $vmid, undef, 1,
- );
- };
- warn $@ if $@;
- }
- }
-}
-
-sub delete_ifaces_ipams_ips {
- my ($conf, $vmid) = @_;
-
- foreach my $opt (keys %$conf) {
- if ($opt =~ m/^net(\d+)$/) {
- my $net = PVE::QemuServer::parse_net($conf->{$opt});
- eval {
- PVE::Network::SDN::Vnets::del_ips_from_mac(
- $net->{bridge},
- $net->{macaddr},
- $conf->{name},
- );
- };
- warn $@ if $@;
- }
- }
-}
-
1;
diff --git a/src/PVE/QemuServer/Cloudinit.pm b/src/PVE/QemuServer/Cloudinit.pm
index 0d04e98f..349cf90b 100644
--- a/src/PVE/QemuServer/Cloudinit.pm
+++ b/src/PVE/QemuServer/Cloudinit.pm
@@ -12,9 +12,9 @@ use JSON;
use PVE::Tools qw(run_command file_set_contents);
use PVE::Storage;
-use PVE::QemuServer;
use PVE::QemuServer::Drive qw(checked_volume_format);
use PVE::QemuServer::Helpers;
+use PVE::QemuServer::Network;
use constant CLOUDINIT_DISK_SIZE => 4 * 1024 * 1024; # 4MiB in bytes
@@ -191,7 +191,7 @@ sub configdrive2_network {
foreach my $iface (sort @ifaces) {
(my $id = $iface) =~ s/^net//;
next if !$conf->{"ipconfig$id"};
- my $net = PVE::QemuServer::parse_ipconfig($conf->{"ipconfig$id"});
+ my $net = PVE::QemuServer::Network::parse_ipconfig($conf->{"ipconfig$id"});
$id = "eth$id";
$content .= "auto $id\n";
@@ -291,7 +291,7 @@ sub cloudbase_network_eni {
foreach my $iface (sort @ifaces) {
(my $id = $iface) =~ s/^net//;
next if !$conf->{"ipconfig$id"};
- my $net = PVE::QemuServer::parse_ipconfig($conf->{"ipconfig$id"});
+ my $net = PVE::QemuServer::Network::parse_ipconfig($conf->{"ipconfig$id"});
$id = "eth$id";
$content .= "auto $id\n";
@@ -383,9 +383,9 @@ sub generate_opennebula {
my @ifaces = grep { /^net(\d+)$/ } keys %$conf;
foreach my $iface (sort @ifaces) {
(my $id = $iface) =~ s/^net//;
- my $net = PVE::QemuServer::parse_net($conf->{$iface});
+ my $net = PVE::QemuServer::Network::parse_net($conf->{$iface});
next if !$conf->{"ipconfig$id"};
- my $ipconfig = PVE::QemuServer::parse_ipconfig($conf->{"ipconfig$id"});
+ my $ipconfig = PVE::QemuServer::Network::parse_ipconfig($conf->{"ipconfig$id"});
my $ethid = "ETH$id";
my $mac = lc $net->{hwaddr};
@@ -445,8 +445,8 @@ sub nocloud_network_v2 {
# indentation - network interfaces are inside an 'ethernets' hash
my $i = ' ';
- my $net = PVE::QemuServer::parse_net($conf->{$iface});
- my $ipconfig = PVE::QemuServer::parse_ipconfig($conf->{"ipconfig$id"});
+ my $net = PVE::QemuServer::Network::parse_net($conf->{$iface});
+ my $ipconfig = PVE::QemuServer::Network::parse_ipconfig($conf->{"ipconfig$id"});
my $mac = $net->{macaddr}
or die "network interface '$iface' has no mac address\n";
@@ -513,8 +513,8 @@ sub nocloud_network {
# indentation - network interfaces are inside an 'ethernets' hash
my $i = ' ';
- my $net = PVE::QemuServer::parse_net($conf->{$iface});
- my $ipconfig = PVE::QemuServer::parse_ipconfig($conf->{"ipconfig$id"});
+ my $net = PVE::QemuServer::Network::parse_net($conf->{$iface});
+ my $ipconfig = PVE::QemuServer::Network::parse_ipconfig($conf->{"ipconfig$id"});
my $mac = lc($net->{macaddr})
or die "network interface '$iface' has no mac address\n";
diff --git a/src/PVE/QemuServer/Makefile b/src/PVE/QemuServer/Makefile
index dd6fe505..e30c571c 100644
--- a/src/PVE/QemuServer/Makefile
+++ b/src/PVE/QemuServer/Makefile
@@ -14,6 +14,7 @@ SOURCES=Agent.pm \
Memory.pm \
MetaInfo.pm \
Monitor.pm \
+ Network.pm \
OVMF.pm \
PCI.pm \
QemuImage.pm \
diff --git a/src/PVE/QemuServer/Network.pm b/src/PVE/QemuServer/Network.pm
new file mode 100644
index 00000000..84d8981a
--- /dev/null
+++ b/src/PVE/QemuServer/Network.pm
@@ -0,0 +1,324 @@
+package PVE::QemuServer::Network;
+
+use strict;
+use warnings;
+
+use PVE::Cluster;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::Network::SDN::Vnets;
+use PVE::Network::SDN::Zones;
+use PVE::RESTEnvironment qw(log_warn);
+use PVE::Tools qw($IPV6RE file_read_firstline);
+
+my $nic_model_list = [
+ 'e1000',
+ 'e1000-82540em',
+ 'e1000-82544gc',
+ 'e1000-82545em',
+ 'e1000e',
+ 'i82551',
+ 'i82557b',
+ 'i82559er',
+ 'ne2k_isa',
+ 'ne2k_pci',
+ 'pcnet',
+ 'rtl8139',
+ 'virtio',
+ 'vmxnet3',
+];
+
+my $net_fmt_bridge_descr = <<__EOD__;
+Bridge to attach the network device to. The Proxmox VE standard bridge
+is called 'vmbr0'.
+
+If you do not specify a bridge, we create a kvm user (NATed) network
+device, which provides DHCP and DNS services. The following addresses
+are used:
+
+ 10.0.2.2 Gateway
+ 10.0.2.3 DNS Server
+ 10.0.2.4 SMB Server
+
+The DHCP server assigns addresses to the guest starting from 10.0.2.15.
+__EOD__
+
+my $net_fmt = {
+ macaddr => get_standard_option(
+ 'mac-addr',
+ {
+ description =>
+ "MAC address. That address must be unique within your network. This is"
+ . " automatically generated if not specified.",
+ },
+ ),
+ model => {
+ type => 'string',
+ description =>
+ "Network Card Model. The 'virtio' model provides the best performance with"
+ . " very low CPU overhead. If your guest does not support this driver, it is usually"
+ . " best to use 'e1000'.",
+ enum => $nic_model_list,
+ default_key => 1,
+ },
+ (map { $_ => { keyAlias => 'model', alias => 'macaddr' } } @$nic_model_list),
+ bridge => get_standard_option(
+ 'pve-bridge-id',
+ {
+ description => $net_fmt_bridge_descr,
+ optional => 1,
+ },
+ ),
+ queues => {
+ type => 'integer',
+ minimum => 0,
+ maximum => 64,
+ description => 'Number of packet queues to be used on the device.',
+ optional => 1,
+ },
+ rate => {
+ type => 'number',
+ minimum => 0,
+ description => "Rate limit in mbps (megabytes per second) as floating point number.",
+ optional => 1,
+ },
+ tag => {
+ type => 'integer',
+ minimum => 1,
+ maximum => 4094,
+ description => 'VLAN tag to apply to packets on this interface.',
+ optional => 1,
+ },
+ trunks => {
+ type => 'string',
+ pattern => qr/\d+(?:-\d+)?(?:;\d+(?:-\d+)?)*/,
+ description => 'VLAN trunks to pass through this interface.',
+ format_description => 'vlanid[;vlanid...]',
+ optional => 1,
+ },
+ firewall => {
+ type => 'boolean',
+ description => 'Whether this interface should be protected by the firewall.',
+ optional => 1,
+ },
+ link_down => {
+ type => 'boolean',
+ description => 'Whether this interface should be disconnected (like pulling the plug).',
+ optional => 1,
+ },
+ mtu => {
+ type => 'integer',
+ minimum => 1,
+ maximum => 65520,
+ description => "Force MTU, for VirtIO only. Set to '1' to use the bridge MTU",
+ optional => 1,
+ },
+};
+
+our $netdesc = {
+ optional => 1,
+ type => 'string',
+ format => $net_fmt,
+ description => "Specify network devices.",
+};
+
+PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
+
+my $ipconfig_fmt = {
+ ip => {
+ type => 'string',
+ format => 'pve-ipv4-config',
+ format_description => 'IPv4Format/CIDR',
+ description => 'IPv4 address in CIDR format.',
+ optional => 1,
+ default => 'dhcp',
+ },
+ gw => {
+ type => 'string',
+ format => 'ipv4',
+ format_description => 'GatewayIPv4',
+ description => 'Default gateway for IPv4 traffic.',
+ optional => 1,
+ requires => 'ip',
+ },
+ ip6 => {
+ type => 'string',
+ format => 'pve-ipv6-config',
+ format_description => 'IPv6Format/CIDR',
+ description => 'IPv6 address in CIDR format.',
+ optional => 1,
+ default => 'dhcp',
+ },
+ gw6 => {
+ type => 'string',
+ format => 'ipv6',
+ format_description => 'GatewayIPv6',
+ description => 'Default gateway for IPv6 traffic.',
+ optional => 1,
+ requires => 'ip6',
+ },
+};
+PVE::JSONSchema::register_format('pve-qm-ipconfig', $ipconfig_fmt);
+our $ipconfigdesc = {
+ optional => 1,
+ type => 'string',
+ format => 'pve-qm-ipconfig',
+ description => <<'EODESCR',
+cloud-init: Specify IP addresses and gateways for the corresponding interface.
+
+IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified.
+
+The special string 'dhcp' can be used for IP addresses to use DHCP, in which case no explicit
+gateway should be provided.
+For IPv6 the special string 'auto' can be used to use stateless autoconfiguration. This requires
+cloud-init 19.4 or newer.
+
+If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using
+dhcp on IPv4.
+EODESCR
+};
+
+# netX: e1000=XX:XX:XX:XX:XX:XX,bridge=vmbr0,rate=<mbps>
+sub parse_net {
+ my ($data, $disable_mac_autogen) = @_;
+
+ my $res = eval { parse_property_string($net_fmt, $data) };
+ if ($@) {
+ warn $@;
+ return;
+ }
+ if (!defined($res->{macaddr}) && !$disable_mac_autogen) {
+ my $dc = PVE::Cluster::cfs_read_file('datacenter.cfg');
+ $res->{macaddr} = PVE::Tools::random_ether_addr($dc->{mac_prefix});
+ }
+ return $res;
+}
+
+# ipconfigX ip=cidr,gw=ip,ip6=cidr,gw6=ip
+sub parse_ipconfig {
+ my ($data) = @_;
+
+ my $res = eval { parse_property_string($ipconfig_fmt, $data) };
+ if ($@) {
+ warn $@;
+ return;
+ }
+
+ if ($res->{gw} && !$res->{ip}) {
+ warn 'gateway specified without specifying an IP address';
+ return;
+ }
+ if ($res->{gw6} && !$res->{ip6}) {
+ warn 'IPv6 gateway specified without specifying an IPv6 address';
+ return;
+ }
+ if ($res->{gw} && $res->{ip} eq 'dhcp') {
+ warn 'gateway specified together with DHCP';
+ return;
+ }
+ if ($res->{gw6} && $res->{ip6} !~ /^$IPV6RE/) {
+ # gw6 + auto/dhcp
+ warn "IPv6 gateway specified together with $res->{ip6} address";
+ return;
+ }
+
+ if (!$res->{ip} && !$res->{ip6}) {
+ return { ip => 'dhcp', ip6 => 'dhcp' };
+ }
+
+ return $res;
+}
+
+sub print_net {
+ my $net = shift;
+
+ return PVE::JSONSchema::print_property_string($net, $net_fmt);
+}
+
+sub add_random_macs {
+ my ($settings) = @_;
+
+ foreach my $opt (keys %$settings) {
+ next if $opt !~ m/^net(\d+)$/;
+ my $net = parse_net($settings->{$opt});
+ next if !$net;
+ $settings->{$opt} = print_net($net);
+ }
+}
+
+sub add_nets_bridge_fdb {
+ my ($conf, $vmid) = @_;
+
+ for my $opt (keys %$conf) {
+ next if $opt !~ m/^net(\d+)$/;
+ my $iface = "tap${vmid}i$1";
+ # NOTE: expect setups with learning off to *not* use auto-random-generation of MAC on start
+ my $net = parse_net($conf->{$opt}, 1) or next;
+
+ my $mac = $net->{macaddr};
+ if (!$mac) {
+ log_warn(
+ "MAC learning disabled, but vNIC '$iface' has no static MAC to add to forwarding DB!"
+ ) if !file_read_firstline("/sys/class/net/$iface/brport/learning");
+ next;
+ }
+
+ my $bridge = $net->{bridge};
+ if (!$bridge) {
+ log_warn("Interface '$iface' not attached to any bridge.");
+ next;
+ }
+ PVE::Network::SDN::Zones::add_bridge_fdb($iface, $mac, $bridge);
+ }
+}
+
+sub del_nets_bridge_fdb {
+ my ($conf, $vmid) = @_;
+
+ for my $opt (keys %$conf) {
+ next if $opt !~ m/^net(\d+)$/;
+ my $iface = "tap${vmid}i$1";
+
+ my $net = parse_net($conf->{$opt}) or next;
+ my $mac = $net->{macaddr} or next;
+
+ my $bridge = $net->{bridge};
+ PVE::Network::SDN::Zones::del_bridge_fdb($iface, $mac, $bridge);
+ }
+}
+
+sub create_ifaces_ipams_ips {
+ my ($conf, $vmid) = @_;
+
+ foreach my $opt (keys %$conf) {
+ if ($opt =~ m/^net(\d+)$/) {
+ my $value = $conf->{$opt};
+ my $net = parse_net($value);
+ eval {
+ PVE::Network::SDN::Vnets::add_next_free_cidr(
+ $net->{bridge}, $conf->{name}, $net->{macaddr}, $vmid, undef, 1,
+ );
+ };
+ warn $@ if $@;
+ }
+ }
+}
+
+sub delete_ifaces_ipams_ips {
+ my ($conf, $vmid) = @_;
+
+ foreach my $opt (keys %$conf) {
+ if ($opt =~ m/^net(\d+)$/) {
+ my $net = parse_net($conf->{$opt});
+ eval {
+ PVE::Network::SDN::Vnets::del_ips_from_mac(
+ $net->{bridge},
+ $net->{macaddr},
+ $conf->{name},
+ );
+ };
+ warn $@ if $@;
+ }
+ }
+}
+
+1;
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index f678f9ec..1b95a2ff 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -174,7 +174,6 @@ $MigrationTest::Shared::qemu_server_module->mock(
$vm_stop_executed = 1;
delete $expected_calls->{'vm_stop'};
},
- del_nets_bridge_fdb => sub { return; },
);
my $qemu_server_cpuconfig_module = Test::MockModule->new("PVE::QemuServer::CPUConfig");
@@ -203,6 +202,11 @@ $qemu_server_machine_module->mock(
},
);
+my $qemu_server_network_module = Test::MockModule->new("PVE::QemuServer::Network");
+$qemu_server_network_module->mock(
+ del_nets_bridge_fdb => sub { return; },
+);
+
my $qemu_server_qmphelpers_module = Test::MockModule->new("PVE::QemuServer::QMPHelpers");
$qemu_server_qmphelpers_module->mock(
runs_at_least_qemu_version => sub {
diff --git a/src/usr/pve-bridge b/src/usr/pve-bridge
index 2608e1a0..2f529364 100755
--- a/src/usr/pve-bridge
+++ b/src/usr/pve-bridge
@@ -3,12 +3,13 @@
use strict;
use warnings;
-use PVE::QemuServer;
use PVE::Tools qw(run_command);
use PVE::Network::SDN::Vnets;
use PVE::Network::SDN::Zones;
use PVE::Firewall;
+use PVE::QemuServer::Network;
+
my $iface = shift;
my $hotplug = 0;
@@ -36,7 +37,7 @@ $netconf = $conf->{pending}->{$netid} if !$migratedfrom && defined($conf->{pendi
die "unable to get network config '$netid'\n"
if !defined($netconf);
-my $net = PVE::QemuServer::parse_net($netconf);
+my $net = PVE::QemuServer::Network::parse_net($netconf);
die "unable to parse network config '$netid'\n" if !$net;
# The nftables-based implementation from the newer proxmox-firewall does not require FW bridges
--
2.47.2
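To illustrate the validation rules that parse_ipconfig() in the new Network module applies, here is a small sketch. This is a hypothetical Python model, not the real Perl API: check_ipconfig() and the simplified IPV6RE stand-in are illustrative names only, and the regex is just strict enough to reject the special 'dhcp'/'auto' values.

```python
# Hypothetical Python model of the sanity checks in parse_ipconfig();
# NOT the real Perl API, just a sketch of the rules for illustration.
import re

# Simplified stand-in for $IPV6RE: a literal IPv6 address starts with up
# to four hex digits followed by ':' -- enough to reject 'dhcp'/'auto'.
IPV6RE = re.compile(r'^[0-9a-fA-F]{0,4}:')

def check_ipconfig(res):
    """Return the (possibly defaulted) config dict, or None on error."""
    if res.get('gw') and not res.get('ip'):
        return None  # IPv4 gateway without an IPv4 address
    if res.get('gw6') and not res.get('ip6'):
        return None  # IPv6 gateway without an IPv6 address
    if res.get('gw') and res['ip'] == 'dhcp':
        return None  # gateway together with DHCP
    if res.get('gw6') and not IPV6RE.match(res['ip6']):
        return None  # gw6 together with 'auto'/'dhcp'
    if not res.get('ip') and not res.get('ip6'):
        return {'ip': 'dhcp', 'ip6': 'dhcp'}  # default to DHCP on both
    return res
```

As in the Perl helper, an invalid combination yields no result (the Perl code warns and returns undef), and an empty ipconfig defaults to DHCP on both address families.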
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* [pve-devel] [PATCH qemu-server 09/31] agent: drop unused $noerr argument from helpers
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (7 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 08/31] introduce Network module Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 10/31] agent: code style: order module imports according to style guide Fiona Ebner
` (22 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Neither agent_available() nor agent_cmd() has a caller that makes use
of the $noerr argument.
The current implementation was not ideal either: in the $noerr
scenario, agent_cmd() did not check the return value of
agent_available(), even though it should have returned early, and
agent_available() silently ignored failures, where a message or
warning would have been more useful.
The agent_available() function is renamed to assert_agent_available()
and is not exported anymore; the single caller outside the module can
just call it with the full module path.
The import of 'agent_available' in qm.pm was not used and is dropped.
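The behavioral difference can be sketched as follows. This is a hypothetical Python model, not the actual Perl code; the checks are simplified stand-ins for the real ones:

```python
# Hypothetical model of the old vs. new helper behavior.
def agent_available_old(conf, vm_running, qga_running, noerr=False):
    """Old style: with noerr set, failures are silently turned into None."""
    try:
        if 'agent' not in conf:
            raise RuntimeError('No QEMU guest agent configured')
        if not vm_running:
            raise RuntimeError('VM is not running')
        if not qga_running:
            raise RuntimeError('QEMU guest agent is not running')
    except RuntimeError:
        if not noerr:
            raise
        return None  # silently ignored -- the behavior the patch removes
    return True

def assert_agent_available(conf, vm_running, qga_running):
    """New style: no $noerr special case, always raise on failure."""
    if 'agent' not in conf:
        raise RuntimeError('No QEMU guest agent configured')
    if not vm_running:
        raise RuntimeError('VM is not running')
    if not qga_running:
        raise RuntimeError('QEMU guest agent is not running')
```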
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/API2/Qemu/Agent.pm | 4 ++--
src/PVE/CLI/qm.pm | 2 +-
src/PVE/QemuServer/Agent.pm | 27 ++++++++-------------------
3 files changed, 11 insertions(+), 22 deletions(-)
diff --git a/src/PVE/API2/Qemu/Agent.pm b/src/PVE/API2/Qemu/Agent.pm
index ef464403..3d1952a6 100644
--- a/src/PVE/API2/Qemu/Agent.pm
+++ b/src/PVE/API2/Qemu/Agent.pm
@@ -6,7 +6,7 @@ use warnings;
use PVE::RESTHandler;
use PVE::JSONSchema qw(get_standard_option);
use PVE::QemuServer;
-use PVE::QemuServer::Agent qw(agent_available agent_cmd check_agent_error);
+use PVE::QemuServer::Agent qw(agent_cmd check_agent_error);
use PVE::QemuServer::Monitor qw(mon_cmd);
use MIME::Base64 qw(encode_base64 decode_base64);
use JSON;
@@ -195,7 +195,7 @@ sub register_command {
my $conf = PVE::QemuConfig->load_config($vmid); # check if VM exists
- agent_available($vmid, $conf);
+ PVE::QemuServer::Agent::assert_agent_available($vmid, $conf);
my $cmd = $param->{command} // $command;
my $res = mon_cmd($vmid, "guest-$cmd");
diff --git a/src/PVE/CLI/qm.pm b/src/PVE/CLI/qm.pm
index c4be9eb3..a7f08cc2 100755
--- a/src/PVE/CLI/qm.pm
+++ b/src/PVE/CLI/qm.pm
@@ -32,7 +32,7 @@ use PVE::API2::Qemu;
use PVE::QemuConfig;
use PVE::QemuServer::Drive qw(is_valid_drivename);
use PVE::QemuServer::Helpers;
-use PVE::QemuServer::Agent qw(agent_available);
+use PVE::QemuServer::Agent;
use PVE::QemuServer::ImportDisk;
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::QMPHelpers;
diff --git a/src/PVE/QemuServer/Agent.pm b/src/PVE/QemuServer/Agent.pm
index 47405963..9212a0c3 100644
--- a/src/PVE/QemuServer/Agent.pm
+++ b/src/PVE/QemuServer/Agent.pm
@@ -11,7 +11,6 @@ use base 'Exporter';
our @EXPORT_OK = qw(
check_agent_error
- agent_available
agent_cmd
);
@@ -36,33 +35,23 @@ sub check_agent_error {
return 1;
}
-sub agent_available {
- my ($vmid, $conf, $noerr) = @_;
+sub assert_agent_available {
+ my ($vmid, $conf) = @_;
- eval {
- die "No QEMU guest agent configured\n" if !defined($conf->{agent});
- die "VM $vmid is not running\n" if !PVE::QemuServer::check_running($vmid);
- die "QEMU guest agent is not running\n"
- if !PVE::QemuServer::qga_check_running($vmid, 1);
- };
-
- if (my $err = $@) {
- die $err if !$noerr;
- return;
- }
-
- return 1;
+ die "No QEMU guest agent configured\n" if !defined($conf->{agent});
+ die "VM $vmid is not running\n" if !PVE::QemuServer::check_running($vmid);
+ die "QEMU guest agent is not running\n" if !PVE::QemuServer::qga_check_running($vmid, 1);
}
# loads config, checks if available, executes command, checks for errors
sub agent_cmd {
- my ($vmid, $cmd, $params, $errormsg, $noerr) = @_;
+ my ($vmid, $cmd, $params, $errormsg) = @_;
my $conf = PVE::QemuConfig->load_config($vmid); # also checks if VM exists
- agent_available($vmid, $conf, $noerr);
+ assert_agent_available($vmid, $conf);
my $res = PVE::QemuServer::Monitor::mon_cmd($vmid, "guest-$cmd", %$params);
- check_agent_error($res, $errormsg, $noerr);
+ check_agent_error($res, $errormsg);
return $res;
}
--
2.47.2
* [pve-devel] [PATCH qemu-server 10/31] agent: code style: order module imports according to style guide
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (8 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 09/31] agent: drop unused $noerr argument from helpers Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 11/31] agent: avoid dependency on QemuConfig module Fiona Ebner
` (21 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/Agent.pm | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/PVE/QemuServer/Agent.pm b/src/PVE/QemuServer/Agent.pm
index 9212a0c3..5bace202 100644
--- a/src/PVE/QemuServer/Agent.pm
+++ b/src/PVE/QemuServer/Agent.pm
@@ -3,10 +3,12 @@ package PVE::QemuServer::Agent;
use strict;
use warnings;
+use JSON;
+use MIME::Base64 qw(decode_base64 encode_base64);
+
use PVE::QemuServer;
use PVE::QemuServer::Monitor;
-use MIME::Base64 qw(decode_base64 encode_base64);
-use JSON;
+
use base 'Exporter';
our @EXPORT_OK = qw(
--
2.47.2
* [pve-devel] [PATCH qemu-server 11/31] agent: avoid dependency on QemuConfig module
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (9 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 10/31] agent: code style: order module imports according to style guide Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 12/31] agent: avoid use of deprecated check_running() function Fiona Ebner
` (20 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
The QemuConfig module uses qga_check_running(), which is planned to be
moved to the Agent module. Loading the config at the call sites of
agent_cmd(), qemu_exec() and qemu_exec_status() makes it possible to
avoid introducing that cyclic dependency.
Note that the import for the QemuConfig module was already missing.
Also drops unused variables $write and $res from the 'file-write' API
endpoint implementation.
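The dependency-breaking pattern used here can be sketched like this (illustrative Python, not the real modules; the function and parameter names are hypothetical):

```python
# Sketch: pass the already-loaded config down as an argument instead of
# loading it inside the helper, so the helper no longer needs to import
# the config-loader module (which would create the cycle).
def agent_cmd_old(vmid, load_config, mon_cmd, cmd):
    conf = load_config(vmid)  # helper pulls in the config module -> cycle risk
    # ... availability checks on conf would go here ...
    return mon_cmd(vmid, f'guest-{cmd}')

def agent_cmd_new(vmid, conf, mon_cmd, cmd):
    # caller already loaded conf; helper has no config-module dependency
    # ... availability checks on conf would go here ...
    return mon_cmd(vmid, f'guest-{cmd}')
```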
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/API2/Qemu/Agent.pm | 24 ++++++++++++++++++------
src/PVE/CLI/qm.pm | 6 ++++--
src/PVE/QemuServer/Agent.pm | 12 ++++++------
3 files changed, 28 insertions(+), 14 deletions(-)
diff --git a/src/PVE/API2/Qemu/Agent.pm b/src/PVE/API2/Qemu/Agent.pm
index 3d1952a6..8a9b9264 100644
--- a/src/PVE/API2/Qemu/Agent.pm
+++ b/src/PVE/API2/Qemu/Agent.pm
@@ -257,6 +257,7 @@ __PACKAGE__->register_method({
my ($param) = @_;
my $vmid = $param->{vmid};
+ my $conf = PVE::QemuConfig->load_config($vmid);
my $crypted = $param->{crypted} // 0;
my $args = {
@@ -264,7 +265,8 @@ __PACKAGE__->register_method({
password => encode_base64($param->{password}),
crypted => $crypted ? JSON::true : JSON::false,
};
- my $res = agent_cmd($vmid, "set-user-password", $args, 'cannot set user password');
+ my $res =
+ agent_cmd($vmid, $conf, "set-user-password", $args, 'cannot set user password');
return { result => $res };
},
@@ -317,9 +319,11 @@ __PACKAGE__->register_method({
my ($param) = @_;
my $vmid = $param->{vmid};
+ my $conf = PVE::QemuConfig->load_config($vmid);
+
my $cmd = $param->{command};
- my $res = PVE::QemuServer::Agent::qemu_exec($vmid, $param->{'input-data'}, $cmd);
+ my $res = PVE::QemuServer::Agent::qemu_exec($vmid, $conf, $param->{'input-data'}, $cmd);
return $res;
},
});
@@ -390,9 +394,11 @@ __PACKAGE__->register_method({
my ($param) = @_;
my $vmid = $param->{vmid};
+ my $conf = PVE::QemuConfig->load_config($vmid);
+
my $pid = int($param->{pid});
- my $res = PVE::QemuServer::Agent::qemu_exec_status($vmid, $pid);
+ my $res = PVE::QemuServer::Agent::qemu_exec_status($vmid, $conf, $pid);
return $res;
},
@@ -439,9 +445,10 @@ __PACKAGE__->register_method({
my ($param) = @_;
my $vmid = $param->{vmid};
+ my $conf = PVE::QemuConfig->load_config($vmid);
my $qgafh =
- agent_cmd($vmid, "file-open", { path => $param->{file} }, "can't open file");
+ agent_cmd($vmid, $conf, "file-open", { path => $param->{file} }, "can't open file");
my $bytes_left = $MAX_READ_SIZE;
my $eof = 0;
@@ -516,23 +523,28 @@ __PACKAGE__->register_method({
my ($param) = @_;
my $vmid = $param->{vmid};
+ my $conf = PVE::QemuConfig->load_config($vmid);
my $buf =
($param->{encode} // 1) ? encode_base64($param->{content}) : $param->{content};
my $qgafh = agent_cmd(
$vmid,
+ $conf,
"file-open",
{ path => $param->{file}, mode => 'wb' },
"can't open file",
);
- my $write = agent_cmd(
+
+ agent_cmd(
$vmid,
+ $conf,
"file-write",
{ handle => $qgafh, 'buf-b64' => $buf },
"can't write to file",
);
- my $res = agent_cmd($vmid, "file-close", { handle => $qgafh }, "can't close file");
+
+ agent_cmd($vmid, $conf, "file-close", { handle => $qgafh }, "can't close file");
return;
},
diff --git a/src/PVE/CLI/qm.pm b/src/PVE/CLI/qm.pm
index a7f08cc2..23f71ab0 100755
--- a/src/PVE/CLI/qm.pm
+++ b/src/PVE/CLI/qm.pm
@@ -962,7 +962,9 @@ __PACKAGE__->register_method({
my $args = $param->{'extra-args'};
$args = undef if !$args || !@$args;
- my $res = PVE::QemuServer::Agent::qemu_exec($vmid, $input_data, $args);
+ my $conf = PVE::QemuConfig->load_config($vmid);
+
+ my $res = PVE::QemuServer::Agent::qemu_exec($vmid, $conf, $input_data, $args);
if ($sync) {
my $pid = $res->{pid};
@@ -970,7 +972,7 @@ __PACKAGE__->register_method({
my $starttime = time();
while ($timeout == 0 || (time() - $starttime) < $timeout) {
- my $out = PVE::QemuServer::Agent::qemu_exec_status($vmid, $pid);
+ my $out = PVE::QemuServer::Agent::qemu_exec_status($vmid, $conf, $pid);
if ($out->{exited}) {
$res = $out;
last;
diff --git a/src/PVE/QemuServer/Agent.pm b/src/PVE/QemuServer/Agent.pm
index 5bace202..a81b87fb 100644
--- a/src/PVE/QemuServer/Agent.pm
+++ b/src/PVE/QemuServer/Agent.pm
@@ -47,9 +47,8 @@ sub assert_agent_available {
# loads config, checks if available, executes command, checks for errors
sub agent_cmd {
- my ($vmid, $cmd, $params, $errormsg) = @_;
+ my ($vmid, $conf, $cmd, $params, $errormsg) = @_;
- my $conf = PVE::QemuConfig->load_config($vmid); # also checks if VM exists
assert_agent_available($vmid, $conf);
my $res = PVE::QemuServer::Monitor::mon_cmd($vmid, "guest-$cmd", %$params);
@@ -59,7 +58,7 @@ sub agent_cmd {
}
sub qemu_exec {
- my ($vmid, $input_data, $cmd) = @_;
+ my ($vmid, $conf, $input_data, $cmd) = @_;
my $args = {
'capture-output' => JSON::true,
@@ -83,15 +82,16 @@ sub qemu_exec {
$errmsg .= " (input-data given)";
}
- my $res = agent_cmd($vmid, "exec", $args, $errmsg);
+ my $res = agent_cmd($vmid, $conf, "exec", $args, $errmsg);
return $res;
}
sub qemu_exec_status {
- my ($vmid, $pid) = @_;
+ my ($vmid, $conf, $pid) = @_;
- my $res = agent_cmd($vmid, "exec-status", { pid => $pid }, "can't get exec status for '$pid'");
+ my $res =
+ agent_cmd($vmid, $conf, "exec-status", { pid => $pid }, "can't get exec status for '$pid'");
if ($res->{'out-data'}) {
my $decoded = eval { decode_base64($res->{'out-data'}) };
--
2.47.2
* [pve-devel] [PATCH qemu-server 12/31] agent: avoid use of deprecated check_running() function
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (10 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 11/31] agent: avoid dependency on QemuConfig module Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 13/31] agent: move qga_check_running() to agent module Fiona Ebner
` (19 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/Agent.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/PVE/QemuServer/Agent.pm b/src/PVE/QemuServer/Agent.pm
index a81b87fb..719db4b2 100644
--- a/src/PVE/QemuServer/Agent.pm
+++ b/src/PVE/QemuServer/Agent.pm
@@ -7,6 +7,7 @@ use JSON;
use MIME::Base64 qw(decode_base64 encode_base64);
use PVE::QemuServer;
+use PVE::QemuServer::Helpers;
use PVE::QemuServer::Monitor;
use base 'Exporter';
@@ -41,7 +42,7 @@ sub assert_agent_available {
my ($vmid, $conf) = @_;
die "No QEMU guest agent configured\n" if !defined($conf->{agent});
- die "VM $vmid is not running\n" if !PVE::QemuServer::check_running($vmid);
+ die "VM $vmid is not running\n" if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
die "QEMU guest agent is not running\n" if !PVE::QemuServer::qga_check_running($vmid, 1);
}
--
2.47.2
* [pve-devel] [PATCH qemu-server 13/31] agent: move qga_check_running() to agent module
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (11 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 12/31] agent: avoid use of deprecated check_running() function Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 14/31] move find_vmstate_storage() helper to QemuConfig module Fiona Ebner
` (18 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Makes it possible to call into the module from the main QemuServer
module and other modules that are used by QemuServer without adding a
cyclic dependency.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/API2/Qemu.pm | 5 +++--
src/PVE/QemuConfig.pm | 3 ++-
src/PVE/QemuServer.pm | 12 +-----------
src/PVE/QemuServer/Agent.pm | 15 +++++++++++++--
src/PVE/VZDump/QemuServer.pm | 3 ++-
5 files changed, 21 insertions(+), 17 deletions(-)
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 9600cf8d..747edb62 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -27,6 +27,7 @@ use PVE::GuestHelpers qw(assert_tag_permissions);
use PVE::GuestImport;
use PVE::QemuConfig;
use PVE::QemuServer;
+use PVE::QemuServer::Agent;
use PVE::QemuServer::Cloudinit;
use PVE::QemuServer::CPUConfig;
use PVE::QemuServer::Drive qw(checked_volume_format checked_parse_volname);
@@ -4761,7 +4762,7 @@ __PACKAGE__->register_method({
PVE::QemuConfig->write_config($vmid, $conf);
my $do_trim = PVE::QemuServer::get_qga_key($conf, 'fstrim_cloned_disks');
- if ($running && $do_trim && PVE::QemuServer::qga_check_running($vmid)) {
+ if ($running && $do_trim && PVE::QemuServer::Agent::qga_check_running($vmid)) {
eval { mon_cmd($vmid, "guest-fstrim") };
}
@@ -6623,7 +6624,7 @@ __PACKAGE__->register_method({
return $info;
},
'fstrim' => sub {
- if (PVE::QemuServer::qga_check_running($state->{vmid})) {
+ if (PVE::QemuServer::Agent::qga_check_running($state->{vmid})) {
eval { mon_cmd($state->{vmid}, "guest-fstrim") };
warn "fstrim failed: $@\n" if $@;
}
diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
index 20d9d2af..500b4c0b 100644
--- a/src/PVE/QemuConfig.pm
+++ b/src/PVE/QemuConfig.pm
@@ -8,6 +8,7 @@ use Scalar::Util qw(blessed);
use PVE::AbstractConfig;
use PVE::INotify;
use PVE::JSONSchema;
+use PVE::QemuServer::Agent;
use PVE::QemuServer::CPUConfig;
use PVE::QemuServer::Drive;
use PVE::QemuServer::Helpers;
@@ -293,7 +294,7 @@ sub __snapshot_check_freeze_needed {
$running,
$running
&& PVE::QemuServer::parse_guest_agent($config)->{enabled}
- && PVE::QemuServer::qga_check_running($vmid),
+ && PVE::QemuServer::Agent::qga_check_running($vmid),
);
} else {
return ($running, 0);
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 59958dc0..02dcc02c 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -53,6 +53,7 @@ use PVE::Tools
use PVE::QMPClient;
use PVE::QemuConfig;
use PVE::QemuConfig::NoWrite;
+use PVE::QemuServer::Agent qw(qga_check_running);
use PVE::QemuServer::Helpers
qw(config_aware_timeout get_iscsi_initiator_name min_version kvm_user_version windows_version);
use PVE::QemuServer::Cloudinit;
@@ -7923,17 +7924,6 @@ sub do_snapshots_with_qemu {
return;
}
-sub qga_check_running {
- my ($vmid, $nowarn) = @_;
-
- eval { mon_cmd($vmid, "guest-ping", timeout => 3); };
- if ($@) {
- warn "QEMU Guest Agent is not running - $@" if !$nowarn;
- return 0;
- }
- return 1;
-}
-
=head3 template_create($vmid, $conf [, $disk])
Converts all used disk volumes for the VM with the identifier C<$vmid> and
diff --git a/src/PVE/QemuServer/Agent.pm b/src/PVE/QemuServer/Agent.pm
index 719db4b2..ee48e83e 100644
--- a/src/PVE/QemuServer/Agent.pm
+++ b/src/PVE/QemuServer/Agent.pm
@@ -6,7 +6,6 @@ use warnings;
use JSON;
use MIME::Base64 qw(decode_base64 encode_base64);
-use PVE::QemuServer;
use PVE::QemuServer::Helpers;
use PVE::QemuServer::Monitor;
@@ -15,8 +14,20 @@ use base 'Exporter';
our @EXPORT_OK = qw(
check_agent_error
agent_cmd
+ qga_check_running
);
+sub qga_check_running {
+ my ($vmid, $nowarn) = @_;
+
+ eval { PVE::QemuServer::Monitor::mon_cmd($vmid, "guest-ping", timeout => 3); };
+ if ($@) {
+ warn "QEMU Guest Agent is not running - $@" if !$nowarn;
+ return 0;
+ }
+ return 1;
+}
+
sub check_agent_error {
my ($result, $errmsg, $noerr) = @_;
@@ -43,7 +54,7 @@ sub assert_agent_available {
die "No QEMU guest agent configured\n" if !defined($conf->{agent});
die "VM $vmid is not running\n" if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
- die "QEMU guest agent is not running\n" if !PVE::QemuServer::qga_check_running($vmid, 1);
+ die "QEMU guest agent is not running\n" if !qga_check_running($vmid, 1);
}
# loads config, checks if available, executes command, checks for errors
diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
index 93c55a91..243a927e 100644
--- a/src/PVE/VZDump/QemuServer.pm
+++ b/src/PVE/VZDump/QemuServer.pm
@@ -29,6 +29,7 @@ use PVE::Format qw(render_duration render_bytes);
use PVE::QemuConfig;
use PVE::QemuServer;
+use PVE::QemuServer::Agent;
use PVE::QemuServer::Drive qw(checked_volume_format);
use PVE::QemuServer::Helpers;
use PVE::QemuServer::Machine;
@@ -1082,7 +1083,7 @@ sub qga_fs_freeze {
|| !$self->{vm_was_running}
|| $self->{vm_was_paused};
- if (!PVE::QemuServer::qga_check_running($vmid, 1)) {
+ if (!PVE::QemuServer::Agent::qga_check_running($vmid, 1)) {
$self->loginfo("skipping guest-agent 'fs-freeze', agent configured but not running?");
return;
}
--
2.47.2
* [pve-devel] [PATCH qemu-server 14/31] move find_vmstate_storage() helper to QemuConfig module
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (12 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 13/31] agent: move qga_check_running() to agent module Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 15/31] introduce QemuMigrate::Helpers module Fiona Ebner
` (17 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
While not the main motivation, this has the nice side effect of
removing a call from QemuConfig to the QemuServer main module.
This is in preparation to introduce a RunState module which does not
call back into the main QemuServer module. In particular, vm_suspend()
will be moved to RunState, which needs to call find_vmstate_storage().
Intuitively, the StateFile module seems like the most natural place
for find_vmstate_storage(), but moving find_vmstate_storage() requires
moving foreach_storage_used_by_vm() too and that function calls into
QemuConfig. Now, QemuConfig also calls find_vmstate_storage(), meaning
a cyclic dependency would result.
Note that foreach_storage_used_by_vm() is related to foreach_volume()
and also uses it, so QemuConfig is the natural place for that
function.
So the arguments for moving find_vmstate_storage() to QemuConfig are:
1. most natural way to avoid cyclic dependencies.
2. related function foreach_storage_used_by_vm() belongs there too.
3. vm_suspend() and other functions relating to the run state already
call other QemuConfig methods.
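The selection order implemented by find_vmstate_storage() can be sketched as follows. This is a hypothetical Python model with simplified storage records, not the real Perl code:

```python
# Hypothetical model of find_vmstate_storage()'s selection order:
# explicit conf setting > shared storage > local storage > 'local'
# fallback, preferring file-based (path) storages within each class.
def find_vmstate_storage(conf, storages):
    """storages: iterable of dicts with 'sid', 'shared' and 'path' keys."""
    # first, return storage from conf if set
    if conf.get('vmstatestorage'):
        return conf['vmstatestorage']
    shared = local = None
    for scfg in storages:
        if scfg['shared']:
            if shared is None or scfg['path']:
                shared = scfg['sid']  # prefer file-based storage
        else:
            if local is None or scfg['path']:
                local = scfg['sid']
    # shared storage with a disk > local storage with a disk > 'local'
    return shared or local or 'local'
```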
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/API2/Qemu.pm | 2 +-
src/PVE/QemuConfig.pm | 52 ++++++++++++++++++++++++++++++++++++++++++-
src/PVE/QemuServer.pm | 52 +------------------------------------------
3 files changed, 53 insertions(+), 53 deletions(-)
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 747edb62..7f55998e 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -3922,7 +3922,7 @@ __PACKAGE__->register_method({
if (!$statestorage) {
# get statestorage from config if none is given
my $storecfg = PVE::Storage::config();
- $statestorage = PVE::QemuServer::find_vmstate_storage($conf, $storecfg);
+ $statestorage = PVE::QemuConfig::find_vmstate_storage($conf, $storecfg);
}
$rpcenv->check($authuser, "/storage/$statestorage", ['Datastore.AllocateSpace']);
diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
index 500b4c0b..957c875a 100644
--- a/src/PVE/QemuConfig.pm
+++ b/src/PVE/QemuConfig.pm
@@ -221,7 +221,7 @@ sub __snapshot_save_vmstate {
my $target = $statestorage;
if (!$target) {
- $target = PVE::QemuServer::find_vmstate_storage($conf, $storecfg);
+ $target = find_vmstate_storage($conf, $storecfg);
}
my $mem_size = get_current_memory($conf->{memory});
@@ -712,4 +712,54 @@ sub cleanup_fleecing_images {
record_fleecing_images($vmid, $failed);
}
+sub foreach_storage_used_by_vm {
+ my ($conf, $func) = @_;
+
+ my $sidhash = {};
+
+ PVE::QemuConfig->foreach_volume(
+ $conf,
+ sub {
+ my ($ds, $drive) = @_;
+ return if PVE::QemuServer::Drive::drive_is_cdrom($drive);
+
+ my $volid = $drive->{file};
+
+ my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
+ $sidhash->{$sid} = $sid if $sid;
+ },
+ );
+
+ foreach my $sid (sort keys %$sidhash) {
+ &$func($sid);
+ }
+}
+
+# NOTE: if this logic changes, please update docs & possibly gui logic
+sub find_vmstate_storage {
+ my ($conf, $storecfg) = @_;
+
+ # first, return storage from conf if set
+ return $conf->{vmstatestorage} if $conf->{vmstatestorage};
+
+ my ($target, $shared, $local);
+
+ foreach_storage_used_by_vm(
+ $conf,
+ sub {
+ my ($sid) = @_;
+ my $scfg = PVE::Storage::storage_config($storecfg, $sid);
+ my $dst = $scfg->{shared} ? \$shared : \$local;
+ $$dst = $sid if !$$dst || $scfg->{path}; # prefer file based storage
+ },
+ );
+
+ # second, use shared storage where VM has at least one disk
+ # third, use local storage where VM has at least one disk
+ # fall back to local storage
+ $target = $shared // $local // 'local';
+
+ return $target;
+}
+
1;
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 02dcc02c..d24dc7eb 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -6322,7 +6322,7 @@ sub vm_suspend {
my $date = strftime("%Y-%m-%d", localtime(time()));
$storecfg = PVE::Storage::config();
if (!$statestorage) {
- $statestorage = find_vmstate_storage($conf, $storecfg);
+ $statestorage = PVE::QemuConfig::find_vmstate_storage($conf, $storecfg);
# check permissions for the storage
my $rpcenv = PVE::RPCEnvironment::get();
if ($rpcenv->{type} ne 'cli') {
@@ -7877,29 +7877,6 @@ sub restore_tar_archive {
warn $@ if $@;
}
-sub foreach_storage_used_by_vm {
- my ($conf, $func) = @_;
-
- my $sidhash = {};
-
- PVE::QemuConfig->foreach_volume(
- $conf,
- sub {
- my ($ds, $drive) = @_;
- return if drive_is_cdrom($drive);
-
- my $volid = $drive->{file};
-
- my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
- $sidhash->{$sid} = $sid if $sid;
- },
- );
-
- foreach my $sid (sort keys %$sidhash) {
- &$func($sid);
- }
-}
-
my $qemu_snap_storage = {
rbd => 1,
};
@@ -8558,33 +8535,6 @@ sub resolve_dst_disk_format {
return $format;
}
-# NOTE: if this logic changes, please update docs & possibly gui logic
-sub find_vmstate_storage {
- my ($conf, $storecfg) = @_;
-
- # first, return storage from conf if set
- return $conf->{vmstatestorage} if $conf->{vmstatestorage};
-
- my ($target, $shared, $local);
-
- foreach_storage_used_by_vm(
- $conf,
- sub {
- my ($sid) = @_;
- my $scfg = PVE::Storage::storage_config($storecfg, $sid);
- my $dst = $scfg->{shared} ? \$shared : \$local;
- $$dst = $sid if !$$dst || $scfg->{path}; # prefer file based storage
- },
- );
-
- # second, use shared storage where VM has at least one disk
- # third, use local storage where VM has at least one disk
- # fall back to local storage
- $target = $shared // $local // 'local';
-
- return $target;
-}
-
sub generate_uuid {
my ($uuid, $uuid_str);
UUID::generate($uuid);
--
2.47.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* [pve-devel] [PATCH qemu-server 15/31] introduce QemuMigrate::Helpers module
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (13 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 14/31] move find_vmstate_storage() helper to QemuConfig module Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 16/31] introduce RunState module Fiona Ebner
` (16 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
The QemuMigrate module is high-level and, being an implementation of
AbstractMigrate, should not be called from many other places, so
putting these helpers into a separate module is much more natural.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/API2/Qemu.pm | 6 +-
src/PVE/Makefile | 1 +
src/PVE/QemuConfig.pm | 5 +-
src/PVE/QemuMigrate.pm | 5 +-
src/PVE/QemuMigrate/Helpers.pm | 146 ++++++++++++++++++++++
src/PVE/QemuMigrate/Makefile | 9 ++
src/PVE/QemuServer.pm | 136 +-------------------
src/test/MigrationTest/QemuMigrateMock.pm | 3 -
src/test/MigrationTest/QmMock.pm | 3 -
src/test/MigrationTest/Shared.pm | 7 ++
src/test/snapshot-test.pm | 11 +-
11 files changed, 186 insertions(+), 146 deletions(-)
create mode 100644 src/PVE/QemuMigrate/Helpers.pm
create mode 100644 src/PVE/QemuMigrate/Makefile
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 7f55998e..27426eab 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -45,6 +45,7 @@ use PVE::QemuServer::RNG;
use PVE::QemuServer::USB;
use PVE::QemuServer::Virtiofs qw(max_virtiofs);
use PVE::QemuMigrate;
+use PVE::QemuMigrate::Helpers;
use PVE::RPCEnvironment;
use PVE::AccessControl;
use PVE::INotify;
@@ -5167,7 +5168,7 @@ __PACKAGE__->register_method({
$res->{running} = PVE::QemuServer::check_running($vmid) ? 1 : 0;
my ($local_resources, $mapped_resources, $missing_mappings_by_node) =
- PVE::QemuServer::check_local_resources($vmconf, $res->{running}, 1);
+ PVE::QemuMigrate::Helpers::check_local_resources($vmconf, $res->{running}, 1);
my $vga = PVE::QemuServer::parse_vga($vmconf->{vga});
if ($res->{running} && $vga->{'clipboard'} && $vga->{'clipboard'} eq 'vnc') {
@@ -5903,7 +5904,8 @@ __PACKAGE__->register_method({
if lc($snapname) eq 'pending';
my $vmconf = PVE::QemuConfig->load_config($vmid);
- PVE::QemuServer::check_non_migratable_resources($vmconf, $param->{vmstate}, 0);
+ PVE::QemuMigrate::Helpers::check_non_migratable_resources($vmconf, $param->{vmstate},
+ 0);
my $realcmd = sub {
PVE::Cluster::log_msg('info', $authuser, "snapshot VM $vmid: $snapname");
diff --git a/src/PVE/Makefile b/src/PVE/Makefile
index 01cf9df6..e0537b4a 100644
--- a/src/PVE/Makefile
+++ b/src/PVE/Makefile
@@ -16,4 +16,5 @@ install:
$(MAKE) -C API2 install
$(MAKE) -C CLI install
$(MAKE) -C QemuConfig install
+ $(MAKE) -C QemuMigrate install
$(MAKE) -C QemuServer install
diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
index 957c875a..01104723 100644
--- a/src/PVE/QemuConfig.pm
+++ b/src/PVE/QemuConfig.pm
@@ -8,6 +8,7 @@ use Scalar::Util qw(blessed);
use PVE::AbstractConfig;
use PVE::INotify;
use PVE::JSONSchema;
+use PVE::QemuMigrate::Helpers;
use PVE::QemuServer::Agent;
use PVE::QemuServer::CPUConfig;
use PVE::QemuServer::Drive;
@@ -211,7 +212,7 @@ sub get_backup_volumes {
sub __snapshot_assert_no_blockers {
my ($class, $vmconf, $save_vmstate) = @_;
- PVE::QemuServer::check_non_migratable_resources($vmconf, $save_vmstate, 0);
+ PVE::QemuMigrate::Helpers::check_non_migratable_resources($vmconf, $save_vmstate, 0);
}
sub __snapshot_save_vmstate {
@@ -325,7 +326,7 @@ sub __snapshot_create_vol_snapshots_hook {
PVE::Storage::activate_volumes($storecfg, [$snap->{vmstate}]);
my $state_storage_id = PVE::Storage::parse_volume_id($snap->{vmstate});
- PVE::QemuServer::set_migration_caps($vmid, 1);
+ PVE::QemuMigrate::Helpers::set_migration_caps($vmid, 1);
mon_cmd($vmid, "savevm-start", statefile => $path);
print "saving VM state and RAM using storage '$state_storage_id'\n";
my $render_state = sub {
diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 934d4350..4fd46a76 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -25,6 +25,7 @@ use PVE::Tools;
use PVE::Tunnel;
use PVE::QemuConfig;
+use PVE::QemuMigrate::Helpers;
use PVE::QemuServer::CPUConfig;
use PVE::QemuServer::Drive qw(checked_volume_format);
use PVE::QemuServer::Helpers qw(min_version);
@@ -239,7 +240,7 @@ sub prepare {
}
my ($loc_res, $mapped_res, $missing_mappings_by_node) =
- PVE::QemuServer::check_local_resources($conf, $running, 1);
+ PVE::QemuMigrate::Helpers::check_local_resources($conf, $running, 1);
my $blocking_resources = [];
for my $res ($loc_res->@*) {
if (!defined($mapped_res->{$res})) {
@@ -1235,7 +1236,7 @@ sub phase2 {
my $defaults = PVE::QemuServer::load_defaults();
$self->log('info', "set migration capabilities");
- eval { PVE::QemuServer::set_migration_caps($vmid) };
+ eval { PVE::QemuMigrate::Helpers::set_migration_caps($vmid) };
warn $@ if $@;
my $qemu_migrate_params = {};
diff --git a/src/PVE/QemuMigrate/Helpers.pm b/src/PVE/QemuMigrate/Helpers.pm
new file mode 100644
index 00000000..f191565a
--- /dev/null
+++ b/src/PVE/QemuMigrate/Helpers.pm
@@ -0,0 +1,146 @@
+package PVE::QemuMigrate::Helpers;
+
+use strict;
+use warnings;
+
+use JSON;
+
+use PVE::Cluster;
+use PVE::JSONSchema qw(parse_property_string);
+use PVE::Mapping::Dir;
+use PVE::Mapping::PCI;
+use PVE::Mapping::USB;
+
+use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Virtiofs;
+
+sub check_non_migratable_resources {
+ my ($conf, $state, $noerr) = @_;
+
+ my @blockers = ();
+ if ($state) {
+ push @blockers, "amd-sev" if $conf->{"amd-sev"};
+ push @blockers, "virtiofs" if PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
+ }
+
+ if (scalar(@blockers) && !$noerr) {
+ die "Cannot live-migrate, snapshot (with RAM), or hibernate a VM with: "
+ . join(', ', @blockers) . "\n";
+ }
+
+ return @blockers;
+}
+
+# test if VM uses local resources (to prevent migration)
+sub check_local_resources {
+ my ($conf, $state, $noerr) = @_;
+
+ my @loc_res = ();
+ my $mapped_res = {};
+
+ my @non_migratable_resources = check_non_migratable_resources($conf, $state, $noerr);
+ push(@loc_res, @non_migratable_resources);
+
+ my $nodelist = PVE::Cluster::get_nodelist();
+ my $pci_map = PVE::Mapping::PCI::config();
+ my $usb_map = PVE::Mapping::USB::config();
+ my $dir_map = PVE::Mapping::Dir::config();
+
+ my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
+
+ my $add_missing_mapping = sub {
+ my ($type, $key, $id) = @_;
+ for my $node (@$nodelist) {
+ my $entry;
+ if ($type eq 'pci') {
+ $entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
+ } elsif ($type eq 'usb') {
+ $entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+ } elsif ($type eq 'dir') {
+ $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
+ }
+ if (!scalar($entry->@*)) {
+ push @{ $missing_mappings_by_node->{$node} }, $key;
+ }
+ }
+ };
+
+ push @loc_res, "hostusb" if $conf->{hostusb}; # old syntax
+ push @loc_res, "hostpci" if $conf->{hostpci}; # old syntax
+
+ push @loc_res, "ivshmem" if $conf->{ivshmem};
+
+ foreach my $k (keys %$conf) {
+ if ($k =~ m/^usb/) {
+ my $entry = parse_property_string('pve-qm-usb', $conf->{$k});
+ next if $entry->{host} && $entry->{host} =~ m/^spice$/i;
+ if (my $name = $entry->{mapping}) {
+ $add_missing_mapping->('usb', $k, $name);
+ $mapped_res->{$k} = { name => $name };
+ }
+ }
+ if ($k =~ m/^hostpci/) {
+ my $entry = parse_property_string('pve-qm-hostpci', $conf->{$k});
+ if (my $name = $entry->{mapping}) {
+ $add_missing_mapping->('pci', $k, $name);
+ my $mapped_device = { name => $name };
+ $mapped_res->{$k} = $mapped_device;
+
+ if ($pci_map->{ids}->{$name}->{'live-migration-capable'}) {
+ $mapped_device->{'live-migration'} = 1;
+ # don't add mapped device with live migration as blocker
+ next;
+ }
+
+ # don't add mapped devices as blocker for offline migration but still iterate over
+ # all mappings above to collect on which nodes they are available.
+ next if !$state;
+ }
+ }
+ if ($k =~ m/^virtiofs/) {
+ my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+ $add_missing_mapping->('dir', $k, $entry->{dirid});
+ $mapped_res->{$k} = { name => $entry->{dirid} };
+ }
+ # sockets are safe: they will recreated be on the target side post-migrate
+ next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
+ push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
+ }
+
+ die "VM uses local resources\n" if scalar @loc_res && !$noerr;
+
+ return wantarray ? (\@loc_res, $mapped_res, $missing_mappings_by_node) : \@loc_res;
+}
+
+sub set_migration_caps {
+ my ($vmid, $savevm) = @_;
+
+ my $qemu_support = eval { mon_cmd($vmid, "query-proxmox-support") };
+
+ my $bitmap_prop = $savevm ? 'pbs-dirty-bitmap-savevm' : 'pbs-dirty-bitmap-migration';
+ my $dirty_bitmaps = $qemu_support->{$bitmap_prop} ? 1 : 0;
+
+ my $cap_ref = [];
+
+ my $enabled_cap = {
+ "auto-converge" => 1,
+ "xbzrle" => 1,
+ "dirty-bitmaps" => $dirty_bitmaps,
+ };
+
+ my $supported_capabilities = mon_cmd($vmid, "query-migrate-capabilities");
+
+ for my $supported_capability (@$supported_capabilities) {
+ push @$cap_ref,
+ {
+ capability => $supported_capability->{capability},
+ state => $enabled_cap->{ $supported_capability->{capability} }
+ ? JSON::true
+ : JSON::false,
+ };
+ }
+
+ mon_cmd($vmid, "migrate-set-capabilities", capabilities => $cap_ref);
+}
+
+1;
diff --git a/src/PVE/QemuMigrate/Makefile b/src/PVE/QemuMigrate/Makefile
new file mode 100644
index 00000000..c6e50d3f
--- /dev/null
+++ b/src/PVE/QemuMigrate/Makefile
@@ -0,0 +1,9 @@
+DESTDIR=
+PREFIX=/usr
+PERLDIR=$(PREFIX)/share/perl5
+
+SOURCES=Helpers.pm
+
+.PHONY: install
+install: $(SOURCES)
+ for i in $(SOURCES); do install -D -m 0644 $$i $(DESTDIR)$(PERLDIR)/PVE/QemuMigrate/$$i; done
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index d24dc7eb..3c612b44 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -53,6 +53,7 @@ use PVE::Tools
use PVE::QMPClient;
use PVE::QemuConfig;
use PVE::QemuConfig::NoWrite;
+use PVE::QemuMigrate::Helpers;
use PVE::QemuServer::Agent qw(qga_check_running);
use PVE::QemuServer::Helpers
qw(config_aware_timeout get_iscsi_initiator_name min_version kvm_user_version windows_version);
@@ -2254,104 +2255,6 @@ sub config_list {
return $res;
}
-sub check_non_migratable_resources {
- my ($conf, $state, $noerr) = @_;
-
- my @blockers = ();
- if ($state) {
- push @blockers, "amd-sev" if $conf->{"amd-sev"};
- push @blockers, "virtiofs" if PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
- }
-
- if (scalar(@blockers) && !$noerr) {
- die "Cannot live-migrate, snapshot (with RAM), or hibernate a VM with: "
- . join(', ', @blockers) . "\n";
- }
-
- return @blockers;
-}
-
-# test if VM uses local resources (to prevent migration)
-sub check_local_resources {
- my ($conf, $state, $noerr) = @_;
-
- my @loc_res = ();
- my $mapped_res = {};
-
- my @non_migratable_resources = check_non_migratable_resources($conf, $state, $noerr);
- push(@loc_res, @non_migratable_resources);
-
- my $nodelist = PVE::Cluster::get_nodelist();
- my $pci_map = PVE::Mapping::PCI::config();
- my $usb_map = PVE::Mapping::USB::config();
- my $dir_map = PVE::Mapping::Dir::config();
-
- my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
-
- my $add_missing_mapping = sub {
- my ($type, $key, $id) = @_;
- for my $node (@$nodelist) {
- my $entry;
- if ($type eq 'pci') {
- $entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
- } elsif ($type eq 'usb') {
- $entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
- } elsif ($type eq 'dir') {
- $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
- }
- if (!scalar($entry->@*)) {
- push @{ $missing_mappings_by_node->{$node} }, $key;
- }
- }
- };
-
- push @loc_res, "hostusb" if $conf->{hostusb}; # old syntax
- push @loc_res, "hostpci" if $conf->{hostpci}; # old syntax
-
- push @loc_res, "ivshmem" if $conf->{ivshmem};
-
- foreach my $k (keys %$conf) {
- if ($k =~ m/^usb/) {
- my $entry = parse_property_string('pve-qm-usb', $conf->{$k});
- next if $entry->{host} && $entry->{host} =~ m/^spice$/i;
- if (my $name = $entry->{mapping}) {
- $add_missing_mapping->('usb', $k, $name);
- $mapped_res->{$k} = { name => $name };
- }
- }
- if ($k =~ m/^hostpci/) {
- my $entry = parse_property_string('pve-qm-hostpci', $conf->{$k});
- if (my $name = $entry->{mapping}) {
- $add_missing_mapping->('pci', $k, $name);
- my $mapped_device = { name => $name };
- $mapped_res->{$k} = $mapped_device;
-
- if ($pci_map->{ids}->{$name}->{'live-migration-capable'}) {
- $mapped_device->{'live-migration'} = 1;
- # don't add mapped device with live migration as blocker
- next;
- }
-
- # don't add mapped devices as blocker for offline migration but still iterate over
- # all mappings above to collect on which nodes they are available.
- next if !$state;
- }
- }
- if ($k =~ m/^virtiofs/) {
- my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
- $add_missing_mapping->('dir', $k, $entry->{dirid});
- $mapped_res->{$k} = { name => $entry->{dirid} };
- }
- # sockets are safe: they will recreated be on the target side post-migrate
- next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
- push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
- }
-
- die "VM uses local resources\n" if scalar @loc_res && !$noerr;
-
- return wantarray ? (\@loc_res, $mapped_res, $missing_mappings_by_node) : \@loc_res;
-}
-
# check if used storages are available on all nodes (use by migrate)
sub check_storage_availability {
my ($storecfg, $conf, $node) = @_;
@@ -4508,37 +4411,6 @@ sub qemu_volume_snapshot_delete {
}
}
-sub set_migration_caps {
- my ($vmid, $savevm) = @_;
-
- my $qemu_support = eval { mon_cmd($vmid, "query-proxmox-support") };
-
- my $bitmap_prop = $savevm ? 'pbs-dirty-bitmap-savevm' : 'pbs-dirty-bitmap-migration';
- my $dirty_bitmaps = $qemu_support->{$bitmap_prop} ? 1 : 0;
-
- my $cap_ref = [];
-
- my $enabled_cap = {
- "auto-converge" => 1,
- "xbzrle" => 1,
- "dirty-bitmaps" => $dirty_bitmaps,
- };
-
- my $supported_capabilities = mon_cmd($vmid, "query-migrate-capabilities");
-
- for my $supported_capability (@$supported_capabilities) {
- push @$cap_ref,
- {
- capability => $supported_capability->{capability},
- state => $enabled_cap->{ $supported_capability->{capability} }
- ? JSON::true
- : JSON::false,
- };
- }
-
- mon_cmd($vmid, "migrate-set-capabilities", capabilities => $cap_ref);
-}
-
sub foreach_volid {
my ($conf, $func, @param) = @_;
@@ -5893,7 +5765,7 @@ sub vm_start_nolock {
}
if ($migratedfrom) {
- eval { set_migration_caps($vmid); };
+ eval { PVE::QemuMigrate::Helpers::set_migration_caps($vmid); };
warn $@ if $@;
if ($spice_port) {
@@ -6315,7 +6187,7 @@ sub vm_suspend {
die "cannot suspend to disk during backup\n"
if $is_backing_up && $includestate;
- check_non_migratable_resources($conf, $includestate, 0);
+ PVE::QemuMigrate::Helpers::check_non_migratable_resources($conf, $includestate, 0);
if ($includestate) {
$conf->{lock} = 'suspending';
@@ -6351,7 +6223,7 @@ sub vm_suspend {
PVE::Storage::activate_volumes($storecfg, [$vmstate]);
eval {
- set_migration_caps($vmid, 1);
+ PVE::QemuMigrate::Helpers::set_migration_caps($vmid, 1);
mon_cmd($vmid, "savevm-start", statefile => $path);
for (;;) {
my $state = mon_cmd($vmid, "query-savevm");
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index 1b95a2ff..56a1d777 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -167,9 +167,6 @@ $MigrationTest::Shared::qemu_server_module->mock(
qemu_drive_mirror_switch_to_active_mode => sub {
return;
},
- set_migration_caps => sub {
- return;
- },
vm_stop => sub {
$vm_stop_executed = 1;
delete $expected_calls->{'vm_stop'};
diff --git a/src/test/MigrationTest/QmMock.pm b/src/test/MigrationTest/QmMock.pm
index 69b9c2c9..3eaa131f 100644
--- a/src/test/MigrationTest/QmMock.pm
+++ b/src/test/MigrationTest/QmMock.pm
@@ -142,9 +142,6 @@ $MigrationTest::Shared::qemu_server_module->mock(
}
die "run_command (mocked) - implement me: ${cmd_msg}";
},
- set_migration_caps => sub {
- return;
- },
vm_migrate_alloc_nbd_disks => sub {
my $nbd =
$MigrationTest::Shared::qemu_server_module->original('vm_migrate_alloc_nbd_disks')
diff --git a/src/test/MigrationTest/Shared.pm b/src/test/MigrationTest/Shared.pm
index e29cd1df..a51e1692 100644
--- a/src/test/MigrationTest/Shared.pm
+++ b/src/test/MigrationTest/Shared.pm
@@ -135,6 +135,13 @@ $qemu_config_module->mock(
},
);
+our $qemu_migrate_helpers_module = Test::MockModule->new("PVE::QemuMigrate::Helpers");
+$qemu_migrate_helpers_module->mock(
+ set_migration_caps => sub {
+ return;
+ },
+);
+
our $qemu_server_cloudinit_module = Test::MockModule->new("PVE::QemuServer::Cloudinit");
$qemu_server_cloudinit_module->mock(
generate_cloudinitconfig => sub {
diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
index 1a8623c0..4fce87f1 100644
--- a/src/test/snapshot-test.pm
+++ b/src/test/snapshot-test.pm
@@ -390,6 +390,12 @@ sub qmp_cmd {
}
# END mocked PVE::QemuServer::Monitor methods
+#
+# BEGIN mocked PVE::QemuMigrate::Helpers methods
+
+sub set_migration_caps { } # ignored
+
+# END mocked PVE::QemuMigrate::Helpers methods
# BEGIN redefine PVE::QemuServer methods
@@ -429,13 +435,14 @@ sub vm_stop {
return;
}
-sub set_migration_caps { } # ignored
-
# END redefine PVE::QemuServer methods
PVE::Tools::run_command("rm -rf snapshot-working");
PVE::Tools::run_command("cp -a snapshot-input snapshot-working");
+my $qemu_migrate_helpers_module = Test::MockModule->new('PVE::QemuMigrate::Helpers');
+$qemu_migrate_helpers_module->mock('set_migration_caps', \&set_migration_caps);
+
my $qemu_helpers_module = Test::MockModule->new('PVE::QemuServer::Helpers');
$qemu_helpers_module->mock('vm_running_locally', \&vm_running_locally);
--
2.47.2
* [pve-devel] [PATCH qemu-server 16/31] introduce RunState module
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (14 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 15/31] introduce QemuMigrate::Helpers module Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 17/31] code cleanup: drive mirror: do not prefix calls to function in the same module Fiona Ebner
` (15 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
For now, move only the vm_resume() and vm_suspend() functions. Others
like vm_stop() and friends, vm_reboot() and vm_start() would require
more preparation.
Apart from slightly improving modularity, this is in preparation to
add a BlockJob module, where vm_resume() and vm_suspend() need to be
called after drive-mirror, to avoid a cyclic dependency with the main
QemuServer module.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/API2/Qemu.pm | 7 +-
src/PVE/CLI/qm.pm | 3 +-
src/PVE/QemuServer.pm | 175 +------------------------------
src/PVE/QemuServer/Makefile | 1 +
src/PVE/QemuServer/RunState.pm | 185 +++++++++++++++++++++++++++++++++
5 files changed, 196 insertions(+), 175 deletions(-)
create mode 100644 src/PVE/QemuServer/RunState.pm
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 27426eab..de762cca 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -42,6 +42,7 @@ use PVE::QemuServer::OVMF;
use PVE::QemuServer::PCI;
use PVE::QemuServer::QMPHelpers;
use PVE::QemuServer::RNG;
+use PVE::QemuServer::RunState;
use PVE::QemuServer::USB;
use PVE::QemuServer::Virtiofs qw(max_virtiofs);
use PVE::QemuMigrate;
@@ -3934,7 +3935,7 @@ __PACKAGE__->register_method({
syslog('info', "suspend VM $vmid: $upid\n");
- PVE::QemuServer::vm_suspend($vmid, $skiplock, $todisk, $statestorage);
+ PVE::QemuServer::RunState::vm_suspend($vmid, $skiplock, $todisk, $statestorage);
return;
};
@@ -4011,7 +4012,7 @@ __PACKAGE__->register_method({
syslog('info', "resume VM $vmid: $upid\n");
if (!$to_disk_suspended) {
- PVE::QemuServer::vm_resume($vmid, $skiplock, $nocheck);
+ PVE::QemuServer::RunState::vm_resume($vmid, $skiplock, $nocheck);
} else {
my $storecfg = PVE::Storage::config();
PVE::QemuServer::vm_start($storecfg, $vmid, { skiplock => $skiplock });
@@ -6642,7 +6643,7 @@ __PACKAGE__->register_method({
},
'resume' => sub {
if (PVE::QemuServer::Helpers::vm_running_locally($state->{vmid})) {
- PVE::QemuServer::vm_resume($state->{vmid}, 1, 1);
+ PVE::QemuServer::RunState::vm_resume($state->{vmid}, 1, 1);
} else {
die "VM $state->{vmid} not running\n";
}
diff --git a/src/PVE/CLI/qm.pm b/src/PVE/CLI/qm.pm
index 23f71ab0..f3e9a702 100755
--- a/src/PVE/CLI/qm.pm
+++ b/src/PVE/CLI/qm.pm
@@ -36,6 +36,7 @@ use PVE::QemuServer::Agent;
use PVE::QemuServer::ImportDisk;
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::QMPHelpers;
+use PVE::QemuServer::RunState;
use PVE::QemuServer;
use PVE::CLIHandler;
@@ -465,7 +466,7 @@ __PACKAGE__->register_method({
# check_running and vm_resume with nocheck, since local node
# might not have processed config move/rename yet
if (PVE::QemuServer::check_running($vmid, 1)) {
- eval { PVE::QemuServer::vm_resume($vmid, 1, 1); };
+ eval { PVE::QemuServer::RunState::vm_resume($vmid, 1, 1); };
if ($@) {
$tunnel_write->("ERR: resume failed - $@");
} else {
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 3c612b44..942a1363 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -81,6 +81,7 @@ use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port
use PVE::QemuServer::QemuImage;
use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
use PVE::QemuServer::RNG qw(parse_rng print_rng_device_commandline print_rng_object_commandline);
+use PVE::QemuServer::RunState;
use PVE::QemuServer::StateFile;
use PVE::QemuServer::USB;
use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
@@ -5359,7 +5360,7 @@ sub vm_start {
if ($has_backup_lock && $running) {
# a backup is currently running, attempt to start the guest in the
# existing QEMU instance
- return vm_resume($vmid);
+ return PVE::QemuServer::RunState::vm_resume($vmid);
}
PVE::QemuConfig->check_lock($conf)
@@ -6165,174 +6166,6 @@ sub vm_reboot {
);
}
-# note: if using the statestorage parameter, the caller has to check privileges
-sub vm_suspend {
- my ($vmid, $skiplock, $includestate, $statestorage) = @_;
-
- my $conf;
- my $path;
- my $storecfg;
- my $vmstate;
-
- PVE::QemuConfig->lock_config(
- $vmid,
- sub {
-
- $conf = PVE::QemuConfig->load_config($vmid);
-
- my $is_backing_up = PVE::QemuConfig->has_lock($conf, 'backup');
- PVE::QemuConfig->check_lock($conf)
- if !($skiplock || $is_backing_up);
-
- die "cannot suspend to disk during backup\n"
- if $is_backing_up && $includestate;
-
- PVE::QemuMigrate::Helpers::check_non_migratable_resources($conf, $includestate, 0);
-
- if ($includestate) {
- $conf->{lock} = 'suspending';
- my $date = strftime("%Y-%m-%d", localtime(time()));
- $storecfg = PVE::Storage::config();
- if (!$statestorage) {
- $statestorage = PVE::QemuConfig::find_vmstate_storage($conf, $storecfg);
- # check permissions for the storage
- my $rpcenv = PVE::RPCEnvironment::get();
- if ($rpcenv->{type} ne 'cli') {
- my $authuser = $rpcenv->get_user();
- $rpcenv->check(
- $authuser,
- "/storage/$statestorage",
- ['Datastore.AllocateSpace'],
- );
- }
- }
-
- $vmstate = PVE::QemuConfig->__snapshot_save_vmstate(
- $vmid, $conf, "suspend-$date", $storecfg, $statestorage, 1,
- );
- $path = PVE::Storage::path($storecfg, $vmstate);
- PVE::QemuConfig->write_config($vmid, $conf);
- } else {
- mon_cmd($vmid, "stop");
- }
- },
- );
-
- if ($includestate) {
- # save vm state
- PVE::Storage::activate_volumes($storecfg, [$vmstate]);
-
- eval {
- PVE::QemuMigrate::Helpers::set_migration_caps($vmid, 1);
- mon_cmd($vmid, "savevm-start", statefile => $path);
- for (;;) {
- my $state = mon_cmd($vmid, "query-savevm");
- if (!$state->{status}) {
- die "savevm not active\n";
- } elsif ($state->{status} eq 'active') {
- sleep(1);
- next;
- } elsif ($state->{status} eq 'completed') {
- print "State saved, quitting\n";
- last;
- } elsif ($state->{status} eq 'failed' && $state->{error}) {
- die "query-savevm failed with error '$state->{error}'\n";
- } else {
- die "query-savevm returned status '$state->{status}'\n";
- }
- }
- };
- my $err = $@;
-
- PVE::QemuConfig->lock_config(
- $vmid,
- sub {
- $conf = PVE::QemuConfig->load_config($vmid);
- if ($err) {
- # cleanup, but leave suspending lock, to indicate something went wrong
- eval {
- eval { mon_cmd($vmid, "savevm-end"); };
- warn $@ if $@;
- PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
- PVE::Storage::vdisk_free($storecfg, $vmstate);
- delete $conf->@{qw(vmstate runningmachine runningcpu)};
- PVE::QemuConfig->write_config($vmid, $conf);
- };
- warn $@ if $@;
- die $err;
- }
-
- die "lock changed unexpectedly\n"
- if !PVE::QemuConfig->has_lock($conf, 'suspending');
-
- mon_cmd($vmid, "quit");
- $conf->{lock} = 'suspended';
- PVE::QemuConfig->write_config($vmid, $conf);
- },
- );
- }
-}
-
-# $nocheck is set when called as part of a migration - in this context the
-# location of the config file (source or target node) is not deterministic,
-# since migration cannot wait for pmxcfs to process the rename
-sub vm_resume {
- my ($vmid, $skiplock, $nocheck) = @_;
-
- PVE::QemuConfig->lock_config(
- $vmid,
- sub {
- # After migration, the VM might not immediately be able to respond to QMP commands, because
- # activating the block devices might take a bit of time.
- my $res = mon_cmd($vmid, 'query-status', timeout => 60);
- my $resume_cmd = 'cont';
- my $reset = 0;
- my $conf;
- if ($nocheck) {
- $conf = eval { PVE::QemuConfig->load_config($vmid) }; # try on target node
- if ($@) {
- my $vmlist = PVE::Cluster::get_vmlist();
- if (exists($vmlist->{ids}->{$vmid})) {
- my $node = $vmlist->{ids}->{$vmid}->{node};
- $conf = eval { PVE::QemuConfig->load_config($vmid, $node) }; # try on source node
- }
- if (!$conf) {
- PVE::Cluster::cfs_update(); # vmlist was wrong, invalidate cache
- $conf = PVE::QemuConfig->load_config($vmid); # last try on target node again
- }
- }
- } else {
- $conf = PVE::QemuConfig->load_config($vmid);
- }
-
- die "VM $vmid is a template and cannot be resumed!\n"
- if PVE::QemuConfig->is_template($conf);
-
- if ($res->{status}) {
- return if $res->{status} eq 'running'; # job done, go home
- $resume_cmd = 'system_wakeup' if $res->{status} eq 'suspended';
- $reset = 1 if $res->{status} eq 'shutdown';
- }
-
- if (!$nocheck) {
- PVE::QemuConfig->check_lock($conf)
- if !($skiplock || PVE::QemuConfig->has_lock($conf, 'backup'));
- }
-
- if ($reset) {
- # required if a VM shuts down during a backup and we get a resume
- # request before the backup finishes for example
- mon_cmd($vmid, "system_reset");
- }
-
- PVE::QemuServer::Network::add_nets_bridge_fdb($conf, $vmid)
- if $resume_cmd eq 'cont';
-
- mon_cmd($vmid, $resume_cmd);
- },
- );
-}
-
sub vm_sendkey {
my ($vmid, $skiplock, $key) = @_;
@@ -7967,7 +7800,7 @@ sub qemu_drive_mirror_monitor {
warn $@ if $@;
} else {
print "suspend vm\n";
- eval { PVE::QemuServer::vm_suspend($vmid, 1); };
+ eval { PVE::QemuServer::RunState::vm_suspend($vmid, 1); };
warn $@ if $@;
}
@@ -7980,7 +7813,7 @@ sub qemu_drive_mirror_monitor {
warn $@ if $@;
} else {
print "resume vm\n";
- eval { PVE::QemuServer::vm_resume($vmid, 1, 1); };
+ eval { PVE::QemuServer::RunState::vm_resume($vmid, 1, 1); };
warn $@ if $@;
}
diff --git a/src/PVE/QemuServer/Makefile b/src/PVE/QemuServer/Makefile
index e30c571c..5f475c73 100644
--- a/src/PVE/QemuServer/Makefile
+++ b/src/PVE/QemuServer/Makefile
@@ -20,6 +20,7 @@ SOURCES=Agent.pm \
QemuImage.pm \
QMPHelpers.pm \
RNG.pm \
+ RunState.pm \
StateFile.pm \
USB.pm \
Virtiofs.pm
diff --git a/src/PVE/QemuServer/RunState.pm b/src/PVE/QemuServer/RunState.pm
new file mode 100644
index 00000000..05e7fb47
--- /dev/null
+++ b/src/PVE/QemuServer/RunState.pm
@@ -0,0 +1,185 @@
+package PVE::QemuServer::RunState;
+
+use strict;
+use warnings;
+
+use POSIX qw(strftime);
+
+use PVE::Cluster;
+use PVE::RPCEnvironment;
+use PVE::Storage;
+
+use PVE::QemuConfig;
+use PVE::QemuMigrate::Helpers;
+use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Network;
+
+# note: if using the statestorage parameter, the caller has to check privileges
+sub vm_suspend {
+ my ($vmid, $skiplock, $includestate, $statestorage) = @_;
+
+ my $conf;
+ my $path;
+ my $storecfg;
+ my $vmstate;
+
+ PVE::QemuConfig->lock_config(
+ $vmid,
+ sub {
+
+ $conf = PVE::QemuConfig->load_config($vmid);
+
+ my $is_backing_up = PVE::QemuConfig->has_lock($conf, 'backup');
+ PVE::QemuConfig->check_lock($conf)
+ if !($skiplock || $is_backing_up);
+
+ die "cannot suspend to disk during backup\n"
+ if $is_backing_up && $includestate;
+
+ PVE::QemuMigrate::Helpers::check_non_migratable_resources($conf, $includestate, 0);
+
+ if ($includestate) {
+ $conf->{lock} = 'suspending';
+ my $date = strftime("%Y-%m-%d", localtime(time()));
+ $storecfg = PVE::Storage::config();
+ if (!$statestorage) {
+ $statestorage = PVE::QemuConfig::find_vmstate_storage($conf, $storecfg);
+ # check permissions for the storage
+ my $rpcenv = PVE::RPCEnvironment::get();
+ if ($rpcenv->{type} ne 'cli') {
+ my $authuser = $rpcenv->get_user();
+ $rpcenv->check(
+ $authuser,
+ "/storage/$statestorage",
+ ['Datastore.AllocateSpace'],
+ );
+ }
+ }
+
+ $vmstate = PVE::QemuConfig->__snapshot_save_vmstate(
+ $vmid, $conf, "suspend-$date", $storecfg, $statestorage, 1,
+ );
+ $path = PVE::Storage::path($storecfg, $vmstate);
+ PVE::QemuConfig->write_config($vmid, $conf);
+ } else {
+ mon_cmd($vmid, "stop");
+ }
+ },
+ );
+
+ if ($includestate) {
+ # save vm state
+ PVE::Storage::activate_volumes($storecfg, [$vmstate]);
+
+ eval {
+ PVE::QemuMigrate::Helpers::set_migration_caps($vmid, 1);
+ mon_cmd($vmid, "savevm-start", statefile => $path);
+ for (;;) {
+ my $state = mon_cmd($vmid, "query-savevm");
+ if (!$state->{status}) {
+ die "savevm not active\n";
+ } elsif ($state->{status} eq 'active') {
+ sleep(1);
+ next;
+ } elsif ($state->{status} eq 'completed') {
+ print "State saved, quitting\n";
+ last;
+ } elsif ($state->{status} eq 'failed' && $state->{error}) {
+ die "query-savevm failed with error '$state->{error}'\n";
+ } else {
+ die "query-savevm returned status '$state->{status}'\n";
+ }
+ }
+ };
+ my $err = $@;
+
+ PVE::QemuConfig->lock_config(
+ $vmid,
+ sub {
+ $conf = PVE::QemuConfig->load_config($vmid);
+ if ($err) {
+ # cleanup, but leave suspending lock, to indicate something went wrong
+ eval {
+ eval { mon_cmd($vmid, "savevm-end"); };
+ warn $@ if $@;
+ PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
+ PVE::Storage::vdisk_free($storecfg, $vmstate);
+ delete $conf->@{qw(vmstate runningmachine runningcpu)};
+ PVE::QemuConfig->write_config($vmid, $conf);
+ };
+ warn $@ if $@;
+ die $err;
+ }
+
+ die "lock changed unexpectedly\n"
+ if !PVE::QemuConfig->has_lock($conf, 'suspending');
+
+ mon_cmd($vmid, "quit");
+ $conf->{lock} = 'suspended';
+ PVE::QemuConfig->write_config($vmid, $conf);
+ },
+ );
+ }
+}
+
+# $nocheck is set when called as part of a migration - in this context the
+# location of the config file (source or target node) is not deterministic,
+# since migration cannot wait for pmxcfs to process the rename
+sub vm_resume {
+ my ($vmid, $skiplock, $nocheck) = @_;
+
+ PVE::QemuConfig->lock_config(
+ $vmid,
+ sub {
+ # After migration, the VM might not immediately be able to respond to QMP commands, because
+ # activating the block devices might take a bit of time.
+ my $res = mon_cmd($vmid, 'query-status', timeout => 60);
+ my $resume_cmd = 'cont';
+ my $reset = 0;
+ my $conf;
+ if ($nocheck) {
+ $conf = eval { PVE::QemuConfig->load_config($vmid) }; # try on target node
+ if ($@) {
+ my $vmlist = PVE::Cluster::get_vmlist();
+ if (exists($vmlist->{ids}->{$vmid})) {
+ my $node = $vmlist->{ids}->{$vmid}->{node};
+ $conf = eval { PVE::QemuConfig->load_config($vmid, $node) }; # try on source node
+ }
+ if (!$conf) {
+ PVE::Cluster::cfs_update(); # vmlist was wrong, invalidate cache
+ $conf = PVE::QemuConfig->load_config($vmid); # last try on target node again
+ }
+ }
+ } else {
+ $conf = PVE::QemuConfig->load_config($vmid);
+ }
+
+ die "VM $vmid is a template and cannot be resumed!\n"
+ if PVE::QemuConfig->is_template($conf);
+
+ if ($res->{status}) {
+ return if $res->{status} eq 'running'; # job done, go home
+ $resume_cmd = 'system_wakeup' if $res->{status} eq 'suspended';
+ $reset = 1 if $res->{status} eq 'shutdown';
+ }
+
+ if (!$nocheck) {
+ PVE::QemuConfig->check_lock($conf)
+ if !($skiplock || PVE::QemuConfig->has_lock($conf, 'backup'));
+ }
+
+ if ($reset) {
+ # required if a VM shuts down during a backup and we get a resume
+ # request before the backup finishes for example
+ mon_cmd($vmid, "system_reset");
+ }
+
+ PVE::QemuServer::Network::add_nets_bridge_fdb($conf, $vmid)
+ if $resume_cmd eq 'cont';
+
+ mon_cmd($vmid, $resume_cmd);
+ },
+ );
+}
+
+1;
--
2.47.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* [pve-devel] [PATCH qemu-server 17/31] code cleanup: drive mirror: do not prefix calls to function in the same module
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (15 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 16/31] introduce RunState module Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 18/31] introduce BlockJob module Fiona Ebner
` (14 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
In preparation for moving block-job-related helpers to a dedicated
module. This way, the moved code will show up clearly in the diff,
without changed lines sticking out in between.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 942a1363..7e944743 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7701,7 +7701,7 @@ sub qemu_drive_mirror {
# if a job already runs for this device we get an error, catch it for cleanup
eval { mon_cmd($vmid, "drive-mirror", %$opts); };
if (my $err = $@) {
- eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid, $jobs) };
+ eval { qemu_blockjobs_cancel($vmid, $jobs) };
warn "$@\n" if $@;
die "mirroring error: $err\n";
}
@@ -7805,7 +7805,7 @@ sub qemu_drive_mirror_monitor {
}
# if we clone a disk for a new target vm, we don't switch the disk
- PVE::QemuServer::qemu_blockjobs_cancel($vmid, $jobs);
+ qemu_blockjobs_cancel($vmid, $jobs);
if ($agent_running) {
print "unfreeze filesystem\n";
@@ -7852,7 +7852,7 @@ sub qemu_drive_mirror_monitor {
my $err = $@;
if ($err) {
- eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid, $jobs) };
+ eval { qemu_blockjobs_cancel($vmid, $jobs) };
die "block job ($op) error: $err";
}
}
--
2.47.2
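The motivation above can be sketched with a minimal example (hypothetical module, not taken from the series): a fully-qualified call to a function in the same package becomes a changed line when that function is later moved to a new module, while the bare call can be moved verbatim.

```perl
package PVE::Example;

use strict;
use warnings;

sub helper { return 42; }

sub caller_fully_qualified {
    # this line must be edited when helper() moves to another module,
    # so it shows up as a changed line in the move commit
    return PVE::Example::helper();
}

sub caller_bare {
    # this line can move to the new module unchanged
    return helper();
}

1;
```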
* [pve-devel] [PATCH qemu-server 18/31] introduce BlockJob module
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (16 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 17/31] code cleanup: drive mirror: do not prefix calls to function in the same module Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 19/31] drive: die in get_drive_id() if argument misses relevant members Fiona Ebner
` (13 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/API2/Qemu.pm | 3 +-
src/PVE/QemuMigrate.pm | 14 +-
src/PVE/QemuServer.pm | 331 +--------------------
src/PVE/QemuServer/BlockJob.pm | 333 ++++++++++++++++++++++
src/PVE/QemuServer/Makefile | 1 +
src/test/MigrationTest/QemuMigrateMock.pm | 12 +-
6 files changed, 365 insertions(+), 329 deletions(-)
create mode 100644 src/PVE/QemuServer/BlockJob.pm
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index de762cca..6565ce71 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -28,6 +28,7 @@ use PVE::GuestImport;
use PVE::QemuConfig;
use PVE::QemuServer;
use PVE::QemuServer::Agent;
+use PVE::QemuServer::BlockJob;
use PVE::QemuServer::Cloudinit;
use PVE::QemuServer::CPUConfig;
use PVE::QemuServer::Drive qw(checked_volume_format checked_parse_volname);
@@ -4515,7 +4516,7 @@ __PACKAGE__->register_method({
PVE::AccessControl::add_vm_to_pool($newid, $pool) if $pool;
};
if (my $err = $@) {
- eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid, $jobs) };
+ eval { PVE::QemuServer::BlockJob::qemu_blockjobs_cancel($vmid, $jobs) };
sleep 1; # some storage like rbd need to wait before release volume - really?
foreach my $volid (@$newvollist) {
diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 4fd46a76..16c61837 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -26,6 +26,7 @@ use PVE::Tunnel;
use PVE::QemuConfig;
use PVE::QemuMigrate::Helpers;
+use PVE::QemuServer::BlockJob;
use PVE::QemuServer::CPUConfig;
use PVE::QemuServer::Drive qw(checked_volume_format);
use PVE::QemuServer::Helpers qw(min_version);
@@ -1206,7 +1207,7 @@ sub phase2 {
my $bitmap = $target->{bitmap};
$self->log('info', "$drive: start migration to $nbd_uri");
- PVE::QemuServer::qemu_drive_mirror(
+ PVE::QemuServer::BlockJob::qemu_drive_mirror(
$vmid,
$drive,
$nbd_uri,
@@ -1222,7 +1223,7 @@ sub phase2 {
if (PVE::QemuServer::QMPHelpers::runs_at_least_qemu_version($vmid, 8, 2)) {
$self->log('info', "switching mirror jobs to actively synced mode");
- PVE::QemuServer::qemu_drive_mirror_switch_to_active_mode(
+ PVE::QemuServer::BlockJob::qemu_drive_mirror_switch_to_active_mode(
$vmid,
$self->{storage_migration_jobs},
);
@@ -1474,7 +1475,7 @@ sub phase2 {
# to avoid it trying to re-establish it. We are in blockjob ready state,
# thus, this command changes to it to blockjob complete (see qapi docs)
eval {
- PVE::QemuServer::qemu_drive_mirror_monitor(
+ PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
$vmid, undef, $self->{storage_migration_jobs}, 'cancel',
);
};
@@ -1520,7 +1521,12 @@ sub phase2_cleanup {
# cleanup resources on target host
if ($self->{storage_migration}) {
- eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid, $self->{storage_migration_jobs}) };
+ eval {
+ PVE::QemuServer::BlockJob::qemu_blockjobs_cancel(
+ $vmid,
+ $self->{storage_migration_jobs},
+ );
+ };
if (my $err = $@) {
$self->log('err', $err);
}
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 7e944743..30566864 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -55,6 +55,7 @@ use PVE::QemuConfig;
use PVE::QemuConfig::NoWrite;
use PVE::QemuMigrate::Helpers;
use PVE::QemuServer::Agent qw(qga_check_running);
+use PVE::QemuServer::BlockJob;
use PVE::QemuServer::Helpers
qw(config_aware_timeout get_iscsi_initiator_name min_version kvm_user_version windows_version);
use PVE::QemuServer::Cloudinit;
@@ -7139,7 +7140,9 @@ sub pbs_live_restore {
}
mon_cmd($vmid, 'cont');
- qemu_drive_mirror_monitor($vmid, undef, $jobs, 'auto', 0, 'stream');
+ PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+ $vmid, undef, $jobs, 'auto', 0, 'stream',
+ );
print "restore-drive jobs finished successfully, removing all tracking block devices"
. " to disconnect from Proxmox Backup Server\n";
@@ -7237,7 +7240,9 @@ sub live_import_from_files {
}
mon_cmd($vmid, 'cont');
- qemu_drive_mirror_monitor($vmid, undef, $jobs, 'auto', 0, 'stream');
+ PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+ $vmid, undef, $jobs, 'auto', 0, 'stream',
+ );
print "restore-drive jobs finished successfully, removing all tracking block devices\n";
@@ -7642,322 +7647,6 @@ sub template_create : prototype($$;$) {
);
}
-sub qemu_drive_mirror {
- my (
- $vmid,
- $drive,
- $dst_volid,
- $vmiddst,
- $is_zero_initialized,
- $jobs,
- $completion,
- $qga,
- $bwlimit,
- $src_bitmap,
- ) = @_;
-
- $jobs = {} if !$jobs;
-
- my $qemu_target;
- my $format;
- $jobs->{"drive-$drive"} = {};
-
- if ($dst_volid =~ /^nbd:/) {
- $qemu_target = $dst_volid;
- $format = "nbd";
- } else {
- my $storecfg = PVE::Storage::config();
-
- $format = checked_volume_format($storecfg, $dst_volid);
-
- my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
-
- $qemu_target = $is_zero_initialized ? "zeroinit:$dst_path" : $dst_path;
- }
-
- my $opts = {
- timeout => 10,
- device => "drive-$drive",
- mode => "existing",
- sync => "full",
- target => $qemu_target,
- 'auto-dismiss' => JSON::false,
- };
- $opts->{format} = $format if $format;
-
- if (defined($src_bitmap)) {
- $opts->{sync} = 'incremental';
- $opts->{bitmap} = $src_bitmap;
- print "drive mirror re-using dirty bitmap '$src_bitmap'\n";
- }
-
- if (defined($bwlimit)) {
- $opts->{speed} = $bwlimit * 1024;
- print "drive mirror is starting for drive-$drive with bandwidth limit: ${bwlimit} KB/s\n";
- } else {
- print "drive mirror is starting for drive-$drive\n";
- }
-
- # if a job already runs for this device we get an error, catch it for cleanup
- eval { mon_cmd($vmid, "drive-mirror", %$opts); };
- if (my $err = $@) {
- eval { qemu_blockjobs_cancel($vmid, $jobs) };
- warn "$@\n" if $@;
- die "mirroring error: $err\n";
- }
-
- qemu_drive_mirror_monitor($vmid, $vmiddst, $jobs, $completion, $qga);
-}
-
-# $completion can be either
-# 'complete': wait until all jobs are ready, block-job-complete them (default)
-# 'cancel': wait until all jobs are ready, block-job-cancel them
-# 'skip': wait until all jobs are ready, return with block jobs in ready state
-# 'auto': wait until all jobs disappear, only use for jobs which complete automatically
-sub qemu_drive_mirror_monitor {
- my ($vmid, $vmiddst, $jobs, $completion, $qga, $op) = @_;
-
- $completion //= 'complete';
- $op //= "mirror";
-
- eval {
- my $err_complete = 0;
-
- my $starttime = time();
- while (1) {
- die "block job ('$op') timed out\n" if $err_complete > 300;
-
- my $stats = mon_cmd($vmid, "query-block-jobs");
- my $ctime = time();
-
- my $running_jobs = {};
- for my $stat (@$stats) {
- next if $stat->{type} ne $op;
- $running_jobs->{ $stat->{device} } = $stat;
- }
-
- my $readycounter = 0;
-
- for my $job_id (sort keys %$jobs) {
- my $job = $running_jobs->{$job_id};
-
- my $vanished = !defined($job);
- my $complete = defined($jobs->{$job_id}->{complete}) && $vanished;
- if ($complete || ($vanished && $completion eq 'auto')) {
- print "$job_id: $op-job finished\n";
- delete $jobs->{$job_id};
- next;
- }
-
- die "$job_id: '$op' has been cancelled\n" if !defined($job);
-
- qemu_handle_concluded_blockjob($vmid, $job_id, $job)
- if $job && $job->{status} eq 'concluded';
-
- my $busy = $job->{busy};
- my $ready = $job->{ready};
- if (my $total = $job->{len}) {
- my $transferred = $job->{offset} || 0;
- my $remaining = $total - $transferred;
- my $percent = sprintf "%.2f", ($transferred * 100 / $total);
-
- my $duration = $ctime - $starttime;
- my $total_h = render_bytes($total, 1);
- my $transferred_h = render_bytes($transferred, 1);
-
- my $status = sprintf(
- "transferred $transferred_h of $total_h ($percent%%) in %s",
- render_duration($duration),
- );
-
- if ($ready) {
- if ($busy) {
- $status .= ", still busy"; # shouldn't even happen? but mirror is weird
- } else {
- $status .= ", ready";
- }
- }
- print "$job_id: $status\n" if !$jobs->{$job_id}->{ready};
- $jobs->{$job_id}->{ready} = $ready;
- }
-
- $readycounter++ if $job->{ready};
- }
-
- last if scalar(keys %$jobs) == 0;
-
- if ($readycounter == scalar(keys %$jobs)) {
- print "all '$op' jobs are ready\n";
-
- # do the complete later (or has already been done)
- last if $completion eq 'skip' || $completion eq 'auto';
-
- if ($vmiddst && $vmiddst != $vmid) {
- my $agent_running = $qga && qga_check_running($vmid);
- if ($agent_running) {
- print "freeze filesystem\n";
- eval { mon_cmd($vmid, "guest-fsfreeze-freeze"); };
- warn $@ if $@;
- } else {
- print "suspend vm\n";
- eval { PVE::QemuServer::RunState::vm_suspend($vmid, 1); };
- warn $@ if $@;
- }
-
- # if we clone a disk for a new target vm, we don't switch the disk
- qemu_blockjobs_cancel($vmid, $jobs);
-
- if ($agent_running) {
- print "unfreeze filesystem\n";
- eval { mon_cmd($vmid, "guest-fsfreeze-thaw"); };
- warn $@ if $@;
- } else {
- print "resume vm\n";
- eval { PVE::QemuServer::RunState::vm_resume($vmid, 1, 1); };
- warn $@ if $@;
- }
-
- last;
- } else {
-
- for my $job_id (sort keys %$jobs) {
- # try to switch the disk if source and destination are on the same guest
- print "$job_id: Completing block job...\n";
-
- my $op;
- if ($completion eq 'complete') {
- $op = 'block-job-complete';
- } elsif ($completion eq 'cancel') {
- $op = 'block-job-cancel';
- } else {
- die "invalid completion value: $completion\n";
- }
- eval { mon_cmd($vmid, $op, device => $job_id) };
- my $err = $@;
- if ($err && $err =~ m/cannot be completed/) {
- print "$job_id: block job cannot be completed, trying again.\n";
- $err_complete++;
- } elsif ($err) {
- die "$job_id: block job cannot be completed - $err\n";
- } else {
- print "$job_id: Completed successfully.\n";
- $jobs->{$job_id}->{complete} = 1;
- }
- }
- }
- }
- sleep 1;
- }
- };
- my $err = $@;
-
- if ($err) {
- eval { qemu_blockjobs_cancel($vmid, $jobs) };
- die "block job ($op) error: $err";
- }
-}
-
-# If the job was started with auto-dismiss=false, it's necessary to dismiss it manually. Using this
-# option is useful to get the error for failed jobs here. QEMU's job lock should make it impossible
-# to see a job in 'concluded' state when auto-dismiss=true.
-# $info is the 'BlockJobInfo' for the job returned by query-block-jobs.
-sub qemu_handle_concluded_blockjob {
- my ($vmid, $job_id, $info) = @_;
-
- eval { mon_cmd($vmid, 'job-dismiss', id => $job_id); };
- log_warn("$job_id: failed to dismiss job - $@") if $@;
-
- die "$job_id: $info->{error} (io-status: $info->{'io-status'})\n" if $info->{error};
-}
-
-sub qemu_blockjobs_cancel {
- my ($vmid, $jobs) = @_;
-
- foreach my $job (keys %$jobs) {
- print "$job: Cancelling block job\n";
- eval { mon_cmd($vmid, "block-job-cancel", device => $job); };
- $jobs->{$job}->{cancel} = 1;
- }
-
- while (1) {
- my $stats = mon_cmd($vmid, "query-block-jobs");
-
- my $running_jobs = {};
- foreach my $stat (@$stats) {
- $running_jobs->{ $stat->{device} } = $stat;
- }
-
- foreach my $job (keys %$jobs) {
- my $info = $running_jobs->{$job};
- eval {
- qemu_handle_concluded_blockjob($vmid, $job, $info)
- if $info && $info->{status} eq 'concluded';
- };
- log_warn($@) if $@; # only warn and proceed with canceling other jobs
-
- if (defined($jobs->{$job}->{cancel}) && !defined($info)) {
- print "$job: Done.\n";
- delete $jobs->{$job};
- }
- }
-
- last if scalar(keys %$jobs) == 0;
-
- sleep 1;
- }
-}
-
-# Callers should version guard this (only available with a binary >= QEMU 8.2)
-sub qemu_drive_mirror_switch_to_active_mode {
- my ($vmid, $jobs) = @_;
-
- my $switching = {};
-
- for my $job (sort keys $jobs->%*) {
- print "$job: switching to actively synced mode\n";
-
- eval {
- mon_cmd(
- $vmid,
- "block-job-change",
- id => $job,
- type => 'mirror',
- 'copy-mode' => 'write-blocking',
- );
- $switching->{$job} = 1;
- };
- die "could not switch mirror job $job to active mode - $@\n" if $@;
- }
-
- while (1) {
- my $stats = mon_cmd($vmid, "query-block-jobs");
-
- my $running_jobs = {};
- $running_jobs->{ $_->{device} } = $_ for $stats->@*;
-
- for my $job (sort keys $switching->%*) {
- die "$job: vanished while switching to active mode\n" if !$running_jobs->{$job};
-
- my $info = $running_jobs->{$job};
- if ($info->{status} eq 'concluded') {
- qemu_handle_concluded_blockjob($vmid, $job, $info);
- # The 'concluded' state should occur here if and only if the job failed, so the
- # 'die' below should be unreachable, but play it safe.
- die "$job: expected job to have failed, but no error was set\n";
- }
-
- if ($info->{'actively-synced'}) {
- print "$job: successfully switched to actively synced mode\n";
- delete $switching->{$job};
- }
- }
-
- last if scalar(keys $switching->%*) == 0;
-
- sleep 1;
- }
-}
-
# Check for bug #4525: drive-mirror will open the target drive with the same aio setting as the
# source, but some storages have problems with io_uring, sometimes even leading to crashes.
my sub clone_disk_check_io_uring {
@@ -8063,14 +7752,16 @@ sub clone_disk {
# if this is the case, we have to complete any block-jobs still there from
# previous drive-mirrors
if (($completion && $completion eq 'complete') && (scalar(keys %$jobs) > 0)) {
- qemu_drive_mirror_monitor($vmid, $newvmid, $jobs, $completion, $qga);
+ PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+ $vmid, $newvmid, $jobs, $completion, $qga,
+ );
}
goto no_data_clone;
}
my $sparseinit = PVE::Storage::volume_has_feature($storecfg, 'sparseinit', $newvolid);
if ($use_drive_mirror) {
- qemu_drive_mirror(
+ PVE::QemuServer::BlockJob::qemu_drive_mirror(
$vmid,
$src_drivename,
$newvolid,
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
new file mode 100644
index 00000000..7483aff3
--- /dev/null
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -0,0 +1,333 @@
+package PVE::QemuServer::BlockJob;
+
+use strict;
+use warnings;
+
+use JSON;
+
+use PVE::Format qw(render_duration render_bytes);
+use PVE::RESTEnvironment qw(log_warn);
+use PVE::Storage;
+
+use PVE::QemuServer::Agent qw(qga_check_running);
+use PVE::QemuServer::Drive qw(checked_volume_format);
+use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::RunState;
+
+# If the job was started with auto-dismiss=false, it's necessary to dismiss it manually. Using this
+# option is useful to get the error for failed jobs here. QEMU's job lock should make it impossible
+# to see a job in 'concluded' state when auto-dismiss=true.
+# $info is the 'BlockJobInfo' for the job returned by query-block-jobs.
+sub qemu_handle_concluded_blockjob {
+ my ($vmid, $job_id, $info) = @_;
+
+ eval { mon_cmd($vmid, 'job-dismiss', id => $job_id); };
+ log_warn("$job_id: failed to dismiss job - $@") if $@;
+
+ die "$job_id: $info->{error} (io-status: $info->{'io-status'})\n" if $info->{error};
+}
+
+sub qemu_blockjobs_cancel {
+ my ($vmid, $jobs) = @_;
+
+ foreach my $job (keys %$jobs) {
+ print "$job: Cancelling block job\n";
+ eval { mon_cmd($vmid, "block-job-cancel", device => $job); };
+ $jobs->{$job}->{cancel} = 1;
+ }
+
+ while (1) {
+ my $stats = mon_cmd($vmid, "query-block-jobs");
+
+ my $running_jobs = {};
+ foreach my $stat (@$stats) {
+ $running_jobs->{ $stat->{device} } = $stat;
+ }
+
+ foreach my $job (keys %$jobs) {
+ my $info = $running_jobs->{$job};
+ eval {
+ qemu_handle_concluded_blockjob($vmid, $job, $info)
+ if $info && $info->{status} eq 'concluded';
+ };
+ log_warn($@) if $@; # only warn and proceed with canceling other jobs
+
+ if (defined($jobs->{$job}->{cancel}) && !defined($info)) {
+ print "$job: Done.\n";
+ delete $jobs->{$job};
+ }
+ }
+
+ last if scalar(keys %$jobs) == 0;
+
+ sleep 1;
+ }
+}
+
+# $completion can be either
+# 'complete': wait until all jobs are ready, block-job-complete them (default)
+# 'cancel': wait until all jobs are ready, block-job-cancel them
+# 'skip': wait until all jobs are ready, return with block jobs in ready state
+# 'auto': wait until all jobs disappear, only use for jobs which complete automatically
+sub qemu_drive_mirror_monitor {
+ my ($vmid, $vmiddst, $jobs, $completion, $qga, $op) = @_;
+
+ $completion //= 'complete';
+ $op //= "mirror";
+
+ eval {
+ my $err_complete = 0;
+
+ my $starttime = time();
+ while (1) {
+ die "block job ('$op') timed out\n" if $err_complete > 300;
+
+ my $stats = mon_cmd($vmid, "query-block-jobs");
+ my $ctime = time();
+
+ my $running_jobs = {};
+ for my $stat (@$stats) {
+ next if $stat->{type} ne $op;
+ $running_jobs->{ $stat->{device} } = $stat;
+ }
+
+ my $readycounter = 0;
+
+ for my $job_id (sort keys %$jobs) {
+ my $job = $running_jobs->{$job_id};
+
+ my $vanished = !defined($job);
+ my $complete = defined($jobs->{$job_id}->{complete}) && $vanished;
+ if ($complete || ($vanished && $completion eq 'auto')) {
+ print "$job_id: $op-job finished\n";
+ delete $jobs->{$job_id};
+ next;
+ }
+
+ die "$job_id: '$op' has been cancelled\n" if !defined($job);
+
+ qemu_handle_concluded_blockjob($vmid, $job_id, $job)
+ if $job && $job->{status} eq 'concluded';
+
+ my $busy = $job->{busy};
+ my $ready = $job->{ready};
+ if (my $total = $job->{len}) {
+ my $transferred = $job->{offset} || 0;
+ my $remaining = $total - $transferred;
+ my $percent = sprintf "%.2f", ($transferred * 100 / $total);
+
+ my $duration = $ctime - $starttime;
+ my $total_h = render_bytes($total, 1);
+ my $transferred_h = render_bytes($transferred, 1);
+
+ my $status = sprintf(
+ "transferred $transferred_h of $total_h ($percent%%) in %s",
+ render_duration($duration),
+ );
+
+ if ($ready) {
+ if ($busy) {
+ $status .= ", still busy"; # shouldn't even happen? but mirror is weird
+ } else {
+ $status .= ", ready";
+ }
+ }
+ print "$job_id: $status\n" if !$jobs->{$job_id}->{ready};
+ $jobs->{$job_id}->{ready} = $ready;
+ }
+
+ $readycounter++ if $job->{ready};
+ }
+
+ last if scalar(keys %$jobs) == 0;
+
+ if ($readycounter == scalar(keys %$jobs)) {
+ print "all '$op' jobs are ready\n";
+
+ # do the complete later (or has already been done)
+ last if $completion eq 'skip' || $completion eq 'auto';
+
+ if ($vmiddst && $vmiddst != $vmid) {
+ my $agent_running = $qga && qga_check_running($vmid);
+ if ($agent_running) {
+ print "freeze filesystem\n";
+ eval { mon_cmd($vmid, "guest-fsfreeze-freeze"); };
+ warn $@ if $@;
+ } else {
+ print "suspend vm\n";
+ eval { PVE::QemuServer::RunState::vm_suspend($vmid, 1); };
+ warn $@ if $@;
+ }
+
+ # if we clone a disk for a new target vm, we don't switch the disk
+ qemu_blockjobs_cancel($vmid, $jobs);
+
+ if ($agent_running) {
+ print "unfreeze filesystem\n";
+ eval { mon_cmd($vmid, "guest-fsfreeze-thaw"); };
+ warn $@ if $@;
+ } else {
+ print "resume vm\n";
+ eval { PVE::QemuServer::RunState::vm_resume($vmid, 1, 1); };
+ warn $@ if $@;
+ }
+
+ last;
+ } else {
+
+ for my $job_id (sort keys %$jobs) {
+ # try to switch the disk if source and destination are on the same guest
+ print "$job_id: Completing block job...\n";
+
+ my $op;
+ if ($completion eq 'complete') {
+ $op = 'block-job-complete';
+ } elsif ($completion eq 'cancel') {
+ $op = 'block-job-cancel';
+ } else {
+ die "invalid completion value: $completion\n";
+ }
+ eval { mon_cmd($vmid, $op, device => $job_id) };
+ my $err = $@;
+ if ($err && $err =~ m/cannot be completed/) {
+ print "$job_id: block job cannot be completed, trying again.\n";
+ $err_complete++;
+ } elsif ($err) {
+ die "$job_id: block job cannot be completed - $err\n";
+ } else {
+ print "$job_id: Completed successfully.\n";
+ $jobs->{$job_id}->{complete} = 1;
+ }
+ }
+ }
+ }
+ sleep 1;
+ }
+ };
+ my $err = $@;
+
+ if ($err) {
+ eval { qemu_blockjobs_cancel($vmid, $jobs) };
+ die "block job ($op) error: $err";
+ }
+}
+
+sub qemu_drive_mirror {
+ my (
+ $vmid,
+ $drive,
+ $dst_volid,
+ $vmiddst,
+ $is_zero_initialized,
+ $jobs,
+ $completion,
+ $qga,
+ $bwlimit,
+ $src_bitmap,
+ ) = @_;
+
+ $jobs = {} if !$jobs;
+
+ my $qemu_target;
+ my $format;
+ $jobs->{"drive-$drive"} = {};
+
+ if ($dst_volid =~ /^nbd:/) {
+ $qemu_target = $dst_volid;
+ $format = "nbd";
+ } else {
+ my $storecfg = PVE::Storage::config();
+
+ $format = checked_volume_format($storecfg, $dst_volid);
+
+ my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
+
+ $qemu_target = $is_zero_initialized ? "zeroinit:$dst_path" : $dst_path;
+ }
+
+ my $opts = {
+ timeout => 10,
+ device => "drive-$drive",
+ mode => "existing",
+ sync => "full",
+ target => $qemu_target,
+ 'auto-dismiss' => JSON::false,
+ };
+ $opts->{format} = $format if $format;
+
+ if (defined($src_bitmap)) {
+ $opts->{sync} = 'incremental';
+ $opts->{bitmap} = $src_bitmap;
+ print "drive mirror re-using dirty bitmap '$src_bitmap'\n";
+ }
+
+ if (defined($bwlimit)) {
+ $opts->{speed} = $bwlimit * 1024;
+ print "drive mirror is starting for drive-$drive with bandwidth limit: ${bwlimit} KB/s\n";
+ } else {
+ print "drive mirror is starting for drive-$drive\n";
+ }
+
+ # if a job already runs for this device we get an error, catch it for cleanup
+ eval { mon_cmd($vmid, "drive-mirror", %$opts); };
+ if (my $err = $@) {
+ eval { qemu_blockjobs_cancel($vmid, $jobs) };
+ warn "$@\n" if $@;
+ die "mirroring error: $err\n";
+ }
+
+ qemu_drive_mirror_monitor($vmid, $vmiddst, $jobs, $completion, $qga);
+}
+
+# Callers should version guard this (only available with a binary >= QEMU 8.2)
+sub qemu_drive_mirror_switch_to_active_mode {
+ my ($vmid, $jobs) = @_;
+
+ my $switching = {};
+
+ for my $job (sort keys $jobs->%*) {
+ print "$job: switching to actively synced mode\n";
+
+ eval {
+ mon_cmd(
+ $vmid,
+ "block-job-change",
+ id => $job,
+ type => 'mirror',
+ 'copy-mode' => 'write-blocking',
+ );
+ $switching->{$job} = 1;
+ };
+ die "could not switch mirror job $job to active mode - $@\n" if $@;
+ }
+
+ while (1) {
+ my $stats = mon_cmd($vmid, "query-block-jobs");
+
+ my $running_jobs = {};
+ $running_jobs->{ $_->{device} } = $_ for $stats->@*;
+
+ for my $job (sort keys $switching->%*) {
+ die "$job: vanished while switching to active mode\n" if !$running_jobs->{$job};
+
+ my $info = $running_jobs->{$job};
+ if ($info->{status} eq 'concluded') {
+ qemu_handle_concluded_blockjob($vmid, $job, $info);
+ # The 'concluded' state should occur here if and only if the job failed, so the
+ # 'die' below should be unreachable, but play it safe.
+ die "$job: expected job to have failed, but no error was set\n";
+ }
+
+ if ($info->{'actively-synced'}) {
+ print "$job: successfully switched to actively synced mode\n";
+ delete $switching->{$job};
+ }
+ }
+
+ last if scalar(keys $switching->%*) == 0;
+
+ sleep 1;
+ }
+}
+
+1;
diff --git a/src/PVE/QemuServer/Makefile b/src/PVE/QemuServer/Makefile
index 5f475c73..ca30a0ad 100644
--- a/src/PVE/QemuServer/Makefile
+++ b/src/PVE/QemuServer/Makefile
@@ -4,6 +4,7 @@ PERLDIR=$(PREFIX)/share/perl5
SOURCES=Agent.pm \
Blockdev.pm \
+ BlockJob.pm \
CGroup.pm \
Cloudinit.pm \
CPUConfig.pm \
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index 56a1d777..b69b2b16 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -126,6 +126,14 @@ $MigrationTest::Shared::qemu_server_module->mock(
kvm_user_version => sub {
return "5.0.0";
},
+ vm_stop => sub {
+ $vm_stop_executed = 1;
+ delete $expected_calls->{'vm_stop'};
+ },
+);
+
+my $qemu_server_blockjob_module = Test::MockModule->new("PVE::QemuServer::BlockJob");
+$qemu_server_blockjob_module->mock(
qemu_blockjobs_cancel => sub {
return;
},
@@ -167,10 +175,6 @@ $MigrationTest::Shared::qemu_server_module->mock(
qemu_drive_mirror_switch_to_active_mode => sub {
return;
},
- vm_stop => sub {
- $vm_stop_executed = 1;
- delete $expected_calls->{'vm_stop'};
- },
);
my $qemu_server_cpuconfig_module = Test::MockModule->new("PVE::QemuServer::CPUConfig");
--
2.47.2
* [pve-devel] [PATCH qemu-server 19/31] drive: die in get_drive_id() if argument misses relevant members
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (17 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 18/31] introduce BlockJob module Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 20/31] block job: add and use wrapper for mirror Fiona Ebner
` (12 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Catch errors early instead of continuing with unexpected values.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/Drive.pm | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/src/PVE/QemuServer/Drive.pm b/src/PVE/QemuServer/Drive.pm
index a6b8c72c..a6f5062f 100644
--- a/src/PVE/QemuServer/Drive.pm
+++ b/src/PVE/QemuServer/Drive.pm
@@ -841,6 +841,10 @@ sub print_drive {
sub get_drive_id {
my ($drive) = @_;
+
+ die "get_drive_id: no interface" if !defined($drive->{interface});
+ die "get_drive_id: no index" if !defined($drive->{index});
+
return "$drive->{interface}$drive->{index}";
}
--
2.47.2
* [pve-devel] [PATCH qemu-server 20/31] block job: add and use wrapper for mirror
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (18 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 19/31] drive: die in get_drive_id() if argument misses relevant members Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 21/31] drive mirror: add variable for device ID and make name for drive ID precise Fiona Ebner
` (11 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
In preparation for the switch to -blockdev.
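The wrapper regroups the many positional parameters of qemu_drive_mirror() into structured source/destination/options hashes. A minimal Python sketch of that mapping (the series itself is Perl; the function and key names below just mirror the hash keys used in the patch and are illustrative, not the actual API):

```python
# Illustrative sketch of the argument regrouping done by the new mirror()
# wrapper: structured dicts in, legacy positional argument list out.

def mirror(source, dest, jobs, completion, options):
    """Map structured arguments onto the legacy positional signature."""
    # drive ID is interface plus index, e.g. 'scsi0'
    drive_id = f"{source['drive']['interface']}{source['drive']['index']}"
    return (
        source['vmid'],
        drive_id,
        dest['volid'],
        dest.get('vmid'),
        dest.get('zero-initialized'),
        jobs,
        completion,
        options.get('guest-agent'),
        options.get('bwlimit'),
        source.get('bitmap'),
    )
```

Optional keys simply become undefined positional arguments, so existing callers of the legacy function keep their behavior.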
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuMigrate.pm | 19 ++++++++++---------
src/PVE/QemuServer.pm | 19 +++++++++++--------
src/PVE/QemuServer/BlockJob.pm | 20 ++++++++++++++++++++
3 files changed, 41 insertions(+), 17 deletions(-)
diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 16c61837..f46bdf40 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -1207,17 +1207,18 @@ sub phase2 {
my $bitmap = $target->{bitmap};
$self->log('info', "$drive: start migration to $nbd_uri");
- PVE::QemuServer::BlockJob::qemu_drive_mirror(
- $vmid,
- $drive,
- $nbd_uri,
- $vmid,
- undef,
+
+ my $source_info = { vmid => $vmid, drive => $source_drive };
+ $source_info->{bitmap} = $bitmap if defined($bitmap);
+ my $dest_info = { volid => $nbd_uri };
+ my $mirror_opts = {};
+ $mirror_opts->{bwlimit} = $bwlimit if defined($bwlimit);
+ PVE::QemuServer::BlockJob::mirror(
+ $source_info,
+ $dest_info,
$self->{storage_migration_jobs},
'skip',
- undef,
- $bwlimit,
- $bitmap,
+ $mirror_opts,
);
}
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 30566864..e7c98520 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7761,16 +7761,19 @@ sub clone_disk {
my $sparseinit = PVE::Storage::volume_has_feature($storecfg, 'sparseinit', $newvolid);
if ($use_drive_mirror) {
- PVE::QemuServer::BlockJob::qemu_drive_mirror(
- $vmid,
- $src_drivename,
- $newvolid,
- $newvmid,
- $sparseinit,
+ my $source_info = { vmid => $vmid, drive => $drive };
+ my $dest_info = { volid => $newvolid };
+ $dest_info->{'zero-initialized'} = 1 if $sparseinit;
+ $dest_info->{vmid} = $newvmid if defined($newvmid);
+ my $mirror_opts = {};
+ $mirror_opts->{'guest-agent'} = 1 if $qga;
+ $mirror_opts->{bwlimit} = $bwlimit if defined($bwlimit);
+ PVE::QemuServer::BlockJob::mirror(
+ $source_info,
+ $dest_info,
$jobs,
$completion,
- $qga,
- $bwlimit,
+ $mirror_opts,
);
} else {
if ($dst_drivename eq 'efidisk0') {
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 7483aff3..4638fb1e 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -330,4 +330,24 @@ sub qemu_drive_mirror_switch_to_active_mode {
}
}
+sub mirror {
+ my ($source, $dest, $jobs, $completion, $options) = @_;
+
+ # for the switch to -blockdev
+
+ my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
+ qemu_drive_mirror(
+ $source->{vmid},
+ $drive_id,
+ $dest->{volid},
+ $dest->{vmid},
+ $dest->{'zero-initialized'},
+ $jobs,
+ $completion,
+ $options->{'guest-agent'},
+ $options->{bwlimit},
+ $source->{bitmap},
+ );
+}
+
1;
--
2.47.2
* [pve-devel] [PATCH qemu-server 21/31] drive mirror: add variable for device ID and make name for drive ID precise
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (19 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 20/31] block job: add and use wrapper for mirror Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 22/31] test: migration: factor out common mocking for mirror Fiona Ebner
` (10 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Suggested-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/BlockJob.pm | 12 +++++++-----
src/test/MigrationTest/QemuMigrateMock.pm | 12 ++++++------
2 files changed, 13 insertions(+), 11 deletions(-)
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 4638fb1e..8a74636c 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -215,7 +215,7 @@ sub qemu_drive_mirror_monitor {
sub qemu_drive_mirror {
my (
$vmid,
- $drive,
+ $drive_id,
$dst_volid,
$vmiddst,
$is_zero_initialized,
@@ -226,11 +226,13 @@ sub qemu_drive_mirror {
$src_bitmap,
) = @_;
+ my $device_id = "drive-$drive_id";
+
$jobs = {} if !$jobs;
my $qemu_target;
my $format;
- $jobs->{"drive-$drive"} = {};
+ $jobs->{$device_id} = {};
if ($dst_volid =~ /^nbd:/) {
$qemu_target = $dst_volid;
@@ -247,7 +249,7 @@ sub qemu_drive_mirror {
my $opts = {
timeout => 10,
- device => "drive-$drive",
+ device => "$device_id",
mode => "existing",
sync => "full",
target => $qemu_target,
@@ -263,9 +265,9 @@ sub qemu_drive_mirror {
if (defined($bwlimit)) {
$opts->{speed} = $bwlimit * 1024;
- print "drive mirror is starting for drive-$drive with bandwidth limit: ${bwlimit} KB/s\n";
+ print "drive mirror is starting for $device_id with bandwidth limit: ${bwlimit} KB/s\n";
} else {
- print "drive mirror is starting for drive-$drive\n";
+ print "drive mirror is starting for $device_id\n";
}
# if a job already runs for this device we get an error, catch it for cleanup
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index b69b2b16..bdc574c1 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -140,7 +140,7 @@ $qemu_server_blockjob_module->mock(
qemu_drive_mirror => sub {
my (
$vmid,
- $drive,
+ $drive_id,
$dst_volid,
$vmiddst,
$is_zero_initialized,
@@ -152,13 +152,13 @@ $qemu_server_blockjob_module->mock(
) = @_;
die "drive_mirror with wrong vmid: '$vmid'\n" if $vmid ne $test_vmid;
- die "qemu_drive_mirror '$drive' error\n"
- if $fail_config->{qemu_drive_mirror} && $fail_config->{qemu_drive_mirror} eq $drive;
+ die "qemu_drive_mirror '$drive_id' error\n"
+ if $fail_config->{qemu_drive_mirror} && $fail_config->{qemu_drive_mirror} eq $drive_id;
my $nbd_info = decode_json(file_get_contents("${RUN_DIR_PATH}/nbd_info"));
- die "target does not expect drive mirror for '$drive'\n"
- if !defined($nbd_info->{$drive});
- delete $nbd_info->{$drive};
+ die "target does not expect drive mirror for '$drive_id'\n"
+ if !defined($nbd_info->{$drive_id});
+ delete $nbd_info->{$drive_id};
file_set_contents("${RUN_DIR_PATH}/nbd_info", to_json($nbd_info));
},
qemu_drive_mirror_monitor => sub {
--
2.47.2
* [pve-devel] [PATCH qemu-server 22/31] test: migration: factor out common mocking for mirror
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (20 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 21/31] drive mirror: add variable for device ID and make name for drive ID precise Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 23/31] block job: factor out helper for common mirror QMP options Fiona Ebner
` (9 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
To be re-used for mocking blockdev_mirror.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/test/MigrationTest/QemuMigrateMock.pm | 24 ++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index bdc574c1..25a4f9b2 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -132,6 +132,20 @@ $MigrationTest::Shared::qemu_server_module->mock(
},
);
+my sub common_mirror_mock {
+ my ($vmid, $drive_id) = @_;
+
+ die "drive_mirror with wrong vmid: '$vmid'\n" if $vmid ne $test_vmid;
+ die "qemu_drive_mirror '$drive_id' error\n"
+ if $fail_config->{qemu_drive_mirror} && $fail_config->{qemu_drive_mirror} eq $drive_id;
+
+ my $nbd_info = decode_json(file_get_contents("${RUN_DIR_PATH}/nbd_info"));
+ die "target does not expect drive mirror for '$drive_id'\n"
+ if !defined($nbd_info->{$drive_id});
+ delete $nbd_info->{$drive_id};
+ file_set_contents("${RUN_DIR_PATH}/nbd_info", to_json($nbd_info));
+}
+
my $qemu_server_blockjob_module = Test::MockModule->new("PVE::QemuServer::BlockJob");
$qemu_server_blockjob_module->mock(
qemu_blockjobs_cancel => sub {
@@ -151,15 +165,7 @@ $qemu_server_blockjob_module->mock(
$src_bitmap,
) = @_;
- die "drive_mirror with wrong vmid: '$vmid'\n" if $vmid ne $test_vmid;
- die "qemu_drive_mirror '$drive_id' error\n"
- if $fail_config->{qemu_drive_mirror} && $fail_config->{qemu_drive_mirror} eq $drive_id;
-
- my $nbd_info = decode_json(file_get_contents("${RUN_DIR_PATH}/nbd_info"));
- die "target does not expect drive mirror for '$drive_id'\n"
- if !defined($nbd_info->{$drive_id});
- delete $nbd_info->{$drive_id};
- file_set_contents("${RUN_DIR_PATH}/nbd_info", to_json($nbd_info));
+ common_mirror_mock($vmid, $drive_id);
},
qemu_drive_mirror_monitor => sub {
my ($vmid, $vmiddst, $jobs, $completion, $qga) = @_;
--
2.47.2
* [pve-devel] [PATCH qemu-server 23/31] block job: factor out helper for common mirror QMP options
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (21 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 22/31] test: migration: factor out common mocking for mirror Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 24/31] block job: add blockdev mirror Fiona Ebner
` (8 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
To be re-used by blockdev-mirror.
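The factored-out helper builds the QMP options shared by drive-mirror and blockdev-mirror. A Python sketch of the same logic (illustrative only, the series is Perl):

```python
JSON_FALSE = False  # stands in for JSON::false in the Perl original

def common_mirror_qmp_options(device_id, target, src_bitmap=None, bwlimit=None):
    """Build the QMP options shared by drive-mirror and blockdev-mirror."""
    opts = {
        'timeout': 10,
        'device': device_id,
        'sync': 'full',
        'target': target,
        'auto-dismiss': JSON_FALSE,
    }
    if src_bitmap is not None:
        # an existing dirty bitmap switches the job to incremental mode
        opts['sync'] = 'incremental'
        opts['bitmap'] = src_bitmap
    if bwlimit is not None:
        # QMP expects bytes per second; the PVE limit is in KiB/s
        opts['speed'] = bwlimit * 1024
    return opts
```

The mode-specific bits (mode=existing and format for drive-mirror, job-id and replaces for blockdev-mirror) are then layered on top by the respective caller.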
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/BlockJob.pm | 50 ++++++++++++++++++++--------------
1 file changed, 29 insertions(+), 21 deletions(-)
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 8a74636c..0013bde6 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -212,6 +212,33 @@ sub qemu_drive_mirror_monitor {
}
}
+my sub common_mirror_qmp_options {
+ my ($device_id, $qemu_target, $src_bitmap, $bwlimit) = @_;
+
+ my $opts = {
+ timeout => 10,
+ device => "$device_id",
+ sync => "full",
+ target => $qemu_target,
+ 'auto-dismiss' => JSON::false,
+ };
+
+ if (defined($src_bitmap)) {
+ $opts->{sync} = 'incremental';
+ $opts->{bitmap} = $src_bitmap;
+ print "drive mirror re-using dirty bitmap '$src_bitmap'\n";
+ }
+
+ if (defined($bwlimit)) {
+ $opts->{speed} = $bwlimit * 1024;
+ print "drive mirror is starting for $device_id with bandwidth limit: ${bwlimit} KB/s\n";
+ } else {
+ print "drive mirror is starting for $device_id\n";
+ }
+
+ return $opts;
+}
+
sub qemu_drive_mirror {
my (
$vmid,
@@ -247,29 +274,10 @@ sub qemu_drive_mirror {
$qemu_target = $is_zero_initialized ? "zeroinit:$dst_path" : $dst_path;
}
- my $opts = {
- timeout => 10,
- device => "$device_id",
- mode => "existing",
- sync => "full",
- target => $qemu_target,
- 'auto-dismiss' => JSON::false,
- };
+ my $opts = common_mirror_qmp_options($device_id, $qemu_target, $src_bitmap, $bwlimit);
+ $opts->{mode} = "existing";
$opts->{format} = $format if $format;
- if (defined($src_bitmap)) {
- $opts->{sync} = 'incremental';
- $opts->{bitmap} = $src_bitmap;
- print "drive mirror re-using dirty bitmap '$src_bitmap'\n";
- }
-
- if (defined($bwlimit)) {
- $opts->{speed} = $bwlimit * 1024;
- print "drive mirror is starting for $device_id with bandwidth limit: ${bwlimit} KB/s\n";
- } else {
- print "drive mirror is starting for $device_id\n";
- }
-
# if a job already runs for this device we get an error, catch it for cleanup
eval { mon_cmd($vmid, "drive-mirror", %$opts); };
if (my $err = $@) {
--
2.47.2
* [pve-devel] [RFC qemu-server 24/31] block job: add blockdev mirror
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (22 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 23/31] block job: factor out helper for common mirror QMP options Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 25/31] blockdev: support using zeroinit filter Fiona Ebner
` (7 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
With blockdev-mirror, it is possible to change the aio setting on the
fly. This is useful for migrations between storages where one wants to
use io_uring by default and the other doesn't.
Already mock blockdev_mirror in the tests.
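The overall command flow of blockdev_mirror() can be sketched as follows (Python for illustration only; the real implementation is the Perl added below, and mon_cmd here is just a stand-in for the QMP monitor call):

```python
def blockdev_mirror_sketch(mon_cmd, vmid, device_id, source_node,
                           target_blockdev, job_id):
    """Rough command flow: attach the target node, start the mirror job,
    and remove the added node again if starting the job fails."""
    mon_cmd(vmid, 'blockdev-add', **target_blockdev)
    target_node = target_blockdev['node-name']
    try:
        mon_cmd(vmid, 'blockdev-mirror', **{
            'job-id': job_id,
            'device': device_id,
            'target': target_node,
            # 'replaces' swaps the source format node for the target on completion
            'replaces': source_node,
            'sync': 'full',
        })
    except Exception:
        # undo the blockdev-add so no orphaned node is left behind
        mon_cmd(vmid, 'blockdev-del', **{'node-name': target_node})
        raise
```

The cleanup-on-error step matters because, unlike with drive-mirror, the target node is attached explicitly beforehand and would otherwise linger.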
Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/BlockJob.pm | 162 ++++++++++++++++++++++
src/PVE/QemuServer/Blockdev.pm | 2 +-
src/test/MigrationTest/QemuMigrateMock.pm | 8 ++
3 files changed, 171 insertions(+), 1 deletion(-)
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 0013bde6..89cd1312 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -4,12 +4,14 @@ use strict;
use warnings;
use JSON;
+use Storable qw(dclone);
use PVE::Format qw(render_duration render_bytes);
use PVE::RESTEnvironment qw(log_warn);
use PVE::Storage;
use PVE::QemuServer::Agent qw(qga_check_running);
+use PVE::QemuServer::Blockdev;
use PVE::QemuServer::Drive qw(checked_volume_format);
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::RunState;
@@ -340,6 +342,166 @@ sub qemu_drive_mirror_switch_to_active_mode {
}
}
+=pod
+
+=head3 blockdev_mirror
+
+ blockdev_mirror($source, $dest, $jobs, $completion, $options)
+
+Mirrors the volume of a running VM specified by C<$source> to destination C<$dest>.
+
+=over
+
+=item C<$source>
+
+The source information consists of:
+
+=over
+
+=item C<< $source->{vmid} >>
+
+The ID of the running VM the source volume belongs to.
+
+=item C<< $source->{drive} >>
+
+The drive configuration of the source volume as currently attached to the VM.
+
+=item C<< $source->{bitmap} >>
+
+(optional) Use incremental mirroring based on the specified bitmap.
+
+=back
+
+=item C<$dest>
+
+The destination information consists of:
+
+=over
+
+=item C<< $dest->{volid} >>
+
+The volume ID of the target volume.
+
+=item C<< $dest->{vmid} >>
+
+(optional) The ID of the VM the target volume belongs to. Defaults to C<< $source->{vmid} >>.
+
+=item C<< $dest->{'zero-initialized'} >>
+
+(optional) True, if the target volume is zero-initialized.
+
+=back
+
+=item C<$jobs>
+
+(optional) Other jobs in the transaction when multiple volumes should be mirrored. All jobs must be
+ready before completion can happen.
+
+=item C<$completion>
+
+Completion mode, default is C<complete>:
+
+=over
+
+=item C<complete>
+
+Wait until all jobs are ready, block-job-complete them (default). This means switching the original
+drive to use the new target.
+
+=item C<cancel>
+
+Wait until all jobs are ready, block-job-cancel them. This means not switching the original drive
+to use the new target.
+
+=item C<skip>
+
+Wait until all jobs are ready, return with block jobs in ready state.
+
+=item C<auto>
+
+Wait until all jobs disappear, only use for jobs which complete automatically.
+
+=back
+
+=item C<$options>
+
+Further options:
+
+=over
+
+=item C<< $options->{'guest-agent'} >>
+
+Whether the guest agent is configured for the VM. If it is, it will be used to freeze and thaw the
+filesystems for consistency when the target belongs to a different VM.
+
+=item C<< $options->{'bwlimit'} >>
+
+The bandwidth limit to use for the mirroring operation, in KiB/s.
+
+=back
+
+=back
+
+=cut
+
+sub blockdev_mirror {
+ my ($source, $dest, $jobs, $completion, $options) = @_;
+
+ my $vmid = $source->{vmid};
+
+ my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
+ my $device_id = "drive-$drive_id";
+
+ my $storecfg = PVE::Storage::config();
+
+ # Need to replace the format node below the top node.
+ my $source_node_name = PVE::QemuServer::Blockdev::get_node_name(
+ 'fmt', $drive_id, $source->{drive}->{file},
+ );
+
+ # Copy original drive config (aio, cache, discard, ...):
+ my $dest_drive = dclone($source->{drive});
+ $dest_drive->{file} = $dest->{volid};
+
+ my $generate_blockdev_opts = {};
+ $generate_blockdev_opts->{'zero-initialized'} = 1 if $dest->{'zero-initialized'};
+
+ # Note that if 'aio' is not explicitly set, i.e. default, it can change if source and target
+ # don't both allow or both not allow 'io_uring' as the default.
+ my $target_drive_blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
+ $storecfg, $dest_drive, $generate_blockdev_opts,
+ );
+ # Top node is the throttle group, must use the file child.
+ my $target_blockdev = $target_drive_blockdev->{file};
+
+ PVE::QemuServer::Monitor::mon_cmd($vmid, 'blockdev-add', $target_blockdev->%*);
+ my $target_nodename = $target_blockdev->{'node-name'};
+
+ $jobs = {} if !$jobs;
+ my $jobid = "mirror-$drive_id";
+ $jobs->{$jobid} = {};
+
+ my $qmp_opts = common_mirror_qmp_options(
+ $device_id, $target_nodename, $source->{bitmap}, $options->{bwlimit},
+ );
+
+ $qmp_opts->{'job-id'} = "$jobid";
+ $qmp_opts->{replaces} = "$source_node_name";
+
+ # if a job already runs for this device we get an error, catch it for cleanup
+ eval { mon_cmd($vmid, "blockdev-mirror", $qmp_opts->%*); };
+ if (my $err = $@) {
+ eval { qemu_blockjobs_cancel($vmid, $jobs) };
+ log_warn("unable to cancel block jobs - $@") if $@;
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $target_nodename) };
+ log_warn("unable to delete blockdev '$target_nodename' - $@") if $@;
+ die "error starting blockdev mirror - $err";
+ }
+ qemu_drive_mirror_monitor(
+ $vmid, $dest->{vmid}, $jobs, $completion, $options->{'guest-agent'}, 'mirror',
+ );
+}
+
sub mirror {
my ($source, $dest, $jobs, $completion, $options) = @_;
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 6e6b9245..716a0ac9 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -12,7 +12,7 @@ use PVE::Storage;
use PVE::QemuServer::Drive qw(drive_is_cdrom);
-my sub get_node_name {
+sub get_node_name {
my ($type, $drive_id, $volid, $snap) = @_;
my $info = "drive=$drive_id,";
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index 25a4f9b2..c52df84b 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -9,6 +9,7 @@ use Test::MockModule;
use MigrationTest::Shared;
use PVE::API2::Qemu;
+use PVE::QemuServer::Drive;
use PVE::Storage;
use PVE::Tools qw(file_set_contents file_get_contents);
@@ -167,6 +168,13 @@ $qemu_server_blockjob_module->mock(
common_mirror_mock($vmid, $drive_id);
},
+ blockdev_mirror => sub {
+ my ($source, $dest, $jobs, $completion, $options) = @_;
+
+ my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
+
+ common_mirror_mock($source->{vmid}, $drive_id);
+ },
qemu_drive_mirror_monitor => sub {
my ($vmid, $vmiddst, $jobs, $completion, $qga) = @_;
--
2.47.2
* [pve-devel] [RFC qemu-server 25/31] blockdev: support using zeroinit filter
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (23 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 24/31] block job: add blockdev mirror Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 26/31] blockdev: make some functions private Fiona Ebner
` (6 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
The zeroinit filter is used for cloning/mirroring and importing with
target volumes that are known to read back zeros for parts that were
not written before. It can be helpful for performance.
Since it is the target of the mirror, it won't have a 'throttle' node
associated with it, but is added as a top node itself. Therefore, it
requires an explicit node-name.
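The resulting node nesting can be sketched like this (Python dicts standing in for the Perl blockdev hashes; the simplified throttle top node omits the other fields the real generate_drive_blockdev() sets):

```python
def wrap_with_zeroinit(child, node_name, zero_initialized):
    """Optionally interpose the zeroinit filter between the throttle top
    node and the format node, as generate_drive_blockdev() does."""
    if zero_initialized:
        # the zeroinit node needs an explicit node-name, see commit message
        child = {'driver': 'zeroinit', 'file': child, 'node-name': node_name}
    # simplified stand-in for the throttle group top node
    return {'driver': 'throttle', 'file': child}
```

So with zero-initialization the chain is throttle -> zeroinit -> format -> file, and without it the zeroinit layer is simply absent.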
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/Blockdev.pm | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 716a0ac9..493f67c1 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -26,6 +26,8 @@ sub get_node_name {
$prefix = 'f';
} elsif ($type eq 'file') {
$prefix = 'e';
+ } elsif ($type eq 'zeroinit') {
+ $prefix = 'z';
} else {
die "unknown node type '$type'";
}
@@ -212,6 +214,11 @@ sub generate_drive_blockdev {
my $child = generate_file_blockdev($storecfg, $drive, $options);
$child = generate_format_blockdev($storecfg, $drive, $child, $options);
+ if ($options->{'zero-initialized'}) {
+ my $node_name = get_node_name('zeroinit', $drive_id, $drive->{file}, $options->{'snapshot-name'});
+ $child = { driver => 'zeroinit', file => $child, 'node-name' => $node_name };
+ }
+
# this is the top filter entry point, use $drive-drive_id as nodename
return {
driver => "throttle",
--
2.47.2
* [pve-devel] [PATCH qemu-server 26/31] blockdev: make some functions private
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (24 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 25/31] blockdev: support using zeroinit filter Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 27/31] clone disk: skip check for aio=default (io_uring) compatibility starting with machine version 10.0 Fiona Ebner
` (5 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Callers outside the module should only use generate_drive_blockdev()
and specific functionality should be controlled via the $options
parameter.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/Blockdev.pm | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 493f67c1..66d4544e 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -77,7 +77,7 @@ sub generate_throttle_group {
};
}
-sub generate_blockdev_drive_cache {
+my sub generate_blockdev_drive_cache {
my ($drive, $scfg) = @_;
my $cache_direct = PVE::QemuServer::Drive::drive_uses_cache_direct($drive, $scfg);
@@ -87,7 +87,7 @@ sub generate_blockdev_drive_cache {
};
}
-sub generate_file_blockdev {
+my sub generate_file_blockdev {
my ($storecfg, $drive, $options) = @_;
my $blockdev = {};
@@ -159,7 +159,7 @@ sub generate_file_blockdev {
return $blockdev;
}
-sub generate_format_blockdev {
+my sub generate_format_blockdev {
my ($storecfg, $drive, $child, $options) = @_;
die "generate_format_blockdev called without volid/path\n" if !$drive->{file};
--
2.47.2
* [pve-devel] [RFC qemu-server 27/31] clone disk: skip check for aio=default (io_uring) compatibility starting with machine version 10.0
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (25 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 26/31] blockdev: make some functions private Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 28/31] print drive device: don't reference any drive for 'none' " Fiona Ebner
` (4 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
With blockdev-mirror, it is possible to change the aio setting on the
fly. This is useful for migrations between storages where one wants to
use io_uring by default and the other doesn't.
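The gating logic added to clone_disk_check_io_uring() can be summarized in a small sketch (Python for illustration; function name and signature are made up, not the patch's API):

```python
def needs_io_uring_check(machine_major, machine_minor, explicit_aio=None):
    """Whether the bug-#4525 aio compatibility check still applies.

    With machine version >= 10.0 and no explicit aio setting,
    blockdev-mirror can change aio on the fly, so the check is skipped.
    """
    if explicit_aio is not None:
        # an explicit setting is carried over to the target, so it must
        # still be checked for compatibility
        return True
    return (machine_major, machine_minor) < (10, 0)
```

This is why the helper now needs the $vmid: it has to query the current machine version of the running VM.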
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index e7c98520..d8e77649 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7650,7 +7650,7 @@ sub template_create : prototype($$;$) {
# Check for bug #4525: drive-mirror will open the target drive with the same aio setting as the
# source, but some storages have problems with io_uring, sometimes even leading to crashes.
my sub clone_disk_check_io_uring {
- my ($src_drive, $storecfg, $src_storeid, $dst_storeid, $use_drive_mirror) = @_;
+ my ($vmid, $src_drive, $storecfg, $src_storeid, $dst_storeid, $use_drive_mirror) = @_;
return if !$use_drive_mirror;
@@ -7667,6 +7667,11 @@ my sub clone_disk_check_io_uring {
if ($src_drive->{aio}) {
$src_uses_io_uring = $src_drive->{aio} eq 'io_uring';
} else {
+ # With the switch to -blockdev and blockdev-mirror, the aio setting will be changed on the
+ # fly if not explicitly set.
+ my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+ return if PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0);
+
$src_uses_io_uring = storage_allows_io_uring_default($src_scfg, $cache_direct);
}
@@ -7731,7 +7736,9 @@ sub clone_disk {
$dst_format = 'raw';
$size = PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE;
} else {
- clone_disk_check_io_uring($drive, $storecfg, $src_storeid, $storeid, $use_drive_mirror);
+ clone_disk_check_io_uring(
+ $vmid, $drive, $storecfg, $src_storeid, $storeid, $use_drive_mirror,
+ );
$size = PVE::Storage::volume_size_info($storecfg, $drive->{file}, 10);
}
--
2.47.2
* [pve-devel] [RFC qemu-server 28/31] print drive device: don't reference any drive for 'none' starting with machine version 10.0
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (26 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 27/31] clone disk: skip check for aio=default (io_uring) compatibility starting with machine version 10.0 Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 29/31] blockdev: add support for NBD paths Fiona Ebner
` (3 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
There will be no block node for 'none' after switching to '-blockdev'.
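The conditional drive= reference added for each controller type follows the same pattern, sketched here in Python (illustrative only; the real code builds the full -device string in Perl):

```python
def drive_device_props(drive_id, file, machine_major):
    """Build the trailing -device properties: from machine version 10.0
    a 'none' (empty CD-ROM) drive gets no drive= reference, since there
    is no corresponding blockdev anymore."""
    props = ''
    if machine_major < 10 or file != 'none':
        props += f',drive=drive-{drive_id}'
    props += f',id={drive_id}'
    return props
```

This matches the expected-command test updates below, where `drive=drive-ide2` disappears from the ide-cd device lines.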
Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
[FE: split out from larger patch
do it also for non-SCSI cases]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes since the previous series.
src/PVE/QemuServer.pm | 18 +++++++++++++++---
src/test/cfg2cmd/bootorder-empty.conf.cmd | 2 +-
src/test/cfg2cmd/bootorder-legacy.conf.cmd | 2 +-
src/test/cfg2cmd/bootorder.conf.cmd | 2 +-
...cputype-icelake-client-deprecation.conf.cmd | 2 +-
src/test/cfg2cmd/seabios_serial.conf.cmd | 2 +-
src/test/cfg2cmd/simple-btrfs.conf.cmd | 2 +-
src/test/cfg2cmd/simple-cifs.conf.cmd | 2 +-
src/test/cfg2cmd/simple-rbd.conf.cmd | 2 +-
src/test/cfg2cmd/simple-virtio-blk.conf.cmd | 2 +-
.../cfg2cmd/simple-zfs-over-iscsi.conf.cmd | 2 +-
src/test/cfg2cmd/simple1-template.conf.cmd | 2 +-
src/test/cfg2cmd/simple1.conf.cmd | 2 +-
13 files changed, 27 insertions(+), 15 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index d8e77649..47c22a8c 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -1207,7 +1207,12 @@ sub print_drivedevice_full {
my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
if ($drive->{interface} eq 'virtio') {
my $pciaddr = print_pci_addr("$drive_id", $bridges, $arch);
- $device = "virtio-blk-pci,drive=drive-$drive_id,id=${drive_id}${pciaddr}";
+ $device = 'virtio-blk-pci';
+ # for the switch to -blockdev, there is no blockdev for 'none'
+ if (!min_version($machine_version, 10, 0) || $drive->{file} ne 'none') {
+ $device .= ",drive=drive-$drive_id";
+ }
+ $device .= ",id=${drive_id}${pciaddr}";
$device .= ",iothread=iothread-$drive_id" if $drive->{iothread};
} elsif ($drive->{interface} eq 'scsi') {
@@ -1223,7 +1228,11 @@ sub print_drivedevice_full {
$device = "scsi-$device_type,bus=$controller_prefix$controller.0,channel=0,scsi-id=0"
. ",lun=$drive->{index}";
}
- $device .= ",drive=drive-$drive_id,id=$drive_id";
+ # for the switch to -blockdev, there is no blockdev for 'none'
+ if (!min_version($machine_version, 10, 0) || $drive->{file} ne 'none') {
+ $device .= ",drive=drive-$drive_id";
+ }
+ $device .= ",id=$drive_id";
if ($drive->{ssd} && ($device_type eq 'block' || $device_type eq 'hd')) {
$device .= ",rotation_rate=1";
@@ -1263,7 +1272,10 @@ sub print_drivedevice_full {
} else {
$device .= ",bus=ahci$controller.$unit";
}
- $device .= ",drive=drive-$drive_id,id=$drive_id";
+ if (!min_version($machine_version, 10, 0) || $drive->{file} ne 'none') {
+ $device .= ",drive=drive-$drive_id";
+ }
+ $device .= ",id=$drive_id";
if ($device_type eq 'hd') {
if (my $model = $drive->{model}) {
diff --git a/src/test/cfg2cmd/bootorder-empty.conf.cmd b/src/test/cfg2cmd/bootorder-empty.conf.cmd
index e4bf4e6d..3f8fdb8e 100644
--- a/src/test/cfg2cmd/bootorder-empty.conf.cmd
+++ b/src/test/cfg2cmd/bootorder-empty.conf.cmd
@@ -28,7 +28,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2' \
-device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
diff --git a/src/test/cfg2cmd/bootorder-legacy.conf.cmd b/src/test/cfg2cmd/bootorder-legacy.conf.cmd
index 7627780c..cd990cd8 100644
--- a/src/test/cfg2cmd/bootorder-legacy.conf.cmd
+++ b/src/test/cfg2cmd/bootorder-legacy.conf.cmd
@@ -28,7 +28,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
diff --git a/src/test/cfg2cmd/bootorder.conf.cmd b/src/test/cfg2cmd/bootorder.conf.cmd
index 74af37e1..3cef2161 100644
--- a/src/test/cfg2cmd/bootorder.conf.cmd
+++ b/src/test/cfg2cmd/bootorder.conf.cmd
@@ -28,7 +28,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=103' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=103' \
-device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,bootindex=102,write-cache=on' \
diff --git a/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd b/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
index effba2b7..e6e09278 100644
--- a/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
+++ b/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
@@ -26,7 +26,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/var/lib/vz/images/8006/base-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/seabios_serial.conf.cmd b/src/test/cfg2cmd/seabios_serial.conf.cmd
index f901a459..0eb02459 100644
--- a/src/test/cfg2cmd/seabios_serial.conf.cmd
+++ b/src/test/cfg2cmd/seabios_serial.conf.cmd
@@ -26,7 +26,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple-btrfs.conf.cmd b/src/test/cfg2cmd/simple-btrfs.conf.cmd
index 9c3f97d2..2aa2083d 100644
--- a/src/test/cfg2cmd/simple-btrfs.conf.cmd
+++ b/src/test/cfg2cmd/simple-btrfs.conf.cmd
@@ -26,7 +26,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi0,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple-cifs.conf.cmd b/src/test/cfg2cmd/simple-cifs.conf.cmd
index 61e8d01e..d23a046a 100644
--- a/src/test/cfg2cmd/simple-cifs.conf.cmd
+++ b/src/test/cfg2cmd/simple-cifs.conf.cmd
@@ -24,7 +24,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple-rbd.conf.cmd b/src/test/cfg2cmd/simple-rbd.conf.cmd
index ea5934c4..df7cba3f 100644
--- a/src/test/cfg2cmd/simple-rbd.conf.cmd
+++ b/src/test/cfg2cmd/simple-rbd.conf.cmd
@@ -26,7 +26,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple-virtio-blk.conf.cmd b/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
index 2182febc..0a7eb473 100644
--- a/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
+++ b/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
@@ -27,7 +27,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
-device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,bootindex=100,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
diff --git a/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd b/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
index d9a8e5e9..a90156b0 100644
--- a/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
+++ b/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
@@ -26,7 +26,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple1-template.conf.cmd b/src/test/cfg2cmd/simple1-template.conf.cmd
index 60531b77..c736c84a 100644
--- a/src/test/cfg2cmd/simple1-template.conf.cmd
+++ b/src/test/cfg2cmd/simple1-template.conf.cmd
@@ -24,7 +24,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/var/lib/vz/images/8006/base-8006-disk-1.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap,readonly=on' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple1.conf.cmd b/src/test/cfg2cmd/simple1.conf.cmd
index aa76ca62..e657aed7 100644
--- a/src/test/cfg2cmd/simple1.conf.cmd
+++ b/src/test/cfg2cmd/simple1.conf.cmd
@@ -26,7 +26,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
- -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+ -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
--
2.47.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
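[Not part of the patch: the version gate in print_drivedevice_full() above can be exercised in isolation. A rough Python re-expression of the same condition, with hypothetical names, assuming the machine version is available as a (major, minor) tuple:]

```python
def drive_device_args(drive_id, drive_file, machine_version):
    """Build the device argument string, omitting drive= for 'none'.

    Mirrors the !min_version($machine_version, 10, 0) || $drive->{file} ne 'none'
    gate from the patch: with -blockdev there is no blockdev for 'none', so the
    device must not reference one on machine version >= 10.0.
    """
    args = []
    # pre-10.0 machines still use -drive, which does define a drive for 'none'
    if machine_version < (10, 0) or drive_file != 'none':
        args.append(f"drive=drive-{drive_id}")
    args.append(f"id={drive_id}")
    return ",".join(args)
```

[This reproduces why the test expectations below drop `drive=drive-ide2` from the `ide-cd` device lines: the empty cdrom drive has `file=none`.]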
* [pve-devel] [RFC qemu-server 29/31] blockdev: add support for NBD paths
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (27 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 28/31] print drive device: don't reference any drive for 'none' " Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 30/31] command line: switch to blockdev starting with machine version 10.0 Fiona Ebner
` (2 subsequent siblings)
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes since the previous series.
src/PVE/QemuServer/Blockdev.pm | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 66d4544e..cf1d69ca 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -12,6 +12,18 @@ use PVE::Storage;
use PVE::QemuServer::Drive qw(drive_is_cdrom);
+# gives ($host, $port, $export)
+my $NBD_TCP_PATH_RE_3 = qr/nbd:(\S+):(\d+):exportname=(\S+)/;
+my $NBD_UNIX_PATH_RE_2 = qr/nbd:unix:(\S+):exportname=(\S+)/;
+
+my sub is_nbd {
+ my ($drive) = @_;
+
+ return 1 if $drive->{file} =~ $NBD_TCP_PATH_RE_3;
+ return 1 if $drive->{file} =~ $NBD_UNIX_PATH_RE_2;
+ return 0;
+}
+
sub get_node_name {
my ($type, $drive_id, $volid, $snap) = @_;
@@ -100,7 +112,13 @@ my sub generate_file_blockdev {
my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
- if ($drive->{file} eq 'cdrom') {
+ if ($drive->{file} =~ m/^$NBD_UNIX_PATH_RE_2$/) {
+ my $server = { type => 'unix', path => "$1" };
+ $blockdev = { driver => 'nbd', server => $server, export => "$2" };
+ } elsif ($drive->{file} =~ m/^$NBD_TCP_PATH_RE_3$/) {
+ my $server = { type => 'inet', host => "$1", port => "$2" }; # port is also a string in QAPI
+ $blockdev = { driver => 'nbd', server => $server, export => "$3" };
+ } elsif ($drive->{file} eq 'cdrom') {
my $path = PVE::QemuServer::Drive::get_iso_path($storecfg, $drive->{file});
$blockdev = { driver => 'host_cdrom', filename => "$path" };
} elsif ($drive->{file} =~ m|^/|) {
@@ -164,6 +182,7 @@ my sub generate_format_blockdev {
die "generate_format_blockdev called without volid/path\n" if !$drive->{file};
die "generate_format_blockdev called with 'none'\n" if $drive->{file} eq 'none';
+ die "generate_format_blockdev called with NBD path\n" if is_nbd($drive);
my $scfg;
my $format;
@@ -212,7 +231,9 @@ sub generate_drive_blockdev {
die "generate_drive_blockdev called with 'none'\n" if $drive->{file} eq 'none';
my $child = generate_file_blockdev($storecfg, $drive, $options);
- $child = generate_format_blockdev($storecfg, $drive, $child, $options);
+ if (!is_nbd($drive)) {
+ $child = generate_format_blockdev($storecfg, $drive, $child, $options);
+ }
if ($options->{'zero-initialized'}) {
my $node_name = get_node_name('zeroinit', $drive_id, $drive->{file}, $options->{'snapshot-name'});
--
2.47.2
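[Not from the patch itself: the two NBD path patterns above can be checked standalone. A rough Python re-expression of the same matching logic, with hypothetical names; the unix form is tried first, as in generate_file_blockdev():]

```python
import re

# Illustrative equivalents of the patch's Perl patterns:
#   nbd:<host>:<port>:exportname=<export>      -> inet server
#   nbd:unix:<socket-path>:exportname=<export> -> unix server
NBD_UNIX_RE = re.compile(r'^nbd:unix:(\S+):exportname=(\S+)$')
NBD_TCP_RE = re.compile(r'^nbd:(\S+):(\d+):exportname=(\S+)$')

def nbd_blockdev(path):
    """Return an nbd blockdev dict for an NBD path, or None for other paths."""
    if m := NBD_UNIX_RE.match(path):
        return {'driver': 'nbd',
                'server': {'type': 'unix', 'path': m.group(1)},
                'export': m.group(2)}
    if m := NBD_TCP_RE.match(path):
        # QAPI's InetSocketAddress takes the port as a string
        return {'driver': 'nbd',
                'server': {'type': 'inet', 'host': m.group(1), 'port': m.group(2)},
                'export': m.group(3)}
    return None
```

[A path that matches neither pattern falls through to the regular file/cdrom handling, which is also why generate_format_blockdev() dies on NBD paths: there is no format node to layer on top of an NBD export.]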
* [pve-devel] [RFC qemu-server 30/31] command line: switch to blockdev starting with machine version 10.0
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (28 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 29/31] blockdev: add support for NBD paths Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 31/31] test: migration: update running machine to 10.0 Fiona Ebner
2025-06-26 13:09 ` [pve-devel] partially-applied: [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fabian Grünbichler
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Changes since the previous series:
* adapt to earlier OVMF changes
* add switch to blockdev mirror (after putting preparation in earlier
patches)
src/PVE/QemuServer.pm | 165 +++++++++++++-----
src/PVE/QemuServer/BlockJob.pm | 32 ++--
src/PVE/QemuServer/OVMF.pm | 21 ++-
src/test/MigrationTest/QemuMigrateMock.pm | 5 +
src/test/cfg2cmd/aio.conf.cmd | 42 +++--
src/test/cfg2cmd/bootorder-empty.conf.cmd | 11 +-
src/test/cfg2cmd/bootorder-legacy.conf.cmd | 11 +-
src/test/cfg2cmd/bootorder.conf.cmd | 11 +-
...putype-icelake-client-deprecation.conf.cmd | 5 +-
src/test/cfg2cmd/efi-raw-template.conf.cmd | 7 +-
src/test/cfg2cmd/efi-raw.conf.cmd | 7 +-
.../cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd | 7 +-
src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd | 7 +-
src/test/cfg2cmd/efidisk-on-rbd.conf.cmd | 7 +-
src/test/cfg2cmd/ide.conf.cmd | 15 +-
src/test/cfg2cmd/q35-ide.conf.cmd | 15 +-
.../q35-linux-hostpci-mapping.conf.cmd | 7 +-
.../q35-linux-hostpci-multifunction.conf.cmd | 7 +-
.../q35-linux-hostpci-template.conf.cmd | 10 +-
...q35-linux-hostpci-x-pci-overrides.conf.cmd | 7 +-
src/test/cfg2cmd/q35-linux-hostpci.conf.cmd | 7 +-
src/test/cfg2cmd/q35-simple.conf.cmd | 7 +-
src/test/cfg2cmd/seabios_serial.conf.cmd | 5 +-
src/test/cfg2cmd/sev-es.conf.cmd | 7 +-
src/test/cfg2cmd/sev-std.conf.cmd | 7 +-
src/test/cfg2cmd/simple-btrfs.conf.cmd | 14 +-
src/test/cfg2cmd/simple-cifs.conf.cmd | 14 +-
.../cfg2cmd/simple-disk-passthrough.conf.cmd | 9 +-
src/test/cfg2cmd/simple-lvm.conf.cmd | 12 +-
src/test/cfg2cmd/simple-lvmthin.conf.cmd | 12 +-
src/test/cfg2cmd/simple-rbd.conf.cmd | 26 ++-
src/test/cfg2cmd/simple-virtio-blk.conf.cmd | 5 +-
.../cfg2cmd/simple-zfs-over-iscsi.conf.cmd | 14 +-
src/test/cfg2cmd/simple1-template.conf.cmd | 8 +-
src/test/cfg2cmd/simple1.conf.cmd | 5 +-
src/test/run_config2command_tests.pl | 22 +++
36 files changed, 392 insertions(+), 181 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 47c22a8c..70e51e97 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -3645,13 +3645,33 @@ sub config_to_command {
push @$devices, '-blockdev', $live_restore->{blockdev};
}
- my $drive_cmd =
- print_drive_commandline_full($storecfg, $vmid, $drive, $live_blockdev_name);
+ if (min_version($machine_version, 10, 0)) { # for the switch to -blockdev
+ my $throttle_group = PVE::QemuServer::Blockdev::generate_throttle_group($drive);
+ push @$cmd, '-object', to_json($throttle_group, { canonical => 1 });
- # extra protection for templates, but SATA and IDE don't support it..
- $drive_cmd .= ',readonly=on' if drive_is_read_only($conf, $drive);
+ die "FIXME: blockdev for live restore not yet implemented"
+ if $live_blockdev_name;
+
+ my $extra_blockdev_options = {};
+ # extra protection for templates, but SATA and IDE don't support it..
+ $extra_blockdev_options->{'read-only'} = 1 if drive_is_read_only($conf, $drive);
+
+ if ($drive->{file} ne 'none') {
+ my $blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
+ $storecfg, $drive, $extra_blockdev_options,
+ );
+ push @$devices, '-blockdev', to_json($blockdev, { canonical => 1 });
+ }
+ } else {
+ my $drive_cmd =
+ print_drive_commandline_full($storecfg, $vmid, $drive, $live_blockdev_name);
+
+ # extra protection for templates, but SATA and IDE don't support it..
+ $drive_cmd .= ',readonly=on' if drive_is_read_only($conf, $drive);
+
+ push @$devices, '-drive', $drive_cmd;
+ }
- push @$devices, '-drive', $drive_cmd;
push @$devices, '-device',
print_drivedevice_full(
$storecfg, $conf, $vmid, $drive, $bridges, $arch, $machine_type,
@@ -4055,28 +4075,63 @@ sub qemu_iothread_del {
sub qemu_driveadd {
my ($storecfg, $vmid, $device) = @_;
- my $drive = print_drive_commandline_full($storecfg, $vmid, $device, undef);
- $drive =~ s/\\/\\\\/g;
- my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
+ my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
- # If the command succeeds qemu prints: "OK"
- return 1 if $ret =~ m/OK/s;
+ # for the switch to -blockdev
+ if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
+ my $throttle_group = PVE::QemuServer::Blockdev::generate_throttle_group($device);
+ mon_cmd($vmid, 'object-add', %$throttle_group);
- die "adding drive failed: $ret\n";
+ eval {
+ my $blockdev =
+ PVE::QemuServer::Blockdev::generate_drive_blockdev($storecfg, $device, {});
+ mon_cmd($vmid, 'blockdev-add', %$blockdev);
+ };
+ if (my $err = $@) {
+ my $drive_id = PVE::QemuServer::Drive::get_drive_id($device);
+ eval { mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id"); };
+ warn $@ if $@;
+ die $err;
+ }
+
+ return 1;
+ } else {
+ my $drive = print_drive_commandline_full($storecfg, $vmid, $device, undef);
+ $drive =~ s/\\/\\\\/g;
+ my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
+
+ # If the command succeeds qemu prints: "OK"
+ return 1 if $ret =~ m/OK/s;
+
+ die "adding drive failed: $ret\n";
+ }
}
sub qemu_drivedel {
my ($vmid, $deviceid) = @_;
- my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_del drive-$deviceid", 10 * 60);
- $ret =~ s/^\s+//;
+ my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
- return 1 if $ret eq "";
+ # for the switch to -blockdev
+ if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
+ # QEMU recursively auto-removes the file children, i.e. file and format node below the top
+ # node and also implicit backing children referenced by a qcow2 image.
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "drive-$deviceid"); };
+ die "deleting blockdev $deviceid failed : $@\n" if $@;
+ # FIXME ignore already removed scenario like below?
- # NB: device not found errors mean the drive was auto-deleted and we ignore the error
- return 1 if $ret =~ m/Device \'.*?\' not found/s;
+ mon_cmd($vmid, 'object-del', id => "throttle-drive-$deviceid");
+ } else {
+ my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_del drive-$deviceid", 10 * 60);
+ $ret =~ s/^\s+//;
- die "deleting drive $deviceid failed : $ret\n";
+ return 1 if $ret eq "";
+
+ # NB: device not found errors mean the drive was auto-deleted and we ignore the error
+ return 1 if $ret =~ m/Device \'.*?\' not found/s;
+
+ die "deleting drive $deviceid failed : $ret\n";
+ }
}
sub qemu_deviceaddverify {
@@ -4333,29 +4388,59 @@ sub qemu_block_set_io_throttle {
return if !check_running($vmid);
- mon_cmd(
- $vmid, "block_set_io_throttle",
- device => $deviceid,
- bps => int($bps),
- bps_rd => int($bps_rd),
- bps_wr => int($bps_wr),
- iops => int($iops),
- iops_rd => int($iops_rd),
- iops_wr => int($iops_wr),
- bps_max => int($bps_max),
- bps_rd_max => int($bps_rd_max),
- bps_wr_max => int($bps_wr_max),
- iops_max => int($iops_max),
- iops_rd_max => int($iops_rd_max),
- iops_wr_max => int($iops_wr_max),
- bps_max_length => int($bps_max_length),
- bps_rd_max_length => int($bps_rd_max_length),
- bps_wr_max_length => int($bps_wr_max_length),
- iops_max_length => int($iops_max_length),
- iops_rd_max_length => int($iops_rd_max_length),
- iops_wr_max_length => int($iops_wr_max_length),
- );
-
+ my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+ # for the switch to -blockdev
+ if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
+ mon_cmd(
+ $vmid,
+ 'qom-set',
+ path => "throttle-$deviceid",
+ property => "limits",
+ value => {
+ 'bps-total' => int($bps),
+ 'bps-read' => int($bps_rd),
+ 'bps-write' => int($bps_wr),
+ 'iops-total' => int($iops),
+ 'iops-read' => int($iops_rd),
+ 'iops-write' => int($iops_wr),
+ 'bps-total-max' => int($bps_max),
+ 'bps-read-max' => int($bps_rd_max),
+ 'bps-write-max' => int($bps_wr_max),
+ 'iops-total-max' => int($iops_max),
+ 'iops-read-max' => int($iops_rd_max),
+ 'iops-write-max' => int($iops_wr_max),
+ 'bps-total-max-length' => int($bps_max_length),
+ 'bps-read-max-length' => int($bps_rd_max_length),
+ 'bps-write-max-length' => int($bps_wr_max_length),
+ 'iops-total-max-length' => int($iops_max_length),
+ 'iops-read-max-length' => int($iops_rd_max_length),
+ 'iops-write-max-length' => int($iops_wr_max_length),
+ },
+ );
+ } else {
+ mon_cmd(
+ $vmid, "block_set_io_throttle",
+ device => $deviceid,
+ bps => int($bps),
+ bps_rd => int($bps_rd),
+ bps_wr => int($bps_wr),
+ iops => int($iops),
+ iops_rd => int($iops_rd),
+ iops_wr => int($iops_wr),
+ bps_max => int($bps_max),
+ bps_rd_max => int($bps_rd_max),
+ bps_wr_max => int($bps_wr_max),
+ iops_max => int($iops_max),
+ iops_rd_max => int($iops_rd_max),
+ iops_wr_max => int($iops_wr_max),
+ bps_max_length => int($bps_max_length),
+ bps_rd_max_length => int($bps_rd_max_length),
+ bps_wr_max_length => int($bps_wr_max_length),
+ iops_max_length => int($iops_max_length),
+ iops_rd_max_length => int($iops_rd_max_length),
+ iops_wr_max_length => int($iops_wr_max_length),
+ );
+ }
}
sub qemu_block_resize {
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 89cd1312..dc2a822c 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -506,20 +506,24 @@ sub mirror {
my ($source, $dest, $jobs, $completion, $options) = @_;
# for the switch to -blockdev
-
- my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
- qemu_drive_mirror(
- $source->{vmid},
- $drive_id,
- $dest->{volid},
- $dest->{vmid},
- $dest->{'zero-initialized'},
- $jobs,
- $completion,
- $options->{'guest-agent'},
- $options->{bwlimit},
- $source->{bitmap},
- );
+ my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($source->{vmid});
+ if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
+ blockdev_mirror($source, $dest, $jobs, $completion, $options);
+ } else {
+ my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
+ qemu_drive_mirror(
+ $source->{vmid},
+ $drive_id,
+ $dest->{volid},
+ $dest->{vmid},
+ $dest->{'zero-initialized'},
+ $jobs,
+ $completion,
+ $options->{'guest-agent'},
+ $options->{bwlimit},
+ $source->{bitmap},
+ );
+ }
}
1;
diff --git a/src/PVE/QemuServer/OVMF.pm b/src/PVE/QemuServer/OVMF.pm
index dde81eb7..a7239614 100644
--- a/src/PVE/QemuServer/OVMF.pm
+++ b/src/PVE/QemuServer/OVMF.pm
@@ -3,7 +3,7 @@ package PVE::QemuServer::OVMF;
use strict;
use warnings;
-use JSON;
+use JSON qw(to_json);
use PVE::RESTEnvironment qw(log_warn);
use PVE::Storage;
@@ -210,10 +210,21 @@ sub print_ovmf_commandline {
}
push $cmd->@*, '-bios', get_ovmf_files($hw_info->{arch}, undef, undef, $amd_sev_type);
} else {
- my ($code_drive_str, $var_drive_str) =
- print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $hw_info, $version_guard);
- push $cmd->@*, '-drive', $code_drive_str;
- push $cmd->@*, '-drive', $var_drive_str;
+ if ($version_guard->(10, 0, 0)) { # for the switch to -blockdev
+ my ($code_blockdev, $vars_blockdev, $throttle_group) =
+ generate_ovmf_blockdev($conf, $storecfg, $vmid, $hw_info);
+
+ push $cmd->@*, '-object', to_json($throttle_group, { canonical => 1 });
+ push $cmd->@*, '-blockdev', to_json($code_blockdev, { canonical => 1 });
+ push $cmd->@*, '-blockdev', to_json($vars_blockdev, { canonical => 1 });
+ push $machine_flags->@*, "pflash0=$code_blockdev->{'node-name'}",
+ "pflash1=$vars_blockdev->{'node-name'}";
+ } else {
+ my ($code_drive_str, $var_drive_str) =
+ print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $hw_info, $version_guard);
+ push $cmd->@*, '-drive', $code_drive_str;
+ push $cmd->@*, '-drive', $var_drive_str;
+ }
}
return ($cmd, $machine_flags);
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index c52df84b..b04cf78b 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -215,6 +215,11 @@ $qemu_server_machine_module->mock(
if !defined($vm_status->{runningmachine});
return $vm_status->{runningmachine};
},
+ get_current_qemu_machine => sub {
+ die "invalid test: no runningmachine specified\n"
+ if !defined($vm_status->{runningmachine});
+ return $vm_status->{runningmachine};
+ },
);
my $qemu_server_network_module = Test::MockModule->new("PVE::QemuServer::Network");
diff --git a/src/test/cfg2cmd/aio.conf.cmd b/src/test/cfg2cmd/aio.conf.cmd
index c199bacf..272c6cd6 100644
--- a/src/test/cfg2cmd/aio.conf.cmd
+++ b/src/test/cfg2cmd/aio.conf.cmd
@@ -14,6 +14,20 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 512 \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi5","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi6","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi7","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi8","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi9","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi10","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi11","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi12","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi13","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -24,33 +38,33 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=threads,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.raw","node-name":"e3b2553803d55d43b9986a0aac3e9a7","read-only":false},"node-name":"f3b2553803d55d43b9986a0aac3e9a7","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-1.raw,if=none,id=drive-scsi1,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-1.raw","node-name":"e08707d013893852b3d4d42301a4298","read-only":false},"node-name":"f08707d013893852b3d4d42301a4298","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-2.raw,if=none,id=drive-scsi2,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-2.raw","node-name":"edb0854bba55e8b2544ad937c9f5afc","read-only":false},"node-name":"fdb0854bba55e8b2544ad937c9f5afc","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=2,drive=drive-scsi2,id=scsi2,write-cache=on' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-3.raw,if=none,id=drive-scsi3,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-3.raw","node-name":"e9c170cb9491763cad3f31718205efc","read-only":false},"node-name":"f9c170cb9491763cad3f31718205efc","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=3,drive=drive-scsi3,id=scsi3,write-cache=on' \
- -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-4.raw,if=none,id=drive-scsi4,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-4.raw","node-name":"ea34ecc24c40da0d53420ef344ced37","read-only":false},"node-name":"fa34ecc24c40da0d53420ef344ced37","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
- -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-5.raw,if=none,id=drive-scsi5,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-5.raw","node-name":"e39cacf47a4f4877072601505d90949","read-only":false},"node-name":"f39cacf47a4f4877072601505d90949","read-only":false},"node-name":"drive-scsi5","throttle-group":"throttle-drive-scsi5"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=5,drive=drive-scsi5,id=scsi5,write-cache=on' \
- -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-6,if=none,id=drive-scsi6,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-6","node-name":"e7db1ee70981087e4a2861bc7da417b","read-only":false},"node-name":"f7db1ee70981087e4a2861bc7da417b","read-only":false},"node-name":"drive-scsi6","throttle-group":"throttle-drive-scsi6"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=6,drive=drive-scsi6,id=scsi6,write-cache=on' \
-device 'lsi,id=scsihw1,bus=pci.0,addr=0x6' \
- -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-7,if=none,id=drive-scsi7,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-7","node-name":"e2d2deac808301140a96c862fe3ea85","read-only":false},"node-name":"f2d2deac808301140a96c862fe3ea85","read-only":false},"node-name":"drive-scsi7","throttle-group":"throttle-drive-scsi7"}' \
-device 'scsi-hd,bus=scsihw1.0,scsi-id=0,drive=drive-scsi7,id=scsi7,write-cache=on' \
- -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-8,if=none,id=drive-scsi8,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-8","node-name":"e9796b73db57b8943746ede7d0d3060","read-only":false},"node-name":"f9796b73db57b8943746ede7d0d3060","read-only":false},"node-name":"drive-scsi8","throttle-group":"throttle-drive-scsi8"}' \
-device 'scsi-hd,bus=scsihw1.0,scsi-id=1,drive=drive-scsi8,id=scsi8,write-cache=on' \
- -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-9,if=none,id=drive-scsi9,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-9","node-name":"efa538892acc012edbdc5810035bf7d","read-only":false},"node-name":"ffa538892acc012edbdc5810035bf7d","read-only":false},"node-name":"drive-scsi9","throttle-group":"throttle-drive-scsi9"}' \
-device 'scsi-hd,bus=scsihw1.0,scsi-id=2,drive=drive-scsi9,id=scsi9,write-cache=on' \
- -drive 'file=rbd:cpool/vm-8006-disk-8:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi10,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-8","node-name":"e6f4cbffa741d16bba69304eb2800ef","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f6f4cbffa741d16bba69304eb2800ef","read-only":false},"node-name":"drive-scsi10","throttle-group":"throttle-drive-scsi10"}' \
-device 'scsi-hd,bus=scsihw1.0,scsi-id=3,drive=drive-scsi10,id=scsi10,write-cache=on' \
- -drive 'file=rbd:cpool/vm-8006-disk-8:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi11,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-8","node-name":"e42375c54de70f5f4be966d98c90255","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f42375c54de70f5f4be966d98c90255","read-only":false},"node-name":"drive-scsi11","throttle-group":"throttle-drive-scsi11"}' \
-device 'scsi-hd,bus=scsihw1.0,scsi-id=4,drive=drive-scsi11,id=scsi11,write-cache=on' \
- -drive 'file=/dev/veegee/vm-8006-disk-9,if=none,id=drive-scsi12,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-9","node-name":"ed7b2c9e0133619fcf6cb8ce5903502","read-only":false},"node-name":"fd7b2c9e0133619fcf6cb8ce5903502","read-only":false},"node-name":"drive-scsi12","throttle-group":"throttle-drive-scsi12"}' \
-device 'scsi-hd,bus=scsihw1.0,scsi-id=5,drive=drive-scsi12,id=scsi12,write-cache=on' \
- -drive 'file=/dev/veegee/vm-8006-disk-9,if=none,id=drive-scsi13,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-9","node-name":"ed85420a880203ca1401d00a8edf132","read-only":false},"node-name":"fd85420a880203ca1401d00a8edf132","read-only":false},"node-name":"drive-scsi13","throttle-group":"throttle-drive-scsi13"}' \
-device 'scsi-hd,bus=scsihw1.0,scsi-id=6,drive=drive-scsi13,id=scsi13,write-cache=on' \
-machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/bootorder-empty.conf.cmd b/src/test/cfg2cmd/bootorder-empty.conf.cmd
index 3f8fdb8e..89f73145 100644
--- a/src/test/cfg2cmd/bootorder-empty.conf.cmd
+++ b/src/test/cfg2cmd/bootorder-empty.conf.cmd
@@ -15,8 +15,12 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
-object 'iothread,id=iothread-virtio0' \
+ -object '{"id":"throttle-drive-virtio0","limits":{},"qom-type":"throttle-group"}' \
-object 'iothread,id=iothread-virtio1' \
+ -object '{"id":"throttle-drive-virtio1","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -27,14 +31,13 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2' \
-device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"e6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"f6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"edd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"fdd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"drive-virtio0","throttle-group":"throttle-drive-virtio0"}' \
-device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,write-cache=on' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio1,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"eeb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"feb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"drive-virtio1","throttle-group":"throttle-drive-virtio1"}' \
-device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb,iothread=iothread-virtio1,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256' \
diff --git a/src/test/cfg2cmd/bootorder-legacy.conf.cmd b/src/test/cfg2cmd/bootorder-legacy.conf.cmd
index cd990cd8..a2341692 100644
--- a/src/test/cfg2cmd/bootorder-legacy.conf.cmd
+++ b/src/test/cfg2cmd/bootorder-legacy.conf.cmd
@@ -15,8 +15,12 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
-object 'iothread,id=iothread-virtio0' \
+ -object '{"id":"throttle-drive-virtio0","limits":{},"qom-type":"throttle-group"}' \
-object 'iothread,id=iothread-virtio1' \
+ -object '{"id":"throttle-drive-virtio1","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -27,14 +31,13 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"e6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"f6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"edd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"fdd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"drive-virtio0","throttle-group":"throttle-drive-virtio0"}' \
-device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,write-cache=on' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio1,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"eeb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"feb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"drive-virtio1","throttle-group":"throttle-drive-virtio1"}' \
-device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb,iothread=iothread-virtio1,bootindex=302,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=100' \
diff --git a/src/test/cfg2cmd/bootorder.conf.cmd b/src/test/cfg2cmd/bootorder.conf.cmd
index 3cef2161..87a9fca0 100644
--- a/src/test/cfg2cmd/bootorder.conf.cmd
+++ b/src/test/cfg2cmd/bootorder.conf.cmd
@@ -15,8 +15,12 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
-object 'iothread,id=iothread-virtio0' \
+ -object '{"id":"throttle-drive-virtio0","limits":{},"qom-type":"throttle-group"}' \
-object 'iothread,id=iothread-virtio1' \
+ -object '{"id":"throttle-drive-virtio1","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -27,14 +31,13 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=103' \
-device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"e6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"f6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
-device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,bootindex=102,write-cache=on' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"edd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"fdd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"drive-virtio0","throttle-group":"throttle-drive-virtio0"}' \
-device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,write-cache=on' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio1,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"eeb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"feb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"drive-virtio1","throttle-group":"throttle-drive-virtio1"}' \
-device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb,iothread=iothread-virtio1,bootindex=100,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=101' \
diff --git a/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd b/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
index e6e09278..11533c1d 100644
--- a/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
+++ b/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
@@ -15,6 +15,8 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu 'Icelake-Server,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,vendor=GenuineIntel' \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,9 +27,8 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/8006/base-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/base-8006-disk-0.qcow2","node-name":"e417d5947e69c5890b1e3ddf8a68167","read-only":false},"node-name":"f417d5947e69c5890b1e3ddf8a68167","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/efi-raw-template.conf.cmd b/src/test/cfg2cmd/efi-raw-template.conf.cmd
index f66cbb0d..b6064f98 100644
--- a/src/test/cfg2cmd/efi-raw-template.conf.cmd
+++ b/src/test/cfg2cmd/efi-raw-template.conf.cmd
@@ -8,8 +8,9 @@
-mon 'chardev=qmp-event,mode=control' \
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/base-disk-100-0.raw,size=131072,readonly=on' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/base-disk-100-0.raw","node-name":"e3bd051dc2860cd423537bc00138c50","read-only":true},"node-name":"f3bd051dc2860cd423537bc00138c50","read-only":true,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -25,5 +26,5 @@
-device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -machine 'accel=tcg,type=pc+pve0' \
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,accel=tcg,type=pc+pve0' \
-snapshot
diff --git a/src/test/cfg2cmd/efi-raw.conf.cmd b/src/test/cfg2cmd/efi-raw.conf.cmd
index 6406686d..c10df1cb 100644
--- a/src/test/cfg2cmd/efi-raw.conf.cmd
+++ b/src/test/cfg2cmd/efi-raw.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=131072' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -26,4 +27,4 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -machine 'type=pc+pve0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0'
diff --git a/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd b/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd
index 4e9a7e87..a9dcd474 100644
--- a/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd
+++ b/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=540672' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -27,4 +28,4 @@
-device 'VGA,id=vga,bus=pcie.0,addr=0x1' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -machine 'type=q35+pve0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd b/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd
index 175d9b10..c65c74f5 100644
--- a/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd
+++ b/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=540672' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE_4M.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -29,4 +30,4 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -machine 'type=pc+pve0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0'
diff --git a/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd b/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
index 5c55c01b..585e6ee9 100644
--- a/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
+++ b/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e688' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,cache=writeback,format=raw,file=rbd:cpool/vm-100-disk-1:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none:rbd_cache_policy=writeback,size=131072' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":false,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"rbd","image":"vm-100-disk-1","node-name":"eeb8f022b5551ad1d795611f112c767","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"feb8f022b5551ad1d795611f112c767","read-only":false,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -31,4 +32,4 @@
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
- -machine 'type=pc+pve0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0'
diff --git a/src/test/cfg2cmd/ide.conf.cmd b/src/test/cfg2cmd/ide.conf.cmd
index a0d6c3ed..78fe7550 100644
--- a/src/test/cfg2cmd/ide.conf.cmd
+++ b/src/test/cfg2cmd/ide.conf.cmd
@@ -15,6 +15,11 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 512 \
+ -object '{"id":"throttle-drive-ide0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-ide1","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-ide3","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,16 +30,16 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'file=/var/lib/vz/template/iso/zero.iso,if=none,id=drive-ide0,media=cdrom,format=raw,aio=io_uring' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/var/lib/vz/template/iso/zero.iso","node-name":"e19e15bf93b8cf09e2a5d1669648165","read-only":true},"node-name":"f19e15bf93b8cf09e2a5d1669648165","read-only":true},"node-name":"drive-ide0","throttle-group":"throttle-drive-ide0"}' \
-device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' \
- -drive 'file=/mnt/pve/cifs-store/template/iso/one.iso,if=none,id=drive-ide1,media=cdrom,format=raw,aio=threads' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/one.iso","node-name":"e247a6535f864815c8e9dbb5a118336","read-only":true},"node-name":"f247a6535f864815c8e9dbb5a118336","read-only":true},"node-name":"drive-ide1","throttle-group":"throttle-drive-ide1"}' \
-device 'ide-cd,bus=ide.0,unit=1,drive=drive-ide1,id=ide1,bootindex=201' \
- -drive 'file=/mnt/pve/cifs-store/template/iso/two.iso,if=none,id=drive-ide2,media=cdrom,format=raw,aio=threads' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/two.iso","node-name":"ec78a64f692c2fa9f873a111580aebd","read-only":true},"node-name":"fc78a64f692c2fa9f873a111580aebd","read-only":true},"node-name":"drive-ide2","throttle-group":"throttle-drive-ide2"}' \
-device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=202' \
- -drive 'file=/mnt/pve/cifs-store/template/iso/three.iso,if=none,id=drive-ide3,media=cdrom,format=raw,aio=threads' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/three.iso","node-name":"e35557bae4bcbf9edc9f7ff7f132f30","read-only":true},"node-name":"f35557bae4bcbf9edc9f7ff7f132f30","read-only":true},"node-name":"drive-ide3","throttle-group":"throttle-drive-ide3"}' \
-device 'ide-cd,bus=ide.1,unit=1,drive=drive-ide3,id=ide3,bootindex=203' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/100/vm-100-disk-2.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=io_uring,detect-zeroes=on' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-2.qcow2","node-name":"ec11e0572184321efc5835152b95d5d","read-only":false},"node-name":"fc11e0572184321efc5835152b95d5d","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/q35-ide.conf.cmd b/src/test/cfg2cmd/q35-ide.conf.cmd
index f12fa44d..f94accb9 100644
--- a/src/test/cfg2cmd/q35-ide.conf.cmd
+++ b/src/test/cfg2cmd/q35-ide.conf.cmd
@@ -16,6 +16,11 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 512 \
+ -object '{"id":"throttle-drive-ide0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-ide1","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-ide3","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
-global 'ICH9-LPC.disable_s3=1' \
-global 'ICH9-LPC.disable_s4=1' \
-readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \
@@ -24,16 +29,16 @@
-device 'VGA,id=vga,bus=pcie.0,addr=0x1' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'file=/mnt/pve/cifs-store/template/iso/zero.iso,if=none,id=drive-ide0,media=cdrom,format=raw,aio=threads' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/zero.iso","node-name":"e1677eafc00b7016099210662868e38","read-only":true},"node-name":"f1677eafc00b7016099210662868e38","read-only":true},"node-name":"drive-ide0","throttle-group":"throttle-drive-ide0"}' \
-device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' \
- -drive 'file=/mnt/pve/cifs-store/template/iso/one.iso,if=none,id=drive-ide1,media=cdrom,format=raw,aio=threads' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/one.iso","node-name":"e247a6535f864815c8e9dbb5a118336","read-only":true},"node-name":"f247a6535f864815c8e9dbb5a118336","read-only":true},"node-name":"drive-ide1","throttle-group":"throttle-drive-ide1"}' \
-device 'ide-cd,bus=ide.2,unit=0,drive=drive-ide1,id=ide1,bootindex=201' \
- -drive 'file=/mnt/pve/cifs-store/template/iso/two.iso,if=none,id=drive-ide2,media=cdrom,format=raw,aio=threads' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/two.iso","node-name":"ec78a64f692c2fa9f873a111580aebd","read-only":true},"node-name":"fc78a64f692c2fa9f873a111580aebd","read-only":true},"node-name":"drive-ide2","throttle-group":"throttle-drive-ide2"}' \
-device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=202' \
- -drive 'file=/mnt/pve/cifs-store/template/iso/three.iso,if=none,id=drive-ide3,media=cdrom,format=raw,aio=threads' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/three.iso","node-name":"e35557bae4bcbf9edc9f7ff7f132f30","read-only":true},"node-name":"f35557bae4bcbf9edc9f7ff7f132f30","read-only":true},"node-name":"drive-ide3","throttle-group":"throttle-drive-ide3"}' \
-device 'ide-cd,bus=ide.3,unit=0,drive=drive-ide3,id=ide3,bootindex=203' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/100/vm-100-disk-2.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=io_uring,detect-zeroes=on' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-2.qcow2","node-name":"ec11e0572184321efc5835152b95d5d","read-only":false},"node-name":"fc11e0572184321efc5835152b95d5d","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/q35-linux-hostpci-mapping.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci-mapping.conf.cmd
index 717c0be4..42f1cb80 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci-mapping.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci-mapping.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
-smp '2,sockets=2,cores=1,maxcpus=2' \
-nodefaults \
@@ -35,4 +36,4 @@
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
- -machine 'type=q35+pve0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/q35-linux-hostpci-multifunction.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci-multifunction.conf.cmd
index 146bf3e5..e9cd47b8 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci-multifunction.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci-multifunction.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
-smp '2,sockets=2,cores=1,maxcpus=2' \
-nodefaults \
@@ -35,4 +36,4 @@
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
- -machine 'type=q35+pve0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/q35-linux-hostpci-template.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci-template.conf.cmd
index ce69f23a..ddc87814 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci-template.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci-template.conf.cmd
@@ -8,8 +8,9 @@
-mon 'chardev=qmp-event,mode=control' \
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/base-100-disk-1.qcow2,readonly=on' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/base-100-disk-1.qcow2","node-name":"eb6bec0e3c391fabb7fb7dd73ced9bf","read-only":true},"node-name":"fb6bec0e3c391fabb7fb7dd73ced9bf","read-only":true},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -17,6 +18,7 @@
-nographic \
-cpu qemu64 \
-m 512 \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -26,7 +28,7 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/100/base-100-disk-2.raw,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on,readonly=on' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/base-100-disk-2.raw","node-name":"e24dfe239201bb9924fc4cfb899ca70","read-only":true},"node-name":"f24dfe239201bb9924fc4cfb899ca70","read-only":true},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
- -machine 'accel=tcg,type=pc+pve0' \
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,accel=tcg,type=pc+pve0' \
-snapshot
diff --git a/src/test/cfg2cmd/q35-linux-hostpci-x-pci-overrides.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci-x-pci-overrides.conf.cmd
index 0f0cb2c0..b06dbb4f 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci-x-pci-overrides.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci-x-pci-overrides.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
-smp '2,sockets=2,cores=1,maxcpus=2' \
-nodefaults \
@@ -34,4 +35,4 @@
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
- -machine 'type=q35+pve0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/q35-linux-hostpci.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci.conf.cmd
index 0abb569b..014eb09c 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
-smp '2,sockets=2,cores=1,maxcpus=2' \
-nodefaults \
@@ -40,4 +41,4 @@
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
- -machine 'type=q35+pve0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/q35-simple.conf.cmd b/src/test/cfg2cmd/q35-simple.conf.cmd
index 371ea7dd..c6b38f7d 100644
--- a/src/test/cfg2cmd/q35-simple.conf.cmd
+++ b/src/test/cfg2cmd/q35-simple.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
-smp '2,sockets=1,cores=2,maxcpus=2' \
-nodefaults \
@@ -28,4 +29,4 @@
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
- -machine 'type=q35+pve0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/seabios_serial.conf.cmd b/src/test/cfg2cmd/seabios_serial.conf.cmd
index 0eb02459..a00def0a 100644
--- a/src/test/cfg2cmd/seabios_serial.conf.cmd
+++ b/src/test/cfg2cmd/seabios_serial.conf.cmd
@@ -15,6 +15,8 @@
-nographic \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,10 +27,9 @@
-device 'isa-serial,chardev=serial0' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"ecd04be4259153b8293415fefa2a84c","read-only":false},"node-name":"fcd04be4259153b8293415fefa2a84c","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/sev-es.conf.cmd b/src/test/cfg2cmd/sev-es.conf.cmd
index 3a100306..a39b6f67 100644
--- a/src/test/cfg2cmd/sev-es.conf.cmd
+++ b/src/test/cfg2cmd/sev-es.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=540672' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -27,4 +28,4 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-object 'sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=6,policy=0xc' \
- -machine 'type=pc+pve0,confidential-guest-support=sev0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0,confidential-guest-support=sev0'
diff --git a/src/test/cfg2cmd/sev-std.conf.cmd b/src/test/cfg2cmd/sev-std.conf.cmd
index 06da2ca0..3878f15c 100644
--- a/src/test/cfg2cmd/sev-std.conf.cmd
+++ b/src/test/cfg2cmd/sev-std.conf.cmd
@@ -9,8 +9,9 @@
-pidfile /var/run/qemu-server/8006.pid \
-daemonize \
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
- -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=540672' \
+ -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+ -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd"},"node-name":"pflash0","read-only":true}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -27,4 +28,4 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-object 'sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=6,policy=0x8' \
- -machine 'type=pc+pve0,confidential-guest-support=sev0'
+ -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0,confidential-guest-support=sev0'
diff --git a/src/test/cfg2cmd/simple-btrfs.conf.cmd b/src/test/cfg2cmd/simple-btrfs.conf.cmd
index 2aa2083d..6c944f62 100644
--- a/src/test/cfg2cmd/simple-btrfs.conf.cmd
+++ b/src/test/cfg2cmd/simple-btrfs.conf.cmd
@@ -15,6 +15,11 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,16 +30,15 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi0,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/butter/bread/images/8006/vm-8006-disk-0/disk.raw","node-name":"e99aff0ff797aa030a22e9f580076dd","read-only":false},"node-name":"f99aff0ff797aa030a22e9f580076dd","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
- -drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/butter/bread/images/8006/vm-8006-disk-0/disk.raw","node-name":"e7b2fd2a8c5dbfc550d9781e5df8841","read-only":false},"node-name":"f7b2fd2a8c5dbfc550d9781e5df8841","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
- -drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/butter/bread/images/8006/vm-8006-disk-0/disk.raw","node-name":"ed78b07bb04c2cbd8aedc648e885569","read-only":false},"node-name":"fd78b07bb04c2cbd8aedc648e885569","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
- -drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/butter/bread/images/8006/vm-8006-disk-0/disk.raw","node-name":"e7487c01d831e2b51a5446980170ec9","read-only":false},"node-name":"f7487c01d831e2b51a5446980170ec9","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple-cifs.conf.cmd b/src/test/cfg2cmd/simple-cifs.conf.cmd
index d23a046a..f22eb033 100644
--- a/src/test/cfg2cmd/simple-cifs.conf.cmd
+++ b/src/test/cfg2cmd/simple-cifs.conf.cmd
@@ -14,6 +14,11 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 512 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -23,15 +28,14 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw","node-name":"e2b3b8f2d6a23adc1aa3ecd195dbaf5","read-only":false},"node-name":"f2b3b8f2d6a23adc1aa3ecd195dbaf5","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
- -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw","node-name":"ee4d9a961200a669c1a8182632aba3e","read-only":false},"node-name":"fe4d9a961200a669c1a8182632aba3e","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
- -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw","node-name":"e6a3bf7eee1e2636cbe31f62b537b6c","read-only":false},"node-name":"f6a3bf7eee1e2636cbe31f62b537b6c","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
- -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=native,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw","node-name":"e7042ee58e764b1296ad54014cb9a03","read-only":false},"node-name":"f7042ee58e764b1296ad54014cb9a03","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
-machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/simple-disk-passthrough.conf.cmd b/src/test/cfg2cmd/simple-disk-passthrough.conf.cmd
index 70ee9f6b..58368210 100644
--- a/src/test/cfg2cmd/simple-disk-passthrough.conf.cmd
+++ b/src/test/cfg2cmd/simple-disk-passthrough.conf.cmd
@@ -15,6 +15,9 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,12 +28,12 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'file=/dev/cdrom,if=none,id=drive-ide2,media=cdrom,format=raw,aio=io_uring' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"driver":"host_cdrom","filename":"/dev/cdrom","node-name":"ee50e59431a6228dc388fc821b35696","read-only":true},"node-name":"fe50e59431a6228dc388fc821b35696","read-only":true},"node-name":"drive-ide2","throttle-group":"throttle-drive-ide2"}' \
-device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/dev/sda,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/sda","node-name":"eec235c1b362ebd19d5e98959b4c171","read-only":false},"node-name":"fec235c1b362ebd19d5e98959b4c171","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
- -drive 'file=/mnt/file.raw,if=none,id=drive-scsi1,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/mnt/file.raw","node-name":"e234a4e3b89ac3adac9bdbf0c3dd6b4","read-only":false},"node-name":"f234a4e3b89ac3adac9bdbf0c3dd6b4","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple-lvm.conf.cmd b/src/test/cfg2cmd/simple-lvm.conf.cmd
index 40a6c7c8..650f0ac3 100644
--- a/src/test/cfg2cmd/simple-lvm.conf.cmd
+++ b/src/test/cfg2cmd/simple-lvm.conf.cmd
@@ -14,6 +14,10 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 512 \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -24,12 +28,12 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/dev/veegee/vm-8006-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0","node-name":"e0378a375d635b0f473569544c7c207","read-only":false},"node-name":"f0378a375d635b0f473569544c7c207","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
- -drive 'file=/dev/veegee/vm-8006-disk-0,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0","node-name":"e2fbae024c8a771f708f4a5391211b0","read-only":false},"node-name":"f2fbae024c8a771f708f4a5391211b0","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
- -drive 'file=/dev/veegee/vm-8006-disk-0,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0","node-name":"e4328c26b141e3efe1564cb60bf1155","read-only":false},"node-name":"f4328c26b141e3efe1564cb60bf1155","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
- -drive 'file=/dev/veegee/vm-8006-disk-0,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=native,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0","node-name":"e68e10f8128f05fe5f7e85cc1f9922b","read-only":false},"node-name":"f68e10f8128f05fe5f7e85cc1f9922b","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
-machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/simple-lvmthin.conf.cmd b/src/test/cfg2cmd/simple-lvmthin.conf.cmd
index 8d366aff..22251bc6 100644
--- a/src/test/cfg2cmd/simple-lvmthin.conf.cmd
+++ b/src/test/cfg2cmd/simple-lvmthin.conf.cmd
@@ -14,6 +14,10 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 512 \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -24,12 +28,12 @@
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/dev/pve/vm-8006-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/pve/vm-8006-disk-0","node-name":"e6d87b01b7bb888b8426534a542ff1c","read-only":false},"node-name":"f6d87b01b7bb888b8426534a542ff1c","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
- -drive 'file=/dev/pve/vm-8006-disk-0,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/pve/vm-8006-disk-0","node-name":"e96d9ece81aa4271aa2d8485184f66b","read-only":false},"node-name":"f96d9ece81aa4271aa2d8485184f66b","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
- -drive 'file=/dev/pve/vm-8006-disk-0,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/pve/vm-8006-disk-0","node-name":"e0b89788ef97beda10a850ab45897d9","read-only":false},"node-name":"f0b89788ef97beda10a850ab45897d9","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
- -drive 'file=/dev/pve/vm-8006-disk-0,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/pve/vm-8006-disk-0","node-name":"ea7b6871af66ca3e13e95bd74570aa2","read-only":false},"node-name":"fa7b6871af66ca3e13e95bd74570aa2","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
-machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/simple-rbd.conf.cmd b/src/test/cfg2cmd/simple-rbd.conf.cmd
index df7cba3f..9260e448 100644
--- a/src/test/cfg2cmd/simple-rbd.conf.cmd
+++ b/src/test/cfg2cmd/simple-rbd.conf.cmd
@@ -15,6 +15,15 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi5","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi6","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi7","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,24 +34,23 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-0","node-name":"e8e1af6f55c6a2466f178045aa79710","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f8e1af6f55c6a2466f178045aa79710","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
- -drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-0","node-name":"e3990bba2ed1f48c5bb23e9f37b4cec","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f3990bba2ed1f48c5bb23e9f37b4cec","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
- -drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-0","node-name":"e3beccc2a8f2eacb8b5df8055a7d093","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f3beccc2a8f2eacb8b5df8055a7d093","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
- -drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-0","node-name":"eef923d5dfcee93fbc712b03f9f21af","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"fef923d5dfcee93fbc712b03f9f21af","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
- -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0,if=none,id=drive-scsi4,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0","node-name":"eb2c7a292f03b9f6d015cf83ae79730","read-only":false},"node-name":"fb2c7a292f03b9f6d015cf83ae79730","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
- -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0,if=none,id=drive-scsi5,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0","node-name":"e5258ec75558b1f102af1e20e677fd0","read-only":false},"node-name":"f5258ec75558b1f102af1e20e677fd0","read-only":false},"node-name":"drive-scsi5","throttle-group":"throttle-drive-scsi5"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=5,drive=drive-scsi5,id=scsi5,write-cache=on' \
- -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0,if=none,id=drive-scsi6,cache=writethrough,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0","node-name":"edb33cdcea8ec3e2225509c4945227e","read-only":false},"node-name":"fdb33cdcea8ec3e2225509c4945227e","read-only":false},"node-name":"drive-scsi6","throttle-group":"throttle-drive-scsi6"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=6,drive=drive-scsi6,id=scsi6,write-cache=off' \
- -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0,if=none,id=drive-scsi7,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0","node-name":"eb0b017124a47505c97a5da052e0141","read-only":false},"node-name":"fb0b017124a47505c97a5da052e0141","read-only":false},"node-name":"drive-scsi7","throttle-group":"throttle-drive-scsi7"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=7,drive=drive-scsi7,id=scsi7,write-cache=off' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple-virtio-blk.conf.cmd b/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
index 0a7eb473..4a3a4c7a 100644
--- a/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
+++ b/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
@@ -15,7 +15,9 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
-object 'iothread,id=iothread-virtio0' \
+ -object '{"id":"throttle-drive-virtio0","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -26,9 +28,8 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"edd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"fdd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"drive-virtio0","throttle-group":"throttle-drive-virtio0"}' \
-device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,bootindex=100,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd b/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
index a90156b0..22603fa5 100644
--- a/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
+++ b/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
@@ -15,6 +15,11 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,16 +30,15 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"iscsi","lun":0,"node-name":"e7106ac43d4f125a1911487dd9e3e42","portal":"127.0.0.1","read-only":false,"target":"iqn.2019-10.org.test:foobar","transport":"tcp"},"node-name":"f7106ac43d4f125a1911487dd9e3e42","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
- -drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"iscsi","lun":0,"node-name":"efdb73e0d0acc5a60e3ff438cb20113","portal":"127.0.0.1","read-only":false,"target":"iqn.2019-10.org.test:foobar","transport":"tcp"},"node-name":"ffdb73e0d0acc5a60e3ff438cb20113","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
- -drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"iscsi","lun":0,"node-name":"eab527a81b458aa9603dca5e2505f6e","portal":"127.0.0.1","read-only":false,"target":"iqn.2019-10.org.test:foobar","transport":"tcp"},"node-name":"fab527a81b458aa9603dca5e2505f6e","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
- -drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"iscsi","lun":0,"node-name":"e915a332310039f7a3feed6901eb5da","portal":"127.0.0.1","read-only":false,"target":"iqn.2019-10.org.test:foobar","transport":"tcp"},"node-name":"f915a332310039f7a3feed6901eb5da","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple1-template.conf.cmd b/src/test/cfg2cmd/simple1-template.conf.cmd
index c736c84a..4f8f29f6 100644
--- a/src/test/cfg2cmd/simple1-template.conf.cmd
+++ b/src/test/cfg2cmd/simple1-template.conf.cmd
@@ -15,6 +15,9 @@
-nographic \
-cpu qemu64 \
-m 512 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-sata0","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -23,13 +26,12 @@
-device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/8006/base-8006-disk-1.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap,readonly=on' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/base-8006-disk-1.qcow2","node-name":"e1085774206ae4a6b6bf8426ff08f16","read-only":true},"node-name":"f1085774206ae4a6b6bf8426ff08f16","read-only":true},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
-device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' \
- -drive 'file=/var/lib/vz/images/8006/base-8006-disk-0.qcow2,if=none,id=drive-sata0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/base-8006-disk-0.qcow2","node-name":"eab334c2e07734480f33dd80d89871b","read-only":false},"node-name":"fab334c2e07734480f33dd80d89871b","read-only":false},"node-name":"drive-sata0","throttle-group":"throttle-drive-sata0"}' \
-device 'ide-hd,bus=ahci0.0,drive=drive-sata0,id=sata0,write-cache=on' \
-machine 'accel=tcg,smm=off,type=pc+pve0' \
-snapshot
diff --git a/src/test/cfg2cmd/simple1.conf.cmd b/src/test/cfg2cmd/simple1.conf.cmd
index e657aed7..677b0527 100644
--- a/src/test/cfg2cmd/simple1.conf.cmd
+++ b/src/test/cfg2cmd/simple1.conf.cmd
@@ -15,6 +15,8 @@
-vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 768 \
+ -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
-global 'PIIX4_PM.disable_s3=1' \
-global 'PIIX4_PM.disable_s4=1' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,10 +27,9 @@
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
- -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
- -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"ecd04be4259153b8293415fefa2a84c","read-only":false},"node-name":"fcd04be4259153b8293415fefa2a84c","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/run_config2command_tests.pl b/src/test/run_config2command_tests.pl
index 52fedd7b..d8315f9f 100755
--- a/src/test/run_config2command_tests.pl
+++ b/src/test/run_config2command_tests.pl
@@ -266,6 +266,18 @@ $storage_module->mock(
},
);
+my $file_stat_module = Test::MockModule->new("File::stat");
+$file_stat_module->mock(
+ stat => sub {
+ my ($path) = @_;
+ if ($path =~ m!/dev/!) {
+ return $file_stat_module->original('stat')->('/dev/null');
+ } else {
+ return $file_stat_module->original('stat')->('./run_config2command_tests.pl');
+ }
+ },
+);
+
my $zfsplugin_module = Test::MockModule->new("PVE::Storage::ZFSPlugin");
$zfsplugin_module->mock(
zfs_get_lu_name => sub {
@@ -276,6 +288,16 @@ $zfsplugin_module->mock(
},
);
+my $rbdplugin_module = Test::MockModule->new("PVE::Storage::RBDPlugin");
+$rbdplugin_module->mock(
+ rbd_volume_config_get => sub {
+ my ($scfg, $storeid, $volname, $key) = @_;
+ die "mocked rbd_volume_config_get: unexpected key '$key'\n"
+ if $key ne 'rbd_cache_policy';
+ return "writeback";
+ },
+);
+
my $qemu_server_config;
$qemu_server_config = Test::MockModule->new('PVE::QemuConfig');
$qemu_server_config->mock(
--
2.47.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [RFC qemu-server 31/31] test: migration: update running machine to 10.0
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (29 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 30/31] command line: switch to blockdev starting with machine version 10.0 Fiona Ebner
@ 2025-06-25 15:56 ` Fiona Ebner
2025-06-26 13:09 ` [pve-devel] partially-applied: [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fabian Grünbichler
31 siblings, 0 replies; 33+ messages in thread
From: Fiona Ebner @ 2025-06-25 15:56 UTC (permalink / raw)
To: pve-devel
In particular, this also means that (mocked) blockdev_mirror() will be
used.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Best to do it after the switch to blockdev-mirror, so we still test
with the old code paths before this commit.
src/test/run_qemu_migrate_tests.pl | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/src/test/run_qemu_migrate_tests.pl b/src/test/run_qemu_migrate_tests.pl
index 68f0784e..ed2f38ee 100755
--- a/src/test/run_qemu_migrate_tests.pl
+++ b/src/test/run_qemu_migrate_tests.pl
@@ -267,7 +267,7 @@ my $vm_configs = {
'numa' => 0,
'ostype' => 'l26',
'runningcpu' => 'kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep',
- 'runningmachine' => 'pc-i440fx-5.0+pve0',
+ 'runningmachine' => 'pc-i440fx-10.0+pve0',
'scsi0' => 'local-dir:4567/vm-4567-disk-0.qcow2,size=4G',
'scsihw' => 'virtio-scsi-pci',
'smbios1' => 'uuid=2925fdec-a066-4228-b46b-eef8662f5e74',
@@ -288,7 +288,7 @@ my $vm_configs = {
'ostype' => 'l26',
'parent' => 'snap1',
'runningcpu' => 'kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep',
- 'runningmachine' => 'pc-i440fx-5.0+pve0',
+ 'runningmachine' => 'pc-i440fx-10.0+pve0',
'scsi0' => 'local-dir:4567/vm-4567-disk-0.qcow2,size=4G',
'scsi1' => 'local-zfs:vm-4567-disk-0,size=1G',
'scsihw' => 'virtio-scsi-pci',
@@ -769,7 +769,7 @@ my $tests = [
vmid => 4567,
vm_status => {
running => 1,
- runningmachine => 'pc-i440fx-5.0+pve0',
+ runningmachine => 'pc-i440fx-10.0+pve0',
},
opts => {
online => 1,
@@ -783,7 +783,7 @@ my $tests = [
vm_config => $vm_configs->{4567},
vm_status => {
running => 1,
- runningmachine => 'pc-i440fx-5.0+pve0',
+ runningmachine => 'pc-i440fx-10.0+pve0',
},
},
},
@@ -1358,7 +1358,7 @@ my $tests = [
vmid => 105,
vm_status => {
running => 1,
- runningmachine => 'pc-i440fx-5.0+pve0',
+ runningmachine => 'pc-i440fx-10.0+pve0',
},
opts => {
online => 1,
@@ -1376,7 +1376,7 @@ my $tests = [
vm_config => $vm_configs->{105},
vm_status => {
running => 1,
- runningmachine => 'pc-i440fx-5.0+pve0',
+ runningmachine => 'pc-i440fx-10.0+pve0',
},
},
},
@@ -1404,7 +1404,7 @@ my $tests = [
vmid => 105,
vm_status => {
running => 1,
- runningmachine => 'pc-i440fx-5.0+pve0',
+ runningmachine => 'pc-i440fx-10.0+pve0',
},
config_patch => {
snapshots => undef,
@@ -1427,7 +1427,7 @@ my $tests = [
}),
vm_status => {
running => 1,
- runningmachine => 'pc-i440fx-5.0+pve0',
+ runningmachine => 'pc-i440fx-10.0+pve0',
},
},
},
--
2.47.2
* [pve-devel] partially-applied: [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
` (30 preceding siblings ...)
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 31/31] test: migration: update running machine to 10.0 Fiona Ebner
@ 2025-06-26 13:09 ` Fabian Grünbichler
31 siblings, 0 replies; 33+ messages in thread
From: Fabian Grünbichler @ 2025-06-26 13:09 UTC (permalink / raw)
To: Proxmox VE development discussion
On June 25, 2025 5:56 pm, Fiona Ebner wrote:
> Changes to OVMF patches (left-over from part two):
> * 01/31 is new
> * keep get_efivars_size() as a wrapper in QemuServer module
> * keep early check for CPU bitness in QemuServer module
> * use read-only flag for OVMF code
> * collect some parameters into $hw_info hash, avoid querying AMD-SEV
> type inside the OVMF module
>
> Splits out a Network module, qga_check_running(),
> find_vmstate_storage(), QemuMigrate::Helpers, a RunState module with
> the goal of splitting out a BlockJob module, where blockdev_mirror()
> will be added.
>
> Need some more time to make zeroinit work properly, got an initial
> QEMU patch locally, but need to finalize it. Also need to check why
> exactly block-export-add fails without Alexandre's patch, since we do
> query the node name there. We shouldn't use the top node there in any
> case, because we don't want to be limited by limits intended for the
> guest during migration.
>
> Therefore, the patches from 24/31 onwards are RFC, not finalized, just
> for context and easier testing for reviewers.
applied the non-RFC part (didn't do in-depth testing, but the changes
are sensible)
some follow-ups that would be nice to have, as discussed off-list:
- get rid of the small (pre-existing!) back-reference from Drive.pm to
QemuConfig.pm, by making the is_template bool a part of hw_info for
OVMF purposes, and the helper in Drive.pm just return whether setting
to read-only is possible.
- reduce the public interface of BlockJob.pm to just `mirror`, `monitor`
and `cancel`
- make the names of subs in BlockJob shorter, the context is already
there via the fully qualified name for external callers, and by virtue
of being in the module for private subs
- adapt the interface of qemu_drive_mirror to match the blockdev/public
ones (i.e., use source and dest info and options)
very nice work, we are now at 15% line count reduction for
QemuServer.pm already!
end of thread, other threads: [~2025-06-26 13:09 UTC | newest]
Thread overview: 33+ messages
2025-06-25 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 01/31] print ovmf commandline: collect hardware parameters into hash argument Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 02/31] introduce OVMF module Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 03/31] ovmf: add support for using blockdev Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 04/31] cfg2cmd: ovmf: support print_ovmf_commandline() returning machine flags Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 05/31] assume that SDN is available Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 06/31] schema: remove unused pve-qm-ipconfig standard option Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 07/31] remove unused $nic_model_list_txt variable Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 08/31] introduce Network module Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 09/31] agent: drop unused $noerr argument from helpers Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 10/31] agent: code style: order module imports according to style guide Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 11/31] agent: avoid dependency on QemuConfig module Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 12/31] agent: avoid use of deprecated check_running() function Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 13/31] agent: move qga_check_running() to agent module Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 14/31] move find_vmstate_storage() helper to QemuConfig module Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 15/31] introduce QemuMigrate::Helpers module Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 16/31] introduce RunState module Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 17/31] code cleanup: drive mirror: do not prefix calls to function in the same module Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 18/31] introduce BlockJob module Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 19/31] drive: die in get_drive_id() if argument misses relevant members Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 20/31] block job: add and use wrapper for mirror Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 21/31] drive mirror: add variable for device ID and make name for drive ID precise Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 22/31] test: migration: factor out common mocking for mirror Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 23/31] block job: factor out helper for common mirror QMP options Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 24/31] block job: add blockdev mirror Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 25/31] blockdev: support using zeroinit filter Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [PATCH qemu-server 26/31] blockdev: make some functions private Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 27/31] clone disk: skip check for aio=default (io_uring) compatibility starting with machine version 10.0 Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 28/31] print drive device: don't reference any drive for 'none' " Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 29/31] blockdev: add support for NBD paths Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 30/31] command line: switch to blockdev starting with machine version 10.0 Fiona Ebner
2025-06-25 15:56 ` [pve-devel] [RFC qemu-server 31/31] test: migration: update running machine to 10.0 Fiona Ebner
2025-06-26 13:09 ` [pve-devel] partially-applied: [PATCH-SERIES qemu-server 00/31] preparation for blockdev, part three Fabian Grünbichler