public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final)
@ 2025-06-27 15:56 Fiona Ebner
  2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 01/31] mirror: code style: avoid masking earlier declaration of $op Fiona Ebner
                   ` (31 more replies)
  0 siblings, 32 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:56 UTC (permalink / raw)
  To: pve-devel

This is the preliminary final part of the series. I'm sure there will be some
follow-ups, and the decisions about edge cases like the cache mode for the EFI
disk and querying the file child are not yet set in stone, but this should
essentially be it.

The switch from '-drive' to '-blockdev' is in preparation for future
features like external snapshots and FUSE exports via qemu-storage-daemon,
and '-blockdev' is generally the more modern interface in QEMU. It also
makes it possible to address some limitations drive-mirror had: in
particular, this series allows mirroring between storages with different
aio defaults, as well as mirroring EFI disks when the size of the allocated
image doesn't exactly match, see [2] and patch 31/31.

The switch is guarded by machine version 10.0 to avoid any potential
incompatibilities between -drive and -blockdev options/defaults.
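
To illustrate the idea (simplified sketch, not the literal code from patch
29/31; $machine_version, $cmd, $drive and $legacy_drive_string are
placeholders here):

    if (PVE::QemuServer::Helpers::min_version($machine_version, 10, 0)) {
        # new path: generate the JSON for '-blockdev'
        my $blockdev =
            PVE::QemuServer::Blockdev::generate_drive_blockdev($storecfg, $drive, {});
        push @$cmd, '-blockdev', JSON::to_json($blockdev, { canonical => 1 });
    } else {
        # old path: keep generating the legacy '-drive' string
        push @$cmd, '-drive', $legacy_drive_string;
    }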

What is still missing is support for the rather obscure 'snapshot' drive
option, where writes go to a temporary image (currently in '/var/tmp', which
is far from ideal to begin with). Supporting that requires inserting an
overlay node.

This series depends on [0] and [1].

Changes to the patches carried from part three:
* Mirror 17/31: Query QEMU for file child.
* Mirror 17/31: Remove appropriate node after mirror.
* Mirror 17/31: Delete format property from cloned drive hash for
  destination.
* Switch 29/31: Support for live restore and live import.
* Switch 29/31: Use Blockdev::{attach,detach} helpers for hot{,un}plug.
* Switch 29/31: Adapt to changes from previous patches.
* Switch 29/31: Also switch for medium change.


[0]: https://lore.proxmox.com/pve-devel/20250626160504.330350-1-f.ebner@proxmox.com/T/
[1]: https://lore.proxmox.com/pve-devel/20250626144644.279679-1-f.ebner@proxmox.com/T/
[2]: https://bugzilla.proxmox.com/show_bug.cgi?id=3227


Fiona Ebner (31):
  mirror: code style: avoid masking earlier declaration of $op
  test: collect mocked functions for QemuServer module
  drive: add helper to parse drive interface
  drive: drop invalid export of get_scsi_devicetype
  blockdev: add helpers for attaching and detaching block devices
  blockdev: add missing include for JSON module
  backup: use blockdev for fleecing images
  backup: use blockdev for TPM state file
  blockdev: introduce qdev_id_to_drive_id() helper
  blockdev: introduce and use get_block_info() helper
  blockdev: move helper for resize into module
  blockdev: add helper to get node below throttle node
  blockdev: resize: query and use node name for resize operation
  blockdev: support using zeroinit filter
  blockdev: make some functions private
  block job: allow specifying a block node that should be detached upon
    completion
  block job: add blockdev mirror
  blockdev: add change_medium() helper
  blockdev: add blockdev_change_medium() helper
  blockdev: move helper for configuring throttle limits to module
  clone disk: skip check for aio=default (io_uring) compatibility
    starting with machine version 10.0
  print drive device: don't reference any drive for 'none' starting with
    machine version 10.0
  blockdev: add support for NBD paths
  blockdev: add helper to generate PBS block device for live restore
  blockdev: support alloc-track driver for live-{import,restore}
  live import: also record volid information
  live import/restore: query which node to use for operation
  live import/restore: use Blockdev::detach helper
  command line: switch to blockdev starting with machine version 10.0
  test: migration: update running machine to 10.0
  partially fix #3227: ensure that target image for mirror has the same
    size for EFI disks

 src/PVE/API2/Qemu.pm                          |   5 +-
 src/PVE/QemuConfig.pm                         |  12 +-
 src/PVE/QemuServer.pm                         | 319 ++++++------
 src/PVE/QemuServer/BlockJob.pm                | 251 ++++++++-
 src/PVE/QemuServer/Blockdev.pm                | 482 +++++++++++++++++-
 src/PVE/QemuServer/Drive.pm                   |  21 +-
 src/PVE/QemuServer/OVMF.pm                    |  21 +-
 src/PVE/VZDump/QemuServer.pm                  |  50 +-
 src/test/MigrationTest/QemuMigrateMock.pm     |  13 +
 src/test/MigrationTest/QmMock.pm              |  56 +-
 src/test/cfg2cmd/aio.conf.cmd                 |  42 +-
 src/test/cfg2cmd/bootorder-empty.conf.cmd     |  13 +-
 src/test/cfg2cmd/bootorder-legacy.conf.cmd    |  13 +-
 src/test/cfg2cmd/bootorder.conf.cmd           |  13 +-
 ...putype-icelake-client-deprecation.conf.cmd |   7 +-
 src/test/cfg2cmd/efi-raw-template.conf.cmd    |   7 +-
 src/test/cfg2cmd/efi-raw.conf.cmd             |   7 +-
 .../cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd  |   7 +-
 src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd |   7 +-
 src/test/cfg2cmd/efidisk-on-rbd.conf.cmd      |   7 +-
 src/test/cfg2cmd/ide.conf.cmd                 |  15 +-
 src/test/cfg2cmd/q35-ide.conf.cmd             |  15 +-
 .../q35-linux-hostpci-mapping.conf.cmd        |   7 +-
 .../q35-linux-hostpci-multifunction.conf.cmd  |   7 +-
 .../q35-linux-hostpci-template.conf.cmd       |  10 +-
 ...q35-linux-hostpci-x-pci-overrides.conf.cmd |   7 +-
 src/test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   7 +-
 src/test/cfg2cmd/q35-simple.conf.cmd          |   7 +-
 src/test/cfg2cmd/seabios_serial.conf.cmd      |   7 +-
 src/test/cfg2cmd/sev-es.conf.cmd              |   7 +-
 src/test/cfg2cmd/sev-std.conf.cmd             |   7 +-
 src/test/cfg2cmd/simple-btrfs.conf.cmd        |  16 +-
 src/test/cfg2cmd/simple-cifs.conf.cmd         |  16 +-
 .../cfg2cmd/simple-disk-passthrough.conf.cmd  |   9 +-
 src/test/cfg2cmd/simple-lvm.conf.cmd          |  12 +-
 src/test/cfg2cmd/simple-lvmthin.conf.cmd      |  12 +-
 src/test/cfg2cmd/simple-rbd.conf.cmd          |  28 +-
 src/test/cfg2cmd/simple-virtio-blk.conf.cmd   |   7 +-
 .../cfg2cmd/simple-zfs-over-iscsi.conf.cmd    |  16 +-
 src/test/cfg2cmd/simple1-template.conf.cmd    |  10 +-
 src/test/cfg2cmd/simple1.conf.cmd             |   7 +-
 src/test/run_config2command_tests.pl          |  19 +
 src/test/run_qemu_migrate_tests.pl            |  16 +-
 43 files changed, 1208 insertions(+), 409 deletions(-)

-- 
2.47.2




* [pve-devel] [PATCH qemu-server 01/31] mirror: code style: avoid masking earlier declaration of $op
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
@ 2025-06-27 15:56 ` Fiona Ebner
  2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 02/31] test: collect mocked functions for QemuServer module Fiona Ebner
                   ` (30 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:56 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/BlockJob.pm | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 0013bde6..d26bcb01 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -179,15 +179,15 @@ sub qemu_drive_mirror_monitor {
                         # try to switch the disk if source and destination are on the same guest
                         print "$job_id: Completing block job...\n";
 
-                        my $op;
+                        my $completion_command;
                         if ($completion eq 'complete') {
-                            $op = 'block-job-complete';
+                            $completion_command = 'block-job-complete';
                         } elsif ($completion eq 'cancel') {
-                            $op = 'block-job-cancel';
+                            $completion_command = 'block-job-cancel';
                         } else {
                             die "invalid completion value: $completion\n";
                         }
-                        eval { mon_cmd($vmid, $op, device => $job_id) };
+                        eval { mon_cmd($vmid, $completion_command, device => $job_id) };
                         my $err = $@;
                         if ($err && $err =~ m/cannot be completed/) {
                             print "$job_id: block job cannot be completed, trying again.\n";
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 02/31] test: collect mocked functions for QemuServer module
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
  2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 01/31] mirror: code style: avoid masking earlier declaration of $op Fiona Ebner
@ 2025-06-27 15:56 ` Fiona Ebner
  2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 03/31] drive: add helper to parse drive interface Fiona Ebner
                   ` (29 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:56 UTC (permalink / raw)
  To: pve-devel

Also order them alphabetically.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/test/MigrationTest/QmMock.pm | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/src/test/MigrationTest/QmMock.pm b/src/test/MigrationTest/QmMock.pm
index 3eaa131f..de7f4cd7 100644
--- a/src/test/MigrationTest/QmMock.pm
+++ b/src/test/MigrationTest/QmMock.pm
@@ -50,21 +50,6 @@ $inotify_module->mock(
     },
 );
 
-$MigrationTest::Shared::qemu_server_module->mock(
-    nodename => sub {
-        return $nodename;
-    },
-    config_to_command => sub {
-        return ['mocked_kvm_command'];
-    },
-    vm_start_nolock => sub {
-        my ($storecfg, $vmid, $conf, $params, $migrate_opts) = @_;
-        $forcemachine = $params->{forcemachine}
-            or die "mocked vm_start_nolock - expected 'forcemachine' parameter\n";
-        $MigrationTest::Shared::qemu_server_module->original('vm_start_nolock')->(@_);
-    },
-);
-
 my $qemu_server_helpers_module = Test::MockModule->new("PVE::QemuServer::Helpers");
 $qemu_server_helpers_module->mock(
     vm_running_locally => sub {
@@ -113,6 +98,9 @@ $MigrationTest::Shared::storage_module->mock(
 );
 
 $MigrationTest::Shared::qemu_server_module->mock(
+    config_to_command => sub {
+        return ['mocked_kvm_command'];
+    },
     mon_cmd => sub {
         my ($vmid, $command, %params) = @_;
 
@@ -127,6 +115,9 @@ $MigrationTest::Shared::qemu_server_module->mock(
         }
         die "mon_cmd (mocked) - implement me: $command";
     },
+    nodename => sub {
+        return $nodename;
+    },
     run_command => sub {
         my ($cmd_full, %param) = @_;
 
@@ -149,6 +140,12 @@ $MigrationTest::Shared::qemu_server_module->mock(
         file_set_contents("${RUN_DIR_PATH}/nbd_info", to_json($nbd));
         return $nbd;
     },
+    vm_start_nolock => sub {
+        my ($storecfg, $vmid, $conf, $params, $migrate_opts) = @_;
+        $forcemachine = $params->{forcemachine}
+            or die "mocked vm_start_nolock - expected 'forcemachine' parameter\n";
+        $MigrationTest::Shared::qemu_server_module->original('vm_start_nolock')->(@_);
+    },
 );
 
 our $cmddef = {
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 03/31] drive: add helper to parse drive interface
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
  2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 01/31] mirror: code style: avoid masking earlier declaration of $op Fiona Ebner
  2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 02/31] test: collect mocked functions for QemuServer module Fiona Ebner
@ 2025-06-27 15:56 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 04/31] drive: drop invalid export of get_scsi_devicetype Fiona Ebner
                   ` (28 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:56 UTC (permalink / raw)
  To: pve-devel
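
Illustrative usage of the new helper (the drive key is just an example):

    my ($interface, $index) = PVE::QemuServer::Drive::parse_drive_interface('scsi2');
    # $interface is 'scsi', $index is 2; a key without a trailing index dies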

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Drive.pm | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/src/PVE/QemuServer/Drive.pm b/src/PVE/QemuServer/Drive.pm
index a6f5062f..bc9fbc48 100644
--- a/src/PVE/QemuServer/Drive.pm
+++ b/src/PVE/QemuServer/Drive.pm
@@ -745,6 +745,16 @@ sub drive_is_read_only {
     return $drive->{interface} ne 'sata' && $drive->{interface} ne 'ide';
 }
 
+sub parse_drive_interface {
+    my ($key) = @_;
+
+    if ($key =~ m/^([^\d]+)(\d+)$/) {
+        return ($1, $2);
+    }
+
+    die "unable to parse drive interface $key\n";
+}
+
 # ideX = [volume=]volume-id[,media=d]
 #        [,snapshot=on|off][,cache=on|off][,format=f][,backup=yes|no]
 #        [,rerror=ignore|report|stop][,werror=enospc|ignore|report|stop]
@@ -754,14 +764,8 @@ sub drive_is_read_only {
 sub parse_drive {
     my ($key, $data, $with_alloc) = @_;
 
-    my ($interface, $index);
-
-    if ($key =~ m/^([^\d]+)(\d+)$/) {
-        $interface = $1;
-        $index = $2;
-    } else {
-        return;
-    }
+    my ($interface, $index) = eval { parse_drive_interface($key) };
+    return if $@;
 
     my $desc_hash = $with_alloc ? $drivedesc_hash_with_alloc : $drivedesc_hash;
 
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 04/31] drive: drop invalid export of get_scsi_devicetype
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (2 preceding siblings ...)
  2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 03/31] drive: add helper to parse drive interface Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices Fiona Ebner
                   ` (27 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

The function is called 'get_scsi_device_type' and all callers already
use the full package prefix to call it.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Drive.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/PVE/QemuServer/Drive.pm b/src/PVE/QemuServer/Drive.pm
index bc9fbc48..cd00f37d 100644
--- a/src/PVE/QemuServer/Drive.pm
+++ b/src/PVE/QemuServer/Drive.pm
@@ -21,7 +21,6 @@ our @EXPORT_OK = qw(
     drive_is_cloudinit
     drive_is_cdrom
     drive_is_read_only
-    get_scsi_devicetype
     parse_drive
     print_drive
     storage_allows_io_uring_default
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (3 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 04/31] drive: drop invalid export of get_scsi_devicetype Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-30 10:15   ` Fabian Grünbichler
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 06/31] blockdev: add missing include for JSON module Fiona Ebner
                   ` (26 subsequent siblings)
  31 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel
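
Usage is expected to look roughly like this (illustrative sketch; drive key,
node name and options are examples only):

    my $drive = PVE::QemuServer::Drive::parse_drive('scsi1', $conf->{scsi1});
    # attach the image read-only, e.g. for a backup-related use case
    PVE::QemuServer::Blockdev::attach($storecfg, $vmid, $drive, { 'read-only' => 1 });
    # later, remove the top node (and its children) again
    PVE::QemuServer::Blockdev::detach($vmid, 'drive-scsi1');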

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 132 +++++++++++++++++++++++++++++++++
 1 file changed, 132 insertions(+)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 6e6b9245..26d70eee 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -11,6 +11,7 @@ use PVE::JSONSchema qw(json_bool);
 use PVE::Storage;
 
 use PVE::QemuServer::Drive qw(drive_is_cdrom);
+use PVE::QemuServer::Monitor qw(mon_cmd);
 
 my sub get_node_name {
     my ($type, $drive_id, $volid, $snap) = @_;
@@ -221,4 +222,135 @@ sub generate_drive_blockdev {
     };
 }
 
+my sub blockdev_add {
+    my ($vmid, $blockdev) = @_;
+
+    eval { mon_cmd($vmid, 'blockdev-add', $blockdev->%*); };
+    if (my $err = $@) {
+        my $node_name = $blockdev->{'node-name'} // 'undefined';
+        die "adding blockdev '$node_name' failed : $err\n" if $@;
+    }
+
+    return;
+}
+
+=pod
+
+=head3 attach
+
+    attach($storecfg, $vmid, $drive, $options);
+
+Attach the drive C<$drive> to the VM C<$vmid> considering the additional options C<$options>.
+
+Parameters:
+
+=over
+
+=item C<$storecfg>
+
+The storage configuration.
+
+=item C<$vmid>
+
+The ID of the virtual machine.
+
+=item C<$drive>
+
+The drive as parsed from a virtual machine configuration.
+
+=item C<$options>
+
+A hash reference with additional options.
+
+=over
+
+=item C<< $options->{'read-only'} >>
+
+Attach the image as read-only irrespective of the configuration in C<$drive>.
+
+=item C<< $options->{size} >>
+
+Attach the image with this virtual size. Must be smaller than the actual size of the image. The
+image format must be C<raw>.
+
+=item C<< $options->{'snapshot-name'} >>
+
+Attach this snapshot of the volume C<< $drive->{file} >>, rather than the volume itself.
+
+=back
+
+=back
+
+=cut
+
+sub attach {
+    my ($storecfg, $vmid, $drive, $options) = @_;
+
+    my $blockdev = generate_drive_blockdev($storecfg, $drive, $options);
+
+    my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
+    if ($blockdev->{'node-name'} eq "drive-$drive_id") { # device top nodes need a throttle group
+        my $throttle_group = generate_throttle_group($drive);
+        mon_cmd($vmid, 'object-add', $throttle_group->%*);
+    }
+
+    eval { blockdev_add($vmid, $blockdev); };
+    if (my $err = $@) {
+        eval { mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id"); };
+        warn $@ if $@;
+        die $err;
+    }
+
+    return;
+}
+
+=pod
+
+=head3 detach
+
+    detach($vmid, $node_name);
+
+Detach the block device C<$node_name> from the VM C<$vmid>. Also removes associated child block
+nodes.
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the virtual machine.
+
+=item C<$node_name>
+
+The node name identifying the block node in QEMU.
+
+=back
+
+=cut
+
+sub detach {
+    my ($vmid, $node_name) = @_;
+
+    die "Blockdev::detach - no node name\n" if !$node_name;
+
+    # QEMU recursively auto-removes the file children, i.e. file and format node below the top
+    # node and also implicit backing children referenced by a qcow2 image.
+    eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "$node_name"); };
+    if (my $err = $@) {
+        return if $err =~ m/Failed to find node with node-name/; # already gone
+        die "deleting blockdev '$node_name' failed : $err\n";
+    }
+
+    if ($node_name =~ m/^drive-(.+)$/) {
+        # also remove throttle group if it was a device top node
+        my $drive_id = $1;
+        if (PVE::QemuServer::Drive::is_valid_drivename($drive_id)) {
+            mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id");
+        }
+    }
+
+    return;
+}
+
 1;
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 06/31] blockdev: add missing include for JSON module
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (4 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 07/31] backup: use blockdev for fleecing images Fiona Ebner
                   ` (25 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Fixes: f2f2edcd ("blockdev: add workaround for issue #3229")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 26d70eee..8a991587 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -6,6 +6,7 @@ use warnings;
 use Digest::SHA;
 use Fcntl qw(S_ISBLK S_ISCHR);
 use File::stat;
+use JSON;
 
 use PVE::JSONSchema qw(json_bool);
 use PVE::Storage;
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 07/31] backup: use blockdev for fleecing images
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (5 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 06/31] blockdev: add missing include for JSON module Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-30 10:15   ` Fabian Grünbichler
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 08/31] backup: use blockdev for TPM state file Fiona Ebner
                   ` (24 subsequent siblings)
  31 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuConfig.pm          | 12 ++-------
 src/PVE/QemuServer/Blockdev.pm | 45 +++++++++++++++++++++++++++++++---
 src/PVE/VZDump/QemuServer.pm   | 26 ++++++++++++--------
 3 files changed, 59 insertions(+), 24 deletions(-)

diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
index 01104723..82295641 100644
--- a/src/PVE/QemuConfig.pm
+++ b/src/PVE/QemuConfig.pm
@@ -10,6 +10,7 @@ use PVE::INotify;
 use PVE::JSONSchema;
 use PVE::QemuMigrate::Helpers;
 use PVE::QemuServer::Agent;
+use PVE::QemuServer::Blockdev;
 use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers;
@@ -675,16 +676,7 @@ sub cleanup_fleecing_images {
         };
         $log_func->('warn', "checking/canceling old backup job failed - $@") if $@;
 
-        my $block_info = mon_cmd($vmid, "query-block");
-        for my $info ($block_info->@*) {
-            my $device_id = $info->{device};
-            next if $device_id !~ m/-fleecing$/;
-
-            $log_func->('info', "detaching (old) fleecing image for '$device_id'");
-            $device_id =~ s/^drive-//; # re-added by qemu_drivedel()
-            eval { PVE::QemuServer::qemu_drivedel($vmid, $device_id) };
-            $log_func->('warn', "error detaching (old) fleecing image '$device_id' - $@") if $@;
-        }
+        PVE::QemuServer::Blockdev::detach_fleecing_block_nodes($vmid, $log_func);
     }
 
     PVE::QemuConfig->lock_config(
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 8a991587..28a759a8 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -14,8 +14,30 @@ use PVE::Storage;
 use PVE::QemuServer::Drive qw(drive_is_cdrom);
 use PVE::QemuServer::Monitor qw(mon_cmd);
 
+my sub fleecing_node_name {
+    my ($type, $drive_id) = @_;
+
+    if ($type eq 'fmt') {
+        return "drive-$drive_id-fleecing"; # this is the top node for fleecing
+    } elsif ($type eq 'file') {
+        return "$drive_id-fleecing-file"; # drop the "drive-" prefix to be sure, max length is 31
+    }
+
+    die "unknown node type for fleecing '$type'";
+}
+
+my sub is_fleecing_top_node {
+    my ($node_name) = @_;
+
+    return $node_name =~ m/-fleecing$/ ? 1 : 0;
+}
+
 my sub get_node_name {
-    my ($type, $drive_id, $volid, $snap) = @_;
+    my ($type, $drive_id, $volid, $options) = @_;
+
+    return fleecing_node_name($type, $drive_id) if $options->{fleecing};
+
+    my $snap = $options->{'snapshot-name'};
 
     my $info = "drive=$drive_id,";
     $info .= "snap=$snap," if defined($snap);
@@ -151,8 +173,7 @@ sub generate_file_blockdev {
         $blockdev->{'detect-zeroes'} = PVE::QemuServer::Drive::detect_zeroes_cmdline_option($drive);
     }
 
-    $blockdev->{'node-name'} =
-        get_node_name('file', $drive_id, $drive->{file}, $options->{'snapshot-name'});
+    $blockdev->{'node-name'} = get_node_name('file', $drive_id, $drive->{file}, $options);
 
     $blockdev->{'read-only'} = read_only_json_option($drive, $options);
 
@@ -185,7 +206,7 @@ sub generate_format_blockdev {
         $format = $drive->{format} // 'raw';
     }
 
-    my $node_name = get_node_name('fmt', $drive_id, $drive->{file}, $options->{'snapshot-name'});
+    my $node_name = get_node_name('fmt', $drive_id, $drive->{file}, $options);
 
     my $blockdev = {
         'node-name' => "$node_name",
@@ -214,6 +235,8 @@ sub generate_drive_blockdev {
     my $child = generate_file_blockdev($storecfg, $drive, $options);
     $child = generate_format_blockdev($storecfg, $drive, $child, $options);
 
+    return $child if $options->{fleecing}; # for fleecing, this is already the top node
+
     # this is the top filter entry point, use $drive-drive_id as nodename
     return {
         driver => "throttle",
@@ -354,4 +377,18 @@ sub detach {
     return;
 }
 
+sub detach_fleecing_block_nodes {
+    my ($vmid, $log_func) = @_;
+
+    my $block_info = mon_cmd($vmid, "query-named-block-nodes");
+    for my $info ($block_info->@*) {
+        my $node_name = $info->{'node-name'};
+        next if !is_fleecing_top_node($node_name);
+
+        $log_func->('info', "detaching (old) fleecing image '$node_name'");
+        eval { detach($vmid, $node_name) };
+        $log_func->('warn', "error detaching (old) fleecing image '$node_name' - $@") if $@;
+    }
+}
+
 1;
diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
index 243a927e..8b643bc4 100644
--- a/src/PVE/VZDump/QemuServer.pm
+++ b/src/PVE/VZDump/QemuServer.pm
@@ -30,6 +30,7 @@ use PVE::Format qw(render_duration render_bytes);
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::QemuServer::Agent;
+use PVE::QemuServer::Blockdev;
 use PVE::QemuServer::Drive qw(checked_volume_format);
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Machine;
@@ -626,9 +627,8 @@ my sub detach_fleecing_images {
 
     for my $di ($disks->@*) {
         if (my $volid = $di->{'fleece-volid'}) {
-            my $devid = "$di->{qmdevice}-fleecing";
-            $devid =~ s/^drive-//; # re-added by qemu_drivedel()
-            eval { PVE::QemuServer::qemu_drivedel($vmid, $devid) };
+            my $node_name = "$di->{qmdevice}-fleecing";
+            eval { PVE::QemuServer::Blockdev::detach($vmid, $node_name) };
         }
     }
 }
@@ -646,15 +646,21 @@ my sub attach_fleecing_images {
         if (my $volid = $di->{'fleece-volid'}) {
             $self->loginfo("$di->{qmdevice}: attaching fleecing image $volid to QEMU");
 
-            my $path = PVE::Storage::path($self->{storecfg}, $volid);
-            my $devid = "$di->{qmdevice}-fleecing";
-            my $drive = "file=$path,if=none,id=$devid,format=$format,discard=unmap";
+            my ($interface, $index) = PVE::QemuServer::Drive::parse_drive_interface($di->{virtdev});
+            my $drive = {
+                file => $volid,
+                interface => $interface,
+                index => $index,
+                format => $format,
+                discard => 'on',
+            };
+
+            my $options = { 'fleecing' => 1 };
             # Specify size explicitly, to make it work if storage backend rounded up size for
             # fleecing image when allocating.
-            $drive .= ",size=$di->{'block-node-size'}" if $format eq 'raw';
-            $drive =~ s/\\/\\\\/g;
-            my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
-            die "attaching fleecing image $volid failed - $ret\n" if $ret !~ m/OK/s;
+            $options->{size} = $di->{'block-node-size'} if $format eq 'raw';
+
+            PVE::QemuServer::Blockdev::attach($self->{storecfg}, $vmid, $drive, $options);
         }
     }
 }
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 08/31] backup: use blockdev for TPM state file
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (6 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 07/31] backup: use blockdev for fleecing images Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-30 10:15   ` Fabian Grünbichler
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 09/31] blockdev: introduce qdev_id_to_drive_id() helper Fiona Ebner
                   ` (23 subsequent siblings)
  31 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 22 +++++++++++++++++++++-
 src/PVE/VZDump/QemuServer.pm   | 19 ++++++++++---------
 2 files changed, 31 insertions(+), 10 deletions(-)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 28a759a8..85887ab7 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -14,6 +14,18 @@ use PVE::Storage;
 use PVE::QemuServer::Drive qw(drive_is_cdrom);
 use PVE::QemuServer::Monitor qw(mon_cmd);
 
+my sub tpm_backup_node_name {
+    my ($type, $drive_id) = @_;
+
+    if ($type eq 'fmt') {
+        return "drive-$drive_id-backup"; # this is the top node
+    } elsif ($type eq 'file') {
+        return "$drive_id-backup-file"; # drop the "drive-" prefix to be sure, max length is 31
+    }
+
+    die "unknown node type for fleecing '$type'";
+}
+
 my sub fleecing_node_name {
     my ($type, $drive_id) = @_;
 
@@ -36,6 +48,7 @@ my sub get_node_name {
     my ($type, $drive_id, $volid, $options) = @_;
 
     return fleecing_node_name($type, $drive_id) if $options->{fleecing};
+    return tpm_backup_node_name($type, $drive_id) if $options->{'tpm-backup'};
 
     my $snap = $options->{'snapshot-name'};
 
@@ -235,7 +248,8 @@ sub generate_drive_blockdev {
     my $child = generate_file_blockdev($storecfg, $drive, $options);
     $child = generate_format_blockdev($storecfg, $drive, $child, $options);
 
-    return $child if $options->{fleecing}; # for fleecing, this is already the top node
+    # for fleecing and TPM backup, this is already the top node
+    return $child if $options->{fleecing} || $options->{'tpm-backup'};
 
     # this is the top filter entry point, use $drive-drive_id as nodename
     return {
@@ -377,6 +391,12 @@ sub detach {
     return;
 }
 
+sub detach_tpm_backup_node {
+    my ($vmid) = @_;
+
+    detach($vmid, "drive-tpmstate0-backup");
+}
+
 sub detach_fleecing_block_nodes {
     my ($vmid, $log_func) = @_;
 
diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
index 8b643bc4..f3e292e7 100644
--- a/src/PVE/VZDump/QemuServer.pm
+++ b/src/PVE/VZDump/QemuServer.pm
@@ -158,7 +158,7 @@ sub prepare {
         if ($ds eq 'tpmstate0') {
             # TPM drive only exists for backup, which is reflected in the name
             $diskinfo->{qmdevice} = 'drive-tpmstate0-backup';
-            $task->{tpmpath} = $path;
+            $task->{'tpm-volid'} = $volid;
         }
 
         if (-b $path) {
@@ -474,24 +474,25 @@ my $query_backup_status_loop = sub {
 my $attach_tpmstate_drive = sub {
     my ($self, $task, $vmid) = @_;
 
-    return if !$task->{tpmpath};
+    return if !$task->{'tpm-volid'};
 
     # unconditionally try to remove the tpmstate-named drive - it only exists
     # for backing up, and avoids errors if left over from some previous event
-    eval { PVE::QemuServer::qemu_drivedel($vmid, "tpmstate0-backup"); };
+    eval { PVE::QemuServer::Blockdev::detach_tpm_backup_node($vmid); };
 
     $self->loginfo('attaching TPM drive to QEMU for backup');
 
-    my $drive = "file=$task->{tpmpath},if=none,read-only=on,id=drive-tpmstate0-backup";
-    $drive =~ s/\\/\\\\/g;
-    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
-    die "attaching TPM drive failed - $ret\n" if $ret !~ m/OK/s;
+    my $drive = { file => $task->{'tpm-volid'}, interface => 'tpmstate', index => 0 };
+    my $extra_options = { 'tpm-backup' => 1, 'read-only' => 1 };
+    PVE::QemuServer::Blockdev::attach($self->{storecfg}, $vmid, $drive, $extra_options);
 };
 
 my $detach_tpmstate_drive = sub {
     my ($task, $vmid) = @_;
-    return if !$task->{tpmpath} || !PVE::QemuServer::check_running($vmid);
-    eval { PVE::QemuServer::qemu_drivedel($vmid, "tpmstate0-backup"); };
+
+    return if !$task->{'tpm-volid'} || !PVE::QemuServer::Helpers::vm_running_locally($vmid);
+
+    eval { PVE::QemuServer::Blockdev::detach_tpm_backup_node($vmid); };
 };
 
 my sub add_backup_performance_options {
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 09/31] blockdev: introduce qdev_id_to_drive_id() helper
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (7 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 08/31] backup: use blockdev for TPM state file Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 10/31] blockdev: introduce and use get_block_info() helper Fiona Ebner
                   ` (22 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

To be re-used by users of the query-block QMP command. With -blockdev,
the 'device' property is not initialized. See also commit 9af3ef69
("vm devices list: prepare querying block device names for -blockdev")
for context.
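
For example (illustrative):

    qdev_id_to_drive_id('/machine/peripheral/virtio0/virtio-backend'); # 'virtio0'
    qdev_id_to_drive_id('/machine/system.flash1'); # 'efidisk0'
    qdev_id_to_drive_id('scsi0'); # unchanged for SCSI/SATA/IDE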

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm          | 11 +++--------
 src/PVE/QemuServer/Blockdev.pm | 14 ++++++++++++++
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index e7c98520..a15b4557 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -55,6 +55,7 @@ use PVE::QemuConfig;
 use PVE::QemuConfig::NoWrite;
 use PVE::QemuMigrate::Helpers;
 use PVE::QemuServer::Agent qw(qga_check_running);
+use PVE::QemuServer::Blockdev;
 use PVE::QemuServer::BlockJob;
 use PVE::QemuServer::Helpers
     qw(config_aware_timeout get_iscsi_initiator_name min_version kvm_user_version windows_version);
@@ -3813,14 +3814,8 @@ sub vm_devices_list {
     my $resblock = mon_cmd($vmid, 'query-block');
     for my $block ($resblock->@*) {
         my $qdev_id = $block->{qdev} or next;
-        if ($qdev_id =~ m|^/machine/peripheral/(virtio(\d+))/virtio-backend$|) {
-            $qdev_id = $1;
-        } elsif ($qdev_id =~ m|^/machine/system\.flash0$|) {
-            $qdev_id = 'pflash0';
-        } elsif ($qdev_id =~ m|^/machine/system\.flash1$|) {
-            $qdev_id = 'efidisk0';
-        }
-        $devices->{$qdev_id} = 1;
+        my $drive_id = PVE::QemuServer::Blockdev::qdev_id_to_drive_id($qdev_id);
+        $devices->{$drive_id} = 1;
     }
 
     my $resmice = mon_cmd($vmid, 'query-mice');
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 85887ab7..99b8a5c6 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -44,6 +44,20 @@ my sub is_fleecing_top_node {
     return $node_name =~ m/-fleecing$/ ? 1 : 0;
 }
 
+sub qdev_id_to_drive_id {
+    my ($qdev_id) = @_;
+
+    if ($qdev_id =~ m|^/machine/peripheral/(virtio(\d+))/virtio-backend$|) {
+        return $1;
+    } elsif ($qdev_id =~ m|^/machine/system\.flash0$|) {
+        return 'pflash0';
+    } elsif ($qdev_id =~ m|^/machine/system\.flash1$|) {
+        return 'efidisk0';
+    }
+
+    return $qdev_id; # for SCSI/SATA/IDE it's the same
+}
+
 my sub get_node_name {
     my ($type, $drive_id, $volid, $options) = @_;
 
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 10/31] blockdev: introduce and use get_block_info() helper
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (8 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 09/31] blockdev: introduce qdev_id_to_drive_id() helper Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 11/31] blockdev: move helper for resize into module Fiona Ebner
                   ` (21 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

When querying the block info, with -blockdev, it is necessary to look
at the 'qdev' property of the QMP result, because the 'device'
property is not initialized. See also commit 9af3ef69 ("vm devices
list: prepare querying block device names for -blockdev").

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm            |  5 ++--
 src/PVE/QemuServer/Blockdev.pm   | 39 ++++++++++++++++++++++++++++++++
 src/PVE/VZDump/QemuServer.pm     |  5 ++--
 src/test/MigrationTest/QmMock.pm | 35 ++++++++++++++++------------
 4 files changed, 64 insertions(+), 20 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index a15b4557..dedb05f1 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -5732,14 +5732,13 @@ sub vm_start_nolock {
             $migrate_storage_uri = "nbd:${localip}:${storage_migrate_port}";
         }
 
-        my $block_info = mon_cmd($vmid, "query-block");
-        $block_info = { map { $_->{device} => $_ } $block_info->@* };
+        my $block_info = PVE::QemuServer::Blockdev::get_block_info($vmid);
 
         foreach my $opt (sort keys %$nbd) {
             my $drivestr = $nbd->{$opt}->{drivestr};
             my $volid = $nbd->{$opt}->{volid};
 
-            my $block_node = $block_info->{"drive-$opt"}->{inserted}->{'node-name'};
+            my $block_node = $block_info->{$opt}->{inserted}->{'node-name'};
 
             mon_cmd(
                 $vmid,
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 99b8a5c6..a7618258 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -58,6 +58,45 @@ sub qdev_id_to_drive_id {
     return $qdev_id; # for SCSI/SATA/IDE it's the same
 }
 
+=pod
+
+=head3 get_block_info
+
+    my $block_info = get_block_info($vmid);
+    my $inserted = $block_info->{$drive_key}->{inserted};
+    my $node_name = $inserted->{'node-name'};
+    my $block_node_size = $inserted->{image}->{'virtual-size'};
+
+Returns a hash reference with the information from the C<query-block> QMP command indexed by
+configuration drive keys like C<scsi2>. See the QMP documentation for details.
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the virtual machine to query.
+
+=back
+
+=cut
+
+sub get_block_info {
+    my ($vmid) = @_;
+
+    my $block_info = {};
+
+    my $qmp_block_info = mon_cmd($vmid, "query-block");
+    for my $info ($qmp_block_info->@*) {
+        my $qdev_id = $info->{qdev} or next;
+        my $drive_id = qdev_id_to_drive_id($qdev_id);
+        $block_info->{$drive_id} = $info;
+    }
+
+    return $block_info;
+}
+
 my sub get_node_name {
     my ($type, $drive_id, $volid, $options) = @_;
 
diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
index f3e292e7..44d3c594 100644
--- a/src/PVE/VZDump/QemuServer.pm
+++ b/src/PVE/VZDump/QemuServer.pm
@@ -1122,14 +1122,13 @@ sub qga_fs_thaw {
 sub query_block_node_sizes {
     my ($self, $vmid, $disks) = @_;
 
-    my $block_info = mon_cmd($vmid, "query-block");
-    $block_info = { map { $_->{device} => $_ } $block_info->@* };
+    my $block_info = PVE::QemuServer::Blockdev::get_block_info($vmid);
 
     for my $diskinfo ($disks->@*) {
         my $drive_key = $diskinfo->{virtdev};
         $drive_key .= "-backup" if $drive_key eq 'tpmstate0';
         my $block_node_size =
-            eval { $block_info->{"drive-$drive_key"}->{inserted}->{image}->{'virtual-size'}; };
+            eval { $block_info->{$drive_key}->{inserted}->{image}->{'virtual-size'}; };
         if (!$block_node_size) {
             $self->loginfo(
                 "could not determine block node size of drive '$drive_key' - using fallback");
diff --git a/src/test/MigrationTest/QmMock.pm b/src/test/MigrationTest/QmMock.pm
index de7f4cd7..78be47d3 100644
--- a/src/test/MigrationTest/QmMock.pm
+++ b/src/test/MigrationTest/QmMock.pm
@@ -43,6 +43,21 @@ sub fork_worker {
 
 # mocked modules
 
+my sub mocked_mon_cmd {
+    my ($vmid, $command, %params) = @_;
+
+    if ($command eq 'nbd-server-start') {
+        return;
+    } elsif ($command eq 'block-export-add') {
+        return;
+    } elsif ($command eq 'query-block') {
+        return [];
+    } elsif ($command eq 'qom-set') {
+        return;
+    }
+    die "mon_cmd (mocked) - implement me: $command";
+}
+
 my $inotify_module = Test::MockModule->new("PVE::INotify");
 $inotify_module->mock(
     nodename => sub {
@@ -50,6 +65,11 @@ $inotify_module->mock(
     },
 );
 
+my $qemu_server_blockdev_module = Test::MockModule->new("PVE::QemuServer::Blockdev");
+$qemu_server_blockdev_module->mock(
+    mon_cmd => \&mocked_mon_cmd,
+);
+
 my $qemu_server_helpers_module = Test::MockModule->new("PVE::QemuServer::Helpers");
 $qemu_server_helpers_module->mock(
     vm_running_locally => sub {
@@ -101,20 +121,7 @@ $MigrationTest::Shared::qemu_server_module->mock(
     config_to_command => sub {
         return ['mocked_kvm_command'];
     },
-    mon_cmd => sub {
-        my ($vmid, $command, %params) = @_;
-
-        if ($command eq 'nbd-server-start') {
-            return;
-        } elsif ($command eq 'block-export-add') {
-            return;
-        } elsif ($command eq 'query-block') {
-            return [];
-        } elsif ($command eq 'qom-set') {
-            return;
-        }
-        die "mon_cmd (mocked) - implement me: $command";
-    },
+    mon_cmd => \&mocked_mon_cmd,
     nodename => sub {
         return $nodename;
     },
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 11/31] blockdev: move helper for resize into module
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (9 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 10/31] blockdev: introduce and use get_block_info() helper Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 12/31] blockdev: add helper to get node below throttle node Fiona Ebner
                   ` (20 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

And replace the deprecated check_running() call while at it.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/API2/Qemu.pm           |  3 ++-
 src/PVE/QemuServer.pm          | 21 ---------------------
 src/PVE/QemuServer/Blockdev.pm | 22 ++++++++++++++++++++++
 3 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 6565ce71..1aa3b358 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -28,6 +28,7 @@ use PVE::GuestImport;
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::QemuServer::Agent;
+use PVE::QemuServer::Blockdev;
 use PVE::QemuServer::BlockJob;
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CPUConfig;
@@ -5745,7 +5746,7 @@ __PACKAGE__->register_method({
                 "update VM $vmid: resize --disk $disk --size $sizestr",
             );
 
-            PVE::QemuServer::qemu_block_resize(
+            PVE::QemuServer::Blockdev::resize(
                 $vmid, "drive-$disk", $storecfg, $volid, $newsize,
             );
 
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index dedb05f1..3f135fcb 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -4341,27 +4341,6 @@ sub qemu_block_set_io_throttle {
 
 }
 
-sub qemu_block_resize {
-    my ($vmid, $deviceid, $storecfg, $volid, $size) = @_;
-
-    my $running = check_running($vmid);
-
-    PVE::Storage::volume_resize($storecfg, $volid, $size, $running);
-
-    return if !$running;
-
-    my $padding = (1024 - $size % 1024) % 1024;
-    $size = $size + $padding;
-
-    mon_cmd(
-        $vmid,
-        "block_resize",
-        device => $deviceid,
-        size => int($size),
-        timeout => 60,
-    );
-}
-
 sub qemu_volume_snapshot {
     my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
 
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index a7618258..c1d3cba8 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -12,6 +12,7 @@ use PVE::JSONSchema qw(json_bool);
 use PVE::Storage;
 
 use PVE::QemuServer::Drive qw(drive_is_cdrom);
+use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 
 my sub tpm_backup_node_name {
@@ -464,4 +465,25 @@ sub detach_fleecing_block_nodes {
     }
 }
 
+sub resize {
+    my ($vmid, $deviceid, $storecfg, $volid, $size) = @_;
+
+    my $running = PVE::QemuServer::Helpers::vm_running_locally($vmid);
+
+    PVE::Storage::volume_resize($storecfg, $volid, $size, $running);
+
+    return if !$running;
+
+    my $padding = (1024 - $size % 1024) % 1024;
+    $size = $size + $padding;
+
+    mon_cmd(
+        $vmid,
+        "block_resize",
+        device => $deviceid,
+        size => int($size),
+        timeout => 60,
+    );
+}
+
 1;
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 12/31] blockdev: add helper to get node below throttle node
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (10 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 11/31] blockdev: move helper for resize into module Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query and use node name for resize operation Fiona Ebner
                   ` (19 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel
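
Illustrative usage (the device ID is an example):

    # name of the node directly below the 'throttle' top node; for VMs still
    # started with '-drive', the inserted node itself is returned
    my $node_name =
        PVE::QemuServer::Blockdev::get_node_name_below_throttle($vmid, 'drive-scsi0');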

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index c1d3cba8..e5eba33e 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -124,6 +124,21 @@ my sub get_node_name {
     return "${prefix}${hash}";
 }
 
+sub get_node_name_below_throttle {
+    my ($vmid, $device_id) = @_;
+
+    my $block_info = get_block_info($vmid);
+    my $drive_id = $device_id =~ s/^drive-//r;
+    my $inserted = $block_info->{$drive_id}->{inserted}
+        or die "no block node inserted for drive '$drive_id'\n";
+
+    # before the switch to -blockdev, the top node was not throttle
+    return $inserted->{'node-name'} if $inserted->{drv} ne 'throttle';
+
+    my $child_info = mon_cmd($vmid, 'block-node-query-file-child', 'node-name' => $device_id);
+    return $child_info->{'node-name'};
+}
+
 my sub read_only_json_option {
     my ($drive, $options) = @_;
 
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query and use node name for resize operation
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (11 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 12/31] blockdev: add helper to get node below throttle node Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-30  6:23   ` DERUMIER, Alexandre via pve-devel
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 14/31] blockdev: support using zeroinit filter Fiona Ebner
                   ` (18 subsequent siblings)
  31 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

This also works for -blockdev, which will be used instead of -drive
starting with machine version 10.0.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index e5eba33e..2a9a95e8 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -489,13 +489,15 @@ sub resize {
 
     return if !$running;
 
+    my $node_name = get_node_name_below_throttle($vmid, $deviceid);
+
     my $padding = (1024 - $size % 1024) % 1024;
     $size = $size + $padding;
 
     mon_cmd(
         $vmid,
         "block_resize",
-        device => $deviceid,
+        'node-name' => $node_name,
         size => int($size),
         timeout => 60,
     );
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 14/31] blockdev: support using zeroinit filter
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (12 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query and use node name for resize operation Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 15/31] blockdev: make some functions private Fiona Ebner
                   ` (17 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

The zeroinit filter is used for cloning/mirroring and importing when the
target volume is known to produce zeros when reading parts that were not
written before; it can be helpful for performance.

Since the filter is the target of the mirror, it won't have a 'throttle'
node associated with it, but is added as a top node itself. Therefore, it
requires an explicit node-name.
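
For example (illustrative), a mirror target with the filter on top can be
generated via:

    my $target_blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
        $storecfg, $dest_drive, { 'zero-initialized' => 1 },
    );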

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 2a9a95e8..7148a3b7 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -117,6 +117,8 @@ my sub get_node_name {
         $prefix = 'f';
     } elsif ($type eq 'file') {
         $prefix = 'e';
+    } elsif ($type eq 'zeroinit') {
+        $prefix = 'z';
     } else {
         die "unknown node type '$type'";
     }
@@ -317,6 +319,11 @@ sub generate_drive_blockdev {
     my $child = generate_file_blockdev($storecfg, $drive, $options);
     $child = generate_format_blockdev($storecfg, $drive, $child, $options);
 
+    if ($options->{'zero-initialized'}) {
+        my $node_name = get_node_name('zeroinit', $drive_id, $drive->{file}, $options);
+        $child = { driver => 'zeroinit', file => $child, 'node-name' => "$node_name" };
+    }
+
     # for fleecing and TPM backup, this is already the top node
     return $child if $options->{fleecing} || $options->{'tpm-backup'};
 
-- 
2.47.2




* [pve-devel] [PATCH qemu-server 15/31] blockdev: make some functions private
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (13 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 14/31] blockdev: support using zeroinit filter Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 16/31] block job: allow specifying a block node that should be detached upon completion Fiona Ebner
                   ` (16 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Callers outside the module should only use generate_drive_blockdev(), and
specific functionality should be controlled via the $options parameter.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 7148a3b7..73cb7ae5 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -183,7 +183,7 @@ sub generate_throttle_group {
     };
 }
 
-sub generate_blockdev_drive_cache {
+my sub generate_blockdev_drive_cache {
     my ($drive, $scfg) = @_;
 
     my $cache_direct = PVE::QemuServer::Drive::drive_uses_cache_direct($drive, $scfg);
@@ -193,7 +193,7 @@ sub generate_blockdev_drive_cache {
     };
 }
 
-sub generate_file_blockdev {
+my sub generate_file_blockdev {
     my ($storecfg, $drive, $options) = @_;
 
     my $blockdev = {};
@@ -264,7 +264,7 @@ sub generate_file_blockdev {
     return $blockdev;
 }
 
-sub generate_format_blockdev {
+my sub generate_format_blockdev {
     my ($storecfg, $drive, $child, $options) = @_;
 
     die "generate_format_blockdev called without volid/path\n" if !$drive->{file};
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 16/31] block job: allow specifying a block node that should be detached upon completion
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (14 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 15/31] blockdev: make some functions private Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 17/31] block job: add blockdev mirror Fiona Ebner
                   ` (15 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

This is in preparation for blockdev-mirror, where the node that is no
longer used (either the source when switching to the target, or otherwise
the target) needs to be detached.
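
As a rough sketch of the intended usage (the blockdev mirror caller only
arrives in a later patch; names are illustrative):

    # a later caller (blockdev mirror) marks the node to drop once the job concludes
    $jobs->{$job_id}->{'detach-node-name'} = $target_node_name;
    # qemu_handle_concluded_blockjob($vmid, $job_id, $qmp_info, $jobs->{$job_id}) then
    # detaches that node right after dismissing the job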

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/BlockJob.pm | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index d26bcb01..68d0431f 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -18,13 +18,20 @@ use PVE::QemuServer::RunState;
 # option is useful to get the error for failed jobs here. QEMU's job lock should make it impossible
 # to see a job in 'concluded' state when auto-dismiss=true.
 # $info is the 'BlockJobInfo' for the job returned by query-block-jobs.
+# $job is the information about the job recorded on the PVE-side.
+# A block node $job->{'detach-node-name'} will be detached if present.
 sub qemu_handle_concluded_blockjob {
-    my ($vmid, $job_id, $info) = @_;
+    my ($vmid, $job_id, $qmp_info, $job) = @_;
 
     eval { mon_cmd($vmid, 'job-dismiss', id => $job_id); };
     log_warn("$job_id: failed to dismiss job - $@") if $@;
 
-    die "$job_id: $info->{error} (io-status: $info->{'io-status'})\n" if $info->{error};
+    if (my $node_name = $job->{'detach-node-name'}) {
+        eval { PVE::QemuServer::Blockdev::detach($vmid, $node_name); };
+        log_warn($@) if $@;
+    }
+
+    die "$job_id: $qmp_info->{error} (io-status: $qmp_info->{'io-status'})\n" if $qmp_info->{error};
 }
 
 sub qemu_blockjobs_cancel {
@@ -47,7 +54,7 @@ sub qemu_blockjobs_cancel {
         foreach my $job (keys %$jobs) {
             my $info = $running_jobs->{$job};
             eval {
-                qemu_handle_concluded_blockjob($vmid, $job, $info)
+                qemu_handle_concluded_blockjob($vmid, $job, $info, $jobs->{$job})
                     if $info && $info->{status} eq 'concluded';
             };
             log_warn($@) if $@; # only warn and proceed with canceling other jobs
@@ -106,7 +113,7 @@ sub qemu_drive_mirror_monitor {
 
                 die "$job_id: '$op' has been cancelled\n" if !defined($job);
 
-                qemu_handle_concluded_blockjob($vmid, $job_id, $job)
+                qemu_handle_concluded_blockjob($vmid, $job_id, $job, $jobs->{$job_id})
                     if $job && $job->{status} eq 'concluded';
 
                 my $busy = $job->{busy};
@@ -322,7 +329,7 @@ sub qemu_drive_mirror_switch_to_active_mode {
 
             my $info = $running_jobs->{$job};
             if ($info->{status} eq 'concluded') {
-                qemu_handle_concluded_blockjob($vmid, $job, $info);
+                qemu_handle_concluded_blockjob($vmid, $job, $info, $jobs->{$job});
                 # The 'concluded' state should occur here if and only if the job failed, so the
                 # 'die' below should be unreachable, but play it safe.
                 die "$job: expected job to have failed, but no error was set\n";
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 17/31] block job: add blockdev mirror
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (15 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 16/31] block job: allow specifying a block node that should be detached upon completion Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-30 10:15   ` Fabian Grünbichler
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper Fiona Ebner
                   ` (14 subsequent siblings)
  31 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

With blockdev-mirror, it is possible to change the aio setting on the
fly. This is useful for migrations between storages where io_uring is the
default for one of them, but not for the other.

The node below the top throttle node needs to be replaced so that the
limits stay intact and that the top node still has the drive ID as the
node name. That node is not necessarily a format node. For example, it
could also be a zeroinit node from an earlier mirror operation. So
query QEMU itself.

QEMU automatically drops nodes after mirror only if they were
implicitly added, i.e. not explicitly added via blockdev-add. Since a
previous mirror target is explicitly added (and not just implicitly as
the child of a top throttle node), it is necessary to detach the
appropriate block node after mirror.
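
Roughly, the new helper boils down to the following QMP-level flow (hedged
sketch: drive ID 'scsi0' assumed, sync mode and bandwidth limit simplified,
$target_blockdev is the generated target node below any throttle group):

    my $child_info = mon_cmd($vmid, 'block-node-query-file-child', 'node-name' => 'drive-scsi0');
    my $source_node_name = $child_info->{'node-name'};
    # explicit blockdev-add, so QEMU will not drop the node automatically later
    mon_cmd($vmid, 'blockdev-add', $target_blockdev->%*);
    mon_cmd(
        $vmid, 'blockdev-mirror',
        'job-id' => 'mirror-scsi0',
        device => 'drive-scsi0',
        target => $target_blockdev->{'node-name'},
        replaces => $source_node_name,
        sync => 'full',
    );
    # after block-job-complete (or -cancel), the unused node is detached explicitly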

Already mock blockdev_mirror in the tests.

Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

NOTE: Changes since last series:
* Query QEMU for file child.
* Remove appropriate node after mirror.
* Delete format property from cloned drive hash for destination.

 src/PVE/QemuServer/BlockJob.pm            | 176 ++++++++++++++++++++++
 src/test/MigrationTest/QemuMigrateMock.pm |   8 +
 2 files changed, 184 insertions(+)

diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 68d0431f..212d6a4f 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -4,12 +4,14 @@ use strict;
 use warnings;
 
 use JSON;
+use Storable qw(dclone);
 
 use PVE::Format qw(render_duration render_bytes);
 use PVE::RESTEnvironment qw(log_warn);
 use PVE::Storage;
 
 use PVE::QemuServer::Agent qw(qga_check_running);
+use PVE::QemuServer::Blockdev;
 use PVE::QemuServer::Drive qw(checked_volume_format);
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::RunState;
@@ -187,10 +189,17 @@ sub qemu_drive_mirror_monitor {
                         print "$job_id: Completing block job...\n";
 
                         my $completion_command;
+                        # For blockdev, need to detach appropriate node. QEMU will only drop it if
+                        # it was implicitly added (e.g. as the child of a top throttle node), but
+                        # not if it was explicitly added via blockdev-add (e.g. as a previous mirror
+                        # target).
+                        my $detach_node_name;
                         if ($completion eq 'complete') {
                             $completion_command = 'block-job-complete';
+                            $detach_node_name = $jobs->{$job_id}->{'source-node-name'};
                         } elsif ($completion eq 'cancel') {
                             $completion_command = 'block-job-cancel';
+                            $detach_node_name = $jobs->{$job_id}->{'target-node-name'};
                         } else {
                             die "invalid completion value: $completion\n";
                         }
@@ -202,6 +211,9 @@ sub qemu_drive_mirror_monitor {
                         } elsif ($err) {
                             die "$job_id: block job cannot be completed - $err\n";
                         } else {
+                            $jobs->{$job_id}->{'detach-node-name'} = $detach_node_name
+                                if $detach_node_name;
+
                             print "$job_id: Completed successfully.\n";
                             $jobs->{$job_id}->{complete} = 1;
                         }
@@ -347,6 +359,170 @@ sub qemu_drive_mirror_switch_to_active_mode {
     }
 }
 
+=pod
+
+=head3 blockdev_mirror
+
+    blockdev_mirror($source, $dest, $jobs, $completion, $options)
+
+Mirrors the volume of a running VM specified by C<$source> to destination C<$dest>.
+
+=over
+
+=item C<$source>
+
+The source information consists of:
+
+=over
+
+=item C<< $source->{vmid} >>
+
+The ID of the running VM the source volume belongs to.
+
+=item C<< $source->{drive} >>
+
+The drive configuration of the source volume as currently attached to the VM.
+
+=item C<< $source->{bitmap} >>
+
+(optional) Use incremental mirroring based on the specified bitmap.
+
+=back
+
+=item C<$dest>
+
+The destination information consists of:
+
+=over
+
+=item C<< $dest->{volid} >>
+
+The volume ID of the target volume.
+
+=item C<< $dest->{vmid} >>
+
+(optional) The ID of the VM the target volume belongs to. Defaults to C<< $source->{vmid} >>.
+
+=item C<< $dest->{'zero-initialized'} >>
+
+(optional) True, if the target volume is zero-initialized.
+
+=back
+
+=item C<$jobs>
+
+(optional) Other jobs in the transaction when multiple volumes should be mirrored. All jobs must be
+ready before completion can happen.
+
+=item C<$completion>
+
+Completion mode, default is C<complete>:
+
+=over
+
+=item C<complete>
+
+Wait until all jobs are ready, block-job-complete them (default). This means switching the original
+drive to use the new target.
+
+=item C<cancel>
+
+Wait until all jobs are ready, block-job-cancel them. This means not switching the original drive
+to use the new target.
+
+=item C<skip>
+
+Wait until all jobs are ready, return with block jobs in ready state.
+
+=item C<auto>
+
+Wait until all jobs disappear, only use for jobs which complete automatically.
+
+=back
+
+=item C<$options>
+
+Further options:
+
+=over
+
+=item C<< $options->{'guest-agent'} >>
+
+Whether the guest agent is configured for the VM. If so, it will be used to freeze and thaw the
+filesystems for consistency when the target belongs to a different VM.
+
+=item C<< $options->{'bwlimit'} >>
+
+The bandwidth limit to use for the mirroring operation, in KiB/s.
+
+=back
+
+=back
+
+=cut
+
+sub blockdev_mirror {
+    my ($source, $dest, $jobs, $completion, $options) = @_;
+
+    my $vmid = $source->{vmid};
+
+    my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
+    my $device_id = "drive-$drive_id";
+
+    my $storecfg = PVE::Storage::config();
+
+    # Need to replace the node below the top node. This is not necessarily a format node, for
+    # example, it can also be a zeroinit node by a previous mirror! So query QEMU itself.
+    my $child_info = mon_cmd($vmid, 'block-node-query-file-child', 'node-name' => $device_id);
+    my $source_node_name = $child_info->{'node-name'};
+
+    # Copy original drive config (aio, cache, discard, ...):
+    my $dest_drive = dclone($source->{drive});
+    delete($dest_drive->{format}); # cannot use the source's format
+    $dest_drive->{file} = $dest->{volid};
+
+    my $generate_blockdev_opts = {};
+    $generate_blockdev_opts->{'zero-initialized'} = 1 if $dest->{'zero-initialized'};
+
+    # Note that if 'aio' is not explicitly set, i.e. default, it can change if source and target
+    # don't both allow or both not allow 'io_uring' as the default.
+    my $target_drive_blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
+        $storecfg, $dest_drive, $generate_blockdev_opts,
+    );
+    # Top node is the throttle group, must use the file child.
+    my $target_blockdev = $target_drive_blockdev->{file};
+
+    PVE::QemuServer::Monitor::mon_cmd($vmid, 'blockdev-add', $target_blockdev->%*);
+    my $target_node_name = $target_blockdev->{'node-name'};
+
+    $jobs = {} if !$jobs;
+    my $jobid = "mirror-$drive_id";
+    $jobs->{$jobid} = {
+        'source-node-name' => $source_node_name,
+        'target-node-name' => $target_node_name,
+    };
+
+    my $qmp_opts = common_mirror_qmp_options(
+        $device_id, $target_node_name, $source->{bitmap}, $options->{bwlimit},
+    );
+
+    $qmp_opts->{'job-id'} = "$jobid";
+    $qmp_opts->{replaces} = "$source_node_name";
+
+    # if a job already runs for this device we get an error, catch it for cleanup
+    eval { mon_cmd($vmid, "blockdev-mirror", $qmp_opts->%*); };
+    if (my $err = $@) {
+        eval { qemu_blockjobs_cancel($vmid, $jobs) };
+        log_warn("unable to cancel block jobs - $@");
+        eval { PVE::QemuServer::Blockdev::detach($vmid, $target_node_name); };
+        log_warn("unable to delete blockdev '$target_node_name' - $@");
+        die "error starting blockdev mirror - $err";
+    }
+    qemu_drive_mirror_monitor(
+        $vmid, $dest->{vmid}, $jobs, $completion, $options->{'guest-agent'}, 'mirror',
+    );
+}
+
 sub mirror {
     my ($source, $dest, $jobs, $completion, $options) = @_;
 
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index 25a4f9b2..c52df84b 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -9,6 +9,7 @@ use Test::MockModule;
 use MigrationTest::Shared;
 
 use PVE::API2::Qemu;
+use PVE::QemuServer::Drive;
 use PVE::Storage;
 use PVE::Tools qw(file_set_contents file_get_contents);
 
@@ -167,6 +168,13 @@ $qemu_server_blockjob_module->mock(
 
         common_mirror_mock($vmid, $drive_id);
     },
+    blockdev_mirror => sub {
+        my ($source, $dest, $jobs, $completion, $options) = @_;
+
+        my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
+
+        common_mirror_mock($source->{vmid}, $drive_id);
+    },
     qemu_drive_mirror_monitor => sub {
         my ($vmid, $vmiddst, $jobs, $completion, $qga) = @_;
 
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (16 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 17/31] block job: add blockdev mirror Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-30 14:29   ` DERUMIER, Alexandre via pve-devel
       [not found]   ` <cd933fed020383019705045025d38c509042c267.camel@groupe-cyllene.com>
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 19/31] blockdev: add blockdev_change_medium() helper Fiona Ebner
                   ` (13 subsequent siblings)
  31 siblings, 2 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

There is a slight change in behavior for cloud-init disks when the file for
the new cloud-init disk is 'none'. Previously, the current drive would not
be ejected; now it is. It is unclear whether that edge case can even happen
in practice, and the new behavior is more correct, because the config was
already updated.

Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm          | 40 ++++++----------------------------
 src/PVE/QemuServer/Blockdev.pm | 18 +++++++++++++++
 2 files changed, 25 insertions(+), 33 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 3f135fcb..6e44132e 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -5170,30 +5170,15 @@ sub vmconfig_update_disk {
             }
 
         } else { # cdrom
+            eval { PVE::QemuServer::Blockdev::change_medium($storecfg, $vmid, $opt, $drive); };
+            my $err = $@;
 
-            if ($drive->{file} eq 'none') {
-                mon_cmd($vmid, "eject", force => JSON::true, id => "$opt");
-                if (drive_is_cloudinit($old_drive)) {
-                    vmconfig_register_unused_drive($storecfg, $vmid, $conf, $old_drive);
-                }
-            } else {
-                my ($path, $format) =
-                    PVE::QemuServer::Drive::get_path_and_format($storecfg, $drive);
-
-                # force eject if locked
-                mon_cmd($vmid, "eject", force => JSON::true, id => "$opt");
-
-                if ($path) {
-                    mon_cmd(
-                        $vmid,
-                        "blockdev-change-medium",
-                        id => "$opt",
-                        filename => "$path",
-                        format => "$format",
-                    );
-                }
+            if ($drive->{file} eq 'none' && drive_is_cloudinit($old_drive)) {
+                vmconfig_register_unused_drive($storecfg, $vmid, $conf, $old_drive);
             }
 
+            die $err if $err;
+
             return 1;
         }
     }
@@ -5230,18 +5215,7 @@ sub vmconfig_update_cloudinit_drive {
     my $running = PVE::QemuServer::check_running($vmid);
 
     if ($running) {
-        my ($path, $format) =
-            PVE::QemuServer::Drive::get_path_and_format($storecfg, $cloudinit_drive);
-        if ($path) {
-            mon_cmd($vmid, "eject", force => JSON::true, id => "$cloudinit_ds");
-            mon_cmd(
-                $vmid,
-                "blockdev-change-medium",
-                id => "$cloudinit_ds",
-                filename => "$path",
-                format => "$format",
-            );
-        }
+        PVE::QemuServer::Blockdev::change_medium($storecfg, $vmid, $cloudinit_ds, $cloudinit_drive);
     }
 }
 
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 73cb7ae5..8ef17a3b 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -510,4 +510,22 @@ sub resize {
     );
 }
 
+sub change_medium {
+    my ($storecfg, $vmid, $qdev_id, $drive) = @_;
+
+    # force eject if locked
+    mon_cmd($vmid, "eject", force => JSON::true, id => "$qdev_id");
+
+    my ($path, $format) = PVE::QemuServer::Drive::get_path_and_format($storecfg, $drive);
+
+    if ($path) { # no path for 'none'
+        mon_cmd(
+            $vmid, "blockdev-change-medium",
+            id => "$qdev_id",
+            filename => "$path",
+            format => "$format",
+        );
+    }
+}
+
 1;
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 19/31] blockdev: add blockdev_change_medium() helper
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (17 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 20/31] blockdev: move helper for configuring throttle limits to module Fiona Ebner
                   ` (12 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

The new helper will be used after the switch to blockdev starting with
machine version 10.0.
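
Compared to the single 'blockdev-change-medium' command used so far, the
medium change becomes an explicit sequence with -blockdev. A minimal sketch,
mirroring the helper below (qdev ID 'ide2' assumed):

    mon_cmd($vmid, 'blockdev-open-tray', force => JSON::true, id => 'ide2');
    mon_cmd($vmid, 'blockdev-remove-medium', id => 'ide2');
    detach($vmid, 'drive-ide2'); # drop the old block node
    # for anything other than 'none', attach the new node and insert it
    attach($storecfg, $vmid, $drive, {});
    mon_cmd($vmid, 'blockdev-insert-medium', id => 'ide2', 'node-name' => 'drive-ide2');
    mon_cmd($vmid, 'blockdev-close-tray', id => 'ide2');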

Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 8ef17a3b..7884f242 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -510,6 +510,21 @@ sub resize {
     );
 }
 
+my sub blockdev_change_medium {
+    my ($storecfg, $vmid, $qdev_id, $drive) = @_;
+
+    # force eject if locked
+    mon_cmd($vmid, "blockdev-open-tray", force => JSON::true, id => "$qdev_id");
+    mon_cmd($vmid, "blockdev-remove-medium", id => "$qdev_id");
+    detach($vmid, "drive-$qdev_id");
+
+    return if $drive->{file} eq 'none';
+
+    attach($storecfg, $vmid, $drive, {});
+    mon_cmd($vmid, "blockdev-insert-medium", id => "$qdev_id", 'node-name' => "drive-$qdev_id");
+    mon_cmd($vmid, "blockdev-close-tray", id => "$qdev_id");
+}
+
 sub change_medium {
     my ($storecfg, $vmid, $qdev_id, $drive) = @_;
 
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 20/31] blockdev: move helper for configuring throttle limits to module
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (18 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 19/31] blockdev: add blockdev_change_medium() helper Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 21/31] clone disk: skip check for aio=default (io_uring) compatibility starting with machine version 10.0 Fiona Ebner
                   ` (11 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Replace the deprecated check_running() call while at it.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm          | 53 +---------------------------------
 src/PVE/QemuServer/Blockdev.pm | 50 ++++++++++++++++++++++++++++++++
 2 files changed, 51 insertions(+), 52 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 6e44132e..4cc5ea34 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -4290,57 +4290,6 @@ sub qemu_cpu_hotplug {
     }
 }
 
-sub qemu_block_set_io_throttle {
-    my (
-        $vmid,
-        $deviceid,
-        $bps,
-        $bps_rd,
-        $bps_wr,
-        $iops,
-        $iops_rd,
-        $iops_wr,
-        $bps_max,
-        $bps_rd_max,
-        $bps_wr_max,
-        $iops_max,
-        $iops_rd_max,
-        $iops_wr_max,
-        $bps_max_length,
-        $bps_rd_max_length,
-        $bps_wr_max_length,
-        $iops_max_length,
-        $iops_rd_max_length,
-        $iops_wr_max_length,
-    ) = @_;
-
-    return if !check_running($vmid);
-
-    mon_cmd(
-        $vmid, "block_set_io_throttle",
-        device => $deviceid,
-        bps => int($bps),
-        bps_rd => int($bps_rd),
-        bps_wr => int($bps_wr),
-        iops => int($iops),
-        iops_rd => int($iops_rd),
-        iops_wr => int($iops_wr),
-        bps_max => int($bps_max),
-        bps_rd_max => int($bps_rd_max),
-        bps_wr_max => int($bps_wr_max),
-        iops_max => int($iops_max),
-        iops_rd_max => int($iops_rd_max),
-        iops_wr_max => int($iops_wr_max),
-        bps_max_length => int($bps_max_length),
-        bps_rd_max_length => int($bps_rd_max_length),
-        bps_wr_max_length => int($bps_wr_max_length),
-        iops_max_length => int($iops_max_length),
-        iops_rd_max_length => int($iops_rd_max_length),
-        iops_wr_max_length => int($iops_wr_max_length),
-    );
-
-}
-
 sub qemu_volume_snapshot {
     my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
 
@@ -5141,7 +5090,7 @@ sub vmconfig_update_disk {
                         $old_drive->{iops_wr_max_length})
                 ) {
 
-                    qemu_block_set_io_throttle(
+                    PVE::QemuServer::Blockdev::set_io_throttle(
                         $vmid,
                         "drive-$opt",
                         ($drive->{mbps} || 0) * 1024 * 1024,
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 7884f242..e999d86c 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -543,4 +543,54 @@ sub change_medium {
     }
 }
 
+sub set_io_throttle {
+    my (
+        $vmid,
+        $deviceid,
+        $bps,
+        $bps_rd,
+        $bps_wr,
+        $iops,
+        $iops_rd,
+        $iops_wr,
+        $bps_max,
+        $bps_rd_max,
+        $bps_wr_max,
+        $iops_max,
+        $iops_rd_max,
+        $iops_wr_max,
+        $bps_max_length,
+        $bps_rd_max_length,
+        $bps_wr_max_length,
+        $iops_max_length,
+        $iops_rd_max_length,
+        $iops_wr_max_length,
+    ) = @_;
+
+    return if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
+
+    mon_cmd(
+        $vmid, "block_set_io_throttle",
+        device => $deviceid,
+        bps => int($bps),
+        bps_rd => int($bps_rd),
+        bps_wr => int($bps_wr),
+        iops => int($iops),
+        iops_rd => int($iops_rd),
+        iops_wr => int($iops_wr),
+        bps_max => int($bps_max),
+        bps_rd_max => int($bps_rd_max),
+        bps_wr_max => int($bps_wr_max),
+        iops_max => int($iops_max),
+        iops_rd_max => int($iops_rd_max),
+        iops_wr_max => int($iops_wr_max),
+        bps_max_length => int($bps_max_length),
+        bps_rd_max_length => int($bps_rd_max_length),
+        bps_wr_max_length => int($bps_wr_max_length),
+        iops_max_length => int($iops_max_length),
+        iops_rd_max_length => int($iops_rd_max_length),
+        iops_wr_max_length => int($iops_wr_max_length),
+    );
+}
+
 1;
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 21/31] clone disk: skip check for aio=default (io_uring) compatibility starting with machine version 10.0
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (19 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 20/31] blockdev: move helper for configuring throttle limits to module Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 22/31] print drive device: don't reference any drive for 'none' " Fiona Ebner
                   ` (10 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

With blockdev-mirror, it is possible to change the aio setting on the
fly. This is useful for migrations between storages where io_uring is the
default for one of them, but not for the other.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 4cc5ea34..4d085ac2 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7546,7 +7546,7 @@ sub template_create : prototype($$;$) {
 # Check for bug #4525: drive-mirror will open the target drive with the same aio setting as the
 # source, but some storages have problems with io_uring, sometimes even leading to crashes.
 my sub clone_disk_check_io_uring {
-    my ($src_drive, $storecfg, $src_storeid, $dst_storeid, $use_drive_mirror) = @_;
+    my ($vmid, $src_drive, $storecfg, $src_storeid, $dst_storeid, $use_drive_mirror) = @_;
 
     return if !$use_drive_mirror;
 
@@ -7563,6 +7563,11 @@ my sub clone_disk_check_io_uring {
     if ($src_drive->{aio}) {
         $src_uses_io_uring = $src_drive->{aio} eq 'io_uring';
     } else {
+        # With the switch to -blockdev and blockdev-mirror, the aio setting will be changed on the
+        # fly if not explicitly set.
+        my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+        return if PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0);
+
         $src_uses_io_uring = storage_allows_io_uring_default($src_scfg, $cache_direct);
     }
 
@@ -7627,7 +7632,9 @@ sub clone_disk {
             $dst_format = 'raw';
             $size = PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE;
         } else {
-            clone_disk_check_io_uring($drive, $storecfg, $src_storeid, $storeid, $use_drive_mirror);
+            clone_disk_check_io_uring(
+                $vmid, $drive, $storecfg, $src_storeid, $storeid, $use_drive_mirror,
+            );
 
             $size = PVE::Storage::volume_size_info($storecfg, $drive->{file}, 10);
         }
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 22/31] print drive device: don't reference any drive for 'none' starting with machine version 10.0
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (20 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 21/31] clone disk: skip check for aio=default (io_uring) compatibility starting with machine version 10.0 Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 23/31] blockdev: add support for NBD paths Fiona Ebner
                   ` (9 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

There will be no block node for 'none' after switching to '-blockdev'.

Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
[FE: split out from larger patch
     do it also for non-SCSI cases]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm                          | 18 +++++++++++++++---
 src/test/cfg2cmd/bootorder-empty.conf.cmd      |  2 +-
 src/test/cfg2cmd/bootorder-legacy.conf.cmd     |  2 +-
 src/test/cfg2cmd/bootorder.conf.cmd            |  2 +-
 ...cputype-icelake-client-deprecation.conf.cmd |  2 +-
 src/test/cfg2cmd/seabios_serial.conf.cmd       |  2 +-
 src/test/cfg2cmd/simple-btrfs.conf.cmd         |  2 +-
 src/test/cfg2cmd/simple-cifs.conf.cmd          |  2 +-
 src/test/cfg2cmd/simple-rbd.conf.cmd           |  2 +-
 src/test/cfg2cmd/simple-virtio-blk.conf.cmd    |  2 +-
 .../cfg2cmd/simple-zfs-over-iscsi.conf.cmd     |  2 +-
 src/test/cfg2cmd/simple1-template.conf.cmd     |  2 +-
 src/test/cfg2cmd/simple1.conf.cmd              |  2 +-
 13 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 4d085ac2..4529e270 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -1208,7 +1208,12 @@ sub print_drivedevice_full {
     my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
     if ($drive->{interface} eq 'virtio') {
         my $pciaddr = print_pci_addr("$drive_id", $bridges, $arch);
-        $device = "virtio-blk-pci,drive=drive-$drive_id,id=${drive_id}${pciaddr}";
+        $device = 'virtio-blk-pci';
+        # for the switch to -blockdev, there is no blockdev for 'none'
+        if (!min_version($machine_version, 10, 0) || $drive->{file} ne 'none') {
+            $device .= ",drive=drive-$drive_id";
+        }
+        $device .= ",id=${drive_id}${pciaddr}";
         $device .= ",iothread=iothread-$drive_id" if $drive->{iothread};
     } elsif ($drive->{interface} eq 'scsi') {
 
@@ -1224,7 +1229,11 @@ sub print_drivedevice_full {
             $device = "scsi-$device_type,bus=$controller_prefix$controller.0,channel=0,scsi-id=0"
                 . ",lun=$drive->{index}";
         }
-        $device .= ",drive=drive-$drive_id,id=$drive_id";
+        # for the switch to -blockdev, there is no blockdev for 'none'
+        if (!min_version($machine_version, 10, 0) || $drive->{file} ne 'none') {
+            $device .= ",drive=drive-$drive_id";
+        }
+        $device .= ",id=$drive_id";
 
         if ($drive->{ssd} && ($device_type eq 'block' || $device_type eq 'hd')) {
             $device .= ",rotation_rate=1";
@@ -1264,7 +1273,10 @@ sub print_drivedevice_full {
         } else {
             $device .= ",bus=ahci$controller.$unit";
         }
-        $device .= ",drive=drive-$drive_id,id=$drive_id";
+        if (!min_version($machine_version, 10, 0) || $drive->{file} ne 'none') {
+            $device .= ",drive=drive-$drive_id";
+        }
+        $device .= ",id=$drive_id";
 
         if ($device_type eq 'hd') {
             if (my $model = $drive->{model}) {
diff --git a/src/test/cfg2cmd/bootorder-empty.conf.cmd b/src/test/cfg2cmd/bootorder-empty.conf.cmd
index e4bf4e6d..3f8fdb8e 100644
--- a/src/test/cfg2cmd/bootorder-empty.conf.cmd
+++ b/src/test/cfg2cmd/bootorder-empty.conf.cmd
@@ -28,7 +28,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2' \
   -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
diff --git a/src/test/cfg2cmd/bootorder-legacy.conf.cmd b/src/test/cfg2cmd/bootorder-legacy.conf.cmd
index 7627780c..cd990cd8 100644
--- a/src/test/cfg2cmd/bootorder-legacy.conf.cmd
+++ b/src/test/cfg2cmd/bootorder-legacy.conf.cmd
@@ -28,7 +28,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
diff --git a/src/test/cfg2cmd/bootorder.conf.cmd b/src/test/cfg2cmd/bootorder.conf.cmd
index 74af37e1..3cef2161 100644
--- a/src/test/cfg2cmd/bootorder.conf.cmd
+++ b/src/test/cfg2cmd/bootorder.conf.cmd
@@ -28,7 +28,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=103' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=103' \
   -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,bootindex=102,write-cache=on' \
diff --git a/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd b/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
index effba2b7..e6e09278 100644
--- a/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
+++ b/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
@@ -26,7 +26,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=/var/lib/vz/images/8006/base-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/seabios_serial.conf.cmd b/src/test/cfg2cmd/seabios_serial.conf.cmd
index f901a459..0eb02459 100644
--- a/src/test/cfg2cmd/seabios_serial.conf.cmd
+++ b/src/test/cfg2cmd/seabios_serial.conf.cmd
@@ -26,7 +26,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple-btrfs.conf.cmd b/src/test/cfg2cmd/simple-btrfs.conf.cmd
index 9c3f97d2..2aa2083d 100644
--- a/src/test/cfg2cmd/simple-btrfs.conf.cmd
+++ b/src/test/cfg2cmd/simple-btrfs.conf.cmd
@@ -26,7 +26,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi0,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple-cifs.conf.cmd b/src/test/cfg2cmd/simple-cifs.conf.cmd
index 61e8d01e..d23a046a 100644
--- a/src/test/cfg2cmd/simple-cifs.conf.cmd
+++ b/src/test/cfg2cmd/simple-cifs.conf.cmd
@@ -24,7 +24,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple-rbd.conf.cmd b/src/test/cfg2cmd/simple-rbd.conf.cmd
index ea5934c4..df7cba3f 100644
--- a/src/test/cfg2cmd/simple-rbd.conf.cmd
+++ b/src/test/cfg2cmd/simple-rbd.conf.cmd
@@ -26,7 +26,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple-virtio-blk.conf.cmd b/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
index 2182febc..0a7eb473 100644
--- a/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
+++ b/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
@@ -27,7 +27,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
   -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,bootindex=100,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
diff --git a/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd b/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
index d9a8e5e9..a90156b0 100644
--- a/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
+++ b/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
@@ -26,7 +26,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple1-template.conf.cmd b/src/test/cfg2cmd/simple1-template.conf.cmd
index 60531b77..c736c84a 100644
--- a/src/test/cfg2cmd/simple1-template.conf.cmd
+++ b/src/test/cfg2cmd/simple1-template.conf.cmd
@@ -24,7 +24,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=/var/lib/vz/images/8006/base-8006-disk-1.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap,readonly=on' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
diff --git a/src/test/cfg2cmd/simple1.conf.cmd b/src/test/cfg2cmd/simple1.conf.cmd
index aa76ca62..e657aed7 100644
--- a/src/test/cfg2cmd/simple1.conf.cmd
+++ b/src/test/cfg2cmd/simple1.conf.cmd
@@ -26,7 +26,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
+  -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
   -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 23/31] blockdev: add support for NBD paths
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (21 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 22/31] print drive device: don't reference any drive for 'none' " Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 24/31] blockdev: add helper to generate PBS block device for live restore Fiona Ebner
                   ` (8 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index e999d86c..6674ecc6 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -15,6 +15,18 @@ use PVE::QemuServer::Drive qw(drive_is_cdrom);
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 
+# gives ($host, $port, $export)
+my $NBD_TCP_PATH_RE_3 = qr/nbd:(\S+):(\d+):exportname=(\S+)/;
+my $NBD_UNIX_PATH_RE_2 = qr/nbd:unix:(\S+):exportname=(\S+)/;
+
+my sub is_nbd {
+    my ($drive) = @_;
+
+    return 1 if $drive->{file} =~ $NBD_TCP_PATH_RE_3;
+    return 1 if $drive->{file} =~ $NBD_UNIX_PATH_RE_2;
+    return 0;
+}
+
 my sub tpm_backup_node_name {
     my ($type, $drive_id) = @_;
 
@@ -206,7 +218,13 @@ my sub generate_file_blockdev {
 
     my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
 
-    if ($drive->{file} eq 'cdrom') {
+    if ($drive->{file} =~ m/^$NBD_UNIX_PATH_RE_2$/) {
+        my $server = { type => 'unix', path => "$1" };
+        $blockdev = { driver => 'nbd', server => $server, export => "$2" };
+    } elsif ($drive->{file} =~ m/^$NBD_TCP_PATH_RE_3$/) {
+        my $server = { type => 'inet', host => "$1", port => "$2" }; # port is also a string in QAPI
+        $blockdev = { driver => 'nbd', server => $server, export => "$3" };
+    } elsif ($drive->{file} eq 'cdrom') {
         my $path = PVE::QemuServer::Drive::get_iso_path($storecfg, $drive->{file});
         $blockdev = { driver => 'host_cdrom', filename => "$path" };
     } elsif ($drive->{file} =~ m|^/|) {
@@ -269,6 +287,7 @@ my sub generate_format_blockdev {
 
     die "generate_format_blockdev called without volid/path\n" if !$drive->{file};
     die "generate_format_blockdev called with 'none'\n" if $drive->{file} eq 'none';
+    die "generate_format_blockdev called with NBD path\n" if is_nbd($drive);
 
     my $scfg;
     my $format;
@@ -317,7 +336,9 @@ sub generate_drive_blockdev {
     die "generate_drive_blockdev called with 'none'\n" if $drive->{file} eq 'none';
 
     my $child = generate_file_blockdev($storecfg, $drive, $options);
-    $child = generate_format_blockdev($storecfg, $drive, $child, $options);
+    if (!is_nbd($drive)) {
+        $child = generate_format_blockdev($storecfg, $drive, $child, $options);
+    }
 
     if ($options->{'zero-initialized'}) {
         my $node_name = get_node_name('zeroinit', $drive_id, $drive->{file}, $options);
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 24/31] blockdev: add helper to generate PBS block device for live restore
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (22 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 23/31] blockdev: add support for NBD paths Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 25/31] blockdev: support alloc-track driver for live-{import, restore} Fiona Ebner
                   ` (7 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 6674ecc6..a66ae6ef 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -357,6 +357,23 @@ sub generate_drive_blockdev {
     };
 }
 
+sub generate_pbs_blockdev {
+    my ($pbs_conf, $pbs_name) = @_;
+
+    my $blockdev = {
+        driver => 'pbs',
+        'node-name' => "$pbs_name",
+        'read-only' => JSON::true,
+        archive => "$pbs_conf->{archive}",
+        repository => "$pbs_conf->{repository}",
+        snapshot => "$pbs_conf->{snapshot}",
+    };
+    $blockdev->{namespace} = "$pbs_conf->{namespace}" if $pbs_conf->{namespace};
+    $blockdev->{keyfile} = "$pbs_conf->{keyfile}" if $pbs_conf->{keyfile};
+
+    return $blockdev;
+}
+
 my sub blockdev_add {
     my ($vmid, $blockdev) = @_;
 
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 25/31] blockdev: support alloc-track driver for live-{import, restore}
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (23 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 24/31] blockdev: add helper to generate PBS block device for live restore Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 26/31] live import: also record volid information Fiona Ebner
                   ` (6 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index a66ae6ef..4aea1abd 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -125,10 +125,12 @@ my sub get_node_name {
     my $hash = substr(Digest::SHA::sha256_hex($info), 0, 30);
 
     my $prefix = "";
-    if ($type eq 'fmt') {
-        $prefix = 'f';
+    if ($type eq 'alloc-track') {
+        $prefix = 'a';
     } elsif ($type eq 'file') {
         $prefix = 'e';
+    } elsif ($type eq 'fmt') {
+        $prefix = 'f';
     } elsif ($type eq 'zeroinit') {
         $prefix = 'z';
     } else {
@@ -345,6 +347,17 @@ sub generate_drive_blockdev {
         $child = { driver => 'zeroinit', file => $child, 'node-name' => "$node_name" };
     }
 
+    if (my $live_restore = $options->{'live-restore'}) {
+        my $node_name = get_node_name('alloc-track', $drive_id, $drive->{file}, $options);
+        $child = {
+            driver => 'alloc-track',
+            'auto-remove' => JSON::true,
+            backing => $live_restore->{blockdev},
+            file => $child,
+            'node-name' => "$node_name",
+        };
+    }
+
     # for fleecing and TPM backup, this is already the top node
     return $child if $options->{fleecing} || $options->{'tpm-backup'};
 
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 26/31] live import: also record volid information
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (24 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 25/31] blockdev: support alloc-track driver for live-{import, restore} Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 27/31] live import/restore: query which node to use for operation Fiona Ebner
                   ` (5 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Will be required for generating the blockdev starting with machine
version 10.0.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/API2/Qemu.pm  | 2 ++
 src/PVE/QemuServer.pm | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 1aa3b358..2e6358e4 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -530,6 +530,7 @@ my sub create_disks : prototype($$$$$$$$$$$) {
                         $live_import_mapping->{$ds} = {
                             path => $path,
                             format => $source_format,
+                            volid => $source,
                         };
                         $live_import_mapping->{$ds}->{'delete-after-finish'} = $source
                             if $needs_extraction;
@@ -574,6 +575,7 @@ my sub create_disks : prototype($$$$$$$$$$$) {
                         $live_import_mapping->{$ds} = {
                             path => $source,
                             format => $source_format,
+                            volid => $source,
                         };
                     } else {
                         (undef, $dst_volid) = PVE::QemuServer::ImportDisk::do_import(
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 4529e270..05c19390 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7095,8 +7095,9 @@ sub live_import_from_files {
             if !exists($conf->{$dev});
 
         my $info = $mapping->{$dev};
-        my ($format, $path) = $info->@{qw(format path)};
+        my ($format, $path, $volid) = $info->@{qw(format path volid)};
         die "missing path for '$dev' mapping\n" if !$path;
+        die "missing volid for '$dev' mapping\n" if !$volid;
         die "missing format for '$dev' mapping\n" if !$format;
         die "invalid format '$format' for '$dev' mapping\n"
             if !grep { $format eq $_ } qw(raw qcow2 vmdk);
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 27/31] live import/restore: query which node to use for operation
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (25 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 26/31] live import: also record volid information Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 28/31] live import/restore: use Blockdev::detach helper Fiona Ebner
                   ` (4 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

In preparation for the switch to -blockdev.

Otherwise, there would be an error:
> An error occurred during live-restore: VM 103 qmp command
> 'block-stream' failed - Permission conflict on node
> 'a25d9b2028b5a364dddbb033603b68c': permissions 'write' are both
> required by node 'drive-ide0' (uses node
> 'a25d9b2028b5a364dddbb033603b68c' as 'file' child) and unshared
> by stream job 'restore-drive-ide0' (uses node
> 'a25d9b2028b5a364dddbb033603b68c' as 'intermediate node' child).
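
In other words, the stream job has to be started on the node below the
throttle node, which requires write access to its file child. A hedged
sketch with drive 'ide0':

    my $node_name = PVE::QemuServer::Blockdev::get_node_name_below_throttle($vmid, 'drive-ide0');
    mon_cmd(
        $vmid, 'block-stream',
        'job-id' => 'restore-drive-ide0',
        device => $node_name,
        'auto-dismiss' => JSON::false,
    );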

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 05c19390..d4154aeb 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7037,11 +7037,12 @@ sub pbs_live_restore {
         # removes itself once all backing images vanish with 'auto-remove=on')
         my $jobs = {};
         for my $ds (sort keys %$restored_disks) {
+            my $node_name = PVE::QemuServer::Blockdev::get_node_name_below_throttle($vmid, $ds);
             my $job_id = "restore-$ds";
             mon_cmd(
                 $vmid, 'block-stream',
                 'job-id' => $job_id,
-                device => "$ds",
+                device => "$node_name",
                 'auto-dismiss' => JSON::false,
             );
             $jobs->{$job_id} = {};
@@ -7138,11 +7139,13 @@ sub live_import_from_files {
         # removes itself once all backing images vanish with 'auto-remove=on')
         my $jobs = {};
         for my $ds (sort keys %$live_restore_backing) {
+            my $node_name =
+                PVE::QemuServer::Blockdev::get_node_name_below_throttle($vmid, "drive-$ds");
             my $job_id = "restore-$ds";
             mon_cmd(
                 $vmid, 'block-stream',
                 'job-id' => $job_id,
-                device => "drive-$ds",
+                device => "$node_name",
                 'auto-dismiss' => JSON::false,
             );
             $jobs->{$job_id} = {};
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 28/31] live import/restore: use Blockdev::detach helper
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (26 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 27/31] live import/restore: query which node to use for operation Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 29/31] command line: switch to blockdev starting with machine version 10.0 Fiona Ebner
                   ` (3 subsequent siblings)
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

The detach helper also catches scenarios where the block node is
already gone, which can happen with -blockdev.
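
A tolerant detach along these lines could look roughly as follows (a
minimal sketch only; the actual PVE::QemuServer::Blockdev::detach helper
was added in an earlier patch of this series, and the exact error
matching here is an assumption):

    sub detach {
        my ($vmid, $node_name) = @_;

        eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "$node_name"); };
        if (my $err = $@) {
            # assumption: treat "node not found"-style errors as already detached
            return if $err =~ m/Failed to find node with node-name/;
            die $err;
        }
        return;
    }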

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index d4154aeb..d9284843 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7057,7 +7057,7 @@ sub pbs_live_restore {
             . " to disconnect from Proxmox Backup Server\n";
 
         for my $ds (sort keys %$restored_disks) {
-            mon_cmd($vmid, 'blockdev-del', 'node-name' => "$ds-pbs");
+            PVE::QemuServer::Blockdev::detach($vmid, "$ds-pbs");
         }
 
         close($qmeventd_fd);
@@ -7159,7 +7159,7 @@ sub live_import_from_files {
         print "restore-drive jobs finished successfully, removing all tracking block devices\n";
 
         for my $ds (sort keys %$live_restore_backing) {
-            mon_cmd($vmid, 'blockdev-del', 'node-name' => "drive-$ds-restore");
+            PVE::QemuServer::Blockdev::detach($vmid, "drive-$ds-restore");
         }
 
         close($qmeventd_fd);
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 29/31] command line: switch to blockdev starting with machine version 10.0
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (27 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 28/31] live import/restore: use Blockdev::detach helper Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-30 10:15   ` Fabian Grünbichler
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 30/31] test: migration: update running machine to 10.0 Fiona Ebner
                   ` (2 subsequent siblings)
  31 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Changes since last series:
* Support for live restore and live import.
* Use Blockdev::{attach,detach} helpers for hot{,un}plug.
* Adapt to changes from previous patches.
* Also switch for medium change.

 src/PVE/QemuServer.pm                         | 146 ++++++++++++++----
 src/PVE/QemuServer/BlockJob.pm                |  32 ++--
 src/PVE/QemuServer/Blockdev.pm                | 101 ++++++++----
 src/PVE/QemuServer/OVMF.pm                    |  21 ++-
 src/test/MigrationTest/QemuMigrateMock.pm     |   5 +
 src/test/cfg2cmd/aio.conf.cmd                 |  42 +++--
 src/test/cfg2cmd/bootorder-empty.conf.cmd     |  11 +-
 src/test/cfg2cmd/bootorder-legacy.conf.cmd    |  11 +-
 src/test/cfg2cmd/bootorder.conf.cmd           |  11 +-
 ...putype-icelake-client-deprecation.conf.cmd |   5 +-
 src/test/cfg2cmd/efi-raw-template.conf.cmd    |   7 +-
 src/test/cfg2cmd/efi-raw.conf.cmd             |   7 +-
 .../cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd  |   7 +-
 src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd |   7 +-
 src/test/cfg2cmd/efidisk-on-rbd.conf.cmd      |   7 +-
 src/test/cfg2cmd/ide.conf.cmd                 |  15 +-
 src/test/cfg2cmd/q35-ide.conf.cmd             |  15 +-
 .../q35-linux-hostpci-mapping.conf.cmd        |   7 +-
 .../q35-linux-hostpci-multifunction.conf.cmd  |   7 +-
 .../q35-linux-hostpci-template.conf.cmd       |  10 +-
 ...q35-linux-hostpci-x-pci-overrides.conf.cmd |   7 +-
 src/test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   7 +-
 src/test/cfg2cmd/q35-simple.conf.cmd          |   7 +-
 src/test/cfg2cmd/seabios_serial.conf.cmd      |   5 +-
 src/test/cfg2cmd/sev-es.conf.cmd              |   7 +-
 src/test/cfg2cmd/sev-std.conf.cmd             |   7 +-
 src/test/cfg2cmd/simple-btrfs.conf.cmd        |  14 +-
 src/test/cfg2cmd/simple-cifs.conf.cmd         |  14 +-
 .../cfg2cmd/simple-disk-passthrough.conf.cmd  |   9 +-
 src/test/cfg2cmd/simple-lvm.conf.cmd          |  12 +-
 src/test/cfg2cmd/simple-lvmthin.conf.cmd      |  12 +-
 src/test/cfg2cmd/simple-rbd.conf.cmd          |  26 ++--
 src/test/cfg2cmd/simple-virtio-blk.conf.cmd   |   5 +-
 .../cfg2cmd/simple-zfs-over-iscsi.conf.cmd    |  14 +-
 src/test/cfg2cmd/simple1-template.conf.cmd    |   8 +-
 src/test/cfg2cmd/simple1.conf.cmd             |   5 +-
 src/test/run_config2command_tests.pl          |  19 +++
 37 files changed, 446 insertions(+), 206 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index d9284843..3eb7f339 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -3640,19 +3640,38 @@ sub config_to_command {
             }
 
             my $live_restore = $live_restore_backing->{$ds};
-            my $live_blockdev_name = undef;
-            if ($live_restore) {
-                $live_blockdev_name = $live_restore->{name};
-                push @$devices, '-blockdev', $live_restore->{blockdev};
+
+            if (min_version($machine_version, 10, 0)) { # for the switch to -blockdev
+                my $throttle_group = PVE::QemuServer::Blockdev::generate_throttle_group($drive);
+                push @$cmd, '-object', to_json($throttle_group, { canonical => 1 });
+
+                my $extra_blockdev_options = {};
+                $extra_blockdev_options->{'live-restore'} = $live_restore if $live_restore;
+                # extra protection for templates, but SATA and IDE don't support it..
+                $extra_blockdev_options->{'read-only'} = 1 if drive_is_read_only($conf, $drive);
+
+                if ($drive->{file} ne 'none') {
+                    my $blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
+                        $storecfg, $drive, $extra_blockdev_options,
+                    );
+                    push @$devices, '-blockdev', to_json($blockdev, { canonical => 1 });
+                }
+            } else {
+                my $live_blockdev_name = undef;
+                if ($live_restore) {
+                    $live_blockdev_name = $live_restore->{name};
+                    push @$devices, '-blockdev', $live_restore->{blockdev};
+                }
+
+                my $drive_cmd =
+                    print_drive_commandline_full($storecfg, $vmid, $drive, $live_blockdev_name);
+
+                # extra protection for templates, but SATA and IDE don't support it..
+                $drive_cmd .= ',readonly=on' if drive_is_read_only($conf, $drive);
+
+                push @$devices, '-drive', $drive_cmd;
             }
 
-            my $drive_cmd =
-                print_drive_commandline_full($storecfg, $vmid, $drive, $live_blockdev_name);
-
-            # extra protection for templates, but SATA and IDE don't support it..
-            $drive_cmd .= ',readonly=on' if drive_is_read_only($conf, $drive);
-
-            push @$devices, '-drive', $drive_cmd;
             push @$devices, '-device',
                 print_drivedevice_full(
                     $storecfg, $conf, $vmid, $drive, $bridges, $arch, $machine_type,
@@ -4050,28 +4069,63 @@ sub qemu_iothread_del {
 sub qemu_driveadd {
     my ($storecfg, $vmid, $device) = @_;
 
-    my $drive = print_drive_commandline_full($storecfg, $vmid, $device, undef);
-    $drive =~ s/\\/\\\\/g;
-    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
+    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
 
-    # If the command succeeds qemu prints: "OK"
-    return 1 if $ret =~ m/OK/s;
+    # for the switch to -blockdev
+    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
+        my $throttle_group = PVE::QemuServer::Blockdev::generate_throttle_group($device);
+        mon_cmd($vmid, 'object-add', %$throttle_group);
 
-    die "adding drive failed: $ret\n";
+        eval {
+            my $blockdev =
+                PVE::QemuServer::Blockdev::generate_drive_blockdev($storecfg, $device, {});
+            mon_cmd($vmid, 'blockdev-add', %$blockdev);
+        };
+        if (my $err = $@) {
+            my $drive_id = PVE::QemuServer::Drive::get_drive_id($device);
+            eval { mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id"); };
+            warn $@ if $@;
+            die $err;
+        }
+
+        return 1;
+    } else {
+        my $drive = print_drive_commandline_full($storecfg, $vmid, $device, undef);
+        $drive =~ s/\\/\\\\/g;
+        my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
+
+        # If the command succeeds qemu prints: "OK"
+        return 1 if $ret =~ m/OK/s;
+
+        die "adding drive failed: $ret\n";
+    }
 }
 
 sub qemu_drivedel {
     my ($vmid, $deviceid) = @_;
 
-    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_del drive-$deviceid", 10 * 60);
-    $ret =~ s/^\s+//;
+    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
 
-    return 1 if $ret eq "";
+    # for the switch to -blockdev
+    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
+        # QEMU recursively auto-removes the file children, i.e. file and format node below the top
+        # node and also implicit backing children referenced by a qcow2 image.
+        eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "drive-$deviceid"); };
+        die "deleting blockdev $deviceid failed : $@\n" if $@;
+        # FIXME ignore already removed scenario like below?
 
-    # NB: device not found errors mean the drive was auto-deleted and we ignore the error
-    return 1 if $ret =~ m/Device \'.*?\' not found/s;
+        mon_cmd($vmid, 'object-del', id => "throttle-drive-$deviceid");
+    } else {
+        my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_del drive-$deviceid", 10 * 60);
+        $ret =~ s/^\s+//;
 
-    die "deleting drive $deviceid failed : $ret\n";
+        return 1 if $ret eq "";
+
+        # NB: device not found errors mean the drive was auto-deleted and we ignore the error
+        return 1 if $ret =~ m/Device \'.*?\' not found/s;
+
+        die "deleting drive $deviceid failed : $ret\n";
+    }
 }
 
 sub qemu_deviceaddverify {
@@ -7006,10 +7060,22 @@ sub pbs_live_restore {
         print "restoring '$ds' to '$drive->{file}'\n";
 
         my $pbs_name = "drive-${confname}-pbs";
-        $live_restore_backing->{$confname} = {
-            name => $pbs_name,
-            blockdev => print_pbs_blockdev($pbs_conf, $pbs_name),
-        };
+
+        $live_restore_backing->{$confname} = { name => $pbs_name };
+
+        # add blockdev information
+        my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf, undef, $conf->{arch});
+        my $machine_version = PVE::QemuServer::Machine::extract_version(
+            $machine_type,
+            PVE::QemuServer::Helpers::kvm_user_version(),
+        );
+        if (min_version($machine_version, 10, 0)) { # for the switch to -blockdev
+            $live_restore_backing->{$confname}->{blockdev} =
+                PVE::QemuServer::Blockdev::generate_pbs_blockdev($pbs_conf, $pbs_name);
+        } else {
+            $live_restore_backing->{$confname}->{blockdev} =
+                print_pbs_blockdev($pbs_conf, $pbs_name);
+        }
     }
 
     my $drives_streamed = 0;
@@ -7086,6 +7152,8 @@ sub pbs_live_restore {
 sub live_import_from_files {
     my ($mapping, $vmid, $conf, $restore_options) = @_;
 
+    my $storecfg = PVE::Storage::config();
+
     my $live_restore_backing = {};
     my $sources_to_remove = [];
     for my $dev (keys %$mapping) {
@@ -7103,18 +7171,30 @@ sub live_import_from_files {
         die "invalid format '$format' for '$dev' mapping\n"
             if !grep { $format eq $_ } qw(raw qcow2 vmdk);
 
-        $live_restore_backing->{$dev} = {
-            name => "drive-$dev-restore",
-            blockdev => "driver=$format,node-name=drive-$dev-restore"
+        $live_restore_backing->{$dev} = { name => "drive-$dev-restore" };
+
+        my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf, undef, $conf->{arch});
+        my $machine_version = PVE::QemuServer::Machine::extract_version(
+            $machine_type,
+            PVE::QemuServer::Helpers::kvm_user_version(),
+        );
+        if (min_version($machine_version, 10, 0)) { # for the switch to -blockdev
+            my ($interface, $index) = PVE::QemuServer::Drive::parse_drive_interface($dev);
+            my $drive = { file => $volid, interface => $interface, index => $index };
+            my $blockdev =
+                PVE::QemuServer::Blockdev::generate_drive_blockdev($storecfg, $drive, {});
+            $live_restore_backing->{$dev}->{blockdev} = $blockdev;
+        } else {
+            $live_restore_backing->{$dev}->{blockdev} =
+                "driver=$format,node-name=drive-$dev-restore"
                 . ",read-only=on"
-                . ",file.driver=file,file.filename=$path",
-        };
+                . ",file.driver=file,file.filename=$path";
+        }
 
         my $source_volid = $info->{'delete-after-finish'};
         push $sources_to_remove->@*, $source_volid if defined($source_volid);
     }
 
-    my $storecfg = PVE::Storage::config();
     eval {
 
         # make sure HA doesn't interrupt our restore by stopping the VM
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 212d6a4f..1f242cca 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -527,20 +527,24 @@ sub mirror {
     my ($source, $dest, $jobs, $completion, $options) = @_;
 
     # for the switch to -blockdev
-
-    my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
-    qemu_drive_mirror(
-        $source->{vmid},
-        $drive_id,
-        $dest->{volid},
-        $dest->{vmid},
-        $dest->{'zero-initialized'},
-        $jobs,
-        $completion,
-        $options->{'guest-agent'},
-        $options->{bwlimit},
-        $source->{bitmap},
-    );
+    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($source->{vmid});
+    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
+        blockdev_mirror($source, $dest, $jobs, $completion, $options);
+    } else {
+        my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
+        qemu_drive_mirror(
+            $source->{vmid},
+            $drive_id,
+            $dest->{volid},
+            $dest->{vmid},
+            $dest->{'zero-initialized'},
+            $jobs,
+            $completion,
+            $options->{'guest-agent'},
+            $options->{bwlimit},
+            $source->{bitmap},
+        );
+    }
 }
 
 1;
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 4aea1abd..b0b88ea3 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -579,18 +579,24 @@ my sub blockdev_change_medium {
 sub change_medium {
     my ($storecfg, $vmid, $qdev_id, $drive) = @_;
 
-    # force eject if locked
-    mon_cmd($vmid, "eject", force => JSON::true, id => "$qdev_id");
+    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+    # for the switch to -blockdev
+    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
+        blockdev_change_medium($storecfg, $vmid, $qdev_id, $drive);
+    } else {
+        # force eject if locked
+        mon_cmd($vmid, "eject", force => JSON::true, id => "$qdev_id");
 
-    my ($path, $format) = PVE::QemuServer::Drive::get_path_and_format($storecfg, $drive);
+        my ($path, $format) = PVE::QemuServer::Drive::get_path_and_format($storecfg, $drive);
 
-    if ($path) { # no path for 'none'
-        mon_cmd(
-            $vmid, "blockdev-change-medium",
-            id => "$qdev_id",
-            filename => "$path",
-            format => "$format",
-        );
+        if ($path) { # no path for 'none'
+            mon_cmd(
+                $vmid, "blockdev-change-medium",
+                id => "$qdev_id",
+                filename => "$path",
+                format => "$format",
+            );
+        }
     }
 }
 
@@ -620,28 +626,59 @@ sub set_io_throttle {
 
     return if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
 
-    mon_cmd(
-        $vmid, "block_set_io_throttle",
-        device => $deviceid,
-        bps => int($bps),
-        bps_rd => int($bps_rd),
-        bps_wr => int($bps_wr),
-        iops => int($iops),
-        iops_rd => int($iops_rd),
-        iops_wr => int($iops_wr),
-        bps_max => int($bps_max),
-        bps_rd_max => int($bps_rd_max),
-        bps_wr_max => int($bps_wr_max),
-        iops_max => int($iops_max),
-        iops_rd_max => int($iops_rd_max),
-        iops_wr_max => int($iops_wr_max),
-        bps_max_length => int($bps_max_length),
-        bps_rd_max_length => int($bps_rd_max_length),
-        bps_wr_max_length => int($bps_wr_max_length),
-        iops_max_length => int($iops_max_length),
-        iops_rd_max_length => int($iops_rd_max_length),
-        iops_wr_max_length => int($iops_wr_max_length),
-    );
+    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+    # for the switch to -blockdev
+    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
+        mon_cmd(
+            $vmid,
+            'qom-set',
+            path => "throttle-$deviceid",
+            property => "limits",
+            value => {
+                'bps-total' => int($bps),
+                'bps-read' => int($bps_rd),
+                'bps-write' => int($bps_wr),
+                'iops-total' => int($iops),
+                'iops-read' => int($iops_rd),
+                'iops-write' => int($iops_wr),
+                'bps-total-max' => int($bps_max),
+                'bps-read-max' => int($bps_rd_max),
+                'bps-write-max' => int($bps_wr_max),
+                'iops-total-max' => int($iops_max),
+                'iops-read-max' => int($iops_rd_max),
+                'iops-write-max' => int($iops_wr_max),
+                'bps-total-max-length' => int($bps_max_length),
+                'bps-read-max-length' => int($bps_rd_max_length),
+                'bps-write-max-length' => int($bps_wr_max_length),
+                'iops-total-max-length' => int($iops_max_length),
+                'iops-read-max-length' => int($iops_rd_max_length),
+                'iops-write-max-length' => int($iops_wr_max_length),
+            },
+        );
+    } else {
+        mon_cmd(
+            $vmid, "block_set_io_throttle",
+            device => $deviceid,
+            bps => int($bps),
+            bps_rd => int($bps_rd),
+            bps_wr => int($bps_wr),
+            iops => int($iops),
+            iops_rd => int($iops_rd),
+            iops_wr => int($iops_wr),
+            bps_max => int($bps_max),
+            bps_rd_max => int($bps_rd_max),
+            bps_wr_max => int($bps_wr_max),
+            iops_max => int($iops_max),
+            iops_rd_max => int($iops_rd_max),
+            iops_wr_max => int($iops_wr_max),
+            bps_max_length => int($bps_max_length),
+            bps_rd_max_length => int($bps_rd_max_length),
+            bps_wr_max_length => int($bps_wr_max_length),
+            iops_max_length => int($iops_max_length),
+            iops_rd_max_length => int($iops_rd_max_length),
+            iops_wr_max_length => int($iops_wr_max_length),
+        );
+    }
 }
 
 1;
diff --git a/src/PVE/QemuServer/OVMF.pm b/src/PVE/QemuServer/OVMF.pm
index dde81eb7..a7239614 100644
--- a/src/PVE/QemuServer/OVMF.pm
+++ b/src/PVE/QemuServer/OVMF.pm
@@ -3,7 +3,7 @@ package PVE::QemuServer::OVMF;
 use strict;
 use warnings;
 
-use JSON;
+use JSON qw(to_json);
 
 use PVE::RESTEnvironment qw(log_warn);
 use PVE::Storage;
@@ -210,10 +210,21 @@ sub print_ovmf_commandline {
         }
         push $cmd->@*, '-bios', get_ovmf_files($hw_info->{arch}, undef, undef, $amd_sev_type);
     } else {
-        my ($code_drive_str, $var_drive_str) =
-            print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $hw_info, $version_guard);
-        push $cmd->@*, '-drive', $code_drive_str;
-        push $cmd->@*, '-drive', $var_drive_str;
+        if ($version_guard->(10, 0, 0)) { # for the switch to -blockdev
+            my ($code_blockdev, $vars_blockdev, $throttle_group) =
+                generate_ovmf_blockdev($conf, $storecfg, $vmid, $hw_info);
+
+            push $cmd->@*, '-object', to_json($throttle_group, { canonical => 1 });
+            push $cmd->@*, '-blockdev', to_json($code_blockdev, { canonical => 1 });
+            push $cmd->@*, '-blockdev', to_json($vars_blockdev, { canonical => 1 });
+            push $machine_flags->@*, "pflash0=$code_blockdev->{'node-name'}",
+                "pflash1=$vars_blockdev->{'node-name'}";
+        } else {
+            my ($code_drive_str, $var_drive_str) =
+                print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $hw_info, $version_guard);
+            push $cmd->@*, '-drive', $code_drive_str;
+            push $cmd->@*, '-drive', $var_drive_str;
+        }
     }
 
     return ($cmd, $machine_flags);
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index c52df84b..b04cf78b 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -215,6 +215,11 @@ $qemu_server_machine_module->mock(
             if !defined($vm_status->{runningmachine});
         return $vm_status->{runningmachine};
     },
+    get_current_qemu_machine => sub {
+        die "invalid test: no runningmachine specified\n"
+            if !defined($vm_status->{runningmachine});
+        return $vm_status->{runningmachine};
+    },
 );
 
 my $qemu_server_network_module = Test::MockModule->new("PVE::QemuServer::Network");
diff --git a/src/test/cfg2cmd/aio.conf.cmd b/src/test/cfg2cmd/aio.conf.cmd
index c199bacf..272c6cd6 100644
--- a/src/test/cfg2cmd/aio.conf.cmd
+++ b/src/test/cfg2cmd/aio.conf.cmd
@@ -14,6 +14,20 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 512 \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi5","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi6","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi7","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi8","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi9","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi10","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi11","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi12","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi13","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -24,33 +38,33 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=threads,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.raw","node-name":"e3b2553803d55d43b9986a0aac3e9a7","read-only":false},"node-name":"f3b2553803d55d43b9986a0aac3e9a7","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-1.raw,if=none,id=drive-scsi1,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-1.raw","node-name":"e08707d013893852b3d4d42301a4298","read-only":false},"node-name":"f08707d013893852b3d4d42301a4298","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-2.raw,if=none,id=drive-scsi2,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-2.raw","node-name":"edb0854bba55e8b2544ad937c9f5afc","read-only":false},"node-name":"fdb0854bba55e8b2544ad937c9f5afc","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=2,drive=drive-scsi2,id=scsi2,write-cache=on' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-3.raw,if=none,id=drive-scsi3,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-3.raw","node-name":"e9c170cb9491763cad3f31718205efc","read-only":false},"node-name":"f9c170cb9491763cad3f31718205efc","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=3,drive=drive-scsi3,id=scsi3,write-cache=on' \
-  -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-4.raw,if=none,id=drive-scsi4,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-4.raw","node-name":"ea34ecc24c40da0d53420ef344ced37","read-only":false},"node-name":"fa34ecc24c40da0d53420ef344ced37","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
-  -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-5.raw,if=none,id=drive-scsi5,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-5.raw","node-name":"e39cacf47a4f4877072601505d90949","read-only":false},"node-name":"f39cacf47a4f4877072601505d90949","read-only":false},"node-name":"drive-scsi5","throttle-group":"throttle-drive-scsi5"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=5,drive=drive-scsi5,id=scsi5,write-cache=on' \
-  -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-6,if=none,id=drive-scsi6,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-6","node-name":"e7db1ee70981087e4a2861bc7da417b","read-only":false},"node-name":"f7db1ee70981087e4a2861bc7da417b","read-only":false},"node-name":"drive-scsi6","throttle-group":"throttle-drive-scsi6"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=6,drive=drive-scsi6,id=scsi6,write-cache=on' \
   -device 'lsi,id=scsihw1,bus=pci.0,addr=0x6' \
-  -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-7,if=none,id=drive-scsi7,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-7","node-name":"e2d2deac808301140a96c862fe3ea85","read-only":false},"node-name":"f2d2deac808301140a96c862fe3ea85","read-only":false},"node-name":"drive-scsi7","throttle-group":"throttle-drive-scsi7"}' \
   -device 'scsi-hd,bus=scsihw1.0,scsi-id=0,drive=drive-scsi7,id=scsi7,write-cache=on' \
-  -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-8,if=none,id=drive-scsi8,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-8","node-name":"e9796b73db57b8943746ede7d0d3060","read-only":false},"node-name":"f9796b73db57b8943746ede7d0d3060","read-only":false},"node-name":"drive-scsi8","throttle-group":"throttle-drive-scsi8"}' \
   -device 'scsi-hd,bus=scsihw1.0,scsi-id=1,drive=drive-scsi8,id=scsi8,write-cache=on' \
-  -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-9,if=none,id=drive-scsi9,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-9","node-name":"efa538892acc012edbdc5810035bf7d","read-only":false},"node-name":"ffa538892acc012edbdc5810035bf7d","read-only":false},"node-name":"drive-scsi9","throttle-group":"throttle-drive-scsi9"}' \
   -device 'scsi-hd,bus=scsihw1.0,scsi-id=2,drive=drive-scsi9,id=scsi9,write-cache=on' \
-  -drive 'file=rbd:cpool/vm-8006-disk-8:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi10,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-8","node-name":"e6f4cbffa741d16bba69304eb2800ef","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f6f4cbffa741d16bba69304eb2800ef","read-only":false},"node-name":"drive-scsi10","throttle-group":"throttle-drive-scsi10"}' \
   -device 'scsi-hd,bus=scsihw1.0,scsi-id=3,drive=drive-scsi10,id=scsi10,write-cache=on' \
-  -drive 'file=rbd:cpool/vm-8006-disk-8:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi11,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-8","node-name":"e42375c54de70f5f4be966d98c90255","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f42375c54de70f5f4be966d98c90255","read-only":false},"node-name":"drive-scsi11","throttle-group":"throttle-drive-scsi11"}' \
   -device 'scsi-hd,bus=scsihw1.0,scsi-id=4,drive=drive-scsi11,id=scsi11,write-cache=on' \
-  -drive 'file=/dev/veegee/vm-8006-disk-9,if=none,id=drive-scsi12,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-9","node-name":"ed7b2c9e0133619fcf6cb8ce5903502","read-only":false},"node-name":"fd7b2c9e0133619fcf6cb8ce5903502","read-only":false},"node-name":"drive-scsi12","throttle-group":"throttle-drive-scsi12"}' \
   -device 'scsi-hd,bus=scsihw1.0,scsi-id=5,drive=drive-scsi12,id=scsi12,write-cache=on' \
-  -drive 'file=/dev/veegee/vm-8006-disk-9,if=none,id=drive-scsi13,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-9","node-name":"ed85420a880203ca1401d00a8edf132","read-only":false},"node-name":"fd85420a880203ca1401d00a8edf132","read-only":false},"node-name":"drive-scsi13","throttle-group":"throttle-drive-scsi13"}' \
   -device 'scsi-hd,bus=scsihw1.0,scsi-id=6,drive=drive-scsi13,id=scsi13,write-cache=on' \
   -machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/bootorder-empty.conf.cmd b/src/test/cfg2cmd/bootorder-empty.conf.cmd
index 3f8fdb8e..89f73145 100644
--- a/src/test/cfg2cmd/bootorder-empty.conf.cmd
+++ b/src/test/cfg2cmd/bootorder-empty.conf.cmd
@@ -15,8 +15,12 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
   -object 'iothread,id=iothread-virtio0' \
+  -object '{"id":"throttle-drive-virtio0","limits":{},"qom-type":"throttle-group"}' \
   -object 'iothread,id=iothread-virtio1' \
+  -object '{"id":"throttle-drive-virtio1","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -27,14 +31,13 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2' \
   -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"e6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"f6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"edd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"fdd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"drive-virtio0","throttle-group":"throttle-drive-virtio0"}' \
   -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,write-cache=on' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio1,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"eeb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"feb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"drive-virtio1","throttle-group":"throttle-drive-virtio1"}' \
   -device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb,iothread=iothread-virtio1,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256' \
diff --git a/src/test/cfg2cmd/bootorder-legacy.conf.cmd b/src/test/cfg2cmd/bootorder-legacy.conf.cmd
index cd990cd8..a2341692 100644
--- a/src/test/cfg2cmd/bootorder-legacy.conf.cmd
+++ b/src/test/cfg2cmd/bootorder-legacy.conf.cmd
@@ -15,8 +15,12 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
   -object 'iothread,id=iothread-virtio0' \
+  -object '{"id":"throttle-drive-virtio0","limits":{},"qom-type":"throttle-group"}' \
   -object 'iothread,id=iothread-virtio1' \
+  -object '{"id":"throttle-drive-virtio1","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -27,14 +31,13 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"e6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"f6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"edd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"fdd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"drive-virtio0","throttle-group":"throttle-drive-virtio0"}' \
   -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,write-cache=on' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio1,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"eeb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"feb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"drive-virtio1","throttle-group":"throttle-drive-virtio1"}' \
   -device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb,iothread=iothread-virtio1,bootindex=302,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=100' \
diff --git a/src/test/cfg2cmd/bootorder.conf.cmd b/src/test/cfg2cmd/bootorder.conf.cmd
index 3cef2161..87a9fca0 100644
--- a/src/test/cfg2cmd/bootorder.conf.cmd
+++ b/src/test/cfg2cmd/bootorder.conf.cmd
@@ -15,8 +15,12 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
   -object 'iothread,id=iothread-virtio0' \
+  -object '{"id":"throttle-drive-virtio0","limits":{},"qom-type":"throttle-group"}' \
   -object 'iothread,id=iothread-virtio1' \
+  -object '{"id":"throttle-drive-virtio1","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -27,14 +31,13 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=103' \
   -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi4,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"e6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"f6bf62e20f6c14a2c19bd6f1f5ac36c","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
   -device 'scsi-hd,bus=scsihw0.0,scsi-id=4,drive=drive-scsi4,id=scsi4,bootindex=102,write-cache=on' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"edd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"fdd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"drive-virtio0","throttle-group":"throttle-drive-virtio0"}' \
   -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,write-cache=on' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio1,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"eeb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"feb683fb9c516c1a8707c917f0d7a38","read-only":false},"node-name":"drive-virtio1","throttle-group":"throttle-drive-virtio1"}' \
   -device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb,iothread=iothread-virtio1,bootindex=100,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=101' \
diff --git a/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd b/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
index e6e09278..11533c1d 100644
--- a/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
+++ b/src/test/cfg2cmd/cputype-icelake-client-deprecation.conf.cmd
@@ -15,6 +15,8 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu 'Icelake-Server,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,vendor=GenuineIntel' \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,9 +27,8 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/8006/base-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/base-8006-disk-0.qcow2","node-name":"e417d5947e69c5890b1e3ddf8a68167","read-only":false},"node-name":"f417d5947e69c5890b1e3ddf8a68167","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
   -machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/efi-raw-template.conf.cmd b/src/test/cfg2cmd/efi-raw-template.conf.cmd
index f66cbb0d..b6064f98 100644
--- a/src/test/cfg2cmd/efi-raw-template.conf.cmd
+++ b/src/test/cfg2cmd/efi-raw-template.conf.cmd
@@ -8,8 +8,9 @@
   -mon 'chardev=qmp-event,mode=control' \
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/base-disk-100-0.raw,size=131072,readonly=on' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/base-disk-100-0.raw","node-name":"e3bd051dc2860cd423537bc00138c50","read-only":true},"node-name":"f3bd051dc2860cd423537bc00138c50","read-only":true,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -smp '1,sockets=1,cores=1,maxcpus=1' \
   -nodefaults \
   -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -25,5 +26,5 @@
   -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -machine 'accel=tcg,type=pc+pve0' \
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,accel=tcg,type=pc+pve0' \
   -snapshot
diff --git a/src/test/cfg2cmd/efi-raw.conf.cmd b/src/test/cfg2cmd/efi-raw.conf.cmd
index 6406686d..c10df1cb 100644
--- a/src/test/cfg2cmd/efi-raw.conf.cmd
+++ b/src/test/cfg2cmd/efi-raw.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=131072' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -smp '1,sockets=1,cores=1,maxcpus=1' \
   -nodefaults \
   -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -26,4 +27,4 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -machine 'type=pc+pve0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0'
diff --git a/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd b/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd
index 4e9a7e87..a9dcd474 100644
--- a/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd
+++ b/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=540672' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -smp '1,sockets=1,cores=1,maxcpus=1' \
   -nodefaults \
   -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -27,4 +28,4 @@
   -device 'VGA,id=vga,bus=pcie.0,addr=0x1' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -machine 'type=q35+pve0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd b/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd
index 175d9b10..c65c74f5 100644
--- a/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd
+++ b/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=540672' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE_4M.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -smp '1,sockets=1,cores=1,maxcpus=1' \
   -nodefaults \
   -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -29,4 +30,4 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -machine 'type=pc+pve0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0'
diff --git a/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd b/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
index 5c55c01b..585e6ee9 100644
--- a/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
+++ b/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e688' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,cache=writeback,format=raw,file=rbd:cpool/vm-100-disk-1:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none:rbd_cache_policy=writeback,size=131072' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":false,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"rbd","image":"vm-100-disk-1","node-name":"eeb8f022b5551ad1d795611f112c767","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"feb8f022b5551ad1d795611f112c767","read-only":false,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -smp '1,sockets=1,cores=1,maxcpus=1' \
   -nodefaults \
   -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -31,4 +32,4 @@
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
-  -machine 'type=pc+pve0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0'
diff --git a/src/test/cfg2cmd/ide.conf.cmd b/src/test/cfg2cmd/ide.conf.cmd
index a0d6c3ed..78fe7550 100644
--- a/src/test/cfg2cmd/ide.conf.cmd
+++ b/src/test/cfg2cmd/ide.conf.cmd
@@ -15,6 +15,11 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 512 \
+  -object '{"id":"throttle-drive-ide0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-ide1","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-ide3","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,16 +30,16 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'file=/var/lib/vz/template/iso/zero.iso,if=none,id=drive-ide0,media=cdrom,format=raw,aio=io_uring' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/var/lib/vz/template/iso/zero.iso","node-name":"e19e15bf93b8cf09e2a5d1669648165","read-only":true},"node-name":"f19e15bf93b8cf09e2a5d1669648165","read-only":true},"node-name":"drive-ide0","throttle-group":"throttle-drive-ide0"}' \
   -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' \
-  -drive 'file=/mnt/pve/cifs-store/template/iso/one.iso,if=none,id=drive-ide1,media=cdrom,format=raw,aio=threads' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/one.iso","node-name":"e247a6535f864815c8e9dbb5a118336","read-only":true},"node-name":"f247a6535f864815c8e9dbb5a118336","read-only":true},"node-name":"drive-ide1","throttle-group":"throttle-drive-ide1"}' \
   -device 'ide-cd,bus=ide.0,unit=1,drive=drive-ide1,id=ide1,bootindex=201' \
-  -drive 'file=/mnt/pve/cifs-store/template/iso/two.iso,if=none,id=drive-ide2,media=cdrom,format=raw,aio=threads' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/two.iso","node-name":"ec78a64f692c2fa9f873a111580aebd","read-only":true},"node-name":"fc78a64f692c2fa9f873a111580aebd","read-only":true},"node-name":"drive-ide2","throttle-group":"throttle-drive-ide2"}' \
   -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=202' \
-  -drive 'file=/mnt/pve/cifs-store/template/iso/three.iso,if=none,id=drive-ide3,media=cdrom,format=raw,aio=threads' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/three.iso","node-name":"e35557bae4bcbf9edc9f7ff7f132f30","read-only":true},"node-name":"f35557bae4bcbf9edc9f7ff7f132f30","read-only":true},"node-name":"drive-ide3","throttle-group":"throttle-drive-ide3"}' \
   -device 'ide-cd,bus=ide.1,unit=1,drive=drive-ide3,id=ide3,bootindex=203' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/100/vm-100-disk-2.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=io_uring,detect-zeroes=on' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-2.qcow2","node-name":"ec11e0572184321efc5835152b95d5d","read-only":false},"node-name":"fc11e0572184321efc5835152b95d5d","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/q35-ide.conf.cmd b/src/test/cfg2cmd/q35-ide.conf.cmd
index f12fa44d..f94accb9 100644
--- a/src/test/cfg2cmd/q35-ide.conf.cmd
+++ b/src/test/cfg2cmd/q35-ide.conf.cmd
@@ -16,6 +16,11 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 512 \
+  -object '{"id":"throttle-drive-ide0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-ide1","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-ide3","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
   -global 'ICH9-LPC.disable_s3=1' \
   -global 'ICH9-LPC.disable_s4=1' \
   -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \
@@ -24,16 +29,16 @@
   -device 'VGA,id=vga,bus=pcie.0,addr=0x1' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'file=/mnt/pve/cifs-store/template/iso/zero.iso,if=none,id=drive-ide0,media=cdrom,format=raw,aio=threads' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/zero.iso","node-name":"e1677eafc00b7016099210662868e38","read-only":true},"node-name":"f1677eafc00b7016099210662868e38","read-only":true},"node-name":"drive-ide0","throttle-group":"throttle-drive-ide0"}' \
   -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' \
-  -drive 'file=/mnt/pve/cifs-store/template/iso/one.iso,if=none,id=drive-ide1,media=cdrom,format=raw,aio=threads' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/one.iso","node-name":"e247a6535f864815c8e9dbb5a118336","read-only":true},"node-name":"f247a6535f864815c8e9dbb5a118336","read-only":true},"node-name":"drive-ide1","throttle-group":"throttle-drive-ide1"}' \
   -device 'ide-cd,bus=ide.2,unit=0,drive=drive-ide1,id=ide1,bootindex=201' \
-  -drive 'file=/mnt/pve/cifs-store/template/iso/two.iso,if=none,id=drive-ide2,media=cdrom,format=raw,aio=threads' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/two.iso","node-name":"ec78a64f692c2fa9f873a111580aebd","read-only":true},"node-name":"fc78a64f692c2fa9f873a111580aebd","read-only":true},"node-name":"drive-ide2","throttle-group":"throttle-drive-ide2"}' \
   -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=202' \
-  -drive 'file=/mnt/pve/cifs-store/template/iso/three.iso,if=none,id=drive-ide3,media=cdrom,format=raw,aio=threads' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"driver":"file","filename":"/mnt/pve/cifs-store/template/iso/three.iso","node-name":"e35557bae4bcbf9edc9f7ff7f132f30","read-only":true},"node-name":"f35557bae4bcbf9edc9f7ff7f132f30","read-only":true},"node-name":"drive-ide3","throttle-group":"throttle-drive-ide3"}' \
   -device 'ide-cd,bus=ide.3,unit=0,drive=drive-ide3,id=ide3,bootindex=203' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/100/vm-100-disk-2.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=io_uring,detect-zeroes=on' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-2.qcow2","node-name":"ec11e0572184321efc5835152b95d5d","read-only":false},"node-name":"fc11e0572184321efc5835152b95d5d","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/q35-linux-hostpci-mapping.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci-mapping.conf.cmd
index 717c0be4..42f1cb80 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci-mapping.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci-mapping.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
   -smp '2,sockets=2,cores=1,maxcpus=2' \
   -nodefaults \
@@ -35,4 +36,4 @@
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
-  -machine 'type=q35+pve0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/q35-linux-hostpci-multifunction.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci-multifunction.conf.cmd
index 146bf3e5..e9cd47b8 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci-multifunction.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci-multifunction.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
   -smp '2,sockets=2,cores=1,maxcpus=2' \
   -nodefaults \
@@ -35,4 +36,4 @@
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
-  -machine 'type=q35+pve0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/q35-linux-hostpci-template.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci-template.conf.cmd
index ce69f23a..ddc87814 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci-template.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci-template.conf.cmd
@@ -8,8 +8,9 @@
   -mon 'chardev=qmp-event,mode=control' \
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/base-100-disk-1.qcow2,readonly=on' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/base-100-disk-1.qcow2","node-name":"eb6bec0e3c391fabb7fb7dd73ced9bf","read-only":true},"node-name":"fb6bec0e3c391fabb7fb7dd73ced9bf","read-only":true},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -smp '1,sockets=1,cores=1,maxcpus=1' \
   -nodefaults \
   -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -17,6 +18,7 @@
   -nographic \
   -cpu qemu64 \
   -m 512 \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -26,7 +28,7 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/100/base-100-disk-2.raw,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on,readonly=on' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/base-100-disk-2.raw","node-name":"e24dfe239201bb9924fc4cfb899ca70","read-only":true},"node-name":"f24dfe239201bb9924fc4cfb899ca70","read-only":true},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
-  -machine 'accel=tcg,type=pc+pve0' \
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,accel=tcg,type=pc+pve0' \
   -snapshot
diff --git a/src/test/cfg2cmd/q35-linux-hostpci-x-pci-overrides.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci-x-pci-overrides.conf.cmd
index 0f0cb2c0..b06dbb4f 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci-x-pci-overrides.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci-x-pci-overrides.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
   -smp '2,sockets=2,cores=1,maxcpus=2' \
   -nodefaults \
@@ -34,4 +35,4 @@
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
-  -machine 'type=q35+pve0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/q35-linux-hostpci.conf.cmd b/src/test/cfg2cmd/q35-linux-hostpci.conf.cmd
index 0abb569b..014eb09c 100644
--- a/src/test/cfg2cmd/q35-linux-hostpci.conf.cmd
+++ b/src/test/cfg2cmd/q35-linux-hostpci.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
   -smp '2,sockets=2,cores=1,maxcpus=2' \
   -nodefaults \
@@ -40,4 +41,4 @@
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
-  -machine 'type=q35+pve0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/q35-simple.conf.cmd b/src/test/cfg2cmd/q35-simple.conf.cmd
index 371ea7dd..c6b38f7d 100644
--- a/src/test/cfg2cmd/q35-simple.conf.cmd
+++ b/src/test/cfg2cmd/q35-simple.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=3dd750ce-d910-44d0-9493-525c0be4e687' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=qcow2,file=/var/lib/vz/images/100/vm-100-disk-1.qcow2' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-1.qcow2","node-name":"e70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"f70e3017c5a79fdee5a04aa92ac1e9c","read-only":false},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' \
   -smp '2,sockets=1,cores=2,maxcpus=2' \
   -nodefaults \
@@ -28,4 +29,4 @@
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=2E:01:68:F9:9C:87,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
-  -machine 'type=q35+pve0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=q35+pve0'
diff --git a/src/test/cfg2cmd/seabios_serial.conf.cmd b/src/test/cfg2cmd/seabios_serial.conf.cmd
index 0eb02459..a00def0a 100644
--- a/src/test/cfg2cmd/seabios_serial.conf.cmd
+++ b/src/test/cfg2cmd/seabios_serial.conf.cmd
@@ -15,6 +15,8 @@
   -nographic \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,10 +27,9 @@
   -device 'isa-serial,chardev=serial0' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"ecd04be4259153b8293415fefa2a84c","read-only":false},"node-name":"fcd04be4259153b8293415fefa2a84c","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/sev-es.conf.cmd b/src/test/cfg2cmd/sev-es.conf.cmd
index 3a100306..a39b6f67 100644
--- a/src/test/cfg2cmd/sev-es.conf.cmd
+++ b/src/test/cfg2cmd/sev-es.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=540672' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -smp '1,sockets=1,cores=1,maxcpus=1' \
   -nodefaults \
   -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -27,4 +28,4 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -object 'sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=6,policy=0xc' \
-  -machine 'type=pc+pve0,confidential-guest-support=sev0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0,confidential-guest-support=sev0'
diff --git a/src/test/cfg2cmd/sev-std.conf.cmd b/src/test/cfg2cmd/sev-std.conf.cmd
index 06da2ca0..3878f15c 100644
--- a/src/test/cfg2cmd/sev-std.conf.cmd
+++ b/src/test/cfg2cmd/sev-std.conf.cmd
@@ -9,8 +9,9 @@
   -pidfile /var/run/qemu-server/8006.pid \
   -daemonize \
   -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd' \
-  -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw,size=540672' \
+  -object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
+  -blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd"},"node-name":"pflash0","read-only":true}' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
   -smp '1,sockets=1,cores=1,maxcpus=1' \
   -nodefaults \
   -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
@@ -27,4 +28,4 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -object 'sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=6,policy=0x8' \
-  -machine 'type=pc+pve0,confidential-guest-support=sev0'
+  -machine 'pflash0=pflash0,pflash1=drive-efidisk0,type=pc+pve0,confidential-guest-support=sev0'
diff --git a/src/test/cfg2cmd/simple-btrfs.conf.cmd b/src/test/cfg2cmd/simple-btrfs.conf.cmd
index 2aa2083d..6c944f62 100644
--- a/src/test/cfg2cmd/simple-btrfs.conf.cmd
+++ b/src/test/cfg2cmd/simple-btrfs.conf.cmd
@@ -15,6 +15,11 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,16 +30,15 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi0,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/butter/bread/images/8006/vm-8006-disk-0/disk.raw","node-name":"e99aff0ff797aa030a22e9f580076dd","read-only":false},"node-name":"f99aff0ff797aa030a22e9f580076dd","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-  -drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/butter/bread/images/8006/vm-8006-disk-0/disk.raw","node-name":"e7b2fd2a8c5dbfc550d9781e5df8841","read-only":false},"node-name":"f7b2fd2a8c5dbfc550d9781e5df8841","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
-  -drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/butter/bread/images/8006/vm-8006-disk-0/disk.raw","node-name":"ed78b07bb04c2cbd8aedc648e885569","read-only":false},"node-name":"fd78b07bb04c2cbd8aedc648e885569","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
-  -drive 'file=/butter/bread/images/8006/vm-8006-disk-0/disk.raw,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/butter/bread/images/8006/vm-8006-disk-0/disk.raw","node-name":"e7487c01d831e2b51a5446980170ec9","read-only":false},"node-name":"f7487c01d831e2b51a5446980170ec9","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple-cifs.conf.cmd b/src/test/cfg2cmd/simple-cifs.conf.cmd
index d23a046a..f22eb033 100644
--- a/src/test/cfg2cmd/simple-cifs.conf.cmd
+++ b/src/test/cfg2cmd/simple-cifs.conf.cmd
@@ -14,6 +14,11 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 512 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -23,15 +28,14 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw","node-name":"e2b3b8f2d6a23adc1aa3ecd195dbaf5","read-only":false},"node-name":"f2b3b8f2d6a23adc1aa3ecd195dbaf5","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
-  -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw","node-name":"ee4d9a961200a669c1a8182632aba3e","read-only":false},"node-name":"fe4d9a961200a669c1a8182632aba3e","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
-  -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw","node-name":"e6a3bf7eee1e2636cbe31f62b537b6c","read-only":false},"node-name":"f6a3bf7eee1e2636cbe31f62b537b6c","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
-  -drive 'file=/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=native,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/mnt/pve/cifs-store/images/8006/vm-8006-disk-0.raw","node-name":"e7042ee58e764b1296ad54014cb9a03","read-only":false},"node-name":"f7042ee58e764b1296ad54014cb9a03","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
   -machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/simple-disk-passthrough.conf.cmd b/src/test/cfg2cmd/simple-disk-passthrough.conf.cmd
index 70ee9f6b..58368210 100644
--- a/src/test/cfg2cmd/simple-disk-passthrough.conf.cmd
+++ b/src/test/cfg2cmd/simple-disk-passthrough.conf.cmd
@@ -15,6 +15,9 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,12 +28,12 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'file=/dev/cdrom,if=none,id=drive-ide2,media=cdrom,format=raw,aio=io_uring' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"driver":"host_cdrom","filename":"/dev/cdrom","node-name":"ee50e59431a6228dc388fc821b35696","read-only":true},"node-name":"fe50e59431a6228dc388fc821b35696","read-only":true},"node-name":"drive-ide2","throttle-group":"throttle-drive-ide2"}' \
   -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/dev/sda,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/sda","node-name":"eec235c1b362ebd19d5e98959b4c171","read-only":false},"node-name":"fec235c1b362ebd19d5e98959b4c171","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-  -drive 'file=/mnt/file.raw,if=none,id=drive-scsi1,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/mnt/file.raw","node-name":"e234a4e3b89ac3adac9bdbf0c3dd6b4","read-only":false},"node-name":"f234a4e3b89ac3adac9bdbf0c3dd6b4","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple-lvm.conf.cmd b/src/test/cfg2cmd/simple-lvm.conf.cmd
index 40a6c7c8..650f0ac3 100644
--- a/src/test/cfg2cmd/simple-lvm.conf.cmd
+++ b/src/test/cfg2cmd/simple-lvm.conf.cmd
@@ -14,6 +14,10 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 512 \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -24,12 +28,12 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/dev/veegee/vm-8006-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0","node-name":"e0378a375d635b0f473569544c7c207","read-only":false},"node-name":"f0378a375d635b0f473569544c7c207","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-  -drive 'file=/dev/veegee/vm-8006-disk-0,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0","node-name":"e2fbae024c8a771f708f4a5391211b0","read-only":false},"node-name":"f2fbae024c8a771f708f4a5391211b0","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
-  -drive 'file=/dev/veegee/vm-8006-disk-0,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0","node-name":"e4328c26b141e3efe1564cb60bf1155","read-only":false},"node-name":"f4328c26b141e3efe1564cb60bf1155","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
-  -drive 'file=/dev/veegee/vm-8006-disk-0,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=native,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0","node-name":"e68e10f8128f05fe5f7e85cc1f9922b","read-only":false},"node-name":"f68e10f8128f05fe5f7e85cc1f9922b","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
   -machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/simple-lvmthin.conf.cmd b/src/test/cfg2cmd/simple-lvmthin.conf.cmd
index 8d366aff..22251bc6 100644
--- a/src/test/cfg2cmd/simple-lvmthin.conf.cmd
+++ b/src/test/cfg2cmd/simple-lvmthin.conf.cmd
@@ -14,6 +14,10 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 512 \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -24,12 +28,12 @@
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/dev/pve/vm-8006-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/pve/vm-8006-disk-0","node-name":"e6d87b01b7bb888b8426534a542ff1c","read-only":false},"node-name":"f6d87b01b7bb888b8426534a542ff1c","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-  -drive 'file=/dev/pve/vm-8006-disk-0,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/pve/vm-8006-disk-0","node-name":"e96d9ece81aa4271aa2d8485184f66b","read-only":false},"node-name":"f96d9ece81aa4271aa2d8485184f66b","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
-  -drive 'file=/dev/pve/vm-8006-disk-0,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/pve/vm-8006-disk-0","node-name":"e0b89788ef97beda10a850ab45897d9","read-only":false},"node-name":"f0b89788ef97beda10a850ab45897d9","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
-  -drive 'file=/dev/pve/vm-8006-disk-0,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/pve/vm-8006-disk-0","node-name":"ea7b6871af66ca3e13e95bd74570aa2","read-only":false},"node-name":"fa7b6871af66ca3e13e95bd74570aa2","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
   -machine 'type=pc+pve0'
diff --git a/src/test/cfg2cmd/simple-rbd.conf.cmd b/src/test/cfg2cmd/simple-rbd.conf.cmd
index df7cba3f..9260e448 100644
--- a/src/test/cfg2cmd/simple-rbd.conf.cmd
+++ b/src/test/cfg2cmd/simple-rbd.conf.cmd
@@ -15,6 +15,15 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi4","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi5","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi6","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi7","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,24 +34,23 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-0","node-name":"e8e1af6f55c6a2466f178045aa79710","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f8e1af6f55c6a2466f178045aa79710","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-  -drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-0","node-name":"e3990bba2ed1f48c5bb23e9f37b4cec","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f3990bba2ed1f48c5bb23e9f37b4cec","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
-  -drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-0","node-name":"e3beccc2a8f2eacb8b5df8055a7d093","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"f3beccc2a8f2eacb8b5df8055a7d093","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
-  -drive 'file=rbd:cpool/vm-8006-disk-0:mon_host=127.0.0.42;127.0.0.21;[\:\:1]:auth_supported=none,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"auth-client-required":["none"],"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"rbd","image":"vm-8006-disk-0","node-name":"eef923d5dfcee93fbc712b03f9f21af","pool":"cpool","read-only":false,"server":[{"host":"127.0.0.42","port":"3300"},{"host":"127.0.0.21","port":"3300"},{"host":"::1","port":"3300"}]},"node-name":"fef923d5dfcee93fbc712b03f9f21af","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
-  -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0,if=none,id=drive-scsi4,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0","node-name":"eb2c7a292f03b9f6d015cf83ae79730","read-only":false},"node-name":"fb2c7a292f03b9f6d015cf83ae79730","read-only":false},"node-name":"drive-scsi4","throttle-group":"throttle-drive-scsi4"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=4,drive=drive-scsi4,id=scsi4,write-cache=on' \
-  -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0,if=none,id=drive-scsi5,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0","node-name":"e5258ec75558b1f102af1e20e677fd0","read-only":false},"node-name":"f5258ec75558b1f102af1e20e677fd0","read-only":false},"node-name":"drive-scsi5","throttle-group":"throttle-drive-scsi5"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=5,drive=drive-scsi5,id=scsi5,write-cache=on' \
-  -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0,if=none,id=drive-scsi6,cache=writethrough,discard=on,format=raw,aio=threads,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"aio":"threads","cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0","node-name":"edb33cdcea8ec3e2225509c4945227e","read-only":false},"node-name":"fdb33cdcea8ec3e2225509c4945227e","read-only":false},"node-name":"drive-scsi6","throttle-group":"throttle-drive-scsi6"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=6,drive=drive-scsi6,id=scsi6,write-cache=off' \
-  -drive 'file=/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0,if=none,id=drive-scsi7,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"host_device","filename":"/dev/rbd-pve/fc4181a6-56eb-4f68-b452-8ba1f381ca2a/cpool/vm-8006-disk-0","node-name":"eb0b017124a47505c97a5da052e0141","read-only":false},"node-name":"fb0b017124a47505c97a5da052e0141","read-only":false},"node-name":"drive-scsi7","throttle-group":"throttle-drive-scsi7"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=7,drive=drive-scsi7,id=scsi7,write-cache=off' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple-virtio-blk.conf.cmd b/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
index 0a7eb473..4a3a4c7a 100644
--- a/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
+++ b/src/test/cfg2cmd/simple-virtio-blk.conf.cmd
@@ -15,7 +15,9 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
   -object 'iothread,id=iothread-virtio0' \
+  -object '{"id":"throttle-drive-virtio0","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -26,9 +28,8 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"edd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"fdd19f6c1b3a6d5a6248c3376a91a16","read-only":false},"node-name":"drive-virtio0","throttle-group":"throttle-drive-virtio0"}' \
   -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,bootindex=100,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd b/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
index a90156b0..22603fa5 100644
--- a/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
+++ b/src/test/cfg2cmd/simple-zfs-over-iscsi.conf.cmd
@@ -15,6 +15,11 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi3","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,16 +30,15 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"iscsi","lun":0,"node-name":"e7106ac43d4f125a1911487dd9e3e42","portal":"127.0.0.1","read-only":false,"target":"iqn.2019-10.org.test:foobar","transport":"tcp"},"node-name":"f7106ac43d4f125a1911487dd9e3e42","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
-  -drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"iscsi","lun":0,"node-name":"efdb73e0d0acc5a60e3ff438cb20113","portal":"127.0.0.1","read-only":false,"target":"iqn.2019-10.org.test:foobar","transport":"tcp"},"node-name":"ffdb73e0d0acc5a60e3ff438cb20113","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
-  -drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi2,cache=writethrough,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":false,"no-flush":false},"driver":"raw","file":{"cache":{"direct":false,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"iscsi","lun":0,"node-name":"eab527a81b458aa9603dca5e2505f6e","portal":"127.0.0.1","read-only":false,"target":"iqn.2019-10.org.test:foobar","transport":"tcp"},"node-name":"fab527a81b458aa9603dca5e2505f6e","read-only":false},"node-name":"drive-scsi2","throttle-group":"throttle-drive-scsi2"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,write-cache=off' \
-  -drive 'file=iscsi://127.0.0.1/iqn.2019-10.org.test:foobar/0,if=none,id=drive-scsi3,cache=directsync,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"iscsi","lun":0,"node-name":"e915a332310039f7a3feed6901eb5da","portal":"127.0.0.1","read-only":false,"target":"iqn.2019-10.org.test:foobar","transport":"tcp"},"node-name":"f915a332310039f7a3feed6901eb5da","read-only":false},"node-name":"drive-scsi3","throttle-group":"throttle-drive-scsi3"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3,write-cache=off' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/cfg2cmd/simple1-template.conf.cmd b/src/test/cfg2cmd/simple1-template.conf.cmd
index c736c84a..4f8f29f6 100644
--- a/src/test/cfg2cmd/simple1-template.conf.cmd
+++ b/src/test/cfg2cmd/simple1-template.conf.cmd
@@ -15,6 +15,9 @@
   -nographic \
   -cpu qemu64 \
   -m 512 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-sata0","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -23,13 +26,12 @@
   -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/8006/base-8006-disk-1.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap,readonly=on' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/base-8006-disk-1.qcow2","node-name":"e1085774206ae4a6b6bf8426ff08f16","read-only":true},"node-name":"f1085774206ae4a6b6bf8426ff08f16","read-only":true},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
   -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' \
-  -drive 'file=/var/lib/vz/images/8006/base-8006-disk-0.qcow2,if=none,id=drive-sata0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/base-8006-disk-0.qcow2","node-name":"eab334c2e07734480f33dd80d89871b","read-only":false},"node-name":"fab334c2e07734480f33dd80d89871b","read-only":false},"node-name":"drive-sata0","throttle-group":"throttle-drive-sata0"}' \
   -device 'ide-hd,bus=ahci0.0,drive=drive-sata0,id=sata0,write-cache=on' \
   -machine 'accel=tcg,smm=off,type=pc+pve0' \
   -snapshot
diff --git a/src/test/cfg2cmd/simple1.conf.cmd b/src/test/cfg2cmd/simple1.conf.cmd
index e657aed7..677b0527 100644
--- a/src/test/cfg2cmd/simple1.conf.cmd
+++ b/src/test/cfg2cmd/simple1.conf.cmd
@@ -15,6 +15,8 @@
   -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
   -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
   -m 768 \
+  -object '{"id":"throttle-drive-ide2","limits":{},"qom-type":"throttle-group"}' \
+  -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
   -global 'PIIX4_PM.disable_s3=1' \
   -global 'PIIX4_PM.disable_s4=1' \
   -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
@@ -25,10 +27,9 @@
   -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
   -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
   -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
-  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
   -device 'ide-cd,bus=ide.1,unit=0,id=ide2,bootindex=200' \
   -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
+  -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"unmap","discard":"unmap","driver":"file","filename":"/var/lib/vz/images/8006/vm-8006-disk-0.qcow2","node-name":"ecd04be4259153b8293415fefa2a84c","read-only":false},"node-name":"fcd04be4259153b8293415fefa2a84c","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
   -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,write-cache=on' \
   -netdev 'type=tap,id=net0,ifname=tap8006i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' \
   -device 'virtio-net-pci,mac=A2:C0:43:77:08:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
diff --git a/src/test/run_config2command_tests.pl b/src/test/run_config2command_tests.pl
index 52fedd7b..1262a0df 100755
--- a/src/test/run_config2command_tests.pl
+++ b/src/test/run_config2command_tests.pl
@@ -266,6 +266,18 @@ $storage_module->mock(
     },
 );
 
+my $file_stat_module = Test::MockModule->new("File::stat");
+$file_stat_module->mock(
+    stat => sub {
+        my ($path) = @_;
+        if ($path =~ m!/dev/!) {
+            return $file_stat_module->original('stat')->('/dev/null');
+        } else {
+            return $file_stat_module->original('stat')->('./run_config2command_tests.pl');
+        }
+    },
+);
+
 my $zfsplugin_module = Test::MockModule->new("PVE::Storage::ZFSPlugin");
 $zfsplugin_module->mock(
     zfs_get_lu_name => sub {
@@ -276,6 +288,13 @@ $zfsplugin_module->mock(
     },
 );
 
+my $rbdplugin_module = Test::MockModule->new("PVE::Storage::RBDPlugin");
+$rbdplugin_module->mock(
+    rbd_volume_config_set => sub {
+        return;
+    },
+);
+
 my $qemu_server_config;
 $qemu_server_config = Test::MockModule->new('PVE::QemuConfig');
 $qemu_server_config->mock(
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 30/31] test: migration: update running machine to 10.0
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (28 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 29/31] command line: switch to blockdev starting with machine version 10.0 Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 31/31] partially fix #3227: ensure that target image for mirror has the same size for EFI disks Fiona Ebner
  2025-06-27 16:00 ` [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

In particular, this also means that (mocked) blockdev_mirror() will be
used.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

No changes since the last series.

 src/test/run_qemu_migrate_tests.pl | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/src/test/run_qemu_migrate_tests.pl b/src/test/run_qemu_migrate_tests.pl
index 68f0784e..ed2f38ee 100755
--- a/src/test/run_qemu_migrate_tests.pl
+++ b/src/test/run_qemu_migrate_tests.pl
@@ -267,7 +267,7 @@ my $vm_configs = {
                 'numa' => 0,
                 'ostype' => 'l26',
                 'runningcpu' => 'kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep',
-                'runningmachine' => 'pc-i440fx-5.0+pve0',
+                'runningmachine' => 'pc-i440fx-10.0+pve0',
                 'scsi0' => 'local-dir:4567/vm-4567-disk-0.qcow2,size=4G',
                 'scsihw' => 'virtio-scsi-pci',
                 'smbios1' => 'uuid=2925fdec-a066-4228-b46b-eef8662f5e74',
@@ -288,7 +288,7 @@ my $vm_configs = {
                 'ostype' => 'l26',
                 'parent' => 'snap1',
                 'runningcpu' => 'kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep',
-                'runningmachine' => 'pc-i440fx-5.0+pve0',
+                'runningmachine' => 'pc-i440fx-10.0+pve0',
                 'scsi0' => 'local-dir:4567/vm-4567-disk-0.qcow2,size=4G',
                 'scsi1' => 'local-zfs:vm-4567-disk-0,size=1G',
                 'scsihw' => 'virtio-scsi-pci',
@@ -769,7 +769,7 @@ my $tests = [
         vmid => 4567,
         vm_status => {
             running => 1,
-            runningmachine => 'pc-i440fx-5.0+pve0',
+            runningmachine => 'pc-i440fx-10.0+pve0',
         },
         opts => {
             online => 1,
@@ -783,7 +783,7 @@ my $tests = [
             vm_config => $vm_configs->{4567},
             vm_status => {
                 running => 1,
-                runningmachine => 'pc-i440fx-5.0+pve0',
+                runningmachine => 'pc-i440fx-10.0+pve0',
             },
         },
     },
@@ -1358,7 +1358,7 @@ my $tests = [
         vmid => 105,
         vm_status => {
             running => 1,
-            runningmachine => 'pc-i440fx-5.0+pve0',
+            runningmachine => 'pc-i440fx-10.0+pve0',
         },
         opts => {
             online => 1,
@@ -1376,7 +1376,7 @@ my $tests = [
             vm_config => $vm_configs->{105},
             vm_status => {
                 running => 1,
-                runningmachine => 'pc-i440fx-5.0+pve0',
+                runningmachine => 'pc-i440fx-10.0+pve0',
             },
         },
     },
@@ -1404,7 +1404,7 @@ my $tests = [
         vmid => 105,
         vm_status => {
             running => 1,
-            runningmachine => 'pc-i440fx-5.0+pve0',
+            runningmachine => 'pc-i440fx-10.0+pve0',
         },
         config_patch => {
             snapshots => undef,
@@ -1427,7 +1427,7 @@ my $tests = [
             }),
             vm_status => {
                 running => 1,
-                runningmachine => 'pc-i440fx-5.0+pve0',
+                runningmachine => 'pc-i440fx-10.0+pve0',
             },
         },
     },
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [pve-devel] [PATCH qemu-server 31/31] partially fix #3227: ensure that target image for mirror has the same size for EFI disks
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (29 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 30/31] test: migration: update running machine to 10.0 Fiona Ebner
@ 2025-06-27 15:57 ` Fiona Ebner
  2025-06-27 16:00 ` [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
  31 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 15:57 UTC (permalink / raw)
  To: pve-devel

When the format is raw, the size can be explicitly passed. When the
format is a container format like qcow2, the image should already be
allocated with the correct virtual size.

It is not possible to resize a disk with an explicit 'size' set, so
only set this for EFI disks.
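
For illustration only (values and path are made up, and it is assumed here
that the 'size' option ends up as the same-named option on the target's raw
format node, which QEMU's 'raw' driver accepts), the generated target node
could look roughly like this:

    # rough sketch, not the exact output of generate_drive_blockdev()
    my $target_format_node = {
        driver => 'raw',
        size => 540672,    # must match the source's virtual size exactly
        file => { driver => 'file', filename => '/path/example/vm-100-disk-1.raw' },
    };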

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/BlockJob.pm | 18 ++++++++++++++++++
 src/PVE/QemuServer/Blockdev.pm |  2 +-
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 1f242cca..c895a084 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -484,6 +484,24 @@ sub blockdev_mirror {
     my $generate_blockdev_opts = {};
     $generate_blockdev_opts->{'zero-initialized'} = 1 if $dest->{'zero-initialized'};
 
+    # Source and target need to have the exact same virtual size, see bug #3227.
+    # However, it won't be possible to resize a disk with 'size' explicitly set afterwards, so only
+    # set it for EFI disks.
+    if ($drive_id eq 'efidisk0' && !PVE::QemuServer::Blockdev::is_nbd($dest_drive)) {
+        my ($storeid) = PVE::Storage::parse_volume_id($dest_drive->{file}, 1);
+        if (
+            $storeid
+            && PVE::QemuServer::Drive::checked_volume_format($storecfg, $dest->{volid}) eq 'raw'
+        ) {
+            my $block_info = PVE::QemuServer::Blockdev::get_block_info($vmid);
+            if (my $size = $block_info->{$drive_id}->{inserted}->{image}->{'virtual-size'}) {
+                $generate_blockdev_opts->{size} = $size;
+            } else {
+                log_warn("unable to determine source block node size - continuing anyway");
+            }
+        }
+    }
+
     # Note that if 'aio' is not explicitly set, i.e. default, it can change if source and target
     # don't both allow or both not allow 'io_uring' as the default.
     my $target_drive_blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index b0b88ea3..c4916ac1 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -19,7 +19,7 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
 my $NBD_TCP_PATH_RE_3 = qr/nbd:(\S+):(\d+):exportname=(\S+)/;
 my $NBD_UNIX_PATH_RE_2 = qr/nbd:unix:(\S+):exportname=(\S+)/;
 
-my sub is_nbd {
+sub is_nbd {
     my ($drive) = @_;
 
     return 1 if $drive->{file} =~ $NBD_TCP_PATH_RE_3;
-- 
2.47.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final)
  2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
                   ` (30 preceding siblings ...)
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 31/31] partially fix #3227: ensure that target image for mirror has the same size for EFI disks Fiona Ebner
@ 2025-06-27 16:00 ` Fiona Ebner
  2025-06-30  8:19   ` DERUMIER, Alexandre via pve-devel
  31 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-06-27 16:00 UTC (permalink / raw)
  To: pve-devel

On 27.06.25 at 17:56, Fiona Ebner wrote:
> The preliminary final part in the series. I'm sure there will be some
> follow-ups, and the decisions about edge cases like cache mode for EFI
> disk and querying file child are not yet set in stone. But this should
> essentially be it.
> 
> The switch from '-drive' to '-blockdev' is in preparation for future
> features like external snapshots, FUSE exports via qemu-storage-daemon
> and also generally the more modern interface in QEMU. It also allows
> to address some limitations drive-mirror had, in particular this
> series makes it possible to mirror between storages having a different
> aio default as well as mirror when the size of the allocated image
> doesn't exactly match for EFI disks, see [2] and patch 31/31.
> 
> The switch is guarded by machine version 10.0 to avoid any potential
> incompatibilities between -drive and -blockdev options/defaults.
> 
> What is still missing is support for the rather obscure 'snapshot'
> drive option where writes will go to a temporary image (currently in
> '/var/tmp', which is far from ideal to begin with). That requires
> inserting an overlay node.

I forgot to mention: I couldn't test live-import yet, because there
seems to be an issue with FUSE-mounting an ESXi storage on Trixie (or at
least on my setup right now): "failed to spawn fuse mount, process
exited with status 25856"


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query and use node name for resize operation
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query and use node name for resize operation Fiona Ebner
@ 2025-06-30  6:23   ` DERUMIER, Alexandre via pve-devel
  2025-06-30  7:52     ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-06-30  6:23 UTC (permalink / raw)
  To: pve-devel; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 14569 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query and use node name for resize operation
Date: Mon, 30 Jun 2025 06:23:41 +0000
Message-ID: <2155ed72557978d7d49feb4cf076b829cf5b20f4.camel@groupe-cyllene.com>

Hi Fiona,

From my tests, I needed to use the top throttle node to have the new
resize correctly reported to the guest:
https://lore.proxmox.com/all/mailman.947.1741688963.293.pve-devel@lists.proxmox.com/
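
A minimal sketch of what I mean (illustration only, the node name is made
up; with the new -blockdev layout the top throttle node carries the drive
id as its node name, e.g. "drive-scsi0"):

    mon_cmd(
        $vmid,
        "block_resize",
        'node-name' => "drive-scsi0",
        size => int($size),
        timeout => 60,
    );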


(I'm going to test your patch series today)


-------- Original Message --------
From: Fiona Ebner <f.ebner@proxmox.com>
Reply-To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query
and use node name for resize operation
Date: 27/06/2025 17:57:09

This also works for -blockdev, which will be used instead of -drive
starting with machine version 10.0.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/PVE/QemuServer/Blockdev.pm
b/src/PVE/QemuServer/Blockdev.pm
index e5eba33e..2a9a95e8 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -489,13 +489,15 @@ sub resize {
 
     return if !$running;
 
+    my $node_name = get_node_name_below_throttle($vmid, $deviceid);
+
     my $padding = (1024 - $size % 1024) % 1024;
     $size = $size + $padding;
 
     mon_cmd(
         $vmid,
         "block_resize",
-        device => $deviceid,
+        'node-name' => $node_name,
         size => int($size),
         timeout => 60,
     );

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query and use node name for resize operation
  2025-06-30  6:23   ` DERUMIER, Alexandre via pve-devel
@ 2025-06-30  7:52     ` Fiona Ebner
  2025-06-30 11:38       ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-06-30  7:52 UTC (permalink / raw)
  To: Proxmox VE development discussion

On 30.06.25 at 08:23, DERUMIER, Alexandre via pve-devel wrote:
> 
> Hi Fiona,
> 
> From my tests, I needed to use the top throttle node to have the new
> resize correctly reported to the guest:
> https://lore.proxmox.com/all/mailman.947.1741688963.293.pve-devel@lists.proxmox.com/

Yes, you're right! I originally queried and used the top node (I think
your version doesn't work for legacy "-drive", because the node name is
auto-generated then), but after introducing the
get_node_name_below_throttle() helper I switched to using it and didn't
re-check inside the VM.

So we'll need to switch to something like

>     my $block_info = get_block_info($vmid);
>     my $drive_id = $deviceid =~ s/^drive-//r;
>     my $inserted = $block_info->{$drive_id}->{inserted}
>         or die "no block node inserted for drive '$drive_id'\n";
> 
>     my $padding = (1024 - $size % 1024) % 1024;
>     $size = $size + $padding;
> 
>     mon_cmd(
>         $vmid,
>         "block_resize",
>         'node-name' => "$inserted->{'node-name'}",
>         size => int($size),
>         timeout => 60,

Still, it feels like a QEMU bug. I'd expect the filter node to also
report the updated size when its child node is resized. I'll see if that
is easily fixed upstream, or ask what they think.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final)
  2025-06-27 16:00 ` [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
@ 2025-06-30  8:19   ` DERUMIER, Alexandre via pve-devel
  2025-06-30  8:24     ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-06-30  8:19 UTC (permalink / raw)
  To: pve-devel; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 12620 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final)
Date: Mon, 30 Jun 2025 08:19:22 +0000
Message-ID: <d3994faada7899f907f5c2a03ed4d87154e4c2b3.camel@groupe-cyllene.com>

Patch 29/31 seems to be missing. (I don't see it on lore.proxmox.com
either.)


[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final)
  2025-06-30  8:19   ` DERUMIER, Alexandre via pve-devel
@ 2025-06-30  8:24     ` Fiona Ebner
  0 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-30  8:24 UTC (permalink / raw)
  To: Proxmox VE development discussion

On 30.06.25 at 10:19, DERUMIER, Alexandre via pve-devel wrote:
> Patch 29/31 seems to be missing. (I don't see it on lore.proxmox.com
> either.)

It's big because of all the test changes, and it's still waiting for
moderator approval.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 29/31] command line: switch to blockdev starting with machine version 10.0
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 29/31] command line: switch to blockdev starting with machine version 10.0 Fiona Ebner
@ 2025-06-30 10:15   ` Fabian Grünbichler
  2025-06-30 10:57     ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: Fabian Grünbichler @ 2025-06-30 10:15 UTC (permalink / raw)
  To: Proxmox VE development discussion

On June 27, 2025 5:57 pm, Fiona Ebner wrote:
> Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> 
> Changes since last series:
> * Support for live restore and live import.
> * Use Blockdev::{attach,detach} helpers for hot{,un}plug.
> * Adapt to changes from previous patches.
> * Also switch for medium change.
> 
>  src/PVE/QemuServer.pm                         | 146 ++++++++++++++----
>  src/PVE/QemuServer/BlockJob.pm                |  32 ++--
>  src/PVE/QemuServer/Blockdev.pm                | 101 ++++++++----
>  src/PVE/QemuServer/OVMF.pm                    |  21 ++-
>  src/test/MigrationTest/QemuMigrateMock.pm     |   5 +
>  src/test/cfg2cmd/aio.conf.cmd                 |  42 +++--
>  src/test/cfg2cmd/bootorder-empty.conf.cmd     |  11 +-
>  src/test/cfg2cmd/bootorder-legacy.conf.cmd    |  11 +-
>  src/test/cfg2cmd/bootorder.conf.cmd           |  11 +-
>  ...putype-icelake-client-deprecation.conf.cmd |   5 +-
>  src/test/cfg2cmd/efi-raw-template.conf.cmd    |   7 +-
>  src/test/cfg2cmd/efi-raw.conf.cmd             |   7 +-
>  .../cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd  |   7 +-
>  src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd |   7 +-
>  src/test/cfg2cmd/efidisk-on-rbd.conf.cmd      |   7 +-
>  src/test/cfg2cmd/ide.conf.cmd                 |  15 +-
>  src/test/cfg2cmd/q35-ide.conf.cmd             |  15 +-
>  .../q35-linux-hostpci-mapping.conf.cmd        |   7 +-
>  .../q35-linux-hostpci-multifunction.conf.cmd  |   7 +-
>  .../q35-linux-hostpci-template.conf.cmd       |  10 +-
>  ...q35-linux-hostpci-x-pci-overrides.conf.cmd |   7 +-
>  src/test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   7 +-
>  src/test/cfg2cmd/q35-simple.conf.cmd          |   7 +-
>  src/test/cfg2cmd/seabios_serial.conf.cmd      |   5 +-
>  src/test/cfg2cmd/sev-es.conf.cmd              |   7 +-
>  src/test/cfg2cmd/sev-std.conf.cmd             |   7 +-
>  src/test/cfg2cmd/simple-btrfs.conf.cmd        |  14 +-
>  src/test/cfg2cmd/simple-cifs.conf.cmd         |  14 +-
>  .../cfg2cmd/simple-disk-passthrough.conf.cmd  |   9 +-
>  src/test/cfg2cmd/simple-lvm.conf.cmd          |  12 +-
>  src/test/cfg2cmd/simple-lvmthin.conf.cmd      |  12 +-
>  src/test/cfg2cmd/simple-rbd.conf.cmd          |  26 ++--
>  src/test/cfg2cmd/simple-virtio-blk.conf.cmd   |   5 +-
>  .../cfg2cmd/simple-zfs-over-iscsi.conf.cmd    |  14 +-
>  src/test/cfg2cmd/simple1-template.conf.cmd    |   8 +-
>  src/test/cfg2cmd/simple1.conf.cmd             |   5 +-
>  src/test/run_config2command_tests.pl          |  19 +++
>  37 files changed, 446 insertions(+), 206 deletions(-)
> 
> diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
> index d9284843..3eb7f339 100644
> --- a/src/PVE/QemuServer.pm
> +++ b/src/PVE/QemuServer.pm
> @@ -3640,19 +3640,38 @@ sub config_to_command {
>              }
>  
>              my $live_restore = $live_restore_backing->{$ds};
> -            my $live_blockdev_name = undef;
> -            if ($live_restore) {
> -                $live_blockdev_name = $live_restore->{name};
> -                push @$devices, '-blockdev', $live_restore->{blockdev};
> +
> +            if (min_version($machine_version, 10, 0)) { # for the switch to -blockdev
> +                my $throttle_group = PVE::QemuServer::Blockdev::generate_throttle_group($drive);
> +                push @$cmd, '-object', to_json($throttle_group, { canonical => 1 });
> +
> +                my $extra_blockdev_options = {};
> +                $extra_blockdev_options->{'live-restore'} = $live_restore if $live_restore;
> +                # extra protection for templates, but SATA and IDE don't support it..
> +                $extra_blockdev_options->{'read-only'} = 1 if drive_is_read_only($conf, $drive);
> +
> +                if ($drive->{file} ne 'none') {
> +                    my $blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
> +                        $storecfg, $drive, $extra_blockdev_options,
> +                    );
> +                    push @$devices, '-blockdev', to_json($blockdev, { canonical => 1 });
> +                }
> +            } else {
> +                my $live_blockdev_name = undef;
> +                if ($live_restore) {
> +                    $live_blockdev_name = $live_restore->{name};
> +                    push @$devices, '-blockdev', $live_restore->{blockdev};
> +                }
> +
> +                my $drive_cmd =
> +                    print_drive_commandline_full($storecfg, $vmid, $drive, $live_blockdev_name);
> +
> +                # extra protection for templates, but SATA and IDE don't support it..
> +                $drive_cmd .= ',readonly=on' if drive_is_read_only($conf, $drive);
> +
> +                push @$devices, '-drive', $drive_cmd;
>              }
>  
> -            my $drive_cmd =
> -                print_drive_commandline_full($storecfg, $vmid, $drive, $live_blockdev_name);
> -
> -            # extra protection for templates, but SATA and IDE don't support it..
> -            $drive_cmd .= ',readonly=on' if drive_is_read_only($conf, $drive);
> -
> -            push @$devices, '-drive', $drive_cmd;
>              push @$devices, '-device',
>                  print_drivedevice_full(
>                      $storecfg, $conf, $vmid, $drive, $bridges, $arch, $machine_type,
> @@ -4050,28 +4069,63 @@ sub qemu_iothread_del {
>  sub qemu_driveadd {
>      my ($storecfg, $vmid, $device) = @_;
>  
> -    my $drive = print_drive_commandline_full($storecfg, $vmid, $device, undef);
> -    $drive =~ s/\\/\\\\/g;
> -    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
> +    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
>  
> -    # If the command succeeds qemu prints: "OK"
> -    return 1 if $ret =~ m/OK/s;
> +    # for the switch to -blockdev
> +    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {

isn't this part here basically Blockdev::attach?

> +        my $throttle_group = PVE::QemuServer::Blockdev::generate_throttle_group($device);
> +        mon_cmd($vmid, 'object-add', %$throttle_group);
>  
> -    die "adding drive failed: $ret\n";
> +        eval {
> +            my $blockdev =
> +                PVE::QemuServer::Blockdev::generate_drive_blockdev($storecfg, $device, {});
> +            mon_cmd($vmid, 'blockdev-add', %$blockdev);
> +        };
> +        if (my $err = $@) {
> +            my $drive_id = PVE::QemuServer::Drive::get_drive_id($device);
> +            eval { mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id"); };
> +            warn $@ if $@;
> +            die $err;
> +        }
> +
> +        return 1;
> +    } else {
> +        my $drive = print_drive_commandline_full($storecfg, $vmid, $device, undef);
> +        $drive =~ s/\\/\\\\/g;
> +        my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
> +
> +        # If the command succeeds qemu prints: "OK"
> +        return 1 if $ret =~ m/OK/s;
> +
> +        die "adding drive failed: $ret\n";
> +    }
>  }
>  
>  sub qemu_drivedel {
>      my ($vmid, $deviceid) = @_;
>  
> -    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_del drive-$deviceid", 10 * 60);
> -    $ret =~ s/^\s+//;
> +    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
>  
> -    return 1 if $ret eq "";
> +    # for the switch to -blockdev
> +    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {

and this here Blockdev::detach?

> +        # QEMU recursively auto-removes the file children, i.e. file and format node below the top
> +        # node and also implicit backing children referenced by a qcow2 image.
> +        eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "drive-$deviceid"); };
> +        die "deleting blockdev $deviceid failed : $@\n" if $@;
> +        # FIXME ignore already removed scenario like below?
>  
> -    # NB: device not found errors mean the drive was auto-deleted and we ignore the error
> -    return 1 if $ret =~ m/Device \'.*?\' not found/s;
> +        mon_cmd($vmid, 'object-del', id => "throttle-drive-$deviceid");
> +    } else {
> +        my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_del drive-$deviceid", 10 * 60);
> +        $ret =~ s/^\s+//;
>  
> -    die "deleting drive $deviceid failed : $ret\n";
> +        return 1 if $ret eq "";
> +
> +        # NB: device not found errors mean the drive was auto-deleted and we ignore the error
> +        return 1 if $ret =~ m/Device \'.*?\' not found/s;
> +
> +        die "deleting drive $deviceid failed : $ret\n";
> +    }
>  }
>  
>  sub qemu_deviceaddverify {
> @@ -7006,10 +7060,22 @@ sub pbs_live_restore {
>          print "restoring '$ds' to '$drive->{file}'\n";
>  
>          my $pbs_name = "drive-${confname}-pbs";
> -        $live_restore_backing->{$confname} = {
> -            name => $pbs_name,
> -            blockdev => print_pbs_blockdev($pbs_conf, $pbs_name),
> -        };
> +
> +        $live_restore_backing->{$confname} = { name => $pbs_name };
> +
> +        # add blockdev information
> +        my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf, undef, $conf->{arch});
> +        my $machine_version = PVE::QemuServer::Machine::extract_version(
> +            $machine_type,
> +            PVE::QemuServer::Helpers::kvm_user_version(),
> +        );
> +        if (min_version($machine_version, 10, 0)) { # for the switch to -blockdev
> +            $live_restore_backing->{$confname}->{blockdev} =
> +                PVE::QemuServer::Blockdev::generate_pbs_blockdev($pbs_conf, $pbs_name);
> +        } else {
> +            $live_restore_backing->{$confname}->{blockdev} =
> +                print_pbs_blockdev($pbs_conf, $pbs_name);
> +        }
>      }
>  
>      my $drives_streamed = 0;
> @@ -7086,6 +7152,8 @@ sub pbs_live_restore {
>  sub live_import_from_files {
>      my ($mapping, $vmid, $conf, $restore_options) = @_;
>  
> +    my $storecfg = PVE::Storage::config();
> +
>      my $live_restore_backing = {};
>      my $sources_to_remove = [];
>      for my $dev (keys %$mapping) {
> @@ -7103,18 +7171,30 @@ sub live_import_from_files {
>          die "invalid format '$format' for '$dev' mapping\n"
>              if !grep { $format eq $_ } qw(raw qcow2 vmdk);
>  
> -        $live_restore_backing->{$dev} = {
> -            name => "drive-$dev-restore",
> -            blockdev => "driver=$format,node-name=drive-$dev-restore"
> +        $live_restore_backing->{$dev} = { name => "drive-$dev-restore" };
> +
> +        my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf, undef, $conf->{arch});
> +        my $machine_version = PVE::QemuServer::Machine::extract_version(
> +            $machine_type,
> +            PVE::QemuServer::Helpers::kvm_user_version(),
> +        );
> +        if (min_version($machine_version, 10, 0)) { # for the switch to -blockdev
> +            my ($interface, $index) = PVE::QemuServer::Drive::parse_drive_interface($dev);
> +            my $drive = { file => $volid, interface => $interface, index => $index };
> +            my $blockdev =
> +                PVE::QemuServer::Blockdev::generate_drive_blockdev($storecfg, $drive, {});
> +            $live_restore_backing->{$dev}->{blockdev} = $blockdev;
> +        } else {
> +            $live_restore_backing->{$dev}->{blockdev} =
> +                "driver=$format,node-name=drive-$dev-restore"
>                  . ",read-only=on"
> -                . ",file.driver=file,file.filename=$path",
> -        };
> +                . ",file.driver=file,file.filename=$path";
> +        }
>  
>          my $source_volid = $info->{'delete-after-finish'};
>          push $sources_to_remove->@*, $source_volid if defined($source_volid);
>      }
>  
> -    my $storecfg = PVE::Storage::config();
>      eval {
>  
>          # make sure HA doesn't interrupt our restore by stopping the VM
> diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
> index 212d6a4f..1f242cca 100644
> --- a/src/PVE/QemuServer/BlockJob.pm
> +++ b/src/PVE/QemuServer/BlockJob.pm
> @@ -527,20 +527,24 @@ sub mirror {
>      my ($source, $dest, $jobs, $completion, $options) = @_;
>  
>      # for the switch to -blockdev
> -
> -    my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
> -    qemu_drive_mirror(
> -        $source->{vmid},
> -        $drive_id,
> -        $dest->{volid},
> -        $dest->{vmid},
> -        $dest->{'zero-initialized'},
> -        $jobs,
> -        $completion,
> -        $options->{'guest-agent'},
> -        $options->{bwlimit},
> -        $source->{bitmap},
> -    );
> +    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($source->{vmid});
> +    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
> +        blockdev_mirror($source, $dest, $jobs, $completion, $options);
> +    } else {
> +        my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
> +        qemu_drive_mirror(
> +            $source->{vmid},
> +            $drive_id,
> +            $dest->{volid},
> +            $dest->{vmid},
> +            $dest->{'zero-initialized'},
> +            $jobs,
> +            $completion,
> +            $options->{'guest-agent'},
> +            $options->{bwlimit},
> +            $source->{bitmap},
> +        );
> +    }
>  }
>  
>  1;
> diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
> index 4aea1abd..b0b88ea3 100644
> --- a/src/PVE/QemuServer/Blockdev.pm
> +++ b/src/PVE/QemuServer/Blockdev.pm
> @@ -579,18 +579,24 @@ my sub blockdev_change_medium {
>  sub change_medium {
>      my ($storecfg, $vmid, $qdev_id, $drive) = @_;
>  
> -    # force eject if locked
> -    mon_cmd($vmid, "eject", force => JSON::true, id => "$qdev_id");
> +    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
> +    # for the switch to -blockdev
> +    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
> +        blockdev_change_medium($storecfg, $vmid, $qdev_id, $drive);
> +    } else {
> +        # force eject if locked
> +        mon_cmd($vmid, "eject", force => JSON::true, id => "$qdev_id");
>  
> -    my ($path, $format) = PVE::QemuServer::Drive::get_path_and_format($storecfg, $drive);
> +        my ($path, $format) = PVE::QemuServer::Drive::get_path_and_format($storecfg, $drive);
>  
> -    if ($path) { # no path for 'none'
> -        mon_cmd(
> -            $vmid, "blockdev-change-medium",
> -            id => "$qdev_id",
> -            filename => "$path",
> -            format => "$format",
> -        );
> +        if ($path) { # no path for 'none'
> +            mon_cmd(
> +                $vmid, "blockdev-change-medium",
> +                id => "$qdev_id",
> +                filename => "$path",
> +                format => "$format",
> +            );
> +        }
>      }
>  }
>  
> @@ -620,28 +626,59 @@ sub set_io_throttle {
>  
>      return if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
>  
> -    mon_cmd(
> -        $vmid, "block_set_io_throttle",
> -        device => $deviceid,
> -        bps => int($bps),
> -        bps_rd => int($bps_rd),
> -        bps_wr => int($bps_wr),
> -        iops => int($iops),
> -        iops_rd => int($iops_rd),
> -        iops_wr => int($iops_wr),
> -        bps_max => int($bps_max),
> -        bps_rd_max => int($bps_rd_max),
> -        bps_wr_max => int($bps_wr_max),
> -        iops_max => int($iops_max),
> -        iops_rd_max => int($iops_rd_max),
> -        iops_wr_max => int($iops_wr_max),
> -        bps_max_length => int($bps_max_length),
> -        bps_rd_max_length => int($bps_rd_max_length),
> -        bps_wr_max_length => int($bps_wr_max_length),
> -        iops_max_length => int($iops_max_length),
> -        iops_rd_max_length => int($iops_rd_max_length),
> -        iops_wr_max_length => int($iops_wr_max_length),
> -    );
> +    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
> +    # for the switch to -blockdev
> +    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
> +        mon_cmd(
> +            $vmid,
> +            'qom-set',
> +            path => "throttle-$deviceid",
> +            property => "limits",
> +            value => {
> +                'bps-total' => int($bps),
> +                'bps-read' => int($bps_rd),
> +                'bps-write' => int($bps_wr),
> +                'iops-total' => int($iops),
> +                'iops-read' => int($iops_rd),
> +                'iops-write' => int($iops_wr),
> +                'bps-total-max' => int($bps_max),
> +                'bps-read-max' => int($bps_rd_max),
> +                'bps-write-max' => int($bps_wr_max),
> +                'iops-total-max' => int($iops_max),
> +                'iops-read-max' => int($iops_rd_max),
> +                'iops-write-max' => int($iops_wr_max),
> +                'bps-total-max-length' => int($bps_max_length),
> +                'bps-read-max-length' => int($bps_rd_max_length),
> +                'bps-write-max-length' => int($bps_wr_max_length),
> +                'iops-total-max-length' => int($iops_max_length),
> +                'iops-read-max-length' => int($iops_rd_max_length),
> +                'iops-write-max-length' => int($iops_wr_max_length),
> +            },
> +        );
> +    } else {
> +        mon_cmd(
> +            $vmid, "block_set_io_throttle",
> +            device => $deviceid,
> +            bps => int($bps),
> +            bps_rd => int($bps_rd),
> +            bps_wr => int($bps_wr),
> +            iops => int($iops),
> +            iops_rd => int($iops_rd),
> +            iops_wr => int($iops_wr),
> +            bps_max => int($bps_max),
> +            bps_rd_max => int($bps_rd_max),
> +            bps_wr_max => int($bps_wr_max),
> +            iops_max => int($iops_max),
> +            iops_rd_max => int($iops_rd_max),
> +            iops_wr_max => int($iops_wr_max),
> +            bps_max_length => int($bps_max_length),
> +            bps_rd_max_length => int($bps_rd_max_length),
> +            bps_wr_max_length => int($bps_wr_max_length),
> +            iops_max_length => int($iops_max_length),
> +            iops_rd_max_length => int($iops_rd_max_length),
> +            iops_wr_max_length => int($iops_wr_max_length),
> +        );
> +    }
>  }
>  
>  1;
> diff --git a/src/PVE/QemuServer/OVMF.pm b/src/PVE/QemuServer/OVMF.pm
> index dde81eb7..a7239614 100644
> --- a/src/PVE/QemuServer/OVMF.pm
> +++ b/src/PVE/QemuServer/OVMF.pm
> @@ -3,7 +3,7 @@ package PVE::QemuServer::OVMF;
>  use strict;
>  use warnings;
>  
> -use JSON;
> +use JSON qw(to_json);
>  
>  use PVE::RESTEnvironment qw(log_warn);
>  use PVE::Storage;
> @@ -210,10 +210,21 @@ sub print_ovmf_commandline {
>          }
>          push $cmd->@*, '-bios', get_ovmf_files($hw_info->{arch}, undef, undef, $amd_sev_type);
>      } else {
> -        my ($code_drive_str, $var_drive_str) =
> -            print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $hw_info, $version_guard);
> -        push $cmd->@*, '-drive', $code_drive_str;
> -        push $cmd->@*, '-drive', $var_drive_str;
> +        if ($version_guard->(10, 0, 0)) { # for the switch to -blockdev
> +            my ($code_blockdev, $vars_blockdev, $throttle_group) =
> +                generate_ovmf_blockdev($conf, $storecfg, $vmid, $hw_info);
> +
> +            push $cmd->@*, '-object', to_json($throttle_group, { canonical => 1 });
> +            push $cmd->@*, '-blockdev', to_json($code_blockdev, { canonical => 1 });
> +            push $cmd->@*, '-blockdev', to_json($vars_blockdev, { canonical => 1 });
> +            push $machine_flags->@*, "pflash0=$code_blockdev->{'node-name'}",
> +                "pflash1=$vars_blockdev->{'node-name'}";
> +        } else {
> +            my ($code_drive_str, $var_drive_str) =
> +                print_ovmf_drive_commandlines($conf, $storecfg, $vmid, $hw_info, $version_guard);
> +            push $cmd->@*, '-drive', $code_drive_str;
> +            push $cmd->@*, '-drive', $var_drive_str;
> +        }
>      }
>  
>      return ($cmd, $machine_flags);


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 17/31] block job: add blockdev mirror
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 17/31] block job: add blockdev mirror Fiona Ebner
@ 2025-06-30 10:15   ` Fabian Grünbichler
  2025-07-01  9:21     ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: Fabian Grünbichler @ 2025-06-30 10:15 UTC (permalink / raw)
  To: Proxmox VE development discussion

On June 27, 2025 5:57 pm, Fiona Ebner wrote:
> With blockdev-mirror, it is possible to change the aio setting on the
> fly and this is useful for migrations between storages where one wants
> to use io_uring by default and the other doesn't.
> 
> The node below the top throttle node needs to be replaced so that the
> limits stay intact and that the top node still has the drive ID as the
> node name. That node is not necessarily a format node. For example, it
> could also be a zeroinit node from an earlier mirror operation. So
> query QEMU itself.
> 
> QEMU automatically drops nodes after mirror only if they were
> implicitly added, i.e. not explicitly added via blockdev-add. Since a
> previous mirror target is explicitly added (and not just implicitly as
> the child of a top throttle node), it is necessary to detach the
> appropriate block node after mirror.
> 
> Already mock blockdev_mirror in the tests.
> 
> Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> 
> NOTE: Changes since last series:
> * Query QEMU for file child.
> * Remove appropriate node after mirror.
> * Delete format property from cloned drive hash for destination.
> 
>  src/PVE/QemuServer/BlockJob.pm            | 176 ++++++++++++++++++++++
>  src/test/MigrationTest/QemuMigrateMock.pm |   8 +
>  2 files changed, 184 insertions(+)
> 
> diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
> index 68d0431f..212d6a4f 100644
> --- a/src/PVE/QemuServer/BlockJob.pm
> +++ b/src/PVE/QemuServer/BlockJob.pm
> @@ -4,12 +4,14 @@ use strict;
>  use warnings;
>  
>  use JSON;
> +use Storable qw(dclone);
>  
>  use PVE::Format qw(render_duration render_bytes);
>  use PVE::RESTEnvironment qw(log_warn);
>  use PVE::Storage;
>  
>  use PVE::QemuServer::Agent qw(qga_check_running);
> +use PVE::QemuServer::Blockdev;
>  use PVE::QemuServer::Drive qw(checked_volume_format);
>  use PVE::QemuServer::Monitor qw(mon_cmd);
>  use PVE::QemuServer::RunState;
> @@ -187,10 +189,17 @@ sub qemu_drive_mirror_monitor {
>                          print "$job_id: Completing block job...\n";
>  
>                          my $completion_command;
> +                        # For blockdev, need to detach appropriate node. QEMU will only drop it if
> +                        # it was implicitly added (e.g. as the child of a top throttle node), but
> +                        # not if it was explicitly added via blockdev-add (e.g. as a previous mirror
> +                        # target).
> +                        my $detach_node_name;
>                          if ($completion eq 'complete') {
>                              $completion_command = 'block-job-complete';
> +                            $detach_node_name = $jobs->{$job_id}->{'source-node-name'};
>                          } elsif ($completion eq 'cancel') {
>                              $completion_command = 'block-job-cancel';
> +                            $detach_node_name = $jobs->{$job_id}->{'target-node-name'};
>                          } else {
>                              die "invalid completion value: $completion\n";
>                          }
> @@ -202,6 +211,9 @@ sub qemu_drive_mirror_monitor {
>                          } elsif ($err) {
>                              die "$job_id: block job cannot be completed - $err\n";
>                          } else {
> +                            $jobs->{$job_id}->{'detach-node-name'} = $detach_node_name
> +                                if $detach_node_name;
> +
>                              print "$job_id: Completed successfully.\n";
>                              $jobs->{$job_id}->{complete} = 1;
>                          }
> @@ -347,6 +359,170 @@ sub qemu_drive_mirror_switch_to_active_mode {
>      }
>  }
>  
> +=pod
> +
> +=head3 blockdev_mirror
> +
> +    blockdev_mirror($source, $dest, $jobs, $completion, $options)
> +
> +Mirrors the volume of a running VM specified by C<$source> to destination C<$dest>.
> +
> +=over
> +
> +=item C<$source>
> +
> +The source information consists of:
> +
> +=over
> +
> +=item C<< $source->{vmid} >>
> +
> +The ID of the running VM the source volume belongs to.
> +
> +=item C<< $source->{drive} >>
> +
> +The drive configuration of the source volume as currently attached to the VM.
> +
> +=item C<< $source->{bitmap} >>
> +
> +(optional) Use incremental mirroring based on the specified bitmap.
> +
> +=back
> +
> +=item C<$dest>
> +
> +The destination information consists of:
> +
> +=over
> +
> +=item C<< $dest->{volid} >>
> +
> +The volume ID of the target volume.
> +
> +=item C<< $dest->{vmid} >>
> +
> +(optional) The ID of the VM the target volume belongs to. Defaults to C<< $source->{vmid} >>.
> +
> +=item C<< $dest->{'zero-initialized'} >>
> +
> +(optional) True, if the target volume is zero-initialized.
> +
> +=back
> +
> +=item C<$jobs>
> +
> +(optional) Other jobs in the transaction when multiple volumes should be mirrored. All jobs must be
> +ready before completion can happen.
> +
> +=item C<$completion>
> +
> +Completion mode, default is C<complete>:
> +
> +=over
> +
> +=item C<complete>
> +
> +Wait until all jobs are ready, block-job-complete them (default). This means switching the original
> +drive to use the new target.
> +
> +=item C<cancel>
> +
> +Wait until all jobs are ready, block-job-cancel them. This means not switching the original drive
> +to use the new target.
> +
> +=item C<skip>
> +
> +Wait until all jobs are ready, return with block jobs in ready state.
> +
> +=item C<auto>
> +
> +Wait until all jobs disappear, only use for jobs which complete automatically.
> +
> +=back
> +
> +=item C<$options>
> +
> +Further options:
> +
> +=over
> +
> +=item C<< $options->{'guest-agent'} >>
> +
> +Whether the guest agent is configured for the VM. It will be used to freeze and thaw the filesystems for
> +consistency when the target belongs to a different VM.
> +
> +=item C<< $options->{'bwlimit'} >>
> +
> +The bandwidth limit to use for the mirroring operation, in KiB/s.
> +
> +=back
> +
> +=back
> +
> +=cut
> +
> +sub blockdev_mirror {
> +    my ($source, $dest, $jobs, $completion, $options) = @_;
> +
> +    my $vmid = $source->{vmid};
> +
> +    my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
> +    my $device_id = "drive-$drive_id";
> +
> +    my $storecfg = PVE::Storage::config();
> +
> +    # Need to replace the node below the top node. This is not necessarily a format node, for
> +    # example, it can also be a zeroinit node from a previous mirror! So query QEMU itself.
> +    my $child_info = mon_cmd($vmid, 'block-node-query-file-child', 'node-name' => $device_id);
> +    my $source_node_name = $child_info->{'node-name'};

isn't this semantically equivalent to get_node_name_below_throttle? that
one does a few more checks and is slightly more expensive, but
validating that the top node is a throttle node as expected might be a
good thing here as well?

depending on how we see things, we might want to add a `$assert`
parameter to that helper though for call sites that are only happening
in blockdev context - to avoid the fallback in case the top node is not
a throttle group, and instead die?
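
Roughly what I have in mind, just as a sketch - the actual helper from patch 12/31 may well look different:

    sub get_node_name_below_throttle {
        my ($vmid, $device_id, $assert) = @_;

        my $nodes = mon_cmd($vmid, 'query-named-block-nodes');
        my ($top) = grep { $_->{'node-name'} eq $device_id } $nodes->@*;
        die "no block node with node-name '$device_id' attached\n" if !$top;

        if ($top->{drv} ne 'throttle') {
            die "top node '$device_id' is not a throttle node\n" if $assert;
            return $device_id; # fallback: use the top node itself
        }

        my $child = mon_cmd($vmid, 'block-node-query-file-child', 'node-name' => $device_id);
        return $child->{'node-name'};
    }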

> +
> +    # Copy original drive config (aio, cache, discard, ...):
> +    my $dest_drive = dclone($source->{drive});
> +    delete($dest_drive->{format}); # cannot use the source's format
> +    $dest_drive->{file} = $dest->{volid};
> +
> +    my $generate_blockdev_opts = {};
> +    $generate_blockdev_opts->{'zero-initialized'} = 1 if $dest->{'zero-initialized'};
> +
> +    # Note that if 'aio' is not explicitly set, i.e. default, it can change if source and target
> +    # don't both allow or both not allow 'io_uring' as the default.
> +    my $target_drive_blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
> +        $storecfg, $dest_drive, $generate_blockdev_opts,
> +    );
> +    # Top node is the throttle group, must use the file child.
> +    my $target_blockdev = $target_drive_blockdev->{file};

should we have an option for generate_drive_blockdev to skip the
throttle group/top node? then we could just use Blockdev::attach here..

at least if we make that return the top-level node name or blockdev..
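
To sketch the idea (the 'skip-throttle' option name and the return value are made up for illustration, not taken from the patches):

    # in generate_drive_blockdev() - one more early return:
    #     return $child if $options->{fleecing} || $options->{'skip-throttle'};

    # and attach() handing back what it added:
    sub attach {
        my ($storecfg, $vmid, $drive, $options) = @_;

        my $blockdev = generate_drive_blockdev($storecfg, $drive, $options);
        # ... throttle group handling as in patch 05/31, skipped for non-top nodes ...
        blockdev_add($vmid, $blockdev);

        return $blockdev; # callers like blockdev_mirror() could use $blockdev->{'node-name'}
    }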

> +
> +    PVE::QemuServer::Monitor::mon_cmd($vmid, 'blockdev-add', $target_blockdev->%*);
> +    my $target_node_name = $target_blockdev->{'node-name'};
> +
> +    $jobs = {} if !$jobs;
> +    my $jobid = "mirror-$drive_id";
> +    $jobs->{$jobid} = {
> +        'source-node-name' => $source_node_name,
> +        'target-node-name' => $target_node_name,
> +    };
> +
> +    my $qmp_opts = common_mirror_qmp_options(
> +        $device_id, $target_node_name, $source->{bitmap}, $options->{bwlimit},
> +    );
> +
> +    $qmp_opts->{'job-id'} = "$jobid";
> +    $qmp_opts->{replaces} = "$source_node_name";
> +
> +    # if a job already runs for this device we get an error, catch it for cleanup
> +    eval { mon_cmd($vmid, "blockdev-mirror", $qmp_opts->%*); };
> +    if (my $err = $@) {
> +        eval { qemu_blockjobs_cancel($vmid, $jobs) };
> +        log_warn("unable to cancel block jobs - $@");
> +        eval { PVE::QemuServer::Blockdev::detach($vmid, $target_node_name); };
> +        log_warn("unable to delete blockdev '$target_node_name' - $@");
> +        die "error starting blockdev mirrror - $err";
> +    }
> +    qemu_drive_mirror_monitor(
> +        $vmid, $dest->{vmid}, $jobs, $completion, $options->{'guest-agent'}, 'mirror',
> +    );
> +}
> +
>  sub mirror {
>      my ($source, $dest, $jobs, $completion, $options) = @_;
>  
> diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
> index 25a4f9b2..c52df84b 100644
> --- a/src/test/MigrationTest/QemuMigrateMock.pm
> +++ b/src/test/MigrationTest/QemuMigrateMock.pm
> @@ -9,6 +9,7 @@ use Test::MockModule;
>  use MigrationTest::Shared;
>  
>  use PVE::API2::Qemu;
> +use PVE::QemuServer::Drive;
>  use PVE::Storage;
>  use PVE::Tools qw(file_set_contents file_get_contents);
>  
> @@ -167,6 +168,13 @@ $qemu_server_blockjob_module->mock(
>  
>          common_mirror_mock($vmid, $drive_id);
>      },
> +    blockdev_mirror => sub {
> +        my ($source, $dest, $jobs, $completion, $options) = @_;
> +
> +        my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
> +
> +        common_mirror_mock($source->{vmid}, $drive_id);
> +    },
>      qemu_drive_mirror_monitor => sub {
>          my ($vmid, $vmiddst, $jobs, $completion, $qga) = @_;
>  
> -- 
> 2.47.2
> 

* Re: [pve-devel] [PATCH qemu-server 08/31] backup: use blockdev for TPM state file
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 08/31] backup: use blockdev for TPM state file Fiona Ebner
@ 2025-06-30 10:15   ` Fabian Grünbichler
  2025-07-01  8:22     ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: Fabian Grünbichler @ 2025-06-30 10:15 UTC (permalink / raw)
  To: Proxmox VE development discussion

On June 27, 2025 5:57 pm, Fiona Ebner wrote:
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>  src/PVE/QemuServer/Blockdev.pm | 22 +++++++++++++++++++++-
>  src/PVE/VZDump/QemuServer.pm   | 19 ++++++++++---------
>  2 files changed, 31 insertions(+), 10 deletions(-)
> 
> diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
> index 28a759a8..85887ab7 100644
> --- a/src/PVE/QemuServer/Blockdev.pm
> +++ b/src/PVE/QemuServer/Blockdev.pm
> @@ -14,6 +14,18 @@ use PVE::Storage;
>  use PVE::QemuServer::Drive qw(drive_is_cdrom);
>  use PVE::QemuServer::Monitor qw(mon_cmd);
>  
> +my sub tpm_backup_node_name {
> +    my ($type, $drive_id) = @_;
> +
> +    if ($type eq 'fmt') {
> +        return "drive-$drive_id-backup"; # this is the top node
> +    } elsif ($type eq 'file') {
> +        return "$drive_id-backup-file"; # drop the "drive-" prefix to be sure, max length is 31
> +    }

similar question as with previous patch

> +
> +    die "unknown node type for fleecing '$type'";

s/fleecing/tpm backup node/ ?

> +}
> +
>  my sub fleecing_node_name {
>      my ($type, $drive_id) = @_;
>  
> @@ -36,6 +48,7 @@ my sub get_node_name {
>      my ($type, $drive_id, $volid, $options) = @_;
>  
>      return fleecing_node_name($type, $drive_id) if $options->{fleecing};
> +    return tpm_backup_node_name($type, $drive_id) if $options->{'tpm-backup'};
>  
>      my $snap = $options->{'snapshot-name'};
>  
> @@ -235,7 +248,8 @@ sub generate_drive_blockdev {
>      my $child = generate_file_blockdev($storecfg, $drive, $options);
>      $child = generate_format_blockdev($storecfg, $drive, $child, $options);
>  
> -    return $child if $options->{fleecing}; # for fleecing, this is already the top node
> +    # for fleecing and TPM backup, this is already the top node
> +    return $child if $options->{fleecing} || $options->{'tpm-backup'};
>  
>      # this is the top filter entry point, use $drive-drive_id as nodename
>      return {
> @@ -377,6 +391,12 @@ sub detach {
>      return;
>  }
>  
> +sub detach_tpm_backup_node {
> +    my ($vmid) = @_;
> +
> +    detach($vmid, "drive-tpmstate0-backup");
> +}
> +
>  sub detach_fleecing_block_nodes {
>      my ($vmid, $log_func) = @_;
>  
> diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
> index 8b643bc4..f3e292e7 100644
> --- a/src/PVE/VZDump/QemuServer.pm
> +++ b/src/PVE/VZDump/QemuServer.pm
> @@ -158,7 +158,7 @@ sub prepare {
>          if ($ds eq 'tpmstate0') {
>              # TPM drive only exists for backup, which is reflected in the name
>              $diskinfo->{qmdevice} = 'drive-tpmstate0-backup';
> -            $task->{tpmpath} = $path;
> +            $task->{'tpm-volid'} = $volid;
>          }
>  
>          if (-b $path) {
> @@ -474,24 +474,25 @@ my $query_backup_status_loop = sub {
>  my $attach_tpmstate_drive = sub {
>      my ($self, $task, $vmid) = @_;
>  
> -    return if !$task->{tpmpath};
> +    return if !$task->{'tpm-volid'};
>  
>      # unconditionally try to remove the tpmstate-named drive - it only exists
>      # for backing up, and avoids errors if left over from some previous event
> -    eval { PVE::QemuServer::qemu_drivedel($vmid, "tpmstate0-backup"); };
> +    eval { PVE::QemuServer::Blockdev::detach_tpm_backup_node($vmid); };
>  
>      $self->loginfo('attaching TPM drive to QEMU for backup');
>  
> -    my $drive = "file=$task->{tpmpath},if=none,read-only=on,id=drive-tpmstate0-backup";
> -    $drive =~ s/\\/\\\\/g;
> -    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
> -    die "attaching TPM drive failed - $ret\n" if $ret !~ m/OK/s;
> +    my $drive = { file => $task->{'tpm-volid'}, interface => 'tpmstate', index => 0 };
> +    my $extra_options = { 'tpm-backup' => 1, 'read-only' => 1 };
> +    PVE::QemuServer::Blockdev::attach($self->{storecfg}, $vmid, $drive, $extra_options);
>  };
>  
>  my $detach_tpmstate_drive = sub {
>      my ($task, $vmid) = @_;
> -    return if !$task->{tpmpath} || !PVE::QemuServer::check_running($vmid);
> -    eval { PVE::QemuServer::qemu_drivedel($vmid, "tpmstate0-backup"); };
> +
> +    return if !$task->{'tpm-volid'} || !PVE::QemuServer::Helpers::vm_running_locally($vmid);
> +
> +    eval { PVE::QemuServer::Blockdev::detach_tpm_backup_node($vmid); };
>  };
>  
>  my sub add_backup_performance_options {
> -- 
> 2.47.2
> 

* Re: [pve-devel] [PATCH qemu-server 07/31] backup: use blockdev for fleecing images
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 07/31] backup: use blockdev for fleecing images Fiona Ebner
@ 2025-06-30 10:15   ` Fabian Grünbichler
  2025-07-01  8:20     ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: Fabian Grünbichler @ 2025-06-30 10:15 UTC (permalink / raw)
  To: Proxmox VE development discussion

On June 27, 2025 5:57 pm, Fiona Ebner wrote:
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>  src/PVE/QemuConfig.pm          | 12 ++-------
>  src/PVE/QemuServer/Blockdev.pm | 45 +++++++++++++++++++++++++++++++---
>  src/PVE/VZDump/QemuServer.pm   | 26 ++++++++++++--------
>  3 files changed, 59 insertions(+), 24 deletions(-)
> 
> diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
> index 01104723..82295641 100644
> --- a/src/PVE/QemuConfig.pm
> +++ b/src/PVE/QemuConfig.pm
> @@ -10,6 +10,7 @@ use PVE::INotify;
>  use PVE::JSONSchema;
>  use PVE::QemuMigrate::Helpers;
>  use PVE::QemuServer::Agent;
> +use PVE::QemuServer::Blockdev;
>  use PVE::QemuServer::CPUConfig;
>  use PVE::QemuServer::Drive;
>  use PVE::QemuServer::Helpers;
> @@ -675,16 +676,7 @@ sub cleanup_fleecing_images {
>          };
>          $log_func->('warn', "checking/canceling old backup job failed - $@") if $@;
>  
> -        my $block_info = mon_cmd($vmid, "query-block");
> -        for my $info ($block_info->@*) {
> -            my $device_id = $info->{device};
> -            next if $device_id !~ m/-fleecing$/;
> -
> -            $log_func->('info', "detaching (old) fleecing image for '$device_id'");
> -            $device_id =~ s/^drive-//; # re-added by qemu_drivedel()
> -            eval { PVE::QemuServer::qemu_drivedel($vmid, $device_id) };
> -            $log_func->('warn', "error detaching (old) fleecing image '$device_id' - $@") if $@;
> -        }
> +        PVE::QemuServer::Blockdev::detach_fleecing_block_nodes($vmid, $log_func);
>      }
>  
>      PVE::QemuConfig->lock_config(
> diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
> index 8a991587..28a759a8 100644
> --- a/src/PVE/QemuServer/Blockdev.pm
> +++ b/src/PVE/QemuServer/Blockdev.pm
> @@ -14,8 +14,30 @@ use PVE::Storage;
>  use PVE::QemuServer::Drive qw(drive_is_cdrom);
>  use PVE::QemuServer::Monitor qw(mon_cmd);
>  
> +my sub fleecing_node_name {
> +    my ($type, $drive_id) = @_;
> +
> +    if ($type eq 'fmt') {
> +        return "drive-$drive_id-fleecing"; # this is the top node for fleecing
> +    } elsif ($type eq 'file') {
> +        return "$drive_id-fleecing-file"; # drop the "drive-" prefix to be sure, max length is 31

should we use `e-...` instead of `...-file`, to have similar encoding as
the regular block nodes? or even let get_node_name handle it by adding a
`top` type to it?

> +    }
> +
> +    die "unknown node type for fleecing '$type'";
> +}
> +
> +my sub is_fleecing_top_node {
> +    my ($node_name) = @_;
> +
> +    return $node_name =~ m/-fleecing$/ ? 1 : 0;
> +}
> +
>  my sub get_node_name {
> -    my ($type, $drive_id, $volid, $snap) = @_;
> +    my ($type, $drive_id, $volid, $options) = @_;
> +
> +    return fleecing_node_name($type, $drive_id) if $options->{fleecing};
> +
> +    my $snap = $options->{'snapshot-name'};
>  
>      my $info = "drive=$drive_id,";
>      $info .= "snap=$snap," if defined($snap);
> @@ -151,8 +173,7 @@ sub generate_file_blockdev {
>          $blockdev->{'detect-zeroes'} = PVE::QemuServer::Drive::detect_zeroes_cmdline_option($drive);
>      }
>  
> -    $blockdev->{'node-name'} =
> -        get_node_name('file', $drive_id, $drive->{file}, $options->{'snapshot-name'});
> +    $blockdev->{'node-name'} = get_node_name('file', $drive_id, $drive->{file}, $options);
>  
>      $blockdev->{'read-only'} = read_only_json_option($drive, $options);
>  
> @@ -185,7 +206,7 @@ sub generate_format_blockdev {
>          $format = $drive->{format} // 'raw';
>      }
>  
> -    my $node_name = get_node_name('fmt', $drive_id, $drive->{file}, $options->{'snapshot-name'});
> +    my $node_name = get_node_name('fmt', $drive_id, $drive->{file}, $options);
>  
>      my $blockdev = {
>          'node-name' => "$node_name",
> @@ -214,6 +235,8 @@ sub generate_drive_blockdev {
>      my $child = generate_file_blockdev($storecfg, $drive, $options);
>      $child = generate_format_blockdev($storecfg, $drive, $child, $options);
>  
> +    return $child if $options->{fleecing}; # for fleecing, this is already the top node
> +
>      # this is the top filter entry point, use $drive-drive_id as nodename
>      return {
>          driver => "throttle",
> @@ -354,4 +377,18 @@ sub detach {
>      return;
>  }
>  
> +sub detach_fleecing_block_nodes {
> +    my ($vmid, $log_func) = @_;
> +
> +    my $block_info = mon_cmd($vmid, "query-named-block-nodes");
> +    for my $info ($block_info->@*) {
> +        my $node_name = $info->{'node-name'};
> +        next if !is_fleecing_top_node($node_name);
> +
> +        $log_func->('info', "detaching (old) fleecing image '$node_name'");
> +        eval { detach($vmid, $node_name) };
> +        $log_func->('warn', "error detaching (old) fleecing image '$node_name' - $@") if $@;
> +    }
> +}
> +
>  1;
> diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
> index 243a927e..8b643bc4 100644
> --- a/src/PVE/VZDump/QemuServer.pm
> +++ b/src/PVE/VZDump/QemuServer.pm
> @@ -30,6 +30,7 @@ use PVE::Format qw(render_duration render_bytes);
>  use PVE::QemuConfig;
>  use PVE::QemuServer;
>  use PVE::QemuServer::Agent;
> +use PVE::QemuServer::Blockdev;
>  use PVE::QemuServer::Drive qw(checked_volume_format);
>  use PVE::QemuServer::Helpers;
>  use PVE::QemuServer::Machine;
> @@ -626,9 +627,8 @@ my sub detach_fleecing_images {
>  
>      for my $di ($disks->@*) {
>          if (my $volid = $di->{'fleece-volid'}) {
> -            my $devid = "$di->{qmdevice}-fleecing";
> -            $devid =~ s/^drive-//; # re-added by qemu_drivedel()
> -            eval { PVE::QemuServer::qemu_drivedel($vmid, $devid) };
> +            my $node_name = "$di->{qmdevice}-fleecing";
> +            eval { PVE::QemuServer::Blockdev::detach($vmid, $node_name) };
>          }
>      }
>  }
> @@ -646,15 +646,21 @@ my sub attach_fleecing_images {
>          if (my $volid = $di->{'fleece-volid'}) {
>              $self->loginfo("$di->{qmdevice}: attaching fleecing image $volid to QEMU");
>  
> -            my $path = PVE::Storage::path($self->{storecfg}, $volid);
> -            my $devid = "$di->{qmdevice}-fleecing";
> -            my $drive = "file=$path,if=none,id=$devid,format=$format,discard=unmap";
> +            my ($interface, $index) = PVE::QemuServer::Drive::parse_drive_interface($di->{virtdev});
> +            my $drive = {
> +                file => $volid,
> +                interface => $interface,
> +                index => $index,
> +                format => $format,
> +                discard => 'on',
> +            };
> +
> +            my $options = { 'fleecing' => 1 };
>              # Specify size explicitly, to make it work if storage backend rounded up size for
>              # fleecing image when allocating.
> -            $drive .= ",size=$di->{'block-node-size'}" if $format eq 'raw';
> -            $drive =~ s/\\/\\\\/g;
> -            my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
> -            die "attaching fleecing image $volid failed - $ret\n" if $ret !~ m/OK/s;
> +            $options->{size} = $di->{'block-node-size'} if $format eq 'raw';
> +
> +            PVE::QemuServer::Blockdev::attach($self->{storecfg}, $vmid, $drive, $options);
>          }
>      }
>  }
> -- 
> 2.47.2
> 

* Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices Fiona Ebner
@ 2025-06-30 10:15   ` Fabian Grünbichler
  2025-06-30 10:35     ` DERUMIER, Alexandre via pve-devel
                       ` (2 more replies)
  0 siblings, 3 replies; 63+ messages in thread
From: Fabian Grünbichler @ 2025-06-30 10:15 UTC (permalink / raw)
  To: Proxmox VE development discussion

On June 27, 2025 5:57 pm, Fiona Ebner wrote:
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>  src/PVE/QemuServer/Blockdev.pm | 132 +++++++++++++++++++++++++++++++++
>  1 file changed, 132 insertions(+)
> 
> diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
> index 6e6b9245..26d70eee 100644
> --- a/src/PVE/QemuServer/Blockdev.pm
> +++ b/src/PVE/QemuServer/Blockdev.pm
> @@ -11,6 +11,7 @@ use PVE::JSONSchema qw(json_bool);
>  use PVE::Storage;
>  
>  use PVE::QemuServer::Drive qw(drive_is_cdrom);
> +use PVE::QemuServer::Monitor qw(mon_cmd);
>  
>  my sub get_node_name {
>      my ($type, $drive_id, $volid, $snap) = @_;
> @@ -221,4 +222,135 @@ sub generate_drive_blockdev {
>      };
>  }
>  
> +my sub blockdev_add {
> +    my ($vmid, $blockdev) = @_;
> +
> +    eval { mon_cmd($vmid, 'blockdev-add', $blockdev->%*); };
> +    if (my $err = $@) {
> +        my $node_name = $blockdev->{'node-name'} // 'undefined';
> +        die "adding blockdev '$node_name' failed : $err\n" if $@;
> +    }
> +
> +    return;
> +}
> +
> +=pod
> +
> +=head3 attach
> +
> +    attach($storecfg, $vmid, $drive, $options);
> +
> +Attach the drive C<$drive> to the VM C<$vmid> considering the additional options C<$options>.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$storecfg>
> +
> +The storage configuration.
> +
> +=item C<$vmid>
> +
> +The ID of the virtual machine.
> +
> +=item C<$drive>
> +
> +The drive as parsed from a virtual machine configuration.
> +
> +=item C<$options>
> +
> +A hash reference with additional options.
> +
> +=over
> +
> +=item C<< $options->{'read-only'} >>
> +
> +Attach the image as read-only irrespective of the configuration in C<$drive>.
> +
> +=item C<< $options->{size} >>
> +
> +Attach the image with this virtual size. Must be smaller than the actual size of the image. The
> +image format must be C<raw>.
> +
> +=item C<< $options->{'snapshot-name'} >>
> +
> +Attach this snapshot of the volume C<< $drive->{file} >>, rather than the volume itself.
> +
> +=back
> +
> +=back
> +
> +=cut
> +
> +sub attach {
> +    my ($storecfg, $vmid, $drive, $options) = @_;
> +
> +    my $blockdev = generate_drive_blockdev($storecfg, $drive, $options);
> +
> +    my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
> +    if ($blockdev->{'node-name'} eq "drive-$drive_id") { # device top nodes need a throttle group
> +        my $throttle_group = generate_throttle_group($drive);
> +        mon_cmd($vmid, 'object-add', $throttle_group->%*);
> +    }
> +
> +    eval { blockdev_add($vmid, $blockdev); };
> +    if (my $err = $@) {
> +        eval { mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id"); };
> +        warn $@ if $@;
> +        die $err;
> +    }

not sure whether we want (central) helpers for top-level node name
(encoding and parsing) and throttle group ID (encoding and parsing)?

or alternatively, re-use the throttle-group ID from
generate_throttle_group here?
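
e.g. something along these lines (names just for illustration):

    sub top_node_name {
        my ($drive_id) = @_;
        return "drive-$drive_id";
    }

    sub throttle_group_id {
        my ($drive_id) = @_;
        return "throttle-drive-$drive_id";
    }

    sub parse_top_node_name {
        my ($node_name) = @_;
        if ($node_name =~ m/^drive-(.+)$/) {
            my $drive_id = $1;
            return $drive_id if PVE::QemuServer::Drive::is_valid_drivename($drive_id);
        }
        return;
    }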

> +
> +    return;
> +}
> +
> +=pod
> +
> +=head3 detach
> +
> +    detach($vmid, $node_name);
> +
> +Detach the block device C<$node_name> from the VM C<$vmid>. Also removes associated child block
> +nodes.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$vmid>
> +
> +The ID of the virtual machine.
> +
> +=item C<$node_name>
> +
> +The node name identifying the block node in QEMU.
> +
> +=back
> +
> +=cut
> +
> +sub detach {
> +    my ($vmid, $node_name) = @_;
> +
> +    die "Blockdev::detach - no node name\n" if !$node_name;
> +
> +    # QEMU recursively auto-removes the file children, i.e. file and format node below the top
> +    # node and also implicit backing children referenced by a qcow2 image.
> +    eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "$node_name"); };
> +    if (my $err = $@) {
> +        return if $err =~ m/Failed to find node with node-name/; # already gone

does this happen regularly?

> +        die "deleting blockdev '$node_name' failed : $err\n";
> +    }
> +
> +    if ($node_name =~ m/^drive-(.+)$/) {

see above

> +        # also remove throttle group if it was a device top node
> +        my $drive_id = $1;
> +        if (PVE::QemuServer::Drive::is_valid_drivename($drive_id)) {
> +            mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id");

should this get an eval?

> +        }
> +    }
> +
> +    return;
> +}
> +
>  1;
> -- 
> 2.47.2
> 

* Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
  2025-06-30 10:15   ` Fabian Grünbichler
@ 2025-06-30 10:35     ` DERUMIER, Alexandre via pve-devel
       [not found]     ` <6575d8fe67659098d2bbd533c9063bcbd44c0a21.camel@groupe-cyllene.com>
  2025-06-30 11:45     ` Fiona Ebner
  2 siblings, 0 replies; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-06-30 10:35 UTC (permalink / raw)
  To: pve-devel; +Cc: DERUMIER, Alexandre

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
Date: Mon, 30 Jun 2025 10:35:22 +0000
Message-ID: <6575d8fe67659098d2bbd533c9063bcbd44c0a21.camel@groupe-cyllene.com>

> +    # node and also implicit backing children referenced by a qcow2 image.
> +    eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "$node_name"); };
> +    if (my $err = $@) {
> +        return if $err =~ m/Failed to find node with node-name/; # already gone

>>does this happen regularly?

From my tests, I have seen different behaviour depending on whether the
initial drive was defined on the QEMU command line or whether it was
live hot-plugged first and hot-unplugged afterwards.

I have also seen different behaviour between block nodes with an
explicitly defined node-name and ones with an autogenerated node name.

I haven't retested in a while, so I can't confirm this 100%; I'll try to
run some tests again today.


But I'm not sure we should return here; maybe we should instead just
skip this step and still try to remove the throttle group afterwards.
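
Something like this maybe (untested), so that the throttle group cleanup below still runs:

    eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "$node_name"); };
    if (my $err = $@) {
        die "deleting blockdev '$node_name' failed : $err\n"
            if $err !~ m/Failed to find node with node-name/; # already gone
    }
    # ... and then always continue with the object-del for the throttle group ...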


* Re: [pve-devel] [PATCH qemu-server 29/31] command line: switch to blockdev starting with machine version 10.0
  2025-06-30 10:15   ` Fabian Grünbichler
@ 2025-06-30 10:57     ` Fiona Ebner
  0 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-30 10:57 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 30.06.25 at 12:15, Fabian Grünbichler wrote:
> On June 27, 2025 5:57 pm, Fiona Ebner wrote:
>> @@ -4050,28 +4069,63 @@ sub qemu_iothread_del {
>>  sub qemu_driveadd {
>>      my ($storecfg, $vmid, $device) = @_;
>>  
>> -    my $drive = print_drive_commandline_full($storecfg, $vmid, $device, undef);
>> -    $drive =~ s/\\/\\\\/g;
>> -    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
>> +    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
>>  
>> -    # If the command succeeds qemu prints: "OK"
>> -    return 1 if $ret =~ m/OK/s;
>> +    # for the switch to -blockdev
>> +    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
> 
> isn't this part here basically Blockdev::attach?
>
>> +        my $throttle_group = PVE::QemuServer::Blockdev::generate_throttle_group($device);
>> +        mon_cmd($vmid, 'object-add', %$throttle_group);
>>  
>> -    die "adding drive failed: $ret\n";
>> +        eval {
>> +            my $blockdev =
>> +                PVE::QemuServer::Blockdev::generate_drive_blockdev($storecfg, $device, {});
>> +            mon_cmd($vmid, 'blockdev-add', %$blockdev);
>> +        };
>> +        if (my $err = $@) {
>> +            my $drive_id = PVE::QemuServer::Drive::get_drive_id($device);
>> +            eval { mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id"); };
>> +            warn $@ if $@;
>> +            die $err;
>> +        }
>> +
>> +        return 1;
>> +    } else {
>> +        my $drive = print_drive_commandline_full($storecfg, $vmid, $device, undef);
>> +        $drive =~ s/\\/\\\\/g;
>> +        my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
>> +
>> +        # If the command succeeds qemu prints: "OK"
>> +        return 1 if $ret =~ m/OK/s;
>> +
>> +        die "adding drive failed: $ret\n";
>> +    }
>>  }
>>  
>>  sub qemu_drivedel {
>>      my ($vmid, $deviceid) = @_;
>>  
>> -    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_del drive-$deviceid", 10 * 60);
>> -    $ret =~ s/^\s+//;
>> +    my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
>>  
>> -    return 1 if $ret eq "";
>> +    # for the switch to -blockdev
>> +    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
> 
> and this here Blockdev::detach?

Yes, sorry! I forgot to switch to using the helpers here.
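
For v2, the two machine-version >= 10.0 branches should then basically just become (sketch):

    # in qemu_driveadd():
    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
        PVE::QemuServer::Blockdev::attach($storecfg, $vmid, $device, {});
        return 1;
    }

    # in qemu_drivedel():
    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
        PVE::QemuServer::Blockdev::detach($vmid, "drive-$deviceid");
        return 1;
    }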



* Re: [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query and use node name for resize operation
  2025-06-30  7:52     ` Fiona Ebner
@ 2025-06-30 11:38       ` Fiona Ebner
  0 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-30 11:38 UTC (permalink / raw)
  To: Proxmox VE development discussion

On 30.06.25 at 09:52, Fiona Ebner wrote:
> Still, it feels like a QEMU bug. I'd expect the filter node to also
> report the updated size when its child node is resized. I'll see if that
> is easily fixed upstream/ask what they think.

For reference:
https://lore.kernel.org/qemu-devel/20250630113035.820557-1-f.ebner@proxmox.com/T/



* Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
       [not found]     ` <6575d8fe67659098d2bbd533c9063bcbd44c0a21.camel@groupe-cyllene.com>
@ 2025-06-30 11:43       ` DERUMIER, Alexandre via pve-devel
  2025-06-30 11:58         ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-06-30 11:43 UTC (permalink / raw)
  To: pve-devel; +Cc: DERUMIER, Alexandre

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
Date: Mon, 30 Jun 2025 11:43:28 +0000
Message-ID: <eb730e0f1c20d6017871124926edf647e84d9aca.camel@groupe-cyllene.com>

-------- Original Message --------
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
Date: 30/06/2025 12:35:22

> +    # node and also implicit backing children referenced by a qcow2 image.
> +    eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "$node_name"); };
> +    if (my $err = $@) {
> +        return if $err =~ m/Failed to find node with node-name/; # already gone

> > does this happen regularly?

>>From my tests, I have seen different behaviour, depending if the
>>initial drive was defined in qemu command line  ,   or if it was live
>>hot-plugged first and hot-unplugged after.

>>I have also have seen different behaviour with block with defined
>>node-
>>name and with autogenerated nodename.
>>
>>I don't have retested since a while, so can't confirm 100%, I'll try
>>to
>>do some test again today.

I can't reproduce this with a simple hotplug/unplug, or with an unplug after VM start.

But I am seeing a case, after a drive mirror with a zeroinit filter in
front, where the whole chain (including the zeroinit filter) is not
auto-removed by device_del.
And the current code doesn't seem to remove the file and format block
nodes either (maybe they are kept alive by the zeroinit filter node?).


I don't know whether we need to keep the zeroinit filter after the drive
mirror? (I think it could be removed with a blockdev-reopen.)
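
Roughly something like this maybe (completely untested, $format_node_name standing in for the node name of the format node below the zeroinit filter):

    mon_cmd(
        $vmid, 'blockdev-reopen',
        options => [{
            driver => 'throttle',
            'node-name' => "drive-$drive_id",
            'throttle-group' => "throttle-drive-$drive_id",
            file => $format_node_name,
        }],
    );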




* Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
  2025-06-30 10:15   ` Fabian Grünbichler
  2025-06-30 10:35     ` DERUMIER, Alexandre via pve-devel
       [not found]     ` <6575d8fe67659098d2bbd533c9063bcbd44c0a21.camel@groupe-cyllene.com>
@ 2025-06-30 11:45     ` Fiona Ebner
  2025-06-30 11:55       ` Fabian Grünbichler
  2 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-06-30 11:45 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 30.06.25 at 12:15, Fabian Grünbichler wrote:
> On June 27, 2025 5:57 pm, Fiona Ebner wrote:
>> +sub attach {
>> +    my ($storecfg, $vmid, $drive, $options) = @_;
>> +
>> +    my $blockdev = generate_drive_blockdev($storecfg, $drive, $options);
>> +
>> +    my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
>> +    if ($blockdev->{'node-name'} eq "drive-$drive_id") { # device top nodes need a throttle group
>> +        my $throttle_group = generate_throttle_group($drive);
>> +        mon_cmd($vmid, 'object-add', $throttle_group->%*);
>> +    }
>> +
>> +    eval { blockdev_add($vmid, $blockdev); };
>> +    if (my $err = $@) {
>> +        eval { mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id"); };
>> +        warn $@ if $@;
>> +        die $err;
>> +    }
> 
> not sure whether we want (central) helpers for top-level node name
> (encoding and parsing) and throttle group ID (encoding and parsing)?

Won't hurt.

>> +        # also remove throttle group if it was a device top node
>> +        my $drive_id = $1;
>> +        if (PVE::QemuServer::Drive::is_valid_drivename($drive_id)) {
>> +            mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id");
> 
> should this get an eval?

I think it's better to propagate the error (or do you mean having an
eval+die for adding context to the message)?
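
I.e., something like (sketch):

    eval { mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id"); };
    die "removing throttle group 'throttle-drive-$drive_id' failed - $@" if $@;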



* Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
  2025-06-30 11:45     ` Fiona Ebner
@ 2025-06-30 11:55       ` Fabian Grünbichler
  2025-06-30 15:11         ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: Fabian Grünbichler @ 2025-06-30 11:55 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion


> On 30.06.2025 13:45 CEST, Fiona Ebner <f.ebner@proxmox.com> wrote:
> 
>  
> On 30.06.25 at 12:15, Fabian Grünbichler wrote:
> > On June 27, 2025 5:57 pm, Fiona Ebner wrote:
> >> +sub attach {
> >> +    my ($storecfg, $vmid, $drive, $options) = @_;
> >> +
> >> +    my $blockdev = generate_drive_blockdev($storecfg, $drive, $options);
> >> +
> >> +    my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
> >> +    if ($blockdev->{'node-name'} eq "drive-$drive_id") { # device top nodes need a throttle group
> >> +        my $throttle_group = generate_throttle_group($drive);
> >> +        mon_cmd($vmid, 'object-add', $throttle_group->%*);
> >> +    }
> >> +
> >> +    eval { blockdev_add($vmid, $blockdev); };
> >> +    if (my $err = $@) {
> >> +        eval { mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id"); };
> >> +        warn $@ if $@;
> >> +        die $err;
> >> +    }
> > 
> > not sure whether we want (central) helpers for top-level node name
> > (encoding and parsing) and throttle group ID (encoding and parsing)?
> 
> Won't hurt.
> 
> >> +        # also remove throttle group if it was a device top node
> >> +        my $drive_id = $1;
> >> +        if (PVE::QemuServer::Drive::is_valid_drivename($drive_id)) {
> >> +            mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id");
> > 
> > should this get an eval?
> 
> I think it's better to propagate the error (or do you mean having an
> eval+die for adding context to the message)?

I was thinking about a case similar to the one above - what if the throttle
group object was already removed?

but I guess it's more likely to hit the following sequence:

1. first detach runs into blockdev-del timeout and dies
2. blockdev deletion completes
3. second detach finds the blockdev no longer exists and returns early

so no object-del is called at all.. should we maybe make it more robust
and handle the object already existing when attaching?
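
e.g. something like this when attaching (just a sketch, the exact error message to match on would need to be double-checked):

    eval { mon_cmd($vmid, 'object-add', $throttle_group->%*); };
    if (my $err = $@) {
        # can be left over if a previous detach timed out after the blockdev was already gone
        die $err if $err !~ m/attempt to add duplicate property/;
    }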



* Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
  2025-06-30 11:43       ` DERUMIER, Alexandre via pve-devel
@ 2025-06-30 11:58         ` Fiona Ebner
  0 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-30 11:58 UTC (permalink / raw)
  To: Proxmox VE development discussion

On 30.06.25 at 13:43, DERUMIER, Alexandre via pve-devel wrote:
> From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
> To: pve-devel@lists.proxmox.com <pve-devel@lists.proxmox.com>
> Subject: Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
> Date: 30/06/2025 12:35:22
> 
>> +    # node and also implicit backing children referenced by a qcow2 image.
>> +    eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => "$node_name"); };
>> +    if (my $err = $@) {
>> +        return if $err =~ m/Failed to find node with node-name/; # already gone
> 
>>> does this happen regularly?
> 
>> >From my tests, I have seen different behaviour, depending if the
>>> initial drive was defined in qemu command line  ,   or if it was live
>>> hot-plugged first and hot-unplugged after.
> 
>>> I have also have seen different behaviour with block with defined
>>> node-
>>> name and with autogenerated nodename.
>>>
>>> I don't have retested since a while, so can't confirm 100%, I'll try
>>> to
>>> do some test again today.
> 
> Can't reproduce with simple hotplug/unplug,  or unplug after vm start.
> 
> But, I'm seeing a case, after a driver mirror with zeroinit filter in
> front,  where the whole chain is not autoremoved by device del.
> (including the zero filter).
> And this current code don't seem to remove the file && format blocknode
> too (maybe locked by the zero filter node ?)

The file and format are auto-removed when you remove the zeroinit
filter. What matters is that you remove the node you previously added
explicitly via blockdev-add. Implicitly added child nodes will be
auto-removed.

It doesn't make a difference if there is a zeroinit filter. If you add a
mirror target, you will later need to remove that manually.

So yes, mirror followed by a hotunplug currently leaves left-over nodes.
Will fix that in v2.

> I don't known if we need to keep the zeroinit filter after the drive
> mirror ? (I think it could be removed with a blockdev-reopen) 

We could, but not sure if it's worth it. Can still be done as a
follow-up, but IMHO the rest of the code should work regardless of
whether the child below throttle is a zeroinit filter or the format node.



* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper Fiona Ebner
@ 2025-06-30 14:29   ` DERUMIER, Alexandre via pve-devel
       [not found]   ` <cd933fed020383019705045025d38c509042c267.camel@groupe-cyllene.com>
  1 sibling, 0 replies; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-06-30 14:29 UTC (permalink / raw)
  To: pve-devel; +Cc: DERUMIER, Alexandre

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
Date: Mon, 30 Jun 2025 14:29:34 +0000
Message-ID: <cd933fed020383019705045025d38c509042c267.camel@groupe-cyllene.com>

After a cloudinit regenerate, or if I swap a cdrom image for a new one,

the old format and file blockdevs are not removed, and the new blockdevs
have autogenerated node names.


info blockdev -n

#block143: /var/lib/vz/images/107/vm-107-cloudinit.qcow2 (qcow2)
    Cache mode:       writeback

#block003: /var/lib/vz/images/107/vm-107-cloudinit.qcow2 (file)
    Cache mode:       writeback


fc4da005b8264191a9923ea266be591: /var/lib/vz/images/107/vm-107-cloudinit.qcow2 (qcow2, read-only)
    Cache mode:       writeback

ec4da005b8264191a9923ea266be591: /var/lib/vz/images/107/vm-107-cloudinit.qcow2 (file, read-only)
    Cache mode:       writeback


or


Type 'help' for help.
# info block -n
#block594: /var/lib/vz/template/iso/new.iso (raw)
    Cache mode:       writeback

#block439: /var/lib/vz/template/iso/new.iso (file)
    Cache mode:       writeback



fb01e5f0d77f17daef4df2ea5a1d0cd: /var/lib/vz/template/iso/old.iso (raw, read-only)
    Cache mode:       writeback

eb01e5f0d77f17daef4df2ea5a1d0cd: /var/lib/vz/template/iso/old.iso (file, read-only)
    Cache mode:       writeback


-------- Original Message --------
From: Fiona Ebner <f.ebner@proxmox.com>
Reply-To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
Date: 27/06/2025 17:57:14

There is a slight change in behavior for cloud-init disks, when the
file for the new cloud-init disk is 'none'. Previously, the current
drive would not be ejected, now it is. Not sure if that edge case can
even happen in practice and it is more correct, because the config was
already updated.

Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm          | 40 ++++++----------------------------
 src/PVE/QemuServer/Blockdev.pm | 18 +++++++++++++++
 2 files changed, 25 insertions(+), 33 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 3f135fcb..6e44132e 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -5170,30 +5170,15 @@ sub vmconfig_update_disk {
             }
 
         } else { # cdrom
+            eval { PVE::QemuServer::Blockdev::change_medium($storecfg, $vmid, $opt, $drive); };
+            my $err = $@;
 
-            if ($drive->{file} eq 'none') {
-                mon_cmd($vmid, "eject", force => JSON::true, id => "$opt");
-                if (drive_is_cloudinit($old_drive)) {
-                    vmconfig_register_unused_drive($storecfg, $vmid, $conf, $old_drive);
-                }
-            } else {
-                my ($path, $format) =
-                    PVE::QemuServer::Drive::get_path_and_format($storecfg, $drive);
-
-                # force eject if locked
-                mon_cmd($vmid, "eject", force => JSON::true, id => "$opt");
-
-                if ($path) {
-                    mon_cmd(
-                        $vmid,
-                        "blockdev-change-medium",
-                        id => "$opt",
-                        filename => "$path",
-                        format => "$format",
-                    );
-                }
+            if ($drive->{file} eq 'none' && drive_is_cloudinit($old_drive)) {
+                vmconfig_register_unused_drive($storecfg, $vmid, $conf, $old_drive);
             }
 
+            die $err if $err;
+
             return 1;
         }
     }
@@ -5230,18 +5215,7 @@ sub vmconfig_update_cloudinit_drive {
     my $running = PVE::QemuServer::check_running($vmid);
 
     if ($running) {
-        my ($path, $format) =
-            PVE::QemuServer::Drive::get_path_and_format($storecfg, $cloudinit_drive);
-        if ($path) {
-            mon_cmd($vmid, "eject", force => JSON::true, id => "$cloudinit_ds");
-            mon_cmd(
-                $vmid,
-                "blockdev-change-medium",
-                id => "$cloudinit_ds",
-                filename => "$path",
-                format => "$format",
-            );
-        }
+        PVE::QemuServer::Blockdev::change_medium($storecfg, $vmid, $cloudinit_ds, $cloudinit_drive);
     }
 }
 
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 73cb7ae5..8ef17a3b 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -510,4 +510,22 @@ sub resize {
     );
 }
 
+sub change_medium {
+    my ($storecfg, $vmid, $qdev_id, $drive) = @_;
+
+    # force eject if locked
+    mon_cmd($vmid, "eject", force => JSON::true, id => "$qdev_id");
+
+    my ($path, $format) = PVE::QemuServer::Drive::get_path_and_format($storecfg, $drive);
+
+    if ($path) { # no path for 'none'
+        mon_cmd(
+            $vmid, "blockdev-change-medium",
+            id => "$qdev_id",
+            filename => "$path",
+            format => "$format",
+        );
+    }
+}
+
 1;


* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
       [not found]   ` <cd933fed020383019705045025d38c509042c267.camel@groupe-cyllene.com>
@ 2025-06-30 14:42     ` DERUMIER, Alexandre via pve-devel
  2025-07-01  7:30       ` DERUMIER, Alexandre via pve-devel
  2025-07-01 10:05       ` Fiona Ebner
  0 siblings, 2 replies; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-06-30 14:42 UTC (permalink / raw)
  To: pve-devel; +Cc: DERUMIER, Alexandre

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
Date: Mon, 30 Jun 2025 14:42:53 +0000
Message-ID: <5dc03b33bf68fd17121d852b4bd782a65fb9fc06.camel@groupe-cyllene.com>

>>After a cloudinit regenerate, or if I swap a cdrom image to a new
>>cdrom
>>image,
>>
>>the old format && file blockdev are not removed,  and the new
>>blockdevs
>>have autogenerated nodenames

not sure if it's a QEMU bug, but I think this is why I used open-tray,
remove-medium, insert-medium, close-tray:

https://lists.proxmox.com/pipermail/pve-devel/2025-April/070595.html
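
The sequence there was roughly the following (from memory, node names just for illustration, not the exact code from that series):

    mon_cmd($vmid, 'blockdev-open-tray', id => "$qdev_id", force => JSON::true);
    mon_cmd($vmid, 'blockdev-remove-medium', id => "$qdev_id");
    mon_cmd($vmid, 'blockdev-del', 'node-name' => "$old_node_name");
    mon_cmd($vmid, 'blockdev-add', $new_blockdev->%*);
    mon_cmd($vmid, 'blockdev-insert-medium', id => "$qdev_id", 'node-name' => $new_blockdev->{'node-name'});
    mon_cmd($vmid, 'blockdev-close-tray', id => "$qdev_id");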


* Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices
  2025-06-30 11:55       ` Fabian Grünbichler
@ 2025-06-30 15:11         ` Fiona Ebner
  0 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-06-30 15:11 UTC (permalink / raw)
  To: Fabian Grünbichler, Proxmox VE development discussion

On 30.06.25 at 13:55, Fabian Grünbichler wrote:
>> On 30.06.2025 13:45 CEST, Fiona Ebner <f.ebner@proxmox.com> wrote:
>> On 30.06.25 at 12:15, Fabian Grünbichler wrote:
>>> On June 27, 2025 5:57 pm, Fiona Ebner wrote:
>>>> +        # also remove throttle group if it was a device top node
>>>> +        my $drive_id = $1;
>>>> +        if (PVE::QemuServer::Drive::is_valid_drivename($drive_id)) {
>>>> +            mon_cmd($vmid, 'object-del', id => "throttle-drive-$drive_id");
>>>
>>> should this get an eval?
>>
>> I think it's better to propagate the error (or do you mean having an
>> eval+die for adding context to the message)?
> 
> I was thinking about a similar case like above - what if the throttle
> group object was already removed.
> 
> but I guess it's more likely to hit the following sequence:
> 
> 1. first detach runs into blockdev-del timeout and dies
> 2. blockdev deletion completes
> 3. second detach runs into blockdev no longer exists and returns
> 
> no object-del called at all.. should we maybe make it more robust
> and handle the object already existing when attaching?

Yeah, I'll go with that in v2.



* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-06-30 14:42     ` DERUMIER, Alexandre via pve-devel
@ 2025-07-01  7:30       ` DERUMIER, Alexandre via pve-devel
  2025-07-01  8:38         ` Fabian Grünbichler
  2025-07-01  8:42         ` Fiona Ebner
  2025-07-01 10:05       ` Fiona Ebner
  1 sibling, 2 replies; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-01  7:30 UTC (permalink / raw)
  To: pve-devel; +Cc: DERUMIER, Alexandre

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
Date: Tue, 1 Jul 2025 07:30:22 +0000
Message-ID: <f979d3c7f33d815fb97a8828ce85b705cae8655c.camel@groupe-cyllene.com>

Another thing:

if the VM is started with cdrom=none and you then switch to an ISO,

the throttle group is not generated (plus the node names are autogenerated).






* Re: [pve-devel] [PATCH qemu-server 07/31] backup: use blockdev for fleecing images
  2025-06-30 10:15   ` Fabian Grünbichler
@ 2025-07-01  8:20     ` Fiona Ebner
  0 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-07-01  8:20 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 30.06.25 at 12:15, Fabian Grünbichler wrote:
> On June 27, 2025 5:57 pm, Fiona Ebner wrote:
>> +my sub fleecing_node_name {
>> +    my ($type, $drive_id) = @_;
>> +
>> +    if ($type eq 'fmt') {
>> +        return "drive-$drive_id-fleecing"; # this is the top node for fleecing
>> +    } elsif ($type eq 'file') {
>> +        return "$drive_id-fleecing-file"; # drop the "drive-" prefix to be sure, max length is 31
> 
> should we use `e-...` instead of `...-file`, to have similar encoding as
> the regular block nodes? or even let get_node_name handle it by adding a
> `top` type to it?

I mean, we could, but it's not encoded, so I don't really see the
benefit to go with that schema. It just makes checking if it's a
fleecing top node slightly harder.

Adding a 'top' type would just mean overriding it on the call side.
Currently, generate_{file,format}_blockdev simply call get_node_name()
with type '{file,fmt}' so that seems nice to me. When using a 'top'
type, generate_format_blockdev would also need to check $options, so
it'd just spread out the logic to one more place.



* Re: [pve-devel] [PATCH qemu-server 08/31] backup: use blockdev for TPM state file
  2025-06-30 10:15   ` Fabian Grünbichler
@ 2025-07-01  8:22     ` Fiona Ebner
  0 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-07-01  8:22 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 30.06.25 at 12:15, Fabian Grünbichler wrote:
> On June 27, 2025 5:57 pm, Fiona Ebner wrote:
>> +    die "unknown node type for fleecing '$type'";
> 
> s/fleecing/tpm backup node/ ?

Will fix in v2!



* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-07-01  7:30       ` DERUMIER, Alexandre via pve-devel
@ 2025-07-01  8:38         ` Fabian Grünbichler
  2025-07-01 10:01           ` DERUMIER, Alexandre via pve-devel
  2025-07-01  8:42         ` Fiona Ebner
  1 sibling, 1 reply; 63+ messages in thread
From: Fabian Grünbichler @ 2025-07-01  8:38 UTC (permalink / raw)
  To: Proxmox VE development discussion


> DERUMIER, Alexandre via pve-devel <pve-devel@lists.proxmox.com> wrote on 01.07.2025 09:30 CEST:
> Another thing: if the VM is started with cdrom=none and you then switch
> to an ISO, the throttle group is not generated (and the node names are
> autogenerated).

if you start a VM with cdrom=none, the throttle group object is generated:

# qm config 123
boot: order=ide2;ide0
ide0: cdrom,media=cdrom
ide2: none,media=cdrom
kvm: 0
meta: creation-qemu=10.0.2,ctime=1750423164
scsi0: local:123/vm-123-disk-1.raw,size=1G
smbios1: uuid=8ca437bc-4a11-4bf2-8a5d-61d5cf407670
vmgenid: 0ca5ca77-a0d8-4f32-a7c3-c3c752f0280e

# qom-list /objects
type (string)
throttle-drive-ide2 (child<throttle-group>)
throttle-drive-ide0 (child<throttle-group>)
pc.ram (child<memory-backend-ram>)
throttle-drive-scsi0 (child<throttle-group>)

and switching that CD-ROM drive to the host drive or an ISO volume fails:

Parameter verification failed. (400)

ide0: hotplug problem - VM 123 qmp command 'object-add' failed - attempt to add duplicate property 'throttle-drive-ide0' to object (type 'container')

switching ide0 to none while running removes the object, and then switching back to cdrom works, and the block nodes also look okay.

for me, switching from cdrom to none to an iso file works correctly.
starting the VM with an iso volume attached and then switching to either none or a different iso file also works correctly.

@alexandre are you testing with all three patch series applied (qemu-server, pve-qemu-kvm, pve-storage)?


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-07-01  7:30       ` DERUMIER, Alexandre via pve-devel
  2025-07-01  8:38         ` Fabian Grünbichler
@ 2025-07-01  8:42         ` Fiona Ebner
  1 sibling, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-07-01  8:42 UTC (permalink / raw)
  To: Proxmox VE development discussion

On 01.07.25 at 09:30, DERUMIER, Alexandre via pve-devel wrote:
> Another thing: if the VM is started with cdrom=none and you then switch
> to an ISO, the throttle group is not generated (and the node names are
> autogenerated).

Thank you for testing!

AFAICS, the throttle group is already generated at VM start, but then
not re-generated. It already works after the changes you and Fabian
suggested for the attach() and detach() helpers, as the previous
throttle group is then properly removed beforehand.

But I think I'll also go for not generating the throttle group for
'none' in the first place. It's not used if there is no associated
blockdev, and the limits can always change together with changing the
medium (as both happen by updating the VM config line via qm set), so it
just makes more sense to only generate it when there is an associated
blockdev. Or do you see any problem with that?
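
Roughly what I have in mind, as a sketch only (the exact condition and
call sites are still open for v2):

    if ($drive->{file} ne 'none') {
        # generate the throttle group object and the blockdev chain, then attach them
        ...;
    }
    # for 'none', generate neither; they'd be created later upon medium change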


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 17/31] block job: add blockdev mirror
  2025-06-30 10:15   ` Fabian Grünbichler
@ 2025-07-01  9:21     ` Fiona Ebner
  0 siblings, 0 replies; 63+ messages in thread
From: Fiona Ebner @ 2025-07-01  9:21 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 30.06.25 at 12:15, Fabian Grünbichler wrote:
> On June 27, 2025 5:57 pm, Fiona Ebner wrote:
>> +    # Need to replace the node below the top node. This is not necessarily a format node, for
>> +    # example, it can also be a zeroinit node by a previous mirror! So query QEMU itself.
>> +    my $child_info = mon_cmd($vmid, 'block-node-query-file-child', 'node-name' => $device_id);
>> +    my $source_node_name = $child_info->{'node-name'};
> 
> isn't this semantically equivalent to get_node_name_below_throttle? that
> one does a few more checks and is slightly more expensive, but
> validating that the top node is a throttle node as expected might be a
> good thing here as well?
> 
> depending on how we see things, we might want to add a `$assert`
> parameter to that helper though for call sites that are only happening
> in blockdev context - to avoid the fallback in case the top node is not
> a throttle group, and instead die?

Yes, and I'll also add the assertion in v2.
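
For concreteness, a rough sketch of what the helper with such an
$assert parameter could look like (the helper itself is from patch
12/31, but this body is only my illustration; the shape of the
get_block_info() result and the 'drv' check are assumptions):

    sub get_node_name_below_throttle {
        my ($vmid, $device_id, $assert) = @_;

        my $block_info = get_block_info($vmid); # patch 10/31; result shape assumed here
        my $inserted = $block_info->{$device_id}->{inserted}
            or die "no block node inserted for '$device_id'\n";

        if (($inserted->{drv} // '') ne 'throttle') {
            die "top node for '$device_id' is not a throttle node\n" if $assert;
            return $inserted->{'node-name'}; # fallback: use the top node itself
        }

        my $child_info = mon_cmd($vmid, 'block-node-query-file-child', 'node-name' => $device_id);
        return $child_info->{'node-name'};
    }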

>> +
>> +    # Copy original drive config (aio, cache, discard, ...):
>> +    my $dest_drive = dclone($source->{drive});
>> +    delete($dest_drive->{format}); # cannot use the source's format
>> +    $dest_drive->{file} = $dest->{volid};
>> +
>> +    my $generate_blockdev_opts = {};
>> +    $generate_blockdev_opts->{'zero-initialized'} = 1 if $dest->{'zero-initialized'};
>> +
>> +    # Note that if 'aio' is not explicitly set, i.e. default, it can change if source and target
>> +    # don't both allow or both not allow 'io_uring' as the default.
>> +    my $target_drive_blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
>> +        $storecfg, $dest_drive, $generate_blockdev_opts,
>> +    );
>> +    # Top node is the throttle group, must use the file child.
>> +    my $target_blockdev = $target_drive_blockdev->{file};
> 
> should we have an option for generate_drive_blockdev to skip the
> throttle group/top node? then we could just use Blockdev::attach here..
> 
> at least if we make that return the top-level node name or blockdev..

I thought about such an option, but decided against it in the end,
because this turned out to be the only user. Fleecing and TPM backup
have similar but additional requirements. However, that decision was
made before having the attach() helper. Reconsidering now, I feel it
can be better for encapsulation, so I'll go with this in v2.
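
Something along these lines, as a sketch (the 'skip-throttle' option
name and whether attach() or a plain blockdev-add ends up consuming the
result are still open):

    my $opts = { %$generate_blockdev_opts, 'skip-throttle' => 1 };
    my $target_blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
        $storecfg, $dest_drive, $opts,
    );
    # with the top/throttle node skipped, the result can be added directly
    mon_cmd($vmid, 'blockdev-add', %$target_blockdev);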


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-07-01  8:38         ` Fabian Grünbichler
@ 2025-07-01 10:01           ` DERUMIER, Alexandre via pve-devel
  0 siblings, 0 replies; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-01 10:01 UTC (permalink / raw)
  To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 14731 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
Date: Tue, 1 Jul 2025 10:01:47 +0000
Message-ID: <5b5e89fd824c431a57b8fcf9a5eb6cf30aeb24e3.camel@groupe-cyllene.com>

>>if you start a VM with cdrom=none, the throttle group object is
>>generated:

sorry, I meant to say: the throttle top node (which uses the throttle
group) is not generated.

I think it's related to blockdev-change-medium: since we can't specify
the node chain, QEMU simply autocreates a format+file node with an
autogenerated node name and attaches it to the top.

here is the blockdev dump after going from none -> iso:

  ide2 => {
    device => "",
    inserted => {
      active => $VAR1->{ide0}{inserted}{active},
      backing_file_depth => 0,
      bps => 0,
      bps_rd => 0,
      bps_wr => 0,
      cache => {
        direct => $VAR1->{ide0}{inserted}{cache}{direct},
        "no-flush" => $VAR1->{ide0}{inserted}{cache}{direct},
        writeback => $VAR1->{ide0}{inserted}{active},
      },
      detect_zeroes => "off",
      drv => "raw",
      encrypted => $VAR1->{ide0}{inserted}{cache}{direct},
      file => "/var/lib/vz/template/iso/debian-12.5.0-amd64-
netinst.iso",
      image => {
        "actual-size" => 659554304,
        "dirty-flag" => $VAR1->{ide0}{inserted}{cache}{direct},
        filename => "/var/lib/vz/template/iso/isofile.iso",
        format => "raw",
        "virtual-size" => 659554304,
      },
      iops => 0,
      iops_rd => 0,
      iops_wr => 0,
      "node-name" => "#block127",
      ro => $VAR1->{ide0}{inserted}{active},
      write_threshold => 0,
    },
    "io-status" => "ok",
    locked => $VAR1->{ide0}{inserted}{active},
    qdev => "ide2",
    removable => $VAR1->{ide0}{inserted}{active},
    tray_open => $VAR1->{ide0}{inserted}{cache}{direct},
    type => "unknown",
  },
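
In other words, this is what letting QEMU handle it via plain
blockdev-change-medium with a filename does (sketch, exact arguments may
differ), instead of adding explicitly named nodes first:

    mon_cmd($vmid, 'blockdev-change-medium',
        id => 'ide2', filename => $iso_path, format => 'raw');
    # QEMU then builds the file+format nodes itself, with autogenerated
    # names like the "#block127" above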


>>@alexandre are you testing with all three patch series applied
>>(qemu-server, pve-qemu-kvm, pve-storage)?

yes, sure!

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-06-30 14:42     ` DERUMIER, Alexandre via pve-devel
  2025-07-01  7:30       ` DERUMIER, Alexandre via pve-devel
@ 2025-07-01 10:05       ` Fiona Ebner
  2025-07-01 10:20         ` DERUMIER, Alexandre via pve-devel
  1 sibling, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-07-01 10:05 UTC (permalink / raw)
  To: Proxmox VE development discussion

On 30.06.25 at 16:42, DERUMIER, Alexandre via pve-devel wrote:
>>> After a cloudinit regenerate, or if I swap a cdrom image to a new
>>> cdrom image, the old format && file blockdevs are not removed, and the
>>> new blockdevs have autogenerated node names

I cannot reproduce this here, could you share the exact commands and
machine configuration?

For medium change:

> [I] root@pve9a1 ~# qm create 500 --ide0 sani:iso/virtio-win-0.1.271.iso,media=cdrom
> [I] root@pve9a1 ~# qm start 500
> [I] root@pve9a1 ~# echo '{"execute": "qmp_capabilities"}{"execute": "query-named-block-nodes"}' | socat - /var/run/qemu-server/500.qmp | jq | grep \"node-name
>       "node-name": "drive-ide0",
>       "node-name": "f549bd09aa572d2ae134301979d01b3",
>       "node-name": "e549bd09aa572d2ae134301979d01b3",
> [I] root@pve9a1 ~# qm set 500 --ide0 sani:iso/virtio-win-0.1.266.iso,media=cdrom
> update VM 500: -ide0 sani:iso/virtio-win-0.1.266.iso,media=cdrom
> [I] root@pve9a1 ~# echo '{"execute": "qmp_capabilities"}{"execute": "query-named-block-nodes"}' | socat - /var/run/qemu-server/500.qmp | jq | grep \"node-name
>       "node-name": "drive-ide0",
>       "node-name": "ffb79807a199dd8817137fa5e247d9d",
>       "node-name": "efb79807a199dd8817137fa5e247d9d",

For cloudinit regenerate:

> [I] root@pve9a1 ~# qm create 500 --ide0 dir:cloudinit
> Formatting '/mnt/pve/dir/images/500/vm-500-cloudinit.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=4194304 lazy_refcounts=off refcount_bits=16
> ide0: successfully created disk 'dir:500/vm-500-cloudinit.qcow2,media=cdrom'
> [I] root@pve9a1 ~# qm start 500
> Use of uninitialized value in split at /usr/share/perl5/PVE/QemuServer/Cloudinit.pm line 115.
> generating cloud-init ISO
> [I] root@pve9a1 ~# echo '{"execute": "qmp_capabilities"}{"execute": "query-named-block-nodes"}' | socat - /var/run/qemu-server/500.qmp | jq | grep \"node-name
>       "node-name": "drive-ide0",
>       "node-name": "fc72045ad74e7732964db954986226f",
>       "node-name": "ec72045ad74e7732964db954986226f",
> [I] root@pve9a1 ~# qm set 500 --ciuser foobar
> update VM 500: -ciuser foobar
> [I] root@pve9a1 ~# pvesh create /nodes/pve9a1/qemu/500/cloudinit
> No 'create' handler defined for '/nodes/pve9a1/qemu/500/cloudinit'
> [I] root@pve9a1 ~ [1]# pvesh set /nodes/pve9a1/qemu/500/cloudinit
> Use of uninitialized value in split at /usr/share/perl5/PVE/QemuServer/Cloudinit.pm line 115.
> generating cloud-init ISO
> [I] root@pve9a1 ~# echo '{"execute": "qmp_capabilities"}{"execute": "query-named-block-nodes"}' | socat - /var/run/qemu-server/500.qmp | jq | grep \"node-name
>       "node-name": "drive-ide0",
>       "node-name": "fc72045ad74e7732964db954986226f",
>       "node-name": "ec72045ad74e7732964db954986226f",

For me, it replaces and regenerates just fine. Is the issue somehow in
combination with file=none like you reported in the other mail?

> not sure if it's a qemu bug, but I think this is why I have used
> open-tray, remove-medium, insert-medium, close-tray
> 
> https://lists.proxmox.com/pipermail/pve-devel/2025-April/070595.html

See the next patch: the blockdev_change_medium() helper is adapted from
yours :)
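
For reference, the rough QMP sequence such a helper ends up issuing
(sketch only; the QMP command names are real, but the exact node
handling and error handling in the actual helper may differ):

    mon_cmd($vmid, 'blockdev-open-tray', id => $qdev_id, force => JSON::true);
    mon_cmd($vmid, 'blockdev-remove-medium', id => $qdev_id);
    # detach the old file/format nodes, then attach the new chain with explicit node names
    mon_cmd($vmid, 'blockdev-add', %$new_blockdev);
    mon_cmd($vmid, 'blockdev-insert-medium', id => $qdev_id, 'node-name' => $new_fmt_node);
    mon_cmd($vmid, 'blockdev-close-tray', id => $qdev_id);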


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-07-01 10:05       ` Fiona Ebner
@ 2025-07-01 10:20         ` DERUMIER, Alexandre via pve-devel
  2025-07-01 10:25           ` Fiona Ebner
  0 siblings, 1 reply; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-01 10:20 UTC (permalink / raw)
  To: pve-devel; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 13266 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
Date: Tue, 1 Jul 2025 10:20:23 +0000
Message-ID: <fce25ccc6f25deab65b2824e2e493a9250829b1b.camel@groupe-cyllene.com>

>>I cannot reproduce this here, could you share the exact commands and
>>machine configuration?

ah sorry, I didn't receive patch 29 and applied the patch from the
previous series instead, so I think it's not calling the correct sub.

I'll retest with the correct patch, sorry for the noise

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-07-01 10:20         ` DERUMIER, Alexandre via pve-devel
@ 2025-07-01 10:25           ` Fiona Ebner
  2025-07-01 11:51             ` DERUMIER, Alexandre via pve-devel
  0 siblings, 1 reply; 63+ messages in thread
From: Fiona Ebner @ 2025-07-01 10:25 UTC (permalink / raw)
  To: Proxmox VE development discussion

On 01.07.25 at 12:20, DERUMIER, Alexandre via pve-devel wrote:
>>> I cannot reproduce this here, could you share the exact commands and
>>> machine configuration?
> 
> ah sorry, I didn't receive patch 29 and applied the patch from the
> previous series instead, so I think it's not calling the correct sub.
> 
> I'll retest with the correct patch, sorry for the noise

No worries :) I'll try to send v2 later today, I just need to go over
the feedback for the storage patches now.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
  2025-07-01 10:25           ` Fiona Ebner
@ 2025-07-01 11:51             ` DERUMIER, Alexandre via pve-devel
  0 siblings, 0 replies; 63+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-01 11:51 UTC (permalink / raw)
  To: pve-devel, f.ebner; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 14662 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.ebner@proxmox.com" <f.ebner@proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper
Date: Tue, 1 Jul 2025 11:51:52 +0000
Message-ID: <99f7811d0bc8b9680971bd04ae98738dc570e608.camel@groupe-cyllene.com>

Ok, it's working now: switching between ISOs and the cloud-init refresh
both work.

I just get the same error as Fabian when starting with none and then
switching to an ISO:

ide2: hotplug problem - VM 107 qmp command 'object-add' failed -
attempt to add duplicate property 'throttle-drive-ide2' to object (type
'container')

(I don't have a host cdrom to test)


-------- Original message --------
From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
Subject: Re: [pve-devel] [PATCH qemu-server 18/31] blockdev: add
change_medium() helper
Date: 01/07/2025 12:25:59

On 01.07.25 at 12:20, DERUMIER, Alexandre via pve-devel wrote:
> > > I cannot reproduce this here, could you share the exact commands and
> > > machine configuration?
> 
> ah sorry, I didn't receive patch 29 and applied the patch from the
> previous series instead, so I think it's not calling the correct sub.
> 
> I'll retest with the correct patch, sorry for the noise

No worries :) I'll try to send v2 later today, I just need to go over
the feedback for the storage patches now.


[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

end of thread, other threads:[~2025-07-01 11:51 UTC | newest]

Thread overview: 63+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-06-27 15:56 [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 01/31] mirror: code style: avoid masking earlier declaration of $op Fiona Ebner
2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 02/31] test: collect mocked functions for QemuServer module Fiona Ebner
2025-06-27 15:56 ` [pve-devel] [PATCH qemu-server 03/31] drive: add helper to parse drive interface Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 04/31] drive: drop invalid export of get_scsi_devicetype Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers for attaching and detaching block devices Fiona Ebner
2025-06-30 10:15   ` Fabian Grünbichler
2025-06-30 10:35     ` DERUMIER, Alexandre via pve-devel
     [not found]     ` <6575d8fe67659098d2bbd533c9063bcbd44c0a21.camel@groupe-cyllene.com>
2025-06-30 11:43       ` DERUMIER, Alexandre via pve-devel
2025-06-30 11:58         ` Fiona Ebner
2025-06-30 11:45     ` Fiona Ebner
2025-06-30 11:55       ` Fabian Grünbichler
2025-06-30 15:11         ` Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 06/31] blockdev: add missing include for JSON module Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 07/31] backup: use blockdev for fleecing images Fiona Ebner
2025-06-30 10:15   ` Fabian Grünbichler
2025-07-01  8:20     ` Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 08/31] backup: use blockdev for TPM state file Fiona Ebner
2025-06-30 10:15   ` Fabian Grünbichler
2025-07-01  8:22     ` Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 09/31] blockdev: introduce qdev_id_to_drive_id() helper Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 10/31] blockdev: introduce and use get_block_info() helper Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 11/31] blockdev: move helper for resize into module Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 12/31] blockdev: add helper to get node below throttle node Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 13/31] blockdev: resize: query and use node name for resize operation Fiona Ebner
2025-06-30  6:23   ` DERUMIER, Alexandre via pve-devel
2025-06-30  7:52     ` Fiona Ebner
2025-06-30 11:38       ` Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 14/31] blockdev: support using zeroinit filter Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 15/31] blockdev: make some functions private Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 16/31] block job: allow specifying a block node that should be detached upon completion Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 17/31] block job: add blockdev mirror Fiona Ebner
2025-06-30 10:15   ` Fabian Grünbichler
2025-07-01  9:21     ` Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 18/31] blockdev: add change_medium() helper Fiona Ebner
2025-06-30 14:29   ` DERUMIER, Alexandre via pve-devel
     [not found]   ` <cd933fed020383019705045025d38c509042c267.camel@groupe-cyllene.com>
2025-06-30 14:42     ` DERUMIER, Alexandre via pve-devel
2025-07-01  7:30       ` DERUMIER, Alexandre via pve-devel
2025-07-01  8:38         ` Fabian Grünbichler
2025-07-01 10:01           ` DERUMIER, Alexandre via pve-devel
2025-07-01  8:42         ` Fiona Ebner
2025-07-01 10:05       ` Fiona Ebner
2025-07-01 10:20         ` DERUMIER, Alexandre via pve-devel
2025-07-01 10:25           ` Fiona Ebner
2025-07-01 11:51             ` DERUMIER, Alexandre via pve-devel
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 19/31] blockdev: add blockdev_change_medium() helper Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 20/31] blockdev: move helper for configuring throttle limits to module Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 21/31] clone disk: skip check for aio=default (io_uring) compatibility starting with machine version 10.0 Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 22/31] print drive device: don't reference any drive for 'none' " Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 23/31] blockdev: add support for NBD paths Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 24/31] blockdev: add helper to generate PBS block device for live restore Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 25/31] blockdev: support alloc-track driver for live-{import, restore} Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 26/31] live import: also record volid information Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 27/31] live import/restore: query which node to use for operation Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 28/31] live import/restore: use Blockdev::detach helper Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 29/31] command line: switch to blockdev starting with machine version 10.0 Fiona Ebner
2025-06-30 10:15   ` Fabian Grünbichler
2025-06-30 10:57     ` Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 30/31] test: migration: update running machine to 10.0 Fiona Ebner
2025-06-27 15:57 ` [pve-devel] [PATCH qemu-server 31/31] partially fix #3227: ensure that target image for mirror has the same size for EFI disks Fiona Ebner
2025-06-27 16:00 ` [pve-devel] [PATCH-SERIES qemu-server 00/31] let's switch to blockdev, blockdev, blockdev, part four (final) Fiona Ebner
2025-06-30  8:19   ` DERUMIER, Alexandre via pve-devel
2025-06-30  8:24     ` Fiona Ebner

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal