* [pve-devel] [PATCH qemu-server 01/18] block job: fix variable name in documentation
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 02/18] qmp client: add default timeouts for more block commands Fiona Ebner
` (16 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/BlockJob.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index c89994db..85d86022 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -19,7 +19,7 @@ use PVE::QemuServer::RunState;
# If the job was started with auto-dismiss=false, it's necessary to dismiss it manually. Using this
# option is useful to get the error for failed jobs here. QEMU's job lock should make it impossible
# to see a job in 'concluded' state when auto-dismiss=true.
-# $info is the 'BlockJobInfo' for the job returned by query-block-jobs.
+# $qmp_info is the 'BlockJobInfo' for the job returned by query-block-jobs.
# $job is the information about the job recorded on the PVE-side.
# A block node $job->{'detach-node-name'} will be detached if present.
sub qemu_handle_concluded_blockjob {
--
2.47.3
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 02/18] qmp client: add default timeouts for more block commands
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 01/18] block job: fix variable name in documentation Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 03/18] drive: introduce drive_uses_qsd_fuse() helper Fiona Ebner
` (15 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Based on the pre-existing defaults for similar commands, commands that
add something get 1 minute, while commands that create block jobs or
remove something get 10 minutes, since the latter might need to wait
for in-flight IO to finish.
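For illustration, a hypothetical call site; this assumes cmd() is
invoked with a QMP peer like queue_cmd() elsewhere in this series, and
that an explicit trailing timeout still takes precedence over these
defaults ($qmp_peer, $blockdev and $opts are placeholders):

    use PVE::QMPClient;

    my $qmpclient = PVE::QMPClient->new();

    # no timeout passed: 'blockdev-add' now falls into the 1-minute bucket
    $qmpclient->cmd($qmp_peer, { execute => 'blockdev-add', arguments => $blockdev });

    # an explicit timeout always wins over the defaults
    $qmpclient->cmd($qmp_peer, { execute => 'block-stream', arguments => $opts }, 30 * 60);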
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QMPClient.pm | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/src/PVE/QMPClient.pm b/src/PVE/QMPClient.pm
index c3ed0e32..a6f77032 100644
--- a/src/PVE/QMPClient.pm
+++ b/src/PVE/QMPClient.pm
@@ -120,6 +120,8 @@ sub cmd {
$timeout = 3 * 60;
} elsif (
$cmd->{execute} eq 'blockdev-add'
+ || $cmd->{execute} eq 'blockdev-insert-medium'
+ || $cmd->{execute} eq 'block-export-add'
|| $cmd->{execute} eq 'device_add'
|| $cmd->{execute} eq 'device_del'
|| $cmd->{execute} eq 'netdev_add'
@@ -130,8 +132,13 @@ sub cmd {
$timeout = 60;
} elsif (
$cmd->{execute} eq 'backup-cancel'
+ || $cmd->{execute} eq 'block-commit'
+ || $cmd->{execute} eq 'block-export-del'
+ || $cmd->{execute} eq 'block-stream'
|| $cmd->{execute} eq 'blockdev-del'
|| $cmd->{execute} eq 'blockdev-mirror'
+ || $cmd->{execute} eq 'blockdev-remove-medium'
+ || $cmd->{execute} eq 'blockdev-reopen'
|| $cmd->{execute} eq 'block-job-cancel'
|| $cmd->{execute} eq 'block-job-complete'
|| $cmd->{execute} eq 'drive-mirror'
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 03/18] drive: introduce drive_uses_qsd_fuse() helper
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 01/18] block job: fix variable name in documentation Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 02/18] qmp client: add default timeouts for more block commands Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 04/18] monitor: add vm_qmp_peer() helper Fiona Ebner
` (14 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
In preparation for supporting snapshot operations for drives that are
exported via QSD FUSE. A central drive_uses_qsd_fuse() helper makes it
possible to decide consistently whether the QMP peer is the QEMU
storage daemon or the main QEMU instance.
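A sketch of the intended call pattern; note that the peer helpers
qsd_peer() and vm_qmp_peer() are only introduced by the following two
patches, and $storecfg, $vmid and $conf are placeholders:

    use PVE::QemuServer::Drive qw(parse_drive);
    use PVE::QemuServer::Monitor qw(qsd_peer vm_qmp_peer);

    my $drive = parse_drive('tpmstate0', $conf->{tpmstate0});

    # route QMP commands to whichever process owns the drive's block nodes
    my $qmp_peer =
        PVE::QemuServer::Drive::drive_uses_qsd_fuse($storecfg, $drive)
        ? qsd_peer($vmid) # QEMU storage daemon (FUSE-exported non-raw tpmstate)
        : vm_qmp_peer($vmid); # main QEMU instance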
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 7 +++----
src/PVE/QemuServer/Drive.pm | 12 ++++++++++++
2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index a7fbec14..ed1cab79 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2854,12 +2854,11 @@ sub start_swtpm {
my $tpm = parse_drive("tpmstate0", $tpmdrive);
my ($storeid) = PVE::Storage::parse_volume_id($tpm->{file}, 1);
if ($storeid) {
- my $format = checked_volume_format($storecfg, $tpm->{file});
- if ($format eq 'raw') {
- $state = PVE::Storage::map_volume($storecfg, $tpm->{file});
- } else {
+ if (PVE::QemuServer::Drive::drive_uses_qsd_fuse($storecfg, $tpm)) {
PVE::QemuServer::QSD::start($vmid);
$state = PVE::QemuServer::QSD::add_fuse_export($vmid, $tpm, 'tpmstate0');
+ } else {
+ $state = PVE::Storage::map_volume($storecfg, $tpm->{file});
}
} else {
$state = $tpm->{file};
diff --git a/src/PVE/QemuServer/Drive.pm b/src/PVE/QemuServer/Drive.pm
index c772c803..912c2b47 100644
--- a/src/PVE/QemuServer/Drive.pm
+++ b/src/PVE/QemuServer/Drive.pm
@@ -1149,4 +1149,16 @@ sub detect_zeroes_cmdline_option {
return 'on';
}
+sub drive_uses_qsd_fuse {
+ my ($storecfg, $drive) = @_;
+
+ if ($drive->{interface} eq 'tpmstate') {
+ my ($storeid) = PVE::Storage::parse_volume_id($drive->{file}, 1);
+ my $format = checked_volume_format($storecfg, $drive->{file});
+ return $storeid && $format ne 'raw';
+ }
+
+ return;
+}
+
1;
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 04/18] monitor: add vm_qmp_peer() helper
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (2 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 03/18] drive: introduce drive_uses_qsd_fuse() helper Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 05/18] monitor: add qsd_peer() helper Fiona Ebner
` (13 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Avoid duplicating this information to ensure consistency and improve
readability.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 10 ++++------
src/PVE/QemuServer/Blockdev.pm | 4 ++--
src/PVE/QemuServer/Monitor.pm | 9 ++++++++-
src/PVE/VZDump/QemuServer.pm | 4 ++--
4 files changed, 16 insertions(+), 11 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index ed1cab79..177fc7b1 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -83,7 +83,7 @@ use PVE::QemuServer::DriveDevice qw(print_drivedevice_full scsihw_infos);
use PVE::QemuServer::Machine;
use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::MetaInfo;
-use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Monitor qw(mon_cmd vm_qmp_peer);
use PVE::QemuServer::Network;
use PVE::QemuServer::OVMF;
use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
@@ -2733,7 +2733,7 @@ sub vmstatus {
my $statuscb = sub {
my ($vmid, $resp) = @_;
- my $qmp_peer = { name => "VM $vmid", id => $vmid, type => 'qmp' };
+ my $qmp_peer = vm_qmp_peer($vmid);
$qmpclient->queue_cmd($qmp_peer, $proxmox_support_cb, 'query-proxmox-support');
$qmpclient->queue_cmd($qmp_peer, $blockstatscb, 'query-blockstats');
@@ -2755,8 +2755,7 @@ sub vmstatus {
foreach my $vmid (keys %$list) {
next if $opt_vmid && ($vmid ne $opt_vmid);
next if !$res->{$vmid}->{pid}; # not running
- my $qmp_peer = { name => "VM $vmid", id => $vmid, type => 'qmp' };
- $qmpclient->queue_cmd($qmp_peer, $statuscb, 'query-status');
+ $qmpclient->queue_cmd(vm_qmp_peer($vmid), $statuscb, 'query-status');
}
$qmpclient->queue_execute(undef, 2);
@@ -3213,8 +3212,7 @@ sub config_to_command {
my $use_virtio = 0;
- my $qmpsocket =
- PVE::QemuServer::Helpers::qmp_socket({ name => "VM $vmid", id => $vmid, type => 'qmp' });
+ my $qmpsocket = PVE::QemuServer::Helpers::qmp_socket(vm_qmp_peer($vmid));
push @$cmd, '-chardev', "socket,id=qmp,path=$qmpsocket,server=on,wait=off";
push @$cmd, '-mon', "chardev=qmp,mode=control";
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index af7e769b..17a4c8a0 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -16,7 +16,7 @@ use PVE::QemuServer::BlockJob;
use PVE::QemuServer::Drive qw(drive_is_cdrom);
use PVE::QemuServer::Helpers;
use PVE::QemuServer::Machine;
-use PVE::QemuServer::Monitor qw(mon_cmd qmp_cmd);
+use PVE::QemuServer::Monitor qw(mon_cmd qmp_cmd vm_qmp_peer);
# gives ($host, $port, $export)
my $NBD_TCP_PATH_RE_3 = qr/nbd:(\S+):(\d+):exportname=(\S+)/;
@@ -583,7 +583,7 @@ sub attach {
if ($options->{qsd}) {
$qmp_peer = { name => "QEMU storage daemon $id", id => $id, type => 'qsd' };
} else {
- $qmp_peer = { name => "VM $id", id => $id, type => 'qmp' };
+ $qmp_peer = vm_qmp_peer($id);
}
my $machine_version;
diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index e5278881..7ad0f7db 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -11,6 +11,7 @@ use base 'Exporter';
our @EXPORT_OK = qw(
mon_cmd
qmp_cmd
+ vm_qmp_peer
);
=head3 qmp_cmd
@@ -102,6 +103,12 @@ sub qmp_cmd {
return $res;
}
+sub vm_qmp_peer {
+ my ($vmid) = @_;
+
+ return { name => "VM $vmid", id => $vmid, type => 'qmp' };
+}
+
sub qsd_cmd {
my ($id, $execute, %params) = @_;
@@ -121,7 +128,7 @@ sub hmp_cmd {
my ($vmid, $cmdline, $timeout) = @_;
return qmp_cmd(
- { name => "VM $vmid", id => $vmid, type => 'qmp' }, 'human-monitor-command',
+ vm_qmp_peer($vmid), 'human-monitor-command',
'command-line' => $cmdline,
timeout => $timeout,
);
diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
index 84ebbe80..ef398023 100644
--- a/src/PVE/VZDump/QemuServer.pm
+++ b/src/PVE/VZDump/QemuServer.pm
@@ -34,7 +34,7 @@ use PVE::QemuServer::Blockdev;
use PVE::QemuServer::Drive qw(checked_volume_format);
use PVE::QemuServer::Helpers;
use PVE::QemuServer::Machine;
-use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Monitor qw(mon_cmd vm_qmp_peer);
use PVE::QemuServer::QMPHelpers;
use base qw (PVE::VZDump::Plugin);
@@ -995,7 +995,7 @@ sub archive_vma {
}
my $qmpclient = PVE::QMPClient->new();
- my $qmp_peer = { name => "VM $vmid", id => $vmid, type => 'qmp' };
+ my $qmp_peer = vm_qmp_peer($vmid);
my $backup_cb = sub {
my ($vmid, $resp) = @_;
$backup_job_uuid = $resp->{return}->{UUID};
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 05/18] monitor: add qsd_peer() helper
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (3 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 04/18] monitor: add vm_qmp_peer() helper Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 06/18] blockdev: rename variable in get_node_name_below_throttle() for readability Fiona Ebner
` (12 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Avoid duplicating this information to ensure consistency and improve
readability.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/Blockdev.pm | 9 ++-------
src/PVE/QemuServer/Monitor.pm | 10 ++++++++--
src/PVE/QemuServer/QSD.pm | 11 ++++-------
3 files changed, 14 insertions(+), 16 deletions(-)
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 17a4c8a0..966bbc0d 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -16,7 +16,7 @@ use PVE::QemuServer::BlockJob;
use PVE::QemuServer::Drive qw(drive_is_cdrom);
use PVE::QemuServer::Helpers;
use PVE::QemuServer::Machine;
-use PVE::QemuServer::Monitor qw(mon_cmd qmp_cmd vm_qmp_peer);
+use PVE::QemuServer::Monitor qw(mon_cmd qmp_cmd qsd_peer vm_qmp_peer);
# gives ($host, $port, $export)
my $NBD_TCP_PATH_RE_3 = qr/nbd:(\S+):(\d+):exportname=(\S+)/;
@@ -579,12 +579,7 @@ state image.
sub attach {
my ($storecfg, $id, $drive, $options) = @_;
- my $qmp_peer;
- if ($options->{qsd}) {
- $qmp_peer = { name => "QEMU storage daemon $id", id => $id, type => 'qsd' };
- } else {
- $qmp_peer = vm_qmp_peer($id);
- }
+ my $qmp_peer = $options->{qsd} ? qsd_peer($id) : vm_qmp_peer($id);
my $machine_version;
if ($options->{qsd}) { # qemu-storage-daemon runs with the installed binary version
diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index 7ad0f7db..8d2c2270 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -11,6 +11,7 @@ use base 'Exporter';
our @EXPORT_OK = qw(
mon_cmd
qmp_cmd
+ qsd_peer
vm_qmp_peer
);
@@ -109,11 +110,16 @@ sub vm_qmp_peer {
return { name => "VM $vmid", id => $vmid, type => 'qmp' };
}
+sub qsd_peer {
+ my ($id) = @_;
+
+ return { name => "QEMU storage daemon $id", id => $id, type => 'qsd' };
+}
+
sub qsd_cmd {
my ($id, $execute, %params) = @_;
- return qmp_cmd({ name => "QEMU storage daemon $id", id => $id, type => 'qsd' },
- $execute, %params);
+ return qmp_cmd(qsd_peer($id), $execute, %params);
}
sub mon_cmd {
diff --git a/src/PVE/QemuServer/QSD.pm b/src/PVE/QemuServer/QSD.pm
index 9c30f7fd..bb85085a 100644
--- a/src/PVE/QemuServer/QSD.pm
+++ b/src/PVE/QemuServer/QSD.pm
@@ -11,7 +11,7 @@ use PVE::Tools;
use PVE::QemuServer::Blockdev;
use PVE::QemuServer::Helpers;
-use PVE::QemuServer::Monitor;
+use PVE::QemuServer::Monitor qw(qsd_peer);
=head3 start
@@ -22,13 +22,10 @@ Start a QEMU storage daemon instance with ID C<$id>.
=cut
sub start($id) {
- my $name = "QEMU storage daemon $id";
-
# If something is still mounted, that could block the new instance, try to clean up first.
PVE::QemuServer::Helpers::qsd_fuse_export_cleanup_files($id);
- my $qmp_socket_path =
- PVE::QemuServer::Helpers::qmp_socket({ name => $name, id => $id, type => 'qsd' });
+ my $qmp_socket_path = PVE::QemuServer::Helpers::qmp_socket(qsd_peer($id));
my $pidfile = PVE::QemuServer::Helpers::qsd_pidfile_name($id);
my $cmd = [
@@ -45,7 +42,7 @@ sub start($id) {
PVE::Tools::run_command($cmd);
my $pid = PVE::QemuServer::Helpers::qsd_running_locally($id);
- syslog("info", "$name started with PID $pid.");
+ syslog("info", "QEMU storage daemon $id started with PID $pid.");
return;
}
@@ -134,7 +131,7 @@ sub quit($id) {
}
unlink PVE::QemuServer::Helpers::qsd_pidfile_name($id);
- unlink PVE::QemuServer::Helpers::qmp_socket({ name => $name, id => $id, type => 'qsd' });
+ unlink PVE::QemuServer::Helpers::qmp_socket(qsd_peer($id));
PVE::QemuServer::Helpers::qsd_fuse_export_cleanup_files($id);
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 06/18] blockdev: rename variable in get_node_name_below_throttle() for readability
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (4 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 05/18] monitor: add qsd_peer() helper Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 07/18] blockdev: switch get_node_name_below_throttle() to use QMP peer Fiona Ebner
` (11 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
The inserted node for the front-end device queried via
get_block_info() is the top node. The next commit will allow
get_node_name_below_throttle() to also query the QSD, where there is
no front-end device. Part of the code can still be reused, but the
name 'inserted' would then just be confusing.
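To make the naming concrete, a simplified view of the per-drive graph
since the switch to -blockdev, together with the lookup the helper
performs on the top node:

    # drive-scsi0 (throttle filter, top node)
    #   '-- 'file' child: format node (e.g. qcow2)
    #         '-- 'file' child: file/protocol node

    # resolve the node attached below the throttle filter via its 'file' child
    my $children = { map { $_->{child} => $_ } $top->{children}->@* };
    my $node_name = $children->{file}->{'node-name'};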
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/Blockdev.pm | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 966bbc0d..36a0ea99 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -165,17 +165,17 @@ sub get_node_name_below_throttle {
my $block_info = get_block_info($vmid);
my $drive_id = $device_id =~ s/^drive-//r;
- my $inserted = $block_info->{$drive_id}->{inserted}
+ my $top = $block_info->{$drive_id}->{inserted}
or die "no block node inserted for drive '$drive_id'\n";
- if ($inserted->{drv} ne 'throttle') {
- die "$device_id: unexpected top node $inserted->{'node-name'} ($inserted->{drv})\n"
+ if ($top->{drv} ne 'throttle') {
+ die "$device_id: unexpected top node $top->{'node-name'} ($top->{drv})\n"
if $assert_top_is_throttle;
# before the switch to -blockdev, the top node was not throttle
- return $inserted->{'node-name'};
+ return $top->{'node-name'};
}
- my $children = { map { $_->{child} => $_ } $inserted->{children}->@* };
+ my $children = { map { $_->{child} => $_ } $top->{children}->@* };
if (my $node_name = $children->{file}->{'node-name'}) {
return $node_name;
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 07/18] blockdev: switch get_node_name_below_throttle() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (5 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 06/18] blockdev: rename variable in get_node_name_below_throttle() for readability Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 08/18] blockdev: switch detach() " Fiona Ebner
` (10 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
The get_block_info() function can only be used when the main QEMU
instance is the QMP peer, because it obtains the information from the
front-end devices. In the case of the QSD, the relevant information
can be obtained with the 'query-named-block-nodes' QMP command.
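A minimal sketch of the QSD branch, using qsd_peer() from the previous
patch ($vmid and $device_id are placeholders, error handling is
omitted):

    use PVE::QemuServer::Monitor qw(qmp_cmd qsd_peer);

    my $nodes = qmp_cmd(qsd_peer($vmid), 'query-named-block-nodes');

    # without front-end devices, look up the node by its node name directly
    my ($top) = grep { $_->{'node-name'} eq $device_id } $nodes->@*;
    die "no block node with node-name '$device_id'\n" if !$top;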
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 8 +++++---
src/PVE/QemuServer/BlockJob.pm | 2 +-
src/PVE/QemuServer/Blockdev.pm | 22 ++++++++++++++++------
3 files changed, 22 insertions(+), 10 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 177fc7b1..600072c3 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7280,7 +7280,8 @@ sub pbs_live_restore {
# removes itself once all backing images vanish with 'auto-remove=on')
my $jobs = {};
for my $ds (sort keys %$restored_disks) {
- my $node_name = PVE::QemuServer::Blockdev::get_node_name_below_throttle($vmid, $ds);
+ my $node_name =
+ PVE::QemuServer::Blockdev::get_node_name_below_throttle(vm_qmp_peer($vmid), $ds);
my $job_id = "restore-$ds";
mon_cmd(
$vmid, 'block-stream',
@@ -7400,8 +7401,9 @@ sub live_import_from_files {
# removes itself once all backing images vanish with 'auto-remove=on')
my $jobs = {};
for my $ds (sort keys %$live_restore_backing) {
- my $node_name =
- PVE::QemuServer::Blockdev::get_node_name_below_throttle($vmid, "drive-$ds");
+ my $node_name = PVE::QemuServer::Blockdev::get_node_name_below_throttle(
+ vm_qmp_peer($vmid), "drive-$ds",
+ );
my $job_id = "restore-$ds";
mon_cmd(
$vmid, 'block-stream',
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 85d86022..f54b783a 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -448,7 +448,7 @@ sub blockdev_mirror {
# Need to replace the node below the top node. This is not necessarily a format node, for
# example, it can also be a zeroinit node by a previous mirror! So query QEMU itself.
my $source_node_name =
- PVE::QemuServer::Blockdev::get_node_name_below_throttle($vmid, $device_id, 1);
+ PVE::QemuServer::Blockdev::get_node_name_below_throttle(vm_qmp_peer($vmid), $device_id, 1);
# Copy original drive config (aio, cache, discard, ...):
my $dest_drive = dclone($source->{drive});
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 36a0ea99..52875010 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -161,12 +161,22 @@ sub top_node_name {
}
sub get_node_name_below_throttle {
- my ($vmid, $device_id, $assert_top_is_throttle) = @_;
+ my ($qmp_peer, $device_id, $assert_top_is_throttle) = @_;
- my $block_info = get_block_info($vmid);
- my $drive_id = $device_id =~ s/^drive-//r;
- my $top = $block_info->{$drive_id}->{inserted}
- or die "no block node inserted for drive '$drive_id'\n";
+ my $top;
+ if ($qmp_peer->{type} eq 'qmp') { # get_block_info() only works if there are front-end devices.
+ my $block_info = get_block_info($qmp_peer->{id});
+ my $drive_id = $device_id =~ s/^drive-//r;
+ $top = $block_info->{$drive_id}->{inserted}
+ or die "no block node inserted for drive '$drive_id'\n";
+ } else {
+ my $named_block_node_info = qmp_cmd($qmp_peer, 'query-named-block-nodes');
+ for my $info ($named_block_node_info->@*) {
+ next if $info->{'node-name'} ne $device_id;
+ $top = $info;
+ last;
+ }
+ }
if ($top->{drv} ne 'throttle') {
die "$device_id: unexpected top node $top->{'node-name'} ($top->{drv})\n"
@@ -945,7 +955,7 @@ sub blockdev_replace {
my $src_blockdev_name;
if ($src_snap eq 'current') {
# there might be other nodes on top like zeroinit, look up the current node below throttle
- $src_blockdev_name = get_node_name_below_throttle($vmid, $deviceid, 1);
+ $src_blockdev_name = get_node_name_below_throttle(vm_qmp_peer($vmid), $deviceid, 1);
} else {
$src_name_options = { 'snapshot-name' => $src_snap };
$src_blockdev_name = get_node_name('fmt', $drive_id, $volid, $src_name_options);
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 08/18] blockdev: switch detach() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (6 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 07/18] blockdev: switch get_node_name_below_throttle() to use QMP peer Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 09/18] blockdev: switch blockdev_replace() " Fiona Ebner
` (9 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 6 +++---
src/PVE/QemuServer/BlockJob.pm | 4 ++--
src/PVE/QemuServer/Blockdev.pm | 26 +++++++++++++-------------
src/PVE/VZDump/QemuServer.pm | 2 +-
4 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 600072c3..e3a8d116 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -4113,7 +4113,7 @@ sub qemu_drivedel {
# for the switch to -blockdev
if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 10, 0)) {
- PVE::QemuServer::Blockdev::detach($vmid, "drive-$deviceid");
+ PVE::QemuServer::Blockdev::detach(vm_qmp_peer($vmid), "drive-$deviceid");
return 1;
} else {
my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_del drive-$deviceid", 10 * 60);
@@ -7301,7 +7301,7 @@ sub pbs_live_restore {
. " to disconnect from Proxmox Backup Server\n";
for my $ds (sort keys %$restored_disks) {
- PVE::QemuServer::Blockdev::detach($vmid, "$ds-pbs");
+ PVE::QemuServer::Blockdev::detach(vm_qmp_peer($vmid), "$ds-pbs");
}
close($qmeventd_fd);
@@ -7422,7 +7422,7 @@ sub live_import_from_files {
print "restore-drive jobs finished successfully, removing all tracking block devices\n";
for my $ds (sort keys %$live_restore_backing) {
- PVE::QemuServer::Blockdev::detach($vmid, "drive-$ds-restore");
+ PVE::QemuServer::Blockdev::detach(vm_qmp_peer($vmid), "drive-$ds-restore");
}
close($qmeventd_fd);
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index f54b783a..dccd0ab6 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -34,7 +34,7 @@ sub qemu_handle_concluded_blockjob {
$job->{'detach-node-name'} = $job->{'target-node-name'} if $qmp_info->{error} || $job->{cancel};
if (my $node_name = $job->{'detach-node-name'}) {
- eval { PVE::QemuServer::Blockdev::detach($vmid, $node_name); };
+ eval { PVE::QemuServer::Blockdev::detach(vm_qmp_peer($vmid), $node_name); };
log_warn($@) if $@;
}
@@ -502,7 +502,7 @@ sub blockdev_mirror {
if (my $err = $@) {
eval { qemu_blockjobs_cancel($vmid, $jobs) };
log_warn("unable to cancel block jobs - $@");
- eval { PVE::QemuServer::Blockdev::detach($vmid, $target_node_name); };
+ eval { PVE::QemuServer::Blockdev::detach(vm_qmp_peer($vmid), $target_node_name); };
log_warn("unable to delete blockdev '$target_node_name' - $@");
die "error starting blockdev mirrror - $err";
}
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 52875010..fa252ce0 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -632,16 +632,16 @@ sub attach {
=head3 detach
- detach($vmid, $node_name);
+ detach($qmp_peer, $node_name);
-Detach the block device C<$node_name> from the VM C<$vmid>. Also removes associated child block
-nodes.
+Detach the block device C<$node_name> from the QMP peer C<$qmp_peer>. Also removes associated child
+block nodes.
Parameters:
=over
-=item C<$vmid>: The ID of the virtual machine.
+=item C<$qmp_peer>: QMP peer information.
=item C<$node_name>: The node name identifying the block node in QEMU.
@@ -650,11 +650,11 @@ Parameters:
=cut
sub detach {
- my ($vmid, $node_name) = @_;
+ my ($qmp_peer, $node_name) = @_;
die "Blockdev::detach - no node name\n" if !$node_name;
- my $block_info = mon_cmd($vmid, "query-named-block-nodes");
+ my $block_info = qmp_cmd($qmp_peer, "query-named-block-nodes");
$block_info = { map { $_->{'node-name'} => $_ } $block_info->@* };
my $remove_throttle_group_id;
@@ -665,7 +665,7 @@ sub detach {
while ($node_name) {
last if !$block_info->{$node_name}; # already gone
- my $res = mon_cmd($vmid, 'blockdev-del', 'node-name' => "$node_name", noerr => 1);
+ my $res = qmp_cmd($qmp_peer, 'blockdev-del', 'node-name' => "$node_name", noerr => 1);
if (my $err = $res->{error}) {
last if $err =~ m/Failed to find node with node-name/; # already gone
die "deleting blockdev '$node_name' failed : $err\n";
@@ -679,7 +679,7 @@ sub detach {
}
if ($remove_throttle_group_id) {
- eval { mon_cmd($vmid, 'object-del', id => $remove_throttle_group_id); };
+ eval { qmp_cmd($qmp_peer, 'object-del', id => $remove_throttle_group_id); };
die "removing throttle group failed - $@\n" if $@;
}
@@ -689,7 +689,7 @@ sub detach {
sub detach_tpm_backup_node {
my ($vmid) = @_;
- detach($vmid, "drive-tpmstate0-backup");
+ detach(vm_qmp_peer($vmid), "drive-tpmstate0-backup");
}
sub detach_fleecing_block_nodes {
@@ -701,7 +701,7 @@ sub detach_fleecing_block_nodes {
next if !is_fleecing_top_node($node_name);
$log_func->('info', "detaching (old) fleecing image '$node_name'");
- eval { detach($vmid, $node_name) };
+ eval { detach(vm_qmp_peer($vmid), $node_name) };
$log_func->('warn', "error detaching (old) fleecing image '$node_name' - $@") if $@;
}
}
@@ -741,7 +741,7 @@ my sub blockdev_change_medium {
# force eject if locked
mon_cmd($vmid, "blockdev-open-tray", force => JSON::true, id => "$qdev_id");
mon_cmd($vmid, "blockdev-remove-medium", id => "$qdev_id");
- detach($vmid, "drive-$qdev_id");
+ detach(vm_qmp_peer($vmid), "drive-$qdev_id");
return if $drive->{file} eq 'none';
@@ -908,7 +908,7 @@ sub blockdev_external_snapshot {
sub blockdev_delete {
my ($storecfg, $vmid, $drive, $file_blockdev, $fmt_blockdev, $snap) = @_;
- eval { detach($vmid, $fmt_blockdev->{'node-name'}); };
+ eval { detach(vm_qmp_peer($vmid), $fmt_blockdev->{'node-name'}); };
warn "detaching block node for $file_blockdev->{filename} failed - $@" if $@;
#delete the file (don't use vdisk_free as we don't want to delete all snapshot chain)
@@ -1022,7 +1022,7 @@ sub blockdev_replace {
}
# delete old file|fmt nodes
- eval { detach($vmid, $src_blockdev_name); };
+ eval { detach(vm_qmp_peer($vmid), $src_blockdev_name); };
warn "detaching block node for $src_snap failed - $@" if $@;
}
diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
index ef398023..25b8aa79 100644
--- a/src/PVE/VZDump/QemuServer.pm
+++ b/src/PVE/VZDump/QemuServer.pm
@@ -631,7 +631,7 @@ my sub detach_fleecing_images {
for my $di ($disks->@*) {
if (my $volid = $di->{'fleece-volid'}) {
my $node_name = "$di->{qmdevice}-fleecing";
- eval { PVE::QemuServer::Blockdev::detach($vmid, $node_name) };
+ eval { PVE::QemuServer::Blockdev::detach(vm_qmp_peer($vmid), $node_name) };
}
}
}
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 09/18] blockdev: switch blockdev_replace() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (7 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 08/18] blockdev: switch detach() " Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 10/18] blockdev: switch blockdev_external_snapshot() " Fiona Ebner
` (8 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 2 +-
src/PVE/QemuServer/Blockdev.pm | 20 ++++++++++----------
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index e3a8d116..638bc215 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -4453,7 +4453,7 @@ sub qemu_volume_snapshot_delete {
PVE::QemuServer::Blockdev::blockdev_replace(
$storecfg,
- $vmid,
+ vm_qmp_peer($vmid),
$machine_version,
$attached_deviceid,
$drive,
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index fa252ce0..614d71f4 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -868,7 +868,7 @@ sub blockdev_external_snapshot {
#reopen current to snap
blockdev_replace(
$storecfg,
- $vmid,
+ vm_qmp_peer($vmid),
$machine_version,
$deviceid,
$drive,
@@ -937,7 +937,7 @@ my sub blockdev_relative_backing_file {
sub blockdev_replace {
my (
$storecfg,
- $vmid,
+ $qmp_peer,
$machine_version,
$deviceid,
$drive,
@@ -955,7 +955,7 @@ sub blockdev_replace {
my $src_blockdev_name;
if ($src_snap eq 'current') {
# there might be other nodes on top like zeroinit, look up the current node below throttle
- $src_blockdev_name = get_node_name_below_throttle(vm_qmp_peer($vmid), $deviceid, 1);
+ $src_blockdev_name = get_node_name_below_throttle($qmp_peer, $deviceid, 1);
} else {
$src_name_options = { 'snapshot-name' => $src_snap };
$src_blockdev_name = get_node_name('fmt', $drive_id, $volid, $src_name_options);
@@ -983,15 +983,15 @@ sub blockdev_replace {
get_node_name('fmt', $drive_id, $volid, { 'snapshot-name' => $parent_snap });
$target_fmt_blockdev->{backing} = $parent_fmt_nodename;
}
- mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
+ qmp_cmd($qmp_peer, 'blockdev-add', %$target_fmt_blockdev);
#reopen the current throttlefilter nodename with the target fmt nodename
my $throttle_blockdev =
generate_throttle_blockdev($drive, $target_fmt_blockdev->{'node-name'}, {});
- mon_cmd($vmid, 'blockdev-reopen', options => [$throttle_blockdev]);
+ qmp_cmd($qmp_peer, 'blockdev-reopen', options => [$throttle_blockdev]);
} else {
#intermediate snapshot
- mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
+ qmp_cmd($qmp_peer, 'blockdev-add', %$target_fmt_blockdev);
#reopen the parent node with the new target fmt backing node
my $parent_file_blockdev = generate_file_blockdev(
@@ -1007,14 +1007,14 @@ sub blockdev_replace {
{ 'snapshot-name' => $parent_snap },
);
$parent_fmt_blockdev->{backing} = $target_fmt_blockdev->{'node-name'};
- mon_cmd($vmid, 'blockdev-reopen', options => [$parent_fmt_blockdev]);
+ qmp_cmd($qmp_peer, 'blockdev-reopen', options => [$parent_fmt_blockdev]);
my $backing_file =
blockdev_relative_backing_file($target_file_blockdev, $parent_file_blockdev);
#change backing-file in qcow2 metadatas
- mon_cmd(
- $vmid, 'change-backing-file',
+ qmp_cmd(
+ $qmp_peer, 'change-backing-file',
device => $deviceid,
'image-node-name' => $parent_fmt_blockdev->{'node-name'},
'backing-file' => $backing_file,
@@ -1022,7 +1022,7 @@ sub blockdev_replace {
}
# delete old file|fmt nodes
- eval { detach(vm_qmp_peer($vmid), $src_blockdev_name); };
+ eval { detach($qmp_peer, $src_blockdev_name); };
warn "detaching block node for $src_snap failed - $@" if $@;
}
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 10/18] blockdev: switch blockdev_external_snapshot() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (8 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 09/18] blockdev: switch blockdev_replace() " Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 11/18] block job: switch qemu_handle_concluded_blockjob() " Fiona Ebner
` (7 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 8 +++++++-
src/PVE/QemuServer/Blockdev.pm | 10 +++++-----
2 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 638bc215..6f2ee75c 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -4382,7 +4382,13 @@ sub qemu_volume_snapshot {
my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
my $parent_snap = $snapshots->{'current'}->{parent};
PVE::QemuServer::Blockdev::blockdev_external_snapshot(
- $storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap,
+ $storecfg,
+ vm_qmp_peer($vmid),
+ $machine_version,
+ $deviceid,
+ $drive,
+ $snap,
+ $parent_snap,
);
} elsif ($do_snapshots_type eq 'storage') {
PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 614d71f4..8db17683 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -856,7 +856,7 @@ sub set_io_throttle {
}
sub blockdev_external_snapshot {
- my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap) = @_;
+ my ($storecfg, $qmp_peer, $machine_version, $deviceid, $drive, $snap, $parent_snap) = @_;
print "Creating a new current volume with $snap as backing snap\n";
@@ -868,7 +868,7 @@ sub blockdev_external_snapshot {
#reopen current to snap
blockdev_replace(
$storecfg,
- vm_qmp_peer($vmid),
+ $qmp_peer,
$machine_version,
$deviceid,
$drive,
@@ -895,11 +895,11 @@ sub blockdev_external_snapshot {
#backing need to be forced to undef in blockdev, to avoid reopen of backing-file on blockdev-add
$new_fmt_blockdev->{backing} = undef;
- mon_cmd($vmid, 'blockdev-add', %$new_fmt_blockdev);
+ qmp_cmd($qmp_peer, 'blockdev-add', %$new_fmt_blockdev);
print "blockdev-snapshot: reopen current with $snap backing image\n";
- mon_cmd(
- $vmid, 'blockdev-snapshot',
+ qmp_cmd(
+ $qmp_peer, 'blockdev-snapshot',
node => $snap_fmt_blockdev->{'node-name'},
overlay => $new_fmt_blockdev->{'node-name'},
);
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 11/18] block job: switch qemu_handle_concluded_blockjob() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (9 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 10/18] blockdev: switch blockdev_external_snapshot() " Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 12/18] block job: switch qemu_blockjobs_cancel() " Fiona Ebner
` (6 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/BlockJob.pm | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index dccd0ab6..33ff66bc 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -13,7 +13,7 @@ use PVE::Storage;
use PVE::QemuServer::Agent qw(qga_check_running);
use PVE::QemuServer::Blockdev;
use PVE::QemuServer::Drive qw(checked_volume_format);
-use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Monitor qw(mon_cmd qmp_cmd vm_qmp_peer);
use PVE::QemuServer::RunState;
# If the job was started with auto-dismiss=false, it's necessary to dismiss it manually. Using this
@@ -23,9 +23,9 @@ use PVE::QemuServer::RunState;
# $job is the information about the job recorded on the PVE-side.
# A block node $job->{'detach-node-name'} will be detached if present.
sub qemu_handle_concluded_blockjob {
- my ($vmid, $job_id, $qmp_info, $job) = @_;
+ my ($qmp_peer, $job_id, $qmp_info, $job) = @_;
- eval { mon_cmd($vmid, 'job-dismiss', id => $job_id); };
+ eval { qmp_cmd($qmp_peer, 'job-dismiss', id => $job_id); };
log_warn("$job_id: failed to dismiss job - $@") if $@;
# If there was an error or if the job was cancelled, always detach the target. This is correct
@@ -34,7 +34,7 @@ sub qemu_handle_concluded_blockjob {
$job->{'detach-node-name'} = $job->{'target-node-name'} if $qmp_info->{error} || $job->{cancel};
if (my $node_name = $job->{'detach-node-name'}) {
- eval { PVE::QemuServer::Blockdev::detach(vm_qmp_peer($vmid), $node_name); };
+ eval { PVE::QemuServer::Blockdev::detach($qmp_peer, $node_name); };
log_warn($@) if $@;
}
@@ -61,7 +61,7 @@ sub qemu_blockjobs_cancel {
foreach my $job (keys %$jobs) {
my $info = $running_jobs->{$job};
eval {
- qemu_handle_concluded_blockjob($vmid, $job, $info, $jobs->{$job})
+ qemu_handle_concluded_blockjob(vm_qmp_peer($vmid), $job, $info, $jobs->{$job})
if $info && $info->{status} eq 'concluded';
};
log_warn($@) if $@; # only warn and proceed with canceling other jobs
@@ -120,8 +120,11 @@ sub qemu_drive_mirror_monitor {
die "$job_id: '$op' has been cancelled\n" if !defined($job);
- qemu_handle_concluded_blockjob($vmid, $job_id, $job, $jobs->{$job_id})
- if $job && $job->{status} eq 'concluded';
+ if ($job && $job->{status} eq 'concluded') {
+ qemu_handle_concluded_blockjob(
+ vm_qmp_peer($vmid), $job_id, $job, $jobs->{$job_id},
+ );
+ }
my $busy = $job->{busy};
my $ready = $job->{ready};
@@ -346,7 +349,7 @@ sub qemu_drive_mirror_switch_to_active_mode {
my $info = $running_jobs->{$job};
if ($info->{status} eq 'concluded') {
- qemu_handle_concluded_blockjob($vmid, $job, $info, $jobs->{$job});
+ qemu_handle_concluded_blockjob(vm_qmp_peer($vmid), $job, $info, $jobs->{$job});
# The 'concluded' state should occur here if and only if the job failed, so the
# 'die' below should be unreachable, but play it safe.
die "$job: expected job to have failed, but no error was set\n";
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 12/18] block job: switch qemu_blockjobs_cancel() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (10 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 11/18] block job: switch qemu_handle_concluded_blockjob() " Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 13/18] block job: switch qemu_drive_mirror_monitor() " Fiona Ebner
` (5 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/API2/Qemu.pm | 6 ++++--
src/PVE/QemuMigrate.pm | 4 ++--
src/PVE/QemuServer/BlockJob.pm | 16 ++++++++--------
3 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 190878de..5a627936 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -35,7 +35,7 @@ use PVE::QemuServer::CPUConfig;
use PVE::QemuServer::Drive qw(checked_volume_format checked_parse_volname);
use PVE::QemuServer::Helpers;
use PVE::QemuServer::ImportDisk;
-use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Monitor qw(mon_cmd vm_qmp_peer);
use PVE::QemuServer::Machine;
use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::MetaInfo;
@@ -4607,7 +4607,9 @@ __PACKAGE__->register_method({
PVE::AccessControl::add_vm_to_pool($newid, $pool) if $pool;
};
if (my $err = $@) {
- eval { PVE::QemuServer::BlockJob::qemu_blockjobs_cancel($vmid, $jobs) };
+ eval {
+ PVE::QemuServer::BlockJob::qemu_blockjobs_cancel(vm_qmp_peer($vmid), $jobs);
+ };
sleep 1; # some storage like rbd need to wait before release volume - really?
foreach my $volid (@$newvollist) {
diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 8fa84080..b7aba504 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -33,7 +33,7 @@ use PVE::QemuServer::CPUConfig;
use PVE::QemuServer::Drive qw(checked_volume_format);
use PVE::QemuServer::Helpers qw(min_version);
use PVE::QemuServer::Machine;
-use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Monitor qw(mon_cmd vm_qmp_peer);
use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::Network;
use PVE::QemuServer::QMPHelpers;
@@ -1592,7 +1592,7 @@ sub phase2_cleanup {
if ($self->{storage_migration}) {
eval {
PVE::QemuServer::BlockJob::qemu_blockjobs_cancel(
- $vmid,
+ vm_qmp_peer($vmid),
$self->{storage_migration_jobs},
);
};
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 33ff66bc..49bb13c7 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -42,16 +42,16 @@ sub qemu_handle_concluded_blockjob {
}
sub qemu_blockjobs_cancel {
- my ($vmid, $jobs) = @_;
+ my ($qmp_peer, $jobs) = @_;
foreach my $job (keys %$jobs) {
print "$job: Cancelling block job\n";
- eval { mon_cmd($vmid, "block-job-cancel", device => $job); };
+ eval { qmp_cmd($qmp_peer, "block-job-cancel", device => $job); };
$jobs->{$job}->{cancel} = 1;
}
while (1) {
- my $stats = mon_cmd($vmid, "query-block-jobs");
+ my $stats = qmp_cmd($qmp_peer, "query-block-jobs");
my $running_jobs = {};
foreach my $stat (@$stats) {
@@ -61,7 +61,7 @@ sub qemu_blockjobs_cancel {
foreach my $job (keys %$jobs) {
my $info = $running_jobs->{$job};
eval {
- qemu_handle_concluded_blockjob(vm_qmp_peer($vmid), $job, $info, $jobs->{$job})
+ qemu_handle_concluded_blockjob($qmp_peer, $job, $info, $jobs->{$job})
if $info && $info->{status} eq 'concluded';
};
log_warn($@) if $@; # only warn and proceed with canceling other jobs
@@ -177,7 +177,7 @@ sub qemu_drive_mirror_monitor {
}
# if we clone a disk for a new target vm, we don't switch the disk
- qemu_blockjobs_cancel($vmid, $jobs);
+ qemu_blockjobs_cancel(vm_qmp_peer($vmid), $jobs);
if ($agent_running) {
print "unfreeze filesystem\n";
@@ -234,7 +234,7 @@ sub qemu_drive_mirror_monitor {
my $err = $@;
if ($err) {
- eval { qemu_blockjobs_cancel($vmid, $jobs) };
+ eval { qemu_blockjobs_cancel(vm_qmp_peer($vmid), $jobs) };
die "block job ($op) error: $err";
}
}
@@ -308,7 +308,7 @@ sub qemu_drive_mirror {
# if a job already runs for this device we get an error, catch it for cleanup
eval { mon_cmd($vmid, "drive-mirror", %$opts); };
if (my $err = $@) {
- eval { qemu_blockjobs_cancel($vmid, $jobs) };
+ eval { qemu_blockjobs_cancel(vm_qmp_peer($vmid), $jobs) };
warn "$@\n" if $@;
die "mirroring error: $err\n";
}
@@ -503,7 +503,7 @@ sub blockdev_mirror {
# if a job already runs for this device we get an error, catch it for cleanup
eval { mon_cmd($vmid, "blockdev-mirror", $qmp_opts->%*); };
if (my $err = $@) {
- eval { qemu_blockjobs_cancel($vmid, $jobs) };
+ eval { qemu_blockjobs_cancel(vm_qmp_peer($vmid), $jobs) };
log_warn("unable to cancel block jobs - $@");
eval { PVE::QemuServer::Blockdev::detach(vm_qmp_peer($vmid), $target_node_name); };
log_warn("unable to delete blockdev '$target_node_name' - $@");
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 13/18] block job: switch qemu_drive_mirror_monitor() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (11 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 12/18] block job: switch qemu_blockjobs_cancel() " Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 14/18] blockdev: switch blockdev_delete() " Fiona Ebner
` (4 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Take care to only allow a different destination ID when the main QEMU
instance is the peer, because the required freeze/thaw or
suspend/resume can only be done there.

Also add the missing $op argument to the signature of the mocked
function in the migration tests, for completeness.
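Hypothetical call sites illustrating the new restriction ($jobs, $qga
and $newvmid are placeholders):

    # same-instance completion works for any peer type, e.g. the QSD
    qemu_drive_mirror_monitor(qsd_peer($vmid), undef, $jobs, 'complete', 0, 'mirror');

    # a different destination VMID is only valid with the main QEMU instance,
    # because only there can the guest be frozen/thawed or suspended/resumed
    qemu_drive_mirror_monitor(vm_qmp_peer($vmid), $newvmid, $jobs, 'complete', $qga, 'mirror');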
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuMigrate.pm | 2 +-
src/PVE/QemuServer.pm | 6 ++---
src/PVE/QemuServer/BlockJob.pm | 29 ++++++++++++++---------
src/PVE/QemuServer/Blockdev.pm | 4 ++--
src/test/MigrationTest/QemuMigrateMock.pm | 2 +-
5 files changed, 25 insertions(+), 18 deletions(-)
diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index b7aba504..5177fe8b 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -1545,7 +1545,7 @@ sub phase2 {
# thus, this command changes to it to blockjob complete (see qapi docs)
eval {
PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
- $vmid, undef, $self->{storage_migration_jobs}, 'cancel',
+ vm_qmp_peer($vmid), undef, $self->{storage_migration_jobs}, 'cancel',
);
};
if (my $err = $@) {
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 6f2ee75c..80bbb059 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7300,7 +7300,7 @@ sub pbs_live_restore {
mon_cmd($vmid, 'cont');
PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
- $vmid, undef, $jobs, 'auto', 0, 'stream',
+ vm_qmp_peer($vmid), undef, $jobs, 'auto', 0, 'stream',
);
print "restore-drive jobs finished successfully, removing all tracking block devices"
@@ -7422,7 +7422,7 @@ sub live_import_from_files {
mon_cmd($vmid, 'cont');
PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
- $vmid, undef, $jobs, 'auto', 0, 'stream',
+ vm_qmp_peer($vmid), undef, $jobs, 'auto', 0, 'stream',
);
print "restore-drive jobs finished successfully, removing all tracking block devices\n";
@@ -7932,7 +7932,7 @@ sub clone_disk {
# previous drive-mirrors
if (($completion && $completion eq 'complete') && (scalar(keys %$jobs) > 0)) {
PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
- $vmid, $newvmid, $jobs, $completion, $qga,
+ vm_qmp_peer($vmid), $newvmid, $jobs, $completion, $qga,
);
}
goto no_data_clone;
diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 49bb13c7..f58bb4f6 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -84,7 +84,10 @@ sub qemu_blockjobs_cancel {
# 'skip': wait until all jobs are ready, return with block jobs in ready state
# 'auto': wait until all jobs disappear, only use for jobs which complete automatically
sub qemu_drive_mirror_monitor {
- my ($vmid, $vmiddst, $jobs, $completion, $qga, $op) = @_;
+ my ($qmp_peer, $vmiddst, $jobs, $completion, $qga, $op) = @_;
+
+ die "drive mirror: different destination is only supported when peer is main QEMU instance\n"
+ if $vmiddst && $qmp_peer->{type} ne 'qmp';
$completion //= 'complete';
$op //= "mirror";
@@ -96,7 +99,7 @@ sub qemu_drive_mirror_monitor {
while (1) {
die "block job ('$op') timed out\n" if $err_complete > 300;
- my $stats = mon_cmd($vmid, "query-block-jobs");
+ my $stats = qmp_cmd($qmp_peer, "query-block-jobs");
my $ctime = time();
my $running_jobs = {};
@@ -121,9 +124,7 @@ sub qemu_drive_mirror_monitor {
die "$job_id: '$op' has been cancelled\n" if !defined($job);
if ($job && $job->{status} eq 'concluded') {
- qemu_handle_concluded_blockjob(
- vm_qmp_peer($vmid), $job_id, $job, $jobs->{$job_id},
- );
+ qemu_handle_concluded_blockjob($qmp_peer, $job_id, $job, $jobs->{$job_id});
}
my $busy = $job->{busy};
@@ -164,7 +165,8 @@ sub qemu_drive_mirror_monitor {
# do the complete later (or has already been done)
last if $completion eq 'skip' || $completion eq 'auto';
- if ($vmiddst && $vmiddst != $vmid) {
+ if ($qmp_peer->{type} eq 'qmp' && $vmiddst && $vmiddst != $qmp_peer->{id}) {
+ my $vmid = $qmp_peer->{id};
my $agent_running = $qga && qga_check_running($vmid);
if ($agent_running) {
print "freeze filesystem\n";
@@ -177,7 +179,7 @@ sub qemu_drive_mirror_monitor {
}
# if we clone a disk for a new target vm, we don't switch the disk
- qemu_blockjobs_cancel(vm_qmp_peer($vmid), $jobs);
+ qemu_blockjobs_cancel($qmp_peer, $jobs);
if ($agent_running) {
print "unfreeze filesystem\n";
@@ -211,7 +213,7 @@ sub qemu_drive_mirror_monitor {
} else {
die "invalid completion value: $completion\n";
}
- eval { mon_cmd($vmid, $completion_command, device => $job_id) };
+ eval { qmp_cmd($qmp_peer, $completion_command, device => $job_id) };
my $err = $@;
if ($err && $err =~ m/cannot be completed/) {
print "$job_id: block job cannot be completed, trying again.\n";
@@ -234,7 +236,7 @@ sub qemu_drive_mirror_monitor {
my $err = $@;
if ($err) {
- eval { qemu_blockjobs_cancel(vm_qmp_peer($vmid), $jobs) };
+ eval { qemu_blockjobs_cancel($qmp_peer, $jobs) };
die "block job ($op) error: $err";
}
}
@@ -313,7 +315,7 @@ sub qemu_drive_mirror {
die "mirroring error: $err\n";
}
- qemu_drive_mirror_monitor($vmid, $vmiddst, $jobs, $completion, $qga);
+ qemu_drive_mirror_monitor(vm_qmp_peer($vmid), $vmiddst, $jobs, $completion, $qga);
}
# Callers should version guard this (only available with a binary >= QEMU 8.2)
@@ -510,7 +512,12 @@ sub blockdev_mirror {
die "error starting blockdev mirrror - $err";
}
qemu_drive_mirror_monitor(
- $vmid, $dest->{vmid}, $jobs, $completion, $options->{'guest-agent'}, 'mirror',
+ vm_qmp_peer($vmid),
+ $dest->{vmid},
+ $jobs,
+ $completion,
+ $options->{'guest-agent'},
+ 'mirror',
);
}
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 8db17683..b89b0f68 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -1084,7 +1084,7 @@ sub blockdev_commit {
eval {
PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
- $vmid, undef, $jobs, $complete, 0, 'commit',
+ vm_qmp_peer($vmid), undef, $jobs, $complete, 0, 'commit',
);
};
if ($@) {
@@ -1165,7 +1165,7 @@ sub blockdev_stream {
eval {
PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
- $vmid, undef, $jobs, 'auto', 0, 'stream',
+ vm_qmp_peer($vmid), undef, $jobs, 'auto', 0, 'stream',
);
};
if ($@) {
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index 421f0bb7..d5ae29a4 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -176,7 +176,7 @@ $qemu_server_blockjob_module->mock(
common_mirror_mock($source->{vmid}, $drive_id);
},
qemu_drive_mirror_monitor => sub {
- my ($vmid, $vmiddst, $jobs, $completion, $qga) = @_;
+ my ($qmp_peer, $vmiddst, $jobs, $completion, $qga, $op) = @_;
if (
$fail_config->{qemu_drive_mirror_monitor}
--
2.47.3
^ permalink raw reply [flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server 14/18] blockdev: switch blockdev_delete() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (12 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 13/18] block job: switch qemu_drive_mirror_monitor() " Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 15/18] blockdev: switch blockdev_stream() " Fiona Ebner
` (3 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
In preparation for allowing snapshot operations on QSD-FUSE-exported
TPM states.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer/Blockdev.pm | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index b89b0f68..7e43fbc9 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -906,9 +906,9 @@ sub blockdev_external_snapshot {
}
sub blockdev_delete {
- my ($storecfg, $vmid, $drive, $file_blockdev, $fmt_blockdev, $snap) = @_;
+ my ($storecfg, $qmp_peer, $drive, $file_blockdev, $fmt_blockdev, $snap) = @_;
- eval { detach(vm_qmp_peer($vmid), $fmt_blockdev->{'node-name'}); };
+ eval { detach($qmp_peer, $fmt_blockdev->{'node-name'}); };
warn "detaching block node for $file_blockdev->{filename} failed - $@" if $@;
#delete the file (don't use vdisk_free as we don't want to delete all snapshot chain)
@@ -1092,7 +1092,12 @@ sub blockdev_commit {
}
blockdev_delete(
- $storecfg, $vmid, $drive, $src_file_blockdev, $src_fmt_blockdev, $src_snap,
+ $storecfg,
+ vm_qmp_peer($vmid),
+ $drive,
+ $src_file_blockdev,
+ $src_fmt_blockdev,
+ $src_snap,
);
};
my $err = $@;
@@ -1172,7 +1177,14 @@ sub blockdev_stream {
die "Failed to complete block stream: $@\n";
}
- blockdev_delete($storecfg, $vmid, $drive, $snap_file_blockdev, $snap_fmt_blockdev, $snap);
+ blockdev_delete(
+ $storecfg,
+ vm_qmp_peer($vmid),
+ $drive,
+ $snap_file_blockdev,
+ $snap_fmt_blockdev,
+ $snap,
+ );
}
1;
--
2.47.3
* [pve-devel] [PATCH qemu-server 15/18] blockdev: switch blockdev_stream() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (13 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 14/18] blockdev: switch blockdev_delete() " Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 16/18] blockdev: switch blockdev_commit() " Fiona Ebner
` (2 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 2 +-
src/PVE/QemuServer/Blockdev.pm | 23 +++++++++++++----------
2 files changed, 14 insertions(+), 11 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 80bbb059..76f585fb 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -4472,7 +4472,7 @@ sub qemu_volume_snapshot_delete {
print "stream intermediate snapshot $snap to $childsnap\n";
PVE::QemuServer::Blockdev::blockdev_stream(
$storecfg,
- $vmid,
+ vm_qmp_peer($vmid),
$machine_version,
$attached_deviceid,
$drive,
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 7e43fbc9..569b8fd3 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -1116,8 +1116,16 @@ sub blockdev_commit {
}
sub blockdev_stream {
- my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap, $target_snap) =
- @_;
+ my (
+ $storecfg,
+ $qmp_peer,
+ $machine_version,
+ $deviceid,
+ $drive,
+ $snap,
+ $parent_snap,
+ $target_snap,
+ ) = @_;
my $volid = $drive->{file};
$target_snap = undef if $target_snap eq 'current';
@@ -1165,12 +1173,12 @@ sub blockdev_stream {
$options->{'base-node'} = $parent_fmt_blockdev->{'node-name'};
$options->{'backing-file'} = $backing_file;
- mon_cmd($vmid, 'block-stream', %$options);
+ qmp_cmd($qmp_peer, 'block-stream', %$options);
$jobs->{$job_id} = {};
eval {
PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
- vm_qmp_peer($vmid), undef, $jobs, 'auto', 0, 'stream',
+ $qmp_peer, undef, $jobs, 'auto', 0, 'stream',
);
};
if ($@) {
@@ -1178,12 +1186,7 @@ sub blockdev_stream {
}
blockdev_delete(
- $storecfg,
- vm_qmp_peer($vmid),
- $drive,
- $snap_file_blockdev,
- $snap_fmt_blockdev,
- $snap,
+ $storecfg, $qmp_peer, $drive, $snap_file_blockdev, $snap_fmt_blockdev, $snap,
);
}
--
2.47.3
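For illustration, the mechanical shape of the mon_cmd() to qmp_cmd()
conversion seen in this and the neighboring patches (a sketch; the
helper semantics are inferred from the surrounding hunks):

    # before: implicitly targets the VM's own QEMU monitor
    mon_cmd($vmid, 'block-stream', %$options);

    # after: the caller selects the QMP endpoint explicitly
    qmp_cmd(vm_qmp_peer($vmid), 'block-stream', %$options);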
* [pve-devel] [PATCH qemu-server 16/18] blockdev: switch blockdev_commit() to use QMP peer
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (14 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 15/18] blockdev: switch blockdev_stream() " Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 17/18] snapshot: support live snapshot (remove) of qcow2 TPM drive on storage with snapshot-as-volume-chain Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 18/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive " Fiona Ebner
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 2 +-
src/PVE/QemuServer/Blockdev.pm | 17 ++++++-----------
2 files changed, 7 insertions(+), 12 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 76f585fb..d021dac6 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -4447,7 +4447,7 @@ sub qemu_volume_snapshot_delete {
print "delete first snapshot $snap\n";
PVE::QemuServer::Blockdev::blockdev_commit(
$storecfg,
- $vmid,
+ vm_qmp_peer($vmid),
$machine_version,
$attached_deviceid,
$drive,
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 569b8fd3..6566e9ac 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -1027,7 +1027,7 @@ sub blockdev_replace {
}
sub blockdev_commit {
- my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $src_snap, $target_snap) = @_;
+ my ($storecfg, $qmp_peer, $machine_version, $deviceid, $drive, $src_snap, $target_snap) = @_;
my $volid = $drive->{file};
my $target_was_read_only;
@@ -1064,7 +1064,7 @@ sub blockdev_commit {
print "reopening internal read-only block node for '$target_snap' as writable\n";
$target_fmt_blockdev->{'read-only'} = JSON::false;
$target_file_blockdev->{'read-only'} = JSON::false;
- mon_cmd($vmid, 'blockdev-reopen', options => [$target_fmt_blockdev]);
+ qmp_cmd($qmp_peer, 'blockdev-reopen', options => [$target_fmt_blockdev]);
# For the guest, the drive is still read-only, because the top throttle node is.
}
@@ -1076,7 +1076,7 @@ sub blockdev_commit {
$opts->{'base-node'} = $target_fmt_blockdev->{'node-name'};
$opts->{'top-node'} = $src_fmt_blockdev->{'node-name'};
- mon_cmd($vmid, "block-commit", %$opts);
+ qmp_cmd($qmp_peer, "block-commit", %$opts);
$jobs->{$job_id} = {};
# if we commit the current, the blockjob need to be in 'complete' mode
@@ -1084,7 +1084,7 @@ sub blockdev_commit {
eval {
PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
- vm_qmp_peer($vmid), undef, $jobs, $complete, 0, 'commit',
+ $qmp_peer, undef, $jobs, $complete, 0, 'commit',
);
};
if ($@) {
@@ -1092,12 +1092,7 @@ sub blockdev_commit {
}
blockdev_delete(
- $storecfg,
- vm_qmp_peer($vmid),
- $drive,
- $src_file_blockdev,
- $src_fmt_blockdev,
- $src_snap,
+ $storecfg, $qmp_peer, $drive, $src_file_blockdev, $src_fmt_blockdev, $src_snap,
);
};
my $err = $@;
@@ -1108,7 +1103,7 @@ sub blockdev_commit {
print "re-applying read-only flag for internal block node for '$target_snap'\n";
$target_fmt_blockdev->{'read-only'} = JSON::true;
$target_file_blockdev->{'read-only'} = JSON::true;
- eval { mon_cmd($vmid, 'blockdev-reopen', options => [$target_fmt_blockdev]); };
+ eval { qmp_cmd($qmp_peer, 'blockdev-reopen', options => [$target_fmt_blockdev]); };
print "failed to re-apply read-only flag - $@\n" if $@;
}
--
2.47.3
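For illustration, the read-only handling around the commit as shown in
the hunks above, reduced to a sketch (error handling and the job
monitoring in between are elided):

    # temporarily reopen the target node writable for the commit ...
    $target_fmt_blockdev->{'read-only'} = JSON::false;
    $target_file_blockdev->{'read-only'} = JSON::false;
    qmp_cmd($qmp_peer, 'blockdev-reopen', options => [$target_fmt_blockdev]);

    # ... run block-commit and wait for the job to finish ...

    # ... and restore the read-only flag afterwards
    $target_fmt_blockdev->{'read-only'} = JSON::true;
    $target_file_blockdev->{'read-only'} = JSON::true;
    eval { qmp_cmd($qmp_peer, 'blockdev-reopen', options => [$target_fmt_blockdev]); };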
* [pve-devel] [PATCH qemu-server 17/18] snapshot: support live snapshot (remove) of qcow2 TPM drive on storage with snapshot-as-volume-chain
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (15 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 16/18] blockdev: switch blockdev_commit() " Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 18/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive " Fiona Ebner
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Adds a new drive_qmp_peer() helper to determine whether a drive is
managed by the VM's own QEMU instance or by its QSD instance.
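For illustration, how the helper is used to route a QMP command to the
correct endpoint (a sketch; the snapshot name is made up for the
example):

    # TPM state drives exported via QSD-FUSE talk to the QEMU storage
    # daemon's QMP socket, all other drives to the VM's own instance.
    my $qmp_peer = PVE::QemuServer::Drive::drive_qmp_peer($storecfg, $vmid, $drive);
    qmp_cmd($qmp_peer, 'blockdev-snapshot-internal-sync',
        device => $deviceid, name => 'snap1');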
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/QemuServer.pm | 30 +++++++++++++-----------------
src/PVE/QemuServer/Drive.pm | 8 ++++++++
2 files changed, 21 insertions(+), 17 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index d021dac6..5f2b05ca 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -83,7 +83,7 @@ use PVE::QemuServer::DriveDevice qw(print_drivedevice_full scsihw_infos);
use PVE::QemuServer::Machine;
use PVE::QemuServer::Memory qw(get_current_memory);
use PVE::QemuServer::MetaInfo;
-use PVE::QemuServer::Monitor qw(mon_cmd vm_qmp_peer);
+use PVE::QemuServer::Monitor qw(mon_cmd qmp_cmd vm_qmp_peer);
use PVE::QemuServer::Network;
use PVE::QemuServer::OVMF;
use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
@@ -4368,7 +4368,8 @@ sub qemu_volume_snapshot {
if ($do_snapshots_type eq 'internal') {
print "internal qemu snapshot\n";
- mon_cmd($vmid, 'blockdev-snapshot-internal-sync', device => $deviceid, name => $snap);
+ my $qmp_peer = PVE::QemuServer::Drive::drive_qmp_peer($storecfg, $vmid, $drive);
+ qmp_cmd($qmp_peer, 'blockdev-snapshot-internal-sync', device => $deviceid, name => $snap);
} elsif ($do_snapshots_type eq 'external') {
my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
if (!PVE::QemuServer::Machine::is_machine_version_at_least($machine_version, 10, 0)) {
@@ -4381,14 +4382,9 @@ sub qemu_volume_snapshot {
print "external qemu snapshot\n";
my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
my $parent_snap = $snapshots->{'current'}->{parent};
+ my $qmp_peer = PVE::QemuServer::Drive::drive_qmp_peer($storecfg, $vmid, $drive);
PVE::QemuServer::Blockdev::blockdev_external_snapshot(
- $storecfg,
- vm_qmp_peer($vmid),
- $machine_version,
- $deviceid,
- $drive,
- $snap,
- $parent_snap,
+ $storecfg, $qmp_peer, $machine_version, $deviceid, $drive, $snap, $parent_snap,
);
} elsif ($do_snapshots_type eq 'storage') {
PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
@@ -4416,8 +4412,9 @@ sub qemu_volume_snapshot_delete {
my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $attached_deviceid, $running);
if ($do_snapshots_type eq 'internal') {
- mon_cmd(
- $vmid,
+ my $qmp_peer = PVE::QemuServer::Drive::drive_qmp_peer($storecfg, $vmid, $drive);
+ qmp_cmd(
+ $qmp_peer,
'blockdev-snapshot-delete-internal-sync',
device => $attached_deviceid,
name => $snap,
@@ -4441,13 +4438,15 @@ sub qemu_volume_snapshot_delete {
my $parentsnap = $snapshots->{$snap}->{parent};
my $childsnap = $snapshots->{$snap}->{child};
+ my $qmp_peer = PVE::QemuServer::Drive::drive_qmp_peer($storecfg, $vmid, $drive);
+
# if we delete the first snapshot, we commit, because the first snapshot is the original base image and should be big.
# improve-me: if firstsnap > child: commit; if firstsnap < child: do a stream.
if (!$parentsnap) {
print "delete first snapshot $snap\n";
PVE::QemuServer::Blockdev::blockdev_commit(
$storecfg,
- vm_qmp_peer($vmid),
+ $qmp_peer,
$machine_version,
$attached_deviceid,
$drive,
@@ -4459,7 +4458,7 @@ sub qemu_volume_snapshot_delete {
PVE::QemuServer::Blockdev::blockdev_replace(
$storecfg,
- vm_qmp_peer($vmid),
+ $qmp_peer,
$machine_version,
$attached_deviceid,
$drive,
@@ -4472,7 +4471,7 @@ sub qemu_volume_snapshot_delete {
print "stream intermediate snapshot $snap to $childsnap\n";
PVE::QemuServer::Blockdev::blockdev_stream(
$storecfg,
- vm_qmp_peer($vmid),
+ $qmp_peer,
$machine_version,
$attached_deviceid,
$drive,
@@ -7771,9 +7770,6 @@ sub restore_tar_archive {
sub do_snapshots_type {
my ($storecfg, $volid, $deviceid, $running) = @_;
- #always use storage snapshot for tpmstate
- return 'storage' if $deviceid && $deviceid =~ m/tpmstate0/;
-
#we use storage snapshot if vm is not running or if disk is unused;
return 'storage' if !$running || !$deviceid;
diff --git a/src/PVE/QemuServer/Drive.pm b/src/PVE/QemuServer/Drive.pm
index 912c2b47..8c3f212a 100644
--- a/src/PVE/QemuServer/Drive.pm
+++ b/src/PVE/QemuServer/Drive.pm
@@ -13,6 +13,8 @@ use PVE::Storage;
use PVE::Storage::Common;
use PVE::JSONSchema qw(get_standard_option);
+use PVE::QemuServer::Monitor qw(qsd_peer vm_qmp_peer);
+
use base qw(Exporter);
our @EXPORT_OK = qw(
@@ -1161,4 +1163,10 @@ sub drive_uses_qsd_fuse {
return;
}
+sub drive_qmp_peer {
+ my ($storecfg, $vmid, $drive) = @_;
+
+ return drive_uses_qsd_fuse($storecfg, $drive) ? qsd_peer($vmid) : vm_qmp_peer($vmid);
+}
+
1;
--
2.47.3
* [pve-devel] [PATCH qemu-server 18/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain
2025-12-03 13:26 [pve-devel] [PATCH-SERIES qemu-server 00/18] fix #7066: api: allow live snapshot (remove) of qcow2 TPM drive with snapshot-as-volume-chain Fiona Ebner
` (16 preceding siblings ...)
2025-12-03 13:26 ` [pve-devel] [PATCH qemu-server 17/18] snapshot: support live snapshot (remove) of qcow2 TPM drive on storage with snapshot-as-volume-chain Fiona Ebner
@ 2025-12-03 13:26 ` Fiona Ebner
17 siblings, 0 replies; 19+ messages in thread
From: Fiona Ebner @ 2025-12-03 13:26 UTC (permalink / raw)
To: pve-devel
Commit "snapshot: support live snapshot (remove) of qcow2 TPM drive on
storage with snapshot-as-volume-chain" prepared for this. It's not a
revert of commit c7d839df ("snapshot: prohibit live snapshot (remove)
of qcow2 TPM drive on storage with snapshot-as-volume-chain"), because
there is a single limitation remaining. That limitation is removing
the top-most snapshot live when the current image is exported via
FUSE, because exporting unshares the 'resize' permission, which would
be required by both 'block-commit' and 'block-stream', for example:
> QEMU storage daemon 100 qsd command 'block-commit' failed - Permission
> conflict on node '#block017': permissions 'resize' are both required
> by node 'drive-tpmstate0' (uses node '#block017' as 'file' child) and
> unshared by commit job 'commit-drive-tpmstate0' (uses node '#block017'
> as 'main node' child).
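For illustration, the top-most snapshot is the one the current
configuration directly descends from; a minimal sketch of that check,
eliding the storage and format conditions the patch also applies:

    # deleting any snapshot further down the chain remains allowed
    return if $conf->{parent} ne $snap_name;    # not the top-most snapshot

    # only the top-most snapshot of a FUSE-exported TPM state is blocked
    die "top-most snapshot cannot be removed while the VM is running\n";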
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/API2/Qemu.pm | 39 ++++++++++++++-------------------------
1 file changed, 14 insertions(+), 25 deletions(-)
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 5a627936..859afce9 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -726,11 +726,11 @@ my $check_cpu_model_access = sub {
}
};
-# TODO switch to doing internal snapshots only for TPM? Need a way to tell the storage. Also needs
-# handling for pre-existing as-volume-chain snapshots then. Or is there a way to make QSD+swtpm
-# compatible with using volume-chain live?
-my sub assert_tpm_snapshot_compat {
- my ($vmid, $conf, $op, $snap_conf) = @_;
+# The top-most snapshot for a FUSE-exported TPM state cannot be removed live, because exporting
+# unshares the 'resize' permission, which would be required by both 'block-commit' and
+# 'block-stream'.
+my sub assert_tpm_snapshot_delete_possible {
+ my ($vmid, $conf, $snap_conf, $snap_name) = @_;
return if !$conf->{tpmstate0};
return if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
@@ -739,19 +739,19 @@ my sub assert_tpm_snapshot_compat {
my $volid = $drive->{file};
my $storecfg = PVE::Storage::config();
- if ($snap_conf) {
- return if !$snap_conf->{tpmstate0};
- my $snap_drive = PVE::QemuServer::Drive::parse_drive('tpmstate0', $snap_conf->{tpmstate0});
- return if $volid ne $snap_drive->{file};
- }
+ return if $conf->{parent} ne $snap_name; # allowed if not top-most snapshot
+
+ return if !$snap_conf->{tpmstate0};
+ my $snap_drive = PVE::QemuServer::Drive::parse_drive('tpmstate0', $snap_conf->{tpmstate0});
+ return if $volid ne $snap_drive->{file};
my $format = PVE::QemuServer::Drive::checked_volume_format($storecfg, $volid);
my ($storeid) = PVE::Storage::parse_volume_id($volid, 1);
if ($storeid && $format eq 'qcow2') {
my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
if ($scfg && $scfg->{'snapshot-as-volume-chain'}) {
- die "snapshot $op of TPM state '$volid' on storage with 'snapshot-as-volume-chain' is"
- . " not yet supported while the VM is running.\n";
+ die "top-most snapshot of TPM state '$volid' on storage with 'snapshot-as-volume-chain'"
+ . " cannot be removed while the VM is running.\n";
}
}
}
@@ -6074,14 +6074,6 @@ __PACKAGE__->register_method({
0);
my $realcmd = sub {
- PVE::QemuConfig->lock_config(
- $vmid,
- sub {
- my $conf = PVE::QemuConfig->load_config($vmid);
- assert_tpm_snapshot_compat($vmid, $conf, 'create');
- },
- );
-
PVE::Cluster::log_msg('info', $authuser, "snapshot VM $vmid: $snapname");
PVE::QemuConfig->snapshot_create(
$vmid, $snapname, $param->{vmstate}, $param->{description},
@@ -6338,11 +6330,8 @@ __PACKAGE__->register_method({
$vmid,
sub {
my $conf = PVE::QemuConfig->load_config($vmid);
- assert_tpm_snapshot_compat(
- $vmid,
- $conf,
- 'delete',
- $conf->{snapshots}->{$snapname},
+ assert_tpm_snapshot_delete_possible(
+ $vmid, $conf, $conf->{snapshots}->{$snapname}, $snapname,
);
},
);
--
2.47.3