public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

Add infrastructure for doing FUSE exports via QEMU storage daemon.
This makes it possible to use non-raw-formatted volumes for the TPM
state by exposing them to swtpm as raw via FUSE. A QEMU storage
daemon instance is associated with a given VM.
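
In terms of the helpers introduced later in the series, the intended
flow for the TPM state roughly looks like the following sketch (the
export name is illustrative; the exact call sites are in the
qemu-server patches):

    # start the per-VM QEMU storage daemon instance
    PVE::QemuServer::QSD::start($vmid);
    # attach the (e.g. qcow2) TPM state volume and export it raw via FUSE
    my $path = PVE::QemuServer::QSD::add_fuse_export($vmid, $drive, 'tpmstate0');
    # $path can now be handed to swtpm like a plain raw file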

The swtpm_setup code tries to unlink files rather than just clear the
header like it does for block devices. FUSE exports cannot be
unlinked, so the behavior is aligned to also just clear the header for
files.

To have FUSE exports available, they need to be enabled via QEMU
build flags.

A new standard option for VM image formats is introduced and, in the
end, used for the TPM state drive. The need for it already came up in
the past, for setting a format override when restoring, and it's
cleaner to use what the storage layer actually supports.

Then there are two independent improvements for qemu-server.

For the QMP client and wrappers, the QMP peer is better abstracted and
the QEMU storage daemon is added as a possible peer.

Blockdev code is updated to also support attaching a drive to the QEMU
storage daemon rather than just the main QEMU instance for a VM.

Then the QSD module is introduced and handling for TPM is added.

Finally, non-raw formats are allowed in the schema for the TPM state
drive.

Smoke tested, but not yet in-depth.

Build-dependency bump and dependency bump for pve-storage needed!
Dependency bump for QEMU and swtpm needed!

qemu:

Fiona Ebner (1):
  d/rules: enable fuse

 debian/rules | 1 +
 1 file changed, 1 insertion(+)


swtpm:

Fiona Ebner (1):
  swtpm setup: file: always just clear header rather than unlinking

 src/swtpm_setup/swtpm_backend_file.c | 42 +++++++++++-----------------
 1 file changed, 17 insertions(+), 25 deletions(-)


storage:

Fiona Ebner (1):
  common: add pve-vm-image-format standard option for VM image formats

 src/PVE/Storage/Common.pm | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)


qemu-server:

Fiona Ebner (13):
  tests: cfg2cmd: remove invalid mocking of qmp_cmd
  migration: offline volumes: drop deprecated special casing for TPM
    state
  qmp client: better abstract peer in preparation for
    qemu-storage-daemon
  monitor: qmp: make error message more precise by logging peer type
  helpers: add functions for qemu-storage-daemon instances
  monitor: qmp: allow 'qsd' peer type for qemu-storage-daemon
  monitor: align interface of qmp_cmd() with other helpers
  machine: include +pve version when getting installed machine version
  blockdev: support attaching to qemu-storage-daemon
  blockdev: attach: also return whether attached blockdev is read-only
  introduce QSD module for qemu-storage-daemon functionality
  tpm: support non-raw volumes via FUSE exports for swtpm
  fix #4693: drive: allow non-raw image formats for TPM state drive

 src/PVE/API2/Qemu.pm                 |   8 +-
 src/PVE/QMPClient.pm                 |  39 ++++-----
 src/PVE/QemuMigrate.pm               |   7 +-
 src/PVE/QemuServer.pm                |  57 +++++++++---
 src/PVE/QemuServer/BlockJob.pm       |   2 +-
 src/PVE/QemuServer/Blockdev.pm       |  33 ++++---
 src/PVE/QemuServer/Drive.pm          |   2 +
 src/PVE/QemuServer/Helpers.pm        |  57 +++++++++---
 src/PVE/QemuServer/Machine.pm        |  19 ++--
 src/PVE/QemuServer/Makefile          |   1 +
 src/PVE/QemuServer/Monitor.pm        |  76 +++++++++++-----
 src/PVE/QemuServer/QSD.pm            | 124 +++++++++++++++++++++++++++
 src/PVE/VZDump/QemuServer.pm         |   9 +-
 src/test/run_config2command_tests.pl |   1 -
 src/test/snapshot-test.pm            |   4 +-
 15 files changed, 335 insertions(+), 104 deletions(-)
 create mode 100644 src/PVE/QemuServer/QSD.pm


Summary over all repositories:
  18 files changed, 370 insertions(+), 131 deletions(-)

-- 
Generated by git-murpp 0.5.0


* [pve-devel] [PATCH qemu 01/16] d/rules: enable fuse
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

This is in preparation to allow FUSE exports of qcow2-formatted TPM
state volumes via qemu-storage-daemon.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 debian/rules | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/rules b/debian/rules
index 757ca2a..912456e 100755
--- a/debian/rules
+++ b/debian/rules
@@ -61,6 +61,7 @@ endif
 	    --disable-xen \
 	    --enable-curl \
 	    --enable-docs \
+	    --enable-fuse \
 	    --enable-gnutls \
 	    --enable-libiscsi \
 	    --enable-libusb \
-- 
2.47.3



* [pve-devel] [PATCH swtpm 02/16] swtpm setup: file: always just clear header rather than unlinking
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

When using a FUSE export as the target file, it cannot be unlinked.
For block devices, the header is cleared; do the same for files.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/swtpm_setup/swtpm_backend_file.c | 42 +++++++++++-----------------
 1 file changed, 17 insertions(+), 25 deletions(-)

diff --git a/src/swtpm_setup/swtpm_backend_file.c b/src/swtpm_setup/swtpm_backend_file.c
index a0d0f4d..d4ca2c5 100644
--- a/src/swtpm_setup/swtpm_backend_file.c
+++ b/src/swtpm_setup/swtpm_backend_file.c
@@ -57,36 +57,28 @@ static int check_access(void *state,
     return ret == 0 || errno == ENOENT ? 0 : 1;
 }
 
-/* Delete state from file: if regular file, unlink, if blockdev, zero header so
- * swtpm binary will treat it as a new instance. */
+/* Delete state from file: zero header so swtpm binary will treat it as a new
+ * instance. */
 static int delete_state(void *state) {
     const struct file_state *fstate = (struct file_state*)state;
     int fd;
 
-    if (fstate->is_blockdev) {
-        char zerobuf[8] = {0}; /* swtpm header has 8 byte */
-        fd = open(fstate->path, O_WRONLY);
-        if (fd < 0) {
-            logerr(gl_LOGFILE, "Couldn't open file for clearing %s: %s\n",
-                   fstate->path, strerror(errno));
-            return 1;
-        }
-        /* writing less bytes than requested is bad, but won't set errno */
-        errno = 0;
-        if (write(fd, zerobuf, sizeof(zerobuf)) < (long)sizeof(zerobuf)) {
-            logerr(gl_LOGFILE, "Couldn't write file for clearing %s: %s\n",
-                   fstate->path, strerror(errno));
-            close(fd);
-            return 1;
-        }
-        close(fd);
-    } else {
-        if (unlink(fstate->path)) {
-            logerr(gl_LOGFILE, "Couldn't unlink file for clearing %s: %s\n",
-                   fstate->path, strerror(errno));
-            return 1;
-        }
+    char zerobuf[8] = {0}; /* swtpm header has 8 byte */
+    fd = open(fstate->path, O_WRONLY);
+    if (fd < 0) {
+        logerr(gl_LOGFILE, "Couldn't open file for clearing %s: %s\n",
+               fstate->path, strerror(errno));
+        return 1;
     }
+    /* writing less bytes than requested is bad, but won't set errno */
+    errno = 0;
+    if (write(fd, zerobuf, sizeof(zerobuf)) < (long)sizeof(zerobuf)) {
+        logerr(gl_LOGFILE, "Couldn't write file for clearing %s: %s\n",
+               fstate->path, strerror(errno));
+        close(fd);
+        return 1;
+    }
+    close(fd);
 
     return 0;
 }
-- 
2.47.3



* [pve-devel] [PATCH storage 03/16] common: add pve-vm-image-format standard option for VM image formats
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

The image formats defined for the pve-vm-image-format standard option
are the formats that are allowed on Proxmox VE storages for VM images.
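
For illustration (not part of this patch), a schema property can then
reference the new standard option like:

    format => PVE::JSONSchema::get_standard_option('pve-vm-image-format'),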

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/Storage/Common.pm | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index 222dc76..3932aee 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -45,16 +45,31 @@ Possible formats a guest image can have.
 
 =cut
 
+PVE::JSONSchema::register_standard_option(
+    'pve-storage-image-format',
+    {
+        type => 'string',
+        enum => ['raw', 'qcow2', 'subvol', 'vmdk'],
+        description => "Format of the image.",
+    },
+);
+
+=head3 pve-vm-image-format
+
+Possible formats a VM image can have.
+
+=cut
+
 # TODO PVE 9 - Note that currently, qemu-server allows more formats for VM images, so third party
 # storage plugins might potentially allow more too, but none of the plugins we are aware of do that.
 # Those formats should either be allowed here or support for them should be phased out (at least in
 # the storage layer). Can still be added again in the future, should any plugin provider request it.
 
 PVE::JSONSchema::register_standard_option(
-    'pve-storage-image-format',
+    'pve-vm-image-format',
     {
         type => 'string',
-        enum => ['raw', 'qcow2', 'subvol', 'vmdk'],
+        enum => ['raw', 'qcow2', 'vmdk'],
         description => "Format of the image.",
     },
 );
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 04/16] tests: cfg2cmd: remove invalid mocking of qmp_cmd
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

There is no definition of 'qmp_cmd' that could be referenced as a
subroutine with '\&qmp_cmd'. If the function were actually called
during the test, there would be an error:
> Undefined subroutine &main::qmp_cmd called
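
For reference, a valid mock has to pass an actual code reference, e.g.
(hypothetical stand-in, not part of this patch):

    $qemu_monitor_module->mock('qmp_cmd', sub { return {} });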

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/test/run_config2command_tests.pl | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/test/run_config2command_tests.pl b/src/test/run_config2command_tests.pl
index 0623b5c1..f63e8b68 100755
--- a/src/test/run_config2command_tests.pl
+++ b/src/test/run_config2command_tests.pl
@@ -525,7 +525,6 @@ $qemu_monitor_module->mock(
         die "unexpected QMP command: '$cmd'";
     },
 );
-$qemu_monitor_module->mock('qmp_cmd', \&qmp_cmd);
 
 my $mapping_usb_module = Test::MockModule->new("PVE::Mapping::USB");
 $mapping_usb_module->mock(
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 05/16] migration: offline volumes: drop deprecated special casing for TPM state
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

Since qemu-server >= 7.2-1 with commit 13d121d7 ("fix #3861: migrate:
fix live migration when cloud-init changes storage"), migration
targets can handle the 'offline_volume' log line for passing back the
new volume ID for an offline migrated volume to the source side. Drop
the special handling for TPM state now, so that the corresponding
parsing code can also be dropped in the future.
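
For example, a migration target passes back a new volume ID with a log
line like the following (volume ID illustrative):

    offline_volume: tpmstate0: local-lvm:vm-100-disk-1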

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/API2/Qemu.pm   | 1 +
 src/PVE/QemuMigrate.pm | 7 +------
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 71bedc1e..4243e4da 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -3491,6 +3491,7 @@ __PACKAGE__->register_method({
                 } elsif ($line =~ m/^replicated_volume: (.*)$/) {
                     $replicated_volumes->{$1} = 1;
                 } elsif ($line =~ m/^tpmstate0: (.*)$/) { # Deprecated, use offline_volume instead
+                    # TODO PVE 10.x drop special handling here
                     $offline_volumes->{tpmstate0} = $1;
                 } elsif ($line =~ m/^offline_volume: ([^:]+): (.*)$/) {
                     $offline_volumes->{$1} = $2;
diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 78954c20..b5023864 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -1020,12 +1020,7 @@ sub phase2_start_local_cluster {
         my $new_volid = $self->{volume_map}->{$volid};
         next if !$new_volid || $volid eq $new_volid;
 
-        # FIXME PVE 8.x only use offline_volume variant once all targets can handle it
-        if ($drivename eq 'tpmstate0') {
-            $input .= "$drivename: $new_volid\n";
-        } else {
-            $input .= "offline_volume: $drivename: $new_volid\n";
-        }
+        $input .= "offline_volume: $drivename: $new_volid\n";
     }
 
     $input .= "spice_ticket: $migrate->{spice_ticket}\n" if $migrate->{spice_ticket};
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 06/16] qmp client: better abstract peer in preparation for qemu-storage-daemon
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

In preparation for adding 'qsd' as a peer type for qemu-storage-daemon.

There are already two different peer types, namely 'qga' for the QEMU
guest agent and 'qmp' for the QEMU instance itself.
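
A peer is now represented as a small hash reference, as used throughout
this patch:

    my $qmp_peer = { vmid => $vmid, type => 'qmp' }; # QEMU instance
    my $qga_peer = { vmid => $vmid, type => 'qga' }; # guest agent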

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QMPClient.pm          | 39 ++++++++++++++++-------------------
 src/PVE/QemuServer.pm         | 19 ++++++++++-------
 src/PVE/QemuServer/Helpers.pm |  6 +++---
 src/PVE/QemuServer/Monitor.pm | 36 ++++++++++++++++++++++----------
 src/PVE/VZDump/QemuServer.pm  |  9 ++++++--
 src/test/snapshot-test.pm     |  2 +-
 6 files changed, 65 insertions(+), 46 deletions(-)

diff --git a/src/PVE/QMPClient.pm b/src/PVE/QMPClient.pm
index 1935a336..685cfe81 100644
--- a/src/PVE/QMPClient.pm
+++ b/src/PVE/QMPClient.pm
@@ -53,15 +53,13 @@ my $qga_allow_close_cmds = {
 };
 
 my $push_cmd_to_queue = sub {
-    my ($self, $vmid, $cmd) = @_;
+    my ($self, $peer, $cmd) = @_;
 
     my $execute = $cmd->{execute} || die "no command name specified";
 
-    my $qga = ($execute =~ /^guest\-+/) ? 1 : 0;
+    my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
 
-    my $sname = PVE::QemuServer::Helpers::qmp_socket($vmid, $qga);
-
-    $self->{queue_info}->{$sname} = { qga => $qga, vmid => $vmid, sname => $sname, cmds => [] }
+    $self->{queue_info}->{$sname} = { peer => $peer, sname => $sname, cmds => [] }
         if !$self->{queue_info}->{$sname};
 
     push @{ $self->{queue_info}->{$sname}->{cmds} }, $cmd;
@@ -72,21 +70,21 @@ my $push_cmd_to_queue = sub {
 # add a single command to the queue for later execution
 # with queue_execute()
 sub queue_cmd {
-    my ($self, $vmid, $callback, $execute, %params) = @_;
+    my ($self, $peer, $callback, $execute, %params) = @_;
 
     my $cmd = {};
     $cmd->{execute} = $execute;
     $cmd->{arguments} = \%params;
     $cmd->{callback} = $callback;
 
-    &$push_cmd_to_queue($self, $vmid, $cmd);
+    &$push_cmd_to_queue($self, $peer, $cmd);
 
     return;
 }
 
 # execute a single command
 sub cmd {
-    my ($self, $vmid, $cmd, $timeout, $noerr) = @_;
+    my ($self, $peer, $cmd, $timeout, $noerr) = @_;
 
     my $result;
 
@@ -101,7 +99,7 @@ sub cmd {
     $cmd->{callback} = $callback;
     $cmd->{arguments} = {} if !defined($cmd->{arguments});
 
-    my $queue_info = &$push_cmd_to_queue($self, $vmid, $cmd);
+    my $queue_info = &$push_cmd_to_queue($self, $peer, $cmd);
 
     if (!$timeout) {
         # hack: monitor sometime blocks
@@ -158,7 +156,8 @@ sub cmd {
     $self->queue_execute($timeout, 2);
 
     if (defined($queue_info->{error})) {
-        die "VM $vmid qmp command '$cmd->{execute}' failed - $queue_info->{error}" if !$noerr;
+        die "VM $peer->{vmid} $peer->{type} command '$cmd->{execute}' failed - $queue_info->{error}"
+            if !$noerr;
         $result = { error => $queue_info->{error} };
         $result->{'error-is-timeout'} = 1 if $queue_info->{'error-is-timeout'};
     }
@@ -206,10 +205,10 @@ my $open_connection = sub {
 
     die "duplicate call to open" if defined($queue_info->{fh});
 
-    my $vmid = $queue_info->{vmid};
-    my $qga = $queue_info->{qga};
+    my $peer = $queue_info->{peer};
+    my ($vmid, $sotype) = $peer->@{qw(vmid type)};
 
-    my $sname = PVE::QemuServer::Helpers::qmp_socket($vmid, $qga);
+    my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
 
     $timeout = 1 if !$timeout;
 
@@ -217,8 +216,6 @@ my $open_connection = sub {
     my $starttime = [gettimeofday];
     my $count = 0;
 
-    my $sotype = $qga ? 'qga' : 'qmp';
-
     for (;;) {
         $count++;
         $fh = IO::Socket::UNIX->new(Peer => $sname, Blocking => 0, Timeout => 1);
@@ -253,7 +250,7 @@ my $check_queue = sub {
         my $fh = $queue_info->{fh};
         next if !$fh;
 
-        my $qga = $queue_info->{qga};
+        my $qga = $queue_info->{peer}->{type} eq 'qga';
 
         if ($queue_info->{error}) {
             &$close_connection($self, $queue_info);
@@ -339,7 +336,7 @@ sub queue_execute {
         eval {
             &$open_connection($self, $queue_info, $timeout);
 
-            if (!$queue_info->{qga}) {
+            if ($queue_info->{peer}->{type} ne 'qga') {
                 my $cap_cmd = { execute => 'qmp_capabilities', arguments => {} };
                 unshift @{ $queue_info->{cmds} }, $cap_cmd;
             }
@@ -397,8 +394,8 @@ sub mux_input {
     return if !$queue_info;
 
     my $sname = $queue_info->{sname};
-    my $vmid = $queue_info->{vmid};
-    my $qga = $queue_info->{qga};
+    my $vmid = $queue_info->{peer}->{vmid};
+    my $qga = $queue_info->{peer}->{type} eq 'qga';
 
     my $curcmd = $queue_info->{current};
     die "unable to lookup current command for VM $vmid ($sname)\n" if !$curcmd;
@@ -501,8 +498,8 @@ sub mux_eof {
     return if !$queue_info;
 
     my $sname = $queue_info->{sname};
-    my $vmid = $queue_info->{vmid};
-    my $qga = $queue_info->{qga};
+    my $vmid = $queue_info->{peer}->{vmid};
+    my $qga = $queue_info->{peer}->{type} eq 'qga';
 
     my $curcmd = $queue_info->{current};
     die "unable to lookup current command for VM $vmid ($sname)\n" if !$curcmd;
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 45daa06c..7e1b7540 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2706,13 +2706,15 @@ sub vmstatus {
     my $statuscb = sub {
         my ($vmid, $resp) = @_;
 
-        $qmpclient->queue_cmd($vmid, $proxmox_support_cb, 'query-proxmox-support');
-        $qmpclient->queue_cmd($vmid, $blockstatscb, 'query-blockstats');
-        $qmpclient->queue_cmd($vmid, $machinecb, 'query-machines');
-        $qmpclient->queue_cmd($vmid, $versioncb, 'query-version');
+        my $qmp_peer = { vmid => $vmid, type => 'qmp' };
+
+        $qmpclient->queue_cmd($qmp_peer, $proxmox_support_cb, 'query-proxmox-support');
+        $qmpclient->queue_cmd($qmp_peer, $blockstatscb, 'query-blockstats');
+        $qmpclient->queue_cmd($qmp_peer, $machinecb, 'query-machines');
+        $qmpclient->queue_cmd($qmp_peer, $versioncb, 'query-version');
         # this fails if ballon driver is not loaded, so this must be
         # the last command (following command are aborted if this fails).
-        $qmpclient->queue_cmd($vmid, $ballooncb, 'query-balloon');
+        $qmpclient->queue_cmd($qmp_peer, $ballooncb, 'query-balloon');
 
         my $status = 'unknown';
         if (!defined($status = $resp->{'return'}->{status})) {
@@ -2726,7 +2728,8 @@ sub vmstatus {
     foreach my $vmid (keys %$list) {
         next if $opt_vmid && ($vmid ne $opt_vmid);
         next if !$res->{$vmid}->{pid}; # not running
-        $qmpclient->queue_cmd($vmid, $statuscb, 'query-status');
+        my $qmp_peer = { vmid => $vmid, type => 'qmp' };
+        $qmpclient->queue_cmd($qmp_peer, $statuscb, 'query-status');
     }
 
     $qmpclient->queue_execute(undef, 2);
@@ -3180,7 +3183,7 @@ sub config_to_command {
 
     my $use_virtio = 0;
 
-    my $qmpsocket = PVE::QemuServer::Helpers::qmp_socket($vmid);
+    my $qmpsocket = PVE::QemuServer::Helpers::qmp_socket({ vmid => $vmid, type => 'qmp' });
     push @$cmd, '-chardev', "socket,id=qmp,path=$qmpsocket,server=on,wait=off";
     push @$cmd, '-mon', "chardev=qmp,mode=control";
 
@@ -3417,7 +3420,7 @@ sub config_to_command {
     my $guest_agent = parse_guest_agent($conf);
 
     if ($guest_agent->{enabled}) {
-        my $qgasocket = PVE::QemuServer::Helpers::qmp_socket($vmid, 1);
+        my $qgasocket = PVE::QemuServer::Helpers::qmp_socket({ vmid => $vmid, type => 'qga' });
         push @$devices, '-chardev', "socket,path=$qgasocket,server=on,wait=off,id=qga0";
 
         if (!$guest_agent->{type} || $guest_agent->{type} eq 'virtio') {
diff --git a/src/PVE/QemuServer/Helpers.pm b/src/PVE/QemuServer/Helpers.pm
index 3e444839..cfbcf726 100644
--- a/src/PVE/QemuServer/Helpers.pm
+++ b/src/PVE/QemuServer/Helpers.pm
@@ -79,9 +79,9 @@ our $var_run_tmpdir = "/var/run/qemu-server";
 mkdir $var_run_tmpdir;
 
 sub qmp_socket {
-    my ($vmid, $qga) = @_;
-    my $sockettype = $qga ? 'qga' : 'qmp';
-    return "${var_run_tmpdir}/$vmid.$sockettype";
+    my ($peer) = @_;
+    my ($vmid, $type) = $peer->@{qw(vmid type)};
+    return "${var_run_tmpdir}/${vmid}.${type}";
 }
 
 sub pidfile_name {
diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index 0cccdfbe..00b52799 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -15,20 +15,31 @@ our @EXPORT_OK = qw(
 =head3 qmp_cmd
 
     my $cmd = { execute => $qmp_command_name, arguments => \%params };
-    my $result = qmp_cmd($vmid, $cmd);
+    my $peer = { vmid => $vmid, type => $type };
+    my $result = qmp_cmd($peer, $cmd);
 
-Execute the C<$qmp_command_name> with arguments C<%params> for VM C<$vmid>. Dies if the VM is not
-running or the monitor socket cannot be reached, even if the C<noerr> argument is used. Returns the
-structured result from the QMP side converted from JSON to structured Perl data. In case the
-C<noerr> argument is used and the QMP command failed or timed out, the result is a hash reference
-with an C<error> key containing the error message.
+Execute the C<$qmp_command_name> with arguments C<%params> for the peer C<$peer>. The type C<$type>
+of the peer can be C<qmp> for the QEMU instance of the VM or C<qga> for the guest agent of the VM.
+Dies if the VM is not running or the monitor socket cannot be reached, even if the C<noerr> argument
+is used. Returns the structured result from the QMP side converted from JSON to structured Perl
+data. In case the C<noerr> argument is used and the QMP command failed or timed out, the result is a
+hash reference with an C<error> key containing the error message.
 
 Parameters:
 
 =over
 
+=item C<$peer>: The peer to communicate with. A hash reference with:
+
+=over
+
 =item C<$vmid>: The ID of the virtual machine.
 
+=item C<$type>: Type of the peer to communicate with. This can be C<qmp> for the VM's QEMU instance
+or C<qga> for the VM's guest agent.
+
+=back
+
 =item C<$cmd>: Hash reference containing the QMP command name for the C<execute> key and additional
 arguments for the QMP command under the C<arguments> key. The following custom arguments are not
 part of the QMP schema and supported for all commands:
@@ -48,8 +59,9 @@ handle the error that is returned as a structured result.
 =cut
 
 sub qmp_cmd {
-    my ($vmid, $cmd) = @_;
+    my ($peer, $cmd) = @_;
 
+    my $vmid = $peer->{vmid};
     my $res;
 
     my ($noerr, $timeout);
@@ -59,11 +71,11 @@ sub qmp_cmd {
 
     eval {
         die "VM $vmid not running\n" if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
-        my $sname = PVE::QemuServer::Helpers::qmp_socket($vmid);
+        my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
         if (-e $sname) { # test if VM is reasonably new and supports qmp/qga
             my $qmpclient = PVE::QMPClient->new();
 
-            $res = $qmpclient->cmd($vmid, $cmd, $timeout, $noerr);
+            $res = $qmpclient->cmd($peer, $cmd, $timeout, $noerr);
         } else {
             die "unable to open monitor socket\n";
         }
@@ -81,7 +93,9 @@ sub mon_cmd {
 
     my $cmd = { execute => $execute, arguments => \%params };
 
-    return qmp_cmd($vmid, $cmd);
+    my $type = ($execute =~ /^guest\-+/) ? 'qga' : 'qmp';
+
+    return qmp_cmd({ vmid => $vmid, type => $type }, $cmd);
 }
 
 sub hmp_cmd {
@@ -92,7 +106,7 @@ sub hmp_cmd {
         arguments => { 'command-line' => $cmdline, timeout => $timeout },
     };
 
-    return qmp_cmd($vmid, $cmd);
+    return qmp_cmd({ vmid => $vmid, type => 'qmp' }, $cmd);
 }
 
 1;
diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
index dd789652..6256c0be 100644
--- a/src/PVE/VZDump/QemuServer.pm
+++ b/src/PVE/VZDump/QemuServer.pm
@@ -995,6 +995,7 @@ sub archive_vma {
         }
 
         my $qmpclient = PVE::QMPClient->new();
+        my $qmp_peer = { vmid => $vmid, type => 'qmp' };
         my $backup_cb = sub {
             my ($vmid, $resp) = @_;
             $backup_job_uuid = $resp->{return}->{UUID};
@@ -1012,10 +1013,14 @@ sub archive_vma {
             $params->{fleecing} = JSON::true if $task->{'use-fleecing'};
             add_backup_performance_options($params, $opts->{performance}, $qemu_support);
 
-            $qmpclient->queue_cmd($vmid, $backup_cb, 'backup', %$params);
+            $qmpclient->queue_cmd($qmp_peer, $backup_cb, 'backup', %$params);
         };
 
-        $qmpclient->queue_cmd($vmid, $add_fd_cb, 'getfd', fd => $outfileno, fdname => "backup");
+        $qmpclient->queue_cmd(
+            $qmp_peer, $add_fd_cb, 'getfd',
+            fd => $outfileno,
+            fdname => "backup",
+        );
 
         my $fs_frozen = $self->qga_fs_freeze($task, $vmid);
 
diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
index f61cd64b..5808f032 100644
--- a/src/test/snapshot-test.pm
+++ b/src/test/snapshot-test.pm
@@ -356,7 +356,7 @@ sub vm_running_locally {
 # BEGIN mocked PVE::QemuServer::Monitor methods
 
 sub qmp_cmd {
-    my ($vmid, $cmd) = @_;
+    my ($peer, $cmd) = @_;
 
     my $exec = $cmd->{execute};
     if ($exec eq "guest-ping") {
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 07/16] monitor: qmp: make error message more precise by logging peer type
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Monitor.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index 00b52799..c8a72a64 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -81,7 +81,7 @@ sub qmp_cmd {
         }
     };
     if (my $err = $@) {
-        syslog("err", "VM $vmid qmp command failed - $err");
+        syslog("err", "VM $vmid $peer->{type} command failed - $err");
         die $err;
     }
 
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 08/16] helpers: add functions for qemu-storage-daemon instances
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

In particular, a function to get the path to the PID file and a
function to check whether the qemu-storage-daemon instance is running.
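
Usage sketch of the two helpers:

    my $pidfile = PVE::QemuServer::Helpers::qsd_pidfile_name($vmid);
    # returns the PID if the daemon is running locally, nothing otherwise
    my $pid = PVE::QemuServer::Helpers::qsd_running_locally($vmid);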

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer.pm         |  4 ++--
 src/PVE/QemuServer/Helpers.pm | 33 ++++++++++++++++++++++++++-------
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 7e1b7540..613ab361 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2944,7 +2944,7 @@ sub query_supported_cpu_flags {
     my $kvm_supported = defined(kvm_version());
     my $qemu_cmd = PVE::QemuServer::Helpers::get_command_for_arch($arch);
     my $fakevmid = -1;
-    my $pidfile = PVE::QemuServer::Helpers::pidfile_name($fakevmid);
+    my $pidfile = PVE::QemuServer::Helpers::vm_pidfile_name($fakevmid);
 
     # Start a temporary (frozen) VM with vmid -1 to allow sending a QMP command
     my $query_supported_run_qemu = sub {
@@ -3197,7 +3197,7 @@ sub config_to_command {
         push @$cmd, '-mon', "chardev=qmp-event,mode=control";
     }
 
-    push @$cmd, '-pidfile', PVE::QemuServer::Helpers::pidfile_name($vmid);
+    push @$cmd, '-pidfile', PVE::QemuServer::Helpers::vm_pidfile_name($vmid);
 
     push @$cmd, '-daemonize';
 
diff --git a/src/PVE/QemuServer/Helpers.pm b/src/PVE/QemuServer/Helpers.pm
index cfbcf726..2c78a7b4 100644
--- a/src/PVE/QemuServer/Helpers.pm
+++ b/src/PVE/QemuServer/Helpers.pm
@@ -84,7 +84,12 @@ sub qmp_socket {
     return "${var_run_tmpdir}/${vmid}.${type}";
 }
 
-sub pidfile_name {
+sub qsd_pidfile_name {
+    my ($vmid) = @_;
+    return "${var_run_tmpdir}/qsd-${vmid}.pid";
+}
+
+sub vm_pidfile_name {
     my ($vmid) = @_;
     return "${var_run_tmpdir}/$vmid.pid";
 }
@@ -94,7 +99,7 @@ sub vnc_socket {
     return "${var_run_tmpdir}/$vmid.vnc";
 }
 
-# Parse the cmdline of a running kvm/qemu process and return arguments as hash
+# Parse the cmdline of a running kvm/qemu-* process and return arguments as hash
 sub parse_cmdline {
     my ($pid) = @_;
 
@@ -106,7 +111,7 @@ sub parse_cmdline {
         my @param = split(/\0/, $line);
 
         my $cmd = $param[0];
-        return if !$cmd || ($cmd !~ m|kvm$| && $cmd !~ m@(?:^|/)qemu-system-[^/]+$@);
+        return if !$cmd || ($cmd !~ m|kvm$| && $cmd !~ m@(?:^|/)qemu-[^/]+$@);
 
         my $phash = {};
         my $pending_cmd;
@@ -130,10 +135,8 @@ sub parse_cmdline {
     return;
 }
 
-sub vm_running_locally {
-    my ($vmid) = @_;
-
-    my $pidfile = pidfile_name($vmid);
+my sub instance_running_locally {
+    my ($pidfile) = @_;
 
     if (my $fd = IO::File->new("<$pidfile")) {
         my $st = stat($fd);
@@ -164,6 +167,22 @@ sub vm_running_locally {
     return;
 }
 
+sub qsd_running_locally {
+    my ($vmid) = @_;
+
+    my $pidfile = qsd_pidfile_name($vmid);
+
+    return instance_running_locally($pidfile);
+}
+
+sub vm_running_locally {
+    my ($vmid) = @_;
+
+    my $pidfile = vm_pidfile_name($vmid);
+
+    return instance_running_locally($pidfile);
+}
+
 sub min_version {
     my ($verstr, $major, $minor, $pve) = @_;
 
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 09/16] monitor: qmp: allow 'qsd' peer type for qemu-storage-daemon
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Monitor.pm | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index c8a72a64..efdee8e7 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -19,11 +19,12 @@ our @EXPORT_OK = qw(
     my $result = qmp_cmd($peer, $cmd);
 
 Execute the C<$qmp_command_name> with arguments C<%params> for the peer C<$peer>. The type C<$type>
-of the peer can be C<qmp> for the QEMU instance of the VM or C<qga> for the guest agent of the VM.
-Dies if the VM is not running or the monitor socket cannot be reached, even if the C<noerr> argument
-is used. Returns the structured result from the QMP side converted from JSON to structured Perl
-data. In case the C<noerr> argument is used and the QMP command failed or timed out, the result is a
-hash reference with an C<error> key containing the error message.
+of the peer can be C<qmp> for the QEMU instance of the VM,  C<qga> for the guest agent of the VM or
+C<qsd> for the QEMU storage daemon associated to the VM. Dies if the VM is not running or the
+monitor socket cannot be reached, even if the C<noerr> argument is used. Returns the structured
+result from the QMP side converted from JSON to structured Perl data. In case the C<noerr> argument
+is used and the QMP command failed or timed out, the result is a hash reference with an C<error> key
+containing the error message.
 
 Parameters:
 
@@ -35,8 +36,8 @@ Parameters:
 
 =item C<$vmid>: The ID of the virtual machine.
 
-=item C<$type>: Type of the peer to communicate with. This can be C<qmp> for the VM's QEMU instance
-or C<qga> for the VM's guest agent.
+=item C<$type>: Type of the peer to communicate with. This can be C<qmp> for the VM's QEMU instance,
+C<qga> for the VM's guest agent or C<qsd> for the QEMU storage daemon associated to the VM.
 
 =back
 
@@ -70,7 +71,16 @@ sub qmp_cmd {
     }
 
     eval {
-        die "VM $vmid not running\n" if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
+        if ($peer->{type} eq 'qmp' || $peer->{type} eq 'qga') {
+            die "VM $vmid not running\n"
+                if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
+        } elsif ($peer->{type} eq 'qsd') {
+            die "QEMU storage daemon for VM $vmid not running\n"
+                if !PVE::QemuServer::Helpers::qsd_running_locally($vmid);
+        } else {
+            die "qmp_cmd - unknown peer type $peer->{type}\n";
+        }
+
         my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
         if (-e $sname) { # test if VM is reasonably new and supports qmp/qga
             my $qmpclient = PVE::QMPClient->new();
@@ -88,6 +98,14 @@ sub qmp_cmd {
     return $res;
 }
 
+sub qsd_cmd {
+    my ($vmid, $execute, %params) = @_;
+
+    my $cmd = { execute => $execute, arguments => \%params };
+
+    return qmp_cmd({ vmid => $vmid, type => 'qsd' }, $cmd);
+}
+
 sub mon_cmd {
     my ($vmid, $execute, %params) = @_;
 
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 10/16] monitor: align interface of qmp_cmd() with other helpers
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

Since all callers of qmp_cmd() construct the $cmd argument the same
way, this can also be done directly in qmp_cmd(). This aligns the
interface of qmp_cmd() with other helpers like mon_cmd(), except for
having a peer rather than just a VM ID. It's much more straightforward
to switch calls from mon_cmd() to qmp_cmd() after this change.
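
At a call site, this changes the invocation from

    qmp_cmd($peer, { execute => 'query-status', arguments => {} });

to

    qmp_cmd($peer, 'query-status');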

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Monitor.pm | 33 +++++++++++++++------------------
 src/test/snapshot-test.pm     |  4 +++-
 2 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index efdee8e7..0b15808b 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -14,9 +14,8 @@ our @EXPORT_OK = qw(
 
 =head3 qmp_cmd
 
-    my $cmd = { execute => $qmp_command_name, arguments => \%params };
     my $peer = { vmid => $vmid, type => $type };
-    my $result = qmp_cmd($peer, $cmd);
+    my $result = qmp_cmd($peer, $execute, %arguments);
 
 Execute the C<$qmp_command_name> with arguments C<%params> for the peer C<$peer>. The type C<$type>
 of the peer can be C<qmp> for the QEMU instance of the VM,  C<qga> for the guest agent of the VM or
@@ -41,9 +40,10 @@ C<qga> for the VM's guest agent or C<qsd> for the QEMU storage daemon associated
 
 =back
 
-=item C<$cmd>: Hash reference containing the QMP command name for the C<execute> key and additional
-arguments for the QMP command under the C<arguments> key. The following custom arguments are not
-part of the QMP schema and supported for all commands:
+=item C<$execute>: The QMP command name.
+
+=item C<%arguments>: Additional arguments for the QMP command. The following custom arguments are
+not part of the QMP schema and supported for all commands:
 
 =over
 
@@ -60,7 +60,9 @@ handle the error that is returned as a structured result.
 =cut
 
 sub qmp_cmd {
-    my ($peer, $cmd) = @_;
+    my ($peer, $execute, %arguments) = @_;
+
+    my $cmd = { execute => $execute, arguments => \%arguments };
 
     my $vmid = $peer->{vmid};
     my $res;
@@ -101,30 +103,25 @@ sub qmp_cmd {
 sub qsd_cmd {
     my ($vmid, $execute, %params) = @_;
 
-    my $cmd = { execute => $execute, arguments => \%params };
-
-    return qmp_cmd({ vmid => $vmid, type => 'qsd' }, $cmd);
+    return qmp_cmd({ vmid => $vmid, type => 'qsd' }, $execute, %params);
 }
 
 sub mon_cmd {
     my ($vmid, $execute, %params) = @_;
 
-    my $cmd = { execute => $execute, arguments => \%params };
-
     my $type = ($execute =~ /^guest\-+/) ? 'qga' : 'qmp';
 
-    return qmp_cmd({ vmid => $vmid, type => $type }, $cmd);
+    return qmp_cmd({ vmid => $vmid, type => $type }, $execute, %params);
 }
 
 sub hmp_cmd {
     my ($vmid, $cmdline, $timeout) = @_;
 
-    my $cmd = {
-        execute => 'human-monitor-command',
-        arguments => { 'command-line' => $cmdline, timeout => $timeout },
-    };
-
-    return qmp_cmd({ vmid => $vmid, type => 'qmp' }, $cmd);
+    return qmp_cmd(
+        { vmid => $vmid, type => 'qmp' }, 'human-monitor-command',
+        'command-line' => $cmdline,
+        timeout => $timeout,
+    );
 }
 
 1;
diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
index 5808f032..e28107b9 100644
--- a/src/test/snapshot-test.pm
+++ b/src/test/snapshot-test.pm
@@ -356,7 +356,9 @@ sub vm_running_locally {
 # BEGIN mocked PVE::QemuServer::Monitor methods
 
 sub qmp_cmd {
-    my ($peer, $cmd) = @_;
+    my ($peer, $execute, %arguments) = @_;
+
+    my $cmd = { execute => $execute, arguments => \%arguments };
 
     my $exec = $cmd->{execute};
     if ($exec eq "guest-ping") {
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 11/16] machine: include +pve version when getting installed machine version
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

Move the code appending the +pve version from the single call site
into the get_installed_machine_version() helper. Rename the helper
to latest_installed_machine_version() to make it a bit more explicit.

This is in preparation for the upcoming qemu-storage-daemon
functionality re-using this helper.
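
The renamed helper now returns the pinned version including the +pve
suffix in one step (version strings illustrative):

    my $version = PVE::QemuServer::Machine::latest_installed_machine_version();
    # e.g. '10.0+pve1' if the installed pve machine version is > 0, else '10.0'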

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Machine.pm | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/src/PVE/QemuServer/Machine.pm b/src/PVE/QemuServer/Machine.pm
index b2ff7e24..719b134d 100644
--- a/src/PVE/QemuServer/Machine.pm
+++ b/src/PVE/QemuServer/Machine.pm
@@ -308,11 +308,17 @@ sub qemu_machine_pxe {
     return $machine;
 }
 
-sub get_installed_machine_version {
+sub latest_installed_machine_version {
     my ($kvmversion) = @_;
+
     $kvmversion = PVE::QemuServer::Helpers::kvm_user_version() if !defined($kvmversion);
-    $kvmversion =~ m/^(\d+\.\d+)/;
-    return $1;
+
+    my ($version) = ($kvmversion =~ m/^(\d+\.\d+)/);
+
+    my $pvever = get_pve_version($version);
+    $version .= "+pve$pvever" if $pvever > 0;
+
+    return $version;
 }
 
 sub windows_get_pinned_machine_version {
@@ -320,12 +326,7 @@ sub windows_get_pinned_machine_version {
 
     my $pin_version = $base_version;
     if (!defined($base_version) || !can_run_pve_machine_version($base_version, $kvmversion)) {
-        $pin_version = get_installed_machine_version($kvmversion);
-        # pin to the current pveX version to make use of most current features if > 0
-        my $pvever = get_pve_version($pin_version);
-        if ($pvever > 0) {
-            $pin_version .= "+pve$pvever";
-        }
+        $pin_version = latest_installed_machine_version($kvmversion);
     }
     if (!$machine || $machine eq 'pc') {
         $machine = "pc-i440fx-$pin_version";
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 12/16] blockdev: support attaching to qemu-storage-daemon
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

The qemu-storage-daemon runs with the installed binary version, so the
machine type should be set accordingly.

All that needs to be changed is the peer for QMP communication.

Adds an export for the qmp_cmd() subroutine.
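
A caller can now attach a drive to the storage daemon rather than the
VM by passing the new option (sketch):

    my $node_name =
        PVE::QemuServer::Blockdev::attach($storecfg, $vmid, $drive, { qsd => 1 });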

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/Blockdev.pm | 27 +++++++++++++++++++--------
 src/PVE/QemuServer/Monitor.pm  |  1 +
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 8fa5eb51..d08a4855 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -16,7 +16,7 @@ use PVE::QemuServer::BlockJob;
 use PVE::QemuServer::Drive qw(drive_is_cdrom);
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Machine;
-use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Monitor qw(mon_cmd qmp_cmd);
 
 # gives ($host, $port, $export)
 my $NBD_TCP_PATH_RE_3 = qr/nbd:(\S+):(\d+):exportname=(\S+)/;
@@ -513,9 +513,9 @@ sub generate_pbs_blockdev {
 }
 
 my sub blockdev_add {
-    my ($vmid, $blockdev) = @_;
+    my ($qmp_peer, $blockdev) = @_;
 
-    eval { mon_cmd($vmid, 'blockdev-add', $blockdev->%*); };
+    eval { qmp_cmd($qmp_peer, 'blockdev-add', $blockdev->%*); };
     if (my $err = $@) {
         my $node_name = $blockdev->{'node-name'} // 'undefined';
         die "adding blockdev '$node_name' failed : $err\n" if $@;
@@ -563,6 +563,9 @@ rather than the volume itself.
 =item C<< $options->{'tpm-backup'} >>: Generate and attach a block device for backing up the TPM
 state image.
 
+=item C<< $options->{'qsd'} >>: Rather than attaching to the VM itself, attach to the QEMU storage
+daemon associated to the VM.
+
 =back
 
 =back
@@ -572,7 +575,15 @@ state image.
 sub attach {
     my ($storecfg, $vmid, $drive, $options) = @_;
 
-    my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+    my $qmp_peer = { vmid => $vmid, type => $options->{qsd} ? 'qsd' : 'qmp' };
+
+    my $machine_version;
+    if ($options->{qsd}) { # qemu-storage-daemon runs with the installed binary version
+        $machine_version =
+            'pc-i440fx-' . PVE::QemuServer::Machine::latest_installed_machine_version();
+    } else {
+        $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+    }
 
     my $blockdev = generate_drive_blockdev($storecfg, $drive, $machine_version, $options);
 
@@ -585,17 +596,17 @@ sub attach {
     eval {
         if ($throttle_group_id) {
             # Try to remove potential left-over.
-            mon_cmd($vmid, 'object-del', id => $throttle_group_id, noerr => 1);
+            qmp_cmd($qmp_peer, 'object-del', id => $throttle_group_id, noerr => 1);
 
             my $throttle_group = generate_throttle_group($drive);
-            mon_cmd($vmid, 'object-add', $throttle_group->%*);
+            qmp_cmd($qmp_peer, 'object-add', $throttle_group->%*);
         }
 
-        blockdev_add($vmid, $blockdev);
+        blockdev_add($qmp_peer, $blockdev);
     };
     if (my $err = $@) {
         if ($throttle_group_id) {
-            eval { mon_cmd($vmid, 'object-del', id => $throttle_group_id); };
+            eval { qmp_cmd($qmp_peer, 'object-del', id => $throttle_group_id); };
         }
         die $err;
     }
diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index 0b15808b..9a66d142 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -10,6 +10,7 @@ use PVE::QMPClient;
 use base 'Exporter';
 our @EXPORT_OK = qw(
     mon_cmd
+    qmp_cmd
 );
 
 =head3 qmp_cmd
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 13/16] blockdev: attach: also return whether attached blockdev is read-only
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

This is in preparation for the qemu-storage-daemon, so that it can
apply the same setting for the export.
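
Sketch of the intended consumer in the later QSD patch:

    my ($node_name, $read_only) =
        PVE::QemuServer::Blockdev::attach($storecfg, $vmid, $drive, { qsd => 1 });
    # the FUSE export can then derive: writable => !$read_only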

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/BlockJob.pm | 2 +-
 src/PVE/QemuServer/Blockdev.pm | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 506010e1..c89994db 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -480,7 +480,7 @@ sub blockdev_mirror {
 
     # Note that if 'aio' is not explicitly set, i.e. default, it can change if source and target
     # don't both allow or both not allow 'io_uring' as the default.
-    my $target_node_name =
+    my ($target_node_name) =
         PVE::QemuServer::Blockdev::attach($storecfg, $vmid, $dest_drive, $attach_dest_opts);
 
     $jobs = {} if !$jobs;
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index d08a4855..f799c70b 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -528,10 +528,10 @@ my sub blockdev_add {
 
 =head3 attach
 
-    my $node_name = attach($storecfg, $vmid, $drive, $options);
+    my ($node_name, $read_only) = attach($storecfg, $vmid, $drive, $options);
 
 Attach the drive C<$drive> to the VM C<$vmid> considering the additional options C<$options>.
-Returns the node name of the (topmost) attached block device node.
+Returns the node name of the (topmost) attached block device node and whether the node is read-only.
 
 Parameters:
 
@@ -611,7 +611,7 @@ sub attach {
         die $err;
     }
 
-    return $blockdev->{'node-name'};
+    return ($blockdev->{'node-name'}, $blockdev->{'read-only'});
 }
 
 =pod
-- 
2.47.3



* [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

For now, this supports creating FUSE exports based on Proxmox VE drive
definitions; NBD exports could be added later. This is in preparation
for allowing qcow2 for TPM state volumes. A QEMU storage daemon
instance is associated with a given VM.

Target files where the FUSE export is mounted must already exist. The
'writable' flag for the export is taken to be the negation of the
'read-only' status of the added block node. The 'allow-other' flag is
set to 'off', so only the user the storage daemon is running as may
access the export. For now, exported images don't need to be resized,
so the 'growable' flag is hard-coded to 'false'.
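
For illustration, the resulting 'block-export-add' QMP command for the
TPM state of VM 100 would look roughly like this (VM ID and node name
are just examples):

    { "execute": "block-export-add",
      "arguments": {
          "type": "fuse",
          "id": "tpmstate0",
          "node-name": "example0",
          "mountpoint": "/var/run/qemu-server/qsd-100-tpmstate0.fuse",
          "writable": true,
          "growable": false,
          "allow-other": "off" } }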

When cleaning up, a 'quit' QMP command is sent to the storage daemon
with a 60-second timeout, after which SIGTERM is sent with a 10-second
timeout, before finally SIGKILL is used if the QEMU storage daemon is
still running.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Dependency bump for QEMU needed!

 src/PVE/QemuServer/Helpers.pm |  18 +++++
 src/PVE/QemuServer/Makefile   |   1 +
 src/PVE/QemuServer/QSD.pm     | 124 ++++++++++++++++++++++++++++++++++
 3 files changed, 143 insertions(+)
 create mode 100644 src/PVE/QemuServer/QSD.pm

diff --git a/src/PVE/QemuServer/Helpers.pm b/src/PVE/QemuServer/Helpers.pm
index 2c78a7b4..ab8aa389 100644
--- a/src/PVE/QemuServer/Helpers.pm
+++ b/src/PVE/QemuServer/Helpers.pm
@@ -89,6 +89,24 @@ sub qsd_pidfile_name {
     return "${var_run_tmpdir}/qsd-${vmid}.pid";
 }
 
+sub qsd_fuse_export_cleanup_files {
+    my ($vmid) = @_;
+
+    PVE::Tools::dir_glob_foreach(
+        $var_run_tmpdir,
+        "qsd-${vmid}-.*.fuse",
+        sub {
+            my ($file) = @_;
+            unlink "${var_run_tmpdir}/${file}";
+        },
+    );
+}
+
+sub qsd_fuse_export_path {
+    my ($vmid, $export_name) = @_;
+    return "${var_run_tmpdir}/qsd-${vmid}-${export_name}.fuse";
+}
+
 sub vm_pidfile_name {
     my ($vmid) = @_;
     return "${var_run_tmpdir}/$vmid.pid";
diff --git a/src/PVE/QemuServer/Makefile b/src/PVE/QemuServer/Makefile
index 63c8d77c..d599ca91 100644
--- a/src/PVE/QemuServer/Makefile
+++ b/src/PVE/QemuServer/Makefile
@@ -23,6 +23,7 @@ SOURCES=Agent.pm	\
 	PCI.pm		\
 	QemuImage.pm	\
 	QMPHelpers.pm	\
+	QSD.pm		\
 	RNG.pm		\
 	RunState.pm	\
 	StateFile.pm	\
diff --git a/src/PVE/QemuServer/QSD.pm b/src/PVE/QemuServer/QSD.pm
new file mode 100644
index 00000000..4a538274
--- /dev/null
+++ b/src/PVE/QemuServer/QSD.pm
@@ -0,0 +1,124 @@
+package PVE::QemuServer::QSD;
+
+use v5.36;
+
+use JSON qw(to_json);
+
+use PVE::JSONSchema qw(json_bool);
+use PVE::SafeSyslog qw(syslog);
+use PVE::Storage;
+use PVE::Tools;
+
+use PVE::QemuServer::Blockdev;
+use PVE::QemuServer::Helpers;
+use PVE::QemuServer::Monitor;
+
+=head3 start
+
+    PVE::QemuServer::QSD::start($vmid);
+
+Start a QEMU storage daemon instance associated to VM C<$vmid>.
+
+=cut
+
+sub start($vmid) {
+    my $qmp_socket_path = PVE::QemuServer::Helpers::qmp_socket({ vmid => $vmid, type => 'qsd' });
+    my $pidfile = PVE::QemuServer::Helpers::qsd_pidfile_name($vmid);
+
+    my $cmd = [
+        'qemu-storage-daemon',
+        '--daemonize',
+        '--chardev',
+        "socket,id=qmp,path=$qmp_socket_path,server=on,wait=off",
+        '--monitor',
+        'chardev=qmp,mode=control',
+        '--pidfile',
+        $pidfile,
+    ];
+
+    PVE::Tools::run_command($cmd);
+
+    my $pid = PVE::QemuServer::Helpers::qsd_running_locally($vmid);
+    syslog("info", "QEMU storage daemon for $vmid started with PID $pid.");
+
+    return;
+}
+
+=head3 add_fuse_export
+
+    my $path = PVE::QemuServer::QSD::add_fuse_export($vmid, $drive, $name);
+
+Attach drive C<$drive> to the storage daemon associated to VM C<$vmid> and export it with name
+C<$name> via FUSE. Returns the path to the file representing the export.
+
+=cut
+
+sub add_fuse_export($vmid, $drive, $name) {
+    my $storage_config = PVE::Storage::config();
+
+    PVE::Storage::activate_volumes($storage_config, [$drive->{file}]);
+
+    my ($node_name, $read_only) =
+        PVE::QemuServer::Blockdev::attach($storage_config, $vmid, $drive, { qsd => 1 });
+
+    my $fuse_path = PVE::QemuServer::Helpers::qsd_fuse_export_path($vmid, $name);
+    PVE::Tools::file_set_contents($fuse_path, '', 0600); # mountpoint file needs to exist up-front
+
+    my $export = {
+        type => 'fuse',
+        id => "$name",
+        mountpoint => $fuse_path,
+        'node-name' => "$node_name",
+        writable => json_bool(!$read_only),
+        growable => JSON::false,
+        'allow-other' => 'off',
+    };
+
+    PVE::QemuServer::Monitor::qsd_cmd($vmid, 'block-export-add', $export->%*);
+
+    return $fuse_path;
+}
+
+=head3 quit
+
+    PVE::QemuServer::QSD::quit($vmid);
+
+Shut down the QEMU storage daemon associated to VM C<$vmid> and clean up its PID file and socket.
+Waits up to 60 seconds for a clean shutdown, then sends SIGTERM and waits an additional 10 seconds
+before sending SIGKILL.
+
+=cut
+
+sub quit($vmid) {
+    eval { PVE::QemuServer::Monitor::qsd_cmd($vmid, 'quit'); };
+    my $qmp_err = $@;
+    warn "QEMU storage daemon for $vmid failed to handle 'quit' - $qmp_err" if $qmp_err;
+
+    my $count = $qmp_err ? 60 : 0; # can't wait for QMP 'quit' to terminate the process if it failed
+    my $pid = PVE::QemuServer::Helpers::qsd_running_locally($vmid);
+    while ($pid) {
+        if ($count == 60) {
+            warn "QEMU storage daemon for $vmid still running with PID $pid"
+                . " - terminating now with SIGTERM\n";
+            kill 15, $pid;
+        } elsif ($count == 70) {
+            warn "QEMU storage daemon for $vmid still running with PID $pid"
+                . " - terminating now with SIGKILL\n";
+            kill 9, $pid;
+            last;
+        }
+
+        sleep 1;
+        $count++;
+        $pid = PVE::QemuServer::Helpers::qsd_running_locally($vmid);
+    }
+
+    unlink PVE::QemuServer::Helpers::qsd_pidfile_name($vmid);
+    unlink PVE::QemuServer::Helpers::qmp_socket({ vmid => $vmid, type => 'qsd' });
+
+    PVE::QemuServer::Helpers::qsd_fuse_export_cleanup_files($vmid);
+
+    return;
+}
+
+1;
-- 
2.47.3



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [pve-devel] [PATCH qemu-server 15/16] tpm: support non-raw volumes via FUSE exports for swtpm
  2025-10-14 14:39 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (13 preceding siblings ...)
  2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality Fiona Ebner
@ 2025-10-14 14:39 ` Fiona Ebner
  2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 16/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
  2025-10-17 13:17 ` [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server 00/16] " Daniel Kral
  16 siblings, 0 replies; 28+ messages in thread
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Dependency bump for swtpm needed!

 src/PVE/QemuServer.pm | 33 ++++++++++++++++++++++++++++++---
 1 file changed, 30 insertions(+), 3 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 613ab361..dcc5cafb 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -82,6 +82,7 @@ use PVE::QemuServer::OVMF;
 use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
 use PVE::QemuServer::QemuImage;
 use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
+use PVE::QemuServer::QSD;
 use PVE::QemuServer::RNG qw(parse_rng print_rng_device_commandline print_rng_object_commandline);
 use PVE::QemuServer::RunState;
 use PVE::QemuServer::StateFile;
@@ -2828,8 +2829,12 @@ sub start_swtpm {
     my ($storeid) = PVE::Storage::parse_volume_id($tpm->{file}, 1);
     if ($storeid) {
         my $format = checked_volume_format($storecfg, $tpm->{file});
-        die "swtpm currently only supports 'raw' state volumes\n" if $format ne 'raw';
-        $state = PVE::Storage::map_volume($storecfg, $tpm->{file});
+        if ($format eq 'raw') {
+            $state = PVE::Storage::map_volume($storecfg, $tpm->{file});
+        } else {
+            PVE::QemuServer::QSD::start($vmid);
+            $state = PVE::QemuServer::QSD::add_fuse_export($vmid, $tpm, 'tpmstate0');
+        }
     } else {
         $state = $tpm->{file};
     }
@@ -5451,6 +5456,12 @@ sub vm_start_nolock {
     eval { clear_reboot_request($vmid); };
     warn $@ if $@;
 
+    # terminate left-over storage daemon if still running
+    if (my $pid = PVE::QemuServer::Helpers::qsd_running_locally($vmid)) {
+        log_warn("left-over QEMU storage daemon for $vmid running with PID $pid - terminating now");
+        PVE::QemuServer::QSD::quit($vmid);
+    }
+
     if (!$statefile && scalar(keys %{ $conf->{pending} })) {
         vmconfig_apply_pending($vmid, $conf, $storecfg);
         $conf = PVE::QemuConfig->load_config($vmid); # update/reload
@@ -5644,6 +5655,13 @@ sub vm_start_nolock {
     }
     $systemd_properties{timeout} = 10 if $statefile; # setting up the scope should be quick
 
+    my $cleanup_qsd = sub {
+        if (PVE::QemuServer::Helpers::qsd_running_locally($vmid)) {
+            eval { PVE::QemuServer::QSD::quit($vmid); };
+            warn "stopping QEMU storage daemon failed - $@" if $@;
+        }
+    };
+
     my $run_qemu = sub {
         PVE::Tools::run_fork sub {
             PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid",
@@ -5654,7 +5672,11 @@ sub vm_start_nolock {
             my $tpmpid;
             if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
                 # start the TPM emulator so QEMU can connect on start
-                $tpmpid = start_swtpm($storecfg, $vmid, $tpm, $migratedfrom);
+                eval { $tpmpid = start_swtpm($storecfg, $vmid, $tpm, $migratedfrom); };
+                if (my $err = $@) {
+                    $cleanup_qsd->();
+                    die $err;
+                }
             }
 
             my $exitcode = run_command($cmd, %run_params);
@@ -5665,6 +5687,8 @@ sub vm_start_nolock {
                     warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
                     kill 'TERM', $tpmpid;
                 }
+                $cleanup_qsd->();
+
                 die "QEMU exited with code $exitcode\n";
             }
         };
@@ -6026,6 +6050,9 @@ sub vm_stop_cleanup {
     my ($storecfg, $vmid, $conf, $keepActive, $apply_pending_changes, $noerr) = @_;
 
     eval {
+        PVE::QemuServer::QSD::quit($vmid)
+            if PVE::QemuServer::Helpers::qsd_running_locally($vmid);
+
         # ensure that no dbus-vmstate helper is left running in any case
         # at this point, it should never be still running, so quiesce any warnings
         PVE::QemuServer::DBusVMState::qemu_del_dbus_vmstate($vmid, quiet => 1);
-- 
2.47.3



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [pve-devel] [PATCH qemu-server 16/16] fix #4693: drive: allow non-raw image formats for TPM state drive
  2025-10-14 14:39 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (14 preceding siblings ...)
  2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 15/16] tpm: support non-raw volumes via FUSE exports for swtpm Fiona Ebner
@ 2025-10-14 14:39 ` Fiona Ebner
  2025-10-17 13:17 ` [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server 00/16] " Daniel Kral
  16 siblings, 0 replies; 28+ messages in thread
From: Fiona Ebner @ 2025-10-14 14:39 UTC (permalink / raw)
  To: pve-devel

Now that there is a mechanism to export non-raw images as raw via FUSE
for swtpm, the allowed formats can be aligned with what other disk
types can use. This also reduces special-casing for TPM state volumes.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Build-dependency bump and dependency bump for pve-storage needed!

 src/PVE/API2/Qemu.pm        | 7 ++-----
 src/PVE/QemuServer.pm       | 1 -
 src/PVE/QemuServer/Drive.pm | 2 ++
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 4243e4da..e77245a3 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -627,13 +627,13 @@ my sub create_disks : prototype($$$$$$$$$$$) {
                         $storecfg, $storeid, $vmid, $fmt, $arch, $disk, $smm, $amd_sev_type,
                     );
                 } elsif ($ds eq 'tpmstate0') {
-                    # swtpm can only use raw volumes, and uses a fixed size
+                    # A fixed size is used for TPM state volumes
                     $size = PVE::Tools::convert_size(
                         PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE,
                         'b' => 'kb',
                     );
                     $volid =
-                        PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, "raw", undef, $size);
+                        PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $size);
                 } else {
                     $volid =
                         PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $size);
@@ -675,9 +675,6 @@ my sub create_disks : prototype($$$$$$$$$$$) {
                     ) {
                         die "$ds - cloud-init drive is already attached at '$ci_key'\n";
                     }
-                } elsif ($ds eq 'tpmstate0' && $volume_format ne 'raw') {
-                    die
-                        "tpmstate0: volume format is '$volume_format', only 'raw' is supported!\n";
                 }
             }
 
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index dcc5cafb..0326c054 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7825,7 +7825,6 @@ sub clone_disk {
         } elsif ($dst_drivename eq 'efidisk0') {
             $size = $efisize or die "internal error - need to specify EFI disk size\n";
         } elsif ($dst_drivename eq 'tpmstate0') {
-            $dst_format = 'raw';
             $size = PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE;
         } else {
             clone_disk_check_io_uring(
diff --git a/src/PVE/QemuServer/Drive.pm b/src/PVE/QemuServer/Drive.pm
index 79dd22e6..f54f9612 100644
--- a/src/PVE/QemuServer/Drive.pm
+++ b/src/PVE/QemuServer/Drive.pm
@@ -10,6 +10,7 @@ use List::Util qw(first);
 
 use PVE::RESTEnvironment qw(log_warn);
 use PVE::Storage;
+use PVE::Storage::Common;
 use PVE::JSONSchema qw(get_standard_option);
 
 use base qw(Exporter);
@@ -570,6 +571,7 @@ my $tpmstate_fmt = {
         format_description => 'volume',
         description => "The drive's backing volume.",
     },
+    format => get_standard_option('pve-vm-image-format', { optional => 1 }),
     size => {
         type => 'string',
         format => 'disk-size',
-- 
2.47.3



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 06/16] qmp client: better abstract peer in preparation for qemu-storage-daemon
  2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 06/16] qmp client: better abstract peer in preparation for qemu-storage-daemon Fiona Ebner
@ 2025-10-17 12:38   ` Daniel Kral
  2025-10-17 13:36     ` Fiona Ebner
  0 siblings, 1 reply; 28+ messages in thread
From: Daniel Kral @ 2025-10-17 12:38 UTC (permalink / raw)
  To: Proxmox VE development discussion; +Cc: pve-devel

Only a few nits inline, otherwise a very nice refactor!

On Tue Oct 14, 2025 at 4:39 PM CEST, Fiona Ebner wrote:
> In preparation to add 'qsd' as a peer type for qemu-storage-daemon.
>
> There already are two different peer types, namely 'qga' for the QEMU
> guest agent and 'qmp' for the QEMU instance itself.
>
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>  src/PVE/QMPClient.pm          | 39 ++++++++++++++++-------------------
>  src/PVE/QemuServer.pm         | 19 ++++++++++-------
>  src/PVE/QemuServer/Helpers.pm |  6 +++---
>  src/PVE/QemuServer/Monitor.pm | 36 ++++++++++++++++++++++----------
>  src/PVE/VZDump/QemuServer.pm  |  9 ++++++--
>  src/test/snapshot-test.pm     |  2 +-
>  6 files changed, 65 insertions(+), 46 deletions(-)
>
> diff --git a/src/PVE/QMPClient.pm b/src/PVE/QMPClient.pm
> index 1935a336..685cfe81 100644
> --- a/src/PVE/QMPClient.pm
> +++ b/src/PVE/QMPClient.pm
> @@ -53,15 +53,13 @@ my $qga_allow_close_cmds = {
>  };
>  
>  my $push_cmd_to_queue = sub {
> -    my ($self, $vmid, $cmd) = @_;
> +    my ($self, $peer, $cmd) = @_;
>  
>      my $execute = $cmd->{execute} || die "no command name specified";
>  
> -    my $qga = ($execute =~ /^guest\-+/) ? 1 : 0;
> +    my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
>  
> -    my $sname = PVE::QemuServer::Helpers::qmp_socket($vmid, $qga);
> -
> -    $self->{queue_info}->{$sname} = { qga => $qga, vmid => $vmid, sname => $sname, cmds => [] }
> +    $self->{queue_info}->{$sname} = { peer => $peer, sname => $sname, cmds => [] }
>          if !$self->{queue_info}->{$sname};
>  
>      push @{ $self->{queue_info}->{$sname}->{cmds} }, $cmd;
> @@ -72,21 +70,21 @@ my $push_cmd_to_queue = sub {
>  # add a single command to the queue for later execution
>  # with queue_execute()
>  sub queue_cmd {
> -    my ($self, $vmid, $callback, $execute, %params) = @_;
> +    my ($self, $peer, $callback, $execute, %params) = @_;
>  
>      my $cmd = {};
>      $cmd->{execute} = $execute;
>      $cmd->{arguments} = \%params;
>      $cmd->{callback} = $callback;
>  
> -    &$push_cmd_to_queue($self, $vmid, $cmd);
> +    &$push_cmd_to_queue($self, $peer, $cmd);

nit: pre-existing but could become a $self->push_cmd_to_queue(...)?

>  
>      return;
>  }
>  
>  # execute a single command
>  sub cmd {
> -    my ($self, $vmid, $cmd, $timeout, $noerr) = @_;
> +    my ($self, $peer, $cmd, $timeout, $noerr) = @_;
>  
>      my $result;
>  
> @@ -101,7 +99,7 @@ sub cmd {
>      $cmd->{callback} = $callback;
>      $cmd->{arguments} = {} if !defined($cmd->{arguments});
>  
> -    my $queue_info = &$push_cmd_to_queue($self, $vmid, $cmd);
> +    my $queue_info = &$push_cmd_to_queue($self, $peer, $cmd);

nit: same here

>  
>      if (!$timeout) {
>          # hack: monitor sometime blocks
> @@ -158,7 +156,8 @@ sub cmd {
>      $self->queue_execute($timeout, 2);
>  
>      if (defined($queue_info->{error})) {
> -        die "VM $vmid qmp command '$cmd->{execute}' failed - $queue_info->{error}" if !$noerr;
> +        die "VM $peer->{vmid} $peer->{type} command '$cmd->{execute}' failed - $queue_info->{error}"
> +            if !$noerr;

nit: could be moved into a separate commit as the next patch, but
     definitely not important.

>          $result = { error => $queue_info->{error} };
>          $result->{'error-is-timeout'} = 1 if $queue_info->{'error-is-timeout'};
>      }
> @@ -206,10 +205,10 @@ my $open_connection = sub {
>  
>      die "duplicate call to open" if defined($queue_info->{fh});
>  
> -    my $vmid = $queue_info->{vmid};
> -    my $qga = $queue_info->{qga};
> +    my $peer = $queue_info->{peer};
> +    my ($vmid, $sotype) = $peer->@{qw(vmid type)};
>  
> -    my $sname = PVE::QemuServer::Helpers::qmp_socket($vmid, $qga);
> +    my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
>  
>      $timeout = 1 if !$timeout;
>  
> @@ -217,8 +216,6 @@ my $open_connection = sub {
>      my $starttime = [gettimeofday];
>      my $count = 0;
>  
> -    my $sotype = $qga ? 'qga' : 'qmp';
> -
>      for (;;) {
>          $count++;
>          $fh = IO::Socket::UNIX->new(Peer => $sname, Blocking => 0, Timeout => 1);
> @@ -253,7 +250,7 @@ my $check_queue = sub {
>          my $fh = $queue_info->{fh};
>          next if !$fh;
>  
> -        my $qga = $queue_info->{qga};
> +        my $qga = $queue_info->{peer}->{type} eq 'qga';
>  
>          if ($queue_info->{error}) {
>              &$close_connection($self, $queue_info);
> @@ -339,7 +336,7 @@ sub queue_execute {
>          eval {
>              &$open_connection($self, $queue_info, $timeout);
>  
> -            if (!$queue_info->{qga}) {
> +            if ($queue_info->{peer}->{type} ne 'qga') {

nit: might be worth adding to the patch message that qsd also exposes a
     qmp monitor and so will also need capabilities negotiation,
     either in this patch or in the patch introducing qsd.

>                  my $cap_cmd = { execute => 'qmp_capabilities', arguments => {} };
>                  unshift @{ $queue_info->{cmds} }, $cap_cmd;
>              }
> @@ -397,8 +394,8 @@ sub mux_input {
>      return if !$queue_info;
>  
>      my $sname = $queue_info->{sname};
> -    my $vmid = $queue_info->{vmid};
> -    my $qga = $queue_info->{qga};
> +    my $vmid = $queue_info->{peer}->{vmid};
> +    my $qga = $queue_info->{peer}->{type} eq 'qga';
>  
>      my $curcmd = $queue_info->{current};
>      die "unable to lookup current command for VM $vmid ($sname)\n" if !$curcmd;
> @@ -501,8 +498,8 @@ sub mux_eof {
>      return if !$queue_info;
>  
>      my $sname = $queue_info->{sname};
> -    my $vmid = $queue_info->{vmid};
> -    my $qga = $queue_info->{qga};
> +    my $vmid = $queue_info->{peer}->{vmid};
> +    my $qga = $queue_info->{peer}->{type} eq 'qga';
>  
>      my $curcmd = $queue_info->{current};
>      die "unable to lookup current command for VM $vmid ($sname)\n" if !$curcmd;
> diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
> index 45daa06c..7e1b7540 100644
> --- a/src/PVE/QemuServer.pm
> +++ b/src/PVE/QemuServer.pm
> @@ -2706,13 +2706,15 @@ sub vmstatus {
>      my $statuscb = sub {
>          my ($vmid, $resp) = @_;
>  
> -        $qmpclient->queue_cmd($vmid, $proxmox_support_cb, 'query-proxmox-support');
> -        $qmpclient->queue_cmd($vmid, $blockstatscb, 'query-blockstats');
> -        $qmpclient->queue_cmd($vmid, $machinecb, 'query-machines');
> -        $qmpclient->queue_cmd($vmid, $versioncb, 'query-version');
> +        my $qmp_peer = { vmid => $vmid, type => 'qmp' };
> +
> +        $qmpclient->queue_cmd($qmp_peer, $proxmox_support_cb, 'query-proxmox-support');
> +        $qmpclient->queue_cmd($qmp_peer, $blockstatscb, 'query-blockstats');
> +        $qmpclient->queue_cmd($qmp_peer, $machinecb, 'query-machines');
> +        $qmpclient->queue_cmd($qmp_peer, $versioncb, 'query-version');
> +        # this fails if balloon driver is not loaded, so this must be
> +        # the last command (following commands are aborted if this fails).
> -        $qmpclient->queue_cmd($vmid, $ballooncb, 'query-balloon');
> +        $qmpclient->queue_cmd($qmp_peer, $ballooncb, 'query-balloon');
>  
>          my $status = 'unknown';
>          if (!defined($status = $resp->{'return'}->{status})) {
> @@ -2726,7 +2728,8 @@ sub vmstatus {
>      foreach my $vmid (keys %$list) {
>          next if $opt_vmid && ($vmid ne $opt_vmid);
>          next if !$res->{$vmid}->{pid}; # not running
> -        $qmpclient->queue_cmd($vmid, $statuscb, 'query-status');
> +        my $qmp_peer = { vmid => $vmid, type => 'qmp' };
> +        $qmpclient->queue_cmd($qmp_peer, $statuscb, 'query-status');
>      }
>  
>      $qmpclient->queue_execute(undef, 2);
> @@ -3180,7 +3183,7 @@ sub config_to_command {
>  
>      my $use_virtio = 0;
>  
> -    my $qmpsocket = PVE::QemuServer::Helpers::qmp_socket($vmid);
> +    my $qmpsocket = PVE::QemuServer::Helpers::qmp_socket({ vmid => $vmid, type => 'qmp' });
>      push @$cmd, '-chardev', "socket,id=qmp,path=$qmpsocket,server=on,wait=off";
>      push @$cmd, '-mon', "chardev=qmp,mode=control";
>  
> @@ -3417,7 +3420,7 @@ sub config_to_command {
>      my $guest_agent = parse_guest_agent($conf);
>  
>      if ($guest_agent->{enabled}) {
> -        my $qgasocket = PVE::QemuServer::Helpers::qmp_socket($vmid, 1);
> +        my $qgasocket = PVE::QemuServer::Helpers::qmp_socket({ vmid => $vmid, type => 'qga' });
>          push @$devices, '-chardev', "socket,path=$qgasocket,server=on,wait=off,id=qga0";
>  
>          if (!$guest_agent->{type} || $guest_agent->{type} eq 'virtio') {
> diff --git a/src/PVE/QemuServer/Helpers.pm b/src/PVE/QemuServer/Helpers.pm
> index 3e444839..cfbcf726 100644
> --- a/src/PVE/QemuServer/Helpers.pm
> +++ b/src/PVE/QemuServer/Helpers.pm
> @@ -79,9 +79,9 @@ our $var_run_tmpdir = "/var/run/qemu-server";
>  mkdir $var_run_tmpdir;
>  
>  sub qmp_socket {
> -    my ($vmid, $qga) = @_;
> -    my $sockettype = $qga ? 'qga' : 'qmp';
> -    return "${var_run_tmpdir}/$vmid.$sockettype";
> +    my ($peer) = @_;
> +    my ($vmid, $type) = $peer->@{qw(vmid type)};
> +    return "${var_run_tmpdir}/${vmid}.${type}";
>  }
>  
>  sub pidfile_name {
> diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
> index 0cccdfbe..00b52799 100644
> --- a/src/PVE/QemuServer/Monitor.pm
> +++ b/src/PVE/QemuServer/Monitor.pm
> @@ -15,20 +15,31 @@ our @EXPORT_OK = qw(
>  =head3 qmp_cmd
>  
>      my $cmd = { execute => $qmp_command_name, arguments => \%params };
> -    my $result = qmp_cmd($vmid, $cmd);
> +    my $peer = { vmid => $vmid, type => $type };
> +    my $result = qmp_cmd($peer, $cmd);
>  
> -Execute the C<$qmp_command_name> with arguments C<%params> for VM C<$vmid>. Dies if the VM is not
> -running or the monitor socket cannot be reached, even if the C<noerr> argument is used. Returns the
> -structured result from the QMP side converted from JSON to structured Perl data. In case the
> -C<noerr> argument is used and the QMP command failed or timed out, the result is a hash reference
> -with an C<error> key containing the error message.
> +Execute the C<$qmp_command_name> with arguments C<%params> for the peer C<$peer>. The type C<$type>
> +of the peer can be C<qmp> for the QEMU instance of the VM or C<qga> for the guest agent of the VM.
> +Dies if the VM is not running or the monitor socket cannot be reached, even if the C<noerr> argument
> +is used. Returns the structured result from the QMP side converted from JSON to structured Perl
> +data. In case the C<noerr> argument is used and the QMP command failed or timed out, the result is a
> +hash reference with an C<error> key containing the error message.

nit: might be enough to state the allowed values for $peer only in the
     parameter list below? Even though this won't change a lot, there
     would then only be a single source.

nit: asserting the peer's $type here already would be nice, but that
     will be done in a later patch anyway.

>  
>  Parameters:
>  
>  =over
>  
> +=item C<$peer>: The peer to communicate with. A hash reference with:
> +
> +=over
> +
>  =item C<$vmid>: The ID of the virtual machine.
>  
> +=item C<$type>: Type of the peer to communicate with. This can be C<qmp> for the VM's QEMU instance
> +or C<qga> for the VM's guest agent.
> +
> +=back
> +
>  =item C<$cmd>: Hash reference containing the QMP command name for the C<execute> key and additional
>  arguments for the QMP command under the C<arguments> key. The following custom arguments are not
>  part of the QMP schema and supported for all commands:
> @@ -48,8 +59,9 @@ handle the error that is returned as a structured result.
>  =cut
>  
>  sub qmp_cmd {
> -    my ($vmid, $cmd) = @_;
> +    my ($peer, $cmd) = @_;
>  
> +    my $vmid = $peer->{vmid};
>      my $res;
>  
>      my ($noerr, $timeout);
> @@ -59,11 +71,11 @@ sub qmp_cmd {
>  
>      eval {
>          die "VM $vmid not running\n" if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
> -        my $sname = PVE::QemuServer::Helpers::qmp_socket($vmid);
> +        my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
>          if (-e $sname) { # test if VM is reasonably new and supports qmp/qga
>              my $qmpclient = PVE::QMPClient->new();
>  
> -            $res = $qmpclient->cmd($vmid, $cmd, $timeout, $noerr);
> +            $res = $qmpclient->cmd($peer, $cmd, $timeout, $noerr);
>          } else {
>              die "unable to open monitor socket\n";
>          }
> @@ -81,7 +93,9 @@ sub mon_cmd {
>  
>      my $cmd = { execute => $execute, arguments => \%params };
>  
> -    return qmp_cmd($vmid, $cmd);
> +    my $type = ($execute =~ /^guest\-+/) ? 'qga' : 'qmp';

nit: AFAICS $qmpclient->cmd(...) is only used here, but if cmd(...) or
     queue_cmd(...) were used somewhere else, these callers could
     set $type = 'qga' and use a qga command. It might be overkill, but
     shouldn't that be asserted too?

> +
> +    return qmp_cmd({ vmid => $vmid, type => $type }, $cmd);
>  }
>  
>  sub hmp_cmd {
> @@ -92,7 +106,7 @@ sub hmp_cmd {
>          arguments => { 'command-line' => $cmdline, timeout => $timeout },
>      };
>  
> -    return qmp_cmd($vmid, $cmd);
> +    return qmp_cmd({ vmid => $vmid, type => 'qmp' }, $cmd);
>  }
>  
>  1;
> diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
> index dd789652..6256c0be 100644
> --- a/src/PVE/VZDump/QemuServer.pm
> +++ b/src/PVE/VZDump/QemuServer.pm
> @@ -995,6 +995,7 @@ sub archive_vma {
>          }
>  
>          my $qmpclient = PVE::QMPClient->new();
> +        my $qmp_peer = { vmid => $vmid, type => 'qmp' };
>          my $backup_cb = sub {
>              my ($vmid, $resp) = @_;
>              $backup_job_uuid = $resp->{return}->{UUID};
> @@ -1012,10 +1013,14 @@ sub archive_vma {
>              $params->{fleecing} = JSON::true if $task->{'use-fleecing'};
>              add_backup_performance_options($params, $opts->{performance}, $qemu_support);
>  
> -            $qmpclient->queue_cmd($vmid, $backup_cb, 'backup', %$params);
> +            $qmpclient->queue_cmd($qmp_peer, $backup_cb, 'backup', %$params);
>          };
>  
> -        $qmpclient->queue_cmd($vmid, $add_fd_cb, 'getfd', fd => $outfileno, fdname => "backup");
> +        $qmpclient->queue_cmd(
> +            $qmp_peer, $add_fd_cb, 'getfd',
> +            fd => $outfileno,
> +            fdname => "backup",
> +        );
>  
>          my $fs_frozen = $self->qga_fs_freeze($task, $vmid);
>  
> diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
> index f61cd64b..5808f032 100644
> --- a/src/test/snapshot-test.pm
> +++ b/src/test/snapshot-test.pm
> @@ -356,7 +356,7 @@ sub vm_running_locally {
>  # BEGIN mocked PVE::QemuServer::Monitor methods
>  
>  sub qmp_cmd {
> -    my ($vmid, $cmd) = @_;
> +    my ($peer, $cmd) = @_;
>  
>      my $exec = $cmd->{execute};
>      if ($exec eq "guest-ping") {



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality
  2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality Fiona Ebner
@ 2025-10-17 13:08   ` Daniel Kral
  2025-10-17 14:46     ` Fiona Ebner
  2025-10-20  8:47   ` Laurent GUERBY
  1 sibling, 1 reply; 28+ messages in thread
From: Daniel Kral @ 2025-10-17 13:08 UTC (permalink / raw)
  To: Proxmox VE development discussion; +Cc: pve-devel

On Tue Oct 14, 2025 at 4:39 PM CEST, Fiona Ebner wrote:
> diff --git a/src/PVE/QemuServer/Helpers.pm b/src/PVE/QemuServer/Helpers.pm
> index 2c78a7b4..ab8aa389 100644
> --- a/src/PVE/QemuServer/Helpers.pm
> +++ b/src/PVE/QemuServer/Helpers.pm
> @@ -89,6 +89,24 @@ sub qsd_pidfile_name {
>      return "${var_run_tmpdir}/qsd-${vmid}.pid";
>  }
>  
> +sub qsd_fuse_export_cleanup_files {
> +    my ($vmid) = @_;
> +
> +    PVE::Tools::dir_glob_foreach(
> +        $var_run_tmpdir,
> +        "qsd-${vmid}-.*.fuse",
> +        sub {
> +            my ($file) = @_;
> +            unlink "${var_run_tmpdir}/${file}";

This could fail if the exported FUSE fs wasn't cleaned up by qsd
correctly; see more below.

> +        },
> +    );
> +}
> +
> +sub qsd_fuse_export_path {
> +    my ($vmid, $export_name) = @_;
> +    return "${var_run_tmpdir}/qsd-${vmid}-${export_name}.fuse";
> +}
> +

--- [ snip ] ---

> +=head3 quit
> +
> +    PVE::QemuServer::QSD::quit($vmid);
> +
> +Shut down the QEMU storage daemon associated to VM C<$vmid> and clean up its PID file and socket.
> +Waits up to 60 seconds for a clean shutdown, then sends SIGTERM and waits an additional 10 seconds
> +before sending SIGKILL.
> +
> +=cut
> +
> +sub quit($vmid) {
> +    eval { PVE::QemuServer::Monitor::qsd_cmd($vmid, 'quit'); };
> +    my $qmp_err = $@;
> +    warn "QEMU storage daemon for $vmid failed to handle 'quit' - $qmp_err" if $qmp_err;
> +
> +    my $count = $qmp_err ? 60 : 0; # can't wait for QMP 'quit' to terminate the process if it failed
> +    my $pid = PVE::QemuServer::Helpers::qsd_running_locally($vmid);
> +    while ($pid) {
> +        if ($count == 60) {
> +            warn "QEMU storage daemon for $vmid still running with PID $pid"
> +                . " - terminating now with SIGTERM\n";
> +            kill 15, $pid;
> +        } elsif ($count == 70) {
> +            warn "QEMU storage daemon for $vmid still running with PID $pid"
> +                . " - terminating now with SIGKILL\n";
> +            kill 9, $pid;
> +            last;

SIGKILL will leave the FUSE export in an unremovable state since the
qsd couldn't clean it up. I could only reproduce this by doing it
manually with `kill -SIGKILL ...`, which resulted in the VM not being
startable anymore.

Might be worth adding an `umount /run/qemu-server/qsd-....fuse` if
unlink fails, or if we can stat -f that the file is still the
FUSE-exported image?
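
Something along these lines in qsd_fuse_export_cleanup_files() is what
I had in mind (rough, untested sketch):

    sub {
        my ($file) = @_;
        my $path = "${var_run_tmpdir}/${file}";
        return if unlink($path);
        # likely still a (dead) FUSE mount, e.g. after SIGKILL
        eval { PVE::Tools::run_command(['umount', $path]); };
        warn "unable to unmount '$path' - $@" if $@;
        unlink($path) or warn "unable to unlink '$path' - $!\n";
    },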

> +        }
> +
> +        sleep 1;
> +        $count++;
> +        $pid = PVE::QemuServer::Helpers::qsd_running_locally($vmid);
> +    }
> +
> +    unlink PVE::QemuServer::Helpers::qsd_pidfile_name($vmid);
> +    unlink PVE::QemuServer::Helpers::qmp_socket({ vmid => $vmid, type => 'qsd' });
> +
> +    PVE::QemuServer::Helpers::qsd_fuse_export_cleanup_files($vmid);
> +
> +    return;
> +}
> +
> +1;



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu 01/16] d/rules: enable fuse
  2025-10-14 14:39 ` [pve-devel] [PATCH qemu 01/16] d/rules: enable fuse Fiona Ebner
@ 2025-10-17 13:09   ` Daniel Kral
  2025-10-17 14:03     ` Fiona Ebner
  0 siblings, 1 reply; 28+ messages in thread
From: Daniel Kral @ 2025-10-17 13:09 UTC (permalink / raw)
  To: Proxmox VE development discussion; +Cc: pve-devel

On Tue Oct 14, 2025 at 4:39 PM CEST, Fiona Ebner wrote:
> This is in preparation to allow FUSE exports of qcow2-formatted TPM
> state volumes via qemu-storage-daemon.
>
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>  debian/rules | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/debian/rules b/debian/rules
> index 757ca2a..912456e 100755
> --- a/debian/rules
> +++ b/debian/rules
> @@ -61,6 +61,7 @@ endif
>  	    --disable-xen \
>  	    --enable-curl \
>  	    --enable-docs \
> +	    --enable-fuse \
>  	    --enable-gnutls \
>  	    --enable-libiscsi \
>  	    --enable-libusb \

Should add a libfuse3-dev >= 3.1 to the Build-Depends ;)
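
I.e. something like this in d/control (exact version constraint up for
debate):

    Build-Depends: ...,
                   libfuse3-dev (>= 3.1~),
                   ...,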


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive
  2025-10-14 14:39 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (15 preceding siblings ...)
  2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 16/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
@ 2025-10-17 13:17 ` Daniel Kral
  16 siblings, 0 replies; 28+ messages in thread
From: Daniel Kral @ 2025-10-17 13:17 UTC (permalink / raw)
  To: Proxmox VE development discussion; +Cc: pve-devel

On Tue Oct 14, 2025 at 4:39 PM CEST, Fiona Ebner wrote:
> Add infrastructure for doing FUSE exports via QEMU storage daemon.
> This makes it possible to use non-raw formatted volumes for the TPM
> state, by exposing it to swtpm as raw via FUSE. A QEMU storage daemon
> instance is associated to a given VM.
>
> The swtpm_setup code tries to unlink files rather than just clear the
> header like it does for block devices. FUSE exports cannot be
> unlinked, align the behavior to also just remove the header for files.
>
> To have FUSE exports available, it's necessary to enable via QEMU
> build flags.
>
> A new standard option for VM image formats is introduced and in the
> end used for the TPM state drive. The need for that also came up
> already in the past for setting a format override when restoring and
> it's cleaner to use what the storage layer actually supports.
>
> Then there's two independent improvements for qemu-server.
>
> For the QMP client and wrappers, the QMP peer is better abstracted and
> the QEMU storage daemon is added as a possible peer.
>
> Blockdev code is updated to also support attaching a drive to the QEMU
> storage daemon rather than just the main QEMU instance for a VM.
>
> Then the QSD module is introduced and handling for TPM is added.
>
> Finally, non-raw formats are allowed in the schema for the TPM state
> drive.

I have tested this for a few pre-existing and new VMs and it works like
a charm, nice work!

I've tested the following:

- creating a VM with tpmstate0 as a raw image
- creating a VM with tpmstate0 as a subvol
- creating a VM with tpmstate0 as a qcow2 image
- cloning a VM with tpmstate0 as a raw image
- cloning a VM with tpmstate0 as a qcow2 image
- templating a VM with tpmstate0 + cloning
- moving qcow2 tpmstate0s between storages
- converting existing raw tpmstate0 to qcow2
- converting qcow2 tpmstate0 back to raw image

I installed Debian on the new VMs and used existing Debian and Windows
VMs to test the templating / moving / converting ops and all of those
worked as expected.

As pointed out in the cover letter / swtpm patch, not applying that one
will result in a failed VM start as it will try to unlink the FUSE
export fs (which is exposed as a regular file).

The qsd run files in /run/qemu-server/ were always cleaned up correctly
(besides when sending SIGKILL to the VM's qsd process directly) and I
didn't run into qsd hanging when stopping the VM normally otherwise.

I've tried my best to study the surrounding code that I didn't have much
knowledge about yet (mainly setting up swtpm and the qmp cmd
infrastructure), but didn't find any serious issues besides a few nits
and the SIGKILL handling, so with the latter addressed consider this as:

Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 06/16] qmp client: better abstract peer in preparation for qemu-storage-daemon
  2025-10-17 12:38   ` Daniel Kral
@ 2025-10-17 13:36     ` Fiona Ebner
  0 siblings, 0 replies; 28+ messages in thread
From: Fiona Ebner @ 2025-10-17 13:36 UTC (permalink / raw)
  To: Proxmox VE development discussion, Daniel Kral; +Cc: pve-devel

On 17.10.25 at 2:39 PM, Daniel Kral wrote:
> On Tue Oct 14, 2025 at 4:39 PM CEST, Fiona Ebner wrote:
>> @@ -72,21 +70,21 @@ my $push_cmd_to_queue = sub {
>>  # add a single command to the queue for later execution
>>  # with queue_execute()
>>  sub queue_cmd {
>> -    my ($self, $vmid, $callback, $execute, %params) = @_;
>> +    my ($self, $peer, $callback, $execute, %params) = @_;
>>  
>>      my $cmd = {};
>>      $cmd->{execute} = $execute;
>>      $cmd->{arguments} = \%params;
>>      $cmd->{callback} = $callback;
>>  
>> -    &$push_cmd_to_queue($self, $vmid, $cmd);
>> +    &$push_cmd_to_queue($self, $peer, $cmd);
> 
> nit: pre-existing but could become a $self->push_cmd_to_queue(...)?

Not right now, because $push_cmd_to_queue is a subroutine reference and
not a method. And a method can't be private AFAIK. Could be changed to
be a private sub and called that way, but it's not in the scope of this
patch.
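
(For reference, with a lexical sub, like already used elsewhere, e.g.
blockdev_add() in Blockdev.pm, it would look something like the
following sketch:

    my sub push_cmd_to_queue {
        my ($self, $peer, $cmd) = @_;
        ...;
    }

    push_cmd_to_queue($self, $peer, $cmd);

but again, out of scope for this patch.)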

>> @@ -158,7 +156,8 @@ sub cmd {
>>      $self->queue_execute($timeout, 2);
>>  
>>      if (defined($queue_info->{error})) {
>> -        die "VM $vmid qmp command '$cmd->{execute}' failed - $queue_info->{error}" if !$noerr;
>> +        die "VM $peer->{vmid} $peer->{type} command '$cmd->{execute}' failed - $queue_info->{error}"
>> +            if !$noerr;
> 
> nit: could be moved into a separate commit as the next patch, but
>      definitely not important.

I'll mention the change in the commit message and squash in the next
patch. In a way, stating the type of command is part of the better
abstraction.

>> @@ -339,7 +336,7 @@ sub queue_execute {
>>          eval {
>>              &$open_connection($self, $queue_info, $timeout);
>>  
>> -            if (!$queue_info->{qga}) {
>> +            if ($queue_info->{peer}->{type} ne 'qga') {
> 
> nit: might be worth adding to the patch message that qsd also exposes a
>      qmp monitor and so will also need capabilities negotiation,
>      either in this patch or in the patch introducing qsd.

Will do!

>> diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
>> index 0cccdfbe..00b52799 100644
>> --- a/src/PVE/QemuServer/Monitor.pm
>> +++ b/src/PVE/QemuServer/Monitor.pm
>> @@ -15,20 +15,31 @@ our @EXPORT_OK = qw(
>>  =head3 qmp_cmd
>>  
>>      my $cmd = { execute => $qmp_command_name, arguments => \%params };
>> -    my $result = qmp_cmd($vmid, $cmd);
>> +    my $peer = { vmid => $vmid, type => $type };
>> +    my $result = qmp_cmd($peer, $cmd);
>>  
>> -Execute the C<$qmp_command_name> with arguments C<%params> for VM C<$vmid>. Dies if the VM is not
>> -running or the monitor socket cannot be reached, even if the C<noerr> argument is used. Returns the
>> -structured result from the QMP side converted from JSON to structured Perl data. In case the
>> -C<noerr> argument is used and the QMP command failed or timed out, the result is a hash reference
>> -with an C<error> key containing the error message.
>> +Execute the C<$qmp_command_name> with arguments C<%params> for the peer C<$peer>. The type C<$type>
>> +of the peer can be C<qmp> for the QEMU instance of the VM or C<qga> for the guest agent of the VM.
>> +Dies if the VM is not running or the monitor socket cannot be reached, even if the C<noerr> argument
>> +is used. Returns the structured result from the QMP side converted from JSON to structured Perl
>> +data. In case the C<noerr> argument is used and the QMP command failed or timed out, the result is a
>> +hash reference with an C<error> key containing the error message.
> 
> nit: might be enough to state the allowed values for $peer only in the
>      parameter list below? Even though this won't change a lot, there
>      would then only be a single source.

IMHO, the description should explicitly list what you can communicate
with, because it's a core part of the functionality and then it's very
natural to already mention the type names along with that. It's also
adjacent, so it's unlikely to become out of sync after a change.

> 
> nit: asserting the peer's $type here already would be nice, but that
>      will be done in a later patch anyway.

The code after the whole series will end up being the same, but yes,
good point, will do!

>> @@ -81,7 +93,9 @@ sub mon_cmd {
>>  
>>      my $cmd = { execute => $execute, arguments => \%params };
>>  
>> -    return qmp_cmd($vmid, $cmd);
>> +    my $type = ($execute =~ /^guest\-+/) ? 'qga' : 'qmp';
> 
> nit: AFAICS $qmpclient->cmd(...) is only used here, but if cmd(...) or
>      queue_cmd(...) were used somewhere else, these callers could
>      set $type = 'qga' and use a qga command. It might be overkill, but
>      shouldn't that be asserted too?
> 
Sorry, I'm not sure what you mean. Assert that the type is 'qga' iff
$execute =~ /^guest\-+/ in cmd() and queue_cmd()?

To be honest, I'd rather try to move away from that in the future and
not hard-code it more. For example, introduce and use a qga_cmd()
helper. I think it's just pure convention that all guest agent commands
start with 'guest-' and that no QMP command starts with 'guest-' yet.
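
For illustration, such a helper could be as simple as the following
sketch, mirroring mon_cmd():

    sub qga_cmd {
        my ($vmid, $execute, %params) = @_;

        my $cmd = { execute => $execute, arguments => \%params };

        return qmp_cmd({ vmid => $vmid, type => 'qga' }, $cmd);
    }

Callers would then state their intent explicitly instead of relying on
the command name.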

If you meant something else, let me know!


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu 01/16] d/rules: enable fuse
  2025-10-17 13:09   ` Daniel Kral
@ 2025-10-17 14:03     ` Fiona Ebner
  0 siblings, 0 replies; 28+ messages in thread
From: Fiona Ebner @ 2025-10-17 14:03 UTC (permalink / raw)
  To: Proxmox VE development discussion, Daniel Kral; +Cc: pve-devel

On 17.10.25 at 3:09 PM, Daniel Kral wrote:
> On Tue Oct 14, 2025 at 4:39 PM CEST, Fiona Ebner wrote:
>> This is in preparation to allow FUSE exports of qcow2-formatted TPM
>> state volumes via qemu-storage-daemon.
>>
>> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
>> ---
>>  debian/rules | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/debian/rules b/debian/rules
>> index 757ca2a..912456e 100755
>> --- a/debian/rules
>> +++ b/debian/rules
>> @@ -61,6 +61,7 @@ endif
>>  	    --disable-xen \
>>  	    --enable-curl \
>>  	    --enable-docs \
>> +	    --enable-fuse \
>>  	    --enable-gnutls \
>>  	    --enable-libiscsi \
>>  	    --enable-libusb \
> 
> Should add a libfuse3-dev >= 3.1 to the Build-Depends ;)

Nice catch! I didn't use sbuild, so I didn't notice. I think we don't
need to specify the version, as the earliest that shipped with Trixie is
already new enough.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality
  2025-10-17 13:08   ` Daniel Kral
@ 2025-10-17 14:46     ` Fiona Ebner
  0 siblings, 0 replies; 28+ messages in thread
From: Fiona Ebner @ 2025-10-17 14:46 UTC (permalink / raw)
  To: Proxmox VE development discussion, Daniel Kral; +Cc: pve-devel

On 17.10.25 at 3:08 PM, Daniel Kral wrote:
> On Tue Oct 14, 2025 at 4:39 PM CEST, Fiona Ebner wrote:
>> diff --git a/src/PVE/QemuServer/Helpers.pm b/src/PVE/QemuServer/Helpers.pm
>> index 2c78a7b4..ab8aa389 100644
>> --- a/src/PVE/QemuServer/Helpers.pm
>> +++ b/src/PVE/QemuServer/Helpers.pm
>> @@ -89,6 +89,24 @@ sub qsd_pidfile_name {
>>      return "${var_run_tmpdir}/qsd-${vmid}.pid";
>>  }
>>  
>> +sub qsd_fuse_export_cleanup_files {
>> +    my ($vmid) = @_;
>> +
>> +    PVE::Tools::dir_glob_foreach(
>> +        $var_run_tmpdir,
>> +        "qsd-${vmid}-.*.fuse",
>> +        sub {
>> +            my ($file) = @_;
>> +            unlink "${var_run_tmpdir}/${file}";
> 
> This could fail if the exported FUSE fs wasn't cleaned up by qsd
> correctly; see more below.
> 
>> +        },
>> +    );
>> +}
>> +
>> +sub qsd_fuse_export_path {
>> +    my ($vmid, $export_name) = @_;
>> +    return "${var_run_tmpdir}/qsd-${vmid}-${export_name}.fuse";
>> +}
>> +
> 
> --- [ snip ] ---
> 
>> +=head3 quit
>> +
>> +    PVE::QemuServer::QSD::quit($vmid);
>> +
>> +Shut down the QEMU storage daemon associated to VM C<$vmid> and clean up its PID file and socket.
>> +Waits up to 60 seconds for a clean shutdown, then sends SIGTERM and waits an additional 10 seconds
>> +before sending SIGKILL.
>> +
>> +=cut
>> +
>> +sub quit($vmid) {
>> +    eval { PVE::QemuServer::Monitor::qsd_cmd($vmid, 'quit'); };
>> +    my $qmp_err = $@;
>> +    warn "QEMU storage daemon for $vmid failed to handle 'quit' - $qmp_err" if $qmp_err;
>> +
>> +    my $count = $qmp_err ? 60 : 0; # can't wait for QMP 'quit' to terminate the process if it failed
>> +    my $pid = PVE::QemuServer::Helpers::qsd_running_locally($vmid);
>> +    while ($pid) {
>> +        if ($count == 60) {
>> +            warn "QEMU storage daemon for $vmid still running with PID $pid"
>> +                . " - terminating now with SIGTERM\n";
>> +            kill 15, $pid;
>> +        } elsif ($count == 70) {
>> +            warn "QEMU storage daemon for $vmid still running with PID $pid"
>> +                . " - terminating now with SIGKILL\n";
>> +            kill 9, $pid;
>> +            last;
> 
> SIGKILL will leave the FUSE export in an unremovable state since the
> qsd couldn't clean it up. I could only reproduce this by doing it
> manually with `kill -SIGKILL ...`, which resulted in the VM not being
> startable anymore.
> 
> Might be worth adding an `umount /run/qemu-server/qsd-....fuse` if
> unlink fails, or if we can stat -f that the file is still the
> FUSE-exported image?

Good catch! Yeah, I think trying an unmount is a good approach.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality
  2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality Fiona Ebner
  2025-10-17 13:08   ` Daniel Kral
@ 2025-10-20  8:47   ` Laurent GUERBY
  2025-10-20  9:49     ` Fiona Ebner
  1 sibling, 1 reply; 28+ messages in thread
From: Laurent GUERBY @ 2025-10-20  8:47 UTC (permalink / raw)
  To: Proxmox VE development discussion

On Tue, 2025-10-14 at 16:39 +0200, Fiona Ebner wrote:
> For now, supports creating FUSE exports based on Proxmox VE drive
> definitions. NBD exports could be added later. In preparation to allow
> qcow2 for TPM state volumes. A QEMU storage daemon instance is
> associated to a given VM.

Hi,

I wonder if this addition of qemu-storage-daemon with fuse would be
able to solve the following issue I just opened:

https://bugzilla.proxmox.com/show_bug.cgi?id=6953

"cannot set set-require-min-compat-client to reef, luminous clients
from kernel rbd due to VM with TPM /dev/rbd"

The rbd kernel module feature is stuck to luminous and swtpm use of
kernel /dev/rbd limits the usable features of the whole proxmox/ceph
cluster as soon as a VM with TPM is created on the cluster.

If possible, using qemu-storage-daemon to export the rbd image to swtpm
would still allow proxmox to leave the TPM disk on ceph while
benefiting from recent ceph features.
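
For example something along these lines (illustrative invocation,
pool/image names made up):

    qemu-storage-daemon \
        --blockdev driver=rbd,node-name=tpm0,pool=rbd,image=vm-100-disk-1 \
        --export type=fuse,id=tpm0,node-name=tpm0,mountpoint=/run/qemu-server/qsd-100-tpmstate0.fuse,writable=on

so that swtpm only ever sees a plain file and no /dev/rbd mapping (and
thus no kernel client) is involved.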

Sincerely,

Laurent GUERBY

PS: ZFS over iSCSI isn't usable for TPM either
https://bugzilla.proxmox.com/show_bug.cgi?id=3662
TPM disfunctional with ZFS over iSCSI

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality
  2025-10-20  8:47   ` Laurent GUERBY
@ 2025-10-20  9:49     ` Fiona Ebner
  2025-10-20 10:00       ` Fiona Ebner
  2025-10-20 11:27       ` Laurent GUERBY
  0 siblings, 2 replies; 28+ messages in thread
From: Fiona Ebner @ 2025-10-20  9:49 UTC (permalink / raw)
  To: Proxmox VE development discussion, Laurent GUERBY

Hi,

On 20.10.25 at 10:57 AM, Laurent GUERBY wrote:
> On Tue, 2025-10-14 at 16:39 +0200, Fiona Ebner wrote:
>> For now, supports creating FUSE exports based on Proxmox VE drive
>> definitions. NBD exports could be added later. In preparation to allow
>> qcow2 for TPM state volumes. A QEMU storage daemon instance is
>> associated to a given VM.
> 
> Hi,
> 
> I wonder if this addition of qemu-storage-daemon with fuse would be
> able to solve the following issue I just opened:
> 
> https://bugzilla.proxmox.com/show_bug.cgi?id=6953
> 
> "cannot set set-require-min-compat-client to reef, luminous clients
> from kernel rbd due to VM with TPM /dev/rbd"
> 
> The rbd kernel module feature is stuck to luminous

Do you know why? Or if there is any interest to change that?

> and swtpm use of
> kernel /dev/rbd limits the usable features of the whole proxmox/ceph
> cluster as soon as a VM with TPM is created on the cluster.
> 
> If possible, using qemu-storage-daemon to export the rbd image to swtpm
> would still allow proxmox to leave the TPM disk on ceph while
> benefiting from recent ceph features.

It would be possible, but it rather sounds like the real issue is that
the kernel module is outdated. And for container volumes, krbd is also
always used, so it would need to be adapted there too. Otherwise, you
will still break container volumes when bumping the minimum required
client version.

> PS: ZFS over iSCSI isn't usable for TPM either
> https://bugzilla.proxmox.com/show_bug.cgi?id=3662
> TPM disfunctional with ZFS over iSCSI

Yes, I'm planning to add that later, it's hopefully rather easy once the
infrastructure is in place.

Best Regards,
Fiona


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality
  2025-10-20  9:49     ` Fiona Ebner
@ 2025-10-20 10:00       ` Fiona Ebner
  2025-10-20 11:27       ` Laurent GUERBY
  1 sibling, 0 replies; 28+ messages in thread
From: Fiona Ebner @ 2025-10-20 10:00 UTC (permalink / raw)
  To: Proxmox VE development discussion, Laurent GUERBY

On 20.10.25 at 11:50 AM, Fiona Ebner wrote:
> On 20.10.25 at 10:57 AM, Laurent GUERBY wrote:
>> If possible using qemu-storage-daemon to export the rbd image to swtpm
>> would still allow proxmox to leave the TPM disk on ceph while
>> benefiting from recent ceph features.
> 
> It would be possible, but it rather sounds like the real issue is that
> the kernel module is outdated. And for container volumes, krbd is also
> always used, so it would need to be adapted there too. Otherwise, you
> will still break container volumes when bumping the minimum required
> client version.

Also, the performance impact would likely be bad for containers (but one
would need to actually test this).


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality
  2025-10-20  9:49     ` Fiona Ebner
  2025-10-20 10:00       ` Fiona Ebner
@ 2025-10-20 11:27       ` Laurent GUERBY
  1 sibling, 0 replies; 28+ messages in thread
From: Laurent GUERBY @ 2025-10-20 11:27 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

Hi,

On Mon, 2025-10-20 at 11:49 +0200, Fiona Ebner wrote:
> Hi,
> 
> On 20.10.25 at 10:57 AM, Laurent GUERBY wrote:
> > On Tue, 2025-10-14 at 16:39 +0200, Fiona Ebner wrote:
> > > For now, supports creating FUSE exports based on Proxmox VE drive
> > > definitions. NBD exports could be added later. In preparation to allow
> > > qcow2 for TPM state volumes. A QEMU storage daemon instance is
> > > associated to a given VM.
> > 
> > Hi,
> > 
> > I wonder if this addition of qemu-storage-daemon with fuse would be
> > able to solve the following issue I just opened:
> > 
> > https://bugzilla.proxmox.com/show_bug.cgi?id=6953
> > 
> > "cannot set set-require-min-compat-client to reef, luminous clients
> > from kernel rbd due to VM with TPM /dev/rbd"
> > 
> > The rbd kernel module feature is stuck to luminous
> 
> Do you know why? Or if there is any interest to change that?

I don't know (I'm not a ceph nor a kernel developer). If I look at the
latest ceph documentation, it points to a 4.19 minimum kernel version:

https://docs.ceph.com/en/latest/start/os-recommendations/#linux-kernel

This is consistent with the "4.17" comment on the following ceph feature:

https://github.com/ceph/ceph/blob/main/src/include/ceph_features.h#L157

(nothing more recent than 4.17)

The linux kernel rbd driver code doesn't change much

https://github.com/torvalds/linux/commits/master/drivers/block/rbd.c

I presume this is for maximum compatibility with potentially old-ish
userspace.

I also don't know if the rbd kernel module could advertise more recent
ceph features and fall back to the luminous level in some way.
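
(What the cluster actually sees can be checked with:

ceph features

which groups the connected clients by the release they report - I'd
expect a krbd mapping to show up there as a luminous client, matching
the bug report above.)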


> 
> > and swtpm's use of
> > kernel /dev/rbd limits the usable features of the whole proxmox/ceph
> > cluster as soon as a VM with TPM is created on the cluster.
> > 
> > If possible, using qemu-storage-daemon to export the rbd image to swtpm
> > would still allow proxmox to leave the TPM disk on ceph while
> > benefiting from recent ceph features.
> 
> It would be possible, but it rather sounds like the real issue is that
> the kernel module is outdated. And for container volumes, krbd is also
> always used, so it would need to be adapted there too. Otherwise, you
> will still break container volumes when bumping the minimum required
> client version.

Good catch, I don't use containers on Proxmox VE, so I didn't think of
that.

Maybe it would be wise to ask the ceph developers what they think
about it, as ceph users outside of Proxmox will be affected as well.

For example, the following reef-based feature is documented as a
performance improvement (and "highly recommended"), and the container
world is important nowadays:

https://docs.ceph.com/en/latest/rados/operations/balancer/#modes
"""
upmap-read. This balancer mode combines optimization benefits of both
upmap and read mode. Like in read mode, upmap-read makes use of pg-
upmap-primary. As such, only Reef and later clients are compatible. For
more details about client compatibility, see Operating the Read
(Primary) Balancer.

upmap-read is highly recommended for achieving the upmap mode’s
offering of balanced PG distribution as well as the read mode’s
offering of balanced reads.
"""

As a side note, I had imbalance in our small proxmox/ceph cluster (8
nodes and 57 OSDs), and lowering upmap_max_deviation from 5 to 2 got
rid of it:

ceph config set mgr mgr/balancer/upmap_max_deviation 2

MIN/MAX VAR: 0.85/1.21  STDDEV: 5.38 # before, with the default of 5
MIN/MAX VAR: 0.93/1.06  STDDEV: 1.56 # after, set to 2

It also got rid of warnings about some OSDs being above 0.8 usage
(while the average usage was at 0.6).
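
(For anyone wanting to reproduce: the MIN/MAX VAR and STDDEV values
above are the kind of summary that ceph osd df prints as its last
line, so something like

ceph osd df | tail -1    # MIN/MAX VAR: ... STDDEV: ...

before and after changing the parameter should show the effect.)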

So it might be interesting to add Proxmox VE documentation, and maybe
tooling, for this parameter, as I assume most Proxmox users will have
small-ish clusters and potentially hit imbalance issues like ours.

https://docs.ceph.com/en/latest/rados/operations/balancer/#throttling

Let me know if it's worth opening a separate bugzilla entry.

Sincerely,

Laurent GUERBY

> 
> > PS: ZFS over iSCSI isn't usable for TPM either:
> > https://bugzilla.proxmox.com/show_bug.cgi?id=3662
> > TPM disfunctional with ZFS over iSCSI
> 
> Yes, I'm planning to add that later; it's hopefully rather easy once the
> infrastructure is in place.
> 
> Best Regards,
> Fiona


end of thread

Thread overview: 28+ messages
2025-10-14 14:39 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu 01/16] d/rules: enable fuse Fiona Ebner
2025-10-17 13:09   ` Daniel Kral
2025-10-17 14:03     ` Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH swtpm 02/16] swtpm setup: file: always just clear header rather than unlinking Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH storage 03/16] common: add pve-vm-image-format standard option for VM image formats Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 04/16] tests: cfg2cmd: remove invalid mocking of qmp_cmd Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 05/16] migration: offline volumes: drop deprecated special casing for TPM state Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 06/16] qmp client: better abstract peer in preparation for qemu-storage-daemon Fiona Ebner
2025-10-17 12:38   ` Daniel Kral
2025-10-17 13:36     ` Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 07/16] monitor: qmp: precise error message by logging peer type Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 08/16] helpers: add functions for qemu-storage-daemon instances Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 09/16] monitor: qmp: allow 'qsd' peer type for qemu-storage-daemon Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 10/16] monitor: align interface of qmp_cmd() with other helpers Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 11/16] machine: include +pve version when getting installed machine version Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 12/16] blockdev: support attaching to qemu-storage-daemon Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 13/16] blockdev: attach: also return whether attached blockdev is read-only Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality Fiona Ebner
2025-10-17 13:08   ` Daniel Kral
2025-10-17 14:46     ` Fiona Ebner
2025-10-20  8:47   ` Laurent GUERBY
2025-10-20  9:49     ` Fiona Ebner
2025-10-20 10:00       ` Fiona Ebner
2025-10-20 11:27       ` Laurent GUERBY
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 15/16] tpm: support non-raw volumes via FUSE exports for swtpm Fiona Ebner
2025-10-14 14:39 ` [pve-devel] [PATCH qemu-server 16/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
2025-10-17 13:17 ` [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server 00/16] " Daniel Kral
