public inbox for pve-devel@lists.proxmox.com
 help / color / mirror / Atom feed
* [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive
@ 2025-10-20 14:12 Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu v2 01/16] d/rules: enable fuse Fiona Ebner
                   ` (15 more replies)
  0 siblings, 16 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

Changes in v2:
* Attempt unmounting left-over mounted files (thanks @Dano).
* Add missing fuse dependencies for QEMU packaging (thanks @Dano).
* Improve commit message for QMP peer abstraction (thanks @Dano).
* Clean up files before starting new QSD instance.
* Further abstract QMP peer in QMP client/monitor modules:
  Replace 'vmid' by 'id' and allow specifying a peer name for error
  messages. This is preparation for use cases of the storage daemon
  where there might not be a single associated guest. For example,
  restoring from a backup provider via exports of a storage daemon,
  and a second storage daemon for the TPM of the VM itself.
* Add UI patch.

Add infrastructure for doing FUSE exports via the QEMU storage daemon.
This makes it possible to use non-raw formatted volumes for the TPM
state, by exposing them to swtpm as raw via FUSE. A QEMU storage
daemon instance is associated with a given VM.
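
As a rough sketch, such a FUSE export could look like the following
(hypothetical paths and node names, not taken from this series; assumes
qemu-storage-daemon was built with FUSE support):

```shell
# Sketch: expose a qcow2-formatted TPM state volume as a raw file via FUSE.
# The mount point must exist as a regular file before exporting.
touch /run/qsd-tpm0.raw
qemu-storage-daemon \
    --blockdev driver=file,node-name=file0,filename=/var/lib/vz/images/100/vm-100-disk-1.qcow2 \
    --blockdev driver=qcow2,node-name=fmt0,file=file0 \
    --export type=fuse,id=exp0,node-name=fmt0,mountpoint=/run/qsd-tpm0.raw,writable=on
# swtpm can then be pointed at /run/qsd-tpm0.raw as if it were a plain raw file
```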

The swtpm_setup code tries to unlink files rather than just clearing
the header like it does for block devices. Since FUSE exports cannot
be unlinked, align the behavior to also just clear the header for
files.
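
The effect of that change can be illustrated with a quick shell
equivalent (illustrative only; swtpm_setup does this in C, and the
8-byte header size is taken from the patch below):

```shell
# Illustrative equivalent of the adjusted delete_state() behavior:
# zero the 8-byte swtpm state header in place instead of unlinking.
printf 'ABCDEFGHIJKLMNOP' > state.raw   # stand-in for an existing state file
dd if=/dev/zero of=state.raw bs=1 count=8 conv=notrunc 2>/dev/null
# the file still exists; only its header bytes are zeroed
```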

To have FUSE exports available, it's necessary to enable them via QEMU
build flags.

A new standard option for VM image formats is introduced and, in the
end, used for the TPM state drive. The need for it already came up in
the past, for setting a format override when restoring, and it's
cleaner to use what the storage layer actually supports.

Then there are two independent improvements for qemu-server.

For the QMP client and wrappers, the QMP peer is better abstracted and
the QEMU storage daemon is added as a possible peer.

Blockdev code is updated to also support attaching a drive to the QEMU
storage daemon rather than just the main QEMU instance for a VM.

Then the QSD module is introduced and handling for TPM is added.

Finally, non-raw formats are allowed in the schema for the TPM state
drive.

Smoke tested, but not yet in-depth.

Build-dependency bump and dependency bump for pve-storage needed!
Dependency bump for QEMU and swtpm needed!

qemu:

Fiona Ebner (1):
  d/rules: enable fuse

 debian/control | 2 ++
 debian/rules   | 1 +
 2 files changed, 3 insertions(+)


swtpm:

Fiona Ebner (1):
  swtpm setup: file: always just clear header rather than unlinking

 src/swtpm_setup/swtpm_backend_file.c | 42 +++++++++++-----------------
 1 file changed, 17 insertions(+), 25 deletions(-)


storage:

Fiona Ebner (1):
  common: add pve-vm-image-format standard option for VM image formats

 src/PVE/Storage/Common.pm | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)


qemu-server:

Fiona Ebner (12):
  tests: cfg2cmd: remove invalid mocking of qmp_cmd
  migration: offline volumes: drop deprecated special casing for TPM
    state
  qmp client: better abstract peer in preparation for
    qemu-storage-daemon
  helpers: add functions for qemu-storage-daemon instances
  monitor: qmp: allow 'qsd' peer type for qemu-storage-daemon
  monitor: align interface of qmp_cmd() with other helpers
  machine: include +pve version when getting installed machine version
  blockdev: support attaching to qemu-storage-daemon
  blockdev: attach: also return whether attached blockdev is read-only
  introduce QSD module for qemu-storage-daemon functionality
  tpm: support non-raw volumes via FUSE exports for swtpm
  fix #4693: drive: allow non-raw image formats for TPM state drive

 src/PVE/API2/Qemu.pm                 |   8 +-
 src/PVE/QMPClient.pm                 |  56 ++++++------
 src/PVE/QemuMigrate.pm               |   7 +-
 src/PVE/QemuServer.pm                |  59 +++++++++---
 src/PVE/QemuServer/BlockJob.pm       |   2 +-
 src/PVE/QemuServer/Blockdev.pm       |  43 ++++++---
 src/PVE/QemuServer/Drive.pm          |   2 +
 src/PVE/QemuServer/Helpers.pm        |  72 ++++++++++++---
 src/PVE/QemuServer/Machine.pm        |  19 ++--
 src/PVE/QemuServer/Makefile          |   1 +
 src/PVE/QemuServer/Monitor.pm        |  80 ++++++++++++-----
 src/PVE/QemuServer/QSD.pm            | 130 +++++++++++++++++++++++++++
 src/PVE/VZDump/QemuServer.pm         |   9 +-
 src/test/run_config2command_tests.pl |   1 -
 src/test/snapshot-test.pm            |   4 +-
 15 files changed, 377 insertions(+), 116 deletions(-)
 create mode 100644 src/PVE/QemuServer/QSD.pm


manager:

Fiona Ebner (1):
  ui: qemu: tpm drive: follow back-end and allow non-raw formats

 www/manager6/form/DiskStorageSelector.js | 2 +-
 www/manager6/qemu/HDMove.js              | 1 -
 www/manager6/qemu/HDTPM.js               | 2 +-
 3 files changed, 2 insertions(+), 3 deletions(-)


Summary over all repositories:
  22 files changed, 416 insertions(+), 146 deletions(-)

-- 
Generated by git-murpp 0.5.0


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pve-devel] [PATCH qemu v2 01/16] d/rules: enable fuse
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH swtpm v2 02/16] swtpm setup: file: always just clear header rather than unlinking Fiona Ebner
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

This is in preparation to allow FUSE exports of qcow2-formatted TPM
state volumes via qemu-storage-daemon.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Changes in v2:
* Add build-depends and depends for relevant fuse packages.

 debian/control | 2 ++
 debian/rules   | 1 +
 2 files changed, 3 insertions(+)

diff --git a/debian/control b/debian/control
index 7c17051..81cc026 100644
--- a/debian/control
+++ b/debian/control
@@ -11,6 +11,7 @@ Build-Depends: debhelper-compat (= 13),
                libcurl4-gnutls-dev,
                libepoxy-dev,
                libfdt-dev,
+               libfuse3-dev,
                libgbm-dev,
                libgnutls28-dev,
                libiscsi-dev (>= 1.12.0),
@@ -45,6 +46,7 @@ Standards-Version: 3.7.2
 Package: pve-qemu-kvm
 Architecture: any
 Depends: ceph-common (>= 0.48),
+         fuse3,
          iproute2,
          libiscsi4 (>= 1.12.0) | libiscsi7,
          libjpeg62-turbo,
diff --git a/debian/rules b/debian/rules
index 757ca2a..912456e 100755
--- a/debian/rules
+++ b/debian/rules
@@ -61,6 +61,7 @@ endif
 	    --disable-xen \
 	    --enable-curl \
 	    --enable-docs \
+	    --enable-fuse \
 	    --enable-gnutls \
 	    --enable-libiscsi \
 	    --enable-libusb \
-- 
2.47.3





^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pve-devel] [PATCH swtpm v2 02/16] swtpm setup: file: always just clear header rather than unlinking
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu v2 01/16] d/rules: enable fuse Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH storage v2 03/16] common: add pve-vm-image-format standard option for VM image formats Fiona Ebner
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

When using a FUSE export as the target file, it cannot be unlinked.
For block devices, the header is cleared instead; do the same for files.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
---
 src/swtpm_setup/swtpm_backend_file.c | 42 +++++++++++-----------------
 1 file changed, 17 insertions(+), 25 deletions(-)

diff --git a/src/swtpm_setup/swtpm_backend_file.c b/src/swtpm_setup/swtpm_backend_file.c
index a0d0f4d..d4ca2c5 100644
--- a/src/swtpm_setup/swtpm_backend_file.c
+++ b/src/swtpm_setup/swtpm_backend_file.c
@@ -57,36 +57,28 @@ static int check_access(void *state,
     return ret == 0 || errno == ENOENT ? 0 : 1;
 }
 
-/* Delete state from file: if regular file, unlink, if blockdev, zero header so
- * swtpm binary will treat it as a new instance. */
+/* Delete state from file: zero header so swtpm binary will treat it as a new
+ * instance. */
 static int delete_state(void *state) {
     const struct file_state *fstate = (struct file_state*)state;
     int fd;
 
-    if (fstate->is_blockdev) {
-        char zerobuf[8] = {0}; /* swtpm header has 8 byte */
-        fd = open(fstate->path, O_WRONLY);
-        if (fd < 0) {
-            logerr(gl_LOGFILE, "Couldn't open file for clearing %s: %s\n",
-                   fstate->path, strerror(errno));
-            return 1;
-        }
-        /* writing less bytes than requested is bad, but won't set errno */
-        errno = 0;
-        if (write(fd, zerobuf, sizeof(zerobuf)) < (long)sizeof(zerobuf)) {
-            logerr(gl_LOGFILE, "Couldn't write file for clearing %s: %s\n",
-                   fstate->path, strerror(errno));
-            close(fd);
-            return 1;
-        }
-        close(fd);
-    } else {
-        if (unlink(fstate->path)) {
-            logerr(gl_LOGFILE, "Couldn't unlink file for clearing %s: %s\n",
-                   fstate->path, strerror(errno));
-            return 1;
-        }
+    char zerobuf[8] = {0}; /* swtpm header has 8 byte */
+    fd = open(fstate->path, O_WRONLY);
+    if (fd < 0) {
+        logerr(gl_LOGFILE, "Couldn't open file for clearing %s: %s\n",
+               fstate->path, strerror(errno));
+        return 1;
     }
+    /* writing less bytes than requested is bad, but won't set errno */
+    errno = 0;
+    if (write(fd, zerobuf, sizeof(zerobuf)) < (long)sizeof(zerobuf)) {
+        logerr(gl_LOGFILE, "Couldn't write file for clearing %s: %s\n",
+               fstate->path, strerror(errno));
+        close(fd);
+        return 1;
+    }
+    close(fd);
 
     return 0;
 }
-- 
2.47.3





^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pve-devel] [PATCH storage v2 03/16] common: add pve-vm-image-format standard option for VM image formats
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu v2 01/16] d/rules: enable fuse Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH swtpm v2 02/16] swtpm setup: file: always just clear header rather than unlinking Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 04/16] tests: cfg2cmd: remove invalid mocking of qmp_cmd Fiona Ebner
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

The image formats defined for the pve-vm-image-format standard option
are the formats that are allowed on Proxmox VE storages for VM images.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
---
 src/PVE/Storage/Common.pm | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index 222dc76..3932aee 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -45,16 +45,31 @@ Possible formats a guest image can have.
 
 =cut
 
+PVE::JSONSchema::register_standard_option(
+    'pve-storage-image-format',
+    {
+        type => 'string',
+        enum => ['raw', 'qcow2', 'subvol', 'vmdk'],
+        description => "Format of the image.",
+    },
+);
+
+=head3 pve-vm-image-format
+
+Possible formats a VM image can have.
+
+=cut
+
 # TODO PVE 9 - Note that currently, qemu-server allows more formats for VM images, so third party
 # storage plugins might potentially allow more too, but none of the plugins we are aware of do that.
 # Those formats should either be allowed here or support for them should be phased out (at least in
 # the storage layer). Can still be added again in the future, should any plugin provider request it.
 
 PVE::JSONSchema::register_standard_option(
-    'pve-storage-image-format',
+    'pve-vm-image-format',
     {
         type => 'string',
-        enum => ['raw', 'qcow2', 'subvol', 'vmdk'],
+        enum => ['raw', 'qcow2', 'vmdk'],
         description => "Format of the image.",
     },
 );
-- 
2.47.3





^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pve-devel] [PATCH qemu-server v2 04/16] tests: cfg2cmd: remove invalid mocking of qmp_cmd
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (2 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH storage v2 03/16] common: add pve-vm-image-format standard option for VM image formats Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 05/16] migration: offline volumes: drop deprecated special casing for TPM state Fiona Ebner
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

There is no definition of 'qmp_cmd' that could be referenced as a
subroutine with '\&qmp_cmd'. If the function were actually called
during the test, there would be an error:
> Undefined subroutine &main::qmp_cmd called

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
---
 src/test/run_config2command_tests.pl | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/test/run_config2command_tests.pl b/src/test/run_config2command_tests.pl
index 0623b5c1..f63e8b68 100755
--- a/src/test/run_config2command_tests.pl
+++ b/src/test/run_config2command_tests.pl
@@ -525,7 +525,6 @@ $qemu_monitor_module->mock(
         die "unexpected QMP command: '$cmd'";
     },
 );
-$qemu_monitor_module->mock('qmp_cmd', \&qmp_cmd);
 
 my $mapping_usb_module = Test::MockModule->new("PVE::Mapping::USB");
 $mapping_usb_module->mock(
-- 
2.47.3





^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pve-devel] [PATCH qemu-server v2 05/16] migration: offline volumes: drop deprecated special casing for TPM state
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (3 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 04/16] tests: cfg2cmd: remove invalid mocking of qmp_cmd Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 06/16] qmp client: better abstract peer in preparation for qemu-storage-daemon Fiona Ebner
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

Since qemu-server >= 7.2-1 with commit 13d121d7 ("fix #3861: migrate:
fix live migration when cloud-init changes storage"), migration
targets can handle the 'offline_volume' log line for passing back the
new volume ID for an offline migrated volume to the source side. Drop
the special handling for TPM state now, so that the special handling
for parsing can also be dropped in the future.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
---
 src/PVE/API2/Qemu.pm   | 1 +
 src/PVE/QemuMigrate.pm | 7 +------
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 71bedc1e..4243e4da 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -3491,6 +3491,7 @@ __PACKAGE__->register_method({
                 } elsif ($line =~ m/^replicated_volume: (.*)$/) {
                     $replicated_volumes->{$1} = 1;
                 } elsif ($line =~ m/^tpmstate0: (.*)$/) { # Deprecated, use offline_volume instead
+                    # TODO PVE 10.x drop special handling here
                     $offline_volumes->{tpmstate0} = $1;
                 } elsif ($line =~ m/^offline_volume: ([^:]+): (.*)$/) {
                     $offline_volumes->{$1} = $2;
diff --git a/src/PVE/QemuMigrate.pm b/src/PVE/QemuMigrate.pm
index 78954c20..b5023864 100644
--- a/src/PVE/QemuMigrate.pm
+++ b/src/PVE/QemuMigrate.pm
@@ -1020,12 +1020,7 @@ sub phase2_start_local_cluster {
         my $new_volid = $self->{volume_map}->{$volid};
         next if !$new_volid || $volid eq $new_volid;
 
-        # FIXME PVE 8.x only use offline_volume variant once all targets can handle it
-        if ($drivename eq 'tpmstate0') {
-            $input .= "$drivename: $new_volid\n";
-        } else {
-            $input .= "offline_volume: $drivename: $new_volid\n";
-        }
+        $input .= "offline_volume: $drivename: $new_volid\n";
     }
 
     $input .= "spice_ticket: $migrate->{spice_ticket}\n" if $migrate->{spice_ticket};
-- 
2.47.3





^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pve-devel] [PATCH qemu-server v2 06/16] qmp client: better abstract peer in preparation for qemu-storage-daemon
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (4 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 05/16] migration: offline volumes: drop deprecated special casing for TPM state Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 07/16] helpers: add functions for qemu-storage-daemon instances Fiona Ebner
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

In preparation to add 'qsd' as a peer type for qemu-storage-daemon.

There already are two different peer types, namely 'qga' for the QEMU
guest agent and 'qmp' for the QEMU instance itself.

Future QMP peers (like the qemu-storage-daemon) are likely to use a
QMP monitor with capability negotiation like QEMU itself, so the
special handling done for the guest agent stays limited to the 'qga'
peer type.

Replace the association with a VM ID and allow specifying an arbitrary
ID.

Make two error messages that used a hard-coded 'qmp' more precise by
specifying the actual QMP peer type instead. The $peer structure also
has a name that is used for error messages, to avoid hard-coding 'VM'
there.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Changes in v2:
* Improve commit message.
* Further abstract QMP peer in QMP client/monitor modules:
  Replace 'vmid' by 'id' and allow specifying a peer name for error
  messages. This is preparation for use cases of the storage daemon
  where there might not be a single associated guest. For example,
  restoring from a backup provider via exports of a storage daemon,
  and a second storage daemon for the TPM of the VM itself.

 src/PVE/QMPClient.pm          | 56 +++++++++++++++++------------------
 src/PVE/QemuServer.pm         | 21 ++++++++-----
 src/PVE/QemuServer/Helpers.pm |  6 ++--
 src/PVE/QemuServer/Monitor.pm | 49 +++++++++++++++++++++---------
 src/PVE/VZDump/QemuServer.pm  |  9 ++++--
 src/test/snapshot-test.pm     |  2 +-
 6 files changed, 86 insertions(+), 57 deletions(-)

diff --git a/src/PVE/QMPClient.pm b/src/PVE/QMPClient.pm
index 1935a336..e4f7a515 100644
--- a/src/PVE/QMPClient.pm
+++ b/src/PVE/QMPClient.pm
@@ -53,15 +53,13 @@ my $qga_allow_close_cmds = {
 };
 
 my $push_cmd_to_queue = sub {
-    my ($self, $vmid, $cmd) = @_;
+    my ($self, $peer, $cmd) = @_;
 
     my $execute = $cmd->{execute} || die "no command name specified";
 
-    my $qga = ($execute =~ /^guest\-+/) ? 1 : 0;
+    my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
 
-    my $sname = PVE::QemuServer::Helpers::qmp_socket($vmid, $qga);
-
-    $self->{queue_info}->{$sname} = { qga => $qga, vmid => $vmid, sname => $sname, cmds => [] }
+    $self->{queue_info}->{$sname} = { peer => $peer, sname => $sname, cmds => [] }
         if !$self->{queue_info}->{$sname};
 
     push @{ $self->{queue_info}->{$sname}->{cmds} }, $cmd;
@@ -72,26 +70,26 @@ my $push_cmd_to_queue = sub {
 # add a single command to the queue for later execution
 # with queue_execute()
 sub queue_cmd {
-    my ($self, $vmid, $callback, $execute, %params) = @_;
+    my ($self, $peer, $callback, $execute, %params) = @_;
 
     my $cmd = {};
     $cmd->{execute} = $execute;
     $cmd->{arguments} = \%params;
     $cmd->{callback} = $callback;
 
-    &$push_cmd_to_queue($self, $vmid, $cmd);
+    &$push_cmd_to_queue($self, $peer, $cmd);
 
     return;
 }
 
 # execute a single command
 sub cmd {
-    my ($self, $vmid, $cmd, $timeout, $noerr) = @_;
+    my ($self, $peer, $cmd, $timeout, $noerr) = @_;
 
     my $result;
 
     my $callback = sub {
-        my ($vmid, $resp) = @_;
+        my ($id, $resp) = @_;
         $result = $resp->{'return'};
         $result = { error => $resp->{'error'} } if !defined($result) && $resp->{'error'};
     };
@@ -101,7 +99,7 @@ sub cmd {
     $cmd->{callback} = $callback;
     $cmd->{arguments} = {} if !defined($cmd->{arguments});
 
-    my $queue_info = &$push_cmd_to_queue($self, $vmid, $cmd);
+    my $queue_info = &$push_cmd_to_queue($self, $peer, $cmd);
 
     if (!$timeout) {
         # hack: monitor sometime blocks
@@ -158,7 +156,8 @@ sub cmd {
     $self->queue_execute($timeout, 2);
 
     if (defined($queue_info->{error})) {
-        die "VM $vmid qmp command '$cmd->{execute}' failed - $queue_info->{error}" if !$noerr;
+        die "$peer->{name} $peer->{type} command '$cmd->{execute}' failed - $queue_info->{error}"
+            if !$noerr;
         $result = { error => $queue_info->{error} };
         $result->{'error-is-timeout'} = 1 if $queue_info->{'error-is-timeout'};
     }
@@ -206,10 +205,10 @@ my $open_connection = sub {
 
     die "duplicate call to open" if defined($queue_info->{fh});
 
-    my $vmid = $queue_info->{vmid};
-    my $qga = $queue_info->{qga};
+    my $peer = $queue_info->{peer};
+    my ($peer_name, $sotype) = $peer->@{qw(name type)};
 
-    my $sname = PVE::QemuServer::Helpers::qmp_socket($vmid, $qga);
+    my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
 
     $timeout = 1 if !$timeout;
 
@@ -217,18 +216,17 @@ my $open_connection = sub {
     my $starttime = [gettimeofday];
     my $count = 0;
 
-    my $sotype = $qga ? 'qga' : 'qmp';
-
     for (;;) {
         $count++;
         $fh = IO::Socket::UNIX->new(Peer => $sname, Blocking => 0, Timeout => 1);
         last if $fh;
         if ($! != EINTR && $! != EAGAIN) {
-            die "unable to connect to VM $vmid $sotype socket - $!\n";
+            die "unable to connect to $peer_name $sotype socket - $!\n";
         }
         my $elapsed = tv_interval($starttime, [gettimeofday]);
         if ($elapsed >= $timeout) {
-            die "unable to connect to VM $vmid $sotype socket - timeout after $count retries\n";
+            die
+                "unable to connect to $peer_name $sotype socket - timeout after $count retries\n";
         }
         usleep(100000);
     }
@@ -253,7 +251,7 @@ my $check_queue = sub {
         my $fh = $queue_info->{fh};
         next if !$fh;
 
-        my $qga = $queue_info->{qga};
+        my $qga = $queue_info->{peer}->{type} eq 'qga';
 
         if ($queue_info->{error}) {
             &$close_connection($self, $queue_info);
@@ -339,7 +337,7 @@ sub queue_execute {
         eval {
             &$open_connection($self, $queue_info, $timeout);
 
-            if (!$queue_info->{qga}) {
+            if ($queue_info->{peer}->{type} ne 'qga') {
                 my $cap_cmd = { execute => 'qmp_capabilities', arguments => {} };
                 unshift @{ $queue_info->{cmds} }, $cap_cmd;
             }
@@ -397,11 +395,11 @@ sub mux_input {
     return if !$queue_info;
 
     my $sname = $queue_info->{sname};
-    my $vmid = $queue_info->{vmid};
-    my $qga = $queue_info->{qga};
+    my ($id, $peer_name) = $queue_info->{peer}->@{qw(id name)};
+    my $qga = $queue_info->{peer}->{type} eq 'qga';
 
     my $curcmd = $queue_info->{current};
-    die "unable to lookup current command for VM $vmid ($sname)\n" if !$curcmd;
+    die "unable to lookup current command for $peer_name ($sname)\n" if !$curcmd;
 
     my $raw;
 
@@ -437,7 +435,7 @@ sub mux_input {
             $obj = from_json($jsons[1]);
 
             if (my $callback = $curcmd->{callback}) {
-                &$callback($vmid, $obj);
+                &$callback($id, $obj);
             }
 
             return;
@@ -471,7 +469,7 @@ sub mux_input {
             delete $queue_info->{current};
 
             if (my $callback = $curcmd->{callback}) {
-                &$callback($vmid, $obj);
+                &$callback($id, $obj);
             }
         }
     };
@@ -501,11 +499,11 @@ sub mux_eof {
     return if !$queue_info;
 
     my $sname = $queue_info->{sname};
-    my $vmid = $queue_info->{vmid};
-    my $qga = $queue_info->{qga};
+    my ($id, $peer_name) = $queue_info->{peer}->@{qw(id name)};
+    my $qga = $queue_info->{peer}->{type} eq 'qga';
 
     my $curcmd = $queue_info->{current};
-    die "unable to lookup current command for VM $vmid ($sname)\n" if !$curcmd;
+    die "unable to lookup current command for $peer_name ($sname)\n" if !$curcmd;
 
     if ($qga && $qga_allow_close_cmds->{ $curcmd->{execute} }) {
 
@@ -522,7 +520,7 @@ sub mux_eof {
             delete $queue_info->{current};
 
             if (my $callback = $curcmd->{callback}) {
-                &$callback($vmid, undef);
+                &$callback($id, undef);
             }
         };
         if (my $err = $@) {
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index df2476aa..e8bce20f 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2706,13 +2706,15 @@ sub vmstatus {
     my $statuscb = sub {
         my ($vmid, $resp) = @_;
 
-        $qmpclient->queue_cmd($vmid, $proxmox_support_cb, 'query-proxmox-support');
-        $qmpclient->queue_cmd($vmid, $blockstatscb, 'query-blockstats');
-        $qmpclient->queue_cmd($vmid, $machinecb, 'query-machines');
-        $qmpclient->queue_cmd($vmid, $versioncb, 'query-version');
+        my $qmp_peer = { name => "VM $vmid", id => $vmid, type => 'qmp' };
+
+        $qmpclient->queue_cmd($qmp_peer, $proxmox_support_cb, 'query-proxmox-support');
+        $qmpclient->queue_cmd($qmp_peer, $blockstatscb, 'query-blockstats');
+        $qmpclient->queue_cmd($qmp_peer, $machinecb, 'query-machines');
+        $qmpclient->queue_cmd($qmp_peer, $versioncb, 'query-version');
         # this fails if ballon driver is not loaded, so this must be
         # the last command (following command are aborted if this fails).
-        $qmpclient->queue_cmd($vmid, $ballooncb, 'query-balloon');
+        $qmpclient->queue_cmd($qmp_peer, $ballooncb, 'query-balloon');
 
         my $status = 'unknown';
         if (!defined($status = $resp->{'return'}->{status})) {
@@ -2726,7 +2728,8 @@ sub vmstatus {
     foreach my $vmid (keys %$list) {
         next if $opt_vmid && ($vmid ne $opt_vmid);
         next if !$res->{$vmid}->{pid}; # not running
-        $qmpclient->queue_cmd($vmid, $statuscb, 'query-status');
+        my $qmp_peer = { name => "VM $vmid", id => $vmid, type => 'qmp' };
+        $qmpclient->queue_cmd($qmp_peer, $statuscb, 'query-status');
     }
 
     $qmpclient->queue_execute(undef, 2);
@@ -3180,7 +3183,8 @@ sub config_to_command {
 
     my $use_virtio = 0;
 
-    my $qmpsocket = PVE::QemuServer::Helpers::qmp_socket($vmid);
+    my $qmpsocket =
+        PVE::QemuServer::Helpers::qmp_socket({ name => "VM $vmid", id => $vmid, type => 'qmp' });
     push @$cmd, '-chardev', "socket,id=qmp,path=$qmpsocket,server=on,wait=off";
     push @$cmd, '-mon', "chardev=qmp,mode=control";
 
@@ -3417,7 +3421,8 @@ sub config_to_command {
     my $guest_agent = parse_guest_agent($conf);
 
     if ($guest_agent->{enabled}) {
-        my $qgasocket = PVE::QemuServer::Helpers::qmp_socket($vmid, 1);
+        my $qgasocket = PVE::QemuServer::Helpers::qmp_socket(
+            { name => "VM $vmid", id => $vmid, type => 'qga' });
         push @$devices, '-chardev', "socket,path=$qgasocket,server=on,wait=off,id=qga0";
 
         if (!$guest_agent->{type} || $guest_agent->{type} eq 'virtio') {
diff --git a/src/PVE/QemuServer/Helpers.pm b/src/PVE/QemuServer/Helpers.pm
index 3e444839..87f4f841 100644
--- a/src/PVE/QemuServer/Helpers.pm
+++ b/src/PVE/QemuServer/Helpers.pm
@@ -79,9 +79,9 @@ our $var_run_tmpdir = "/var/run/qemu-server";
 mkdir $var_run_tmpdir;
 
 sub qmp_socket {
-    my ($vmid, $qga) = @_;
-    my $sockettype = $qga ? 'qga' : 'qmp';
-    return "${var_run_tmpdir}/$vmid.$sockettype";
+    my ($peer) = @_;
+    my ($id, $type) = $peer->@{qw(id type)};
+    return "${var_run_tmpdir}/${id}.${type}";
 }
 
 sub pidfile_name {
diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index 0cccdfbe..293679fe 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -15,19 +15,32 @@ our @EXPORT_OK = qw(
 =head3 qmp_cmd
 
     my $cmd = { execute => $qmp_command_name, arguments => \%params };
-    my $result = qmp_cmd($vmid, $cmd);
+    my $peer = { name => $name, id => $id, type => $type };
+    my $result = qmp_cmd($peer, $cmd);
 
-Execute the C<$qmp_command_name> with arguments C<%params> for VM C<$vmid>. Dies if the VM is not
-running or the monitor socket cannot be reached, even if the C<noerr> argument is used. Returns the
-structured result from the QMP side converted from JSON to structured Perl data. In case the
-C<noerr> argument is used and the QMP command failed or timed out, the result is a hash reference
-with an C<error> key containing the error message.
+Execute the C<$qmp_command_name> with arguments C<%params> for the peer C<$peer>. The type C<$type>
+of the peer can be C<qmp> for the QEMU instance of the VM or C<qga> for the guest agent of the VM.
+Dies if the VM is not running or the monitor socket cannot be reached, even if the C<noerr> argument
+is used. Returns the structured result from the QMP side converted from JSON to structured Perl
+data. In case the C<noerr> argument is used and the QMP command failed or timed out, the result is a
+hash reference with an C<error> key containing the error message.
 
 Parameters:
 
 =over
 
-=item C<$vmid>: The ID of the virtual machine.
+=item C<$peer>: The peer to communicate with. A hash reference with:
+
+=over
+
+=item C<$name>: Name of the peer used in error messages.
+
+=item C<$id>: Identifier for the peer. The pair C<($id, $type)> uniquely identifies a peer.
+
+=item C<$type>: Type of the peer to communicate with. This can be C<qmp> for the VM's QEMU instance
+or C<qga> for the VM's guest agent.
+
+=back
 
 =item C<$cmd>: Hash reference containing the QMP command name for the C<execute> key and additional
 arguments for the QMP command under the C<arguments> key. The following custom arguments are not
@@ -48,7 +61,7 @@ handle the error that is returned as a structured result.
 =cut
 
 sub qmp_cmd {
-    my ($vmid, $cmd) = @_;
+    my ($peer, $cmd) = @_;
 
     my $res;
 
@@ -58,18 +71,24 @@ sub qmp_cmd {
     }
 
     eval {
-        die "VM $vmid not running\n" if !PVE::QemuServer::Helpers::vm_running_locally($vmid);
-        my $sname = PVE::QemuServer::Helpers::qmp_socket($vmid);
+        if ($peer->{type} eq 'qmp' || $peer->{type} eq 'qga') {
+            die "$peer->{name} not running\n"
+                if !PVE::QemuServer::Helpers::vm_running_locally($peer->{id});
+        } else {
+            die "qmp_cmd - unknown peer type $peer->{type}\n";
+        }
+
+        my $sname = PVE::QemuServer::Helpers::qmp_socket($peer);
         if (-e $sname) { # test if VM is reasonably new and supports qmp/qga
             my $qmpclient = PVE::QMPClient->new();
 
-            $res = $qmpclient->cmd($vmid, $cmd, $timeout, $noerr);
+            $res = $qmpclient->cmd($peer, $cmd, $timeout, $noerr);
         } else {
             die "unable to open monitor socket\n";
         }
     };
     if (my $err = $@) {
-        syslog("err", "VM $vmid qmp command failed - $err");
+        syslog("err", "$peer->{name} $peer->{type} command failed - $err");
         die $err;
     }
 
@@ -81,7 +100,9 @@ sub mon_cmd {
 
     my $cmd = { execute => $execute, arguments => \%params };
 
-    return qmp_cmd($vmid, $cmd);
+    my $type = ($execute =~ /^guest\-+/) ? 'qga' : 'qmp';
+
+    return qmp_cmd({ name => "VM $vmid", id => $vmid, type => $type }, $cmd);
 }
 
 sub hmp_cmd {
@@ -92,7 +113,7 @@ sub hmp_cmd {
         arguments => { 'command-line' => $cmdline, timeout => $timeout },
     };
 
-    return qmp_cmd($vmid, $cmd);
+    return qmp_cmd({ name => "VM $vmid", id => $vmid, type => 'qmp' }, $cmd);
 }
 
 1;
diff --git a/src/PVE/VZDump/QemuServer.pm b/src/PVE/VZDump/QemuServer.pm
index dd789652..84ebbe80 100644
--- a/src/PVE/VZDump/QemuServer.pm
+++ b/src/PVE/VZDump/QemuServer.pm
@@ -995,6 +995,7 @@ sub archive_vma {
         }
 
         my $qmpclient = PVE::QMPClient->new();
+        my $qmp_peer = { name => "VM $vmid", id => $vmid, type => 'qmp' };
         my $backup_cb = sub {
             my ($vmid, $resp) = @_;
             $backup_job_uuid = $resp->{return}->{UUID};
@@ -1012,10 +1013,14 @@ sub archive_vma {
             $params->{fleecing} = JSON::true if $task->{'use-fleecing'};
             add_backup_performance_options($params, $opts->{performance}, $qemu_support);
 
-            $qmpclient->queue_cmd($vmid, $backup_cb, 'backup', %$params);
+            $qmpclient->queue_cmd($qmp_peer, $backup_cb, 'backup', %$params);
         };
 
-        $qmpclient->queue_cmd($vmid, $add_fd_cb, 'getfd', fd => $outfileno, fdname => "backup");
+        $qmpclient->queue_cmd(
+            $qmp_peer, $add_fd_cb, 'getfd',
+            fd => $outfileno,
+            fdname => "backup",
+        );
 
         my $fs_frozen = $self->qga_fs_freeze($task, $vmid);
 
diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
index f61cd64b..5808f032 100644
--- a/src/test/snapshot-test.pm
+++ b/src/test/snapshot-test.pm
@@ -356,7 +356,7 @@ sub vm_running_locally {
 # BEGIN mocked PVE::QemuServer::Monitor methods
 
 sub qmp_cmd {
-    my ($vmid, $cmd) = @_;
+    my ($peer, $cmd) = @_;
 
     my $exec = $cmd->{execute};
     if ($exec eq "guest-ping") {
-- 
2.47.3



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pve-devel] [PATCH qemu-server v2 07/16] helpers: add functions for qemu-storage-daemon instances
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (5 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 06/16] qmp client: better abstract peer in preparation for qemu-storage-daemon Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 08/16] monitor: qmp: allow 'qsd' peer type for qemu-storage-daemon Fiona Ebner
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

In particular, add a function to get the path to the PID file and a
function to check whether a qemu-storage-daemon instance is running.
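
As a sketch of the intended call sites (the ID is a made-up example, not from this patch):

```perl
use PVE::QemuServer::Helpers;

my $id = 100; # hypothetical QSD instance ID

# PID file path, e.g. "/var/run/qemu-server/qsd-100.pid"
my $pidfile = PVE::QemuServer::Helpers::qsd_pidfile_name($id);

# truthy only while the daemon instance is actually alive
if (my $pid = PVE::QemuServer::Helpers::qsd_running_locally($id)) {
    print "qemu-storage-daemon instance $id is running\n";
}
```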

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Changes in v2:
* Use ID instead of VM ID for QSD peer.

 src/PVE/QemuServer.pm         |  4 ++--
 src/PVE/QemuServer/Helpers.pm | 33 ++++++++++++++++++++++++++-------
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index e8bce20f..66fc3231 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2944,7 +2944,7 @@ sub query_supported_cpu_flags {
     my $kvm_supported = defined(kvm_version());
     my $qemu_cmd = PVE::QemuServer::Helpers::get_command_for_arch($arch);
     my $fakevmid = -1;
-    my $pidfile = PVE::QemuServer::Helpers::pidfile_name($fakevmid);
+    my $pidfile = PVE::QemuServer::Helpers::vm_pidfile_name($fakevmid);
 
     # Start a temporary (frozen) VM with vmid -1 to allow sending a QMP command
     my $query_supported_run_qemu = sub {
@@ -3198,7 +3198,7 @@ sub config_to_command {
         push @$cmd, '-mon', "chardev=qmp-event,mode=control";
     }
 
-    push @$cmd, '-pidfile', PVE::QemuServer::Helpers::pidfile_name($vmid);
+    push @$cmd, '-pidfile', PVE::QemuServer::Helpers::vm_pidfile_name($vmid);
 
     push @$cmd, '-daemonize';
 
diff --git a/src/PVE/QemuServer/Helpers.pm b/src/PVE/QemuServer/Helpers.pm
index 87f4f841..ce9c352a 100644
--- a/src/PVE/QemuServer/Helpers.pm
+++ b/src/PVE/QemuServer/Helpers.pm
@@ -84,7 +84,12 @@ sub qmp_socket {
     return "${var_run_tmpdir}/${id}.${type}";
 }
 
-sub pidfile_name {
+sub qsd_pidfile_name {
+    my ($id) = @_;
+    return "${var_run_tmpdir}/qsd-${id}.pid";
+}
+
+sub vm_pidfile_name {
     my ($vmid) = @_;
     return "${var_run_tmpdir}/$vmid.pid";
 }
@@ -94,7 +99,7 @@ sub vnc_socket {
     return "${var_run_tmpdir}/$vmid.vnc";
 }
 
-# Parse the cmdline of a running kvm/qemu process and return arguments as hash
+# Parse the cmdline of a running kvm/qemu-* process and return arguments as hash
 sub parse_cmdline {
     my ($pid) = @_;
 
@@ -106,7 +111,7 @@ sub parse_cmdline {
         my @param = split(/\0/, $line);
 
         my $cmd = $param[0];
-        return if !$cmd || ($cmd !~ m|kvm$| && $cmd !~ m@(?:^|/)qemu-system-[^/]+$@);
+        return if !$cmd || ($cmd !~ m|kvm$| && $cmd !~ m@(?:^|/)qemu-[^/]+$@);
 
         my $phash = {};
         my $pending_cmd;
@@ -130,10 +135,8 @@ sub parse_cmdline {
     return;
 }
 
-sub vm_running_locally {
-    my ($vmid) = @_;
-
-    my $pidfile = pidfile_name($vmid);
+my sub instance_running_locally {
+    my ($pidfile) = @_;
 
     if (my $fd = IO::File->new("<$pidfile")) {
         my $st = stat($fd);
@@ -164,6 +167,22 @@ sub vm_running_locally {
     return;
 }
 
+sub qsd_running_locally {
+    my ($id) = @_;
+
+    my $pidfile = qsd_pidfile_name($id);
+
+    return instance_running_locally($pidfile);
+}
+
+sub vm_running_locally {
+    my ($vmid) = @_;
+
+    my $pidfile = vm_pidfile_name($vmid);
+
+    return instance_running_locally($pidfile);
+}
+
 sub min_version {
     my ($verstr, $major, $minor, $pve) = @_;
 
-- 
2.47.3




* [pve-devel] [PATCH qemu-server v2 08/16] monitor: qmp: allow 'qsd' peer type for qemu-storage-daemon
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (6 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 07/16] helpers: add functions for qemu-storage-daemon instances Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 09/16] monitor: align interface of qmp_cmd() with other helpers Fiona Ebner
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Changes in v2:
* Use ID instead of VM ID for QSD peer.

 src/PVE/QemuServer/Monitor.pm | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index 293679fe..1f6aa17d 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -19,11 +19,12 @@ our @EXPORT_OK = qw(
     my $result = qmp_cmd($peer, $cmd);
 
 Execute the C<$qmp_command_name> with arguments C<%params> for the peer C<$peer>. The type C<$type>
-of the peer can be C<qmp> for the QEMU instance of the VM or C<qga> for the guest agent of the VM.
-Dies if the VM is not running or the monitor socket cannot be reached, even if the C<noerr> argument
-is used. Returns the structured result from the QMP side converted from JSON to structured Perl
-data. In case the C<noerr> argument is used and the QMP command failed or timed out, the result is a
-hash reference with an C<error> key containing the error message.
+of the peer can be C<qmp> for the QEMU instance of the VM,  C<qga> for the guest agent of the VM or
+C<qsd> for the QEMU storage daemon associated to the VM. Dies if the VM is not running or the
+monitor socket cannot be reached, even if the C<noerr> argument is used. Returns the structured
+result from the QMP side converted from JSON to structured Perl data. In case the C<noerr> argument
+is used and the QMP command failed or timed out, the result is a hash reference with an C<error> key
+containing the error message.
 
 Parameters:
 
@@ -37,8 +38,8 @@ Parameters:
 
 =item C<$id>: Identifier for the peer. The pair C<($id, $type)> uniquely identifies a peer.
 
-=item C<$type>: Type of the peer to communicate with. This can be C<qmp> for the VM's QEMU instance
-or C<qga> for the VM's guest agent.
+=item C<$type>: Type of the peer to communicate with. This can be C<qmp> for the VM's QEMU instance,
+C<qga> for the VM's guest agent or C<qsd> for the QEMU storage daemon associated to the VM.
 
 =back
 
@@ -74,6 +75,9 @@ sub qmp_cmd {
         if ($peer->{type} eq 'qmp' || $peer->{type} eq 'qga') {
             die "$peer->{name} not running\n"
                 if !PVE::QemuServer::Helpers::vm_running_locally($peer->{id});
+        } elsif ($peer->{type} eq 'qsd') {
+            die "$peer->{name} not running\n"
+                if !PVE::QemuServer::Helpers::qsd_running_locally($peer->{id});
         } else {
             die "qmp_cmd - unknown peer type $peer->{type}\n";
         }
@@ -95,6 +99,14 @@ sub qmp_cmd {
     return $res;
 }
 
+sub qsd_cmd {
+    my ($id, $execute, %params) = @_;
+
+    my $cmd = { execute => $execute, arguments => \%params };
+
+    return qmp_cmd({ name => "QEMU storage daemon $id", id => $id, type => 'qsd' }, $cmd);
+}
+
 sub mon_cmd {
     my ($vmid, $execute, %params) = @_;
 
-- 
2.47.3




* [pve-devel] [PATCH qemu-server v2 09/16] monitor: align interface of qmp_cmd() with other helpers
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (7 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 08/16] monitor: qmp: allow 'qsd' peer type for qemu-storage-daemon Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 10/16] machine: include +pve version when getting installed machine version Fiona Ebner
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

Since all callers of qmp_cmd() construct the $cmd argument the same
way, this can also be done directly in qmp_cmd(). This aligns the
interface of qmp_cmd() to other helpers like mon_cmd(), except for
having a peer rather than just a VM ID. It's much more straightforward
to switch calls from mon_cmd() to qmp_cmd() after this change.
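
A hypothetical before/after of a call site (the VM ID is an example; 'query-status' is a standard QMP command):

```perl
my $peer = { name => "VM 100", id => 100, type => 'qmp' };

# before this patch: the caller assembles the command hash itself
my $cmd = { execute => 'query-status', arguments => {} };
my $res = qmp_cmd($peer, $cmd);

# after this patch: qmp_cmd() assembles the hash, like mon_cmd() already does
$res = qmp_cmd($peer, 'query-status');
```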

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Changes in v2:
* Use ID instead of VM ID for QSD peer.

 src/PVE/QemuServer/Monitor.pm | 34 ++++++++++++++++------------------
 src/test/snapshot-test.pm     |  4 +++-
 2 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index 1f6aa17d..b4725a1f 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -14,9 +14,8 @@ our @EXPORT_OK = qw(
 
 =head3 qmp_cmd
 
-    my $cmd = { execute => $qmp_command_name, arguments => \%params };
     my $peer = { name => $name, id => $id, type => $type };
-    my $result = qmp_cmd($peer, $cmd);
+    my $result = qmp_cmd($peer, $execute, %arguments);
 
 Execute the C<$qmp_command_name> with arguments C<%params> for the peer C<$peer>. The type C<$type>
 of the peer can be C<qmp> for the QEMU instance of the VM,  C<qga> for the guest agent of the VM or
@@ -43,9 +42,10 @@ C<qga> for the VM's guest agent or C<qsd> for the QEMU storage daemon associated
 
 =back
 
-=item C<$cmd>: Hash reference containing the QMP command name for the C<execute> key and additional
-arguments for the QMP command under the C<arguments> key. The following custom arguments are not
-part of the QMP schema and supported for all commands:
+=item C<$execute>: The QMP command name.
+
+=item C<%arguments>: Additional arguments for the QMP command. The following custom arguments are
+not part of the QMP schema and supported for all commands:
 
 =over
 
@@ -62,7 +62,9 @@ handle the error that is returned as a structured result.
 =cut
 
 sub qmp_cmd {
-    my ($peer, $cmd) = @_;
+    my ($peer, $execute, %arguments) = @_;
+
+    my $cmd = { execute => $execute, arguments => \%arguments };
 
     my $res;
 
@@ -102,30 +104,26 @@ sub qmp_cmd {
 sub qsd_cmd {
     my ($id, $execute, %params) = @_;
 
-    my $cmd = { execute => $execute, arguments => \%params };
-
-    return qmp_cmd({ name => "QEMU storage daemon $id", id => $id, type => 'qsd' }, $cmd);
+    return qmp_cmd({ name => "QEMU storage daemon $id", id => $id, type => 'qsd' },
+        $execute, %params);
 }
 
 sub mon_cmd {
     my ($vmid, $execute, %params) = @_;
 
-    my $cmd = { execute => $execute, arguments => \%params };
-
     my $type = ($execute =~ /^guest\-+/) ? 'qga' : 'qmp';
 
-    return qmp_cmd({ name => "VM $vmid", id => $vmid, type => $type }, $cmd);
+    return qmp_cmd({ name => "VM $vmid", id => $vmid, type => $type }, $execute, %params);
 }
 
 sub hmp_cmd {
     my ($vmid, $cmdline, $timeout) = @_;
 
-    my $cmd = {
-        execute => 'human-monitor-command',
-        arguments => { 'command-line' => $cmdline, timeout => $timeout },
-    };
-
-    return qmp_cmd({ name => "VM $vmid", id => $vmid, type => 'qmp' }, $cmd);
+    return qmp_cmd(
+        { name => "VM $vmid", id => $vmid, type => 'qmp' }, 'human-monitor-command',
+        'command-line' => $cmdline,
+        timeout => $timeout,
+    );
 }
 
 1;
diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
index 5808f032..e28107b9 100644
--- a/src/test/snapshot-test.pm
+++ b/src/test/snapshot-test.pm
@@ -356,7 +356,9 @@ sub vm_running_locally {
 # BEGIN mocked PVE::QemuServer::Monitor methods
 
 sub qmp_cmd {
-    my ($peer, $cmd) = @_;
+    my ($peer, $execute, %arguments) = @_;
+
+    my $cmd = { execute => $execute, arguments => \%arguments };
 
     my $exec = $cmd->{execute};
     if ($exec eq "guest-ping") {
-- 
2.47.3




* [pve-devel] [PATCH qemu-server v2 10/16] machine: include +pve version when getting installed machine version
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (8 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 09/16] monitor: align interface of qmp_cmd() with other helpers Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 11/16] blockdev: support attaching to qemu-storage-daemon Fiona Ebner
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

Move the code attaching the +pve version from the single call site
into the get_installed_machine_version() helper. Rename the helper
to latest_installed_machine_version() to make it a bit more explicit.

This is in preparation for the upcoming qemu-storage-daemon
functionality re-using this helper.
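
A hedged sketch of the changed return value (version numbers are made up):

```perl
# before: get_installed_machine_version() returned only the base version,
# e.g. '10.0' for a kvm_user_version() of '10.0.2', and the single caller
# appended the '+pveX' suffix itself.
# after: the helper returns the fully pinned version directly:
my $pin_version = latest_installed_machine_version('10.0.2');
# e.g. '10.0+pve1' if get_pve_version('10.0') > 0, otherwise plain '10.0'
my $machine = "pc-i440fx-$pin_version";
```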

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
---
 src/PVE/QemuServer/Machine.pm | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/src/PVE/QemuServer/Machine.pm b/src/PVE/QemuServer/Machine.pm
index b2ff7e24..719b134d 100644
--- a/src/PVE/QemuServer/Machine.pm
+++ b/src/PVE/QemuServer/Machine.pm
@@ -308,11 +308,17 @@ sub qemu_machine_pxe {
     return $machine;
 }
 
-sub get_installed_machine_version {
+sub latest_installed_machine_version {
     my ($kvmversion) = @_;
+
     $kvmversion = PVE::QemuServer::Helpers::kvm_user_version() if !defined($kvmversion);
-    $kvmversion =~ m/^(\d+\.\d+)/;
-    return $1;
+
+    my ($version) = ($kvmversion =~ m/^(\d+\.\d+)/);
+
+    my $pvever = get_pve_version($version);
+    $version .= "+pve$pvever" if $pvever > 0;
+
+    return $version;
 }
 
 sub windows_get_pinned_machine_version {
@@ -320,12 +326,7 @@ sub windows_get_pinned_machine_version {
 
     my $pin_version = $base_version;
     if (!defined($base_version) || !can_run_pve_machine_version($base_version, $kvmversion)) {
-        $pin_version = get_installed_machine_version($kvmversion);
-        # pin to the current pveX version to make use of most current features if > 0
-        my $pvever = get_pve_version($pin_version);
-        if ($pvever > 0) {
-            $pin_version .= "+pve$pvever";
-        }
+        $pin_version = latest_installed_machine_version($kvmversion);
     }
     if (!$machine || $machine eq 'pc') {
         $machine = "pc-i440fx-$pin_version";
-- 
2.47.3




* [pve-devel] [PATCH qemu-server v2 11/16] blockdev: support attaching to qemu-storage-daemon
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (9 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 10/16] machine: include +pve version when getting installed machine version Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 12/16] blockdev: attach: also return whether attached blockdev is read-only Fiona Ebner
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

The qemu-storage-daemon runs with the installed binary version, so the
machine type should be set accordingly.

All that needs to be changed is the peer for QMP communication.

Adds an export for the qmp_cmd() subroutine.
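
As an illustration of the new option (all values hypothetical):

```perl
# Attach a drive to the QEMU storage daemon with ID $id instead of a VM;
# the QMP peer and the machine version are then derived for the daemon
# internally by attach().
my $node_name = PVE::QemuServer::Blockdev::attach(
    $storecfg, $id, $drive,
    { qsd => 1 },
);
```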

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Changes in v2:
* Use ID instead of VM ID for QSD peer.

 src/PVE/QemuServer/Blockdev.pm | 39 +++++++++++++++++++++++-----------
 src/PVE/QemuServer/Monitor.pm  |  1 +
 2 files changed, 28 insertions(+), 12 deletions(-)

diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 8fa5eb51..f3fa73a5 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -16,7 +16,7 @@ use PVE::QemuServer::BlockJob;
 use PVE::QemuServer::Drive qw(drive_is_cdrom);
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Machine;
-use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::Monitor qw(mon_cmd qmp_cmd);
 
 # gives ($host, $port, $export)
 my $NBD_TCP_PATH_RE_3 = qr/nbd:(\S+):(\d+):exportname=(\S+)/;
@@ -513,9 +513,9 @@ sub generate_pbs_blockdev {
 }
 
 my sub blockdev_add {
-    my ($vmid, $blockdev) = @_;
+    my ($qmp_peer, $blockdev) = @_;
 
-    eval { mon_cmd($vmid, 'blockdev-add', $blockdev->%*); };
+    eval { qmp_cmd($qmp_peer, 'blockdev-add', $blockdev->%*); };
     if (my $err = $@) {
         my $node_name = $blockdev->{'node-name'} // 'undefined';
         die "adding blockdev '$node_name' failed : $err\n" if $@;
@@ -528,9 +528,9 @@ my sub blockdev_add {
 
 =head3 attach
 
-    my $node_name = attach($storecfg, $vmid, $drive, $options);
+    my $node_name = attach($storecfg, $id, $drive, $options);
 
-Attach the drive C<$drive> to the VM C<$vmid> considering the additional options C<$options>.
+Attach the drive C<$drive> to the VM C<$id> considering the additional options C<$options>.
 Returns the node name of the (topmost) attached block device node.
 
 Parameters:
@@ -539,7 +539,7 @@ Parameters:
 
 =item C<$storecfg>: The storage configuration.
 
-=item C<$vmid>: The ID of the virtual machine.
+=item C<$id>: The ID of the virtual machine or QEMU storage daemon.
 
 =item C<$drive>: The drive as parsed from a virtual machine configuration.
 
@@ -563,6 +563,8 @@ rather than the volume itself.
 =item C<< $options->{'tpm-backup'} >>: Generate and attach a block device for backing up the TPM
 state image.
 
+=item C<< $options->{'qsd'} >>: Rather than attaching to a VM, attach to a QEMU storage daemon.
+
 =back
 
 =back
@@ -570,9 +572,22 @@ state image.
 =cut
 
 sub attach {
-    my ($storecfg, $vmid, $drive, $options) = @_;
+    my ($storecfg, $id, $drive, $options) = @_;
 
-    my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+    my $qmp_peer;
+    if ($options->{qsd}) {
+        $qmp_peer = { name => "QEMU storage daemon $id", id => $id, type => 'qsd' };
+    } else {
+        $qmp_peer = { name => "VM $id", id => $id, type => 'qmp' };
+    }
+
+    my $machine_version;
+    if ($options->{qsd}) { # qemu-storage-daemon runs with the installed binary version
+        $machine_version =
+            'pc-i440fx-' . PVE::QemuServer::Machine::latest_installed_machine_version();
+    } else {
+        $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($id);
+    }
 
     my $blockdev = generate_drive_blockdev($storecfg, $drive, $machine_version, $options);
 
@@ -585,17 +600,17 @@ sub attach {
     eval {
         if ($throttle_group_id) {
             # Try to remove potential left-over.
-            mon_cmd($vmid, 'object-del', id => $throttle_group_id, noerr => 1);
+            qmp_cmd($qmp_peer, 'object-del', id => $throttle_group_id, noerr => 1);
 
             my $throttle_group = generate_throttle_group($drive);
-            mon_cmd($vmid, 'object-add', $throttle_group->%*);
+            qmp_cmd($qmp_peer, 'object-add', $throttle_group->%*);
         }
 
-        blockdev_add($vmid, $blockdev);
+        blockdev_add($qmp_peer, $blockdev);
     };
     if (my $err = $@) {
         if ($throttle_group_id) {
-            eval { mon_cmd($vmid, 'object-del', id => $throttle_group_id); };
+            eval { qmp_cmd($qmp_peer, 'object-del', id => $throttle_group_id); };
         }
         die $err;
     }
diff --git a/src/PVE/QemuServer/Monitor.pm b/src/PVE/QemuServer/Monitor.pm
index b4725a1f..e5278881 100644
--- a/src/PVE/QemuServer/Monitor.pm
+++ b/src/PVE/QemuServer/Monitor.pm
@@ -10,6 +10,7 @@ use PVE::QMPClient;
 use base 'Exporter';
 our @EXPORT_OK = qw(
     mon_cmd
+    qmp_cmd
 );
 
 =head3 qmp_cmd
-- 
2.47.3




* [pve-devel] [PATCH qemu-server v2 12/16] blockdev: attach: also return whether attached blockdev is read-only
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (10 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 11/16] blockdev: support attaching to qemu-storage-daemon Fiona Ebner
@ 2025-10-20 14:12 ` Fiona Ebner
  2025-10-20 14:13 ` [pve-devel] [PATCH qemu-server v2 13/16] introduce QSD module for qemu-storage-daemon functionality Fiona Ebner
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:12 UTC (permalink / raw)
  To: pve-devel

This is in preparation for the qemu-storage-daemon, so that it can
apply the same setting for the export.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
---
 src/PVE/QemuServer/BlockJob.pm | 2 +-
 src/PVE/QemuServer/Blockdev.pm | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 506010e1..c89994db 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -480,7 +480,7 @@ sub blockdev_mirror {
 
     # Note that if 'aio' is not explicitly set, i.e. default, it can change if source and target
     # don't both allow or both not allow 'io_uring' as the default.
-    my $target_node_name =
+    my ($target_node_name) =
         PVE::QemuServer::Blockdev::attach($storecfg, $vmid, $dest_drive, $attach_dest_opts);
 
     $jobs = {} if !$jobs;
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index f3fa73a5..26e9c383 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -528,10 +528,10 @@ my sub blockdev_add {
 
 =head3 attach
 
-    my $node_name = attach($storecfg, $id, $drive, $options);
+    my ($node_name, $read_only) = attach($storecfg, $id, $drive, $options);
 
 Attach the drive C<$drive> to the VM C<$id> considering the additional options C<$options>.
-Returns the node name of the (topmost) attached block device node.
+Returns the node name of the (topmost) attached block device node and whether the node is read-only.
 
 Parameters:
 
@@ -615,7 +615,7 @@ sub attach {
         die $err;
     }
 
-    return $blockdev->{'node-name'};
+    return ($blockdev->{'node-name'}, $blockdev->{'read-only'});
 }
 
 =pod
-- 
2.47.3




* [pve-devel] [PATCH qemu-server v2 13/16] introduce QSD module for qemu-storage-daemon functionality
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (11 preceding siblings ...)
  2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 12/16] blockdev: attach: also return whether attached blockdev is read-only Fiona Ebner
@ 2025-10-20 14:13 ` Fiona Ebner
  2025-10-20 14:13 ` [pve-devel] [PATCH qemu-server v2 14/16] tpm: support non-raw volumes via FUSE exports for swtpm Fiona Ebner
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:13 UTC (permalink / raw)
  To: pve-devel

For now, the module supports creating FUSE exports based on Proxmox VE
drive definitions; NBD exports could be added later. This is in
preparation for allowing qcow2 for TPM state volumes. A QEMU storage
daemon instance is associated with a given VM.

Target files where the FUSE export is mounted must already exist. The
'writable' flag for the export is taken to be the negation of the
'read-only' status of the added block node. The 'allow-other' flag is
set to 'off', so only the user the storage daemon is running as may
access the export. For now, exported images don't need to be resized,
so the 'growable' flag is hard-coded to 'false'.

When cleaning up, a 'quit' QMP command is sent to the storage daemon
with a 60-second timeout, after which SIGTERM is sent with a 10-second
timeout, before finally SIGKILL is used if the QEMU storage daemon is
still running.
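
For illustration, the export described above corresponds to a 'block-export-add' QMP command roughly like the following (the node and export names are hypothetical; qsd_cmd() is the helper introduced earlier in this series):

```perl
PVE::QemuServer::Monitor::qsd_cmd(
    $id,
    'block-export-add',
    id => 'export-tpmstate0',
    type => 'fuse',
    'node-name' => 'drive-tpmstate0',
    # the target file must already exist, see above
    mountpoint => PVE::QemuServer::Helpers::qsd_fuse_export_path($id, 'tpmstate0'),
    writable => JSON::true,   # negation of the node's 'read-only' status
    growable => JSON::false,  # exported images don't need to be resized
    'allow-other' => 'off',   # only the daemon's own user may access it
);
```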

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Dependency bump for QEMU needed!

Changes in v2:
* Use ID instead of VM ID for QSD peer.

 src/PVE/QemuServer/Helpers.pm |  33 +++++++++
 src/PVE/QemuServer/Makefile   |   1 +
 src/PVE/QemuServer/QSD.pm     | 130 ++++++++++++++++++++++++++++++++++
 3 files changed, 164 insertions(+)
 create mode 100644 src/PVE/QemuServer/QSD.pm

diff --git a/src/PVE/QemuServer/Helpers.pm b/src/PVE/QemuServer/Helpers.pm
index ce9c352a..35c00754 100644
--- a/src/PVE/QemuServer/Helpers.pm
+++ b/src/PVE/QemuServer/Helpers.pm
@@ -89,6 +89,39 @@ sub qsd_pidfile_name {
     return "${var_run_tmpdir}/qsd-${id}.pid";
 }
 
+sub qsd_fuse_export_cleanup_files {
+    my ($id) = @_;
+
+    # Usually, /var/run is a symlink to /run. It needs to be the exact path for checking if mounted
+    # below. Note that Cwd::realpath() needs to be done on the directory already. Doing it on the
+    # file does not work if the storage daemon is not running and the FUSE is still mounted.
+    my ($real_dir) = Cwd::realpath($var_run_tmpdir) =~ m/^(.*)$/; # untaint
+    if (!$real_dir) {
+        warn "error resolving $var_run_tmpdir - not checking for left-over QSD files\n";
+        return;
+    }
+
+    my $mounts = PVE::ProcFSTools::parse_proc_mounts();
+
+    PVE::Tools::dir_glob_foreach(
+        $real_dir,
+        "qsd-${id}-.*\.fuse",
+        sub {
+            my ($file) = @_;
+            my $path = "${real_dir}/${file}";
+            if (grep { $_->[1] eq $path } $mounts->@*) {
+                PVE::Tools::run_command(['umount', $path]);
+            }
+            unlink $path;
+        },
+    );
+}
+
+sub qsd_fuse_export_path {
+    my ($id, $export_name) = @_;
+    return "${var_run_tmpdir}/qsd-${id}-${export_name}.fuse";
+}
+
 sub vm_pidfile_name {
     my ($vmid) = @_;
     return "${var_run_tmpdir}/$vmid.pid";
diff --git a/src/PVE/QemuServer/Makefile b/src/PVE/QemuServer/Makefile
index 63c8d77c..d599ca91 100644
--- a/src/PVE/QemuServer/Makefile
+++ b/src/PVE/QemuServer/Makefile
@@ -23,6 +23,7 @@ SOURCES=Agent.pm	\
 	PCI.pm		\
 	QemuImage.pm	\
 	QMPHelpers.pm	\
+	QSD.pm		\
 	RNG.pm		\
 	RunState.pm	\
 	StateFile.pm	\
diff --git a/src/PVE/QemuServer/QSD.pm b/src/PVE/QemuServer/QSD.pm
new file mode 100644
index 00000000..897ed9cd
--- /dev/null
+++ b/src/PVE/QemuServer/QSD.pm
@@ -0,0 +1,130 @@
+package PVE::QemuServer::QSD;
+
+use v5.36;
+
+use JSON qw(to_json);
+
+use PVE::JSONSchema qw(json_bool);
+use PVE::SafeSyslog qw(syslog);
+use PVE::Storage;
+use PVE::Tools;
+
+use PVE::QemuServer::Blockdev;
+use PVE::QemuServer::Helpers;
+use PVE::QemuServer::Monitor;
+
+=head3 start
+
+    PVE::QemuServer::QSD::start($id);
+
+Start a QEMU storage daemon instance with ID C<$id>.
+
+=cut
+
+sub start($id) {
+    my $name = "QEMU storage daemon $id";
+
+    # If something is still mounted, that could block the new instance, try to clean up first.
+    PVE::QemuServer::Helpers::qsd_fuse_export_cleanup_files($id);
+
+    my $qmp_socket_path =
+        PVE::QemuServer::Helpers::qmp_socket({ name => $name, id => $id, type => 'qsd' });
+    my $pidfile = PVE::QemuServer::Helpers::qsd_pidfile_name($id);
+
+    my $cmd = [
+        'qemu-storage-daemon',
+        '--daemonize',
+        '--chardev',
+        "socket,id=qmp,path=$qmp_socket_path,server=on,wait=off",
+        '--monitor',
+        'chardev=qmp,mode=control',
+        '--pidfile',
+        $pidfile,
+    ];
+
+    PVE::Tools::run_command($cmd);
+
+    my $pid = PVE::QemuServer::Helpers::qsd_running_locally($id);
+    syslog("info", "$name started with PID $pid.");
+
+    return;
+}
+
+=head3 add_fuse_export
+
+    my $path = PVE::QemuServer::QSD::add_fuse_export($id, $drive, $name);
+
+Attach drive C<$drive> to the storage daemon with ID C<$id> and export it with name C<$name> via
+FUSE. Returns the path to the file representing the export.
+
+=cut
+
+sub add_fuse_export($id, $drive, $name) {
+    my $storage_config = PVE::Storage::config();
+
+    PVE::Storage::activate_volumes($storage_config, [$drive->{file}]);
+
+    my ($node_name, $read_only) =
+        PVE::QemuServer::Blockdev::attach($storage_config, $id, $drive, { qsd => 1 });
+
+    my $fuse_path = PVE::QemuServer::Helpers::qsd_fuse_export_path($id, $name);
+    PVE::Tools::file_set_contents($fuse_path, '', 0600); # mountpoint file needs to exist up-front
+
+    my $export = {
+        type => 'fuse',
+        id => "$name",
+        mountpoint => $fuse_path,
+        'node-name' => "$node_name",
+        writable => json_bool(!$read_only),
+        growable => JSON::false,
+        'allow-other' => 'off',
+    };
+
+    PVE::QemuServer::Monitor::qsd_cmd($id, 'block-export-add', $export->%*);
+
+    return $fuse_path;
+}
+
+=head3 quit
+
+    PVE::QemuServer::QSD::quit($id);
+
+Shut down the QEMU storage daemon with ID C<$id> and clean up its PID file and socket. Wait up to
+60 seconds for a clean shutdown, then send SIGTERM and wait an additional 10 seconds before sending
+SIGKILL.
+
+=cut
+
+sub quit($id) {
+    my $name = "QEMU storage daemon $id";
+
+    eval { PVE::QemuServer::Monitor::qsd_cmd($id, 'quit'); };
+    my $qmp_err = $@;
+    warn "$name failed to handle 'quit' - $qmp_err" if $qmp_err;
+
+    my $count = $qmp_err ? 60 : 0; # can't wait for QMP 'quit' to terminate the process if it failed
+    my $pid = PVE::QemuServer::Helpers::qsd_running_locally($id);
+    while ($pid) {
+        if ($count == 60) {
+            warn "$name still running with PID $pid - terminating now with SIGTERM\n";
+            kill 15, $pid;
+        } elsif ($count == 70) {
+            warn "$name still running with PID $pid - terminating now with SIGKILL\n";
+            kill 9, $pid;
+            last;
+        }
+
+        sleep 1;
+        $count++;
+        $pid = PVE::QemuServer::Helpers::qsd_running_locally($id);
+    }
+
+    unlink PVE::QemuServer::Helpers::qsd_pidfile_name($id);
+    unlink PVE::QemuServer::Helpers::qmp_socket({ name => $name, id => $id, type => 'qsd' });
+
+    PVE::QemuServer::Helpers::qsd_fuse_export_cleanup_files($id);
+
+    return;
+}
+
+1;
-- 
2.47.3



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [pve-devel] [PATCH qemu-server v2 14/16] tpm: support non-raw volumes via FUSE exports for swtpm
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (12 preceding siblings ...)
  2025-10-20 14:13 ` [pve-devel] [PATCH qemu-server v2 13/16] introduce QSD module for qemu-storage-daemon functionality Fiona Ebner
@ 2025-10-20 14:13 ` Fiona Ebner
  2025-10-20 14:13 ` [pve-devel] [PATCH qemu-server v2 15/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
  2025-10-20 14:13 ` [pve-devel] [PATCH manager v2 16/16] ui: qemu: tpm drive: follow back-end and allow non-raw formats Fiona Ebner
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:13 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
---

Dependency bump for swtpm needed!

 src/PVE/QemuServer.pm | 33 ++++++++++++++++++++++++++++++---
 1 file changed, 30 insertions(+), 3 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 66fc3231..5791eee8 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -82,6 +82,7 @@ use PVE::QemuServer::OVMF;
 use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
 use PVE::QemuServer::QemuImage;
 use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
+use PVE::QemuServer::QSD;
 use PVE::QemuServer::RNG qw(parse_rng print_rng_device_commandline print_rng_object_commandline);
 use PVE::QemuServer::RunState;
 use PVE::QemuServer::StateFile;
@@ -2828,8 +2829,12 @@ sub start_swtpm {
     my ($storeid) = PVE::Storage::parse_volume_id($tpm->{file}, 1);
     if ($storeid) {
         my $format = checked_volume_format($storecfg, $tpm->{file});
-        die "swtpm currently only supports 'raw' state volumes\n" if $format ne 'raw';
-        $state = PVE::Storage::map_volume($storecfg, $tpm->{file});
+        if ($format eq 'raw') {
+            $state = PVE::Storage::map_volume($storecfg, $tpm->{file});
+        } else {
+            PVE::QemuServer::QSD::start($vmid);
+            $state = PVE::QemuServer::QSD::add_fuse_export($vmid, $tpm, 'tpmstate0');
+        }
     } else {
         $state = $tpm->{file};
     }
@@ -5453,6 +5458,12 @@ sub vm_start_nolock {
     eval { clear_reboot_request($vmid); };
     warn $@ if $@;
 
+    # terminate left-over storage daemon if still running
+    if (my $pid = PVE::QemuServer::Helpers::qsd_running_locally($vmid)) {
+        log_warn("left-over QEMU storage daemon for $vmid running with PID $pid - terminating now");
+        PVE::QemuServer::QSD::quit($vmid);
+    }
+
     if (!$statefile && scalar(keys %{ $conf->{pending} })) {
         vmconfig_apply_pending($vmid, $conf, $storecfg);
         $conf = PVE::QemuConfig->load_config($vmid); # update/reload
@@ -5646,6 +5657,13 @@ sub vm_start_nolock {
     }
     $systemd_properties{timeout} = 10 if $statefile; # setting up the scope should be quick
 
+    my $cleanup_qsd = sub {
+        if (PVE::QemuServer::Helpers::qsd_running_locally($vmid)) {
+            eval { PVE::QemuServer::QSD::quit($vmid); };
+            warn "stopping QEMU storage daemon failed - $@" if $@;
+        }
+    };
+
     my $run_qemu = sub {
         PVE::Tools::run_fork sub {
             PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid",
@@ -5656,7 +5674,11 @@ sub vm_start_nolock {
             my $tpmpid;
             if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
                 # start the TPM emulator so QEMU can connect on start
-                $tpmpid = start_swtpm($storecfg, $vmid, $tpm, $migratedfrom);
+                eval { $tpmpid = start_swtpm($storecfg, $vmid, $tpm, $migratedfrom); };
+                if (my $err = $@) {
+                    $cleanup_qsd->();
+                    die $err;
+                }
             }
 
             my $exitcode = run_command($cmd, %run_params);
@@ -5667,6 +5689,8 @@ sub vm_start_nolock {
                     warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
                     kill 'TERM', $tpmpid;
                 }
+                $cleanup_qsd->();
+
                 die "QEMU exited with code $exitcode\n";
             }
         };
@@ -6028,6 +6052,9 @@ sub vm_stop_cleanup {
     my ($storecfg, $vmid, $conf, $keepActive, $apply_pending_changes, $noerr) = @_;
 
     eval {
+        PVE::QemuServer::QSD::quit($vmid)
+            if PVE::QemuServer::Helpers::qsd_running_locally($vmid);
+
         # ensure that no dbus-vmstate helper is left running in any case
         # at this point, it should never be still running, so quiesce any warnings
         PVE::QemuServer::DBusVMState::qemu_del_dbus_vmstate($vmid, quiet => 1);
-- 
2.47.3






* [pve-devel] [PATCH qemu-server v2 15/16] fix #4693: drive: allow non-raw image formats for TPM state drive
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (13 preceding siblings ...)
  2025-10-20 14:13 ` [pve-devel] [PATCH qemu-server v2 14/16] tpm: support non-raw volumes via FUSE exports for swtpm Fiona Ebner
@ 2025-10-20 14:13 ` Fiona Ebner
  2025-10-20 14:13 ` [pve-devel] [PATCH manager v2 16/16] ui: qemu: tpm drive: follow back-end and allow non-raw formats Fiona Ebner
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:13 UTC (permalink / raw)
  To: pve-devel

Now that there is a mechanism to export non-raw images as FUSE for
swtpm, the supported formats can be aligned with what other disk
types can use. This also reduces special-casing for TPM state
volumes.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
---

Build-dependency bump and dependency bump for pve-storage needed!

 src/PVE/API2/Qemu.pm        | 7 ++-----
 src/PVE/QemuServer.pm       | 1 -
 src/PVE/QemuServer/Drive.pm | 2 ++
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 4243e4da..e77245a3 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -627,13 +627,13 @@ my sub create_disks : prototype($$$$$$$$$$$) {
                         $storecfg, $storeid, $vmid, $fmt, $arch, $disk, $smm, $amd_sev_type,
                     );
                 } elsif ($ds eq 'tpmstate0') {
-                    # swtpm can only use raw volumes, and uses a fixed size
+                    # A fixed size is used for TPM state volumes
                     $size = PVE::Tools::convert_size(
                         PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE,
                         'b' => 'kb',
                     );
                     $volid =
-                        PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, "raw", undef, $size);
+                        PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $size);
                 } else {
                     $volid =
                         PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $size);
@@ -675,9 +675,6 @@ my sub create_disks : prototype($$$$$$$$$$$) {
                     ) {
                         die "$ds - cloud-init drive is already attached at '$ci_key'\n";
                     }
-                } elsif ($ds eq 'tpmstate0' && $volume_format ne 'raw') {
-                    die
-                        "tpmstate0: volume format is '$volume_format', only 'raw' is supported!\n";
                 }
             }
 
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 5791eee8..fcdfaf66 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -7827,7 +7827,6 @@ sub clone_disk {
         } elsif ($dst_drivename eq 'efidisk0') {
             $size = $efisize or die "internal error - need to specify EFI disk size\n";
         } elsif ($dst_drivename eq 'tpmstate0') {
-            $dst_format = 'raw';
             $size = PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE;
         } else {
             clone_disk_check_io_uring(
diff --git a/src/PVE/QemuServer/Drive.pm b/src/PVE/QemuServer/Drive.pm
index 79dd22e6..f54f9612 100644
--- a/src/PVE/QemuServer/Drive.pm
+++ b/src/PVE/QemuServer/Drive.pm
@@ -10,6 +10,7 @@ use List::Util qw(first);
 
 use PVE::RESTEnvironment qw(log_warn);
 use PVE::Storage;
+use PVE::Storage::Common;
 use PVE::JSONSchema qw(get_standard_option);
 
 use base qw(Exporter);
@@ -570,6 +571,7 @@ my $tpmstate_fmt = {
         format_description => 'volume',
         description => "The drive's backing volume.",
     },
+    format => get_standard_option('pve-vm-image-format', { optional => 1 }),
     size => {
         type => 'string',
         format => 'disk-size',
-- 
2.47.3






* [pve-devel] [PATCH manager v2 16/16] ui: qemu: tpm drive: follow back-end and allow non-raw formats
  2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
                   ` (14 preceding siblings ...)
  2025-10-20 14:13 ` [pve-devel] [PATCH qemu-server v2 15/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
@ 2025-10-20 14:13 ` Fiona Ebner
  15 siblings, 0 replies; 17+ messages in thread
From: Fiona Ebner @ 2025-10-20 14:13 UTC (permalink / raw)
  To: pve-devel

Since qemu-server commit "fix #4693: drive: allow non-raw image
formats for TPM state drive", non-raw image formats are supported
for TPM state drives.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

New in v2.

 www/manager6/form/DiskStorageSelector.js | 2 +-
 www/manager6/qemu/HDMove.js              | 1 -
 www/manager6/qemu/HDTPM.js               | 2 +-
 3 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/www/manager6/form/DiskStorageSelector.js b/www/manager6/form/DiskStorageSelector.js
index ec22ef58..4b41ec5c 100644
--- a/www/manager6/form/DiskStorageSelector.js
+++ b/www/manager6/form/DiskStorageSelector.js
@@ -28,7 +28,7 @@ Ext.define('PVE.form.DiskStorageSelector', {
     // hides the size field (e.g, for the efi disk dialog)
     hideSize: false,
 
-    // hides the format field (e.g. for TPM state)
+    // hides the format field
     hideFormat: false,
 
     // sets the initial size value
diff --git a/www/manager6/qemu/HDMove.js b/www/manager6/qemu/HDMove.js
index 2e545b91..54659710 100644
--- a/www/manager6/qemu/HDMove.js
+++ b/www/manager6/qemu/HDMove.js
@@ -76,7 +76,6 @@ Ext.define('PVE.window.HDMove', {
                     cbind: {
                         nodename: '{nodename}',
                         storageContent: (get) => (get('isQemu') ? 'images' : 'rootdir'),
-                        hideFormat: (get) => get('disk') === 'tpmstate0',
                     },
                     hideSize: true,
                 },
diff --git a/www/manager6/qemu/HDTPM.js b/www/manager6/qemu/HDTPM.js
index 1bfa25a6..947e3738 100644
--- a/www/manager6/qemu/HDTPM.js
+++ b/www/manager6/qemu/HDTPM.js
@@ -21,6 +21,7 @@ Ext.define('PVE.qemu.TPMDiskInputPanel', {
             me.drive.file = values.hdstorage + ':1';
         }
 
+        me.drive.format = values.diskformat;
         me.drive.version = values.version;
         var params = {};
         params[confid] = PVE.Parser.printQemuDrive(me.drive);
@@ -54,7 +55,6 @@ Ext.define('PVE.qemu.TPMDiskInputPanel', {
                 nodename: me.nodename,
                 disabled: me.disabled,
                 hideSize: true,
-                hideFormat: true,
             },
             {
                 xtype: 'proxmoxKVComboBox',
-- 
2.47.3






end of thread, other threads:[~2025-10-20 14:15 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
2025-10-20 14:12 [pve-devel] [PATCH-SERIES qemu/swtpm/storage/qemu-server/manager v2 00/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu v2 01/16] d/rules: enable fuse Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH swtpm v2 02/16] swtpm setup: file: always just clear header rather than unlinking Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH storage v2 03/16] common: add pve-vm-image-format standard option for VM image formats Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 04/16] tests: cfg2cmd: remove invalid mocking of qmp_cmd Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 05/16] migration: offline volumes: drop deprecated special casing for TPM state Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 06/16] qmp client: better abstract peer in preparation for qemu-storage-daemon Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 07/16] helpers: add functions for qemu-storage-daemon instances Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 08/16] monitor: qmp: allow 'qsd' peer type for qemu-storage-daemon Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 09/16] monitor: align interface of qmp_cmd() with other helpers Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 10/16] machine: include +pve version when getting installed machine version Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 11/16] blockdev: support attaching to qemu-storage-daemon Fiona Ebner
2025-10-20 14:12 ` [pve-devel] [PATCH qemu-server v2 12/16] blockdev: attach: also return whether attached blockdev is read-only Fiona Ebner
2025-10-20 14:13 ` [pve-devel] [PATCH qemu-server v2 13/16] introduce QSD module for qemu-storage-daemon functionality Fiona Ebner
2025-10-20 14:13 ` [pve-devel] [PATCH qemu-server v2 14/16] tpm: support non-raw volumes via FUSE exports for swtpm Fiona Ebner
2025-10-20 14:13 ` [pve-devel] [PATCH qemu-server v2 15/16] fix #4693: drive: allow non-raw image formats for TPM state drive Fiona Ebner
2025-10-20 14:13 ` [pve-devel] [PATCH manager v2 16/16] ui: qemu: tpm drive: follow back-end and allow non-raw formats Fiona Ebner
