public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH v3 0/3] Initial TPM support for VMs
@ 2021-10-04 15:29 Stefan Reiter
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 storage 1/3] import: don't check for 1K aligned size Stefan Reiter
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Stefan Reiter @ 2021-10-04 15:29 UTC (permalink / raw)
  To: pve-devel

Makes Windows 11 (test build) happy: https://i.imgur.com/kZ0Mpnr.jpeg

Tested under Linux as well; works with (updated) OVMF and SeaBIOS, though
SeaBIOS requires clearing the TPM via the BIOS setup screen and, it seems, may
not support all features (e.g. Windows shows the TPM, but doesn't allow
BitLocker, presumably because that requires UEFI).

Requires a patched swtpm with my PRs applied:
https://github.com/stefanberger/swtpm/pull/513
https://github.com/stefanberger/swtpm/pull/570

Can also be found as 'swtpm' in my staff repos.
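
For reference, with this series applied a TPM state volume can be added roughly
like this (sketch; the VM ID, storage and volume names are just examples):

  # the requested size is ignored, tpmstate0 is always allocated as 4 MiB raw
  qm set 100 --tpmstate0 local-lvm:1,version=v2.0

  # resulting config entry
  tpmstate0: local-lvm:vm-100-disk-1,size=4M,version=v2.0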

RFC v2 -> v3:
* support backups by attaching the TPM as a drive to QEMU temporarily
* swtpm_setup now has support for file backend, so use it
* Ceph is now supported by forcing krbd and mapping the block device
* drop applied OVMF patch


RFC v1 -> RFC v2:
* with the above PR, we can store state in a single file/block device, thus we
  can treat it similarly to an efidisk - this eliminates any concerns about
  storing it on pmxcfs
* always allocate the state as 4 MiB (on directory storage it might auto-shrink)
* fixes migration, since source and destination are now different
* add GUI patch


 storage: Stefan Reiter (1):
  import: don't check for 1K aligned size

 PVE/Storage/Plugin.pm | 1 -
 1 file changed, 1 deletion(-)

 qemu-server: Stefan Reiter (1):
  fix #3075: add TPM v1.2 and v2.0 support via swtpm

 PVE/API2/Qemu.pm         |   5 ++
 PVE/QemuMigrate.pm       |  14 +++-
 PVE/QemuServer.pm        | 137 +++++++++++++++++++++++++++++++++++++--
 PVE/QemuServer/Drive.pm  |  63 ++++++++++++++----
 PVE/VZDump/QemuServer.pm |  43 ++++++++++--
 5 files changed, 238 insertions(+), 24 deletions(-)

 manager: Stefan Reiter (1):
  ui: add support for adding TPM devices

 www/manager6/Makefile                    |  1 +
 www/manager6/Utils.js                    |  2 +-
 www/manager6/form/DiskStorageSelector.js |  5 +-
 www/manager6/qemu/HDMove.js              |  1 +
 www/manager6/qemu/HDTPM.js               | 88 ++++++++++++++++++++++++
 www/manager6/qemu/HardwareView.js        | 25 ++++++-
 6 files changed, 119 insertions(+), 3 deletions(-)
 create mode 100644 www/manager6/qemu/HDTPM.js

-- 
2.30.2

* [pve-devel] [PATCH v3 storage 1/3] import: don't check for 1K aligned size
  2021-10-04 15:29 [pve-devel] [PATCH v3 0/3] Initial TPM support for VMs Stefan Reiter
@ 2021-10-04 15:29 ` Stefan Reiter
  2021-10-05  4:24   ` [pve-devel] applied: " Thomas Lamprecht
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 qemu-server 2/3] fix #3075: add TPM v1.2 and v2.0 support via swtpm Stefan Reiter
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 manager 3/3] ui: add support for adding TPM devices Stefan Reiter
  2 siblings, 1 reply; 7+ messages in thread
From: Stefan Reiter @ 2021-10-04 15:29 UTC (permalink / raw)
  To: pve-devel

TPM state disks on directory storages may have completely unaligned
sizes, so this check doesn't make sense for them.

This appears to be just a (weak) safeguard that serves no actual
functional purpose, so simply get rid of it to allow migration of TPM
state.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 PVE/Storage/Plugin.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 417d1fd..fab2316 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -1353,7 +1353,6 @@ sub read_common_header($) {
     sysread($fh, my $size, 8);
     $size = unpack('Q<', $size);
     die "import: no size found in export header, aborting.\n" if !defined($size);
-    die "import: got a bad size (not a multiple of 1K), aborting.\n" if ($size&1023);
     # Size is in bytes!
     return $size;
 }
-- 
2.30.2

* [pve-devel] [PATCH v3 qemu-server 2/3] fix #3075: add TPM v1.2 and v2.0 support via swtpm
  2021-10-04 15:29 [pve-devel] [PATCH v3 0/3] Initial TPM support for VMs Stefan Reiter
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 storage 1/3] import: don't check for 1K aligned size Stefan Reiter
@ 2021-10-04 15:29 ` Stefan Reiter
  2021-10-05  5:30   ` [pve-devel] applied: " Thomas Lamprecht
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 manager 3/3] ui: add support for adding TPM devices Stefan Reiter
  2 siblings, 1 reply; 7+ messages in thread
From: Stefan Reiter @ 2021-10-04 15:29 UTC (permalink / raw)
  To: pve-devel

Starts an instance of swtpm per VM in its systemd scope; it will
terminate by itself if the VM exits, or be terminated manually if
startup fails.
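
On the QEMU side this just means connecting to the emulator's socket; the
generated arguments look roughly like this (VM ID 100 used as an example):

  -chardev socket,id=tpmchar,path=/var/run/qemu-server/100.swtpm
  -tpmdev emulator,id=tpmdev,chardev=tpmchar
  -device tpm-tis,tpmdev=tpmdev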

Before first use, a TPM state is created via swtpm_setup. State is
stored in a 'tpmstate0' volume, treated much the same way as an efidisk.

It is migrated 'offline'; the important part here is the creation of the
target volume, while the actual data transfer happens via the QEMU device
state migration process.

Move-disk can only work offline, as the disk is not registered with
QEMU, so 'drive-mirror' wouldn't work. swtpm itself has no method of
moving its backing storage at runtime.
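
Relocating the state to another storage is therefore only possible while the
VM is powered off, e.g. (sketch; VM ID and target storage are just examples):

  qm move_disk 100 tpmstate0 other-storage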

For backups, a bit of a workaround is necessary (this may later be
replaced by NBD support in swtpm): during the backup, we attach the
backing file of the TPM as a read-only drive to QEMU, so our backup
code can detect it as a block device and back it up as such, while
ensuring consistency with the rest of the disk state ("snapshot" semantics).
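
Concretely, the ephemeral drive is attached via the HMP monitor and removed
again once the backup finishes; the attach command looks roughly like this
(the state path shown is just an example for a directory storage):

  drive_add auto "file=/var/lib/vz/images/100/vm-100-disk-1.raw,if=none,read-only=on,id=drive-tpmstate0-backup"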

The name for the ephemeral drive is specifically chosen as
'drive-tpmstate0-backup', diverging from our usual naming scheme with
the '-backup' suffix, to avoid it ever being treated as a regular drive
by the rest of the stack in case it gets left over after a backup for
some reason (which shouldn't happen).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 PVE/API2/Qemu.pm         |   5 ++
 PVE/QemuMigrate.pm       |  14 +++-
 PVE/QemuServer.pm        | 137 +++++++++++++++++++++++++++++++++++++--
 PVE/QemuServer/Drive.pm  |  63 ++++++++++++++----
 PVE/VZDump/QemuServer.pm |  43 ++++++++++--
 5 files changed, 238 insertions(+), 24 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index a8fbd9d..6228125 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -184,6 +184,11 @@ my $create_disks = sub {
 	    my $volid;
 	    if ($ds eq 'efidisk0') {
 		($volid, $size) = PVE::QemuServer::create_efidisk($storecfg, $storeid, $vmid, $fmt, $arch);
+	    } elsif ($ds eq 'tpmstate0') {
+		# swtpm can only use raw volumes, and uses a fixed size
+		$size = PVE::Tools::convert_size(PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE, 'b' => 'kb');
+		$volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid,
+		    "raw", undef, $size);
 	    } else {
 		$volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $size);
 	    }
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 4f5bfa4..ae3eaf1 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -488,6 +488,7 @@ sub scan_local_volumes {
 
 	    $local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
 	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
+	    $local_volumes->{$volid}->{ref} = 'generated' if $attr->{is_tpmstate};
 
 	    $local_volumes->{$volid}->{is_vmstate} = $attr->{is_vmstate} ? 1 : 0;
 
@@ -587,6 +588,9 @@ sub scan_local_volumes {
 		$local_volumes->{$volid}->{migration_mode} = 'online';
 	    } elsif ($self->{running} && $ref eq 'generated') {
 		# offline migrate the cloud-init ISO and don't regenerate on VM start
+		#
+		# tpmstate will also be offline migrated first, and in case of
+		# live migration then updated by QEMU/swtpm if necessary
 		$local_volumes->{$volid}->{migration_mode} = 'offline';
 	    } else {
 		$local_volumes->{$volid}->{migration_mode} = 'offline';
@@ -648,7 +652,9 @@ sub config_update_local_disksizes {
 
     PVE::QemuConfig->foreach_volume($conf, sub {
 	my ($key, $drive) = @_;
-	return if $key eq 'efidisk0'; # skip efidisk, will be handled later
+	# skip special disks, will be handled later
+	return if $key eq 'efidisk0';
+	return if $key eq 'tpmstate0';
 
 	my $volid = $drive->{file};
 	return if !defined($local_volumes->{$volid}); # only update sizes for local volumes
@@ -665,6 +671,12 @@ sub config_update_local_disksizes {
     if (defined($conf->{efidisk0})) {
 	PVE::QemuServer::update_efidisk_size($conf);
     }
+
+    # TPM state might have an irregular filesize, to avoid problems on transfer
+    # we always assume the static size of 4M to allocate on the target
+    if (defined($conf->{tpmstate0})) {
+	PVE::QemuServer::update_tpmstate_size($conf);
+    }
 }
 
 sub filter_local_volumes {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e8047e8..0ca5e00 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1143,7 +1143,8 @@ PVE::JSONSchema::register_format('pve-qm-bootdev', \&verify_bootdev);
 sub verify_bootdev {
     my ($dev, $noerr) = @_;
 
-    return $dev if PVE::QemuServer::Drive::is_valid_drivename($dev) && $dev !~ m/^efidisk/;
+    my $special = $dev =~ m/^efidisk/ || $dev =~ m/^tpmstate/;
+    return $dev if PVE::QemuServer::Drive::is_valid_drivename($dev) && !$special;
 
     my $check = sub {
 	my ($base) = @_;
@@ -2966,6 +2967,90 @@ sub audio_devs {
     return $devs;
 }
 
+sub get_tpm_paths {
+    my ($vmid) = @_;
+    return {
+	socket => "/var/run/qemu-server/$vmid.swtpm",
+	pid => "/var/run/qemu-server/$vmid.swtpm.pid",
+    };
+}
+
+sub add_tpm_device {
+    my ($vmid, $devices, $conf) = @_;
+
+    return if !$conf->{tpmstate0};
+
+    my $paths = get_tpm_paths($vmid);
+
+    push @$devices, "-chardev", "socket,id=tpmchar,path=$paths->{socket}";
+    push @$devices, "-tpmdev", "emulator,id=tpmdev,chardev=tpmchar";
+    push @$devices, "-device", "tpm-tis,tpmdev=tpmdev";
+}
+
+sub start_swtpm {
+    my ($storecfg, $vmid, $tpmdrive, $migration) = @_;
+
+    return if !$tpmdrive;
+
+    my $state;
+    my $tpm = parse_drive("tpmstate0", $tpmdrive);
+    my ($storeid, $volname) = PVE::Storage::parse_volume_id($tpm->{file}, 1);
+    if ($storeid) {
+	$state = PVE::Storage::map_volume($storecfg, $tpm->{file});
+    } else {
+	$state = $tpm->{file};
+    }
+
+    my $paths = get_tpm_paths($vmid);
+
+    # during migration, we will get state from remote
+    #
+    if (!$migration) {
+	# run swtpm_setup to create a new TPM state if it doesn't exist yet
+	my $setup_cmd = [
+	    "swtpm_setup",
+	    "--tpmstate",
+	    "file://$state",
+	    "--createek",
+	    "--create-ek-cert",
+	    "--create-platform-cert",
+	    "--lock-nvram",
+	    "--config",
+	    "/etc/swtpm_setup.conf", # do not use XDG configs
+	    "--runas",
+	    "0", # force creation as root, error if not possible
+	    "--not-overwrite", # ignore existing state, do not modify
+	];
+
+	push @$setup_cmd, "--tpm2" if $tpm->{version} eq 'v2.0';
+	# TPM 2.0 supports ECC crypto, use if possible
+	push @$setup_cmd, "--ecc" if $tpm->{version} eq 'v2.0';
+
+	run_command($setup_cmd, outfunc => sub {
+	    print "swtpm_setup: $1\n";
+	});
+    }
+
+    my $emulator_cmd = [
+	"swtpm",
+	"socket",
+	"--tpmstate",
+	"backend-uri=file://$state,mode=0600",
+	"--ctrl",
+	"type=unixio,path=$paths->{socket},mode=0600",
+	"--pid",
+	"file=$paths->{pid}",
+	"--terminate", # terminate on QEMU disconnect
+	"--daemon",
+    ];
+    push @$emulator_cmd, "--tpm2" if $tpm->{version} eq 'v2.0';
+    run_command($emulator_cmd, outfunc => sub { print $1; });
+
+    # return untainted PID of swtpm daemon so it can be killed on error
+    file_read_firstline($paths->{pid}) =~ m/(\d+)/;
+    return $1;
+}
+
 sub vga_conf_has_spice {
     my ($vga) = @_;
 
@@ -3467,6 +3552,8 @@ sub config_to_command {
 	push @$devices, @$audio_devs;
     }
 
+    add_tpm_device($vmid, $devices, $conf);
+
     my $sockets = 1;
     $sockets = $conf->{smp} if $conf->{smp}; # old style - no longer iused
     $sockets = $conf->{sockets} if  $conf->{sockets};
@@ -3663,6 +3750,8 @@ sub config_to_command {
 
 	# ignore efidisk here, already added in bios/fw handling code above
 	return if $drive->{interface} eq 'efidisk';
+	# similar for TPM
+	return if $drive->{interface} eq 'tpmstate';
 
 	$use_virtio = 1 if $ds =~ m/^virtio/;
 
@@ -4524,6 +4613,9 @@ sub foreach_volid {
 	$volhash->{$volid}->{is_vmstate} //= 0;
 	$volhash->{$volid}->{is_vmstate} = 1 if $key eq 'vmstate';
 
+	$volhash->{$volid}->{is_tpmstate} //= 0;
+	$volhash->{$volid}->{is_tpmstate} = 1 if $key eq 'tpmstate0';
+
 	$volhash->{$volid}->{is_unused} //= 0;
 	$volhash->{$volid}->{is_unused} = 1 if $key =~ /^unused\d+$/;
 
@@ -4721,7 +4813,7 @@ sub vmconfig_hotplug_pending {
 		vmconfig_update_net($storecfg, $conf, $hotplug_features->{network},
 				    $vmid, $opt, $value, $arch, $machine_type);
 	    } elsif (is_valid_drivename($opt)) {
-		die "skip\n" if $opt eq 'efidisk0';
+		die "skip\n" if $opt eq 'efidisk0' || $opt eq 'tpmstate0';
 		# some changes can be done without hotplug
 		my $drive = parse_drive($opt, $value);
 		if (drive_is_cloudinit($drive)) {
@@ -5341,8 +5433,17 @@ sub vm_start_nolock {
 	PVE::Tools::run_fork sub {
 	    PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %properties);
 
+	    my $tpmpid;
+	    if (my $tpm = $conf->{tpmstate0}) {
+		# start the TPM emulator so QEMU can connect on start
+		$tpmpid = start_swtpm($storecfg, $vmid, $tpm, $migratedfrom);
+	    }
+
 	    my $exitcode = run_command($cmd, %run_params);
-	    die "QEMU exited with code $exitcode\n" if $exitcode;
+	    if ($exitcode) {
+		kill 'TERM', $tpmpid if $tpmpid;
+		die "QEMU exited with code $exitcode\n";
+	    }
 	};
     };
 
@@ -5542,6 +5643,14 @@ sub vm_stop_cleanup {
 	if (!$keepActive) {
 	    my $vollist = get_vm_volumes($conf);
 	    PVE::Storage::deactivate_volumes($storecfg, $vollist);
+
+	    if (my $tpmdrive = $conf->{tpmstate0}) {
+		my $tpm = parse_drive("tpmstate0", $tpmdrive);
+		my ($storeid, $volname) = PVE::Storage::parse_volume_id($tpm->{file}, 1);
+		if ($storeid) {
+		    PVE::Storage::unmap_volume($storecfg, $tpm->{file});
+		}
+	    }
 	}
 
 	foreach my $ext (qw(mon qmp pid vnc qga)) {
@@ -6079,7 +6188,7 @@ sub restore_update_config_line {
 	$net->{macaddr} = PVE::Tools::random_ether_addr($dc->{mac_prefix}) if $net->{macaddr};
 	$netstr = print_net($net);
 	$res .= "$id: $netstr\n";
-    } elsif ($line =~ m/^((ide|scsi|virtio|sata|efidisk)\d+):\s*(\S+)\s*$/) {
+    } elsif ($line =~ m/^((ide|scsi|virtio|sata|efidisk|tpmstate)\d+):\s*(\S+)\s*$/) {
 	my $virtdev = $1;
 	my $value = $3;
 	my $di = parse_drive($virtdev, $value);
@@ -6397,8 +6506,8 @@ sub restore_proxmox_backup_archive {
 	    my $volid = $d->{volid};
 	    my $path = PVE::Storage::path($storecfg, $volid);
 
-	    # for live-restore we only want to preload the efidisk
-	    next if $options->{live} && $virtdev ne 'efidisk0';
+	    # for live-restore we only want to preload the efidisk and TPM state
+	    next if $options->{live} && $virtdev ne 'efidisk0' && $virtdev ne 'tpmstate0';
 
 	    my $pbs_restore_cmd = [
 		'/usr/bin/pbs-restore',
@@ -6473,7 +6582,9 @@ sub restore_proxmox_backup_archive {
 	my $conf = PVE::QemuConfig->load_config($vmid);
 	die "cannot do live-restore for template\n" if PVE::QemuConfig->is_template($conf);
 
-	delete $devinfo->{'drive-efidisk0'}; # this special drive is already restored before start
+	# these special drives are already restored before start
+	delete $devinfo->{'drive-efidisk0'};
+	delete $devinfo->{'drive-tpmstate0-backup'};
 	pbs_live_restore($vmid, $conf, $storecfg, $devinfo, $repo, $keyfile, $pbs_backup_name);
 
 	PVE::QemuConfig->remove_lock($vmid, "create");
@@ -7307,6 +7418,8 @@ sub clone_disk {
 	    $size = PVE::QemuServer::Cloudinit::CLOUDINIT_DISK_SIZE;
 	} elsif ($drivename eq 'efidisk0') {
 	    $size = get_efivars_size($conf);
+	} elsif ($drivename eq 'tpmstate0') {
+	    $size = PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE;
 	} else {
 	    ($size) = PVE::Storage::volume_size_info($storecfg, $drive->{file}, 10);
 	}
@@ -7347,6 +7460,8 @@ sub clone_disk {
 	    }
 	} else {
 
+	    die "cannot move TPM state while VM is running\n" if $drivename eq 'tpmstate0';
+
 	    my $kvmver = get_running_qemu_version ($vmid);
 	    if (!min_version($kvmver, 2, 7)) {
 		die "drive-mirror with iothread requires qemu version 2.7 or higher\n"
@@ -7417,6 +7532,14 @@ sub update_efidisk_size {
     return;
 }
 
+sub update_tpmstate_size {
+    my ($conf) = @_;
+
+    my $disk = PVE::QemuServer::parse_drive('tpmstate0', $conf->{tpmstate0});
+    $disk->{size} = PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE;
+    $conf->{tpmstate0} = print_drive($disk);
+}
+
 sub create_efidisk($$$$$) {
     my ($storecfg, $storeid, $vmid, $fmt, $arch) = @_;
 
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 5110190..32c7377 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -306,16 +306,6 @@ my $virtiodesc = {
 };
 PVE::JSONSchema::register_standard_option("pve-qm-virtio", $virtiodesc);
 
-my $alldrive_fmt = {
-    %drivedesc_base,
-    %iothread_fmt,
-    %model_fmt,
-    %queues_fmt,
-    %scsiblock_fmt,
-    %ssd_fmt,
-    %wwn_fmt,
-};
-
 my $efidisk_fmt = {
     volume => { alias => 'file' },
     file => {
@@ -345,6 +335,55 @@ my $efidisk_desc = {
 
 PVE::JSONSchema::register_standard_option("pve-qm-efidisk", $efidisk_desc);
 
+my %tpmversion_fmt = (
+    version => {
+	type => 'string',
+	enum => [qw(v1.2 v2.0)],
+	description => "The TPM interface version. v2.0 is newer and should be "
+		     . "preferred. Note that this cannot be changed later on.",
+	optional => 1,
+	default => 'v2.0',
+    },
+);
+my $tpmstate_fmt = {
+    volume => { alias => 'file' },
+    file => {
+	type => 'string',
+	format => 'pve-volume-id-or-qm-path',
+	default_key => 1,
+	format_description => 'volume',
+	description => "The drive's backing volume.",
+    },
+    size => {
+	type => 'string',
+	format => 'disk-size',
+	format_description => 'DiskSize',
+	description => "Disk size. This is purely informational and has no effect.",
+	optional => 1,
+    },
+    %tpmversion_fmt,
+};
+my $tpmstate_desc = {
+    optional => 1,
+    type => 'string', format => $tpmstate_fmt,
+    description => "Configure a Disk for storing TPM state. " .
+	$ALLOCATION_SYNTAX_DESC . " Note that SIZE_IN_GiB is ignored here " .
+	"and that the default size of 4 MiB will always be used instead. The " .
+	"format is also fixed to 'raw'.",
+};
+use constant TPMSTATE_DISK_SIZE => 4 * 1024 * 1024;
+
+my $alldrive_fmt = {
+    %drivedesc_base,
+    %iothread_fmt,
+    %model_fmt,
+    %queues_fmt,
+    %scsiblock_fmt,
+    %ssd_fmt,
+    %wwn_fmt,
+    %tpmversion_fmt,
+};
+
 my $unused_fmt = {
     volume => { alias => 'file' },
     file => {
@@ -379,6 +418,7 @@ for (my $i = 0; $i < $MAX_VIRTIO_DISKS; $i++)  {
 }
 
 $drivedesc_hash->{efidisk0} = $efidisk_desc;
+$drivedesc_hash->{tpmstate0} = $tpmstate_desc;
 
 for (my $i = 0; $i < $MAX_UNUSED_DISKS; $i++) {
     $drivedesc_hash->{"unused$i"} = $unuseddesc;
@@ -390,7 +430,8 @@ sub valid_drive_names {
             (map { "scsi$_" } (0 .. ($MAX_SCSI_DISKS - 1))),
             (map { "virtio$_" } (0 .. ($MAX_VIRTIO_DISKS - 1))),
             (map { "sata$_" } (0 .. ($MAX_SATA_DISKS - 1))),
-            'efidisk0');
+            'efidisk0',
+            'tpmstate0');
 }
 
 sub is_valid_drivename {
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 44b705f..b133694 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -86,11 +86,10 @@ sub prepare {
 	if (!$volume->{included}) {
 	    $self->loginfo("exclude disk '$name' '$volid' ($volume->{reason})");
 	    next;
-	} elsif ($self->{vm_was_running} && $volume_config->{iothread}) {
-	    if (!PVE::QemuServer::Machine::runs_at_least_qemu_version($vmid, 4, 0, 1)) {
-		die "disk '$name' '$volid' (iothread=on) can't use backup feature with running QEMU " .
-		    "version < 4.0.1! Either set backup=no for this drive or upgrade QEMU and restart VM\n";
-	    }
+	} elsif ($self->{vm_was_running} && $volume_config->{iothread} &&
+		 !PVE::QemuServer::Machine::runs_at_least_qemu_version($vmid, 4, 0, 1)) {
+	    die "disk '$name' '$volid' (iothread=on) can't use backup feature with running QEMU " .
+		"version < 4.0.1! Either set backup=no for this drive or upgrade QEMU and restart VM\n";
 	} else {
 	    my $log = "include disk '$name' '$volid'";
 	    if (defined(my $size = $volume_config->{size})) {
@@ -131,6 +130,12 @@ sub prepare {
 	    qmdevice => "drive-$ds",
 	};
 
+	if ($ds eq 'tpmstate0') {
+	    # TPM drive only exists for backup, which is reflected in the name
+	    $diskinfo->{qmdevice} = 'drive-tpmstate0-backup';
+	    $task->{tpmpath} = $path;
+	}
+
 	if (-b $path) {
 	    $diskinfo->{type} = 'block';
 	} else {
@@ -425,6 +430,28 @@ my $query_backup_status_loop = sub {
     };
 };
 
+my $attach_tpmstate_drive = sub {
+    my ($self, $task, $vmid) = @_;
+
+    return if !$task->{tpmpath};
+
+    # unconditionally try to remove the tpmstate-named drive - it only exists
+    # for backing up, and avoids errors if left over from some previous event
+    eval { PVE::QemuServer::qemu_drivedel($vmid, "tpmstate0-backup"); };
+
+    $self->loginfo('attaching TPM drive to QEMU for backup');
+
+    my $drive = "file=$task->{tpmpath},if=none,read-only=on,id=drive-tpmstate0-backup";
+    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"");
+    die "attaching TPM drive failed\n" if $ret !~ m/OK/s;
+};
+
+my $detach_tpmstate_drive = sub {
+    my ($task, $vmid) = @_;
+    return if !$task->{tpmpath} || !PVE::QemuServer::check_running($vmid);
+    eval { PVE::QemuServer::qemu_drivedel($vmid, "tpmstate0-backup"); };
+};
+
 sub archive_pbs {
     my ($self, $task, $vmid) = @_;
 
@@ -501,6 +528,8 @@ sub archive_pbs {
 	    $master_keyfile = undef; # skip rest of master key handling below
 	}
 
+	$attach_tpmstate_drive->($self, $task, $vmid);
+
 	my $fs_frozen = $self->qga_fs_freeze($task, $vmid);
 
 	my $params = {
@@ -673,6 +702,8 @@ sub archive_vma {
 	    die "interrupted by signal\n";
 	};
 
+	$attach_tpmstate_drive->($self, $task, $vmid);
+
 	my $outfh;
 	if ($opts->{stdout}) {
 	    $outfh = $opts->{stdout};
@@ -876,6 +907,8 @@ sub snapshot {
 sub cleanup {
     my ($self, $task, $vmid) = @_;
 
+    $detach_tpmstate_drive->($task, $vmid);
+
     if ($self->{qmeventd_fh}) {
 	close($self->{qmeventd_fh});
     }
-- 
2.30.2

* [pve-devel] [PATCH v3 manager 3/3] ui: add support for adding TPM devices
  2021-10-04 15:29 [pve-devel] [PATCH v3 0/3] Initial TPM support for VMs Stefan Reiter
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 storage 1/3] import: don't check for 1K aligned size Stefan Reiter
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 qemu-server 2/3] fix #3075: add TPM v1.2 and v2.0 support via swtpm Stefan Reiter
@ 2021-10-04 15:29 ` Stefan Reiter
  2021-10-05  5:34   ` [pve-devel] applied: " Thomas Lamprecht
  2 siblings, 1 reply; 7+ messages in thread
From: Stefan Reiter @ 2021-10-04 15:29 UTC (permalink / raw)
  To: pve-devel

Inspired by HDEfi for efidisks. Extends the DiskStorageSelector to allow
hiding the format, since tpmstate can only be stored in 'raw' format
(even on directory storages).
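
For illustration, the new flag is used roughly like this by the TPM disk input
panel added below (sketch based on the diff):

  {
      xtype: 'pveDiskStorageSelector',
      storageContent: 'images',
      hideSize: true,
      hideFormat: true, // TPM state is always stored as 'raw'
  }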

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 www/manager6/Makefile                    |  1 +
 www/manager6/Utils.js                    |  2 +-
 www/manager6/form/DiskStorageSelector.js |  5 +-
 www/manager6/qemu/HDMove.js              |  1 +
 www/manager6/qemu/HDTPM.js               | 88 ++++++++++++++++++++++++
 www/manager6/qemu/HardwareView.js        | 25 ++++++-
 6 files changed, 119 insertions(+), 3 deletions(-)
 create mode 100644 www/manager6/qemu/HDTPM.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 7d491f57..3d1778c2 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -205,6 +205,7 @@ JSSRC= 							\
 	qemu/DisplayEdit.js				\
 	qemu/HDEdit.js					\
 	qemu/HDEfi.js					\
+	qemu/HDTPM.js					\
 	qemu/HDMove.js					\
 	qemu/HDResize.js				\
 	qemu/HardwareView.js				\
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 71e5fc9a..63c70e61 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1602,7 +1602,7 @@ Ext.define('PVE.Utils', {
 	}
     },
 
-    hardware_counts: { net: 32, usb: 5, hostpci: 16, audio: 1, efidisk: 1, serial: 4, rng: 1 },
+    hardware_counts: { net: 32, usb: 5, hostpci: 16, audio: 1, efidisk: 1, serial: 4, rng: 1, tpmstate: 1 },
 
     cleanEmptyObjectKeys: function(obj) {
 	for (const propName of Object.keys(obj)) {
diff --git a/www/manager6/form/DiskStorageSelector.js b/www/manager6/form/DiskStorageSelector.js
index cf73f2e2..ac6b064f 100644
--- a/www/manager6/form/DiskStorageSelector.js
+++ b/www/manager6/form/DiskStorageSelector.js
@@ -28,6 +28,9 @@ Ext.define('PVE.form.DiskStorageSelector', {
     // hides the size field (e.g, for the efi disk dialog)
     hideSize: false,
 
+    // hides the format field (e.g. for TPM state), always assumes 'raw'
+    hideFormat: false,
+
     // sets the initial size value
     // string because else we get a type confusion
     defaultSize: '32',
@@ -155,7 +158,7 @@ Ext.define('PVE.form.DiskStorageSelector', {
 		fieldLabel: gettext('Format'),
 		nodename: me.nodename,
 		disabled: true,
-		hidden: me.storageContent === 'rootdir',
+		hidden: me.hideFormat || me.storageContent === 'rootdir',
 		value: 'qcow2',
 		allowBlank: false,
 	    },
diff --git a/www/manager6/qemu/HDMove.js b/www/manager6/qemu/HDMove.js
index 5bae5314..181b7bdc 100644
--- a/www/manager6/qemu/HDMove.js
+++ b/www/manager6/qemu/HDMove.js
@@ -75,6 +75,7 @@ Ext.define('PVE.window.HDMove', {
 	    nodename: me.nodename,
 	    storageContent: qemu ? 'images' : 'rootdir',
 	    hideSize: true,
+	    hideFormat: me.disk === 'tpmstate0',
 	});
 
 	items.push({
diff --git a/www/manager6/qemu/HDTPM.js b/www/manager6/qemu/HDTPM.js
new file mode 100644
index 00000000..7fa5a424
--- /dev/null
+++ b/www/manager6/qemu/HDTPM.js
@@ -0,0 +1,88 @@
+Ext.define('PVE.qemu.TPMDiskInputPanel', {
+    extend: 'Proxmox.panel.InputPanel',
+    alias: 'widget.pveTPMDiskInputPanel',
+
+    unused: false,
+    vmconfig: {},
+
+    onGetValues: function(values) {
+	var me = this;
+
+	var confid = 'tpmstate0';
+
+	if (values.hdimage) {
+	    me.drive.file = values.hdimage;
+	} else {
+	    // size is constant, so just use 1
+	    me.drive.file = values.hdstorage + ":1";
+	}
+
+	me.drive.version = values.version;
+	var params = {};
+	params[confid] = PVE.Parser.printQemuDrive(me.drive);
+	return params;
+    },
+
+    setNodename: function(nodename) {
+	var me = this;
+	me.down('#hdstorage').setNodename(nodename);
+	me.down('#hdimage').setStorage(undefined, nodename);
+    },
+
+    initComponent: function() {
+	var me = this;
+
+	me.drive = {};
+
+	me.items = [
+	    {
+		xtype: 'pveDiskStorageSelector',
+		name: me.disktype + '0',
+		storageContent: 'images',
+		nodename: me.nodename,
+		hideSize: true,
+		hideFormat: true,
+	    },
+	    {
+		xtype: 'proxmoxKVComboBox',
+		name: 'version',
+		value: 'v2.0',
+		deleteEmpty: false,
+		fieldLabel: gettext('Version'),
+		comboItems: [
+		    ['v1.2', 'v1.2'],
+		    ['v2.0', 'v2.0'],
+		],
+	    },
+	];
+
+	me.callParent();
+    },
+});
+
+Ext.define('PVE.qemu.TPMDiskEdit', {
+    extend: 'Proxmox.window.Edit',
+
+    isAdd: true,
+    subject: gettext('TPM State'),
+
+    width: 450,
+    initComponent: function() {
+	var me = this;
+
+	var nodename = me.pveSelNode.data.node;
+	if (!nodename) {
+	    throw "no node name specified";
+	}
+
+	me.items = [{
+	    xtype: 'pveTPMDiskInputPanel',
+	    //onlineHelp: 'qm_tpm', FIXME: add once available
+	    confid: me.confid,
+	    nodename: nodename,
+	    isCreate: true,
+	}];
+
+	me.callParent();
+    },
+});
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index bfe0a222..9c4b0649 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -245,6 +245,13 @@ Ext.define('PVE.qemu.HardwareView', {
 	    never_delete: !caps.vms['VM.Config.Disk'],
 	    header: gettext('EFI Disk'),
 	};
+	rows.tpmstate0 = {
+	    group: 22,
+	    iconCls: 'hdd-o',
+	    editor: null,
+	    never_delete: !caps.vms['VM.Config.Disk'],
+	    header: gettext('TPM State'),
+	};
 	for (let i = 0; i < PVE.Utils.hardware_counts.usb; i++) {
 	    let confid = "usb" + i.toString();
 	    rows[confid] = {
@@ -564,6 +571,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    me.down('#addnet').setDisabled(noVMConfigNetPerm || isAtLimit('net'));
 	    me.down('#addrng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
 	    efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
+	    me.down('#addtpmstate').setDisabled(noSysConsolePerm || isAtLimit('tpmstate'));
 	    me.down('#addci').setDisabled(noSysConsolePerm || hasCloudInit);
 
 	    if (!rec) {
@@ -588,6 +596,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    const isUsedDisk = !isUnusedDisk && row.isOnStorageBus && !isCDRom && !isCloudInit;
 	    const isDisk = isCloudInit || isUnusedDisk || isUsedDisk;
 	    const isEfi = key === 'efidisk0';
+	    const tpmMoveable = key === 'tpmstate0' && !me.pveSelNode.data.running;
 
 	    remove_btn.setDisabled(
 	        deleted ||
@@ -608,7 +617,7 @@ Ext.define('PVE.qemu.HardwareView', {
 
 	    resize_btn.setDisabled(pending || !isUsedDisk || !diskCap);
 
-	    move_btn.setDisabled(pending || !(isUsedDisk || isEfi) || !diskCap);
+	    move_btn.setDisabled(pending || !(isUsedDisk || isEfi || tpmMoveable) || !diskCap);
 
 	    revert_btn.setDisabled(!pending);
 	};
@@ -666,6 +675,20 @@ Ext.define('PVE.qemu.HardwareView', {
 				},
 			    },
 			    efidisk_menuitem,
+			    {
+				text: gettext('TPM State'),
+				itemId: 'addtpmstate',
+				iconCls: 'fa fa-fw fa-hdd-o black',
+				disabled: !caps.vms['VM.Config.Disk'],
+				handler: function() {
+				    var win = Ext.create('PVE.qemu.TPMDiskEdit', {
+					url: '/api2/extjs/' + baseurl,
+					pveSelNode: me.pveSelNode,
+				    });
+				    win.on('destroy', me.reload, me);
+				    win.show();
+				},
+			    },
 			    {
 				text: gettext('USB Device'),
 				itemId: 'addusb',
-- 
2.30.2

* [pve-devel] applied: [PATCH v3 storage 1/3] import: don't check for 1K aligned size
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 storage 1/3] import: don't check for 1K aligned size Stefan Reiter
@ 2021-10-05  4:24   ` Thomas Lamprecht
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2021-10-05  4:24 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stefan Reiter

On 04.10.21 17:29, Stefan Reiter wrote:
> TPM state disks on directory storages may have completely unaligned
> sizes, so this check doesn't make sense for them.
> 
> This appears to be just a (weak) safeguard that serves no actual
> functional purpose, so simply get rid of it to allow migration of TPM
> state.
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  PVE/Storage/Plugin.pm | 1 -
>  1 file changed, 1 deletion(-)
> 
>

applied, thanks!

* [pve-devel] applied: [PATCH v3 qemu-server 2/3] fix #3075: add TPM v1.2 and v2.0 support via swtpm
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 qemu-server 2/3] fix #3075: add TPM v1.2 and v2.0 support via swtpm Stefan Reiter
@ 2021-10-05  5:30   ` Thomas Lamprecht
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2021-10-05  5:30 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stefan Reiter

On 04.10.21 17:29, Stefan Reiter wrote:
> Starts an instance of swtpm per VM in its systemd scope; it will
> terminate by itself if the VM exits, or be terminated manually if
> startup fails.
> 
> Before first use, a TPM state is created via swtpm_setup. State is
> stored in a 'tpmstate0' volume, treated much the same way as an efidisk.
> 
> It is migrated 'offline'; the important part here is the creation of the
> target volume, while the actual data transfer happens via the QEMU device
> state migration process.
> 
> Move-disk can only work offline, as the disk is not registered with
> QEMU, so 'drive-mirror' wouldn't work. swtpm itself has no method of
> moving its backing storage at runtime.
> 
> For backups, a bit of a workaround is necessary (this may later be
> replaced by NBD support in swtpm): during the backup, we attach the
> backing file of the TPM as a read-only drive to QEMU, so our backup
> code can detect it as a block device and back it up as such, while
> ensuring consistency with the rest of the disk state ("snapshot" semantics).
> 
> The name for the ephemeral drive is specifically chosen as
> 'drive-tpmstate0-backup', diverging from our usual naming scheme with
> the '-backup' suffix, to avoid it ever being treated as a regular drive
> by the rest of the stack in case it gets left over after a backup for
> some reason (which shouldn't happen).
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  PVE/API2/Qemu.pm         |   5 ++
>  PVE/QemuMigrate.pm       |  14 +++-
>  PVE/QemuServer.pm        | 137 +++++++++++++++++++++++++++++++++++++--
>  PVE/QemuServer/Drive.pm  |  63 ++++++++++++++----
>  PVE/VZDump/QemuServer.pm |  43 ++++++++++--
>  5 files changed, 238 insertions(+), 24 deletions(-)
> 
>

applied, with a few trivial whitespace-related cleanups, thanks!

* [pve-devel] applied: [PATCH v3 manager 3/3] ui: add support for adding TPM devices
  2021-10-04 15:29 ` [pve-devel] [PATCH v3 manager 3/3] ui: add support for adding TPM devices Stefan Reiter
@ 2021-10-05  5:34   ` Thomas Lamprecht
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2021-10-05  5:34 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stefan Reiter

On 04.10.21 17:29, Stefan Reiter wrote:
> Inspired by HDEfi for efidisks. Extends the DiskStorageSelector to allow
> hiding the format, since tpmstate can only be stored in 'raw' format
> (even on directory storages).
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>  www/manager6/Makefile                    |  1 +
>  www/manager6/Utils.js                    |  2 +-
>  www/manager6/form/DiskStorageSelector.js |  5 +-
>  www/manager6/qemu/HDMove.js              |  1 +
>  www/manager6/qemu/HDTPM.js               | 88 ++++++++++++++++++++++++
>  www/manager6/qemu/HardwareView.js        | 25 ++++++-
>  6 files changed, 119 insertions(+), 3 deletions(-)
>  create mode 100644 www/manager6/qemu/HDTPM.js
> 
>

applied, thanks!

But I'm once again reminded that special disks like efidisk, and now tpmstate,
are really weird when being removed, as they get marked as unused disks, and
when one tries to re-attach them they'll act like they really are regular
disks.

But this is nothing new, and the dropping of information is also a nuisance
for "real" disks: if one has properties set like bus, wwn, ..., they're all
lost and re-attaching means recreating all of them. If we'd try to save as
much as possible of those things for unused disks too, we could prefill that
information in the re-attach dialogue and also differentiate between normal
disks and special disks like EFI or TPM here. But that's for another series ;)

end of thread, other threads:[~2021-10-05  5:36 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-04 15:29 [pve-devel] [PATCH v3 0/3] Initial TPM support for VMs Stefan Reiter
2021-10-04 15:29 ` [pve-devel] [PATCH v3 storage 1/3] import: don't check for 1K aligned size Stefan Reiter
2021-10-05  4:24   ` [pve-devel] applied: " Thomas Lamprecht
2021-10-04 15:29 ` [pve-devel] [PATCH v3 qemu-server 2/3] fix #3075: add TPM v1.2 and v2.0 support via swtpm Stefan Reiter
2021-10-05  5:30   ` [pve-devel] applied: " Thomas Lamprecht
2021-10-04 15:29 ` [pve-devel] [PATCH v3 manager 3/3] ui: add support for adding TPM devices Stefan Reiter
2021-10-05  5:34   ` [pve-devel] applied: " Thomas Lamprecht
