From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	Thomas Lamprecht <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server] migrate: skip tpmstate for NBD migration
Date: Tue, 16 Nov 2021 12:39:54 +0100
Message-ID: <1637061780.roho39wcf6.astroid@nora.none>
In-Reply-To: <c37e599d-cad4-74f3-fe9b-46d97a958d7d@proxmox.com>

On November 16, 2021 12:12 pm, Thomas Lamprecht wrote:
> On 16.11.21 11:52, Fabian Grünbichler wrote:
>> the tpmstate volume is not available in the VM directly, but we do
>> migrate the state volume via a storage migration anyway if necessary.
>> 
> 
> some context would be great to have in the commit message, iow. mentioning
> that QEMU is already migrating this as part of its memory/state migration.

I tried to get some understanding of how this works, and I don't think 
that the TPM emulator state QEMU copies as part of the migration stream 
covers everything that is in the state volume.

What happens is the following (see the sketch after this list):
- our migration code finds a tpmstate volume, it gets migrated via 
  storage_migrate if on local storage (and replicated if that is 
  enabled)
- the VM is started on the remote node with the initial swtpm setup part 
  skipped, since we already have a volume with state
- the RAM migration happens (along with the rest of the state, including 
  the 'tpm emulator' state)
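
to make that flow a bit more concrete, here is a rough sketch in Perl. 
this is NOT the actual code in QemuMigrate.pm, just an illustration - 
$local_volumes, $storecfg, $ssh_info, $target_sid and $opts are 
placeholders for what the real code passes around:

    use PVE::Storage;

    # simplified sketch of the tpmstate handling during migration
    # (illustrative only - the real logic lives in PVE::QemuMigrate)
    foreach my $volid (sort keys %$local_volumes) {
        my $drivename = $local_volumes->{$volid}->{drivename} // '';
        next if $drivename ne 'tpmstate0';

        # the state volume is copied offline via storage_migrate (or was
        # already transferred by replication) - it is never part of the
        # NBD/block-mirror phase, and the VM is then started on the
        # target with the initial swtpm setup skipped
        PVE::Storage::storage_migrate(
            $storecfg, $volid, $ssh_info, $target_sid, $opts);
    }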

So there is a window between storage_migrate/replication happening and 
the migration finishing in which changes to the TPM state volume from 
within the guest could potentially get lost (unless the state covered by 
the migration stream includes ALL the state inside the state volume, 
which I doubt, but maybe I am mistaken on that front).

But this is independent of this patch, which just fixes the bogus 
attempt at setting up an NBD server for the replicated TPM state volume. 
Even attaching the volume (like we do for backups) and setting up that 
NBD server would not help, since changes to the state volume are not 
tracked in the source VM on the block level: QEMU doesn't access the 
state volume directly, only swtpm does (see the sketch below).
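
for reference, swtpm runs as a separate process that opens the state 
volume itself, while QEMU only talks to it over a socket. conceptually 
it is something like the following - the paths and VMID are made up for 
illustration, the actual setup lives in our swtpm start code:

    use PVE::Tools qw(run_command);

    # illustration only: swtpm opens the state volume, QEMU never does
    my $state_path = '/dev/pve/vm-100-disk-1';          # example tpmstate volume
    my $ctrl_sock  = '/var/run/qemu-server/100.swtpm';  # example control socket

    run_command([
        'swtpm', 'socket', '--daemon', '--tpm2',
        '--tpmstate', "backend-uri=file://$state_path",
        '--ctrl', "type=unixio,path=$ctrl_sock",
    ]);
    # QEMU then only gets a chardev pointing at $ctrl_sock plus a TPM
    # device, so writes to the state volume bypass QEMU's block layer
    # entirely - which is why block-level dirty tracking can't see them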

> 
> Also, how is "migrate -> stop -> start" affected, is the TPM synced out to
> the (previously replicated?) disk on the target side during stop?

I am not sure I understand this question. Nothing changes about the flow 
of migration with this patch, except that where the migration previously 
fell apart if replication was enabled, it now works. The handling of the 
state volume is unchanged / identical to a VM that is not replicated: in 
either case we only sync the state volume once, before starting the VM 
on the target node, doing the block mirror, and the RAM/state migration. 
swtpm presumably syncs it whenever state-changing operations are issued 
from within the VM - but that is not something we can control when 
shutting down the VM. AFAIU, the 'raw' state of the TPM is not even 
available to QEMU directly - that's the whole point of the swtpm 
component, after all?

> 
>> this code path was only triggered for replicated VMs with TPM.
>> 
>> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>> ---
>>  PVE/QemuServer.pm | 1 +
>>  1 file changed, 1 insertion(+)
>> 
>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
>> index 580af9e..76d45a2 100644
>> --- a/PVE/QemuServer.pm
>> +++ b/PVE/QemuServer.pm
>> @@ -5238,6 +5238,7 @@ sub vm_migrate_get_nbd_disks {
>>  	my ($ds, $drive) = @_;
>>  
>>  	return if drive_is_cdrom($drive);
>> +	return if $ds eq 'tpmstate0';
>>  
>>  	my $volid = $drive->{file};
>>  
>> 
> 
> 
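
for context on the quoted hunk above: the fixed callback runs once per 
configured drive, so the early return simply keeps tpmstate0 out of the 
list of disks that get an NBD export. roughly like this (simplified - 
the real iteration is in vm_migrate_get_nbd_disks, and drive_is_cdrom 
comes from PVE::QemuServer::Drive):

    # simplified view of how the callback from the hunk above is driven
    my $nbd_disks = [];
    PVE::QemuConfig->foreach_volume($conf, sub {
        my ($ds, $drive) = @_;

        return if drive_is_cdrom($drive);
        return if $ds eq 'tpmstate0';  # the added line: skip the TPM state

        push @$nbd_disks, { name => $ds, volid => $drive->{file} };
    });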



