From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <t.lamprecht@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id F0520800C5
 for <pve-devel@lists.proxmox.com>; Tue, 16 Nov 2021 12:50:50 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id E459D16A25
 for <pve-devel@lists.proxmox.com>; Tue, 16 Nov 2021 12:50:50 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id 2CF0516A1A
 for <pve-devel@lists.proxmox.com>; Tue, 16 Nov 2021 12:50:50 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id E14DB43C42
 for <pve-devel@lists.proxmox.com>; Tue, 16 Nov 2021 12:50:49 +0100 (CET)
Message-ID: <03b86a5c-a515-f4d7-5ab6-f123a37a7b7d@proxmox.com>
Date: Tue, 16 Nov 2021 12:50:48 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:95.0) Gecko/20100101
 Thunderbird/95.0
Content-Language: en-US
To: =?UTF-8?Q?Fabian_Gr=c3=bcnbichler?= <f.gruenbichler@proxmox.com>,
 Proxmox VE development discussion <pve-devel@lists.proxmox.com>
References: <20211116105215.1812508-1-f.gruenbichler@proxmox.com>
 <c37e599d-cad4-74f3-fe9b-46d97a958d7d@proxmox.com>
 <1637061780.roho39wcf6.astroid@nora.none>
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
In-Reply-To: <1637061780.roho39wcf6.astroid@nora.none>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.837 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 NICE_REPLY_A           -1.446 Looks like a legit reply (A)
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pve-devel] [PATCH qemu-server] migrate: skip tpmstate for NBD
 migration
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Tue, 16 Nov 2021 11:50:51 -0000

On 16.11.21 12:39, Fabian Grünbichler wrote:
> On November 16, 2021 12:12 pm, Thomas Lamprecht wrote:
>> On 16.11.21 11:52, Fabian Grünbichler wrote:
>>> the tpmstate volume is not available in the VM directly, but we do
>>> migrate the state volume via a storage migration anyway if necessary.
>>>
>>
>> some context would be great to have in the commit message, iow. mentioning
>> that QEMU is already migrating this as part of its memory/state migration.
>
> I tried to get some understanding of how this works, and I don't think
> that the stuff that Qemu copies as part of the TPM emulator state covers
> everything that is in the state volume.
>
> what happens is the following:
> - our migration code finds a tpmstate volume, it gets migrated via
>   storage_migrate if on local storage (and replicated if that is
>   enabled)
> - the VM is started on the remote node with the initial swtpm setup part
>   skipped, since we already have a volume with state
> - the RAM migration happens (and rest of state, including 'tpm emulator
>   state')
>
> so there is a window between storage_migrate/replication happening, and
> the migration being finished where changes to the TPM state volume from
> within the guest could potentially get lost (unless the state covered by
> the migrate stream covers ALL the state inside the state volume, which I
> don't think, but maybe I am mistaken on that front).

I have something in mind from talking to Stefan about this; it should
be fine, but I need to rethink it and make sure I remember it correctly.
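
To spell out the ordering you describe, here's a rough, self-contained
Python-style sketch (all names are made up, the real code is Perl in
qemu-server); it only models the ordering, so that the window you mention
becomes visible:

from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    is_tpmstate: bool = False

def storage_migrate(vol):
    # one-shot copy before the target VM starts (or reuse of an existing replica)
    print(f"copy {vol.name} to the target node")

def start_target_vm():
    # initial swtpm setup is skipped, the copied state volume is reused
    print("start VM on target, skipping swtpm setup")

def block_mirror(vols):
    # live mirror of the disks QEMU itself writes to; tpmstate is not among them
    print("block-mirror over NBD:", [v.name for v in vols])

def migrate_ram_and_device_state():
    # includes the 'tpm emulator' device state mentioned above
    print("RAM + device state migration")

def migrate(volumes):
    # the tpmstate volume is copied exactly once, up front
    for vol in volumes:
        if vol.is_tpmstate:
            storage_migrate(vol)
    # <-- from here until the migration finishes: TPM state changes triggered
    #     by the guest are only covered if the migrated emulator state carries them
    start_target_vm()
    block_mirror([v for v in volumes if not v.is_tpmstate])
    migrate_ram_and_device_state()

migrate([Volume("vm-100-disk-0"), Volume("vm-100-disk-1", is_tpmstate=True)])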

>
> but this is irrespective of this patch, which just fixes the wrong
> attempt of setting up an NBD server for the replicated tpm state volume.
> even attaching the volume (like we do for backups) and setting up that
> NBD server would not help, since changes to the state volume are not
> tracked in the source VM on the block level, as Qemu doesn't access the
> state volume directly, only swtpm does.

yeah, something like the above paragraph would help to avoid confusion
like mine :)
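
Conceptually the fix then boils down to something like the following rough
sketch (again Python-style with made-up names, not the actual Perl diff):
tpmstate counts as a "drive" in the VM config, so it shows up when
enumerating local disks, but it must not get an NBD export / mirror target:

# tpmstate is excluded from the NBD/mirror set: it was already handled by
# storage_migrate (or replication), and QEMU never writes to it on the
# block level anyway -- only swtpm does.
DRIVE_PREFIXES = ("ide", "sata", "scsi", "virtio", "tpmstate")

def is_drive_key(key: str) -> bool:
    return key.startswith(DRIVE_PREFIXES)

def drives_for_nbd(conf: dict) -> dict:
    return {k: v for k, v in conf.items()
            if is_drive_key(k) and not k.startswith("tpmstate")}

conf = {
    "scsi0": "local-lvm:vm-100-disk-0",
    "tpmstate0": "local-lvm:vm-100-disk-1",
    "memory": "2048",
}
print(drives_for_nbd(conf))  # -> only scsi0; tpmstate0 is skipped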

>
>>
>> Also, how is "migrate -> stop -> start" affected, is the TPM synced out to
>> the (previously replicated?) disk on the target side during stop?
>
> I am not sure I understand this question. nothing changes about the flow
> of migration with this patch, except that where the migration would fall
> apart previously if replication was enabled, it now works. the handling
> of the state volume is unchanged / identical to a VM that is not
> replicated. in either case we only sync the state volume once, before
> starting the VM on the target node, doing block mirror, and the
> ram/state migration. swtpm probably syncs it whenever state-changing
> operations are issued from within the VM - but that is not something
> that we can control when shutting down the VM. AFAIU, the 'raw' state of
> the TPM is not even available to Qemu directly, that's the whole point
> of the swtpm component after all?
>

Yes and no, it's the case but not the whole point; rather, QEMU just did
not want to directly incorporate TPM stuff if it can live externally
(i.e., it's more about code-base and architecture than about keeping them
separated to ensure QEMU is unaware of TPM state), and to allow HW TPM
stuff to be used at the same time. But don't quote me on that, those are
just recollections of discussions with Stefan and the swtpm project.