From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
Thomas Lamprecht <t.lamprecht@proxmox.com>,
Filip Schauer <f.schauer@proxmox.com>
Subject: Re: [pve-devel] [PATCH v3 qemu-server] Fix ACPI-suspended VMs resuming after migration
Date: Tue, 10 Oct 2023 16:19:55 +0200 [thread overview]
Message-ID: <d59420f3-2b1f-5de0-2106-38441c5bbf4d@proxmox.com> (raw)
In-Reply-To: <01c649c9-0717-404c-98d8-17dc8a5cc4eb@proxmox.com>
On 10.10.23 at 13:45, Thomas Lamprecht wrote:
> On 09/10/2023 at 15:25, Filip Schauer wrote:
>> Add checks for "suspended" and "prelaunch" runstates when checking
>> whether a VM is paused.
>>
>> This fixes the following issues:
>> * ACPI-suspended VMs automatically resuming after migration
>> * Shutdown and reboot commands timing out instead of failing
>> immediately on suspended VMs
>>
>
> I checked the call sites and what I'm wondering is: can the VM be woken
> up from those new states without QMP intervention, say an ACPI wake-up
> triggered by some (virtual) RTC or via network (like wake-on-LAN)? If
> so, we should add a big notice comment on this method to ensure new
> users of it are informed about that possibility.
Sorry, I did not consider this. Yes, the VM can get woken up without QMP,
even just with keyboard presses in the case of my Debian test VM. So our
config locks are useless here :/
It'll also be an issue for migration:
1. migration is started and remembers that "vm_was_paused"
2. migration continues
3. during migration VM wakes up
4. migration finishes, but doesn't resume guest on the target, because
"vm_was_paused"
We could either:
1. tolerate the old behavior (VM might get resumed on target if it was
suspended) - if we declare migration an ACPI-wake-up event ;)
2. tolerate the new behavior (VM might wake up and not be resumed on
target later) - seems worse than the old behavior when it happens
3. disallow migration when in 'suspended' runstate
4. add new check just before moving guest to target - but not sure how
we'd do that without races again...
> Also, with that change we might have added a race for suspend-mode backups,
> at least if VMs really can wake up without a QMP command (which I find likely).
> I.e., between the time we checked and set vm_was_paused until we actually
> suspend, because if the VM wakes up in between we might get inconsistent
> stuff and skip things like fs-freeze.
That race already exists without the patch. QEMU does not transition
from the 'suspended' state to the 'paused' state when QMP 'stop' is
issued, i.e. what our 'qm suspend' or vm_suspend() actually does. So
whether or not we call that during backup makes no difference when the
VM is already in the 'suspended' state.
The window for the race is a bit larger now, though:
now: the VM wakes up between the paused check and the 'backup' QMP command
before: the VM wakes up after fs-freeze was skipped because the guest
agent was detected as not running
We could either:
1. (ironically) disallow 'suspend' mode backups when in the 'suspended'
runstate
2. resume and pause the VM upon 'suspend' mode backup, but that is also
surprising
3. make the race window smaller again by doing the qga_check_running()
check (and fs-freeze if it succeeds) even if the VM was in the
'suspended' runstate
I don't see how to avoid the possibility of a wake-up at an
inconvenient time except with one of the first two options.
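To make option 3 concrete: even if the VM was seen in the 'suspended' runstate, probe the guest agent again immediately before the backup and freeze if it answers. A hedged Python sketch; qga_check_running and fs_freeze here are hypothetical stand-ins passed in as callables, not the real helpers:

```python
# Option 3 sketch: shrink (but not close) the race window by asking
# the guest agent as late as possible, instead of trusting an earlier
# "VM was paused" check that may already be stale.

def maybe_fs_freeze(qga_check_running, fs_freeze):
    # The VM may have woken from 'suspended' since we last looked,
    # so re-check the agent right before freezing.
    if qga_check_running():
        fs_freeze()
        return True
    return False
```

A wake-up can still land between the agent check and the freeze, which is why only the first two options actually remove the race.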
> While we recommend stop and snapshot modes over suspend mode, the latter is
> still exposed via UI and API and should work OK enough.
>
> Note that ACPI S3 suspend and our vm_suspend are very different things, our
> vm_suspend is doing a "stop" monitor command, resulting in all vCPUs being
> stopped, while S3 is a (virtual) firmware/machine feature – I mean, I guess
> there's a reason that those things are reported as different states..
> It doesn't help that our vm_suspend method doesn't actually do a (ACPI S3
> like) suspension, but a (vCPU) "stop".
>
> The "prelaunch" state, OTOH, seems pretty much like "paused", with the
> difference that the VM's vCPUs never ran in the former; handling it the
> same seems fine. But for "suspended" I'm not sure we should detect it
> as paused unless conflating the two is OK for a call site, so maybe
> make it an opt-in parameter. Alternatively, we could return the status
> in the "true" case so that call sites can decide what to do for any
> such special handling without an extra parameter.
Yes, I like the opt-in approach.
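The opt-in idea could look roughly like this (Python sketch only; the real check lives in Perl in qemu-server, and the function name, parameter name, and runstate handling here are all illustrative assumptions):

```python
# vm_is_paused() treats 'prelaunch' like 'paused' unconditionally,
# but only counts the ACPI 'suspended' runstate when the caller
# explicitly opts in. Names and signature are assumptions.

PAUSED_RUNSTATES = frozenset({"paused", "prelaunch"})

def vm_is_paused(runstate, include_suspended=False):
    if include_suspended and runstate == "suspended":
        return True
    return runstate in PAUSED_RUNSTATES
```

Call sites like shutdown/reboot could then opt in to treat 'suspended' as paused, while the migration code keeps the default and handles the 'suspended' runstate explicitly.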
Thread overview: 5+ messages
2023-10-09 13:25 Filip Schauer
2023-10-10 8:58 ` [pve-devel] applied: " Fiona Ebner
2023-10-10 11:45 ` [pve-devel] " Thomas Lamprecht
2023-10-10 14:19 ` Fiona Ebner [this message]
2023-10-11 7:38 ` Fiona Ebner