From: Mira Limbeck <m.limbeck@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH qemu-server] mediated devices: fix race condition in vm reboot
Date: Thu, 7 Mar 2024 14:31:55 +0100
Message-ID: <b8f2d47f-5883-416b-b40a-345f0f6a0fe7@proxmox.com>
In-Reply-To: <20240307093337.1104294-1-d.csapak@proxmox.com>

On 3/7/24 10:33, Dominik Csapak wrote:
> when triggering a vm reboot from the host (via cli/api), the reboot
> code is called under a guest lock; it creates a reboot request, shuts
> down the vm, and calls the regular cleanup code (which includes the
> mdev cleanup).
> 
> in parallel, qmeventd sees the vanished vm and starts 'qm cleanup',
> which is (among other tasks) used to restart a vm with a pending
> reboot request. It does this under a guest lock as well, with a
> default timeout of 10 seconds.
> 
> Since we currently wait 10 seconds for the nvidia driver to clean up
> the mdev, this creates a race condition for the cleanup lock. In other
> words, when the call to `qm cleanup` starts before we begin the
> 10-second sleep, it will not be able to acquire its lock and will not
> start the vm again.
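
To make the contention easier to picture, here is a minimal standalone
sketch of the lock dance. This is not the actual PVE locking code; it
just uses flock on a scratch file with made-up timings: one side holds
the lock through the driver sleep while the other gives up after its
shorter timeout, exactly the pattern described above.

#!/usr/bin/perl
# standalone illustration of the race; lock file path and timings are
# invented for the demo and do not match the real PVE helpers
use strict;
use warnings;
use Fcntl qw(:flock);

my $lockfile = '/tmp/mdev-race-demo.lock';

# take an exclusive lock, retrying once per second until $timeout
sub with_lock {
    my ($name, $timeout, $code) = @_;
    open(my $fh, '>>', $lockfile) or die "open $lockfile: $!";
    my $waited = 0;
    until (flock($fh, LOCK_EX | LOCK_NB)) {
        die "$name: could not acquire lock within ${timeout}s\n"
            if ++$waited > $timeout;
        sleep 1;
    }
    $code->();
    close($fh); # closing the handle releases the lock
}

my $pid = fork() // die "fork: $!";
if (!$pid) {
    # child plays the reboot path: holds the lock through the mdev sleep
    with_lock('reboot', 5, sub { sleep 12 });
    exit 0;
}

sleep 1; # let the reboot side grab the lock first
# parent plays 'qm cleanup' with the old 10 second default -> times out
eval { with_lock('qm cleanup', 10, sub { print "restarting vm\n" }) };
print $@ if $@;
waitpid($pid, 0);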
> 
> To fix it, do two things:
> * increase the timeout in `qm cleanup` to 60 seconds
>   (technically this might still run into a timeout, as we can
>   configure up to 16 mediated devices with up to 10 seconds of sleep
>   each, but realistically most users won't configure more than two or
>   three of them, if that)
> 
> * change the `sleep 10` to a loop that sleeps 1 second at a time and
>   checks the state after each iteration. This shortens the wait when
>   the driver manages to clean up the mdev in the meantime.
> 
> Further, add a bit of logging, so we can properly see in the (task) log
> what is happening when.
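
As a quick sanity check on those numbers: the worst case is 16 mediated
devices at 10 seconds each, i.e. 160 seconds, which can still blow past
the new 60 second lock timeout, while the typical two or three devices
stay within 20-30 seconds. Condensed, the polling pattern the hunk
below introduces looks roughly like this (the helper name is
illustrative; in the actual patch the loop is inlined in
cleanup_pci_devices):

# illustrative restatement of the patch's approach, not the actual code:
# poll once per second with a deadline instead of one fixed worst-case sleep
sub wait_until_removed {
    my ($sysfs_path, $max_seconds) = @_;
    my $waited = 0;
    while (-e $sysfs_path && $waited < $max_seconds) {
        sleep 1;
        $waited++;
    }
    return $waited; # seconds actually spent waiting, handy for logging
}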
> 
> Fixes: 49c51a60 (pci: workaround nvidia driver issue on mdev cleanup)
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  PVE/CLI/qm.pm     |  3 ++-
>  PVE/QemuServer.pm | 16 ++++++++++++----
>  2 files changed, 14 insertions(+), 5 deletions(-)
> 
> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index b17b4fe2..dce6c7a1 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -915,7 +915,8 @@ __PACKAGE__->register_method({
>  	my $storecfg = PVE::Storage::config();
>  	warn "Starting cleanup for $vmid\n";
>  
> -	PVE::QemuConfig->lock_config($vmid, sub {
> +	# mdev cleanup can take a while, so wait up to 60 seconds
> +	PVE::QemuConfig->lock_config_full($vmid, 60, sub {
>  	    my $conf = PVE::QemuConfig->load_config ($vmid);
>  	    my $pid = PVE::QemuServer::check_running ($vmid);
>  	    die "vm still running\n" if $pid;
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index b45507aa..efe93c62 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -6133,12 +6133,20 @@ sub cleanup_pci_devices {
>  	    my $dev_sysfs_dir = "/sys/bus/mdev/devices/$uuid";
>  
>  	    # some nvidia vgpu driver versions want to clean the mdevs up themselves, and error
> -	    # out when we do it first. so wait for 10 seconds and then try it
> -	    if ($d->{ids}->[0]->[0]->{vendor} =~ m/^(0x)?10de$/) {
> -		sleep 10;
> +	    # out when we do it first. so wait for up to 10 seconds and then try it manually
> +	    if ($d->{ids}->[0]->[0]->{vendor} =~ m/^(0x)?10de$/ && -e $dev_sysfs_dir) {
> +		my $count = 0;
> +		while (-e $dev_sysfs_dir && $count < 10) {
> +		    sleep 1;
> +		    $count++;
> +		}
> +		warn "waited $count seconds for mdev cleanup\n";
>  	    }
>  
> -	    PVE::SysFSTools::file_write("$dev_sysfs_dir/remove", "1") if -e $dev_sysfs_dir;
> +	    if (-e $dev_sysfs_dir) {
> +		warn "cleaning up mediated device $uuid\n";
> +		PVE::SysFSTools::file_write("$dev_sysfs_dir/remove", "1");
> +	    }
>  	}
>      }
>      PVE::QemuServer::PCI::remove_pci_reservation($vmid);

lgtm

Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>


Sadly I can't test it myself since I don't have a vGPU-capable nvidia
card. But the customer who reported the issue has already tested the
patch, and it seems to fix it without introducing other problems so far.
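
For anyone who does have a vGPU-capable card, a rough way to watch the
cleanup from the outside would be something like the sketch below, run
right after triggering the reboot. It assumes the standard mdev sysfs
layout; the UUID has to be taken from the VM's hostpci entry by hand.

#!/usr/bin/perl
# rough observation helper, not part of the patch: report how long the
# driver takes to remove the mdev's sysfs directory
use strict;
use warnings;

my $uuid = shift // die "usage: $0 <mdev-uuid>\n";
my $dir  = "/sys/bus/mdev/devices/$uuid";

for my $waited (0 .. 60) {
    if (!-e $dir) {
        print "mdev $uuid gone after ~${waited}s\n";
        exit 0;
    }
    sleep 1;
}
warn "mdev $uuid still present after 60s\n";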

Thread overview: 3+ messages
2024-03-07  9:33 Dominik Csapak
2024-03-07 13:31 ` Mira Limbeck [this message]
2024-03-08 13:18 ` [pve-devel] applied: Thomas Lamprecht
