public inbox for pve-devel@lists.proxmox.com
From: Dominik Csapak <d.csapak@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH qemu-server v3 3/3] fix #3258: block vm start when pci device is already in use
Date: Thu,  7 Oct 2021 15:45:31 +0200
Message-ID: <20211007134531.1693674-4-d.csapak@proxmox.com>
In-Reply-To: <20211007134531.1693674-1-d.csapak@proxmox.com>

on vm start, we reserve all pciids that we use, and remove the
reservation again in vm_stop_cleanup.

at first this is only a time-based reservation; once the vm is started,
we reserve the ids again, this time bound to the pid of the vm.

for this, we have to move the start_timeout calculation above the
hostpci handling.

also move the pci initialization out of the conf parsing loop, so that
we can reserve all ids before we actually touch any of them.

while touching the lines, fix the indentation

this way, a vm configured with a pci device that is already in use by a
different running vm will not be started, and the user gets an error
that the device is already in use.
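
in pseudo-perl, the resulting flow is roughly the following (a
simplified sketch, not the literal hunks below; the variables come from
vm_start_nolock and the error handling is condensed into comments):

    # collect the pci ids of all hostpciX entries first
    my $pciids = [];
    for (my $i = 0; $i < $PVE::QemuServer::PCI::MAX_HOSTPCI_DEVICES; $i++) {
        my $d = parse_hostpci($conf->{"hostpci$i"});
        next if !$d;
        push @$pciids, map { $_->{id} } @{$d->{pciid}};
    }

    # time-based reservation before touching any device; dies if another
    # vm already holds one of the ids
    PVE::QemuServer::PCI::reserve_pci_usage($pciids, $vmid, $start_timeout);

    # ... prepare_pci_device for each id and start the vm; on any failure
    # the reservation is dropped again via remove_pci_reservation($pciids) ...

    # once the vm runs, renew the reservation bound to the pid instead of
    # a timeout, so it stays valid for the whole lifetime of the vm
    my $pid = PVE::QemuServer::Helpers::vm_running_locally($vmid);
    PVE::QemuServer::PCI::reserve_pci_usage($pciids, $vmid, undef, $pid);

    # vm_stop_cleanup finally calls remove_pci_reservation with the same ids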

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/QemuServer.pm | 50 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 42 insertions(+), 8 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f78b2cc..e504e9a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5381,16 +5381,40 @@ sub vm_start_nolock {
 	push @$cmd, '-S';
     }
 
+    my $start_timeout = $params->{timeout} // config_aware_timeout($conf, $resume);
+    my $pciids = [];
+    my $pci_devices = {};
+
     # host pci devices
     for (my $i = 0; $i < $PVE::QemuServer::PCI::MAX_HOSTPCI_DEVICES; $i++)  {
-      my $d = parse_hostpci($conf->{"hostpci$i"});
-      next if !$d;
-      my $pcidevices = $d->{pciid};
-      foreach my $pcidevice (@$pcidevices) {
-	    my $pciid = $pcidevice->{id};
+	my $d = parse_hostpci($conf->{"hostpci$i"});
+	next if !$d;
+	$pci_devices->{$i} = $d;
 
-	    PVE::QemuServer::PCI::prepare_pci_device($vmid, $pciid, $i, $d->{mdev});
-      }
+	my $pcidevices = $d->{pciid};
+
+	my $ids = [map { $_->{id} } @$pcidevices];
+	push @$pciids, @$ids;
+    }
+
+    # reserve all pci ids before actually doing anything with them
+    PVE::QemuServer::PCI::reserve_pci_usage($pciids, $vmid, $start_timeout);
+
+    eval {
+	for my $i (sort keys %$pci_devices) {
+	    my $d = $pci_devices->{$i};
+	    my $pcidevices = $d->{pciid};
+	    foreach my $pcidevice (@$pcidevices) {
+		my $pciid = $pcidevice->{id};
+		PVE::QemuServer::PCI::prepare_pci_device($vmid, $pciid, $i, $d->{mdev});
+	    }
+	}
+    };
+
+    if (my $err = $@) {
+	eval { PVE::QemuServer::PCI::remove_pci_reservation($pciids) };
+	warn $@ if $@;
+	die $err;
     }
 
     PVE::Storage::activate_volumes($storecfg, $vollist);
@@ -5405,7 +5429,6 @@ sub vm_start_nolock {
 
     my $cpuunits = get_cpuunits($conf);
 
-    my $start_timeout = $params->{timeout} // config_aware_timeout($conf, $resume);
     my %run_params = (
 	timeout => $statefile ? undef : $start_timeout,
 	umask => 0077,
@@ -5485,9 +5508,17 @@ sub vm_start_nolock {
     if (my $err = $@) {
 	# deactivate volumes if start fails
 	eval { PVE::Storage::deactivate_volumes($storecfg, $vollist); };
+	eval { PVE::QemuServer::PCI::remove_pci_reservation($pciids) };
+
 	die "start failed: $err";
     }
 
+    # reserve all pciids again with the pid
+    # the vm is already started, we can only warn on error here
+    my $pid = PVE::QemuServer::Helpers::vm_running_locally($vmid);
+    eval { PVE::QemuServer::PCI::reserve_pci_usage($pciids, $vmid, undef, $pid) };
+    warn $@ if $@;
+
     print "migration listens on $migrate_uri\n" if $migrate_uri;
     $res->{migrate_uri} = $migrate_uri;
 
@@ -5676,6 +5707,7 @@ sub vm_stop_cleanup {
 	    unlink '/dev/shm/pve-shm-' . ($ivshmem->{name} // $vmid);
 	}
 
+	my $ids = [];
 	foreach my $key (keys %$conf) {
 	    next if $key !~ m/^hostpci(\d+)$/;
 	    my $hostpciindex = $1;
@@ -5684,9 +5716,11 @@ sub vm_stop_cleanup {
 
 	    foreach my $pci (@{$d->{pciid}}) {
 		my $pciid = $pci->{id};
+		push @$ids, $pci->{id};
 		PVE::SysFSTools::pci_cleanup_mdev_device($pciid, $uuid);
 	    }
 	}
+	PVE::QemuServer::PCI::remove_pci_reservation($ids);
 
 	vmconfig_apply_pending($vmid, $conf, $storecfg) if $apply_pending_changes;
     };
-- 
2.30.2


Thread overview: 7+ messages
2021-10-07 13:45 [pve-devel] [PATCH qemu-server v3 0/3] fix #3258: check for in-use pci devices on vm start Dominik Csapak
2021-10-07 13:45 ` [pve-devel] [PATCH qemu-server v3 1/3] pci: refactor pci device preparation Dominik Csapak
2021-10-11  6:49   ` [pve-devel] applied: " Thomas Lamprecht
2021-10-07 13:45 ` [pve-devel] [PATCH qemu-server v3 2/3] pci: add helpers to (un)reserve pciids for a vm Dominik Csapak
2021-10-15 17:59   ` [pve-devel] applied: " Thomas Lamprecht
2021-10-07 13:45 ` Dominik Csapak [this message]
2021-10-15 17:59   ` [pve-devel] applied: [PATCH qemu-server v3 3/3] fix #3258: block vm start when pci device is already in use Thomas Lamprecht
