From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <pve-devel-bounces@lists.proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
	by lore.proxmox.com (Postfix) with ESMTPS id 7D2791FF17A
	for <inbox@lore.proxmox.com>; Tue,  6 Aug 2024 14:22:33 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
	by firstgate.proxmox.com (Proxmox) with ESMTP id BE8DC1D029;
	Tue,  6 Aug 2024 14:22:38 +0200 (CEST)
From: Dominik Csapak <d.csapak@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Tue,  6 Aug 2024 14:22:00 +0200
Message-Id: <20240806122203.2266054-4-d.csapak@proxmox.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240806122203.2266054-1-d.csapak@proxmox.com>
References: <20240806122203.2266054-1-d.csapak@proxmox.com>
MIME-Version: 1.0
Subject: [pve-devel] [PATCH qemu-server 1/3] pci: choose devices: don't
 reserve pciids when vm is already running
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
Reply-To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: pve-devel-bounces@lists.proxmox.com
Sender: "pve-devel" <pve-devel-bounces@lists.proxmox.com>

The only way the VM can already be running here is when we're being
called from 'qm showcmd', and in that case we don't want to reserve or
create anything.

In case the VM was not running, we actually reserved the devices, so we
want to call 'cleanup_pci_devices' afterwards to remove those
reservations again. This minimizes the timespan during which those
devices are unavailable for real VM starts.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
Alternatively, we could pass through the information that we're in
'showcmd', but since that's a relatively long call chain and the
involved functions already have a lot of parameters, I opted for this
approach. I'm not opposed to another solution, if preferred, though.

 PVE/QemuServer.pm                | 5 +++++
 PVE/QemuServer/PCI.pm            | 9 +++++++--
 test/run_config2command_tests.pl | 5 ++++-
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b26da505..b2cbe00e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6129,6 +6129,11 @@ sub vm_commandline {
     my $defaults = load_defaults();
 
     my $cmd = config_to_command($storecfg, $vmid, $conf, $defaults, $forcemachine, $forcecpu);
+    # if the vm is not running, we need to clean up the reserved/created devices
+    if (!PVE::QemuServer::Helpers::vm_running_locally($vmid)) {
+	eval { cleanup_pci_devices($vmid, $conf) };
+	warn $@ if $@;
+    }
 
     return PVE::Tools::cmd2string($cmd);
 }
diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 1673041b..97eb2165 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -523,6 +523,9 @@ sub parse_hostpci_devices {
 my sub choose_hostpci_devices {
     my ($devices, $vmid) = @_;
 
+    # if the vm is running, we must be in 'showcmd', so don't actually reserve or create anything
+    my $is_running = PVE::QemuServer::Helpers::vm_running_locally($vmid) ? 1 : 0;
+
     my $used = {};
 
     my $add_used_device = sub {
@@ -555,8 +558,10 @@ my sub choose_hostpci_devices {
 	    my $ids = [map { $_->{id} } @$alternative];
 
 	    next if grep { defined($used->{$_}) } @$ids; # already used
-	    eval { reserve_pci_usage($ids, $vmid, 10, undef) };
-	    next if $@;
+	    if (!$is_running) {
+		eval { reserve_pci_usage($ids, $vmid, 10, undef) };
+		next if $@;
+	    }
 
 	    # found one that is not used or reserved
 	    $add_used_device->($alternative);
diff --git a/test/run_config2command_tests.pl b/test/run_config2command_tests.pl
index d48ef562..9b5e87ff 100755
--- a/test/run_config2command_tests.pl
+++ b/test/run_config2command_tests.pl
@@ -196,7 +196,10 @@ $qemu_server_module->mock(
     },
     get_initiator_name => sub {
 	return 'iqn.1993-08.org.debian:01:aabbccddeeff';
-    }
+    },
+    cleanup_pci_devices => sub {
+	# do nothing
+    },
 );
 
 my $qemu_server_config;
-- 
2.39.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel