From: Noel Ullreich <n.ullreich@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH pve-docs v3] update the PCI(e) docs
Date: Thu, 16 Mar 2023 14:15:51 +0100 [thread overview]
Message-ID: <20230316131551.47180-1-n.ullreich@proxmox.com>
A little update to the PCI(e) docs with the plan of reworking the PCI
wiki as well.
Along with some minor grammar fixes, this patch adds:
* how to check if kernel modules are being loaded
* how to check which drivers to blacklist
* how to add softdeps for module loading
* where to find kernel params
Signed-off-by: Noel Ullreich <n.ullreich@proxmox.com>
---
changes from v1:
* fixed spelling mistakes
* reduced code snippets of how to check iommu groupings to one
* moved where to find kernel params to kernel cmdline section
* removed wrong info on display output; will add correct info to the
Examples-Wiki
* changed module names to variable names, so that people can't
blindly copy-paste
* restructured commit message ;)
changes from v2:
* while moving where to find the kernel params to the kernel
cmdline section, I forgot to remove it from the pci(e) section
* fixed typo in the link to the kernel param section
qm-pci-passthrough.adoc | 72 +++++++++++++++++++++++++++++++++--------
system-booting.adoc | 9 ++++++
2 files changed, 68 insertions(+), 13 deletions(-)
diff --git a/qm-pci-passthrough.adoc b/qm-pci-passthrough.adoc
index df6cf21..fc89b6c 100644
--- a/qm-pci-passthrough.adoc
+++ b/qm-pci-passthrough.adoc
@@ -16,16 +16,17 @@ device anymore on the host or in any other VM.
General Requirements
~~~~~~~~~~~~~~~~~~~~
-Since passthrough is a feature which also needs hardware support, there are
-some requirements to check and preparations to be done to make it work.
-
+Since passthrough is performed on real hardware, it needs to fulfill some
+requirements. A brief overview of these requirements is given below; for more
+information on specific devices, see
+https://pve.proxmox.com/wiki/PCI_Passthrough[PCI Passthrough Examples].
Hardware
^^^^^^^^
Your hardware needs to support `IOMMU` (*I*/*O* **M**emory **M**anagement
**U**nit) interrupt remapping, this includes the CPU and the mainboard.
-Generally, Intel systems with VT-d, and AMD systems with AMD-Vi support this.
+Generally, Intel systems with VT-d and AMD systems with AMD-Vi support this.
But it is not guaranteed that everything will work out of the box, due
to bad hardware implementation and missing or low quality drivers.
@@ -44,8 +45,8 @@ some configuration to enable PCI(e) passthrough.
.IOMMU
-First, you have to enable IOMMU support in your BIOS/UEFI. Usually the
-corresponding setting is called `IOMMU` or `VT-d`,but you should find the exact
+First, you will have to enable IOMMU support in your BIOS/UEFI. Usually the
+corresponding setting is called `IOMMU` or `VT-d`, but you should find the exact
option name in the manual of your motherboard.
For Intel CPUs, you may also need to enable the IOMMU on the
@@ -92,6 +93,14 @@ After changing anything modules related, you need to refresh your
# update-initramfs -u -k all
----
+To check if the modules are being loaded, the output of
+
+----
+# lsmod | grep vfio
+----
+
+should include the four modules from above.
+
.Finish Configuration
Finally reboot to bring the changes into effect and check that it is indeed
@@ -105,10 +114,11 @@ should display that `IOMMU`, `Directed I/O` or `Interrupt Remapping` is
enabled, depending on hardware and kernel the exact message can vary.
It is also important that the device(s) you want to pass through
-are in a *separate* `IOMMU` group. This can be checked with:
+are in a *separate* `IOMMU` group. This can be checked with a call to the {pve}
+API:
----
-# find /sys/kernel/iommu_groups/ -type l
+# pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
----
It is okay if the device is in an `IOMMU` group together with its functions
@@ -159,8 +169,8 @@ PCI(e) card, for example a GPU or a network card.
Host Configuration
^^^^^^^^^^^^^^^^^^
-In this case, the host must not use the card. There are two methods to achieve
-this:
+{pve} tries to automatically make the PCI(e) device unavailable to the host.
+However, if this doesn't work, there are two things that can be done:
* pass the device IDs to the options of the 'vfio-pci' modules by adding
+
@@ -175,7 +185,7 @@ the vendor and device IDs obtained by:
# lspci -nn
----
-* blacklist the driver completely on the host, ensuring that it is free to bind
+* blacklist the driver on the host completely, ensuring that it is free to bind
for passthrough, with
+
----
@@ -183,11 +193,49 @@ for passthrough, with
----
+
in a .conf file in */etc/modprobe.d/*.
++
+To find the driver name, execute
++
+----
+# lspci -k
+----
++
+for example:
++
+----
+# lspci -k | grep -A 3 "VGA"
+----
++
+will output something similar to:
++
+----
+01:00.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] (rev a1)
+ Subsystem: Micro-Star International Co., Ltd. [MSI] GP108 [GeForce GT 1030]
+ Kernel driver in use: <some-module>
+ Kernel modules: <some-module>
+----
++
+Now we can blacklist the drivers by writing them into a .conf file:
++
+----
+# echo "blacklist <some-module>" >> /etc/modprobe.d/blacklist.conf
+----
For both methods you need to
xref:qm_pci_passthrough_update_initramfs[update the `initramfs`] again and
reboot after that.
+Should this not work, you might need to set a soft dependency to load the GPU
+modules before loading 'vfio-pci'. This can be done with the 'softdep' flag, see
+also the manpages on 'modprobe.d' for more information.
+
+For example, if you are using drivers named <some-module>:
+
+----
+# echo "softdep <some-module> pre: vfio-pci" >> /etc/modprobe.d/<some-module>.conf
+----
+
+
.Verify Configuration
To check if your changes were successful, you can use
@@ -262,7 +310,6 @@ For example:
# qm set VMID -hostpci0 02:00,device-id=0x10f6,sub-vendor-id=0x0000
----
-
Other considerations
^^^^^^^^^^^^^^^^^^^^
@@ -288,7 +335,6 @@ Currently, the most common use case for this are NICs (**N**etwork
physical port. This allows using features such as checksum offloading, etc. to
be used inside a VM, reducing the (host) CPU overhead.
-
Host Configuration
^^^^^^^^^^^^^^^^^^
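(Not part of the patch hunks above: the driver name shown by `lspci -k` can also be extracted non-interactively, for example to feed into the blacklist or softdep snippets. A minimal sketch, run here against hypothetical sample output; on a real system you would pipe `lspci -k` instead of the sample variable:)

```shell
# Hypothetical `lspci -k` output for a VGA device; real output will differ.
sample='01:00.0 VGA compatible controller: ACME GPU (rev a1)
    Subsystem: ACME GPU
    Kernel driver in use: somedriver
    Kernel modules: somedriver'

# Pull the driver name out of the "Kernel driver in use" line.
driver=$(printf '%s\n' "$sample" | sed -n 's/^[[:space:]]*Kernel driver in use: //p')
echo "$driver"   # prints "somedriver"
```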
diff --git a/system-booting.adoc b/system-booting.adoc
index 30621a6..c80d19c 100644
--- a/system-booting.adoc
+++ b/system-booting.adoc
@@ -272,6 +272,15 @@ initrd /EFI/proxmox/5.0.15-1-pve/initrd.img-5.0.15-1-pve
Editing the Kernel Commandline
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+A complete list of kernel parameters can be found at
+'https://www.kernel.org/doc/html/v<YOUR-KERNEL-VERSION>/admin-guide/kernel-parameters.html'.
+Replace <YOUR-KERNEL-VERSION> with the major.minor version (e.g. 5.15). You can
+find your kernel version by running
+
+----
+# uname -r
+----
+
You can modify the kernel commandline in the following places, depending on the
bootloader used:
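(The two steps above — finding the kernel version and substituting it into the URL — can be combined in the shell. A small sketch, assuming the usual `major.minor.patch-…` format of `uname -r` output:)

```shell
# Derive the major.minor version of the running kernel and build the
# matching kernel-parameters documentation URL.
kver=$(uname -r | cut -d. -f1-2)
echo "https://www.kernel.org/doc/html/v${kver}/admin-guide/kernel-parameters.html"
```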
--
2.30.2