public inbox for pve-devel@lists.proxmox.com
From: Christoph Heiss <c.heiss@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH proxmox-{ve, perl}-rs/common 0/4] use native libnvidia-ml library for vGPU info
Date: Tue, 20 Jan 2026 14:13:08 +0100	[thread overview]
Message-ID: <20260120131319.949986-1-c.heiss@proxmox.com> (raw)

Adds support for using NVML (the Nvidia Management Library) [0] directly
to retrieve information about available (or more concretely: creatable
at VM start) vGPUs for Nvidia GPUs.

This will, in the future, allow supporting anything related to Nvidia
cards/devices in a more stable manner across different driver versions,
as NVML provides a proper, driver-independent abstraction.

E.g. the sysfs interface exposed by the driver may change (as it did
recently for kernels 6.8+) and might expose less information, as is
already the case: the "description" column in the mdev type dropdown
under Hardware -> hostpci* has been empty for Nvidia vGPUs since the
6.8 kernel (IIRC), due to driver changes.

This series restores that functionality, with the description being the
result of `proxmox_ve_vfio::VgpuTypeInfo::description()`, for example:

class=NVS,max-instances=24,max-instances-per-vm=1,
framebuffer-size=1024MiB,num-heads=1,max-resolution=1280x1024,
license=GRID-Virtual-Apps,3.0

In the future, these bindings will also be needed to implement support
for Nvidia MIG (Multi-Instance GPU), for which information is not
exposed in sysfs at all. See also the series [1] sent by Dominik
introducing a new hookscript phase to, among other things, support
manually setting up MIG.

[0] https://developer.nvidia.com/management-library-nvml
[1] https://lore.proxmox.com/pve-devel/20260114155043.3313473-1-d.csapak@proxmox.com/

Apply order
===========

proxmox-ve-rs -> proxmox-perl-rs -> pve-common, same as this series is
laid out. Each package will require a version bump of the previous one.

Diffstat
========

proxmox-ve-rs:

Christoph Heiss (2):
  vfio: add crate for interacting with vfio host devices
  vfio: add rust-native interface for accessing NVIDIA vGPU info

 Cargo.toml                                    |    2 +
 proxmox-ve-vfio/Cargo.toml                    |   18 +
 proxmox-ve-vfio/README.md                     |   25 +
 proxmox-ve-vfio/debian/changelog              |    5 +
 proxmox-ve-vfio/debian/control                |   38 +
 proxmox-ve-vfio/debian/copyright              |   18 +
 proxmox-ve-vfio/debian/debcargo.toml          |    3 +
 .../examples/nv_list_creatable_vgpus.rs       |   15 +
 proxmox-ve-vfio/generate-nvml-bindings.sh     |   27 +
 proxmox-ve-vfio/src/lib.rs                    |    6 +
 proxmox-ve-vfio/src/nvidia/mod.rs             |  126 +
 proxmox-ve-vfio/src/nvidia/nvml/bindings.rs   | 2290 +++++++++++++++++
 proxmox-ve-vfio/src/nvidia/nvml/mod.rs        |  237 ++
 13 files changed, 2810 insertions(+)
 create mode 100644 proxmox-ve-vfio/Cargo.toml
 create mode 100644 proxmox-ve-vfio/README.md
 create mode 100644 proxmox-ve-vfio/debian/changelog
 create mode 100644 proxmox-ve-vfio/debian/control
 create mode 100644 proxmox-ve-vfio/debian/copyright
 create mode 100644 proxmox-ve-vfio/debian/debcargo.toml
 create mode 100644 proxmox-ve-vfio/examples/nv_list_creatable_vgpus.rs
 create mode 100755 proxmox-ve-vfio/generate-nvml-bindings.sh
 create mode 100644 proxmox-ve-vfio/src/lib.rs
 create mode 100644 proxmox-ve-vfio/src/nvidia/mod.rs
 create mode 100644 proxmox-ve-vfio/src/nvidia/nvml/bindings.rs
 create mode 100644 proxmox-ve-vfio/src/nvidia/nvml/mod.rs

proxmox-perl-rs:

Christoph Heiss (1):
  pve: add bindings for proxmox-ve-vfio

 pve-rs/Cargo.toml                          |  1 +
 pve-rs/Makefile                            |  3 +-
 pve-rs/debian/control                      |  1 +
 pve-rs/examples/nv-list-creatable-vgpus.pl | 20 ++++++++++++
 pve-rs/src/lib.rs                          |  1 +
 pve-rs/src/vfio/mod.rs                     |  6 ++++
 pve-rs/src/vfio/nvidia.rs                  | 38 ++++++++++++++++++++++
 7 files changed, 69 insertions(+), 1 deletion(-)
 create mode 100755 pve-rs/examples/nv-list-creatable-vgpus.pl
 create mode 100644 pve-rs/src/vfio/mod.rs
 create mode 100644 pve-rs/src/vfio/nvidia.rs

pve-common:

Christoph Heiss (1):
  sysfs: use new PVE::RS::VFIO::Nvidia module to retrieve vGPU info

 src/PVE/SysFSTools.pm | 45 ++++++++++++++++++++++++++++++-------------
 1 file changed, 32 insertions(+), 13 deletions(-)

-- 
2.47.0



