From: Dominik Csapak <d.csapak@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH proxmox-perl-rs 1/1] pve: add binding for accessing vgpu info
Date: Thu, 5 Mar 2026 10:16:53 +0100 [thread overview]
Message-ID: <20260305091711.1221589-10-d.csapak@proxmox.com> (raw)
In-Reply-To: <20260305091711.1221589-1-d.csapak@proxmox.com>
Add basic Perl bindings for querying the creatable and supported vGPU
types of NVIDIA GPUs via NVML.
The 'supported' helper is not used yet, but it will be useful once we
want to provide a better API response for the available mdevs.
The description generated here matches the format that used to be
exposed via sysfs by the standard mdev API.
Co-developed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
pve-rs/Cargo.toml | 1 +
pve-rs/Makefile | 1 +
pve-rs/src/bindings/mod.rs | 3 ++
pve-rs/src/bindings/nvml.rs | 91 +++++++++++++++++++++++++++++++++++++
4 files changed, 96 insertions(+)
create mode 100644 pve-rs/src/bindings/nvml.rs
diff --git a/pve-rs/Cargo.toml b/pve-rs/Cargo.toml
index 45389b5..3b6c2fc 100644
--- a/pve-rs/Cargo.toml
+++ b/pve-rs/Cargo.toml
@@ -20,6 +20,7 @@ hex = "0.4"
http = "1"
libc = "0.2"
nix = "0.29"
+nvml-wrapper = "0.12"
openssl = "0.10.40"
serde = "1.0"
serde_bytes = "0.11"
diff --git a/pve-rs/Makefile b/pve-rs/Makefile
index aa7181e..4698358 100644
--- a/pve-rs/Makefile
+++ b/pve-rs/Makefile
@@ -27,6 +27,7 @@ PERLMOD_GENPACKAGE := /usr/lib/perlmod/genpackage.pl \
PERLMOD_PACKAGES := \
PVE::RS::Firewall::SDN \
+ PVE::RS::NVML \
PVE::RS::OCI \
PVE::RS::OpenId \
PVE::RS::ResourceScheduling::Static \
diff --git a/pve-rs/src/bindings/mod.rs b/pve-rs/src/bindings/mod.rs
index c21b328..132079a 100644
--- a/pve-rs/src/bindings/mod.rs
+++ b/pve-rs/src/bindings/mod.rs
@@ -12,6 +12,9 @@ pub use tfa::pve_rs_tfa;
mod openid;
pub use openid::pve_rs_open_id;
+mod nvml;
+pub use nvml::pve_rs_nvml;
+
pub mod firewall;
mod sdn;
diff --git a/pve-rs/src/bindings/nvml.rs b/pve-rs/src/bindings/nvml.rs
new file mode 100644
index 0000000..0f4c81e
--- /dev/null
+++ b/pve-rs/src/bindings/nvml.rs
@@ -0,0 +1,91 @@
+//! Provides access to the state of NVIDIA (v)GPU devices connected to the system.
+
+#[perlmod::package(name = "PVE::RS::NVML", lib = "pve_rs")]
+pub mod pve_rs_nvml {
+ //! The `PVE::RS::NVML` package.
+ //!
+ //! Provides high-level helpers to query vGPU information from the system via NVML.
+
+ use anyhow::Result;
+ use nvml_wrapper::Nvml;
+ use perlmod::Value;
+
+ /// Retrieves a list of *creatable* vGPU types for the specified GPU by bus id.
+ ///
+ /// `bus_id` has the format "\<domain\>:\<bus\>:\<device\>.\<function\>",
+ /// e.g. "0000:01:01.0".
+ ///
+ /// # See also
+ ///
+ /// * [`nvmlDeviceGetCreatableVgpus`](https://docs.nvidia.com/deploy/nvml-api/group__nvmlVgpu.html#group__nvmlVgpu_1ge86fff933c262740f7a374973c4747b6)
+ /// * [`nvmlDeviceGetHandleByPciBusId_v2`](https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceQueries.html#group__nvmlDeviceQueries_1gea7484bb9eac412c28e8a73842254c05)
+ /// * [`struct nvmlPciInfo_t`](https://docs.nvidia.com/deploy/nvml-api/structnvmlPciInfo__t.html#structnvmlPciInfo__t_1a4d54ad9b596d7cab96ecc34613adbe4)
+ #[export]
+ fn creatable_vgpu_types_for_dev(bus_id: &str) -> Result<Vec<Value>> {
+ let nvml = Nvml::init()?;
+ let device = nvml.device_by_pci_bus_id(bus_id)?;
+
+ build_vgpu_type_list(device.vgpu_creatable_types()?)
+ }
+
+ /// Retrieves a list of *supported* vGPU types for the specified GPU by bus id.
+ ///
+ /// `bus_id` has the format "\<domain\>:\<bus\>:\<device\>.\<function\>",
+ /// e.g. "0000:01:01.0".
+ ///
+ /// # See also
+ ///
+ /// * [`nvmlDeviceGetSupportedVgpus`](https://docs.nvidia.com/deploy/nvml-api/group__nvmlVgpu.html#group__nvmlVgpu_1ge084b87e80350165859500ebec714274)
+ /// * [`nvmlDeviceGetHandleByPciBusId_v2`](https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceQueries.html#group__nvmlDeviceQueries_1gea7484bb9eac412c28e8a73842254c05)
+ /// * [`struct nvmlPciInfo_t`](https://docs.nvidia.com/deploy/nvml-api/structnvmlPciInfo__t.html#structnvmlPciInfo__t_1a4d54ad9b596d7cab96ecc34613adbe4)
+ #[export]
+ fn supported_vgpu_types_for_dev(bus_id: &str) -> Result<Vec<Value>> {
+ let nvml = Nvml::init()?;
+ let device = nvml.device_by_pci_bus_id(bus_id)?;
+
+ build_vgpu_type_list(device.vgpu_supported_types()?)
+ }
+
+ fn build_vgpu_type_list(vgpu_types: Vec<nvml_wrapper::vgpu::VgpuType>) -> Result<Vec<Value>> {
+ let mut result = Vec::with_capacity(vgpu_types.len());
+ for vgpu in vgpu_types {
+ let mut value = perlmod::Value::new_hash();
+ if let Some(hash) = value.as_hash_mut() {
+ hash.insert("id", Value::new_uint(vgpu.id() as usize));
+ hash.insert("name", Value::new_string(&vgpu.name()?));
+ hash.insert("description", Value::new_string(&description(&vgpu)?));
+ }
+
+ result.push(Value::new_ref(&value));
+ }
+
+ Ok(result)
+ }
+
+ // A description like the one that used to be exposed via sysfs by the standard mdev interface.
+ fn description(vgpu_type: &nvml_wrapper::vgpu::VgpuType) -> Result<String> {
+ let class_name = vgpu_type.class_name()?;
+ let max_instances = vgpu_type.max_instances()?;
+ let max_instances_per_vm = vgpu_type.max_instances_per_vm()?;
+
+ let framebuffer_size_mb = vgpu_type.framebuffer_size()? / 1024 / 1024; // bytes to MiB
+ let num_heads = vgpu_type.num_display_heads()?;
+
+ let (max_res_x, max_res_y) = (0..num_heads)
+ .filter_map(|head| vgpu_type.resolution(head).ok())
+ .max()
+ .unwrap_or((0, 0));
+
+ let license = vgpu_type.license()?;
+
+ Ok(format!(
+ "class={class_name}\n\
+ max-instances={max_instances}\n\
+ max-instances-per-vm={max_instances_per_vm}\n\
+ framebuffer-size={framebuffer_size_mb}MiB\n\
+ num-heads={num_heads}\n\
+ max-resolution={max_res_x}x{max_res_y}\n\
+ license={license}"
+ ))
+ }
+}
--
2.47.3
Thread overview: 14+ messages
2026-03-05 9:16 [PATCH common/debcargo-conf/manager/proxmox-perl-rs/qemu-server 00/13] use NVML for vGPU info querying Dominik Csapak
2026-03-05 9:16 ` [PATCH debcargo-conf 1/8] nvml-wrapper-sys: Update to 0.9.0 Dominik Csapak
2026-03-05 9:16 ` [PATCH debcargo-conf 2/8] nvml-wrapper-sys: release 0.9.0-1 Dominik Csapak
2026-03-05 9:16 ` [PATCH debcargo-conf 3/8] nvml-wrapper: Update to 0.11.0 Dominik Csapak
2026-03-05 9:16 ` [PATCH debcargo-conf 4/8] nvml-wrapper: release 0.11.0-1 Dominik Csapak
2026-03-05 9:16 ` [PATCH debcargo-conf 5/8] nvml-wrapper: Update to 0.12.0 Dominik Csapak
2026-03-05 9:16 ` [PATCH debcargo-conf 6/8] nvml-wrapper: add patch for vgpu ids Dominik Csapak
2026-03-05 9:16 ` [PATCH debcargo-conf 7/8] backport nvml-wrapper-sys 0.9.0-1 Dominik Csapak
2026-03-05 9:16 ` [PATCH debcargo-conf 8/8] backport nvml-wrapper 0.12.0-1 Dominik Csapak
2026-03-05 9:16 ` Dominik Csapak [this message]
2026-03-05 9:16 ` [PATCH qemu-server 1/2] pci: move mdev related code to own module Dominik Csapak
2026-03-05 9:16 ` [PATCH qemu-server 2/2] pci: mdev: use PVE::RS::NVML for nvidia mdev information Dominik Csapak
2026-03-05 9:16 ` [PATCH manager 1/1] api: hardware: pci: use NVML for querying " Dominik Csapak
2026-03-05 9:16 ` [PATCH common 1/1] sysfs tools: remove moved code Dominik Csapak