From: "Christoph Heiss" <c.heiss@proxmox.com>
To: "Dominik Csapak" <d.csapak@proxmox.com>
Cc: pve-devel@lists.proxmox.com
Subject: Re: [PATCH proxmox-perl-rs 1/1] pve: add binding for accessing vgpu info
Date: Thu, 19 Mar 2026 12:16:56 +0100 [thread overview]
Message-ID: <DH6PT65DFX1Q.36ZZKMY6K8LG5@proxmox.com> (raw)
In-Reply-To: <20260305091711.1221589-10-d.csapak@proxmox.com>
Two comments inline.
Other than that, please consider it:
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
On Thu Mar 5, 2026 at 10:16 AM CET, Dominik Csapak wrote:
[..]
> diff --git a/pve-rs/Cargo.toml b/pve-rs/Cargo.toml
> index 45389b5..3b6c2fc 100644
> --- a/pve-rs/Cargo.toml
> +++ b/pve-rs/Cargo.toml
> @@ -20,6 +20,7 @@ hex = "0.4"
> http = "1"
> libc = "0.2"
> nix = "0.29"
> +nvml-wrapper = "0.12"
Missing the respective entry in d/control.
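For reference, that would be something along these lines — the exact package name is an assumption on my part, depending on what name debcargo ends up generating for the crate:

```
Build-Depends:
 [...]
 librust-nvml-wrapper-0.12+default-dev,
```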
[..]
> diff --git a/pve-rs/src/bindings/nvml.rs b/pve-rs/src/bindings/nvml.rs
> new file mode 100644
> index 0000000..0f4c81e
> --- /dev/null
> +++ b/pve-rs/src/bindings/nvml.rs
> @@ -0,0 +1,91 @@
> +//! Provides access to the state of NVIDIA (v)GPU devices connected to the system.
> +
> +#[perlmod::package(name = "PVE::RS::NVML", lib = "pve_rs")]
> +pub mod pve_rs_nvml {
> + //! The `PVE::RS::NVML` package.
> + //!
> + //! Provides high level helpers to get info from the system with NVML.
> +
> + use anyhow::Result;
> + use nvml_wrapper::Nvml;
> + use perlmod::Value;
> +
> + /// Retrieves a list of *creatable* vGPU types for the specified GPU by bus id.
> + ///
> + /// The [`bus_id`] is of format "\<domain\>:\<bus\>:\<device\>.\<function\>",
> + /// e.g. "0000:01:01.0".
> + ///
> + /// # See also
> + ///
> + /// [`nvmlDeviceGetCreatableVgpus`]: <https://docs.nvidia.com/deploy/nvml-api/group__nvmlVgpu.html#group__nvmlVgpu_1ge86fff933c262740f7a374973c4747b6>
> + /// [`nvmlDeviceGetHandleByPciBusId_v2`]: <https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceQueries.html#group__nvmlDeviceQueries_1gea7484bb9eac412c28e8a73842254c05>
> + /// [`struct nvmlPciInfo_t`]: <https://docs.nvidia.com/deploy/nvml-api/structnvmlPciInfo__t.html#structnvmlPciInfo__t_1a4d54ad9b596d7cab96ecc34613adbe4>
> + #[export]
> + fn creatable_vgpu_types_for_dev(bus_id: &str) -> Result<Vec<Value>> {
> + let nvml = Nvml::init()?;
Looking at this, I was wondering how expensive that call is, considering
this path is triggered from the API. Same for
supported_vgpu_types_for_dev() below.

Did some quick & simple benchmarking - on average, `Nvml::init()` took
~32ms, with quite some variance; at best ~26ms, up to a worst case of
>150ms.

IMO nothing worth blocking the series on, as this falls into
premature-optimization territory and can be fixed in the future, if
needed.

Holding an instance in memory might also be problematic on driver
upgrades? I.e. we'd keep an old version of the library loaded, and thus
end up with a mismatched API.

The above results were measured with one GPU only though, so things
could potentially be worse on multi-GPU systems.
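FWIW, the measurement boiled down to a trivial timing helper along these
lines — generic over a closure so it can be sanity-checked without a GPU;
for the actual numbers the closure just wrapped `Nvml::init()`:

```rust
use std::time::{Duration, Instant};

/// Run `f` `iters` times and return (average, minimum, maximum) wall time.
fn bench<F: FnMut()>(iters: u32, mut f: F) -> (Duration, Duration, Duration) {
    let mut min = Duration::MAX;
    let mut max = Duration::ZERO;
    let mut total = Duration::ZERO;

    for _ in 0..iters {
        let start = Instant::now();
        f();
        let elapsed = start.elapsed();

        total += elapsed;
        min = min.min(elapsed);
        max = max.max(elapsed);
    }

    (total / iters, min, max)
}

fn main() {
    // Dummy workload; in the actual run this was
    // `|| { let _ = nvml_wrapper::Nvml::init(); }`.
    let (avg, min, max) = bench(10, || {
        std::thread::sleep(Duration::from_millis(1));
    });
    println!("avg {avg:?}, min {min:?}, max {max:?}");
}
```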
> + let device = nvml.device_by_pci_bus_id(bus_id)?;
> +
> + build_vgpu_type_list(device.vgpu_creatable_types()?)
> + }
> +
> + /// Retrieves a list of *supported* vGPU types for the specified GPU by bus id.
> + ///
> + /// The [`bus_id`] is of format "\<domain\>:\<bus\>:\<device\>.\<function\>",
> + /// e.g. "0000:01:01.0".
> + ///
> + /// # See also
> + ///
> + /// [`nvmlDeviceGetSupportedVgpus`]: <https://docs.nvidia.com/deploy/nvml-api/group__nvmlVgpu.html#group__nvmlVgpu_1ge084b87e80350165859500ebec714274>
> + /// [`nvmlDeviceGetHandleByPciBusId_v2`]: <https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceQueries.html#group__nvmlDeviceQueries_1gea7484bb9eac412c28e8a73842254c05>
> + /// [`struct nvmlPciInfo_t`]: <https://docs.nvidia.com/deploy/nvml-api/structnvmlPciInfo__t.html#structnvmlPciInfo__t_1a4d54ad9b596d7cab96ecc34613adbe4>
> + #[export]
> + fn supported_vgpu_types_for_dev(bus_id: &str) -> Result<Vec<Value>> {
> + let nvml = Nvml::init()?;
> + let device = nvml.device_by_pci_bus_id(bus_id)?;
> +
> + build_vgpu_type_list(device.vgpu_supported_types()?)
> + }
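On the init cost: if it ever does become a problem, caching the handle in
a lazily-initialized global would be the obvious fix. Rough sketch of the
pattern below, with a stand-in struct instead of the real `Nvml` handle.
Note that `Nvml::init()` is fallible and `OnceLock::get_or_try_init` is
still unstable, so real code would need `once_cell` or explicit error
handling - and the driver-upgrade caveat from above would still apply,
since the handle then lives for the whole process lifetime:

```rust
use std::sync::OnceLock;

// Stand-in for the expensive handle; in the real code this would be
// `nvml_wrapper::Nvml`, created via `Nvml::init()`.
struct Handle {
    init_count: u32,
}

static HANDLE: OnceLock<Handle> = OnceLock::new();

fn handle() -> &'static Handle {
    // The closure runs at most once per process, regardless of how many
    // API calls end up here.
    HANDLE.get_or_init(|| Handle { init_count: 1 })
}

fn main() {
    // Repeated lookups return the same cached instance.
    assert!(std::ptr::eq(handle(), handle()));
    println!("initialized {} time(s)", handle().init_count);
}
```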