From: Arthur Bied-Charreton <a.bied-charreton@proxmox.com>
To: pve-devel@lists.proxmox.com
Cc: Stefan Reiter <s.reiter@proxmox.com>
Subject: [PATCH qemu-server 7/8] api: qemu: Extend cpu-flags endpoint to return actually supported flags
Date: Thu, 12 Mar 2026 09:40:20 +0100
Message-ID: <20260312084021.124465-8-a.bied-charreton@proxmox.com>
In-Reply-To: <20260312084021.124465-1-a.bied-charreton@proxmox.com>
Previously, the endpoint returned a hardcoded list of flags. It now
returns only flags that are both recognized by QEMU and supported by at
least one cluster node, giving a reasonably accurate picture of which
flags can actually be used on the current cluster.
The `nested-virt` entry is prepended to the result list. An optional
`accel` parameter filters results by virtualization type (`kvm` or
`tcg`), which helps avoid misconfigurations when assigning flags to VMs
with a specific acceleration setting.
This deviates from the original patch [0] by adding the functionality to
the `cpu-flags` endpoint instead of adding new endpoints. This is
because we never need the understood/supported flags alone in the
frontend, only their intersection. This also improves the VM CPU flag
selector by letting users select from all possible flags in their
cluster.
When `aarch64` is passed as the `arch` argument, the index returns an
empty list, consistent with the behavior before this patch.
[0]
https://lore.proxmox.com/pve-devel/20211028114150.3245864-3-s.reiter@proxmox.com/
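The intersection described above can be sketched like this. This is a simplified Python illustration with made-up flag and node names, not the actual module code; it only models the core idea that a flag is returned when it is both understood by QEMU and supported somewhere, with the `vmx`/`svm` aliases folded into the prepended synthetic `nested-virt` entry:

```python
# vmx/svm are represented by the synthetic nested-virt entry, as in the patch.
NESTED_VIRT_ALIASES = {"vmx", "svm"}

understood = ["aes", "avx", "vmx", "pdpe1gb"]  # flags QEMU accepts in theory
supported_on = {                               # flags seen on at least one node
    "aes": ["node1", "node2"],
    "avx": ["node2"],
    "vmx": ["node1"],
    "xsave": ["node1"],
}

result = [{"name": "nested-virt"}]  # synthetic entry is prepended
for flag in sorted(set(understood) & set(supported_on)):
    if flag in NESTED_VIRT_ALIASES:
        continue  # already covered by the nested-virt entry
    result.append({"name": flag, "supported-on": supported_on[flag]})

print([entry["name"] for entry in result])  # -> ['nested-virt', 'aes', 'avx']
```

Note how `pdpe1gb` (understood but unsupported) and `xsave` (supported but not understood) both drop out, which is why neither list alone is sufficient.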
Originally-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Arthur Bied-Charreton <a.bied-charreton@proxmox.com>
---
src/PVE/API2/Qemu/CPUFlags.pm | 108 +++++++++++++++++++++++++++++++++-
1 file changed, 107 insertions(+), 1 deletion(-)
diff --git a/src/PVE/API2/Qemu/CPUFlags.pm b/src/PVE/API2/Qemu/CPUFlags.pm
index 672bd2d2..9baf6c3e 100644
--- a/src/PVE/API2/Qemu/CPUFlags.pm
+++ b/src/PVE/API2/Qemu/CPUFlags.pm
@@ -10,6 +10,9 @@ use PVE::QemuServer::CPUConfig;
use base qw(PVE::RESTHandler);
+# vmx/svm are already represented by the nested-virt synthetic entry
+my %NESTED_VIRT_ALIASES = map { $_ => 1 } qw(vmx svm);
+
__PACKAGE__->register_method({
name => 'index',
path => '',
@@ -21,6 +24,13 @@ __PACKAGE__->register_method({
properties => {
node => get_standard_option('pve-node'),
arch => get_standard_option('pve-qm-cpu-arch', { optional => 1 }),
+ accel => {
+ description =>
+ 'Virtualization type to filter flags by. If not provided, return all flags.',
+ type => 'string',
+ enum => [qw(kvm tcg)],
+ optional => 1,
+ },
},
},
returns => {
@@ -35,6 +45,13 @@ __PACKAGE__->register_method({
description => {
type => 'string',
description => "Description of the CPU flag.",
+ optional => 1,
+ },
+ 'supported-on' => {
+ description => 'List of nodes supporting the flag.',
+ type => 'array',
+ items => get_standard_option('pve-node'),
+ optional => 1,
},
},
},
@@ -42,10 +59,99 @@ __PACKAGE__->register_method({
code => sub {
my ($param) = @_;
+ my $accel = extract_param($param, 'accel');
my $arch = extract_param($param, 'arch');
- return PVE::QemuServer::CPUConfig::get_supported_cpu_flags($arch);
+ if (defined($arch) && $arch eq 'aarch64') {
+ return [];
+ }
+
+ my $descriptions = PVE::QemuServer::CPUConfig::description_by_flag($arch);
+
+ my $nested_virt = {
+ name => 'nested-virt',
+ description => $descriptions->{'nested-virt'},
+ };
+
+ return [$nested_virt, @{ extract_flags($descriptions, $accel) }];
},
});
+# As described here [0], in order to get an accurate picture of which flags can actually be used, we need
+# to intersect:
+#
+# 1. The understood CPU flags, i.e., all flags QEMU accepts in theory, but that may not be actually
+# supported by the host CPU, and
+# 2. The supported CPU flags, which returns some settings/flags that cannot be used as `-cpu` arguments.
+#
+# [0] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=src/PVE/QemuServer.pm;h=09e7a19b2f11ef48d2cfc11837b70338c306817c;hb=refs/heads/master#l2916
+sub extract_flags($descriptions, $accel = undef) {
+ my $recognized = extract_understood($descriptions);
+ my $supported = extract_supported($descriptions, $accel);
+
+ my %recognized_set = map { $_->{name} => 1 } @$recognized;
+
+ return [
+ map { {
+ name => $_->{name},
+ 'supported-on' => $_->{'supported-on'},
+ (defined($_->{description}) ? (description => $_->{description}) : ()),
+ } } grep {
+ $recognized_set{ $_->{name} }
+ } @$supported
+ ];
+}
+
+sub extract_understood($descriptions) {
+ my $understood_cpu_flags = PVE::QemuServer::query_understood_cpu_flags();
+
+ return [
+ map {
+ my $entry = { name => $_ };
+ $entry->{description} = $descriptions->{$_} if $descriptions->{$_};
+ $entry;
+ } grep {
+ !$NESTED_VIRT_ALIASES{$_}
+ } @$understood_cpu_flags
+ ];
+}
+
+# We do not use `PVE::QemuServer::CPUConfig::query_supported_cpu_flags`, which is quite expensive since
+# it needs to spawn QEMU instances in order to check which flags are supported. Rather, we use its cached
+# output, which is stored by `pvestatd` [0].
+#
+# [0] https://git.proxmox.com/?p=pve-manager.git;a=blob;f=PVE/Service/pvestatd.pm;h=d0719446e3b9a5f1bd3c861dbe768432cb3d7a0e;hb=refs/heads/master#l87
+sub extract_supported($descriptions, $accel = undef) {
+ my %hash;
+
+ my sub add_flags($flags_by_node) {
+ for my $node (keys %$flags_by_node) {
+ # This depends on `pvestatd` storing the flags in space-separated format, which is the case
+ # at the time of this commit.
+ for (split(' ', $flags_by_node->{$node})) {
+ if ($hash{$_}) {
+ $hash{$_}->{'supported-on'}->{$node} = 1;
+ } else {
+ $hash{$_} = { 'supported-on' => { $node => 1 }, name => $_ };
+ }
+ }
+ }
+ }
+
+ add_flags(PVE::Cluster::get_node_kv('cpuflags-kvm')) if !defined($accel) || $accel eq 'kvm';
+ add_flags(PVE::Cluster::get_node_kv('cpuflags-tcg')) if !defined($accel) || $accel eq 'tcg';
+
+ return [
+ map {
+ my $entry = { %$_, 'supported-on' => [sort keys %{ $_->{'supported-on'} }] };
+ $entry->{description} = $descriptions->{ $_->{name} }
+ if $descriptions->{ $_->{name} };
+ $entry;
+ } sort {
+ $a->{name} cmp $b->{name}
+ } grep {
+ !$NESTED_VIRT_ALIASES{ $_->{name} }
+ } values %hash
+ ];
+}
1;
--
2.47.3