* [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
@ 2026-03-16 22:28 Ryosuke Nakayama
2026-03-16 22:28 ` [RFC PATCH manager 1/2] api: network: add VPP (fd.io) dataplane bridge support Ryosuke Nakayama
` (4 more replies)
0 siblings, 5 replies; 11+ messages in thread
From: Ryosuke Nakayama @ 2026-03-16 22:28 UTC (permalink / raw)
To: pve-devel
From: ryskn <ryosuke.nakayama@ryskn.com>
This RFC series integrates VPP (Vector Packet Processing, fd.io) as an
optional userspace dataplane alongside OVS in Proxmox VE.
VPP is a DPDK-based, userspace packet processing framework that
provides VM networking via vhost-user sockets. It is already used in
production by several cloud/telecom stacks. The motivation here is to
expose VPP bridge domains natively in the PVE WebUI and REST API,
following the same pattern as OVS integration.
Background and prior discussion:
https://forum.proxmox.com/threads/interest-in-vpp-vector-packet-processing-as-a-dataplane-option-for-proxmox.181530/
Note: the benchmark figures quoted in that forum thread are slightly
off due to test configuration differences. Please use the numbers in
this cover letter instead.
--- What the patches do ---
Patch 1 (pve-manager):
- Detect VPP bridges via 'vppctl show bridge-domain' and expose
them as type=VPPBridge in the network interface list
- Create/delete VPP bridge domains via vppctl
- Persist bridge domains to /etc/vpp/pve-bridges.conf (exec'd at
VPP startup) so they survive reboots
- Support vpp_vlan_aware flag (maps to bridge-domain learn flag)
- VPP VLAN subinterface create/delete/list, persisted to
/etc/vpp/pve-vlans.conf
- Exclude VPP bridges from the SDN-only access guard so they appear
in the WebUI NIC selector
- Vhost-user socket convention:
/var/run/vpp/qemu-<vmid>-<net>.sock
- pve8to9: add upgrade checker for VPP dependencies
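To make the persistence format concrete: the backend is Perl, but the
following Python sketch restates how pve-bridges.conf is rendered from the
bridge-domain state, plus the vhost-user socket naming convention. The
function names here are illustrative, not part of the patch.

```python
def render_bridges_conf(bridges):
    """Render the startup-exec file for VPP bridge domains.

    `bridges` maps bridge-domain ID -> {"vlan_aware": bool}; mirrors the
    format written by vpp_save_bridges_conf() in the Perl backend.
    """
    lines = ["# Auto-generated by PVE - do not edit manually"]
    for bd_id in sorted(bridges):
        if bd_id == 0:  # bridge-domain 0 is reserved by VPP
            continue
        lines.append(
            f"create bridge-domain {bd_id} learn 1 forward 1 uu-flood 1 arp-term 0"
        )
        if bridges[bd_id].get("vlan_aware"):
            lines.append(f"set bridge-domain property {bd_id} learn enable")
    return "\n".join(lines) + "\n"


def vhost_socket_path(vmid, net):
    """Vhost-user socket convention used by the series."""
    return f"/var/run/vpp/qemu-{vmid}-{net}.sock"
```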
Patch 2 (proxmox-widget-toolkit):
- Add VPPBridge/VPPVlan to network_iface_types (Utils.js)
- NetworkView: VPPBridge and VPPVlan entries in the Create menu;
render vlan-raw-device in Ports/Slaves column for VPPVlan;
vpp_vlan_aware support in VLAN aware column
- NetworkEdit: vppbrN name validator; vpp_bridge field for VPPVlan;
hide MTU/Autostart/IP fields for VPP types; use VlanName vtype
for VPPVlan (allows dot notation, e.g. tap0.100)
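The naming rules above boil down to two patterns: vppbrN for bridge domains
(N >= 1, since bridge-domain 0 is reserved by VPP) and <parent>.<vlan-id>
dot notation for VLAN subinterfaces. The shipped validators live in
NetworkEdit.js (ExtJS); this Python sketch only restates the rules, and its
function names are illustrative.

```python
import re

# Illustrative restatement of the UI-side validators.
VPP_BRIDGE_RE = re.compile(r"^vppbr(\d+)$")   # e.g. vppbr1
VPP_VLAN_RE = re.compile(r"^(.+)\.(\d+)$")    # e.g. tap0.100


def valid_vpp_bridge(name):
    """vppbrN with N >= 1 (bridge-domain 0 is reserved by VPP)."""
    m = VPP_BRIDGE_RE.match(name)
    return bool(m) and int(m.group(1)) != 0


def parse_vpp_vlan(name):
    """Split <parent>.<vlan-id>; returns (parent, vlan_id) or None."""
    m = VPP_VLAN_RE.match(name)
    return (m.group(1), int(m.group(2))) if m else None
```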
--- Testing ---
As my test environment has no spare physical NICs, all benchmarks were
performed as VM-to-VM communication over the hypervisor's virtual switch
(vmbr1 or a VPP bridge domain). The results therefore reflect virtual
switching overhead, not physical NIC performance; with physical NICs,
VPP's DPDK polling would likely show a larger advantage.
Host: Proxmox VE 8.x (Intel Xeon), VMs: Debian 12 (virtio-net q=1)
VPP: 24.06, coalescing: frames=32 time=0.5ms, polling mode
iperf3 / netperf (single queue, VM-to-VM):

    Metric               vmbr1            VPP (vhost-user)
    iperf3               31.0 Gbit/s      13.2 Gbit/s
    netperf TCP_STREAM   32,243 Mbit/s    13,181 Mbit/s
    netperf TCP_RR       15,734 trans/s   989 trans/s
VPP's raw throughput is lower than vmbr1 in this VM-to-VM setup due
to vhost-user coalescing latency. Physical NIC testing (DPDK PMD) is
expected to close or reverse this gap.
gRPC (unary, grpc-flow-bench, single queue, VM-to-VM):

    Flows   Metric    vmbr1      VPP
    100     RPS       32,847     39,742
    100     p99 lat   7.28 ms    6.16 ms
    1000    RPS       40,315     41,139
    1000    p99 lat   48.96 ms   31.96 ms
VPP's userspace polling removes kernel scheduler jitter, which is
visible in the gRPC latency results even in the VM-to-VM scenario.
--- Known limitations / TODO ---
- No ifupdown2 integration yet; VPP config is managed separately via
/etc/vpp/pve-bridges.conf and pve-vlans.conf
- No live migration path for vhost-user sockets (sockets must be
pre-created on the target host)
- OVS and VPP cannot share the same physical NIC in this
implementation
- VPP must be installed and running independently (not managed by PVE)
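Since live migration requires the vhost-user sockets to already exist on
the target host, a pre-flight check along these lines could verify them,
using the socket convention from Patch 1. This helper is a hypothetical
sketch for illustration, not part of the series.

```python
import os


def missing_vhost_sockets(vmid, nics, run_dir="/var/run/vpp"):
    """Return the NICs (e.g. 'net0') whose vhost-user socket is absent.

    Follows the qemu-<vmid>-<net>.sock convention; hypothetical helper,
    not part of the patches.
    """
    missing = []
    for nic in nics:
        path = os.path.join(run_dir, f"qemu-{vmid}-{nic}.sock")
        if not os.path.exists(path):
            missing.append(nic)
    return missing
```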
--- CLA ---
Individual CLA has been submitted to office@proxmox.com.
---
ryskn (2):
api: network: add VPP (fd.io) dataplane bridge support
ui: network: add VPP (fd.io) bridge type support
PVE/API2/Network.pm | 413 ++++++++++++++++++++++++++-
PVE/API2/Nodes.pm | 19 ++
PVE/CLI/pve8to9.pm | 48 ++++
www/manager6/form/BridgeSelector.js | 5 +
www/manager6/lxc/Network.js | 34 +++
www/manager6/node/Config.js | 1 +
www/manager6/qemu/NetworkEdit.js | 27 ++
www/manager6/window/Migrate.js | 48 ++++
src/Utils.js | 2 +
src/node/NetworkEdit.js | 64 ++++-
src/node/NetworkView.js | 35 +++
11 files changed, 675 insertions(+), 21 deletions(-)
--
2.50.1 (Apple Git-155)
^ permalink raw reply [flat|nested] 11+ messages in thread* [RFC PATCH manager 1/2] api: network: add VPP (fd.io) dataplane bridge support 2026-03-16 22:28 [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama @ 2026-03-16 22:28 ` Ryosuke Nakayama 2026-03-16 22:28 ` [RFC PATCH widget-toolkit 2/2] ui: network: add VPP (fd.io) bridge type support Ryosuke Nakayama ` (3 subsequent siblings) 4 siblings, 0 replies; 11+ messages in thread From: Ryosuke Nakayama @ 2026-03-16 22:28 UTC (permalink / raw) To: pve-devel From: ryskn <ryosuke.nakayama@ryskn.com> Integrate VPP (fd.io) as an alternative network dataplane alongside OVS. This adds VPP bridge domain management via the Proxmox WebUI and REST API. Backend (PVE/API2/Network.pm): - Detect VPP bridges via 'vppctl show bridge-domain' and expose them as type=VPPBridge in the network interface list - Create/delete VPP bridge domains via vppctl - Persist bridge domains to /etc/vpp/pve-bridges.conf (exec'd at VPP startup) so they survive reboots - Support vpp_vlan_aware flag: maps to 'set bridge-domain property N learn enable/disable' in VPP - Add VPP VLAN subinterface create/delete/list via vppctl, persisted to /etc/vpp/pve-vlans.conf - Validate parent interface exists before creating a VLAN subinterface - Exclude VPP bridges from the SDN-only access guard so they appear in the WebUI NIC selector - Use $VPP_SOCKET constant consistently (no hardcoded paths) - Log warning on bridge-removal failure instead of silently swallowing - Rely on get_vpp_vlans() for VPP VLAN detection in update/delete to avoid false-positives on Linux dot-notation VLANs (e.g. 
eth0.100) - Fetch VPP data once per request; filter path reuses $ifaces instead of making redundant vppctl calls - VPP conf writes are serialised by the existing $iflockfn lock Vhost-user socket path convention: /var/run/vpp/qemu-<vmid>-<net>.sock Signed-off-by: ryskn <ryosuke.nakayama@ryskn.com> --- PVE/API2/Network.pm | 413 +++++++++++++++++++++++++++- PVE/API2/Nodes.pm | 19 ++ PVE/CLI/pve8to9.pm | 48 ++++ www/manager6/form/BridgeSelector.js | 5 + www/manager6/lxc/Network.js | 34 +++ www/manager6/node/Config.js | 1 + www/manager6/qemu/NetworkEdit.js | 27 ++ www/manager6/window/Migrate.js | 48 ++++ 8 files changed, 590 insertions(+), 5 deletions(-) diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm index fc053fec..f87a8f79 100644 --- a/PVE/API2/Network.pm +++ b/PVE/API2/Network.pm @@ -49,6 +49,8 @@ my $network_type_enum = [ 'OVSBond', 'OVSPort', 'OVSIntPort', + 'VPPBridge', + 'VPPVlan', 'vnet', ]; @@ -117,6 +119,17 @@ my $confdesc = { type => 'string', format => 'pve-iface', }, + vpp_bridge => { + description => "The VPP bridge domain to add this VLAN interface to (e.g. 
vppbr1).", + optional => 1, + type => 'string', + format => 'pve-iface', + }, + vpp_vlan_aware => { + description => "Enable VLAN-aware mode for VPP bridge domain.", + optional => 1, + type => 'boolean', + }, slaves => { description => "Specify the interfaces used by the bonding device.", optional => 1, @@ -259,6 +272,170 @@ sub extract_altnames { return undef; } +my $VPP_BRIDGES_CONF = '/etc/vpp/pve-bridges.conf'; +my $VPP_VLANS_CONF = '/etc/vpp/pve-vlans.conf'; +my $VPP_SOCKET = '/run/vpp/cli.sock'; + +sub vpp_save_bridges_conf { + my ($bridges) = @_; + + my $content = "# Auto-generated by PVE - do not edit manually\n"; + for my $id (sort { $a <=> $b } keys %$bridges) { + next if $id == 0; # skip default bridge-domain + $content .= "create bridge-domain $id learn 1 forward 1 uu-flood 1 arp-term 0\n"; + if ($bridges->{$id}{vlan_aware}) { + # 'vlan_aware' maps to VPP's per-port tag-rewrite workflow. + # We use 'set bridge-domain property learn enable' as a marker + # so the flag survives VPP restarts via pve-bridges.conf exec. 
+ $content .= "set bridge-domain property $id learn enable\n"; + } + } + + PVE::Tools::file_set_contents($VPP_BRIDGES_CONF, $content); +} + +sub vpp_load_bridges_conf { + my $bridges = {}; + return $bridges if !-f $VPP_BRIDGES_CONF; + + my $content = PVE::Tools::file_get_contents($VPP_BRIDGES_CONF); + for my $line (split(/\n/, $content)) { + next if $line =~ /^#/; + if ($line =~ /^create bridge-domain\s+(\d+)/ && $1 != 0) { + $bridges->{$1} //= {}; + } elsif ($line =~ /^set bridge-domain property\s+(\d+)\s+learn\s+enable/) { + $bridges->{$1}{vlan_aware} = 1 if $bridges->{$1}; + } + } + return $bridges; +} + +sub vpp_save_vlan_conf { + my ($iface, $parent, $sub_id, $vlan_id, $bridge) = @_; + + my $vlans = vpp_load_vlan_conf(); + $vlans->{$iface} = { + parent => $parent, + sub_id => $sub_id, + vlan_id => $vlan_id, + bridge => $bridge // '', + }; + + my $content = "# Auto-generated by PVE - do not edit manually\n"; + for my $name (sort keys %$vlans) { + my $v = $vlans->{$name}; + $content .= "create sub-interfaces $v->{parent} $v->{sub_id} dot1q $v->{vlan_id}\n"; + $content .= "set interface state $name up\n"; + if ($v->{bridge} && $v->{bridge} =~ /^vppbr(\d+)$/) { + $content .= "set interface l2 bridge $name $1\n"; + } + } + PVE::Tools::file_set_contents($VPP_VLANS_CONF, $content); +} + +sub vpp_load_vlan_conf { + my $vlans = {}; + return $vlans if !-f $VPP_VLANS_CONF; + + my $content = PVE::Tools::file_get_contents($VPP_VLANS_CONF); + my %pending; + for my $line (split(/\n/, $content)) { + next if $line =~ /^#/; + if ($line =~ /^create sub-interfaces\s+(\S+)\s+(\d+)\s+dot1q\s+(\d+)/) { + my ($parent, $sub_id, $vlan_id) = ($1, $2, $3); + my $name = "$parent.$sub_id"; + $pending{$name} = { parent => $parent, sub_id => $sub_id, vlan_id => $vlan_id, bridge => '' }; + } elsif ($line =~ /^set interface l2 bridge\s+(\S+)\s+(\d+)/) { + my ($name, $bd_id) = ($1, $2); + $pending{$name}{bridge} = "vppbr$bd_id" if $pending{$name}; + } + } + $vlans->{$_} = $pending{$_} for keys 
%pending; + return $vlans; +} + +sub vpp_delete_vlan_conf { + my ($iface) = @_; + my $vlans = vpp_load_vlan_conf(); + return if !$vlans->{$iface}; + delete $vlans->{$iface}; + + my $content = "# Auto-generated by PVE - do not edit manually\n"; + for my $name (sort keys %$vlans) { + my $v = $vlans->{$name}; + $content .= "create sub-interfaces $v->{parent} $v->{sub_id} dot1q $v->{vlan_id}\n"; + $content .= "set interface state $name up\n"; + if ($v->{bridge} && $v->{bridge} =~ /^vppbr(\d+)$/) { + $content .= "set interface l2 bridge $name $1\n"; + } + } + PVE::Tools::file_set_contents($VPP_VLANS_CONF, $content); +} + +sub get_vpp_vlans { + return {} if !-x '/usr/bin/vppctl'; + + my $vlans = {}; + eval { + my $output = ''; + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, 'show', 'interface'], + outfunc => sub { $output .= $_[0] . "\n"; }, + timeout => 5, + ); + my $saved = vpp_load_vlan_conf(); + while ($output =~ /^(\S+)\.(\d+)\s+\d+\s+(\S+)/mg) { + my ($parent, $sub_id, $state) = ($1, $2, $3); + my $name = "$parent.$sub_id"; + my $vlan_id = $saved->{$name} ? $saved->{$name}{vlan_id} : $sub_id; + $vlans->{$name} = { + type => 'VPPVlan', + active => ($state eq 'up') ? 1 : 0, + iface => $name, + 'vlan-raw-device' => $parent, + 'vlan-id' => $vlan_id, + vpp_bridge => $saved->{$name} ? $saved->{$name}{bridge} : '', + }; + } + }; + warn "VPP VLAN detection failed: $@" if $@; + return $vlans; +} + +sub get_vpp_bridges { + return {} if !-x '/usr/bin/vppctl'; + + my $bridges = {}; + eval { + my $output = ''; + my $errout = ''; + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, 'show', 'bridge-domain'], + outfunc => sub { $output .= $_[0] . "\n"; }, + errfunc => sub { $errout .= $_[0] . 
"\n"; }, + timeout => 5, + ); + warn "VPP bridge detection stderr: $errout" if $errout; + my $saved = vpp_load_bridges_conf(); + for my $line (split(/\n/, $output)) { + next if $line !~ /^\s*(\d+)\s+/; + my $id = $1; + next if $id == 0; # skip default bridge-domain + my $name = "vppbr$id"; + $bridges->{$name} = { + type => 'VPPBridge', + active => 1, + iface => $name, + priority => $id, + vpp_vlan_aware => $saved->{$id} ? ($saved->{$id}{vlan_aware} ? 1 : 0) : 0, + }; + } + }; + warn "VPP bridge detection failed: $@" if $@; + + return $bridges; +} + __PACKAGE__->register_method({ name => 'index', path => '', @@ -422,6 +599,16 @@ __PACKAGE__->register_method({ delete $ifaces->{lo}; # do not list the loopback device + # always include VPP bridges and VLANs if VPP is available. + # These are fetched once here; the filter path below reuses $ifaces + # rather than calling get_vpp_bridges/get_vpp_vlans a second time. + # Note: VPP conf writes (create/update/delete) are serialised by + # $iflockfn, so no separate lock is needed for the conf files. 
+ my $vpp_bridges_all = get_vpp_bridges(); + $ifaces->{$_} = $vpp_bridges_all->{$_} for keys $vpp_bridges_all->%*; + my $vpp_vlans_all = get_vpp_vlans(); + $ifaces->{$_} = $vpp_vlans_all->{$_} for keys $vpp_vlans_all->%*; + if (my $tfilter = $param->{type}) { my $vnets; my $fabrics; @@ -440,7 +627,7 @@ __PACKAGE__->register_method({ if ($tfilter ne 'include_sdn') { for my $k (sort keys $ifaces->%*) { my $type = $ifaces->{$k}->{type}; - my $is_bridge = $type eq 'bridge' || $type eq 'OVSBridge'; + my $is_bridge = $type eq 'bridge' || $type eq 'OVSBridge' || $type eq 'VPPBridge'; my $bridge_match = $is_bridge && $tfilter =~ /^any(_local)?_bridge$/; my $match = $tfilter eq $type || $bridge_match; delete $ifaces->{$k} if !$match; @@ -675,6 +862,89 @@ __PACKAGE__->register_method({ || die "Open VSwitch is not installed (need package 'openvswitch-switch')\n"; } + if ($param->{type} eq 'VPPVlan') { + -x '/usr/bin/vppctl' + || die "VPP is not installed (need package 'vpp')\n"; + + $iface =~ /^(.+)\.(\d+)$/ + || die "VPP VLAN name must be <parent>.<vlan-id>, e.g. tap0.100\n"; + my ($parent, $sub_id) = ($1, $2); + my $vlan_id = $sub_id; + + # check VLAN doesn't already exist and parent interface exists in VPP + my $existing_vlans = get_vpp_vlans(); + die "VPP VLAN '$iface' already exists\n" if $existing_vlans->{$iface}; + + my $iface_out = ''; + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, 'show', 'interface'], + outfunc => sub { $iface_out .= $_[0] . 
"\n"; }, + timeout => 5, + ); + die "VPP interface '$parent' does not exist\n" + if $iface_out !~ /^\Q$parent\E\s/m; + + # create sub-interface + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, + 'create', 'sub-interfaces', $parent, $sub_id, 'dot1q', $vlan_id], + timeout => 10, + ); + + # bring up + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, + 'set', 'interface', 'state', $iface, 'up'], + timeout => 10, + ); + + # optionally add to VPP bridge domain + if (my $bridge = $param->{vpp_bridge}) { + $bridge =~ /^vppbr(\d+)$/ + || die "Invalid VPP bridge name '$bridge'\n"; + my $bd_id = $1; + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, + 'set', 'interface', 'l2', 'bridge', $iface, $bd_id], + timeout => 10, + ); + } + + vpp_save_vlan_conf($iface, $parent, $sub_id, $vlan_id, $param->{vpp_bridge}); + return undef; + } + + if ($param->{type} eq 'VPPBridge') { + -x '/usr/bin/vppctl' + || die "VPP is not installed (need package 'vpp')\n"; + + $iface =~ /^vppbr(\d+)$/ + || die "VPP bridge name must match 'vppbrN' (e.g. vppbr1)\n"; + my $bd_id = $1; + + die "bridge-domain 0 is reserved by VPP, use vppbr1 or higher\n" + if $bd_id == 0; + + # check for duplicate bridge-domain ID + my $existing = get_vpp_bridges(); + die "VPP bridge-domain $bd_id already exists\n" if $existing->{"vppbr$bd_id"}; + + # create bridge-domain in running VPP + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, + 'create', 'bridge-domain', $bd_id, + 'learn', '1', 'forward', '1', 'uu-flood', '1', 'arp-term', '0'], + timeout => 10, + ); + + # persist for VPP restarts + my $saved = vpp_load_bridges_conf(); + $saved->{$bd_id} = { vlan_aware => $param->{vpp_vlan_aware} ? 
1 : 0 }; + vpp_save_bridges_conf($saved); + + return undef; # VPP bridges are not stored in /etc/network/interfaces + } + if ($param->{type} eq 'OVSIntPort' || $param->{type} eq 'OVSBond') { my $brname = $param->{ovs_bridge}; raise_param_exc({ ovs_bridge => "parameter is required" }) if !$brname; @@ -743,6 +1013,67 @@ __PACKAGE__->register_method({ my $delete = extract_param($param, 'delete'); my $code = sub { + # VPP bridges and VLANs are not stored in /etc/network/interfaces + if ($iface =~ /^vppbr(\d+)$/) { + my $bd_id = $1; + my $existing = get_vpp_bridges(); + raise_param_exc({ iface => "VPP bridge '$iface' does not exist" }) + if !$existing->{$iface}; + + my $vlan_aware = $param->{vpp_vlan_aware} ? 1 : 0; + + # apply to running VPP + my $vlan_cmd = $vlan_aware ? 'enable' : 'disable'; + eval { + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, + 'set', 'bridge-domain', 'property', $bd_id, 'learn', $vlan_cmd], + timeout => 10, + ); + }; + warn "Failed to set VPP bridge-domain $bd_id vlan_aware: $@" if $@; + + # persist + my $saved = vpp_load_bridges_conf(); + $saved->{$bd_id} //= {}; + $saved->{$bd_id}{vlan_aware} = $vlan_aware; + vpp_save_bridges_conf($saved); + return undef; + } + + if (get_vpp_vlans()->{$iface}) { + # VPP VLAN: update bridge assignment + my $saved = vpp_load_vlan_conf(); + my $entry = $saved->{$iface}; + raise_param_exc({ iface => "VPP VLAN '$iface' not found in config" }) + if !$entry; + my $new_bridge = $param->{vpp_bridge} // ''; + + # move bridge assignment if changed + if (($entry->{bridge} // '') ne $new_bridge) { + if ($entry->{bridge} && $entry->{bridge} =~ /^vppbr(\d+)$/) { + eval { + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, + 'set', 'interface', 'l2', 'bridge', $iface, $1, 'del'], + timeout => 10, + ); + }; + warn "Failed to remove '$iface' from bridge '$entry->{bridge}': $@" if $@; + } + if ($new_bridge && $new_bridge =~ /^vppbr(\d+)$/) { + PVE::Tools::run_command( + ['/usr/bin/vppctl', 
'-s', $VPP_SOCKET, + 'set', 'interface', 'l2', 'bridge', $iface, $1], + timeout => 10, + ); + } + $entry->{bridge} = $new_bridge; + } + vpp_save_vlan_conf($iface, $entry->{parent}, $entry->{sub_id}, $entry->{vlan_id}, $new_bridge); + return undef; + } + my $config = PVE::INotify::read_file('interfaces'); my $ifaces = $config->{ifaces}; @@ -848,6 +1179,21 @@ __PACKAGE__->register_method({ my $iface = $param->{iface}; + # check VPP interfaces if not found in /etc/network/interfaces + if (!$ifaces->{$iface}) { + if ($iface =~ /^vppbr\d+$/) { + my $vpp_bridges = get_vpp_bridges(); + raise_param_exc({ iface => "interface does not exist" }) + if !$vpp_bridges->{$iface}; + return $vpp_bridges->{$iface}; + } elsif ($iface =~ /^.+\.\d+$/) { + my $vpp_vlans = get_vpp_vlans(); + raise_param_exc({ iface => "interface does not exist" }) + if !$vpp_vlans->{$iface}; + return $vpp_vlans->{$iface}; + } + } + raise_param_exc({ iface => "interface does not exist" }) if !$ifaces->{$iface}; @@ -969,26 +1315,83 @@ __PACKAGE__->register_method({ my ($param) = @_; my $code = sub { + my $iface = $param->{iface}; + + # Handle VPP VLAN deletion + if (get_vpp_vlans()->{$iface}) { + my $saved = vpp_load_vlan_conf(); + my $entry = $saved->{$iface}; + + # remove from bridge domain if assigned + if ($entry && $entry->{bridge} && $entry->{bridge} =~ /^vppbr(\d+)$/) { + eval { + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, + 'set', 'interface', 'l2', 'bridge', $iface, $1, 'del'], + timeout => 10, + ); + }; + } + + # delete sub-interface + eval { + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, + 'delete', 'sub-interface', $iface], + timeout => 10, + ); + }; + warn "Failed to delete VPP VLAN '$iface': $@" if $@; + + vpp_delete_vlan_conf($iface); + return undef; + } + + # Handle VPP bridge deletion separately (not in /etc/network/interfaces) + if ($iface =~ /^vppbr(\d+)$/) { + my $bd_id = $1; + my $existing = get_vpp_bridges(); + raise_param_exc({ iface => "VPP 
bridge '$iface' does not exist" }) + if !$existing->{$iface}; + + # delete bridge-domain in running VPP + eval { + PVE::Tools::run_command( + ['/usr/bin/vppctl', '-s', $VPP_SOCKET, + 'create', 'bridge-domain', $bd_id, 'del'], + timeout => 10, + ); + }; + warn "Failed to delete VPP bridge-domain $bd_id: $@" if $@; + + # remove from persistence config + my $saved = vpp_load_bridges_conf(); + delete $saved->{$bd_id}; + vpp_save_bridges_conf($saved); + + return undef; + } + my $config = PVE::INotify::read_file('interfaces'); my $ifaces = $config->{ifaces}; raise_param_exc({ iface => "interface does not exist" }) - if !$ifaces->{ $param->{iface} }; + if !$ifaces->{$iface}; - my $d = $ifaces->{ $param->{iface} }; + my $d = $ifaces->{$iface}; if ($d->{type} eq 'OVSIntPort' || $d->{type} eq 'OVSBond') { if (my $brname = $d->{ovs_bridge}) { if (my $br = $ifaces->{$brname}) { if ($br->{ovs_ports}) { my @ports = split(/\s+/, $br->{ovs_ports}); - my @new = grep { $_ ne $param->{iface} } @ports; + my @new = grep { $_ ne $iface } @ports; $br->{ovs_ports} = join(' ', @new); } } } } - delete $ifaces->{ $param->{iface} }; + delete $ifaces->{$iface}; PVE::INotify::write_file('interfaces', $config); }; diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm index 5bd6fe49..c4dcd9e6 100644 --- a/PVE/API2/Nodes.pm +++ b/PVE/API2/Nodes.pm @@ -2496,6 +2496,25 @@ my $create_migrate_worker = sub { my $preconditions = PVE::API2::Qemu->migrate_vm_precondition( { node => $nodename, vmid => $vmid, target => $target }); my $invalidConditions = ''; + + if ($online) { + my $vpp_bridges = PVE::API2::Network::get_vpp_bridges(); + if (keys %$vpp_bridges) { + my $conf = PVE::QemuConfig->load_config($vmid); + my @vpp_nics; + for my $opt (sort keys %$conf) { + next if $opt !~ m/^net\d+$/; + my $net = PVE::QemuServer::Network::parse_net($conf->{$opt}); + next if !$net || !$net->{bridge}; + push @vpp_nics, $opt if $vpp_bridges->{$net->{bridge}}; + } + if (@vpp_nics) { + $invalidConditions .= "\n Has VPP 
vhost-user NICs: "; + $invalidConditions .= join(', ', @vpp_nics); + } + } + } + if ($online && !$with_local_disks && scalar @{ $preconditions->{local_disks} }) { $invalidConditions .= "\n Has local disks: "; $invalidConditions .= diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm index 0c4b2343..dd31c8e5 100644 --- a/PVE/CLI/pve8to9.pm +++ b/PVE/CLI/pve8to9.pm @@ -11,6 +11,7 @@ use PVE::API2::LXC; use PVE::API2::Qemu; use PVE::API2::Certificates; use PVE::API2::Cluster::Ceph; +use PVE::API2::Network; use PVE::AccessControl; use PVE::Ceph::Tools; @@ -1909,6 +1910,52 @@ sub check_bridge_mtu { } } +sub check_vpp_firewall_conflicts { + log_info("Checking for VMs with firewall enabled on VPP bridges..."); + + my $vpp_bridges = eval { PVE::API2::Network::get_vpp_bridges() } // {}; + if (!keys %$vpp_bridges) { + log_skip("No VPP bridges detected."); + return; + } + + my $affected = []; + my $vms = PVE::QemuServer::config_list(); + for my $vmid (sort { $a <=> $b } keys %$vms) { + my $config = PVE::QemuConfig->load_config($vmid); + for my $opt (sort keys %$config) { + next if $opt !~ m/^net\d+$/; + my $net = PVE::QemuServer::Network::parse_net($config->{$opt}); + next if !$net || !$net->{bridge}; + if ($vpp_bridges->{$net->{bridge}} && $net->{firewall}) { + push @$affected, "VM $vmid ($opt on $net->{bridge})"; + } + } + } + + my $cts = PVE::LXC::config_list(); + for my $vmid (sort { $a <=> $b } keys %$cts) { + my $conf = PVE::LXC::Config->load_config($vmid); + for my $opt (sort keys %$conf) { + next if $opt !~ m/^net\d+$/; + my $net = PVE::LXC::Config->parse_lxc_network($conf->{$opt}); + next if !$net || !$net->{bridge}; + if ($vpp_bridges->{$net->{bridge}} && $net->{firewall}) { + push @$affected, "CT $vmid ($opt on $net->{bridge})"; + } + } + } + + if (@$affected) { + log_warn( + "The following guests have firewall enabled on VPP bridges (kernel firewall not available):\n" + . " " + . 
join(", ", @$affected)); + } else { + log_pass("No firewall conflicts with VPP bridges found."); + } +} + sub check_rrd_migration { if (-e "/var/lib/rrdcached/db/pve-node-9.0") { log_info("Check post RRD metrics data format update situation..."); @@ -2016,6 +2063,7 @@ sub check_virtual_guests { check_lxcfs_fuse_version(); check_bridge_mtu(); + check_vpp_firewall_conflicts(); my $affected_guests_long_desc = []; my $affected_cts_cgroup_keys = []; diff --git a/www/manager6/form/BridgeSelector.js b/www/manager6/form/BridgeSelector.js index b5949018..297a3e19 100644 --- a/www/manager6/form/BridgeSelector.js +++ b/www/manager6/form/BridgeSelector.js @@ -30,6 +30,11 @@ Ext.define('PVE.form.BridgeSelector', { dataIndex: 'active', renderer: Proxmox.Utils.format_boolean, }, + { + header: gettext('Type'), + width: 80, + dataIndex: 'type', + }, { header: gettext('Comment'), dataIndex: 'comments', diff --git a/www/manager6/lxc/Network.js b/www/manager6/lxc/Network.js index e56d47c0..8f377bfa 100644 --- a/www/manager6/lxc/Network.js +++ b/www/manager6/lxc/Network.js @@ -6,6 +6,15 @@ Ext.define('PVE.lxc.NetworkInputPanel', { onlineHelp: 'pct_container_network', + viewModel: { + data: { + bridgeType: '', + }, + formulas: { + isVPPBridge: (get) => get('bridgeType') === 'VPPBridge', + }, + }, + setNodename: function (nodename) { let me = this; @@ -116,6 +125,20 @@ Ext.define('PVE.lxc.NetworkInputPanel', { fieldLabel: gettext('Bridge'), value: cdata.bridge, allowBlank: false, + listeners: { + change: function (field, value) { + let store = field.getStore(); + let rec = store.findRecord('iface', value, 0, false, false, true); + let type = rec ? 
rec.data.type : ''; + me.getViewModel().set('bridgeType', type); + if (type === 'VPPBridge') { + let fw = me.down('field[name=firewall]'); + if (fw) { + fw.setValue(false); + } + } + }, + }, }, { xtype: 'pveVlanField', @@ -127,6 +150,17 @@ Ext.define('PVE.lxc.NetworkInputPanel', { fieldLabel: gettext('Firewall'), name: 'firewall', value: cdata.firewall, + bind: { + disabled: '{isVPPBridge}', + }, + }, + { + xtype: 'displayfield', + userCls: 'pmx-hint', + value: gettext('Kernel firewall is not available with VPP bridges'), + bind: { + hidden: '{!isVPPBridge}', + }, }, ]; diff --git a/www/manager6/node/Config.js b/www/manager6/node/Config.js index f6cd8749..bd24fe68 100644 --- a/www/manager6/node/Config.js +++ b/www/manager6/node/Config.js @@ -193,6 +193,7 @@ Ext.define('PVE.node.Config', { showAltNames: true, groups: ['services'], nodename: nodename, + types: ['bridge', 'bond', 'vlan', 'ovs', 'vpp'], editOptions: { enableBridgeVlanIds: true, }, diff --git a/www/manager6/qemu/NetworkEdit.js b/www/manager6/qemu/NetworkEdit.js index 2ba13c40..3d096465 100644 --- a/www/manager6/qemu/NetworkEdit.js +++ b/www/manager6/qemu/NetworkEdit.js @@ -38,10 +38,12 @@ Ext.define('PVE.qemu.NetworkInputPanel', { data: { networkModel: undefined, mtu: '', + bridgeType: '', }, formulas: { isVirtio: (get) => get('networkModel') === 'virtio', showMtuHint: (get) => get('mtu') === 1, + isVPPBridge: (get) => get('bridgeType') === 'VPPBridge', }, }, @@ -82,6 +84,20 @@ Ext.define('PVE.qemu.NetworkInputPanel', { nodename: me.nodename, autoSelect: true, allowBlank: false, + listeners: { + change: function (field, value) { + let store = field.getStore(); + let rec = store.findRecord('iface', value, 0, false, false, true); + let type = rec ? 
rec.data.type : ''; + me.getViewModel().set('bridgeType', type); + if (type === 'VPPBridge') { + let fw = me.down('field[name=firewall]'); + if (fw) { + fw.setValue(false); + } + } + }, + }, }); me.column1 = [ @@ -96,6 +112,17 @@ Ext.define('PVE.qemu.NetworkInputPanel', { fieldLabel: gettext('Firewall'), name: 'firewall', checked: me.insideWizard || me.isCreate, + bind: { + disabled: '{isVPPBridge}', + }, + }, + { + xtype: 'displayfield', + userCls: 'pmx-hint', + value: gettext('Kernel firewall is not available with VPP bridges'), + bind: { + hidden: '{!isVPPBridge}', + }, }, ]; diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js index ff80c70c..c1509be6 100644 --- a/www/manager6/window/Migrate.js +++ b/www/manager6/window/Migrate.js @@ -463,6 +463,54 @@ Ext.define('PVE.window.Migrate', { } } + if (vm.get('running')) { + try { + let { result: netResult } = await Proxmox.Async.api2({ + url: `/nodes/${vm.get('nodename')}/network?type=any_bridge`, + method: 'GET', + }); + let vppBridges = new Set(); + for (const iface of netResult.data || []) { + if (iface.type === 'VPPBridge') { + vppBridges.add(iface.iface); + } + } + if (vppBridges.size > 0) { + let vmConfig = {}; + try { + let { result: cfgResult } = await Proxmox.Async.api2({ + url: `/nodes/${vm.get('nodename')}/qemu/${vm.get('vmid')}/config`, + method: 'GET', + }); + vmConfig = cfgResult.data || {}; + } catch (_err) { /* ignore */ } + + let vppNics = []; + for (const [key, value] of Object.entries(vmConfig)) { + if (!key.match(/^net\d+$/)) { + continue; + } + let net = PVE.Parser.parseQemuNetwork(key, value); + if (net && net.bridge && vppBridges.has(net.bridge)) { + vppNics.push(key); + } + } + if (vppNics.length > 0) { + migration.possible = false; + migration.preconditions.push({ + text: Ext.String.format( + gettext('Cannot live-migrate VM with VPP vhost-user NICs: {0}. 
Use offline migration or HA (stop/start).'), + vppNics.join(', '), + ), + severity: 'error', + }); + } + } + } catch (_err) { + // VPP bridge check is best-effort + } + } + vm.set('migration', migration); }, checkLxcPreconditions: async function (resetMigrationPossible) { -- 2.50.1 (Apple Git-155) ^ permalink raw reply [flat|nested] 11+ messages in thread
* [RFC PATCH widget-toolkit 2/2] ui: network: add VPP (fd.io) bridge type support
  2026-03-16 22:28 [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
  2026-03-16 22:28 ` [RFC PATCH manager 1/2] api: network: add VPP (fd.io) dataplane bridge support Ryosuke Nakayama
@ 2026-03-16 22:28 ` Ryosuke Nakayama
  2026-03-17  6:39 ` [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Stefan Hanreich
  ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Ryosuke Nakayama @ 2026-03-16 22:28 UTC (permalink / raw)
To: pve-devel

From: ryskn <ryosuke.nakayama@ryskn.com>

Add VPP bridge domain as a creatable/editable network type in the
Proxmox node network configuration UI.

- Utils.js: add VPPBridge/VPPVlan to network_iface_types with gettext()
- NetworkView.js: add 'vpp' to default types list; add VPPBridge and
  VPPVlan entries to the Create menu; VPPVlan uses a dedicated menu
  entry (no auto-generated default name); render vlan-raw-device in
  Ports/Slaves column for VPPVlan; fix VLAN aware column to render
  vpp_vlan_aware for VPP bridges; declare vpp_bridge/vpp_vlan_aware in
  the proxmox-networks model
- NetworkEdit.js: introduce vppTypes Set as single source of truth for
  VPP type checks; add vppbrN validator for VPPBridge name field; add
  vppbrN validator for vpp_bridge field in VPPVlan; increase maxLength
  to 40 for VPP interface names; hide MTU field for VPP types; exclude
  Autostart and IP/GW fields for VPP types via vppTypes; use VlanName
  vtype for VPPVlan to allow dot notation (e.g. tap0.100)

Signed-off-by: ryskn <ryosuke.nakayama@ryskn.com>
---
 src/Utils.js            |  2 ++
 src/node/NetworkEdit.js | 64 ++++++++++++++++++++++++++++++++++-------
 src/node/NetworkView.js | 35 ++++++++++++++++++----
 3 files changed, 85 insertions(+), 16 deletions(-)

diff --git a/src/Utils.js b/src/Utils.js
index 5457ffa..fa88fb1 100644
--- a/src/Utils.js
+++ b/src/Utils.js
@@ -707,6 +707,8 @@ Ext.define('Proxmox.Utils', {
         OVSBond: 'OVS Bond',
         OVSPort: 'OVS Port',
         OVSIntPort: 'OVS IntPort',
+        VPPBridge: gettext('VPP Bridge'),
+        VPPVlan: gettext('VPP VLAN'),
     },

     render_network_iface_type: function (value) {
diff --git a/src/node/NetworkEdit.js b/src/node/NetworkEdit.js
index c945139..c53cd90 100644
--- a/src/node/NetworkEdit.js
+++ b/src/node/NetworkEdit.js
@@ -21,7 +21,12 @@ Ext.define('Proxmox.node.NetworkEdit', {
         me.isCreate = !me.iface;

+        // Canonical set of VPP interface types — used to gate autostart,
+        // IP config, MTU, and other kernel-only fields.
+        const vppTypes = new Set(['VPPBridge', 'VPPVlan']);
+
         let iface_vtype;
+        let iface_validator; // optional extra validator for the Name field

         if (me.iftype === 'bridge') {
             iface_vtype = 'BridgeName';
@@ -39,6 +44,12 @@
             iface_vtype = 'InterfaceName';
         } else if (me.iftype === 'OVSPort') {
             iface_vtype = 'InterfaceName';
+        } else if (me.iftype === 'VPPBridge') {
+            iface_vtype = 'InterfaceName';
+            iface_validator = (v) =>
+                /^vppbr\d+$/.test(v) || gettext('Name must match vppbrN format (e.g. vppbr1)');
+        } else if (me.iftype === 'VPPVlan') {
+            iface_vtype = 'VlanName';
         } else {
             console.log(me.iftype);
             throw 'unknown network device type specified';
@@ -52,7 +63,7 @@
             advancedColumn1 = [],
             advancedColumn2 = [];

-        if (!(me.iftype === 'OVSIntPort' || me.iftype === 'OVSPort' || me.iftype === 'OVSBond')) {
+        if (!(me.iftype === 'OVSIntPort' || me.iftype === 'OVSPort' || me.iftype === 'OVSBond' || vppTypes.has(me.iftype))) {
             column2.push({
                 xtype: 'proxmoxcheckbox',
                 fieldLabel: gettext('Autostart'),
@@ -295,6 +306,32 @@
                 fieldLabel: gettext('OVS options'),
                 name: 'ovs_options',
             });
+        } else if (me.iftype === 'VPPBridge') {
+            column2.push({
+                xtype: 'proxmoxcheckbox',
+                fieldLabel: gettext('VLAN aware'),
+                name: 'vpp_vlan_aware',
+                deleteEmpty: !me.isCreate,
+            });
+        } else if (me.iftype === 'VPPVlan') {
+            column2.push({
+                xtype: 'displayfield',
+                userCls: 'pmx-hint',
+                value: gettext('Name format: <parent>.<vlan-id>, e.g. tap0.100'),
+            });
+            column2.push({
+                xtype: me.isCreate ? 'textfield' : 'displayfield',
+                fieldLabel: gettext('Bridge domain'),
+                name: 'vpp_bridge',
+                emptyText: gettext('none'),
+                allowBlank: true,
+                validator: (v) =>
+                    !v || /^vppbr\d+$/.test(v) || gettext('Must match vppbrN format (e.g. vppbr1)'),
+                autoEl: {
+                    tag: 'div',
+                    'data-qtip': gettext('VPP bridge domain to attach this VLAN interface to, e.g. vppbr1'),
+                },
+            });
         }

         column2.push({
@@ -328,8 +365,9 @@
             name: 'iface',
             value: me.iface,
             vtype: iface_vtype,
+            validator: iface_validator,
             allowBlank: false,
-            maxLength: iface_vtype === 'BridgeName' ? 10 : 15,
+            maxLength: iface_vtype === 'BridgeName' ? 10 : (vppTypes.has(me.iftype) ? 40 : 15),
             autoEl: {
                 tag: 'div',
                 'data-qtip': gettext('For example, vmbr0.100, vmbr0, vlan0.100, vlan0'),
@@ -391,6 +429,8 @@
                     name: 'ovs_bonds',
                 },
             );
+        } else if (vppTypes.has(me.iftype)) {
+            // VPP interfaces do not use kernel IP configuration
         } else {
             column1.push(
                 {
@@ -423,15 +463,17 @@
                 },
             );
         }
-        advancedColumn1.push({
-            xtype: 'proxmoxintegerfield',
-            minValue: 1280,
-            maxValue: 65520,
-            deleteEmpty: !me.isCreate,
-            emptyText: 1500,
-            fieldLabel: 'MTU',
-            name: 'mtu',
-        });
+        if (!vppTypes.has(me.iftype)) {
+            advancedColumn1.push({
+                xtype: 'proxmoxintegerfield',
+                minValue: 1280,
+                maxValue: 65520,
+                deleteEmpty: !me.isCreate,
+                emptyText: 1500,
+                fieldLabel: 'MTU',
+                name: 'mtu',
+            });
+        }

         Ext.applyIf(me, {
             url: url,
diff --git a/src/node/NetworkView.js b/src/node/NetworkView.js
index 0ff9649..164b349 100644
--- a/src/node/NetworkView.js
+++ b/src/node/NetworkView.js
@@ -19,6 +19,8 @@ Ext.define('proxmox-networks', {
         'type',
         'vlan-id',
         'vlan-raw-device',
+        'vpp_bridge',
+        'vpp_vlan_aware',
     ],
     idProperty: 'iface',
 });
@@ -30,7 +32,7 @@ Ext.define('Proxmox.node.NetworkView', {

     // defines what types of network devices we want to create
     // order is always the same
-    types: ['bridge', 'bond', 'vlan', 'ovs'],
+    types: ['bridge', 'bond', 'vlan', 'ovs', 'vpp'],

     showApplyBtn: false,
@@ -223,6 +225,27 @@
             });
         }

+        if (me.types.indexOf('vpp') !== -1) {
+            if (menu_items.length > 0) {
+                menu_items.push({ xtype: 'menuseparator' });
+            }
+
+            addEditWindowToMenu('VPPBridge', 'vppbr');
+            menu_items.push({
+                text: Proxmox.Utils.render_network_iface_type('VPPVlan'),
+                handler: () =>
+                    Ext.create('Proxmox.node.NetworkEdit', {
+                        autoShow: true,
+                        nodename: me.nodename,
+                        iftype: 'VPPVlan',
+                        ...me.editOptions,
+                        listeners: {
+                            destroy: () => reload(),
+                        },
+                    }),
+            });
+        }
+
         let renderer_generator = function (fieldname) {
             return function (val, metaData, rec) {
                 let tmp = [];
@@ -326,14 +349,14 @@
                 undefinedText: Proxmox.Utils.noText,
             },
             {
-                xtype: 'booleancolumn',
                 header: gettext('VLAN aware'),
                 width: 80,
                 sortable: true,
                 dataIndex: 'bridge_vlan_aware',
-                trueText: Proxmox.Utils.yesText,
-                falseText: Proxmox.Utils.noText,
-                undefinedText: Proxmox.Utils.noText,
+                renderer: (value, metaData, { data }) => {
+                    const v = data.bridge_vlan_aware || data.vpp_vlan_aware;
+                    return v ? Proxmox.Utils.yesText : Proxmox.Utils.noText;
+                },
             },
             {
                 header: gettext('Ports/Slaves'),
@@ -347,6 +370,8 @@
                         return data.ovs_ports;
                     } else if (value === 'OVSBond') {
                         return data.ovs_bonds;
+                    } else if (value === 'VPPVlan') {
+                        return data['vlan-raw-device'];
                     }
                     return '';
                 },
-- 
2.50.1 (Apple Git-155)

^ permalink raw reply	[flat|nested] 11+ messages in thread
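The name handling above is easiest to see in isolation. The following is a
minimal, framework-free sketch of the validation logic the patch wires into
NetworkEdit.js: the vppbrN regex for VPPBridge names and a dot-notation check
for VPPVlan names. Note the VLAN ID range check (1-4094) and the helper names
are illustrative assumptions here; the actual VlanName vtype in the widget
toolkit may accept a slightly different grammar.

```javascript
// Sketch only — mirrors the validators added in NetworkEdit.js, with
// gettext stubbed out (the toolkit returns the translated string).
const gettext = (s) => s;

// Single source of truth for VPP type checks, as in the patch.
const vppTypes = new Set(['VPPBridge', 'VPPVlan']);

// VPPBridge names must follow the vppbrN convention (vppbr1, vppbr42, ...).
// Returns true on success, or an error message string (ExtJS convention).
function validateVppBridgeName(v) {
    return /^vppbr\d+$/.test(v) || gettext('Name must match vppbrN format (e.g. vppbr1)');
}

// VPPVlan names use dot notation: <parent>.<vlan-id>, e.g. tap0.100.
// The 1..4094 range check is an assumption for this sketch.
function validateVppVlanName(v) {
    const m = /^([a-zA-Z]\w*)\.(\d+)$/.exec(v);
    if (!m) {
        return gettext('Name must match <parent>.<vlan-id> format (e.g. tap0.100)');
    }
    const vlanId = Number(m[2]);
    return (vlanId >= 1 && vlanId <= 4094) || gettext('VLAN ID must be between 1 and 4094');
}

console.log(validateVppBridgeName('vppbr1')); // true
console.log(validateVppVlanName('tap0.100')); // true
```

Returning `true` or an error string matches how ExtJS field validators report
results, which is why the patch can pass these functions straight into the
`validator` config of the Name field.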
* Re: [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
  2026-03-16 22:28 [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
  2026-03-16 22:28 ` [RFC PATCH manager 1/2] api: network: add VPP (fd.io) dataplane bridge support Ryosuke Nakayama
  2026-03-16 22:28 ` [RFC PATCH widget-toolkit 2/2] ui: network: add VPP (fd.io) bridge type support Ryosuke Nakayama
@ 2026-03-17  6:39 ` Stefan Hanreich
  2026-03-17 10:18 ` DERUMIER, Alexandre
  2026-03-17 11:21 ` [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
  4 siblings, 0 replies; 11+ messages in thread
From: Stefan Hanreich @ 2026-03-17 6:39 UTC (permalink / raw)
To: pve-devel

Hi!

Thanks for your contribution! I was already following the discussion in
the linked forum thread and briefly discussed this proposal with a
colleague - but I wasn't able to find the time yet to take a closer look
at VPP itself in order to form an opinion.

I'll take a closer look in the coming days and give the patches a spin
on my machine.

On 3/16/26 11:27 PM, Ryosuke Nakayama wrote:
> From: ryskn <ryosuke.nakayama@ryskn.com>
>
> This RFC series integrates VPP (Vector Packet Processor, fd.io) as an
> optional userspace dataplane alongside OVS in Proxmox VE.
>
> VPP is a DPDK-based, userspace packet processing framework that
> provides VM networking via vhost-user sockets. It is already used in
> production by several cloud/telecom stacks. The motivation here is to
> expose VPP bridge domains natively in the PVE WebUI and REST API,
> following the same pattern as OVS integration.
>
> Background and prior discussion:
> https://forum.proxmox.com/threads/interest-in-vpp-vector-packet-processing-as-a-dataplane-option-for-proxmox.181530/
>
> Note: the benchmark figures quoted in that forum thread are slightly
> off due to test configuration differences. Please use the numbers in
> this cover letter instead.
>
> --- What the patches do ---
>
> Patch 1 (pve-manager):
> - Detect VPP bridges via 'vppctl show bridge-domain' and expose
>   them as type=VPPBridge in the network interface list
> - Create/delete VPP bridge domains via vppctl
> - Persist bridge domains to /etc/vpp/pve-bridges.conf (exec'd at
>   VPP startup) so they survive reboots
> - Support vpp_vlan_aware flag (maps to bridge-domain learn flag)
> - VPP VLAN subinterface create/delete/list, persisted to
>   /etc/vpp/pve-vlans.conf
> - Exclude VPP bridges from the SDN-only access guard so they appear
>   in the WebUI NIC selector
> - Vhost-user socket convention:
>   /var/run/vpp/qemu-<vmid>-<net>.sock
> - pve8to9: add upgrade checker for VPP dependencies
>
> Patch 2 (proxmox-widget-toolkit):
> - Add VPPBridge/VPPVlan to network_iface_types (Utils.js)
> - NetworkView: VPPBridge and VPPVlan entries in the Create menu;
>   render vlan-raw-device in Ports/Slaves column for VPPVlan;
>   vpp_vlan_aware support in VLAN aware column
> - NetworkEdit: vppbrN name validator; vpp_bridge field for VPPVlan;
>   hide MTU/Autostart/IP fields for VPP types; use VlanName vtype
>   for VPPVlan (allows dot notation, e.g. tap0.100)
>
> --- Testing ---
>
> Due to the absence of physical NICs in my test environment, all
> benchmarks were performed as VM-to-VM communication over the
> hypervisor's virtual switch (vmbr1 or VPP bridge domain). These
> results reflect the virtual switching overhead, not physical NIC
> performance, where VPP's DPDK polling would show a larger advantage.
>
> Host: Proxmox VE 8.x (Intel Xeon), VMs: Debian 12 (virtio-net q=1)
> VPP: 24.06, coalescing: frames=32 time=0.5ms, polling mode
>
> iperf3 / netperf (single queue, VM-to-VM):
>
>   Metric              vmbr1         VPP (vhost-user)
>   iperf3              31.0 Gbits/s  13.2 Gbits/s
>   netperf TCP_STREAM  32,243 Mbps   13,181 Mbps
>   netperf TCP_RR      15,734 tx/s   989 tx/s
>
> VPP's raw throughput is lower than vmbr1 in this VM-to-VM setup due
> to vhost-user coalescing latency. Physical NIC testing (DPDK PMD) is
> expected to close or reverse this gap.
>
> gRPC (unary, grpc-flow-bench, single queue, VM-to-VM):
>
>   Flows  Metric   vmbr1     VPP
>   100    RPS      32,847    39,742
>   100    p99 lat  7.28 ms   6.16 ms
>   1000   RPS      40,315    41,139
>   1000   p99 lat  48.96 ms  31.96 ms
>
> VPP's userspace polling removes kernel scheduler jitter, which is
> visible in the gRPC latency results even in the VM-to-VM scenario.
>
> --- Known limitations / TODO ---
>
> - No ifupdown2 integration yet; VPP config is managed separately via
>   /etc/vpp/pve-bridges.conf and pve-vlans.conf
> - No live migration path for vhost-user sockets (sockets must be
>   pre-created on the target host)
> - OVS and VPP cannot share the same physical NIC in this
>   implementation
> - VPP must be installed and running independently (not managed by PVE)
>
> --- CLA ---
>
> Individual CLA has been submitted to office@proxmox.com.
>
> ---
>
> ryskn (2):
>   api: network: add VPP (fd.io) dataplane bridge support
>   ui: network: add VPP (fd.io) bridge type support
>
>  PVE/API2/Network.pm                 | 413 ++++++++++++++++++++++++++-
>  PVE/API2/Nodes.pm                   |  19 ++
>  PVE/CLI/pve8to9.pm                  |  48 ++++
>  www/manager6/form/BridgeSelector.js |   5 +
>  www/manager6/lxc/Network.js         |  34 +++
>  www/manager6/node/Config.js         |   1 +
>  www/manager6/qemu/NetworkEdit.js    |  27 ++
>  www/manager6/window/Migrate.js      |  48 ++++
>  src/Utils.js                        |   2 +
>  src/node/NetworkEdit.js             |  64 ++++-
>  src/node/NetworkView.js             |  35 +++
>  11 files changed, 675 insertions(+), 21 deletions(-)

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
  2026-03-16 22:28 [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
  ` (2 preceding siblings ...)
  2026-03-17  6:39 ` [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Stefan Hanreich
@ 2026-03-17 10:18 ` DERUMIER, Alexandre
  2026-03-17 11:14   ` Ryosuke Nakayama
  2026-03-17 11:21 ` [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
  4 siblings, 1 reply; 11+ messages in thread
From: DERUMIER, Alexandre @ 2026-03-17 10:18 UTC (permalink / raw)
To: pve-devel, ryosuke.nakayama

Hi,

thanks for your work on this !

Could it be possible to write a small Howto to install vpp software
itself + bridge configuration ?

Alexandre

-------- Message initial --------
De: Ryosuke Nakayama <ryosuke.nakayama@ryskn.com>
À: pve-devel@lists.proxmox.com
Objet: [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
Date: 16/03/2026 23:28:14

From: ryskn <ryosuke.nakayama@ryskn.com>

This RFC series integrates VPP (Vector Packet Processor, fd.io) as an
optional userspace dataplane alongside OVS in Proxmox VE.

VPP is a DPDK-based, userspace packet processing framework that
provides VM networking via vhost-user sockets. It is already used in
production by several cloud/telecom stacks. The motivation here is to
expose VPP bridge domains natively in the PVE WebUI and REST API,
following the same pattern as OVS integration.

Background and prior discussion:

Note: the benchmark figures quoted in that forum thread are slightly
off due to test configuration differences. Please use the numbers in
this cover letter instead.

--- What the patches do ---

Patch 1 (pve-manager):
- Detect VPP bridges via 'vppctl show bridge-domain' and expose
  them as type=VPPBridge in the network interface list
- Create/delete VPP bridge domains via vppctl
- Persist bridge domains to /etc/vpp/pve-bridges.conf (exec'd at
  VPP startup) so they survive reboots
- Support vpp_vlan_aware flag (maps to bridge-domain learn flag)
- VPP VLAN subinterface create/delete/list, persisted to
  /etc/vpp/pve-vlans.conf
- Exclude VPP bridges from the SDN-only access guard so they appear
  in the WebUI NIC selector
- Vhost-user socket convention:
  /var/run/vpp/qemu-<vmid>-<net>.sock
- pve8to9: add upgrade checker for VPP dependencies

Patch 2 (proxmox-widget-toolkit):
- Add VPPBridge/VPPVlan to network_iface_types (Utils.js)
- NetworkView: VPPBridge and VPPVlan entries in the Create menu;
  render vlan-raw-device in Ports/Slaves column for VPPVlan;
  vpp_vlan_aware support in VLAN aware column
- NetworkEdit: vppbrN name validator; vpp_bridge field for VPPVlan;
  hide MTU/Autostart/IP fields for VPP types; use VlanName vtype
  for VPPVlan (allows dot notation, e.g. tap0.100)

--- Testing ---

Due to the absence of physical NICs in my test environment, all
benchmarks were performed as VM-to-VM communication over the
hypervisor's virtual switch (vmbr1 or VPP bridge domain). These
results reflect the virtual switching overhead, not physical NIC
performance, where VPP's DPDK polling would show a larger advantage.

Host: Proxmox VE 8.x (Intel Xeon), VMs: Debian 12 (virtio-net q=1)
VPP: 24.06, coalescing: frames=32 time=0.5ms, polling mode

iperf3 / netperf (single queue, VM-to-VM):

  Metric              vmbr1         VPP (vhost-user)
  iperf3              31.0 Gbits/s  13.2 Gbits/s
  netperf TCP_STREAM  32,243 Mbps   13,181 Mbps
  netperf TCP_RR      15,734 tx/s   989 tx/s

VPP's raw throughput is lower than vmbr1 in this VM-to-VM setup due
to vhost-user coalescing latency. Physical NIC testing (DPDK PMD) is
expected to close or reverse this gap.

gRPC (unary, grpc-flow-bench, single queue, VM-to-VM):

  Flows  Metric   vmbr1     VPP
  100    RPS      32,847    39,742
  100    p99 lat  7.28 ms   6.16 ms
  1000   RPS      40,315    41,139
  1000   p99 lat  48.96 ms  31.96 ms

VPP's userspace polling removes kernel scheduler jitter, which is
visible in the gRPC latency results even in the VM-to-VM scenario.

--- Known limitations / TODO ---

- No ifupdown2 integration yet; VPP config is managed separately via
  /etc/vpp/pve-bridges.conf and pve-vlans.conf
- No live migration path for vhost-user sockets (sockets must be
  pre-created on the target host)
- OVS and VPP cannot share the same physical NIC in this
  implementation
- VPP must be installed and running independently (not managed by PVE)

--- CLA ---

Individual CLA has been submitted to office@proxmox.com.

---

ryskn (2):
  api: network: add VPP (fd.io) dataplane bridge support
  ui: network: add VPP (fd.io) bridge type support

 PVE/API2/Network.pm                 | 413 ++++++++++++++++++++++++++-
 PVE/API2/Nodes.pm                   |  19 ++
 PVE/CLI/pve8to9.pm                  |  48 ++++
 www/manager6/form/BridgeSelector.js |   5 +
 www/manager6/lxc/Network.js         |  34 +++
 www/manager6/node/Config.js         |   1 +
 www/manager6/qemu/NetworkEdit.js    |  27 ++
 www/manager6/window/Migrate.js      |  48 ++++
 src/Utils.js                        |   2 +
 src/node/NetworkEdit.js             |  64 ++++-
 src/node/NetworkView.js             |  35 +++
 11 files changed, 675 insertions(+), 21 deletions(-)

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
  2026-03-17 10:18 ` DERUMIER, Alexandre
@ 2026-03-17 11:14   ` Ryosuke Nakayama
  2026-03-17 11:14     ` [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support Ryosuke Nakayama
  ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Ryosuke Nakayama @ 2026-03-17 11:14 UTC (permalink / raw)
To: pve-devel

On Mon, 2026-03-17, Alexandre wrote:
> Could it be possible to write a small Howto to install vpp
> software itself + bridge configuration ?

Sure! Also, I should clarify that the original two patches are not
sufficient on their own: a third patch for qemu-server is required to
make VMs actually connect to VPP via vhost-user. I have attached those
patches below (RFC, same caveats apply).

--- How to install VPP on Proxmox VE ---

1. Add the fd.io package repository:

   curl -fsSL https://packagecloud.io/fdio/release/gpgkey | \
       gpg --dearmor \
       -o /usr/share/keyrings/fdio-release.gpg
   echo "deb [signed-by=/usr/share/keyrings/fdio-release.gpg \
       trusted=yes] \
       https://packagecloud.io/fdio/release/debian bookworm main" \
       > /etc/apt/sources.list.d/fdio.list
   apt update

2. Install VPP and required plugins:

   apt install vpp vpp-plugin-core vpp-plugin-dpdk vpp-drivers

3. Configure /etc/vpp/startup.conf. The critical sections are:

   unix {
       nodaemon
       log /var/log/vpp/vpp.log
       cli-listen /run/vpp/cli.sock
       exec /etc/vpp/vpp-pve.conf
   }
   cpu {
       main-core 0
       corelist-workers 1,2
       scheduler-policy fifo
       scheduler-priority 50
   }

   Note: adjust cpu core pinning to your hardware. VPP uses polling
   threads, so dedicated cores are strongly recommended.

4. Enable and start VPP:

   systemctl enable --now vpp

--- Bridge domain creation ---

With the pve-manager patch applied, VPP bridge domains can be created
and managed via the Proxmox WebUI (Node > Network > Create > VPP
Bridge) or via the API.

Manually via vppctl:

   vppctl create bridge-domain 1 learn 1 forward 1 flood 1
   vppctl show bridge-domain 1

Note: bridge-domain 0 is reserved by VPP; use ID >= 1. The WebUI will
expose the bridge as "vppbr<ID>" (e.g. vppbr1).

--- Connecting a VM ---

With the qemu-server patch applied, setting a VM's NIC to a VPP bridge
(e.g. bridge=vppbr1) is sufficient. On VM start, Proxmox will
automatically:

- create a vhost-user server socket at
  /var/run/vpp/qemu-<vmid>-<netN>.sock
- add the resulting VirtualEthernet interface to the bridge domain
- pass the socket to QEMU as a vhost-user chardev

On VM stop, the vhost-user interface is removed from VPP.

Example VM config (/etc/pve/qemu-server/100.conf):

   net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vppbr1

No additional configuration is needed.

---

ryskn (2):
  qemu: add VPP vhost-user dataplane support
  qemu: VPP: clean up vhost-user interfaces on stop, fix tx_queue_size

 src/PVE/QemuServer.pm        | 174 ++++++++++++++++++++++++++++++++++-
 src/PVE/QemuServer/Memory.pm |  16 +++-
 2 files changed, 174 insertions(+), 39 deletions(-)

^ permalink raw reply	[flat|nested] 11+ messages in thread
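The per-NIC plumbing described in "Connecting a VM" can be sketched in a few
lines. This is an illustrative model only (the real implementation lives in
the qemu-server patch, in Perl); `vhostSocketPath` and `vppSetupCommands` are
hypothetical helper names, and the argv lists mirror the vppctl command
sequence named above.

```javascript
// Socket path convention from the cover letter:
// /var/run/vpp/qemu-<vmid>-<netN>.sock
function vhostSocketPath(vmid, netId) {
    return `/var/run/vpp/qemu-${vmid}-${netId}.sock`;
}

// Ordered vppctl invocations to attach one vhost-user interface to a
// bridge domain. The interface name (e.g. VirtualEthernet0/0/0) is
// printed by the first command; it is taken as a parameter here.
function vppSetupCommands(vmid, netId, bridgeDomain, ifaceName) {
    return [
        ['vppctl', 'create', 'vhost-user', 'socket', vhostSocketPath(vmid, netId), 'server'],
        ['vppctl', 'set', 'interface', 'state', ifaceName, 'up'],
        ['vppctl', 'set', 'interface', 'l2', 'bridge', ifaceName, String(bridgeDomain)],
    ];
}

console.log(vhostSocketPath(100, 'net0')); // /var/run/vpp/qemu-100-net0.sock
```

The ordering matters: the interface must exist (and ideally be up) before it
is added to the bridge domain, which is why the patch runs the three vppctl
calls in exactly this sequence before starting QEMU.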
* [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support
  2026-03-17 11:14   ` Ryosuke Nakayama
@ 2026-03-17 11:14     ` Ryosuke Nakayama
  2026-03-17 11:14     ` [RFC PATCH qemu-server 2/2] qemu: VPP: clean up vhost-user interfaces on stop, fix tx_queue_size Ryosuke Nakayama
  2026-03-17 11:26     ` [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support Ryosuke Nakayama
  2 siblings, 0 replies; 11+ messages in thread
From: Ryosuke Nakayama @ 2026-03-17 11:14 UTC (permalink / raw)
To: pve-devel; +Cc: unixtech

From: unixtech <ryosuke_666@icloud.com>

- generate vhost-user netdev/chardev for VPP bridge interfaces
- add vpp_connect_vhost_nets() to connect VPP vhost-user server
  sockets before QEMU starts (VPP server mode, QEMU client mode)
- add memfd shared memory backend in Memory.pm for vhost-user
  without hugepages (has_vpp_bridge detection)
- support hotplug of VPP vhost-user interfaces via QMP

Signed-off-by: Ryosuke Nakayama <ryosuke.nakayama@ryskn.com>
Signed-off-by: unixtech <ryosuke.nakayama@ryskn.com>
---
 src/PVE/QemuServer.pm        | 159 +++++++++++++++++++++++++++--------
 src/PVE/QemuServer/Memory.pm |  16 +++-
 2 files changed, 137 insertions(+), 38 deletions(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 09e7a19b..cf1c9e9f 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -95,7 +95,6 @@ use PVE::QemuServer::RunState;
 use PVE::QemuServer::StateFile;
 use PVE::QemuServer::USB;
 use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
-use PVE::QemuServer::VolumeChain;
 use PVE::QemuServer::DBusVMState;

 my $have_ha_config;
@@ -316,8 +315,7 @@ my $confdesc = {
         optional => 1,
         type => 'integer',
         description =>
-            "Amount of target RAM for the VM in MiB. The balloon driver is enabled by default,"
-            . " unless it is explicitly disabled by setting the value to zero.",
+            "Amount of target RAM for the VM in MiB. Using zero disables the ballon driver.",
         minimum => 0,
     },
     shares => {
@@ -639,7 +637,12 @@ EODESCR
             . ' This is used internally for snapshots.',
     },
     machine => get_standard_option('pve-qemu-machine'),
-    arch => get_standard_option('pve-qm-cpu-arch', { optional => 1 }),
+    arch => {
+        description => "Virtual processor architecture. Defaults to the host.",
+        optional => 1,
+        type => 'string',
+        enum => [qw(x86_64 aarch64)],
+    },
     smbios1 => {
         description => "Specify SMBIOS type 1 fields.",
         type => 'string',
@@ -1442,7 +1445,10 @@ sub print_netdev_full {
     my $netdev = "";
     my $script = $hotplug ? "pve-bridge-hotplug" : "pve-bridge";

-    if ($net->{bridge}) {
+    if ($net->{bridge} && $net->{bridge} =~ /^vppbr\d+$/) {
+        # VPP bridge: use vhost-user socket instead of tap
+        $netdev = "type=vhost-user,id=$netid,chardev=vhost-user-${netid}";
+    } elsif ($net->{bridge}) {
         $netdev = "type=tap,id=$netid,ifname=${ifname},script=/usr/libexec/qemu-server/$script"
             . ",downscript=/usr/libexec/qemu-server/pve-bridgedown$vhostparam";
     } else {
@@ -2586,8 +2592,7 @@ sub vmstatus {
         $d->{uptime} = int(($uptime - $pstat->{starttime}) / $cpuinfo->{user_hz});

         my $cgroup = PVE::QemuServer::CGroup->new($vmid);
-        my $cgroup_mem = eval { $cgroup->get_memory_stat() } // {};
-        warn "unable to get memory stat for $vmid - $@" if $@;
+        my $cgroup_mem = $cgroup->get_memory_stat();
         $d->{memhost} = $cgroup_mem->{mem} // 0;
         $d->{mem} = $d->{memhost}; # default to cgroup, balloon info can override this below
@@ -2713,7 +2718,7 @@ sub vmstatus {
     $qmpclient->queue_cmd($qmp_peer, $blockstatscb, 'query-blockstats');
     $qmpclient->queue_cmd($qmp_peer, $machinecb, 'query-machines');
     $qmpclient->queue_cmd($qmp_peer, $versioncb, 'query-version');
-    # this fails if balloon driver is not loaded, so this must be
+    # this fails if ballon driver is not loaded, so this must be
     # the last command (following command are aborted if this fails).
     $qmpclient->queue_cmd($qmp_peer, $ballooncb, 'query-balloon');
@@ -2936,13 +2941,17 @@ sub vga_conf_has_spice {
 sub query_supported_cpu_flags {
     my ($arch) = @_;

-    my $host_arch = get_host_arch();
-    $arch //= $host_arch;
+    $arch //= get_host_arch();

     my $default_machine = PVE::QemuServer::Machine::default_machine_for_arch($arch);

     my $flags = {};

-    my $kvm_supported = defined(kvm_version()) && $arch eq $host_arch;
+    # FIXME: Once this is merged, the code below should work for ARM as well:
+    # https://lists.nongnu.org/archive/html/qemu-devel/2019-06/msg04947.html
+    die "QEMU/KVM cannot detect CPU flags on ARM (aarch64)\n"
+        if $arch eq "aarch64";
+
+    my $kvm_supported = defined(kvm_version());
     my $qemu_cmd = PVE::QemuServer::Helpers::get_command_for_arch($arch);
     my $fakevmid = -1;
     my $pidfile = PVE::QemuServer::Helpers::vm_pidfile_name($fakevmid);
@@ -2970,8 +2979,6 @@ sub query_supported_cpu_flags {

         if (!$kvm) {
             push @$cmd, '-accel', 'tcg';
-        } else {
-            push @$cmd, '-cpu', 'host';
         }

         my $rc = run_command($cmd, noerr => 1, quiet => 0);
@@ -2982,7 +2989,7 @@ sub query_supported_cpu_flags {
             $fakevmid,
             'query-cpu-model-expansion',
             type => 'full',
-            model => { name => $kvm ? 'host' : 'max' },
+            model => { name => 'host' },
         );

         my $props = $cmd_result->{model}->{props};
@@ -3125,7 +3132,7 @@ sub config_to_command {
         die "Detected old QEMU binary ('$kvmver', at least 6.0 is required)\n";
     }

-    my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf, $forcemachine);
+    my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf, $forcemachine, $arch);
     my $machine_version = extract_version($machine_type, $kvmver);
     $kvm //= 1 if is_native_arch($arch);
@@ -3661,6 +3668,11 @@ sub config_to_command {
         $d->{bootindex} = $bootorder->{$netname} if $bootorder->{$netname};

         my $netdevfull = print_netdev_full($vmid, $conf, $arch, $d, $netname);
+        if ($d->{bridge} && $d->{bridge} =~ /^vppbr\d+$/) {
+            my $socket = "/var/run/vpp/qemu-${vmid}-${netname}.sock";
+            push @$devices, '-chardev',
+                "socket,id=vhost-user-${netname},path=${socket}";
+        }
         push @$devices, '-netdev', $netdevfull;

         # force +pve1 if machine version 10.0, for host_mtu differentiation
@@ -3730,7 +3742,7 @@ sub config_to_command {
         push @$machineFlags, 'accel=tcg';
     }
     my $power_state_flags =
-        PVE::QemuServer::Machine::get_power_state_flags($machine_conf, $arch, $version_guard);
+        PVE::QemuServer::Machine::get_power_state_flags($machine_conf, $version_guard);
     push $cmd->@*, $power_state_flags->@* if defined($power_state_flags);

     push @$machineFlags, 'smm=off' if should_disable_smm($conf, $vga, $machine_type);
@@ -4129,7 +4141,7 @@ sub qemu_devicedelverify {
         sleep 1;
     }

-    die "error on hot-unplugging device '$deviceid' - still busy in guest?\n";
+    die "error on hot-unplugging device '$deviceid'\n";
 }

 sub qemu_findorcreatescsihw {
@@ -4217,6 +4229,29 @@ sub qemu_netdevadd {
     my ($vmid, $conf, $arch, $device, $deviceid) = @_;

     my $netdev = print_netdev_full($vmid, $conf, $arch, $device, $deviceid, 1);
+
+    # For VPP bridges, add chardev first then netdev via QMP
+    if ($device->{bridge} && $device->{bridge} =~ /^vppbr\d+$/) {
+        my $socket = "/var/run/vpp/qemu-${vmid}-${deviceid}.sock";
+        mon_cmd(
+            $vmid, "chardev-add",
+            id => "vhost-user-${deviceid}",
+            backend => {
+                type => 'socket',
+                data => {
+                    addr => { type => 'unix', data => { path => $socket } },
+                    server => JSON::true,
+                    wait => JSON::false,
+                },
+            },
+        );
+        my %options = split(/[=,]/, $netdev);
+        mon_cmd($vmid, "netdev_add", %options);
+        # Connect VPP side
+        vpp_connect_vhost_nets($conf, $vmid);
+        return 1;
+    }
+
     my %options = split(/[=,]/, $netdev);

     if (defined(my $vhost = $options{vhost})) {
@@ -4356,7 +4391,7 @@ sub qemu_volume_snapshot {
         print "external qemu snapshot\n";
         my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
         my $parent_snap = $snapshots->{'current'}->{parent};
-        PVE::QemuServer::VolumeChain::blockdev_external_snapshot(
+        PVE::QemuServer::Blockdev::blockdev_external_snapshot(
             $storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap,
         );
     } elsif ($do_snapshots_type eq 'storage') {
@@ -4414,7 +4449,7 @@ sub qemu_volume_snapshot_delete {
     # improve-me: if firstsnap > child : commit, if firstsnap < child do a stream.
     if (!$parentsnap) {
         print "delete first snapshot $snap\n";
-        PVE::QemuServer::VolumeChain::blockdev_commit(
+        PVE::QemuServer::Blockdev::blockdev_commit(
             $storecfg,
             $vmid,
             $machine_version,
@@ -4426,7 +4461,7 @@ sub qemu_volume_snapshot_delete {

         PVE::Storage::rename_snapshot($storecfg, $volid, $snap, $childsnap);

-        PVE::QemuServer::VolumeChain::blockdev_replace(
+        PVE::QemuServer::Blockdev::blockdev_replace(
             $storecfg,
             $vmid,
             $machine_version,
@@ -4439,7 +4474,7 @@ sub qemu_volume_snapshot_delete {
     } else {
         #intermediate snapshot, we always stream the snapshot to child snapshot
         print "stream intermediate snapshot $snap to $childsnap\n";
-        PVE::QemuServer::VolumeChain::blockdev_stream(
+        PVE::QemuServer::Blockdev::blockdev_stream(
             $storecfg,
             $vmid,
             $machine_version,
@@ -4556,7 +4591,7 @@ sub vmconfig_hotplug_pending {
     my $defaults = load_defaults();
     my $arch = PVE::QemuServer::Helpers::get_vm_arch($conf);
-    my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf);
+    my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf, undef, $arch);

     # commit values which do not have any impact on running VM first
     # Note: those option cannot raise errors, we we do not care about
@@ -4759,7 +4794,7 @@ sub vmconfig_hotplug_pending {
                 die "skip\n" if !$hotplug_features->{cpu};
                 qemu_cpu_hotplug($vmid, $conf, $value);
             } elsif ($opt eq 'balloon') {
-                # enable/disable ballooning device is not hotpluggable
+                # enable/disable balloning device is not hotpluggable
                 my $old_balloon_enabled = !!(!defined($conf->{balloon}) || $conf->{balloon});
                 my $new_balloon_enabled =
                     !!(!defined($conf->{pending}->{balloon}) || $conf->{pending}->{balloon});
@@ -4971,7 +5006,6 @@ sub vmconfig_apply_pending {
                         $old_drive,
                         $new_drive,
                     );
-                    $conf->{pending}->{$opt} = print_drive($new_drive);
                 }
             } elsif (defined($conf->{pending}->{$opt}) && $opt =~ m/^net\d+$/) {
                 my $new_net = PVE::QemuServer::Network::parse_net($conf->{pending}->{$opt});
@@ -5138,6 +5172,49 @@ sub vmconfig_update_net {
     }
 }

+sub vpp_connect_vhost_nets {
+    my ($conf, $vmid) = @_;
+
+    return if !-x '/usr/bin/vppctl';
+
+    foreach my $opt (keys %$conf) {
+        next if $opt !~ m/^net(\d+)$/;
+        my $net = PVE::QemuServer::Network::parse_net($conf->{$opt});
+        next if !$net || !$net->{bridge} || $net->{bridge} !~ /^vppbr(\d+)$/;
+
+        my $bd_id = $1;
+        my $socket = "/var/run/vpp/qemu-${vmid}-${opt}.sock";
+
+        eval {
+            my $iface_name = '';
+            PVE::Tools::run_command(
+                [
+                    '/usr/bin/vppctl', 'create', 'vhost-user',
+                    'socket', $socket, 'server',
+                ],
+                outfunc => sub { $iface_name .= $_[0]; },
+                timeout => 10,
+            );
+            $iface_name =~ s/^\s+|\s+$//g;
+            die "vppctl did not return interface name\n" if !$iface_name;
+
+            PVE::Tools::run_command(
+                ['/usr/bin/vppctl', 'set', 'interface', 'state', $iface_name, 'up'],
+                timeout => 5,
+            );
+            PVE::Tools::run_command(
+                [
+                    '/usr/bin/vppctl', 'set', 'interface', 'l2', 'bridge',
+                    $iface_name, $bd_id,
+                ],
+                timeout => 5,
+            );
+            print "VPP: connected $iface_name to bridge-domain $bd_id via $socket\n";
+        };
+        warn "VPP vhost-user setup failed for $opt: $@" if $@;
+    }
+}
+
 sub vmconfig_update_agent {
     my ($conf, $opt, $value) = @_;
@@ -5402,18 +5479,16 @@ my sub check_efi_vars {
     return if PVE::QemuConfig->is_template($conf);
     return if !$conf->{efidisk0};
+    return if !$conf->{ostype};
+    return if $conf->{ostype} ne 'win10' && $conf->{ostype} ne 'win11';

     my $efidisk = parse_drive('efidisk0', $conf->{efidisk0});
     if (PVE::QemuServer::OVMF::should_enroll_ms_2023_cert($efidisk)) {
         # TODO: make the first print a log_warn with PVE 9.2 to make it more noticeable!
-        print "EFI disk without 'ms-cert=2023k' option, suggesting that not all UEFI 2023\n";
-        print "certificates from Microsoft are enrolled yet. The UEFI 2011 certificates expire\n";
-        print
-            "in June 2026! The new certificates are required for secure boot update for Windows\n";
-        print "and common Linux distributions. Use 'Disk Action > Enroll Updated Certificates'\n";
-        print "in the UI or, while the VM is shut down, run 'qm enroll-efi-keys $vmid' to enroll\n";
-        print "the new certificates.\n\n";
-        print "For Windows with BitLocker, run the following command inside Powershell:\n";
+        print "EFI disk without 'ms-cert=2023w' option, suggesting that the Microsoft UEFI 2023"
+            . " certificate is not enrolled yet. The UEFI 2011 certificate expires in June 2026!\n";
+        print "While the VM is shut down, run 'qm enroll-efi-keys $vmid' to enroll it.\n";
+        print "If the VM uses BitLocker, run the following command inside Windows Powershell:\n";
         print "    manage-bde -protectors -disable <drive>\n";
         print "for each drive with BitLocker (for example, <drive> could be 'C:').\n";
     }
@@ -5564,6 +5639,9 @@ sub vm_start_nolock {

     PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'pre-start', 1);

+    # VPP bridges require shared memory (hugepages) for vhost-user to work
+
+
     my $forcemachine = $params->{forcemachine};
     my $forcecpu = $params->{forcecpu};
     my $nets_host_mtu = $params->{'nets-host-mtu'};
@@ -5756,6 +5834,7 @@ sub vm_start_nolock {
         }
     }

+    vpp_connect_vhost_nets($conf, $vmid);
     my $exitcode = run_command($cmd, %run_params);

     eval { PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets); };
     log_warn("closing virtiofs sockets failed - $@") if $@;
@@ -7246,7 +7325,7 @@ sub pbs_live_restore {
             $live_restore_backing->{$confname} = { name => $pbs_name };

             # add blockdev information
-            my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf);
+            my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf, undef, $conf->{arch});
             my $machine_version = PVE::QemuServer::Machine::extract_version(
                 $machine_type,
                 PVE::QemuServer::Helpers::kvm_user_version(),
@@ -7297,7 +7376,9 @@ sub pbs_live_restore {
         }

         mon_cmd($vmid, 'cont');
-        PVE::QemuServer::BlockJob::monitor($vmid, undef, $jobs, 'auto', 0, 'stream');
+        PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+            $vmid, undef, $jobs, 'auto', 0, 'stream',
+        );

         print "restore-drive jobs finished successfully, removing all tracking block devices"
             . " to disconnect from Proxmox Backup Server\n";
@@ -7353,7 +7434,7 @@ sub live_import_from_files {
         $live_restore_backing->{$dev} = { name => "drive-$dev-restore" };

-        my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf);
+        my $machine_type = PVE::QemuServer::Machine::get_vm_machine($conf, undef, $conf->{arch});
         my $machine_version = PVE::QemuServer::Machine::extract_version(
             $machine_type,
             PVE::QemuServer::Helpers::kvm_user_version(),
@@ -7416,7 +7497,9 @@ sub live_import_from_files {
     }

     mon_cmd($vmid, 'cont');
-    PVE::QemuServer::BlockJob::monitor($vmid, undef, $jobs, 'auto', 0, 'stream');
+    PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+        $vmid, undef, $jobs, 'auto', 0, 'stream',
+    );

     print "restore-drive jobs finished successfully, removing all tracking block devices\n";
@@ -7924,7 +8007,9 @@ sub clone_disk {
     # if this is the case, we have to complete any block-jobs still there from
     # previous drive-mirrors
     if (($completion && $completion eq 'complete') && (scalar(keys %$jobs) > 0)) {
-        PVE::QemuServer::BlockJob::monitor($vmid, $newvmid, $jobs, $completion, $qga);
+        PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+            $vmid, $newvmid, $jobs, $completion, $qga,
+        );
     }
     goto no_data_clone;
 }
diff --git a/src/PVE/QemuServer/Memory.pm b/src/PVE/QemuServer/Memory.pm
index 7ebfc545..2765751e 100644
--- a/src/PVE/QemuServer/Memory.pm
+++ b/src/PVE/QemuServer/Memory.pm
@@ -476,6 +476,10 @@ sub config {
         push @$cmd, '-object',
             'memory-backend-memfd,id=virtiofs-mem' . ",size=$conf->{memory}M,share=on";
         push @$machine_flags, 'memory-backend=virtiofs-mem';
+    } elsif (has_vpp_bridge($conf)) {
+        push @$cmd, '-object',
+            'memory-backend-memfd,id=vpp-mem' . ",size=$conf->{memory}M,share=on";
+        push @$machine_flags, 'memory-backend=vpp-mem';
     }

     if ($hotplug) {
@@ -499,6 +503,16 @@ sub config {
     }
 }

+sub has_vpp_bridge {
+    my ($conf) = @_;
+    for my $opt (keys %$conf) {
+        next if $opt !~ m/^net\d+$/;
+        my $net = PVE::QemuServer::Network::parse_net($conf->{$opt});
+        return 1 if $net && $net->{bridge} && $net->{bridge} =~ /^vppbr\d+$/;
+    }
+    return 0;
+}
+
 sub print_mem_object {
     my ($conf, $id, $size) = @_;

@@ -508,7 +522,7 @@ sub print_mem_object {
         my $path = hugepages_mount_path($hugepages_size);

         return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
-    } elsif ($id =~ m/^virtiofs-mem/) {
+    } elsif ($id =~ m/^(?:virtiofs|vpp)-mem/) {
         return "memory-backend-memfd,id=$id,size=${size}M,share=on";
     } else {
         return "memory-backend-ram,id=$id,size=${size}M";
-- 
2.50.1 (Apple Git-155)

^ permalink raw reply	[flat|nested] 11+ messages in thread
* [RFC PATCH qemu-server 2/2] qemu: VPP: clean up vhost-user interfaces on stop, fix tx_queue_size
  2026-03-17 11:14 ` Ryosuke Nakayama
  2026-03-17 11:14 ` [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support Ryosuke Nakayama
@ 2026-03-17 11:14 ` Ryosuke Nakayama
  2026-03-17 11:26 ` [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support Ryosuke Nakayama
  2 siblings, 0 replies; 11+ messages in thread
From: Ryosuke Nakayama @ 2026-03-17 11:14 UTC (permalink / raw)
To: pve-devel; +Cc: unixtech

From: unixtech <ryosuke_666@icloud.com>

- add vpp_cleanup_vhost_nets() to delete VirtualEthernet interfaces
  when VM stops
- set tx_queue_size=1024 for VPP bridge interfaces (was 256, causing
  RX bottleneck)

Signed-off-by: Ryosuke Nakayama <ryosuke.nakayama@ryskn.com>
Signed-off-by: unixtech <ryosuke.nakayama@ryskn.com>
---
 src/PVE/QemuServer.pm | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index cf1c9e9f..e946e0f8 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -1351,7 +1351,8 @@ sub print_netdevice_full {
     }

     if (min_version($machine_version, 7, 1) && $net->{model} eq 'virtio') {
-        $tmpstr .= ",rx_queue_size=1024,tx_queue_size=256";
+        my $tx_queue_size = ($net->{bridge} && $net->{bridge} =~ /^vppbr\d+$/) ? 1024 : 256;
+        $tmpstr .= ",rx_queue_size=1024,tx_queue_size=$tx_queue_size";
     }

     $tmpstr .= ",bootindex=$net->{bootindex}" if $net->{bootindex};
@@ -5215,6 +5216,39 @@ sub vpp_connect_vhost_nets {
     }
 }

+sub vpp_cleanup_vhost_nets {
+    my ($conf, $vmid) = @_;
+
+    return if !-x '/usr/bin/vppctl';
+
+    foreach my $opt (keys %$conf) {
+        next if $opt !~ m/^net\d+$/;
+        my $net = PVE::QemuServer::Network::parse_net($conf->{$opt});
+        next if !$net || !$net->{bridge} || $net->{bridge} !~ /^vppbr\d+$/;
+
+        my $socket = "/var/run/vpp/qemu-${vmid}-${opt}.sock";
+
+        eval {
+            my $ifaces = '';
+            PVE::Tools::run_command(
+                ['/usr/bin/vppctl', 'show', 'vhost-user'],
+                outfunc => sub { $ifaces .= $_[0] . "\n"; },
+                timeout => 5,
+            );
+            while ($ifaces =~ /^Interface:\s+(\S+).*socket filename\s+\Q$socket\E/ms) {
+                my $iface = $1;
+                PVE::Tools::run_command(
+                    ['/usr/bin/vppctl', 'delete', 'vhost-user', $iface],
+                    timeout => 5,
+                );
+                print "VPP: deleted vhost-user interface $iface for $opt\n";
+                last;
+            }
+        };
+        warn "VPP vhost-user cleanup failed for $opt: $@" if $@;
+    }
+}
+
 sub vmconfig_update_agent {
     my ($conf, $opt, $value) = @_;

@@ -6233,6 +6267,8 @@ sub vm_stop_cleanup {

         cleanup_pci_devices($vmid, $conf);

+        vpp_cleanup_vhost_nets($conf, $vmid);
+
         vmconfig_apply_pending($vmid, $conf, $storecfg) if $apply_pending_changes;
     };
     if (my $err = $@) {
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply	[flat|nested] 11+ messages in thread
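A review note on the cleanup matcher: because the Perl regex anchors at the first `Interface:` line and lets `.*` span blocks under `/s`, it can report an earlier interface than the one that actually owns the socket once several vhost-user interfaces exist. A per-block scan avoids that. The sketch below is Python, and the `show vhost-user` output layout in `sample` is an assumption modeled on the regex, not verified VPP output:

```python
import re

def find_vhost_iface(show_output, socket):
    """Return the interface name owning 'socket', or None.

    Parses the listing one 'Interface:' block at a time, so the socket
    is attributed to the block it actually appears in.
    """
    blocks = re.split(r'(?m)^(?=Interface:)', show_output)
    for block in blocks:
        m = re.match(r'Interface:\s+(\S+)', block)
        if m and f'socket filename {socket}' in block:
            return m.group(1)
    return None

# Assumed output shape for illustration only.
sample = """Virtio vhost-user interfaces
Interface: VirtualEthernet0/0/0 (ifindex 3)
  socket filename /var/run/vpp/qemu-100-net0.sock type server
Interface: VirtualEthernet0/0/1 (ifindex 4)
  socket filename /var/run/vpp/qemu-101-net0.sock type server
"""
print(find_vhost_iface(sample, "/var/run/vpp/qemu-101-net0.sock"))
# → VirtualEthernet0/0/1
```

With the patch's single multiline regex, the same query would capture the first interface name, since the match always starts at the first `Interface:` line.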
* Re: [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support
  2026-03-17 11:14 ` Ryosuke Nakayama
  2026-03-17 11:14 ` [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support Ryosuke Nakayama
  2026-03-17 11:14 ` [RFC PATCH qemu-server 2/2] qemu: VPP: clean up vhost-user interfaces on stop, fix tx_queue_size Ryosuke Nakayama
@ 2026-03-17 11:26 ` Ryosuke Nakayama
  2 siblings, 0 replies; 11+ messages in thread
From: Ryosuke Nakayama @ 2026-03-17 11:26 UTC (permalink / raw)
To: pve-devel

The qemu-server patches were accidentally authored with an old git
identity (unixtech <ryosuke_666@icloud.com>), resulting in a duplicate
Signed-off-by and an unintended Cc. This will be corrected in v2.

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
  2026-03-16 22:28 [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
                   ` (3 preceding siblings ...)
  2026-03-17 10:18 ` DERUMIER, Alexandre
@ 2026-03-17 11:21 ` Ryosuke Nakayama
  2026-03-17 11:21 ` [RFC PATCH pve-common] network: add VPP bridge helpers for vhost-user dataplane Ryosuke Nakayama
  4 siblings, 1 reply; 11+ messages in thread
From: Ryosuke Nakayama @ 2026-03-17 11:21 UTC (permalink / raw)
To: pve-devel

Sorry, I forgot to include a required patch for pve-common in the
original series. This patch adds helpers to PVE::Network to detect VPP
bridges and skip the Linux tap/bridge path during VM startup. Without
it, QEMU will fail to start when a VM NIC is attached to a VPP bridge.

The full series should be:

  [RFC PATCH pve-common] network: add VPP bridge helpers for vhost-user dataplane
  [RFC PATCH manager 1/2] api: network: add VPP (fd.io) dataplane bridge support
  [RFC PATCH widget-toolkit 2/2] ui: network: add VPP (fd.io) bridge type support
  [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support
  [RFC PATCH qemu-server 2/2] qemu: VPP: clean up vhost-user interfaces on stop, fix tx_queue_size

The pve-common patch is attached.

^ permalink raw reply	[flat|nested] 11+ messages in thread
* [RFC PATCH pve-common] network: add VPP bridge helpers for vhost-user dataplane
  2026-03-17 11:21 ` [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
@ 2026-03-17 11:21 ` Ryosuke Nakayama
  0 siblings, 0 replies; 11+ messages in thread
From: Ryosuke Nakayama @ 2026-03-17 11:21 UTC (permalink / raw)
To: pve-devel; +Cc: unixtech

From: unixtech <ryosuke_666@icloud.com>

- add is_vpp_bridge() to detect vppbrN interfaces
- skip tap_plug() for VPP bridges (vhost-user sockets are used instead
  of tap interfaces)
- skip read_bridge_mtu() kernel path for VPP bridges, return 1500

Signed-off-by: Ryosuke Nakayama <ryosuke.nakayama@ryskn.com>
Signed-off-by: unixtech <ryosuke.nakayama@ryskn.com>
---
 src/PVE/Network.pm | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/PVE/Network.pm b/src/PVE/Network.pm
index 573e34e..2c57ddb 100644
--- a/src/PVE/Network.pm
+++ b/src/PVE/Network.pm
@@ -128,6 +128,8 @@ sub tap_rate_limit {
 sub read_bridge_mtu {
     my ($bridge) = @_;

+    return 1500 if is_vpp_bridge($bridge);
+
     my $mtu = PVE::Tools::file_read_firstline("/sys/class/net/$bridge/mtu");
     die "bridge '$bridge' does not exist\n" if !$mtu;

@@ -515,6 +517,9 @@ sub tap_plug {
     $opts = {} if !defined($opts);
     $opts = { learning => $opts } if !ref($opts); # FIXME: backward compat, drop with PVE 8.0
+    # VPP bridges use vhost-user sockets, not tap devices
+    return if is_vpp_bridge($bridge);
+
     if (!defined($opts->{learning})) { # auto-detect
         $opts = {} if !defined($opts);
         my $interfaces_config = PVE::INotify::read_file('interfaces');

@@ -966,6 +971,11 @@ sub is_ovs_bridge {
     die "failed to query OVS to determine type of '$bridge': $res\n";
 }

+sub is_vpp_bridge {
+    my ($bridge) = @_;
+    return defined($bridge) && $bridge =~ /^vppbr\d+$/;
+}
+
 # for backward compat, prefer the methods from the leaner IPRoute2 module.
 sub ip_link_details {
     return PVE::IPRoute2::ip_link_details();
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply	[flat|nested] 11+ messages in thread
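For reviewers, the two pve-common behaviors — the strict vppbrN name check and the fixed-1500 MTU fallback (there is no kernel netdev, so no /sys/class/net/<bridge>/mtu to read) — can be modeled outside Perl. Illustrative Python only; `sysfs_read` is a hypothetical stand-in for the PVE::Tools file read:

```python
import re

def is_vpp_bridge(bridge):
    # Mirrors the Perl helper: the name must be exactly vppbr<digits>,
    # so VLAN subinterfaces like "vppbr0.100" do not match.
    return bridge is not None and re.fullmatch(r'vppbr\d+', bridge) is not None

def read_bridge_mtu(bridge, sysfs_read):
    """VPP bridges have no kernel device, so fall back to 1500 instead of
    reading sysfs; otherwise behave like the original helper."""
    if is_vpp_bridge(bridge):
        return 1500
    mtu = sysfs_read(f"/sys/class/net/{bridge}/mtu")
    if mtu is None:
        raise RuntimeError(f"bridge '{bridge}' does not exist")
    return int(mtu)

print(is_vpp_bridge("vppbr0"), is_vpp_bridge("vppbr0.100"), is_vpp_bridge("vmbr0"))
# → True False False
```

The anchored match matters: an unanchored /vppbr\d+/ would also short-circuit tap_plug() for any interface merely containing that substring.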
end of thread, other threads:[~2026-03-17 11:26 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-16 22:28 [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
2026-03-16 22:28 ` [RFC PATCH manager 1/2] api: network: add VPP (fd.io) dataplane bridge support Ryosuke Nakayama
2026-03-16 22:28 ` [RFC PATCH widget-toolkit 2/2] ui: network: add VPP (fd.io) bridge type support Ryosuke Nakayama
2026-03-17  6:39 ` [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Stefan Hanreich
2026-03-17 10:18 ` DERUMIER, Alexandre
2026-03-17 11:14 ` Ryosuke Nakayama
2026-03-17 11:14   ` [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support Ryosuke Nakayama
2026-03-17 11:14   ` [RFC PATCH qemu-server 2/2] qemu: VPP: clean up vhost-user interfaces on stop, fix tx_queue_size Ryosuke Nakayama
2026-03-17 11:26   ` [RFC PATCH qemu-server 1/2] qemu: add VPP vhost-user dataplane support Ryosuke Nakayama
2026-03-17 11:21 ` [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
2026-03-17 11:21   ` [RFC PATCH pve-common] network: add VPP bridge helpers for vhost-user dataplane Ryosuke Nakayama