From: Stefan Hanreich <s.hanreich@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: Re: [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
Date: Tue, 17 Mar 2026 07:39:38 +0100
Message-ID: <2d25220b-5133-4dc5-89f3-2f237a5dbcb7@proxmox.com>
In-Reply-To: <20260316222816.42944-1-ryosuke.nakayama@ryskn.com>

Hi!

Thanks for your contribution!

I was already following the discussion in the linked forum thread and
briefly discussed this proposal with a colleague, but I haven't yet
found the time to look into VPP itself and form an opinion. I'll take
a closer look in the coming days and give the patches a spin on my
machine.

On 3/16/26 11:27 PM, Ryosuke Nakayama wrote:
> From: ryskn <ryosuke.nakayama@ryskn.com>
>
> This RFC series integrates VPP (Vector Packet Processing, fd.io) as
> an optional userspace dataplane alongside OVS in Proxmox VE.
>
> VPP is a DPDK-based, userspace packet processing framework that
> provides VM networking via vhost-user sockets. It is already used in
> production by several cloud/telecom stacks. The motivation here is to
> expose VPP bridge domains natively in the PVE WebUI and REST API,
> following the same pattern as the existing OVS integration.
>
> Background and prior discussion:
> https://forum.proxmox.com/threads/interest-in-vpp-vector-packet-processing-as-a-dataplane-option-for-proxmox.181530/
>
> Note: the benchmark figures quoted in that forum thread are slightly
> off due to test configuration differences. Please use the numbers in
> this cover letter instead.
>
> --- What the patches do ---
>
> Patch 1 (pve-manager):
> - Detect VPP bridges via 'vppctl show bridge-domain' and expose
> them as type=VPPBridge in the network interface list
> - Create/delete VPP bridge domains via vppctl (see the sketch after
>   this list)
> - Persist bridge domains to /etc/vpp/pve-bridges.conf (exec'd at
> VPP startup) so they survive reboots
> - Support vpp_vlan_aware flag (maps to bridge-domain learn flag)
> - VPP VLAN subinterface create/delete/list, persisted to
> /etc/vpp/pve-vlans.conf
> - Exclude VPP bridges from the SDN-only access guard so they appear
> in the WebUI NIC selector
> - Vhost-user socket convention:
> /var/run/vpp/qemu-<vmid>-<net>.sock
> - pve8to9: add upgrade checker for VPP dependencies
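>
> For illustration, the vppctl plumbing behind the bullets above looks
> roughly like this. This is a sketch, not verbatim from the patch: the
> bridge-domain id, interface name and flag values are assumptions, and
> some VPP releases spell the first command 'create vhost socket':
>
>   vppctl create vhost-user socket /var/run/vpp/qemu-100-net0.sock server
>   vppctl create bridge-domain 100 learn 1 forward 1 uu-flood 1 flood 1
>   vppctl set interface l2 bridge VirtualEthernet0/0/0 100
>   vppctl set interface state VirtualEthernet0/0/0 up
>   vppctl create sub-interfaces VirtualEthernet0/0/0 100   # VLAN case
>
> Persistence then amounts to replaying the same CLI lines at startup,
> e.g. via the exec stanza of /etc/vpp/startup.conf:
>
>   unix {
>     exec /etc/vpp/pve-bridges.conf
>   }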
>
> Patch 2 (proxmox-widget-toolkit):
> - Add VPPBridge/VPPVlan to network_iface_types (Utils.js)
> - NetworkView: VPPBridge and VPPVlan entries in the Create menu;
> render vlan-raw-device in Ports/Slaves column for VPPVlan;
> vpp_vlan_aware support in VLAN aware column
> - NetworkEdit: vppbrN name validator; vpp_bridge field for VPPVlan;
> hide MTU/Autostart/IP fields for VPP types; use VlanName vtype
> for VPPVlan (allows dot notation, e.g. tap0.100)
>
> --- Testing ---
>
> Due to the absence of physical NICs in my test environment, all
> benchmarks were performed as VM-to-VM communication over the
> hypervisor's virtual switch (vmbr1 or a VPP bridge domain). The
> results therefore reflect virtual switching overhead rather than
> physical NIC performance, which is where VPP's DPDK polling would
> show a larger advantage.
>
> Host: Proxmox VE 8.x (Intel Xeon), VMs: Debian 12 (virtio-net q=1)
> VPP: 24.06, coalescing: frames=32 time=0.5ms, polling mode
>
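> Concretely, each VM in the VPP runs is attached through a standard
> QEMU vhost-user netdev backed by shared hugepage memory. A minimal
> sketch of the relevant arguments (ids and paths are illustrative;
> the actual command line is generated by the hypervisor side):
>
>   qemu-system-x86_64 ... \
>     -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
>     -numa node,memdev=mem0 \
>     -chardev socket,id=chr0,path=/var/run/vpp/qemu-100-net0.sock \
>     -netdev type=vhost-user,id=net0,chardev=chr0,queues=1 \
>     -device virtio-net-pci,netdev=net0
>
> Because VPP opens the socket in server mode, QEMU connects as the
> client and must back guest RAM with share=on memory so the dataplane
> can map it.
>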
> iperf3 / netperf (single queue, VM-to-VM):
>
> Metric               vmbr1          VPP (vhost-user)
> iperf3               31.0 Gbits/s   13.2 Gbits/s
> netperf TCP_STREAM   32,243 Mbps    13,181 Mbps
> netperf TCP_RR       15,734 tx/s    989 tx/s
>
> VPP's raw throughput is lower than vmbr1's in this VM-to-VM setup,
> largely due to vhost-user coalescing latency. Physical NIC testing
> (DPDK PMD) is expected to close or reverse this gap.
>
> gRPC (unary, grpc-flow-bench, single queue, VM-to-VM):
>
> Flows   Metric    vmbr1      VPP
> 100     RPS       32,847     39,742
> 100     p99 lat   7.28 ms    6.16 ms
> 1000    RPS       40,315     41,139
> 1000    p99 lat   48.96 ms   31.96 ms
>
> VPP's userspace polling removes kernel scheduler jitter, an effect
> that is visible in the gRPC p99 latencies even in this VM-to-VM
> scenario.
>
> --- Known limitations / TODO ---
>
> - No ifupdown2 integration yet; VPP config is managed separately via
> /etc/vpp/pve-bridges.conf and pve-vlans.conf
> - No live migration path for vhost-user sockets (sockets must be
> pre-created on the target host)
> - OVS and VPP cannot share the same physical NIC in this
> implementation
> - VPP must be installed and running independently, i.e. not managed
>   by PVE (see the note below)
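>
> For instance, getting a dataplane up on a stock PVE host currently
> looks roughly like this (package names from the fd.io repositories;
> an assumption, not part of these patches), followed by one-time
> hugepage/DPDK tuning in /etc/vpp/startup.conf:
>
>   apt-get install vpp vpp-plugin-core vpp-plugin-dpdk
>   systemctl enable --now vpp
>   vppctl show version   # confirm the dataplane is up before creating bridges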
>
> --- CLA ---
>
> Individual CLA has been submitted to office@proxmox.com.
>
> ---
>
> ryskn (2):
> api: network: add VPP (fd.io) dataplane bridge support
> ui: network: add VPP (fd.io) bridge type support
>
> PVE/API2/Network.pm | 413 ++++++++++++++++++++++++++-
> PVE/API2/Nodes.pm | 19 ++
> PVE/CLI/pve8to9.pm | 48 ++++
> www/manager6/form/BridgeSelector.js | 5 +
> www/manager6/lxc/Network.js | 34 +++
> www/manager6/node/Config.js | 1 +
> www/manager6/qemu/NetworkEdit.js | 27 ++
> www/manager6/window/Migrate.js | 48 ++++
> src/Utils.js | 2 +
> src/node/NetworkEdit.js | 64 ++++-
> src/node/NetworkView.js | 35 +++
> 11 files changed, 675 insertions(+), 21 deletions(-)
>