* [RFC qemu-server] add vhost-user-net backend support
@ 2026-03-12 17:36 Ryosuke Nakayama
From: Ryosuke Nakayama @ 2026-03-12 17:36 UTC (permalink / raw)
To: pve-devel
Subject: [RFC qemu-server] add vhost-user-net backend support
Hi,
I'd like to propose adding vhost-user-net backend support to
qemu-server, enabling high-performance userspace networking
with frameworks like FD.io VPP.
Background
----------
Currently, qemu-server supports tap (with optional vhost kernel
acceleration) and user-mode networking via print_netdev_full()
in QemuServer.pm. QEMU itself already supports vhost-user
backends, but qemu-server does not expose this option.
FD.io VPP is a high-performance userspace forwarding engine
originally developed by Cisco and now under the Linux
Foundation. It uses DPDK for kernel bypass and processes
packets in vectors (batches) rather than one at a time,
maximizing CPU cache efficiency.
The performance difference compared to the Linux kernel stack
is substantial [1]:
- L3 forwarding: VPP achieves 14 Mpps (line rate on 10G)
with just 3 cores, whereas the Linux kernel needs ~26 cores
for the same throughput -- roughly a 9x improvement in
CPU efficiency.
- NAT: VPP reaches 3.2 Mpps with 2 cores, while iptables
requires ~29 cores for the same rate. VPP can push NAT
to full line rate with 12 cores.
VPP also provides a rich plugin ecosystem including SRv6,
VXLAN, WireGuard, IPsec, and PPPoE -- all operating at
near line rate [2]. Its vhost-user backend enables direct
shared-memory connectivity with VMs, bypassing the kernel
entirely for VM-to-VPP traffic.
For Proxmox users running network-intensive workloads, this
would be a significant upgrade over the current Linux Bridge
or OVS dataplane. Concrete use cases include:
- high-performance East-West forwarding between VMs
- SRv6 traffic steering and encapsulation
- PPPoE Access Concentrator termination at line rate
- high-speed NAT for edge/ISP-like setups
- router VM appliances backed by VPP
- VXLAN overlay networks with lower CPU overhead than
kernel-based VXLAN encap/decap
Proposed changes
----------------
1) Extend the net device schema in PVE::QemuServer::Network
to accept a vhost-user socket path as a new backend type.
2) Modify print_netdev_full() to generate the appropriate QEMU
arguments:
-chardev socket,id=<chardev_id>,path=<socket_path>
-netdev vhost-user,id=<netdev_id>,chardev=<chardev_id>
3) Auto-detect available vhost-user sockets (e.g. from a
running VPP instance exposing an L2 bridge-domain) to
simplify configuration.
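To make steps 2 and 3 concrete, here is a minimal sketch in
Python (the eventual patch would of course be Perl in
QemuServer.pm; the function names, the chr-<id> naming scheme,
and the socket directory are illustrative assumptions, not
existing qemu-server code). One caveat the patch would also
need to handle: vhost-user requires guest RAM to live in a
shareable memory backend (e.g. memory-backend-memfd with
share=on), since the backend process maps the guest's memory
directly.

```python
import os
import stat


def find_vhost_user_sockets(sock_dir):
    # Step 3: auto-detect candidate vhost-user sockets by scanning a
    # directory (e.g. where a running VPP instance creates them) for
    # unix domain sockets. The directory path would be configuration,
    # not something qemu-server defines today.
    found = []
    if not os.path.isdir(sock_dir):
        return found
    for name in sorted(os.listdir(sock_dir)):
        path = os.path.join(sock_dir, name)
        try:
            mode = os.stat(path).st_mode
        except OSError:
            continue  # entry vanished between listdir() and stat()
        if stat.S_ISSOCK(mode):
            found.append(path)
    return found


def vhost_user_netdev_args(netid, socket_path):
    # Step 2: build the QEMU arguments. A distinct chardev id keeps
    # the chardev/netdev relationship explicit; the guest additionally
    # needs a share=on memory backend (not shown here).
    chardev_id = f"chr-{netid}"
    return [
        "-chardev", f"socket,id={chardev_id},path={socket_path}",
        "-netdev", f"vhost-user,id={netid},chardev={chardev_id}",
    ]
```

For example, vhost_user_netdev_args("net0", "/run/vpp/vm0.sock")
produces the -chardev/-netdev pair shown in step 2 above.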
Considerations
--------------
- HA: recovery starts the VM fresh on the target node, so it
should work as long as a matching socket path exists there;
vhost-user is otherwise a host-local configuration.
- live migration: block migration for VMs using vhost-user
netdevs, since the socket is local to the source host.
Migration support could be added later if the target host
can provide a matching socket.
- firewall: reject firewall=1 on vhost-user NICs, since
traffic bypasses the kernel and pve-firewall rules would
be silently ineffective.
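For the firewall point, the guard could look like the sketch
below (the dict-based config and key names are stand-ins for
qemu-server's parsed net schema, and the real check would live
in Perl):

```python
def check_vhost_user_net(net):
    # Reject firewall=1 on vhost-user NICs: traffic flows over shared
    # memory between QEMU and the userspace dataplane, so the
    # netfilter-based pve-firewall rules would never see it.
    if net.get("type") == "vhost-user" and net.get("firewall"):
        raise ValueError(
            "firewall is not supported on vhost-user interfaces"
        )
    return net
```

Failing loudly at config time seems preferable to silently
accepting a firewall flag that has no effect.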
I have reviewed the qemu-server source on git.proxmox.com and
believe the change is fairly contained. Happy to write and
submit a patch if there is interest.
This was also discussed on the Proxmox forum:
https://forum.proxmox.com/threads/interest-in-vpp-vector-packet-processing-as-a-dataplane-option-for-proxmox.181530/
Feedback on the approach or any design constraints I should be
aware of would be appreciated.
[1]
https://blog.apnic.net/2020/04/17/kernel-bypass-networking-with-fd-io-and-vpp/
[2] https://fd.io/docs/vpp/v2101/whatisvpp/performance
[3] https://fd.io/docs/whitepapers/FDioVPPwhitepaperJuly2017.pdf
Signed-off-by: Ryosuke Nakayama <ryosuke.nakayama@ryskn.com>