public inbox for pve-devel@lists.proxmox.com
From: Ryosuke Nakayama <ryosuke.nakayama@ryskn.com>
To: pve-devel@lists.proxmox.com
Subject: [RFC qemu-server] add vhost-user-net backend support
Date: Fri, 13 Mar 2026 02:36:13 +0900
Message-ID: <CAFT652U9thuGB_X2kcBH6wJ-GZJmQx0=1qSU08ZAWV_N7_UVRA@mail.gmail.com>

Hi,

I'd like to propose adding vhost-user-net backend support to
qemu-server, enabling high-performance userspace networking
with frameworks like FD.io VPP.

Background
----------

Currently, qemu-server supports tap (with optional vhost kernel
acceleration) and user-mode networking via print_netdev_full()
in QemuServer.pm. QEMU itself already supports vhost-user
backends, but qemu-server does not expose this option.

FD.io VPP is a high-performance userspace forwarding engine
originally developed by Cisco and now under the Linux
Foundation. It uses DPDK for kernel bypass and processes
packets in vectors (batches) rather than one at a time,
maximizing CPU cache efficiency.

The performance difference compared to the Linux kernel stack
is substantial [1]:

- L3 forwarding: VPP achieves 14 Mpps (line rate on 10G)
  with just 3 cores, whereas the Linux kernel needs ~26 cores
  for the same throughput -- roughly a 9x improvement in
  CPU efficiency.

- NAT: VPP reaches 3.2 Mpps with 2 cores, while iptables
  requires ~29 cores for the same rate. VPP can push NAT
  to full line rate with 12 cores.

VPP also provides a rich plugin ecosystem including SRv6,
VXLAN, WireGuard, IPsec, and PPPoE -- all operating at
near-line-rate [2]. Its vhost-user backend enables direct
shared-memory connectivity with VMs, bypassing the kernel
entirely for VM-to-VPP traffic.
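On the VPP side, exposing such a socket takes only a few CLI
calls. The following is a sketch from memory of the VPP CLI
(exact syntax can vary between releases; the socket path,
interface name, and bridge-domain id are assumptions, not
taken from any real deployment):

```shell
# Create a vhost-user interface; VPP acts as the socket server here.
vppctl create vhost-user socket /run/vpp/vm100-net0.sock server
# VPP typically names the new interface VirtualEthernet0/0/<n>.
vppctl set interface state VirtualEthernet0/0/0 up
# Attach it to L2 bridge-domain 1, alongside whatever uplink
# interface is already a member of that bridge-domain.
vppctl set interface l2 bridge VirtualEthernet0/0/0 1
```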

For Proxmox users running network-intensive workloads, this
would be a significant upgrade over the current Linux Bridge
or OVS dataplane. Concrete use cases include:

- high-performance East-West forwarding between VMs
- SRv6 traffic steering and encapsulation
- PPPoE Access Concentrator termination at line rate
- high-speed NAT for edge/ISP-like setups
- router VM appliances backed by VPP
- VXLAN overlay networks with lower CPU overhead than
  kernel-based VXLAN encap/decap

Proposed changes
----------------

1) Extend the net device schema in PVE::QemuServer::Network
   to accept a vhost-user socket path as a new backend type.

2) Modify print_netdev_full() to generate the appropriate QEMU
   arguments:
     -chardev socket,id=<id>,path=<socket_path>
     -netdev vhost-user,id=<id>,chardev=<id>

3) Auto-detect available vhost-user sockets (e.g. from a
   running VPP instance exposing an L2 bridge-domain) to
   simplify configuration.
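Putting the pieces together, the resulting QEMU invocation
might look like the sketch below (VM id, socket path, and MAC
address are made up for illustration). One detail worth
noting: vhost-user requires the guest RAM to live in shared
memory, so qemu-server would also need to generate a share=on
memory backend for such VMs:

```shell
qemu-system-x86_64 -m 4096 \
  -object memory-backend-memfd,id=mem0,size=4096M,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=chr-net0,path=/run/vpp/vm100-net0.sock \
  -netdev vhost-user,id=net0,chardev=chr-net0 \
  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56
```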

Considerations
--------------

- HA: should work, provided every node in the HA group
  exposes a matching vhost-user socket; beyond that,
  vhost-user is a purely host-local configuration.

- live migration: block migration for VMs using vhost-user
  netdevs, since the socket is local to the source host.
  Migration support could be added later if the target host
  can provide a matching socket.

- firewall: reject firewall=1 on vhost-user NICs, since
  traffic bypasses the kernel and pve-firewall rules would
  be silently ineffective.

I have reviewed the qemu-server source on git.proxmox.com and
believe the change is fairly contained. Happy to write and
submit a patch if there is interest.

This was also discussed on the Proxmox forum:
https://forum.proxmox.com/threads/interest-in-vpp-vector-packet-processing-as-a-dataplane-option-for-proxmox.181530/

Feedback on the approach or any design constraints I should be
aware of would be appreciated.

[1]
https://blog.apnic.net/2020/04/17/kernel-bypass-networking-with-fd-io-and-vpp/
[2] https://fd.io/docs/vpp/v2101/whatisvpp/performance
[3] https://fd.io/docs/whitepapers/FDioVPPwhitepaperJuly2017.pdf

Signed-off-by: Ryosuke Nakayama <ryosuke.nakayama@ryskn.com>
