* [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
@ 2026-03-16 22:28 Ryosuke Nakayama
  2026-03-16 22:28 ` [RFC PATCH manager 1/2] api: network: add VPP (fd.io) dataplane bridge support Ryosuke Nakayama
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Ryosuke Nakayama @ 2026-03-16 22:28 UTC (permalink / raw)
  To: pve-devel

From: ryskn <ryosuke.nakayama@ryskn.com>

This RFC series integrates VPP (Vector Packet Processing, fd.io) as an
optional userspace dataplane alongside OVS in Proxmox VE.

VPP is a DPDK-based, userspace packet processing framework that
provides VM networking via vhost-user sockets. It is already used in
production by several cloud/telecom stacks. The motivation here is to
expose VPP bridge domains natively in the PVE WebUI and REST API,
following the same pattern as OVS integration.

Background and prior discussion:
  https://forum.proxmox.com/threads/interest-in-vpp-vector-packet-processing-as-a-dataplane-option-for-proxmox.181530/

Note: the benchmark figures quoted in that forum thread are slightly
off due to test configuration differences. Please use the numbers in
this cover letter instead.

--- What the patches do ---

Patch 1 (pve-manager):
  - Detect VPP bridges via 'vppctl show bridge-domain' and expose
    them as type=VPPBridge in the network interface list
  - Create/delete VPP bridge domains via vppctl
  - Persist bridge domains to /etc/vpp/pve-bridges.conf (exec'd at
    VPP startup) so they survive reboots
  - Support vpp_vlan_aware flag (maps to bridge-domain learn flag)
  - VPP VLAN subinterface create/delete/list, persisted to
    /etc/vpp/pve-vlans.conf
  - Exclude VPP bridges from the SDN-only access guard so they appear
    in the WebUI NIC selector
  - Vhost-user socket convention:
    /var/run/vpp/qemu-<vmid>-<net>.sock
  - pve8to9: add upgrade checker for VPP dependencies
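The naming and persistence conventions above can be sketched as follows.
This is an illustration, not the actual Perl from the patch: the helper
names are made up, but the socket path follows the convention stated
above, 'create bridge-domain ... learn' is standard VPP CLI, and the
vpp_vlan_aware-to-learn mapping follows the patch description.

```python
def vhost_sock_path(vmid: int, net: str) -> str:
    """Vhost-user socket convention from this series:
    /var/run/vpp/qemu-<vmid>-<net>.sock"""
    return f"/var/run/vpp/qemu-{vmid}-{net}.sock"

def bridge_persist_line(bd_id: int, vlan_aware: bool = True) -> str:
    """One line of /etc/vpp/pve-bridges.conf, exec'd at VPP startup.
    Mapping vpp_vlan_aware onto the bridge-domain 'learn' flag follows
    the description in this cover letter."""
    return f"create bridge-domain {bd_id} learn {int(vlan_aware)}"
```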

Patch 2 (proxmox-widget-toolkit):
  - Add VPPBridge/VPPVlan to network_iface_types (Utils.js)
  - NetworkView: VPPBridge and VPPVlan entries in the Create menu;
    render vlan-raw-device in Ports/Slaves column for VPPVlan;
    vpp_vlan_aware support in VLAN aware column
  - NetworkEdit: vppbrN name validator; vpp_bridge field for VPPVlan;
    hide MTU/Autostart/IP fields for VPP types; use VlanName vtype
    for VPPVlan (allows dot notation, e.g. tap0.100)
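Rough equivalents of the validators described above, for reviewers who
want the intent at a glance. The shipped validators are ExtJS vtypes in
widget-toolkit; these regexes are loose approximations, not the exact
patterns from the patch.

```python
import re

# Approximate stand-ins for the ExtJS validators described above.
VPP_BRIDGE_RE = re.compile(r"vppbr\d+")            # vppbrN name validator
VLAN_NAME_RE = re.compile(r"[a-zA-Z][\w.]*\.\d+")  # dot notation, e.g. tap0.100

def is_vpp_bridge_name(name: str) -> bool:
    return VPP_BRIDGE_RE.fullmatch(name) is not None

def is_vlan_name(name: str) -> bool:
    return VLAN_NAME_RE.fullmatch(name) is not None
```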

--- Testing ---

Since my test environment has no spare physical NICs, all benchmarks
were run as VM-to-VM traffic over the hypervisor's virtual switch
(vmbr1 or a VPP bridge domain). The results therefore reflect
virtual-switching overhead only; with physical NICs, VPP's DPDK
polling would be expected to show a larger advantage.

Host: Proxmox VE 8.x (Intel Xeon), VMs: Debian 12 (virtio-net q=1)
VPP: 24.06, coalescing: frames=32 time=0.5ms, polling mode

iperf3 / netperf (single queue, VM-to-VM):

  Metric             vmbr1           VPP (vhost-user)
  iperf3             31.0 Gbit/s     13.2 Gbit/s
  netperf TCP_STREAM 32,243 Mbit/s   13,181 Mbit/s
  netperf TCP_RR     15,734 trans/s  989 trans/s

In this VM-to-VM setup VPP's raw throughput is lower than vmbr1's,
largely due to vhost-user coalescing latency. Testing with physical
NICs (DPDK PMD) is expected to close or reverse this gap.

gRPC (unary, grpc-flow-bench, single queue, VM-to-VM):

  Flows  Metric    vmbr1     VPP
  100    RPS       32,847    39,742
  100    p99 lat   7.28 ms   6.16 ms
  1000   RPS       40,315    41,139
  1000   p99 lat   48.96 ms  31.96 ms

VPP's userspace polling removes kernel scheduler jitter, which is
visible in the gRPC latency results even in the VM-to-VM scenario.

--- Known limitations / TODO ---

- No ifupdown2 integration yet; VPP config is managed separately via
  /etc/vpp/pve-bridges.conf and pve-vlans.conf
- No live migration path for vhost-user sockets (sockets must be
  pre-created on the target host)
- OVS and VPP cannot share the same physical NIC in this
  implementation
- VPP must be installed and running independently (not managed by PVE)
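Because VPP runs outside PVE's control, any management code needs a
pre-flight availability check. A minimal sketch (not the actual pve8to9
checker from patch 1; 'show version' is standard VPP CLI):

```python
import shutil
import subprocess

def vpp_available(vppctl: str = "vppctl") -> bool:
    """Return True if a vppctl binary is on PATH and the running VPP
    instance answers a trivial CLI query."""
    if shutil.which(vppctl) is None:
        return False
    try:
        subprocess.run([vppctl, "show", "version"],
                       check=True, capture_output=True, timeout=5)
        return True
    except (subprocess.SubprocessError, OSError):
        return False
```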

--- CLA ---

Individual CLA has been submitted to office@proxmox.com.

---

ryskn (2):
  api: network: add VPP (fd.io) dataplane bridge support
  ui: network: add VPP (fd.io) bridge type support

 PVE/API2/Network.pm                 | 413 ++++++++++++++++++++++++++-
 PVE/API2/Nodes.pm                   |  19 ++
 PVE/CLI/pve8to9.pm                  |  48 ++++
 www/manager6/form/BridgeSelector.js |   5 +
 www/manager6/lxc/Network.js         |  34 +++
 www/manager6/node/Config.js         |   1 +
 www/manager6/qemu/NetworkEdit.js    |  27 ++
 www/manager6/window/Migrate.js      |  48 ++++
 src/Utils.js                        |   2 +
 src/node/NetworkEdit.js             |  64 ++++-
 src/node/NetworkView.js             |  35 +++
 11 files changed, 675 insertions(+), 21 deletions(-)

-- 
2.50.1 (Apple Git-155)





end of thread, other threads:[~2026-03-17 10:34 UTC | newest]

Thread overview: 5+ messages
2026-03-16 22:28 [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Ryosuke Nakayama
2026-03-16 22:28 ` [RFC PATCH manager 1/2] api: network: add VPP (fd.io) dataplane bridge support Ryosuke Nakayama
2026-03-16 22:28 ` [RFC PATCH widget-toolkit 2/2] ui: network: add VPP (fd.io) bridge type support Ryosuke Nakayama
2026-03-17  6:39 ` [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane Stefan Hanreich
2026-03-17 10:18 ` DERUMIER, Alexandre

Service provided by Proxmox Server Solutions GmbH