From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <2d25220b-5133-4dc5-89f3-2f237a5dbcb7@proxmox.com>
Date: Tue, 17 Mar 2026 07:39:38 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
To: pve-devel@lists.proxmox.com
References: <20260316222816.42944-1-ryosuke.nakayama@ryskn.com>
Content-Language: en-US
From: Stefan Hanreich
In-Reply-To: <20260316222816.42944-1-ryosuke.nakayama@ryskn.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
List-Id: Proxmox VE development discussion

Hi! Thanks for your contribution!

I was already following the discussion in the linked forum thread and
briefly discussed this proposal with a colleague, but I haven't yet
found the time to take a closer look at VPP itself and form an opinion.
I'll take a closer look in the coming days and give the patches a spin
on my machine.

On 3/16/26 11:27 PM, Ryosuke Nakayama wrote:
> From: ryskn
>
> This RFC series integrates VPP (Vector Packet Processing, fd.io) as an
> optional userspace dataplane alongside OVS in Proxmox VE.
>
> VPP is a DPDK-based, userspace packet-processing framework that
> provides VM networking via vhost-user sockets. It is already used in
> production by several cloud/telecom stacks. The motivation here is to
> expose VPP bridge domains natively in the PVE WebUI and REST API,
> following the same pattern as the OVS integration.
>
> Background and prior discussion:
> https://forum.proxmox.com/threads/interest-in-vpp-vector-packet-processing-as-a-dataplane-option-for-proxmox.181530/
>
> Note: the benchmark figures quoted in that forum thread are slightly
> off due to test configuration differences. Please use the numbers in
> this cover letter instead.
>
> --- What the patches do ---
>
> Patch 1 (pve-manager):
> - Detect VPP bridges via 'vppctl show bridge-domain' and expose
>   them as type=VPPBridge in the network interface list
> - Create/delete VPP bridge domains via vppctl
> - Persist bridge domains to /etc/vpp/pve-bridges.conf (exec'd at
>   VPP startup) so they survive reboots
> - Support the vpp_vlan_aware flag (maps to the bridge-domain learn
>   flag)
> - VPP VLAN subinterface create/delete/list, persisted to
>   /etc/vpp/pve-vlans.conf
> - Exclude VPP bridges from the SDN-only access guard so they appear
>   in the WebUI NIC selector
> - Vhost-user socket convention:
>   /var/run/vpp/qemu--.sock
> - pve8to9: add upgrade checker for VPP dependencies
>
> Patch 2 (proxmox-widget-toolkit):
> - Add VPPBridge/VPPVlan to network_iface_types (Utils.js)
> - NetworkView: VPPBridge and VPPVlan entries in the Create menu;
>   render vlan-raw-device in the Ports/Slaves column for VPPVlan;
>   vpp_vlan_aware support in the VLAN aware column
> - NetworkEdit: vppbrN name validator; vpp_bridge field for VPPVlan;
>   hide MTU/Autostart/IP fields for VPP types; use the VlanName vtype
>   for VPPVlan (allows dot notation, e.g. tap0.100)
>
> --- Testing ---
>
> Due to the absence of physical NICs in my test environment, all
> benchmarks were performed as VM-to-VM communication over the
> hypervisor's virtual switch (vmbr1 or a VPP bridge domain). These
> results reflect the virtual switching overhead, not physical NIC
> performance, where VPP's DPDK polling would show a larger advantage.
>
> Host: Proxmox VE 8.x (Intel Xeon), VMs: Debian 12 (virtio-net q=1)
> VPP: 24.06, coalescing: frames=32 time=0.5ms, polling mode
>
> iperf3 / netperf (single queue, VM-to-VM):
>
>   Metric              vmbr1          VPP (vhost-user)
>   iperf3              31.0 Gbit/s    13.2 Gbit/s
>   netperf TCP_STREAM  32,243 Mbps    13,181 Mbps
>   netperf TCP_RR      15,734 tx/s    989 tx/s
>
> VPP's raw throughput is lower than vmbr1's in this VM-to-VM setup due
> to vhost-user coalescing latency.
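[Editor's note: a back-of-the-envelope cross-check, not part of the patches. The quoted TCP_RR figure is consistent with the cover letter's 0.5 ms coalescing timer: with that timer applied to each direction of a request/response pair, single-stream round-trip time bottoms out near 1 ms, i.e. roughly 1,000 transactions/s.]

```python
# Sanity check: does the netperf TCP_RR result over VPP match the
# vhost-user coalescing settings quoted in the cover letter?
coalesce_timer_ms = 0.5    # "coalescing: frames=32 time=0.5ms"
measured_tcp_rr = 989      # transactions/s over VPP (table above)

# One request + one response, each potentially delayed by the timer,
# puts a floor of ~1.0 ms on the round-trip time of a single stream.
expected_floor_ms = 2 * coalesce_timer_ms

# The measured rate implies ~1.011 ms per transaction -- right at the floor.
implied_rtt_ms = 1000.0 / measured_tcp_rr

print(f"implied RTT: {implied_rtt_ms:.3f} ms, "
      f"coalescing floor: {expected_floor_ms:.1f} ms")
```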
> Physical NIC testing (DPDK PMD) is expected to close or reverse this
> gap.
>
> gRPC (unary, grpc-flow-bench, single queue, VM-to-VM):
>
>   Flows  Metric   vmbr1     VPP
>   100    RPS      32,847    39,742
>   100    p99 lat  7.28 ms   6.16 ms
>   1000   RPS      40,315    41,139
>   1000   p99 lat  48.96 ms  31.96 ms
>
> VPP's userspace polling removes kernel scheduler jitter, which is
> visible in the gRPC latency results even in the VM-to-VM scenario.
>
> --- Known limitations / TODO ---
>
> - No ifupdown2 integration yet; VPP config is managed separately via
>   /etc/vpp/pve-bridges.conf and pve-vlans.conf
> - No live migration path for vhost-user sockets (sockets must be
>   pre-created on the target host)
> - OVS and VPP cannot share the same physical NIC in this
>   implementation
> - VPP must be installed and running independently (not managed by PVE)
>
> --- CLA ---
>
> Individual CLA has been submitted to office@proxmox.com.
>
> ---
>
> ryskn (2):
>   api: network: add VPP (fd.io) dataplane bridge support
>   ui: network: add VPP (fd.io) bridge type support
>
>  PVE/API2/Network.pm                 | 413 ++++++++++++++++++++++++++-
>  PVE/API2/Nodes.pm                   |  19 ++
>  PVE/CLI/pve8to9.pm                  |  48 ++++
>  www/manager6/form/BridgeSelector.js |   5 +
>  www/manager6/lxc/Network.js         |  34 +++
>  www/manager6/node/Config.js         |   1 +
>  www/manager6/qemu/NetworkEdit.js    |  27 ++
>  www/manager6/window/Migrate.js      |  48 ++++
>  src/Utils.js                        |   2 +
>  src/node/NetworkEdit.js             |  64 ++++-
>  src/node/NetworkView.js             |  35 +++
>  11 files changed, 675 insertions(+), 21 deletions(-)
>
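[Editor's note: for readers unfamiliar with the exec'd-file mechanism the cover letter relies on, VPP's startup.conf can replay a file of CLI commands at startup via the unix { exec ... } stanza. The concrete lines below are an illustration of what a persisted /etc/vpp/pve-bridges.conf might contain; the bridge-domain IDs and the interface name are invented, and the exact commands the patch emits may differ.]

```
# /etc/vpp/startup.conf (excerpt) -- replay PVE-managed bridges at startup
unix {
  exec /etc/vpp/pve-bridges.conf
}

# /etc/vpp/pve-bridges.conf -- one VPP CLI command per line (illustrative)
create bridge-domain 100 learn 1
create bridge-domain 101 learn 1
set interface l2 bridge VirtualEthernet0/0/0 100
```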
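[Editor's note: on the widget-toolkit side, the "vppbrN name validator" presumably boils down to a name pattern mirroring the existing vmbrN convention. A minimal sketch; the function name and regex are assumptions, not the patch's actual code.]

```javascript
// Hypothetical name check for VPP bridges: "vppbr" followed by a
// number, e.g. vppbr0 or vppbr100, analogous to the vmbrN pattern.
function isVppBridgeName(name) {
    return /^vppbr\d+$/.test(name);
}

console.log(isVppBridgeName("vppbr0")); // true
console.log(isVppBridgeName("vmbr0"));  // false
```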