From: Ryosuke Nakayama (ryskn)
To: pve-devel@lists.proxmox.com
Subject: [RFC PATCH 0/2] network: add VPP (fd.io) as alternative dataplane
Date: Tue, 17 Mar 2026 07:28:14 +0900
Message-ID: <20260316222816.42944-1-ryosuke.nakayama@ryskn.com>

This RFC series integrates VPP (Vector Packet Processing, fd.io) as an
optional userspace dataplane alongside OVS in Proxmox VE. VPP is a
DPDK-based userspace packet processing framework that provides VM
networking via vhost-user sockets, and it is already used in production
by several cloud and telecom stacks.
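For context, attaching a VM NIC to a VPP vhost-user socket on the QEMU side typically looks like the sketch below. This is an illustrative Python sketch only, not code from these patches; the socket path, ids, MAC, and memory size are placeholders.

```python
# Illustrative sketch -- not code from these patches. Shows how a
# management layer typically wires one VM NIC to a vhost-user socket
# on the QEMU command line. All ids/paths/sizes are placeholders.

def vhost_user_qemu_args(socket_path: str, mac: str) -> list[str]:
    """Build the QEMU argument fragment for one vhost-user NIC.

    vhost-user moves the datapath into a userspace switch (VPP here),
    so guest RAM must be shared with that process: hence the
    file-backed memory object with share=on, referenced by a NUMA node.
    """
    return [
        "-chardev", f"socket,id=char0,path={socket_path}",
        "-netdev", "type=vhost-user,id=net0,chardev=char0",
        "-device", f"virtio-net-pci,netdev=net0,mac={mac}",
        "-object",
        "memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on",
        "-numa", "node,memdev=mem0",
    ]

print(" ".join(vhost_user_qemu_args("/var/run/vpp/sock0.sock",
                                    "52:54:00:12:34:56")))
```

The share=on memory backend is what allows the external switch process to map the guest's virtio rings directly, which is what makes the kernel-bypass datapath possible.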
The motivation here is to expose VPP bridge domains natively in the PVE
WebUI and REST API, following the same pattern as the OVS integration.

Background and prior discussion:
https://forum.proxmox.com/threads/interest-in-vpp-vector-packet-processing-as-a-dataplane-option-for-proxmox.181530/

Note: the benchmark figures quoted in that forum thread are slightly off
due to test configuration differences. Please use the numbers in this
cover letter instead.

--- What the patches do ---

Patch 1 (pve-manager):

- Detect VPP bridges via 'vppctl show bridge-domain' and expose them as
  type=VPPBridge in the network interface list
- Create/delete VPP bridge domains via vppctl
- Persist bridge domains to /etc/vpp/pve-bridges.conf (exec'd at VPP
  startup) so they survive reboots
- Support the vpp_vlan_aware flag (maps to the bridge-domain learn flag)
- VPP VLAN subinterface create/delete/list, persisted to
  /etc/vpp/pve-vlans.conf
- Exclude VPP bridges from the SDN-only access guard so they appear in
  the WebUI NIC selector
- Vhost-user socket convention: /var/run/vpp/qemu--.sock
- pve8to9: add an upgrade checker for VPP dependencies

Patch 2 (proxmox-widget-toolkit):

- Add VPPBridge/VPPVlan to network_iface_types (Utils.js)
- NetworkView: VPPBridge and VPPVlan entries in the Create menu; render
  vlan-raw-device in the Ports/Slaves column for VPPVlan; vpp_vlan_aware
  support in the VLAN aware column
- NetworkEdit: vppbrN name validator; vpp_bridge field for VPPVlan; hide
  the MTU/Autostart/IP fields for VPP types; use the VlanName vtype for
  VPPVlan (allows dot notation, e.g. tap0.100)

--- Testing ---

Due to the absence of physical NICs in my test environment, all
benchmarks were performed as VM-to-VM communication over the
hypervisor's virtual switch (vmbr1 or a VPP bridge domain). These
results therefore reflect virtual switching overhead, not physical NIC
performance, where VPP's DPDK polling would show a larger advantage.
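As a rough illustration of the bridge detection step from Patch 1, a minimal parser for 'vppctl show bridge-domain' output could look like the sketch below. The actual patch is Perl, not Python, and the sample output is illustrative; real column layout varies by VPP version.

```python
# Illustrative sketch of the Patch 1 detection step: enumerate VPP
# bridge-domain IDs from `vppctl show bridge-domain` output. The real
# implementation is Perl inside pve-manager; this only shows the idea.
import re

def list_bridge_domains(vppctl_output: str) -> list[int]:
    """Return bridge-domain IDs found in `vppctl show bridge-domain` output.

    Data rows start with a numeric BD-ID column; the header row does
    not, so matching a leading integer is enough to pick them out.
    """
    ids = []
    for line in vppctl_output.splitlines():
        m = re.match(r"\s*(\d+)\s", line)
        if m:
            ids.append(int(m.group(1)))
    return ids

# Sample output, abbreviated and illustrative only:
SAMPLE = """\
  BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd  UU-Flood  Flooding
    10      1      0     off        on        on       flood       on
    20      2      0     off        on        on       flood       on
"""

print(list_bridge_domains(SAMPLE))  # -> [10, 20]
```

In practice the management layer would run vppctl via a subprocess and feed its stdout to such a parser, then tag each discovered domain as type=VPPBridge in the interface list.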
Host: Proxmox VE 8.x (Intel Xeon), VMs: Debian 12 (virtio-net q=1)
VPP: 24.06, coalescing: frames=32 time=0.5ms, polling mode

iperf3 / netperf (single queue, VM-to-VM):

  Metric               vmbr1          VPP (vhost-user)
  iperf3               31.0 Gbits/s   13.2 Gbits/s
  netperf TCP_STREAM   32,243 Mbps    13,181 Mbps
  netperf TCP_RR       15,734 tx/s    989 tx/s

VPP's raw throughput is lower than vmbr1's in this VM-to-VM setup due to
vhost-user coalescing latency. Physical NIC testing (DPDK PMD) is
expected to close or reverse this gap.

gRPC (unary, grpc-flow-bench, single queue, VM-to-VM):

  Flows   Metric    vmbr1      VPP
  100     RPS       32,847     39,742
  100     p99 lat   7.28 ms    6.16 ms
  1000    RPS       40,315     41,139
  1000    p99 lat   48.96 ms   31.96 ms

VPP's userspace polling removes kernel scheduler jitter, which is
visible in the gRPC latency results even in the VM-to-VM scenario.

--- Known limitations / TODO ---

- No ifupdown2 integration yet; VPP config is managed separately via
  /etc/vpp/pve-bridges.conf and pve-vlans.conf
- No live migration path for vhost-user sockets (sockets must be
  pre-created on the target host)
- OVS and VPP cannot share the same physical NIC in this implementation
- VPP must be installed and running independently (not managed by PVE)

--- CLA ---

Individual CLA has been submitted to office@proxmox.com.

---

ryskn (2):
  api: network: add VPP (fd.io) dataplane bridge support
  ui: network: add VPP (fd.io) bridge type support

 PVE/API2/Network.pm                 | 413 ++++++++++++++++++++++++++-
 PVE/API2/Nodes.pm                   |  19 ++
 PVE/CLI/pve8to9.pm                  |  48 ++++
 www/manager6/form/BridgeSelector.js |   5 +
 www/manager6/lxc/Network.js         |  34 +++
 www/manager6/node/Config.js         |   1 +
 www/manager6/qemu/NetworkEdit.js    |  27 ++
 www/manager6/window/Migrate.js      |  48 ++++
 src/Utils.js                        |   2 +
 src/node/NetworkEdit.js             |  64 ++++-
 src/node/NetworkView.js             |  35 +++
 11 files changed, 675 insertions(+), 21 deletions(-)

-- 
2.50.1 (Apple Git-155)