From: Robin Christ <robin@rchrist.io>
To: pve-devel@lists.proxmox.com
Cc: Robin Christ <r.christ@partimus.com>
Subject: [PATCH ifupdown2 4/4] bridge: Fix multiple Single VXLAN Devices in bridge not having tunnel_info applied on first run
Date: Mon, 30 Mar 2026 23:39:21 +0200
Message-ID: <20260330213921.533853-5-robin@rchrist.io>
In-Reply-To: <20260330213921.533853-1-robin@rchrist.io>
From: Robin Christ <r.christ@partimus.com>
If you add multiple Single VXLAN Devices to a bridge, only the last one gets the proper
tunnel_info applied, ultimately resulting in a non-functional network setup. This could
only be worked around by a second execution of ifupdown2.

The reason is that the original code was not written with multiple Single VXLAN Devices
in a single bridge in mind: it had only a single variable, single_vxlan_device_ifaceobj,
storing a single interface that would control the application of tunnel_info to the
bridge's SVDs. Replacing the variable with a list, single_vxlan_device_ifaceobjs, and
adding another small loop fixes the issue.

Additionally, extensive clarifying comments have been added.
Signed-off-by: Robin Christ <r.christ@partimus.com>
---
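Patch note (not part of the commit message): a minimal, self-contained Python sketch of the
overwrite-vs-collect pattern this patch fixes. The port dicts and function names below are
illustrative only, not ifupdown2's actual data structures.

```python
# Illustrative sketch only -- not ifupdown2 code. It reproduces the bug
# pattern: a loop over bridge ports that overwrites a single variable,
# so only the LAST Single VXLAN Device "wins" and gets tunnel_info applied.

def apply_tunnel_info_old(brports):
    """Buggy pattern: single variable, overwritten on every SVD port."""
    single_vxlan_device = None
    for port in brports:
        if port["is_svd"]:
            single_vxlan_device = port  # any earlier SVD is silently lost
    return [single_vxlan_device["name"]] if single_vxlan_device else []

def apply_tunnel_info_fixed(brports):
    """Fixed pattern: collect ALL SVD ports, then apply to each of them."""
    svds = [p for p in brports if p["is_svd"]]
    return [p["name"] for p in svds]

ports = [
    {"name": "vxlan100", "is_svd": True},
    {"name": "eth0",     "is_svd": False},
    {"name": "vxlan200", "is_svd": True},
]

print(apply_tunnel_info_old(ports))    # ['vxlan200'] -- vxlan100 is skipped
print(apply_tunnel_info_fixed(ports))  # ['vxlan100', 'vxlan200']
```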
...tiple-single-vxlan-devices-in-bridge.patch | 101 ++++++++++++++++++
debian/patches/series | 1 +
2 files changed, 102 insertions(+)
create mode 100644 debian/patches/pve/0019-bridge-fix-multiple-single-vxlan-devices-in-bridge.patch
diff --git a/debian/patches/pve/0019-bridge-fix-multiple-single-vxlan-devices-in-bridge.patch b/debian/patches/pve/0019-bridge-fix-multiple-single-vxlan-devices-in-bridge.patch
new file mode 100644
index 0000000..1374426
--- /dev/null
+++ b/debian/patches/pve/0019-bridge-fix-multiple-single-vxlan-devices-in-bridge.patch
@@ -0,0 +1,101 @@
+From: Robin Christ <r.christ@partimus.com>
+Date: Mon, 30 Mar 2026 21:07:41 +0200
+Subject: bridge: Fix multiple Single VXLAN Devices in bridge not having
+ tunnel_info applied on first run
+
+If you add multiple Single VXLAN Devices to a bridge, only the last one gets the proper
+tunnel_info applied, ultimately resulting in a non-functional network setup. This could
+only be worked around by a second execution of ifupdown2.
+
+The reason is that the original code was not written with multiple Single VXLAN Devices
+in a single bridge in mind: it had only a single variable, single_vxlan_device_ifaceobj,
+storing a single interface that would control the application of tunnel_info to the
+bridge's SVDs. Replacing the variable with a list, single_vxlan_device_ifaceobjs, and
+adding another small loop fixes the issue.
+
+Additionally, extensive clarifying comments have been added.
+
+Signed-off-by: Robin Christ <r.christ@partimus.com>
+---
+ ifupdown2/addons/bridge.py | 54 +++++++++++++++++++++++++++++++++++++++++++---
+ 1 file changed, 51 insertions(+), 3 deletions(-)
+
+diff --git a/ifupdown2/addons/bridge.py b/ifupdown2/addons/bridge.py
+index dee6f7b..5918c94 100644
+--- a/ifupdown2/addons/bridge.py
++++ b/ifupdown2/addons/bridge.py
+@@ -2145,7 +2145,51 @@ class bridge(Bridge, moduleBase):
+
+ def up_apply_brports_attributes(self, ifaceobj, ifaceobj_getfunc, bridge_vlan_aware, target_ports=[], newly_enslaved_ports=[]):
+ ifname = ifaceobj.name
+- single_vxlan_device_ifaceobj = None
++ # Historically, "Single VXLAN Device" was chosen to name the concept of having
++ # a single VXLAN interface with "external" flag in ip link (Kernel VXLAN_F_COLLECT_METADATA)
++ # that is also the only VXLAN interface slave of the bridge. This VXLAN interface
++ # would terminate all the VNIs for the bridge, improving scalability a lot, as now you
++ # don't have to create a VXLAN interface for each VNI.
++ # In the beginning, you could have only ONE Single VXLAN Device per UDP port on the entire system.
++ # However, it was recognized that there may be the need to have multiple "Single VXLAN Devices"
++ # on your system, and in kernel commit f9c4bb0b245cee35ef66f75bf409c9573d934cf9 the possibility
++ # to have multiple SVDs was added, but it requires the "vnifilter" flag (kernel VXLAN_F_VNIFILTER)
++ #
++ # But even with Single VXLAN Devices at hand, there are valid scenarios where you may want to have multiple
++ # Single VXLAN Devices on the same bridge! One scenario could be traffic steering in BGP-to-the-host
++ # setups, where you don't want separate bridges per VXLAN interface (e.g. because customer VMs
++ # want a single trunk port that terminates in different VXLAN interfaces depending on the VLAN)
++ #
++ # Addendum: A little explanation how "Single VXLAN Device" works in kernel:
++ # In the beginning, a VXLAN interface could only have a single VNI assigned. You had to create one
++ # VXLAN device per VNI, which didn't scale. Therefore, the following flags were added:
++ #
++ # 1. ip link add dev <ifname> type vxlan external
++ # This flag is what makes a VXLAN interface a "Single VXLAN Device"!
++ # The "external" flag is very oddly and cryptically named. While technically the naming is correct, as it
++ # indicates "whether an external control plane (e.g. ip route encap) or the internal FDB should be used"
++ # it doesn't really help you as a user.
++ # It was added in kernel commit ee122c79d4227f6ec642157834b6a90fcffa4382 ("vxlan: Flow based tunneling")
++ # and is called VXLAN_F_COLLECT_METADATA
++ # What this flag essentially does is that a VXLAN interface with "external" flag
++ # **will receive traffic for all VNIs** on the entire system, and there can be only ONE of them (unless
++ # you add the "vnifilter" flag!)
++ # With this flag active, you must do 'bridge vlan add dev <ifname> vid <vid> tunnel_info id <vni>'!
++ #
++ # 2. ip link add dev <ifname> type vxlan vnifilter
++ # As mentioned above, at some point it was recognized that there may be the need
++ # to have multiple "Single VXLAN Devices".
++ # Therefore in kernel commit f9c4bb0b245cee35ef66f75bf409c9573d934cf9
++ # ("vxlan: vni filtering support on collect metadata device") the possibility
++ # to have multiple SVDs was added, using the "vnifilter" flag (kernel VXLAN_F_VNIFILTER)
++ # This flag limits which VNIs are received on a "Single VXLAN Device", ultimately
++ # allowing you to have multiple "Single VXLAN Devices" on the same system...
++ # and even on the same bridge!
++ # With this flag active, you must do 'bridge vni add dev <ifname> vni <vni>'.
++ # Yes, even though this uses the bridge command, this is not really related to bridges
++ # at all; <ifname> here is the name of a VXLAN interface!
++
++ single_vxlan_device_ifaceobjs = []
+
+ try:
+ brports_ifla_info_slave_data = dict()
+@@ -2471,7 +2515,7 @@ class bridge(Bridge, moduleBase):
+ #
+
+ if brport_ifaceobj.link_privflags & ifaceLinkPrivFlags.SINGLE_VXLAN:
+- single_vxlan_device_ifaceobj = brport_ifaceobj
++ single_vxlan_device_ifaceobjs.append(brport_ifaceobj)
+ brport_vlan_tunnel_cached_value = self.cache.get_link_info_slave_data_attribute(
+ brport_name,
+ Link.IFLA_BRPORT_VLAN_TUNNEL
+@@ -2501,7 +2545,11 @@ class bridge(Bridge, moduleBase):
+ except Exception as e:
+ self.log_error(str(e), ifaceobj)
+
+- if single_vxlan_device_ifaceobj:
++ # As explained at the top of the function, we may have multiple SVDs enslaved to our bridge.
++ # If we don't handle them all here, only the LAST enslaved VXLAN interface gets the
++ # tunnel_info applied and the other ones don't, leading to the scenario that the network
++ # config is only correctly applied after another ifreload...
++ for single_vxlan_device_ifaceobj in single_vxlan_device_ifaceobjs:
+ self.apply_bridge_port_vlan_vni_map(single_vxlan_device_ifaceobj)
+
+ @staticmethod
diff --git a/debian/patches/series b/debian/patches/series
index 1dc2fc6..a059c38 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -19,3 +19,4 @@ pve/0015-revert-addons-bond-warn-if-sub-interface-is-detected-on-bond-slave.patc
pve/0016-nlcache-fix-missing-nodad-option-in-addr_add_dry_run.patch
pve/0017-nlcache-add-missing-link_set_mtu_dry_run-method.patch
pve/0018-iproute2-fix-bridge_link_update_vni_filter-for-dry-run.patch
+pve/0019-bridge-fix-multiple-single-vxlan-devices-in-bridge.patch
--
2.47.3
Thread overview: 7+ messages
2026-03-30 21:39 [PATCH ifupdown2 0/4] Fix multiple Single VXLAN Devices in bridge and some dry-run fixes Robin Christ
2026-03-30 21:39 ` [PATCH ifupdown2 1/4] nlcache: Fix missing nodad option in addr_add_dry_run Robin Christ
2026-03-30 21:39 ` [PATCH ifupdown2 2/4] nlcache: Add missing link_set_mtu_dry_run method Robin Christ
2026-03-30 21:39 ` [PATCH ifupdown2 3/4] iproute2: Fix bridge_link_update_vni_filter for dry-run Robin Christ
2026-03-30 21:39 ` Robin Christ [this message]
2026-03-31 8:18 ` [PATCH ifupdown2 4/4] bridge: Fix multiple Single VXLAN Devices in bridge not having tunnel_info applied on first run Gabriel Goller
2026-03-31 11:45 ` Robin Christ