From: Stefan Hanreich <s.hanreich@proxmox.com>
To: Thomas Lamprecht <t.lamprecht@proxmox.com>,
 Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Date: Tue, 22 Apr 2025 13:33:53 +0200
Subject: Re: [pve-devel] [PATCH docs/manager/qemu-server v2 0/3] Make VirtIO
 network devices always inherit MTU from bridge
Message-ID: <0d965cab-3962-475a-b285-e75d86fe1183@proxmox.com>
In-Reply-To: <9ecdfff3-7458-4c37-a153-dce43a9ff93e@proxmox.com>
References: <20250417104855.144882-1-s.hanreich@proxmox.com>
 <9ecdfff3-7458-4c37-a153-dce43a9ff93e@proxmox.com>

On 4/18/25 09:46, Thomas Lamprecht wrote:
> On 17.04.25 at 12:48, Stefan Hanreich wrote:
>> The current default behavior for VirtIO network devices is to use an MTU
>> of 1500, unless otherwise specified. This is inconvenient in cases where
>> the MTU is not the default value (e.g. for VXLAN VNets or bridges with
>> jumbo frames). Containers already inherit the MTU of the bridge, if not
>> set, so change the behavior of VMs to be more in line with containers.
>> This also makes using non-standard MTUs more convenient and less
>> error-prone, since users do not have to remember to set the MTU every
>> time they configure a network device on such a bridge.
>
> Hmm, does this have regression potential for bridges with a too-high MTU?
> I.e., one where the MTU works for LAN but not for anything going beyond
> that, which is odd but can work fine, I think? At least as long as no host
> and no CT uses this bridge for communicating with endpoints outside the
> LAN.

In that case, traffic going outside the LAN has to go through a router,
which then has to handle routing between networks with different MTUs,
either by fragmenting packets or by dropping them and sending an ICMP
'fragmentation needed' message. That setup is far from optimal, but it
should work. I'm not 100% sure that is what you meant, so correct me if I
misunderstood something.

With this patch we set the MTU of the NIC to the MTU that is already
configured on the bridge, so the bridge would have dropped any packets that
are too large anyway. If the MTU of the bridge was larger than 1500, but the
NIC was set to 1500, then the VM was just sending smaller packets than
necessary, but the setup would work, assuming the bridge MTU is the correct
one for the network.

A possible regression I can think of: if the bridge was set to the wrong MTU
(e.g. 9000) at some point, but external devices in the same LAN are still
set to use a lower MTU (e.g. 1500), and users never configured the larger
MTU anywhere besides the bridge, then this change would break such setups.

If the MTU of the bridge was smaller than 1500, but the NIC was set to 1500
(which is the case with SDN VXLAN bridges), then this would be discovered
quite quickly in most cases, since network packets would get dropped. This
change would fix such existing broken setups.

> FWIW, we could also tie this behavior to a machine version to avoid
> changing the behavior for any existing VM. But I would be fine with
> applying this only for PVE 9 then and adding a notice to the pve8to9
> checker script that lists all VMs that will change their MTU, including
> the respective value.

I think it would be a good idea to include this in pve8to9 with warnings at
least, and to mention it in the release notes. It might make for some noise
and unsettle some users, though: since we cannot really tell what MTU is set
inside the VM, we would have to show warnings for basically every network
device on a bridge with MTU != 1500.

I would also be open to tying this to a new machine version if we want to be
really careful and avoid the unnecessary warnings.
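
For illustration, here is a rough standalone sketch of what such a pve8to9
check could look like. This is not the actual pve8to9 implementation, just
an outline of the idea, and it assumes the standard per-node config location
/etc/pve/qemu-server and the bridge MTU as exposed via sysfs. It lists every
VirtIO NIC without an explicit mtu= option whose bridge MTU differs from
1500, i.e. exactly the NICs whose effective MTU this patch would change.
NICs with an explicit mtu= option are skipped, including mtu=1, which
already means "use the bridge MTU" today.

#!/usr/bin/perl

use strict;
use warnings;

# Read a bridge's current MTU from sysfs; returns undef if the bridge
# does not exist on this node.
sub bridge_mtu {
    my ($bridge) = @_;
    open(my $fh, '<', "/sys/class/net/$bridge/mtu") or return undef;
    chomp(my $mtu = <$fh>);
    close($fh);
    return $mtu;
}

for my $conf (glob('/etc/pve/qemu-server/*.conf')) {
    my ($vmid) = $conf =~ m!(\d+)\.conf$!;
    open(my $fh, '<', $conf) or next;
    while (my $line = <$fh>) {
        last if $line =~ /^\[/;    # stop at the first snapshot section
        next unless $line =~ /^net(\d+):\s*(.+)/;
        my ($idx, $opts) = ($1, $2);
        next unless $opts =~ /virtio/;  # only VirtIO would inherit the MTU
        next if $opts =~ /\bmtu=/;      # explicit MTU set, behavior unchanged
        next unless $opts =~ /bridge=([^,\s]+)/;
        my $bridge = $1;
        my $mtu = bridge_mtu($bridge);
        next if !defined($mtu) || $mtu == 1500;
        print "VM $vmid net$idx: MTU would change to $mtu (bridge $bridge)\n";
    }
    close($fh);
}

Since the bridge MTU is read from sysfs, a check like this would have to run
on the node the VM lives on, which fits how pve8to9 is run anyway.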