From: Stephan Leemburg
Organization: IT Functions
To: pve-devel@lists.proxmox.com
Date: Sun, 23 Aug 2020 18:14:20 +0200
Subject: Re: [pve-devel] More than 10 interfaces in lxc containers
Message-ID: <41585d8d-d0be-3c71-b2fa-380731133fe7@it-functions.nl>
In-Reply-To: <4da8f252-3599-6af2-f398-3c7ac0010045@it-functions.nl>
References: <1877466395.127.1598159022900@webmail.proxmox.com>
 <292235591.128.1598159408132@webmail.proxmox.com>
 <15c9ed01-6e88-b3c6-6efd-cb5c881904fb@it-functions.nl>
 <169647259.135.1598192643864@webmail.proxmox.com>
 <4da8f252-3599-6af2-f398-3c7ac0010045@it-functions.nl>
List-Id: Proxmox VE development discussion

Hi Dietmar,

I have done some more testing on my openvswitch test Proxmox system.

If I don't put a tag on the device, it seems to behave like a trunk. So that would solve my problem. _If_ the hosts were openvswitch enabled. Which they are not.

So, in order to solve this, I have to migrate them (these are operational systems hosting clients' systems) to an openvswitch setup. They were set up before openvswitch became operationally viable. If this resolves the issue, then that must be done.

But in the meantime, it would be very nice if you could just accept the patch to allow for 32 interfaces instead of just 10.

If you have other suggestions or links, I would be happy to follow them and do my own research. I could also contribute some documentation for others facing the same issues.

Kind regards,

Stephan

On 23-08-2020 17:04, Stephan Leemburg wrote:
> Hi Dietmar,
>
> As said, the node has traditional vmbr (brctl) bridges. So with that
> setup, I do not know how to do what you suggest.
> But I am happy to learn.
>
> And as far as I can tell on my test server that uses openvswitch, I
> can only assign one tag to an interface in a container.
>
> So that will not work either. If I could assign multiple VLANs to an
> openvswitch based container interface, then I could create the vlan
> interfaces inside the container.
>
> That would end up with as many vlan devices as required in the
> container, so in my case with more than 10.
>
> It would - however - require changing the current production setup
> on the OVH server(s) to switch from traditional bridging to openvswitch.
>
> OVH servers are good in price/performance. Support is not so good and
> there is no console, so if something goes wrong you have to order (and
> pay for) a KVM to be attached for one day. That can take up to an hour
> or so to be arranged, as it is work that has to be done manually by a
> site engineer in the data center.
>
> But if there is a way, then I would be more than glad to learn about it.
>
> Kind regards,
>
> Stephan
>
>
> On 23-08-2020 16:24, Dietmar Maurer wrote:
>>> If it would be possible to provide a 'trunk' openvswitch interface to
>>> the CT, then from within the CT vlan devices could be set up from the
>>> trunk, but in the end that will still create 10+ interfaces in the
>>> container itself.
>> Can't you simply use a single network interface, then configure the vlans
>> inside the firewall?
>>
>> IMHO, using one interface for each VLAN is the wrong approach. I am sure
>> next time people will ask for 4095 interfaces ...
>>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
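[Archive note: a sketch of the trunk behaviour discussed above, for an openvswitch-enabled node. The CT ID, MAC address, VLAN ids, and physical NIC name are made up for illustration; on an OVS bridge, omitting `tag=` passes all VLANs through the port, and the `trunks=` option of a CT's `netX` entry can restrict which VLAN ids are carried.]

```text
# /etc/network/interfaces on the host: an OVS bridge instead of a
# traditional brctl vmbr (physical NIC name "eno1" is an assumption)
auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1

# /etc/pve/lxc/100.conf: a single trunked interface for the container.
# No "tag=" means the port behaves like a trunk; "trunks=" (optional)
# limits it to the listed VLAN ids.
net0: name=eth0,bridge=vmbr0,hwaddr=DE:AD:BE:EF:00:01,trunks=10;20;30,type=veth
```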
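[Archive note: with such a trunk in place, the vlan devices can then be created inside the container rather than as separate `netX` entries, which sidesteps the 10-interface limit entirely. A minimal ifupdown sketch, assuming a Debian-style container with 8021q/vlan support and the trunked eth0 from above; VLAN ids and addresses are made up:]

```text
# /etc/network/interfaces inside the container:
# one vlan subinterface per VLAN carried on the eth0 trunk
auto eth0.10
iface eth0.10 inet static
    address 192.168.10.2/24
    vlan-raw-device eth0

auto eth0.20
iface eth0.20 inet static
    address 192.168.20.2/24
    vlan-raw-device eth0
```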