From: Stephan Leemburg <sleemburg@it-functions.nl>
To: Dietmar Maurer <dietmar@proxmox.com>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] More than 10 interfaces in lxc containers
Date: Sun, 23 Aug 2020 17:04:49 +0200	[thread overview]
Message-ID: <4da8f252-3599-6af2-f398-3c7ac0010045@it-functions.nl> (raw)
In-Reply-To: <169647259.135.1598192643864@webmail.proxmox.com>

Hi Dietmar,

As said, the node has traditional vmbr (brctl) bridges, so with that
setup I do not know how to do what you suggest. But I am happy to learn.
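
For context, the bridges on that node are plain Linux bridges defined in
/etc/network/interfaces, roughly along these lines (interface and bridge
names are placeholders, not the actual production config):

  auto vmbr1
  iface vmbr1 inet manual
          # classic brctl-style Linux bridge, no VLAN awareness configured
          bridge_ports eno1
          bridge_stp off
          bridge_fd 0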

And as far as I can tell, on my test server that uses openvswitch I can
only assign one VLAN tag to an interface in a container.

So that will not work either. If I could assign multiple VLANs to an
openvswitch-based container interface (a trunk), then I could create the
VLAN interfaces inside the container.

That would still leave as many VLAN devices in the container as are
required, so in my case more than 10.
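
What I have in mind is roughly the sketch below. It is untested, and it
assumes that the trunks= option from the container network configuration
is actually honoured for an interface on an OVS bridge; the bridge name,
VLAN IDs and addresses are just placeholders:

  # host side, /etc/pve/lxc/<CTID>.conf: one veth on the OVS bridge,
  # with the needed VLANs passed through as a trunk
  net0: name=eth0,bridge=vmbr1,trunks=101;102;103,type=veth

  # inside the container: one VLAN sub-interface per VLAN
  ip link add link eth0 name eth0.101 type vlan id 101
  ip link set eth0.101 up
  ip addr add 192.0.2.11/24 dev eth0.101
  # ... and the same for each of the other VLANs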

That would - however - require changing the current production setup on 
the OVH server(s) to switch from traditional bridging to openvswitch.

OVH servers are good in price/performance, but support is not so good and
there is no console, so if something goes wrong you have to order (and
pay for) a KVM to be attached for one day. That can take up to an hour
or so, as the work has to be done manually by a site engineer in the
data center.

But if there is a way, then I would be more than glad to learn about it.

Kind regards,

Stephan


On 23-08-2020 16:24, Dietmar Maurer wrote:
>> If it would be possible to provide a 'trunk' openvswitch interface to
>> the CT, then from within the CT vlan devices could be setup from the
>> trunk, but in the end that will still create 10+ interfaces in the
>> container itself.
> Can't you simply use a single network interface, then configure the VLANs
> inside the firewall?
>
> IMHO, using one interface for each VLAN is the wrong approach. I am sure
> next time people will ask for 4095 interfaces ...
>



Thread overview: 17+ messages
2020-08-22 21:41 Stephan Leemburg
2020-08-22 22:16 ` Stephan Leemburg
2020-08-23  5:03 ` Dietmar Maurer
2020-08-23  5:10   ` Dietmar Maurer
2020-08-23 10:58     ` Stephan Leemburg
2020-08-23 14:24       ` Dietmar Maurer
2020-08-23 15:04         ` Stephan Leemburg [this message]
2020-08-23 16:14           ` Stephan Leemburg
2020-08-24  4:53             ` Dietmar Maurer
2020-08-24 10:54               ` Stephan Leemburg
2020-08-24 15:49                 ` Dietmar Maurer
2020-08-24 16:14                   ` Tom Weber
2020-08-24 22:09                     ` Stephan Leemburg
2020-08-27 11:19                     ` Thomas Lamprecht
2020-08-23 15:49         ` Stephan Leemburg
2020-08-23 16:13       ` Tom Weber
2020-08-23 16:35         ` Stephan Leemburg
