public inbox for pve-devel@lists.proxmox.com
* [pve-devel] More than 10 interfaces in lxc containers
@ 2020-08-22 21:41 Stephan Leemburg
  2020-08-22 22:16 ` Stephan Leemburg
  2020-08-23  5:03 ` Dietmar Maurer
  0 siblings, 2 replies; 17+ messages in thread
From: Stephan Leemburg @ 2020-08-22 21:41 UTC (permalink / raw)
  To: pve-devel

Hi @dev,

I have read about other people who need more than 10 network interfaces 
in their lxc containers.

I have that need too, for a firewall container.

I think it is not so difficult to raise the limit from 10 up to 32.

Just change

/usr/share/pve-manager/js/pvemanagerlib.js

in Ext.define('PVE.lxc.NetworkView', {

the line

me.down('button[name=addButton]').setDisabled((records.length >= 10));

to

me.down('button[name=addButton]').setDisabled((records.length >= 32));

And in

/usr/share/perl5/PVE/LXC/Config.pm change

my $MAX_LXC_NETWORKS = 10;

to

my $MAX_LXC_NETWORKS = 32;

As far as I can see, that is enough.

Would you please consider raising the limit? Would you like me to send 
in a patch file or pull request?

Or is the above sufficient?

Thanks and kind regards,

Stephan





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-22 21:41 [pve-devel] More than 10 interfaces in lxc containers Stephan Leemburg
@ 2020-08-22 22:16 ` Stephan Leemburg
  2020-08-23  5:03 ` Dietmar Maurer
  1 sibling, 0 replies; 17+ messages in thread
From: Stephan Leemburg @ 2020-08-22 22:16 UTC (permalink / raw)
  To: pve-devel

Sorry, also in

/usr/share/pve-manager/js/pvemanagerlib.js

change

         for (i = 0; i < 10; i++) {
             if (me.isCreate && !me.dataCache['net'+i.toString()]) {

to

         for (i = 0; i < 32; i++) {
             if (me.isCreate && !me.dataCache['net'+i.toString()]) {

(Note: the bound is 32, not 31, so that all of net0 through net31 are covered.)

Then it works for me.

It would be great to have the interface limit raised.

Kind regards,

Stephan

On 22-08-2020 23:41, Stephan Leemburg wrote:
> Hi @dev,
>
> I have read about other people who need more than 10 network 
> interfaces in their lxc containers.
>
> I have that need too, for a firewall container.
>
> I think it is not so difficult to raise the limit from 10 up to 32.
>
> Just change
>
> /usr/share/pve-manager/js/pvemanagerlib.js
>
> in Ext.define('PVE.lxc.NetworkView', {
>
> the line
>
> me.down('button[name=addButton]').setDisabled((records.length >= 10));
>
> to
>
> me.down('button[name=addButton]').setDisabled((records.length >= 32));
>
> And in
>
> /usr/share/perl5/PVE/LXC/Config.pm change
>
> my $MAX_LXC_NETWORKS = 10;
>
> to
>
> my $MAX_LXC_NETWORKS = 32;
>
> As far as I can see, that is enough.
>
> Would you please consider raising the limit? Would you like me to send 
> in a patch file or pull request?
>
> Or is the above sufficient?
>
> Thanks and kind regards,
>
> Stephan




* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-22 21:41 [pve-devel] More than 10 interfaces in lxc containers Stephan Leemburg
  2020-08-22 22:16 ` Stephan Leemburg
@ 2020-08-23  5:03 ` Dietmar Maurer
  2020-08-23  5:10   ` Dietmar Maurer
  1 sibling, 1 reply; 17+ messages in thread
From: Dietmar Maurer @ 2020-08-23  5:03 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stephan Leemburg

> For me, I have that need too for a firewall container.

Why does your firewall need more the 10 interface?

> Would you please consider raising the limit? 

No, unless someone can explain why that is required ;-)





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-23  5:03 ` Dietmar Maurer
@ 2020-08-23  5:10   ` Dietmar Maurer
  2020-08-23 10:58     ` Stephan Leemburg
  0 siblings, 1 reply; 17+ messages in thread
From: Dietmar Maurer @ 2020-08-23  5:10 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stephan Leemburg


> > For me, I have that need too for a firewall container.
> 
> Why does your firewall need more the 10 interface?

Sigh. too early in the morning... I wanted to ask:

Why does your firewall need more than 10 interfaces?

Normally, a firewall uses one interface per zone, and more
than 10 zones are quite uncommon?

> > Would you please consider raising the limit? 
> 
> No, unless someone can explain why that is required ;-)





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-23  5:10   ` Dietmar Maurer
@ 2020-08-23 10:58     ` Stephan Leemburg
  2020-08-23 14:24       ` Dietmar Maurer
  2020-08-23 16:13       ` Tom Weber
  0 siblings, 2 replies; 17+ messages in thread
From: Stephan Leemburg @ 2020-08-23 10:58 UTC (permalink / raw)
  To: Dietmar Maurer, Proxmox VE development discussion

Good afternoon Dietmar,

The reason is separation of clients' resources on the machine(s).

In firewalling, it is not uncommon to use a lot of VLANs.

For example, at one of my clients that I do consultancy for, they have 
more than 60 VLANs defined on their firewall.

For me, the setup is like this:

Zone      Nr    Purpose
WAN        1    Internet connectivity
MGMT       2    Management network
DMZ        3    DMZ network (proxies, etc.), accessible from the Internet
SHARED     4    Shared hosting; shared resources, only Internet accessible by some sources
SERVICES   5    Services for other networks, like shared databases; no Internet access
CLIENT1    6    Client1's network
CLIENT2    7    Client2's network
CLIENT3    8    Client3's network
CLIENT4    9    Client4's network
CLIENT5   10    Client5's network
CLIENTX  10++   ClientX's network

Yesterday, I was configuring CLIENTX's network and ran into the issue.

This node still has 'traditional' vmbr interfaces, but using openvswitch 
would not help here.

If it would be possible to provide a 'trunk' openvswitch interface to 
the CT, then from within the CT vlan devices could be setup from the 
trunk, but in the end that will still create 10+ interfaces in the 
container itself.

This firewall is running on one of my OVH machines as an LXC container 
with a fwbuilder (iptables) created firewall.

On my other OVH machine, I have a KVM with pfSense running. That pfSense 
firewall has 11 interfaces.

But I want to move from the KVM to a CT-based setup and in the end also 
replace the pfSense VM with a Debian-based CT.

I've read about more people asking for this. And in fact, I patched my 
test Proxmox system yesterday and it works perfectly.

It only requires 3 adjustments. So before I went to bed yesterday, I 
started cloning the Proxmox repos with:

   for i in $(curl -s https://git.proxmox.com/ | grep .git | sed 's/.*p=\([^;]*\).*/\1/' | grep '.git$' | sort -u); do
       git clone "https://git.proxmox.com/git/$i"
   done

Which provided me with an impressive 41 GB of repo data ;-)

If you would accept the patch, then I will be happy to provide one based 
upon the git repos. I will read through the way you want to receive the 
patch and send it formatted the way you require.

To be honest, I cannot see why raising it from 10 to 32 would be a 
problem. And it would remove the blocker that keeps my setup from 
progressing.

Also, as an IT person, I think the number 32 looks much better than the 
number 10 ;-)

Kind regards,

Stephan

On 23-08-2020 07:10, Dietmar Maurer wrote:
>>> For me, I have that need too for a firewall container.
>> Why does your firewall need more the 10 interface?
> Sigh. too early in the morning... I wanted to ask:
>
> Why does your firewall need more than 10 interfaces?
>
> Normally, a firewall uses one interface per zone, and more
> than 10 zones are quite uncommon?
>
>>> Would you please consider raising the limit?
>> No, unless someone can explain why that is required ;-)




* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-23 10:58     ` Stephan Leemburg
@ 2020-08-23 14:24       ` Dietmar Maurer
  2020-08-23 15:04         ` Stephan Leemburg
  2020-08-23 15:49         ` Stephan Leemburg
  2020-08-23 16:13       ` Tom Weber
  1 sibling, 2 replies; 17+ messages in thread
From: Dietmar Maurer @ 2020-08-23 14:24 UTC (permalink / raw)
  To: Stephan Leemburg, Proxmox VE development discussion

> If it would be possible to provide a 'trunk' openvswitch interface to 
> the CT, then from within the CT vlan devices could be set up from the 
> trunk, but in the end that will still create 10+ interfaces in the 
> container itself.

Can't you simply use a single network interface, then configure the VLANs
inside the firewall?
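
For illustration, a minimal sketch of what that could look like inside
the firewall guest (the interface name and VLAN IDs are assumptions, not
taken from your setup):

    # eth0 is a single untagged trunk port; one VLAN subinterface per zone
    ip link add link eth0 name eth0.6 type vlan id 6
    ip addr add 192.0.2.1/24 dev eth0.6
    ip link set eth0.6 up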

IMHO, using one interface for each VLAN is the wrong approach. I am sure
next time people will ask for 4095 interfaces ...





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-23 14:24       ` Dietmar Maurer
@ 2020-08-23 15:04         ` Stephan Leemburg
  2020-08-23 16:14           ` Stephan Leemburg
  2020-08-23 15:49         ` Stephan Leemburg
  1 sibling, 1 reply; 17+ messages in thread
From: Stephan Leemburg @ 2020-08-23 15:04 UTC (permalink / raw)
  To: Dietmar Maurer, Proxmox VE development discussion

Hi Dietmar,

As said, the node has traditional vmbr (brctl) bridges. So with that 
setup, I do not know how to do what you suggest. But I am happy to learn.

And as far as I can tell on my test server that uses openvswitch, I can 
only assign one tag to an interface in a container.

So also that will not work. If I could assign multiple VLANs to an 
openvswitch-based container interface, then I could create the VLAN 
interfaces inside the container.

Ending up with as many VLAN devices as required in the container, so in 
my case with more than 10.

That would - however - require changing the current production setup on 
the OVH server(s) to switch from traditional bridging to openvswitch.

OVH servers are good in price/performance. Support is not so good and 
there is no console, so if something goes wrong you have to order (and 
pay for) a KVM console to be attached for one day. That can take up to 
an hour or so, as it is work that has to be performed manually by a 
site engineer in the data center.

But if there is a way, then I would be more than glad to learn about it.

Kind regards,

Stephan


On 23-08-2020 16:24, Dietmar Maurer wrote:
>> If it would be possible to provide a 'trunk' openvswitch interface to
>> the CT, then from within the CT vlan devices could be set up from the
>> trunk, but in the end that will still create 10+ interfaces in the
>> container itself.
> Can't you simply use a single network interface, then configure the VLANs
> inside the firewall?
>
> IMHO, using one interface for each VLAN is the wrong approach. I am sure
> next time people will ask for 4095 interfaces ...
>




* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-23 14:24       ` Dietmar Maurer
  2020-08-23 15:04         ` Stephan Leemburg
@ 2020-08-23 15:49         ` Stephan Leemburg
  1 sibling, 0 replies; 17+ messages in thread
From: Stephan Leemburg @ 2020-08-23 15:49 UTC (permalink / raw)
  To: Dietmar Maurer, Proxmox VE development discussion

Hi Dietmar,

To explain a little more: the OVH servers are just rented hardware 
somewhere in an OVH data center.

I have no control over switching, etc. All networking is 'internal'. See 
the attached drawing.

Probably it is what was on your mind, but I think it's good for me to 
explain as clearly as possible.

And - again - if I am not educated enough about how to use traditional 
vmbr setups as a VLAN trunk, then any pointer to information is welcome.

Kind regards,

Stephan

On 23-08-2020 16:24, Dietmar Maurer wrote:
>> If it would be possible to provide a 'trunk' openvswitch interface to
>> the CT, then from within the CT vlan devices could be set up from the
>> trunk, but in the end that will still create 10+ interfaces in the
>> container itself.
> Can't you simply use a single network interface, then configure the VLANs
> inside the firewall?
>
> IMHO, using one interface for each VLAN is the wrong approach. I am sure
> next time people will ask for 4095 interfaces ...
>



* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-23 10:58     ` Stephan Leemburg
  2020-08-23 14:24       ` Dietmar Maurer
@ 2020-08-23 16:13       ` Tom Weber
  2020-08-23 16:35         ` Stephan Leemburg
  1 sibling, 1 reply; 17+ messages in thread
From: Tom Weber @ 2020-08-23 16:13 UTC (permalink / raw)
  To: pve-devel

On Sunday, 2020-08-23 at 12:58 +0200, Stephan Leemburg wrote:
> Good afternoon Dietmar,
> 
> The reason is separation of clients' resources on the machine(s).
> 
> In firewalling, it is not uncommon to use a lot of VLANs.
> 
> For example, at one of my clients that I do consultancy for, they have
> more than 60 VLANs defined on their firewall.

probably not helping with your original problem, but running (such) a
firewall in an LXC feels totally wrong to me.

Putting the FW in a VM is fine for me, but I surely don't want it to be
part of the host's network stack.

Regards,
  Tom





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-23 15:04         ` Stephan Leemburg
@ 2020-08-23 16:14           ` Stephan Leemburg
  2020-08-24  4:53             ` Dietmar Maurer
  0 siblings, 1 reply; 17+ messages in thread
From: Stephan Leemburg @ 2020-08-23 16:14 UTC (permalink / raw)
  To: pve-devel

Hi Dietmar,

I have done some more testing on my openvswitch test proxmox system.

If I don't put a tag on the device, it seems to behave like a trunk (see 
the sketch below). So, that would solve my problem. _If_ the hosts were 
openvswitch enabled.

Which they are not. So, in order to solve this I have to migrate them 
(these are operational systems hosting clients' systems) to an 
openvswitch setup.

They were set up before openvswitch became operationally viable.
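
A minimal sketch of what I mean (the values are just examples, not my
real config): leaving the 'tag' off the container NIC lets its veth
carry all VLANs, like a trunk port:

    # /etc/pve/lxc/<vmid>.conf - note: no 'tag=' on the bridge port
    net0: name=eth0,bridge=vmbr1,ip=manual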

If this resolves the issue, then that must be done. But in the meantime, 
it would be very nice if you could just accept the patch to allow for 
32 interfaces instead of just 10.

If you have other suggestions or links, then I would be happy to follow 
them and do my own research. I could also contribute some documentation 
for others facing the same issues.

Kind regards,

Stephan

On 23-08-2020 17:04, Stephan Leemburg wrote:
> Hi Dietmar,
>
> As said, the node has traditional vmbr (brctl) bridges. So with that 
> setup, I do not know how to do what you suggest. But I am happy to learn.
>
> And as far as I can tell on my test server that uses openvswitch, I 
> can only assign one tag to an interface in a container.
>
> So also that will not work. If I could assign multiple VLANs to an 
> openvswitch-based container interface, then I could create the VLAN 
> interfaces inside the container.
>
> Ending up with as many VLAN devices as required in the container, so in 
> my case with more than 10.
>
> That would - however - require changing the current production setup 
> on the OVH server(s) to switch from traditional bridging to openvswitch.
>
> OVH servers are good in price/performance. Support is not so good and 
> there is no console, so if something goes wrong you have to order (and 
> pay for) a KVM console to be attached for one day. That can take up to 
> an hour or so, as it is work that has to be performed manually by a 
> site engineer in the data center.
>
> But if there is a way, then I would be more than glad to learn about it.
>
> Kind regards,
>
> Stephan
>
>
> On 23-08-2020 16:24, Dietmar Maurer wrote:
>>> If it would be possible to provide a 'trunk' openvswitch interface to
>>> the CT, then from within the CT vlan devices could be set up from the
>>> trunk, but in the end that will still create 10+ interfaces in the
>>> container itself.
>> Can't you simply use a single network interface, then configure the VLANs
>> inside the firewall?
>>
>> IMHO, using one interface for each VLAN is the wrong approach. I am sure
>> next time people will ask for 4095 interfaces ...




* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-23 16:13       ` Tom Weber
@ 2020-08-23 16:35         ` Stephan Leemburg
  0 siblings, 0 replies; 17+ messages in thread
From: Stephan Leemburg @ 2020-08-23 16:35 UTC (permalink / raw)
  To: pve-devel

On Sunday, 2020-08-23 at 12:58 +0200, Stephan Leemburg wrote:
>> Good afternoon Dietmar,
>>
>> The reason is separation of clients' resources on the machine(s).
>>
>> In firewalling, it is not uncommon to use a lot of VLANs.
>>
>> For example, at one of my clients that I do consultancy for, they have
>> more than 60 VLANs defined on their firewall.
> probably not helping with your original problem, but running (such) a
> firewall in an LXC feels totally wrong to me.
That is not my setup. The customer runs very expensive firewalls and all 
interfaces are vlan interfaces on top of link aggregations.
>
> Putting the FW in a VM is fine for me, but I surely don't want it to be
> part of the host's network stack.

Maybe I should reconsider my plan of migrating from a KVM that runs 
pfSense to a Debian container that runs iptables in the same kernel and 
network stack as the node.

Thanks for your input. I will do some more research and educated thinking.

Best regards,

Stephan

>
> Regards,
>    Tom




* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-23 16:14           ` Stephan Leemburg
@ 2020-08-24  4:53             ` Dietmar Maurer
  2020-08-24 10:54               ` Stephan Leemburg
  0 siblings, 1 reply; 17+ messages in thread
From: Dietmar Maurer @ 2020-08-24  4:53 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stephan Leemburg

> If I don't put a tag on the device, it seems to behave like a trunk. So, 
> that would solve my problem. _If_ the hosts were openvswitch enabled.

I am unable to see why you need openvswitch for that? This also works with
standard linux network.





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-24  4:53             ` Dietmar Maurer
@ 2020-08-24 10:54               ` Stephan Leemburg
  2020-08-24 15:49                 ` Dietmar Maurer
  0 siblings, 1 reply; 17+ messages in thread
From: Stephan Leemburg @ 2020-08-24 10:54 UTC (permalink / raw)
  To: Dietmar Maurer, Proxmox VE development discussion

On 24-08-2020 06:53, Dietmar Maurer wrote:
>> If I don't put a tag on the device, it seems to behave like a trunk. So,
>> that would solve my problem. _If_ the hosts were openvswitch enabled.
> I am unable to see why you need openvswitch for that? This also works with
> standard linux network.

Hi Dietmar,

Oh, that is new for me.

So, I can have a vlan aware traditional bridge in the firewall that 
receives tagged frames and at the same time have the clients on the 
specific 'vlans' receive non-tagged frames for their respective pvid?

How can this be configured in Proxmox?

Kind regards,

Stephan





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-24 10:54               ` Stephan Leemburg
@ 2020-08-24 15:49                 ` Dietmar Maurer
  2020-08-24 16:14                   ` Tom Weber
  0 siblings, 1 reply; 17+ messages in thread
From: Dietmar Maurer @ 2020-08-24 15:49 UTC (permalink / raw)
  To: Stephan Leemburg, Proxmox VE development discussion


> On 08/24/2020 12:54 PM Stephan Leemburg <sleemburg@it-functions.nl> wrote:
> 
>  
> On 24-08-2020 06:53, Dietmar Maurer wrote:
> >> If I don't put a tag on the device, it seems to behave like a trunk. So,
> >> that would solve my problem. _If_ the hosts were openvswitch enabled.
> > I am unable to see why you need openvswitch for that? This also works with
> > standard linux network.
> 
> Hi Dietmar,
> 
> Oh, that is new for me.
> 
> So, I can have a vlan aware traditional bridge in the firewall that 
> receives tagged frames and at the same time have the clients on the 
> specific 'vlans' receive non-tagged frames for their respective pvid?
> 
> How can this be configured in Proxmox?

You do not need any special config on the PVE host if you do all 
VLAN-related stuff inside the VM.





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-24 15:49                 ` Dietmar Maurer
@ 2020-08-24 16:14                   ` Tom Weber
  2020-08-24 22:09                     ` Stephan Leemburg
  2020-08-27 11:19                     ` Thomas Lamprecht
  0 siblings, 2 replies; 17+ messages in thread
From: Tom Weber @ 2020-08-24 16:14 UTC (permalink / raw)
  To: pve-devel

On Monday, 2020-08-24 at 17:49 +0200, Dietmar Maurer wrote:
> > On 08/24/2020 12:54 PM Stephan Leemburg <sleemburg@it-functions.nl>
> > wrote:
> > 
> >  
> > On 24-08-2020 06:53, Dietmar Maurer wrote:
> > > > If I don't put a tag on the device, it seems to behave like a
> > > > trunk. So,
> > > > that would solve my problem. _If_ the hosts were openvswitch
> > > > enabled.
> > > I am unable to see why you need openvswitch for that? This also
> > > works with
> > > standard linux network.
> > 
> > Hi Dietmar,
> > 
> > Oh, that is new for me.
> > 
> > So, I can have a vlan aware traditional bridge in the firewall
> > that 
> > receives tagged frames and at the same time have the clients on
> > the 
> > specific 'vlans' receive non-tagged frames for their respective
> > pvid?
> > 
> > How can this be configured in Proxmox?
> 
> You do not need any special config on the PVE host if you do all
> VLAN-related stuff inside the VM.

You do realize that Stephan is talking about CT, not VM? (although I
don't think such a setup makes sense)

  Tom





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-24 16:14                   ` Tom Weber
@ 2020-08-24 22:09                     ` Stephan Leemburg
  2020-08-27 11:19                     ` Thomas Lamprecht
  1 sibling, 0 replies; 17+ messages in thread
From: Stephan Leemburg @ 2020-08-24 22:09 UTC (permalink / raw)
  To: pve-devel

On 24-08-2020 18:14, Tom Weber wrote:
> On Monday, 2020-08-24 at 17:49 +0200, Dietmar Maurer wrote:
>>> On 08/24/2020 12:54 PM Stephan Leemburg <sleemburg@it-functions.nl>
>>> wrote:
>>>
>>>   
>>> On 24-08-2020 06:53, Dietmar Maurer wrote:
>>>>> If I don't put a tag on the device, it seems to behave like a
>>>>> trunk. So,
>>>>> that would solve my problem. _If_ the hosts were openvswitch
>>>>> enabled.
>>>> I am unable to see why you need openvswitch for that? This also
>>>> works with
>>>> standard linux network.
>>> Hi Dietmar,
>>>
>>> Oh, that is new for me.
>>>
>>> So, I can have a vlan aware traditional bridge in the firewall
>>> that
>>> receives tagged frames and at the same time have the clients on
>>> the
>>> specific 'vlans' receive non-tagged frames for their respective
>>> pvid?
>>>
>>> How can this be configured in Proxmox?
>> You do not need any special config on the PVE host if you do all
>> VLAN-related stuff inside the VM.
> You do realize that Stephan is talking about CT, not VM? (although I
> don't think such a setup makes sense)
>
>    Tom

Thanks. I have done some research and experimenting on my test system.

I was not aware of VLAN-capable bridging. But if I have this in my 
/etc/network/interfaces on a traditionally bridged system, then I can 
also assign VLANs to the guests on vmbr1, just like with openvswitch.

auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
     address 192.168.240.246
     netmask 255.255.255.0
     gateway 192.168.240.254
     bridge_ports eth0
     bridge_stp off
     bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
     bridge-vlan-aware yes
     bridge-vids 2-200
     bridge-pvid 2
     bridge_ports none
     bridge_stp off
     bridge_fd 0
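
On such a bridge, a guest NIC can then be put on its zone VLAN with a 
tag (a sketch; the IDs and the address are just examples):

    # tagged CT port on the vlan aware bridge; the container sees
    # untagged frames for VLAN 6
    net0: name=eth0,bridge=vmbr1,tag=6,ip=192.168.6.1/24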

Dietmar knows this, but I had to do my homework. So, it is more or less 
the same as with openvswitch. And it still is an intrusive change for my 
operational systems.

So for now, while planning the migration to openvswitch, I took the 
easy way out by adding an additional interface in the 
/etc/pve/lxc/${CT}.conf file:
lxc.net.10.type: veth
lxc.net.10.link: vmbr5
lxc.net.10.veth.pair: veth1001i15
lxc.net.10.hwaddr: 00:CE:99:F9:BF:12
lxc.net.10.name: eth11
lxc.net.10.flags: up

So, I have learned. Even though some think differently about the 'shared 
network stack' firewall approach, it can work, be it with OVS, a 
VLAN-capable bridge, or a workaround.

Still (Dietmar?), bumping from 10 to 32 would not hurt anyone and could 
avoid long mail threads like this. And 2^(10/2) is nicer than 10^1, 
isn't it? And there still is a 10 in it ;-)

Anyway, I will not bother you any longer on this subject.

Thank you all for your patience, replies and efforts.

I have learned at least something new about VLAN-capable bridges and 
the fact that Proxmox supports them. And I know tomorrow I will share 
this with another senior Linux admin who has been using Proxmox for a 
long time and also did not know about this (as I consulted with him too).

Kind regards,

Stephan





* Re: [pve-devel] More than 10 interfaces in lxc containers
  2020-08-24 16:14                   ` Tom Weber
  2020-08-24 22:09                     ` Stephan Leemburg
@ 2020-08-27 11:19                     ` Thomas Lamprecht
  1 sibling, 0 replies; 17+ messages in thread
From: Thomas Lamprecht @ 2020-08-27 11:19 UTC (permalink / raw)
  To: Proxmox VE development discussion, Tom Weber, Stephan Leemburg

On 8/24/20 at 6:14 PM, Tom Weber wrote:
> On Monday, 2020-08-24 at 17:49 +0200, Dietmar Maurer wrote:
>>> On 08/24/2020 12:54 PM Stephan Leemburg <sleemburg@it-functions.nl> wrote:
>>> On 24-08-2020 06:53, Dietmar Maurer wrote:
>>>>> If I don't put a tag on the device, it seems to behave like a
>>>>> trunk. So, that would solve my problem. _If_ the hosts were openvswitch
>>>>> enabled.
>>>>
>>>> I am unable to see why you need openvswitch for that? This also
>>>> works with standard linux network.
>>>
>>> Oh, that is new for me.
>>>
>>> So, I can have a vlan aware traditional bridge in the firewall
>>> that 
>>> receives tagged frames and at the same time have the clients on
>>> the 
>>> specific 'vlans' receive non-tagged frames for their respective
>>> pvid?
>>>
>>> How can this be configured in Proxmox?
>>
>> You do not need any special config on the PVE host if you do all
>> VLAN-related stuff inside the VM.
> 
> You do realize that Stephan is talking about CT, not VM? (although I
> don't think such a setup makes sense)
> 

But it should also be possible to do that with CTs and their veth 
devices; they can be untagged and act like a trunk interface (and they 
can do that on one or both sides of the veth pair).

I found this article, which seems to explain the topic quite well, 
at least after skimming over it ;-)
https://linux-blog.anracom.com/2017/11/20/fun-with-veth-devices-linux-bridges-and-vlans-in-unnamed-linux-network-namespaces-iv/

I applied the increase to the CT NIC limit nonetheless, as it makes
sense to have it in sync with VMs. But this use case shouldn't need
that increase...

cheers,
Thomas





