* [PVE-User] ip address on both bond0 and vmbr0
@ 2021-03-23 10:42 mj
[not found] ` <c32dec75-3644-3f82-4615-7fbc18630126@yahoo.com>
2021-03-23 12:02 ` Ronny Aasen
0 siblings, 2 replies; 4+ messages in thread
From: mj @ 2021-03-23 10:42 UTC (permalink / raw)
To: Proxmox VE user list
Hi all,
First some info:
10.0.0.0/24 is ceph storage
192.168.143.0/24 is our LAN
I am trying to make this /etc/network/interfaces config work in pve:
> auto enp2s0f0
> iface enp2s0f0 inet manual
> #mlag1
>
> auto enp2s0f1
> iface enp2s0f1 inet manual
> #mlag2
>
> iface enp0s25 inet manual
> #management
>
> auto bond0
> iface bond0 inet static
> address 10.0.0.10/24
> bond-slaves enp2s0f0 enp2s0f1
> bond-miimon 100
> bond-mode active-backup
> bond-primary enp2s0f0
>
> auto vmbr0
> iface vmbr0 inet static
> address 192.168.143.10/24
> gateway 192.168.143.1
> bridge-ports bond0
> bridge-stp off
> bridge-fd 0
We will connect the pve servers to two MLAGged Arista 40G switches. The
10.0.0.0/24 ceph network will remain local on the two Aristas, and
192.168.143.0/24 will be routed to our core switch.
The VM IPs are in the LAN 192.168.143.0/24 range, and obviously don't
require access to 10.0.0.0/24
We connect the VMs to vmbr0 and assign VLANs to them by configuring a
VLAN tag in the proxmox VM config. This works. :-)
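For context, that per-VM tag is just the `tag=` option on the NIC line in the VM's config file under /etc/pve/qemu-server/ — the MAC address and VLAN ID below are made-up examples, not our real values:

```
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,tag=143
```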
However, assigning the IP address to bond0 does NOT work. The IP address
is ignored. bond0 works, but is IP-less. Adding the IP address manually
after boot works, using:
> ip addr add 10.0.0.10/24 dev bond0
Why is this ip address not assigned to bond0 at boot time?
Is it not possible to have an IP on both bond0 and vmbr0, when bond0 is
also used as a bridge port?
The setup is based (freely) on the pve docs:
https://pve.proxmox.com/wiki/Network_Configuration#_linux_bond
Thanks!
MJ
* Re: [PVE-User] ip address on both bond0 and vmbr0
[not found] ` <c32dec75-3644-3f82-4615-7fbc18630126@yahoo.com>
@ 2021-03-23 11:36 ` mj
0 siblings, 0 replies; 4+ messages in thread
From: mj @ 2021-03-23 11:36 UTC (permalink / raw)
To: dorsy, Proxmox VE user list
Hi Dorsy,
Thanks for the quick reply! :-)
On 23/03/2021 11:51, dorsy wrote:
> Also, if you look at the examples, they do not use a bond and VM bridge
> with 2 addresses: in the first, the IP is on the bond and the VM bridge
> is on another physical IF.
>
> The second example shows the VM bridge is over the bond, and the IP is
> on the bridge IF (no IP on the bond there).
Yes, I realise they are not identical. (that's why I said: freely based
on..)
I thought adding the ceph IP on bond0 would be a nice and easy way to
separate ceph traffic from the VMs.
I have tried now as you suggested, and that works, yes. Thank you!
> iface vmbr0 inet static
> address 192.168.143.10/24
> gateway 192.168.143.1
> bridge-ports bond0
> bridge-stp off
> bridge-fd 0
> post-up /sbin/ip addr add 10.0.0.10/24 dev vmbr0
I just remain curious why it would be so strange to put the IP on bond0.
I do see most examples on the net NOT having an IP on bond0, so I
understand it's not the usual approach.
But what's wrong with it?
I tried putting the "post-up addr add" stanza in the bond0 config as
well, but that doesn't work either. (strange, given that adding it
manually works after boot has finished)
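For reference, the bond0 variant I tried (which did not take effect at boot) was roughly this — a reconstruction, not a verbatim copy of my config:

```
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode active-backup
        bond-primary enp2s0f0
        post-up /sbin/ip addr add 10.0.0.10/24 dev bond0
```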
I will use your suggestion, thanks, appreciated.
But still: Why is putting an ip on bond0 considered strange, and why
doesn't it work *during* boot, and does it work *after* boot?
MJ
* Re: [PVE-User] ip address on both bond0 and vmbr0
2021-03-23 10:42 [PVE-User] ip address on both bond0 and vmbr0 mj
[not found] ` <c32dec75-3644-3f82-4615-7fbc18630126@yahoo.com>
@ 2021-03-23 12:02 ` Ronny Aasen
2021-03-23 14:28 ` mj
1 sibling, 1 reply; 4+ messages in thread
From: Ronny Aasen @ 2021-03-23 12:02 UTC (permalink / raw)
To: pve-user
No, you cannot use an IP on both the bond and the bridge; while you can
run 2 IPs on the bridge, that is a bit ugly.
The way we do it is running VLANs on the bond, into a VLAN-aware bridge:
auto ens6f0
iface ens6f0 inet manual
mtu 9700
auto ens6f1
iface ens6f1 inet manual
mtu 9700
auto bond0
iface bond0 inet manual
slaves ens6f0 ens6f1
bond_miimon 100
bond_mode 1
bond_xmit_hash_policy layer3+4
mtu 9700
auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0
bridge_stp off
bridge_maxage 0
bridge_ageing 0
bridge_maxwait 0
bridge_fd 0
bridge_vlan_aware yes
mtu 9700
up echo 1 > /sys/devices/virtual/net/vmbr0/bridge/multicast_querier
up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
Then define a VLAN interface per subnet:
auto vmbr0.10
iface vmbr0.10 inet6 static
address 2001:db8:2323::11
netmask 64
gateway 2001:db8:2323::1
mtu 1500
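An equivalent IPv4 interface for a ceph-style subnet could look like this (the VLAN ID 5 and the address here are just placeholder examples, not taken from an actual config):

```
auto vmbr0.5
iface vmbr0.5 inet static
        address 10.0.0.10
        netmask 255.255.255.0
        mtu 9000
```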
VMs attach to vmbr0 plus the tag for the VLAN they should be in.
good luck
* Re: [PVE-User] ip address on both bond0 and vmbr0
2021-03-23 12:02 ` Ronny Aasen
@ 2021-03-23 14:28 ` mj
0 siblings, 0 replies; 4+ messages in thread
From: mj @ 2021-03-23 14:28 UTC (permalink / raw)
To: pve-user
Hi all,
Thanks for all the suggestions! I will try Bastian's:
> bond0 (slaves enp2...)
> vmbr0 (slave bond0) 192.168.143.10/24
> bond0.10 10.0.0.10/24
as that will also give proper separation of ceph traffic, as indicated
by Dorsy.
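Spelled out as an /etc/network/interfaces fragment, I read Bastian's outline roughly like this (a sketch, untested here; the VLAN ID 10 comes from his summary):

```
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode active-backup
        bond-primary enp2s0f0

auto bond0.10
iface bond0.10 inet static
        address 10.0.0.10/24

auto vmbr0
iface vmbr0 inet static
        address 192.168.143.10/24
        gateway 192.168.143.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```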
Also thank you Ronny, for showing your elaborate config!
MJ