* [PVE-User] Problem With Bond & VLANs - Help Please
@ 2024-08-16 7:36 duluxoz
2024-08-16 8:05 ` Christian Kivalo
2024-08-16 9:53 ` Stefan Hanreich
0 siblings, 2 replies; 10+ messages in thread
From: duluxoz @ 2024-08-16 7:36 UTC (permalink / raw)
To: pve-user
Hi All,
Disclaimer: I'm coming from an EL background - this is my first venture
into Debian-world :-)
So I'm having an issue getting the NICs, bond, and VLANs correctly
configured on a new Proxmox node (an old oVirt node). This worked with the
old oVirt config (albeit with a different set of config files/statements).
What I'm trying to achieve:
* Proxmox Node IP Address: 10.0.200.100/24, Tag:VLAN 200
* Gateway: 10.0.200.1
* Bond: NIC1 (eno0) & NIC2 (eno1), 802.3ad
* VLAN bond0.100: 10.0.100.0/24, Gateway 10.0.100.1
* VLAN bond0.200: 10.0.200.0/24, Gateway 10.0.200.1
* NIC3 (eno2): 10.0.300.100/24 - not really relevant, as it's not part
of the Bond, but I've included it to be thorough
My /etc/network/interfaces file:
~~~
auto lo
iface lo inet loopback
iface eno0 inet manual
iface en01 inet manual
auto eno3
iface nic3 inet static
address 10.0.300.100/24
auto bond0
iface bond0 inet manual
bond-members nic1 nic2
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
bond-miimon 100
bond-downdelay 200
bond-updelay 200
auto bond0.100
iface bond0.100 inet static
address 10.0.100.0/24
gateway 10.0.10.1
vlan-raw-device bond0
auto bond0.200
iface bond0.200 inet static
address 10.0.200.0/24
gateway 10.0.200.1
vlan-raw-device bond0
auto vmbr0
iface vmbr0 inet static
address 10.0.200.100/24
gateway 10.0.200.1
bridge_porrts bond0
bridge-fd 0
bridge_stp off
bridge-vlan-aware yes
bridge-vids 100 200
bridge-allow-untagged no
~~~
If someone could be kind enough to let me know where I'm going wrong,
I'd really appreciate it - thanks (in advance)
Dulux-Oz
_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
* Re: [PVE-User] Problem With Bond & VLANs - Help Please
2024-08-16 7:36 [PVE-User] Problem With Bond & VLANs - Help Please duluxoz
@ 2024-08-16 8:05 ` Christian Kivalo
2024-08-16 9:43 ` duluxoz
2024-08-16 9:53 ` Stefan Hanreich
1 sibling, 1 reply; 10+ messages in thread
From: Christian Kivalo @ 2024-08-16 8:05 UTC (permalink / raw)
To: Proxmox VE user list
>auto vmbr0
>iface vmbr0 inet static
> address 10.0.200.100/24
> gateway 10.0.200.1
> bridge_porrts bond0
^^^^^^^^^^^^^^^^^^
Is this copied from your config or just a typo in the mail?
> bridge-fd 0
> bridge_stp off
> bridge-vlan-aware yes
> bridge-vids 100 200
> bridge-allow-untagged no
--
Christian Kivalo
* Re: [PVE-User] Problem With Bond & VLANs - Help Please
2024-08-16 8:05 ` Christian Kivalo
@ 2024-08-16 9:43 ` duluxoz
0 siblings, 0 replies; 10+ messages in thread
From: duluxoz @ 2024-08-16 9:43 UTC (permalink / raw)
To: pve-user
Hi Christian,
Just a typo in the email - but nice catch :-)
Anything else jump out at you?
Cheers
dulux-oz
On 16/8/24 18:05, Christian Kivalo wrote:
>
>
>> auto vmbr0
>> iface vmbr0 inet static
>> address 10.0.200.100/24
>> gateway 10.0.200.1
>> bridge_porrts bond0
> ^^^^^^^^^^^^^^^^^^
> Is this copied from your config or just a typo in the mail?
>> bridge-fd 0
>> bridge_stp off
>> bridge-vlan-aware yes
>> bridge-vids 100 200
>> bridge-allow-untagged no
* Re: [PVE-User] Problem With Bond & VLANs - Help Please
2024-08-16 7:36 [PVE-User] Problem With Bond & VLANs - Help Please duluxoz
2024-08-16 8:05 ` Christian Kivalo
@ 2024-08-16 9:53 ` Stefan Hanreich
2024-08-16 10:42 ` duluxoz
1 sibling, 1 reply; 10+ messages in thread
From: Stefan Hanreich @ 2024-08-16 9:53 UTC (permalink / raw)
To: matthew, Proxmox VE user list, duluxoz
What *exactly* doesn't work?
Does the configuration not apply? Do you not get any connectivity to the
internet / specific networks?
First thing that springs to mind is that you cannot configure two
default gateways. There can only be one default gateway. You can
configure different gateways for different subnets / interfaces. Or you
can configure different routing tables for different processes.
Your current configuration specifies three gateways. I assume you want
to use different gateways for different subnets?
What does the output of the following commands look like?
ip a
ip r
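[Editor's note: as a minimal sketch of Stefan's point, with assumed addresses (10.0.100.100 as the host address and 10.0.50.0/24 as a hypothetical remote subnet, neither from the thread): keep a single `gateway` line for the default route and express any other per-subnet routers as explicit routes, e.g. with `post-up` in /etc/network/interfaces:]
~~~
auto bond0.100
iface bond0.100 inet static
    address 10.0.100.100/24
    vlan-raw-device bond0
    # No "gateway" line here - only one default gateway can exist.
    # Reach a remote subnet (10.0.50.0/24 is a made-up example) via this
    # VLAN's router with an explicit route instead:
    post-up ip route add 10.0.50.0/24 via 10.0.100.1 dev bond0.100
~~~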
* Re: [PVE-User] Problem With Bond & VLANs - Help Please
2024-08-16 9:53 ` Stefan Hanreich
@ 2024-08-16 10:42 ` duluxoz
2024-08-16 11:26 ` Gilberto Ferreira
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: duluxoz @ 2024-08-16 10:42 UTC (permalink / raw)
To: Stefan Hanreich, Proxmox VE user list
Hi Stefan,
My apologies, I should have been more precise.
What doesn't work? Most of the ifaces are down (they won't come up
automatically as I expect, not even NIC3), and so I have no
connectivity to the LAN, let alone the rest of the outside world.
Yes, each VLAN should have its own gateway - each VLAN is its own
subnet, of course.
Results of `ip r`:
~~~
default via 10.0.200.1 dev vmbr0 proto kernel onlink linkdown
10.0.100.0/24 dev bond0.100 proto kernel scope link src 10.0.100.0 linkdown
10.0.200.0/24 dev bond0.200 proto kernel scope link src 10.0.200.0 linkdown
10.0.200.0/24 dev vmbr0 proto kernel scope link src 10.0.200.100 linkdown
~~~
Results of `ip a`:
~~~
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eno0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 00:1b:21:e4:a6:f4 brd ff:ff:ff:ff:ff:ff
3: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 00:1b:21:e4:a6:f5 brd ff:ff:ff:ff:ff:ff
4: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 00:1b:21:e4:a6:f6 brd ff:ff:ff:ff:ff:ff
5: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc
noqueue master vmbr0 state DOWN group default qlen 1000
link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
6: bond0.100@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc
noqueue state LOWERLAYERDOWN group default qlen 1000
link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
inet 10.0.100.0/24 scope global bond0.100
valid_lft forever preferred_lft forever
7: bond0.200@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc
noqueue state LOWERLAYERDOWN group default qlen 1000
link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
inet 10.0.200.0/24 scope global bond0.200
valid_lft forever preferred_lft forever
8: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
state DOWN group default qlen 1000
link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
inet 10.0.200.100/24 scope global vmbr0
valid_lft forever preferred_lft forever
~~~
Thanks for taking a look
Cheers
dulux-oz
* Re: [PVE-User] Problem With Bond & VLANs - Help Please
2024-08-16 10:42 ` duluxoz
@ 2024-08-16 11:26 ` Gilberto Ferreira
2024-08-16 11:26 ` proxmox
2024-08-16 11:32 ` proxmox
2 siblings, 0 replies; 10+ messages in thread
From: Gilberto Ferreira @ 2024-08-16 11:26 UTC (permalink / raw)
To: matthew, Proxmox VE user list
There's a lot of DOWN and NO-CARRIER.
I think this indicates a physical connection problem of some sort.
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
* Re: [PVE-User] Problem With Bond & VLANs - Help Please
2024-08-16 10:42 ` duluxoz
2024-08-16 11:26 ` Gilberto Ferreira
@ 2024-08-16 11:26 ` proxmox
2024-08-16 11:32 ` proxmox
2 siblings, 0 replies; 10+ messages in thread
From: proxmox @ 2024-08-16 11:26 UTC (permalink / raw)
To: matthew, Proxmox VE user list
Hi, perhaps another typo here, but you have the following interfaces:
eno1
eno2
eno3
But you wrote the following in your config file:
nic1
nic2
nic3
That can't work ;)
Hth
Mehmet
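[Editor's aside, not part of the original exchange: the kernel's actual interface names can be checked before editing the file, e.g.:]
~~~
# The real interface names live under /sys/class/net; the stanza names in
# /etc/network/interfaces must match them exactly (eno1, not nic1).
ls /sys/class/net
# `ip -br link show` prints the same names together with link state.
~~~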
* Re: [PVE-User] Problem With Bond & VLANs - Help Please
2024-08-16 10:42 ` duluxoz
2024-08-16 11:26 ` Gilberto Ferreira
2024-08-16 11:26 ` proxmox
@ 2024-08-16 11:32 ` proxmox
2 siblings, 0 replies; 10+ messages in thread
From: proxmox @ 2024-08-16 11:32 UTC (permalink / raw)
To: matthew, Proxmox VE user list, duluxoz, Stefan Hanreich
I believe your problem is in your definition of bond0
~~~
auto bond0
iface bond0 inet manual
bond-members eno1 eno2
~~~
The bond members here should be eno1 and eno2, not nic1 and nic2.
Looking at the eno3 NIC, you have made the same mistake: nic3 should be eno3.
~~~
auto eno3
iface eno3 inet static
address 10.0.300.100/24
~~~
Another problem I see with your configuration is that you define
gateways on each VLAN interface. Linux only has one routing domain.
Remember that you don't need a gateway or an IP address on the bridge
you use for VMs.
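[Editor's sketch pulling those fixes together - not from the thread; the member names assume the eno0/eno1 devices shown in the `ip a` output earlier, and depending on the ifupdown/ifenslave version the option may be spelled `bond-slaves` rather than `bond-members`:]
~~~
auto bond0
iface bond0 inet manual
    # member names must match the kernel's names, not "nic1 nic2"
    bond-slaves eno0 eno1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-miimon 100
~~~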
* Re: [PVE-User] Problem With Bond & VLANs - Help Please
2024-08-18 11:14 duluxoz
@ 2024-08-18 14:07 ` Gilberto Ferreira
0 siblings, 0 replies; 10+ messages in thread
From: Gilberto Ferreira @ 2024-08-18 14:07 UTC (permalink / raw)
To: matthew, Proxmox VE user list
Glad you made it
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
* Re: [PVE-User] Problem With Bond & VLANs - Help Please
@ 2024-08-18 11:14 duluxoz
2024-08-18 14:07 ` Gilberto Ferreira
0 siblings, 1 reply; 10+ messages in thread
From: duluxoz @ 2024-08-18 11:14 UTC (permalink / raw)
To: pve-user
Hi All,
So I've finally got this working. It turns out I should have:
* Used the option `bond-slaves` instead of `bond-members` (even
though the online man page I looked at said that `bond-members` was
the correct option because `bond-slaves` was no longer "appropriate")
* Set up my VLAN 200 under the vmbr0 bridge (ie iface vmbr0.200)
* Didn't have to worry about setting up VLAN 100
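[Editor's sketch of what the final working file might look like, reconstructed from the points above; addresses are taken from the original post and the config has not been verified:]
~~~
auto bond0
iface bond0 inet manual
    bond-slaves eno0 eno1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-miimon 100

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 100 200

# Node management IP lives on the bridge's VLAN 200 sub-interface;
# no separate bond0.100/bond0.200 ifaces, and only one gateway.
auto vmbr0.200
iface vmbr0.200 inet static
    address 10.0.200.100/24
    gateway 10.0.200.1
~~~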
Thanks to everyone who helped out - much appreciated :-)
Cheers
Dulux-Oz