From: proxmox@elchaka.de
To: matthew@peregrineit.net,
Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Problem With Bond & VLANs - Help Please
Date: Fri, 16 Aug 2024 13:26:32 +0200
Message-ID: <043BBE0D-848F-4587-9678-7729AFFEE951@elchaka.de>
In-Reply-To: <534f2488-4441-43eb-9767-5e20b531f6b3@gmail.com>

Hi, perhaps another typo here, but
you have the following interfaces:
eno0
eno1
eno2
But you wrote in your config file:
nic1
nic2
nic3
That can't work ;)
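The bond definition in /etc/network/interfaces has to use the kernel names directly. A minimal sketch, assuming eno0 and eno1 are the bond members as in your summary:

~~~
auto bond0
iface bond0 inet manual
    # ifupdown bond options - use the real kernel interface names
    bond-slaves eno0 eno1
    bond-mode 802.3ad
    bond-miimon 100
~~~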
Hth
Mehmet
On 16 August 2024 12:42:46 CEST, duluxoz <duluxoz@gmail.com> wrote:
>Hi Stephan,
>
>My apologies, I should have been more precise.
>
>What doesn't work? Most of the interfaces are down and won't come up automatically as I expect (not even NIC3), so I have no connectivity to the LAN, let alone the rest of the outside world.
>
>Yes, each VLAN should have its own gateway - each VLAN is its own subnet, of course.
>
>Results of `ip r`:
>
>~~~
>default via 10.0.200.1 dev vmbr0 proto kernel onlink linkdown
>10.0.100.0/24 dev bond0.100 proto kernel scope link src 10.0.100.0 linkdown
>10.0.200.0/24 dev bond0.200 proto kernel scope link src 10.0.200.0 linkdown
>10.0.200.0/24 dev vmbr0 proto kernel scope link src 10.0.200.100 linkdown
>~~~
>
>Results of `ip a`:
>
>~~~
>1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host noprefixroute
> valid_lft forever preferred_lft forever
>2: eno0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
> link/ether 00:1b:21:e4:a6:f4 brd ff:ff:ff:ff:ff:ff
>3: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
> link/ether 00:1b:21:e4:a6:f5 brd ff:ff:ff:ff:ff:ff
>4: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
> link/ether 00:1b:21:e4:a6:f6 brd ff:ff:ff:ff:ff:ff
>5: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue master vmbr0 state DOWN group default qlen 1000
> link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
>6: bond0.100@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
> link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
> inet 10.0.100.0/24 scope global bond0.100
> valid_lft forever preferred_lft forever
>7: bond0.200@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
> link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
> inet 10.0.200.0/24 scope global bond0.200
> valid_lft forever preferred_lft forever
>8: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
> link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
> inet 10.0.200.100/24 scope global vmbr0
> valid_lft forever preferred_lft forever
>~~~
>
>Thanks for taking a look
>
>Cheers
>
>dulux-oz
>
>
>On 16/8/24 19:53, Stefan Hanreich wrote:
>>
>> On 8/16/24 09:36, duluxoz wrote:
>>> Hi All,
>>>
>>> Disclaimer: I'm coming from an EL background - this is my first venture
>>> into Debian-world :-)
>>>
>>> So I'm having an issue getting the NICs, Bond, and VLANs correctly
>>> configured on a new Proxmox Node (Old oVirt Node). This worked on the
>>> old oVirt config (albeit with a different set of config files/statements).
>>>
>>> What I'm trying to achieve:
>>>
>>> * Proxmox Node IP Address: 10.0.200.100/24, Tag: VLAN 200
>>> * Gateway: 10.0.200.1
>>> * Bond: NIC1 (eno0) & NIC2 (eno1), 802.3ad
>>> * VLAN bond0.100: 10.0.100.0/24, Gateway 10.0.100.1
>>> * VLAN bond0.200: 10.0.200.0/24, Gateway 10.0.200.1
>>> * NIC3 (eno2): 10.0.300.100/24 - not really relevant, as it's not part
>>> of the Bond, but I've included it to be thorough (a rough sketch of
>>> the intended config is below)
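>>>
>>> In /etc/network/interfaces terms, something like this is what I've
>>> been aiming at (a sketch only, not my exact file - NIC3 and the
>>> bond0.100 sub-interface left aside):
>>>
>>> ~~~
>>> auto bond0
>>> iface bond0 inet manual
>>>     bond-slaves eno0 eno1
>>>     bond-mode 802.3ad
>>>
>>> auto vmbr0
>>> iface vmbr0 inet manual
>>>     bridge-ports bond0
>>>     bridge-stp off
>>>     bridge-fd 0
>>>     bridge-vlan-aware yes
>>>     bridge-vids 100 200
>>>
>>> # node IP tagged on VLAN 200
>>> auto vmbr0.200
>>> iface vmbr0.200 inet static
>>>     address 10.0.200.100/24
>>>     gateway 10.0.200.1
>>> ~~~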
>> What *exactly* doesn't work?
>> Does the configuration not apply? Do you not get any connectivity to
>> the internet / specific networks?
>>
>>
>> First thing that springs to mind is that you cannot configure two
>> default gateways. There can only be one default gateway. You can
>> configure different gateways for different subnets / interfaces. Or you
>> can configure different routing tables for different processes.
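>>
>> For illustration: with ifupdown you would keep the single default
>> gateway (e.g. on the VLAN the node lives in) and reach networks behind
>> the other subnets' routers via static routes. A sketch - the host
>> address and the 10.1.0.0/16 target network are made up:
>>
>> ~~~
>> auto bond0.100
>> iface bond0.100 inet static
>>     address 10.0.100.100/24
>>     # no second "gateway" line here - this subnet is directly
>>     # connected; only add routes for networks behind 10.0.100.1
>>     post-up ip route add 10.1.0.0/16 via 10.0.100.1 dev bond0.100
>> ~~~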
>>
>> Your current configuration specifies three gateways. I assume you want
>> to use different gateways for different subnets?
>>
>>
>> What does the output of the following commands look like?
>>
>> ip a
>> ip r