public inbox for pve-user@lists.proxmox.com
From: "Сергей Цаболов" <tsabolov@t8.ru>
To: Proxmox VE user list <pve-user@lists.proxmox.com>,
	Dimitri Alexandris <d.alexandris@gmail.com>
Subject: Re: [PVE-User] openvswitch + bond0 + 2 Fiber interfaces.
Date: Fri, 21 Jan 2022 15:28:27 +0300	[thread overview]
Message-ID: <b50d1e57-22c8-7d8a-1ddf-74a253ce78fe@t8.ru> (raw)
In-Reply-To: <CAOWoYHpvS5ycmJRHCQ6e7ZXgmUOQoSuv24K4qckmWNJwLB2ZBA@mail.gmail.com>

Dimitri, hello

Thank you for sharing.

My Proxmox version is proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve).

I tried replacing allow-vmbr0 with auto.

I found this link:
https://metadata.ftp-master.debian.org/changelogs/main/o/openvswitch/testing_openvswitch-switch.README.Debian

In the section "ex 9: Bond + Bridge + VLAN + MTU", allow is used.

In any case nothing went wrong: I tried both allow and auto, just commenting out one line at a time.
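For reference, the bond and bridge stanzas from that README rewritten with auto instead of the allow-* directives would look roughly like this (a sketch based on the example in the thread, not tested on my host):

--------------
auto bond0
iface bond0 inet manual
      ovs_bonds eno1 eno2
      ovs_type OVSBond
      ovs_bridge vmbr0
      ovs_options bond_mode=active-backup

auto vmbr0
iface vmbr0 inet manual
      ovs_type OVSBridge
      ovs_ports bond0
--------------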


Dimitri, thanks again for sharing.




On 21.01.2022 15:03, Dimitri Alexandris wrote:
> I have Openvswitch bonds working fine for years now, but in older versions
> of Proxmox (6.4-4 and 5.3-5):
>
> --------------
> auto eno2
> iface eno2 inet manual
>
> auto eno1
> iface eno1 inet manual
>
> allow-vmbr0 ath
> iface ath inet static
>    address 10.NN.NN.38/26
>    gateway 10.NN.NN.1
>    ovs_type OVSIntPort
>    ovs_bridge vmbr0
>    ovs_options tag=100
> .
> .
> allow-vmbr0 bond0
> iface bond0 inet manual
>    ovs_bonds eno1 eno2
>    ovs_type OVSBond
>    ovs_bridge vmbr0
>    ovs_options bond_mode=balance-slb lacp=active
>
> allow-ovs vmbr0
> iface vmbr0 inet manual
>    ovs_type OVSBridge
>    ovs_ports bond0 ath lan dmz_vod ampr
> --------
>
> I think "allow-vmbr0" and "allow-ovs" have now been replaced with "auto".
>
> This bond works fine with HP, 3COM, HUAWEI, and MIKROTIK switches.
> Several OVSIntPort VLANS are attached to it.
> I also had 10G bonds (Intel, Supermicro inter-server links), with the same
> result.
>
> I see the only difference with your setup is the bond_mode.  Switch setup
> is also very important to match this.
>
>
>
>
>
> On Fri, Jan 21, 2022 at 1:23 PM Сергей Цаболов <tsabolov@t8.ru> wrote:
>
>> Hello,
>>
>> I have a PVE cluster and I am thinking of installing openvswitch on pve-7
>> so that I can move and add VMs from other networks and Proxmox clusters.
>>
>> With a plain Linux bridge everything works without problems on the two
>> 10G interfaces ens1f0np0 and ens1f12np0.
>>
>> I installed openvswitch following the manual:
>> https://pve.proxmox.com/wiki/Open_vSwitch
>>
>> I want to use the 10G fiber interfaces ens1f0np0 and ens1f12np0 in a bond, I think.
>>
>> I tried some settings, but it is not working.
>>
>> My setup in /etc/network/interfaces:
>>
>> auto lo
>> iface lo inet loopback
>>
>> auto ens1f12np0
>> iface ens1f12np0 inet manual
>> #Fiber
>>
>> iface idrac inet manual
>>
>> iface eno2 inet manual
>>
>> iface eno3 inet manual
>>
>> iface eno4 inet manual
>>
>> auto ens1f0np0
>> iface ens1f0np0 inet manual
>>
>> iface eno1 inet manual
>>
>> auto inband
>> iface inband inet static
>>       address 10.10.29.10/24
>>       gateway 10.10.29.250
>>       ovs_type OVSIntPort
>>       ovs_bridge vmbr0
>> #Proxmox Web Access
>>
>> auto vlan10
>> iface vlan10 inet manual
>>       ovs_type OVSIntPort
>>       ovs_bridge vmbr0
>>       ovs_options tag=10
>> #Network 10
>>
>> auto bond0
>> iface bond0 inet manual
>>       ovs_bonds ens1f0np0 ens1f12np0
>>       ovs_type OVSBond
>>       ovs_bridge vmbr0
>>       ovs_mtu 9000
>>       ovs_options bond_mode=active-backup
>>
>> auto vmbr0
>> iface vmbr0 inet manual
>>       ovs_type OVSBridge
>>       ovs_ports bond0 inband vlan10
>>       ovs_mtu 9000
>> #inband
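>>
>> To check whether the bond actually comes up with this config, the standard
>> Open vSwitch tools can help (assuming the openvswitch-switch package is
>> installed; port and bridge names are the ones from the config above):
>>
>> --------
>> # list the ports attached to the bridge
>> ovs-vsctl list-ports vmbr0
>> # show bond members and which slave is active
>> ovs-appctl bond/show bond0
>> --------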
>>
>>
>> Can someone tell me whether I have set this all up correctly or not?
>>
>> If anyone has an openvswitch setup with bonded 10G interfaces, please
>> share your configuration with me.
>>
>> Thanks a lot.
>>
>>
>> Best regards,
>> Sergey TS
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>

Best regards,
Sergey TS



Thread overview: 4+ messages
2022-01-21 11:10 Сергей Цаболов
2022-01-21 12:03 ` Dimitri Alexandris
2022-01-21 12:28   ` Сергей Цаболов [this message]
2022-01-27  8:53     ` Сергей Цаболов
