From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>,
	"t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] seems that ifupdown2 is installed by default on upgrade (a friend reported an IPv6 SLAAC bug)
Date: Fri, 24 Nov 2023 13:12:34 +0000
Message-ID: <b41f7100c4768b40eb8c8b0c115757fa285296df.camel@groupe-cyllene.com>
In-Reply-To: <8c41e5a8-4be7-4d1b-8832-787f402f767d@proxmox.com>

-------- Original Message --------
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>, pve-devel@lists.proxmox.com <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] seems that ifupdown2 is installed by default on upgrade (a friend reported an IPv6 SLAAC bug)
Date: 24/11/2023 13:49:26

On 24/11/2023 at 11:12, DERUMIER, Alexandre wrote:
> After investigating a little bit,
> 
> I think this is because ifupdown1 sets accept_ra=2 by default,
> 
> while with ifupdown2, for safety, we set accept_ra=0 unless it is
> explicitly configured in /etc/network/interfaces:
> 
> iface vmbr0 inet6 auto
>           accept_ra 2


>> Yeah, it's your patch that broke compat here, which we applied
>> already [0] but upstream hasn't yet [1] (

Yes, this is my patch. I was not sure whether we needed to change the
accept_ra default back to 2. (With accept_ra=0 the kernel ignores
router advertisements entirely, so SLAAC stops working until it is
re-enabled explicitly; ifupdown1's default of 2 accepts them even with
IPv6 forwarding enabled.)
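For anyone hit by this after the upgrade, a minimal sketch of the
explicit workaround (vmbr0 is the usual PVE bridge name from the
stanza above; adjust to your setup):

  # /etc/network/interfaces -- request SLAAC explicitly under ifupdown2
  iface vmbr0 inet6 auto
          accept_ra 2

  # apply, then verify what the kernel actually uses:
  #   ifreload -a
  #   sysctl net.ipv6.conf.vmbr0.accept_ra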


>> do you know what's going on with them? Much less responsive, and no
>> release in over three years; maybe it's just NVIDIA stifling the
>> great work of the Cumulus devs?)


I really think it's NVIDIA related. Here is a pull request from a
friend, about a VXLAN fix that was already fixed in the NVIDIA/Cumulus
ifupdown2 .deb version (I'm also a Cumulus customer, so I verified
that there are indeed sometimes minor differences):

https://github.com/CumulusNetworks/ifupdown2/pull/271

"we use an internal repository for ifupdown2 where we actively push our
changes daily/weekly.
Some changes are specific to Cumulus Linux and diverge from upstream
debian (i.e. default values and Cumulus specific features, etc).

So it takes quite some time to review the changes (and diff between
github/internal), and making sure they don't break upstream (and CL
ifupdown2). I pretty much maintain this github repo on my free time,
hence the long delay for the PRs, open issues and sync between
github/internal repo.
"




Looking at the code, I think it's a little bit behind the NVIDIA
version, but not by much.

But indeed, they should tag a new release.
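
As a quick way to see how far the packaged version lags the upstream
tags, a sketch (assuming a Debian-based host; both commands are
standard dpkg/git tooling):

  # version of the ifupdown2 package currently installed:
  dpkg-query -W -f='${Version}\n' ifupdown2

  # tags published on the upstream GitHub repository:
  git ls-remote --tags https://github.com/CumulusNetworks/ifupdown2.git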




