From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <s.hanreich@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 534B9BBB9
 for <pve-devel@lists.proxmox.com>; Wed, 13 Sep 2023 10:54:21 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 3600F1B5DF
 for <pve-devel@lists.proxmox.com>; Wed, 13 Sep 2023 10:54:21 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-devel@lists.proxmox.com>; Wed, 13 Sep 2023 10:54:20 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 1B44B46CA6;
 Wed, 13 Sep 2023 10:54:20 +0200 (CEST)
Message-ID: <d047f4fd-bdba-c7d9-64b6-5dfd5e5faccb@proxmox.com>
Date: Wed, 13 Sep 2023 10:54:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.15.0
Content-Language: en-US
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
References: <20230908134304.2009415-1-s.hanreich@proxmox.com>
 <2fd1071602ad075d4580d62565fc757e4bd92a91.camel@groupe-cyllene.com>
From: Stefan Hanreich <s.hanreich@proxmox.com>
In-Reply-To: <2fd1071602ad075d4580d62565fc757e4bd92a91.camel@groupe-cyllene.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL 1.206 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 NICE_REPLY_A           -1.473 Looks like a legit reply (A)
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 URIBL_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to URIBL was blocked. See
 http://wiki.apache.org/spamassassin/DnsBlocklists#dnsbl-block for more
 information. [thekelleys.org.uk]
Subject: Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for
 DHCP servers to SDN
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Wed, 13 Sep 2023 08:54:21 -0000

Sorry for my late reply; I was a bit busy the last two days and also 
wanted some time to think about your suggestions.

On 9/11/23 05:53, DERUMIER, Alexandre wrote:
> Hi,
> 
> I think we should decide how we want to assign IPs to the VMs before
> continuing the implementation.
>
> I think there are two models:
> 
> 1)
> 
> - we want the DHCP server to assign IPs && leases itself from the
> configured subnets/ranges.
> 
> That means that leases need to be shared across nodes (within the
> same cluster maybe with /etc/pve tricks, but in the real world it
> should also work across multiple clusters, as it's not uncommon to
> share subnets between different clusters, public networks, ...).
> 
> So we don't end up with two different VMs, starting at the same time
> on two different clusters, receiving the same IP (so DHCP servers
> need to use some kind of central lock, ...).
>

This is also something I have thought about, but I assume dnsmasq is 
not really built with multiple instances accessing the same leases file 
in mind.

This problem would be solved by using distributed DHCP servers like 
kea. kea, on the other hand, has the issue that we need to set up an 
SQL database or some other external storage for it. Alternatively, we 
would need to write a new backend for kea that integrates with our 
pmxcfs.
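
Just for illustration, pointing kea at an external lease backend is 
mostly a matter of its lease-database config; a minimal sketch with 
made-up connection details, everything else omitted:

  {
    "Dhcp4": {
      "lease-database": {
        "type": "postgresql",
        "name": "kea_leases",
        "host": "10.0.0.2",
        "user": "kea",
        "password": "secret"
      },
      "subnet4": [ {
        "subnet": "10.0.0.0/24",
        "pools": [ { "pool": "10.0.0.100 - 10.0.0.200" } ]
      } ]
    }
  }

The schema for that database has to be created and kept up to date 
separately (kea ships kea-admin for that), which is exactly the kind of 
extra moving part I would rather not require for the default setup.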

This is partly why, I think, Thomas mentioned implementing our own 
DHCP server, which would give us the flexibility to handle things as we 
see fit.

Then we can just recommend the dnsmasq plugin for simple setups (e.g. 
single node setups), while more advanced setups should opt for other 
DHCP backends.
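
For such a simple setup, the generated dnsmasq config would not need to 
contain much more than the following (just a sketch; interface name, 
range and options are made up):

  # /etc/dnsmasq.d/vnet0.conf - DHCP only, DNS disabled
  port=0
  interface=vnet0
  bind-interfaces
  dhcp-range=10.0.0.100,10.0.0.200,255.255.255.0,12h
  dhcp-option=option:router,10.0.0.1
  dhcp-leasefile=/var/lib/misc/dnsmasq.vnet0.leases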

> 
> 2)
> 
> The other way (my preferred way) could be to use IPAM (where we
> already have the local IPAM, or external IPAMs like netbox/phpipam
> for sharing between multiple clusters).
> 
> 
> The IP is reserved in IPAM (automatically finding the next free IP
> at VM creation for example, or manually in the GUI, or maybe at VM
> start if we want ephemeral IPs), then registered in DNS, and the DHCP
> server config is generated with MAC-IP reservations (for DHCP server
> config generation, it could for example be a daemon polling the IPAM
> database for changes).
> 
> This way there is no need to handle lease sharing, so it can work
> with any DHCP server.
> 

Implementing this via IPAM plugins seems like a good idea, but if we 
use distributed DHCP servers like kea (or our own implementation) it 
might not be needed. It also adds quite a bit of complexity.

With dnsmasq there is even the possibility of running scripts (via 
--dhcp-script, see the docs [1]) when a lease is added / changed / 
deleted. But as far as I can tell this cannot be used to override the 
IP that dnsmasq hands out via DHCP, so it is probably not really useful 
for our use case.
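
For completeness, such a script is only notified after the fact: 
dnsmasq runs it with the action and lease details as arguments, along 
the lines of this sketch (log destination etc. made up):

  #!/bin/sh
  # dnsmasq calls this as: <script> add|old|del <mac> <ip> [<hostname>]
  action="$1"
  mac="$2"
  ip="$3"
  hostname="$4"

  case "$action" in
      add|old) logger -t dhcp-hook "lease $ip -> $mac (${hostname:-unknown})" ;;
      del)     logger -t dhcp-hook "lease $ip for $mac released/expired" ;;
  esac

So the most it could do for us is mirror leases back into an IPAM, not 
influence which address gets handed out.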

------

Another method that I had in mind was providing a DHCP forwarding 
plugin that proxies the DHCP requests to another DHCP server (which can 
then even be outside the cluster). This way there is only one DHCP 
server keeping track of the leases, and you do not have the issue of 
having to share a lease database / use IPAM. So, for instance, you have 
a DHCP server running on one node and the other nodes just proxy their 
requests to it.
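
dnsmasq can act as such a relay itself, so the forwarding part on the 
other nodes could be as small as this (sketch; 10.0.0.2 being the local 
node's address on the vnet, 10.0.0.1 the node running the actual DHCP 
server):

  # relay-only instance: no DNS, no local lease pool
  port=0
  interface=vnet0
  dhcp-relay=10.0.0.2,10.0.0.1

A plain dhcrelay would of course work just as well; the point is only 
that a single node keeps the lease database.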

I was also thinking we could implement setting the IP for a specific 
VM on interfaces where we have a DHCP server, since we can then just 
provide fixed IPs for specific MAC addresses. This could be quite 
convenient.
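
With dnsmasq this would just be one dhcp-host line per NIC, e.g. (MAC, 
address and name made up):

  dhcp-host=bc:24:11:12:34:56,10.0.0.50,myvm
  # optionally refuse to serve MACs we don't know about:
  dhcp-ignore=tag:!known

kea has an equivalent host reservation mechanism, so this part should 
work the same with either backend.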



[1] https://thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html