Subject: Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
Date: Tue, 26 Sep 2023 13:20:58 +0200
From: Stefan Hanreich <s.hanreich@proxmox.com>
To: "DERUMIER, Alexandre", pve-devel@lists.proxmox.com, t.lamprecht@proxmox.com

On 9/20/23 23:48, DERUMIER, Alexandre wrote:
> Finally, it's not so easy without writing the IP on the Proxmox side
> (in the VM config or somewhere else), because to retrieve a reserved
> IP from an external IPAM when the VM starts, we need to look it up
> maybe by MAC address, maybe by the hostname of the VM, or maybe by
> some custom attributes - but not all IPAMs accept the same attributes.
>
> (At least phpIPAM && NetBox don't support all features, or not
> easily. NetBox, for example, requires registering the full VM object
> && interfaces + MAC + the mapping to the IP before a MAC lookup
> works; in phpIPAM an IP is a single address object with the MAC as an
> attribute.)

Yes, I think so as well.
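Just to make that lookup asymmetry concrete, a rough sketch of a MAC -> IP lookup against the NetBox REST API, written from memory - the endpoint and filter names are assumptions that would need to be checked against the docs:

# Rough sketch of a MAC -> IP lookup against the NetBox REST API.
# Endpoint and filter names are from memory and need verification; the
# point is that NetBox needs the whole VM + interface objects registered
# and two round trips, while phpIPAM keeps the MAC directly on the
# single address object (one search call).
import requests

NETBOX = "https://netbox.example.com"                  # assumed instance
HEADERS = {"Authorization": "Token 0123456789abcdef"}  # assumed token

def netbox_ip_for_mac(mac: str) -> str | None:
    # 1) the MAC lives on the (VM) interface object, which only exists
    #    if the VM && its interfaces were registered beforehand
    r = requests.get(f"{NETBOX}/api/virtualization/interfaces/",
                     params={"mac_address": mac}, headers=HEADERS)
    ifaces = r.json()["results"]
    if not ifaces:
        return None
    # 2) resolve interface -> assigned IP address
    r = requests.get(f"{NETBOX}/api/ipam/ip-addresses/",
                     params={"vminterface_id": ifaces[0]["id"]},
                     headers=HEADERS)
    addrs = r.json()["results"]
    return addrs[0]["address"] if addrs else None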
It would also make us dependent on external systems, which might not always be up, and it would create an additional hurdle when setting things up. Having our own solution for this seems preferable imo. We can still provide integrations with NetBox / phpIPAM, so they can take over from our small IPAM if they implement the features we need.

I'll take a closer look at NetBox, since I was under the impression that it should support this - although it's been a while since I played around with it. Not sure about phpIPAM, but I wasn't too stoked on using it anyway after browsing their source code for a bit.

> So I think the best way is still to write the IP into the VM config.
> This allows injecting an already reserved IP into DHCP at VM
> start/migrate without needing to call the IPAM (and also avoids
> start problems if the IPAM server is down).
>
> It also allows using the IP for the firewall's ipfilter - and I see a
> use case for SDN VXLAN too, or for special /32 route injection.

Yes, I think so as well, although we would need to take care of proper synchronization between configs and IPAM. If these diverge for whatever reason, we will run into trouble. Of course, this *should* never happen when properly implemented.

Another option I thought about would be storing a VMID -> IP mapping in the (pve) IPAM itself. This would have the upside of centralized storage and a single source of truth, without having to maintain two different locations for the IP. Though it would also be a bit more opaque to the user if we don't expose it somewhere in the UI.

It would have the downside that starting VMs becomes an issue when the IPAM is down. When using the pve IPAM in a cluster (or on a single node) I can see this being alright, since you need quorum to start a VM anyway - as long as you have a quorate cluster, the pve IPAM *should* be available as well. When using phpIPAM or NetBox this is an issue we would need to think about.

> I just need some protections for snapshots - nothing too difficult,
> but we really need to avoid trying to manage multiple
> versions/snapshots of a VM's IP entry in the IPAMs.
> I tried this 2 years ago, and it was really painful to handle in the
> different IPAMs.
> So maybe the best way is to forbid changing the IP address while a
> snapshot exists.

Yes, it might just be best to check on restore whether the IP is still the same, or at least currently available, and otherwise automatically get a new IP from the IPAM (maybe with a warning). On the other hand, this should not be an issue when storing the VMID -> IP mapping centralized somewhere, since we can then rely on the IP stored there. Of course, this would exclude the DHCP/IP setting from the snapshot, which can be good or bad I'd say (depending on the use case).

> I think we could implement the IPAM calls like this:
>
> create vm or add a new nic
> --------------------------
> qm create ... -net0 bridge=vnet,....,ip=(auto|192.168.0.1|dynamic),ip6=(..)
>
> auto: search for a free IP in the IPAM and write it into the net0
> ip= field.
>
> 192.168.0.1: check that the IP is free in the IPAM && register it
> there, then write it into the ip= field.
>
> dynamic: write "ephemeral" into the config: net0: ....,ip=ephemeral
> (this is a dynamic IP registered at VM start and released at VM stop).

Sounds good to me.
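To make sure we mean the same thing by those three values, here is a small, self-contained sketch of the allocation semantics with a toy in-memory IPAM - all names are hypothetical, the real implementation would of course live in our Perl IPAM plugins:

# Toy stand-in for the IPAM, just enough to make the proposed
# ip=(auto|<addr>|dynamic) semantics concrete.
import ipaddress

class ToyIpam:
    def __init__(self, subnet: str):
        # all usable host addresses of the subnet, in order
        self.hosts = [str(h) for h in ipaddress.ip_network(subnet).hosts()]
        self.used: dict[str, int] = {}  # ip -> vmid

    def is_free(self, ip: str) -> bool:
        return ip not in self.used

    def register(self, ip: str, vmid: int) -> None:
        self.used[ip] = vmid

    def allocate_next_free(self, vmid: int) -> str:
        for ip in self.hosts:
            if ip not in self.used:
                self.used[ip] = vmid
                return ip
        raise RuntimeError("subnet exhausted")

def resolve_ip_option(ipam: ToyIpam, vmid: int, requested: str) -> str:
    """Return what would end up in the netX ip= field."""
    if requested == "auto":
        # search a free IP in the IPAM, persist it in the VM config
        return ipam.allocate_next_free(vmid)
    if requested == "dynamic":
        # nothing is registered yet - resolved at VM start instead
        return "ephemeral"
    # fixed address: only accept it if the IPAM considers it free
    if not ipam.is_free(requested):
        raise ValueError(f"IP {requested} is already registered")
    ipam.register(requested, vmid)
    return requested

ipam = ToyIpam("192.168.0.0/24")
print(resolve_ip_option(ipam, 100, "auto"))          # 192.168.0.1
print(resolve_ip_option(ipam, 101, "192.168.0.50"))  # 192.168.0.50
print(resolve_ip_option(ipam, 102, "dynamic"))       # ephemeral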
> vm start
> --------
> - if ip=ephemeral, find && register a free IP in the IPAM and write
>   it into the VM config: net0: ...,ip=192.168.0.10[E]
>   (maybe with a special flag [E] to indicate it's ephemeral)
> - read the IP from the VM config && inject it into DHCP

Maybe we can even get away with setting the IP in the DHCP config as soon as we set it in the VM configuration, as long as it is not ephemeral, thus avoiding having to do it while starting VMs?

> vm stop
> -------
> If the IP is ephemeral (netX: ip=192.168.0.10[E]), delete the IP from
> the IPAM and set ip=ephemeral in the VM config again.
>
>
> vm destroy or nic remove/unplug
> -------------------------------
> If netX: ...,ip=192.168.0.10, remove the IP from the IPAM.
>
>
> nic update while the vm is running
> ----------------------------------
> If an IP is defined (netX: ip=192.168.0.10), we don't allow bridge or
> IP changes, as the VM is not notified about these changes and would
> still use the old IP.
>
> We can allow NIC hot-unplug && hotplug (the guest OS will drop the IP
> on NIC removal and will do DHCP again on NIC hotplug).

Yes, I think so as well. Maybe we could give the option to change the IP together with a forced reboot and a warning like 'When changing the IP this VM will get rebooted', as a quality-of-life feature?

> nic hotplug with ip=auto
> ------------------------
>
> --> add the NIC in pending state --> find an IP in the IPAM && write
> it into the pending entry --> do the hotplug in QEMU.
>
> We need to handle the config revert to remove the IP from the IPAM if
> the NIC hotplug gets stuck in pending state. (I have only ever seen
> this when the guest OS doesn't have the pci_hotplug module loaded,
> but it's better to be careful.)

Yes - defensive programming is always good!

> The IPAM modules (internal pve, phpIPAM, NetBox) are already there
> for this, so I think it shouldn't be too difficult.
>
> dnsmasq seems to have a reservation file option where we can
> dynamically add IP-MAC pairs without needing to restart it.
>
> I'll try it, re-using your current dnsmasq patches.

Since you want to take a shot at implementing it, is there anything I could help you with? I'd have some resources now for taking a shot at this as well. (I've put quick sketches of how I imagine the ephemeral lifecycle and the reservation file handling at the end of this mail.)

It would also be interesting to improve and add some features to our built-in IPAM, maybe even add the VMID -> IP mapping functionality I've touched upon earlier. It would also be interesting to expose some of this information to the frontend, so users have an overview of currently leased IPs - what do you think?

Would it also make sense to set IPSet entries for VMs, so they are only allowed to use the IPs we dedicate to them? This would be a decent safeguard for preventing issues down the line.

Additionally, it would be interesting to automatically create aliases for VNets/VMs in the firewall configuration. If we add VMs as aliases, we would have to recompile the iptables rules on every IP change. For this feature it would make sense to be able to set names / comments on VNets, so we can reference them that way. What do you think?
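As promised, a sketch of how I picture the ephemeral start/stop lifecycle, re-using the toy IPAM from the earlier sketch - the config handling is reduced to plain dicts and all helper names are hypothetical:

# Sketch of the ephemeral lifecycle: ip=ephemeral is resolved to a real
# address at VM start (flagged [E] in the config) and released again at
# VM stop. `ToyIpam` is the toy class from the earlier sketch.

EPHEMERAL_FLAG = "[E]"

def vm_start_nic(ipam: "ToyIpam", net: dict, vmid: int) -> dict:
    """Resolve and announce the IP of one netX entry at VM start."""
    if net["ip"] == "ephemeral":
        # find && register a free IP, flag it as ephemeral in the config
        net["ip"] = ipam.allocate_next_free(vmid) + EPHEMERAL_FLAG
    # read the IP from the config && hand it to the DHCP server
    # (add_dhcp_reservation is sketched below)
    add_dhcp_reservation(net["mac"], net["ip"].removesuffix(EPHEMERAL_FLAG))
    return net

def vm_stop_nic(ipam: "ToyIpam", net: dict) -> dict:
    """Release an ephemeral IP again at VM stop."""
    if net["ip"].endswith(EPHEMERAL_FLAG):
        # delete the IP from the IPAM, mark the NIC as ephemeral again
        ipam.used.pop(net["ip"].removesuffix(EPHEMERAL_FLAG), None)
        net["ip"] = "ephemeral"
    return net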
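And for the reservation file: dnsmasq reads additional host reservations from a file given via --dhcp-hostsfile and re-reads that file on SIGHUP, without a full restart and without dropping existing leases. Injecting a reservation could then look roughly like this (the file and pidfile paths are assumptions):

# Sketch of injecting a MAC -> IP reservation into a running dnsmasq
# via its --dhcp-hostsfile. The paths are assumptions for illustration.
import os
import signal

HOSTSFILE = "/etc/dnsmasq.d/sdn-reservations.hosts"  # --dhcp-hostsfile=...
PIDFILE = "/run/dnsmasq-sdn.pid"                     # assumed pidfile path

def add_dhcp_reservation(mac: str, ip: str) -> None:
    # one reservation per line, same syntax as the right-hand side of a
    # dhcp-host= option, e.g. "52:54:00:12:34:56,192.168.0.10"
    with open(HOSTSFILE, "a") as f:
        f.write(f"{mac},{ip}\n")
    # SIGHUP makes dnsmasq re-read the hostsfile (no restart needed)
    with open(PIDFILE) as f:
        os.kill(int(f.read().strip()), signal.SIGHUP)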