public inbox for pve-devel@lists.proxmox.com
* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
@ 2023-10-23 10:27 Stefan Lendl
  2023-10-23 12:52 ` Stefan Lendl
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Stefan Lendl @ 2023-10-23 10:27 UTC (permalink / raw)
  To: pve-devel


I am currently working on the SDN feature.  This is an initial review of
the patch series and I am trying to make a strong case against ephemeral
DHCP IP reservation.

The current state of the patch series invokes the IPAM on every VM/CT
start/stop to add or remove the IP from the IPAM.
This triggers the dnsmasq config generation on the specific host with
only the MAC/IP mapping of that particular host.

From reading the discussion of the v1 patch series I understand this
approach tries to implement the ephemeral IP reservation strategy. From
off-list conversations with Stefan Hanreich, I agree that having
ephemeral IP reservation coordinated by the IPAM requires us to
re-implement DHCP functionality in the IPAM and heavily rely on syncing
between the different services.

To maintain reliable sync, we need to hook into many different places
where the IPAM needs to be queried.  Any issue in the implementation
may lead to the IPAM and the local DHCP config state running out of
sync, causing network issues such as duplicate IPs.

Furthermore, every interaction with the IPAM requires a cluster-wide
lock on the IPAM. Having a central cluster-wide lock on every VM
start/stop/migrate will significantly limit parallel operations.  Even
starting two VMs in parallel will be serialized by this central lock. At
boot, trying to start many VMs (ideally as many in parallel as possible)
is limited by the central IPAM lock even further.
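
For illustration, a minimal sketch of the locking pattern this implies,
assuming the pve IPAM keeps its state in priv/ipam.db and serializes
access via PVE::Cluster::cfs_lock_file (the allocation helper below is
hypothetical):

  my $res = PVE::Cluster::cfs_lock_file('priv/ipam.db', undef, sub {
      # read ipam.db, allocate or release the IP, write the file back
      return allocate_ip_for_mac($zone, $subnet, $mac); # hypothetical helper
  });
  die $@ if $@;

Every start/stop that touches the IPAM has to pass through this single
cluster-wide lock, which is what serializes parallel operations.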

I argue that we should not support ephemeral IPs at all.
The alternative is to make all IPAM reservations persistent.

Using persistent IPs only reduces the interactions of VM/CTs with the
IPAM to a minimum: a NIC joining a subnet and a NIC leaving a subnet. I am
deliberately not referring to VMs because a VM may be part of multiple
VNets or even appear multiple times in the same VNet (regardless of
whether that is sensible).

Cases where the IPAM needs to be involved (sketched below):

- NIC with DHCP enabled VNet is added to VM config
- NIC with DHCP enabled VNet is removed from VM config
- NIC is assigned to another Bridge
  can be treated as individual leave + join events
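
A minimal sketch of how these events could look as SDN helper functions
(Perl; all names are illustrative assumptions, not an actual API):

  package PVE::Network::SDN;

  sub nic_join_vnet {
      my ($bridge, $mac) = @_;
      # reserve a free IP for $mac in the vnet's subnet via the IPAM,
      # then regenerate the dnsmasq config from the IPAM state
  }

  sub nic_leave_vnet {
      my ($bridge, $mac) = @_;
      # release the IP reserved for $mac and regenerate the config
  }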

Cases that are explicitly not covered but may be added if desired:

- Manually assigning an IP address on a NIC
  will not be automatically visible in the IPAM
- Manually changing the MAC on a NIC
  don't do that; you are on your own.
  Not handled automatically; change it in the IPAM manually.

Once an IP is reserved via the IPAM, the dnsmasq config can be generated
statelessly and idempotently from the pve IPAM and is identical on all
nodes, regardless of whether a VM/CT actually resides on that node or is
running or stopped.  This is especially useful for VM migration because
the IP stays consistent without special considerations.
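
For illustration, such a generated per-zone file could look like this on
every node (hypothetical MACs/IPs; dhcp id `nat` as in the example
config quoted below; MAC,IP lines in dnsmasq dhcp-hostsfile style,
though the exact format depends on the dnsmasq plugin):

  /etc/dnsmasq.d/nat/ethers:

    bc:24:11:01:02:03,10.1.0.100
    bc:24:11:04:05:06,10.1.0.101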

Snapshot/revert, backup/restore, suspend/hibernate/resume cases are
automatically covered because the IP will already be reserved for that
MAC.

If the admin wants to change the IP of a VM, this can be done via the
IPAM API/UI, which will have to be implemented separately.

A limitation of this approach vs. dynamic IP reservation is that the IP
range of the subnet needs to be large enough to hold the IPs of all VMs
in that subnet, even stopped ones. This is in contrast to default DHCP
behavior, where the range only limits the number of actively running
VMs. For example, a range of 100 addresses then caps the subnet at 100
configured NICs rather than 100 concurrently running ones. It should be
enough to mention this in the docs.

I will further review the code and try to implement the aforementioned
approach.

Best regards,
Stefan Lendl

Stefan Hanreich <s.hanreich@proxmox.com> writes:

> This is a WIP patch series, since I will be gone for 3 weeks and wanted to
> share my current progress with the DHCP support for SDN.
>
> This patch series adds support for automatically deploying dnsmasq as a DHCP
> server to a simple SDN Zone.
>
> While certainly not 100% polished on some ends (looking at restarting systemd
> services in particular), the general idea behind the mechanism shows. I wanted
> to gather some feedback on how I approached designing the plugins and the
> config regeneration process before committing to this design by creating an API
> and UI around it.
>
> You need to install dnsmasq (and disable it afterwards):
>
>   apt install dnsmasq && systemctl disable --now dnsmasq
>
>
> You can use the following example configuration for deploying a DHCP server in
> a SDN subnet:
>
> /etc/pve/sdn/dhcp.cfg:
>
>   dnsmasq: nat
>
>
> /etc/pve/sdn/zones.cfg:
>
>   simple: DHCPNAT
>           ipam pve
>
>
> /etc/pve/sdn/vnets.cfg:
>
>   vnet: dhcpnat
>           zone DHCPNAT
>
>
> /etc/pve/sdn/subnets.cfg:
>
>   subnet: DHCPNAT-10.1.0.0-16
>           vnet dhcpnat
>           dhcp-dns-server 10.1.0.1
>           dhcp-range server=nat,start-address=10.1.0.100,end-address=10.1.0.200
>           gateway 10.1.0.1
>           snat 1
>
>
> Then apply the SDN configuration:
>
>   pvesh set /cluster/sdn
>
> You need to apply the SDN configuration once after adding the dhcp-range lines
> to the configuration, since the running configuration is used for managing
> DHCP. It will not work otherwise!
>
> For testing it can be helpful to monitor the following files (e.g. with watch)
> to find out what is happening:
>   * /etc/dnsmasq.d/<dhcp_id>/ethers (on each node)
>   * /etc/pve/priv/ipam.db
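>
> For example (assuming the dhcp id `nat` from the config above):
>
>   watch -n1 cat /etc/dnsmasq.d/nat/ethers /etc/pve/priv/ipam.db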
>
> Changes from v1 -> v2:
>   * added hooks for handling DHCP when starting / stopping / .. VMs and CTs
>   * Get an IP from IPAM and register that IP in the DHCP server
>     (pve only for now)
>   * remove lease-time, since it is now infinite and managed by the VM lifecycle
>   * add hooks for setting & deleting DHCP mappings to DHCP plugins
>   * modified interface of the abstract class to reflect new requirements
>   * added helpers in existing SDN classes
>   * simplified DHCP configuration settings
>
>
>
> pve-cluster:
>
> Stefan Hanreich (1):
>   cluster files: add dhcp.cfg
>
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
>
>
> pve-network:
>
> Stefan Hanreich (6):
>   subnets: vnets: preparations for DHCP plugins
>   dhcp: add abstract class for DHCP plugins
>   dhcp: subnet: add DHCP options to subnet configuration
>   dhcp: add DHCP plugin for dnsmasq
>   ipam: Add helper methods for DHCP to PVE IPAM
>   dhcp: regenerate config for DHCP servers on reload
>
>  debian/control                         |   1 +
>  src/PVE/Network/SDN.pm                 |  11 +-
>  src/PVE/Network/SDN/Dhcp.pm            | 192 +++++++++++++++++++++++++
>  src/PVE/Network/SDN/Dhcp/Dnsmasq.pm    | 186 ++++++++++++++++++++++++
>  src/PVE/Network/SDN/Dhcp/Makefile      |   8 ++
>  src/PVE/Network/SDN/Dhcp/Plugin.pm     |  83 +++++++++++
>  src/PVE/Network/SDN/Ipams/PVEPlugin.pm |  64 +++++++++
>  src/PVE/Network/SDN/Makefile           |   3 +-
>  src/PVE/Network/SDN/SubnetPlugin.pm    |  32 +++++
>  src/PVE/Network/SDN/Subnets.pm         |  43 ++++--
>  src/PVE/Network/SDN/Vnets.pm           |  27 ++--
>  11 files changed, 622 insertions(+), 28 deletions(-)
>  create mode 100644 src/PVE/Network/SDN/Dhcp.pm
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm
>
>
> pve-manager:
>
> Stefan Hanreich (1):
>   sdn: regenerate DHCP config on reload
>
>  PVE/API2/Network.pm | 1 +
>  1 file changed, 1 insertion(+)
>
>
> qemu-server:
>
> Stefan Hanreich (1):
>   sdn: dhcp: add DHCP setup to vm-network-scripts
>
>  PVE/QemuServer.pm                 | 14 ++++++++++++++
>  vm-network-scripts/pve-bridge     |  3 +++
>  vm-network-scripts/pve-bridgedown | 19 +++++++++++++++++++
>  3 files changed, 36 insertions(+)
>
>
> pve-container:
>
> Stefan Hanreich (1):
>   sdn: dhcp: setup DHCP mappings in LXC hooks
>
>  src/PVE/LXC.pm            | 10 ++++++++++
>  src/lxc-pve-poststop-hook |  1 +
>  src/lxc-pve-prestart-hook |  9 +++++++++
>  3 files changed, 20 insertions(+)
>
>
> Summary over all repositories:
>   20 files changed, 681 insertions(+), 28 deletions(-)
>
> --
> murpp v0.4.0
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-23 10:27 [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN Stefan Lendl
@ 2023-10-23 12:52 ` Stefan Lendl
  2023-10-26 12:49 ` DERUMIER, Alexandre
  2023-10-26 12:53 ` DERUMIER, Alexandre
  2 siblings, 0 replies; 16+ messages in thread
From: Stefan Lendl @ 2023-10-23 12:52 UTC (permalink / raw)
  To: pve-devel

PS: Sorry for double posting. This mail was sent with invalid
In-Reply-To and References headers.

I sent it again with the correct headers after I managed to correctly
set up my mail client.

Stefan Lendl <s.lendl@proxmox.com> writes:

> [...]




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-23 10:27 [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN Stefan Lendl
  2023-10-23 12:52 ` Stefan Lendl
@ 2023-10-26 12:49 ` DERUMIER, Alexandre
  2023-10-26 12:53 ` DERUMIER, Alexandre
  2 siblings, 0 replies; 16+ messages in thread
From: DERUMIER, Alexandre @ 2023-10-26 12:49 UTC (permalink / raw)
  To: s.lendl; +Cc: pve-devel

Hi Stefan (Lendl),

I totally agree with you: we should have persistent reservations,
done at VM create/NIC plug, NIC delete, and VM delete.

At least for my usage, with multiple clusters in different datacenters,
I really can't afford to call the IPAM API at each start (for
scalability, and for safety in case the IPAM is down).


This also allows simply writing reservations to the dnsmasq file
without any need to restart it. (AFAIK, OpenStack uses dnsmasq like
this too.)
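
For example, a new reservation could then be picked up without a restart
(a sketch; it assumes the per-zone file from the cover letter is passed
via --dhcp-hostsfile, which dnsmasq re-reads on SIGHUP):

  echo "bc:24:11:01:02:03,10.1.0.102" >> /etc/dnsmasq.d/nat/ethers
  pkill -HUP dnsmasq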


I'm not sure true dynamic ephemeral IPs, changing at each VM
stop/start, are interesting for server VM usage. (Maybe for desktop
VMs where you share a small pool of IPs, but I personally don't know of
any users running Proxmox VE like this.)


See my proposal here (it handles both ephemeral && reserved, but it's
even easier with only reserved):

https://lists.proxmox.com/pipermail/pve-devel/2023-September/059169.html




"
I think we could implement the ipam calls like this:


create vm or add a new nic  --> 
-----------------------------
qm create ... -net0
bridge=vnet,....,ip=(auto|192.168.0.1|dynamic),ip6=(..)


auto: search for a free ip in ipam and write the ip address in the
net0 ...,ip= field

192.168.0.1: check if the ip is free in ipam && register it in ipam;
write the ip in the ip field.


dynamic: write "ephemeral" in net0: ....,ip=ephemeral (this is a
dynamic ip registered at vm start and released at vm stop)



vm start
---------
- if ip=ephemeral, find && register a free ip in ipam, write it in vm
net0: ...,ip=192.168.0.10[E] .   (maybe with a special flag [E] to
indicate it's ephemeral)
- read ip from vm config && inject in dhcp


vm_stop
-------
if ip is ephemeral (netX: ip=192.168.0.10[E]),  delete ip from ipam,
set ip=ephemeral in vm config


vm_destroy or nic remove/unplug
-------------------------
if netX: ...,ip=192.168.0.10   ,  remove ip from ipam



nic update when vm is running:
------------------------------
if an ip is defined (netX: ip=192.168.0.10), we don't allow bridge or
ip changes, as the vm is not notified about these changes and still
uses the old ip.

We can allow nic hot-unplug && hotplug. (guest os will remove the ip on
nic removal, and will call dhcp again on nic hotplug)




nic hotplug with ip=auto:
-------------------------

--> add nic in pending state ----> find ip in ipam && write it in
pending ---> do the hotplug in qemu.

We need to handle the config revert to remove the ip from ipam if the
nic hotplug is blocked in a pending state (I have only seen this when
the guest os doesn't have the pci_hotplug module loaded, but it's
better to be careful).

"


>> I am currently working on the SDN feature.  This is an initial review of
>> the patch series and I am trying to make a strong case against ephemeral
>> DHCP IP reservation.
>>
>> The current state of the patch series invokes the IPAM on every VM/CT
>> start/stop to add or remove the IP from the IPAM.
>> This triggers the dnsmasq config generation on the specific host with
>> only the MAC/IP mapping of that particular host.





[...]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-23 10:27 [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN Stefan Lendl
  2023-10-23 12:52 ` Stefan Lendl
  2023-10-26 12:49 ` DERUMIER, Alexandre
@ 2023-10-26 12:53 ` DERUMIER, Alexandre
  2 siblings, 0 replies; 16+ messages in thread
From: DERUMIER, Alexandre @ 2023-10-26 12:53 UTC (permalink / raw)
  To: pve-devel

Also,

about dhcp-ranges in this patch series: I think it would be great to
make them optional, as some external IPAMs can't support them.

(Netbox seems to support it, but I haven't looked at the next_free API
yet; phpIPAM doesn't seem to support it.)

A lot of external IPAM tools only search for a free IP in the full
subnet.

So maybe something like: no dhcp-range = any IP from the subnet.
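
For example (hypothetical; the same subnets.cfg format as in the cover
letter, just without a dhcp-range line):

  subnet: DHCPNAT-10.1.0.0-16
          vnet dhcpnat
          dhcp-dns-server 10.1.0.1
          gateway 10.1.0.1
          snat 1

With no dhcp-range given, the IPAM would be free to hand out any unused
address in 10.1.0.0/16.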

-------- Original message --------
From: Stefan Lendl <s.lendl@proxmox.com>
Reply-To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
Date: 23/10/2023 12:27:06


[...]


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-27 12:53   ` Stefan Lendl
@ 2023-10-27 13:37     ` DERUMIER, Alexandre
  0 siblings, 0 replies; 16+ messages in thread
From: DERUMIER, Alexandre @ 2023-10-27 13:37 UTC (permalink / raw)
  To: s.lendl; +Cc: pve-devel

-------- Original message --------
From: Stefan Lendl <s.lendl@proxmox.com>
To: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
Cc: pve-devel@lists.proxmox.com <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
Date: 27/10/2023 14:53:25


> Hi Alexandre, I am proposing a slightly different view.
>
> I think it is better to keep all IPs managed by the IPAM in the IPAM,
> and have the VM only configured for DHCP.

Yes, I'm thinking exactly the same!

I had tried to implement IPAM with static IPs in the VM configuration
(+ IPAM) two years ago, and there are a lot of corner cases.

> I would implement the 4 mentioned events (vNIC create, destroy, start,
> stop) in the SDN module and limit interactions between VM configs and
> the SDN module to these events.
>
> On NIC create: it calls the SDN::nic_join_vnet($bridge, $mac)
> function that handles IPAM registration and, if necessary, triggers
> generating the DHCP config, and so on. Same approach for the other SDN
> related events.
>
> All the logic is implemented in the SDN module. This reduces coupling
> between VM logic and SDN logic.

Sounds great :)

"DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com> writes:

> [...]



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-23 12:40 ` Stefan Lendl
  2023-10-27  7:39   ` Thomas Lamprecht
@ 2023-10-27 12:53   ` Stefan Lendl
  2023-10-27 13:37     ` DERUMIER, Alexandre
  1 sibling, 1 reply; 16+ messages in thread
From: Stefan Lendl @ 2023-10-27 12:53 UTC (permalink / raw)
  To: DERUMIER, Alexandre; +Cc: pve-devel


Hi Alexandre, I am proposing a slightly different view.

I think it is better to keep all IPs managed by the IPAM in the IPAM,
and have the VM only configured for DHCP.

I would implement the 4 mentioned events (vNIC create, destroy, start,
stop) in the SDN module and limit interactions between VM configs and
the SDN module to these events.

On NIC create: it calls the SDN::nic_join_vnet($bridge, $mac)
function that handles IPAM registration and, if necessary, triggers
generating the DHCP config, and so on. Same approach for the other SDN
related events.

All the logic is implemented in the SDN module. This reduces coupling
between VM logic and SDN logic.
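
A minimal sketch of such a call site on the qemu-server side (everything
except the nic_join_vnet name from above is an illustrative assumption):

  # when a NIC on a DHCP-enabled VNet is added to the VM config (sketch)
  use PVE::Network::SDN;                      # assumed module path

  my $net = PVE::QemuServer::parse_net($conf->{net0});
  PVE::Network::SDN::nic_join_vnet($net->{bridge}, $net->{macaddr});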

"DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com> writes:

> Hi Stefan (Lendl),
>
> I'm totally agreed with you, we should have persistent reservation,
> at vm create/nic plug, nic delete, vm delete.
>
> At least , for my usage with multiple cluster on different datacenters,
> I really can wait to call ipam to api at each start (for scalability or
> for security if ipam is down)
>
>
> This also allow to simply do reservations in dnsmasq file without any
> need to restart it. (AFAIK, openstack is using dnsmasq like this too)
>
>
> I'm not sure if true dynamic ephemral ip , changing at each vm
> stop/start is interesting for a server vm usage. (maybe for desktop
> vmwhere you share a small pool of ip, but I personnaly don't known any
> proxmox users using proxmox ve for this)
>
>
> see my proposal here (it handles both ephemeral && reserved, but it's
> even easier with only reserved):
>
> https://lists.proxmox.com/pipermail/pve-devel/2023-September/059169.html
>
>
>
>
> "
> I think we could implement the ipam calls like this:
>
>
> create vm or add a new nic  -->
> -----------------------------
> qm create ... -net0
> bridge=vnet,....,ip=(auto|192.168.0.1|dynamic),ip6=(..)
>
>
> auto: search for a free ip in ipam, then write the ip address into the
> net0 ...,ip= field
>
> 192.168.0.1: check if the ip is free in ipam && register it in ipam,
> then write the ip into the ip field.
>
>
> dynamic: write "ephemeral" in net0: ....,ip=ephemeral (this is a
> dynamic ip registered at vm start, and released at vm stop)
>
>
>
> vm start
> ---------
> - if ip=ephemeral, find && register a free ip in ipam, then write it in
> the vm net0: ...,ip=192.168.0.10[E] (maybe with a special flag [E] to
> indicate it's ephemeral)
> - read ip from vm config && inject in dhcp
>
>
> vm_stop
> -------
> if ip is ephemeral (netX: ip=192.168.0.10[E]),  delete ip from ipam,
> set ip=ephemeral in vm config
>
>
> vm_destroy or nic remove/unplug
> -------------------------
> if netX: ...,ip=192.168.0.10, remove the ip from ipam
>
>
>
> nic update when vm is running:
> ------------------------------
> if an ip is defined (netX: ip=192.168.0.10), we don't allow bridge or
> ip changes, as the vm is not notified about these changes and would
> still use the old ip.
>
> We can allow nic hot-unplug && hotplug. (guest os will remove the ip on
> nic removal, and will call dhcp again on nic hotplug)
>
>
>
>
> nic hotplug with ip=auto:
> -------------------------
>
> --> add nic in pending state ----> find ip in ipam && write it in
> pending ---> do the hotplug in qemu.
>
> We need to handle the config revert to remove the ip from ipam if the
> nic hotplug gets stuck in pending state (I have only ever seen this
> when the os doesn't have the pci_hotplug module loaded, but it's better
> to be careful).
>
> "
>
>
>>>I am currently working on the SDN feature.  This is an initial review
>>>of
>>>the patch series and I am trying to make a strong case against
>>>ephemeral
>>>DHCP IP reservation.
>>>
>>>The current state of the patch series invokes the IPAM on every VM/CT
>>>start/stop to add or remove the IP from the IPAM.
>>>This triggers the dnsmasq config generation on the specific host with
>>>only the MAC/IP mapping of that particular host.
>
>
>
>
>
> From reading the discussion of the v1 patch series I understand this
> approach tries to implement the ephemeral IP reservation strategy. From
> off-list conversations with Stefan Hanreich, I agree that having
> ephemeral IP reservation coordinated by the IPAM requires us to
> re-implement DHCP functionality in the IPAM and heavily rely on syncing
> between the different services.
>
> To maintain reliable sync we need to hook into many different places
> where the IPAM needs to be queried.  Any issues with the implementation
> may lead to IPAM and DHCP local config state running out of sync,
> causing network issues such as duplicate IPs.
>
> Furthermore, every interaction with the IPAM requires a cluster-wide
> lock on the IPAM. Having a central cluster-wide lock on every VM
> start/stop/migrate will significantly limit parallel operations.  Even
> starting two VMs in parallel will be limited by this central lock. At
> boot, trying to start many VMs (ideally as many in parallel as possible)
> is limited by the central IPAM lock even further.
>
> I argue that we shall not support ephemeral IPs at all.
> The alternative is to make all IPAM reservations persistent.
>
> Using persistent IPs only reduces the interactions of VM/CTs with the
> IPAM to a minimum of NIC joining a subnet and NIC leaving a subnet. I
> am
> deliberately not referring to VMs because a VM may be part of multiple
> VNets or even multiple times in the same VNet (regardless if that is
> sensible).
>
> Cases the IPAM needs to be involved:
>
> - NIC with DHCP enabled VNet is added to VM config
> - NIC with DHCP enabled VNet is removed from VM config
> - NIC is assigned to another Bridge
>   can be treated as individual leave + join events
>
> Cases that are explicitly not covered but may be added if desired:
>
> - Manually assign an IP address on a NIC
>   will not be automatically visible in the IPAM
> - Manually change the MAC on a NIC
>   don't do that > you are on your own.
>   Not handled > change in IPAM manually
>
> Once an IP is reserved via IPAM, the dnsmasq config can be generated
> stateless and idempotent from the pve IPAM and is identical on all
> nodes
> regardless if a VM/CT actually resides on that node or is running or
> stopped.  This is especially useful for VM migration because the IP
> stays consistent without special consideration.
>
> Snapshot/revert, backup/restore, suspend/hibernate/resume cases are
> automatically covered because the IP will already be reserved for that
> MAC.
>
> If the admin wants to change the IP of a VM, this can be done via the
> IPAM API/UI which will have to be implemented separately.
>
> A limitation of this approach vs dynamic IP reservation is that the IP
> range on the subnet needs to be large enough to hold all IPs of all,
> even stopped, VMs in that subnet. This is in contrast to default DHCP
> functionality where only the number of actively running VMs is limited.
> It should be enough to mention this in the docs.
>
> I will further review the code and try to implement the aforementioned
> approach.
>
> Best regards,
> Stefan Lendl
>
> Stefan Hanreich <s.hanreich@proxmox.com> writes:
>
>> This is a WIP patch series, since I will be gone for 3 weeks and
>> wanted to
>> share my current progress with the DHCP support for SDN.
>>
>> This patch series adds support for automatically deploying dnsmasq as
>> a DHCP
>> server to a simple SDN Zone.
>>
>> While certainly not 100% polished on some ends (looking at restarting
>> systemd
>> services in particular), the general idea behind the mechanism shows.
>> I wanted
>> to gather some feedback on how I approached designing the plugins and
>> the
>> config regeneration process before committing to this design by
>> creating an API
>> and UI around it.
>>
>> You need to install dnsmasq (and disable it afterwards):
>>
>>   apt install dnsmasq && systemctl disable --now dnsmasq
>>
>>
>> You can use the following example configuration for deploying a DHCP
>> server in
>> a SDN subnet:
>>
>> /etc/pve/sdn/dhcp.cfg:
>>
>>   dnsmasq: nat
>>
>>
>> /etc/pve/sdn/zones.cfg:
>>
>>   simple: DHCPNAT
>>           ipam pve
>>
>>
>> /etc/pve/sdn/vnets.cfg:
>>
>>   vnet: dhcpnat
>>           zone DHCPNAT
>>
>>
>> /etc/pve/sdn/subnets.cfg:
>>
>>   subnet: DHCPNAT-10.1.0.0-16
>>           vnet dhcpnat
>>           dhcp-dns-server 10.1.0.1
>>           dhcp-range server=nat,start-address=10.1.0.100,end-
>> address=10.1.0.200
>>           gateway 10.1.0.1
>>           snat 1
>>
>>
>> Then apply the SDN configuration:
>>
>>   pvesh set /cluster/sdn
>>
>> You need to apply the SDN configuration once after adding the dhcp-
>> range lines
>> to the configuration, since the running configuration is used for
>> managing
>> DHCP. It will not work otherwise!
>>
>> For testing it can be helpful to monitor the following files (e.g.
>> with watch)
>> to find out what is happening
>>   * /etc/dnsmasq.d/<dhcp_id>/ethers (on each node)
>>   * /etc/pve/priv/ipam.db
>>
>> Changes from v1 -> v2:
>>   * added hooks for handling DHCP when starting / stopping / .. VMs
>> and CTs
>>   * Get an IP from IPAM and register that IP in the DHCP server
>>     (pve only for now)
>>   * remove lease-time, since it is now infinite and managed by the VM
>> lifecycle
>>   * add hooks for setting & deleting DHCP mappings to DHCP plugins
>>   * modified interface of the abstract class to reflect new
>> requirements
>>   * added helpers in existing SDN classes
>>   * simplified DHCP configuration settings
>>
>>
>>
>> pve-cluster:
>>
>> Stefan Hanreich (1):
>>   cluster files: add dhcp.cfg
>>
>>  src/PVE/Cluster.pm  | 1 +
>>  src/pmxcfs/status.c | 1 +
>>  2 files changed, 2 insertions(+)
>>
>>
>> pve-network:
>>
>> Stefan Hanreich (6):
>>   subnets: vnets: preparations for DHCP plugins
>>   dhcp: add abstract class for DHCP plugins
>>   dhcp: subnet: add DHCP options to subnet configuration
>>   dhcp: add DHCP plugin for dnsmasq
>>   ipam: Add helper methods for DHCP to PVE IPAM
>>   dhcp: regenerate config for DHCP servers on reload
>>
>>  debian/control                         |   1 +
>>  src/PVE/Network/SDN.pm                 |  11 +-
>>  src/PVE/Network/SDN/Dhcp.pm            | 192
>> +++++++++++++++++++++++++
>>  src/PVE/Network/SDN/Dhcp/Dnsmasq.pm    | 186
>> ++++++++++++++++++++++++
>>  src/PVE/Network/SDN/Dhcp/Makefile      |   8 ++
>>  src/PVE/Network/SDN/Dhcp/Plugin.pm     |  83 +++++++++++
>>  src/PVE/Network/SDN/Ipams/PVEPlugin.pm |  64 +++++++++
>>  src/PVE/Network/SDN/Makefile           |   3 +-
>>  src/PVE/Network/SDN/SubnetPlugin.pm    |  32 +++++
>>  src/PVE/Network/SDN/Subnets.pm         |  43 ++++--
>>  src/PVE/Network/SDN/Vnets.pm           |  27 ++--
>>  11 files changed, 622 insertions(+), 28 deletions(-)
>>  create mode 100644 src/PVE/Network/SDN/Dhcp.pm
>>  create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
>>  create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
>>  create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm
>>
>>
>> pve-manager:
>>
>> Stefan Hanreich (1):
>>   sdn: regenerate DHCP config on reload
>>
>>  PVE/API2/Network.pm | 1 +
>>  1 file changed, 1 insertion(+)
>>
>>
>> qemu-server:
>>
>> Stefan Hanreich (1):
>>   sdn: dhcp: add DHCP setup to vm-network-scripts
>>
>>  PVE/QemuServer.pm                 | 14 ++++++++++++++
>>  vm-network-scripts/pve-bridge     |  3 +++
>>  vm-network-scripts/pve-bridgedown | 19 +++++++++++++++++++
>>  3 files changed, 36 insertions(+)
>>
>>
>> pve-container:
>>
>> Stefan Hanreich (1):
>>   sdn: dhcp: setup DHCP mappings in LXC hooks
>>
>>  src/PVE/LXC.pm            | 10 ++++++++++
>>  src/lxc-pve-poststop-hook |  1 +
>>  src/lxc-pve-prestart-hook |  9 +++++++++
>>  3 files changed, 20 insertions(+)
>>
>>
>> Summary over all repositories:
>>   20 files changed, 681 insertions(+), 28 deletions(-)
>>
>> --
>> murpp v0.4.0
>>
>>
>> _______________________________________________
>> pve-devel mailing list
>> pve-devel@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-27  7:39   ` Thomas Lamprecht
  2023-10-27 12:26     ` Stefan Lendl
@ 2023-10-27 12:36     ` DERUMIER, Alexandre
  1 sibling, 0 replies; 16+ messages in thread
From: DERUMIER, Alexandre @ 2023-10-27 12:36 UTC (permalink / raw)
  To: pve-devel, s.lendl, t.lamprecht

> 
> Furthermore, every interaction with the IPAM requires a cluster-wide
> lock on the IPAM. Having a central cluster-wide lock on every VM
> start/stop/migrate will significantly limit parallel operations. 
> Even
> starting two VMs in parallel will be limited by this central lock. At
> boot, trying to start many VMs (ideally as many in parallel as
> possible)
> is limited by the central IPAM lock even further.

>>Cluster-wide locks are relatively cheap, especially if one avoids
>>having a long critical section, i.e., query the IPAM while still
>>unlocked, then read and update the state under the lock; if the newly
>>received IP is already taken in there, simply give up the lock again
>>and repeat.

>>We also have a cluster-wide lock for starting HA guests, to set the
>>wanted ha-resource state; that is no issue at all, you can start/stop
>>many orders of magnitude more VMs than any HW/Storage could cope
>>with.


You also need to think about external ipam, where it may take some
seconds to find an available ip and allocate it (it depends on the size
of the subnet; there could also be dns updates, ...).

So it'll really limit the parallelism of vm start.



(Personally, if we have the choice between reserved at vm/nic create &&
ephemeral at vm start, it's ok for me.)






> Once an IP is reserved via IPAM, the dnsmasq config can be generated
> stateless and idempotent from the pve IPAM and is identical on all
> nodes
> regardless if a VM/CT actually resides on that node or is running or
> stopped.  This is especially useful for VM migration because the IP
> stays consistent without special consideration.
>>
>>That should be orthogonal to the feature set, if we have all the info
>>saved somewhere else

>>But this also speaks against having it in the VM config, as that
>>would
>>mean that every node needs to parse every guest's config
>>periodically,
>>which is way worse than some cluster lock and breaks with our base
>>axiom that guests are owned by their current node, and only by that,
>>and a node should not really alter behavior dependent on some
>>"foreign"
>>guest.

I think it is really much simpler to add the ip to the local dnsmasq at
vm start:

  dnsmasq --dhcp-hostsfile=/var/lib/reservation.txt

  # dhcp-hostsfile entries use the dhcp-host format: mac,ip
  echo "$mac,$ip" >> /var/lib/reservation.txt
  # dnsmasq re-reads the dhcp-hostsfile on SIGHUP
  kill -HUP $(pidof dnsmasq)



For persistent ips, we just need to search for the previously allocated
ip/mac in the ipam, then write the reservation to dnsmasq and start the
vm.

For ephemeral ips, we need to find && allocate a free ip in the ipam,
then write the ip/mac to dnsmasq and start the vm.




And for external ipam, I had proposed to use the local ipam as a read
cache.

When allocating a new ip (persistent or ephemeral):
   search whether the mac/ip exists in the external ipam
           true:  write it to the local pve ipam cache
           false: allocate a new free ip in the external ipam && write
                  it to the local pve ipam cache

This way, for persistent ips, we don't care if the external ipam is
down at vm start, and we can also reuse the local ipam ip list for the
firewall or other stuff, without needing to call the external ipam api.
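
A rough sketch of that lookup order (all helper names here are
hypothetical, for illustration only):

  # Read-through cache: local pve ipam in front of the external
  # ipam (netbox/phpipam/...).
  sub find_or_alloc_ip {
      my ($subnet, $mac) = @_;

      # fast path: the local pve ipam acts as a read cache
      my $ip = local_ipam_lookup($subnet, $mac);
      return $ip if defined($ip);

      # slow path: ask the external ipam, allocating if nothing exists yet
      $ip = external_ipam_lookup($subnet, $mac);
      $ip //= external_ipam_alloc($subnet, $mac);

      local_ipam_store($subnet, $mac, $ip);    # populate the cache
      return $ip;
  }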




 




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-27  7:39   ` Thomas Lamprecht
@ 2023-10-27 12:26     ` Stefan Lendl
  2023-10-27 12:36     ` DERUMIER, Alexandre
  1 sibling, 0 replies; 16+ messages in thread
From: Stefan Lendl @ 2023-10-27 12:26 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

Thomas Lamprecht <t.lamprecht@proxmox.com> writes:

> Am 23/10/2023 um 14:40 schrieb Stefan Lendl:
>> I am currently working on the SDN feature.  This is an initial review of
>> the patch series and I am trying to make a strong case against ephemeral
>> DHCP IP reservation.
>
> Stefan Hanreich's reply to the cover letter already mentions upserts, those
> will avoid basically all problems while allowing for some dynamic changes.
>

I totally agree with upserts and my patches add this functionality.

>> The current state of the patch series invokes the IPAM on every VM/CT
>> start/stop to add or remove the IP from the IPAM.
>> This triggers the dnsmasq config generation on the specific host with
>> only the MAC/IP mapping of that particular host.
>>
>> From reading the discussion of the v1 patch series I understand this
>> approach tries to implement the ephemeral IP reservation strategy. From
>> off-list conversations with Stefan Hanreich, I agree that having
>> ephemeral IP reservation coordinated by the IPAM requires us to
>> re-implement DHCP functionality in the IPAM and heavily rely on syncing
>> between the different services.
>>
>> To maintain reliable sync we need to hook into many different places
>> where the IPAM needs to be queried.  Any issues with the implementation
>> may lead to IPAM and DHCP local config state running out of sync,
>> causing network issues such as duplicate IPs.
>
> The same is true for permanent reservations, wherever that reservation is
> saved needs to be in sync with IPAM, e.g., also on backup restore (into a
> new env), if subnets change their configured CIDRs, ...
>

Yes, agreed, but there are arguably fewer states and situations that
need to be synced.

The current implementation had a different state per node and depended
on the online/offline state of the guest.

It is currently not allowed to change the CIDR of a subnet.

>>
>> Furthermore, every interaction with the IPAM requires a cluster-wide
>> lock on the IPAM. Having a central cluster-wide lock on every VM
>> start/stop/migrate will significantly limit parallel operations.  Even
>> starting two VMs in parallel will be limited by this central lock. At
>> boot, trying to start many VMs (ideally as many in parallel as possible)
>> is limited by the central IPAM lock even further.
>
> Cluster-wide locks are relatively cheap, especially if one avoids having
> a long critical section, i.e., query the IPAM while still unlocked, then
> read and update the state under the lock; if the newly received IP is
> already taken in there, simply give up the lock again and repeat.
>
> We also have a cluster-wide lock for starting HA guests, to set the
> wanted ha-resource state; that is no issue at all, you can start/stop
> many orders of magnitude more VMs than any HW/Storage could cope with.
>
>>
>> I argue that we shall not support ephemeral IPs at all.
>> The alternative is to make all IPAM reservations persistent.
>
>
>>
>> Using persistent IPs only reduces the interactions of VM/CTs with the
>> IPAM to a minimum of NIC joining a subnet and NIC leaving a subnet. I am
>> deliberately not referring to VMs because a VM may be part of multiple
>> VNets or even multiple times in the same VNet (regardless if that is
>> sensible).
>
> Yeah, talking about vNICs / veth's is the better term here, guests are
> only indirectly relevant.
>
>>
>> Cases the IPAM needs to be involved:
>>
>> - NIC with DHCP enabled VNet is added to VM config
>> - NIC with DHCP enabled VNet is removed from VM config
>> - NIC is assigned to another Bridge
>>   can be treated as individual leave + join events
>
> and:
>
> - subnet config is changed
> - vNIC changes from SDN-DHCP managed to manual, or vice versa
>   Albeit that can almost be treated like vNet leave/join though
>
>
>> Cases that are explicitly not covered but may be added if desired:
>>
>> - Manually assign an IP address on a NIC
>>   will not be automatically visible in the IPAM
>
> This sounds like you want to save the state in the VM config, which I'm
> rather skeptical about and would try hard to avoid. We also would need
> to differentiate between bridges that are part of a DHCP-managed SDN
> and others, as otherwise a user could set some IP but nothing would
> happen.
>

I am sorry, my explanation was not clear here. I do not want to store
the IP inside the VM config.  I agree that this would not be ideal.  If
a user configures an IP from inside the VM, we have no way of tracking
that IP.

For now, every added vNIC gets an IP from the IPAM, and if the guest is
configured to use DHCP, it will get this IP from the DHCP server.

If the user decides to manually configure the IP, he will have to
reserve it in the IPAM, and mark the IP as "manual".
This will prevent the IPAM from allocating the IP again and keep the
IP/MAC mapping even if the VM is destroyed.

This is not implemented yet, but sketched out with Mira off-list.

>> - Manually change the MAC on a NIC
>>   don't do that > you are on your own.
>
> FWIW, a clone is such a change, and we have to support that, otherwise
> the MAC field needs to get some warning hints or even become read-only
> in the UI.
>
>>   Not handled > change in IPAM manually
>>
>> Once an IP is reserved via IPAM, the dnsmasq config can be generated
>> stateless and idempotent from the pve IPAM and is identical on all nodes
>> regardless if a VM/CT actually resides on that node or is running or
>> stopped.  This is especially useful for VM migration because the IP
>> stays consistent without special consideration.
>
> That should be orthogonal to the feature set, if we have all the info
> saved somewhere else
>
> But this also speaks against having it in the VM config, as that would
> mean that every node needs to parse every guest's config periodically,
> which is way worse than some cluster lock and breaks with our base
> axiom that guests are owned by their current node, and only by that,
> and a node should not really alter behavior dependent on some "foreign"
> guest.
>
>>
>> Snapshot/revert, backup/restore, suspend/hibernate/resume cases are
>> automatically covered because the IP will already be reserved for that
>> MAC.
>
> Not really, restore to another setup is broken, one could resume the
> VM after having changed CIDRs of a subnet, making that broken too, ...
>
>>
>> If the admin wants to change the IP of a VM, this can be done via the
>> IPAM API/UI which will have to be implemented separately.
>
> Providing overrides can be fine, but IMO that should all still be in
> the SDN state, not a per-VM one, and ideally use a common API.
>
>
>> A limitation of this approach vs dynamic IP reservation is that the IP
>> range on the subnet needs to be large enough to hold all IPs of all,
>> even stopped, VMs in that subnet. This is in contrast to default DHCP
>> functionality where only the number of actively running VMs is limited.
>> It should be enough to mention this in the docs.
>
> In production setups it should not matter _that_ much, but it might
> be a bit of a PITA if one has a few "archived" VMs or the like, but
> that alone would
>
>>
>> I will further review the code and try to implement the aforementioned
>> approach.
>
> You can naturally experiment, but I'd also try the upsert proposal from
> Stefan H., as IMO that sounds like a good balance.




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-23 12:40 ` Stefan Lendl
@ 2023-10-27  7:39   ` Thomas Lamprecht
  2023-10-27 12:26     ` Stefan Lendl
  2023-10-27 12:36     ` DERUMIER, Alexandre
  2023-10-27 12:53   ` Stefan Lendl
  1 sibling, 2 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2023-10-27  7:39 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stefan Lendl

Am 23/10/2023 um 14:40 schrieb Stefan Lendl:
> I am currently working on the SDN feature.  This is an initial review of
> the patch series and I am trying to make a strong case against ephemeral
> DHCP IP reservation.

Stefan Hanreich's reply to the cover letter already mentions upserts, those
will avoid basically all problems while allowing for some dynamic changes.

> The current state of the patch series invokes the IPAM on every VM/CT
> start/stop to add or remove the IP from the IPAM.
> This triggers the dnsmasq config generation on the specific host with
> only the MAC/IP mapping of that particular host.
> 
> From reading the discussion of the v1 patch series I understand this
> approach tries to implement the ephemeral IP reservation strategy. From
> off-list conversations with Stefan Hanreich, I agree that having
> ephemeral IP reservation coordinated by the IPAM requires us to
> re-implement DHCP functionality in the IPAM and heavily rely on syncing
> between the different services.
> 
> To maintain reliable sync we need to hook into many different places
> where the IPAM needs to be queried.  Any issues with the implementation
> may lead to IPAM and DHCP local config state running out of sync,
> causing network issues such as duplicate IPs.

The same is true for permanent reservations, wherever that reservation is
saved needs to be in sync with IPAM, e.g., also on backup restore (into a
new env), if subnets change their configured CIDRs, ...

> 
> Furthermore, every interaction with the IPAM requires a cluster-wide
> lock on the IPAM. Having a central cluster-wide lock on every VM
> start/stop/migrate will significantly limit parallel operations.  Even
> starting two VMs in parallel will be limited by this central lock. At
> boot, trying to start many VMs (ideally as many in parallel as possible)
> is limited by the central IPAM lock even further.

Cluster-wide locks are relatively cheap, especially if one avoids having
a long critical section, i.e., query the IPAM while still unlocked, then
read and update the state under the lock; if the newly received IP is
already taken in there, simply give up the lock again and repeat.
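
In pseudo-Perl, that short critical section could look roughly like this
(a sketch only; PVE::Cluster::cfs_lock_file exists, but the lock path
and the ipam_* helpers are assumptions for illustration):

  # Query unlocked, commit under the lock, retry on conflict.
  sub alloc_ip_short_lock {
      my ($subnet, $mac) = @_;

      for (1 .. 10) {    # bounded retries
          my $candidate = ipam_find_free_ip($subnet);    # cheap, unlocked read

          my $got = PVE::Cluster::cfs_lock_file('priv/ipam.db', 10, sub {
              my $state = ipam_read_state();
              return undef if $state->{$candidate};      # lost the race -> retry
              $state->{$candidate} = $mac;               # commit the reservation
              ipam_write_state($state);
              return $candidate;
          });

          return $got if defined($got);
      }
      die "unable to allocate an IP for $mac\n";
  }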

We also have a cluster-wide lock for starting HA guests, to set the
wanted ha-resource state; that is no issue at all, you can start/stop
many orders of magnitude more VMs than any HW/Storage could cope with.

> 
> I argue that we shall not support ephemeral IPs at all.
> The alternative is to make all IPAM reservations persistent.


> 
> Using persistent IPs only reduces the interactions of VM/CTs with the
> IPAM to a minimum of NIC joining a subnet and NIC leaving a subnet. I am
> deliberately not referring to VMs because a VM may be part of multiple
> VNets or even multiple times in the same VNet (regardless if that is
> sensible).

Yeah, talking about vNICs / veth's is the better term here, guests are
only indirectly relevant.

> 
> Cases the IPAM needs to be involved:
> 
> - NIC with DHCP enabled VNet is added to VM config
> - NIC with DHCP enabled VNet is removed from VM config
> - NIC is assigned to another Bridge
>   can be treated as individual leave + join events

and:

- subnet config is changed
- vNIC changes from SDN-DHCP managed to manual, or vice versa
  Albeit that can almost be treated like vNet leave/join though

 
> Cases that are explicitly not covered but may be added if desired:
> 
> - Manually assign an IP address on a NIC
>   will not be automatically visible in the IPAM

This sounds like you want to save the state in the VM config, which I'm
rather skeptical about and would try hard to avoid. We also would need
to differentiate between bridges that are part of a DHCP-managed SDN and
others, as otherwise a user could set some IP but nothing would happen.

> - Manually change the MAC on a NIC
>   don't do that > you are on your own.

FWIW, a clone is such a change, and we have to support that, otherwise
the MAC field needs to get some warning hints or even become read-only
in the UI.

>   Not handled > change in IPAM manually
> 
> Once an IP is reserved via IPAM, the dnsmasq config can be generated
> stateless and idempotent from the pve IPAM and is identical on all nodes
> regardless if a VM/CT actually resides on that node or is running or
> stopped.  This is especially useful for VM migration because the IP
> stays consistent without special consideration.

That should be orthogonal to the feature set, if we have all the info
saved somewhere else

But this also speaks against having it in the VM config, as that would
mean that every node needs to parse every guest's config periodically,
which is way worse than some cluster lock and breaks with our base
axiom that guests are owned by their current node, and only by that,
and a node should not really alter behavior dependent on some "foreign"
guest.

> 
> Snapshot/revert, backup/restore, suspend/hibernate/resume cases are
> automatically covered because the IP will already be reserved for that
> MAC.

Not really, restore to another setup is broken, one could resume the
VM after having changed CIDRs of a subnet, making that broken too, ...

> 
> If the admin wants to change the IP of a VM, this can be done via the
> IPAM API/UI which will have to be implemented separately.

Providing overrides can be fine, but IMO that should all still be in
the SDN state, not a per-VM one, and ideally use a common API.


> A limitation of this approach vs dynamic IP reservation is that the IP
> range on the subnet needs to be large enough to hold all IPs of all,
> even stopped, VMs in that subnet. This is in contrast to default DHCP
> functionality where only the number of actively running VMs is limited.
> It should be enough to mention this in the docs.

In production setups it should not matter _that_ much, but it might
be a bit of a PITA if one has a few "archived" VMs or the like, but
that alone would

> 
> I will further review the code and try to implement the aforementioned
> approach.

You can naturally experiment, but I'd also try the upsert proposal from
Stefan H., as IMO that sounds like a good balance.




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-17 13:54 Stefan Hanreich
  2023-10-17 14:48 ` DERUMIER, Alexandre
  2023-10-17 16:04 ` Stefan Hanreich
@ 2023-10-23 12:40 ` Stefan Lendl
  2023-10-27  7:39   ` Thomas Lamprecht
  2023-10-27 12:53   ` Stefan Lendl
  2 siblings, 2 replies; 16+ messages in thread
From: Stefan Lendl @ 2023-10-23 12:40 UTC (permalink / raw)
  To: pve-devel


I am currently working on the SDN feature.  This is an initial review of
the patch series and I am trying to make a strong case against ephemeral
DHCP IP reservation.

The current state of the patch series invokes the IPAM on every VM/CT
start/stop to add or remove the IP from the IPAM.
This triggers the dnsmasq config generation on the specific host with
only the MAC/IP mapping of that particular host.

From reading the discussion of the v1 patch series I understand this
approach tries to implement the ephemeral IP reservation strategy. From
off-list conversations with Stefan Hanreich, I agree that having
ephemeral IP reservation coordinated by the IPAM requires us to
re-implement DHCP functionality in the IPAM and heavily rely on syncing
between the different services.

To maintain reliable sync we need to hook into many different places
where the IPAM needs to be queried.  Any issues with the implementation
may lead to IPAM and DHCP local config state running out of sync,
causing network issues such as duplicate IPs.

Furthermore, every interaction with the IPAM requires a cluster-wide
lock on the IPAM. Having a central cluster-wide lock on every VM
start/stop/migrate will significantly limit parallel operations.  Even
starting two VMs in parallel will be limited by this central lock. At
boot, trying to start many VMs (ideally as many in parallel as possible)
is limited by the central IPAM lock even further.

I argue that we shall not support ephemeral IPs at all.
The alternative is to make all IPAM reservations persistent.

Using persistent IPs only reduces the interactions of VM/CTs with the
IPAM to a minimum of NIC joining a subnet and NIC leaving a subnet. I am
deliberately not referring to VMs because a VM may be part of multiple
VNets or even multiple times in the same VNet (regardless if that is
sensible).

Cases the IPAM needs to be involved:

- NIC with DHCP enabled VNet is added to VM config
- NIC with DHCP enabled VNet is removed from VM config
- NIC is assigned to another Bridge
  can be treated as individual leave + join events

Cases that are explicitly not covered but may be added if desired:

- Manually assign an IP address on a NIC
  will not be automatically visible in the IPAM
- Manually change the MAC on a NIC
  don't do that > you are on your own.
  Not handled > change in IPAM manually

Once an IP is reserved via IPAM, the dnsmasq config can be generated
stateless and idempotent from the pve IPAM and is identical on all nodes
regardless if a VM/CT actually resides on that node or is running or
stopped.  This is especially useful for VM migration because the IP
stays consistent without special consideration.

Snapshot/revert, backup/restore, suspend/hibernate/resume cases are
automatically covered because the IP will already be reserved for that
MAC.

If the admin wants to change the IP of a VM, this can be done via the
IPAM API/UI which will have to be implemented separately.

A limitation of this approach vs dynamic IP reservation is that the IP
range on the subnet needs to be large enough to hold all IPs of all,
even stopped, VMs in that subnet. This is in contrast to default DHCP
functionality, where the IP pool only needs to cover actively running VMs.
It should be enough to mention this in the docs.

I will further review the code and try to implement the aforementioned
approach.

Best regards,
Stefan Lendl

Stefan Hanreich <s.hanreich@proxmox.com> writes:

> This is a WIP patch series, since I will be gone for 3 weeks and wanted to
> share my current progress with the DHCP support for SDN.
>
> This patch series adds support for automatically deploying dnsmasq as a DHCP
> server to a simple SDN Zone.
>
> While certainly not 100% polished on some ends (looking at restarting systemd
> services in particular), the general idea behind the mechanism shows. I wanted
> to gather some feedback on how I approached designing the plugins and the
> config regeneration process before committing to this design by creating an API
> and UI around it.
>
> You need to install dnsmasq (and disable it afterwards):
>
>   apt install dnsmasq && systemctl disable --now dnsmasq
>
>
> You can use the following example configuration for deploying a DHCP server in
> a SDN subnet:
>
> /etc/pve/sdn/dhcp.cfg:
>
>   dnsmasq: nat
>
>
> /etc/pve/sdn/zones.cfg:
>
>   simple: DHCPNAT
>           ipam pve
>
>
> /etc/pve/sdn/vnets.cfg:
>
>   vnet: dhcpnat
>           zone DHCPNAT
>
>
> /etc/pve/sdn/subnets.cfg:
>
>   subnet: DHCPNAT-10.1.0.0-16
>           vnet dhcpnat
>           dhcp-dns-server 10.1.0.1
>           dhcp-range server=nat,start-address=10.1.0.100,end-address=10.1.0.200
>           gateway 10.1.0.1
>           snat 1
>
>
> Then apply the SDN configuration:
>
>   pvesh set /cluster/sdn
>
> You need to apply the SDN configuration once after adding the dhcp-range lines
> to the configuration, since the running configuration is used for managing
> DHCP. It will not work otherwise!
>
> For testing it can be helpful to monitor the following files (e.g. with watch)
> to find out what is happening
>   * /etc/dnsmasq.d/<dhcp_id>/ethers (on each node)
>   * /etc/pve/priv/ipam.db
>
> Changes from v1 -> v2:
>   * added hooks for handling DHCP when starting / stopping / .. VMs and CTs
>   * Get an IP from IPAM and register that IP in the DHCP server
>     (pve only for now)
>   * remove lease-time, since it is now infinite and managed by the VM lifecycle
>   * add hooks for setting & deleting DHCP mappings to DHCP plugins
>   * modified interface of the abstract class to reflect new requirements
>   * added helpers in existing SDN classes
>   * simplified DHCP configuration settings
>
>
>
> pve-cluster:
>
> Stefan Hanreich (1):
>   cluster files: add dhcp.cfg
>
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
>
>
> pve-network:
>
> Stefan Hanreich (6):
>   subnets: vnets: preparations for DHCP plugins
>   dhcp: add abstract class for DHCP plugins
>   dhcp: subnet: add DHCP options to subnet configuration
>   dhcp: add DHCP plugin for dnsmasq
>   ipam: Add helper methods for DHCP to PVE IPAM
>   dhcp: regenerate config for DHCP servers on reload
>
>  debian/control                         |   1 +
>  src/PVE/Network/SDN.pm                 |  11 +-
>  src/PVE/Network/SDN/Dhcp.pm            | 192 +++++++++++++++++++++++++
>  src/PVE/Network/SDN/Dhcp/Dnsmasq.pm    | 186 ++++++++++++++++++++++++
>  src/PVE/Network/SDN/Dhcp/Makefile      |   8 ++
>  src/PVE/Network/SDN/Dhcp/Plugin.pm     |  83 +++++++++++
>  src/PVE/Network/SDN/Ipams/PVEPlugin.pm |  64 +++++++++
>  src/PVE/Network/SDN/Makefile           |   3 +-
>  src/PVE/Network/SDN/SubnetPlugin.pm    |  32 +++++
>  src/PVE/Network/SDN/Subnets.pm         |  43 ++++--
>  src/PVE/Network/SDN/Vnets.pm           |  27 ++--
>  11 files changed, 622 insertions(+), 28 deletions(-)
>  create mode 100644 src/PVE/Network/SDN/Dhcp.pm
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm
>
>
> pve-manager:
>
> Stefan Hanreich (1):
>   sdn: regenerate DHCP config on reload
>
>  PVE/API2/Network.pm | 1 +
>  1 file changed, 1 insertion(+)
>
>
> qemu-server:
>
> Stefan Hanreich (1):
>   sdn: dhcp: add DHCP setup to vm-network-scripts
>
>  PVE/QemuServer.pm                 | 14 ++++++++++++++
>  vm-network-scripts/pve-bridge     |  3 +++
>  vm-network-scripts/pve-bridgedown | 19 +++++++++++++++++++
>  3 files changed, 36 insertions(+)
>
>
> pve-container:
>
> Stefan Hanreich (1):
>   sdn: dhcp: setup DHCP mappings in LXC hooks
>
>  src/PVE/LXC.pm            | 10 ++++++++++
>  src/lxc-pve-poststop-hook |  1 +
>  src/lxc-pve-prestart-hook |  9 +++++++++
>  3 files changed, 20 insertions(+)
>
>
> Summary over all repositories:
>   20 files changed, 681 insertions(+), 28 deletions(-)
>
> --
> murpp v0.4.0
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel






^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-17 16:04 ` Stefan Hanreich
@ 2023-10-18  9:59   ` DERUMIER, Alexandre
  0 siblings, 0 replies; 16+ messages in thread
From: DERUMIER, Alexandre @ 2023-10-18  9:59 UTC (permalink / raw)
  To: pve-devel

>>Another thing: What happens when a user changes the MAC address via
>>the
>>UI? I'd either disallow it completely or we need to update the DHCP
>>configuration files and IPAM


When the mac address is changed online, the nic does an unplug then a
replug.

So technically, it's just:

- unplug nic:  delete ipam / clean dhcp

- hotplug nic: add ipam/ add dhcp.


The guest os should automatically delete the old ip on unplug, and
request a new ip via dhcp on hotplug.





^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-17 16:05   ` Stefan Hanreich
@ 2023-10-17 21:00     ` DERUMIER, Alexandre
  0 siblings, 0 replies; 16+ messages in thread
From: DERUMIER, Alexandre @ 2023-10-17 21:00 UTC (permalink / raw)
  To: pve-devel, s.hanreich

-------- Original Message --------
From: Stefan Hanreich <s.hanreich@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
"DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
Subject: Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-
server/container 00/10] Add support for DHCP servers to SDN
Date: 17/10/2023 18:05:55

> Maybe try to see if we can use pve ipam as cache in front of external
> ipam.

>>Yes, it would also be cool if you could look at implementing the two
>>newly added methods from the PVEPlugin for Netbox / Phpipam, since
>>you
>>have more experience with those.

>>I also looked into merging those two methods, but haven't really
>>found
>>an elegant solution which is why I left them as separate methods for
>>now.
>>
>>Kind Regards


Yes, sure, no problem! (I haven't read your code yet.)








^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-17 14:48 ` DERUMIER, Alexandre
@ 2023-10-17 16:05   ` Stefan Hanreich
  2023-10-17 21:00     ` DERUMIER, Alexandre
  0 siblings, 1 reply; 16+ messages in thread
From: Stefan Hanreich @ 2023-10-17 16:05 UTC (permalink / raw)
  To: Proxmox VE development discussion, DERUMIER, Alexandre

> Maybe try to see if we can use pve ipam as cache in front of external
> ipam.

Yes, it would also be cool if you could look at implementing the two
newly added methods from the PVEPlugin for Netbox / Phpipam, since you
have more experience with those.

I also looked into merging those two methods, but haven't really found
an elegant solution, which is why I left them as separate methods for now.

Kind Regards




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-17 13:54 Stefan Hanreich
  2023-10-17 14:48 ` DERUMIER, Alexandre
@ 2023-10-17 16:04 ` Stefan Hanreich
  2023-10-18  9:59   ` DERUMIER, Alexandre
  2023-10-23 12:40 ` Stefan Lendl
  2 siblings, 1 reply; 16+ messages in thread
From: Stefan Hanreich @ 2023-10-17 16:04 UTC (permalink / raw)
  To: pve-devel

Some additional things we've discussed off-list:

Currently, migration & hibernation are not working for VMs - everything
else in the lifecycle of VMs/CTs should be covered.



For Migration:
It currently creates an additional mapping in the IPAM and doesn't
delete the existing mapping from the DHCP server config.
I'd say we change this to upsert instead of add (for both IPAM and DHCP
plugins). This means when adding a mapping for a VM, check if there is
already one, and if so return that mapping without doing anything.

Then we just need to delete the existing mapping from the source during
the migration, which could simply be done somewhere in QemuMigrate.

Having upsert, instead of add, would also make implementations with
distributed DHCP servers like kea easier.
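
As a sketch, the upsert on the IPAM/DHCP side could look like this
(method and helper names are my assumptions, not the actual plugin
interface):

  # Hypothetical upsert: return the existing mapping if one is present,
  # and only allocate + register when there is none yet.
  sub upsert_ip_mapping {
      my ($class, $subnet, $mac) = @_;

      my $existing = $class->lookup_ip($subnet, $mac);
      return $existing if defined($existing);   # keep the current mapping

      my $ip = $class->alloc_free_ip($subnet, $mac);
      $class->dhcp_add_mapping($subnet, $mac, $ip);
      return $ip;
  }

With that in place, the migration case only has to delete the mapping
on the source side.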


For Hibernation:
Having Upsert would solve the issue with Hibernation as well. Here we
need to make sure to not delete the entry from the IPAM, since we use a
memory snapshot rather than using the guest's hibernation feature. That
means the DHCP lease 'persists' in the VM.



We will also need to expose the functionality via the Web UI, for that I
had the following things in mind:

* Add Create/Edit/Delete DHCP to either `Options` or in a new IPAM/DHCP
panel (see below)
* Add a tree view of the current PVE IPAM state, similar to the resource
mapping, as a new panel
* add the `dhcp-range` fields to the Subnet Edit Dialog (possibly in a
new tab in the edit dialogue)



Another thing: What happens when a user changes the MAC address via the
UI? I'd either disallow it completely or we need to update the DHCP
configuration files and IPAM.


On 10/17/23 15:54, Stefan Hanreich wrote:
> This is a WIP patch series, since I will be gone for 3 weeks and wanted to
> share my current progress with the DHCP support for SDN.
> 
> This patch series adds support for automatically deploying dnsmasq as a DHCP
> server to a simple SDN Zone.
> 
> While certainly not 100% polished on some ends (looking at restarting systemd
> services in particular), the general idea behind the mechanism shows. I wanted
> to gather some feedback on how I approached designing the plugins and the
> config regeneration process before committing to this design by creating an API
> and UI around it.
> 
> You need to install dnsmasq (and disable it afterwards):
> 
>   apt install dnsmasq && systemctl disable --now dnsmasq
> 
> 
> You can use the following example configuration for deploying a DHCP server in
> a SDN subnet:
> 
> /etc/pve/sdn/dhcp.cfg:
> 
>   dnsmasq: nat
> 
> 
> /etc/pve/sdn/zones.cfg:
> 
>   simple: DHCPNAT
>           ipam pve
> 
> 
> /etc/pve/sdn/vnets.cfg:
> 
>   vnet: dhcpnat
>           zone DHCPNAT
> 
> 
> /etc/pve/sdn/subnets.cfg:
> 
>   subnet: DHCPNAT-10.1.0.0-16
>           vnet dhcpnat
>           dhcp-dns-server 10.1.0.1
>           dhcp-range server=nat,start-address=10.1.0.100,end-address=10.1.0.200
>           gateway 10.1.0.1
>           snat 1
> 
> 
> Then apply the SDN configuration:
> 
>   pvesh set /cluster/sdn
> 
> You need to apply the SDN configuration once after adding the dhcp-range lines
> to the configuration, since the running configuration is used for managing
> DHCP. It will not work otherwise!
> 
> For testing it can be helpful to monitor the following files (e.g. with watch)
> to find out what is happening
>   * /etc/dnsmasq.d/<dhcp_id>/ethers (on each node)
>   * /etc/pve/priv/ipam.db
> 
> Changes from v1 -> v2:
>   * added hooks for handling DHCP when starting / stopping / .. VMs and CTs
>   * Get an IP from IPAM and register that IP in the DHCP server
>     (pve only for now)
>   * remove lease-time, since it is now infinite and managed by the VM lifecycle
>   * add hooks for setting & deleting DHCP mappings to DHCP plugins
>   * modified interface of the abstract class to reflect new requirements
>   * added helpers in existing SDN classes
>   * simplified DHCP configuration settings
> 
> 
> 
> pve-cluster:
> 
> Stefan Hanreich (1):
>   cluster files: add dhcp.cfg
> 
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> 
> pve-network:
> 
> Stefan Hanreich (6):
>   subnets: vnets: preparations for DHCP plugins
>   dhcp: add abstract class for DHCP plugins
>   dhcp: subnet: add DHCP options to subnet configuration
>   dhcp: add DHCP plugin for dnsmasq
>   ipam: Add helper methods for DHCP to PVE IPAM
>   dhcp: regenerate config for DHCP servers on reload
> 
>  debian/control                         |   1 +
>  src/PVE/Network/SDN.pm                 |  11 +-
>  src/PVE/Network/SDN/Dhcp.pm            | 192 +++++++++++++++++++++++++
>  src/PVE/Network/SDN/Dhcp/Dnsmasq.pm    | 186 ++++++++++++++++++++++++
>  src/PVE/Network/SDN/Dhcp/Makefile      |   8 ++
>  src/PVE/Network/SDN/Dhcp/Plugin.pm     |  83 +++++++++++
>  src/PVE/Network/SDN/Ipams/PVEPlugin.pm |  64 +++++++++
>  src/PVE/Network/SDN/Makefile           |   3 +-
>  src/PVE/Network/SDN/SubnetPlugin.pm    |  32 +++++
>  src/PVE/Network/SDN/Subnets.pm         |  43 ++++--
>  src/PVE/Network/SDN/Vnets.pm           |  27 ++--
>  11 files changed, 622 insertions(+), 28 deletions(-)
>  create mode 100644 src/PVE/Network/SDN/Dhcp.pm
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm
> 
> 
> pve-manager:
> 
> Stefan Hanreich (1):
>   sdn: regenerate DHCP config on reload
> 
>  PVE/API2/Network.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
> 
> qemu-server:
> 
> Stefan Hanreich (1):
>   sdn: dhcp: add DHCP setup to vm-network-scripts
> 
>  PVE/QemuServer.pm                 | 14 ++++++++++++++
>  vm-network-scripts/pve-bridge     |  3 +++
>  vm-network-scripts/pve-bridgedown | 19 +++++++++++++++++++
>  3 files changed, 36 insertions(+)
> 
> 
> pve-container:
> 
> Stefan Hanreich (1):
>   sdn: dhcp: setup DHCP mappings in LXC hooks
> 
>  src/PVE/LXC.pm            | 10 ++++++++++
>  src/lxc-pve-poststop-hook |  1 +
>  src/lxc-pve-prestart-hook |  9 +++++++++
>  3 files changed, 20 insertions(+)
> 
> 
> Summary over all repositories:
>   20 files changed, 681 insertions(+), 28 deletions(-)
> 




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
  2023-10-17 13:54 Stefan Hanreich
@ 2023-10-17 14:48 ` DERUMIER, Alexandre
  2023-10-17 16:05   ` Stefan Hanreich
  2023-10-17 16:04 ` Stefan Hanreich
  2023-10-23 12:40 ` Stefan Lendl
  2 siblings, 1 reply; 16+ messages in thread
From: DERUMIER, Alexandre @ 2023-10-17 14:48 UTC (permalink / raw)
  To: pve-devel

Hi Stefan,

Thanks for sharing!

I'll try to test it in depth this week or next week.

Maybe try to see if we can use the pve ipam as a cache in front of the
external ipam.



-------- Original Message --------
From: Stefan Hanreich <s.hanreich@proxmox.com>
Reply-To: Proxmox VE development discussion <pve-
devel@lists.proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [WIP v2 cluster/network/manager/qemu-
server/container 00/10] Add support for DHCP servers to SDN
Date: 17/10/2023 15:54:57

This is a WIP patch series, since I will be gone for 3 weeks and wanted
to
share my current progress with the DHCP support for SDN.

This patch series adds support for automatically deploying dnsmasq as a
DHCP
server to a simple SDN Zone.

While certainly not 100% polished on some ends (looking at restarting
systemd
services in particular), the general idea behind the mechanism shows. I
wanted
to gather some feedback on how I approached designing the plugins and
the
config regeneration process before committing to this design by creating
an API
and UI around it.

You need to install dnsmasq (and disable it afterwards):

  apt install dnsmasq && systemctl disable --now dnsmasq


You can use the following example configuration for deploying a DHCP
server in
a SDN subnet:

/etc/pve/sdn/dhcp.cfg:

  dnsmasq: nat


/etc/pve/sdn/zones.cfg:

  simple: DHCPNAT
          ipam pve


/etc/pve/sdn/vnets.cfg:

  vnet: dhcpnat
          zone DHCPNAT


/etc/pve/sdn/subnets.cfg:

  subnet: DHCPNAT-10.1.0.0-16
          vnet dhcpnat
          dhcp-dns-server 10.1.0.1
          dhcp-range server=nat,start-address=10.1.0.100,end-
address=10.1.0.200
          gateway 10.1.0.1
          snat 1


Then apply the SDN configuration:

  pvesh set /cluster/sdn

You need to apply the SDN configuration once after adding the dhcp-
range lines
to the configuration, since the running configuration is used for
managing
DHCP. It will not work otherwise!

For testing it can be helpful to monitor the following files (e.g. with
watch)
to find out what is happening
  * /etc/dnsmasq.d/<dhcp_id>/ethers (on each node)
  * /etc/pve/priv/ipam.db

Changes from v1 -> v2:
  * added hooks for handling DHCP when starting / stopping / .. VMs and
CTs
  * Get an IP from IPAM and register that IP in the DHCP server
    (pve only for now)
  * remove lease-time, since it is now infinite and managed by the VM
lifecycle
  * add hooks for setting & deleting DHCP mappings to DHCP plugins
  * modified interface of the abstract class to reflect new
requirements
  * added helpers in existing SDN classes
  * simplified DHCP configuration settings



pve-cluster:

Stefan Hanreich (1):
  cluster files: add dhcp.cfg

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)


pve-network:

Stefan Hanreich (6):
  subnets: vnets: preparations for DHCP plugins
  dhcp: add abstract class for DHCP plugins
  dhcp: subnet: add DHCP options to subnet configuration
  dhcp: add DHCP plugin for dnsmasq
  ipam: Add helper methods for DHCP to PVE IPAM
  dhcp: regenerate config for DHCP servers on reload

 debian/control                         |   1 +
 src/PVE/Network/SDN.pm                 |  11 +-
 src/PVE/Network/SDN/Dhcp.pm            | 192 +++++++++++++++++++++++++
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm    | 186 ++++++++++++++++++++++++
 src/PVE/Network/SDN/Dhcp/Makefile      |   8 ++
 src/PVE/Network/SDN/Dhcp/Plugin.pm     |  83 +++++++++++
 src/PVE/Network/SDN/Ipams/PVEPlugin.pm |  64 +++++++++
 src/PVE/Network/SDN/Makefile           |   3 +-
 src/PVE/Network/SDN/SubnetPlugin.pm    |  32 +++++
 src/PVE/Network/SDN/Subnets.pm         |  43 ++++--
 src/PVE/Network/SDN/Vnets.pm           |  27 ++--
 11 files changed, 622 insertions(+), 28 deletions(-)
 create mode 100644 src/PVE/Network/SDN/Dhcp.pm
 create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
 create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
 create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm


pve-manager:

Stefan Hanreich (1):
  sdn: regenerate DHCP config on reload

 PVE/API2/Network.pm | 1 +
 1 file changed, 1 insertion(+)


qemu-server:

Stefan Hanreich (1):
  sdn: dhcp: add DHCP setup to vm-network-scripts

 PVE/QemuServer.pm                 | 14 ++++++++++++++
 vm-network-scripts/pve-bridge     |  3 +++
 vm-network-scripts/pve-bridgedown | 19 +++++++++++++++++++
 3 files changed, 36 insertions(+)


pve-container:

Stefan Hanreich (1):
  sdn: dhcp: setup DHCP mappings in LXC hooks

 src/PVE/LXC.pm            | 10 ++++++++++
 src/lxc-pve-poststop-hook |  1 +
 src/lxc-pve-prestart-hook |  9 +++++++++
 3 files changed, 20 insertions(+)


Summary over all repositories:
  20 files changed, 681 insertions(+), 28 deletions(-)



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN
@ 2023-10-17 13:54 Stefan Hanreich
  2023-10-17 14:48 ` DERUMIER, Alexandre
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Stefan Hanreich @ 2023-10-17 13:54 UTC (permalink / raw)
  To: pve-devel

This is a WIP patch series, since I will be gone for 3 weeks and wanted to
share my current progress with the DHCP support for SDN.

This patch series adds support for automatically deploying dnsmasq as a DHCP
server to a simple SDN Zone.

While certainly not 100% polished on some ends (looking at restarting systemd
services in particular), the general idea behind the mechanism shows. I wanted
to gather some feedback on how I approached designing the plugins and the
config regeneration process before committing to this design by creating an API
and UI around it.

You need to install dnsmasq (and disable it afterwards):

  apt install dnsmasq && systemctl disable --now dnsmasq


You can use the following example configuration for deploying a DHCP server in
a SDN subnet:

/etc/pve/sdn/dhcp.cfg:

  dnsmasq: nat


/etc/pve/sdn/zones.cfg:

  simple: DHCPNAT
          ipam pve


/etc/pve/sdn/vnets.cfg:

  vnet: dhcpnat
          zone DHCPNAT


/etc/pve/sdn/subnets.cfg:

  subnet: DHCPNAT-10.1.0.0-16
          vnet dhcpnat
          dhcp-dns-server 10.1.0.1
          dhcp-range server=nat,start-address=10.1.0.100,end-address=10.1.0.200
          gateway 10.1.0.1
          snat 1


Then apply the SDN configuration:

  pvesh set /cluster/sdn

You need to apply the SDN configuration once after adding the dhcp-range lines
to the configuration, since the running configuration is used for managing
DHCP. It will not work otherwise!

For testing it can be helpful to monitor the following files (e.g. with watch)
to find out what is happening
  * /etc/dnsmasq.d/<dhcp_id>/ethers (on each node)
  * /etc/pve/priv/ipam.db

Changes from v1 -> v2:
  * added hooks for handling DHCP when starting / stopping / .. VMs and CTs
  * Get an IP from IPAM and register that IP in the DHCP server
    (pve only for now)
  * remove lease-time, since it is now infinite and managed by the VM lifecycle
  * add hooks for setting & deleting DHCP mappings to DHCP plugins
  * modified interface of the abstract class to reflect new requirements
  * added helpers in existing SDN classes
  * simplified DHCP configuration settings



pve-cluster:

Stefan Hanreich (1):
  cluster files: add dhcp.cfg

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)


pve-network:

Stefan Hanreich (6):
  subnets: vnets: preparations for DHCP plugins
  dhcp: add abstract class for DHCP plugins
  dhcp: subnet: add DHCP options to subnet configuration
  dhcp: add DHCP plugin for dnsmasq
  ipam: Add helper methods for DHCP to PVE IPAM
  dhcp: regenerate config for DHCP servers on reload

 debian/control                         |   1 +
 src/PVE/Network/SDN.pm                 |  11 +-
 src/PVE/Network/SDN/Dhcp.pm            | 192 +++++++++++++++++++++++++
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm    | 186 ++++++++++++++++++++++++
 src/PVE/Network/SDN/Dhcp/Makefile      |   8 ++
 src/PVE/Network/SDN/Dhcp/Plugin.pm     |  83 +++++++++++
 src/PVE/Network/SDN/Ipams/PVEPlugin.pm |  64 +++++++++
 src/PVE/Network/SDN/Makefile           |   3 +-
 src/PVE/Network/SDN/SubnetPlugin.pm    |  32 +++++
 src/PVE/Network/SDN/Subnets.pm         |  43 ++++--
 src/PVE/Network/SDN/Vnets.pm           |  27 ++--
 11 files changed, 622 insertions(+), 28 deletions(-)
 create mode 100644 src/PVE/Network/SDN/Dhcp.pm
 create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
 create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
 create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm


pve-manager:

Stefan Hanreich (1):
  sdn: regenerate DHCP config on reload

 PVE/API2/Network.pm | 1 +
 1 file changed, 1 insertion(+)


qemu-server:

Stefan Hanreich (1):
  sdn: dhcp: add DHCP setup to vm-network-scripts

 PVE/QemuServer.pm                 | 14 ++++++++++++++
 vm-network-scripts/pve-bridge     |  3 +++
 vm-network-scripts/pve-bridgedown | 19 +++++++++++++++++++
 3 files changed, 36 insertions(+)


pve-container:

Stefan Hanreich (1):
  sdn: dhcp: setup DHCP mappings in LXC hooks

 src/PVE/LXC.pm            | 10 ++++++++++
 src/lxc-pve-poststop-hook |  1 +
 src/lxc-pve-prestart-hook |  9 +++++++++
 3 files changed, 20 insertions(+)


Summary over all repositories:
  20 files changed, 681 insertions(+), 28 deletions(-)

-- 
murpp v0.4.0




^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2023-10-27 13:38 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-23 10:27 [pve-devel] [WIP v2 cluster/network/manager/qemu-server/container 00/10] Add support for DHCP servers to SDN Stefan Lendl
2023-10-23 12:52 ` Stefan Lendl
2023-10-26 12:49 ` DERUMIER, Alexandre
2023-10-26 12:53 ` DERUMIER, Alexandre
  -- strict thread matches above, loose matches on Subject: below --
2023-10-17 13:54 Stefan Hanreich
2023-10-17 14:48 ` DERUMIER, Alexandre
2023-10-17 16:05   ` Stefan Hanreich
2023-10-17 21:00     ` DERUMIER, Alexandre
2023-10-17 16:04 ` Stefan Hanreich
2023-10-18  9:59   ` DERUMIER, Alexandre
2023-10-23 12:40 ` Stefan Lendl
2023-10-27  7:39   ` Thomas Lamprecht
2023-10-27 12:26     ` Stefan Lendl
2023-10-27 12:36     ` DERUMIER, Alexandre
2023-10-27 12:53   ` Stefan Lendl
2023-10-27 13:37     ` DERUMIER, Alexandre

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal