public inbox for pve-devel@lists.proxmox.com
 help / color / mirror / Atom feed
* [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
@ 2023-09-08 13:42 Stefan Hanreich
  2023-09-08 13:42 ` [pve-devel] [RFC pve-cluster 1/6] cluster files: add dhcp.cfg Stefan Hanreich
                   ` (6 more replies)
  0 siblings, 7 replies; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-08 13:42 UTC (permalink / raw)
  To: pve-devel

This patch series adds support for automatically deploying dnsmasq as a DHCP
server to a simple SDN Zone.

While certainly not 100% polished in some places (looking at restarting
systemd services in particular), the general idea behind the mechanism
should come across. I wanted to gather some feedback on how I approached
designing the plugins and the config regeneration process before
committing to this design by creating an API and UI around it.

For your testing convenience I've provided deb packages on our share:
  /path/to/nasi/iso/packages/shan-sdn-dhcp

You need to install dnsmasq (and disable it afterwards):

  apt install dnsmasq && systemctl disable --now dnsmasq


You can use the following example configuration for deploying a DHCP server in
an SDN subnet:

/etc/pve/sdn/dhcp.cfg:

  dnsmasq: nat


/etc/pve/sdn/zones.cfg:

  simple: DHCPNAT
          ipam pve


/etc/pve/sdn/vnets.cfg:

  vnet: dhcpnat
          zone DHCPNAT


/etc/pve/sdn/subnets.cfg:

  subnet: DHCPNAT-10.1.0.0-16
          vnet dhcpnat
          dhcp-dns-server 10.1.0.1
          dhcp-range server=nat,start-address=10.1.0.100,end-address=10.1.0.200,lease-time=86400
          dhcp-range server=nat,start-address=10.1.1.100,end-address=10.1.1.200,lease-time=86400,dns-server=10.1.0.2
          gateway 10.1.0.1
          snat 1


Then apply the SDN configuration:

  pvesh set /cluster/sdn
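
To check that the dnsmasq instance got deployed (assuming the dhcp.cfg
ID 'nat' from above, since the instances are named dnsmasq@<id>):

  systemctl status dnsmasq@nat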


Be careful not to save the subnet config from the Web UI after
configuring dhcp-range, since the Web UI does not know about this option
yet and the dhcp-range lines will vanish from the config.



pve-cluster:

Stefan Hanreich (1):
  cluster files: add dhcp.cfg

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)


pve-manager:

Stefan Hanreich (1):
  sdn: regenerate DHCP config on reload

 PVE/API2/Network.pm | 1 +
 1 file changed, 1 insertion(+)


pve-network:

Stefan Hanreich (4):
  sdn: dhcp: add abstract class for DHCP plugins
  sdn: dhcp: subnet: add DHCP options to subnet configuration
  sdn: dhcp: add DHCP plugin for dnsmasq
  sdn: dhcp: regenerate config for DHCP servers on reload

 debian/control                      |   1 +
 src/PVE/Network/SDN.pm              |  11 ++-
 src/PVE/Network/SDN/Dhcp.pm         | 122 ++++++++++++++++++++++++++++
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 115 ++++++++++++++++++++++++++
 src/PVE/Network/SDN/Dhcp/Makefile   |   8 ++
 src/PVE/Network/SDN/Dhcp/Plugin.pm  |  76 +++++++++++++++++
 src/PVE/Network/SDN/Makefile        |   4 +-
 src/PVE/Network/SDN/SubnetPlugin.pm |  43 ++++++++++
 8 files changed, 377 insertions(+), 3 deletions(-)
 create mode 100644 src/PVE/Network/SDN/Dhcp.pm
 create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
 create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
 create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm


Summary over all repositories:
  11 files changed, 380 insertions(+), 3 deletions(-)

--
murpp v0.4.0




^ permalink raw reply	[flat|nested] 28+ messages in thread

* [pve-devel] [RFC pve-cluster 1/6] cluster files: add dhcp.cfg
  2023-09-08 13:42 [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN Stefan Hanreich
@ 2023-09-08 13:42 ` Stefan Hanreich
  2023-09-08 13:43 ` [pve-devel] [RFC pve-manager 2/6] sdn: regenerate DHCP config on reload Stefan Hanreich
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-08 13:42 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index e3705b6..2f674db 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -78,6 +78,7 @@ my $observed = {
     'sdn/subnets.cfg' => 1,
     'sdn/ipams.cfg' => 1,
     'sdn/dns.cfg' => 1,
+    'sdn/dhcp.cfg' => 1,
     'sdn/.running-config' => 1,
     'virtual-guest/cpu-models.conf' => 1,
     'mapping/pci.cfg' => 1,
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index c8094ac..4993fc1 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -107,6 +107,7 @@ static memdb_change_t memdb_change_array[] = {
 	{ .path = "sdn/subnets.cfg" },
 	{ .path = "sdn/ipams.cfg" },
 	{ .path = "sdn/dns.cfg" },
+	{ .path = "sdn/dhcp.cfg" },
 	{ .path = "sdn/.running-config" },
 	{ .path = "virtual-guest/cpu-models.conf" },
 	{ .path = "firewall/cluster.fw" },
--
2.39.2




^ permalink raw reply	[flat|nested] 28+ messages in thread

* [pve-devel] [RFC pve-manager 2/6] sdn: regenerate DHCP config on reload
  2023-09-08 13:42 [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN Stefan Hanreich
  2023-09-08 13:42 ` [pve-devel] [RFC pve-cluster 1/6] cluster files: add dhcp.cfg Stefan Hanreich
@ 2023-09-08 13:43 ` Stefan Hanreich
  2023-09-08 13:43 ` [pve-devel] [RFC pve-network 3/6] sdn: dhcp: add abstract class for DHCP plugins Stefan Hanreich
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-08 13:43 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
 PVE/API2/Network.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
index 00d964a79..f39f04f52 100644
--- a/PVE/API2/Network.pm
+++ b/PVE/API2/Network.pm
@@ -660,6 +660,7 @@ __PACKAGE__->register_method({

 	    if ($have_sdn) {
 		PVE::Network::SDN::generate_zone_config();
+		PVE::Network::SDN::generate_dhcp_config();
 	    }

 	    my $err = sub {
--
2.39.2




^ permalink raw reply	[flat|nested] 28+ messages in thread

* [pve-devel] [RFC pve-network 3/6] sdn: dhcp: add abstract class for DHCP plugins
  2023-09-08 13:42 [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN Stefan Hanreich
  2023-09-08 13:42 ` [pve-devel] [RFC pve-cluster 1/6] cluster files: add dhcp.cfg Stefan Hanreich
  2023-09-08 13:43 ` [pve-devel] [RFC pve-manager 2/6] sdn: regenerate DHCP config on reload Stefan Hanreich
@ 2023-09-08 13:43 ` Stefan Hanreich
  2023-09-08 13:43 ` [pve-devel] [RFC pve-network 4/6] sdn: dhcp: subnet: add DHCP options to subnet configuration Stefan Hanreich
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-08 13:43 UTC (permalink / raw)
  To: pve-devel

This abstract class provides several hooks that should be called
during the config regeneration process:

before_regenerate
Should be called before the plugin does any configuration tasks. The
main usage for this hook is tearing down old instances.

after_regenerate
Should be called after the plugin has finished generating any
configuration. The main usage for this hook is to perform cleanup and
restart / reload services.

before_configure
Should be called before creating the configuration for a specific DHCP
instance, as defined in dhcp.cfg. This can be used for performing
instance-specific setup.

after_configure
Should be called after the configuration for a specific DHCP instance,
as defined in dhcp.cfg, has been generated. This will mainly be used for
enabling and restarting / reloading a specific instance of a DHCP
server.

configure_subnet
This function configures the DHCP ranges for a given DHCP server and
subnet. This will usually involve generating configuration files based
on the SDN configuration.
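
For illustration, the intended call order during one regeneration run
looks roughly like this (a sketch, not part of this patch; see the
driver code in patch 6/6):

  $plugin->before_regenerate();       # once per plugin type
  # for every DHCP instance in dhcp.cfg handled by this plugin:
  $plugin->before_configure($dhcp_config);
  # for every subnet with a dhcp-range pointing at this instance:
  $plugin->configure_subnet($dhcp_config, $subnet_config, $range_config);
  $plugin->after_configure($dhcp_config);
  $plugin->after_regenerate();        # once per plugin type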

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
 src/PVE/Network/SDN/Dhcp/Makefile  |  8 ++++
 src/PVE/Network/SDN/Dhcp/Plugin.pm | 76 ++++++++++++++++++++++++++++++
 src/PVE/Network/SDN/Makefile       |  1 +
 3 files changed, 85 insertions(+)
 create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
 create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm

diff --git a/src/PVE/Network/SDN/Dhcp/Makefile b/src/PVE/Network/SDN/Dhcp/Makefile
new file mode 100644
index 0000000..1e9b6d3
--- /dev/null
+++ b/src/PVE/Network/SDN/Dhcp/Makefile
@@ -0,0 +1,8 @@
+SOURCES=Plugin.pm
+
+
+PERL5DIR=${DESTDIR}/usr/share/perl5
+
+.PHONY: install
+install:
+	for i in ${SOURCES}; do install -D -m 0644 $$i ${PERL5DIR}/PVE/Network/SDN/Dhcp/$$i; done
diff --git a/src/PVE/Network/SDN/Dhcp/Plugin.pm b/src/PVE/Network/SDN/Dhcp/Plugin.pm
new file mode 100644
index 0000000..31ca9e3
--- /dev/null
+++ b/src/PVE/Network/SDN/Dhcp/Plugin.pm
@@ -0,0 +1,76 @@
+package PVE::Network::SDN::Dhcp::Plugin;
+
+use strict;
+use warnings;
+
+use PVE::Cluster;
+use PVE::JSONSchema qw(get_standard_option);
+
+use base qw(PVE::SectionConfig);
+
+PVE::Cluster::cfs_register_file('sdn/dhcp.cfg',
+    sub { __PACKAGE__->parse_config(@_); },
+    sub { __PACKAGE__->write_config(@_); },
+);
+
+my $defaultData = {
+    propertyList => {
+	type => {
+	    description => "Plugin type.",
+	    format => 'pve-configid',
+	    type => 'string',
+	},
+	node => {
+	    type => 'array',
+	    description => 'A list of nodes where this DHCP server should be deployed',
+	    items => get_standard_option('pve-node'),
+	},
+	'lease-time' => {
+	    type => 'integer',
+	    description => 'Lifetime for the DHCP leases of this DHCP server (in seconds)',
+	    minimum => 1,
+	},
+    },
+};
+
+sub private {
+    return $defaultData;
+}
+
+sub options {
+    return {
+	node => {
+	    optional => 1,
+	},
+	'lease-time' => {
+	    optional => 1,
+	},
+    };
+}
+
+sub configure_subnet {
+    my ($class, $dhcp_config, $subnet_config, $range_config) = @_;
+    die 'implement in sub class';
+}
+
+sub before_configure {
+    my ($class, $dhcp_config) = @_;
+    die 'implement in sub class';
+}
+
+sub after_configure {
+    my ($class, $dhcp_config) = @_;
+    die 'implement in sub class';
+}
+
+sub before_regenerate {
+    my ($class) = @_;
+    die 'implement in sub class';
+}
+
+sub after_regenerate {
+    my ($class, $dhcp_config) = @_;
+    die 'implement in sub class';
+}
+
+1;
diff --git a/src/PVE/Network/SDN/Makefile b/src/PVE/Network/SDN/Makefile
index 92cfcd0..848f7d4 100644
--- a/src/PVE/Network/SDN/Makefile
+++ b/src/PVE/Network/SDN/Makefile
@@ -10,4 +10,5 @@ install:
 	make -C Zones install
 	make -C Ipams install
 	make -C Dns install
+	make -C Dhcp install

--
2.39.2




^ permalink raw reply	[flat|nested] 28+ messages in thread

* [pve-devel] [RFC pve-network 4/6] sdn: dhcp: subnet: add DHCP options to subnet configuration
  2023-09-08 13:42 [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN Stefan Hanreich
                   ` (2 preceding siblings ...)
  2023-09-08 13:43 ` [pve-devel] [RFC pve-network 3/6] sdn: dhcp: add abstract class for DHCP plugins Stefan Hanreich
@ 2023-09-08 13:43 ` Stefan Hanreich
  2023-09-11  4:03   ` DERUMIER, Alexandre
  2023-09-08 13:43 ` [pve-devel] [RFC pve-network 5/6] sdn: dhcp: add DHCP plugin for dnsmasq Stefan Hanreich
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-08 13:43 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
 src/PVE/Network/SDN/SubnetPlugin.pm | 43 +++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/src/PVE/Network/SDN/SubnetPlugin.pm b/src/PVE/Network/SDN/SubnetPlugin.pm
index 15b370f..527db4f 100644
--- a/src/PVE/Network/SDN/SubnetPlugin.pm
+++ b/src/PVE/Network/SDN/SubnetPlugin.pm
@@ -61,6 +61,34 @@ sub private {
     return $defaultData;
 }

+my $dhcp_range_fmt = {
+    server => {
+	type => 'pve-configid',
+	description => 'ID of the DHCP server responsible for managing this range',
+    },
+    'start-address' => {
+	type => 'ip',
+	description => 'Start address for the DHCP IP range',
+    },
+    'end-address' => {
+	type => 'ip',
+	description => 'End address for the DHCP IP range',
+    },
+    'lease-time' => {
+	type => 'integer',
+	description => 'Lifetime for the DHCP leases of this subnet (in seconds)',
+	minimum => 1,
+	optional => 1,
+    },
+    'dns-server' => {
+	type => 'ip',
+	description => 'IP address for the DNS server',
+	optional => 1,
+    },
+};
+
+PVE::JSONSchema::register_format('pve-sdn-dhcp-range', $dhcp_range_fmt);
+
 sub properties {
     return {
         vnet => {
@@ -84,6 +112,19 @@ sub properties {
             type => 'string', format => 'dns-name',
             description => "dns domain zone prefix  ex: 'adm' -> <hostname>.adm.mydomain.com",
         },
+	'dhcp-range' => {
+	    type => 'array',
+	    description => 'A list of DHCP ranges for this subnet',
+	    items => {
+		type => 'string',
+		format => 'pve-sdn-dhcp-range',
+	    }
+	},
+	'dhcp-dns-server' => {
+	    type => 'ip',
+	    description => 'IP address for the DNS server',
+	    optional => 1,
+	},
     };
 }

@@ -94,6 +135,8 @@ sub options {
 #	routes => { optional => 1 },
 	snat => { optional => 1 },
 	dnszoneprefix => { optional => 1 },
+	'dhcp-range' => { optional => 1 },
+	'dhcp-dns-server' => { optional => 1 },
     };
 }

--
2.39.2




^ permalink raw reply	[flat|nested] 28+ messages in thread

* [pve-devel] [RFC pve-network 5/6] sdn: dhcp: add DHCP plugin for dnsmasq
  2023-09-08 13:42 [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN Stefan Hanreich
                   ` (3 preceding siblings ...)
  2023-09-08 13:43 ` [pve-devel] [RFC pve-network 4/6] sdn: dhcp: subnet: add DHCP options to subnet configuration Stefan Hanreich
@ 2023-09-08 13:43 ` Stefan Hanreich
  2023-09-08 13:43 ` [pve-devel] [RFC pve-network 6/6] sdn: dhcp: regenerate config for DHCP servers on reload Stefan Hanreich
  2023-09-11  3:53 ` [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN DERUMIER, Alexandre
  6 siblings, 0 replies; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-08 13:43 UTC (permalink / raw)
  To: pve-devel

The plugin generates several dnsmasq configuration files from the SDN
configuration.

/etc/default/dnsmasq.<dhcp_id>
This file specifies the configuration directory for the dnsmasq
instance (/etc/dnsmasq.d/<dhcp_id>). It also sets the configuration
file to /dev/null so the default configuration from the package has
no influence on the dnsmasq configuration.

/etc/dnsmasq.d/<dhcp_id>/00-default.conf
The default configuration does several things:
* disables DNS functionality.
* makes dnsmasq listen only on the interfaces where it should provide
  DHCP (contrary to the default configuration, which listens on all
  interfaces). This is particularly important when running multiple
  instances of dnsmasq.
* sets the lease file to /var/lib/misc/dnsmasq.<dhcp_id>.leases.

/etc/dnsmasq.d/<dhcp_id>/10-<subnet_id>.conf
This file contains the subnet-specific settings (see the example below),
which usually include:
* DHCP range
* DNS server
* Default Gateway (for IPv4)
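
For the example subnet from the cover letter, the generated
10-<subnet_id>.conf would look roughly like this (a sketch derived from
the code below, showing only the first dhcp-range):

  listen-address=10.1.0.1
  dhcp-option=tag:DHCPNAT-10.1.0.0-16,option:router,10.1.0.1
  dhcp-range=set:DHCPNAT-10.1.0.0-16,10.1.0.100,10.1.0.200,86400s
  dhcp-option=tag:DHCPNAT-10.1.0.0-16,option:dns-server,10.1.0.1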

Currently, regenerating and reloading dnsmasq is very sledgehammery: it
deletes all existing subnet configuration files and disables and stops
all currently running dnsmasq instances, then enables and starts all
dnsmasq instances that have been recreated. I intend to improve this
behaviour in the future, either by getting access to the old
configuration or by using systemd targets.

This plugin currently only works for simple zones with subnets that
have a gateway configured, since I use the gateway as the listening
address for dnsmasq.

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
 debian/control                      |   1 +
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 115 ++++++++++++++++++++++++++++
 src/PVE/Network/SDN/Dhcp/Makefile   |   2 +-
 3 files changed, 117 insertions(+), 1 deletion(-)
 create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm

diff --git a/debian/control b/debian/control
index 8b720c3..4424096 100644
--- a/debian/control
+++ b/debian/control
@@ -24,6 +24,7 @@ Depends: libpve-common-perl (>= 5.0-45),
          ${misc:Depends},
          ${perl:Depends},
 Recommends: frr-pythontools (>= 8.5.1~), ifupdown2
+Suggests: dnsmasq
 Description: Proxmox VE's SDN (Software Defined Network) stack
  This package contains the Software Defined Network (tech preview) for
  Proxmox VE.
diff --git a/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
new file mode 100644
index 0000000..55d42e4
--- /dev/null
+++ b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
@@ -0,0 +1,115 @@
+package PVE::Network::SDN::Dhcp::Dnsmasq;
+
+use strict;
+use warnings;
+
+use base qw(PVE::Network::SDN::Dhcp::Plugin);
+
+use Net::IP qw(:PROC);
+use PVE::Tools qw(file_set_contents run_command);
+
+my $DNSMASQ_CONFIG_ROOT = '/etc/dnsmasq.d';
+my $DNSMASQ_DEFAULT_ROOT = '/etc/default';
+my $DNSMASQ_LEASE_ROOT = '/var/lib/misc';
+
+my $DEFAULT_LEASE_TIME = 86400;
+
+sub type {
+    return 'dnsmasq';
+}
+
+sub configure_subnet {
+    my ($class, $dhcp_config, $subnet_config, $range_config) = @_;
+
+    die "No gateway defined for subnet $subnet_config->{id}"
+	if !$subnet_config->{gateway};
+
+    my $tag = $subnet_config->{id};
+    my $lease_time = $subnet_config->{'lease-time'} || $dhcp_config->{'lease-time'} || $DEFAULT_LEASE_TIME;
+
+    my @dnsmasq_config = (
+	"listen-address=$subnet_config->{gateway}",
+    );
+
+    my $option_string;
+    if (ip_is_ipv6($subnet_config->{network})) {
+	$option_string = 'option6';
+
+	push @dnsmasq_config, "enable-ra";
+    } else {
+	$option_string = 'option';
+
+	push @dnsmasq_config, "dhcp-option=tag:$tag,$option_string:router,$subnet_config->{gateway}";
+    }
+
+
+
+    foreach my $dhcp_range (@$range_config) {
+	push @dnsmasq_config, "dhcp-range=set:$tag,$dhcp_range->{'start-address'},$dhcp_range->{'end-address'},${lease_time}s";
+
+	my $dns_server = $dhcp_range->{'dns-server'} || $subnet_config->{'dhcp-dns-server'} ;
+	push @dnsmasq_config, "dhcp-option=tag:$tag,$option_string:dns-server,$dns_server"
+	    if $dns_server;
+    }
+
+    PVE::Tools::file_set_contents(
+	"$DNSMASQ_CONFIG_ROOT/$dhcp_config->{id}/10-$subnet_config->{id}.conf",
+	join("\n", @dnsmasq_config)
+    );
+}
+
+sub before_configure {
+    my ($class, $dhcp_config) = @_;
+
+    my $config_directory = "$DNSMASQ_CONFIG_ROOT/$dhcp_config->{id}";
+
+    mkdir($config_directory, 0755) if !-d $config_directory;
+
+    my $default_config = <<CFG;
+CONFIG_DIR=$config_directory,.dpkg-dist,.dpkg-old,.dpkg-new,.pve-new
+DNSMASQ_OPTS="--conf-file=/dev/null"
+CFG
+
+    PVE::Tools::file_set_contents(
+	"$DNSMASQ_DEFAULT_ROOT/dnsmasq.$dhcp_config->{id}",
+	$default_config
+    );
+
+    my $default_dnsmasq_config = <<CFG;
+except-interface=lo
+bind-dynamic
+no-resolv
+no-hosts
+dhcp-leasefile=$DNSMASQ_LEASE_ROOT/dnsmasq.$dhcp_config->{id}.leases
+CFG
+
+    PVE::Tools::file_set_contents(
+	"$config_directory/00-default.conf",
+	$default_dnsmasq_config
+    );
+
+    unlink glob "$config_directory/10-*.conf";
+}
+
+sub after_configure {
+    my ($class, $dhcp_config) = @_;
+
+    my $service_name = "dnsmasq\@$dhcp_config->{id}";
+
+    PVE::Tools::run_command(['systemctl', 'enable', $service_name]);
+    PVE::Tools::run_command(['systemctl', 'restart', $service_name]);
+}
+
+sub before_regenerate {
+    my ($class) = @_;
+
+    PVE::Tools::run_command(['systemctl', 'stop', "dnsmasq@*"]);
+    PVE::Tools::run_command(['systemctl', 'disable', 'dnsmasq@']);
+}
+
+sub after_regenerate {
+    my ($class) = @_;
+    # noop
+}
+
+1;
diff --git a/src/PVE/Network/SDN/Dhcp/Makefile b/src/PVE/Network/SDN/Dhcp/Makefile
index 1e9b6d3..6546513 100644
--- a/src/PVE/Network/SDN/Dhcp/Makefile
+++ b/src/PVE/Network/SDN/Dhcp/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Plugin.pm
+SOURCES=Plugin.pm Dnsmasq.pm


 PERL5DIR=${DESTDIR}/usr/share/perl5
--
2.39.2




^ permalink raw reply	[flat|nested] 28+ messages in thread

* [pve-devel] [RFC pve-network 6/6] sdn: dhcp: regenerate config for DHCP servers on reload
  2023-09-08 13:42 [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN Stefan Hanreich
                   ` (4 preceding siblings ...)
  2023-09-08 13:43 ` [pve-devel] [RFC pve-network 5/6] sdn: dhcp: add DHCP plugin for dnsmasq Stefan Hanreich
@ 2023-09-08 13:43 ` Stefan Hanreich
  2023-09-11  3:53 ` [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN DERUMIER, Alexandre
  6 siblings, 0 replies; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-08 13:43 UTC (permalink / raw)
  To: pve-devel

During config regeneration, the SDN configuration is parsed in one pass
before generating the configuration files via the plugins, in order to
avoid having to parse property strings in the subnet configuration
multiple times.

Then we call the respective hooks of the plugin responsible for
configuring a DHCP instance. The plugin should then handle the config
generation accordingly.
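
The intermediate structure produced by parse_config() is keyed by DHCP
server ID and subnet ID, roughly like this (a sketch based on the code
below, using the example IDs from the cover letter):

  {
      nat => {
          'DHCPNAT-10.1.0.0-16' => [
              { server => 'nat', 'start-address' => '10.1.0.100',
                'end-address' => '10.1.0.200', 'lease-time' => 86400 },
              ...
          ],
      },
  }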

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
 src/PVE/Network/SDN.pm       |  11 +++-
 src/PVE/Network/SDN/Dhcp.pm  | 122 +++++++++++++++++++++++++++++++++++
 src/PVE/Network/SDN/Makefile |   3 +-
 3 files changed, 133 insertions(+), 3 deletions(-)
 create mode 100644 src/PVE/Network/SDN/Dhcp.pm

diff --git a/src/PVE/Network/SDN.pm b/src/PVE/Network/SDN.pm
index 057034f..952f9dc 100644
--- a/src/PVE/Network/SDN.pm
+++ b/src/PVE/Network/SDN.pm
@@ -12,6 +12,7 @@ use PVE::Network::SDN::Vnets;
 use PVE::Network::SDN::Zones;
 use PVE::Network::SDN::Controllers;
 use PVE::Network::SDN::Subnets;
+use PVE::Network::SDN::Dhcp;

 use PVE::Tools qw(extract_param dir_glob_regex run_command);
 use PVE::Cluster qw(cfs_read_file cfs_write_file cfs_lock_file);
@@ -149,13 +150,15 @@ sub commit_config {
     my $zones_cfg = PVE::Network::SDN::Zones::config();
     my $controllers_cfg = PVE::Network::SDN::Controllers::config();
     my $subnets_cfg = PVE::Network::SDN::Subnets::config();
+    my $dhcp_cfg = PVE::Network::SDN::Dhcp::config();

     my $vnets = { ids => $vnets_cfg->{ids} };
     my $zones = { ids => $zones_cfg->{ids} };
     my $controllers = { ids => $controllers_cfg->{ids} };
     my $subnets = { ids => $subnets_cfg->{ids} };
+    my $dhcp = { ids => $dhcp_cfg->{ids} };

-     $cfg = { version => $version, vnets => $vnets, zones => $zones, controllers => $controllers, subnets => $subnets };
+    $cfg = { version => $version, vnets => $vnets, zones => $zones, controllers => $controllers, subnets => $subnets, dhcp => $dhcp };

     cfs_write_file($running_cfg, $cfg);
 }
@@ -231,6 +234,12 @@ sub generate_controller_config {
     PVE::Network::SDN::Controllers::reload_controller() if $reload;
 }

+sub generate_dhcp_config {
+    my ($reload) = @_;
+
+    PVE::Network::SDN::Dhcp::regenerate_config($reload);
+}
+
 sub encode_value {
     my ($type, $key, $value) = @_;

diff --git a/src/PVE/Network/SDN/Dhcp.pm b/src/PVE/Network/SDN/Dhcp.pm
new file mode 100644
index 0000000..8c8a437
--- /dev/null
+++ b/src/PVE/Network/SDN/Dhcp.pm
@@ -0,0 +1,122 @@
+package PVE::Network::SDN::Dhcp;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_read_file);
+
+use PVE::Network::SDN;
+use PVE::Network::SDN::SubnetPlugin;
+use PVE::Network::SDN::Dhcp qw(config);
+use PVE::Network::SDN::Subnets qw(sdn_subnets_config config);
+use PVE::Network::SDN::Dhcp::Plugin;
+use PVE::Network::SDN::Dhcp::Dnsmasq;
+use PVE::JSONSchema qw(parse_property_string);
+
+use PVE::INotify qw(nodename);
+
+PVE::Network::SDN::Dhcp::Plugin->init();
+
+PVE::Network::SDN::Dhcp::Dnsmasq->register();
+PVE::Network::SDN::Dhcp::Dnsmasq->init();
+
+sub config {
+    return cfs_read_file('sdn/dhcp.cfg');
+}
+
+sub parse_config {
+    my ($dhcps, $subnets, $nodename) = @_;
+
+    my %parsed_config;
+
+    for my $subnet_id (keys %{$subnets->{ids}}) {
+	my $subnet_config = PVE::Network::SDN::Subnets::sdn_subnets_config($subnets, $subnet_id);
+
+	next if !$subnet_config->{'dhcp-range'};
+
+	foreach my $element (@{$subnet_config->{'dhcp-range'}}) {
+	    my $dhcp_range = eval { parse_property_string('pve-sdn-dhcp-range', $element) };
+
+	    if ($@ || !$dhcp_range) {
+		warn "Unable to parse dhcp-range string: $element\n";
+		warn "$@\n" if $@;
+		next;
+	    }
+
+	    my $dhcp_config = $dhcps->{ids}->{$dhcp_range->{server}};
+
+	    if (!$dhcp_config) {
+		warn "Cannot find configuration for DHCP server $dhcp_range->{server}";
+		next;
+	    }
+
+	    next if $dhcp_config->{node} && !grep(/^$nodename$/, @{$dhcp_config->{node}});
+
+	    push @{$parsed_config{$dhcp_range->{server}}{$subnet_id}}, $dhcp_range;
+	}
+    }
+
+    return \%parsed_config;
+}
+
+sub regenerate_config {
+    my ($reload) = @_;
+
+    my $dhcps = PVE::Network::SDN::Dhcp::config();
+    my $subnets = PVE::Network::SDN::Subnets::config();
+    my $nodename = PVE::INotify::nodename();
+    my $parsed_config = parse_config($dhcps, $subnets, $nodename);
+
+    my $plugins = PVE::Network::SDN::Dhcp::Plugin->lookup_types();
+
+    foreach my $plugin_name (@$plugins) {
+	my $plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($plugin_name);
+
+	eval { $plugin->before_regenerate() };
+	die "Could not run before_regenerate for DHCP plugin $plugin_name $@\n" if $@;
+    }
+
+    for my $dhcp_id (keys %$parsed_config) {
+	my $parsed_subnets = $parsed_config->{$dhcp_id};
+
+	next if !%$parsed_subnets;
+
+	my $dhcp_config = $dhcps->{ids}->{$dhcp_id};
+	$dhcp_config->{id} = $dhcp_id;
+
+	my $plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($dhcp_config->{type});
+
+	eval { $plugin->before_configure($dhcp_config) };
+
+	if ($@) {
+	    warn "Could not run before_configure for DHCP server $dhcp_id $@\n" if $@;
+	    next;
+	}
+
+	for my $subnet_id (keys %$parsed_subnets) {
+	    my $subnet_config = PVE::Network::SDN::Subnets::sdn_subnets_config($subnets, $subnet_id);
+	    $subnet_config->{id} = $subnet_id;
+
+	    eval {
+		$plugin->configure_subnet(
+		    $dhcp_config,
+		    $subnet_config,
+		    $parsed_subnets->{$subnet_id},
+		);
+	    };
+	    warn "Could not configure Subnet $subnet_id: $@\n" if $@;
+	}
+
+	eval { $plugin->after_configure($dhcp_config) };
+	warn "Could not run after_configure for DHCP server $dhcp_id $@\n" if $@;
+    }
+
+    foreach my $plugin_name (@$plugins) {
+	my $plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($plugin_name);
+
+	eval { $plugin->after_regenerate() };
+	warn "Could not run after_regenerate for DHCP plugin $plugin_name $@\n" if $@;
+    }
+}
+
+1;
diff --git a/src/PVE/Network/SDN/Makefile b/src/PVE/Network/SDN/Makefile
index 848f7d4..86c3b9d 100644
--- a/src/PVE/Network/SDN/Makefile
+++ b/src/PVE/Network/SDN/Makefile
@@ -1,5 +1,4 @@
-SOURCES=Vnets.pm VnetPlugin.pm Zones.pm Controllers.pm Subnets.pm SubnetPlugin.pm Ipams.pm Dns.pm
-
+SOURCES=Vnets.pm VnetPlugin.pm Zones.pm Controllers.pm Subnets.pm SubnetPlugin.pm Ipams.pm Dns.pm Dhcp.pm

 PERL5DIR=${DESTDIR}/usr/share/perl5

--
2.39.2




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-08 13:42 [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN Stefan Hanreich
                   ` (5 preceding siblings ...)
  2023-09-08 13:43 ` [pve-devel] [RFC pve-network 6/6] sdn: dhcp: regenerate config for DHCP servers on reload Stefan Hanreich
@ 2023-09-11  3:53 ` DERUMIER, Alexandre
  2023-09-13  8:18   ` DERUMIER, Alexandre
  2023-09-13  8:54   ` Stefan Hanreich
  6 siblings, 2 replies; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-11  3:53 UTC (permalink / raw)
  To: pve-devel

Hi,

I think we should consider how we want to assign IPs to the VMs before
continuing with the implementation.

I think there are 2 models:

1)

- we want the DHCP server to assign IPs && leases itself from the
configured subnets/ranges.

That means that leases need to be shared across nodes (within the same
cluster maybe with /etc/pve tricks, but in the real world it should also
work across multiple clusters, as it's not uncommon to share subnets
between different clusters, public networks, ...).

So we don't want 2 different VMs, starting at the same time on 2
different clusters, to receive the same IP (so the DHCP servers need to
use some kind of central lock, ...).


2)

The other way (my preferred way) could be to use IPAM (where we already
have the local IPAM, or external IPAMs like netbox/phpipam for sharing
between multiple clusters).


The IP is reserved in IPAM (automatically finding the next free IP at
VM creation for example, or manually in the GUI, or maybe at VM start if
we want ephemeral IPs), then registered in DNS, and the DHCP server
config is generated with MAC-IP reservations (for the DHCP server config
generation, it could be a daemon polling the IPAM database for changes,
for example).

Like this, there is no need to handle lease sharing, so it can work with
any DHCP server.




What do you think about it?


Le vendredi 08 septembre 2023 à 15:42 +0200, Stefan Hanreich a écrit :
> This patch series adds support for automatically deploying dnsmasq as
> a DHCP
> server to a simple SDN Zone.
> 
> While certainly not 100% polished on some ends (looking at restarting
> systemd
> services in particular), the general idea behind the mechanism shows.
> I wanted
> to gather some feedback on how I approached designing the plugins and
> the
> config regeneration process before comitting to this design by
> creating an API
> and UI around it.
> 
> For your testing convenience I've provided deb packages on our share:
>   /path/to/nasi/iso/packages/shan-sdn-dhcp
> 
> You need to install dnsmasq (and disable it afterwards):
> 
>   apt install dnsmasq && systemctl disable --now dnsmasq
> 
> 
> You can use the following example configuration for deploying a DHCP
> server in
> a SDN subnet:
> 
> /etc/pve/sdn/dhcp.cfg:
> 
>   dnsmasq: nat
> 
> 
> /etc/pve/sdn/zones.cfg:
> 
>   simple: DHCPNAT
>           ipam pve
> 
> 
> /etc/pve/sdn/vnets.cfg:
> 
>   vnet: dhcpnat
>           zone DHCPNAT
> 
> 
> /etc/pve/sdn/subnets.cfg:
> 
>   subnet: DHCPNAT-10.1.0.0-16
>           vnet dhcpnat
>           dhcp-dns-server 10.1.0.1
>           dhcp-range server=nat,start-address=10.1.0.100,end-
> address=10.1.0.200,lease-time=86400
>           dhcp-range server=nat,start-address=10.1.1.100,end-
> address=10.1.1.200,lease-time=86400,dns-server=10.1.0.2
>           gateway 10.1.0.1
>           snat 1
> 
> 
> Then apply the SDN configuration:
> 
>   pvesh set /cluster/sdn
> 
> 
> Be careful that after configuring dhcp-range you do not save the
> subnet config
> from the Web UI, since the dhcp-range line will vanish from the
> config.
> 
> 
> 
> pve-cluster:
> 
> Stefan Hanreich (1):
>   cluster files: add dhcp.cfg
> 
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> 
> pve-manager:
> 
> Stefan Hanreich (1):
>   sdn: regenerate DHCP config on reload
> 
>  PVE/API2/Network.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
> 
> pve-network:
> 
> Stefan Hanreich (4):
>   sdn: dhcp: add abstract class for DHCP plugins
>   sdn: dhcp: subnet: add DHCP options to subnet configuration
>   sdn: dhcp: add DHCP plugin for dnsmasq
>   sdn: dhcp: regenerate config for DHCP servers on reload
> 
>  debian/control                      |   1 +
>  src/PVE/Network/SDN.pm              |  11 ++-
>  src/PVE/Network/SDN/Dhcp.pm         | 122
> ++++++++++++++++++++++++++++
>  src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 115 ++++++++++++++++++++++++++
>  src/PVE/Network/SDN/Dhcp/Makefile   |   8 ++
>  src/PVE/Network/SDN/Dhcp/Plugin.pm  |  76 +++++++++++++++++
>  src/PVE/Network/SDN/Makefile        |   4 +-
>  src/PVE/Network/SDN/SubnetPlugin.pm |  43 ++++++++++
>  8 files changed, 377 insertions(+), 3 deletions(-)
>  create mode 100644 src/PVE/Network/SDN/Dhcp.pm
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
>  create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm
> 
> 
> Summary over all repositories:
>   11 files changed, 380 insertions(+), 3 deletions(-)
> 
> --
> murpp v0.4.0
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC pve-network 4/6] sdn: dhcp: subnet: add DHCP options to subnet configuration
  2023-09-08 13:43 ` [pve-devel] [RFC pve-network 4/6] sdn: dhcp: subnet: add DHCP options to subnet configuration Stefan Hanreich
@ 2023-09-11  4:03   ` DERUMIER, Alexandre
  2023-09-13  8:37     ` Stefan Hanreich
  0 siblings, 1 reply; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-11  4:03 UTC (permalink / raw)
  To: pve-devel

I think that some common options could also be declared at subnet level
or even at zone level.

(I'm thinking about static routes for example; they could be defined at
subnet level, and maybe dnsserver/ntpserver could be defined at zone
level, ...)

This would avoid having to redefine them each time for each range.

So maybe be able to define them at the upper level, and be able to
override them at range level.



Le vendredi 08 septembre 2023 à 15:43 +0200, Stefan Hanreich a écrit :
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
>  src/PVE/Network/SDN/SubnetPlugin.pm | 43
> +++++++++++++++++++++++++++++
>  1 file changed, 43 insertions(+)
> 
> diff --git a/src/PVE/Network/SDN/SubnetPlugin.pm
> b/src/PVE/Network/SDN/SubnetPlugin.pm
> index 15b370f..527db4f 100644
> --- a/src/PVE/Network/SDN/SubnetPlugin.pm
> +++ b/src/PVE/Network/SDN/SubnetPlugin.pm
> @@ -61,6 +61,34 @@ sub private {
>      return $defaultData;
>  }
> 
> +my $dhcp_range_fmt = {
> +    server => {
> +       type => 'pve-configid',
> +       description => 'ID of the DHCP server responsible for
> managing this range',
> +    },
> +    'start-address' => {
> +       type => 'ip',
> +       description => 'Start address for the DHCP IP range',
> +    },
> +    'end-address' => {
> +       type => 'ip',
> +       description => 'End address for the DHCP IP range',
> +    },
> +    'lease-time' => {
> +       type => 'integer',
> +       description => 'Lifetime for the DHCP leases of this subnet
> (in seconds)',
> +       minimum => 1,
> +       optional => 1,
> +    },
> +    'dns-server' => {
> +       type => 'ip',
> +       description => 'IP address for the DNS server',
> +       optional => 1,
> +    },
> +};
> +
> +PVE::JSONSchema::register_format('pve-sdn-dhcp-range',
> $dhcp_range_fmt);
> +
>  sub properties {
>      return {
>          vnet => {
> @@ -84,6 +112,19 @@ sub properties {
>              type => 'string', format => 'dns-name',
>              description => "dns domain zone prefix  ex: 'adm' ->
> <hostname>.adm.mydomain.com",
>          },
> +       'dhcp-range' => {
> +           type => 'array',
> +           description => 'A list of DHCP ranges for this subnet',
> +           items => {
> +               type => 'string',
> +               format => 'pve-sdn-dhcp-range',
> +           }
> +       },
> +       'dhcp-dns-server' => {
> +           type => 'ip',
> +           description => 'IP address for the DNS server',
> +           optional => 1,
> +       },
>      };
>  }
> 
> @@ -94,6 +135,8 @@ sub options {
>  #      routes => { optional => 1 },
>         snat => { optional => 1 },
>         dnszoneprefix => { optional => 1 },
> +       'dhcp-range' => { optional => 1 },
> +       'dhcp-dns-server' => { optional => 1 },
>      };
>  }
> 
> --
> 2.39.2
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-11  3:53 ` [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN DERUMIER, Alexandre
@ 2023-09-13  8:18   ` DERUMIER, Alexandre
  2023-09-13  8:54   ` Stefan Hanreich
  1 sibling, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-13  8:18 UTC (permalink / raw)
  To: pve-devel

Hi,

I'm going to do a POC with kea dhcp and host reservations

It seems possible to dynamically inject reservations without needing to
reload the daemon (and only 1 daemon is needed for all
interfaces/bridges):
https://ftp.iij.ad.jp/pub/network/isc/kea/1.5.0-P1/doc/kea-guide.html#host-cmds


I'll try to do something like:

- at VM create (or NIC create), create a reservation in IPAM (the code
is already here) if the user wants a persistent IP (maybe add something
like: net: ....,dhcp=(unmanaged|persistent|ephemeral))

- at VM start,
   if dhcp=persistent, look in IPAM for the reserved IP address,
   if dhcp=ephemeral, allocate a new IP in IPAM,
   and inject the host reservation into the local kea.

- at VM stop, remove the reservation from the local kea;
   if dhcp=ephemeral, remove the IP from IPAM

- at VM destroy (or NIC destroy), if dhcp=persistent, remove the IP from
IPAM




About kea, it also seems possible to allocate /32 leases with some
hooks, which could be useful too for users with routed setups:
https://github.com/zorun/kea-hook-runscript/blob/master/examples/slash32_leases/README.md
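
For reference, injecting a reservation via the kea control socket would
look something like this (a sketch following the linked kea guide; it
requires the host_cmds hook library, and the values are hypothetical):

  {
      "command": "reservation-add",
      "arguments": {
          "reservation": {
              "subnet-id": 1,
              "hw-address": "bc:24:11:00:00:01",
              "ip-address": "10.1.0.150"
          }
      }
  }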

Le lundi 11 septembre 2023 à 03:53 +0000, DERUMIER, Alexandre a écrit :
> Hi,
> 
> I think we should think how we want to attribute ips to the vms
> before
> continue the implementation.
> 
> I think they are 2 models:
> 
> 1)
> 
> - we want that dhcp server attribute itself ips && leases from the
> subnets/ranges configured.
> 
> That mean that leases need to be shared across nodes.  (from the same
> cluster maybe with /etc/pve tricks,   but in real world, it should
> also
> works across multiple clusters, as it's not uncommon to shared
> subnets
> in differents cluster, public network,...)
> 
> So we don't have that 2 differents vms starting on the same time on 2
> differents cluster, receive the same ips. (so dhcp servers need to
> use
> some kind of central lock,...)
> 
> 
> 2)
> 
> The other way (my preferred way), could be to use ipam. (where we
> already have local ipam, or external ipams like netbox/phpipam for
> sharing between multiple cluster).
> 
> 
> The ip is reserved in ipam  (automatic find next free ip at vm
> creation
> for example, or manually in the gui, or maybe at vm start if we want
> ephemeral ip), then registered dns, 
> and generated dhcp server config with mac-ip reserversation. (for
> dhcp
> server config generation, it could be a daemon pooling the ipam
> database change for example)
> 
> Like this, no need to handle lease sharing, so it can work with any
> dhcp server.
> 
> 
> 
> 
> What do you think about it ?
> 
> 
> Le vendredi 08 septembre 2023 à 15:42 +0200, Stefan Hanreich a
> écrit :
> > This patch series adds support for automatically deploying dnsmasq
> > as
> > a DHCP
> > server to a simple SDN Zone.
> > 
> > While certainly not 100% polished on some ends (looking at
> > restarting
> > systemd
> > services in particular), the general idea behind the mechanism
> > shows.
> > I wanted
> > to gather some feedback on how I approached designing the plugins
> > and
> > the
> > config regeneration process before comitting to this design by
> > creating an API
> > and UI around it.
> > 
> > For your testing convenience I've provided deb packages on our
> > share:
> >   /path/to/nasi/iso/packages/shan-sdn-dhcp
> > 
> > You need to install dnsmasq (and disable it afterwards):
> > 
> >   apt install dnsmasq && systemctl disable --now dnsmasq
> > 
> > 
> > You can use the following example configuration for deploying a
> > DHCP
> > server in
> > a SDN subnet:
> > 
> > /etc/pve/sdn/dhcp.cfg:
> > 
> >   dnsmasq: nat
> > 
> > 
> > /etc/pve/sdn/zones.cfg:
> > 
> >   simple: DHCPNAT
> >           ipam pve
> > 
> > 
> > /etc/pve/sdn/vnets.cfg:
> > 
> >   vnet: dhcpnat
> >           zone DHCPNAT
> > 
> > 
> > /etc/pve/sdn/subnets.cfg:
> > 
> >   subnet: DHCPNAT-10.1.0.0-16
> >           vnet dhcpnat
> >           dhcp-dns-server 10.1.0.1
> >           dhcp-range server=nat,start-address=10.1.0.100,end-
> > address=10.1.0.200,lease-time=86400
> >           dhcp-range server=nat,start-address=10.1.1.100,end-
> > address=10.1.1.200,lease-time=86400,dns-server=10.1.0.2
> >           gateway 10.1.0.1
> >           snat 1
> > 
> > 
> > Then apply the SDN configuration:
> > 
> >   pvesh set /cluster/sdn
> > 
> > 
> > Be careful that after configuring dhcp-range you do not save the
> > subnet config
> > from the Web UI, since the dhcp-range line will vanish from the
> > config.
> > 
> > 
> > 
> > pve-cluster:
> > 
> > Stefan Hanreich (1):
> >   cluster files: add dhcp.cfg
> > 
> >  src/PVE/Cluster.pm  | 1 +
> >  src/pmxcfs/status.c | 1 +
> >  2 files changed, 2 insertions(+)
> > 
> > 
> > pve-manager:
> > 
> > Stefan Hanreich (1):
> >   sdn: regenerate DHCP config on reload
> > 
> >  PVE/API2/Network.pm | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > 
> > pve-network:
> > 
> > Stefan Hanreich (4):
> >   sdn: dhcp: add abstract class for DHCP plugins
> >   sdn: dhcp: subnet: add DHCP options to subnet configuration
> >   sdn: dhcp: add DHCP plugin for dnsmasq
> >   sdn: dhcp: regenerate config for DHCP servers on reload
> > 
> >  debian/control                      |   1 +
> >  src/PVE/Network/SDN.pm              |  11 ++-
> >  src/PVE/Network/SDN/Dhcp.pm         | 122
> > ++++++++++++++++++++++++++++
> >  src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 115
> > ++++++++++++++++++++++++++
> >  src/PVE/Network/SDN/Dhcp/Makefile   |   8 ++
> >  src/PVE/Network/SDN/Dhcp/Plugin.pm  |  76 +++++++++++++++++
> >  src/PVE/Network/SDN/Makefile        |   4 +-
> >  src/PVE/Network/SDN/SubnetPlugin.pm |  43 ++++++++++
> >  8 files changed, 377 insertions(+), 3 deletions(-)
> >  create mode 100644 src/PVE/Network/SDN/Dhcp.pm
> >  create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
> >  create mode 100644 src/PVE/Network/SDN/Dhcp/Makefile
> >  create mode 100644 src/PVE/Network/SDN/Dhcp/Plugin.pm
> > 
> > 
> > Summary over all repositories:
> >   11 files changed, 380 insertions(+), 3 deletions(-)
> > 
> > --
> > murpp v0.4.0
> > 
> > 
> > _______________________________________________
> > pve-devel mailing list
> > pve-devel@lists.proxmox.com
> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> > 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC pve-network 4/6] sdn: dhcp: subnet: add DHCP options to subnet configuration
  2023-09-11  4:03   ` DERUMIER, Alexandre
@ 2023-09-13  8:37     ` Stefan Hanreich
  0 siblings, 0 replies; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-13  8:37 UTC (permalink / raw)
  To: Proxmox VE development discussion, DERUMIER, Alexandre


On 9/11/23 06:03, DERUMIER, Alexandre wrote:
> I think that some common options could also be declared at subnet level
> or even at zone level.
> 
> (I'm think about static routes for example, they could be defined at
> subnet level,   maybe dnsserver,ntpserver could be defined at zone
> level, ....)
> 
> to avoid to redefined them each time for each range.
> 
> 
> So maybe be able to defined them at uppper level, and be able to
> override them at range level.
> 

Yes, I was already looking at all the options DHCP provides - apparently 
there are a lot. It would make sense to implement at least the most 
common ones this way (as I did with dnsserver for now). It might also 
make sense to provide the option to take those values from the host 
automatically.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-11  3:53 ` [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN DERUMIER, Alexandre
  2023-09-13  8:18   ` DERUMIER, Alexandre
@ 2023-09-13  8:54   ` Stefan Hanreich
  2023-09-13  9:26     ` DERUMIER, Alexandre
  2023-09-13 11:37     ` Thomas Lamprecht
  1 sibling, 2 replies; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-13  8:54 UTC (permalink / raw)
  To: Proxmox VE development discussion, DERUMIER, Alexandre

Sorry for my late reply, I was a bit busy the last two days and I also 
wanted some time to think about your suggestions.

On 9/11/23 05:53, DERUMIER, Alexandre wrote:
> Hi,
> 
> I think we should think how we want to attribute ips to the vms before
> continue the implementation. >
> I think they are 2 models:
> 
> 1)
> 
> - we want that dhcp server attribute itself ips && leases from the
> subnets/ranges configured.
> 
> That mean that leases need to be shared across nodes.  (from the same
> cluster maybe with /etc/pve tricks,   but in real world, it should also
> works across multiple clusters, as it's not uncommon to shared subnets
> in differents cluster, public network,...)
> 
> So we don't have that 2 differents vms starting on the same time on 2
> differents cluster, receive the same ips. (so dhcp servers need to use
> some kind of central lock,...)
>

This is also something I have thought about, but I assume dnsmasq is
not really built with multiple instances accessing the same lease file
in mind.

This problem would be solved by using distributed DHCP servers like
kea. kea, on the other hand, has the issue that we need to set up a SQL
database or other external storage. Alternatively, we would need to
write a new backend for kea that integrates with our pmxcfs.

This is partly why I think Thomas mentioned implementing our own DHCP 
server, where we have the flexibility of handling things as we see fit.

Then we can just recommend the dnsmasq plugin for simple setups (e.g. 
single node setups), while more advanced setups should opt for other 
DHCP backends.

> 
> 2)
> 
> The other way (my preferred way), could be to use ipam. (where we
> already have local ipam, or external ipams like netbox/phpipam for
> sharing between multiple cluster).
> 
> 
> The ip is reserved in ipam  (automatic find next free ip at vm creation
> for example, or manually in the gui, or maybe at vm start if we want
> ephemeral ip), then registered dns,
> and generated dhcp server config with mac-ip reserversation. (for dhcp
> server config generation, it could be a daemon pooling the ipam
> database change for example)
> 
> Like this, no need to handle lease sharing, so it can work with any
> dhcp server.
> 

Implementing this via IPAM plugins seems like a good idea, but if we
want to use distributed DHCP servers like kea (or our own
implementation), this might not be needed. It also adds quite a bit of
complexity.

With dnsmasq there is even the possibility of running scripts (via 
--dhcp-script, see the docs [1]) when a lease is added / changed / 
deleted. But as far as I can tell this can not be used to override the 
IP that dnsmasq provides via DHCP, so it is probably not really useful 
for our use-case.
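
(For reference, dnsmasq invokes such a script once per lease event,
roughly as documented in the man page:

  /path/to/script add|old|del <MAC address> <IP address> [<hostname>]

so it is notification-only, which matches the limitation above.)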

------

Another method that I had in mind was providing a DHCP forwarding
plugin that proxies the DHCP requests to another DHCP server (which can
then even be outside the cluster). This way there is only one DHCP
server keeping track of the leases, and you avoid the issue of having to
share a lease database / use IPAM. So, for instance, you have a DHCP
server running on one node and the other nodes just proxy their requests
to it.

I was also thinking we could implement setting the IP for a specific VM 
on interfaces where we have a DHCP server, since we can then just 
provide fixed IPs for specific MAC-addresses. This could be quite 
convenient.



[1] https://thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13  8:54   ` Stefan Hanreich
@ 2023-09-13  9:26     ` DERUMIER, Alexandre
  2023-09-13 11:37     ` Thomas Lamprecht
  1 sibling, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-13  9:26 UTC (permalink / raw)
  To: pve-devel, s.hanreich

Le mercredi 13 septembre 2023 à 10:54 +0200, Stefan Hanreich a écrit :
> Sorry for my late reply, I was a bit busy the last two days and I
> also 
> wanted some time to think about your suggestions.
> 
> On 9/11/23 05:53, DERUMIER, Alexandre wrote:
> > Hi,
> > 
> > I think we should think how we want to attribute ips to the vms
> > before
> > continue the implementation. >
> > I think they are 2 models:
> > 
> > 1)
> > 
> > - we want that dhcp server attribute itself ips && leases from the
> > subnets/ranges configured.
> > 
> > That mean that leases need to be shared across nodes.  (from the
> > same
> > cluster maybe with /etc/pve tricks,   but in real world, it should
> > also
> > works across multiple clusters, as it's not uncommon to shared
> > subnets
> > in differents cluster, public network,...)
> > 
> > So we don't have that 2 differents vms starting on the same time on
> > 2
> > differents cluster, receive the same ips. (so dhcp servers need to
> > use
> > some kind of central lock,...)
> > 
> 
> This is also something I have thought about, but I assume dnsmasq is
> not 
> really built in mind with multiple instances accessing the same
> leases file.
> 
> This problem would be solved by using distributed DHCP servers like
> kea. 
> kea on the other hand has the issue that it we need to set up a SQL 
> database or other external storage. Alternatively we need to write a
> new 
> backend for kea that integrates with our pmxcfs.

using pmxcfs could be great for 1 cluster, but if you have multiple
clusters sharing the same subnet it won't work.

Maybe, for cross-cluster, only IP reservations should be used, and for
(dynamic|ephemeral) IPs a subnet specific to the cluster?


> 
> This is partly why I think Thomas mentioned implementing our own DHCP
> server, where we have the flexibility of handling things as we see
> fit.
> 
> Then we can just recommend the dnsmasq plugin for simple setups (e.g.
> single node setups), while more advanced setups should opt for other 
> DHCP backends.
> 
> > 
> > 2)
> > 
> > The other way (my preferred way), could be to use ipam. (where we
> > already have local ipam, or external ipams like netbox/phpipam for
> > sharing between multiple cluster).
> > 
> > 
> > The ip is reserved in ipam  (automatic find next free ip at vm
> > creation
> > for example, or manually in the gui, or maybe at vm start if we
> > want
> > ephemeral ip), then registered dns,
> > and generated dhcp server config with mac-ip reserversation. (for
> > dhcp
> > server config generation, it could be a daemon pooling the ipam
> > database change for example)
> > 
> > Like this, no need to handle lease sharing, so it can work with any
> > dhcp server.
> > 
> 
> Implementing this via IPAM plugins seems like a good idea, but if we 
> want to use distributed DHCP servers like kea (or our own 
> implementation) then this might not be needed in those cases. It also
> adds quite a bit of complexity.
> 
> With dnsmasq there is even the possibility of running scripts (via 
> --dhcp-script, see the docs [1]) when a lease is added / changed / 
> deleted. But as far as I can tell this can not be used to override
> the 
> IP that dnsmasq provides via DHCP, so it is probably not really
> useful 
> for our use-case.
> 
> ------

(I have sent another mail with more detail of what I was thinking to
implement)
> 

> Another method that I had in mind was providing a DHCP forwarding
> plugin 
> that proxies the DHCP requests to another DHCP server (that can then 
> even be outside the cluster). This way there is only one DHCP server 
> that handles keeping track of the leases and you do not have the
> issue 
> of having to handle sharing a lease database / using IPAM. So, for 
> instance, you have a DHCP server running on one node and the other
> nodes 
> just proxy their requests to the one DHCP server.
> 
> I was also thinking we could implement setting the IP for a specific
> VM 
> on interfaces where we have a DHCP server, since we can then just 
> provide fixed IPs for specific MAC-addresses. This could be quite 
> convenient.
> 
> 
I'm always a little bit afraid of using a central DHCP (or a couple in
HA) for my production, because a problem could occur on the DHCP when
VMs are starting after a major outage, for example.

Personally, I'm still using static IPs in my VMs for this.

(And also, I'm using multiple datacenters with public IP ranges, so 1
central DHCP is really not possible.)





> 
> [1] https://thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13  8:54   ` Stefan Hanreich
  2023-09-13  9:26     ` DERUMIER, Alexandre
@ 2023-09-13 11:37     ` Thomas Lamprecht
  2023-09-13 11:43       ` DERUMIER, Alexandre
  2023-09-13 11:50       ` Stefan Hanreich
  1 sibling, 2 replies; 28+ messages in thread
From: Thomas Lamprecht @ 2023-09-13 11:37 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stefan Hanreich, DERUMIER, Alexandre

Am 13/09/2023 um 10:54 schrieb Stefan Hanreich:
> With dnsmasq there is even the possibility of running scripts (via --dhcp-script, see the docs [1]) when a lease is added / changed / deleted. But as far as I can tell this can not be used to override the IP that dnsmasq provides via DHCP, so it is probably not really useful for our use-case.


Getting a IP from IPAM and generating a simple MAC to IP mapping
actively on guest start, i.e., before the VM or CT would actually
run, would be enough though.

This sounds relatively simple and would avoid most issues without
backing us into a corner for possible more complex/flexible solutions
in the future.
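
With dnsmasq, such a mapping would boil down to one generated line per
guest NIC, e.g. with a hypothetical MAC and IP:

  dhcp-host=bc:24:11:00:00:01,10.1.0.150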




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 11:37     ` Thomas Lamprecht
@ 2023-09-13 11:43       ` DERUMIER, Alexandre
  2023-09-13 11:50       ` Stefan Hanreich
  1 sibling, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-13 11:43 UTC (permalink / raw)
  To: pve-devel, t.lamprecht, s.hanreich

Le mercredi 13 septembre 2023 à 13:37 +0200, Thomas Lamprecht a écrit :
> Am 13/09/2023 um 10:54 schrieb Stefan Hanreich:
> > With dnsmasq there is even the possibility of running scripts (via
> > --dhcp-script, see the docs [1]) when a lease is added / changed /
> > deleted. But as far as I can tell this can not be used to override
> > the IP that dnsmasq provides via DHCP, so it is probably not really
> > useful for our use-case.
> 
> 
> Getting a IP from IPAM and generating a simple MAC to IP mapping
> actively on guest start, i.e., before the VM or CT would actually
> run, would be enough though.
> 

yes, that's exactly what I'm thinking too (maybe on NIC hotplug too).

It seems possible to inject them into kea through a socket, I'm going to
test this.



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 11:37     ` Thomas Lamprecht
  2023-09-13 11:43       ` DERUMIER, Alexandre
@ 2023-09-13 11:50       ` Stefan Hanreich
  2023-09-13 12:40         ` Thomas Lamprecht
  2023-09-13 12:50         ` DERUMIER, Alexandre
  1 sibling, 2 replies; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-13 11:50 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion, DERUMIER, Alexandre



On 9/13/23 13:37, Thomas Lamprecht wrote:
> On 13/09/2023 at 10:54, Stefan Hanreich wrote:
>> With dnsmasq there is even the possibility of running scripts (via --dhcp-script, see the docs [1]) when a lease is added / changed / deleted. But as far as I can tell this can not be used to override the IP that dnsmasq provides via DHCP, so it is probably not really useful for our use-case.
> 
> 
> Getting an IP from IPAM and generating a simple MAC to IP mapping
> actively on guest start, i.e., before the VM or CT would actually
> run, would be enough though.
> 
> This sounds relatively simple and would avoid most issues without
> backing us into a corner for possibly more complex/flexible solutions
> in the future.

Sounds good. We would then remove the mapping on shutdown / stop, I 
suppose? How about Hibernate / Pause? I can see a case for both here...

That way we could also easily add an IP configuration section to the VM 
configuration, which seems convenient from my POV.

I'll take a shot at implementing it this way in the meantime.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 11:50       ` Stefan Hanreich
@ 2023-09-13 12:40         ` Thomas Lamprecht
  2023-09-13 12:50         ` DERUMIER, Alexandre
  1 sibling, 0 replies; 28+ messages in thread
From: Thomas Lamprecht @ 2023-09-13 12:40 UTC (permalink / raw)
  To: Proxmox VE development discussion, Stefan Hanreich, DERUMIER, Alexandre

On 13/09/2023 at 13:50, Stefan Hanreich wrote:
> On 9/13/23 13:37, Thomas Lamprecht wrote:
>> On 13/09/2023 at 10:54, Stefan Hanreich wrote:
>>> With dnsmasq there is even the possibility of running scripts (via --dhcp-script, see the docs [1]) when a lease is added / changed / deleted. But as far as I can tell this can not be used to override the IP that dnsmasq provides via DHCP, so it is probably not really useful for our use-case.
>>
>>
>> Getting an IP from IPAM and generating a simple MAC to IP mapping
>> actively on guest start, i.e., before the VM or CT would actually
>> run, would be enough though.
>>
>> This sounds relatively simple and would avoid most issues without
>> backing us into a corner for possibly more complex/flexible solutions
>> in the future.
> 
> Sounds good. We would then remove the mapping on shutdown / stop, I suppose? How about Hibernate / Pause? I can see a case for both here..

Yeah, VM hibernate/suspension is generally a bit icky, time can also
be way off on resume if the guest OS (services) doesn't detect this.

But here I think we can model it after reality, where such a lease
is kept until expiry and then freed for reuse. If the VM gets resumed
again, it needs to be able to handle this just like a suspended or
hibernated bare metal PC or laptop has to, and ask for a new lease,
which our API then has to have allocated already on resume – it
just might not be the original one from before the suspension.
I hope I didn't manage to describe this in a too convoluted way.

But tldr: yes, remove on stop/shutdown/suspend as it will be freshly
generated again before next start/resume anyway.

Not sure how dnsmasq handles such reloads, especially at cold boot
where it might happen thousands of times in a relatively short period.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 11:50       ` Stefan Hanreich
  2023-09-13 12:40         ` Thomas Lamprecht
@ 2023-09-13 12:50         ` DERUMIER, Alexandre
  2023-09-13 13:05           ` Stefan Hanreich
  1 sibling, 1 reply; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-13 12:50 UTC (permalink / raw)
  To: pve-devel, t.lamprecht, s.hanreich

On Wednesday 13 September 2023 at 13:50 +0200, Stefan Hanreich wrote:
> 
> 
> That way we could also easily add an IP configuration section to the
> VM 


I really don't know if it's the best/easiest way to have the ip both
in ipam && the vm config.

I have sent ipam patches for vm|ct 1 or 2 years ago, and there are a
lot of corner cases (snapshots / backup restore with an ip different
from the ipam, for example).


But it avoids calling ipam at vm_start (and could be used by the
firewall to auto-generate ip filtering).








^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 12:50         ` DERUMIER, Alexandre
@ 2023-09-13 13:05           ` Stefan Hanreich
  2023-09-13 13:21             ` DERUMIER, Alexandre
  0 siblings, 1 reply; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-13 13:05 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel, t.lamprecht



On 9/13/23 14:50, DERUMIER, Alexandre wrote:
> On Wednesday 13 September 2023 at 13:50 +0200, Stefan Hanreich wrote:
>>
>>
>> That way we could also easily add an IP configuration section to the
>> VM
> 
> 
> I really don't know if it's the best/easiest way to have the ip both
> in ipam && the vm config.
> 
> I have sent ipam patches for vm|ct 1 or 2 years ago, and there are a
> lot of corner cases (snapshots / backup restore with an ip different
> from the ipam, for example).
> 
> 
> But it avoids calling ipam at vm_start (and could be used by the
> firewall to auto-generate ip filtering).
> 

Maybe setting it there could just be an interface for setting it in the 
IPAM manually?

But yes, the snapshots / backup cases certainly require some thought.

Another thing I was thinking about: When we create a mapping on start / 
stop we will also have to consider DHCP lease time and cannot 
immediately re-use the IP (which is actually quite likely if IPAM always 
just returns the next IP in the list). We will have to take into account 
the DHCP lease time in our pve IPAM implementation and reserve the IP 
accordingly.

For dnsmasq, the dhcp hookscripts might come in handy in that case so we 
know the exact expiration time.
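
A minimal sketch of such a hook (the logging here is only a stand-in
for a real IPAM update, everything else would need to be fleshed out):

  #!/usr/bin/perl
  # dnsmasq invokes the script as: <script> add|old|del <mac> <ip> [hostname]
  # and exports the lease expiry as a unix timestamp in the environment
  use strict;
  use warnings;

  my ($action, $mac, $ip) = @ARGV;
  my $expires = $ENV{DNSMASQ_LEASE_EXPIRES} // 0;

  sub log_lease {    # stand-in for updating our pve IPAM
      my ($msg) = @_;
      open(my $fh, '>>', '/var/log/sdn-dhcp-leases.log') or die $!;
      print $fh scalar(localtime) . " $msg\n";
      close($fh);
  }

  if ($action eq 'add' || $action eq 'old') {
      log_lease("reserve $ip for $mac until $expires");
  } elsif ($action eq 'del') {
      log_lease("free $ip from $mac");
  }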

As Thomas already mentioned, time drift in VMs (e.g. after hibernation) 
can also cause real issues here..

We would also need some kind of mechanism for cleaning expired entries 
automatically, otherwise local pve IPAM becomes cluttered.

As far as I can tell, Kea + NetBox integration already supports DHCP 
reservations, so we should be fine on that front.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 13:05           ` Stefan Hanreich
@ 2023-09-13 13:21             ` DERUMIER, Alexandre
  2023-09-13 13:48               ` Stefan Hanreich
  2023-09-20 21:48               ` DERUMIER, Alexandre
  0 siblings, 2 replies; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-13 13:21 UTC (permalink / raw)
  To: pve-devel, t.lamprecht, s.hanreich

> > But it avoids calling ipam at vm_start (and could be used by the
> > firewall to auto-generate ip filtering).
> > 
> 
> Maybe setting it there could just be an interface for setting it in
> the 
> IPAM manually?
> 
yes, the user should be able to define their own ip too (maybe
directly in an ipam gui on the sdn subnet, or maybe on the vm nic gui
(but registering the ip in ipam), I'm really not sure ...)


> But yes, the snapshots / backup cases certainly require some
> thought.
> 
> Another thing I was thinking about: When we create a mapping on start
> / 
> stop we will also have to consider DHCP lease time and cannot 
> immediately re-use the IP (which is actually quite likely if IPAM
> always 
> just returns the next IP in the list). We will have to take into
> account 
> the DHCP lease time in our pve IPAM implementation and reserve the IP
> accordingly.
> 
> For dnsmasq, the dhcp hookscripts might come in handy in that case so
> we 
> know the exact expiration time.
> 
> As Thomas already mentioned, time drift in VMs (e.g. after
> hibernation) 
> can also cause real issues here..
> 
> We would also need some kind of mechanism for cleaning expired
> entries 
> automatically, otherwise local pve IPAM becomes cluttered.
> 

Can't we simply have an infinite lease time, and remove leases
manually from dhcp + delete the ip from ipam at vm stop/delete?




> As far as I can tell, Kea + NetBox integration already supports DHCP 
> reservations, so we should be fine on that front.
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 13:21             ` DERUMIER, Alexandre
@ 2023-09-13 13:48               ` Stefan Hanreich
  2023-09-13 13:52                 ` Stefan Hanreich
  2023-09-20 21:48               ` DERUMIER, Alexandre
  1 sibling, 1 reply; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-13 13:48 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel, t.lamprecht



On 9/13/23 15:21, DERUMIER, Alexandre wrote:
> Can't we simply have an infinite lease time, and remove leases
> manually from dhcp + delete the ip from ipam at vm stop/delete?

Wouldn't this cause problems if we remove the lease at stop?

* VM 1 gets IP X via DHCP on start

* We stop VM 1 and remove the lease for IP X from the IPAM

* VM 2 starts some time after and we reserve IP X for it

* We restart VM 1 and reserve some other IP Y, but the VM will never 
send a DHCP request to our DHCP server again, since it is convinced that 
it still owns IP X (since we told the VM that the IP is forever theirs). 
But VM 2 now also uses IP X.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 13:48               ` Stefan Hanreich
@ 2023-09-13 13:52                 ` Stefan Hanreich
  2023-09-14 13:15                   ` DERUMIER, Alexandre
  0 siblings, 1 reply; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-13 13:52 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel, t.lamprecht



On 9/13/23 15:48, Stefan Hanreich wrote:
> 
> 
> On 9/13/23 15:21, DERUMIER, Alexandre wrote:
>> Can't we simply have an infinite lease time, and remove leases
>> manually from dhcp + delete the ip from ipam at vm stop/delete?
> 
> Wouldn't this cause problems if we remove the lease at stop?
> 
> * VM 1 gets IP X via DHCP on start
> 
> * We stop VM 1 and remove the lease for IP X from the IPAM
> 
> * VM 2 starts some time after and we reserve IP X for it
> 
> * We restart VM 1 and reserve some other IP Y, but the VM will never 
> send a DHCP request to our DHCP server again, since it is convinced that 
> it still owns IP X (since we told the VM that the IP is forever theirs). 
> But VM 2 now also uses IP X.

Ah sorry, DHCP leases will not persist across reboots of course. So this 
could work I think.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 13:52                 ` Stefan Hanreich
@ 2023-09-14 13:15                   ` DERUMIER, Alexandre
  0 siblings, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-14 13:15 UTC (permalink / raw)
  To: pve-devel, t.lamprecht

> 
> Ah sorry, DHCP leases will not persist across reboots of course. So
> this 
> could work I think.
> 

yes, the vm client will always send a dhcp request if you restart the
vm, or restart the networking service/dhcp client.

the lease timeout is really a decrementing counter in the guest os,
used to auto renew/resend a dhcp request on expiry.



I have done some tests with the kea api.

Host "reservations" can't be done, as they need a database backend
and it's also a non-free plugin.

But leases can be provided manually without any problem.

(The difference between the two is that with reservations you don't
need to define a pool/iprange in the subnet, as pools are used for
dynamic ip assignment.)


So, after vm start, if we find a free ip from ipam && inject a lease
through the kea socket:

echo '{ "command": "lease4-add", "arguments": { "hw-address":
"16:e5:75:c1:28:a0", "ip-address": "192.168.0.101" } }' | socat
UNIX:/var/run/kea/kea4-ctrl-socket -,ignoreeof


The vm will get this ip at boot.

(We just need to be careful: if we don't inject any lease, the dhcp
server will itself assign an ip from the pool; I don't think it's
possible to disable this without hacking kea.)
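
For the vm stop side, the lease_cmds hook library also provides a
lease4-del command; a small perl sketch of the equivalent call, just
to mirror the socat example above:

  use strict;
  use warnings;
  use IO::Socket::UNIX;
  use JSON;

  # connect to kea's control socket and remove the lease again
  my $sock = IO::Socket::UNIX->new(Peer => '/var/run/kea/kea4-ctrl-socket')
      or die "cannot connect: $!";

  print $sock encode_json({
      command   => 'lease4-del',
      arguments => { 'ip-address' => '192.168.0.101' },
  });
  $sock->shutdown(1);    # tell kea we are done sending

  local $/;              # slurp the whole JSON reply
  my $reply = <$sock>;
  print "$reply\n";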




The kea config simply defines the subnet && pool:


"Dhcp4": {
    "interfaces-config": {
        "interfaces": ["vnet1"]
    },
    ...
    "hooks-libraries": [
        {
            "library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_lease_cmds.so"
        }
    ],
    "subnet4": [
        {
            "subnet": "192.168.0.0/24",
            "pools": [ { "pool": "192.168.0.20 - 192.168.0.200" } ],
            "option-data": [
                {
                    "name": "routers",
                    "data": "192.168.0.1"
                }
            ]
        }
    ]
}


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-13 13:21             ` DERUMIER, Alexandre
  2023-09-13 13:48               ` Stefan Hanreich
@ 2023-09-20 21:48               ` DERUMIER, Alexandre
  2023-09-26 11:20                 ` Stefan Hanreich
  1 sibling, 1 reply; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-20 21:48 UTC (permalink / raw)
  To: pve-devel, t.lamprecht, s.hanreich

On Wednesday 13 September 2023 at 13:21 +0000, DERUMIER, Alexandre
wrote:
> yes, use should be able to define his own ip too. (maybe directly in
> a
> ipam gui on the sdn subnet ,   or maybe on the vm nic gui (but
> registering ip in ipam),  I'm really not sure ...)

Hi, I have done some tests with different external ipams, to compare
storing or not storing the ip on the proxmox side.


Finally, it's not so easy without writing the ip on the proxmox side
(in the vm config or somewhere else), because to retrieve a reserved
ip from the external ipam when the vm starts, we need to look it up
maybe by mac address, maybe by the hostname of the vm, or maybe by
some custom attributes, but not all ipams accept the same attributes.

(at least phpipam && netbox don't support all features, or not easily.
Netbox, for example, needs the full vm object && interfaces + mac +
mapping to ip registered for a mac address lookup; phpipam is a single
ip object with the mac as an attribute).


So I think the best way is still to write the ip into the vm config;
this allows injecting the already reserved ip into dhcp at vm
start/migrate without needing to call the ipam (it also avoids start
problems if the ipam server is down).

and this allows using it for the firewall ipfilter (I see a usecase
for sdn vxlan too, or special /32 route injection)


I just need some protections for snapshots, nothing too difficult,
but we really need to avoid trying to manage multiple
versions/snapshots of an ip entry for a vm in the ipam.
I had tried 2 years ago, and it was really painful to handle this in
the different ipams.
So maybe the best way is to forbid changing the ip address when a
snapshot already exists.





I think we could implement the ipam calls like this:


create vm or add a new nic  --> 
-----------------------------
qm create ... -net0
bridge=vnet,....,ip=(auto|192.168.0.1|dynamic),ip6=(..)


auto: search for a free ip in ipam, write the ip address into the
net0 ip= field

192.168.0.1: check if the ip is free in ipam && register it in ipam,
write the ip into the ip field.


dynamic: write "ephemeral" in net0: ....,ip=ephemeral (This is a
dynamic ip registered at vm start, and released at vm stop)



vm start
---------
- if ip=ephemeral, find && register a free ip in ipam, write it in vm
net0: ...,ip=192.168.0.10[E] .   (maybe with a special flag [E] to
indicate it's ephemeral)
- read ip from vm config && inject in dhcp


vm_stop
-------
if ip is ephemeral (netX: ip=192.168.0.10[E]),  delete ip from ipam,
set ip=ephemeral in vm config


vm_destroy or nic remove/unplug
-------------------------
if netX: ...,ip=192.168.0.10   ,  remove ip from ipam



nic update when vm is running:
------------------------------
if the ip is defined (netX: ip=192.168.0.10), we don't allow bridge or
ip changes, as the vm is not notified about these changes and still
uses the old ip.

We can allow nic hot-unplug && hotplug. (guest os will remove the ip on
nic removal, and will call dhcp again on nic hotplug)




nic hotplug with ip=auto:
-------------------------

--> add nic in pending state ----> find ip in ipam && write it in
pending ---> do the hotplug in qemu.

We need to handle the config revert to remove the ip from ipam if the
nic hotplug is blocked in the pending state (I have never seen this
case unless the os doesn't have the pci_hotplug module loaded, but
it's better to be careful).




The ipam modules (internal pve, phpipam, netbox) are already there for
this, I think it shouldn't be too difficult.

dnsmasq seems to have a reservation file option, where we can
dynamically add ip-mac mappings without needing to reload it.

I'll try it, re-using your current dnsmasq patches.
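
Rough sketch of the vm start step from above (the helpers are only
stubs for illustration, not the actual pve code):

  use strict;
  use warnings;

  sub ipam_find_free_ip { return '192.168.0.10' }          # stub
  sub dhcp_add_mapping  { print "map $_[0] -> $_[1]\n" }   # stub

  sub vm_start_register_ip {
      my ($net) = @_;    # parsed netX option from the vm config

      if ($net->{ip} eq 'ephemeral') {
          # allocate now and mark the ip with the ephemeral flag [E]
          $net->{ip} = ipam_find_free_ip() . '[E]';
      }

      (my $ip = $net->{ip}) =~ s/\[E\]$//;   # strip the flag for dhcp
      # e.g. append "$mac,$ip" to the dnsmasq reservation file
      dhcp_add_mapping($net->{macaddr}, $ip);
      return $net;       # caller writes this back to the vm config
  }

  vm_start_register_ip({ macaddr => 'BC:24:11:00:00:01', ip => 'ephemeral' });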





^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-20 21:48               ` DERUMIER, Alexandre
@ 2023-09-26 11:20                 ` Stefan Hanreich
  2023-09-26 13:07                   ` DERUMIER, Alexandre
  0 siblings, 1 reply; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-26 11:20 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel, t.lamprecht

On 9/20/23 23:48, DERUMIER, Alexandre wrote:
> Finally, it's not so easy without writing the ip on the proxmox side
> (in the vm config or somewhere else), because to retrieve a reserved
> ip from the external ipam when the vm starts, we need to look it up
> maybe by mac address, maybe by the hostname of the vm, or maybe by
> some custom attributes, but not all ipams accept the same attributes.
> 
> (at least phpipam && netbox don't support all features, or not easily.
> Netbox, for example, needs the full vm object && interfaces + mac +
> mapping to ip registered for a mac address lookup; phpipam is a single
> ip object with the mac as an attribute).

Yes, I think so as well. It also would make us dependent on external 
systems, which might not always be up and would create an additional 
hurdle for setting things up. Having our own solution for this seems 
preferable imo. We can still provide integrations with netbox / phpipam 
so they can take over from our small IPAM if they implement the features 
we need.

I'll take a closer look at netbox, since I was under the impression that 
they should support this - although it's been awhile since I played 
around with it. Not sure about phpIPAM, but I wasn't too stoked on using 
it anyway after browsing their source code for a bit.

> So I think the best way is still to write the ip into the vm config;
> this allows injecting the already reserved ip into dhcp at vm
> start/migrate without needing to call the ipam (it also avoids start
> problems if the ipam server is down).
> 
> and this allows using it for the firewall ipfilter (I see a usecase
> for sdn vxlan too, or special /32 route injection)
> 

Yes, I think so as well, although we would need to take care of proper 
synchronization between Configs and IPAM. If this diverges for whatever 
reason we will run into trouble. Of course, this *should* never happen 
when properly implemented.

Another option I thought about would be storing a VMID -> IP mapping in 
the (pve) IPAM itself. This would have the upside of having a 
centralized storage and single source of truth without having to 
maintain two different locations where we store the IP.

Though it would also be a bit less transparent to the user if we don't 
expose it somewhere in the UI.

This would have the downside that starting VMs is an issue when the IPAM 
is down. While using the pve IPAM in a cluster (or a single node) I can 
see this being alright, since you need quorum to start a VM. As long as 
you have a quorate cluster the pve IPAM *should* be available as well.

In the case of using phpIPAM or netbox this is an issue we would need to 
think about.

> I just need some protections for snapshots, nothing too difficult,
> but we really need to avoid trying to manage multiple
> versions/snapshots of an ip entry for a vm in the ipam.
> I had tried 2 years ago, and it was really painful to handle this in
> the different ipams.
> So maybe the best way is to forbid changing the ip address when a
> snapshot already exists.

Yes, it might be just the best way to check on restore if the IP is the 
same or at least currently available and otherwise just get a new IP 
from IPAM automatically (maybe with a warning).

On the other hand, this should not be an issue when storing the VMID -> 
IP mapping centralized somewhere, since we then can just rely on the IP 
being stored there. Of course this would exclude the DHCP/IP setting 
from the snapshot which can be good or bad I'd say (depending on the use 
case).

> I think we could implement the ipam calls like this:
> 
> 
> create vm or add a new nic  -->
> -----------------------------
> qm create ... -net0
> bridge=vnet,....,ip=(auto|192.168.0.1|dynamic),ip6=(..)
> 
> 
> auto: search for a free ip in ipam, write the ip address into the
> net0 ip= field
> 
> 192.168.0.1: check if the ip is free in ipam && register it in ipam,
> write the ip into the ip field.
> 
> 
> dynamic: write "ephemeral" in net0: ....,ip=ephemeral (This is a
> dynamic ip registered at vm start, and released at vm stop)

Sounds good to me.

> 
> vm start
> ---------
> - if ip=ephemeral, find && register a free ip in ipam, write it in vm
> net0: ...,ip=192.168.0.10[E] .   (maybe with a special flag [E] to
> indicate it's ephemeral)
> - read ip from vm config && inject in dhcp

Maybe we can even get away with setting the IP in the DHCP config as 
soon as we set it in the VM configuration, as long as it is not 
ephemeral, thus avoiding the need to do it while starting VMs?

> vm_stop
> -------
> if ip is ephemeral (netX: ip=192.168.0.10[E]),  delete ip from ipam,
> set ip=ephemeral in vm config
> 
> 
> vm_destroy or nic remove/unplug
> -------------------------
> if netX: ...,ip=192.168.0.10   ,  remove ip from ipam
> 
> 
> 
> nic update when vm is running:
> ------------------------------
> if the ip is defined (netX: ip=192.168.0.10), we don't allow bridge or
> ip changes, as the vm is not notified about these changes and still
> uses the old ip.
> 
> We can allow nic hot-unplug && hotplug. (guest os will remove the ip on
> nic removal, and will call dhcp again on nic hotplug)

Yes, I think so as well. Maybe we could give the option to change the IP 
together with a forced reboot and a warning like 'When changing the IP 
this VM will get rebooted' as a quality of life feature?

> 
> nic hotplug with ip=auto:
> -------------------------
> 
> --> add nic in pending state ----> find ip in ipam && write it in
> pending ---> do the hotplug in qemu.
> 
> We need to handle the config revert to remove the ip from ipam if the
> nic hotplug is blocked in the pending state (I have never seen this
> case unless the os doesn't have the pci_hotplug module loaded, but
> it's better to be careful).
> 

Yes - defensive programming is always good!

> The ipam modules (internal pve, phpipam, netbox) are already there for
> this, I think it shouldn't be too difficult.
> 
> dnsmasq seems to have a reservation file option, where we can
> dynamically add ip-mac mappings without needing to reload it.
> 
> I'll try it, re-using your current dnsmasq patches.

Since you want to take a shot at implementing it, is there anything I 
could help you with? I'd have some resources now for taking a shot at 
this as well.

It would also be interesting to improve and add some features to our 
built-in IPAM, maybe even add the VMID -> IP mapping functionality I've 
touched upon earlier. We could additionally expose some of this 
information in the frontend, so users have an overview of currently 
leased IPs there - what do you think?

Would it also make sense to set IPSet entries for VMs, so they are only 
allowed to use the IPs we dedicate to them? This would be a decent 
safeguard for preventing issues down the line.
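
For illustration, this could build on the ipfilter-net* IPSets the
firewall already knows - a hand-written sketch of what we could
generate in a guest's firewall config (values assumed):

  [IPSET ipfilter-net0]
  192.168.0.10 # the IP we leased to net0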

Additionally it would be interesting to automatically create Aliases for 
VNets/VMs in the Firewall configuration. If we add VMs as Aliases, we 
would have to recompile the iptables rules on every IP change. For this 
feature it would make sense to be able to set names / comments on VNets, 
so we can reference them this way. What do you think?




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-26 11:20                 ` Stefan Hanreich
@ 2023-09-26 13:07                   ` DERUMIER, Alexandre
  2023-09-26 14:12                     ` Stefan Hanreich
  0 siblings, 1 reply; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-26 13:07 UTC (permalink / raw)
  To: pve-devel, t.lamprecht, s.hanreich

On Tuesday 26 September 2023 at 13:20 +0200, Stefan Hanreich wrote:
> On 9/20/23 23:48, DERUMIER, Alexandre wrote:
> > Finally, it's not so easy without writing the ip on the proxmox
> > side (in the vm config or somewhere else), because to retrieve a
> > reserved ip from the external ipam when the vm starts, we need to
> > look it up maybe by mac address, maybe by the hostname of the vm,
> > or maybe by some custom attributes, but not all ipams accept the
> > same attributes.
> > 
> > (at least phpipam && netbox don't support all features, or not
> > easily. Netbox, for example, needs the full vm object && interfaces
> > + mac + mapping to ip registered for a mac address lookup; phpipam
> > is a single ip object with the mac as an attribute).
> 
> Yes, I think so as well. It also would make us dependent on external 
> systems, which might not always be up and would create an additional 
> hurdle for setting things up. Having our own solution for this seems 
> preferable imo. We can still provide integrations with netbox /
> phpipam 
> so they can take over from our small IPAM if they implement the
> features 
> we need.
> 
> I'll take a closer look at netbox, since I was under the impression
> that 
> they should support this - although it's been awhile since I played 
> around with it. Not sure about phpIPAM, but I wasn't too stoked on
> using 
> it anyway after browsing their source code for a bit.
> 
> > So I think the best way is still to write the ip into the vm
> > config; this allows injecting the already reserved ip into dhcp at
> > vm start/migrate without needing to call the ipam (it also avoids
> > start problems if the ipam server is down).
> > 
> > and this allows using it for the firewall ipfilter (I see a usecase
> > for sdn vxlan too, or special /32 route injection)
> > 
> 
> Yes, I think so as well, although we would need to take care of
> proper 
> synchronization between Configs and IPAM. If this diverges for
> whatever 
> reason we will run into trouble. Of course, this *should* never
> happen 
> when properly implemented.
> 
> Another option I thought about would be storing a VMID -> IP mapping
> in 
> the (pve) IPAM itself. This would have the upside of having a 
> centralized storage and single source of truth without having to 
> maintain two different locations where we store the IP.
> 
> Though it would also be a bit less transparent to the user if we
> don't expose it somewhere in the UI.
> 
> This would have the downside that starting VMs is an issue when the
> IPAM 
> is down. While using the pve IPAM in a cluster (or a single node) I
> can 
> see this being alright, since you need quorum to start a VM. As long
> as 
> you have a quorate cluster the pve IPAM *should* be available as
> well.
> 
> In the case of using phpIPAM or netbox this is an issue we would need
> to 
> think about.
> 
Yes, this is my main concern, as it'll be my case in production, as
I'm managing multiple clusters in different locations, with shared
subnets.

for me, it's ok if the ipam is down when allocating a new ip or vm.
But for vm start/stop, I think we should have at minimum some cache
somewhere. (I'm thinking about a disaster recovery or big network
problem, where you want to quickly restart all vms without needing to
call the ipam).


Maybe a way could be to use the local pve ipam as a local mirror of
the external ipam? (and don't store the ip in the vm config, but only
in the pve ipam, the source of truth)




vm with ipam=auto, external ipam is configured:


  # lookup order: local pve ipam (read cache) first, then external ipam
  if (my $ip = ip_exist($pve_ipam)) {
      return $ip;
  } elsif ($ip = ip_exist($external_ipam)) {
      # (we would probably also want to mirror it into the pve ipam here)
      return $ip;
  } else {
      $ip = find_next_free_ip($external_ipam);
      sync_pve_ipam($ip);
      return $ip;
  }



I have seen this in vmware or openstack/opennebula with external ipam,
I don't remember exactly.




> > I just need some protections for snapshots, nothing too difficult,
> > but we really need to avoid trying to manage multiple
> > versions/snapshots of an ip entry for a vm in the ipam.
> > I had tried 2 years ago, and it was really painful to handle this
> > in the different ipams.
> > So maybe the best way is to forbid changing the ip address when a
> > snapshot already exists.
> 
> Yes, it might be just the best way to check on restore if the IP is
> the 
> same or at least currently available and otherwise just get a new IP 
> from IPAM automatically (maybe with a warning).
> 
> On the other hand, this should not be an issue when storing the VMID
> -> 
> IP mapping centralized somewhere, since we then can just rely on the
> IP 
> being stored there. Of course this would exclude the DHCP/IP setting 
> from the snapshot which can be good or bad I'd say (depending on the
> use 
> case).
> 
> > I think we could implement the ipam calls like this:
> > 
> > 
> > create vm or add a new nic  -->
> > -----------------------------
> > qm create ... -net0
> > bridge=vnet,....,ip=(auto|192.168.0.1|dynamic),ip6=(..)
> > 
> > 
> > auto: search for a free ip in ipam, write the ip address into the
> > net0 ip= field
> > 
> > 192.168.0.1: check if the ip is free in ipam && register it in
> > ipam, write the ip into the ip field.
> > 
> > 
> > dynamic: write "ephemeral" in net0: ....,ip=ephemeral (This is a
> > dynamic ip registered at vm start, and released at vm stop)
> 
> Sounds good to me.
> 
> > 
> > vm start
> > ---------
> > - if ip=ephemeral, find && register a free ip in ipam, write it in
> > vm
> > net0: ...,ip=192.168.0.10[E] .   (maybe with a special flag [E] to
> > indicate it's ephemeral)
> > - read ip from vm config && inject in dhcp
> 
> Maybe we can even get away with setting the IP in the DHCP config as
> soon as we set it in the VM configuration, as long as it is not
> ephemeral, thus avoiding the need to do it while starting VMs?
> 
> > vm_stop
> > -------
> > if ip is ephemeral (netX: ip=192.168.0.10[E]),  delete ip from
> > ipam,
> > set ip=ephemeral in vm config
> > 
> > 
> > vm_destroy or nic remove/unplug
> > -------------------------
> > if netX: ...,ip=192.168.0.10   ,  remove ip from ipam
> > 
> > 
> > 
> > nic update when vm is running:
> > ------------------------------
> > if the ip is defined (netX: ip=192.168.0.10), we don't allow bridge
> > or ip changes, as the vm is not notified about these changes and
> > still uses the old ip.
> > 
> > We can allow nic hot-unplug && hotplug. (guest os will remove the
> > ip on
> > nic removal, and will call dhcp again on nic hotplug)
> 
> Yes, I think so as well. Maybe we could give the option to change the
> IP 
> together with a forced reboot and a warning like 'When changing the
> IP 
> this VM will get rebooted' as a quality of life feature?
> 
> > 
> > nic hotplug with ip=auto:
> > -------------------------
> > 
> > --> add nic in pending state ----> find ip in ipam && write it in
> > pending ---> do the hotplug in qemu.
> > 
> > We need to handle the config revert to remove the ip from ipam if
> > the nic hotplug is blocked in the pending state (I have never seen
> > this case unless the os doesn't have the pci_hotplug module loaded,
> > but it's better to be careful).
> > 
> 
> Yes - defensive programming is always good!
> 
> > The ipam modules (internal pve, phpipam, netbox) are already there
> > for this, I think it shouldn't be too difficult.
> > 
> > dnsmasq seems to have a reservation file option, where we can
> > dynamically add ip-mac mappings without needing to reload it.
> > 
> > I'll try it, re-using your current dnsmasq patches.
> 
> Since you want to take a shot at implementing it, is there anything I
> could help you with? I'd have some resources now for taking a shot at
> this as well.
> 
I'm currently a bit busy with other stuff and I would like to finish
that first.

So if you have a little bit of time to work on this, it could be great :)

I have sent some patches in 2021 for ipam integration in qemu/lxc, if
you want to take some inspiration. (without the ip in the vm config, it
should be a lot easier)



> It would also be interesting to improve and add some features to our
> built-in IPAM, maybe even add the VMID -> IP mapping functionality
> I've touched upon earlier. We could additionally expose some of this
> information in the frontend, so users have an overview of currently
> leased IPs there - what do you think?
> 

Yes, the admin should be able to see allocated ips (like a real ipam).

I was thinking about other stuff for later, but maybe it could be great
for an admin to be able to reserve ips and put them in a pool.
Then the user could choose an ip from this pool.

(The usecase is public ip addresses, where a customer could buy some
of them, then allocate them as they want)




> Would it also make sense to set IPSet entries for VMs, so they are
> only 
> allowed to use the IPs we dedicate to them? This would be a decent 
> safeguard for preventing issues down the line.
> 
> Additionally it would be interesting to automatically create Aliases
> for VNets/VMs in the Firewall configuration. If we add VMs as
> Aliases, we would have to recompile the iptables rules on every IP
> change. For this feature it would make sense to be able to set names
> / comments on VNets, so we can reference them this way. What do you
> think?

a Big yes !  (as I'm already doing this manually in production ;)


> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-26 13:07                   ` DERUMIER, Alexandre
@ 2023-09-26 14:12                     ` Stefan Hanreich
  2023-09-26 16:55                       ` DERUMIER, Alexandre
  0 siblings, 1 reply; 28+ messages in thread
From: Stefan Hanreich @ 2023-09-26 14:12 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel, t.lamprecht

> Yes, this is my main concern, as it'll be my case in production, as
> I'm managing multiple clusters in different locations, with shared
> subnets.
> 
> for me, it's ok if the ipam is down when allocating a new ip or vm.
> But for vm start/stop, I think we should have at minimum some cache
> somewhere. (I'm thinking about a disaster recovery or big network
> problem, where you want to quickly restart all vms without needing to
> call the ipam).
> 
> Maybe a way could be to use the local pve ipam as a local mirror of
> the external ipam? (and don't store the ip in the vm config, but only
> in the pve ipam, the source of truth)
> 

Yes, I think this would be preferable over the VM config. This also
means we would have to sync from netbox to local PVE IPAMs?

> I'm currently a bit busy with other stuff and I would like to finish
> that first.
> 
> So if you have a little bit of time to work on this, it could be great :)
> 
> I have sent some patches in 2021 for ipam integration in qemu/lxc, if
> you want to take some inspiration. (without the ip in the vm config, it
> should be a lot easier)
> 

I'll try to get on it then, I'll still be here for 2.5 weeks until I go
on a longer vacation. Hopefully I'll get something workable ready by
then. I will look into your patches - thanks for the hint!

> Yes, the admin should be able to see allocated ips (like a real ipam).
> 
> I was thinking about other stuff for later, but maybe it could be great
> for an admin to be able to reserve ips and put them in a pool.
> Then the user could choose an ip from this pool.
> 
> (The usecase is public ip addresses, where a customer could buy some
> of them, then allocate them as they want)
> 

That sounds like a great feature for hosters, I'll certainly look into that.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN
  2023-09-26 14:12                     ` Stefan Hanreich
@ 2023-09-26 16:55                       ` DERUMIER, Alexandre
  0 siblings, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre @ 2023-09-26 16:55 UTC (permalink / raw)
  To: pve-devel, t.lamprecht, s.hanreich

On Tuesday 26 September 2023 at 16:12 +0200, Stefan Hanreich wrote:
> > Yes, this is my main concern, as it'll be my case in production,
> > as I'm managing multiple clusters in different locations, with
> > shared subnets.
> > 
> > for me, it's ok if the ipam is down when allocating a new ip or vm.
> > But for vm start/stop, I think we should have at minimum some cache
> > somewhere. (I'm thinking about a disaster recovery or big network
> > problem, where you want to quickly restart all vms without needing
> > to call the ipam).
> > 
> > Maybe a way could be to use the local pve ipam as a local mirror
> > of the external ipam? (and don't store the ip in the vm config,
> > but only in the pve ipam, the source of truth)
> > 
> 
> Yes, I think this would be preferable over the VM config. This also
> means we would have to sync from netbox to local PVE IPAMs?

See my pseudo algorithm, I think we can sync on the fly from netbox to
the local pve ipam (like a read cache) when we allocate a new ip.

I think it's not a problem to have multiple clusters with different
local pve ipams, if we always try to allocate a new ip from the
external ipam, then write it to the local pve ipam for later reads.

Maybe it could be improved with a full sync of subnets via cron? (Need
to check the external ipam apis)





> 
> > I'm currently a bit busy with other stuff and I would like to
> > finish that first.
> > 
> > So if you have a little bit of time to work on this, it could be
> > great :)
> > 
> > I have sent some patches in 2021 for ipam integration in qemu/lxc,
> > if
> > you want to take some inspiration. (without the ip in the vm
> > config, it
> > should be a lot easier)
> > 
> 
> I'll try to get on it then, I'll still be here for 2.5 weeks until I
> go on a longer vacation. Hopefully I'll get something workable ready
> by then. I will look into your patches - thanks for the hint!
> 
I'll have a little bit more time next week, then I'm going to do some
proxmox training with students, so I'll be busy until mid-october
(so while you'll be on vacation ^_^).

If you have some early patches by then, I'll be able to continue
the work if needed.



> > Yes, the admin should be able to see allocated ips (like a real
> > ipam).
> > 
> > I was thinking about other stuff for later, but maybe it could be
> > great for an admin to be able to reserve ips and put them in a pool.
> > Then the user could choose an ip from this pool.
> > 
> > (The usecase is public ip addresses, where a customer could buy
> > some of them, then allocate them as they want)
> > 
> 
> That sounds like a great feature for hosters, I'll certainly look
> into that.
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2023-09-26 16:56 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-09-08 13:42 [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN Stefan Hanreich
2023-09-08 13:42 ` [pve-devel] [RFC pve-cluster 1/6] cluster files: add dhcp.cfg Stefan Hanreich
2023-09-08 13:43 ` [pve-devel] [RFC pve-manager 2/6] sdn: regenerate DHCP config on reload Stefan Hanreich
2023-09-08 13:43 ` [pve-devel] [RFC pve-network 3/6] sdn: dhcp: add abstract class for DHCP plugins Stefan Hanreich
2023-09-08 13:43 ` [pve-devel] [RFC pve-network 4/6] sdn: dhcp: subnet: add DHCP options to subnet configuration Stefan Hanreich
2023-09-11  4:03   ` DERUMIER, Alexandre
2023-09-13  8:37     ` Stefan Hanreich
2023-09-08 13:43 ` [pve-devel] [RFC pve-network 5/6] sdn: dhcp: add DHCP plugin for dnsmasq Stefan Hanreich
2023-09-08 13:43 ` [pve-devel] [RFC pve-network 6/6] sdn: dhcp: regenerate config for DHCP servers on reload Stefan Hanreich
2023-09-11  3:53 ` [pve-devel] [RFC cluster/manager/network 0/6] Add support for DHCP servers to SDN DERUMIER, Alexandre
2023-09-13  8:18   ` DERUMIER, Alexandre
2023-09-13  8:54   ` Stefan Hanreich
2023-09-13  9:26     ` DERUMIER, Alexandre
2023-09-13 11:37     ` Thomas Lamprecht
2023-09-13 11:43       ` DERUMIER, Alexandre
2023-09-13 11:50       ` Stefan Hanreich
2023-09-13 12:40         ` Thomas Lamprecht
2023-09-13 12:50         ` DERUMIER, Alexandre
2023-09-13 13:05           ` Stefan Hanreich
2023-09-13 13:21             ` DERUMIER, Alexandre
2023-09-13 13:48               ` Stefan Hanreich
2023-09-13 13:52                 ` Stefan Hanreich
2023-09-14 13:15                   ` DERUMIER, Alexandre
2023-09-20 21:48               ` DERUMIER, Alexandre
2023-09-26 11:20                 ` Stefan Hanreich
2023-09-26 13:07                   ` DERUMIER, Alexandre
2023-09-26 14:12                     ` Stefan Hanreich
2023-09-26 16:55                       ` DERUMIER, Alexandre
