* [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-27 10:41 ` Gabriel Goller
2024-08-13 16:06 ` Max Carrara
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 02/21] firewall: add ip range types Stefan Hanreich
` (23 subsequent siblings)
24 siblings, 2 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Since we now have a standalone repository for Proxmox VE related
crates, add the required files for packaging the crates contained in
this repository.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
.cargo/config.toml | 5 ++
.gitignore | 8 +++
Cargo.toml | 17 +++++++
Makefile | 69 ++++++++++++++++++++++++++
build.sh | 35 +++++++++++++
bump.sh | 44 ++++++++++++++++
proxmox-ve-config/Cargo.toml | 16 +++---
proxmox-ve-config/debian/changelog | 5 ++
proxmox-ve-config/debian/control | 43 ++++++++++++++++
proxmox-ve-config/debian/copyright | 19 +++++++
proxmox-ve-config/debian/debcargo.toml | 4 ++
11 files changed, 255 insertions(+), 10 deletions(-)
create mode 100644 .cargo/config.toml
create mode 100644 .gitignore
create mode 100644 Cargo.toml
create mode 100644 Makefile
create mode 100755 build.sh
create mode 100755 bump.sh
create mode 100644 proxmox-ve-config/debian/changelog
create mode 100644 proxmox-ve-config/debian/control
create mode 100644 proxmox-ve-config/debian/copyright
create mode 100644 proxmox-ve-config/debian/debcargo.toml
diff --git a/.cargo/config.toml b/.cargo/config.toml
new file mode 100644
index 0000000..3b5b6e4
--- /dev/null
+++ b/.cargo/config.toml
@@ -0,0 +1,5 @@
+[source]
+[source.debian-packages]
+directory = "/usr/share/cargo/registry"
+[source.crates-io]
+replace-with = "debian-packages"
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..d72b68b
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,8 @@
+/target
+/*/target
+Cargo.lock
+**/*.rs.bk
+/*.buildinfo
+/*.changes
+/build
+/*-deb
diff --git a/Cargo.toml b/Cargo.toml
new file mode 100644
index 0000000..ab23d89
--- /dev/null
+++ b/Cargo.toml
@@ -0,0 +1,17 @@
+[workspace]
+members = [
+ "proxmox-ve-config",
+]
+exclude = [
+ "build",
+]
+resolver = "2"
+
+[workspace.package]
+authors = ["Proxmox Support Team <support@proxmox.com>"]
+edition = "2021"
+license = "AGPL-3"
+homepage = "https://proxmox.com"
+exclude = [ "debian" ]
+rust-version = "1.70"
+
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..0da9b74
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,69 @@
+# Shortcut for common operations:
+
+CRATES != echo proxmox-*/Cargo.toml | sed -e 's|/Cargo.toml||g'
+
+# By default we just run checks:
+.PHONY: all
+all: check
+
+.PHONY: deb
+deb: $(foreach c,$(CRATES), $c-deb)
+ echo $(foreach c,$(CRATES), $c-deb)
+ lintian build/*.deb
+
+.PHONY: dsc
+dsc: $(foreach c,$(CRATES), $c-dsc)
+ echo $(foreach c,$(CRATES), $c-dsc)
+ lintian build/*.dsc
+
+.PHONY: autopkgtest
+autopkgtest: $(foreach c,$(CRATES), $c-autopkgtest)
+
+.PHONY: dinstall
+dinstall:
+ $(MAKE) clean
+ $(MAKE) deb
+ sudo -k dpkg -i build/librust-*.deb
+
+%-deb:
+ ./build.sh $*
+ touch $@
+
+%-dsc:
+ BUILDCMD='dpkg-buildpackage -S -us -uc -d' ./build.sh $*
+ touch $@
+
+%-autopkgtest:
+ autopkgtest build/$* build/*.deb -- null
+ touch $@
+
+.PHONY: check
+check:
+ cargo test
+
+# Prints a diff between the current code and the one rustfmt would produce
+.PHONY: fmt
+fmt:
+ cargo +nightly fmt -- --check
+
+# Doc without dependencies
+.PHONY: doc
+doc:
+ cargo doc --no-deps
+
+.PHONY: clean
+clean:
+ cargo clean
+ rm -rf build/
+ rm -f -- *-deb *-dsc *-autopkgtest *.build *.buildinfo *.changes
+
+.PHONY: update
+update:
+ cargo update
+
+%-upload: %-deb
+ cd build; \
+ dcmd --deb rust-$*_*.changes \
+ | grep -v '.changes$$' \
+ | tar -cf "$@.tar" -T-; \
+ cat "$@.tar" | ssh -X repoman@repo.proxmox.com upload --product devel --dist bookworm
diff --git a/build.sh b/build.sh
new file mode 100755
index 0000000..39a8302
--- /dev/null
+++ b/build.sh
@@ -0,0 +1,35 @@
+#!/bin/sh
+
+set -eux
+
+export CARGO=/usr/bin/cargo
+export RUSTC=/usr/bin/rustc
+
+CRATE=$1
+BUILDCMD=${BUILDCMD:-"dpkg-buildpackage -b -uc -us"}
+
+mkdir -p build
+echo system >build/rust-toolchain
+rm -rf "build/${CRATE}"
+
+CONTROL="$PWD/${CRATE}/debian/control"
+
+if [ -e "$CONTROL" ]; then
+ # check but only warn, debcargo fails anyway if crates are missing
+ dpkg-checkbuilddeps $PWD/${CRATE}/debian/control || true
+ # rm -f "$PWD/${CRATE}/debian/control"
+fi
+
+debcargo package \
+ --config "$PWD/${CRATE}/debian/debcargo.toml" \
+ --changelog-ready \
+ --no-overlay-write-back \
+ --directory "$PWD/build/${CRATE}" \
+ "${CRATE}" \
+ "$(dpkg-parsechangelog -l "${CRATE}/debian/changelog" -SVersion | sed -e 's/-.*//')"
+
+cd "build/${CRATE}"
+rm -f debian/source/format.debcargo.hint
+${BUILDCMD}
+
+cp debian/control "$CONTROL"
diff --git a/bump.sh b/bump.sh
new file mode 100755
index 0000000..08ad119
--- /dev/null
+++ b/bump.sh
@@ -0,0 +1,44 @@
+#!/bin/bash
+
+package=$1
+
+if [[ -z "$package" ]]; then
+ echo "USAGE:"
+ echo -e "\t bump.sh <crate> [patch|minor|major|<version>]"
+ echo ""
+ echo "Defaults to bumping patch version by 1"
+ exit 0
+fi
+
+cargo_set_version="$(command -v cargo-set-version)"
+if [[ -z "$cargo_set_version" || ! -x "$cargo_set_version" ]]; then
+ echo 'bump.sh requires "cargo set-version", provided by "cargo-edit".'
+ exit 1
+fi
+
+if [[ ! -e "$package/Cargo.toml" ]]; then
+ echo "Invalid crate '$package'"
+ exit 1
+fi
+
+version=$2
+if [[ -z "$version" ]]; then
+ version="patch"
+fi
+
+case "$version" in
+ patch|minor|major)
+ bump="--bump"
+ ;;
+ *)
+ bump=
+ ;;
+esac
+
+cargo_toml="$package/Cargo.toml"
+changelog="$package/debian/changelog"
+
+cargo set-version -p "$package" $bump "$version"
+version="$(cargo metadata --format-version=1 | jq ".packages[] | select(.name == \"$package\").version" | sed -e 's/\"//g')"
+DEBFULLNAME="Proxmox Support Team" DEBEMAIL="support@proxmox.com" dch --no-conf --changelog "$changelog" --newversion "$version-1" --distribution stable
+git commit --edit -sm "bump $package to $version-1" Cargo.toml "$cargo_toml" "$changelog"
diff --git a/proxmox-ve-config/Cargo.toml b/proxmox-ve-config/Cargo.toml
index cc689c8..ab8a7a0 100644
--- a/proxmox-ve-config/Cargo.toml
+++ b/proxmox-ve-config/Cargo.toml
@@ -1,14 +1,10 @@
[package]
name = "proxmox-ve-config"
version = "0.1.0"
-edition = "2021"
-authors = [
- "Wolfgang Bumiller <w.bumiller@proxmox.com>",
- "Stefan Hanreich <s.hanreich@proxmox.com>",
- "Proxmox Support Team <support@proxmox.com>",
-]
-description = "Proxmox VE config parsing"
-license = "AGPL-3"
+authors.workspace = true
+edition.workspace = true
+license.workspace = true
+exclude.workspace = true
[dependencies]
log = "0.4"
@@ -20,6 +16,6 @@ serde_json = "1"
serde_plain = "1"
serde_with = "2.3.3"
-proxmox-schema = "3.1.0"
-proxmox-sys = "0.5.3"
+proxmox-schema = "3.1.1"
+proxmox-sys = "0.5.8"
proxmox-sortable-macro = "0.1.3"
diff --git a/proxmox-ve-config/debian/changelog b/proxmox-ve-config/debian/changelog
new file mode 100644
index 0000000..0dfd399
--- /dev/null
+++ b/proxmox-ve-config/debian/changelog
@@ -0,0 +1,5 @@
+proxmox-ve-config (0.1.0) UNRELEASED; urgency=medium
+
+ * Initial release.
+
+ -- Proxmox Support Team <support@proxmox.com> Mon, 03 Jun 2024 10:51:11 +0200
diff --git a/proxmox-ve-config/debian/control b/proxmox-ve-config/debian/control
new file mode 100644
index 0000000..97f5e54
--- /dev/null
+++ b/proxmox-ve-config/debian/control
@@ -0,0 +1,43 @@
+Source: proxmox-ve-config
+Section: rust
+Priority: optional
+Maintainer: Proxmox Support Team <support@proxmox.com>
+Build-Depends: cargo:native,
+ librust-anyhow-1+default-dev,
+ librust-log-0.4+default-dev (>= 0.4.17-~~),
+ librust-nix-0.26+default-dev (>= 0.26.1-~~),
+ librust-proxmox-schema-3+default-dev,
+ librust-proxmox-sortable-macro-dev,
+ librust-proxmox-sys-dev,
+ librust-serde-1+default-dev,
+ librust-serde-1+derive-dev,
+ librust-serde-json-1+default-dev,
+ librust-serde-plain-1+default-dev,
+ librust-serde-with+default-dev,
+ libstd-rust-dev,
+ netbase,
+ python3,
+ rustc:native,
+Standards-Version: 4.6.2
+Homepage: https://www.proxmox.com
+
+Package: librust-proxmox-ve-config-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-anyhow-1+default-dev,
+ librust-log-0.4+default-dev (>= 0.4.17-~~),
+ librust-nix-0.26+default-dev (>= 0.26.1-~~),
+ librust-proxmox-schema-3+default-dev,
+ librust-proxmox-sortable-macro-dev,
+ librust-proxmox-sys-dev,
+ librust-serde-1+default-dev,
+ librust-serde-1+derive-dev,
+ librust-serde-json-1+default-dev,
+ librust-serde-plain-1+default-dev,
+ librust-serde-with+default-dev,
+ libstd-rust-dev,
+Description: Proxmox's nftables-based firewall written in rust
+ This package contains a nftables-based implementation of the Proxmox VE
+ Firewall
diff --git a/proxmox-ve-config/debian/copyright b/proxmox-ve-config/debian/copyright
new file mode 100644
index 0000000..2d3374f
--- /dev/null
+++ b/proxmox-ve-config/debian/copyright
@@ -0,0 +1,19 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+
+Files:
+ *
+Copyright: 2019 - 2024 Proxmox Server Solutions GmbH <support@proxmox.com>
+License: AGPL-3.0-or-later
+ This program is free software: you can redistribute it and/or modify it under
+ the terms of the GNU Affero General Public License as published by the Free
+ Software Foundation, either version 3 of the License, or (at your option) any
+ later version.
+ .
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU Affero General Public License along
+ with this program. If not, see <https://www.gnu.org/licenses/>.
+
diff --git a/proxmox-ve-config/debian/debcargo.toml b/proxmox-ve-config/debian/debcargo.toml
new file mode 100644
index 0000000..27510eb
--- /dev/null
+++ b/proxmox-ve-config/debian/debcargo.toml
@@ -0,0 +1,4 @@
+overlay = "."
+crate_src_path = ".."
+maintainer = "Proxmox Support Team <support@proxmox.com>"
+
--
2.39.2
* Re: [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging Stefan Hanreich
@ 2024-06-27 10:41 ` Gabriel Goller
2024-07-16 16:03 ` Thomas Lamprecht
2024-08-13 16:06 ` Max Carrara
1 sibling, 1 reply; 43+ messages in thread
From: Gabriel Goller @ 2024-06-27 10:41 UTC (permalink / raw)
To: Proxmox VE development discussion
On 26.06.2024 14:15, Stefan Hanreich wrote:
>Since we now have a standalone repository for Proxmox VE related
>crates, add the required files for packaging the crates contained in
>this repository.
I know we don't really do this, but could we add a README.rst file here?
Maybe with a small outline on what this repo contains, who uses it etc.
* Re: [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging
2024-06-27 10:41 ` Gabriel Goller
@ 2024-07-16 16:03 ` Thomas Lamprecht
0 siblings, 0 replies; 43+ messages in thread
From: Thomas Lamprecht @ 2024-07-16 16:03 UTC (permalink / raw)
To: Proxmox VE development discussion, Gabriel Goller, Stefan Hanreich
Am 27/06/2024 um 12:41 schrieb Gabriel Goller:
> On 26.06.2024 14:15, Stefan Hanreich wrote:
>> Since we now have a standalone repository for Proxmox VE related
>> crates, add the required files for packaging the crates contained in
>> this repository.
>
> I know we don't really do this, but could we add a README.rst file here?
> Maybe with a small outline on what this repo contains, who uses it etc.
>
>
that'd be fine by me, but please use a README.md, i.e. markdown.
* Re: [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging Stefan Hanreich
2024-06-27 10:41 ` Gabriel Goller
@ 2024-08-13 16:06 ` Max Carrara
1 sibling, 0 replies; 43+ messages in thread
From: Max Carrara @ 2024-08-13 16:06 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed Jun 26, 2024 at 2:15 PM CEST, Stefan Hanreich wrote:
> Since we now have a standalone repository for Proxmox VE related
> crates, add the required files for packaging the crates contained in
> this repository.
>
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
> .cargo/config.toml | 5 ++
> .gitignore | 8 +++
> Cargo.toml | 17 +++++++
> Makefile | 69 ++++++++++++++++++++++++++
> build.sh | 35 +++++++++++++
> bump.sh | 44 ++++++++++++++++
> proxmox-ve-config/Cargo.toml | 16 +++---
> proxmox-ve-config/debian/changelog | 5 ++
> proxmox-ve-config/debian/control | 43 ++++++++++++++++
> proxmox-ve-config/debian/copyright | 19 +++++++
> proxmox-ve-config/debian/debcargo.toml | 4 ++
> 11 files changed, 255 insertions(+), 10 deletions(-)
> create mode 100644 .cargo/config.toml
> create mode 100644 .gitignore
> create mode 100644 Cargo.toml
> create mode 100644 Makefile
> create mode 100755 build.sh
> create mode 100755 bump.sh
> create mode 100644 proxmox-ve-config/debian/changelog
> create mode 100644 proxmox-ve-config/debian/control
> create mode 100644 proxmox-ve-config/debian/copyright
> create mode 100644 proxmox-ve-config/debian/debcargo.toml
>
> diff --git a/.cargo/config.toml b/.cargo/config.toml
> new file mode 100644
> index 0000000..3b5b6e4
> --- /dev/null
> +++ b/.cargo/config.toml
> @@ -0,0 +1,5 @@
> +[source]
> +[source.debian-packages]
> +directory = "/usr/share/cargo/registry"
> +[source.crates-io]
> +replace-with = "debian-packages"
> diff --git a/.gitignore b/.gitignore
> new file mode 100644
> index 0000000..d72b68b
> --- /dev/null
> +++ b/.gitignore
> @@ -0,0 +1,8 @@
> +/target
> +/*/target
> +Cargo.lock
> +**/*.rs.bk
> +/*.buildinfo
> +/*.changes
> +/build
> +/*-deb
> diff --git a/Cargo.toml b/Cargo.toml
> new file mode 100644
> index 0000000..ab23d89
> --- /dev/null
> +++ b/Cargo.toml
> @@ -0,0 +1,17 @@
> +[workspace]
> +members = [
> + "proxmox-ve-config",
> +]
> +exclude = [
> + "build",
> +]
> +resolver = "2"
> +
> +[workspace.package]
> +authors = ["Proxmox Support Team <support@proxmox.com>"]
> +edition = "2021"
> +license = "AGPL-3"
> +homepage = "https://proxmox.com"
> +exclude = [ "debian" ]
> +rust-version = "1.70"
> +
> diff --git a/Makefile b/Makefile
> new file mode 100644
> index 0000000..0da9b74
> --- /dev/null
> +++ b/Makefile
> @@ -0,0 +1,69 @@
> +# Shortcut for common operations:
> +
> +CRATES != echo proxmox-*/Cargo.toml | sed -e 's|/Cargo.toml||g'
> +
> +# By default we just run checks:
> +.PHONY: all
> +all: check
> +
> +.PHONY: deb
> +deb: $(foreach c,$(CRATES), $c-deb)
> + echo $(foreach c,$(CRATES), $c-deb)
> + lintian build/*.deb
> +
> +.PHONY: dsc
> +dsc: $(foreach c,$(CRATES), $c-dsc)
> + echo $(foreach c,$(CRATES), $c-dsc)
> + lintian build/*.dsc
> +
> +.PHONY: autopkgtest
> +autopkgtest: $(foreach c,$(CRATES), $c-autopkgtest)
> +
> +.PHONY: dinstall
> +dinstall:
> + $(MAKE) clean
> + $(MAKE) deb
> + sudo -k dpkg -i build/librust-*.deb
> +
> +%-deb:
> + ./build.sh $*
> + touch $@
> +
> +%-dsc:
> + BUILDCMD='dpkg-buildpackage -S -us -uc -d' ./build.sh $*
> + touch $@
> +
> +%-autopkgtest:
> + autopkgtest build/$* build/*.deb -- null
> + touch $@
> +
> +.PHONY: check
> +check:
> + cargo test
> +
> +# Prints a diff between the current code and the one rustfmt would produce
> +.PHONY: fmt
> +fmt:
> + cargo +nightly fmt -- --check
> +
> +# Doc without dependencies
> +.PHONY: doc
> +doc:
> + cargo doc --no-deps
> +
> +.PHONY: clean
> +clean:
> + cargo clean
> + rm -rf build/
> + rm -f -- *-deb *-dsc *-autopkgtest *.build *.buildinfo *.changes
> +
> +.PHONY: update
> +update:
> + cargo update
> +
> +%-upload: %-deb
> + cd build; \
> + dcmd --deb rust-$*_*.changes \
> + | grep -v '.changes$$' \
> + | tar -cf "$@.tar" -T-; \
> + cat "$@.tar" | ssh -X repoman@repo.proxmox.com upload --product devel --dist bookworm
> diff --git a/build.sh b/build.sh
> new file mode 100755
> index 0000000..39a8302
> --- /dev/null
> +++ b/build.sh
> @@ -0,0 +1,35 @@
> +#!/bin/sh
> +
> +set -eux
> +
> +export CARGO=/usr/bin/cargo
> +export RUSTC=/usr/bin/rustc
> +
> +CRATE=$1
> +BUILDCMD=${BUILDCMD:-"dpkg-buildpackage -b -uc -us"}
> +
> +mkdir -p build
> +echo system >build/rust-toolchain
> +rm -rf "build/${CRATE}"
> +
> +CONTROL="$PWD/${CRATE}/debian/control"
> +
> +if [ -e "$CONTROL" ]; then
> + # check but only warn, debcargo fails anyway if crates are missing
> + dpkg-checkbuilddeps $PWD/${CRATE}/debian/control || true
> + # rm -f "$PWD/${CRATE}/debian/control"
> +fi
> +
> +debcargo package \
> + --config "$PWD/${CRATE}/debian/debcargo.toml" \
> + --changelog-ready \
> + --no-overlay-write-back \
> + --directory "$PWD/build/${CRATE}" \
> + "${CRATE}" \
> + "$(dpkg-parsechangelog -l "${CRATE}/debian/changelog" -SVersion | sed -e 's/-.*//')"
> +
> +cd "build/${CRATE}"
> +rm -f debian/source/format.debcargo.hint
> +${BUILDCMD}
> +
> +cp debian/control "$CONTROL"
> diff --git a/bump.sh b/bump.sh
> new file mode 100755
> index 0000000..08ad119
> --- /dev/null
> +++ b/bump.sh
> @@ -0,0 +1,44 @@
> +#!/bin/bash
> +
> +package=$1
> +
> +if [[ -z "$package" ]]; then
> + echo "USAGE:"
> + echo -e "\t bump.sh <crate> [patch|minor|major|<version>]"
> + echo ""
> + echo "Defaults to bumping patch version by 1"
> + exit 0
> +fi
> +
> +cargo_set_version="$(command -v cargo-set-version)"
> +if [[ -z "$cargo_set_version" || ! -x "$cargo_set_version" ]]; then
> + echo 'bump.sh requires "cargo set-version", provided by "cargo-edit".'
> + exit 1
> +fi
> +
> +if [[ ! -e "$package/Cargo.toml" ]]; then
> + echo "Invalid crate '$package'"
> + exit 1
> +fi
> +
> +version=$2
> +if [[ -z "$version" ]]; then
> + version="patch"
> +fi
> +
> +case "$version" in
> + patch|minor|major)
> + bump="--bump"
> + ;;
> + *)
> + bump=
> + ;;
> +esac
> +
> +cargo_toml="$package/Cargo.toml"
> +changelog="$package/debian/changelog"
> +
> +cargo set-version -p "$package" $bump "$version"
> +version="$(cargo metadata --format-version=1 | jq ".packages[] | select(.name == \"$package\").version" | sed -e 's/\"//g')"
> +DEBFULLNAME="Proxmox Support Team" DEBEMAIL="support@proxmox.com" dch --no-conf --changelog "$changelog" --newversion "$version-1" --distribution stable
> +git commit --edit -sm "bump $package to $version-1" Cargo.toml "$cargo_toml" "$changelog"
> diff --git a/proxmox-ve-config/Cargo.toml b/proxmox-ve-config/Cargo.toml
> index cc689c8..ab8a7a0 100644
> --- a/proxmox-ve-config/Cargo.toml
> +++ b/proxmox-ve-config/Cargo.toml
> @@ -1,14 +1,10 @@
> [package]
> name = "proxmox-ve-config"
> version = "0.1.0"
> -edition = "2021"
> -authors = [
> - "Wolfgang Bumiller <w.bumiller@proxmox.com>",
> - "Stefan Hanreich <s.hanreich@proxmox.com>",
> - "Proxmox Support Team <support@proxmox.com>",
> -]
> -description = "Proxmox VE config parsing"
> -license = "AGPL-3"
> +authors.workspace = true
> +edition.workspace = true
> +license.workspace = true
> +exclude.workspace = true
>
> [dependencies]
> log = "0.4"
> @@ -20,6 +16,6 @@ serde_json = "1"
> serde_plain = "1"
> serde_with = "2.3.3"
>
> -proxmox-schema = "3.1.0"
> -proxmox-sys = "0.5.3"
> +proxmox-schema = "3.1.1"
> +proxmox-sys = "0.5.8"
> proxmox-sortable-macro = "0.1.3"
I know it's been a while, but proxmox-sys and serde_with both need a
bump, so leaving this here for your convenience:
serde_with = "3.8.1"
proxmox-sys = "0.6.2"
> diff --git a/proxmox-ve-config/debian/changelog b/proxmox-ve-config/debian/changelog
> new file mode 100644
> index 0000000..0dfd399
> --- /dev/null
> +++ b/proxmox-ve-config/debian/changelog
> @@ -0,0 +1,5 @@
> +proxmox-ve-config (0.1.0) UNRELEASED; urgency=medium
> +
> + * Initial release.
> +
> + -- Proxmox Support Team <support@proxmox.com> Mon, 03 Jun 2024 10:51:11 +0200
> diff --git a/proxmox-ve-config/debian/control b/proxmox-ve-config/debian/control
> new file mode 100644
> index 0000000..97f5e54
> --- /dev/null
> +++ b/proxmox-ve-config/debian/control
> @@ -0,0 +1,43 @@
> +Source: proxmox-ve-config
> +Section: rust
> +Priority: optional
> +Maintainer: Proxmox Support Team <support@proxmox.com>
> +Build-Depends: cargo:native,
> + librust-anyhow-1+default-dev,
> + librust-log-0.4+default-dev (>= 0.4.17-~~),
> + librust-nix-0.26+default-dev (>= 0.26.1-~~),
> + librust-proxmox-schema-3+default-dev,
> + librust-proxmox-sortable-macro-dev,
> + librust-proxmox-sys-dev,
> + librust-serde-1+default-dev,
> + librust-serde-1+derive-dev,
> + librust-serde-json-1+default-dev,
> + librust-serde-plain-1+default-dev,
> + librust-serde-with+default-dev,
> + libstd-rust-dev,
> + netbase,
> + python3,
> + rustc:native,
> +Standards-Version: 4.6.2
> +Homepage: https://www.proxmox.com
> +
> +Package: librust-proxmox-ve-config-dev
> +Architecture: any
> +Multi-Arch: same
> +Depends:
> + ${misc:Depends},
> + librust-anyhow-1+default-dev,
> + librust-log-0.4+default-dev (>= 0.4.17-~~),
> + librust-nix-0.26+default-dev (>= 0.26.1-~~),
> + librust-proxmox-schema-3+default-dev,
> + librust-proxmox-sortable-macro-dev,
> + librust-proxmox-sys-dev,
> + librust-serde-1+default-dev,
> + librust-serde-1+derive-dev,
> + librust-serde-json-1+default-dev,
> + librust-serde-plain-1+default-dev,
> + librust-serde-with+default-dev,
> + libstd-rust-dev,
> +Description: Proxmox's nftables-based firewall written in rust
> + This package contains a nftables-based implementation of the Proxmox VE
> + Firewall
> diff --git a/proxmox-ve-config/debian/copyright b/proxmox-ve-config/debian/copyright
> new file mode 100644
> index 0000000..2d3374f
> --- /dev/null
> +++ b/proxmox-ve-config/debian/copyright
> @@ -0,0 +1,19 @@
> +Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
> +
> +Files:
> + *
> +Copyright: 2019 - 2024 Proxmox Server Solutions GmbH <support@proxmox.com>
> +License: AGPL-3.0-or-later
> + This program is free software: you can redistribute it and/or modify it under
> + the terms of the GNU Affero General Public License as published by the Free
> + Software Foundation, either version 3 of the License, or (at your option) any
> + later version.
> + .
> + This program is distributed in the hope that it will be useful, but WITHOUT
> + ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
> + FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
> + details.
> + .
> + You should have received a copy of the GNU Affero General Public License along
> + with this program. If not, see <https://www.gnu.org/licenses/>.
> +
> diff --git a/proxmox-ve-config/debian/debcargo.toml b/proxmox-ve-config/debian/debcargo.toml
> new file mode 100644
> index 0000000..27510eb
> --- /dev/null
> +++ b/proxmox-ve-config/debian/debcargo.toml
> @@ -0,0 +1,4 @@
> +overlay = "."
> +crate_src_path = ".."
> +maintainer = "Proxmox Support Team <support@proxmox.com>"
> +
* [pve-devel] [PATCH proxmox-ve-rs 02/21] firewall: add ip range types
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-08-13 16:08 ` Max Carrara
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 03/21] firewall: address: use new iprange type for ip entries Stefan Hanreich
` (22 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Currently we are using tuples to represent IP ranges, which is
suboptimal. Validation logic and invariant checking need to happen at
every site using the IP range, rather than being enforced by a
unified struct.
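To illustrate the intent, a minimal sketch of how the new type is meant
to be used (module paths assumed from the diff below, not part of this
patch):

    // module paths assumed from the repository layout
    use proxmox_ve_config::firewall::types::address::{IpRange, IpRangeError};

    fn main() -> Result<(), IpRangeError> {
        // the constructor validates the invariants once, at the type boundary
        let range = IpRange::new_v4([10, 0, 0, 1], [10, 0, 0, 100])?;
        assert_eq!(range.to_string(), "10.0.0.1-10.0.0.100");

        // mixed address families are rejected centrally ...
        assert_eq!(
            IpRange::new([10, 0, 0, 1], [0x2001, 0x0db8, 0, 0, 0, 0, 0, 1]),
            Err(IpRangeError::MismatchedFamilies)
        );

        // ... as is a start address greater than the end address
        assert_eq!(
            IpRange::new([10, 0, 0, 2], [10, 0, 0, 1]),
            Err(IpRangeError::StartGreaterThanEnd)
        );

        Ok(())
    }

Call sites then get the same checks from the constructor instead of
re-implementing them locally.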
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
.../src/firewall/types/address.rs | 230 +++++++++++++++++-
1 file changed, 228 insertions(+), 2 deletions(-)
diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
index e48ac1b..ddf4652 100644
--- a/proxmox-ve-config/src/firewall/types/address.rs
+++ b/proxmox-ve-config/src/firewall/types/address.rs
@@ -1,9 +1,9 @@
-use std::fmt;
+use std::fmt::{self, Display};
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};
use std::ops::Deref;
use anyhow::{bail, format_err, Error};
-use serde_with::DeserializeFromStr;
+use serde_with::{DeserializeFromStr, SerializeDisplay};
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum Family {
@@ -239,6 +239,202 @@ impl<T: Into<Ipv6Addr>> From<T> for Ipv6Cidr {
}
}
+#[derive(Clone, Copy, Debug, PartialOrd, Ord, PartialEq, Eq, Hash)]
+pub enum IpRangeError {
+ MismatchedFamilies,
+ StartGreaterThanEnd,
+ InvalidFormat,
+}
+
+impl std::error::Error for IpRangeError {}
+
+impl Display for IpRangeError {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.write_str(match self {
+ IpRangeError::MismatchedFamilies => "mismatched ip address families",
+ IpRangeError::StartGreaterThanEnd => "start is greater than end",
+ IpRangeError::InvalidFormat => "invalid ip range format",
+ })
+ }
+}
+
+/// represents a range of IPv4 or IPv6 addresses
+///
+/// For more information see [`AddressRange`]
+#[derive(Clone, Copy, Debug, PartialEq, Eq, SerializeDisplay, DeserializeFromStr)]
+pub enum IpRange {
+ V4(AddressRange<Ipv4Addr>),
+ V6(AddressRange<Ipv6Addr>),
+}
+
+impl IpRange {
+ /// returns the family of the IpRange
+ pub fn family(&self) -> Family {
+ match self {
+ IpRange::V4(_) => Family::V4,
+ IpRange::V6(_) => Family::V6,
+ }
+ }
+
+ /// creates a new [`IpRange`] from two [`IpAddr`]
+ ///
+ /// # Errors
+ ///
+ /// This function will return an error if start and end IP address are not from the same family.
+ pub fn new(start: impl Into<IpAddr>, end: impl Into<IpAddr>) -> Result<Self, IpRangeError> {
+ match (start.into(), end.into()) {
+ (IpAddr::V4(start), IpAddr::V4(end)) => Self::new_v4(start, end),
+ (IpAddr::V6(start), IpAddr::V6(end)) => Self::new_v6(start, end),
+ _ => Err(IpRangeError::MismatchedFamilies),
+ }
+ }
+
+ /// construct a new Ipv4 Range
+ pub fn new_v4(
+ start: impl Into<Ipv4Addr>,
+ end: impl Into<Ipv4Addr>,
+ ) -> Result<Self, IpRangeError> {
+ Ok(IpRange::V4(AddressRange::new_v4(start, end)?))
+ }
+
+ pub fn new_v6(
+ start: impl Into<Ipv6Addr>,
+ end: impl Into<Ipv6Addr>,
+ ) -> Result<Self, IpRangeError> {
+ Ok(IpRange::V6(AddressRange::new_v6(start, end)?))
+ }
+}
+
+impl std::str::FromStr for IpRange {
+ type Err = IpRangeError;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ if let Ok(range) = s.parse() {
+ return Ok(IpRange::V4(range));
+ }
+
+ if let Ok(range) = s.parse() {
+ return Ok(IpRange::V6(range));
+ }
+
+ Err(IpRangeError::InvalidFormat)
+ }
+}
+
+impl fmt::Display for IpRange {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match self {
+ IpRange::V4(range) => range.fmt(f),
+ IpRange::V6(range) => range.fmt(f),
+ }
+ }
+}
+
+/// represents a range of IP addresses from start to end
+///
+/// This type is for encapsulation purposes for the [`IpRange`] enum and should be instantiated via
+/// that enum.
+///
+/// # Invariants
+///
+/// * start and end have the same IP address family
+/// * start is lesser than or equal to end
+///
+/// # Textual representation
+///
+/// Two IP addresses separated by a hyphen, e.g.: `127.0.0.1-127.0.0.255`
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub struct AddressRange<T> {
+ start: T,
+ end: T,
+}
+
+impl AddressRange<Ipv4Addr> {
+ pub(crate) fn new_v4(
+ start: impl Into<Ipv4Addr>,
+ end: impl Into<Ipv4Addr>,
+ ) -> Result<AddressRange<Ipv4Addr>, IpRangeError> {
+ let (start, end) = (start.into(), end.into());
+
+ if start > end {
+ return Err(IpRangeError::StartGreaterThanEnd);
+ }
+
+ Ok(Self { start, end })
+ }
+}
+
+impl AddressRange<Ipv6Addr> {
+ pub(crate) fn new_v6(
+ start: impl Into<Ipv6Addr>,
+ end: impl Into<Ipv6Addr>,
+ ) -> Result<AddressRange<Ipv6Addr>, IpRangeError> {
+ let (start, end) = (start.into(), end.into());
+
+ if start > end {
+ return Err(IpRangeError::StartGreaterThanEnd);
+ }
+
+ Ok(Self { start, end })
+ }
+}
+
+impl<T> AddressRange<T> {
+ pub fn start(&self) -> &T {
+ &self.start
+ }
+
+ pub fn end(&self) -> &T {
+ &self.end
+ }
+}
+
+impl std::str::FromStr for AddressRange<Ipv4Addr> {
+ type Err = IpRangeError;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ if let Some((start, end)) = s.split_once('-') {
+ let start_address = start
+ .parse::<Ipv4Addr>()
+ .map_err(|_| IpRangeError::InvalidFormat)?;
+
+ let end_address = end
+ .parse::<Ipv4Addr>()
+ .map_err(|_| IpRangeError::InvalidFormat)?;
+
+ return Self::new_v4(start_address, end_address);
+ }
+
+ Err(IpRangeError::InvalidFormat)
+ }
+}
+
+impl std::str::FromStr for AddressRange<Ipv6Addr> {
+ type Err = IpRangeError;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ if let Some((start, end)) = s.split_once('-') {
+ let start_address = start
+ .parse::<Ipv6Addr>()
+ .map_err(|_| IpRangeError::InvalidFormat)?;
+
+ let end_address = end
+ .parse::<Ipv6Addr>()
+ .map_err(|_| IpRangeError::InvalidFormat)?;
+
+ return Self::new_v6(start_address, end_address);
+ }
+
+ Err(IpRangeError::InvalidFormat)
+ }
+}
+
+impl<T: fmt::Display> fmt::Display for AddressRange<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ write!(f, "{}-{}", self.start, self.end)
+ }
+}
+
#[derive(Clone, Debug)]
#[cfg_attr(test, derive(Eq, PartialEq))]
pub enum IpEntry {
@@ -612,4 +808,34 @@ mod tests {
])
.expect_err("cannot mix ip families in ip list");
}
+
+ #[test]
+ fn test_ip_range() {
+ IpRange::new([10, 0, 0, 2], [10, 0, 0, 1]).unwrap_err();
+
+ IpRange::new(
+ [0x2001, 0x0db8, 0, 0, 0, 0, 0, 0x1000],
+ [0x2001, 0x0db8, 0, 0, 0, 0, 0, 0],
+ )
+ .unwrap_err();
+
+ let v4_range = IpRange::new([10, 0, 0, 0], [10, 0, 0, 100]).unwrap();
+ assert_eq!(v4_range.family(), Family::V4);
+
+ let v6_range = IpRange::new(
+ [0x2001, 0x0db8, 0, 0, 0, 0, 0, 0],
+ [0x2001, 0x0db8, 0, 0, 0, 0, 0, 0x1000],
+ )
+ .unwrap();
+ assert_eq!(v6_range.family(), Family::V6);
+
+ "10.0.0.1-10.0.0.100".parse::<IpRange>().unwrap();
+ "2001:db8::1-2001:db8::f".parse::<IpRange>().unwrap();
+
+ "10.0.0.1-2001:db8::1000".parse::<IpRange>().unwrap_err();
+ "2001:db8::1-192.168.0.2".parse::<IpRange>().unwrap_err();
+
+ "10.0.0.1-10.0.0.0".parse::<IpRange>().unwrap_err();
+ "2001:db8::1-2001:db8::0".parse::<IpRange>().unwrap_err();
+ }
}
--
2.39.2
* Re: [pve-devel] [PATCH proxmox-ve-rs 02/21] firewall: add ip range types
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 02/21] firewall: add ip range types Stefan Hanreich
@ 2024-08-13 16:08 ` Max Carrara
0 siblings, 0 replies; 43+ messages in thread
From: Max Carrara @ 2024-08-13 16:08 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed Jun 26, 2024 at 2:15 PM CEST, Stefan Hanreich wrote:
> Currently we are using tuples to represent IP ranges, which is
> suboptimal. Validation logic and invariant checking need to happen at
> every site using the IP range, rather than being enforced by a
> unified struct.
That's something I completely support; as you know I'm a fan of
representing state / invariants / etc. via types ;)
>
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
> .../src/firewall/types/address.rs | 230 +++++++++++++++++-
> 1 file changed, 228 insertions(+), 2 deletions(-)
>
> diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
> index e48ac1b..ddf4652 100644
> --- a/proxmox-ve-config/src/firewall/types/address.rs
> +++ b/proxmox-ve-config/src/firewall/types/address.rs
> @@ -1,9 +1,9 @@
> -use std::fmt;
> +use std::fmt::{self, Display};
> use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};
> use std::ops::Deref;
>
> use anyhow::{bail, format_err, Error};
> -use serde_with::DeserializeFromStr;
> +use serde_with::{DeserializeFromStr, SerializeDisplay};
>
> #[derive(Clone, Copy, Debug, Eq, PartialEq)]
> pub enum Family {
> @@ -239,6 +239,202 @@ impl<T: Into<Ipv6Addr>> From<T> for Ipv6Cidr {
> }
> }
>
> +#[derive(Clone, Copy, Debug, PartialOrd, Ord, PartialEq, Eq, Hash)]
> +pub enum IpRangeError {
> + MismatchedFamilies,
> + StartGreaterThanEnd,
> + InvalidFormat,
> +}
> +
> +impl std::error::Error for IpRangeError {}
> +
> +impl Display for IpRangeError {
> + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
> + f.write_str(match self {
> + IpRangeError::MismatchedFamilies => "mismatched ip address families",
> + IpRangeError::StartGreaterThanEnd => "start is greater than end",
> + IpRangeError::InvalidFormat => "invalid ip range format",
> + })
> + }
> +}
> +
> +/// represents a range of IPv4 or IPv6 addresses
Small thing: I'd prefer
Represents a range of IPv4 or IPv6 addresses.
instead.
Mainly because most docstrings are written that way, and you do it in
later patches in a couple places as well anyways. Just gonna mention
this here once as you do this a couple times in this series in order to
avoid unnecessary noise.
IMO it's a really minor thing, but since you told me off-list that
you're in the process of neatly documenting everything, I thought I'd
mention it here.
> +///
> +/// For more information see [`AddressRange`]
> +#[derive(Clone, Copy, Debug, PartialEq, Eq, SerializeDisplay, DeserializeFromStr)]
> +pub enum IpRange {
> + V4(AddressRange<Ipv4Addr>),
> + V6(AddressRange<Ipv6Addr>),
> +}
> +
> +impl IpRange {
> + /// returns the family of the IpRange
> + pub fn family(&self) -> Family {
> + match self {
> + IpRange::V4(_) => Family::V4,
> + IpRange::V6(_) => Family::V6,
> + }
> + }
> +
> + /// creates a new [`IpRange`] from two [`IpAddr`]
> + ///
> + /// # Errors
> + ///
> + /// This function will return an error if start and end IP address are not from the same family.
> + pub fn new(start: impl Into<IpAddr>, end: impl Into<IpAddr>) -> Result<Self, IpRangeError> {
> + match (start.into(), end.into()) {
> + (IpAddr::V4(start), IpAddr::V4(end)) => Self::new_v4(start, end),
> + (IpAddr::V6(start), IpAddr::V6(end)) => Self::new_v6(start, end),
> + _ => Err(IpRangeError::MismatchedFamilies),
> + }
> + }
> +
> + /// construct a new Ipv4 Range
> + pub fn new_v4(
> + start: impl Into<Ipv4Addr>,
> + end: impl Into<Ipv4Addr>,
> + ) -> Result<Self, IpRangeError> {
> + Ok(IpRange::V4(AddressRange::new_v4(start, end)?))
> + }
> +
> + pub fn new_v6(
> + start: impl Into<Ipv6Addr>,
> + end: impl Into<Ipv6Addr>,
> + ) -> Result<Self, IpRangeError> {
> + Ok(IpRange::V6(AddressRange::new_v6(start, end)?))
> + }
> +}
> +
> +impl std::str::FromStr for IpRange {
> + type Err = IpRangeError;
> +
> + fn from_str(s: &str) -> Result<Self, Self::Err> {
> + if let Ok(range) = s.parse() {
> + return Ok(IpRange::V4(range));
> + }
> +
> + if let Ok(range) = s.parse() {
> + return Ok(IpRange::V6(range));
> + }
> +
> + Err(IpRangeError::InvalidFormat)
> + }
> +}
> +
> +impl fmt::Display for IpRange {
> + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
> + match self {
> + IpRange::V4(range) => range.fmt(f),
> + IpRange::V6(range) => range.fmt(f),
> + }
> + }
> +}
> +
> +/// represents a range of IP addresses from start to end
> +///
> +/// This type is for encapsulation purposes for the [`IpRange`] enum and should be instantiated via
> +/// that enum.
> +///
> +/// # Invariants
> +///
> +/// * start and end have the same IP address family
> +/// * start is lesser than or equal to end
> +///
> +/// # Textual representation
> +///
> +/// Two IP addresses separated by a hyphen, e.g.: `127.0.0.1-127.0.0.255`
> +#[derive(Clone, Copy, Debug, PartialEq, Eq)]
> +pub struct AddressRange<T> {
> + start: T,
> + end: T,
> +}
> +
> +impl AddressRange<Ipv4Addr> {
> + pub(crate) fn new_v4(
> + start: impl Into<Ipv4Addr>,
> + end: impl Into<Ipv4Addr>,
> + ) -> Result<AddressRange<Ipv4Addr>, IpRangeError> {
> + let (start, end) = (start.into(), end.into());
> +
> + if start > end {
> + return Err(IpRangeError::StartGreaterThanEnd);
> + }
> +
> + Ok(Self { start, end })
> + }
> +}
> +
> +impl AddressRange<Ipv6Addr> {
> + pub(crate) fn new_v6(
> + start: impl Into<Ipv6Addr>,
> + end: impl Into<Ipv6Addr>,
> + ) -> Result<AddressRange<Ipv6Addr>, IpRangeError> {
> + let (start, end) = (start.into(), end.into());
> +
> + if start > end {
> + return Err(IpRangeError::StartGreaterThanEnd);
> + }
> +
> + Ok(Self { start, end })
> + }
> +}
> +
> +impl<T> AddressRange<T> {
> + pub fn start(&self) -> &T {
> + &self.start
> + }
> +
> + pub fn end(&self) -> &T {
> + &self.end
> + }
> +}
> +
> +impl std::str::FromStr for AddressRange<Ipv4Addr> {
> + type Err = IpRangeError;
> +
> + fn from_str(s: &str) -> Result<Self, Self::Err> {
> + if let Some((start, end)) = s.split_once('-') {
> + let start_address = start
> + .parse::<Ipv4Addr>()
> + .map_err(|_| IpRangeError::InvalidFormat)?;
> +
> + let end_address = end
> + .parse::<Ipv4Addr>()
> + .map_err(|_| IpRangeError::InvalidFormat)?;
> +
> + return Self::new_v4(start_address, end_address);
> + }
> +
> + Err(IpRangeError::InvalidFormat)
> + }
> +}
> +
> +impl std::str::FromStr for AddressRange<Ipv6Addr> {
> + type Err = IpRangeError;
> +
> + fn from_str(s: &str) -> Result<Self, Self::Err> {
> + if let Some((start, end)) = s.split_once('-') {
> + let start_address = start
> + .parse::<Ipv6Addr>()
> + .map_err(|_| IpRangeError::InvalidFormat)?;
> +
> + let end_address = end
> + .parse::<Ipv6Addr>()
> + .map_err(|_| IpRangeError::InvalidFormat)?;
> +
> + return Self::new_v6(start_address, end_address);
> + }
> +
> + Err(IpRangeError::InvalidFormat)
> + }
> +}
> +
> +impl<T: fmt::Display> fmt::Display for AddressRange<T> {
> + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
> + write!(f, "{}-{}", self.start, self.end)
> + }
> +}
> +
> #[derive(Clone, Debug)]
> #[cfg_attr(test, derive(Eq, PartialEq))]
> pub enum IpEntry {
> @@ -612,4 +808,34 @@ mod tests {
> ])
> .expect_err("cannot mix ip families in ip list");
> }
> +
> + #[test]
> + fn test_ip_range() {
> + IpRange::new([10, 0, 0, 2], [10, 0, 0, 1]).unwrap_err();
> +
> + IpRange::new(
> + [0x2001, 0x0db8, 0, 0, 0, 0, 0, 0x1000],
> + [0x2001, 0x0db8, 0, 0, 0, 0, 0, 0],
> + )
> + .unwrap_err();
> +
> + let v4_range = IpRange::new([10, 0, 0, 0], [10, 0, 0, 100]).unwrap();
> + assert_eq!(v4_range.family(), Family::V4);
> +
> + let v6_range = IpRange::new(
> + [0x2001, 0x0db8, 0, 0, 0, 0, 0, 0],
> + [0x2001, 0x0db8, 0, 0, 0, 0, 0, 0x1000],
> + )
> + .unwrap();
> + assert_eq!(v6_range.family(), Family::V6);
> +
> + "10.0.0.1-10.0.0.100".parse::<IpRange>().unwrap();
> + "2001:db8::1-2001:db8::f".parse::<IpRange>().unwrap();
> +
> + "10.0.0.1-2001:db8::1000".parse::<IpRange>().unwrap_err();
> + "2001:db8::1-192.168.0.2".parse::<IpRange>().unwrap_err();
> +
> + "10.0.0.1-10.0.0.0".parse::<IpRange>().unwrap_err();
> + "2001:db8::1-2001:db8::0".parse::<IpRange>().unwrap_err();
> + }
> }
* [pve-devel] [PATCH proxmox-ve-rs 03/21] firewall: address: use new iprange type for ip entries
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 02/21] firewall: add ip range types Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 04/21] ipset: add range variant to addresses Stefan Hanreich
` (21 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
.../src/firewall/types/address.rs | 81 +++++++------------
proxmox-ve-config/src/firewall/types/rule.rs | 6 +-
2 files changed, 31 insertions(+), 56 deletions(-)
diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
index ddf4652..8db3942 100644
--- a/proxmox-ve-config/src/firewall/types/address.rs
+++ b/proxmox-ve-config/src/firewall/types/address.rs
@@ -439,57 +439,30 @@ impl<T: fmt::Display> fmt::Display for AddressRange<T> {
#[cfg_attr(test, derive(Eq, PartialEq))]
pub enum IpEntry {
Cidr(Cidr),
- Range(IpAddr, IpAddr),
+ Range(IpRange),
}
impl std::str::FromStr for IpEntry {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Error> {
- if s.is_empty() {
- bail!("Empty IP specification!")
+ if let Ok(cidr) = s.parse() {
+ return Ok(IpEntry::Cidr(cidr));
}
- let entries: Vec<&str> = s
- .split('-')
- .take(3) // so we can check whether there are too many
- .collect();
-
- match entries.as_slice() {
- [cidr] => Ok(IpEntry::Cidr(cidr.parse()?)),
- [beg, end] => {
- if let Ok(beg) = beg.parse::<Ipv4Addr>() {
- if let Ok(end) = end.parse::<Ipv4Addr>() {
- if beg < end {
- return Ok(IpEntry::Range(beg.into(), end.into()));
- }
-
- bail!("start address is greater than end address!");
- }
- }
-
- if let Ok(beg) = beg.parse::<Ipv6Addr>() {
- if let Ok(end) = end.parse::<Ipv6Addr>() {
- if beg < end {
- return Ok(IpEntry::Range(beg.into(), end.into()));
- }
-
- bail!("start address is greater than end address!");
- }
- }
-
- bail!("start and end are not valid IP addresses of the same type!")
- }
- _ => bail!("Invalid amount of elements in IpEntry!"),
+ if let Ok(range) = s.parse() {
+ return Ok(IpEntry::Range(range));
}
+
+ bail!("Invalid IP entry: {s}");
}
}
impl fmt::Display for IpEntry {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
- Self::Cidr(ip) => write!(f, "{ip}"),
- Self::Range(beg, end) => write!(f, "{beg}-{end}"),
+ Self::Cidr(ip) => ip.fmt(f),
+ Self::Range(range) => range.fmt(f),
}
}
}
@@ -498,19 +471,7 @@ impl IpEntry {
fn family(&self) -> Family {
match self {
Self::Cidr(cidr) => cidr.family(),
- Self::Range(start, end) => {
- if start.is_ipv4() && end.is_ipv4() {
- return Family::V4;
- }
-
- if start.is_ipv6() && end.is_ipv6() {
- return Family::V6;
- }
-
- // should never be reached due to constructors validating that
- // start type == end type
- unreachable!("invalid IP entry")
- }
+ Self::Range(range) => range.family(),
}
}
}
@@ -521,6 +482,12 @@ impl From<Cidr> for IpEntry {
}
}
+impl From<IpRange> for IpEntry {
+ fn from(value: IpRange) -> Self {
+ IpEntry::Range(value)
+ }
+}
+
#[derive(Clone, Debug, DeserializeFromStr)]
#[cfg_attr(test, derive(Eq, PartialEq))]
pub struct IpList {
@@ -708,7 +675,9 @@ mod tests {
assert_eq!(
entry,
- IpEntry::Range([192, 168, 0, 1].into(), [192, 168, 99, 255].into())
+ IpRange::new_v4([192, 168, 0, 1], [192, 168, 99, 255])
+ .expect("valid IP range")
+ .into()
);
entry = "fe80::1".parse().expect("valid IP entry");
@@ -733,10 +702,12 @@ mod tests {
assert_eq!(
entry,
- IpEntry::Range(
- [0xFD80, 0, 0, 0, 0, 0, 0, 1].into(),
- [0xFD80, 0, 0, 0, 0, 0, 0, 0xFFFF].into(),
+ IpRange::new_v6(
+ [0xFD80, 0, 0, 0, 0, 0, 0, 1],
+ [0xFD80, 0, 0, 0, 0, 0, 0, 0xFFFF],
)
+ .expect("valid IP range")
+ .into()
);
"192.168.100.0-192.168.99.255"
@@ -764,7 +735,9 @@ mod tests {
entries: vec![
IpEntry::Cidr(Cidr::new_v4([192, 168, 0, 1], 32).unwrap()),
IpEntry::Cidr(Cidr::new_v4([192, 168, 100, 0], 24).unwrap()),
- IpEntry::Range([172, 16, 0, 0].into(), [172, 32, 255, 255].into()),
+ IpRange::new_v4([172, 16, 0, 0], [172, 32, 255, 255])
+ .unwrap()
+ .into(),
],
family: Family::V4,
}
diff --git a/proxmox-ve-config/src/firewall/types/rule.rs b/proxmox-ve-config/src/firewall/types/rule.rs
index 20deb3a..5374bb0 100644
--- a/proxmox-ve-config/src/firewall/types/rule.rs
+++ b/proxmox-ve-config/src/firewall/types/rule.rs
@@ -242,7 +242,7 @@ impl FromStr for RuleGroup {
#[cfg(test)]
mod tests {
use crate::firewall::types::{
- address::{IpEntry, IpList},
+ address::{IpEntry, IpList, IpRange},
alias::{AliasName, AliasScope},
ipset::{IpsetName, IpsetScope},
log::LogLevel,
@@ -322,7 +322,9 @@ mod tests {
IpAddrMatch::Ip(IpList::from(Cidr::new_v4([10, 0, 0, 0], 24).unwrap())),
IpAddrMatch::Ip(
IpList::new(vec![
- IpEntry::Range([20, 0, 0, 0].into(), [20, 255, 255, 255].into()),
+ IpRange::new_v4([20, 0, 0, 0], [20, 255, 255, 255])
+ .unwrap()
+ .into(),
IpEntry::Cidr(Cidr::new_v4([192, 168, 0, 0], 16).unwrap()),
])
.unwrap()
--
2.39.2
* [pve-devel] [PATCH proxmox-ve-rs 04/21] ipset: add range variant to addresses
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (2 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 03/21] firewall: address: use new iprange type for ip entries Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 05/21] iprange: add methods for converting an ip range to cidrs Stefan Hanreich
` (20 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
A range can be used in an ipset to store multiple IP addresses that do
not neatly fit into a single CIDR.
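For illustration, a rough sketch of how such an entry could be
constructed with the new variant (module paths assumed, not part of
this patch):

    // module paths assumed from the repository layout
    use proxmox_ve_config::firewall::types::address::IpRange;
    use proxmox_ve_config::firewall::types::ipset::IpsetAddress;

    fn main() {
        // 192.168.0.100-192.168.0.200 cannot be expressed as a single CIDR
        let range = IpRange::new_v4([192, 168, 0, 100], [192, 168, 0, 200])
            .expect("valid range");

        // the new From<IpRange> impl wraps it in the Range variant
        let address = IpsetAddress::from(range);
        assert!(matches!(address, IpsetAddress::Range(_)));
    }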
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-ve-config/src/firewall/types/ipset.rs | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/proxmox-ve-config/src/firewall/types/ipset.rs b/proxmox-ve-config/src/firewall/types/ipset.rs
index c1af642..4ddf6d1 100644
--- a/proxmox-ve-config/src/firewall/types/ipset.rs
+++ b/proxmox-ve-config/src/firewall/types/ipset.rs
@@ -6,7 +6,7 @@ use anyhow::{bail, format_err, Error};
use serde_with::DeserializeFromStr;
use crate::firewall::parse::match_non_whitespace;
-use crate::firewall::types::address::Cidr;
+use crate::firewall::types::address::{Cidr, IpRange};
use crate::firewall::types::alias::AliasName;
use crate::guest::vm::NetworkConfig;
@@ -90,6 +90,7 @@ impl Display for IpsetName {
pub enum IpsetAddress {
Alias(AliasName),
Cidr(Cidr),
+ Range(IpRange),
}
impl FromStr for IpsetAddress {
@@ -114,6 +115,12 @@ impl<T: Into<Cidr>> From<T> for IpsetAddress {
}
}
+impl From<IpRange> for IpsetAddress {
+ fn from(range: IpRange) -> Self {
+ IpsetAddress::Range(range)
+ }
+}
+
#[derive(Debug)]
#[cfg_attr(test, derive(Eq, PartialEq))]
pub struct IpsetEntry {
--
2.39.2
* [pve-devel] [PATCH proxmox-ve-rs 05/21] iprange: add methods for converting an ip range to cidrs
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (3 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 04/21] ipset: add range variant to addresses Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-08-13 16:09 ` Max Carrara
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 06/21] ipset: address: add helper methods Stefan Hanreich
` (19 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
This is mainly used in proxmox-perl-rs, so that the generated ipsets
can be used in pve-firewall, where only CIDRs are supported.
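As a rough usage sketch (module path assumed; the expected CIDR count
is taken from the tests added below):

    // module path assumed from the repository layout
    use proxmox_ve_config::firewall::types::address::IpRange;

    fn main() {
        let range: IpRange = "192.168.0.100-192.168.0.200"
            .parse()
            .expect("valid range");

        // minimal covering set: .100/30, .104/29, .112/28, .128/26, .192/29, .200/32
        let cidrs = range.to_cidrs();
        assert_eq!(cidrs.len(), 6);
    }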
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
.../src/firewall/types/address.rs | 818 ++++++++++++++++++
1 file changed, 818 insertions(+)
diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
index 8db3942..3238601 100644
--- a/proxmox-ve-config/src/firewall/types/address.rs
+++ b/proxmox-ve-config/src/firewall/types/address.rs
@@ -303,6 +303,17 @@ impl IpRange {
) -> Result<Self, IpRangeError> {
Ok(IpRange::V6(AddressRange::new_v6(start, end)?))
}
+
+ /// converts an IpRange into the minimal amount of CIDRs
+ ///
+ /// see the concrete implementations of [`AddressRange<Ipv4Addr>`] or [`AddressRange<Ipv6Addr>`]
+ /// respectively
+ pub fn to_cidrs(&self) -> Vec<Cidr> {
+ match self {
+ IpRange::V4(range) => range.to_cidrs().into_iter().map(Cidr::from).collect(),
+ IpRange::V6(range) => range.to_cidrs().into_iter().map(Cidr::from).collect(),
+ }
+ }
}
impl std::str::FromStr for IpRange {
@@ -362,6 +373,71 @@ impl AddressRange<Ipv4Addr> {
Ok(Self { start, end })
}
+
+ /// returns the minimum amount of CIDRs that exactly represent the range
+ ///
+ /// The idea behind this algorithm is as follows:
+ ///
+ /// Start iterating with current = start of the IP range
+ ///
+ /// Find two netmasks
+ /// * The largest CIDR that the current IP can be the first of
+ /// * The largest CIDR that *only* contains IPs from current - end
+ ///
+ /// Add the smaller of the two CIDRs to our result and current to the first IP that is in
+ /// the range but not in the CIDR we just added. Proceed until we reached the end of the IP
+ /// range.
+ ///
+ pub fn to_cidrs(&self) -> Vec<Ipv4Cidr> {
+ let mut cidrs = Vec::new();
+
+ let mut current = u32::from_be_bytes(self.start.octets());
+ let end = u32::from_be_bytes(self.end.octets());
+
+ if current == end {
+ // valid Ipv4 since netmask is 32
+ cidrs.push(Ipv4Cidr::new(current, 32).unwrap());
+ return cidrs;
+ }
+
+ // special case this, since this is the only possibility of overflow
+ // when calculating delta_min_mask - makes everything a lot easier
+ if current == u32::MIN && end == u32::MAX {
+ // valid Ipv4 since it is `0.0.0.0/0`
+ cidrs.push(Ipv4Cidr::new(current, 0).unwrap());
+ return cidrs;
+ }
+
+ while current <= end {
+ // netmask of largest CIDR that current IP can be the first of
+ // cast is safe, because trailing zeroes can at most be 32
+ let current_max_mask = IPV4_LENGTH - (current.trailing_zeros() as u8);
+
+ // netmask of largest CIDR that *only* contains IPs of the remaining range
+ // is at most 32 due to unwrap_or returning 32 and ilog2 being at most 31
+ let delta_min_mask = ((end - current) + 1) // safe due to special case above
+ .checked_ilog2() // should never occur due to special case, but for good measure
+ .map(|mask| IPV4_LENGTH - mask as u8)
+ .unwrap_or(IPV4_LENGTH);
+
+ // at most 32, due to current/delta being at most 32
+ let netmask = u8::max(current_max_mask, delta_min_mask);
+
+ // netmask is at most 32, therefore safe to unwrap
+ cidrs.push(Ipv4Cidr::new(current, netmask).unwrap());
+
+ let delta = 2u32.saturating_pow((IPV4_LENGTH - netmask).into());
+
+ if let Some(result) = current.checked_add(delta) {
+ current = result
+ } else {
+ // we reached the end of IP address space
+ break;
+ }
+ }
+
+ cidrs
+ }
}
impl AddressRange<Ipv6Addr> {
@@ -377,6 +453,61 @@ impl AddressRange<Ipv6Addr> {
Ok(Self { start, end })
}
+
+ /// returns the minimum amount of CIDRs that exactly represent the range
+ ///
+ /// This function works analogous to the IPv4 version, please refer to the respective
+ /// documentation of [`AddressRange<Ipv4Addr>`]
+ pub fn to_cidrs(&self) -> Vec<Ipv6Cidr> {
+ let mut cidrs = Vec::new();
+
+ let mut current = u128::from_be_bytes(self.start.octets());
+ let end = u128::from_be_bytes(self.end.octets());
+
+ if current == end {
+ // valid Ipv6 since netmask is 128
+ cidrs.push(Ipv6Cidr::new(current, 128).unwrap());
+ return cidrs;
+ }
+
+ // special case this, since this is the only possibility of overflow
+ // when calculating delta_min_mask - makes everything a lot easier
+ if current == u128::MIN && end == u128::MAX {
+ // valid Ipv6 since it is `::/0`
+ cidrs.push(Ipv6Cidr::new(current, 0).unwrap());
+ return cidrs;
+ }
+
+ while current <= end {
+ // netmask of largest CIDR that current IP can be the first of
+ // cast is safe, because trailing zeroes can at most be 128
+ let current_max_mask = IPV6_LENGTH - (current.trailing_zeros() as u8);
+
+ // netmask of largest CIDR that *only* contains IPs of the remaining range
+ // is at most 128 due to unwrap_or returning 128 and ilog2 being at most 31
+ let delta_min_mask = ((end - current) + 1) // safe due to special case above
+ .checked_ilog2() // should never occur due to special case, but for good measure
+ .map(|mask| IPV6_LENGTH - mask as u8)
+ .unwrap_or(IPV6_LENGTH);
+
+ // at most 128, due to current/delta being at most 128
+ let netmask = u8::max(current_max_mask, delta_min_mask);
+
+ // netmask is at most 128, therefore safe to unwrap
+ cidrs.push(Ipv6Cidr::new(current, netmask).unwrap());
+
+ let delta = 2u128.saturating_pow((IPV6_LENGTH - netmask).into());
+
+ if let Some(result) = current.checked_add(delta) {
+ current = result
+ } else {
+ // we reached the end of IP address space
+ break;
+ }
+ }
+
+ cidrs
+ }
}
impl<T> AddressRange<T> {
@@ -811,4 +942,691 @@ mod tests {
"10.0.0.1-10.0.0.0".parse::<IpRange>().unwrap_err();
"2001:db8::1-2001:db8::0".parse::<IpRange>().unwrap_err();
}
+
+ #[test]
+ fn test_ipv4_to_cidrs() {
+ let range = AddressRange::new_v4([192, 168, 0, 100], [192, 168, 0, 100]).unwrap();
+
+ assert_eq!(
+ [Ipv4Cidr::new([192, 168, 0, 100], 32).unwrap()],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([192, 168, 0, 100], [192, 168, 0, 200]).unwrap();
+
+ assert_eq!(
+ [
+ Ipv4Cidr::new([192, 168, 0, 100], 30).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 104], 29).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 112], 28).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 128], 26).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 192], 29).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 200], 32).unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([192, 168, 0, 101], [192, 168, 0, 200]).unwrap();
+
+ assert_eq!(
+ [
+ Ipv4Cidr::new([192, 168, 0, 101], 32).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 102], 31).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 104], 29).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 112], 28).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 128], 26).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 192], 29).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 200], 32).unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([192, 168, 0, 101], [192, 168, 0, 101]).unwrap();
+
+ assert_eq!(
+ [Ipv4Cidr::new([192, 168, 0, 101], 32).unwrap()],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([192, 168, 0, 101], [192, 168, 0, 201]).unwrap();
+
+ assert_eq!(
+ [
+ Ipv4Cidr::new([192, 168, 0, 101], 32).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 102], 31).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 104], 29).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 112], 28).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 128], 26).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 192], 29).unwrap(),
+ Ipv4Cidr::new([192, 168, 0, 200], 31).unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([192, 168, 0, 0], [192, 168, 0, 255]).unwrap();
+
+ assert_eq!(
+ [Ipv4Cidr::new([192, 168, 0, 0], 24).unwrap(),],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([0, 0, 0, 0], [255, 255, 255, 255]).unwrap();
+
+ assert_eq!(
+ [Ipv4Cidr::new([0, 0, 0, 0], 0).unwrap(),],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([0, 0, 0, 1], [255, 255, 255, 255]).unwrap();
+
+ assert_eq!(
+ [
+ Ipv4Cidr::new([0, 0, 0, 1], 32).unwrap(),
+ Ipv4Cidr::new([0, 0, 0, 2], 31).unwrap(),
+ Ipv4Cidr::new([0, 0, 0, 4], 30).unwrap(),
+ Ipv4Cidr::new([0, 0, 0, 8], 29).unwrap(),
+ Ipv4Cidr::new([0, 0, 0, 16], 28).unwrap(),
+ Ipv4Cidr::new([0, 0, 0, 32], 27).unwrap(),
+ Ipv4Cidr::new([0, 0, 0, 64], 26).unwrap(),
+ Ipv4Cidr::new([0, 0, 0, 128], 25).unwrap(),
+ Ipv4Cidr::new([0, 0, 1, 0], 24).unwrap(),
+ Ipv4Cidr::new([0, 0, 2, 0], 23).unwrap(),
+ Ipv4Cidr::new([0, 0, 4, 0], 22).unwrap(),
+ Ipv4Cidr::new([0, 0, 8, 0], 21).unwrap(),
+ Ipv4Cidr::new([0, 0, 16, 0], 20).unwrap(),
+ Ipv4Cidr::new([0, 0, 32, 0], 19).unwrap(),
+ Ipv4Cidr::new([0, 0, 64, 0], 18).unwrap(),
+ Ipv4Cidr::new([0, 0, 128, 0], 17).unwrap(),
+ Ipv4Cidr::new([0, 1, 0, 0], 16).unwrap(),
+ Ipv4Cidr::new([0, 2, 0, 0], 15).unwrap(),
+ Ipv4Cidr::new([0, 4, 0, 0], 14).unwrap(),
+ Ipv4Cidr::new([0, 8, 0, 0], 13).unwrap(),
+ Ipv4Cidr::new([0, 16, 0, 0], 12).unwrap(),
+ Ipv4Cidr::new([0, 32, 0, 0], 11).unwrap(),
+ Ipv4Cidr::new([0, 64, 0, 0], 10).unwrap(),
+ Ipv4Cidr::new([0, 128, 0, 0], 9).unwrap(),
+ Ipv4Cidr::new([1, 0, 0, 0], 8).unwrap(),
+ Ipv4Cidr::new([2, 0, 0, 0], 7).unwrap(),
+ Ipv4Cidr::new([4, 0, 0, 0], 6).unwrap(),
+ Ipv4Cidr::new([8, 0, 0, 0], 5).unwrap(),
+ Ipv4Cidr::new([16, 0, 0, 0], 4).unwrap(),
+ Ipv4Cidr::new([32, 0, 0, 0], 3).unwrap(),
+ Ipv4Cidr::new([64, 0, 0, 0], 2).unwrap(),
+ Ipv4Cidr::new([128, 0, 0, 0], 1).unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([0, 0, 0, 0], [255, 255, 255, 254]).unwrap();
+
+ assert_eq!(
+ [
+ Ipv4Cidr::new([0, 0, 0, 0], 1).unwrap(),
+ Ipv4Cidr::new([128, 0, 0, 0], 2).unwrap(),
+ Ipv4Cidr::new([192, 0, 0, 0], 3).unwrap(),
+ Ipv4Cidr::new([224, 0, 0, 0], 4).unwrap(),
+ Ipv4Cidr::new([240, 0, 0, 0], 5).unwrap(),
+ Ipv4Cidr::new([248, 0, 0, 0], 6).unwrap(),
+ Ipv4Cidr::new([252, 0, 0, 0], 7).unwrap(),
+ Ipv4Cidr::new([254, 0, 0, 0], 8).unwrap(),
+ Ipv4Cidr::new([255, 0, 0, 0], 9).unwrap(),
+ Ipv4Cidr::new([255, 128, 0, 0], 10).unwrap(),
+ Ipv4Cidr::new([255, 192, 0, 0], 11).unwrap(),
+ Ipv4Cidr::new([255, 224, 0, 0], 12).unwrap(),
+ Ipv4Cidr::new([255, 240, 0, 0], 13).unwrap(),
+ Ipv4Cidr::new([255, 248, 0, 0], 14).unwrap(),
+ Ipv4Cidr::new([255, 252, 0, 0], 15).unwrap(),
+ Ipv4Cidr::new([255, 254, 0, 0], 16).unwrap(),
+ Ipv4Cidr::new([255, 255, 0, 0], 17).unwrap(),
+ Ipv4Cidr::new([255, 255, 128, 0], 18).unwrap(),
+ Ipv4Cidr::new([255, 255, 192, 0], 19).unwrap(),
+ Ipv4Cidr::new([255, 255, 224, 0], 20).unwrap(),
+ Ipv4Cidr::new([255, 255, 240, 0], 21).unwrap(),
+ Ipv4Cidr::new([255, 255, 248, 0], 22).unwrap(),
+ Ipv4Cidr::new([255, 255, 252, 0], 23).unwrap(),
+ Ipv4Cidr::new([255, 255, 254, 0], 24).unwrap(),
+ Ipv4Cidr::new([255, 255, 255, 0], 25).unwrap(),
+ Ipv4Cidr::new([255, 255, 255, 128], 26).unwrap(),
+ Ipv4Cidr::new([255, 255, 255, 192], 27).unwrap(),
+ Ipv4Cidr::new([255, 255, 255, 224], 28).unwrap(),
+ Ipv4Cidr::new([255, 255, 255, 240], 29).unwrap(),
+ Ipv4Cidr::new([255, 255, 255, 248], 30).unwrap(),
+ Ipv4Cidr::new([255, 255, 255, 252], 31).unwrap(),
+ Ipv4Cidr::new([255, 255, 255, 254], 32).unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([0, 0, 0, 0], [0, 0, 0, 0]).unwrap();
+
+ assert_eq!(
+ [Ipv4Cidr::new([0, 0, 0, 0], 32).unwrap(),],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v4([255, 255, 255, 255], [255, 255, 255, 255]).unwrap();
+
+ assert_eq!(
+ [Ipv4Cidr::new([255, 255, 255, 255], 32).unwrap(),],
+ range.to_cidrs().as_slice()
+ );
+ }
+
+ #[test]
+ fn test_ipv6_to_cidrs() {
+ let range = AddressRange::new_v6(
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1000],
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1000],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1000], 128).unwrap()],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v6(
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1000],
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x2000],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1000], 116).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x2000], 128).unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v6(
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1001],
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x2000],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1001], 128).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1002], 127).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1004], 126).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1008], 125).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1010], 124).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1020], 123).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1040], 122).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1080], 121).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1100], 120).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1200], 119).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1400], 118).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1800], 117).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x2000], 128).unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v6(
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1001],
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1001],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1001], 128).unwrap(),],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v6(
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1001],
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x2001],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1001], 128).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1002], 127).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1004], 126).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1008], 125).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1010], 124).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1020], 123).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1040], 122).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1080], 121).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1100], 120).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1200], 119).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1400], 118).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x1800], 117).unwrap(),
+ Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0x2000], 127).unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v6(
+ [0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0],
+ [0x2001, 0x0DB8, 0, 0, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [Ipv6Cidr::new([0x2001, 0x0DB8, 0, 0, 0, 0, 0, 0], 64).unwrap()],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v6(
+ [0, 0, 0, 0, 0, 0, 0, 0],
+ [
+ 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF,
+ ],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [Ipv6Cidr::new([0, 0, 0, 0, 0, 0, 0, 0], 0).unwrap(),],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v6(
+ [0, 0, 0, 0, 0, 0, 0, 0x0001],
+ [
+ 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF,
+ ],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [
+ "::1/128".parse::<Ipv6Cidr>().unwrap(),
+ "::2/127".parse::<Ipv6Cidr>().unwrap(),
+ "::4/126".parse::<Ipv6Cidr>().unwrap(),
+ "::8/125".parse::<Ipv6Cidr>().unwrap(),
+ "::10/124".parse::<Ipv6Cidr>().unwrap(),
+ "::20/123".parse::<Ipv6Cidr>().unwrap(),
+ "::40/122".parse::<Ipv6Cidr>().unwrap(),
+ "::80/121".parse::<Ipv6Cidr>().unwrap(),
+ "::100/120".parse::<Ipv6Cidr>().unwrap(),
+ "::200/119".parse::<Ipv6Cidr>().unwrap(),
+ "::400/118".parse::<Ipv6Cidr>().unwrap(),
+ "::800/117".parse::<Ipv6Cidr>().unwrap(),
+ "::1000/116".parse::<Ipv6Cidr>().unwrap(),
+ "::2000/115".parse::<Ipv6Cidr>().unwrap(),
+ "::4000/114".parse::<Ipv6Cidr>().unwrap(),
+ "::8000/113".parse::<Ipv6Cidr>().unwrap(),
+ "::1:0/112".parse::<Ipv6Cidr>().unwrap(),
+ "::2:0/111".parse::<Ipv6Cidr>().unwrap(),
+ "::4:0/110".parse::<Ipv6Cidr>().unwrap(),
+ "::8:0/109".parse::<Ipv6Cidr>().unwrap(),
+ "::10:0/108".parse::<Ipv6Cidr>().unwrap(),
+ "::20:0/107".parse::<Ipv6Cidr>().unwrap(),
+ "::40:0/106".parse::<Ipv6Cidr>().unwrap(),
+ "::80:0/105".parse::<Ipv6Cidr>().unwrap(),
+ "::100:0/104".parse::<Ipv6Cidr>().unwrap(),
+ "::200:0/103".parse::<Ipv6Cidr>().unwrap(),
+ "::400:0/102".parse::<Ipv6Cidr>().unwrap(),
+ "::800:0/101".parse::<Ipv6Cidr>().unwrap(),
+ "::1000:0/100".parse::<Ipv6Cidr>().unwrap(),
+ "::2000:0/99".parse::<Ipv6Cidr>().unwrap(),
+ "::4000:0/98".parse::<Ipv6Cidr>().unwrap(),
+ "::8000:0/97".parse::<Ipv6Cidr>().unwrap(),
+ "::1:0:0/96".parse::<Ipv6Cidr>().unwrap(),
+ "::2:0:0/95".parse::<Ipv6Cidr>().unwrap(),
+ "::4:0:0/94".parse::<Ipv6Cidr>().unwrap(),
+ "::8:0:0/93".parse::<Ipv6Cidr>().unwrap(),
+ "::10:0:0/92".parse::<Ipv6Cidr>().unwrap(),
+ "::20:0:0/91".parse::<Ipv6Cidr>().unwrap(),
+ "::40:0:0/90".parse::<Ipv6Cidr>().unwrap(),
+ "::80:0:0/89".parse::<Ipv6Cidr>().unwrap(),
+ "::100:0:0/88".parse::<Ipv6Cidr>().unwrap(),
+ "::200:0:0/87".parse::<Ipv6Cidr>().unwrap(),
+ "::400:0:0/86".parse::<Ipv6Cidr>().unwrap(),
+ "::800:0:0/85".parse::<Ipv6Cidr>().unwrap(),
+ "::1000:0:0/84".parse::<Ipv6Cidr>().unwrap(),
+ "::2000:0:0/83".parse::<Ipv6Cidr>().unwrap(),
+ "::4000:0:0/82".parse::<Ipv6Cidr>().unwrap(),
+ "::8000:0:0/81".parse::<Ipv6Cidr>().unwrap(),
+ "::1:0:0:0/80".parse::<Ipv6Cidr>().unwrap(),
+ "::2:0:0:0/79".parse::<Ipv6Cidr>().unwrap(),
+ "::4:0:0:0/78".parse::<Ipv6Cidr>().unwrap(),
+ "::8:0:0:0/77".parse::<Ipv6Cidr>().unwrap(),
+ "::10:0:0:0/76".parse::<Ipv6Cidr>().unwrap(),
+ "::20:0:0:0/75".parse::<Ipv6Cidr>().unwrap(),
+ "::40:0:0:0/74".parse::<Ipv6Cidr>().unwrap(),
+ "::80:0:0:0/73".parse::<Ipv6Cidr>().unwrap(),
+ "::100:0:0:0/72".parse::<Ipv6Cidr>().unwrap(),
+ "::200:0:0:0/71".parse::<Ipv6Cidr>().unwrap(),
+ "::400:0:0:0/70".parse::<Ipv6Cidr>().unwrap(),
+ "::800:0:0:0/69".parse::<Ipv6Cidr>().unwrap(),
+ "::1000:0:0:0/68".parse::<Ipv6Cidr>().unwrap(),
+ "::2000:0:0:0/67".parse::<Ipv6Cidr>().unwrap(),
+ "::4000:0:0:0/66".parse::<Ipv6Cidr>().unwrap(),
+ "::8000:0:0:0/65".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:1::/64".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:2::/63".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:4::/62".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:8::/61".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:10::/60".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:20::/59".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:40::/58".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:80::/57".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:100::/56".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:200::/55".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:400::/54".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:800::/53".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:1000::/52".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:2000::/51".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:4000::/50".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:0:8000::/49".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:1::/48".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:2::/47".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:4::/46".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:8::/45".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:10::/44".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:20::/43".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:40::/42".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:80::/41".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:100::/40".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:200::/39".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:400::/38".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:800::/37".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:1000::/36".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:2000::/35".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:4000::/34".parse::<Ipv6Cidr>().unwrap(),
+ "0:0:8000::/33".parse::<Ipv6Cidr>().unwrap(),
+ "0:1::/32".parse::<Ipv6Cidr>().unwrap(),
+ "0:2::/31".parse::<Ipv6Cidr>().unwrap(),
+ "0:4::/30".parse::<Ipv6Cidr>().unwrap(),
+ "0:8::/29".parse::<Ipv6Cidr>().unwrap(),
+ "0:10::/28".parse::<Ipv6Cidr>().unwrap(),
+ "0:20::/27".parse::<Ipv6Cidr>().unwrap(),
+ "0:40::/26".parse::<Ipv6Cidr>().unwrap(),
+ "0:80::/25".parse::<Ipv6Cidr>().unwrap(),
+ "0:100::/24".parse::<Ipv6Cidr>().unwrap(),
+ "0:200::/23".parse::<Ipv6Cidr>().unwrap(),
+ "0:400::/22".parse::<Ipv6Cidr>().unwrap(),
+ "0:800::/21".parse::<Ipv6Cidr>().unwrap(),
+ "0:1000::/20".parse::<Ipv6Cidr>().unwrap(),
+ "0:2000::/19".parse::<Ipv6Cidr>().unwrap(),
+ "0:4000::/18".parse::<Ipv6Cidr>().unwrap(),
+ "0:8000::/17".parse::<Ipv6Cidr>().unwrap(),
+ "1::/16".parse::<Ipv6Cidr>().unwrap(),
+ "2::/15".parse::<Ipv6Cidr>().unwrap(),
+ "4::/14".parse::<Ipv6Cidr>().unwrap(),
+ "8::/13".parse::<Ipv6Cidr>().unwrap(),
+ "10::/12".parse::<Ipv6Cidr>().unwrap(),
+ "20::/11".parse::<Ipv6Cidr>().unwrap(),
+ "40::/10".parse::<Ipv6Cidr>().unwrap(),
+ "80::/9".parse::<Ipv6Cidr>().unwrap(),
+ "100::/8".parse::<Ipv6Cidr>().unwrap(),
+ "200::/7".parse::<Ipv6Cidr>().unwrap(),
+ "400::/6".parse::<Ipv6Cidr>().unwrap(),
+ "800::/5".parse::<Ipv6Cidr>().unwrap(),
+ "1000::/4".parse::<Ipv6Cidr>().unwrap(),
+ "2000::/3".parse::<Ipv6Cidr>().unwrap(),
+ "4000::/2".parse::<Ipv6Cidr>().unwrap(),
+ "8000::/1".parse::<Ipv6Cidr>().unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v6(
+ [0, 0, 0, 0, 0, 0, 0, 0],
+ [
+ 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFE,
+ ],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [
+ "::/1".parse::<Ipv6Cidr>().unwrap(),
+ "8000::/2".parse::<Ipv6Cidr>().unwrap(),
+ "c000::/3".parse::<Ipv6Cidr>().unwrap(),
+ "e000::/4".parse::<Ipv6Cidr>().unwrap(),
+ "f000::/5".parse::<Ipv6Cidr>().unwrap(),
+ "f800::/6".parse::<Ipv6Cidr>().unwrap(),
+ "fc00::/7".parse::<Ipv6Cidr>().unwrap(),
+ "fe00::/8".parse::<Ipv6Cidr>().unwrap(),
+ "ff00::/9".parse::<Ipv6Cidr>().unwrap(),
+ "ff80::/10".parse::<Ipv6Cidr>().unwrap(),
+ "ffc0::/11".parse::<Ipv6Cidr>().unwrap(),
+ "ffe0::/12".parse::<Ipv6Cidr>().unwrap(),
+ "fff0::/13".parse::<Ipv6Cidr>().unwrap(),
+ "fff8::/14".parse::<Ipv6Cidr>().unwrap(),
+ "fffc::/15".parse::<Ipv6Cidr>().unwrap(),
+ "fffe::/16".parse::<Ipv6Cidr>().unwrap(),
+ "ffff::/17".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:8000::/18".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:c000::/19".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:e000::/20".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:f000::/21".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:f800::/22".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:fc00::/23".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:fe00::/24".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ff00::/25".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ff80::/26".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffc0::/27".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffe0::/28".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:fff0::/29".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:fff8::/30".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:fffc::/31".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:fffe::/32".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff::/33".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:8000::/34".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:c000::/35".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:e000::/36".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:f000::/37".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:f800::/38".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:fc00::/39".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:fe00::/40".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ff00::/41".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ff80::/42".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffc0::/43".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffe0::/44".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:fff0::/45".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:fff8::/46".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:fffc::/47".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:fffe::/48".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff::/49".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:8000::/50".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:c000::/51".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:e000::/52".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:f000::/53".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:f800::/54".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:fc00::/55".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:fe00::/56".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ff00::/57".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ff80::/58".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffc0::/59".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffe0::/60".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:fff0::/61".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:fff8::/62".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:fffc::/63".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:fffe::/64".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff::/65".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:8000::/66".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:c000::/67".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:e000::/68".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:f000::/69".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:f800::/70".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:fc00::/71".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:fe00::/72".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:ff00::/73".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:ff80::/74".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:ffc0::/75".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:ffe0::/76".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:fff0::/77".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:fff8::/78".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:fffc::/79".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:fffe::/80".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:ffff::/81".parse::<Ipv6Cidr>().unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:8000::/82"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:c000::/83"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:e000::/84"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:f000::/85"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:f800::/86"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:fc00::/87"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:fe00::/88"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ff00::/89"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ff80::/90"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffc0::/91"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffe0::/92"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:fff0::/93"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:fff8::/94"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:fffc::/95"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:fffe::/96"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff::/97"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:8000:0/98"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:c000:0/99"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:e000:0/100"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:f000:0/101"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:f800:0/102"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:fc00:0/103"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:fe00:0/104"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ff00:0/105"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ff80:0/106"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffc0:0/107"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffe0:0/108"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:fff0:0/109"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:fff8:0/110"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:fffc:0/111"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:fffe:0/112"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:0/113"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:8000/114"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:c000/115"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:e000/116"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:f000/117"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:f800/118"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:fc00/119"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:fe00/120"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ff00/121"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ff80/122"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffc0/123"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffe0/124"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:fff0/125"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:fff8/126"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:fffc/127"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ "ffff:ffff:ffff:ffff:ffff:ffff:ffff:fffe/128"
+ .parse::<Ipv6Cidr>()
+ .unwrap(),
+ ],
+ range.to_cidrs().as_slice()
+ );
+
+ let range =
+ AddressRange::new_v6([0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0]).unwrap();
+
+ assert_eq!(
+ [Ipv6Cidr::new([0, 0, 0, 0, 0, 0, 0, 0], 128).unwrap(),],
+ range.to_cidrs().as_slice()
+ );
+
+ let range = AddressRange::new_v6(
+ [
+ 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF,
+ ],
+ [
+ 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF,
+ ],
+ )
+ .unwrap();
+
+ assert_eq!(
+ [Ipv6Cidr::new(
+ [0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF],
+ 128
+ )
+ .unwrap(),],
+ range.to_cidrs().as_slice()
+ );
+ }
}
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [pve-devel] [PATCH proxmox-ve-rs 05/21] iprange: add methods for converting an ip range to cidrs
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 05/21] iprange: add methods for converting an ip range to cidrs Stefan Hanreich
@ 2024-08-13 16:09 ` Max Carrara
0 siblings, 0 replies; 43+ messages in thread
From: Max Carrara @ 2024-08-13 16:09 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed Jun 26, 2024 at 2:15 PM CEST, Stefan Hanreich wrote:
> This is mainly used in proxmox-perl-rs, so the generated ipsets can be
> used in pve-firewall where only CIDRs are supported.
>
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
> .../src/firewall/types/address.rs | 818 ++++++++++++++++++
> 1 file changed, 818 insertions(+)
>
> diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
> index 8db3942..3238601 100644
> --- a/proxmox-ve-config/src/firewall/types/address.rs
> +++ b/proxmox-ve-config/src/firewall/types/address.rs
> @@ -303,6 +303,17 @@ impl IpRange {
> ) -> Result<Self, IpRangeError> {
> Ok(IpRange::V6(AddressRange::new_v6(start, end)?))
> }
> +
> + /// converts an IpRange into the minimal amount of CIDRs
> + ///
> + /// see the concrete implementations of [`AddressRange<Ipv4Addr>`] or [`AddressRange<Ipv6Addr>`]
> + /// respectively
> + pub fn to_cidrs(&self) -> Vec<Cidr> {
> + match self {
> + IpRange::V4(range) => range.to_cidrs().into_iter().map(Cidr::from).collect(),
> + IpRange::V6(range) => range.to_cidrs().into_iter().map(Cidr::from).collect(),
> + }
> + }
> }
>
> impl std::str::FromStr for IpRange {
> @@ -362,6 +373,71 @@ impl AddressRange<Ipv4Addr> {
>
> Ok(Self { start, end })
> }
> +
> + /// returns the minimum amount of CIDRs that exactly represent the range
> + ///
> + /// The idea behind this algorithm is as follows:
> + ///
> + /// Start iterating with current = start of the IP range
> + ///
> + /// Find two netmasks
> + /// * The largest CIDR that the current IP can be the first of
> + /// * The largest CIDR that *only* contains IPs from current - end
> + ///
> + /// Add the smaller of the two CIDRs to our result and current to the first IP that is in
> + /// the range but not in the CIDR we just added. Proceed until we reached the end of the IP
> + /// range.
Would maybe prefer some more inline formatting / minor rewording regarding
the algorithm's steps above, simply for readability's sake (e.g. when
rendering the docs).
Sort of like:
1. Start iteration: Set `current` to `start` of the IP range
2. Find two netmasks:
- The largest CIDR that the `current` IP can be the first of
- The largest CIDR that *only* contains IPs from `current` to `end`
3. Add the smaller of the two CIDRs to our result and set `current` to the first IP that
is in the range but *not* in the CIDR we just added. Proceed until we reach the end of
the IP range.
Again, just a small thing, but thought I'd mention it.
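For what it's worth, here's a tiny standalone snippet illustrating steps 2
and 3 for the first iteration of the 192.168.0.100 - 192.168.0.200 case
from the test cases (purely hypothetical, not meant for the patch, just to
make the idea concrete):

    fn main() {
        // first iteration: current = 192.168.0.100, end = 192.168.0.200
        let current = u32::from_be_bytes([192, 168, 0, 100]);
        let end = u32::from_be_bytes([192, 168, 0, 200]);

        // step 2a: netmask of the largest CIDR that `current` can be the
        // first address of (two trailing zero bits -> a /30)
        let current_max_mask = 32 - current.trailing_zeros() as u8;

        // step 2b: netmask of the largest CIDR that *only* contains
        // addresses from `current` to `end` (101 addresses remain,
        // ilog2(101) = 6 -> at best a /26)
        let delta_min_mask = 32 - ((end - current) + 1).ilog2() as u8;

        // step 3: the smaller block wins, i.e. the larger netmask, so the
        // first CIDR pushed to the result is 192.168.0.100/30
        assert_eq!(u8::max(current_max_mask, delta_min_mask), 30);
    }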
> [...]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 43+ messages in thread
* [pve-devel] [PATCH proxmox-ve-rs 06/21] ipset: address: add helper methods
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (4 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 05/21] iprange: add methods for converting an ip range to cidrs Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-27 10:45 ` Gabriel Goller
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 07/21] firewall: guest: derive traits according to rust api guidelines Stefan Hanreich
` (18 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-ve-config/src/firewall/types/address.rs | 10 ++++++++++
proxmox-ve-config/src/firewall/types/ipset.rs | 14 ++++++++++++++
2 files changed, 24 insertions(+)
diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
index 3238601..962c9d2 100644
--- a/proxmox-ve-config/src/firewall/types/address.rs
+++ b/proxmox-ve-config/src/firewall/types/address.rs
@@ -11,6 +11,16 @@ pub enum Family {
V6,
}
+impl Family {
+ pub fn is_ipv4(&self) -> bool {
+ matches!(self, Self::V4)
+ }
+
+ pub fn is_ipv6(&self) -> bool {
+ matches!(self, Self::V6)
+ }
+}
+
impl fmt::Display for Family {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
diff --git a/proxmox-ve-config/src/firewall/types/ipset.rs b/proxmox-ve-config/src/firewall/types/ipset.rs
index 4ddf6d1..4754826 100644
--- a/proxmox-ve-config/src/firewall/types/ipset.rs
+++ b/proxmox-ve-config/src/firewall/types/ipset.rs
@@ -129,6 +129,20 @@ pub struct IpsetEntry {
pub comment: Option<String>,
}
+impl IpsetEntry {
+ pub fn new(
+ address: impl Into<IpsetAddress>,
+ nomatch: bool,
+ comment: impl Into<Option<String>>,
+ ) -> IpsetEntry {
+ IpsetEntry {
+ nomatch,
+ address: address.into(),
+ comment: comment.into(),
+ }
+ }
+}
+
impl<T: Into<IpsetAddress>> From<T> for IpsetEntry {
fn from(value: T) -> Self {
Self {
--
2.39.2
* Re: [pve-devel] [PATCH proxmox-ve-rs 06/21] ipset: address: add helper methods
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 06/21] ipset: address: add helper methods Stefan Hanreich
@ 2024-06-27 10:45 ` Gabriel Goller
0 siblings, 0 replies; 43+ messages in thread
From: Gabriel Goller @ 2024-06-27 10:45 UTC (permalink / raw)
To: Proxmox VE development discussion
On 26.06.2024 14:15, Stefan Hanreich wrote:
>Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
>---
> proxmox-ve-config/src/firewall/types/address.rs | 10 ++++++++++
> proxmox-ve-config/src/firewall/types/ipset.rs | 14 ++++++++++++++
> 2 files changed, 24 insertions(+)
>
>diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
>index 3238601..962c9d2 100644
>--- a/proxmox-ve-config/src/firewall/types/address.rs
>+++ b/proxmox-ve-config/src/firewall/types/address.rs
>@@ -11,6 +11,16 @@ pub enum Family {
> V6,
> }
>
>+impl Family {
>+ pub fn is_ipv4(&self) -> bool {
>+ matches!(self, Self::V4)
We don't need the `matches!` here, a `*self == Self::V4` is enough.
Same below.
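For illustration, the suggested variant would look something like this
(assuming `Family` derives `PartialEq`):

    impl Family {
        pub fn is_ipv4(&self) -> bool {
            *self == Self::V4
        }

        pub fn is_ipv6(&self) -> bool {
            *self == Self::V6
        }
    }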
>+ }
>+
>+ pub fn is_ipv6(&self) -> bool {
>+ matches!(self, Self::V6)
>+ }
>+}
>+
> impl fmt::Display for Family {
> fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
> match self {
* [pve-devel] [PATCH proxmox-ve-rs 07/21] firewall: guest: derive traits according to rust api guidelines
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (5 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 06/21] ipset: address: add helper methods Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-27 10:50 ` Gabriel Goller
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 08/21] common: add allowlist Stefan Hanreich
` (17 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Almost every type should implement them anyway, and many of them are
required for those types to be used in BTreeMaps, which the nftables
firewall uses for generating stable output.
Additionally, we derive Serialize and Deserialize for a few types that
occur in the sdn configuration. The following patches will use those
for (de-)serialization.
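To illustrate why the `Ord` derives matter (only a sketch, not part of
the patch): a `BTreeMap` iterates its keys in sorted order, so the
generated output stays stable regardless of insertion order.

    use std::collections::BTreeMap;

    fn main() {
        let mut rules: BTreeMap<u32, &str> = BTreeMap::new();
        rules.insert(101, "accept");
        rules.insert(100, "drop");

        // Iteration is sorted by key: 100 is printed before 101, no
        // matter in which order the entries were inserted.
        for (vmid, verdict) in &rules {
            println!("vmid {vmid}: {verdict}");
        }
    }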
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
.../src/firewall/types/address.rs | 19 +++++++++++--------
proxmox-ve-config/src/firewall/types/alias.rs | 4 ++--
proxmox-ve-config/src/firewall/types/ipset.rs | 6 +++---
proxmox-ve-config/src/guest/types.rs | 8 +++++---
proxmox-ve-config/src/guest/vm.rs | 4 ++--
5 files changed, 23 insertions(+), 18 deletions(-)
diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
index 962c9d2..a0b82c5 100644
--- a/proxmox-ve-config/src/firewall/types/address.rs
+++ b/proxmox-ve-config/src/firewall/types/address.rs
@@ -30,8 +30,9 @@ impl fmt::Display for Family {
}
}
-#[derive(Clone, Copy, Debug)]
-#[cfg_attr(test, derive(Eq, PartialEq))]
+#[derive(
+ Clone, Copy, Debug, PartialOrd, Ord, PartialEq, Eq, Hash, SerializeDisplay, DeserializeFromStr,
+)]
pub enum Cidr {
Ipv4(Ipv4Cidr),
Ipv6(Ipv6Cidr),
@@ -101,8 +102,7 @@ impl From<Ipv6Cidr> for Cidr {
const IPV4_LENGTH: u8 = 32;
-#[derive(Clone, Copy, Debug)]
-#[cfg_attr(test, derive(Eq, PartialEq))]
+#[derive(Clone, Copy, Debug, PartialOrd, Ord, PartialEq, Eq, Hash)]
pub struct Ipv4Cidr {
addr: Ipv4Addr,
mask: u8,
@@ -176,8 +176,7 @@ impl fmt::Display for Ipv4Cidr {
const IPV6_LENGTH: u8 = 128;
-#[derive(Clone, Copy, Debug)]
-#[cfg_attr(test, derive(Eq, PartialEq))]
+#[derive(Clone, Copy, Debug, PartialOrd, Ord, PartialEq, Eq, Hash)]
pub struct Ipv6Cidr {
addr: Ipv6Addr,
mask: u8,
@@ -271,7 +270,9 @@ impl Display for IpRangeError {
/// represents a range of IPv4 or IPv6 addresses
///
/// For more information see [`AddressRange`]
-#[derive(Clone, Copy, Debug, PartialEq, Eq, SerializeDisplay, DeserializeFromStr)]
+#[derive(
+ Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, SerializeDisplay, DeserializeFromStr,
+)]
pub enum IpRange {
V4(AddressRange<Ipv4Addr>),
V6(AddressRange<Ipv6Addr>),
@@ -364,7 +365,9 @@ impl fmt::Display for IpRange {
/// # Textual representation
///
/// Two IP addresses separated by a hyphen, e.g.: `127.0.0.1-127.0.0.255`
-#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+#[derive(
+ Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, SerializeDisplay, DeserializeFromStr,
+)]
pub struct AddressRange<T> {
start: T,
end: T,
diff --git a/proxmox-ve-config/src/firewall/types/alias.rs b/proxmox-ve-config/src/firewall/types/alias.rs
index e6aa30d..5dfaa41 100644
--- a/proxmox-ve-config/src/firewall/types/alias.rs
+++ b/proxmox-ve-config/src/firewall/types/alias.rs
@@ -2,7 +2,7 @@ use std::fmt::Display;
use std::str::FromStr;
use anyhow::{bail, format_err, Error};
-use serde_with::DeserializeFromStr;
+use serde_with::{DeserializeFromStr, SerializeDisplay};
use crate::firewall::parse::{match_name, match_non_whitespace};
use crate::firewall::types::address::Cidr;
@@ -35,7 +35,7 @@ impl Display for AliasScope {
}
}
-#[derive(Debug, Clone, DeserializeFromStr)]
+#[derive(Debug, Clone, DeserializeFromStr, SerializeDisplay)]
#[cfg_attr(test, derive(Eq, PartialEq))]
pub struct AliasName {
scope: AliasScope,
diff --git a/proxmox-ve-config/src/firewall/types/ipset.rs b/proxmox-ve-config/src/firewall/types/ipset.rs
index 4754826..a3238d1 100644
--- a/proxmox-ve-config/src/firewall/types/ipset.rs
+++ b/proxmox-ve-config/src/firewall/types/ipset.rs
@@ -85,7 +85,7 @@ impl Display for IpsetName {
}
}
-#[derive(Debug)]
+#[derive(Debug, Clone)]
#[cfg_attr(test, derive(Eq, PartialEq))]
pub enum IpsetAddress {
Alias(AliasName),
@@ -121,7 +121,7 @@ impl From<IpRange> for IpsetAddress {
}
}
-#[derive(Debug)]
+#[derive(Debug, Clone)]
#[cfg_attr(test, derive(Eq, PartialEq))]
pub struct IpsetEntry {
pub nomatch: bool,
@@ -205,7 +205,7 @@ impl Ipfilter<'_> {
}
}
-#[derive(Debug)]
+#[derive(Debug, Clone)]
#[cfg_attr(test, derive(Eq, PartialEq))]
pub struct Ipset {
pub name: IpsetName,
diff --git a/proxmox-ve-config/src/guest/types.rs b/proxmox-ve-config/src/guest/types.rs
index 217c537..767ff27 100644
--- a/proxmox-ve-config/src/guest/types.rs
+++ b/proxmox-ve-config/src/guest/types.rs
@@ -1,9 +1,13 @@
use std::fmt;
use std::str::FromStr;
+use serde_with::{DeserializeFromStr, SerializeDisplay};
+
use anyhow::{format_err, Error};
-#[derive(Clone, Copy, Debug, Eq, PartialEq, PartialOrd, Ord, Hash)]
+#[derive(
+ Clone, Copy, Debug, Eq, PartialEq, PartialOrd, Ord, Hash, SerializeDisplay, DeserializeFromStr,
+)]
pub struct Vmid(u32);
impl Vmid {
@@ -34,5 +38,3 @@ impl FromStr for Vmid {
))
}
}
-
-serde_plain::derive_deserialize_from_fromstr!(Vmid, "valid vmid");
diff --git a/proxmox-ve-config/src/guest/vm.rs b/proxmox-ve-config/src/guest/vm.rs
index 5b5866a..a7ea9bb 100644
--- a/proxmox-ve-config/src/guest/vm.rs
+++ b/proxmox-ve-config/src/guest/vm.rs
@@ -5,12 +5,12 @@ use std::str::FromStr;
use std::{collections::HashMap, net::Ipv6Addr};
use proxmox_schema::property_string::PropertyIterator;
+use serde_with::DeserializeFromStr;
use crate::firewall::parse::{match_digits, parse_bool};
use crate::firewall::types::address::{Ipv4Cidr, Ipv6Cidr};
-#[derive(Debug)]
-#[cfg_attr(test, derive(Eq, PartialEq))]
+#[derive(Clone, Debug, DeserializeFromStr, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub struct MacAddress([u8; 6]);
static LOCAL_PART: [u8; 8] = [0xFE, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00];
--
2.39.2
* Re: [pve-devel] [PATCH proxmox-ve-rs 07/21] firewall: guest: derive traits according to rust api guidelines
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 07/21] firewall: guest: derive traits according to rust api guidelines Stefan Hanreich
@ 2024-06-27 10:50 ` Gabriel Goller
0 siblings, 0 replies; 43+ messages in thread
From: Gabriel Goller @ 2024-06-27 10:50 UTC (permalink / raw)
To: Proxmox VE development discussion
On 26.06.2024 14:15, Stefan Hanreich wrote:
>diff --git a/proxmox-ve-config/src/guest/types.rs b/proxmox-ve-config/src/guest/types.rs
>index 217c537..767ff27 100644
>--- a/proxmox-ve-config/src/guest/types.rs
>+++ b/proxmox-ve-config/src/guest/types.rs
>@@ -1,9 +1,13 @@
> use std::fmt;
> use std::str::FromStr;
>
>+use serde_with::{DeserializeFromStr, SerializeDisplay};
>+
Unnecessary empty line here.
> use anyhow::{format_err, Error};
>
>-#[derive(Clone, Copy, Debug, Eq, PartialEq, PartialOrd, Ord, Hash)]
>+#[derive(
>+ Clone, Copy, Debug, Eq, PartialEq, PartialOrd, Ord, Hash, SerializeDisplay, DeserializeFromStr,
>+)]
> pub struct Vmid(u32);
>
> impl Vmid {
>@@ -34,5 +38,3 @@ impl FromStr for Vmid {
> ))
> }
> }
>-
>-serde_plain::derive_deserialize_from_fromstr!(Vmid, "valid vmid");
>diff --git a/proxmox-ve-config/src/guest/vm.rs b/proxmox-ve-config/src/guest/vm.rs
>index 5b5866a..a7ea9bb 100644
>--- a/proxmox-ve-config/src/guest/vm.rs
>+++ b/proxmox-ve-config/src/guest/vm.rs
>@@ -5,12 +5,12 @@ use std::str::FromStr;
> use std::{collections::HashMap, net::Ipv6Addr};
>
> use proxmox_schema::property_string::PropertyIterator;
>+use serde_with::DeserializeFromStr;
Add a linebreak between the proxmox* imports and third-party imports
(also, while you're at it, you can pull down the anyhow import)
>
> use crate::firewall::parse::{match_digits, parse_bool};
> use crate::firewall::types::address::{Ipv4Cidr, Ipv6Cidr};
>
>-#[derive(Debug)]
>-#[cfg_attr(test, derive(Eq, PartialEq))]
>+#[derive(Clone, Debug, DeserializeFromStr, PartialEq, Eq, Hash, PartialOrd, Ord)]
> pub struct MacAddress([u8; 6]);
>
> static LOCAL_PART: [u8; 8] = [0xFE, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00];
* [pve-devel] [PATCH proxmox-ve-rs 08/21] common: add allowlist
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (6 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 07/21] firewall: guest: derive traits according to rust api guidelines Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-27 10:47 ` Gabriel Goller
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 09/21] sdn: add name types Stefan Hanreich
` (16 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-ve-config/src/common/mod.rs | 30 +++++++++++++++++++++++++++++
proxmox-ve-config/src/lib.rs | 1 +
2 files changed, 31 insertions(+)
create mode 100644 proxmox-ve-config/src/common/mod.rs
diff --git a/proxmox-ve-config/src/common/mod.rs b/proxmox-ve-config/src/common/mod.rs
new file mode 100644
index 0000000..9318cff
--- /dev/null
+++ b/proxmox-ve-config/src/common/mod.rs
@@ -0,0 +1,30 @@
+use core::hash::Hash;
+use std::cmp::Eq;
+use std::collections::HashSet;
+
+#[derive(Clone, Debug, Default)]
+pub struct Allowlist<T>(HashSet<T>);
+
+impl<T: Hash + Eq> FromIterator<T> for Allowlist<T> {
+ fn from_iter<A>(iter: A) -> Self
+ where
+ A: IntoIterator<Item = T>,
+ {
+ Allowlist(HashSet::from_iter(iter))
+ }
+}
+
+/// returns true if [`value`] is in the allowlist or if allowlist does not exist
+impl<T: Hash + Eq> Allowlist<T> {
+ pub fn is_allowed(&self, value: &T) -> bool {
+ self.0.contains(value)
+ }
+}
+
+impl<T: Hash + Eq> Allowlist<T> {
+ pub fn new<I>(iter: I) -> Self
+ where I: IntoIterator<Item = T>{
+ Self::from_iter(iter)
+ }
+}
+
diff --git a/proxmox-ve-config/src/lib.rs b/proxmox-ve-config/src/lib.rs
index 856b14f..1b6feae 100644
--- a/proxmox-ve-config/src/lib.rs
+++ b/proxmox-ve-config/src/lib.rs
@@ -1,3 +1,4 @@
+pub mod common;
pub mod firewall;
pub mod guest;
pub mod host;
--
2.39.2
* Re: [pve-devel] [PATCH proxmox-ve-rs 08/21] common: add allowlist
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 08/21] common: add allowlist Stefan Hanreich
@ 2024-06-27 10:47 ` Gabriel Goller
0 siblings, 0 replies; 43+ messages in thread
From: Gabriel Goller @ 2024-06-27 10:47 UTC (permalink / raw)
To: Proxmox VE development discussion
On 26.06.2024 14:15, Stefan Hanreich wrote:
>diff --git a/proxmox-ve-config/src/common/mod.rs b/proxmox-ve-config/src/common/mod.rs
>new file mode 100644
>index 0000000..9318cff
>--- /dev/null
>+++ b/proxmox-ve-config/src/common/mod.rs
>@@ -0,0 +1,30 @@
>+use core::hash::Hash;
>+use std::cmp::Eq;
>+use std::collections::HashSet;
>+
>+#[derive(Clone, Debug, Default)]
>+pub struct Allowlist<T>(HashSet<T>);
>+
>+impl<T: Hash + Eq> FromIterator<T> for Allowlist<T> {
>+ fn from_iter<A>(iter: A) -> Self
>+ where
>+ A: IntoIterator<Item = T>,
>+ {
>+ Allowlist(HashSet::from_iter(iter))
>+ }
>+}
>+
>+/// returns true if [`value`] is in the allowlist or if allowlist does not exist
>+impl<T: Hash + Eq> Allowlist<T> {
>+ pub fn is_allowed(&self, value: &T) -> bool {
>+ self.0.contains(value)
>+ }
>+}
>+
>+impl<T: Hash + Eq> Allowlist<T> {
>+ pub fn new<I>(iter: I) -> Self
>+ where I: IntoIterator<Item = T>{
^ Small rustfmt error here.
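Presumably rustfmt would format it along these lines:

    pub fn new<I>(iter: I) -> Self
    where
        I: IntoIterator<Item = T>,
    {
        Self::from_iter(iter)
    }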
>+ Self::from_iter(iter)
>+ }
>+}
>+
* [pve-devel] [PATCH proxmox-ve-rs 09/21] sdn: add name types
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (7 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 08/21] common: add allowlist Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-27 10:56 ` Gabriel Goller
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 10/21] sdn: add ipam module Stefan Hanreich
` (15 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-ve-config/src/lib.rs | 1 +
proxmox-ve-config/src/sdn/mod.rs | 240 +++++++++++++++++++++++++++++++
2 files changed, 241 insertions(+)
create mode 100644 proxmox-ve-config/src/sdn/mod.rs
diff --git a/proxmox-ve-config/src/lib.rs b/proxmox-ve-config/src/lib.rs
index 1b6feae..d17136c 100644
--- a/proxmox-ve-config/src/lib.rs
+++ b/proxmox-ve-config/src/lib.rs
@@ -2,3 +2,4 @@ pub mod common;
pub mod firewall;
pub mod guest;
pub mod host;
+pub mod sdn;
diff --git a/proxmox-ve-config/src/sdn/mod.rs b/proxmox-ve-config/src/sdn/mod.rs
new file mode 100644
index 0000000..4e7c525
--- /dev/null
+++ b/proxmox-ve-config/src/sdn/mod.rs
@@ -0,0 +1,240 @@
+use std::{error::Error, fmt::Display, str::FromStr};
+
+use serde_with::DeserializeFromStr;
+
+use crate::firewall::types::Cidr;
+
+#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub enum SdnNameError {
+ Empty,
+ TooLong,
+ InvalidSymbols,
+ InvalidSubnetCidr,
+ InvalidSubnetFormat,
+}
+
+impl Error for SdnNameError {}
+
+impl Display for SdnNameError {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str(match self {
+ SdnNameError::TooLong => "name too long",
+ SdnNameError::InvalidSymbols => "invalid symbols in name",
+ SdnNameError::InvalidSubnetCidr => "invalid cidr in name",
+ SdnNameError::InvalidSubnetFormat => "invalid format for subnet name",
+ SdnNameError::Empty => "name is empty",
+ })
+ }
+}
+
+fn validate_sdn_name(name: &str) -> Result<(), SdnNameError> {
+ if name.is_empty() {
+ return Err(SdnNameError::Empty);
+ }
+
+ if name.len() > 8 {
+ return Err(SdnNameError::TooLong);
+ }
+
+ // safe because of empty check
+ if !name.chars().next().unwrap().is_ascii_alphabetic() {
+ return Err(SdnNameError::InvalidSymbols);
+ }
+
+ if !name.chars().all(|c| c.is_ascii_alphanumeric()) {
+ return Err(SdnNameError::InvalidSymbols);
+ }
+
+ Ok(())
+}
+
+/// represents the name of an sdn zone
+#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, DeserializeFromStr)]
+pub struct ZoneName(String);
+
+impl ZoneName {
+ /// construct a new zone name
+ ///
+ /// # Errors
+ ///
+ /// This function will return an error if the name is empty, too long (>8 characters), starts
+ /// with a non-alphabetic symbol or if there are non alphanumeric symbols contained in the name.
+ pub fn new(name: String) -> Result<Self, SdnNameError> {
+ validate_sdn_name(&name)?;
+ Ok(ZoneName(name))
+ }
+
+ pub fn name(&self) -> &str {
+ &self.0
+ }
+}
+
+impl FromStr for ZoneName {
+ type Err = SdnNameError;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ Self::new(s.to_owned())
+ }
+}
+
+impl Display for ZoneName {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ self.0.fmt(f)
+ }
+}
+
+/// represents the name of an sdn vnet
+#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, DeserializeFromStr)]
+pub struct VnetName(String);
+
+impl VnetName {
+ /// construct a new vnet name
+ ///
+ /// # Errors
+ ///
+ /// This function will return an error if the name is empty, too long (>8 characters), starts
+ /// with a non-alphabetic symbol or if there are non alphanumeric symbols contained in the name.
+ pub fn new(name: String) -> Result<Self, SdnNameError> {
+ validate_sdn_name(&name)?;
+ Ok(VnetName(name))
+ }
+
+ pub fn name(&self) -> &str {
+ &self.0
+ }
+}
+
+impl FromStr for VnetName {
+ type Err = SdnNameError;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ Self::new(s.to_owned())
+ }
+}
+
+impl Display for VnetName {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ self.0.fmt(f)
+ }
+}
+
+/// represents the name of an sdn subnet
+///
+/// # Textual representation
+/// A subnet name has the form `{zone_id}-{cidr_ip}-{cidr_mask}`
+#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, DeserializeFromStr)]
+pub struct SubnetName(ZoneName, Cidr);
+
+impl SubnetName {
+ pub fn new(zone: ZoneName, cidr: Cidr) -> Self {
+ SubnetName(zone, cidr)
+ }
+
+ pub fn zone(&self) -> &ZoneName {
+ &self.0
+ }
+
+ pub fn cidr(&self) -> &Cidr {
+ &self.1
+ }
+}
+
+impl FromStr for SubnetName {
+ type Err = SdnNameError;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ if let Some((name, cidr_part)) = s.split_once('-') {
+ if let Some((ip, netmask)) = cidr_part.split_once('-') {
+ let zone_name = ZoneName::from_str(name)?;
+
+ let cidr: Cidr = format!("{ip}/{netmask}")
+ .parse()
+ .map_err(|_| SdnNameError::InvalidSubnetCidr)?;
+
+ return Ok(Self(zone_name, cidr));
+ }
+ }
+
+ Err(SdnNameError::InvalidSubnetFormat)
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn test_zone_name() {
+ ZoneName::new("zone0".to_string()).unwrap();
+
+ assert_eq!(ZoneName::new("".to_string()), Err(SdnNameError::Empty));
+
+ assert_eq!(
+ ZoneName::new("3qwe".to_string()),
+ Err(SdnNameError::InvalidSymbols)
+ );
+
+ assert_eq!(
+ ZoneName::new("qweqweqwe".to_string()),
+ Err(SdnNameError::TooLong)
+ );
+
+ assert_eq!(
+ ZoneName::new("qß".to_string()),
+ Err(SdnNameError::InvalidSymbols)
+ );
+ }
+
+ #[test]
+ fn test_vnet_name() {
+ VnetName::new("vnet0".to_string()).unwrap();
+
+ assert_eq!(VnetName::new("".to_string()), Err(SdnNameError::Empty));
+
+ assert_eq!(
+ VnetName::new("3qwe".to_string()),
+ Err(SdnNameError::InvalidSymbols)
+ );
+
+ assert_eq!(
+ VnetName::new("qweqweqwe".to_string()),
+ Err(SdnNameError::TooLong)
+ );
+
+ assert_eq!(
+ VnetName::new("qß".to_string()),
+ Err(SdnNameError::InvalidSymbols)
+ );
+ }
+
+ #[test]
+ fn test_subnet_name() {
+ assert_eq!(
+ "qweqweqwe-10.101.0.0-16".parse::<SubnetName>(),
+ Err(SdnNameError::TooLong),
+ );
+
+ assert_eq!(
+ "zone0_10.101.0.0-16".parse::<SubnetName>(),
+ Err(SdnNameError::InvalidSubnetFormat),
+ );
+
+ assert_eq!(
+ "zone0-10.101.0.0_16".parse::<SubnetName>(),
+ Err(SdnNameError::InvalidSubnetFormat),
+ );
+
+ assert_eq!(
+ "zone0-10.101.0.0-33".parse::<SubnetName>(),
+ Err(SdnNameError::InvalidSubnetCidr),
+ );
+
+ assert_eq!(
+ "zone0-10.101.0.0-16".parse::<SubnetName>().unwrap(),
+ SubnetName::new(
+ ZoneName::new("zone0".to_string()).unwrap(),
+ Cidr::new_v4([10, 101, 0, 0], 16).unwrap()
+ )
+ )
+ }
+}
--
2.39.2
* Re: [pve-devel] [PATCH proxmox-ve-rs 09/21] sdn: add name types
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 09/21] sdn: add name types Stefan Hanreich
@ 2024-06-27 10:56 ` Gabriel Goller
2024-07-16 9:27 ` Stefan Hanreich
0 siblings, 1 reply; 43+ messages in thread
From: Gabriel Goller @ 2024-06-27 10:56 UTC (permalink / raw)
To: Proxmox VE development discussion
On 26.06.2024 14:15, Stefan Hanreich wrote:
>diff --git a/proxmox-ve-config/src/sdn/mod.rs b/proxmox-ve-config/src/sdn/mod.rs
>new file mode 100644
>index 0000000..4e7c525
>--- /dev/null
>+++ b/proxmox-ve-config/src/sdn/mod.rs
>@@ -0,0 +1,240 @@
>+use std::{error::Error, fmt::Display, str::FromStr};
>+
>+use serde_with::DeserializeFromStr;
>+
>+use crate::firewall::types::Cidr;
>+
>+#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
>+pub enum SdnNameError {
>+ Empty,
>+ TooLong,
>+ InvalidSymbols,
>+ InvalidSubnetCidr,
>+ InvalidSubnetFormat,
>+}
>+
>+impl Error for SdnNameError {}
>+
>+impl Display for SdnNameError {
>+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
>+ f.write_str(match self {
>+ SdnNameError::TooLong => "name too long",
>+ SdnNameError::InvalidSymbols => "invalid symbols in name",
>+ SdnNameError::InvalidSubnetCidr => "invalid cidr in name",
>+ SdnNameError::InvalidSubnetFormat => "invalid format for subnet name",
>+ SdnNameError::Empty => "name is empty",
>+ })
>+ }
>+}
>+
Hmm, maybe we should pull in the `thiserror` crate here...
There are a few error-enums that could benefit from it:
SdnNameError, IpamError, SdnConfigError, IpRangeError.
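As a sketch of what that could look like for `SdnNameError` (assuming a
`thiserror` dependency; the manual `Display` and `Error` impls would
then go away):

    use thiserror::Error;

    #[derive(Error, Copy, Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
    pub enum SdnNameError {
        #[error("name is empty")]
        Empty,
        #[error("name too long")]
        TooLong,
        #[error("invalid symbols in name")]
        InvalidSymbols,
        #[error("invalid cidr in name")]
        InvalidSubnetCidr,
        #[error("invalid format for subnet name")]
        InvalidSubnetFormat,
    }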
* Re: [pve-devel] [PATCH proxmox-ve-rs 09/21] sdn: add name types
2024-06-27 10:56 ` Gabriel Goller
@ 2024-07-16 9:27 ` Stefan Hanreich
0 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-07-16 9:27 UTC (permalink / raw)
To: Proxmox VE development discussion, Gabriel Goller
On 6/27/24 12:56, Gabriel Goller wrote:
> On 26.06.2024 14:15, Stefan Hanreich wrote:
>> diff --git a/proxmox-ve-config/src/sdn/mod.rs
>> b/proxmox-ve-config/src/sdn/mod.rs
>> new file mode 100644
>> index 0000000..4e7c525
>> --- /dev/null
>> +++ b/proxmox-ve-config/src/sdn/mod.rs
>> @@ -0,0 +1,240 @@
>> +use std::{error::Error, fmt::Display, str::FromStr};
>> +
>> +use serde_with::DeserializeFromStr;
>> +
>> +use crate::firewall::types::Cidr;
>> +
>> +#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
>> +pub enum SdnNameError {
>> + Empty,
>> + TooLong,
>> + InvalidSymbols,
>> + InvalidSubnetCidr,
>> + InvalidSubnetFormat,
>> +}
>> +
>> +impl Error for SdnNameError {}
>> +
>> +impl Display for SdnNameError {
>> + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
>> + f.write_str(match self {
>> + SdnNameError::TooLong => "name too long",
>> + SdnNameError::InvalidSymbols => "invalid symbols in name",
>> + SdnNameError::InvalidSubnetCidr => "invalid cidr in name",
>> + SdnNameError::InvalidSubnetFormat => "invalid format for
>> subnet name",
>> + SdnNameError::Empty => "name is empty",
>> + })
>> + }
>> +}
>> +
>
> Hmm, maybe we should pull in the `thiserror` crate here...
> There are a few error-enums that could benefit from it: SdnNameError,
> IpamError, SdnConfigError, IpRangeError.
Thought about this as well, I guess we depend on it in quite a few
crates already - so pulling it in here wouldn't be too bad.
* [pve-devel] [PATCH proxmox-ve-rs 10/21] sdn: add ipam module
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (8 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 09/21] sdn: add name types Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-08-13 16:12 ` Max Carrara
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 11/21] sdn: ipam: add method for generating ipsets Stefan Hanreich
` (14 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
This module includes structs for representing the JSON schema from the
PVE IPAM. Those can be used to parse the current IPAM state.
We also include a general Ipam struct, and provide a method for
converting the PVE IPAM to the general struct. The idea behind this is
that we have multiple IPAM plugins in PVE and will likely add support
for importing them in the future. With the split, we can have dedicated
structs for representing the different data formats of the different
IPAM plugins and then convert them into a common representation that is
used internally, decoupling the concrete plugin from the code using the
IPAM configuration.
Enforcing the invariants the way we currently do adds a bit of runtime
complexity when building the object, but we get the upside of never
being able to construct an invalid struct. For the number of entries
the IPAM usually has, this should be fine. Should it turn out not to be
performant enough, we could always add a HashSet for looking up values
and speeding up the validation. For now, I wanted to avoid the
additional complexity.
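For illustration, reading and converting the PVE IPAM state could look
roughly like this (the use of `serde_json` and the error handling are
only assumptions for the sketch):

    use proxmox_ve_config::sdn::ipam::{Ipam, IpamJson};

    fn read_ipam() -> Result<Ipam, Box<dyn std::error::Error>> {
        // parse the plugin-specific JSON representation first ...
        let raw = std::fs::read_to_string("/etc/pve/priv/ipam.db")?;
        let json: IpamJson = serde_json::from_str(&raw)?;

        // ... then convert it into the common, validated representation
        Ok(Ipam::try_from(json)?)
    }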
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
.../src/firewall/types/address.rs | 8 +
proxmox-ve-config/src/guest/vm.rs | 4 +
proxmox-ve-config/src/sdn/ipam.rs | 330 ++++++++++++++++++
proxmox-ve-config/src/sdn/mod.rs | 2 +
4 files changed, 344 insertions(+)
create mode 100644 proxmox-ve-config/src/sdn/ipam.rs
diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
index a0b82c5..3ad1a7a 100644
--- a/proxmox-ve-config/src/firewall/types/address.rs
+++ b/proxmox-ve-config/src/firewall/types/address.rs
@@ -61,6 +61,14 @@ impl Cidr {
pub fn is_ipv6(&self) -> bool {
matches!(self, Cidr::Ipv6(_))
}
+
+ pub fn contains_address(&self, ip: &IpAddr) -> bool {
+ match (self, ip) {
+ (Cidr::Ipv4(cidr), IpAddr::V4(ip)) => cidr.contains_address(ip),
+ (Cidr::Ipv6(cidr), IpAddr::V6(ip)) => cidr.contains_address(ip),
+ _ => false,
+ }
+ }
}
impl fmt::Display for Cidr {
diff --git a/proxmox-ve-config/src/guest/vm.rs b/proxmox-ve-config/src/guest/vm.rs
index a7ea9bb..6a706c7 100644
--- a/proxmox-ve-config/src/guest/vm.rs
+++ b/proxmox-ve-config/src/guest/vm.rs
@@ -17,6 +17,10 @@ static LOCAL_PART: [u8; 8] = [0xFE, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00];
static EUI64_MIDDLE_PART: [u8; 2] = [0xFF, 0xFE];
impl MacAddress {
+ pub fn new(address: [u8; 6]) -> Self {
+ Self(address)
+ }
+
/// generates a link local IPv6-address according to RFC 4291 (Appendix A)
pub fn eui64_link_local_address(&self) -> Ipv6Addr {
let head = &self.0[..3];
diff --git a/proxmox-ve-config/src/sdn/ipam.rs b/proxmox-ve-config/src/sdn/ipam.rs
new file mode 100644
index 0000000..682bbe7
--- /dev/null
+++ b/proxmox-ve-config/src/sdn/ipam.rs
@@ -0,0 +1,330 @@
+use std::{
+ collections::{BTreeMap, HashMap},
+ error::Error,
+ fmt::Display,
+ net::IpAddr,
+};
+
+use serde::Deserialize;
+
+use crate::{
+ firewall::types::Cidr,
+ guest::{types::Vmid, vm::MacAddress},
+ sdn::{SdnNameError, SubnetName, ZoneName},
+};
+
+/// struct for deserializing a gateway entry in PVE IPAM
+///
+/// They are automatically generated by the PVE SDN module when creating a new subnet.
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct IpamJsonDataGateway {
+ #[serde(rename = "gateway")]
+ _gateway: u8,
+}
+
+/// struct for deserializing a guest entry in PVE IPAM
+///
+/// They are automatically created when adding a guest to a VNet that has a Subnet with DHCP
+/// configured.
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct IpamJsonDataVm {
+ vmid: Vmid,
+ hostname: Option<String>,
+ mac: MacAddress,
+}
+
+/// struct for deserializing a custom entry in PVE IPAM
+///
+/// Custom entries are created manually by the user via the Web UI / API.
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct IpamJsonDataCustom {
+ mac: MacAddress,
+}
+
+/// Enum representing the different kinds of entries that can be located in PVE IPAM
+///
+/// For more information about the members see the documentation of the respective structs in the
+/// enum.
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+#[serde(untagged)]
+pub enum IpamJsonData {
+ Vm(IpamJsonDataVm),
+ Gateway(IpamJsonDataGateway),
+ Custom(IpamJsonDataCustom),
+}
+
+/// struct for deserializing IPs from the PVE IPAM
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord, Default)]
+pub struct IpJson {
+ ips: BTreeMap<IpAddr, IpamJsonData>,
+}
+
+/// struct for deserializing subnets from the PVE IPAM
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord, Default)]
+pub struct SubnetJson {
+ subnets: BTreeMap<Cidr, IpJson>,
+}
+
+/// struct for deserializing the PVE IPAM
+///
+/// It is usually located in `/etc/pve/priv/ipam.db`
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord, Default)]
+pub struct IpamJson {
+ zones: BTreeMap<ZoneName, SubnetJson>,
+}
+
+/// holds the data for the IPAM entry of a VM
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct IpamDataVm {
+ ip: IpAddr,
+ vmid: Vmid,
+ mac: MacAddress,
+ hostname: Option<String>,
+}
+
+impl IpamDataVm {
+ pub fn new(
+ ip: impl Into<IpAddr>,
+ vmid: impl Into<Vmid>,
+ mac: MacAddress,
+ hostname: impl Into<Option<String>>,
+ ) -> Self {
+ Self {
+ ip: ip.into(),
+ vmid: vmid.into(),
+ mac,
+ hostname: hostname.into(),
+ }
+ }
+
+ pub fn from_json_data(ip: IpAddr, data: IpamJsonDataVm) -> Self {
+ Self::new(ip, data.vmid, data.mac, data.hostname)
+ }
+
+ pub fn ip(&self) -> &IpAddr {
+ &self.ip
+ }
+
+ pub fn vmid(&self) -> Vmid {
+ self.vmid
+ }
+
+ pub fn mac(&self) -> &MacAddress {
+ &self.mac
+ }
+
+ pub fn hostname(&self) -> Option<&str> {
+ self.hostname.as_deref()
+ }
+}
+
+/// holds the data for the IPAM entry of a Gateway
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct IpamDataGateway {
+ ip: IpAddr,
+}
+
+impl IpamDataGateway {
+ pub fn new(ip: IpAddr) -> Self {
+ Self { ip }
+ }
+
+ fn from_json_data(ip: IpAddr, _json_data: IpamJsonDataGateway) -> Self {
+ Self::new(ip)
+ }
+
+ pub fn ip(&self) -> &IpAddr {
+ &self.ip
+ }
+}
+
+/// holds the data for a custom IPAM entry
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct IpamDataCustom {
+ ip: IpAddr,
+ mac: MacAddress,
+}
+
+impl IpamDataCustom {
+ pub fn new(ip: IpAddr, mac: MacAddress) -> Self {
+ Self { ip, mac }
+ }
+
+ fn from_json_data(ip: IpAddr, json_data: IpamJsonDataCustom) -> Self {
+ Self::new(ip, json_data.mac)
+ }
+
+ pub fn ip(&self) -> &IpAddr {
+ &self.ip
+ }
+
+ pub fn mac(&self) -> &MacAddress {
+ &self.mac
+ }
+}
+
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub enum IpamData {
+ Vm(IpamDataVm),
+ Gateway(IpamDataGateway),
+ Custom(IpamDataCustom),
+}
+
+impl IpamData {
+ pub fn from_json_data(ip: IpAddr, json_data: IpamJsonData) -> Self {
+ match json_data {
+ IpamJsonData::Vm(json_data) => IpamDataVm::from_json_data(ip, json_data).into(),
+ IpamJsonData::Gateway(json_data) => {
+ IpamDataGateway::from_json_data(ip, json_data).into()
+ }
+ IpamJsonData::Custom(json_data) => IpamDataCustom::from_json_data(ip, json_data).into(),
+ }
+ }
+
+ pub fn ip_address(&self) -> &IpAddr {
+ match &self {
+ IpamData::Vm(data) => data.ip(),
+ IpamData::Gateway(data) => data.ip(),
+ IpamData::Custom(data) => data.ip(),
+ }
+ }
+}
+
+impl From<IpamDataVm> for IpamData {
+ fn from(value: IpamDataVm) -> Self {
+ IpamData::Vm(value)
+ }
+}
+
+impl From<IpamDataGateway> for IpamData {
+ fn from(value: IpamDataGateway) -> Self {
+ IpamData::Gateway(value)
+ }
+}
+
+impl From<IpamDataCustom> for IpamData {
+ fn from(value: IpamDataCustom) -> Self {
+ IpamData::Custom(value)
+ }
+}
+
+#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub enum IpamError {
+ NameError(SdnNameError),
+ InvalidIpAddress,
+ DuplicateIpAddress,
+ IpAddressOutOfBounds,
+}
+
+impl Error for IpamError {}
+
+impl Display for IpamError {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str("")
+ }
+}
+
+/// represents an entry in the PVE IPAM database
+#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct IpamEntry {
+ subnet: SubnetName,
+ data: IpamData,
+}
+
+impl IpamEntry {
+ /// creates a new PVE IPAM entry
+ ///
+ /// # Errors
+ ///
+ /// This function will return an error if the IP address of the entry does not match the CIDR
+ /// of the subnet.
+ pub fn new(subnet: SubnetName, data: IpamData) -> Result<Self, IpamError> {
+ if !subnet.cidr().contains_address(data.ip_address()) {
+ return Err(IpamError::IpAddressOutOfBounds);
+ }
+
+ Ok(IpamEntry { subnet, data })
+ }
+
+ pub fn subnet(&self) -> &SubnetName {
+ &self.subnet
+ }
+
+ pub fn data(&self) -> &IpamData {
+ &self.data
+ }
+
+ pub fn ip_address(&self) -> &IpAddr {
+ self.data.ip_address()
+ }
+}
+
+/// Common representation of IPAM data used in SDN
+///
+/// this should be instantiated by reading from one of the concrete IPAM implementations and then
+/// converting into this common struct.
+///
+/// # Invariants
+/// * No IP address in a Subnet is allocated twice
+#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, Default)]
+pub struct Ipam {
+ entries: BTreeMap<SubnetName, Vec<IpamEntry>>,
+}
+
+impl Ipam {
+ pub fn new() -> Self {
+ Self::default()
+ }
+
+ pub fn from_entries(entries: impl IntoIterator<Item = IpamEntry>) -> Result<Self, IpamError> {
+ let mut ipam = Self::new();
+
+ for entry in entries {
+ ipam.add_entry(entry)?;
+ }
+
+ Ok(ipam)
+ }
+
+ /// adds a new [`IpamEntry`] to the database
+ ///
+ /// # Errors
+ ///
+ /// This function will return an error if the IP is already allocated by another guest.
+ pub fn add_entry(&mut self, entry: IpamEntry) -> Result<(), IpamError> {
+ if let Some(entries) = self.entries.get_mut(entry.subnet()) {
+ for ipam_entry in &*entries {
+ if ipam_entry.ip_address() == entry.ip_address() {
+ return Err(IpamError::DuplicateIpAddress);
+ }
+ }
+
+ entries.push(entry);
+ } else {
+ self.entries
+ .insert(entry.subnet().clone(), [entry].to_vec());
+ }
+
+ Ok(())
+ }
+}
+
+impl TryFrom<IpamJson> for Ipam {
+ type Error = IpamError;
+
+ fn try_from(value: IpamJson) -> Result<Self, Self::Error> {
+ let mut ipam = Ipam::default();
+
+ for (zone_name, subnet_json) in value.zones {
+ for (cidr, ip_json) in subnet_json.subnets {
+ for (ip, json_data) in ip_json.ips {
+ let data = IpamData::from_json_data(ip, json_data);
+ let subnet = SubnetName::new(zone_name.clone(), cidr);
+ ipam.add_entry(IpamEntry::new(subnet, data)?)?;
+ }
+ }
+ }
+
+ Ok(ipam)
+ }
+}
diff --git a/proxmox-ve-config/src/sdn/mod.rs b/proxmox-ve-config/src/sdn/mod.rs
index 4e7c525..67af24e 100644
--- a/proxmox-ve-config/src/sdn/mod.rs
+++ b/proxmox-ve-config/src/sdn/mod.rs
@@ -1,3 +1,5 @@
+pub mod ipam;
+
use std::{error::Error, fmt::Display, str::FromStr};
use serde_with::DeserializeFromStr;
--
2.39.2
* Re: [pve-devel] [PATCH proxmox-ve-rs 10/21] sdn: add ipam module
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 10/21] sdn: add ipam module Stefan Hanreich
@ 2024-08-13 16:12 ` Max Carrara
0 siblings, 0 replies; 43+ messages in thread
From: Max Carrara @ 2024-08-13 16:12 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed Jun 26, 2024 at 2:15 PM CEST, Stefan Hanreich wrote:
> This module includes structs for representing the JSON schema from the
> PVE ipam. Those can be used to parse the current IPAM state.
>
> We also include a general Ipam struct, and provide a method for
> converting the PVE IPAM to the general struct. The idea behind this
> is that we have multiple IPAM plugins in PVE and will likely add
> support for importing them in the future. With the split, we can have
> our dedicated structs for representing the different data formats from
> the different IPAM plugins and then convert them into a common
> representation that can then be used internally, decoupling the
> concrete plugin from the code using the IPAM configuration.
Big fan of this - as I had already mentioned, I always find it nice to
have different types for such things.
IMO it would be neat if those types were logically grouped though, e.g.
the types for PVE could live in a separate `mod` inside the file. Or, if
you want, you can also convert `ipam.rs` to an `ipam/mod.rs` and add
further files depending on the types of different representations there.
Either solution would make it harder for these types to become
"intermingled" in the future, IMO. So this kind of grouping would simply
serve as a "decent barrier" between the separate representations.
Perhaps the module(s) could then also use a little bit of developer
documentation or something that describes how and why the types are
organized that way.
It's probably best to just add `mod`s for now, as those can be split up
into files later anyway. (Just don't `use super::*;` :P )
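Something along these lines, maybe (purely illustrative):

    // proxmox-ve-config/src/sdn/ipam/mod.rs
    //
    // keep the common representation (Ipam, IpamEntry, IpamData*) here
    // and move the PVE-specific JSON schema types into a submodule:
    pub mod pve;

    pub use pve::IpamJson;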
>
> Enforcing the invariants the way we currently do adds a bit of runtime
> complexity when building the object, but we get the upside of never
> being able to construct an invalid struct. For the amount of entries
> the ipam usually has, this should be fine. Should it turn out to be
> not performant enough we could always add a HashSet for looking up
> values and speeding up the validation. For now, I wanted to avoid the
> additional complexity.
>
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
> .../src/firewall/types/address.rs | 8 +
> proxmox-ve-config/src/guest/vm.rs | 4 +
> proxmox-ve-config/src/sdn/ipam.rs | 330 ++++++++++++++++++
> proxmox-ve-config/src/sdn/mod.rs | 2 +
> 4 files changed, 344 insertions(+)
> create mode 100644 proxmox-ve-config/src/sdn/ipam.rs
>
> diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
> index a0b82c5..3ad1a7a 100644
> --- a/proxmox-ve-config/src/firewall/types/address.rs
> +++ b/proxmox-ve-config/src/firewall/types/address.rs
> @@ -61,6 +61,14 @@ impl Cidr {
> pub fn is_ipv6(&self) -> bool {
> matches!(self, Cidr::Ipv6(_))
> }
> +
> + pub fn contains_address(&self, ip: &IpAddr) -> bool {
> + match (self, ip) {
> + (Cidr::Ipv4(cidr), IpAddr::V4(ip)) => cidr.contains_address(ip),
> + (Cidr::Ipv6(cidr), IpAddr::V6(ip)) => cidr.contains_address(ip),
> + _ => false,
> + }
> + }
> }
>
> impl fmt::Display for Cidr {
> diff --git a/proxmox-ve-config/src/guest/vm.rs b/proxmox-ve-config/src/guest/vm.rs
> index a7ea9bb..6a706c7 100644
> --- a/proxmox-ve-config/src/guest/vm.rs
> +++ b/proxmox-ve-config/src/guest/vm.rs
> @@ -17,6 +17,10 @@ static LOCAL_PART: [u8; 8] = [0xFE, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00];
> static EUI64_MIDDLE_PART: [u8; 2] = [0xFF, 0xFE];
>
> impl MacAddress {
> + pub fn new(address: [u8; 6]) -> Self {
> + Self(address)
> + }
> +
> /// generates a link local IPv6-address according to RFC 4291 (Appendix A)
> pub fn eui64_link_local_address(&self) -> Ipv6Addr {
> let head = &self.0[..3];
> diff --git a/proxmox-ve-config/src/sdn/ipam.rs b/proxmox-ve-config/src/sdn/ipam.rs
> new file mode 100644
> index 0000000..682bbe7
> --- /dev/null
> +++ b/proxmox-ve-config/src/sdn/ipam.rs
> @@ -0,0 +1,330 @@
> +use std::{
> + collections::{BTreeMap, HashMap},
> + error::Error,
> + fmt::Display,
> + net::IpAddr,
> +};
> +
> +use serde::Deserialize;
> +
> +use crate::{
> + firewall::types::Cidr,
> + guest::{types::Vmid, vm::MacAddress},
> + sdn::{SdnNameError, SubnetName, ZoneName},
> +};
> +
> +/// struct for deserializing a gateway entry in PVE IPAM
> +///
> +/// They are automatically generated by the PVE SDN module when creating a new subnet.
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +pub struct IpamJsonDataGateway {
> + #[serde(rename = "gateway")]
> + _gateway: u8,
> +}
> +
> +/// struct for deserializing a guest entry in PVE IPAM
> +///
> +/// They are automatically created when adding a guest to a VNet that has a Subnet with DHCP
> +/// configured.
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +pub struct IpamJsonDataVm {
> + vmid: Vmid,
> + hostname: Option<String>,
> + mac: MacAddress,
> +}
> +
> +/// struct for deserializing a custom entry in PVE IPAM
> +///
> +/// Custom entries are created manually by the user via the Web UI / API.
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +pub struct IpamJsonDataCustom {
> + mac: MacAddress,
> +}
> +
> +/// Enum representing the different kinds of entries that can be located in PVE IPAM
> +///
> +/// For more information about the members see the documentation of the respective structs in the
> +/// enum.
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +#[serde(untagged)]
> +pub enum IpamJsonData {
> + Vm(IpamJsonDataVm),
> + Gateway(IpamJsonDataGateway),
> + Custom(IpamJsonDataCustom),
> +}
> +
> +/// struct for deserializing IPs from the PVE IPAM
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord, Default)]
> +pub struct IpJson {
> + ips: BTreeMap<IpAddr, IpamJsonData>,
> +}
> +
> +/// struct for deserializing subnets from the PVE IPAM
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord, Default)]
> +pub struct SubnetJson {
> + subnets: BTreeMap<Cidr, IpJson>,
> +}
> +
> +/// struct for deserializing the PVE IPAM
> +///
> +/// It is usually located in `/etc/pve/priv/ipam.db`
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord, Default)]
> +pub struct IpamJson {
> + zones: BTreeMap<ZoneName, SubnetJson>,
> +}
> +
> +/// holds the data for the IPAM entry of a VM
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +pub struct IpamDataVm {
> + ip: IpAddr,
> + vmid: Vmid,
> + mac: MacAddress,
> + hostname: Option<String>,
> +}
> +
> +impl IpamDataVm {
> + pub fn new(
> + ip: impl Into<IpAddr>,
> + vmid: impl Into<Vmid>,
> + mac: MacAddress,
> + hostname: impl Into<Option<String>>,
> + ) -> Self {
> + Self {
> + ip: ip.into(),
> + vmid: vmid.into(),
> + mac,
> + hostname: hostname.into(),
> + }
> + }
> +
> + pub fn from_json_data(ip: IpAddr, data: IpamJsonDataVm) -> Self {
> + Self::new(ip, data.vmid, data.mac, data.hostname)
> + }
> +
> + pub fn ip(&self) -> &IpAddr {
> + &self.ip
> + }
> +
> + pub fn vmid(&self) -> Vmid {
> + self.vmid
> + }
> +
> + pub fn mac(&self) -> &MacAddress {
> + &self.mac
> + }
> +
> + pub fn hostname(&self) -> Option<&str> {
> + self.hostname.as_deref()
> + }
> +}
> +
> +/// holds the data for the IPAM entry of a Gateway
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +pub struct IpamDataGateway {
> + ip: IpAddr,
> +}
> +
> +impl IpamDataGateway {
> + pub fn new(ip: IpAddr) -> Self {
> + Self { ip }
> + }
> +
> + fn from_json_data(ip: IpAddr, _json_data: IpamJsonDataGateway) -> Self {
> + Self::new(ip)
> + }
> +
> + pub fn ip(&self) -> &IpAddr {
> + &self.ip
> + }
> +}
> +
> +/// holds the data for a custom IPAM entry
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +pub struct IpamDataCustom {
> + ip: IpAddr,
> + mac: MacAddress,
> +}
> +
> +impl IpamDataCustom {
> + pub fn new(ip: IpAddr, mac: MacAddress) -> Self {
> + Self { ip, mac }
> + }
> +
> + fn from_json_data(ip: IpAddr, json_data: IpamJsonDataCustom) -> Self {
> + Self::new(ip, json_data.mac)
> + }
> +
> + pub fn ip(&self) -> &IpAddr {
> + &self.ip
> + }
> +
> + pub fn mac(&self) -> &MacAddress {
> + &self.mac
> + }
> +}
> +
> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +pub enum IpamData {
> + Vm(IpamDataVm),
> + Gateway(IpamDataGateway),
> + Custom(IpamDataCustom),
> +}
> +
> +impl IpamData {
> + pub fn from_json_data(ip: IpAddr, json_data: IpamJsonData) -> Self {
> + match json_data {
> + IpamJsonData::Vm(json_data) => IpamDataVm::from_json_data(ip, json_data).into(),
> + IpamJsonData::Gateway(json_data) => {
> + IpamDataGateway::from_json_data(ip, json_data).into()
> + }
> + IpamJsonData::Custom(json_data) => IpamDataCustom::from_json_data(ip, json_data).into(),
> + }
> + }
> +
> + pub fn ip_address(&self) -> &IpAddr {
> + match &self {
> + IpamData::Vm(data) => data.ip(),
> + IpamData::Gateway(data) => data.ip(),
> + IpamData::Custom(data) => data.ip(),
> + }
> + }
> +}
> +
> +impl From<IpamDataVm> for IpamData {
> + fn from(value: IpamDataVm) -> Self {
> + IpamData::Vm(value)
> + }
> +}
> +
> +impl From<IpamDataGateway> for IpamData {
> + fn from(value: IpamDataGateway) -> Self {
> + IpamData::Gateway(value)
> + }
> +}
> +
> +impl From<IpamDataCustom> for IpamData {
> + fn from(value: IpamDataCustom) -> Self {
> + IpamData::Custom(value)
> + }
> +}
> +
> +#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +pub enum IpamError {
> + NameError(SdnNameError),
> + InvalidIpAddress,
> + DuplicateIpAddress,
> + IpAddressOutOfBounds,
> +}
> +
> +impl Error for IpamError {}
> +
> +impl Display for IpamError {
> + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
> + f.write_str("")
> + }
> +}
> +
> +/// represents an entry in the PVE IPAM database
> +#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
> +pub struct IpamEntry {
> + subnet: SubnetName,
> + data: IpamData,
> +}
> +
> +impl IpamEntry {
> + /// creates a new PVE IPAM entry
> + ///
> + /// # Errors
> + ///
> + /// This function will return an error if the IP address of the entry does not match the CIDR
> + /// of the subnet.
> + pub fn new(subnet: SubnetName, data: IpamData) -> Result<Self, IpamError> {
> + if !subnet.cidr().contains_address(data.ip_address()) {
> + return Err(IpamError::IpAddressOutOfBounds);
> + }
> +
> + Ok(IpamEntry { subnet, data })
> + }
> +
> + pub fn subnet(&self) -> &SubnetName {
> + &self.subnet
> + }
> +
> + pub fn data(&self) -> &IpamData {
> + &self.data
> + }
> +
> + pub fn ip_address(&self) -> &IpAddr {
> + self.data.ip_address()
> + }
> +}
> +
> +/// Common representation of IPAM data used in SDN
> +///
> +/// this should be instantiated by reading from one of the concrete IPAM implementations and then
> +/// converting into this common struct.
> +///
> +/// # Invariants
> +/// * No IP address in a Subnet is allocated twice
> +#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, Default)]
> +pub struct Ipam {
> + entries: BTreeMap<SubnetName, Vec<IpamEntry>>,
> +}
> +
> +impl Ipam {
> + pub fn new() -> Self {
> + Self::default()
> + }
> +
> + pub fn from_entries(entries: impl IntoIterator<Item = IpamEntry>) -> Result<Self, IpamError> {
> + let mut ipam = Self::new();
> +
> + for entry in entries {
> + ipam.add_entry(entry)?;
> + }
> +
> + Ok(ipam)
> + }
> +
> + /// adds a new [`IpamEntry`] to the database
> + ///
> + /// # Errors
> + ///
> + /// This function will return an error if the IP is already allocated by another guest.
> + pub fn add_entry(&mut self, entry: IpamEntry) -> Result<(), IpamError> {
> + if let Some(entries) = self.entries.get_mut(entry.subnet()) {
> + for ipam_entry in &*entries {
> + if ipam_entry.ip_address() == entry.ip_address() {
> + return Err(IpamError::DuplicateIpAddress);
> + }
> + }
> +
> + entries.push(entry);
> + } else {
> + self.entries
> + .insert(entry.subnet().clone(), [entry].to_vec());
> + }
> +
> + Ok(())
> + }
> +}
> +
> +impl TryFrom<IpamJson> for Ipam {
> + type Error = IpamError;
> +
> + fn try_from(value: IpamJson) -> Result<Self, Self::Error> {
> + let mut ipam = Ipam::default();
> +
> + for (zone_name, subnet_json) in value.zones {
> + for (cidr, ip_json) in subnet_json.subnets {
> + for (ip, json_data) in ip_json.ips {
> + let data = IpamData::from_json_data(ip, json_data);
> + let subnet = SubnetName::new(zone_name.clone(), cidr);
> + ipam.add_entry(IpamEntry::new(subnet, data)?)?;
> + }
> + }
> + }
> +
> + Ok(ipam)
> + }
> +}
> diff --git a/proxmox-ve-config/src/sdn/mod.rs b/proxmox-ve-config/src/sdn/mod.rs
> index 4e7c525..67af24e 100644
> --- a/proxmox-ve-config/src/sdn/mod.rs
> +++ b/proxmox-ve-config/src/sdn/mod.rs
> @@ -1,3 +1,5 @@
> +pub mod ipam;
> +
> use std::{error::Error, fmt::Display, str::FromStr};
>
> use serde_with::DeserializeFromStr;
* [pve-devel] [PATCH proxmox-ve-rs 11/21] sdn: ipam: add method for generating ipsets
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (9 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 10/21] sdn: add ipam module Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 12/21] sdn: add config module Stefan Hanreich
` (13 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
For every guest that has at least one entry in the IPAM, we generate an
ipset with the name `+dc/guest-ipam-{vmid}`. The ipset contains all IPs
from all zones for the guest with that vmid.
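A rough usage sketch (only for illustration; the helper function and
vmids are made up):

    use proxmox_ve_config::common::Allowlist;
    use proxmox_ve_config::guest::types::Vmid;
    use proxmox_ve_config::sdn::ipam::Ipam;

    fn print_guest_ipsets(ipam: &Ipam) {
        // restrict the output to two guests; passing `None` instead
        // would yield one ipset per guest with an entry in the IPAM
        let allowlist = Allowlist::new(["100", "101"].map(|v| v.parse::<Vmid>().unwrap()));

        for ipset in ipam.ipsets(&allowlist) {
            println!("{}", ipset.name);
        }
    }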
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
.../src/firewall/types/address.rs | 9 ++++
proxmox-ve-config/src/sdn/ipam.rs | 54 ++++++++++++++++++-
2 files changed, 62 insertions(+), 1 deletion(-)
diff --git a/proxmox-ve-config/src/firewall/types/address.rs b/proxmox-ve-config/src/firewall/types/address.rs
index 3ad1a7a..cd1d649 100644
--- a/proxmox-ve-config/src/firewall/types/address.rs
+++ b/proxmox-ve-config/src/firewall/types/address.rs
@@ -108,6 +108,15 @@ impl From<Ipv6Cidr> for Cidr {
}
}
+impl From<IpAddr> for Cidr {
+ fn from(value: IpAddr) -> Self {
+ match value {
+ IpAddr::V4(addr) => Ipv4Cidr::from(addr).into(),
+ IpAddr::V6(addr) => Ipv6Cidr::from(addr).into(),
+ }
+ }
+}
+
const IPV4_LENGTH: u8 = 32;
#[derive(Clone, Copy, Debug, PartialOrd, Ord, PartialEq, Eq, Hash)]
diff --git a/proxmox-ve-config/src/sdn/ipam.rs b/proxmox-ve-config/src/sdn/ipam.rs
index 682bbe7..febbc0b 100644
--- a/proxmox-ve-config/src/sdn/ipam.rs
+++ b/proxmox-ve-config/src/sdn/ipam.rs
@@ -8,7 +8,11 @@ use std::{
use serde::Deserialize;
use crate::{
- firewall::types::Cidr,
+ common::Allowlist,
+ firewall::types::{
+ ipset::{IpsetEntry, IpsetScope},
+ Cidr, Ipset,
+ },
guest::{types::Vmid, vm::MacAddress},
sdn::{SdnNameError, SubnetName, ZoneName},
};
@@ -309,6 +313,54 @@ impl Ipam {
}
}
+impl Ipam {
+ /// generates an [`Ipset`] for all guests with at least one entry in the IPAM
+ ///
+ /// # Arguments
+ /// * `filter` - A [`Allowlist<Vmid>`] for which IPsets should get returned
+ ///
+ /// It contains all IPs in all VNets, that a guest has stored in IPAM.
+ /// Ipset name is of the form `guest-ipam-<vmid>`
+ pub fn ipsets<'a>(
+ &self,
+ filter: impl Into<Option<&'a Allowlist<Vmid>>>,
+ ) -> impl Iterator<Item = Ipset> + '_ {
+ let filter = filter.into();
+
+ self.entries
+ .iter()
+ .flat_map(|(_, entries)| entries.iter())
+ .filter_map(|entry| {
+ if let IpamData::Vm(data) = &entry.data() {
+ if filter
+ .map(|list| list.is_allowed(&data.vmid))
+ .unwrap_or(true)
+ {
+ return Some(data);
+ }
+ }
+
+ None
+ })
+ .fold(HashMap::<Vmid, Ipset>::new(), |mut acc, entry| {
+ match acc.get_mut(&entry.vmid) {
+ Some(ipset) => {
+ ipset.push(IpsetEntry::from(entry.ip));
+ }
+ None => {
+ let ipset_name = format!("guest-ipam-{}", entry.vmid);
+ let mut ipset = Ipset::from_parts(IpsetScope::Datacenter, ipset_name);
+ ipset.push(IpsetEntry::from(entry.ip));
+ acc.insert(entry.vmid, ipset);
+ }
+ };
+
+ acc
+ })
+ .into_values()
+ }
+}
+
impl TryFrom<IpamJson> for Ipam {
type Error = IpamError;
--
2.39.2
* [pve-devel] [PATCH proxmox-ve-rs 12/21] sdn: add config module
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (10 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 11/21] sdn: ipam: add method for generating ipsets Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-27 10:54 ` Gabriel Goller
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 13/21] sdn: config: add method for generating ipsets Stefan Hanreich
` (12 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Similar to how the IPAM module works, we separate the internal
representation from the concrete schema of the configuration file.
We provide structs for parsing the running SDN configuration and a
struct that is used internally for representing an SDN configuration,
as well as a method for converting the running configuration to the
internal representation.
This is necessary because there are two possible sources for the SDN
configuration: the running configuration, as well as the SectionConfig
that contains possible changes from the UI that have not yet been
applied.
Similarly to the IPAM, enforcing the invariants the way we currently do
adds some runtime complexity when building the object, but we get the
upside of never being able to construct an invalid struct. For the
number of entries the SDN config usually has, this should be fine.
Should it turn out not to be performant enough, we could always add a
HashSet for looking up values and speeding up the validation. For now,
I wanted to avoid the additional complexity.
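As a small usage sketch for the string-typed enums introduced here
(illustrative only):

    use proxmox_ve_config::sdn::config::ZoneType;

    fn main() {
        let zone_type: ZoneType = "simple".parse().unwrap();
        assert_eq!(zone_type, ZoneType::Simple);

        // the Display impl mirrors the FromStr impl
        assert_eq!(zone_type.to_string(), "simple");
    }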
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-ve-config/src/sdn/config.rs | 571 ++++++++++++++++++++++++++++
proxmox-ve-config/src/sdn/mod.rs | 1 +
2 files changed, 572 insertions(+)
create mode 100644 proxmox-ve-config/src/sdn/config.rs
diff --git a/proxmox-ve-config/src/sdn/config.rs b/proxmox-ve-config/src/sdn/config.rs
new file mode 100644
index 0000000..8454adf
--- /dev/null
+++ b/proxmox-ve-config/src/sdn/config.rs
@@ -0,0 +1,571 @@
+use std::{
+ collections::{BTreeMap, HashMap},
+ error::Error,
+ fmt::Display,
+ net::IpAddr,
+ str::FromStr,
+};
+
+use proxmox_schema::{property_string::PropertyString, ApiType, ObjectSchema, StringSchema};
+use serde::Deserialize;
+use serde_with::{DeserializeFromStr, SerializeDisplay};
+
+use crate::{
+ common::Allowlist,
+ firewall::types::{
+ address::{IpRange, IpRangeError},
+ ipset::{IpsetEntry, IpsetName, IpsetScope},
+ Cidr, Ipset,
+ },
+ sdn::{SubnetName, VnetName, ZoneName},
+};
+
+use super::SdnNameError;
+
+#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub enum SdnConfigError {
+ InvalidZoneType,
+ InvalidDhcpType,
+ ZoneNotFound,
+ VnetNotFound,
+ MismatchedCidrGateway,
+ MismatchedSubnetZone,
+ NameError(SdnNameError),
+ InvalidDhcpRange(IpRangeError),
+ DuplicateVnetName,
+}
+
+impl Error for SdnConfigError {
+ fn source(&self) -> Option<&(dyn Error + 'static)> {
+ match self {
+ SdnConfigError::NameError(e) => Some(e),
+ SdnConfigError::InvalidDhcpRange(e) => Some(e),
+ _ => None,
+ }
+ }
+}
+
+impl Display for SdnConfigError {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ match self {
+ SdnConfigError::NameError(err) => write!(f, "invalid name: {err}"),
+ SdnConfigError::InvalidDhcpRange(err) => write!(f, "invalid dhcp range: {err}"),
+ SdnConfigError::ZoneNotFound => write!(f, "zone not found"),
+ SdnConfigError::VnetNotFound => write!(f, "vnet not found"),
+ SdnConfigError::MismatchedCidrGateway => {
+ write!(f, "mismatched ip address family for gateway and CIDR")
+ }
+ SdnConfigError::InvalidZoneType => write!(f, "invalid zone type"),
+ SdnConfigError::InvalidDhcpType => write!(f, "invalid dhcp type"),
+ SdnConfigError::DuplicateVnetName => write!(f, "vnet name occurs in multiple zones"),
+ SdnConfigError::MismatchedSubnetZone => {
+ write!(f, "subnet zone does not match actual zone")
+ }
+ }
+ }
+}
+
+#[derive(
+ Copy, Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, SerializeDisplay, DeserializeFromStr,
+)]
+pub enum ZoneType {
+ Simple,
+ Vlan,
+ Qinq,
+ Vxlan,
+ Evpn,
+}
+
+impl FromStr for ZoneType {
+ type Err = SdnConfigError;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ match s {
+ "simple" => Ok(ZoneType::Simple),
+ "vlan" => Ok(ZoneType::Vlan),
+ "qinq" => Ok(ZoneType::Qinq),
+ "vxlan" => Ok(ZoneType::Vxlan),
+ "evpn" => Ok(ZoneType::Evpn),
+ _ => Err(SdnConfigError::InvalidZoneType),
+ }
+ }
+}
+
+impl Display for ZoneType {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str(match self {
+ ZoneType::Simple => "simple",
+ ZoneType::Vlan => "vlan",
+ ZoneType::Qinq => "qinq",
+ ZoneType::Vxlan => "vxlan",
+ ZoneType::Evpn => "evpn",
+ })
+ }
+}
+
+#[derive(
+ Copy, Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, SerializeDisplay, DeserializeFromStr,
+)]
+pub enum DhcpType {
+ Dnsmasq,
+}
+
+impl FromStr for DhcpType {
+ type Err = SdnConfigError;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ match s {
+ "dnsmasq" => Ok(DhcpType::Dnsmasq),
+ _ => Err(SdnConfigError::InvalidDhcpType),
+ }
+ }
+}
+
+impl Display for DhcpType {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str(match self {
+ DhcpType::Dnsmasq => "dnsmasq",
+ })
+ }
+}
+
+/// struct for deserializing a zone entry of the SDN running config
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct ZoneRunningConfig {
+ #[serde(rename = "type")]
+ ty: ZoneType,
+ dhcp: DhcpType,
+}
+
+/// struct for deserializing the zones of the SDN running config
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Default)]
+pub struct ZonesRunningConfig {
+ ids: HashMap<ZoneName, ZoneRunningConfig>,
+}
+
+/// represents the dhcp-range property string used in the SDN configuration
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct DhcpRange {
+ #[serde(rename = "start-address")]
+ start: IpAddr,
+ #[serde(rename = "end-address")]
+ end: IpAddr,
+}
+
+impl ApiType for DhcpRange {
+ const API_SCHEMA: proxmox_schema::Schema = ObjectSchema::new(
+ "DHCP range",
+ &[
+ (
+ "end-address",
+ false,
+ &StringSchema::new("start address of DHCP range").schema(),
+ ),
+ (
+ "start-address",
+ false,
+ &StringSchema::new("end address of DHCP range").schema(),
+ ),
+ ],
+ )
+ .schema();
+}
+
+impl TryFrom<DhcpRange> for IpRange {
+ type Error = IpRangeError;
+
+ fn try_from(value: DhcpRange) -> Result<Self, Self::Error> {
+ IpRange::new(value.start, value.end)
+ }
+}
+
+/// struct for deserializing a subnet entry of the SDN running config
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct SubnetRunningConfig {
+ vnet: VnetName,
+ gateway: Option<IpAddr>,
+ snat: Option<u8>,
+ #[serde(rename = "dhcp-range")]
+ dhcp_range: Option<Vec<PropertyString<DhcpRange>>>,
+}
+
+/// struct for deserializing the subnets of the SDN running config
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Default)]
+pub struct SubnetsRunningConfig {
+ ids: HashMap<SubnetName, SubnetRunningConfig>,
+}
+
+/// struct for deserializing a vnet entry of the SDN running config
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct VnetRunningConfig {
+ zone: ZoneName,
+}
+
+/// struct for deserializing the vnets of the SDN running config
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Default)]
+pub struct VnetsRunningConfig {
+ ids: HashMap<VnetName, VnetRunningConfig>,
+}
+
+/// struct for deserializing the SDN running config
+///
+/// usually taken from the content of /etc/pve/sdn/.running-config
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Default)]
+pub struct RunningConfig {
+ zones: Option<ZonesRunningConfig>,
+ subnets: Option<SubnetsRunningConfig>,
+ vnets: Option<VnetsRunningConfig>,
+}
+
+/// A struct containing the configuration for an SDN subnet
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct SubnetConfig {
+ name: SubnetName,
+ gateway: Option<IpAddr>,
+ snat: bool,
+ dhcp_range: Vec<IpRange>,
+}
+
+impl SubnetConfig {
+ pub fn new(
+ name: SubnetName,
+ gateway: impl Into<Option<IpAddr>>,
+ snat: bool,
+ dhcp_range: impl IntoIterator<Item = IpRange>,
+ ) -> Result<Self, SdnConfigError> {
+ let gateway = gateway.into();
+
+ if let Some(gateway) = gateway {
+ if !(gateway.is_ipv4() && name.cidr().is_ipv4()
+ || gateway.is_ipv6() && name.cidr().is_ipv6())
+ {
+ return Err(SdnConfigError::MismatchedCidrGateway);
+ }
+ }
+
+ Ok(Self {
+ name,
+ gateway,
+ snat,
+ dhcp_range: dhcp_range.into_iter().collect(),
+ })
+ }
+
+ pub fn try_from_running_config(
+ name: SubnetName,
+ running_config: SubnetRunningConfig,
+ ) -> Result<Self, SdnConfigError> {
+ let snat = running_config
+ .snat
+ .map(|snat| snat != 0)
+ .unwrap_or_else(|| false);
+
+ let dhcp_range: Vec<IpRange> = match running_config.dhcp_range {
+ Some(dhcp_range) => dhcp_range
+ .into_iter()
+ .map(PropertyString::into_inner)
+ .map(IpRange::try_from)
+ .collect::<Result<Vec<IpRange>, IpRangeError>>()
+ .map_err(SdnConfigError::InvalidDhcpRange)?,
+ None => Vec::new(),
+ };
+
+ Self::new(name, running_config.gateway, snat, dhcp_range)
+ }
+
+ pub fn name(&self) -> &SubnetName {
+ &self.name
+ }
+
+ pub fn gateway(&self) -> Option<&IpAddr> {
+ self.gateway.as_ref()
+ }
+
+ pub fn snat(&self) -> bool {
+ self.snat
+ }
+
+ pub fn cidr(&self) -> &Cidr {
+ self.name.cidr()
+ }
+
+ pub fn dhcp_ranges(&self) -> impl Iterator<Item = &IpRange> + '_ {
+ self.dhcp_range.iter()
+ }
+}
+
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct VnetConfig {
+ name: VnetName,
+ subnets: BTreeMap<Cidr, SubnetConfig>,
+}
+
+impl VnetConfig {
+ pub fn new(name: VnetName) -> Self {
+ Self {
+ name,
+ subnets: BTreeMap::default(),
+ }
+ }
+
+ pub fn from_subnets(
+ name: VnetName,
+ subnets: impl IntoIterator<Item = SubnetConfig>,
+ ) -> Result<Self, SdnConfigError> {
+ let mut config = Self::new(name);
+ config.add_subnets(subnets)?;
+ Ok(config)
+ }
+
+ pub fn add_subnets(
+ &mut self,
+ subnets: impl IntoIterator<Item = SubnetConfig>,
+ ) -> Result<(), SdnConfigError> {
+ self.subnets
+ .extend(subnets.into_iter().map(|subnet| (*subnet.cidr(), subnet)));
+ Ok(())
+ }
+
+ pub fn add_subnet(
+ &mut self,
+ subnet: SubnetConfig,
+ ) -> Result<Option<SubnetConfig>, SdnConfigError> {
+ Ok(self.subnets.insert(*subnet.cidr(), subnet))
+ }
+
+ pub fn subnets(&self) -> impl Iterator<Item = &SubnetConfig> + '_ {
+ self.subnets.values()
+ }
+
+ pub fn subnet(&self, cidr: &Cidr) -> Option<&SubnetConfig> {
+ self.subnets.get(cidr)
+ }
+
+ pub fn name(&self) -> &VnetName {
+ &self.name
+ }
+}
+
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
+pub struct ZoneConfig {
+ name: ZoneName,
+ ty: ZoneType,
+ vnets: BTreeMap<VnetName, VnetConfig>,
+}
+
+impl ZoneConfig {
+ pub fn new(name: ZoneName, ty: ZoneType) -> Self {
+ Self {
+ name,
+ ty,
+ vnets: BTreeMap::default(),
+ }
+ }
+
+ pub fn from_vnets(
+ name: ZoneName,
+ ty: ZoneType,
+ vnets: impl IntoIterator<Item = VnetConfig>,
+ ) -> Result<Self, SdnConfigError> {
+ let mut config = Self::new(name, ty);
+ config.add_vnets(vnets)?;
+ Ok(config)
+ }
+
+ pub fn add_vnets(
+ &mut self,
+ vnets: impl IntoIterator<Item = VnetConfig>,
+ ) -> Result<(), SdnConfigError> {
+ self.vnets
+ .extend(vnets.into_iter().map(|vnet| (vnet.name.clone(), vnet)));
+
+ Ok(())
+ }
+
+ pub fn add_vnet(&mut self, vnet: VnetConfig) -> Result<Option<VnetConfig>, SdnConfigError> {
+ Ok(self.vnets.insert(vnet.name.clone(), vnet))
+ }
+
+ pub fn vnets(&self) -> impl Iterator<Item = &VnetConfig> + '_ {
+ self.vnets.values()
+ }
+
+ pub fn vnet(&self, name: &VnetName) -> Option<&VnetConfig> {
+ self.vnets.get(name)
+ }
+
+ pub fn vnet_mut(&mut self, name: &VnetName) -> Option<&mut VnetConfig> {
+ self.vnets.get_mut(name)
+ }
+
+ pub fn name(&self) -> &ZoneName {
+ &self.name
+ }
+
+ pub fn ty(&self) -> ZoneType {
+ self.ty
+ }
+}
+
+/// Representation of a Proxmox VE SDN configuration
+///
+/// This struct should not be instantiated directly but rather through reading the configuration
+/// from a concrete config struct (e.g [`RunningConfig`]) and then converting into this common
+/// representation.
+///
+/// # Invariants
+/// * Every Vnet name is unique, even if they are in different zones
+/// * Subnets can only be added to a zone if their name contains the same zone they are added to
+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord, Default)]
+pub struct SdnConfig {
+ zones: BTreeMap<ZoneName, ZoneConfig>,
+}
+
+impl SdnConfig {
+ pub fn new() -> Self {
+ Self::default()
+ }
+
+ pub fn from_zones(zones: impl IntoIterator<Item = ZoneConfig>) -> Result<Self, SdnConfigError> {
+ let mut config = Self::default();
+ config.add_zones(zones)?;
+ Ok(config)
+ }
+
+ /// adds a collection of zones to the configuration, overwriting existing zones with the same
+ /// name
+ pub fn add_zones(
+ &mut self,
+ zones: impl IntoIterator<Item = ZoneConfig>,
+ ) -> Result<(), SdnConfigError> {
+ for zone in zones {
+ self.add_zone(zone)?;
+ }
+
+ Ok(())
+ }
+
+ /// adds a zone to the configuration, returning the old zone config if the zone already existed
+ pub fn add_zone(&mut self, mut zone: ZoneConfig) -> Result<Option<ZoneConfig>, SdnConfigError> {
+ let vnets = std::mem::take(&mut zone.vnets);
+
+ let zone_name = zone.name().clone();
+ let old_zone = self.zones.insert(zone_name.clone(), zone);
+
+ for vnet in vnets.into_values() {
+ self.add_vnet(&zone_name, vnet)?;
+ }
+
+ Ok(old_zone)
+ }
+
+ pub fn add_vnet(
+ &mut self,
+ zone_name: &ZoneName,
+ mut vnet: VnetConfig,
+ ) -> Result<Option<VnetConfig>, SdnConfigError> {
+ for zone in self.zones.values() {
+ if zone.name() != zone_name && zone.vnets.contains_key(vnet.name()) {
+ return Err(SdnConfigError::DuplicateVnetName);
+ }
+ }
+
+ if let Some(zone) = self.zones.get_mut(zone_name) {
+ let subnets = std::mem::take(&mut vnet.subnets);
+
+ let vnet_name = vnet.name().clone();
+ let old_vnet = zone.vnets.insert(vnet_name.clone(), vnet);
+
+ for subnet in subnets.into_values() {
+ self.add_subnet(zone_name, &vnet_name, subnet)?;
+ }
+
+ return Ok(old_vnet);
+ }
+
+ Err(SdnConfigError::ZoneNotFound)
+ }
+
+ pub fn add_subnet(
+ &mut self,
+ zone_name: &ZoneName,
+ vnet_name: &VnetName,
+ subnet: SubnetConfig,
+ ) -> Result<Option<SubnetConfig>, SdnConfigError> {
+ if zone_name != subnet.name().zone() {
+ return Err(SdnConfigError::MismatchedSubnetZone);
+ }
+
+ if let Some(zone) = self.zones.get_mut(zone_name) {
+ if let Some(vnet) = zone.vnets.get_mut(vnet_name) {
+ return Ok(vnet.subnets.insert(*subnet.name().cidr(), subnet));
+ } else {
+ return Err(SdnConfigError::VnetNotFound);
+ }
+ }
+
+ Err(SdnConfigError::ZoneNotFound)
+ }
+
+ pub fn zone(&self, name: &ZoneName) -> Option<&ZoneConfig> {
+ self.zones.get(name)
+ }
+
+ pub fn zones(&self) -> impl Iterator<Item = &ZoneConfig> + '_ {
+ self.zones.values()
+ }
+
+ pub fn vnet(&self, name: &VnetName) -> Option<(&ZoneConfig, &VnetConfig)> {
+ // we can do this because we enforce the invariant that every VNet name must be unique!
+ for zone in self.zones.values() {
+ if let Some(vnet) = zone.vnet(name) {
+ return Some((zone, vnet));
+ }
+ }
+
+ None
+ }
+
+ pub fn vnets(&self) -> impl Iterator<Item = (&ZoneConfig, &VnetConfig)> + '_ {
+ self.zones()
+ .flat_map(|zone| zone.vnets().map(move |vnet| (zone, vnet)))
+ }
+}
+
+impl TryFrom<RunningConfig> for SdnConfig {
+ type Error = SdnConfigError;
+
+ fn try_from(mut value: RunningConfig) -> Result<Self, Self::Error> {
+ let mut config = SdnConfig::default();
+
+ if let Some(running_zones) = value.zones.take() {
+ config.add_zones(
+ running_zones
+ .ids
+ .into_iter()
+ .map(|(name, running_config)| ZoneConfig::new(name, running_config.ty)),
+ )?;
+ }
+
+ if let Some(running_vnets) = value.vnets.take() {
+ for (name, running_config) in running_vnets.ids {
+ config.add_vnet(&running_config.zone, VnetConfig::new(name))?;
+ }
+ }
+
+ if let Some(running_subnets) = value.subnets.take() {
+ for (name, running_config) in running_subnets.ids {
+ let zone_name = name.zone().clone();
+ let vnet_name = running_config.vnet.clone();
+
+ config.add_subnet(
+ &zone_name,
+ &vnet_name,
+ SubnetConfig::try_from_running_config(name, running_config)?,
+ )?;
+ }
+ }
+
+ Ok(config)
+ }
+}
diff --git a/proxmox-ve-config/src/sdn/mod.rs b/proxmox-ve-config/src/sdn/mod.rs
index 67af24e..f02c170 100644
--- a/proxmox-ve-config/src/sdn/mod.rs
+++ b/proxmox-ve-config/src/sdn/mod.rs
@@ -1,3 +1,4 @@
+pub mod config;
pub mod ipam;
use std::{error::Error, fmt::Display, str::FromStr};
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH proxmox-ve-rs 12/21] sdn: add config module
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 12/21] sdn: add config module Stefan Hanreich
@ 2024-06-27 10:54 ` Gabriel Goller
2024-07-16 9:28 ` Stefan Hanreich
0 siblings, 1 reply; 43+ messages in thread
From: Gabriel Goller @ 2024-06-27 10:54 UTC (permalink / raw)
To: Proxmox VE development discussion
On 26.06.2024 14:15, Stefan Hanreich wrote:
>diff --git a/proxmox-ve-config/src/sdn/config.rs b/proxmox-ve-config/src/sdn/config.rs
>new file mode 100644
>index 0000000..8454adf
>--- /dev/null
>+++ b/proxmox-ve-config/src/sdn/config.rs
>@@ -0,0 +1,571 @@
> [snip]
>+impl Display for DhcpType {
>+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
>+ f.write_str(match self {
>+ DhcpType::Dnsmasq => "dnsmasq",
>+ })
>+ }
>+}
>+
>+/// struct for deserializing a zone entry of the SDN running config
I think we usually begin doc-strings with a capital letter :)
>+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
>+pub struct ZoneRunningConfig {
>+ #[serde(rename = "type")]
>+ ty: ZoneType,
>+ dhcp: DhcpType,
>+}
>+
>+/// struct for deserializing the zones of the SDN running config
>+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Default)]
>+pub struct ZonesRunningConfig {
>+ ids: HashMap<ZoneName, ZoneRunningConfig>,
>+}
>+
>+/// represents the dhcp-range property string used in the SDN configuration
>+#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]
>+pub struct DhcpRange {
>+ #[serde(rename = "start-address")]
>+ start: IpAddr,
>+ #[serde(rename = "end-address")]
>+ end: IpAddr,
>+}
>+
>+impl ApiType for DhcpRange {
>+ const API_SCHEMA: proxmox_schema::Schema = ObjectSchema::new(
>+ "DHCP range",
>+ &[
>+ (
>+ "end-address",
>+ false,
>+ &StringSchema::new("start address of DHCP range").schema(),
Shouldn't this be "end address..." or is this intended?
Same below.
>+ ),
>+ (
>+ "start-address",
>+ false,
>+ &StringSchema::new("end address of DHCP range").schema(),
>+ ),
>+ ],
>+ )
>+ .schema();
>+}
>+
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH proxmox-ve-rs 12/21] sdn: add config module
2024-06-27 10:54 ` Gabriel Goller
@ 2024-07-16 9:28 ` Stefan Hanreich
0 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-07-16 9:28 UTC (permalink / raw)
To: Proxmox VE development discussion, Gabriel Goller
On 6/27/24 12:54, Gabriel Goller wrote:
> On 26.06.2024 14:15, Stefan Hanreich wrote:
>> diff --git a/proxmox-ve-config/src/sdn/config.rs
>> b/proxmox-ve-config/src/sdn/config.rs
>> new file mode 100644
>> index 0000000..8454adf
>> --- /dev/null
>> +++ b/proxmox-ve-config/src/sdn/config.rs
>> @@ -0,0 +1,571 @@
>> [snip]
>> +impl Display for DhcpType {
>> + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
>> + f.write_str(match self {
>> + DhcpType::Dnsmasq => "dnsmasq",
>> + })
>> + }
>> +}
>> +
>> +/// struct for deserializing a zone entry of the SDN running config
>
> I think we usually begin doc-strings with a capital letter :)
>
>> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd,
>> Ord)]
>> +pub struct ZoneRunningConfig {
>> + #[serde(rename = "type")]
>> + ty: ZoneType,
>> + dhcp: DhcpType,
>> +}
>> +
>> +/// struct for deserializing the zones of the SDN running config
>> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Default)]
>> +pub struct ZonesRunningConfig {
>> + ids: HashMap<ZoneName, ZoneRunningConfig>,
>> +}
>> +
>> +/// represents the dhcp-range property string used in the SDN
>> configuration
>> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd,
>> Ord)]
>> +pub struct DhcpRange {
>> + #[serde(rename = "start-address")]
>> + start: IpAddr,
>> + #[serde(rename = "end-address")]
>> + end: IpAddr,
>> +}
>> +
>> +impl ApiType for DhcpRange {
>> + const API_SCHEMA: proxmox_schema::Schema = ObjectSchema::new(
>> + "DHCP range",
>> + &[
>> + (
>> + "end-address",
>> + false,
>> + &StringSchema::new("start address of DHCP
>> range").schema(),
>
> Shouldn't this be "end address..." or is this intended?
> Same below.
Good catch, seems like I messed up when copy-pasting.
>> + ),
>> + (
>> + "start-address",
>> + false,
>> + &StringSchema::new("end address of DHCP
>> range").schema(),
>> + ),
>> + ],
>> + )
>> + .schema();
>> +}
>> +
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH proxmox-ve-rs 13/21] sdn: config: add method for generating ipsets
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (11 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 12/21] sdn: add config module Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 14/21] tests: add sdn config tests Stefan Hanreich
` (11 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
We generate the following ipsets for every vnet in the running SDN
configuration:
* {vnet}-all: contains all subnets of the vnet
* {vnet}-no-gateway: contains all subnets of the vnet except for the
gateways
* {vnet}-gateway: contains all gateways in the vnet
* {vnet}-dhcp: contains all dhcp ranges configured in the vnet
All of them are in the datacenter scope, so the fully qualified name
would look something like this: `+dc/{vnet}-all`.
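A minimal sketch of generating these ipsets from an already parsed
SdnConfig (the surrounding function exists only for illustration):

use proxmox_ve_config::sdn::config::SdnConfig;

// Passing None instead of an &Allowlist<VnetName> generates the four
// ipsets listed above for every vnet in the configuration.
fn print_vnet_ipsets(sdn_config: &SdnConfig) {
    for ipset in sdn_config.ipsets(None) {
        println!("{}", ipset.name());
    }
}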
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-ve-config/src/sdn/config.rs | 72 +++++++++++++++++++++++++++++
1 file changed, 72 insertions(+)
diff --git a/proxmox-ve-config/src/sdn/config.rs b/proxmox-ve-config/src/sdn/config.rs
index 8454adf..8a7316d 100644
--- a/proxmox-ve-config/src/sdn/config.rs
+++ b/proxmox-ve-config/src/sdn/config.rs
@@ -530,6 +530,78 @@ impl SdnConfig {
self.zones()
.flat_map(|zone| zone.vnets().map(move |vnet| (zone, vnet)))
}
+
+ /// Generates multiple [`Ipset`] for all SDN VNets.
+ ///
+ /// # Arguments
+ /// * `filter` - An [`Allowlist`] of VNet names for which IPsets should be returned
+ ///
+ /// It generates the following [`Ipset`] for all VNets in the config:
+ /// * all: Contains all CIDRs of all subnets in the VNet
+ /// * gateway: Contains all gateways of all subnets in the VNet (if any gateway exists)
+ /// * no-gateway: Matches all CIDRs of all subnets, except for the gateways (if any gateway
+ /// exists)
+ /// * dhcp: Contains all DHCP ranges of all subnets in the VNet (if any dhcp range exists)
+ pub fn ipsets<'a>(
+ &'a self,
+ filter: impl Into<Option<&'a Allowlist<VnetName>>>,
+ ) -> impl Iterator<Item = Ipset> + '_ {
+ let filter = filter.into();
+
+ self.zones
+ .values()
+ .flat_map(|zone| zone.vnets())
+ .filter(move |vnet| {
+ filter
+ .map(|list| list.is_allowed(&vnet.name))
+ .unwrap_or(true)
+ })
+ .flat_map(|vnet| {
+ let mut ipset_all = Ipset::new(IpsetName::new(
+ IpsetScope::Datacenter,
+ format!("{}-all", vnet.name),
+ ));
+ ipset_all.comment = Some(format!("All subnets of VNet {}", vnet.name));
+
+ let mut ipset_gateway = Ipset::new(IpsetName::new(
+ IpsetScope::Datacenter,
+ format!("{}-gateway", vnet.name),
+ ));
+ ipset_gateway.comment = Some(format!("All gateways of VNet {}", vnet.name));
+
+ let mut ipset_all_wo_gateway = Ipset::new(IpsetName::new(
+ IpsetScope::Datacenter,
+ format!("{}-no-gateway", vnet.name),
+ ));
+ ipset_all_wo_gateway.comment = Some(format!(
+ "All subnets of VNet {}, excluding gateways",
+ vnet.name
+ ));
+
+ let mut ipset_dhcp = Ipset::new(IpsetName::new(
+ IpsetScope::Datacenter,
+ format!("{}-dhcp", vnet.name),
+ ));
+ ipset_dhcp.comment = Some(format!("DHCP ranges of VNet {}", vnet.name));
+
+ for subnet in vnet.subnets.values() {
+ ipset_all.push((*subnet.cidr()).into());
+
+ ipset_all_wo_gateway.push((*subnet.cidr()).into());
+
+ if let Some(gateway) = subnet.gateway {
+ let gateway_nomatch = IpsetEntry::new(gateway, true, None);
+ ipset_all_wo_gateway.push(gateway_nomatch);
+
+ ipset_gateway.push(gateway.into());
+ }
+
+ ipset_dhcp.extend(subnet.dhcp_range.iter().cloned().map(IpsetEntry::from));
+ }
+
+ [ipset_all, ipset_gateway, ipset_all_wo_gateway, ipset_dhcp]
+ })
+ }
}
impl TryFrom<RunningConfig> for SdnConfig {
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH proxmox-ve-rs 14/21] tests: add sdn config tests
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (12 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 13/21] sdn: config: add method for generating ipsets Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 15/21] tests: add ipam tests Stefan Hanreich
` (10 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-ve-config/tests/sdn/main.rs | 144 ++++++++++++++++++
.../tests/sdn/resources/running-config.json | 54 +++++++
2 files changed, 198 insertions(+)
create mode 100644 proxmox-ve-config/tests/sdn/main.rs
create mode 100644 proxmox-ve-config/tests/sdn/resources/running-config.json
diff --git a/proxmox-ve-config/tests/sdn/main.rs b/proxmox-ve-config/tests/sdn/main.rs
new file mode 100644
index 0000000..2ac0cb3
--- /dev/null
+++ b/proxmox-ve-config/tests/sdn/main.rs
@@ -0,0 +1,144 @@
+use std::{
+ net::{IpAddr, Ipv4Addr, Ipv6Addr},
+ str::FromStr,
+};
+
+use proxmox_ve_config::{
+ firewall::types::{address::IpRange, Cidr},
+ sdn::{
+ config::{
+ RunningConfig, SdnConfig, SdnConfigError, SubnetConfig, VnetConfig, ZoneConfig,
+ ZoneType,
+ },
+ SubnetName, VnetName, ZoneName,
+ },
+};
+
+#[test]
+fn parse_running_config() {
+ let running_config: RunningConfig =
+ serde_json::from_str(include_str!("resources/running-config.json")).unwrap();
+
+ let parsed_config = SdnConfig::try_from(running_config).unwrap();
+
+ let sdn_config = SdnConfig::from_zones([ZoneConfig::from_vnets(
+ ZoneName::from_str("zone0").unwrap(),
+ ZoneType::Simple,
+ [
+ VnetConfig::from_subnets(
+ VnetName::from_str("vnet0").unwrap(),
+ [
+ SubnetConfig::new(
+ SubnetName::from_str("zone0-fd80::-64").unwrap(),
+ Some(Ipv6Addr::new(0xFD80, 0, 0, 0, 0, 0, 0, 0x1).into()),
+ true,
+ [IpRange::new_v6(
+ [0xFD80, 0, 0, 0, 0, 0, 0, 0x1000],
+ [0xFD80, 0, 0, 0, 0, 0, 0, 0xFFFF],
+ )
+ .unwrap()],
+ )
+ .unwrap(),
+ SubnetConfig::new(
+ SubnetName::from_str("zone0-10.101.0.0-16").unwrap(),
+ Some(Ipv4Addr::new(10, 101, 1, 1).into()),
+ true,
+ [
+ IpRange::new_v4([10, 101, 98, 100], [10, 101, 98, 200]).unwrap(),
+ IpRange::new_v4([10, 101, 99, 100], [10, 101, 99, 200]).unwrap(),
+ ],
+ )
+ .unwrap(),
+ ],
+ )
+ .unwrap(),
+ VnetConfig::from_subnets(
+ VnetName::from_str("vnet1").unwrap(),
+ [SubnetConfig::new(
+ SubnetName::from_str("zone0-10.102.0.0-16").unwrap(),
+ None,
+ false,
+ [],
+ )
+ .unwrap()],
+ )
+ .unwrap(),
+ ],
+ )
+ .unwrap()])
+ .unwrap();
+
+ assert_eq!(sdn_config, parsed_config);
+}
+
+#[test]
+fn sdn_config() {
+ let mut sdn_config = SdnConfig::new();
+
+ let zone0_name = ZoneName::new("zone0".to_string()).unwrap();
+ let zone1_name = ZoneName::new("zone1".to_string()).unwrap();
+
+ let vnet0_name = VnetName::new("vnet0".to_string()).unwrap();
+ let vnet1_name = VnetName::new("vnet1".to_string()).unwrap();
+
+ let zone0 = ZoneConfig::new(zone0_name.clone(), ZoneType::Qinq);
+ sdn_config.add_zone(zone0).unwrap();
+
+ let vnet0 = VnetConfig::new(vnet0_name.clone());
+ assert_eq!(
+ sdn_config.add_vnet(&zone1_name, vnet0.clone()),
+ Err(SdnConfigError::ZoneNotFound)
+ );
+
+ sdn_config.add_vnet(&zone0_name, vnet0.clone()).unwrap();
+
+ let subnet = SubnetConfig::new(
+ SubnetName::new(zone0_name.clone(), Cidr::new_v4([10, 0, 0, 0], 16).unwrap()),
+ IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1)),
+ true,
+ [],
+ )
+ .unwrap();
+
+ assert_eq!(
+ sdn_config.add_subnet(&zone0_name, &vnet1_name, subnet.clone()),
+ Err(SdnConfigError::VnetNotFound),
+ );
+
+ sdn_config
+ .add_subnet(&zone0_name, &vnet0_name, subnet)
+ .unwrap();
+
+ let zone1 = ZoneConfig::from_vnets(
+ zone1_name.clone(),
+ ZoneType::Evpn,
+ [VnetConfig::from_subnets(
+ vnet1_name.clone(),
+ [SubnetConfig::new(
+ SubnetName::new(
+ zone0_name.clone(),
+ Cidr::new_v4([192, 168, 0, 0], 24).unwrap(),
+ ),
+ None,
+ false,
+ [],
+ )
+ .unwrap()],
+ )
+ .unwrap()],
+ )
+ .unwrap();
+
+ assert_eq!(
+ sdn_config.add_zones([zone1]),
+ Err(SdnConfigError::MismatchedSubnetZone),
+ );
+
+ let zone1 = ZoneConfig::new(zone1_name.clone(), ZoneType::Evpn);
+ sdn_config.add_zone(zone1).unwrap();
+
+ assert_eq!(
+ sdn_config.add_vnet(&zone1_name, vnet0.clone()),
+ Err(SdnConfigError::DuplicateVnetName),
+ )
+}
diff --git a/proxmox-ve-config/tests/sdn/resources/running-config.json b/proxmox-ve-config/tests/sdn/resources/running-config.json
new file mode 100644
index 0000000..b03c20f
--- /dev/null
+++ b/proxmox-ve-config/tests/sdn/resources/running-config.json
@@ -0,0 +1,54 @@
+{
+ "version": 10,
+ "subnets": {
+ "ids": {
+ "zone0-fd80::-64": {
+ "gateway": "fd80::1",
+ "type": "subnet",
+ "snat": 1,
+ "dhcp-range": [
+ "start-address=fd80::1000,end-address=fd80::ffff"
+ ],
+ "vnet": "vnet0"
+ },
+ "zone0-10.102.0.0-16": {
+ "vnet": "vnet1",
+ "type": "subnet"
+ },
+ "zone0-10.101.0.0-16": {
+ "dhcp-range": [
+ "start-address=10.101.98.100,end-address=10.101.98.200",
+ "start-address=10.101.99.100,end-address=10.101.99.200"
+ ],
+ "vnet": "vnet0",
+ "type": "subnet",
+ "gateway": "10.101.1.1",
+ "snat": 1
+ }
+ }
+ },
+ "zones": {
+ "ids": {
+ "zone0": {
+ "ipam": "pve",
+ "dhcp": "dnsmasq",
+ "type": "simple"
+ }
+ }
+ },
+ "controllers": {
+ "ids": {}
+ },
+ "vnets": {
+ "ids": {
+ "vnet0": {
+ "type": "vnet",
+ "zone": "zone0"
+ },
+ "vnet1": {
+ "type": "vnet",
+ "zone": "zone0"
+ }
+ }
+ }
+}
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH proxmox-ve-rs 15/21] tests: add ipam tests
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (13 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 14/21] tests: add sdn config tests Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-firewall 16/21] cargo: update dependencies Stefan Hanreich
` (9 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-ve-config/tests/sdn/main.rs | 45 +++++++++++++++++++
proxmox-ve-config/tests/sdn/resources/ipam.db | 26 +++++++++++
2 files changed, 71 insertions(+)
create mode 100644 proxmox-ve-config/tests/sdn/resources/ipam.db
diff --git a/proxmox-ve-config/tests/sdn/main.rs b/proxmox-ve-config/tests/sdn/main.rs
index 2ac0cb3..1815bec 100644
--- a/proxmox-ve-config/tests/sdn/main.rs
+++ b/proxmox-ve-config/tests/sdn/main.rs
@@ -5,11 +5,13 @@ use std::{
use proxmox_ve_config::{
firewall::types::{address::IpRange, Cidr},
+ guest::vm::MacAddress,
sdn::{
config::{
RunningConfig, SdnConfig, SdnConfigError, SubnetConfig, VnetConfig, ZoneConfig,
ZoneType,
},
+ ipam::{Ipam, IpamDataVm, IpamEntry, IpamJson},
SubnetName, VnetName, ZoneName,
},
};
@@ -142,3 +144,46 @@ fn sdn_config() {
Err(SdnConfigError::DuplicateVnetName),
)
}
+
+#[test]
+fn parse_ipam() {
+ let ipam_json: IpamJson = serde_json::from_str(include_str!("resources/ipam.db")).unwrap();
+ let ipam = Ipam::try_from(ipam_json).unwrap();
+
+ let zone_name = ZoneName::new("zone0".to_string()).unwrap();
+
+ assert_eq!(
+ Ipam::from_entries([
+ IpamEntry::new(
+ SubnetName::new(
+ zone_name.clone(),
+ Cidr::new_v6([0xFD80, 0, 0, 0, 0, 0, 0, 0], 64).unwrap()
+ ),
+ IpamDataVm::new(
+ Ipv6Addr::new(0xFD80, 0, 0, 0, 0, 0, 0, 0x1000),
+ 1000,
+ MacAddress::new([0xBC, 0x24, 0x11, 0, 0, 0x01]),
+ "test0".to_string()
+ )
+ .into()
+ )
+ .unwrap(),
+ IpamEntry::new(
+ SubnetName::new(
+ zone_name.clone(),
+ Cidr::new_v4([10, 101, 0, 0], 16).unwrap()
+ ),
+ IpamDataVm::new(
+ Ipv4Addr::new(10, 101, 99, 101),
+ 1000,
+ MacAddress::new([0xBC, 0x24, 0x11, 0, 0, 0x01]),
+ "test0".to_string()
+ )
+ .into()
+ )
+ .unwrap(),
+ ])
+ .unwrap(),
+ ipam
+ )
+}
diff --git a/proxmox-ve-config/tests/sdn/resources/ipam.db b/proxmox-ve-config/tests/sdn/resources/ipam.db
new file mode 100644
index 0000000..a3e6c87
--- /dev/null
+++ b/proxmox-ve-config/tests/sdn/resources/ipam.db
@@ -0,0 +1,26 @@
+{
+ "zones": {
+ "zone0": {
+ "subnets": {
+ "fd80::/64": {
+ "ips": {
+ "fd80::1000": {
+ "vmid": "1000",
+ "mac": "BC:24:11:00:00:01",
+ "hostname": "test0"
+ }
+ }
+ },
+ "10.101.0.0/16": {
+ "ips": {
+ "10.101.99.101": {
+ "mac": "BC:24:11:00:00:01",
+ "vmid": "1000",
+ "hostname": "test0"
+ }
+ }
+ }
+ }
+ }
+ }
+}
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH proxmox-firewall 16/21] cargo: update dependencies
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (14 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-ve-rs 15/21] tests: add ipam tests Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-firewall 17/21] config: tests: add support for loading sdn and ipam config Stefan Hanreich
` (8 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-firewall/Cargo.toml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/proxmox-firewall/Cargo.toml b/proxmox-firewall/Cargo.toml
index 4246f18..c0ce579 100644
--- a/proxmox-firewall/Cargo.toml
+++ b/proxmox-firewall/Cargo.toml
@@ -25,4 +25,4 @@ proxmox-ve-config = "0.1.0"
[dev-dependencies]
insta = { version = "1.21", features = ["json"] }
-proxmox-sys = "0.5.3"
+proxmox-sys = "0.5.8"
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH proxmox-firewall 17/21] config: tests: add support for loading sdn and ipam config
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (15 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-firewall 16/21] cargo: update dependencies Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-firewall 18/21] ipsets: autogenerate ipsets for vnets and ipam Stefan Hanreich
` (7 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Also add example SDN configuration files that get loaded automatically
and can be used for future tests.
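A short sketch of how the newly loaded data can be consumed afterwards,
assuming an already constructed FirewallConfig value named `config` (not
part of the patch):

// the accessors return None if the respective file does not exist
if let Some(sdn_config) = config.sdn() {
    for (_zone, vnet) in sdn_config.vnets() {
        log::info!("loaded SDN vnet {}", vnet.name());
    }
}
// config.ipam() can be queried in the same way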
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-firewall/src/config.rs | 69 +++++++++++++++++++
.../tests/input/.running-config.json | 45 ++++++++++++
proxmox-firewall/tests/input/ipam.db | 32 +++++++++
proxmox-firewall/tests/integration_tests.rs | 10 +++
4 files changed, 156 insertions(+)
create mode 100644 proxmox-firewall/tests/input/.running-config.json
create mode 100644 proxmox-firewall/tests/input/ipam.db
diff --git a/proxmox-firewall/src/config.rs b/proxmox-firewall/src/config.rs
index 5bd2512..c27aac6 100644
--- a/proxmox-firewall/src/config.rs
+++ b/proxmox-firewall/src/config.rs
@@ -16,6 +16,10 @@ use proxmox_ve_config::guest::{GuestEntry, GuestMap};
use proxmox_nftables::command::{CommandOutput, Commands, List, ListOutput};
use proxmox_nftables::types::ListChain;
use proxmox_nftables::NftClient;
+use proxmox_ve_config::sdn::{
+ config::{RunningConfig, SdnConfig},
+ ipam::{Ipam, IpamJson},
+};
pub trait FirewallConfigLoader {
fn cluster(&self) -> Result<Option<Box<dyn io::BufRead>>, Error>;
@@ -27,6 +31,8 @@ pub trait FirewallConfigLoader {
guest: &GuestEntry,
) -> Result<Option<Box<dyn io::BufRead>>, Error>;
fn guest_firewall_config(&self, vmid: &Vmid) -> Result<Option<Box<dyn io::BufRead>>, Error>;
+ fn sdn_running_config(&self) -> Result<Option<Box<dyn io::BufRead>>, Error>;
+ fn ipam(&self) -> Result<Option<Box<dyn io::BufRead>>, Error>;
}
#[derive(Default)]
@@ -58,6 +64,9 @@ fn open_config_file(path: &str) -> Result<Option<File>, Error> {
const CLUSTER_CONFIG_PATH: &str = "/etc/pve/firewall/cluster.fw";
const HOST_CONFIG_PATH: &str = "/etc/pve/local/host.fw";
+const SDN_RUNNING_CONFIG_PATH: &str = "/etc/pve/sdn/.running-config";
+const SDN_IPAM_PATH: &str = "/etc/pve/priv/ipam.db";
+
impl FirewallConfigLoader for PveFirewallConfigLoader {
fn cluster(&self) -> Result<Option<Box<dyn io::BufRead>>, Error> {
log::info!("loading cluster config");
@@ -119,6 +128,32 @@ impl FirewallConfigLoader for PveFirewallConfigLoader {
Ok(None)
}
+
+ fn sdn_running_config(&self) -> Result<Option<Box<dyn io::BufRead>>, Error> {
+ log::info!("loading SDN running-config");
+
+ let fd = open_config_file(SDN_RUNNING_CONFIG_PATH)?;
+
+ if let Some(file) = fd {
+ let buf_reader = Box::new(BufReader::new(file)) as Box<dyn io::BufRead>;
+ return Ok(Some(buf_reader));
+ }
+
+ Ok(None)
+ }
+
+ fn ipam(&self) -> Result<Option<Box<dyn io::BufRead>>, Error> {
+ log::info!("loading IPAM config");
+
+ let fd = open_config_file(SDN_IPAM_PATH)?;
+
+ if let Some(file) = fd {
+ let buf_reader = Box::new(BufReader::new(file)) as Box<dyn io::BufRead>;
+ return Ok(Some(buf_reader));
+ }
+
+ Ok(None)
+ }
}
pub trait NftConfigLoader {
@@ -150,6 +185,8 @@ pub struct FirewallConfig {
host_config: HostConfig,
guest_config: BTreeMap<Vmid, GuestConfig>,
nft_config: BTreeMap<String, ListChain>,
+ sdn_config: Option<SdnConfig>,
+ ipam_config: Option<Ipam>,
}
impl FirewallConfig {
@@ -207,6 +244,28 @@ impl FirewallConfig {
Ok(guests)
}
+ pub fn parse_sdn(
+ firewall_loader: &dyn FirewallConfigLoader,
+ ) -> Result<Option<SdnConfig>, Error> {
+ Ok(match firewall_loader.sdn_running_config()? {
+ Some(data) => {
+ let running_config: RunningConfig = serde_json::from_reader(data)?;
+ Some(SdnConfig::try_from(running_config)?)
+ }
+ _ => None,
+ })
+ }
+
+ pub fn parse_ipam(firewall_loader: &dyn FirewallConfigLoader) -> Result<Option<Ipam>, Error> {
+ Ok(match firewall_loader.ipam()? {
+ Some(data) => {
+ let raw_ipam: IpamJson = serde_json::from_reader(data)?;
+ Some(Ipam::try_from(raw_ipam)?)
+ }
+ _ => None,
+ })
+ }
+
pub fn parse_nft(
nft_loader: &dyn NftConfigLoader,
) -> Result<BTreeMap<String, ListChain>, Error> {
@@ -233,6 +292,8 @@ impl FirewallConfig {
cluster_config: Self::parse_cluster(firewall_loader)?,
host_config: Self::parse_host(firewall_loader)?,
guest_config: Self::parse_guests(firewall_loader)?,
+ sdn_config: Self::parse_sdn(firewall_loader)?,
+ ipam_config: Self::parse_ipam(firewall_loader)?,
nft_config: Self::parse_nft(nft_loader)?,
})
}
@@ -253,6 +314,14 @@ impl FirewallConfig {
&self.nft_config
}
+ pub fn sdn(&self) -> Option<&SdnConfig> {
+ self.sdn_config.as_ref()
+ }
+
+ pub fn ipam(&self) -> Option<&Ipam> {
+ self.ipam_config.as_ref()
+ }
+
pub fn is_enabled(&self) -> bool {
self.cluster().is_enabled() && self.host().nftables()
}
diff --git a/proxmox-firewall/tests/input/.running-config.json b/proxmox-firewall/tests/input/.running-config.json
new file mode 100644
index 0000000..a4511f0
--- /dev/null
+++ b/proxmox-firewall/tests/input/.running-config.json
@@ -0,0 +1,45 @@
+{
+ "subnets": {
+ "ids": {
+ "test-10.101.0.0-16": {
+ "gateway": "10.101.1.1",
+ "snat": 1,
+ "vnet": "public",
+ "dhcp-range": [
+ "start-address=10.101.99.100,end-address=10.101.99.200"
+ ],
+ "type": "subnet"
+ },
+ "test-fd80::-64": {
+ "snat": 1,
+ "gateway": "fd80::1",
+ "dhcp-range": [
+ "start-address=fd80::1000,end-address=fd80::ffff"
+ ],
+ "vnet": "public",
+ "type": "subnet"
+ }
+ }
+ },
+ "version": 49,
+ "vnets": {
+ "ids": {
+ "public": {
+ "zone": "test",
+ "type": "vnet"
+ }
+ }
+ },
+ "zones": {
+ "ids": {
+ "test": {
+ "dhcp": "dnsmasq",
+ "ipam": "pve",
+ "type": "simple"
+ }
+ }
+ },
+ "controllers": {
+ "ids": {}
+ }
+}
diff --git a/proxmox-firewall/tests/input/ipam.db b/proxmox-firewall/tests/input/ipam.db
new file mode 100644
index 0000000..ac2901e
--- /dev/null
+++ b/proxmox-firewall/tests/input/ipam.db
@@ -0,0 +1,32 @@
+{
+ "zones": {
+ "public": {
+ "subnets": {
+ "10.101.0.0/16": {
+ "ips": {
+ "10.101.1.1": {
+ "gateway": 1
+ },
+ "10.101.1.100": {
+ "vmid": "101",
+ "mac": "BC:24:11:11:22:33",
+ "hostname": null
+ }
+ }
+ },
+ "fd80::/64": {
+ "ips": {
+ "fd80::1": {
+ "gateway": 1
+ },
+ "fd80::1000": {
+ "mac": "BC:24:11:11:22:33",
+ "vmid": "101",
+ "hostname": "test-vm"
+ }
+ }
+ }
+ }
+ }
+ }
+}
diff --git a/proxmox-firewall/tests/integration_tests.rs b/proxmox-firewall/tests/integration_tests.rs
index e9baffe..5de1a4e 100644
--- a/proxmox-firewall/tests/integration_tests.rs
+++ b/proxmox-firewall/tests/integration_tests.rs
@@ -69,6 +69,16 @@ impl FirewallConfigLoader for MockFirewallConfigLoader {
Ok(None)
}
+
+ fn sdn_running_config(&self) -> Result<Option<Box<dyn std::io::BufRead>>, Error> {
+ Ok(Some(Box::new(
+ include_str!("input/.running-config.json").as_bytes(),
+ )))
+ }
+
+ fn ipam(&self) -> Result<Option<Box<dyn std::io::BufRead>>, Error> {
+ Ok(Some(Box::new(include_str!("input/ipam.db").as_bytes())))
+ }
}
struct MockNftConfigLoader {}
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH proxmox-firewall 18/21] ipsets: autogenerate ipsets for vnets and ipam
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (16 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-firewall 17/21] config: tests: add support for loading sdn and ipam config Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH pve-firewall 19/21] add support for loading sdn firewall configuration Stefan Hanreich
` (6 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
They act like virtual ipsets, similar to ipfilter-net, and can be
used for dynamically defining firewall rules for SDN objects.
The changes in proxmox-ve-config also introduced a dedicated struct
for representing IP ranges, so we update the existing code to use
that struct as well.
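For example, once the sets are generated, a guest firewall rule should be
able to reference one of them via its datacenter-scoped name, roughly along
these lines (illustrative rule only, using the usual pve-firewall rule
syntax and the `public` vnet from the test fixtures):

[RULES]
IN ACCEPT -source +dc/public-all -p tcp -dport 22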
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
proxmox-firewall/src/firewall.rs | 22 +-
proxmox-firewall/src/object.rs | 41 +-
.../integration_tests__firewall.snap | 1288 +++++++++++++++++
proxmox-nftables/src/expression.rs | 17 +-
4 files changed, 1354 insertions(+), 14 deletions(-)
diff --git a/proxmox-firewall/src/firewall.rs b/proxmox-firewall/src/firewall.rs
index 4c85ea2..9c19580 100644
--- a/proxmox-firewall/src/firewall.rs
+++ b/proxmox-firewall/src/firewall.rs
@@ -197,6 +197,27 @@ impl Firewall {
self.reset_firewall(&mut commands);
let cluster_host_table = Self::cluster_table();
+ let guest_table = Self::guest_table();
+
+ if let Some(sdn_config) = self.config.sdn() {
+ let ipsets = sdn_config
+ .ipsets(None)
+ .map(|ipset| (ipset.name().to_string(), ipset))
+ .collect();
+
+ self.create_ipsets(&mut commands, &ipsets, &cluster_host_table, None)?;
+ self.create_ipsets(&mut commands, &ipsets, &guest_table, None)?;
+ }
+
+ if let Some(ipam_config) = self.config.ipam() {
+ let ipsets = ipam_config
+ .ipsets(None)
+ .map(|ipset| (ipset.name().to_string(), ipset))
+ .collect();
+
+ self.create_ipsets(&mut commands, &ipsets, &cluster_host_table, None)?;
+ self.create_ipsets(&mut commands, &ipsets, &guest_table, None)?;
+ }
if self.config.host().is_enabled() {
log::info!("creating cluster / host configuration");
@@ -242,7 +263,6 @@ impl Firewall {
commands.push(Delete::table(TableName::from(Self::cluster_table())));
}
- let guest_table = Self::guest_table();
let enabled_guests: BTreeMap<&Vmid, &GuestConfig> = self
.config
.guests()
diff --git a/proxmox-firewall/src/object.rs b/proxmox-firewall/src/object.rs
index 32c4ddb..cf7e773 100644
--- a/proxmox-firewall/src/object.rs
+++ b/proxmox-firewall/src/object.rs
@@ -72,20 +72,37 @@ impl ToNftObjects for Ipset {
let mut nomatch_elements = Vec::new();
for element in self.iter() {
- let cidr = match &element.address {
- IpsetAddress::Cidr(cidr) => cidr,
- IpsetAddress::Alias(alias) => env
- .alias(alias)
- .ok_or(format_err!("could not find alias {alias} in environment"))?
- .address(),
+ let expression = match &element.address {
+ IpsetAddress::Range(range) => {
+ if family != range.family() {
+ continue;
+ }
+
+ Expression::from(range)
+ }
+ IpsetAddress::Cidr(cidr) => {
+ if family != cidr.family() {
+ continue;
+ }
+
+ Expression::from(Prefix::from(cidr))
+ }
+ IpsetAddress::Alias(alias) => {
+ let cidr = env
+ .alias(alias)
+ .ok_or_else(|| {
+ format_err!("could not find alias {alias} in environment")
+ })?
+ .address();
+
+ if family != cidr.family() {
+ continue;
+ }
+
+ Expression::from(Prefix::from(cidr))
+ }
};
- if family != cidr.family() {
- continue;
- }
-
- let expression = Expression::from(Prefix::from(cidr));
-
if element.nomatch {
nomatch_elements.push(expression);
} else {
diff --git a/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap b/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap
index 669bad9..aa8ab64 100644
--- a/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap
+++ b/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap
@@ -202,6 +202,1294 @@ expression: "firewall.full_host_fw().expect(\"firewall can be generated\")"
}
}
},
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-all",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-all"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-all-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-all-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-all",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.0.0",
+ "len": 16
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-all",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-all"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-all-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-all-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-all",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::",
+ "len": 64
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-dhcp",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-dhcp"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-dhcp-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-dhcp-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-dhcp",
+ "elem": [
+ {
+ "range": [
+ "10.101.99.100",
+ "10.101.99.200"
+ ]
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-dhcp",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-dhcp"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-dhcp-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-dhcp-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-dhcp",
+ "elem": [
+ {
+ "range": [
+ "fd80::1000",
+ "fd80::ffff"
+ ]
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-gateway",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-gateway"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-gateway-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-gateway-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-gateway",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.1.1",
+ "len": 32
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-gateway",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-gateway"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-gateway-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-gateway-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-gateway",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::1",
+ "len": 128
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-no-gateway",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-no-gateway"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-no-gateway-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-no-gateway-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-no-gateway",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.0.0",
+ "len": 16
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/public-no-gateway-nomatch",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.1.1",
+ "len": 32
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-no-gateway",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-no-gateway"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-no-gateway-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-no-gateway-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-no-gateway",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::",
+ "len": 64
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/public-no-gateway-nomatch",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::1",
+ "len": 128
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-all",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-all"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-all-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-all-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-all",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.0.0",
+ "len": 16
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-all",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-all"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-all-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-all-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-all",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::",
+ "len": 64
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-dhcp",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-dhcp"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-dhcp-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-dhcp-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-dhcp",
+ "elem": [
+ {
+ "range": [
+ "10.101.99.100",
+ "10.101.99.200"
+ ]
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-dhcp",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-dhcp"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-dhcp-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-dhcp-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-dhcp",
+ "elem": [
+ {
+ "range": [
+ "fd80::1000",
+ "fd80::ffff"
+ ]
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-gateway",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-gateway"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-gateway-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-gateway-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-gateway",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.1.1",
+ "len": 32
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-gateway",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-gateway"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-gateway-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-gateway-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-gateway",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::1",
+ "len": 128
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-no-gateway",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-no-gateway"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-no-gateway-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-no-gateway-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-no-gateway",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.0.0",
+ "len": 16
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/public-no-gateway-nomatch",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.1.1",
+ "len": 32
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-no-gateway",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-no-gateway"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-no-gateway-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-no-gateway-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-no-gateway",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::",
+ "len": 64
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/public-no-gateway-nomatch",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::1",
+ "len": 128
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/guest-ipam-101",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/guest-ipam-101"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/guest-ipam-101-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/guest-ipam-101-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v4-dc/guest-ipam-101",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.1.100",
+ "len": 32
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/guest-ipam-101",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/guest-ipam-101"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/guest-ipam-101-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/guest-ipam-101-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "inet",
+ "table": "proxmox-firewall",
+ "name": "v6-dc/guest-ipam-101",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::1000",
+ "len": 128
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/guest-ipam-101",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/guest-ipam-101"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/guest-ipam-101-nomatch",
+ "type": "ipv4_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/guest-ipam-101-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v4-dc/guest-ipam-101",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "10.101.1.100",
+ "len": 32
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/guest-ipam-101",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/guest-ipam-101"
+ }
+ }
+ },
+ {
+ "add": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/guest-ipam-101-nomatch",
+ "type": "ipv6_addr",
+ "flags": [
+ "interval"
+ ]
+ }
+ }
+ },
+ {
+ "flush": {
+ "set": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/guest-ipam-101-nomatch"
+ }
+ }
+ },
+ {
+ "add": {
+ "element": {
+ "family": "bridge",
+ "table": "proxmox-firewall-guests",
+ "name": "v6-dc/guest-ipam-101",
+ "elem": [
+ {
+ "prefix": {
+ "addr": "fd80::1000",
+ "len": 128
+ }
+ }
+ ]
+ }
+ }
+ },
{
"add": {
"set": {
diff --git a/proxmox-nftables/src/expression.rs b/proxmox-nftables/src/expression.rs
index 18b92d4..71a90eb 100644
--- a/proxmox-nftables/src/expression.rs
+++ b/proxmox-nftables/src/expression.rs
@@ -1,4 +1,5 @@
use crate::types::{ElemConfig, Verdict};
+use proxmox_ve_config::firewall::types::address::IpRange;
use serde::{Deserialize, Serialize};
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};
@@ -50,6 +51,10 @@ pub enum Expression {
}
impl Expression {
+ pub fn range(start: impl Into<Expression>, end: impl Into<Expression>) -> Self {
+ Expression::Range(Box::new((start.into(), end.into())))
+ }
+
pub fn set(expressions: impl IntoIterator<Item = Expression>) -> Self {
Expression::Set(Vec::from_iter(expressions))
}
@@ -169,12 +174,22 @@ impl From<&IpList> for Expression {
}
}
+#[cfg(feature = "config-ext")]
+impl From<&IpRange> for Expression {
+ fn from(value: &IpRange) -> Self {
+ match value {
+ IpRange::V4(range) => Expression::range(range.start(), range.end()),
+ IpRange::V6(range) => Expression::range(range.start(), range.end()),
+ }
+ }
+}
+
#[cfg(feature = "config-ext")]
impl From<&IpEntry> for Expression {
fn from(value: &IpEntry) -> Self {
match value {
IpEntry::Cidr(cidr) => Expression::from(Prefix::from(cidr)),
- IpEntry::Range(beg, end) => Expression::Range(Box::new((beg.into(), end.into()))),
+ IpEntry::Range(range) => Expression::from(range),
}
}
}
--
2.39.2
* [pve-devel] [PATCH pve-firewall 19/21] add support for loading sdn firewall configuration
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (17 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-firewall 18/21] ipsets: autogenerate ipsets for vnets and ipam Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-08-13 16:14 ` Max Carrara
2024-06-26 12:15 ` [pve-devel] [PATCH pve-firewall 20/21] api: load sdn ipsets Stefan Hanreich
` (5 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
src/PVE/Firewall.pm | 43 +++++++++++++++++++++++++++++++++++++++++--
1 file changed, 41 insertions(+), 2 deletions(-)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 09544ba..95325a0 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -25,6 +25,7 @@ use PVE::Tools qw($IPV4RE $IPV6RE);
use PVE::Tools qw(run_command lock_file dir_glob_foreach);
use PVE::Firewall::Helpers;
+use PVE::RS::Firewall::SDN;
my $pvefw_conf_dir = "/etc/pve/firewall";
my $clusterfw_conf_filename = "$pvefw_conf_dir/cluster.fw";
@@ -3644,7 +3645,7 @@ sub lock_clusterfw_conf {
}
sub load_clusterfw_conf {
- my ($filename) = @_;
+ my ($filename, $load_sdn_config) = @_;
$filename = $clusterfw_conf_filename if !defined($filename);
my $empty_conf = {
@@ -3657,12 +3658,50 @@ sub load_clusterfw_conf {
ipset_comments => {},
};
+ if ($load_sdn_config) {
+ my $sdn_conf = load_sdn_conf();
+ $empty_conf = { %$empty_conf, %$sdn_conf };
+ }
+
my $cluster_conf = generic_fw_config_parser($filename, $empty_conf, $empty_conf, 'cluster');
$set_global_log_ratelimit->($cluster_conf->{options});
return $cluster_conf;
}
+sub load_sdn_conf {
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+
+ my $guests = PVE::Cluster::get_vmlist();
+ my $allowed_vms = [];
+ foreach my $vmid (sort keys %{$guests->{ids}}) {
+ next if !$rpcenv->check($authuser, "/vms/$vmid", [ 'VM.Audit' ], 1);
+ push @$allowed_vms, $vmid;
+ }
+
+ my $vnets = PVE::Network::SDN::Vnets::config(1);
+ my $privs = [ 'SDN.Audit', 'SDN.Allocate' ];
+ my $allowed_vnets = [];
+ foreach my $vnet (sort keys %{$vnets->{ids}}) {
+ my $zone = $vnets->{ids}->{$vnet}->{zone};
+ next if !$rpcenv->check_any($authuser, "/sdn/zones/$zone/$vnet", $privs, 1);
+ push @$allowed_vnets, $vnet;
+ }
+
+ my $sdn_config = {
+ ipset => {} ,
+ ipset_comments => {},
+ };
+
+ eval {
+ $sdn_config = PVE::RS::Firewall::SDN::config($allowed_vnets, $allowed_vms);
+ };
+ warn $@ if $@;
+
+ return $sdn_config;
+}
+
sub save_clusterfw_conf {
my ($cluster_conf) = @_;
@@ -4731,7 +4770,7 @@ sub init {
sub update {
my $code = sub {
- my $cluster_conf = load_clusterfw_conf();
+ my $cluster_conf = load_clusterfw_conf(undef, 1);
my $hostfw_conf = load_hostfw_conf($cluster_conf);
if (!is_enabled_and_not_nftables($cluster_conf, $hostfw_conf)) {
--
2.39.2
* Re: [pve-devel] [PATCH pve-firewall 19/21] add support for loading sdn firewall configuration
2024-06-26 12:15 ` [pve-devel] [PATCH pve-firewall 19/21] add support for loading sdn firewall configuration Stefan Hanreich
@ 2024-08-13 16:14 ` Max Carrara
0 siblings, 0 replies; 43+ messages in thread
From: Max Carrara @ 2024-08-13 16:14 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed Jun 26, 2024 at 2:15 PM CEST, Stefan Hanreich wrote:
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
> src/PVE/Firewall.pm | 43 +++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 41 insertions(+), 2 deletions(-)
>
> diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
> index 09544ba..95325a0 100644
> --- a/src/PVE/Firewall.pm
> +++ b/src/PVE/Firewall.pm
> @@ -25,6 +25,7 @@ use PVE::Tools qw($IPV4RE $IPV6RE);
> use PVE::Tools qw(run_command lock_file dir_glob_foreach);
>
> use PVE::Firewall::Helpers;
> +use PVE::RS::Firewall::SDN;
>
> my $pvefw_conf_dir = "/etc/pve/firewall";
> my $clusterfw_conf_filename = "$pvefw_conf_dir/cluster.fw";
> @@ -3644,7 +3645,7 @@ sub lock_clusterfw_conf {
> }
>
> sub load_clusterfw_conf {
> - my ($filename) = @_;
> + my ($filename, $load_sdn_config) = @_;
Small thing:
I would suggest using a prototype here and also accepting a hash reference
OR a hash as the last parameter, so that the call signature is a little
more readable.
E.g. right now it's:
load_clusterfw_conf(undef, 1)
VS:
load_clusterfw_conf(undef, { load_sdn_config => 1 })
Or:
load_clusterfw_conf(undef, load_sdn_config => 1)
I know we're gonna phase this whole thing out eventually, but little
things like this help a lot in the long run, IMO. It makes it a little
clearer what the subroutine does at call sites.
I'm not sure if these subroutines are used elsewhere (didn't really
bother to check, sorry), so perhaps you could pass `$filename` via the
hash as well, as an optional parameter. Then it's immediately clear what
*everything* stands for, because a sole `undef` "hides" what's actually
passed to the subroutine.
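For illustration, a rough sketch of the hash-reference variant (key names
are just hypothetical here, this is not part of the patch):
    sub load_clusterfw_conf {
        my ($opts) = @_;
        $opts //= {};
        # both parameters become named and optional
        my $filename = $opts->{filename} // $clusterfw_conf_filename;
        my $load_sdn_config = $opts->{load_sdn_config} // 0;
        # ... rest of the subroutine stays as-is ...
    }
    # a call site then reads e.g.:
    my $cluster_conf = load_clusterfw_conf({ load_sdn_config => 1 });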
>
> $filename = $clusterfw_conf_filename if !defined($filename);
> my $empty_conf = {
> @@ -3657,12 +3658,50 @@ sub load_clusterfw_conf {
> ipset_comments => {},
> };
>
> + if ($load_sdn_config) {
> + my $sdn_conf = load_sdn_conf();
> + $empty_conf = { %$empty_conf, %$sdn_conf };
> + }
> +
> my $cluster_conf = generic_fw_config_parser($filename, $empty_conf, $empty_conf, 'cluster');
> $set_global_log_ratelimit->($cluster_conf->{options});
>
> return $cluster_conf;
> }
>
> +sub load_sdn_conf {
> + my $rpcenv = PVE::RPCEnvironment::get();
> + my $authuser = $rpcenv->get_user();
> +
> + my $guests = PVE::Cluster::get_vmlist();
> + my $allowed_vms = [];
> + foreach my $vmid (sort keys %{$guests->{ids}}) {
> + next if !$rpcenv->check($authuser, "/vms/$vmid", [ 'VM.Audit' ], 1);
> + push @$allowed_vms, $vmid;
> + }
> +
> + my $vnets = PVE::Network::SDN::Vnets::config(1);
> + my $privs = [ 'SDN.Audit', 'SDN.Allocate' ];
> + my $allowed_vnets = [];
> + foreach my $vnet (sort keys %{$vnets->{ids}}) {
> + my $zone = $vnets->{ids}->{$vnet}->{zone};
> + next if !$rpcenv->check_any($authuser, "/sdn/zones/$zone/$vnet", $privs, 1);
> + push @$allowed_vnets, $vnet;
> + }
> +
> + my $sdn_config = {
> + ipset => {} ,
> + ipset_comments => {},
> + };
> +
> + eval {
> + $sdn_config = PVE::RS::Firewall::SDN::config($allowed_vnets, $allowed_vms);
> + };
> + warn $@ if $@;
> +
> + return $sdn_config;
> +}
> +
> sub save_clusterfw_conf {
> my ($cluster_conf) = @_;
>
> @@ -4731,7 +4770,7 @@ sub init {
> sub update {
> my $code = sub {
>
> - my $cluster_conf = load_clusterfw_conf();
> + my $cluster_conf = load_clusterfw_conf(undef, 1);
> my $hostfw_conf = load_hostfw_conf($cluster_conf);
>
> if (!is_enabled_and_not_nftables($cluster_conf, $hostfw_conf)) {
* [pve-devel] [PATCH pve-firewall 20/21] api: load sdn ipsets
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (18 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH pve-firewall 19/21] add support for loading sdn firewall configuration Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-06-26 12:34 ` Stefan Hanreich
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-perl-rs 21/21] add PVE::RS::Firewall::SDN module Stefan Hanreich
` (4 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
src/PVE/API2/Firewall/Cluster.pm | 3 ++-
src/PVE/API2/Firewall/Rules.pm | 18 +++++++++++-------
src/PVE/API2/Firewall/VM.pm | 3 ++-
3 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/src/PVE/API2/Firewall/Cluster.pm b/src/PVE/API2/Firewall/Cluster.pm
index 48ad90d..3f48431 100644
--- a/src/PVE/API2/Firewall/Cluster.pm
+++ b/src/PVE/API2/Firewall/Cluster.pm
@@ -214,6 +214,7 @@ __PACKAGE__->register_method({
permissions => {
check => ['perm', '/', [ 'Sys.Audit' ]],
},
+ protected => 1,
parameters => {
additionalProperties => 0,
properties => {
@@ -253,7 +254,7 @@ __PACKAGE__->register_method({
code => sub {
my ($param) = @_;
- my $conf = PVE::Firewall::load_clusterfw_conf();
+ my $conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
return PVE::Firewall::Helpers::collect_refs($conf, $param->{type}, "dc");
}});
diff --git a/src/PVE/API2/Firewall/Rules.pm b/src/PVE/API2/Firewall/Rules.pm
index 9fcfb20..ebb51af 100644
--- a/src/PVE/API2/Firewall/Rules.pm
+++ b/src/PVE/API2/Firewall/Rules.pm
@@ -72,6 +72,7 @@ sub register_get_rules {
path => '',
method => 'GET',
description => "List rules.",
+ protected => 1,
permissions => PVE::Firewall::rules_audit_permissions($rule_env),
parameters => {
additionalProperties => 0,
@@ -120,6 +121,7 @@ sub register_get_rule {
path => '{pos}',
method => 'GET',
description => "Get single rule data.",
+ protected => 1,
permissions => PVE::Firewall::rules_audit_permissions($rule_env),
parameters => {
additionalProperties => 0,
@@ -412,11 +414,12 @@ sub lock_config {
sub load_config {
my ($class, $param) = @_;
+ my $sdn_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
my $fw_conf = PVE::Firewall::load_clusterfw_conf();
- my $rules = $fw_conf->{groups}->{$param->{group}};
+ my $rules = $sdn_conf->{groups}->{$param->{group}};
die "no such security group '$param->{group}'\n" if !defined($rules);
- return (undef, $fw_conf, $rules);
+ return ($sdn_conf, $fw_conf, $rules);
}
sub save_rules {
@@ -488,10 +491,11 @@ sub lock_config {
sub load_config {
my ($class, $param) = @_;
+ my $sdn_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
my $fw_conf = PVE::Firewall::load_clusterfw_conf();
- my $rules = $fw_conf->{rules};
+ my $rules = $sdn_conf->{rules};
- return (undef, $fw_conf, $rules);
+ return ($sdn_conf, $fw_conf, $rules);
}
sub save_rules {
@@ -528,7 +532,7 @@ sub lock_config {
sub load_config {
my ($class, $param) = @_;
- my $cluster_conf = PVE::Firewall::load_clusterfw_conf();
+ my $cluster_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
my $fw_conf = PVE::Firewall::load_hostfw_conf($cluster_conf);
my $rules = $fw_conf->{rules};
@@ -572,7 +576,7 @@ sub lock_config {
sub load_config {
my ($class, $param) = @_;
- my $cluster_conf = PVE::Firewall::load_clusterfw_conf();
+ my $cluster_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
my $fw_conf = PVE::Firewall::load_vmfw_conf($cluster_conf, 'vm', $param->{vmid});
my $rules = $fw_conf->{rules};
@@ -616,7 +620,7 @@ sub lock_config {
sub load_config {
my ($class, $param) = @_;
- my $cluster_conf = PVE::Firewall::load_clusterfw_conf();
+ my $cluster_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
my $fw_conf = PVE::Firewall::load_vmfw_conf($cluster_conf, 'ct', $param->{vmid});
my $rules = $fw_conf->{rules};
diff --git a/src/PVE/API2/Firewall/VM.pm b/src/PVE/API2/Firewall/VM.pm
index 4222103..9800d8c 100644
--- a/src/PVE/API2/Firewall/VM.pm
+++ b/src/PVE/API2/Firewall/VM.pm
@@ -234,6 +234,7 @@ sub register_handlers {
path => 'refs',
method => 'GET',
description => "Lists possible IPSet/Alias reference which are allowed in source/dest properties.",
+ protected => 1,
permissions => {
check => ['perm', '/vms/{vmid}', [ 'VM.Audit' ]],
},
@@ -278,7 +279,7 @@ sub register_handlers {
code => sub {
my ($param) = @_;
- my $cluster_conf = PVE::Firewall::load_clusterfw_conf();
+ my $cluster_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
my $fw_conf = PVE::Firewall::load_vmfw_conf($cluster_conf, $rule_env, $param->{vmid});
my $dc_refs = PVE::Firewall::Helpers::collect_refs($cluster_conf, $param->{type}, 'dc');
--
2.39.2
* Re: [pve-devel] [PATCH pve-firewall 20/21] api: load sdn ipsets
2024-06-26 12:15 ` [pve-devel] [PATCH pve-firewall 20/21] api: load sdn ipsets Stefan Hanreich
@ 2024-06-26 12:34 ` Stefan Hanreich
0 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:34 UTC (permalink / raw)
To: pve-devel
Seems like I regenerated the patches once after writing a comment, so
I'll leave it here:
This is certainly the minimally invasive way to go about this, but it
has the downside of having to load the cluster configuration twice: once
for validating all rules properly, and once for providing the methods
with a cluster config dictionary that doesn't contain the SDN ipsets, so
it can be saved. This didn't pose too much of an issue in my tests; the
API calls were still quite fast.
Passing the configurations from load_config certainly is a bit hacky,
since we're passing almost the same config twice - but it works quite
well for this use case. We only have to do this for the cluster
configuration, since this is the only place where the cluster
configuration gets saved.
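To illustrate, a call site with this patch applied roughly looks like
this (cf. the hunks quoted below):
    my ($sdn_conf, $fw_conf, $rules) = $class->load_config($param);
    # $sdn_conf: cluster config including the SDN ipsets, used for rule validation
    # $fw_conf:  cluster config without the SDN ipsets, i.e. the one that may be saved back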
On 6/26/24 14:15, Stefan Hanreich wrote:
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
> src/PVE/API2/Firewall/Cluster.pm | 3 ++-
> src/PVE/API2/Firewall/Rules.pm | 18 +++++++++++-------
> src/PVE/API2/Firewall/VM.pm | 3 ++-
> 3 files changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/src/PVE/API2/Firewall/Cluster.pm b/src/PVE/API2/Firewall/Cluster.pm
> index 48ad90d..3f48431 100644
> --- a/src/PVE/API2/Firewall/Cluster.pm
> +++ b/src/PVE/API2/Firewall/Cluster.pm
> @@ -214,6 +214,7 @@ __PACKAGE__->register_method({
> permissions => {
> check => ['perm', '/', [ 'Sys.Audit' ]],
> },
> + protected => 1,
> parameters => {
> additionalProperties => 0,
> properties => {
> @@ -253,7 +254,7 @@ __PACKAGE__->register_method({
> code => sub {
> my ($param) = @_;
>
> - my $conf = PVE::Firewall::load_clusterfw_conf();
> + my $conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
>
> return PVE::Firewall::Helpers::collect_refs($conf, $param->{type}, "dc");
> }});
> diff --git a/src/PVE/API2/Firewall/Rules.pm b/src/PVE/API2/Firewall/Rules.pm
> index 9fcfb20..ebb51af 100644
> --- a/src/PVE/API2/Firewall/Rules.pm
> +++ b/src/PVE/API2/Firewall/Rules.pm
> @@ -72,6 +72,7 @@ sub register_get_rules {
> path => '',
> method => 'GET',
> description => "List rules.",
> + protected => 1,
> permissions => PVE::Firewall::rules_audit_permissions($rule_env),
> parameters => {
> additionalProperties => 0,
> @@ -120,6 +121,7 @@ sub register_get_rule {
> path => '{pos}',
> method => 'GET',
> description => "Get single rule data.",
> + protected => 1,
> permissions => PVE::Firewall::rules_audit_permissions($rule_env),
> parameters => {
> additionalProperties => 0,
> @@ -412,11 +414,12 @@ sub lock_config {
> sub load_config {
> my ($class, $param) = @_;
>
> + my $sdn_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
> my $fw_conf = PVE::Firewall::load_clusterfw_conf();
> - my $rules = $fw_conf->{groups}->{$param->{group}};
> + my $rules = $sdn_conf->{groups}->{$param->{group}};
> die "no such security group '$param->{group}'\n" if !defined($rules);
>
> - return (undef, $fw_conf, $rules);
> + return ($sdn_conf, $fw_conf, $rules);
> }
>
> sub save_rules {
> @@ -488,10 +491,11 @@ sub lock_config {
> sub load_config {
> my ($class, $param) = @_;
>
> + my $sdn_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
> my $fw_conf = PVE::Firewall::load_clusterfw_conf();
> - my $rules = $fw_conf->{rules};
> + my $rules = $sdn_conf->{rules};
>
> - return (undef, $fw_conf, $rules);
> + return ($sdn_conf, $fw_conf, $rules);
> }
>
> sub save_rules {
> @@ -528,7 +532,7 @@ sub lock_config {
> sub load_config {
> my ($class, $param) = @_;
>
> - my $cluster_conf = PVE::Firewall::load_clusterfw_conf();
> + my $cluster_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
> my $fw_conf = PVE::Firewall::load_hostfw_conf($cluster_conf);
> my $rules = $fw_conf->{rules};
>
> @@ -572,7 +576,7 @@ sub lock_config {
> sub load_config {
> my ($class, $param) = @_;
>
> - my $cluster_conf = PVE::Firewall::load_clusterfw_conf();
> + my $cluster_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
> my $fw_conf = PVE::Firewall::load_vmfw_conf($cluster_conf, 'vm', $param->{vmid});
> my $rules = $fw_conf->{rules};
>
> @@ -616,7 +620,7 @@ sub lock_config {
> sub load_config {
> my ($class, $param) = @_;
>
> - my $cluster_conf = PVE::Firewall::load_clusterfw_conf();
> + my $cluster_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
> my $fw_conf = PVE::Firewall::load_vmfw_conf($cluster_conf, 'ct', $param->{vmid});
> my $rules = $fw_conf->{rules};
>
> diff --git a/src/PVE/API2/Firewall/VM.pm b/src/PVE/API2/Firewall/VM.pm
> index 4222103..9800d8c 100644
> --- a/src/PVE/API2/Firewall/VM.pm
> +++ b/src/PVE/API2/Firewall/VM.pm
> @@ -234,6 +234,7 @@ sub register_handlers {
> path => 'refs',
> method => 'GET',
> description => "Lists possible IPSet/Alias reference which are allowed in source/dest properties.",
> + protected => 1,
> permissions => {
> check => ['perm', '/vms/{vmid}', [ 'VM.Audit' ]],
> },
> @@ -278,7 +279,7 @@ sub register_handlers {
> code => sub {
> my ($param) = @_;
>
> - my $cluster_conf = PVE::Firewall::load_clusterfw_conf();
> + my $cluster_conf = PVE::Firewall::load_clusterfw_conf(undef, 1);
> my $fw_conf = PVE::Firewall::load_vmfw_conf($cluster_conf, $rule_env, $param->{vmid});
>
> my $dc_refs = PVE::Firewall::Helpers::collect_refs($cluster_conf, $param->{type}, 'dc');
* [pve-devel] [PATCH proxmox-perl-rs 21/21] add PVE::RS::Firewall::SDN module
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (19 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH pve-firewall 20/21] api: load sdn ipsets Stefan Hanreich
@ 2024-06-26 12:15 ` Stefan Hanreich
2024-08-13 16:14 ` Max Carrara
2024-06-28 13:46 ` [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Gabriel Goller
` (3 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Stefan Hanreich @ 2024-06-26 12:15 UTC (permalink / raw)
To: pve-devel
Used for obtaining the IPSets that get autogenerated by the nftables
firewall. The returned configuration has the same format that
pve-firewall uses internally, making it compatible with the existing
pve-firewall code.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
pve-rs/Cargo.toml | 1 +
pve-rs/Makefile | 1 +
pve-rs/src/firewall/mod.rs | 1 +
pve-rs/src/firewall/sdn.rs | 130 +++++++++++++++++++++++++++++++++++++
pve-rs/src/lib.rs | 1 +
5 files changed, 134 insertions(+)
create mode 100644 pve-rs/src/firewall/mod.rs
create mode 100644 pve-rs/src/firewall/sdn.rs
diff --git a/pve-rs/Cargo.toml b/pve-rs/Cargo.toml
index e40588d..f612b3a 100644
--- a/pve-rs/Cargo.toml
+++ b/pve-rs/Cargo.toml
@@ -43,3 +43,4 @@ proxmox-subscription = "0.4"
proxmox-sys = "0.5"
proxmox-tfa = { version = "4.0.4", features = ["api"] }
proxmox-time = "2"
+proxmox-ve-config = { version = "0.1.0" }
diff --git a/pve-rs/Makefile b/pve-rs/Makefile
index c6b4e08..d01da69 100644
--- a/pve-rs/Makefile
+++ b/pve-rs/Makefile
@@ -28,6 +28,7 @@ PERLMOD_GENPACKAGE := /usr/lib/perlmod/genpackage.pl \
PERLMOD_PACKAGES := \
PVE::RS::APT::Repositories \
+ PVE::RS::Firewall::SDN \
PVE::RS::OpenId \
PVE::RS::ResourceScheduling::Static \
PVE::RS::TFA
diff --git a/pve-rs/src/firewall/mod.rs b/pve-rs/src/firewall/mod.rs
new file mode 100644
index 0000000..8bd18a8
--- /dev/null
+++ b/pve-rs/src/firewall/mod.rs
@@ -0,0 +1 @@
+pub mod sdn;
diff --git a/pve-rs/src/firewall/sdn.rs b/pve-rs/src/firewall/sdn.rs
new file mode 100644
index 0000000..55f3e93
--- /dev/null
+++ b/pve-rs/src/firewall/sdn.rs
@@ -0,0 +1,130 @@
+#[perlmod::package(name = "PVE::RS::Firewall::SDN", lib = "pve_rs")]
+mod export {
+ use std::collections::HashMap;
+ use std::{fs, io};
+
+ use anyhow::{bail, Context, Error};
+ use serde::Serialize;
+
+ use proxmox_ve_config::{
+ common::Allowlist,
+ firewall::types::ipset::{IpsetAddress, IpsetEntry},
+ firewall::types::Ipset,
+ guest::types::Vmid,
+ sdn::{
+ config::{RunningConfig, SdnConfig},
+ ipam::{Ipam, IpamJson},
+ SdnNameError, VnetName,
+ },
+ };
+
+ #[derive(Clone, Debug, Default, Serialize)]
+ pub struct LegacyIpsetEntry {
+ nomatch: bool,
+ cidr: String,
+ comment: Option<String>,
+ }
+
+ impl LegacyIpsetEntry {
+ pub fn from_ipset_entry(entry: &IpsetEntry) -> Vec<LegacyIpsetEntry> {
+ let mut entries = Vec::new();
+
+ match &entry.address {
+ IpsetAddress::Alias(name) => {
+ entries.push(Self {
+ nomatch: entry.nomatch,
+ cidr: name.to_string(),
+ comment: entry.comment.clone(),
+ });
+ }
+ IpsetAddress::Cidr(cidr) => {
+ entries.push(Self {
+ nomatch: entry.nomatch,
+ cidr: cidr.to_string(),
+ comment: entry.comment.clone(),
+ });
+ }
+ IpsetAddress::Range(range) => {
+ entries.extend(range.to_cidrs().into_iter().map(|cidr| Self {
+ nomatch: entry.nomatch,
+ cidr: cidr.to_string(),
+ comment: entry.comment.clone(),
+ }))
+ }
+ };
+
+ entries
+ }
+ }
+
+ #[derive(Clone, Debug, Default, Serialize)]
+ pub struct SdnFirewallConfig {
+ ipset: HashMap<String, Vec<LegacyIpsetEntry>>,
+ ipset_comments: HashMap<String, String>,
+ }
+
+ impl SdnFirewallConfig {
+ pub fn new() -> Self {
+ Default::default()
+ }
+
+ pub fn extend_ipsets(&mut self, ipsets: impl IntoIterator<Item = Ipset>) {
+ for ipset in ipsets {
+ let entries = ipset
+ .iter()
+ .flat_map(LegacyIpsetEntry::from_ipset_entry)
+ .collect();
+
+ self.ipset.insert(ipset.name().name().to_string(), entries);
+
+ if let Some(comment) = &ipset.comment {
+ self.ipset_comments
+ .insert(ipset.name().name().to_string(), comment.to_string());
+ }
+ }
+ }
+ }
+
+ const SDN_RUNNING_CONFIG: &str = "/etc/pve/sdn/.running-config";
+ const SDN_IPAM: &str = "/etc/pve/priv/ipam.db";
+
+ #[export]
+ pub fn config(
+ vnet_filter: Option<Vec<VnetName>>,
+ vm_filter: Option<Vec<Vmid>>,
+ ) -> Result<SdnFirewallConfig, Error> {
+ let mut refs = SdnFirewallConfig::new();
+
+ match fs::read_to_string(SDN_RUNNING_CONFIG) {
+ Ok(data) => {
+ let running_config: RunningConfig = serde_json::from_str(&data)?;
+ let sdn_config = SdnConfig::try_from(running_config)
+ .with_context(|| "Failed to parse SDN config".to_string())?;
+
+ let allowlist = vnet_filter.map(Allowlist::from_iter);
+ refs.extend_ipsets(sdn_config.ipsets(allowlist.as_ref()));
+ }
+ Err(e) if e.kind() == io::ErrorKind::NotFound => (),
+ Err(e) => {
+ bail!("Cannot open SDN running config: {e:#}");
+ }
+ };
+
+ match fs::read_to_string(SDN_IPAM) {
+ Ok(data) => {
+ let ipam_json: IpamJson = serde_json::from_str(&data)?;
+ let ipam: Ipam = Ipam::try_from(ipam_json)
+ .with_context(|| "Failed to parse IPAM".to_string())?;
+
+ let allowlist = vm_filter.map(Allowlist::from_iter);
+ refs.extend_ipsets(ipam.ipsets(allowlist.as_ref()));
+ }
+ Err(e) if e.kind() == io::ErrorKind::NotFound => (),
+ Err(e) => {
+ bail!("Cannot open IPAM database: {e:#}");
+ }
+ };
+
+ Ok(refs)
+ }
+}
diff --git a/pve-rs/src/lib.rs b/pve-rs/src/lib.rs
index 42be39e..dae190e 100644
--- a/pve-rs/src/lib.rs
+++ b/pve-rs/src/lib.rs
@@ -4,6 +4,7 @@
pub mod common;
pub mod apt;
+pub mod firewall;
pub mod openid;
pub mod resource_scheduling;
pub mod tfa;
--
2.39.2
* Re: [pve-devel] [PATCH proxmox-perl-rs 21/21] add PVE::RS::Firewall::SDN module
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-perl-rs 21/21] add PVE::RS::Firewall::SDN module Stefan Hanreich
@ 2024-08-13 16:14 ` Max Carrara
0 siblings, 0 replies; 43+ messages in thread
From: Max Carrara @ 2024-08-13 16:14 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed Jun 26, 2024 at 2:15 PM CEST, Stefan Hanreich wrote:
> Used for obtaining the IPSets that get autogenerated by the nftables
> firewall. The returned configuration has the same format that
> pve-firewall uses internally, making it compatible with the existing
> pve-firewall code.
>
> Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
> ---
> pve-rs/Cargo.toml | 1 +
> pve-rs/Makefile | 1 +
> pve-rs/src/firewall/mod.rs | 1 +
> pve-rs/src/firewall/sdn.rs | 130 +++++++++++++++++++++++++++++++++++++
> pve-rs/src/lib.rs | 1 +
> 5 files changed, 134 insertions(+)
> create mode 100644 pve-rs/src/firewall/mod.rs
> create mode 100644 pve-rs/src/firewall/sdn.rs
>
> diff --git a/pve-rs/Cargo.toml b/pve-rs/Cargo.toml
> index e40588d..f612b3a 100644
> --- a/pve-rs/Cargo.toml
> +++ b/pve-rs/Cargo.toml
> @@ -43,3 +43,4 @@ proxmox-subscription = "0.4"
> proxmox-sys = "0.5"
> proxmox-tfa = { version = "4.0.4", features = ["api"] }
> proxmox-time = "2"
> +proxmox-ve-config = { version = "0.1.0" }
This hunk doesn't apply anymore because proxmox-sys was bumped.
Manually adding proxmox-ve-config as a dependency works just fine though,
so this needs just a little rebase.
> diff --git a/pve-rs/Makefile b/pve-rs/Makefile
> index c6b4e08..d01da69 100644
> --- a/pve-rs/Makefile
> +++ b/pve-rs/Makefile
> @@ -28,6 +28,7 @@ PERLMOD_GENPACKAGE := /usr/lib/perlmod/genpackage.pl \
>
> PERLMOD_PACKAGES := \
> PVE::RS::APT::Repositories \
> + PVE::RS::Firewall::SDN \
> PVE::RS::OpenId \
> PVE::RS::ResourceScheduling::Static \
> PVE::RS::TFA
> diff --git a/pve-rs/src/firewall/mod.rs b/pve-rs/src/firewall/mod.rs
> new file mode 100644
> index 0000000..8bd18a8
> --- /dev/null
> +++ b/pve-rs/src/firewall/mod.rs
> @@ -0,0 +1 @@
> +pub mod sdn;
> diff --git a/pve-rs/src/firewall/sdn.rs b/pve-rs/src/firewall/sdn.rs
> new file mode 100644
> index 0000000..55f3e93
> --- /dev/null
> +++ b/pve-rs/src/firewall/sdn.rs
> @@ -0,0 +1,130 @@
> +#[perlmod::package(name = "PVE::RS::Firewall::SDN", lib = "pve_rs")]
> +mod export {
> + use std::collections::HashMap;
> + use std::{fs, io};
> +
> + use anyhow::{bail, Context, Error};
> + use serde::Serialize;
> +
> + use proxmox_ve_config::{
> + common::Allowlist,
> + firewall::types::ipset::{IpsetAddress, IpsetEntry},
> + firewall::types::Ipset,
> + guest::types::Vmid,
> + sdn::{
> + config::{RunningConfig, SdnConfig},
> + ipam::{Ipam, IpamJson},
> + SdnNameError, VnetName,
SdnNameError isn't used here.
> + },
> + };
> +
> + #[derive(Clone, Debug, Default, Serialize)]
> + pub struct LegacyIpsetEntry {
> + nomatch: bool,
> + cidr: String,
> + comment: Option<String>,
> + }
> +
> + impl LegacyIpsetEntry {
> + pub fn from_ipset_entry(entry: &IpsetEntry) -> Vec<LegacyIpsetEntry> {
> + let mut entries = Vec::new();
> +
> + match &entry.address {
> + IpsetAddress::Alias(name) => {
> + entries.push(Self {
> + nomatch: entry.nomatch,
> + cidr: name.to_string(),
> + comment: entry.comment.clone(),
> + });
> + }
> + IpsetAddress::Cidr(cidr) => {
> + entries.push(Self {
> + nomatch: entry.nomatch,
> + cidr: cidr.to_string(),
> + comment: entry.comment.clone(),
> + });
> + }
> + IpsetAddress::Range(range) => {
> + entries.extend(range.to_cidrs().into_iter().map(|cidr| Self {
> + nomatch: entry.nomatch,
> + cidr: cidr.to_string(),
> + comment: entry.comment.clone(),
> + }))
> + }
> + };
> +
> + entries
> + }
> + }
> +
> + #[derive(Clone, Debug, Default, Serialize)]
> + pub struct SdnFirewallConfig {
> + ipset: HashMap<String, Vec<LegacyIpsetEntry>>,
> + ipset_comments: HashMap<String, String>,
> + }
> +
> + impl SdnFirewallConfig {
> + pub fn new() -> Self {
> + Default::default()
> + }
> +
> + pub fn extend_ipsets(&mut self, ipsets: impl IntoIterator<Item = Ipset>) {
> + for ipset in ipsets {
> + let entries = ipset
> + .iter()
> + .flat_map(LegacyIpsetEntry::from_ipset_entry)
> + .collect();
> +
> + self.ipset.insert(ipset.name().name().to_string(), entries);
> +
> + if let Some(comment) = &ipset.comment {
> + self.ipset_comments
> + .insert(ipset.name().name().to_string(), comment.to_string());
> + }
> + }
> + }
> + }
> +
> + const SDN_RUNNING_CONFIG: &str = "/etc/pve/sdn/.running-config";
> + const SDN_IPAM: &str = "/etc/pve/priv/ipam.db";
> +
> + #[export]
> + pub fn config(
> + vnet_filter: Option<Vec<VnetName>>,
> + vm_filter: Option<Vec<Vmid>>,
> + ) -> Result<SdnFirewallConfig, Error> {
> + let mut refs = SdnFirewallConfig::new();
> +
> + match fs::read_to_string(SDN_RUNNING_CONFIG) {
> + Ok(data) => {
> + let running_config: RunningConfig = serde_json::from_str(&data)?;
> + let sdn_config = SdnConfig::try_from(running_config)
> + .with_context(|| "Failed to parse SDN config".to_string())?;
> +
> + let allowlist = vnet_filter.map(Allowlist::from_iter);
> + refs.extend_ipsets(sdn_config.ipsets(allowlist.as_ref()));
> + }
> + Err(e) if e.kind() == io::ErrorKind::NotFound => (),
> + Err(e) => {
> + bail!("Cannot open SDN running config: {e:#}");
> + }
> + };
> +
> + match fs::read_to_string(SDN_IPAM) {
> + Ok(data) => {
> + let ipam_json: IpamJson = serde_json::from_str(&data)?;
> + let ipam: Ipam = Ipam::try_from(ipam_json)
> + .with_context(|| "Failed to parse IPAM".to_string())?;
> +
> + let allowlist = vm_filter.map(Allowlist::from_iter);
> + refs.extend_ipsets(ipam.ipsets(allowlist.as_ref()));
> + }
> + Err(e) if e.kind() == io::ErrorKind::NotFound => (),
> + Err(e) => {
> + bail!("Cannot open IPAM database: {e:#}");
> + }
> + };
> +
> + Ok(refs)
> + }
> +}
> diff --git a/pve-rs/src/lib.rs b/pve-rs/src/lib.rs
> index 42be39e..dae190e 100644
> --- a/pve-rs/src/lib.rs
> +++ b/pve-rs/src/lib.rs
> @@ -4,6 +4,7 @@
> pub mod common;
>
> pub mod apt;
> +pub mod firewall;
> pub mod openid;
> pub mod resource_scheduling;
> pub mod tfa;
* Re: [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (20 preceding siblings ...)
2024-06-26 12:15 ` [pve-devel] [PATCH proxmox-perl-rs 21/21] add PVE::RS::Firewall::SDN module Stefan Hanreich
@ 2024-06-28 13:46 ` Gabriel Goller
2024-07-16 9:33 ` Stefan Hanreich
2024-08-13 16:06 ` Max Carrara
` (2 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: Gabriel Goller @ 2024-06-28 13:46 UTC (permalink / raw)
To: Proxmox VE development discussion
Already talked with Stefan offlist, but some major things I noted when
testing:
* It would be cool to have the generated IPSets visible in the IPSet
menu under Firewall (Datacenter). We could add a checkmark to hide
them (as there can be quite many) and make them read-only.
* Zones can be restricted to specific Nodes, but we generate the
IPSets on every Node for all Zones. This means some IPSets are
useless and we could avoid generating them in the first place.
Otherwise the IPSet generation works fine. The algorithm for generating
iptables ipset ranges also works perfectly!
* Re: [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects
2024-06-28 13:46 ` [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Gabriel Goller
@ 2024-07-16 9:33 ` Stefan Hanreich
0 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-07-16 9:33 UTC (permalink / raw)
To: Proxmox VE development discussion, Gabriel Goller
On 6/28/24 15:46, Gabriel Goller wrote:
> Already talked with Stefan offlist, but some major things I noted when
> testing:
> * It would be cool to have the generated IPSets visible in the IPSet
> menu under Firewall (Datacenter). We could add a checkmark to hide
> them (as there can be quite many) and make them read-only.
As already discussed, this might be a bit tricky to do read-only, since
we want to be able to override those IPSets (as is the case with
management, ipfilter, ...). It might make more sense to just additionally
display the IPSets and make them editable like any other. That way you
can easily append / delete IP addresses. Maybe give an indicator in the
UI whether an IPSet is auto-generated or overridden? Maybe I'll
make it a separate patch series that also implements this for the other
auto-generated IPSets.
> * Zones can be restricted to specific Nodes, but we generate the
> IPSets on every Node for all Zones. This means some IPSets are
> useless and we could avoid generating them in the first place.
Will try and add this.
>
> Otherwise the IPSet generation works fine. The algorithm for generating
> iptables ipset ranges also works perfectly!
>
Thanks for the review!
* Re: [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (21 preceding siblings ...)
2024-06-28 13:46 ` [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Gabriel Goller
@ 2024-08-13 16:06 ` Max Carrara
2024-09-24 8:41 ` Thomas Lamprecht
2024-10-10 15:59 ` Stefan Hanreich
24 siblings, 0 replies; 43+ messages in thread
From: Max Carrara @ 2024-08-13 16:06 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed Jun 26, 2024 at 2:15 PM CEST, Stefan Hanreich wrote:
> This patch series adds support for autogenerating ipsets for SDN objects. It
> autogenerates ipsets for every VNet as follows:
>
> * ipset containing all IP ranges of the VNet
> * ipset containing all gateways of the VNet
> * ipset containing all IP ranges of the subnet - except gateways
> * ipset containing all dhcp ranges of the vnet
Gave the entire RFC a spin - apart from a few minor version bumps and
one small rejected hunk, everything applied and built just fine.
Encountered no other issues in that regard.
My full review can be found below.
Review: RFC: autogenerate ipsets for sdn objects
================================================
Building
--------
As I also mention in some patches inline, a couple version bumps were
necessary to get everything to build correctly.
Those are as follows:
- proxmox-ve-rs
- proxmox-sys = "0.6.2"
- serde_with = "3.8.1"
- proxmox-firewall
- proxmox-sys = "0.6.2"
Additionally, patch 21 contained one rejected hunk:
diff a/pve-rs/Cargo.toml b/pve-rs/Cargo.toml (rejected hunks)
@@ -43,3 +43,4 @@ proxmox-subscription = "0.4"
proxmox-sys = "0.5"
proxmox-tfa = { version = "4.0.4", features = ["api"] }
proxmox-time = "2"
+proxmox-ve-config = { version = "0.1.0" }
This got rejected because `proxmox-sys` was already bumped to "0.6" in
`pve-rs`.
Simply adding the dependency manually allowed me to build `pve-rs`.
Testing
-------
You saw a lot of this off-list already (thanks again for the help btw),
but I want to mention it here as well, so it doesn't get lost.
Setup
*****
- Installed all packages on my development VM on which I then
performed my tests.
- No issues were encountered during installation.
- Installed `dnsmasq` on the VM.
- Disabled `dnsmasq` permanently via `systemctl disable --now dnsmasq`.
Simple Zone & VNet
******************
- Added a new simple zone in Datacenter > SDN > Zones.
- Enabled automatic DHCP on the zone.
- Added a new VNet named `vnet0` and assigned it to the new simple zone.
- Subnet: 172.16.100.0/24
- Gateway: 172.16.100.1
- DHCP Range: 172.16.100.100 - 172.16.100.200
- SNAT: enabled
Cluster Firewall
****************
- Edited cluster firewall rules.
- Immediately noticed that the new ipsets appeared in the UI and could
be selected.
- Configured a basic firewall rule as example.
- Type: out
- Action: REJECT
- Macro: Web
- Source: +dc/vnet0-all
- Log: info
- While this does not block web traffic for VMs, this was nevertheless
useful to check whether the ipsets and `iptables` FW config were
generated correctly.
- Relevant output of `iptables-save`:
# iptables-save | grep -E '\-\-dport (80|443) '
-A PVEFW-HOST-OUT -p tcp -m set --match-set PVEFW-0-vnet0-all-v4 src -m tcp --dport 80 -m limit --limit 1/sec -j NFLOG --nflog-prefix ":0:6:PVEFW-HOST-OUT: REJECT: "
-A PVEFW-HOST-OUT -p tcp -m set --match-set PVEFW-0-vnet0-all-v4 src -m tcp --dport 80 -j PVEFW-reject
-A PVEFW-HOST-OUT -p tcp -m set --match-set PVEFW-0-vnet0-all-v4 src -m tcp --dport 443 -m limit --limit 1/sec -j NFLOG --nflog-prefix ":0:6:PVEFW-HOST-OUT: REJECT: "
-A PVEFW-HOST-OUT -p tcp -m set --match-set PVEFW-0-vnet0-all-v4 src -m tcp --dport 443 -j PVEFW-reject
- The set passed via `--match-set` also appears in `ipset -L`:
# ipset -L | grep -A 8 'PVEFW-0-vnet0-all-v4'
Name: PVEFW-0-vnet0-all-v4
Type: hash:net
Revision: 7
Header: family inet hashsize 64 maxelem 64 bucketsize 12 initval 0xa4c09bc0
Size in memory: 504
References: 12
Number of entries: 1
Members:
172.16.100.0/24
- Very nice.
- All of the other ipsets for the VNet also appear in `ipset -L` as expected
(-all, -dhcp, -gateway for v4 and v6 each)
- When removing `+dc/vnet0-all` from the firewall rule and
leaving the source empty, outgoing web traffic was blocked, as
expected.
- Keeping it there does *not* block outgoing web traffic, as expected.
Host Firewall
*************
- The cluster firewall rule above was deactivated.
- Otherwise, the exact same steps as above were performed, just on the
host firewall.
- The results are exactly the same, as expected.
VM / CT ipsets
**************
- All containers and VMs correctly got their own ipset
(checked with `ipset -L`).
- Assigning a VM to the VNet makes it show up in IPAM and also updates
its corresponding ipset correctly.
- Adding the same firewall rule as above to a VM blocks the VM's
outgoing web traffic, as expected.
- Changing the source to the VM's ipset, in this case `+guest-ipam-102`,
also blocks the VM's outgoing web traffic, as expected.
- Output of `iptables-save | grep -E '\-\-dport (80|443) '` on the node:
# iptables-save | grep -E '\-\-dport (80|443) '
-A tap102i0-OUT -p tcp -m set --match-set PVEFW-0-guest-ipam-102-v4 src -m tcp --dport 80 -m limit --limit 1/sec -j NFLOG --nflog-prefix ":102:6:tap102i0-OUT: REJECT: "
-A tap102i0-OUT -p tcp -m set --match-set PVEFW-0-guest-ipam-102-v4 src -m tcp --dport 80 -j PVEFW-reject
-A tap102i0-OUT -p tcp -m set --match-set PVEFW-0-guest-ipam-102-v4 src -m tcp --dport 443 -m limit --limit 1/sec -j NFLOG --nflog-prefix ":102:6:tap102i0-OUT: REJECT: "
-A tap102i0-OUT -p tcp -m set --match-set PVEFW-0-guest-ipam-102-v4 src -m tcp --dport 443 -j PVEFW-reject
Code Review
-----------
There's not much to say here except that the code looks fantastic as
always - I'm especially a fan of the custom error types. As Gabriel
mentioned, maybe the `thiserror` crate would come in handy eventually,
but that's honestly your decision.
There are a couple more comments inline, but they're rather minor, IMO.
Slightly off-topic: since you already mentioned off-list that
you're working on updating / adding docstrings and implementing custom
error types for the other things as well, I've got nothing further to
mention.
Great work! I really like this feature a lot.
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
>
> Additionally it generates an IPSet for every guest that has one or more IPAM
> entries in the pve IPAM.
>
> Those can then be used in the cluster / host / guest firewalls. Firewall rules
> automatically update on changes of the SDN / IPAM configuration. This patch
> series works for the old firewall as well as the new firewall.
>
> The ipsets in nftables currently get generated as named ipsets in every table,
> this means that the `nft list ruleset` output can get quite crowded for large
> SDN configurations or large IPAM databases. Another option would be to only
> include them as anonymous IPsets in the rules, which would make the nft output
> far less crowded but this way would use more memory when making extensive use of
> the sdn ipsets, since everytime it is used in a rule we create an entirely new
> ipset.
>
> The current series generates all those ipsets in the datacenter scope. My
> initial approach was to introduce an separate scope (sdn/), but I changed my
> mind during the development because that would require non-trivial changes in
> pve-firewall, which is something I wanted to avoid. With this approach we just
> pass a flag to the cluster config loading wherever we need the SDN config - we
> get everything else (rule validation, API output, rule generation) for 'free'
> basically.
>
> Otherwise, the other way I see would need to introduce a completely new
> parameter into all function calls, or at least a new key in the dc config. All
> call sites would need privileges, due to the IPAM being in /etc/pve/priv. We
> would need to parse the SDN configuration everywhere we need the cluster
> configuration, since otherwise we wouldn't be able to parse / validate the
> cluster configuration and then generate rules.
>
> I'm still unsure whether the upside of having a separate scope is worth the
> effort, so any input w.r.t this topic is much appreciated. Introducing a new
> scope and then adapting the firewall is something I wanted to get some feedback
> on before diving into it, which is why I've refrained from doing it for now.
>
> Of course one downside is that we're kinda locking us in here with this
> decision. With the new firewall adding new scopes should be a lot easier, but if
> we decide to go forward with the SDN ipsets in the datacenter scope we would
> need to support that as well or find some migration path.
>
>
> This patch series is based on my private repositories that split the existing
> proxmox-firewall package into proxmox-firewall and proxmox-ve-rs. Those can be
> found in my staff repo:
>
> staff/s.hanreich/proxmox-ve-rs.git master
> staff/s.hanreich/proxmox-firewall.git no-config
>
> Please note that I included the debian packaging commit in this patch series,
> since it is new and should get reviewed as well, I suppose. It is already
> included when pulling from the proxmox-ve-rs repository.
>
> Dependencies:
> * proxmox-perl-rs and proxmox-firewall depend on proxmox-ve-rs
> * pve-firewall depends on proxmox-perl-rs
>
> proxmox-ve-rs:
>
> Stefan Hanreich (15):
> debian: add files for packaging
> firewall: add ip range types
> firewall: address: use new iprange type for ip entries
> ipset: add range variant to addresses
> iprange: add methods for converting an ip range to cidrs
> ipset: address: add helper methods
> firewall: guest: derive traits according to rust api guidelines
> common: add allowlist
> sdn: add name types
> sdn: add ipam module
> sdn: ipam: add method for generating ipsets
> sdn: add config module
> sdn: config: add method for generating ipsets
> tests: add sdn config tests
> tests: add ipam tests
>
> .cargo/config.toml | 5 +
> .gitignore | 8 +
> Cargo.toml | 17 +
> Makefile | 69 +
> build.sh | 35 +
> bump.sh | 44 +
> proxmox-ve-config/Cargo.toml | 16 +-
> proxmox-ve-config/debian/changelog | 5 +
> proxmox-ve-config/debian/control | 43 +
> proxmox-ve-config/debian/copyright | 19 +
> proxmox-ve-config/debian/debcargo.toml | 4 +
> proxmox-ve-config/src/common/mod.rs | 30 +
> .../src/firewall/types/address.rs | 1171 ++++++++++++++++-
> proxmox-ve-config/src/firewall/types/alias.rs | 4 +-
> proxmox-ve-config/src/firewall/types/ipset.rs | 29 +-
> proxmox-ve-config/src/firewall/types/rule.rs | 6 +-
> proxmox-ve-config/src/guest/types.rs | 8 +-
> proxmox-ve-config/src/guest/vm.rs | 8 +-
> proxmox-ve-config/src/lib.rs | 2 +
> proxmox-ve-config/src/sdn/config.rs | 643 +++++++++
> proxmox-ve-config/src/sdn/ipam.rs | 382 ++++++
> proxmox-ve-config/src/sdn/mod.rs | 243 ++++
> proxmox-ve-config/tests/sdn/main.rs | 189 +++
> proxmox-ve-config/tests/sdn/resources/ipam.db | 26 +
> .../tests/sdn/resources/running-config.json | 54 +
> 25 files changed, 2975 insertions(+), 85 deletions(-)
> create mode 100644 .cargo/config.toml
> create mode 100644 .gitignore
> create mode 100644 Cargo.toml
> create mode 100644 Makefile
> create mode 100755 build.sh
> create mode 100755 bump.sh
> create mode 100644 proxmox-ve-config/debian/changelog
> create mode 100644 proxmox-ve-config/debian/control
> create mode 100644 proxmox-ve-config/debian/copyright
> create mode 100644 proxmox-ve-config/debian/debcargo.toml
> create mode 100644 proxmox-ve-config/src/common/mod.rs
> create mode 100644 proxmox-ve-config/src/sdn/config.rs
> create mode 100644 proxmox-ve-config/src/sdn/ipam.rs
> create mode 100644 proxmox-ve-config/src/sdn/mod.rs
> create mode 100644 proxmox-ve-config/tests/sdn/main.rs
> create mode 100644 proxmox-ve-config/tests/sdn/resources/ipam.db
> create mode 100644 proxmox-ve-config/tests/sdn/resources/running-config.json
>
>
> proxmox-firewall:
>
> Stefan Hanreich (3):
> cargo: update dependencies
> config: tests: add support for loading sdn and ipam config
> ipsets: autogenerate ipsets for vnets and ipam
>
> proxmox-firewall/Cargo.toml | 2 +-
> proxmox-firewall/src/config.rs | 69 +
> proxmox-firewall/src/firewall.rs | 22 +-
> proxmox-firewall/src/object.rs | 41 +-
> .../tests/input/.running-config.json | 45 +
> proxmox-firewall/tests/input/ipam.db | 32 +
> proxmox-firewall/tests/integration_tests.rs | 10 +
> .../integration_tests__firewall.snap | 1288 +++++++++++++++++
> proxmox-nftables/src/expression.rs | 17 +-
> 9 files changed, 1511 insertions(+), 15 deletions(-)
> create mode 100644 proxmox-firewall/tests/input/.running-config.json
> create mode 100644 proxmox-firewall/tests/input/ipam.db
>
>
> pve-firewall:
>
> Stefan Hanreich (2):
> add support for loading sdn firewall configuration
> api: load sdn ipsets
>
> src/PVE/API2/Firewall/Cluster.pm | 3 ++-
> src/PVE/API2/Firewall/Rules.pm | 18 +++++++------
> src/PVE/API2/Firewall/VM.pm | 3 ++-
> src/PVE/Firewall.pm | 43 ++++++++++++++++++++++++++++++--
> 4 files changed, 56 insertions(+), 11 deletions(-)
>
>
> proxmox-perl-rs:
>
> Stefan Hanreich (1):
> add PVE::RS::Firewall::SDN module
>
> pve-rs/Cargo.toml | 1 +
> pve-rs/Makefile | 1 +
> pve-rs/src/firewall/mod.rs | 1 +
> pve-rs/src/firewall/sdn.rs | 130 +++++++++++++++++++++++++++++++++++++
> pve-rs/src/lib.rs | 1 +
> 5 files changed, 134 insertions(+)
> create mode 100644 pve-rs/src/firewall/mod.rs
> create mode 100644 pve-rs/src/firewall/sdn.rs
>
>
> Summary over all repositories:
> 43 files changed, 4676 insertions(+), 111 deletions(-)
* Re: [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (22 preceding siblings ...)
2024-08-13 16:06 ` Max Carrara
@ 2024-09-24 8:41 ` Thomas Lamprecht
2024-10-10 15:59 ` Stefan Hanreich
24 siblings, 0 replies; 43+ messages in thread
From: Thomas Lamprecht @ 2024-09-24 8:41 UTC (permalink / raw)
To: Proxmox VE development discussion, Stefan Hanreich
Am 26/06/2024 um 14:15 schrieb Stefan Hanreich:
> The current series generates all those ipsets in the datacenter scope. My
> initial approach was to introduce an separate scope (sdn/), but I changed my
> mind during the development because that would require non-trivial changes in
> pve-firewall, which is something I wanted to avoid. With this approach we just
> pass a flag to the cluster config loading wherever we need the SDN config - we
> get everything else (rule validation, API output, rule generation) for 'free'
> basically.
>
> Otherwise, the other way I see would need to introduce a completely new
> parameter into all function calls, or at least a new key in the dc config. All
> call sites would need privileges, due to the IPAM being in /etc/pve/priv. We
> would need to parse the SDN configuration everywhere we need the cluster
> configuration, since otherwise we wouldn't be able to parse / validate the
> cluster configuration and then generate rules.
>
> I'm still unsure whether the upside of having a separate scope is worth the
> effort, so any input w.r.t this topic is much appreciated. Introducing a new
> scope and then adapting the firewall is something I wanted to get some feedback
> on before diving into it, which is why I've refrained from doing it for now.
I'd prefer a separate scope, to avoid potential clashes of IDs on upgrade and
to continue with the scope split we did for cluster-level ("dc" scope) and virtual
guest-level ("guest" scope) IPsets back from PVE 7 to 8. So while one _might_
see the SDN ones as fitting into the "dc" scope, a separate SDN scope is IMO
a bit nicer w.r.t. separating the origin.
It'd be good to know what the problems were when introducing that new sdn scope;
maybe there's a simpler workaround/design.
* Re: [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects
2024-06-26 12:15 [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects Stefan Hanreich
` (23 preceding siblings ...)
2024-09-24 8:41 ` Thomas Lamprecht
@ 2024-10-10 15:59 ` Stefan Hanreich
24 siblings, 0 replies; 43+ messages in thread
From: Stefan Hanreich @ 2024-10-10 15:59 UTC (permalink / raw)
To: pve-devel
v2 here:
https://lore.proxmox.com/pve-devel/20241010155637.255451-1-s.hanreich@proxmox.com/T/