public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH container] lxc start: warn in case of conflicting lxc.idmap entries
@ 2023-02-23 17:03 Friedrich Weber
  2023-03-16 13:59 ` Wolfgang Bumiller
  0 siblings, 1 reply; 5+ messages in thread
From: Friedrich Weber @ 2023-02-23 17:03 UTC (permalink / raw)
  To: pve-devel

Users can customize the mapping between host and container uids/gids
by providing `lxc.idmap` entries in the container config. The syntax
is described in lxc.container.conf(5). One source of errors is
conflicting entries for one or more uids/gids. An example:

    ...
    lxc.idmap: u 0 100000 65536
    lxc.idmap: u 1000 1000 10
    ...

Assuming `root:1000:10` is correctly added to /etc/subuid, starting
the container fails with an error that is hard to interpret:

    lxc_map_ids: 3701 newuidmap failed to write mapping
    "newuidmap: write to uid_map failed: Invalid argument":
    newuidmap 67993 0 100000 65536 1000 1000 10

In order to simplify troubleshooting, validate the mapping before
starting the container and print a warning if conflicting uid/gid
mappings are detected. For the above mapping:

    lxc.idmap: invalid map entry 'u 1000 1000 10': container uid 1000
    is already mapped by entry 'u 0 100000 65536'

The warning appears in the task log and in the output of `pct start`.

A failed validation check only prints a warning instead of erroring
out, to make sure potentially buggy (or outdated) validation code does
not prevent containers from starting.

The validation algorithm is quite straightforward and has quadratic
runtime in the worst case. The kernel allows at most 340 lines in
uid_map (see user_namespaces(7)), which the algorithm should be able
to handle in well under 0.5s, but to be safe, skip validation for more
than 100 entries.

Note that validation does not take /etc/sub{uid,gid} into account,
which, if misconfigured, could still prevent the container from
starting with an error like

    "newuidmap: uid range [1000-1010) -> [1000-1010) not allowed"

If needed, validating /etc/sub{uid,gid} could be added in the future.
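
For reference, newuidmap only accepts the example mapping above if
/etc/subuid delegates both host uid ranges to root, roughly like this
(the first entry is typically already present by default, the second is
the one mentioned above; /etc/subgid needs matching entries for the gid
mappings):

    root:100000:65536
    root:1000:10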

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---
 The warning could of course be even more detailed, e.g., "container uid range
 [1000...1009] is already mapped to [101000...101009] by entry 'u 0 100000
 65536'". But this would require a more sophisticated algorithm, and I'm not
 sure if the added complexity is worth it -- happy to add it if wanted, though.

 src/PVE/LXC.pm              |  97 ++++++++++++++++++
 src/test/Makefile           |   5 +-
 src/test/idmap-test.pm      | 197 ++++++++++++++++++++++++++++++++++++
 src/test/run_idmap_tests.pl |  10 ++
 4 files changed, 308 insertions(+), 1 deletion(-)
 create mode 100644 src/test/idmap-test.pm
 create mode 100755 src/test/run_idmap_tests.pl

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 54ee0d9..141f195 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -2313,6 +2313,93 @@ sub parse_id_maps {
     return ($id_map, $rootuid, $rootgid);
 }
 
+# parse custom id mapping and throw a human-readable error if any
+# host/container uid/gid is mapped twice. note that the algorithm has quadratic
+# worst-case runtime, which may be problematic with >1000 mapping entries.
+# we're unlikely to exceed this because the kernel only allows 340 entries (see
+# user_namespaces(7)), but if we do, this could be implemented more efficiently
+# (see e.g. "interval trees").
+sub validate_id_maps {
+    my ($id_map) = @_;
+
+    # keep track of already-mapped uids/gid ranges on the container and host
+    # sides. each range is an array [mapped_entry, first_id, last_id], where
+    # mapped_entry is the original config entry (stored for generating error
+    # messages), and [first_id, last_id] is an (inclusive) interval of mapped
+    # ids. Ranges are sorted ascendingly by first_id for more efficient
+    # traversal, and they must not overlap (this would be an invalid mapping).
+    my $sides_ranges = {
+	host => { u => [], g => [] },
+	container => { u => [], g => [] },
+    };
+
+    # try to update the mapping with the requested range.
+    # return (1, undef, undef) on success and (0, $mapped_entry, $id) on failure,
+    # where $mapped_entry is the conflicting map entry that already maps $id.
+    my $map_range = sub {
+	my ($requested_entry, $mapped_ranges, $requested_first, $requested_count) = @_;
+	my $requested_last = $requested_first + $requested_count - 1;
+
+	# fallback case: insert the requested range at the end
+	my $mapped_length = scalar(@$mapped_ranges);
+	my $insert_before = $mapped_length;
+
+	foreach my $i (0 .. $mapped_length - 1) {
+	    my ($mapped_entry, $mapped_first, $mapped_last) = @$mapped_ranges[$i]->@*;
+
+	    if ($requested_last < $mapped_first) {
+		# mapped:          [---]
+		# requested: [---]
+		# as the ranges are sorted, the requested range cannot overlap
+		# with any subsequent mapped range, so we can stop iterating
+		$insert_before = $i;
+		last;
+	    }
+
+	    # two shapes of overlapping ranges are possible.
+	    # mapped:  [---]
+	    # requested: [...
+	    #            ^- conflicting id
+	    return (0, $mapped_entry, $requested_first)
+		if $mapped_first <= $requested_first <= $mapped_last;
+
+	    # mapped:      [...
+	    # requested: [---]
+	    #              ^- conflicting id
+	    return (0, $mapped_entry, $mapped_first)
+		if $requested_first <= $mapped_first <= $requested_last;
+	}
+
+	my $new_entry = [$requested_entry, $requested_first, $requested_last];
+	splice @$mapped_ranges, $insert_before, 0, $new_entry;
+	return (1, undef, undef);
+    };
+
+    my $validate_and_map_range = sub {
+	my ($map_entry, $side, $type, $first_id, $count) = @_;
+	my $mapped_ranges = $sides_ranges->{$side}->{$type};
+
+	my ($ok, $already_mapped_entry, $already_mapped) = $map_range->(
+	    $map_entry,
+	    $mapped_ranges,
+	    $first_id,
+	    $count,
+	);
+	die "invalid map entry '@$map_entry': " .
+	    "$side ${type}id $already_mapped is already mapped by entry '@$already_mapped_entry'\n"
+	    if !$ok;
+    };
+
+    foreach my $map_entry (@$id_map) {
+	my ($type, $first_ct_id, $first_host_id, $count) = @$map_entry;
+
+	$validate_and_map_range->($map_entry, 'container', $type, int($first_ct_id), int($count));
+	$validate_and_map_range->($map_entry, 'host', $type, int($first_host_id), int($count));
+    }
+
+    return $sides_ranges;
+}
+
 sub userns_command {
     my ($id_map) = @_;
     if (@$id_map) {
@@ -2395,6 +2482,16 @@ sub vm_start {
 
     update_lxc_config($vmid, $conf);
 
+    eval {
+	my ($id_map, undef, undef) = PVE::LXC::parse_id_maps($conf);
+	# skip id map validation for large number of mappings,
+	# to avoid slowing down container startup.
+	if (scalar(@$id_map) <= 100) {
+	    PVE::LXC::validate_id_maps($id_map);
+	}
+    };
+    warn "lxc.idmap: $@" if $@;
+
     my $skiplock_flag_fn = "/run/lxc/skiplock-$vmid";
 
     if ($skiplock) {
diff --git a/src/test/Makefile b/src/test/Makefile
index 8734879..82e8967 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -2,7 +2,7 @@ RUN_USERNS := lxc-usernsexec -m "u:0:`id -u`:1" -m "g:0:`id -g`:1" --
 
 all: test
 
-test: test_setup test_snapshot test_bindmount
+test: test_setup test_snapshot test_bindmount test_idmap
 
 test_setup: run_setup_tests.pl
 	$(RUN_USERNS) ./run_setup_tests.pl
@@ -13,5 +13,8 @@ test_snapshot: run_snapshot_tests.pl
 test_bindmount: bindmount_test.pl
 	$(RUN_USERNS) ./bindmount_test.pl
 
+test_idmap: run_idmap_tests.pl
+	./run_idmap_tests.pl
+
 clean:
 	rm -rf tmprootfs
diff --git a/src/test/idmap-test.pm b/src/test/idmap-test.pm
new file mode 100644
index 0000000..4bd3fed
--- /dev/null
+++ b/src/test/idmap-test.pm
@@ -0,0 +1,197 @@
+package PVE::LXC::Test;
+
+use strict;
+use warnings;
+use lib qw(..);
+
+use PVE::LXC;
+
+use Test::More;
+
+use Time::HiRes qw (gettimeofday tv_interval);
+
+subtest 'valid: default config (unprivileged)' => sub {
+    plan tests => 2;
+
+    my ($id_maps, undef, undef) = PVE::LXC::parse_id_maps({
+	    unprivileged => 1,
+	    lxc => [ ['rootfs', 'xyz'] ],
+    });
+
+    my $mapped = PVE::LXC::validate_id_maps($id_maps);
+    is_deeply($mapped->{host}, {
+	    u => [ [@$id_maps[0], 100000, 165535] ],
+	    g => [ [@$id_maps[1], 100000, 165535] ],
+    });
+    is_deeply($mapped->{container}, {
+	    u => [ [@$id_maps[0], 0, 65535] ],
+	    g => [ [@$id_maps[1], 0, 65535] ],
+    });
+};
+
+subtest 'valid: mapping one user/group to host' => sub {
+    plan tests => 2;
+
+    my ($id_maps, undef, undef) = PVE::LXC::parse_id_maps({
+	    lxc => [
+		['lxc.idmap', 'u 0 100000 1005'],
+		['lxc.idmap', 'g 0 100000 1007'],
+		['lxc.idmap', 'u 1005 1005 1'],
+		['lxc.idmap', 'g 1007 1007 1'],
+		['lxc.idmap', 'u 1006 101006 64530'],
+		['lxc.idmap', 'g 1008 101008 64528'],
+	    ],
+    });
+
+    my $mapped = PVE::LXC::validate_id_maps($id_maps);
+    is_deeply($mapped->{host}, {
+	    u => [
+		[@$id_maps[2], 1005, 1005],
+		[@$id_maps[0], 100000, 101004],
+		[@$id_maps[4], 101006, 165535],
+	    ],
+	    g => [
+		[@$id_maps[3], 1007, 1007],
+		[@$id_maps[1], 100000, 101006],
+		[@$id_maps[5], 101008, 165535],
+	    ],
+    });
+    is_deeply($mapped->{container}, {
+	    u => [
+		[@$id_maps[0], 0, 1004],
+		[@$id_maps[2], 1005, 1005],
+		[@$id_maps[4], 1006, 65535],
+	    ],
+	    g => [
+		[@$id_maps[1], 0, 1006],
+		[@$id_maps[3], 1007, 1007],
+		[@$id_maps[5], 1008, 65535],
+	    ],
+    });
+};
+
+subtest 'valid: mapping user/group ranges to host' => sub {
+    plan tests => 2;
+
+    my ($id_maps, undef, undef) = PVE::LXC::parse_id_maps({
+	    lxc => [
+		['lxc.idmap', 'u 3000 103000 60000'], #0
+		['lxc.idmap', 'u 2000 1000 1000'], #1
+		['lxc.idmap', 'u 0 100000 2000'], #2
+		['lxc.idmap', 'g 2000 102000 63534'], #3
+		['lxc.idmap', 'g 1000 2000 1000'], #4
+		['lxc.idmap', 'g 0 100000 1000'], #5
+		['lxc.idmap', 'u 63000 263000 2536'], #6
+		['lxc.idmap', 'g 65534 365534 2'], #7
+	    ],
+    });
+
+    my $mapped = PVE::LXC::validate_id_maps($id_maps);
+    is_deeply($mapped->{host}, {
+	    u => [
+		[@$id_maps[1], 1000, 1999],
+		[@$id_maps[2], 100000, 101999],
+		[@$id_maps[0], 103000, 162999],
+		[@$id_maps[6], 263000, 265535],
+	    ],
+	    g => [
+		[@$id_maps[4], 2000, 2999],
+		[@$id_maps[5], 100000, 100999],
+		[@$id_maps[3], 102000, 165533],
+		[@$id_maps[7], 365534, 365535],
+	    ],
+    });
+    is_deeply($mapped->{container}, {
+	    u => [
+		[@$id_maps[2], 0, 1999],
+		[@$id_maps[1], 2000, 2999],
+		[@$id_maps[0], 3000, 62999],
+		[@$id_maps[6], 63000, 65535],
+	    ],
+	    g => [
+		[@$id_maps[5], 0, 999],
+		[@$id_maps[4], 1000, 1999],
+		[@$id_maps[3], 2000, 65533],
+		[@$id_maps[7], 65534, 65535],
+	    ],
+    });
+};
+
+subtest 'invalid: ambiguous mappings' => sub {
+    plan tests => 4;
+
+    $@ = undef;
+    eval {
+	my ($id_maps, undef, undef) = PVE::LXC::parse_id_maps({
+		lxc => [
+		    ['lxc.idmap', 'u 0 100000 65536'], # maps container uid 1005
+		    ['lxc.idmap', 'u 1005 1005 1'], # maps container uid 1005 again
+		    ['lxc.idmap', 'u 1006 101006 64530'],
+		],
+	});
+	PVE::LXC::validate_id_maps($id_maps);
+    };
+    like($@, qr/container uid 1005 is already mapped/, "\$@ correct");
+
+    $@ = undef;
+    eval {
+	my ($id_maps, undef, undef) = PVE::LXC::parse_id_maps({
+		lxc => [
+		    ['lxc.idmap', 'u 0 100000 1005'],
+		    ['lxc.idmap', 'u 1005 1005 2'], # maps host uid 1005
+		    ['lxc.idmap', 'u 1007 101007 992'],
+		    ['lxc.idmap', 'u 11000 1000 10'], # maps host uid 1005 again
+		],
+	});
+	PVE::LXC::validate_id_maps($id_maps);
+    };
+    like($@, qr/host uid 1005 is already mapped/, "\$@ correct");
+
+    $@ = undef;
+    eval {
+	my ($id_maps, undef, undef) = PVE::LXC::parse_id_maps({
+		lxc => [
+		    ['lxc.idmap', 'g 0 100000 1001'], # maps container gid 1000
+		    ['lxc.idmap', 'g 1000 1000 10'], # maps container gid 1000 again
+		],
+	});
+	PVE::LXC::validate_id_maps($id_maps);
+    };
+    like($@, qr/container gid 1000 is already mapped/, "\$@ correct");
+
+    $@ = undef;
+    eval {
+	my ($id_maps, undef, undef) = PVE::LXC::parse_id_maps({
+		lxc => [
+		    ['lxc.idmap', 'g 0 100000 1000'], # maps host gid 100000
+		    ['lxc.idmap', 'g 2000 102000 1000'],
+		    ['lxc.idmap', 'g 3500 99500 5000'], # maps host gid 100000 again
+		],
+	});
+	PVE::LXC::validate_id_maps($id_maps);
+    };
+    like($@, qr/host gid 100000 is already mapped/, "\$@ correct");
+};
+
+subtest 'check performance' => sub {
+    plan tests => 1;
+
+    # generate mapping with 1000 entries
+    my $entries = [];
+    foreach my $i (0..999) {
+	my $first_container_uid = $i * 10;
+	my $first_host_uid = 100000 + $first_container_uid;
+	push @$entries, ['lxc.idmap', "u $first_container_uid $first_host_uid 10"]
+    }
+
+    my ($id_maps, undef, undef) = PVE::LXC::parse_id_maps({ lxc => $entries });
+
+    my $start_time = [ gettimeofday() ];
+    my $mapped = PVE::LXC::validate_id_maps($id_maps);
+    my $elapsed = tv_interval($start_time);
+
+    diag("validation took $elapsed seconds");
+    is(scalar($mapped->{host}->{u}->@*), 1000);
+};
+
+done_testing();
diff --git a/src/test/run_idmap_tests.pl b/src/test/run_idmap_tests.pl
new file mode 100755
index 0000000..38bb494
--- /dev/null
+++ b/src/test/run_idmap_tests.pl
@@ -0,0 +1,10 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use TAP::Harness;
+
+my $harness = TAP::Harness->new( { "verbosity" => -2 });
+my $res = $harness->runtests( "idmap-test.pm");
+exit -1 if $res->{failed};
-- 
2.30.2






* Re: [pve-devel] [PATCH container] lxc start: warn in case of conflicting lxc.idmap entries
  2023-02-23 17:03 [pve-devel] [PATCH container] lxc start: warn in case of conflicting lxc.idmap entries Friedrich Weber
@ 2023-03-16 13:59 ` Wolfgang Bumiller
  2023-03-16 15:07   ` Friedrich Weber
  0 siblings, 1 reply; 5+ messages in thread
From: Wolfgang Bumiller @ 2023-03-16 13:59 UTC (permalink / raw)
  To: Friedrich Weber; +Cc: pve-devel

On Thu, Feb 23, 2023 at 06:03:02PM +0100, Friedrich Weber wrote:
> Users can customize the mapping between host and container uids/gids
> by providing `lxc.idmap` entries in the container config. The syntax
> is described in lxc.container.conf(5). One source of errors is
> conflicting entries for one or more uids/gids. An example:
> 
>     ...
>     lxc.idmap: u 0 100000 65536
>     lxc.idmap: u 1000 1000 10
>     ...
> 
> Assuming `root:1000:10` is correctly added to /etc/subuid, starting
> the container fails with an error that is hard to interpret:
> 
>     lxc_map_ids: 3701 newuidmap failed to write mapping
>     "newuidmap: write to uid_map failed: Invalid argument":
>     newuidmap 67993 0 100000 65536 1000 1000 10
> 
> In order to simplify troubleshooting, validate the mapping before
> starting the container and print a warning if conflicting uid/gid
> mappings are detected. For the above mapping:
> 
>     lxc.idmap: invalid map entry 'u 1000 1000 10': container uid 1000
>     is already mapped by entry 'u 0 100000 65536'
> 
> The warning appears in the task log and in the output of `pct start`.
> 
> A failed validation check only prints a warning instead of erroring
> out, to make sure potentially buggy (or outdated) validation code does
> not prevent containers from starting.
> 
> The validation algorithm is quite straightforward and has quadratic
> runtime in the worst case. The kernel allows at most 340 lines in
> uid_map (see user_namespaces(7)), which the algorithm should be able
> to handle in well under 0.5s, but to be safe, skip validation for more
> than 100 entries.
> 
> Note that validation does not take /etc/sub{uid,gid} into account,
> which, if misconfigured, could still prevent the container from
> starting with an error like
> 
>     "newuidmap: uid range [1000-1010) -> [1000-1010) not allowed"
> 
> If needed, validating /etc/sub{uid,gid} could be added in the future.
> 
> Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
> ---
>  The warning could of course be even more detailed, e.g., "container uid range
>  [1000...1009] is already mapped to [101000...101009] by entry 'u 0 100000
>  65536'". But this would require a more sophisticated algorithm, and I'm not
>  sure if the added complexity is worth it -- happy to add it if wanted, though.
> 
>  src/PVE/LXC.pm              |  97 ++++++++++++++++++
>  src/test/Makefile           |   5 +-
>  src/test/idmap-test.pm      | 197 ++++++++++++++++++++++++++++++++++++
>  src/test/run_idmap_tests.pl |  10 ++
>  4 files changed, 308 insertions(+), 1 deletion(-)
>  create mode 100644 src/test/idmap-test.pm
>  create mode 100755 src/test/run_idmap_tests.pl
> 
> diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
> index 54ee0d9..141f195 100644
> --- a/src/PVE/LXC.pm
> +++ b/src/PVE/LXC.pm
> @@ -2313,6 +2313,93 @@ sub parse_id_maps {
>      return ($id_map, $rootuid, $rootgid);
>  }
>  
> +# parse custom id mapping and throw a human-readable error if any
> +# host/container uid/gid is mapped twice. note that the algorithm has quadratic
> +# worst-case runtime, which may be problematic with >1000 mapping entries.
> +# we're unlikely to exceed this because the kernel only allows 340 entries (see
> +# user_namespaces(7)), but if we do, this could be implemented more efficiently
> +# (see e.g. "interval trees").

Both seem a bit excessive to me.

Let's look at the data:
We have a set of ranges consisting of a type, 2 starts and a count.
The types are uids and gids, so we can view those as 2 separate
instances of sets of [ct_start, host_start, count].
Since the ranges must not overlap on either the container or the host
side, we can - again - view these as separate sets of container-side
[start, count] and host-side [start, count] ranges.
In other words, we can see the entire id map as just 4 sets of [start,
count] ranges which must not overlap.

So I think all we need to do is sort these by the 'start' value, and for
each element make sure that

    previous_start + previous_count <= current_start

And yes, that means we need to sort $id_maps twice, once by ct id, once
by host id, and then iterate and do the above check.

Should be much shorter (and faster).
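
A rough sketch of that in Perl, assuming the [type, ct_id, host_id, count]
entries that parse_id_maps returns (error wording just illustrative):

    # build 4 lists of [start, count, entry]: uid/gid x container/host side
    my $ranges = { u => { ct => [], host => [] }, g => { ct => [], host => [] } };
    for my $entry (@$id_map) {
        my ($type, $ct_start, $host_start, $count) = @$entry;
        push @{ $ranges->{$type}->{ct} }, [int($ct_start), int($count), $entry];
        push @{ $ranges->{$type}->{host} }, [int($host_start), int($count), $entry];
    }

    for my $type (qw(u g)) {
        for my $side ('ct', 'host') {
            # after sorting by start id, each range only needs to be compared
            # to its direct predecessor
            my @sorted = sort { $a->[0] <=> $b->[0] } @{ $ranges->{$type}->{$side} };
            for my $i (1 .. $#sorted) {
                my ($prev, $cur) = ($sorted[$i - 1], $sorted[$i]);
                die "invalid map entry '@{$cur->[2]}': overlaps with entry '@{$prev->[2]}'\n"
                    if $prev->[0] + $prev->[1] > $cur->[0];
            }
        }
    }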

> +sub validate_id_maps {
> +    my ($id_map) = @_;
> +
> +    # keep track of already-mapped uids/gid ranges on the container and host
> +    # sides. each range is an array [mapped_entry, first_id, last_id], where
> +    # mapped_entry is the original config entry (stored for generating error
> +    # messages), and [first_id, last_id] is an (inclusive) interval of mapped
> +    # ids. Ranges are sorted ascendingly by first_id for more efficient
> +    # traversal, and they must not overlap (this would be an invalid mapping).
> +    my $sides_ranges = {
> +	host => { u => [], g => [] },
> +	container => { u => [], g => [] },
> +    };
> +
> +    # try to update the mapping with the requested range.
> +    # return (1, undef, undef) on success and (0, $mapped_entry, $id) on failure,
> +    # where $mapped_entry is the conflicting map entry that already maps $id.
> +    my $map_range = sub {
> +	my ($requested_entry, $mapped_ranges, $requested_first, $requested_count) = @_;
> +	my $requested_last = $requested_first + $requested_count - 1;
> +
> +	# fallback case: insert the requested range at the end
> +	my $mapped_length = scalar(@$mapped_ranges);
> +	my $insert_before = $mapped_length;
> +
> +	foreach my $i (0 .. $mapped_length - 1) {
> +	    my ($mapped_entry, $mapped_first, $mapped_last) = @$mapped_ranges[$i]->@*;
> +
> +	    if ($requested_last < $mapped_first) {
> +		# mapped:          [---]
> +		# requested: [---]
> +		# as the ranges are sorted, the requested range cannot overlap
> +		# with any subsequent mapped range, so we can stop iterating
> +		$insert_before = $i;
> +		last;
> +	    }
> +
> +	    # two shapes of overlapping ranges are possible.
> +	    # mapped:  [---]
> +	    # requested: [...
> +	    #            ^- conflicting id
> +	    return (0, $mapped_entry, $requested_first)
> +		if $mapped_first <= $requested_first <= $mapped_last;
> +
> +	    # mapped:      [...
> +	    # requested: [---]
> +	    #              ^- conflicting id
> +	    return (0, $mapped_entry, $mapped_first)
> +		if $requested_first <= $mapped_first <= $requested_last;
> +	}
> +
> +	my $new_entry = [$requested_entry, $requested_first, $requested_last];
> +	splice @$mapped_ranges, $insert_before, 0, $new_entry;
> +	return (1, undef, undef);
> +    };
> +
> +    my $validate_and_map_range = sub {
> +	my ($map_entry, $side, $type, $first_id, $count) = @_;
> +	my $mapped_ranges = $sides_ranges->{$side}->{$type};
> +
> +	my ($ok, $already_mapped_entry, $already_mapped) = $map_range->(
> +	    $map_entry,
> +	    $mapped_ranges,
> +	    $first_id,
> +	    $count,
> +	);
> +	die "invalid map entry '@$map_entry': " .
> +	    "$side ${type}id $already_mapped is already mapped by entry '@$already_mapped_entry'\n"
> +	    if !$ok;
> +    };
> +
> +    foreach my $map_entry (@$id_map) {
> +	my ($type, $first_ct_id, $first_host_id, $count) = @$map_entry;
> +
> +	$validate_and_map_range->($map_entry, 'container', $type, int($first_ct_id), int($count));
> +	$validate_and_map_range->($map_entry, 'host', $type, int($first_host_id), int($count));
> +    }
> +
> +    return $sides_ranges;
> +}
> +
>  sub userns_command {
>      my ($id_map) = @_;
>      if (@$id_map) {
>...





* Re: [pve-devel] [PATCH container] lxc start: warn in case of conflicting lxc.idmap entries
  2023-03-16 13:59 ` Wolfgang Bumiller
@ 2023-03-16 15:07   ` Friedrich Weber
  2023-03-16 16:09     ` Wolfgang Bumiller
  0 siblings, 1 reply; 5+ messages in thread
From: Friedrich Weber @ 2023-03-16 15:07 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pve-devel

Thanks for the review!

On 16/03/2023 14:59, Wolfgang Bumiller wrote:
> Both seem a bit excessive to me.
> 
> Let's look at the data:
> We have a set of ranges consisting of a type, 2 starts and a count.
> The types are uids and gids, so we can view those as 2 separate
> instances of sets of [ct_start, host_start, count].
> Since the ranges must not overlap on either the container or the host
> side, we can - again - view these as separate sets of container-side
> [start, count] and host-side [start, count] ranges.
> In other words, we can see the entire id map as just 4 sets of [start,
> count] ranges which must not overlap.
> 
> So I think all we need to do is sort these by the 'start' value, and for
> each element make sure that
> 
>      previous_start + previous_count <= current_start
> 
> And yes, that means we need to sort $id_maps twice, once by ct id, once
> by host id, and then iterate and do the above check.
> 
> Should be much shorter (and faster).

Yeah, good point, splitting $id_maps into separate uid/gid maps, and 
then sorting+iterating twice (I'll call this the "sorting algorithm" 
below) does sound more understandable than the current ad-hoc approach, 
and faster too.

However, one small benefit of iterating over $id_maps in its original 
order (instead of sorting) is that the error message always references 
the *first* invalid map entry in the config, e.g. (omitting host uids 
for clarity)

   1) u 1000 <...> 100
   2) u 950 <...> 100
   3) u 900 <...> 100
   4) u 850 <...> 100

The sorting algorithm would error on entry 3, which might suggest to 
users that entries 1-2 are okay (which they are not). The current 
algorithm errors on entry 2 already. Similar things would happen with
interleaved uid/gid mappings, I guess.

But I'm not sure if this really matters to users. What do you think?





* Re: [pve-devel] [PATCH container] lxc start: warn in case of conflicting lxc.idmap entries
  2023-03-16 15:07   ` Friedrich Weber
@ 2023-03-16 16:09     ` Wolfgang Bumiller
  2023-03-17  8:53       ` Friedrich Weber
  0 siblings, 1 reply; 5+ messages in thread
From: Wolfgang Bumiller @ 2023-03-16 16:09 UTC (permalink / raw)
  To: Friedrich Weber; +Cc: pve-devel

On Thu, Mar 16, 2023 at 04:07:34PM +0100, Friedrich Weber wrote:
> Thanks for the review!
> 
> On 16/03/2023 14:59, Wolfgang Bumiller wrote:
> > Both seem a bit excessive to me.
> > 
> > Let's look at the data:
> > We have a set of ranges consisting of a type, 2 starts and a count.
> > The types are uids and gids, so we can view those as 2 separate
> > instances of sets of [ct_start, host_start, count].
> > Since the ranges must not overlap on either the container or the host
> > side, we can - again - view these as separate sets of container-side
> > [start, count] and host-side [start, count] ranges.
> > In other words, we can see the entire id map as just 4 sets of [start,
> > count] ranges which must not overlap.
> > 
> > So I think all we need to do is sort these by the 'start' value, and for
> > each element make sure that
> > 
> >      previous_start + previous_count <= current_start
> > 
> > And yes, that means we need to sort $id_maps twice, once by ct id, once
> > by host id, and then iterate and do the above check.
> > 
> > Should be much shorter (and faster).
> 
> Yeah, good point, splitting $id_maps into separate uid/gid maps, and then
> sorting+iterating twice (I'll call this the "sorting algorithm" below) does
> sound more understandable than the current ad-hoc approach, and faster too.
> 
> However, one small benefit of iterating over $id_maps in its original order
> (instead of sorting) is that the error message always references the *first*
> invalid map entry in the config, e.g. (omitting host uids for clarity)
> 
>   1) u 1000 <...> 100
>   2) u 950 <...> 100
>   3) u 900 <...> 100
>   4) u 850 <...> 100
> 
> The sorting algorithm would error on entry 3, which might suggest to users
> that entries 1-2 are okay (which they are not). The current algorithm errors
> on entry 2 already. Similar things would happen with interleaved uid/gid
> mappings, I guess.
> 
> But I'm not sure if this really matters to users. What do you think?

Since it's about helping out users, even better would be to collect all
the errors together and then die() with a message containing all of
them.
And then the order doesn't matter again ;-)
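
Roughly, with the sketch from my earlier mail (variable names carried over,
purely illustrative), the check would then become:

    my @errors;
    for my $type (qw(u g)) {
        for my $side ('ct', 'host') {
            my @sorted = sort { $a->[0] <=> $b->[0] } @{ $ranges->{$type}->{$side} };
            for my $i (1 .. $#sorted) {
                my ($prev, $cur) = ($sorted[$i - 1], $sorted[$i]);
                # remember the conflict instead of die()ing on the first one
                push @errors, "entry '@{$cur->[2]}' overlaps with entry '@{$prev->[2]}'"
                    if $prev->[0] + $prev->[1] > $cur->[0];
            }
        }
    }
    die join("\n", @errors) . "\n" if @errors;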





* Re: [pve-devel] [PATCH container] lxc start: warn in case of conflicting lxc.idmap entries
  2023-03-16 16:09     ` Wolfgang Bumiller
@ 2023-03-17  8:53       ` Friedrich Weber
  0 siblings, 0 replies; 5+ messages in thread
From: Friedrich Weber @ 2023-03-17  8:53 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pve-devel

On 16/03/2023 17:09, Wolfgang Bumiller wrote:
> Since it's about helping out users, even better would be to collect all
> the errors together and then die() with a message containing all of
> them.
> And then the order doesn't matter again ;-)

Agree. :) So I'll prepare a v2 that

* is based on the sorting approach you suggested
* reports all errors to the user (hopefully without being too verbose)
* and probably removes the weird "<100 entries" limitation



