public inbox for pve-devel@lists.proxmox.com
From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH ha-manager 05/15] rules: add colocation rule plugin
Date: Thu, 03 Apr 2025 14:16:53 +0200	[thread overview]
Message-ID: <1743672047.yfgqapvf7h.astroid@yuna.none> (raw)
In-Reply-To: <20250325151254.193177-7-d.kral@proxmox.com>

On March 25, 2025 4:12 pm, Daniel Kral wrote:
> Add the colocation rule plugin to allow users to specify inter-service
> affinity constraints.
> 
> These colocation rules can either be positive (keeping services
> together) or negative (keeping services separate). Their strictness can
> also be specified as either a MUST or a SHOULD, where the former
> specifies that any service the constraint cannot be applied for stays in
> recovery, while the latter specifies that any service the constraint
> cannot be applied for is lifted from the constraint.
> 
> The initial implementation also provides four basic transformations:
> colocation rules with too few services are dropped, transitive positive
> colocation rules are merged, and rules with inter-colocation rule
> inconsistencies, as well as rules that are inconsistent with the
> location constraints specified in HA groups, are dropped.

a high-level question: there's a lot of loops and sorts over rules,
services and groups here - granted, that is all in memory, so it should
be reasonably fast, but do we have concerns here / should we look for
further optimization potential?

e.g. right now I count (coming in via canonicalize):

- check_services_count
-- sort of ruleids (foreach_colocation_rule)
-- loop over rules (foreach_colocation_rule)
--- keys on services of each rule
- loop over the results (should be empty)
- check_positive_intransitivity
-- sort of ruleids, 1x loop over rules (foreach_colocation_rule via split_colocation_rules)
-- loop over each unique pair of ruleids
--- is_disjoint on services of each pair (loop over service keys)
- loop over resulting ruleids (might be many!)
-- loop over mergeable rules for each merge target
--- loop over services of each mergeable rule
- check_inner_consistency
-- sort of ruleids, 1x loop over rules (foreach_colocation_rule via split_colocation_rules)
-- loop over positive rules
--- for every positive rule, loop over negative rules
---- for each pair of positive+negative rule, check service
intersections
- loop over resulting conflicts (should be empty)
- check_consistency_with_groups
-- sort of ruleids, 1x loop over rules (foreach_colocation_rule via split_colocation_rules)
-- loop over positive rules
--- loop over services
---- loop over nodes of service's group
-- loop over negative rules
--- loop over services
---- loop over nodes of service's group
- loop over resulting conflicts (should be empty)

possibly, by splitting the rules (instead of just the IDs) once and
keeping a pre-sorted list of rule IDs, we could save some overhead?

it might not be worth it (yet), but it's something to keep in mind if
the rules get more complicated over time..

> 
> Signed-off-by: Daniel Kral <d.kral@proxmox.com>
> ---
>  debian/pve-ha-manager.install  |   1 +
>  src/PVE/HA/Makefile            |   1 +
>  src/PVE/HA/Rules/Colocation.pm | 391 +++++++++++++++++++++++++++++++++
>  src/PVE/HA/Rules/Makefile      |   6 +
>  src/PVE/HA/Tools.pm            |   6 +
>  5 files changed, 405 insertions(+)
>  create mode 100644 src/PVE/HA/Rules/Colocation.pm
>  create mode 100644 src/PVE/HA/Rules/Makefile
> 
> diff --git a/debian/pve-ha-manager.install b/debian/pve-ha-manager.install
> index 9bbd375..89f9144 100644
> --- a/debian/pve-ha-manager.install
> +++ b/debian/pve-ha-manager.install
> @@ -33,6 +33,7 @@
>  /usr/share/perl5/PVE/HA/Resources/PVECT.pm
>  /usr/share/perl5/PVE/HA/Resources/PVEVM.pm
>  /usr/share/perl5/PVE/HA/Rules.pm
> +/usr/share/perl5/PVE/HA/Rules/Colocation.pm
>  /usr/share/perl5/PVE/HA/Tools.pm
>  /usr/share/perl5/PVE/HA/Usage.pm
>  /usr/share/perl5/PVE/HA/Usage/Basic.pm
> diff --git a/src/PVE/HA/Makefile b/src/PVE/HA/Makefile
> index 489cbc0..e386cbf 100644
> --- a/src/PVE/HA/Makefile
> +++ b/src/PVE/HA/Makefile
> @@ -8,6 +8,7 @@ install:
>  	install -d -m 0755 ${DESTDIR}${PERLDIR}/PVE/HA
>  	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/HA/$$i; done
>  	make -C Resources install
> +	make -C Rules install
>  	make -C Usage install
>  	make -C Env install
>  
> diff --git a/src/PVE/HA/Rules/Colocation.pm b/src/PVE/HA/Rules/Colocation.pm
> new file mode 100644
> index 0000000..808d48e
> --- /dev/null
> +++ b/src/PVE/HA/Rules/Colocation.pm
> @@ -0,0 +1,391 @@
> +package PVE::HA::Rules::Colocation;
> +
> +use strict;
> +use warnings;
> +
> +use Data::Dumper;

leftover dumper ;)

> +
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::HA::Tools;
> +
> +use base qw(PVE::HA::Rules);
> +
> +sub type {
> +    return 'colocation';
> +}
> +
> +sub properties {
> +    return {
> +	services => get_standard_option('pve-ha-resource-id-list'),
> +	affinity => {
> +	    description => "Describes whether the services are supposed to be kept on separate"
> +		. " nodes, or are supposed to be kept together on the same node.",
> +	    type => 'string',
> +	    enum => ['separate', 'together'],
> +	    optional => 0,
> +	},
> +	strict => {
> +	    description => "Describes whether the colocation rule is mandatory or optional.",
> +	    type => 'boolean',
> +	    optional => 0,
> +	},
> +    }
> +}
> +
> +sub options {
> +    return {
> +	services => { optional => 0 },
> +	strict => { optional => 0 },
> +	affinity => { optional => 0 },
> +	comment => { optional => 1 },
> +    };
> +};
> +
> +sub decode_value {
> +    my ($class, $type, $key, $value) = @_;
> +
> +    if ($key eq 'services') {
> +	my $res = {};
> +
> +	for my $service (PVE::Tools::split_list($value)) {
> +	    if (PVE::HA::Tools::pve_verify_ha_resource_id($service)) {
> +		$res->{$service} = 1;
> +	    }
> +	}
> +
> +	return $res;
> +    }
> +
> +    return $value;
> +}
> +
> +sub encode_value {
> +    my ($class, $type, $key, $value) = @_;
> +
> +    if ($key eq 'services') {
> +	PVE::HA::Tools::pve_verify_ha_resource_id($_) for (keys %$value);
> +
> +	return join(',', keys %$value);
> +    }
> +
> +    return $value;
> +}
> +
> +sub foreach_colocation_rule {
> +    my ($rules, $func, $opts) = @_;
> +
> +    my $my_opts = { map { $_ => $opts->{$_} } keys %$opts };

why? if the caller doesn't want $opts to be modified, they could just
pass in a copy (or you could require it to be passed by value instead of
by reference?).

there's only a single caller that passes $opts at all (introduced by a
later patch), and that one constructs the hash reference right at the
call site, so unless I am missing something this seems a bit overkill..

> +    $my_opts->{type} = 'colocation';
> +
> +    PVE::HA::Rules::foreach_service_rule($rules, $func, $my_opts);
> +}
> +
> +sub split_colocation_rules {
> +    my ($rules) = @_;
> +
> +    my $positive_ruleids = [];
> +    my $negative_ruleids = [];
> +
> +    foreach_colocation_rule($rules, sub {
> +	my ($rule, $ruleid) = @_;
> +
> +	my $ruleid_set = $rule->{affinity} eq 'together' ? $positive_ruleids : $negative_ruleids;
> +	push @$ruleid_set, $ruleid;
> +    });
> +
> +    return ($positive_ruleids, $negative_ruleids);
> +}
> +
> +=head3 check_service_count($rules)
> +
> +Returns a list of conflicts caused by colocation rules, which do not have
> +enough services in them, defined in C<$rules>.
> +
> +If there are no conflicts, the returned list is empty.
> +
> +=cut
> +
> +sub check_services_count {
> +    my ($rules) = @_;
> +
> +    my $conflicts = [];
> +
> +    foreach_colocation_rule($rules, sub {
> +	my ($rule, $ruleid) = @_;
> +
> +	push @$conflicts, $ruleid if (scalar(keys %{$rule->{services}}) < 2);
> +    });
> +
> +    return $conflicts;
> +}

is this really an issue? a colocation rule with a single service is just
a nop? there's currently no cleanup AFAICT if a resource is removed, but
if we add that part (maybe we should?), then one could easily end up in
a situation where a rule temporarily contains a single service, or none
at all?

> +
> +=head3 check_positive_intransitivity($rules)
> +
> +Returns a list of conflicts caused by transitive positive colocation rules
> +defined in C<$rules>.
> +
> +Transitive positive colocation rules exist, if there are at least two positive
> +colocation rules with the same strictness, which put at least the same two
> +services in relation. This means, that these rules can be merged together.
> +
> +If there are no conflicts, the returned list is empty.

The terminology here is quite confusing - calling two rules that are
"transitive" and thus mergeable a "conflict" is rather odd, since
merging is a good thing (it makes things easier to handle?) and
"conflict" is a decidedly negative term..

there's only a single call site in the same module, maybe we could just
rename this into "find_mergeable_positive_ruleids", similar to the
variable where the result is stored?

> +
> +=cut
> +
> +sub check_positive_intransitivity {
> +    my ($rules) = @_;
> +
> +    my $conflicts = {};
> +    my ($positive_ruleids) = split_colocation_rules($rules);
> +
> +    while (my $outerid = shift(@$positive_ruleids)) {
> +	my $outer = $rules->{ids}->{$outerid};
> +
> +	for my $innerid (@$positive_ruleids) {

so in practice this is a sort of "optimized" loop over all pairs of
rules - iterating over the positive rules twice, but skipping pairs
that were already visited, by virtue of the shift on the outer loop..

might be worth a short note - together with the $inner and $outer
terminology, I was a bit confused at first..

> +	    my $inner = $rules->{ids}->{$innerid};
> +
> +	    next if $outerid eq $innerid;
> +	    next if $outer->{strict} != $inner->{strict};
> +	    next if PVE::HA::Tools::is_disjoint($outer->{services}, $inner->{services});
> +
> +	    push @{$conflicts->{$outerid}}, $innerid;
> +	}
> +    }
> +
> +    return $conflicts;
> +}
> +
> +=head3 check_inner_consistency($rules)
> +
> +Returns a list of conflicts caused by inconsistencies between positive and
> +negative colocation rules defined in C<$rules>.
> +
> +Inner inconsistent colocation rules exist, if there are at least the same two
> +services in a positive and a negative colocation relation, which is an
> +impossible constraint as they are opposites of each other.
> +
> +If there are no conflicts, the returned list is empty.

here the conflicts and check terminology makes sense - we are checking
an invariant that must be satisfied after all :)

> +
> +=cut
> +
> +sub check_inner_consistency {

but 'inner' is a weird term, since this is consistency between rules?

it basically checks that no pair of services is both supposed to be
colocated and not colocated at the same time, but I am not sure how to
encode that concisely..

> +    my ($rules) = @_;
> +
> +    my $conflicts = [];
> +    my ($positive_ruleids, $negative_ruleids) = split_colocation_rules($rules);
> +
> +    for my $outerid (@$positive_ruleids) {
> +	my $outer = $rules->{ids}->{$outerid}->{services};

s/outer/positive ?

> +
> +	for my $innerid (@$negative_ruleids) {
> +	    my $inner = $rules->{ids}->{$innerid}->{services};

s/inner/negative ?

> +
> +	    my $intersection = PVE::HA::Tools::intersect($outer, $inner);
> +	    next if scalar(keys %$intersection < 2);

the `keys` there is not needed - rather, the parentheses are in the
wrong place ;) it does work by accident though, because the result of
`keys` is coerced to a scalar by the comparison anyway; you then get the
result of that comparison wrapped in another call to `scalar`, and end
up with either 1 or '' depending on whether the check was true or
false..
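i.e. the intended semantics, spelled out with Python sets (illustrative
values only):

```python
# What the check is meant to do: only flag the pair of rules when the
# positive and the negative rule share at least two services.
positive_services = {'vm:101', 'vm:102', 'vm:103'}
negative_services = {'vm:102', 'vm:103', 'vm:104'}

intersection = positive_services & negative_services
# corresponds to the fixed Perl: next if scalar(keys %$intersection) < 2;
conflict = len(intersection) >= 2
```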

> +
> +	    push @$conflicts, [$outerid, $innerid];
> +	}
> +    }
> +
> +    return $conflicts;
> +}
> +
> +=head3 check_positive_group_consistency(...)
> +
> +Returns a list of conflicts caused by inconsistencies between positive
> +colocation rules defined in C<$rules> and node restrictions defined in
> +C<$groups> and C<$service>.

services?

> +
> +A positive colocation rule inconsistency with groups exists, if at least two
> +services in a positive colocation rule are restricted to disjoint sets of
> +nodes, i.e. they are in restricted HA groups, which have a disjoint set of
> +nodes.
> +
> +If there are no conflicts, the returned list is empty.
> +
> +=cut
> +
> +sub check_positive_group_consistency {
> +    my ($rules, $groups, $services, $positive_ruleids, $conflicts) = @_;

this could just get $positive_rules (filtered via grep) instead?

> +
> +    for my $ruleid (@$positive_ruleids) {
> +	my $rule_services = $rules->{ids}->{$ruleid}->{services};

and this could be

while (my ($ruleid, $rule) = each %$positive_rules) {
  my $nodes;
  ..
}

> +	my $nodes;
> +
> +	for my $sid (keys %$rule_services) {
> +	    my $groupid = $services->{$sid}->{group};
> +	    return if !$groupid;

should this really be a return?

> +
> +	    my $group = $groups->{ids}->{$groupid};
> +	    return if !$group;
> +	    return if !$group->{restricted};

same here?

> +
> +	    $nodes = { map { $_ => 1 } keys %{$group->{nodes}} } if !defined($nodes);

isn't $group->{nodes} already a hash set of the desired format? so this could be

$nodes = { $group->{nodes}->%* };

?

> +	    $nodes = PVE::HA::Tools::intersect($nodes, $group->{nodes});

could add a break here with the same condition as below?

> +	}
> +
> +	if (defined($nodes) && scalar keys %$nodes < 1) {
> +	    push @$conflicts, ['positive', $ruleid];
> +	}
> +    }
> +}
> +
> +=head3 check_negative_group_consistency(...)
> +
> +Returns a list of conflicts caused by inconsistencies between negative
> +colocation rules defined in C<$rules> and node restrictions defined in
> +C<$groups> and C<$service>.
> +
> +A negative colocation rule inconsistency with groups exists, if at least two
> +services in a negative colocation rule are restricted to less nodes in total
> +than services in the rule, i.e. they are in restricted HA groups, where the
> +union of all restricted node sets have less elements than restricted services.
> +
> +If there are no conflicts, the returned list is empty.
> +
> +=cut
> +
> +sub check_negative_group_consistency {
> +    my ($rules, $groups, $services, $negative_ruleids, $conflicts) = @_;

same question here

> +
> +    for my $ruleid (@$negative_ruleids) {
> +	my $rule_services = $rules->{ids}->{$ruleid}->{services};
> +	my $restricted_services = 0;
> +	my $restricted_nodes;
> +
> +	for my $sid (keys %$rule_services) {
> +	    my $groupid = $services->{$sid}->{group};
> +	    return if !$groupid;

same question as above ;)

> +
> +	    my $group = $groups->{ids}->{$groupid};
> +	    return if !$group;
> +	    return if !$group->{restricted};

same here

> +
> +	    $restricted_services++;
> +
> +	    $restricted_nodes = {} if !defined($restricted_nodes);
> +	    $restricted_nodes = PVE::HA::Tools::union($restricted_nodes, $group->{nodes});

here as well - if restricted_services > restricted_nodes, haven't we
already found a violation of the invariant? we should break then, even
if the next iteration would add another service that can run on 5 more
nodes..
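as a sketch of what bailing out early could look like (Python, helper
name and data shapes made up to mirror check_negative_group_consistency):

```python
def negative_rule_violated(service_ids, restricted_group_nodes):
    """Hypothetical helper: union the restricted node sets and bail out as
    soon as there are more restricted services than nodes to spread them
    across - the subset seen so far is already unsatisfiable, and adding
    more services cannot fix that for this subset."""
    restricted_services = 0
    restricted_nodes = set()
    for sid in service_ids:
        nodes = restricted_group_nodes.get(sid)
        if nodes is None:        # no restricted group: skip, don't return
            continue
        restricted_services += 1
        restricted_nodes |= nodes
        if restricted_services > len(restricted_nodes):
            return True          # early break: invariant already violated
    return restricted_services > len(restricted_nodes)
```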

> +	}
> +
> +	if (defined($restricted_nodes)
> +	    && scalar keys %$restricted_nodes < $restricted_services) {
> +	    push @$conflicts, ['negative', $ruleid];
> +	}
> +    }
> +}
> +
> +sub check_consistency_with_groups {
> +    my ($rules, $groups, $services) = @_;
> +
> +    my $conflicts = [];
> +    my ($positive_ruleids, $negative_ruleids) = split_colocation_rules($rules);
> +
> +    check_positive_group_consistency($rules, $groups, $services, $positive_ruleids, $conflicts);
> +    check_negative_group_consistency($rules, $groups, $services, $negative_ruleids, $conflicts);
> +
> +    return $conflicts;
> +}
> +
> +sub canonicalize {
> +    my ($class, $rules, $groups, $services) = @_;

should this note that it will modify $rules in-place? it is only called
by PVE::HA::Rules::checked_config, which also does not note that, and
whose name could be interpreted as "config is checked now" ;)

> +
> +    my $illdefined_ruleids = check_services_count($rules);
> +
> +    for my $ruleid (@$illdefined_ruleids) {
> +	print "Drop colocation rule '$ruleid', because it does not have enough services defined.\n";
> +
> +	delete $rules->{ids}->{$ruleid};
> +    }
> +
> +    my $mergeable_positive_ruleids = check_positive_intransitivity($rules);
> +
> +    for my $outerid (sort keys %$mergeable_positive_ruleids) {
> +	my $outer = $rules->{ids}->{$outerid};
> +	my $innerids = $mergeable_positive_ruleids->{$outerid};
> +
> +	for my $innerid (@$innerids) {
> +	    my $inner = $rules->{ids}->{$innerid};
> +
> +	    $outer->{services}->{$_} = 1 for (keys %{$inner->{services}});
> +
> +	    print "Merge services of positive colocation rule '$innerid' into positive colocation"
> +		. " rule '$outerid', because they share at least one service.\n";

this is a bit confusing, because it modifies the rule but keeps
referring to it by the old name afterwards.. should we merge them and
give the merged rule a new name?
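e.g. something like this (Python sketch - the naming scheme is just an
assumption for illustration, not a concrete proposal):

```python
def merge_positive_rules(rules, target_id, source_ids):
    """Merge mergeable positive rules into a single rule under a fresh id
    (hypothetical scheme: join the old ids), dropping the originals so
    later log messages never refer to a rule whose contents changed."""
    merged_services = dict(rules[target_id]['services'])
    for sid in source_ids:
        merged_services.update(rules[sid]['services'])
        del rules[sid]
    del rules[target_id]
    new_id = '-'.join([target_id, *source_ids])  # assumes ids are join-safe
    rules[new_id] = {'affinity': 'together', 'services': merged_services}
    return new_id
```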

> +
> +	    delete $rules->{ids}->{$innerid};
> +	}
> +    }
> +
> +    my $inner_conflicts = check_inner_consistency($rules);
> +
> +    for my $conflict (@$inner_conflicts) {
> +	my ($positiveid, $negativeid) = @$conflict;
> +
> +	print "Drop positive colocation rule '$positiveid' and negative colocation rule"
> +	    . " '$negativeid', because they share two or more services.\n";
> +
> +	delete $rules->{ids}->{$positiveid};
> +	delete $rules->{ids}->{$negativeid};
> +    }
> +
> +    my $group_conflicts = check_consistency_with_groups($rules, $groups, $services);
> +
> +    for my $conflict (@$group_conflicts) {
> +	my ($type, $ruleid) = @$conflict;
> +
> +	if ($type eq 'positive') {
> +	    print "Drop positive colocation rule '$ruleid', because two or more services are"
> +		. " restricted to different nodes.\n";
> +	} elsif ($type eq 'negative') {
> +	    print "Drop negative colocation rule '$ruleid', because two or more services are"
> +		. " restricted to less nodes than services.\n";
> +	} else {
> +	    die "Invalid group conflict type $type\n";
> +	}
> +
> +	delete $rules->{ids}->{$ruleid};
> +    }
> +}
> +
> +# TODO This will be used to verify modifications to the rules config over the API
> +sub are_satisfiable {

this is basically canonicalize, but
- without deleting rules
- without the transitivity check
- with slightly adapted messages

should they be combined so that we have roughly the same logic when
doing changes via the API and when loading the rules for operations?
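something along these lines (Python sketch, all names invented) would
keep the two paths in sync - one driver runs the same checks, and only
the canonicalize path actually drops offending rules:

```python
# Sketch of sharing the check logic between canonicalize() and
# are_satisfiable(): one driver, a shared check list, and a flag that
# decides whether offending rules get dropped in-place.

def check_services_count(rules):
    return [rid for rid, r in rules.items() if len(r['services']) < 2]

CHECKS = [
    (check_services_count,
     "colocation rule '{}' does not have enough services defined"),
]

def run_checks(rules, *, apply_fixes):
    satisfiable = True
    for check, message in CHECKS:
        for ruleid in check(rules):
            print(message.format(ruleid))
            satisfiable = False
            if apply_fixes:          # canonicalize: drop the rule in-place
                del rules[ruleid]
    return satisfiable               # are_satisfiable: just report

def canonicalize(rules):
    run_checks(rules, apply_fixes=True)

def are_satisfiable(rules):
    return run_checks(rules, apply_fixes=False)
```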

> +    my ($class, $rules, $groups, $services) = @_;
> +
> +    my $illdefined_ruleids = check_services_count($rules);
> +
> +    for my $ruleid (@$illdefined_ruleids) {
> +	print "Colocation rule '$ruleid' does not have enough services defined.\n";
> +    }
> +
> +    my $inner_conflicts = check_inner_consistency($rules);
> +
> +    for my $conflict (@$inner_conflicts) {
> +	my ($positiveid, $negativeid) = @$conflict;
> +
> +	print "Positive colocation rule '$positiveid' is inconsistent with negative colocation rule"
> +	    . " '$negativeid', because they share two or more services between them.\n";
> +    }
> +
> +    my $group_conflicts = check_consistency_with_groups($rules, $groups, $services);
> +
> +    for my $conflict (@$group_conflicts) {
> +	my ($type, $ruleid) = @$conflict;
> +
> +	if ($type eq 'positive') {
> +	    print "Positive colocation rule '$ruleid' is unapplicable, because two or more services"
> +		. " are restricted to different nodes.\n";
> +	} elsif ($type eq 'negative') {
> +	    print "Negative colocation rule '$ruleid' is unapplicable, because two or more services"
> +		. " are restricted to less nodes than services.\n";
> +	} else {
> +	    die "Invalid group conflict type $type\n";
> +	}
> +    }
> +
> +    if (scalar(@$inner_conflicts) || scalar(@$group_conflicts)) {
> +	return 0;
> +    }
> +
> +    return 1;
> +}
> +
> +1;
> diff --git a/src/PVE/HA/Rules/Makefile b/src/PVE/HA/Rules/Makefile
> new file mode 100644
> index 0000000..8cb91ac
> --- /dev/null
> +++ b/src/PVE/HA/Rules/Makefile
> @@ -0,0 +1,6 @@
> +SOURCES=Colocation.pm
> +
> +.PHONY: install
> +install:
> +	install -d -m 0755 ${DESTDIR}${PERLDIR}/PVE/HA/Rules
> +	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/HA/Rules/$$i; done
> diff --git a/src/PVE/HA/Tools.pm b/src/PVE/HA/Tools.pm
> index 35107c9..52251d7 100644
> --- a/src/PVE/HA/Tools.pm
> +++ b/src/PVE/HA/Tools.pm
> @@ -46,6 +46,12 @@ PVE::JSONSchema::register_standard_option('pve-ha-resource-id', {
>      type => 'string', format => 'pve-ha-resource-id',
>  });
>  
> +PVE::JSONSchema::register_standard_option('pve-ha-resource-id-list', {
> +    description => "List of HA resource IDs.",
> +    typetext => "<type>:<name>{,<type>:<name>}*",
> +    type => 'string', format => 'pve-ha-resource-id-list',
> +});
> +
>  PVE::JSONSchema::register_format('pve-ha-resource-or-vm-id', \&pve_verify_ha_resource_or_vm_id);
>  sub pve_verify_ha_resource_or_vm_id {
>      my ($sid, $noerr) = @_;
> -- 
> 2.39.5
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
