From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <48f91ae3-a7a3-48ce-b55a-180bafe4e111@proxmox.com>
Date: Fri, 9 May 2025 13:20:50 +0200
From: Daniel Kral <d.kral@proxmox.com>
To: Fiona Ebner <f.ebner@proxmox.com>, Proxmox VE development discussion <pve-devel@lists.proxmox.com>
References: <20250325151254.193177-1-d.kral@proxmox.com> <20250325151254.193177-15-d.kral@proxmox.com> <38f55fe9-4c1a-4734-826e-7482649087ac@proxmox.com>
In-Reply-To: <38f55fe9-4c1a-4734-826e-7482649087ac@proxmox.com>
Subject: Re: [pve-devel] [PATCH ha-manager 13/15] test: ha tester: add test cases for loose colocation rules

On 4/28/25 16:44, Fiona Ebner wrote:
> On 25.03.25 at 16:12, Daniel Kral wrote:
>> Add test cases for loose positive and negative colocation rules, i.e.
>> where services should be kept together on the same node or kept on
>> separate nodes. These are copies of their strict counterpart tests, but
>> they verify the behavior if the colocation rule cannot be met, i.e. when
>> not adhering to the colocation rule. The test scenarios are:
>>
>> - 2 neg. colocated services in a 3 node cluster; 1 node failing
>> - 2 neg. colocated services in a 3 node cluster; 1 node failing, but the
>>   recovery node cannot start the service
>> - 2 pos. colocated services in a 3 node cluster; 1 node failing
>> - 3 pos.
colocated services in a 3 node cluster; 1 node failing, but the
>>   recovery node cannot start one of the services
>>
>> Signed-off-by: Daniel Kral <d.kral@proxmox.com>
>
> With the errors in the descriptions fixed:
>
> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>

ACK

>
>> diff --git a/src/test/test-colocation-loose-separate4/README b/src/test/test-colocation-loose-separate4/README
>
> Not sure it should be named the same number as the strict test just
> because it's adapted from that.

Me neither... I'll just make them consecutive in the next revision.

If we wanted to be exhaustive, we could run all or most of the test cases
for the strict colocation rules against the loose colocation rules as
well, but I figure that would be a waste of resources when running the
test suite / building the package, as it is a lot of duplicate code.

In general, it sure would be great to have a better overview of what the
current test cases already cover, as the directory names can only get so
long and give only so much description of what is tested, and going
through every README is also a hassle. But that's a whole other topic.

>
>> new file mode 100644
>> index 0000000..5b68cde
>> --- /dev/null
>> +++ b/src/test/test-colocation-loose-separate4/README
>> @@ -0,0 +1,17 @@
>> +Test whether a loose negative colocation rule among two services makes one of
>> +the services migrate to a different recovery node than the other service in
>> +case of a failover of the service's previously assigned node. As the service
>> +fails to start on the recovery node (e.g. insufficient resources), the
>> +failing service is kept on the recovery node.
>
> The description here is wrong. It will be started on a different node
> after the start failure.
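As an aside, for reference: a loose rule in one of these tests' rules_config could look roughly like the sketch below. The section name, property names, and the idea of marking looseness via a strictness flag are illustrative assumptions here, not copied from the patch series:

```
# rules_config -- hypothetical sketch; rule name and property names assumed
colocation: vms-kept-separate
	services vm:101,fa:120001
	affinity separate
	strict 0
```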
ACK

>
>> +
>> +The test scenario is:
>> +- vm:101 and fa:120001 should be kept separate
>> +- vm:101 and fa:120001 are on node2 and node3 respectively
>> +- fa:120001 will fail to start on node1
>> +- node1 has a higher service count than node2 to test that the colocation
>> +  rule is applied even though the scheduler would prefer the less utilized
>> +  node
>> +
>> +Therefore, the expected outcome is:
>> +- As node3 fails, fa:120001 is migrated to node1
>> +- fa:120001 will be relocated to another node, since it couldn't start on
>> +  its initial recovery node

I'll also mention here the node it is migrated to, so that it is clear
that loose colocation rules are free to ignore the rule if adhering to it
would otherwise mean that the service is kept in recovery state.

>
> ---snip 8<---
>
>> diff --git a/src/test/test-colocation-loose-together1/README b/src/test/test-colocation-loose-together1/README
>> new file mode 100644
>> index 0000000..2f5aeec
>> --- /dev/null
>> +++ b/src/test/test-colocation-loose-together1/README
>> @@ -0,0 +1,11 @@
>> +Test whether a loose positive colocation rule makes two services migrate to
>> +the same recovery node in case of a failover of their previously assigned
>> +node.
>> +
>> +The test scenario is:
>> +- vm:101 and vm:102 should be kept together
>> +- vm:101 and vm:102 are both currently running on node3
>> +- node1 and node2 have the same service count to test that the rule is
>> +  applied even though it would usually be balanced between both remaining
>> +  nodes
>> +
>> +Therefore, the expected outcome is:
>> +- As node3 fails, both services are migrated to node2
>
> It's actually node1

ACK

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel