From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Daniel Kral <d.kral@proxmox.com>
Date: Tue, 29 Apr 2025 10:54:50 +0200
Subject: Re: [pve-devel] [PATCH ha-manager 14/15] test: ha tester: add test cases in more complex scenarios
Message-ID: <01d4c734-fda9-4e9f-9d9e-f39cdc83704e@proxmox.com>
In-Reply-To: <20250325151254.193177-16-d.kral@proxmox.com>
References: <20250325151254.193177-1-d.kral@proxmox.com>
 <20250325151254.193177-16-d.kral@proxmox.com>

On 25.03.25 16:12, Daniel Kral wrote:
> diff --git a/src/test/test-crs-static-rebalance-coloc1/README b/src/test/test-crs-static-rebalance-coloc1/README
> new file mode 100644
> index 0000000..c709f45
> --- /dev/null
> +++ b/src/test/test-crs-static-rebalance-coloc1/README
> @@ -0,0 +1,26 @@
> +Test whether a mixed set of strict colocation rules in conjunction with the
> +static load scheduler with auto-rebalancing are applied correctly on service
> +start enabled and in case of a subsequent failover.
> +
> +The test scenario is:
> +- vm:101 and vm:102 are non-colocated services
> +- Services that must be kept together:
> +  - vm:102, and vm:107

Even if going for serial commas, AFAIK it's not allowed when there are only
two items listed.

> +  - vm:104, vm:106, and vm:108
> +- Services that must be kept separate:
> +  - vm:103, vm:104, and vm:105
> +  - vm:103, vm:106, and vm:107
> +  - vm:107, and vm:108
> +- Therefore, there are consistent interdependencies between the positive and
> +  negative colocation rules' service members
> +- vm:101 and vm:102 are currently assigned to node1 and node2 respectively
> +- vm:103 through vm:108 are currently assigned to node3
> +
> +Therefore, the expected outcome is:
> +- vm:101, vm:102, vm:103 should be started on node1, node2, and node3
> +  respectively, as there's nothing running on there yet
> +- vm:104, vm:106, and vm:108 should all be assigned on the same node, which
> +  will be node1, since it has the most resources left for vm:104
> +- vm:105 and vm:107 should both be assigned on the same node, which will be
> +  node2, since both cannot be assigned to the other nodes because of the
> +  colocation constraints

Would be nice to have a final sentence for the last part of the test:
"As node3 fails, ..."

---snip 8<---

> diff --git a/src/test/test-crs-static-rebalance-coloc2/README b/src/test/test-crs-static-rebalance-coloc2/README
> new file mode 100644
> index 0000000..1b788f8
> --- /dev/null
> +++ b/src/test/test-crs-static-rebalance-coloc2/README
> @@ -0,0 +1,16 @@
> +Test whether a set of transitive strict negative colocation rules, i.e. there's

I don't like the use of "transitive" here, as that comes with connotations
that just don't apply here in general; I'd prefer "pairwise".

> +negative colocation relations a->b, b->c and a->c, in conjunction with the

The relations are symmetric, so I'd write a<->b, etc.

> +static load scheduler with auto-rebalancing are applied correctly on service
> +start and in case of a subsequent failover.
> +
> +The test scenario is:
> +- vm:101 and vm:102 must be kept separate
> +- vm:102 and vm:103 must be kept separate
> +- vm:101 and vm:103 must be kept separate
> +- Therefore, vm:101, vm:102, and vm:103 must be kept separate
> +
> +Therefore, the expected outcome is:
> +- vm:101, vm:102, and vm:103 should be started on node1, node2, and node3
> +  respectively, just as if the three negative colocation rule would've been
> +  stated in a single negative colocation rule

This would already happen with just rebalancing though. I.e. even if I remove
the colocation rules, the part of the test output before node3 fails looks
exactly the same. You could add dummy services in between or give the nodes
rather large differences in available resources to make the colocation rules
actually matter for the test.
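To illustrate the dummy service variant (all numbers made up, and assuming
the coloc2 test directory has a static_service_stats file in the same format
as the coloc3 one quoted further below), something like

  {
    "vm:101": { "maxcpu": 8, "maxmem": 16000000000 },
    "vm:102": { "maxcpu": 8, "maxmem": 16000000000 },
    "vm:103": { "maxcpu": 8, "maxmem": 16000000000 },
    "vm:201": { "maxcpu": 32, "maxmem": 96000000000 },
    "vm:202": { "maxcpu": 32, "maxmem": 96000000000 }
  }

where vm:201 and vm:202 are hypothetical dummy services without colocation
rules (they'd also need to be added to the service config, of course) that
are big enough to eat up most of the headroom on two of the nodes. With that,
the balancer on its own shouldn't arrive at the same placement for vm:101
through vm:103 anymore, so the negative colocation rules would be what
actually forces the spread across all three nodes.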
> +- As node3 fails, vm:103 cannot be recovered

---snip 8<---

> diff --git a/src/test/test-crs-static-rebalance-coloc3/README b/src/test/test-crs-static-rebalance-coloc3/README
> new file mode 100644
> index 0000000..e54a2d4
> --- /dev/null
> +++ b/src/test/test-crs-static-rebalance-coloc3/README
> @@ -0,0 +1,14 @@
> +Test whether a more complex set of transitive strict negative colocation rules,
> +i.e. there's negative colocation relations a->b, b->c and a->c, in conjunction

Same comments as above regarding the wording.

> +with the static load scheduler with auto-rebalancing are applied correctly on
> +service start and in case of a subsequent failover.
> +
> +The test scenario is:
> +- Essentially, all 10 strict negative colocation rules say that, vm:101,
> +  vm:102, vm:103, vm:104, and vm:105 must be kept together

s/together/separate/

> +
> +Therefore, the expected outcome is:
> +- vm:101, vm:102, and vm:103 should be started on node1, node2, node3, node4,
> +  and node5 respectively, just as if the 10 negative colocation rule would've
> +  been stated in a single negative colocation rule
> +- As node1 and node5 fails, vm:101 and vm:105 cannot be recovered

Again, it seems like the colocation rules don't actually matter for the
first half of the test.

---snip 8<---

> diff --git a/src/test/test-crs-static-rebalance-coloc3/static_service_stats b/src/test/test-crs-static-rebalance-coloc3/static_service_stats
> new file mode 100644
> index 0000000..d9dc9e7
> --- /dev/null
> +++ b/src/test/test-crs-static-rebalance-coloc3/static_service_stats
> @@ -0,0 +1,5 @@
> +{
> +  "vm:101": { "maxcpu": 8, "maxmem": 16000000000 },
> +  "vm:102": { "maxcpu": 4, "maxmem": 24000000000 },
> +  "vm:103": { "maxcpu": 2, "maxmem": 32000000000 }

vm:104 and vm:105 are not defined here.

> +}
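Something like this would complete the file (hypothetical values, just for
illustration):

  {
    "vm:101": { "maxcpu": 8, "maxmem": 16000000000 },
    "vm:102": { "maxcpu": 4, "maxmem": 24000000000 },
    "vm:103": { "maxcpu": 2, "maxmem": 32000000000 },
    "vm:104": { "maxcpu": 4, "maxmem": 8000000000 },
    "vm:105": { "maxcpu": 2, "maxmem": 4000000000 }
  }

Not sure what the static scheduler falls back to for services without an
entry, but for a test that is all about static stats it seems better to be
explicit.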