Subject: Re: [pve-devel] [RFC] towards automated integration testing
From: Lukas Wagner
To: Thomas Lamprecht, Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Date: Wed, 18 Oct 2023 10:43:43 +0200
Message-ID: <62512278-42a0-4994-95aa-53904a953034@proxmox.com>
In-Reply-To: <57929886-9ef9-45b9-94c4-f0f66dc2532a@proxmox.com>

On 10/17/23 18:28, Thomas Lamprecht wrote:
> On 17/10/2023 at 14:33, Lukas Wagner wrote:
>> On 10/17/23 08:35, Thomas Lamprecht wrote:
>>> From the top of my head I'd rather do some attribute-based
>>> dependency annotation, so that single tests or whole fixtures can
>>> depend on other single tests or whole fixtures.
>>>
>>
>> The more thought I spend on it, the more I believe that
>> inter-testcase deps should be avoided as much as possible. In unit
>> testing, (hidden)
>
> We don't plan unit testing here though, and the dependencies I
> proposed are the contrary of hidden - rather, explicitly annotated
> ones.
>
>> dependencies between tests are in my experience the no. 1 cause of
>> flaky tests, and I see no reason why this would not also apply to
>> end-to-end integration testing.
>
> Any source on that being the no. 1 source of flaky tests? IMO that
> should not make any difference, in the end you just allow better

Of course I don't have bullet-proof evidence for the 'no. 1' claim;
it's just my personal experience, which comes partly from a former job
(where I was coincidentally also responsible for setting up automated
testing ;) - there it was for a firmware project) and partly from the
work I did for my master's thesis (which was also in the broader area
of software testing). I would say it's simply the consequence of having
multiple test cases manipulate a shared, stateful entity, be it
directly or indirectly via side effects. Things of course get even more
difficult and messy once concurrent test execution enters the picture ;)
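To illustrate the failure mode I mean, here is a deliberately
contrived, minimal sketch (plain Python; every name in it is made up,
and it is not meant as a proposal for the actual implementation):

cluster = {"nodes": []}  # shared, stateful entity the tests manipulate

def test_join_node():
    cluster["nodes"].append("node2")
    assert "node2" in cluster["nodes"]

def test_migrate_vm():
    # Hidden dependency: this only passes if test_join_node ran first.
    # Filter, reorder or parallelize the suite and it becomes flaky.
    assert "node2" in cluster["nodes"]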
> reuse through composition of other tests (e.g., migration builds
> upon the clustering *setup*, not its tests; if I just want to run
> migration, I can do the clustering setup without executing its
> tests).
>
> Not providing that could also mean that one has to move all logic
> into the test script, resulting in a single test per "fixture",
> reducing the granularity and parallelism of some running tests.
>
> I also think that
>
>> I'd suggest to only allow test cases to depend on fixtures. The
>> fixtures themselves could have setup/teardown hooks that allow
>> setting up and cleaning up a test scenario. If needed, we could
>> also have something like 'fixture inheritance', where a fixture can
>> 'extend' another, supplying additional setup/teardown.
>> Example: the 'outermost' or 'parent' fixture might define that we
>> want a 'basic PVE installation' with the latest .debs deployed,
>> while another fixture that inherits from it might set up a storage
>> of a certain type, useful for all tests that require that specific
>> type of storage.
>
> Maybe our disagreement stems mostly from different design pictures
> in our heads; I am probably a bit less fixed (heh) on the fixtures,
> or at least on the naming of that term, and might use "test system"
> or "intra-test system" where, for your design plan, "fixture" would
> be the better word.

I think it's mostly a terminology problem. In my previous definition
of 'fixture' I was maybe too fixated (heh) on it being 'the test
infrastructure/VMs that must be set up/instantiated'. Maybe it helps
to think about it more generally as 'common setup/cleanup steps for a
set of test cases', which *might* include setting up test
infrastructure (although I have not yet figured out how that would be
modeled with the desired decoupling between the test runner and the
test-VM-setup-thingy).
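To make that a bit more concrete, this is roughly the pattern I have
in mind, borrowing pytest's fixture mechanism purely as an
illustration (the helpers deploy_basic_pve_installation, add_storage,
etc. are made up; a sketch, not the proposed implementation):

import pytest

@pytest.fixture(scope="session")
def pve_installation():
    # 'outermost'/'parent' fixture: basic PVE installation with the
    # latest .debs deployed
    pve = deploy_basic_pve_installation()  # made-up helper
    yield pve                              # dependent fixtures/tests run here
    pve.teardown()                         # cleanup afterwards

@pytest.fixture(scope="session")
def zfs_storage(pve_installation):
    # 'inheriting' fixture: extends the parent by depending on it and
    # adding its own setup/teardown for one specific storage type
    storage = pve_installation.add_storage(kind="zfs")  # made-up API
    yield storage
    storage.remove()

def test_backup_to_zfs(zfs_storage):
    ...  # test cases depend only on fixtures, never on other tests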
>> On the other hand, instead of inheritance, a 'role/trait'-based
>> system might also work (composition >>> inheritance, after all) -
>> and maybe that also aligns better with the 'properties' mentioned
>> in your other mail (I mean this here: "ostype=win*", "memory>=10G").
>>
>> This is essentially a very similar pattern as in numerous other
>> testing frameworks (xUnit, pytest, etc.); I think it makes sense to
>> build upon this battle-proven approach.
>
> Those are all unit-testing tools though, and we do that already in
> the sources; IIRC they do not really provide what we need here.
> While starting out simple(r) and avoiding too much complexity
> certainly has its merits, I don't think we should try to draw/align
> too many parallels with those tools for our case here.
>
> In summary, the most important point for me is a test system that is
> decoupled from the automation system managing it, ideally such that
> I can decide relatively flexibly on manual runs. IMO that should not
> be too much work, and it guarantees clean-cut APIs from which future
> development or integration will surely benefit too.
>
> The rest is possibly hard to determine clearly at this stage, as
> it's easy (at least for me) to get lost in different understandings
> of terms and design perceptions, but hard to convey those very
> clearly about "pipe dreams". So at this stage I'll cease adding
> discussion churn until there's something more concrete that I can
> grasp on my own terms (through reading/writing code) - but that
> should not deter others from still giving input in the meantime.

Agreed. I think we agree on the most important requirements/aspects of
this project, and that's a good foundation for my upcoming efforts. At
this point, the best move forward for me is to start experimenting
with some ideas and begin with the actual implementation. Once I have
something concrete to show, be it a prototype or some sort of minimum
viable product, it will be much easier to discuss further details and
design aspects.

Thanks!

--
- Lukas