From: Lukas Wagner <l.wagner@proxmox.com>
To: Thomas Lamprecht <t.lamprecht@proxmox.com>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [RFC] towards automated integration testing
Date: Wed, 18 Oct 2023 10:43:43 +0200	[thread overview]
Message-ID: <62512278-42a0-4994-95aa-53904a953034@proxmox.com> (raw)
In-Reply-To: <57929886-9ef9-45b9-94c4-f0f66dc2532a@proxmox.com>



On 10/17/23 18:28, Thomas Lamprecht wrote:
> Am 17/10/2023 um 14:33 schrieb Lukas Wagner:
>> On 10/17/23 08:35, Thomas Lamprecht wrote:
>>> From the top of my head I'd rather do some attribute-based dependency
>>> annotation, so that one can depend on single tests or a whole fixture,
>>> or have a whole fixture depend on other single tests or a whole fixture.
>>>
>>
>> The more thought I spend on it, the more I believe that inter-testcase
>> deps should be avoided as much as possible. In unit testing, (hidden)
> 
> We don't plan unit testing here though, and the dependencies I proposed
> are the opposite of hidden ones; rather, they are explicitly annotated.
> 
>> dependencies between tests are in my experience the no. 1 cause of
>> flaky tests, and I see no reason why this would not also apply for
>> end-to-end integration testing.
> 
> Any source on that being the no 1 source of flaky tests? IMO that
> should not make any difference, in the end you just allow better

Of course I don't have bullet-proof evidence for the 'no. 1' claim; it's
just my personal experience, which comes partly from a former job (where
I was, coincidentally, also responsible for setting up automated testing
;) - there it was for a firmware project) and partly from the work I did
for my master's thesis (which was also in the broader area of software
testing).

I would say it's just the consequence of having multiple test cases
manipulate a shared, stateful entity, be it directly or indirectly
via side effects. Of course, things get even more difficult and messy
once concurrent test execution enters the picture ;)
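
To illustrate what I mean, here is a deliberately contrived Rust sketch
(the types and names are made up for illustration only, this is not tied
to any actual runner code): test B only passes because test A happened
to run first on the same target and left state behind.

use std::collections::HashMap;

/// Hypothetical stand-in for a test target (e.g. a PVE node) - just
/// enough state to show the problem, not a real API binding.
struct TestTarget {
    vms: HashMap<u32, String>, // vmid -> node the VM currently lives on
}

impl TestTarget {
    fn new() -> Self {
        Self { vms: HashMap::new() }
    }
    fn create_vm(&mut self, vmid: u32) {
        self.vms.insert(vmid, "node1".into());
    }
    fn migrate_vm(&mut self, vmid: u32, node: &str) {
        // silently does nothing if the VM does not exist - the failure
        // only shows up later, in the assertion of the *other* test
        if let Some(n) = self.vms.get_mut(&vmid) {
            *n = node.into();
        }
    }
}

// Test A leaves VM 100 behind as a side effect ...
fn test_create_vm(t: &mut TestTarget) {
    t.create_vm(100);
    assert!(t.vms.contains_key(&100));
}

// ... and test B silently relies on that. Skip test A, reorder the two,
// or run them against separate targets in parallel, and test B flakes.
fn test_migrate(t: &mut TestTarget) {
    t.migrate_vm(100, "node2");
    assert_eq!(t.vms.get(&100).map(String::as_str), Some("node2"));
}

fn main() {
    let mut t = TestTarget::new();
    test_create_vm(&mut t);
    test_migrate(&mut t); // passes - but only due to the hidden ordering
}

With explicit fixture dependencies instead, the runner can always bring
the target into the required state itself, regardless of test ordering.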

> reuse through composition of other tests (e.g., migration builds
> upon clustering *setup*, not tests; if I just want to run
> migration I can do the clustering setup without executing its tests).
>
> Not providing that could also mean that one has to move all logic
> into the test script, resulting in a single test per "fixture", reducing
> the granularity and parallelism of the running tests.
> 
> I also think that
> 
>> I'd suggest to only allow test cases to depend on fixtures. The fixtures
>> themselves could have setup/teardown hooks that allow setting up and
>> cleaning up a test scenario. If needed, we could also have something
>> like 'fixture inheritance', where a fixture can 'extend' another,
>> supplying additional setup/teardown.
>> Example: the 'outermost' or 'parent' fixture might define that we
>> want a 'basic PVE installation' with the latest .debs deployed,
>> while another fixture that inherits from that one might set up a
>> storage of a certain type, useful for all tests that require that
>> specific type of storage.
> 
> Maybe our disagreement stems mostly from different design pictures in
> our heads; I probably am a bit less fixed (heh) on the fixtures, or at
> least on the naming of that term, and might use 'test system' or
> 'intra-test system' where for your design plan 'fixture' would be the
> better word.

I think it's mostly a terminology problem. In my previous definition of
'fixture' I was maybe too fixated (heh) on it being 'the test
infrastructure/VMs that must be set up/instantiated'. Maybe it helps
to think about it more generally as 'common setup/cleanup steps for a
set of test cases', which *might* include setting up test infra
(although I have not yet figured out a good way to model that with the
desired decoupling between the test runner and the test-VM-setup-thingy).
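
Just to make my current mental model a bit more tangible, a very rough
Rust sketch (all names are made up and purely hypothetical, this is not
meant as an actual design yet): fixtures are things with setup/teardown
hooks that can build on other fixtures, and a test case only names the
fixture it needs, never another test case.

/// Hypothetical fixture abstraction: common setup/cleanup steps shared
/// by a set of test cases. Purely illustrative names.
trait Fixture {
    /// Fixtures this one builds on (composition instead of inheritance).
    fn depends_on(&self) -> Vec<Box<dyn Fixture>> {
        Vec::new()
    }
    fn setup(&self) -> Result<(), String>;
    fn teardown(&self) -> Result<(), String>;
}

/// "Basic PVE installation with the latest .debs deployed".
struct BasePve;

impl Fixture for BasePve {
    fn setup(&self) -> Result<(), String> {
        // would e.g. ask the separate test-VM-setup component to
        // provision the node(s) - here just a placeholder
        println!("provisioning base installation");
        Ok(())
    }
    fn teardown(&self) -> Result<(), String> {
        println!("tearing down base installation");
        Ok(())
    }
}

/// Builds on the base fixture and adds a storage of a certain type.
struct WithStorage {
    kind: &'static str,
}

impl Fixture for WithStorage {
    fn depends_on(&self) -> Vec<Box<dyn Fixture>> {
        vec![Box::new(BasePve)]
    }
    fn setup(&self) -> Result<(), String> {
        println!("setting up {} storage", self.kind);
        Ok(())
    }
    fn teardown(&self) -> Result<(), String> {
        println!("removing {} storage", self.kind);
        Ok(())
    }
}

/// The runner resolves the fixture chain (only one level here, for
/// brevity), runs the setups in order, then the test, then the
/// teardowns in reverse order.
fn run_with_fixture(f: &dyn Fixture, test: impl FnOnce()) -> Result<(), String> {
    let deps = f.depends_on();
    for dep in &deps {
        dep.setup()?;
    }
    f.setup()?;
    test();
    f.teardown()?;
    for dep in deps.iter().rev() {
        dep.teardown()?;
    }
    Ok(())
}

fn main() -> Result<(), String> {
    run_with_fixture(&WithStorage { kind: "lvm-thin" }, || {
        println!("running a storage test against the prepared setup");
    })
}

Whether such a fixture chain is expressed via trait objects like above,
via declarative attributes, or via plain config is of course still
completely open.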

> 
>> On the other hand, instead of inheritance, a 'role/trait'-based system
>> might also work (composition >>> inheritance, after all) - and
>> maybe that also aligns better with the 'properties' mentioned in
>> your other mail (I mean this here:  "ostype=win*", "memory>=10G").
>>
>> This is essentially very similar to the pattern used in numerous other
>> testing frameworks (xUnit, pytest, etc.); I think it makes sense to
>> build upon this battle-proven approach.
> 
> Those are all unit testing tools though, which we already use in the
> sources, and IIRC they do not really provide what we need here.
> While starting out simple(r) and avoiding too much complexity certainly
> has its merits, I don't think we should try to draw/align
> too many parallels with those tools here.
>
> In summary, the most important point for me is a test system decoupled
> from the automation system that manages it, ideally such that I can
> decide relatively flexibly on manual runs. IMO that should not be too
> much work, and it guarantees clean-cut APIs from which future
> development or integration will surely benefit too.
> 
> The rest is possibly hard to determine clearly at this stage, as it's easy
> (at least for me) to get lost in different understandings of terms and
> design perceptions, but hard to convey those very clearly for "pipe dreams",
> so at this stage I'll cease to add discussion churn until there's something
> more concrete that I can grasp on my terms (through reading/writing code),
> but that should not deter others from still giving input at this stage.

Agreed.
I think we agree on the most important requirements/aspects of this
project, and that's a good foundation for my upcoming efforts.

At this point, the best move forward for me is to start experimenting
with some ideas and begin the actual implementation.
Once I have something concrete to show, be it a prototype or some
sort of minimum viable product, it will be much easier to discuss
any further details and design aspects.

Thanks!

-- 
- Lukas



