From: Stefan Hanreich
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>, Lukas Wagner
Subject: Re: [pve-devel] [RFC] towards automated integration testing
Date: Mon, 16 Oct 2023 13:20:18 +0200

On 10/13/23 15:33, Lukas Wagner wrote:
> - Additionally, it should be easy to run these integration tests locally
>   on a developer's workstation in order to write new test cases, as well
>   as troubleshooting and debugging existing test cases. The local
>   test environment should match the one being used for automated testing
>   as closely as possible

This would also entail sharing those fixture templates somewhere; do you
already have an idea of how to accomplish this? PBS sounds like a good
option for this, if I'm not missing something.

> As a main mode of operation, the Systems under Test (SUTs)
> will be virtualized on top of a Proxmox VE node.
>
> This has the following benefits:
> - it is easy to create various test setups (fixtures), including but not
>   limited to single Proxmox VE nodes, clusters, Backup servers and
>   auxiliary services (e.g. an LDAP server for testing LDAP
>   authentication)

I can imagine having to set up VMs inside the test setup as well for
various tests. Doing this manually every time could be quite cumbersome
and hard to automate. Do you have a mechanism in mind for deploying VMs
inside the test system as well? Again, PBS could be an interesting
option for this, imo.
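To make this a bit more concrete, here is a rough sketch of how a
fixture definition could express such nested test VMs; all keys and
names below are hypothetical and not taken from the RFC's actual schema:

    # hypothetical fixture definition; schema and key names are made up
    [fixture]
    name = "pve-single-node-with-guest"

    # the SUT itself, instantiated from a shared fixture template
    [[fixture.nodes]]
    type = "pve"
    template = "pve-8.0-default"

    # a guest deployed *inside* the SUT, restored from a PBS backup,
    # so the inner VMs stay as reproducible as the outer templates
    [[fixture.nodes.guests]]
    vmid = 100
    source = "backup:vm/100/2023-10-01T00:00:00Z"

Restoring the nested guests from backups would avoid having to script
their installation for every test run.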
> In theory, the test runner would also be able to drive tests on real
> hardware, but of course with some limitations (harder to have a
> predictable, reproducible environment, etc.)

Maybe utilizing Aaron's installer for setting up those test systems
could at least produce somewhat identical setups? Although it is really
hard to manage systems with different storage types, network cards, etc.

I've seen GitLab use tags for runners that specify certain capabilities
of systems. Maybe we could introduce something similar here for
different bare-metal systems? E.g. a test case specifies that it needs a
system with the tag `ZFS`, and the runner then runs or skips the test
case on that system accordingly. Managing those tags can introduce quite
a lot of churn though, so I'm not sure whether this would be a good idea.

> The test script is executed by the test runner; the test outcome is
> determined by the exit code of the script. Test scripts could be written

Are you considering capturing output as well? That would make sense when
using assertions at least, so that in case of a failure developers have
a starting point for debugging.

Would it make sense to allow specifying an expected exit code for tests
that are actually supposed to fail, or do you consider this something
that should be handled by the test script itself?

I've refrained from talking about the TOML files too much, since it's
probably too early to say much about them, but they look good so far
from my point of view.

In general this sounds like quite the exciting feature, and the RFC
looks very promising already.

Kind Regards
Stefan