Date: Mon, 6 Nov 2023 10:48:56 +0100
From: Dominik Csapak
To: Thomas Lamprecht, Proxmox VE development discussion
Subject: Re: [pve-devel] [RFC PATCH cluster/guest-common/qemu-server/container/manager] add backend profile support

On 11/6/23 09:53, Thomas Lamprecht wrote:
> On 06/11/2023 at 09:17, Dominik Csapak wrote:
>> On 11/4/23 09:34, Thomas Lamprecht wrote:
>>> On 03/11/2023 12:53, Dominik Csapak wrote:
>>>> Using a single section config for both VMs and CTs makes handling the
>>>> properties a bit weird. For now I prefix the options with "$type_",
>>>> so VM options are e.g. 'vm_ostype', 'vm_name', and so on, while
>>>> container options are 'ct_ostype', 'ct_hostname', etc.
>>>
>>> That properties are shared by different sections by default is IMO not
>>> really ideal in general, it's also making the storage API and its docs
>>> rather hard to understand & work with.
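
Just to make that concrete: the prefixing from the RFC boils down to roughly
this shape in the profile plugin's property definitions (a trimmed sketch
with a made-up package name, not the literal patch):

    package PVE::GuestProfiles::Plugin;   # hypothetical name, sketch only

    use strict;
    use warnings;

    use base qw(PVE::SectionConfig);

    sub properties {
        return {
            # VM-only options get a 'vm_' prefix ...
            vm_ostype   => { type => 'string', optional => 1 },
            vm_name     => { type => 'string', optional => 1 },
            # ... and the container variants a 'ct_' prefix
            ct_ostype   => { type => 'string', optional => 1 },
            ct_hostname => { type => 'string', optional => 1 },
        };
    }

    1;

So every type-specific option exists twice under a different name, which is
what makes the generated docs and API parameters hard to read.
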
>>>
>>> One option could be to opt into a newer behavior, e.g., via some
>>> property, or getter, that the section config implementation needs to
>>> set, or override and return true, which then makes all properties
>>> isolated if not explicitly marked as shared (via another new property).
>>> That way we could use it now, and move over existing ones where it
>>> makes sense, without risking wide breakage.
>>>
>>
>> Ok, so if I'm reading this right, having a config per type is not really
>> an option for you? (That would be nice and easy, without having to
>> extend the section config at all, but still get most of the upsides.)
>
> For some definition of "nice", as I find the resulting docs and APIs
> not really nice.
>
> I do not want to block a per-type config outright, it just feels wrong
> to me to have section configs with types, and then not be able to use
> that for different types of the (in spirit) same stuff, so I'd much
> rather see section config adapted to be able to actually handle
> different types better through the whole stack.

Makes sense.

>
>> The hard part of extending the section config is how to handle the API,
>> since when we want to have a 'clean' API per type, we then have
>
> For JSON Schema this could be mapped with oneOf support, i.e., for
> properties that exist for multiple types as different schemas, one
> would create a oneOf with two sub-schemas.
>
> https://json-schema.org/understanding-json-schema/reference/combining#oneOf
>
> That would be clean from an API POV and could be reused for the storage
> API then, where we already have this 1:1, and while not perfect it
> actually uses section configs for what they were made for and works
> well from an API and user POV – it would be of much more use to
> improve its rough edges compared to just going the easy way (for the
> dev, not users) again.

Ok, but AFAICS we don't have support for that yet, so we'd have to
implement it first... If we want to go in that direction I'll implement
that first, of course.

>
>> to also split the API per type? (Like e.g. we do with the mapping of
>> PCI/USB.) And then what's the gain of having both types in a single
>> config (besides a single file instead of two)?
>
> An API split is not required, so this is obsolete. Besides, a single
> config makes it easier to ensure that IDs are unique (to avoid
> confusion), so even if we did split, there would still be some advantages.
>
>
>>>> Using the same config/plugin system also makes using it a bit weird.
>>>> We have to register/init them in the API where they're used, but for the
>>>> CLI we have to register only the available type and then init. This
>>>> makes it necessary to always set 'allow_unknown' while parsing the
>>>> config so that 'qm' doesn't trip over the container profiles...
>>>>
>>>> A fix for both could be to separate the CT/VM configs into two files
>>>> (similar to how we did the PCI/USB mapping configs); that would fix the
>>>> prefixing, as well as the register/init issue (we'd have to have
>>>> separate APIs then, of course).
>>>>
>>>> Another fix would be to extend the section config to allow different
>>>> properties of the same name for different types. Should be possible,
>>>> although we have to be careful not to break existing ones, and also
>>>> the API interface has to be separate for each type then (we cannot
>>>> really have conflicting API parameter schemas for the same name?)
>>>>
>>>> We could also go in a completely different direction and create a
>>>> config per profile? (Like we handle VM configs.)
>>>> The downside of that is that the
>>>> current guest config handling part is partly in pmxcfs, so we'd have to
>>>> make that either more generic, or duplicate it for profiles.
>>>
>>> I don't see how this would have anything to do with pmxcfs and VMIDs,
>>> which map to an actual guest instance and thus need special treatment
>>> compared to just some profile that can, e.g., live in
>>> /etc/pve/guest-profiles/ as .profile files (the extension is just an
>>> example), and be handled only via Perl – profiles are only used in
>>> the profile management API, where it's ok to fully parse one, and
>>> in guest creation POST calls, so even cfs_register* would probably be
>>> overkill.
>>
>> You're right, I did not explain well what I meant. We currently do
>> some things in pmxcfs for VM configs (e.g. read/write/list) and
>> we'd have to use the filesystem layer for the profiles if done in Perl
>> (which is what I meant with 'duplicating').
>
>
> Yeah, as you've followed up, ser/de still happens in Perl.
> So if that route were chosen, it'd avoid touching CFS at first; we can
> still add it to the tracked files for improved versioning & caching
> there later on. If ever needed, removing it again is a bit harder.

Makes sense.

>
>>> Having a file per profile makes some things relatively easy, but doesn't
>>> fit the commonly used section config; let's see if others have input
>>> (i.e., actively ask one/some of fabian/wolfgang/fiona).
>>
>> I agree, I just mentioned it for completeness' sake, but yes, let's wait
>> for another opinion.
>>
>
> From my gut feeling I'd like that a bit more if we were starting out
> newly, and I guess the VMID guest configs can be seen as prior art here,
> but those are always a bit special (as they map to instances, not
> something that we just access or that provides some profile), and I would
> like to avoid adding too many special cases – i.e., ideally we have one
> way with which we can handle almost all of our needs, as otherwise the
> mental cost for devs and users alike will just grow.
>
>
>>>> * Is the priv path ok? /mapping/profiles/ feels a bit weird
>>>
>>> What feels weird? s/profile/guest-profile/ might be a bit more telling
>>> though.
>>
>> Weird that it's put in the /mapping/ dir, but if that's not an issue
>> for you then I'll use /mapping/guest-profile(s)/
>> (idk if we should prefer singular or plural for that...)
>
> Didn't we have the same discussion for mapping itself (i.e., map vs. maps
> vs. mapping vs. mappings)?
>
> IIRC singular was chosen then due to being used more often and to avoid
> spending too much time painting that bike shed, so I would do the same
> here and go for singular.
>

True, I remember now ;)

>>
>>>
>>>> * We should probably introduce a 'meta' property for CT like we have
>>>> for VMs? We could record the profile there and also e.g. the template
>>>> used to set the container up.
>>>
>>> I assume you mean in the CT config; IIRC there is even an older patch
>>> floating around for adding that for recording the ctime.
>>>
>>> But it shouldn't be needed, as meta stuff should be only informational;
>>> profile names can change (delete/re-add) or get modified, so saving the
>>> ID has no use. Logging it to the task log is enough for starters, I
>>> think.
>>>
>>
>> Makes sense, I just thought maybe recording the template name + ctime
>> would make sense anyway.
>>
>> Should I just log it for VMs too, or is it ok to put it in the meta
>> option there?
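
To make the question concrete: for VMs, 'meta' is IIRC just a small format
string recording creation info (ctime and the QEMU version used at creation
time), so the CT/profile variant I have in mind would be something along
these lines – purely a sketch, nothing of this is implemented:

    # hypothetical meta-like format for recording creation info
    my $profile_meta_fmt = {
        ctime => {
            type => 'integer',
            description => "Creation timestamp (UNIX epoch).",
            optional => 1,
        },
        profile => {
            type => 'string',
            format => 'pve-configid',
            description => "Profile the guest was created from.",
            optional => 1,
        },
        template => {
            type => 'string',
            description => "Template/appliance the container was set up from.",
            optional => 1,
        },
    };
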
>
> I'd start out with just logging it. IMO this is mostly relevant for the
> creation event, as it's only applied there (i.e., profile updates do
> nothing for already instantiated guests), so adding it to the config, even
> if just in the meta-data part, feels wrong to me – and here again, adding
> that later, e.g., after user demand shows up, is way easier than removing
> it again.

Ok, fine with me.

>
>>>> * There is still an issue with the docs generation, I'll have to look
>>>> into it.
>>>> * I'm not quite sure what the UI for creating/editing profiles should
>>>> look like. For creating, I could imagine a 'create profile from guest'
>>>> type of thing (thanks @mira for the idea), but for editing or creating
>>>> from scratch I'm a bit conflicted. We could simply show the profiles
>>>> in the tree, and reuse the VM hw/options panels for that, but does
>>>> that seem overkill? We could also leave it API-only for now, and
>>>> make the GUI later? (Using it in the wizard would also not be trivial,
>>>> but there the constraints are rather straightforward.)
>>>
>>> - view: a simple grid with profile ID, profile type and the (unavoidable)
>>> profile comment as the tree columns (for starters). The admins can
>>> set a descriptive ID and comment to know what a profile is intended
>>> for. Showing all profile values is IMO bad UX, as it becomes crowded
>>> super fast for more than a few trivial profiles.
>>
>> Maybe an additional button that simply shows the config, akin to the
>> 'Show Configuration' button for backups?
>>
>> Might be overkill if we have an edit panel though...
>
> Could be done, but yes, a bit redundant if there's an edit panel, as
> unlike with backup jobs there's no subject that can influence the job too,
> like the per-disk opt-out of backups for guests, which was IIRC one of the
> main reasons for the dedicated job view there (as that information is
> simply not present in the edit panel of backup jobs).

IIUC we mean different things; I meant the 'Show Configuration' button of
individual backups that gets read from the archive/PBS snapshot.

But yes, I agree that if we have an edit panel it does not provide any
additional info (also, we could do it later if there is demand).
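
PS: regarding the oneOf idea further up – since our JSON schema code has no
oneOf support yet, this is only a sketch of the direction, but I'd imagine a
property that differs between VM and container profiles would end up looking
roughly like this in our Perl schema hashes (value lists trimmed, just for
illustration):

    my $profile_properties = {
        ostype => {
            description => "OS type; allowed values depend on the profile type.",
            optional => 1,
            oneOf => [
                {
                    # sub-schema as a VM profile would use it
                    type => 'string',
                    enum => ['l26', 'win11', 'other'],
                },
                {
                    # sub-schema as a container profile would use it
                    type => 'string',
                    enum => ['debian', 'ubuntu', 'alpine'],
                },
            ],
        },
    };

That way the parameter keeps a single name in the API, while docs and
validation can still differ per profile type.
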