From: Dominik Csapak <d.csapak@proxmox.com>
To: Thomas Lamprecht <t.lamprecht@proxmox.com>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [RFC PATCH cluster/guest-common/qemu-server/container/manager] add backend profile support
Date: Mon, 6 Nov 2023 10:48:56 +0100
Message-ID: <65c565b8-96fc-4b6d-bc21-f3c8fd138c01@proxmox.com>
In-Reply-To: <cd1192a6-60a1-4e05-9717-da07b4b81044@proxmox.com>

On 11/6/23 09:53, Thomas Lamprecht wrote:
> Am 06/11/2023 um 09:17 schrieb Dominik Csapak:
>> On 11/4/23 09:34, Thomas Lamprecht wrote:
>>> On 03/11/2023 12:53, Dominik Csapak wrote:
>>>> Using a single section config for both VMs and CTs makes handling the
>>>> properties a bit weird. For now I prefix the options with "$type_",
>>>> so VM options are e.g. 'vm_ostype', 'vm_name', and so on, while container
>>>> options are 'ct_ostype', 'ct_hostname', etc.
>>>
>>> That properties are shared by different sections by default is IMO not
>>> really ideal in general; it also makes the storage API and its docs
>>> rather hard to understand & work with.
>>>
>>> One option could be to opt-into a newer behavior, e.g., via some
>>> property, or getter, that the section config implementation needs to
>>> set, or override and return true, which makes then all properties
>>> isolated if not explicitly marked as shared (via another new property),
>>> that way we could use it now, and move over existing ones where it
>>> makes sense, without risking wide breakage.
>>>
>>
>> ok, so if I'm reading this right, having a config per type is not really
>> an option for you? (that would be nice and easy, without having
>> to extend the section config at all, but still get most of the
>> upsides)
> 
> for some definition of "nice", as I find the resulting docs, and APIs
> not really nice.
> 
> I do not want to block a per-type config outright, it just feels wrong
> to me to have section configs with types, and then not be able to use
> that for different types of the (in spirit) same stuff, so I'd much
> rather see section config adapted to be able to actually handle
> different types better through the whole stack.

makes sense
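
e.g., I could imagine the opt-in looking roughly like this in
PVE::SectionConfig (just a sketch, the method name is made up):

    # default in PVE::SectionConfig: keep the current behavior, where
    # properties are shared across all section types
    sub use_isolated_properties { return 0; }

    # a section config implementation opting into the new behavior with
    # isolated per-type properties would then override it:
    sub use_isolated_properties { return 1; }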

> 
>> the hard part of extending the section config is how to handle the API:
>> if we want to have a 'clean' API per type, do we then have
> 
> For JSON schema this could be mapped with oneOf support, i.e., for
> properties that exist for multiple types with different schemas, one
> would create a oneOf with two sub-schemas.
> 
> https://json-schema.org/understanding-json-schema/reference/combining#oneOf
> 
> That would be clean from an API POV and could be reused for the storage
> API then, where we already have this 1:1, and while not perfect, it
> actually uses section configs for what they were made for and works
> well from an API and user POV – it would be of much more use to
> improve its rough edges than to just go the easy way (for the
> dev, not the users) again.

ok, but AFAICS we don't have support for that yet, so we'd have
to implement that first... if we want to go in that direction
I'll implement that first, ofc
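
just so we're on the same page, I'd imagine the resulting property schema
roughly like this (a sketch only – our JSONSchema code has no oneOf
support yet, and the property name/values here are made up):

    # one property shared by the 'vm' and 'ct' profile types, with a
    # separate sub-schema for each:
    ostype => {
        oneOf => [
            {
                type => 'string',
                enum => ['l26', 'win11'],      # VM variant (example values)
                description => "OS type for VM profiles.",
            },
            {
                type => 'string',
                enum => ['debian', 'alpine'],  # CT variant (example values)
                description => "OS type for container profiles.",
            },
        ],
    },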

> 
>> to also split the API per type? (like e.g. we do with the PCI/USB
>> mappings) and then what's the gain of having both types in a single
>> config (besides a single file instead of two)?
> 
> API split is not required, so this is obsolete. Besides, a single
> config makes it easier to ensure that IDs are unique (to avoid
> confusion), so even then there would still be some advantages.
> 
> 
>>>> Using the same config/plugin system also makes using it a bit weird.
>>>> We have to register/init them in the API where they're used, but for the
>>>> CLI we have to register only the available type and then init. This
>>>> makes it necessary to always set 'allow_unknown' while parsing the
>>>> config so that 'qm' doesn't trip over the container profiles...
>>>>
>>>> A fix for both could be to separate the CT/VM configs into two files
>>>> (similar to how we did the PCI/USB mapping configs); that would fix the
>>>> prefixing, as well as the register/init issue (we'd have to have
>>>> separate APIs then, ofc).
>>>>
>>>> Another fix would be to extend the section config to allow different
>>>> properties of the same name for different types. Should be possible,
>>>> although we have to be careful to not break existing ones, and also
>>>> the API interface has to be separate for each type then (we cannot really
>>>> have conflicting API parameter schemas for the same name?)
>>>>
>>>> We could also go in a completely different direction and create a config
>>>> per profile? (like we handle VM configs). Downside of that is that the
>>>> current guest config handling is partly in pmxcfs, so we'd have to
>>>> make that either more generic, or duplicate it for profiles.
>>>
>>> I don't see how this would have anything to do with pmxcfs and VMIDs,
>>> which map to an actual guest instance and thus need special treatment
>>> compared to just some profile that can, e.g., live in a
>>> /etc/pve/guest-profiles/  as <id>.profile (the extension just an
>>> example), and be handled only via perl – profiles are only used in
>>> the profile management API, where it's ok to fully parse one, and
>>> in guest creation POST calls, so even cfs_register* would probably be
>>> overkill.
>>
>> you're right, I did not explain what I meant properly. We currently do
>> some things in pmxcfs for VM configs (e.g. read/write/list) and
>> we'd have to use the filesystem layer for the profiles if done in perl
>> (which is what I meant by 'duplicating')
> 
> 
> Yeah, as you've followed up, ser/de still happens in perl.
> So if that route were chosen, it'd avoid touching CFS at first; we
> can still add it to the tracked files for improved versioning & caching
> later on, if ever needed – removing it again is a bit harder.

makes sense
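
for reference, I'd imagine the perl-only handling roughly like this
(a sketch only; the directory, extension and 'key: value' line format
are just examples, nothing is decided yet):

    use PVE::Tools;

    # hypothetical file-per-profile layout:
    # /etc/pve/guest-profiles/<id>.profile
    my $profile_dir = '/etc/pve/guest-profiles';

    sub read_profile {
        my ($id) = @_;
        my $raw = PVE::Tools::file_get_contents("$profile_dir/$id.profile");
        my $profile = {};
        for my $line (split(/\n/, $raw)) {
            next if $line =~ /^\s*(?:#|$)/;   # skip comments & blank lines
            my ($key, $value) = split(/:\s*/, $line, 2);
            $profile->{$key} = $value;
        }
        return $profile;
    }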

> 
>>> Having a file per profile makes some things relatively easy, but doesn't
>>> fit the commonly used section config; let's see if others have input
>>> (i.e., actively ask one/some of fabian/wolfgang/fiona).
>>
>> I agree, I just mentioned it for completeness' sake, but yes, let's wait
>> for another opinion
>>
> 
> From my gut feeling, I'd like that a bit more if we were starting out
> fresh, and I guess the VMID guest configs can be seen as prior art here,
> but those are always a bit special (as they map to instances, not
> something that we just access or that provides some profile), and I would
> like to avoid adding too many special cases – i.e., ideally we have one
> way with which we can handle almost all of our needs, as otherwise the
> mental cost, for devs and users alike, will just grow.
> 
> 
>>>> * Is the priv path ok? /mapping/profiles/<id> feels a bit weird
>>>
>>> What feels weird? s/profile/guest-profile/ might be a bit more telling
>>> though.
>>
>> weird that it's put in the /mapping/ dir, but if that's not an
>> issue for you then I'll use /mapping/guest-profile(s)/<id>
>> (idk if we should prefer singular or plural for that...)
> 
> didn't we have the same discussion for mapping itself (i.e., map vs. maps
> vs. mapping vs. mappings)?
>
> IIRC singular was chosen then, due to being used more often and to avoid
> spending too much time painting that bike shed, so I would do the same
> here and go for singular.
> 

true, I remember now ;)

>>
>>>
>>>> * We should probably introduce a 'meta' property for CTs like we have for
>>>>     VMs? We could record the profile there and also e.g. the template used
>>>>     to set the container up.
>>>
>>> I assume you mean in the CT config, IIRC there is even an older patch
>>> floating around for adding that for recording the ctime.
>>>
>>> But it shouldn't be needed, as meta stuff should be only informational,
>>> profile names can change (delete/re-add) or get modified, so saving the
>>> ID has no use, logging it to the task-log is enough for starters, I think.
>>>
>>
>> makes sense, I just thought maybe recording the template name + ctime would
>> make sense anyway.
>>
>> should I just log it for VMs too, or is it ok to put it in the meta option
>> there?
> 
> I'd start out with just logging it; IMO this is mostly relevant for the
> creation event, as it's only applied there (i.e., profile updates do
> nothing for already instantiated guests), so adding it to the config, even
> if just in the meta-data part, feels wrong to me – and here again, adding
> that later, e.g., after user demand shows up, is way easier than removing
> it again.

ok, fine with me
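
i.e., in the create worker that'd then just be something like this
(sketch only, with '$profile' as the hypothetical profile parameter –
worker output ends up in the task log anyway):

    # record which profile seeded the config in the creation task log
    print "applying guest profile '$profile'\n" if defined($profile);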

> 
>>>> * There is still an issue with the docs generation, I'll have to look
>>>>     into it
>>>> * I'm not quite sure what the UI for creating/editing profiles should
>>>>     look like. For creating, I could imagine a 'create profile from guest'
>>>>     type of thing (thanks @mira for the idea), but for editing or creating
>>>>     from scratch I'm a bit conflicted. We could simply show the profiles
>>>>     in the tree, and reuse the VM hw/options panels for that, but does
>>>>     that seem overkill? We could also leave that API-only for now, and
>>>>     make the GUI later? (Using it in the wizard would also not be trivial,
>>>>     but there the constraints are rather straightforward)
>>>
>>> - view: simple grid with profile ID, profile type and the (unavoidable)
>>>     profile comment as the tree columns (for starters). The admins can
>>>     set a descriptive ID and comment to know what a profile is intended
>>>     for. Showing all profile values is IMO bad UX, as it becomes crowded
>>>     super fast for more than a few trivial profiles.
>>
>> maybe an additional button that simply shows the config, akin to the
>> 'show configuration' button for backups?
>>
>> might be overkill if we have an edit panel though...
> 
> could be done, but yes, a bit redundant if there's an edit panel, as unlike
> with backup jobs there's no subject that can influence the job too, like
> the per-disk opt-out of backups for guests, which was IIRC one of the main
> reasons for the dedicated job-view there (as that information is simply not
> present in the edit panel of backup jobs).

IIUC we mean different things, I meant the 'Show configuration' button of
individual backups, where the config gets read from the archive/PBS snapshot

but yes, I agree that if we have an edit panel it does not provide
any additional info (also, we could do it later if there is demand)






