public inbox for pve-devel@lists.proxmox.com
From: Dominik Csapak <d.csapak@proxmox.com>
To: Thomas Lamprecht <t.lamprecht@proxmox.com>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	Markus Frank <m.frank@proxmox.com>
Subject: Re: [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
Date: Thu, 4 May 2023 10:57:24 +0200
Message-ID: <10508419-a110-0e52-242d-a20c2e9f7243@proxmox.com>
In-Reply-To: <43d62e1c-8555-d641-2788-9b15115d683b@proxmox.com>

On 5/4/23 10:42, Thomas Lamprecht wrote:
> Am 04/05/2023 um 10:31 schrieb Dominik Csapak:
>> On 5/4/23 10:13, Thomas Lamprecht wrote:
>>> Am 03/05/2023 um 13:26 schrieb Dominik Csapak:
>>>> Just a short comment, since this series overlaps a bit with my
>>>> cluster resource mapping series (I plan on sending a v4 soon).
>>>>
>>>> I'd prefer to have the configuration endpoints for mappings bundled in a subdirectory,
>>>> so instead of /nodes/<node>/dirs/ I'd put it in /nodes/<node>/mapping/dirs/
>>>> (or /nodes/<node>/map/dirs).
>>>>
>>>> @thomas, @fabian, any other input on that?
>>>>
>>>
>>> huh? Aren't mappings per definition cluster-wide, i.e. /cluster/resource-map/<mapping-id>,
>>> which then allows adding/updating the mapping of a resource on a specific node?
>>> A node-specific path makes no sense to me; at most it would if adding/removing a mapping
>>> were completely decoupled from adding/removing/updating its entries – but that seems
>>> convoluted from a usage POV and easy to get out of sync with the actual mapping list.
>>
>> in Markus' series the mappings are only ever per node, so each node has its
>> own dir mapping
> 
> Every resource mapping is always per node, so that's not really changing anything.
> 

I meant there is no cluster aspect in the current version of Markus' series at all.

> Rather, what about migrations? It would be simpler from a migration and ACL POV to have
> it cluster-wide.

Sure, but how we get the info during migration is just an implementation detail, and
for that it shouldn't matter where the configs/API endpoints live.

> 
>>
>> in my series, the actual config was cluster-wide, but the API endpoints to configure
>> them were sitting in the node path (e.g. /nodes/<node>/hardware-map/pci/*)
> 
> Please no.

It was that way since the very first version; it would have been nice to get that feedback
earlier.

>   
>> the reason is that checking the validity of the mapping (at least for creating/updating)
>> needs to happen on the node itself anyway, since only that node can check it
>> (e.g. for PCI devices: whether they exist, whether the IDs are correct, etc.)
> 
> That check is most relevant on using the map, not on updating/configuring it, as there
> the UX of getting the right one can be solved by providing a node selector per entry
> that then loads the actually available devices/resources on that node.

I see what you mean; personally I'd still prefer doing these checks on creation, to
prevent the user from accidentally (or intentionally) creating wrong entries
(e.g. when using it via the API/CLI).
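
To illustrate what I mean by checking on creation: the create/update handler would run
something like this on the target node (just a rough sketch, assert_valid is a made-up
name and not actual code from my series; PVE::SysFSTools::pci_device_info() is the
existing helper):

    use PVE::SysFSTools;

    # sketch of a create/update-time check for a PCI mapping entry,
    # running on the node the entry is for
    sub assert_valid {
        my ($entry) = @_;    # e.g. { path => '0000:01:00.0', id => '10de:1234' }

        my $info = PVE::SysFSTools::pci_device_info($entry->{path});
        die "pci device '$entry->{path}' does not exist on this node\n"
            if !defined($info);

        # a vendor/device ID check would go here, comparing $entry->{id}
        # against what sysfs reports for that slot
    }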

Yes, in the GUI it shouldn't be a problem, since we can fill in the info from the correct node.

> 
>>
>> we *could* put them into the cluster API path, but we'd need to send a node parameter
>> along and forward the call there anyway, so that wouldn't really make a difference
> 
> no need for that, see above.

There is a need for the node parameter, because you always need to know which node the
mapping is for anyway ;) or did you mean the 'forward' part?

> 
>>
>> for reading the mappings, that could be done there, but in my series, in the GUI at least,
>> I have to make a call to each node to get the current state of the mapping
>> (e.g. whether the PCI device is still there)
> 
> For now that's not ideal but OK; in the future I'd rather go in the direction of broadcasting
> some types of HW resources via the pmxcfs KV store, and then this isn't an issue anymore.

Yeah, that can be done, but the question is whether we want to broadcast the whole PCI/USB
list from all nodes (I don't believe that scales well?)
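
If we do go that way, I'd only broadcast a trimmed-down list; rough sketch of what
I'd imagine (PVE::Cluster::broadcast_node_kv/get_node_kv are the existing KV helpers
that e.g. pvestatd already uses for version info; the key name and field selection
here are made up):

    use JSON;
    use PVE::Cluster;
    use PVE::SysFSTools;

    # on each node, e.g. periodically from pvestatd: broadcast only the
    # fields needed to validate mappings, to keep the payload small
    my $devices = PVE::SysFSTools::lspci();
    my $trimmed = [
        map { { id => $_->{id}, vendor => $_->{vendor}, device => $_->{device} } } @$devices
    ];
    PVE::Cluster::broadcast_node_kv("pci-devices", encode_json($trimmed));

    # on any node, reading another node's broadcast list:
    my $nodename = 'node1';    # whichever node we're interested in
    my $kv = PVE::Cluster::get_node_kv("pci-devices", $nodename);
    my $list = decode_json($kv->{$nodename} // '[]');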

> 
>>
>> whether a mapping exists (globally) is not interesting most of the time; we only need to
>> know whether it exists on a specific node
> 
> that's looking at it backwards: the user and ACLs only care about global mappings; how
> the code implements that is then, well, an implementation detail.

For ACLs, yes, but the user must configure the mapping on a VM (which sits on a specific
node), and there the mapping must exist on that node.
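
I.e. the VM config would only ever reference the mapping by name, and resolving it to a
local device happens on the node the VM runs on (hypothetical syntax, just to illustrate):

    # hypothetical VM config syntax, purely for illustration
    hostpci0: mapping=my-gpu

    # on VM start (or on the migration target), that node looks up its own
    # entry for 'my-gpu' and fails early if the mapping has none for it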

> 
>>
>> also, after seeing Markus' patches, I also leaned more in the direction of splitting
>> my global mapping config into a config per type+node (so node1/usb.conf, node1/pci.conf,
> 
> no, please no forest^W jungle of config trees :/
> 
> A /etc/pve/resource-map/<type>.conf must be enough, even a /etc/pve/resource-map.conf
> should be, tbh., but I could imagine that splitting per resource type makes some (schema)
> things a bit easier and reduces some bug potential, so no hard feelings on having one
> cluster-wide config per type; but really not more.
> 

Sure, a single config is OK for me (per-type only would be weird, since reusing the
section config would only ever have a single type, and we'd have to encode the
node name somehow).
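
E.g. a single /etc/pve/resource-map.conf with the node encoded per map entry could look
roughly like this (hypothetical format, just to illustrate the single-file variant):

    # hypothetical section-config format, made up for illustration
    pci: my-gpu
        map node=node1,path=0000:01:00.0,id=10de:1234
        map node=node2,path=0000:02:00.0,id=10de:1234

    usb: my-dongle
        map node=node1,id=0781:5583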

Thread overview: 19+ messages
2023-04-25 10:21 [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Markus Frank
2023-04-25 10:21 ` [pve-devel] [PATCH docs v4 1/6] added shared filesystem doc for virtio-fs & virtio-9p Markus Frank
2023-04-25 10:21 ` [pve-devel] [PATCH access-control v4 2/6] added acls for Shared Files Directories Markus Frank
2023-05-04  8:24   ` Fabian Grünbichler
2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories Markus Frank
2023-05-03 11:26   ` Dominik Csapak
2023-05-04  8:13     ` Thomas Lamprecht
2023-05-04  8:31       ` Dominik Csapak
2023-05-04  8:42         ` Thomas Lamprecht
2023-05-04  8:57           ` Dominik Csapak [this message]
2023-05-04 10:21             ` Thomas Lamprecht
2023-05-09  9:31               ` Dominik Csapak
2023-05-04  8:24   ` Fabian Grünbichler
2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 4/6] added Shared Files tab in Node Settings Markus Frank
2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 5/6] added options to add virtio-9p & virtio-fs Shared Filesystems to qemu config Markus Frank
2023-04-25 10:21 ` [pve-devel] [PATCH qemu-server v4 6/6] feature #1027: virtio-9p & virtio-fs support Markus Frank
2023-05-04  8:39   ` Fabian Grünbichler
2023-05-05  8:27     ` Markus Frank
2023-05-04  8:24 ` [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Fabian Grünbichler
