* [pve-devel] training week: students feedback/requests
@ 2021-06-04  3:21 aderumier
  2021-06-04  7:52 ` Thomas Lamprecht
  0 siblings, 1 reply; 3+ messages in thread

From: aderumier @ 2021-06-04 3:21 UTC (permalink / raw)
To: pve-devel

Hi,

I just finished the last training week session; here is some of the
students' feedback/requests:

- Add support for offline VM migration with local USB or PCI devices.
  One student has the same special device on multiple hosts and wants
  to be able to move a VM offline when a host needs maintenance.
  It's currently possible with HA anyway.

- Allow VM migration with a cloud-init drive on local storage,
  without needing to replicate it. I think it could be pretty easy to
  regenerate the cloud-init drive on the target host if a local
  storage with the same name exists there.

- Allow online VM migration with specific USB devices.
  Maybe a little more complex: a user has special USB devices (Google
  Coral USB TensorFlow accelerators) on multiple hosts. I think it
  could be possible to unplug the device before migration and
  reattach it after migration. (The application running inside the VM
  is able to handle this.) So maybe adding an option on the USB
  device like "allow migrate" could be enough?

- Be able to move VMs from a dead node.
  Currently the only way is to move the VM config files manually.
  Maybe adding a special wizard, only for root, with a lot of
  warnings, could be great.

- Backups: add some kind of lock/queue when running a single schedule
  of "vzdump -all ..".
  Currently, it launches vzdump on all nodes at the same time. If a
  user has a lot of nodes, this can easily flood the backup storage
  or the backup storage network.
  It could be great to be able to define something like: "the backup
  storage is able to handle X backup jobs in parallel".

- VM start order: be able to add VM dependencies, e.g. vm2 needs to
  wait until vm1 is started (and, if the guest agent is used, wait
  until the agent is running, to be sure that vm1 is fully booted).

GUI:

- Display node versions in the datacenter summary node list.

- In VM notes: if the user adds an http://... link, display it as a
  clickable link.

- Add SAML authentication.

Alexandre

^ permalink raw reply	[flat|nested] 3+ messages in thread
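The per-storage cap on parallel backup jobs requested above could be
sketched roughly like this (a Python sketch only; the `BackupScheduler`
name is hypothetical and PVE's actual vzdump scheduling works
differently — the point is just a bounded semaphore shared by all jobs
targeting one storage):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class BackupScheduler:
    """Cap concurrent backup jobs per storage (hypothetical sketch)."""

    def __init__(self, max_parallel):
        # BoundedSemaphore blocks acquirers once max_parallel jobs run.
        self._sem = threading.BoundedSemaphore(max_parallel)
        self._lock = threading.Lock()
        self.active = 0
        self.peak = 0  # highest concurrency observed, for verification

    def run_job(self, job):
        with self._sem:  # blocks until a backup slot is free
            with self._lock:
                self.active += 1
                self.peak = max(self.peak, self.active)
            try:
                return job()
            finally:
                with self._lock:
                    self.active -= 1

sched = BackupScheduler(max_parallel=2)

def fake_vzdump():
    time.sleep(0.05)  # stand-in for the actual backup work
    return "ok"

# Eight nodes all fire their scheduled backup at once, but at most
# two jobs ever hit the storage simultaneously.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = [pool.submit(sched.run_job, fake_vzdump) for _ in range(8)]
    assert all(f.result() == "ok" for f in results)
```

All eight jobs still complete; they are simply serialized down to the
configured parallelism instead of flooding the backup network.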
* Re: [pve-devel] training week: students feedback/requests
  2021-06-04  3:21 [pve-devel] training week: students feedback/requests aderumier
@ 2021-06-04  7:52 ` Thomas Lamprecht
  2021-06-07  8:15   ` aderumier
  0 siblings, 1 reply; 3+ messages in thread

From: Thomas Lamprecht @ 2021-06-04 7:52 UTC (permalink / raw)
To: Proxmox VE development discussion, aderumier, pve-devel

Hi,

On 04.06.21 05:21, aderumier@odiso.com wrote:
> Hi,
> I just finished the last training week session;
> here is some of the students' feedback/requests:

thanks for your feedback!

> - Add support for offline VM migration with local USB or PCI
>   devices.
>   One student has the same special device on multiple hosts and
>   wants to be able to move a VM offline when a host needs
>   maintenance.
>   It's currently possible with HA anyway.

Sounds reasonable. IIRC, allowing it in HA was the result of a bug
entry where a user also had identical GPUs reserved for the recovery
case.

> - Allow VM migration with a cloud-init drive on local storage,
>   without needing to replicate it. I think it could be pretty easy
>   to regenerate the cloud-init drive on the target host if a local
>   storage with the same name exists there.

Yeah, sync or regeneration should both work from the PVE side.
Wolfgang had some objections to regenerating CI drives: it could be a
bit unexpected for a guest process that has the drive open; IMO the
guest should just handle that. Do you have any experience with issues
from regeneration in practice?

> - Allow online VM migration with specific USB devices.
>   Maybe a little more complex: a user has special USB devices
>   (Google Coral USB TensorFlow accelerators) on multiple hosts. I
>   think it could be possible to unplug the device before migration
>   and reattach it after migration. (The application running inside
>   the VM is able to handle this.) So maybe adding an option on the
>   USB device like "allow migrate" could be enough?

Funnily, we'd have a use case for that too (some HSM).
I'd like to see some USB plug improvements in general anyway, i.e.,
allowing one to add new USB hubs so that the rather low current limit
could be exceeded.

This and the PCI one would need some extra checking on the target
node, i.e., some basic "is all local HW the VM config refers to
present?" check, to make it a bit nicer.

> - Be able to move VMs from a dead node.
>   Currently the only way is to move the VM config files manually.
>   Maybe adding a special wizard, only for root, with a lot of
>   warnings, could be great.

Can be pretty dangerous, but I do not really want to object to that
completely; in the end the admin needs to be the one knowing whether
it is safe, and we want to reduce the steps where one needs to switch
to the CLI. So yeah, we could add this, with great care about how to
present it somewhat safely.

> - Backups: add some kind of lock/queue when running a single
>   schedule of "vzdump -all ..".
>   Currently, it launches vzdump on all nodes at the same time. If a
>   user has a lot of nodes, this can easily flood the backup storage
>   or the backup storage network.
>   It could be great to be able to define something like: "the
>   backup storage is able to handle X backup jobs in parallel".

Would need some thought about how to implement this so it fits well
into the existing stack, but yes, we have heard that one a few times
already, and it definitively seems like a sensible request.

> - VM start order: be able to add VM dependencies, e.g. vm2 needs to
>   wait until vm1 is started (and, if the guest agent is used, wait
>   until the agent is running, to be sure that vm1 is fully booted).

Hmm, how do you check the "fully booted" case, though? Also, do we
wait forever if the dependency VM is on another node which is
currently offline? A manual start would probably need to allow
overriding such an ordering too, I guess.

> GUI:
>
> - Display node versions in the datacenter summary node list.
Sounds sensible. I'd like to have a panel there with the update/repo
status of all nodes anyway, to have a more central view of pending or
not-yet-configured updates.

> - In VM notes: if the user adds an http://... link, display it as a
>   clickable link.

I checked out a few small markdown JS libraries in the past, as that
was also requested at least once, but just doing simple https? link
detection and wrapping the matches in <a> tags could be enough for
most users as a starter.

> - Add SAML authentication.

You may have seen that there's some effort going on in that space
thanks to Julien BLAIS. The biggest (or basically the only) blocker
is which SAML lib to use: the pure-Perl one uses the Perl Moose
framework, which is huge and IIRC incompatible with the AnyEvent lib
we use, and the Rust versions do not seem really active. Having a
Rust one, in combination with perlmod to create Perl bindings, would
be great, as then all products could use the same base.

cheers,
Thomas
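The simple link detection Thomas describes could look roughly like
this (a Python sketch of the idea only; the real implementation would
live in the ExtJS GUI code, and the exact URL regex is an assumption):

```python
import html
import re

# Bare http(s) URL matcher; conservative about what can end a URL.
URL_RE = re.compile(r'https?://[^\s<>"]+')

def linkify(notes: str) -> str:
    """HTML-escape the notes text, then wrap bare URLs in <a> tags.

    Escaping first keeps any markup the user typed inert, so only the
    anchors we generate here are rendered as HTML.
    """
    escaped = html.escape(notes, quote=False)
    return URL_RE.sub(
        lambda m: f'<a href="{m.group(0)}" target="_blank">{m.group(0)}</a>',
        escaped,
    )
```

Escape-then-wrap is the important ordering: it prevents note content
from injecting markup while still producing clickable links.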
* Re: [pve-devel] training week: students feedback/requests
  2021-06-04  7:52 ` Thomas Lamprecht
@ 2021-06-07  8:15   ` aderumier
  0 siblings, 0 replies; 3+ messages in thread

From: aderumier @ 2021-06-07 8:15 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE development discussion, pve-devel

On Friday, 4 June 2021 at 09:52 +0200, Thomas Lamprecht wrote:
> On 04.06.21 05:21, aderumier@odiso.com wrote:
> > - Add support for offline VM migration with local USB or PCI
> > devices.
> > One student has the same special device on multiple hosts and
> > wants to be able to move a VM offline when a host needs
> > maintenance.
> > It's currently possible with HA anyway.
>
> Sounds reasonable. IIRC, allowing it in HA was the result of a bug
> entry where a user also had identical GPUs reserved for the
> recovery case.

yes, I think currently we don't have any check to disallow sharing
the same device across multiple VMs?
HA is pretty dumb here too: it tries to start the VM even if the host
doesn't have the device/storage/network. (I would like to improve
that in the future, but I don't have enough time ;)

> > - Allow VM migration with a cloud-init drive on local storage,
> > without needing to replicate it. I think it could be pretty easy
> > to regenerate the cloud-init drive on the target host if a local
> > storage with the same name exists there.
>
> Yeah, sync or regeneration should both work from the PVE side.
> Wolfgang had some objections to regenerating CI drives: it could be
> a bit unexpected for a guest process that has the drive open; IMO
> the guest should just handle that. Do you have any experience with
> issues from regeneration in practice?

It's really not a problem. I have done a lot of tests regenerating
the config drive online
(I'm currently still working on improving cloud-init online changes.)
Even with different content, and even when unplugging/replugging the
config drive while the ISO is mounted, it works without any problem.
And even if a file is currently being read, it's opened read-only, so
you can remount a different ISO without any lock.

So here, with a live migration and the same content regenerated, I
really don't see any problem. Cloud-init doesn't keep any file open
after reading it; I don't think the ISO is even mounted after
cloud-init has run.

> > - Allow online VM migration with specific USB devices.
> > Maybe a little more complex: a user has special USB devices
> > (Google Coral USB TensorFlow accelerators) on multiple hosts. I
> > think it could be possible to unplug the device before migration
> > and reattach it after migration. (The application running inside
> > the VM is able to handle this.) So maybe adding an option on the
> > USB device like "allow migrate" could be enough?
>
> Funnily, we'd have a use case for that too (some HSM). I'd like to
> see some USB plug improvements in general anyway, i.e., allowing
> one to add new USB hubs so that the rather low current limit could
> be exceeded.

I already have some student requests about security/license dongles,
for example.

> This and the PCI one would need some extra checking on the target
> node, i.e., some basic "is all local HW the VM config refers to
> present?" check, to make it a bit nicer.

yes, agreed.

> > - Be able to move VMs from a dead node.
> > Currently the only way is to move the VM config files manually.
> > Maybe adding a special wizard, only for root, with a lot of
> > warnings, could be great.
>
> Can be pretty dangerous, but I do not really want to object to that
> completely; in the end the admin needs to be the one knowing
> whether it is safe, and we want to reduce the steps where one needs
> to switch to the CLI. So yeah, we could add this, with great care
> about how to present it somewhat safely.

yes, pretty dangerous. That's why I was thinking of a special wizard
somewhere with a lot of warnings (not just "click on a VM of the dead
node -> move"), or maybe CLI only.
Moving the files manually is pretty bad too currently, as you don't
have any check that the storage exists, for example.

> > - Backups: add some kind of lock/queue when running a single
> > schedule of "vzdump -all ..".
> > Currently, it launches vzdump on all nodes at the same time. If a
> > user has a lot of nodes, this can easily flood the backup storage
> > or the backup storage network.
> > It could be great to be able to define something like: "the
> > backup storage is able to handle X backup jobs in parallel".
>
> Would need some thought about how to implement this so it fits well
> into the existing stack, but yes, we have heard that one a few
> times already, and it definitively seems like a sensible request.

> > - VM start order: be able to add VM dependencies, e.g. vm2 needs
> > to wait until vm1 is started (and, if the guest agent is used,
> > wait until the agent is running, to be sure that vm1 is fully
> > booted).
>
> Hmm, how do you check the "fully booted" case, though?

I was thinking of simply checking whether the agent is running.
(Not perfect, but still an improvement.)

> Also, do we wait forever if the dependency VM is on another node
> which is currently offline?

maybe add a timeout if the dependency VM is not started

> A manual start would probably need to allow overriding such an
> ordering too, I guess.

yes, indeed. (This request is really not a big priority for the
student.)

> > GUI:
> >
> > - Display node versions in the datacenter summary node list.
>
> Sounds sensible. I'd like to have a panel there with the update/repo
> status of all nodes anyway, to have a more central view of pending
> or not-yet-configured updates.

yes, the student request was to have something central to display the
current versions, and even better, the pending updates.

> > - In VM notes: if the user adds an http://... link, display it as
> > a clickable link.
>
> I checked out a few small markdown JS libraries in the past, as
> that was also requested at least once, but just doing simple https?
> link detection and wrapping the matches in <a> tags could be enough
> for most users as a starter.

yes, something like that. Not a big-priority request either.

> > - Add SAML authentication.
>
> You may have seen that there's some effort going on in that space
> thanks to Julien BLAIS. The biggest (or basically the only) blocker
> is which SAML lib to use: the pure-Perl one uses the Perl Moose
> framework, which is huge and IIRC incompatible with the AnyEvent
> lib we use, and the Rust versions do not seem really active. Having
> a Rust one, in combination with perlmod to create Perl bindings,
> would be great, as then all products could use the same base.
>
> cheers,
> Thomas
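The "wait for the agent, with a timeout" idea from the start-order
discussion could be sketched like this (a generic Python polling
helper; the `qm agent <vmid> ping` probe mentioned in the comment is
an assumption about the CLI, not a confirmed interface):

```python
import time

def wait_for(check, timeout=120.0, interval=2.0):
    """Poll check() until it returns True or the timeout expires.

    Returns True if the dependency came up in time, False on timeout,
    so the caller can decide whether to start the dependent VM anyway
    or abort the start order.
    """
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# The readiness probe itself could shell out to the guest agent,
# e.g. something along these lines (command shape assumed):
#
#   import subprocess
#   agent_up = lambda: subprocess.run(
#       ["qm", "agent", str(vmid), "ping"],
#       capture_output=True).returncode == 0
#   started = wait_for(agent_up, timeout=300)
```

The timeout answers Thomas's "wait forever?" concern: after it expires
the start logic gets a False back and can proceed or fail loudly
instead of hanging on an offline dependency.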