From: aderumier@odiso.com
To: Thomas Lamprecht <t.lamprecht@proxmox.com>,
Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
pve-devel <pve-devel@pve.proxmox.com>
Subject: Re: [pve-devel] training week: students feedback/requests
Date: Mon, 07 Jun 2021 10:15:37 +0200 [thread overview]
Message-ID: <998956055273321cc687db49df5781480e6a4dc2.camel@odiso.com> (raw)
In-Reply-To: <c5e3beb4-6a17-e88e-46d5-74596c84616b@proxmox.com>
Le vendredi 04 juin 2021 à 09:52 +0200, Thomas Lamprecht a écrit :
> Hi,
>
> On 04.06.21 05:21, aderumier@odiso.com wrote:
> > Hi,
> > I just finished the last training week session;
> > here is some student feedback/requests:
>
> thanks for your feedback!
> > - add support for offline VM migration with local usb or pci
> > devices.
> > One student has the same special device on multiple hosts, and
> > wants to be able to offline-move the VM when they need to do
> > maintenance.
> > It's currently possible with HA anyway.
>
> Sounds reasonable. IIRC, allowing it in HA was the result of some bug
> entry where a user also had identical GPUs reserved for the recovery
> case.
yes, I think we currently don't have any check to prevent sharing the
same device across multiple VMs? HA is pretty dumb too: it tries to
start the VM even if the host doesn't have the device/storage/network.
(I would like to improve that in the future, but I don't have enough
time ;)
>
> > - allow VM migration with a cloud-init drive on local storage.
> > Without needing to replicate it, I think it could be pretty easy to
> > regenerate the cloud-init drive on the target host if the same
> > local storage name exists on the target.
>
> Yeah, sync or regeneration should both work from the PVE side.
> Wolfgang had some objections to regenerating CI drives: it could be a
> bit unexpected for a guest process having that open, though IMO they
> should just handle that. Do you have any experience of issues with
> regeneration in practice?
>
It's really not a problem. I have done a lot of tests regenerating the
config drive online (I'm currently still working on improving online
cloud-init changes), even with different content, and even with
unplugging/replugging the config drive while the ISO is mounted: it
works without any problem.
And even if a file is currently being read, it's open read-only, so you
can remount a different ISO without any lock.
So here, with a live migration where the same content is regenerated, I
really don't see any problem.
cloud-init doesn't keep any file open after reading it; I don't think
the ISO is even still mounted after cloud-init has run.
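The property that makes regeneration safe is determinism: the same VM
config always renders byte-identical drive content, so the target host
can rebuild the drive instead of copying it. A minimal sketch of that
idea (the config fields and renderer here are hypothetical, not PVE's
actual cloud-init generator):

```python
import hashlib

def render_cloudinit(config: dict) -> bytes:
    # Deterministically render user-data/meta-data from the VM config.
    # (Hypothetical fields; PVE's real generator builds a NoCloud ISO.)
    user_data = "#cloud-config\nhostname: {}\n".format(config["name"])
    meta_data = "instance-id: {}\n".format(config["vmid"])
    return (user_data + meta_data).encode()

cfg = {"vmid": 100, "name": "vm100"}
# Rendering on the source and again on the target yields identical
# content, so the guest sees no difference after a live migration.
source_drive = hashlib.sha256(render_cloudinit(cfg)).hexdigest()
target_drive = hashlib.sha256(render_cloudinit(cfg)).hexdigest()
assert source_drive == target_drive
```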
> >
> > - allow online VM migration for specific usb devices.
> > Maybe a little bit more complex: a user has special usb devices
> > (Google Coral USB TensorFlow accelerator) on multiple hosts.
> > I think it could be possible to unplug the device before migration
> > and re-attach it after migration. (The application running inside
> > the VM is able to handle this.)
> > So maybe adding an option on the usb device like "allow migrate"
> > could be enough?
>
> Funnily, we'd have a use case for that too (some HSM). I'd like to
> see some USB plug improvements in general anyway, i.e., allowing one
> to add new USB hubs so that the rather low current limit could be
> exceeded.
I have already had some student requests about security/license
dongles, for example.
>
> This and the PCI one would need some extra checking on the target
> node, i.e., some basic "is all local HW the VM config refers to
> present?" check to make it a bit nicer.
>
yes, agreed.
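Such a pre-flight check could boil down to comparing the device IDs the
config refers to against what the target node reports. A sketch, with
the config keys in PVE style but the device-inventory shape assumed:

```python
def check_local_hw(vm_config: dict, target_devices: set) -> list:
    # Pre-flight check: list every hostpci/usb config entry whose
    # device ID is not present on the target node.
    missing = []
    for key, value in sorted(vm_config.items()):
        if key.startswith(("hostpci", "usb")) and value not in target_devices:
            missing.append(f"{key}={value}")
    return missing
```

An empty result means the migration can proceed; otherwise the missing
entries can be shown to the user before anything is moved.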
> >
> > - be able to move VMs from a dead node.
> > Currently the only way is to move the VM config files manually.
> > Maybe adding a special wizard, only for root, with a lot of
> > warnings, could be great.
> >
>
> Can be pretty dangerous, but I do not really want to object to that
> completely; in the end the admin needs to be the one knowing whether
> it is safe, and we want to reduce the steps where one needs to switch
> to the CLI, so yeah, we could add this, with great care about how to
> present it somewhat safely.
>
yes, pretty dangerous. That's why I was thinking of a special wizard
somewhere with a lot of warnings, not just "click on a VM of the dead
node -> move". Or maybe CLI only.
Moving the files manually is pretty bad too currently, as you don't
have any check that the storage exists, for example.
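The storage-existence check such a wizard would need is cheap: every
volume ID in the config names its storage before the colon. A sketch
(the disk-key prefixes follow PVE's config style; the storage inventory
is assumed):

```python
def missing_storages(vm_config: dict, target_storages: set) -> set:
    # A volume ID like "local-lvm:vm-100-disk-0" names its storage
    # before the colon; return every storage the target node lacks.
    needed = set()
    for key, value in vm_config.items():
        if key.startswith(("scsi", "virtio", "ide", "sata", "efidisk")) \
                and ":" in value:
            needed.add(value.split(":", 1)[0])
    return needed - target_storages
```

Only when this returns an empty set would the wizard let root adopt the
config file from the dead node.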
> >
> > - Backups: add some kind of lock/queue when you are doing a single
> > schedule of "vzdump --all ..."
> > Currently, it launches vzdump on all nodes at the same time.
> > If a user has a lot of nodes, it can easily flood the backup
> > storage or the backup storage network.
> > It could be great to be able to define something like "the backup
> > storage is able to handle X backup jobs in parallel".
> >
>
> Would need some thought about how to implement this fitting well
> into the existing stack, but yes, we have heard that one a few times
> already, and it definitely seems like a sensible request.
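Conceptually the request is a counting semaphore keyed to the backup
storage: each node's vzdump run takes a slot before touching the
storage. A single-process sketch of the idea (a cluster-wide version
would need the slots coordinated through pmxcfs or similar, which this
does not attempt):

```python
import threading

class BackupScheduler:
    """Sketch of a per-storage concurrency cap for backup jobs:
    'the backup storage is able to handle X backup jobs in parallel'."""

    def __init__(self, max_parallel: int):
        self._slots = threading.Semaphore(max_parallel)

    def run(self, backup_job):
        with self._slots:          # blocks until a slot is free
            return backup_job()
```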
>
> >
> > - VM start order: be able to add VM dependencies,
> > like vm2 needs to wait until vm1 is started (and, if the guest
> > agent is used, wait until the agent is running, to be sure that vm1
> > is fully booted).
>
> Hmm, how do you check the "fully" booted case though?
I was thinking of simply checking whether the agent is running. (Not
perfect, but still an improvement.)
> Also, wait forever if the dependent VM is on another node which is
> currently offline?
maybe add a timeout if the dependent VM is not started
>
> A manual start would probably need to allow overriding such ordering
> too,
> I guess.
>
yes, indeed.
(This request is really not a big priority for the student.)
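Combining the two points above, the dependency wait would be a polling
loop on the guest-agent status with a hard timeout, so a VM never waits
forever on an offline dependency. A sketch (the `agent_running`
callback stands in for a real guest-agent ping, which this does not
implement):

```python
import time

def wait_for_dependency(agent_running, timeout: float,
                        poll: float = 0.01) -> bool:
    # Poll a guest-agent status callback until it reports running or
    # the timeout expires; the caller starts the dependent VM either
    # way, it just knows whether the dependency came up in time.
    deadline = time.monotonic() + timeout
    while True:
        if agent_running():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)
```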
> >
> >
> > Gui:
> >
> > - Displaying node versions in the datacenter summary node list.
>
> sounds sensible, I'd like to have a panel there with update/repo
> status of all nodes anyway, to have a more central view of pending or
> not-configured updates.
>
yes, the student request was to have something central to display the
current versions, and even better the pending updates.
> > - In VM notes: if the user adds an http://... link, display it as
> > a clickable link.
>
> I checked out a few small markdown JS libraries in the past, as that
> was also requested at least once, but just doing simple http(s) link
> detection and wrapping the matches in <a> tags could be enough for
> most as a starter.
yes, something like that. Not a big priority request either.
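The "starter" version really is a one-liner regex substitution; the GUI
would do it in ExtJS, but the idea looks the same in any language
(sketched here in Python, with escaping of the rest of the notes text
left out for brevity):

```python
import re

URL = re.compile(r'(https?://[^\s<]+)')

def linkify(notes: str) -> str:
    # Wrap bare http(s) URLs in the notes text with <a> tags; anything
    # fancier (full markdown) would need a proper library.
    return URL.sub(r'<a href="\1">\1</a>', notes)
```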
>
> >
> > - add SAML authentication
> >
>
> You may have seen that there's some effort going on in that space
> thanks to Julien BLAIS; the biggest (or basically the only) blocker
> is which SAML lib to use. The pure-Perl one uses the Perl Moose
> framework, which is huge and IIRC incompatible with the AnyEvent lib
> we use, and the Rust versions seem not really active. Having a Rust
> one, in combination with perlmod to create Perl bindings, would be
> great, as then all products could use the same base.
>
> cheers,
> Thomas
>
Thread overview: 3+ messages
2021-06-04 3:21 aderumier
2021-06-04 7:52 ` Thomas Lamprecht
2021-06-07 8:15 ` aderumier [this message]