public inbox for pve-devel@lists.proxmox.com
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	aderumier@odiso.com
Cc: Wolfgang Bumiller <w.bumiller@proxmox.com>
Subject: Re: [pve-devel] cloudinit: question about cloudinit pending values && hostname/mac address changes
Date: Tue, 23 Feb 2021 09:27:26 +0100	[thread overview]
Message-ID: <c0308a4c-be13-7d93-6d1c-73e42aedf5e4@proxmox.com> (raw)
In-Reply-To: <a7955a58255106b5ad78047547236014f3d37226.camel@odiso.com>

On 21.02.21 18:47, aderumier@odiso.com wrote:
> I have some questions about cloud-init hotplug pending values.
> 
> Currently, when the VM is running, we keep cloud-init specific values
> (ipconfigX, dns, ssh, ...) in pending until we regenerate the image
> manually.
> 
> But some other changes, like the VM name (used for the hostname) or the
> NIC MAC address (used to match interfaces in the NoCloud config format),
> are not kept as pending.
> 
> Why don't we simply auto-regenerate the cloud-init config drive after
> changes (and not use pending values like "pending cdrom generation")?

IMO OK, but wasn't the other handling added because some changes cannot
be applied live?

> 
> Anyway, when the VM is offline, we don't have a pending state at all,
> and the config drive is only generated at VM start.
> 
> Also, currently, to regenerate the ISO we need two API calls:
> one to remove the CD-ROM, and one to replug it with the new config.
> 
> I really would like to be able to change the cloud-init config like
> with LXC: simply update the values and have them applied automatically.
> 
> What do you think about it?

Sounds OK to me, though I haven't thought much about possible regressions.

In general, I'd like to simplify cloud-init anyway; IMO the whole
special disk handling just brought us bug after bug with clone,
migrate, ...

I'd like to generate the image just in memory (e.g., /run/qemu-server?)
and attach it from there (e.g., using the first free IDE bus slot;
adding new IDE CD-ROM devices needs a reboot anyway, so if it has to
move to another free slot, it's not a problem).

For backups we already save the config with the state applied, so no
change is required there.

For live migration we'd need to transfer the current state; not much
extra work, but it needs a few changes.

For live snapshots we'd need to save the state too (so that processes
which currently have it open do not die if it changed), which also
requires a few changes.

But I think that would simplify this whole thing a lot, and it would
also not require the user to add a cloud-init CD-ROM to the VM: just
configure it and be done.
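For context, generating the image "just in memory" could amount to
building a NoCloud seed ISO under /run/qemu-server at VM start. A hedged
sketch of assembling such a command (the output path, helper name, and
file layout are assumptions for illustration; genisoimage's flags and
the "cidata" volume label are standard for NoCloud seeds):

```python
import os

def seed_iso_command(vmid: int, datadir: str) -> list:
    """Build the genisoimage invocation for a NoCloud seed ISO.

    datadir is expected to contain the generated user-data and
    meta-data files for this VM.
    """
    out = f"/run/qemu-server/{vmid}-cloudinit.iso"
    return [
        "genisoimage", "-quiet",
        "-output", out,
        "-volid", "cidata",   # NoCloud identifies the seed by this label
        "-joliet", "-rock",
        os.path.join(datadir, "user-data"),
        os.path.join(datadir, "meta-data"),
    ]

cmd = seed_iso_command(100, "/run/qemu-server/100-cloudinit")
```

Regenerating the ISO on every config write would then replace both the
pending-value bookkeeping and the two-call unplug/replug dance.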





Thread overview: 6+ messages
2021-02-21 17:47 aderumier
2021-02-23  8:27 ` Thomas Lamprecht [this message]
2021-02-23  9:06   ` Wolfgang Bumiller
2021-02-23  9:29     ` Thomas Lamprecht
2021-02-23 15:29       ` aderumier
2021-03-06  7:31       ` aderumier
