From: Laurent GUERBY <laurent@guerby.net>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	 Fabian Ebner <f.ebner@proxmox.com>
Subject: Re: [pve-devel] [POC qemu-server] fix 3303: allow "live" upgrade of qemu version
Date: Wed, 23 Jun 2021 19:56:53 +0200
Message-ID: <1624471013.17569.11.camel@guerby.net>
In-Reply-To: <aa69b89d-4206-0ac4-50a6-04b6333a0907@proxmox.com>

On Thu, 2021-04-08 at 18:44 +0200, Thomas Lamprecht wrote:
> On 08.04.21 12:33, Fabian Ebner wrote:
> > The code is in a very early state; I'm just sending this to discuss
> > the idea. I haven't done a whole lot of testing yet, but it does
> > seem to work.
> > 
> > The idea is rather simple:
> > 1. save the state to ramfs
> > 2. stop the VM
> > 3. start the VM loading the state
> 
> For the record, as we (Dietmar, you and I) discussed this a bit off-
> list:
> 
> The issue we see here is that one temporarily requires a potentially
> big chunk of free memory, i.e., the amount the guest is assigned a
> second time. So tens to hundreds of GiB, which (educated guess) more
> than 90 % of our users just do not have available, at least for their
> bigger VMs.
> 
> So, it would be nicer if we could make this more QEMU-internal, e.g.,
> just save the state out (as that may not be compatible 1:1 for reuse
> with the new QEMU version), start the new QEMU process, migrate the
> state over and map the guest memory directly into it, then pause the
> old one, continue the new one, and be done (very condensed).
> That may have its own difficulties/edge cases, but it would not
> require having so much extra memory freely available...

Hi,

I'm wondering how much KSM would help reduce the extra memory
requirement during a same-host migration.

Maybe there's a sweet spot: make KSM more aggressive just before
starting the migration and slow the migration down via the bandwidth
control parameter, so that all new pages created by the migration
process end up shared quickly, and then return ksmtuned to its
defaults once it's done.
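
As a rough sketch of that knob-twiddling (the sysfs files are the
standard Linux KSM interface; the concrete values are only examples,
and ksmtuned would have to be stopped for the window so it doesn't
overwrite them):

import pathlib

KSM = pathlib.Path("/sys/kernel/mm/ksm")

def set_ksm(pages_to_scan, sleep_millisecs):
    """Set the KSM scan rate, returning the old values for a later restore."""
    old = (int((KSM / "pages_to_scan").read_text()),
           int((KSM / "sleep_millisecs").read_text()))
    (KSM / "pages_to_scan").write_text(str(pages_to_scan))
    (KSM / "sleep_millisecs").write_text(str(sleep_millisecs))
    return old

# Before the same-host migration: scan many pages with no pause in between.
prev = set_ksm(pages_to_scan=5000, sleep_millisecs=0)
# ... run the throttled same-host migration here ...
set_ksm(*prev)  # afterwards: back to whatever ksmtuned had configured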

Or maybe lowering the migration bandwidth alone will be enough, with
the KSM settings unchanged (it still has to be faster than the page
mutation rate, though, so it can't be too low).
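
To make the "can't be too low" part concrete: during an active
migration QEMU reports the guest's dirty-page rate, so the cap could
be checked against it over QMP. A rough Python sketch (the socket path
and the 200 MiB/s cap are made-up values; migrate-set-parameters and
query-migrate are standard QMP commands):

import json, socket

class QMP:
    """Tiny synchronous QMP client: newline-delimited JSON over a unix socket."""
    def __init__(self, path):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(path)
        self.rfile = self.sock.makefile("r")
        json.loads(self.rfile.readline())  # consume the greeting banner
        self.cmd("qmp_capabilities")       # leave capabilities negotiation mode

    def cmd(self, name, args=None):
        msg = {"execute": name}
        if args:
            msg["arguments"] = args
        self.sock.sendall((json.dumps(msg) + "\n").encode())
        while True:                        # skip interleaved async events
            resp = json.loads(self.rfile.readline())
            if "return" in resp or "error" in resp:
                return resp

q = QMP("/var/run/qemu-server/100.qmp")    # made-up socket path
# Cap the migration at ~200 MiB/s; QMP takes bytes per second.
q.cmd("migrate-set-parameters", {"max-bandwidth": 200 * 1024 * 1024})
# 'dirty-pages-rate' (pages/s) only shows up while a migration is active;
# the cap must stay above dirty rate * page size or it will never converge.
print(q.cmd("query-migrate")["return"].get("ram", {}).get("dirty-pages-rate"))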

I assume that for most users even a slow same-host migration is fine,
since it will not consume network resources, just a bit more CPU.
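
For what it's worth, the save/stop/start flow from the cover letter
can be approximated with plain upstream primitives, which also makes
the extra memory requirement visible as a state file on ramfs. A
condensed sketch, reusing the QMP class from the previous snippet (VM
id, socket path and the QEMU arguments are placeholders; the actual
patches go through PVE's own savevm helpers instead):

import subprocess, time

STATE = "/dev/shm/vmstate-100"             # the state file lives on ramfs

q = QMP("/var/run/qemu-server/100.qmp")
# 1. Save the state: a migration whose transport is simply `cat` into the
#    file; QEMU pauses the guest once the final chunk is written out.
q.cmd("migrate", {"uri": "exec:cat > " + STATE})
while q.cmd("query-migrate")["return"].get("status") in ("setup", "active"):
    time.sleep(0.1)
# 2. Stop the old QEMU process.
q.cmd("quit")
# 3. Start the new QEMU binary, loading the saved state.
subprocess.run(["qemu-system-x86_64",      # ... plus the VM's original args
                "-incoming", "exec:cat " + STATE])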

Sincerely,

Laurent

PS: thanks Stefan_R for pointing out this thread:
https://forum.proxmox.com/threads/upgrade-of-pve-qemu-kvm-and-running-vm.91236/
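
PPS: regarding the "map over the guest memory" direction Thomas
sketched above, upstream QEMU already has building blocks for a
same-host handover that copies no RAM: back the guest memory with a
shared file and skip the RAM transfer with the (experimental)
x-ignore-shared migration capability, so only the device state needs
extra space. A rough sketch, again reusing the QMP class from above
(ids, sizes and paths are illustrative):

# Both the old and the new QEMU process get the guest RAM backed by the
# same shared mapping on tmpfs instead of anonymous memory:
ram_args = [
    "-object", "memory-backend-file,id=ram0,size=4G,"
               "mem-path=/dev/shm/vm100-ram,share=on",
    "-machine", "q35,memory-backend=ram0",
]

q = QMP("/var/run/qemu-server/100.qmp")    # again a made-up path
# Tell migration (on both sides) to skip RAM blocks living in shared
# memory, so only the device state is transferred and no second copy of
# the guest RAM is ever allocated:
q.cmd("migrate-set-capabilities",
      {"capabilities": [{"capability": "x-ignore-shared", "state": True}]})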

> > 
> > This approach solves the problem that our stack is (currently) not
> > designed to have multiple instances with the same VM ID running. To
> > do so, we'd need to handle config locking, sockets, pid file,
> > passthrough resources(?), etc.
> > 
> > Another nice feature of this approach is that it doesn't require
> > touching the vm_start or migration code at all, avoiding further
> > bloating.
> > 
> > 
> > Thanks to Fabian G. and Stefan for inspiring this idea:
> > 
> > Fabian G. suggested using the suspend-to-disk + start route if the
> > required changes to our stack would turn out to be infeasible.
> > 
> > Stefan suggested migrating to a dummy VM (outside our stack) which
> > just holds the state and migrating back right away. It seems that
> > the dummy VM is in fact not even needed ;) If we really care about
> > the smallest possible downtime, this approach might still be the
> > best, and we'd need to start the dummy VM while the backwards
> > migration runs (resulting in twice the migration downtime). But it
> > does have more moving parts and requires some migration/startup
> > changes.
> > 
> > 
> > Fabian Ebner (6):
> >   create vmstate_size helper
> >   create savevm_monitor helper
> >   draft of upgrade_qemu function
> >   draft of qemuupgrade API call
> >   add timing for testing
> >   add usleep parameter to savevm_monitor
> > 
> >  PVE/API2/Qemu.pm  |  60 ++++++++++++++++++++++
> >  PVE/QemuConfig.pm |  10 +---
> >  PVE/QemuServer.pm | 125 +++++++++++++++++++++++++++++++++++++++-------
> >  3 files changed, 170 insertions(+), 25 deletions(-)
> > 


