From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 27 Apr 2023 09:32:42 +0200
From: Fabian Grünbichler
To: Proxmox VE development discussion
References: <20230425165233.3745210-1-aderumier@odiso.com> <20230425165233.3745210-3-aderumier@odiso.com> <1682514292.71raew01tr.astroid@yuna.none>
Message-Id: <1682580098.xwye6zkp88.astroid@yuna.none>
Subject: Re: [pve-devel] [PATCH v2 qemu-server 2/2] remote-migration: add target-cpu param
List-Id: Proxmox VE development discussion

On April 27, 2023 7:50 am, DERUMIER, Alexandre wrote:
> Hi,
>
> On Wednesday, April 26, 2023 at 15:14 +0200, Fabian Grünbichler wrote:
>> On April 25, 2023 6:52 pm, Alexandre Derumier wrote:
>> > This patch adds support for remote migration when the target
>> > cpu model is different.
>> >
>> > The target vm is restarted after the migration
>>
>> so this effectively introduces a new "hybrid" migration mode ;) the
>> changes are a bit smaller than I expected (in part thanks to patch
>> #1), which is good.
>>
>> there are semi-frequent requests for another variant (also applicable
>> to containers) in the form of a two-phase migration:
>> - storage migrate
>> - stop guest
>> - incremental storage migrate
>> - start guest on target
>>
>
> But I'm not sure how to do an incremental storage migration without
> storage snapshot send|receive (so zfs && rbd could work):
>
> - VM/CT is running
> - take a first snapshot + sync to target with zfs|rbd send|receive
> - stop the guest
> - take a second snapshot + incremental sync to target with zfs|rbd
>   send|receive
> - start the guest on the remote
>
> (or maybe for VMs, without snapshots, with a dirty bitmap? But we
> would need to be able to write the dirty bitmap content to disk
> somewhere after the VM stops, and reread it for the last increment)

theoretically, we could support such a mode for non-snapshot storages by
using bitmaps+block-mirror, yes.
either with a target VM, or with qemu-storage-daemon on the target node
exposing the target volumes

> - vm is running
> - create a dirty-bitmap and start sync with qemu-block-storage
> - stop the vm && save the dirty bitmap
> - reread the dirty bitmap && do the incremental sync (with the new
>   qemu-storage-daemon, or starting the vm paused?)

stop here could also just mean stop the guest OS, but leave the process
around for the incremental sync, so it would not need persistent bitmap
support.

> And currently we don't support offline storage migration yet. (BTW,
> this is also breaking migration with unused disks.)
> I don't know if we can send a send|receive transfer through the
> tunnel? (I never tested it)

we do, but maybe you tested with RBD, which doesn't support storage
migration yet? within a cluster it doesn't need to, since it's a shared
storage, but between clusters we need to implement it (it's on my TODO
list and shouldn't be too hard since there is 'rbd export/import').

>> given that, it might make sense to safe-guard this implementation
>> here, and maybe switch to a new "mode" parameter?
>>
>> online => switching CPU not allowed
>> offline or however-we-call-this-new-mode (or in the future,
>> two-phase-restart) => switching CPU allowed
>>
>
> Yes, I was thinking about that too.
> Maybe not "offline", because maybe we want to implement a real offline
> mode later.
> But simply "restart"?

no, I meant moving the existing --online switch to a new mode parameter,
then we'd have "online" and "offline", and then add your new mode on top
("however-we-call-this-new-mode"), and then we could in the future also
add "two-phase-restart" for the sync-twice mode I described :)

target-cpu would of course also be supported for the (existing) offline
mode, since it just needs to adapt the target-cpu in the config.
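to make the snapshot-based two-phase variant concrete: for ZFS it would
boil down to a full `zfs send` while the guest keeps running, then an
incremental `zfs send -i` of the delta after it is stopped. a dry-run
sketch (the dataset name, remote host and snapshot names are made up for
illustration; the command lines are only assembled and printed, never
executed against a real pool):

```shell
#!/bin/sh
# Dry-run sketch of the two-phase sync for one hypothetical ZFS volume.
DS=rpool/data/vm-100-disk-0   # hypothetical source dataset
REMOTE=target-node            # hypothetical migration target

# phase 1: snapshot while the guest is still running, full send
PHASE1="zfs snapshot $DS@pre && zfs send $DS@pre | ssh $REMOTE zfs receive -F $DS"

# (stop the guest here, so no further writes hit the volume)

# phase 2: second snapshot, incremental send of the delta only
PHASE2="zfs snapshot $DS@final && zfs send -i $DS@pre $DS@final | ssh $REMOTE zfs receive $DS"

echo "$PHASE1"
echo "$PHASE2"
# (start the guest on the target node)
```

the same two-pass structure should map onto RBD via
`rbd export`/`rbd export-diff`, once cross-cluster storage migration is
implemented for it.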
the main thing I'd want to avoid is somebody accidentally setting
"target-cpu", not knowing/noticing that that entails what amounts to a
reset of the VM as part of the migration..

there were a few things down below that might also be worthy of
discussion. I also wonder whether the two variants of "freeze FS" and
"suspend without state" are enough - that only ensures that no more I/O
happens, so the volumes are bitwise identical, but shouldn't we also at
least have the option of doing a clean shutdown at that point, so that
applications can serialize/flush their state properly and that gets
synced across as well? else this is the equivalent of cutting the power
cord, which might not be a good fit for all use cases ;)

>> > Signed-off-by: Alexandre Derumier
>> > ---
>> >  PVE/API2/Qemu.pm   | 18 ++++++++++++++++++
>> >  PVE/CLI/qm.pm      |  6 ++++++
>> >  PVE/QemuMigrate.pm | 25 +++++++++++++++++++++++++
>> >  3 files changed, 49 insertions(+)
>> >
>> > diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> > index 587bb22..6703c87 100644
>> > --- a/PVE/API2/Qemu.pm
>> > +++ b/PVE/API2/Qemu.pm
>> > @@ -4460,6 +4460,12 @@ __PACKAGE__->register_method({
>> > 	    optional => 1,
>> > 	    default => 0,
>> > 	},
>> > +	'target-cpu' => {
>> > +	    optional => 1,
>> > +	    description => "Target Emulated CPU model. For online migration, the storage is live migrate, but the memory migration is skipped and the target vm is restarted.",
>> > +	    type => 'string',
>> > +	    format => 'pve-vm-cpu-conf',
>> > +	},
>> > 	'target-storage' => get_standard_option('pve-targetstorage', {
>> > 	    completion => \&PVE::QemuServer::complete_migration_storage,
>> > 	    optional => 0,
>> > @@ -4557,11 +4563,14 @@ __PACKAGE__->register_method({
>> > 	raise_param_exc({ 'target-bridge' => "failed to parse bridge map: $@" })
>> > 	    if $@;
>> >
>> > +	my $target_cpu = extract_param($param, 'target-cpu');
>>
>> this is okay
>>
>> > +
>> > 	die "remote migration requires explicit storage mapping!\n"
>> > 	    if $storagemap->{identity};
>> >
>> > 	$param->{storagemap} = $storagemap;
>> > 	$param->{bridgemap} = $bridgemap;
>> > +	$param->{targetcpu} = $target_cpu;
>>
>> but this is a bit confusing with the variable/hash key naming ;)
>>
>> > 	$param->{remote} = {
>> > 	    conn => $conn_args, # re-use fingerprint for tunnel
>> > 	    client => $api_client,
>> > @@ -5604,6 +5613,15 @@ __PACKAGE__->register_method({
>> > 		    PVE::QemuServer::nbd_stop($state->{vmid});
>> > 		    return;
>> > 		},
>> > +		'restart' => sub {
>> > +		    PVE::QemuServer::vm_stop(undef, $state->{vmid}, 1, 1);
>> > +		    my $info = PVE::QemuServer::vm_start_nolock(
>> > +			$state->{storecfg},
>> > +			$state->{vmid},
>> > +			$state->{conf},
>> > +		    );
>> > +		    return;
>> > +		},
>> > 		'resume' => sub {
>> > 		    if (PVE::QemuServer::Helpers::vm_running_locally($state->{vmid})) {
>> > 			PVE::QemuServer::vm_resume($state->{vmid}, 1, 1);
>> > diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
>> > index c3c2982..06c74c1 100755
>> > --- a/PVE/CLI/qm.pm
>> > +++ b/PVE/CLI/qm.pm
>> > @@ -189,6 +189,12 @@ __PACKAGE__->register_method({
>> > 	    optional => 1,
>> > 	    default => 0,
>> > 	},
>> > +	'target-cpu' => {
>> > +	    optional => 1,
>> > +	    description => "Target Emulated CPU model. For online migration, the storage is live migrate, but the memory migration is skipped and the target vm is restarted.",
>> > +	    type => 'string',
>> > +	    format => 'pve-vm-cpu-conf',
>> > +	},
>> > 	'target-storage' => get_standard_option('pve-targetstorage', {
>> > 	    completion => \&PVE::QemuServer::complete_migration_storage,
>> > 	    optional => 0,
>> > diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
>> > index e182415..04f8053 100644
>> > --- a/PVE/QemuMigrate.pm
>> > +++ b/PVE/QemuMigrate.pm
>> > @@ -731,6 +731,11 @@ sub cleanup_bitmaps {
>> >  sub live_migration {
>> >     my ($self, $vmid, $migrate_uri, $spice_port) = @_;
>> >
>> > +    if ($self->{opts}->{targetcpu}) {
>> > +	$self->log('info', "target cpu is different - skip live migration.");
>> > +	return;
>> > +    }
>> > +
>> >     my $conf = $self->{vmconf};
>> >
>> >     $self->log('info', "starting online/live migration on $migrate_uri");
>> > @@ -995,6 +1000,7 @@ sub phase1_remote {
>> >     my $remote_conf = PVE::QemuConfig->load_config($vmid);
>> >     PVE::QemuConfig->update_volume_ids($remote_conf, $self->{volume_map});
>> >
>> > +    $remote_conf->{cpu} = $self->{opts}->{targetcpu};
>>
>> do we need permission checks here (or better, somewhere early on, for
>> doing this here)
>>
>> >     my $bridges = map_bridges($remote_conf, $self->{opts}->{bridgemap});
>> >     for my $target (keys $bridges->%*) {
>> > 	for my $nic (keys $bridges->{$target}->%*) {
>> > @@ -1354,6 +1360,21 @@ sub phase2 {
>> >     live_migration($self, $vmid, $migrate_uri, $spice_port);
>> >
>> >     if ($self->{storage_migration}) {
>> > +
>> > +	# freeze source vm I/O if target cpu is different (no live migration)
>> > +	if ($self->{opts}->{targetcpu}) {
>> > +	    my $agent_running = $self->{conf}->{agent} && PVE::QemuServer::qga_check_running($vmid);
>> > +	    if ($agent_running) {
>> > +		print "freeze filesystem\n";
>> > +		eval { mon_cmd($vmid, "guest-fsfreeze-freeze"); };
>> > +		die $@ if $@;
>>
>> die here
>>
>> > +	    } else {
>> > +		print "suspend vm\n";
>> > +		eval { PVE::QemuServer::vm_suspend($vmid, 1); };
>> > +		warn $@ if $@;
>>
>> but warn here?
>>
>> I'd like some more rationale for these two variants - what are the
>> pros and cons? should we make it configurable?
>>
>> > +	    }
>> > +	}
>> > +
>> > 	# finish block-job with block-job-cancel, to disconnect source VM from NBD
>> > 	# to avoid it trying to re-establish it. We are in blockjob ready state,
>> > 	# thus, this command changes to it to blockjob complete (see qapi docs)
>> > @@ -1608,6 +1629,10 @@ sub phase3_cleanup {
>> >     # clear migrate lock
>> >     if ($tunnel && $tunnel->{version} >= 2) {
>> > 	PVE::Tunnel::write_tunnel($tunnel, 10, "unlock");
>> > +	if ($self->{opts}->{targetcpu}) {
>> > +	    $self->log('info', "target cpu is different - restart target vm.");
>> > +	    PVE::Tunnel::write_tunnel($tunnel, 10, 'restart');
>> > +	}
>> >
>> > 	PVE::Tunnel::finish_tunnel($tunnel);
>> >     } else {
>> > --
>> > 2.30.2
>> >
>> >
>> > _______________________________________________
>> > pve-devel mailing list
>> > pve-devel@lists.proxmox.com
>> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel