Date: Fri, 26 Nov 2021 17:06:05 +0100
From: Fabian Grünbichler
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Message-Id: <1637941727.qelq78do1a.astroid@nora.none>
In-Reply-To: <20211126130357.GS1745@rck.sh>
Subject: Re: [pve-devel] migrate local -> drbd fails with vanished job

On November 26, 2021 2:03 pm, Roland Kammerer wrote:
> Dear PVE devs,

I'll add some notes inline to point out other places where you can
continue debugging.

>
> While most of our users start with fresh VMs on DRBD storage, from time
> to time people try to migrate a local VM to DRBD storage. This currently fails.
> Migrating VMs from DRBD to DRBD works.
>
> I added some debug code to PVE/QemuServer.pm, which looks like the location
> things go wrong, or at least where I saw them going wrong:
>
> root@pve:/usr/share/perl5/PVE# diff -Nur QemuServer.pm{.orig,}
> --- QemuServer.pm.orig  2021-11-26 11:27:28.879989894 +0100
> +++ QemuServer.pm       2021-11-26 11:26:30.490988789 +0100
> @@ -7390,6 +7390,8 @@
>      $completion //= 'complete';
>      $op //= "mirror";
>
> +    print "$vmid, $vmiddst, $jobs, $completion, $qga, $op \n";
> +    { use Data::Dumper; print Dumper($jobs); };
>      eval {
>          my $err_complete = 0;
>
> @@ -7419,6 +7421,7 @@
>                  next;
>              }
>
> +            print "vanished: $vanished\n"; # same as !defined($job)
>              die "$job_id: '$op' has been cancelled\n" if !defined($job);
>
>              my $busy = $job->{busy};
>
>
> With that in place, I try to live migrate the running VM from node "pve" to
> "pvf":
>
> 2021-11-26 11:29:10 starting migration of VM 100 to node 'pvf' (xx.xx.xx.xx)
> 2021-11-26 11:29:10 found local disk 'local-lvm:vm-100-disk-0' (in current VM config)
> 2021-11-26 11:29:10 starting VM 100 on remote node 'pvf'
> 2021-11-26 11:29:18 volume 'local-lvm:vm-100-disk-0' is 'drbdstorage:vm-100-disk-1' on the target

so at this point the VM is started on the remote node (with some special
parameters for migration), and it gave us the info that it allocated a
volume for the storage migration. for that volume an nbd server/export
has been started; the target VM itself is suspended at this point and
will only resume once all the data (disk and RAM/device state) has been
transferred and the source VM has been suspended in turn.

> 2021-11-26 11:29:18 start remote tunnel
> 2021-11-26 11:29:19 ssh tunnel ver 1
> 2021-11-26 11:29:19 starting storage migration

connected over SSH, forwarding the sockets/ports for storage and
RAM/state migration

> 2021-11-26 11:29:19 scsi0: start migration to nbd:unix:/run/qemu-server/100_nbd.migrate:exportname=drive-scsi0
> drive mirror is starting for drive-scsi0

drive-mirror started for the drive that changes storage

> Use of uninitialized value $qga in concatenation (.) or string at /usr/share/perl5/PVE/QemuServer.pm line 7393.

expected

> 100, 100, HASH(0x557b44474a80), skip, , mirror
> $VAR1 = {
>           'drive-scsi0' => {}
>         };

this looks okay (note: you could just pass @_ to Dumper to print all args,
handling undef as well ;))

> vanished: 1

but the job is not there when we ask qemu about it (query-block-jobs).
this should normally return either a still-running job, or one that has
converged so we can keep it in that state, proceed to the next disk (or
VM state if it was the last one), and then finally tell all those
converged mirror jobs to switch over to the target node when we have
finished transferring the state (PVE::QemuMigrate).

so the question is whether the drive-mirror even successfully starts?

> drive-scsi0: Cancelling block job
> drive-scsi0: Done.
> 2021-11-26 11:29:19 ERROR: online migrate failure - block job (mirror) error: drive-scsi0: 'mirror' has been cancelled

here the log order is a bit confusing - the error handler cancelling the
block job is run in any case if an error occurs, and then the original
error (has been cancelled, which actually is 'has vanished' here) is
printed last. so just ignore the Cancelling part, we don't actively
cancel in this case, because there is no job to cancel ;)

> 2021-11-26 11:29:19 aborting phase 2 - cleanup resources
> 2021-11-26 11:29:19 migrate_cancel
> 2021-11-26 11:29:22 ERROR: migration finished with problems (duration 00:00:12)
> TASK ERROR: migration problems
>
> What I also see on "pvf" is that the plugin actually creates the DRBD block
> device, and "something" even tries to write data to it, as the DRBD device
> auto-promotes to Primary.

that might just be the start codepath allocating the volume and then
starting the VM suspended with an NBD server..

>
> Any hints how I can debug that further? The block device should be ready at
> that point. What is going on in the background here?

are there any messages on either end in the journal? you can of course
add a sleep at the start of qemu_drive_mirror_monitor, and interact with
the VM via QMP yourself to see what state the disks and block jobs are in.

in case there is nothing (or nothing meaningful) there, it's also
possible to use Qemu's trace mechanism to get a feeling for how far it
gets (you can set it up via the 'args' config option in the VM config).
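
for the QMP route, something like the following should dump exactly what
qemu reports for the block jobs while the migration sits in your added
sleep (rough, untested sketch on my side - it assumes the mon_cmd helper
from PVE::QemuServer::Monitor that qemu_drive_mirror_monitor itself uses,
and VMID 100 from your log):

# same query qemu_drive_mirror_monitor() does internally, dumped raw
perl -e 'use PVE::QemuServer::Monitor qw(mon_cmd);
    use Data::Dumper;
    print Dumper(mon_cmd(100, "query-block-jobs"));'

if that already comes back empty right after "drive mirror is starting",
the mirror job either never started or failed immediately, and the
interesting error is then probably on the qemu side rather than in
qemu-server.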
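
and for the trace route, an illustrative example of what such an 'args'
line in the VM config (/etc/pve/qemu-server/<vmid>.conf) could look like -
the event patterns and the log path are just guesses on my part, so check
'info trace-events' via the monitor for what the pve-qemu build actually
exposes, and point -D somewhere the qemu process can write to (this
assumes the default 'log' trace backend, where trace output ends up in
qemu's -D logfile):

args: -D /var/log/qemu-trace-100.log -trace enable=mirror_* -trace enable=nbd_*

keep in mind that the drive-mirror runs in the source VM's qemu process
and the NBD export in the target's, so where to add this depends on which
side you want to look at.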

> FWIW the plugin can be found here:
> https://github.com/linbit/linstor-proxmox
>
> Regards, rck