From mboxrd@z Thu Jan  1 00:00:00 1970
From: Fabian Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Fri, 29 Jan 2021 16:11:39 +0100
Message-Id: <20210129151143.10014-10-f.ebner@proxmox.com>
In-Reply-To: <20210129151143.10014-1-f.ebner@proxmox.com>
References: <20210129151143.10014-1-f.ebner@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH v2 qemu-server 09/13] migration: cleanup_remotedisks: simplify and include more disks
List-Id: Proxmox VE development discussion

Namely, those migrated with storage_migrate by using the information
from volume_map.

Call cleanup_remotedisks in phase1_cleanup as well, because that's where
we end up if sync_offline_local_volumes fails, and some disks might
already have been transferred successfully. Note that the local disks
are still here, so this is fine.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Changes from v1:
    * order changed, see previous patch
    * get rid of a fixme in the tests

 PVE/QemuMigrate.pm             | 22 ++++++----------------
 test/run_qemu_migrate_tests.pl |  5 +----
 2 files changed, 7 insertions(+), 20 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 5d92028..94f3328 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -695,16 +695,11 @@ sub sync_offline_local_volumes {
 sub cleanup_remotedisks {
     my ($self) = @_;
 
-    foreach my $target_drive (keys %{$self->{target_drive}}) {
-	my $drivestr = $self->{target_drive}->{$target_drive}->{drivestr};
-	next if !defined($drivestr);
-
-	my $drive = PVE::QemuServer::parse_drive($target_drive, $drivestr);
-
+    foreach my $volid (values %{$self->{volume_map}}) {
 	# don't clean up replicated disks!
-	next if defined($self->{replicated_volumes}->{$drive->{file}});
+	next if defined($self->{replicated_volumes}->{$volid});
 
-	my ($storeid, $volname) = PVE::Storage::parse_volume_id($drive->{file});
+	my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
 
 	my $cmd = [@{$self->{rem_ssh}}, 'pvesm', 'free', "$storeid:$volname"];
 
@@ -759,20 +754,15 @@ sub phase1_cleanup {
 	    $self->log('err', $err);
 	}
 
-    my @volids = $self->filter_local_volumes('offline');
-    if (scalar(@volids)) {
-	foreach my $volid (@volids) {
-	    next if defined($self->{replicated_volumes}->{$volid});
-	    $self->log('err', "found stale volume copy '$volid' on node '$self->{node}'");
-	    # fixme: try to remove ?
-	}
+    eval { $self->cleanup_remotedisks() };
+    if (my $err = $@) {
+	$self->log('err', $err);
     }
 
     eval { $self->cleanup_bitmaps() };
     if (my $err = $@) {
 	$self->log('err', $err);
     }
-
 }
 
 sub phase2 {
diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
index 67b9d0e..4f7f021 100755
--- a/test/run_qemu_migrate_tests.pl
+++ b/test/run_qemu_migrate_tests.pl
@@ -1465,7 +1465,6 @@ my $tests = [
 	},
     },
     {
-	# FIXME also cleanup remote disks when failing this early
 	name => '149_storage_migrate_fail',
 	target => 'pve1',
 	vmid => 149,
@@ -1482,9 +1481,7 @@ my $tests = [
 	expect_die => "storage_migrate 'local-lvm:vm-149-disk-0' error",
 	expected => {
 	    source_volids => local_volids_for_vm(149),
-	    target_volids => {
-		'local-dir:149/vm-149-disk-0.qcow2' => 1,
-	    },
+	    target_volids => {},
 	    vm_config => $vm_configs->{149},
 	    vm_status => {
 		running => 0,
-- 
2.20.1
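For readers not fluent in the Perl module, the reworked cleanup loop can be sketched as a small standalone model. This is a hypothetical Python illustration of the logic only, not the qemu-server API: the function name, the `run_remote` callback, and the error-collecting return value are all assumptions made for the sketch. It mirrors the patch's behavior of iterating the target volids from volume_map, skipping replicated volumes, and freeing each remaining remote volume via `pvesm free`.

```python
def cleanup_remotedisks(volume_map, replicated_volumes, run_remote):
    """Free migrated target volumes on the remote node, except replicated ones.

    volume_map maps source volids to target volids (the volumes created by
    storage_migrate); replicated_volumes is the set of volids owned by
    replication, which must not be freed here. run_remote stands in for
    executing a command over the migration ssh connection.
    """
    errors = []
    for volid in volume_map.values():
        # don't clean up replicated disks!
        if volid in replicated_volumes:
            continue
        # a volid has the form "<storeid>:<volname>"
        storeid, volname = volid.split(":", 1)
        try:
            run_remote(["pvesm", "free", f"{storeid}:{volname}"])
        except Exception as err:
            # collect errors instead of aborting, so every disk is attempted
            errors.append(f"cleanup remote disk {volid}: {err}")
    return errors


if __name__ == "__main__":
    calls = []
    errs = cleanup_remotedisks(
        {"local-lvm:vm-149-disk-0": "local-dir:149/vm-149-disk-0.qcow2"},
        set(),
        lambda cmd: calls.append(cmd),
    )
    print(calls, errs)
```

As in the patch, a replicated target volid is skipped entirely, which is why phase1_cleanup can now call this unconditionally after a failed sync_offline_local_volumes without touching replication state.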