From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Fri, 1 Apr 2022 17:24:24 +0200
Message-Id: <20220401152424.3811621-2-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220401152424.3811621-1-a.lauterer@proxmox.com>
References: <20220401152424.3811621-1-a.lauterer@proxmox.com>
Subject: [pve-devel] [RFC qemu-server] clone disk: fix #3970 catch same source and destination

In rare situations it can happen that the source and target paths are the
same, for example when the disk image is copied from one RBD storage to
another on a different Ceph cluster, but both pools have the same name. In
that case the clone operation copies the image onto itself, and the
destination volume ends up empty.

This patch does not solve the underlying issue, but it is a first step to
avoid potential data loss, for example when the 'delete source' option is
enabled as well.

We also need to delete the newly created image right away, because the
regular cleanup gets confused and tries to remove the source image instead.
That removal fails, leaving an orphaned image which cannot be removed
easily, since the same root cause (identical paths) falsely triggers the
'Drive::is_volume_in_use' check.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
We could think about leaving this in as a permanent safety check, since
AFAICT there should never be a legitimate situation where both paths are
the same.
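As a side note, here is a minimal standalone sketch (not PVE code) of how
two storage definitions backed by different Ceph clusters, but with
identically named pools, can resolve different volids to the same path.
The storage IDs, the pool name, and the resolve_path() helper below are
made up for illustration; in qemu-server the actual lookup is
PVE::Storage::path().

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical storage config: 'rbd-a' and 'rbd-b' live on different
# Ceph clusters, but both use a pool named 'mypool'.
my %pool_of = ('rbd-a' => 'mypool', 'rbd-b' => 'mypool');

# Stand-in for PVE::Storage::path(): the path is built from pool and
# image name only, so the cluster identity is lost and two clusters
# with equal pool names collide.
sub resolve_path {
    my ($volid) = @_;
    my ($storeid, $volname) = split /:/, $volid, 2;
    return "$pool_of{$storeid}/$volname";
}

my $src = resolve_path('rbd-a:vm-100-disk-0');
my $dst = resolve_path('rbd-b:vm-100-disk-0');
die "source and destination are the same!\n" if $src eq $dst;

This is exactly the comparison the hunk below adds before the copy starts.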
 PVE/QemuServer.pm | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6c80c0c..a1aa4f2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7633,6 +7633,17 @@ sub clone_disk {
 
     PVE::Storage::activate_volumes($storecfg, [$newvolid]);
 
+    my $src_path = PVE::Storage::path($storecfg, $drive->{file});
+    my $dst_path = PVE::Storage::path($storecfg, $newvolid);
+
+    if ($src_path eq $dst_path) {
+	# Delete the new volume right away. Doing it later will try to remove the old volume and
+	# fail. Currently the only known case is RBD. See bugs 3969 and 3970 for more details.
+	PVE::Storage::vdisk_free($storecfg, $newvolid);
+	pop @$newvollist;
+	die "Source and destination are the same!\n";
+    }
+
     if (drive_is_cloudinit($drive)) {
 	# when cloning multiple disks (e.g. during clone_vm) it might be the last disk
 	# if this is the case, we have to complete any block-jobs still there from
@@ -7650,8 +7661,6 @@ sub clone_disk {
 	# the relevant data on the efidisk may be smaller than the source
 	# e.g. on RBD/ZFS, so we use dd to copy only the amount
 	# that is given by the OVMF_VARS.fd
-	my $src_path = PVE::Storage::path($storecfg, $drive->{file});
-	my $dst_path = PVE::Storage::path($storecfg, $newvolid);
 
 	# better for Ceph if block size is not too small, see bug #3324
 	my $bs = 1024*1024;
-- 
2.30.2