Message-ID: <0b5a5b19-df4c-0289-4b13-8443f7f7635d@proxmox.com>
Date: Tue, 5 Apr 2022 09:28:30 +0200
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Fabian Grünbichler <f.gruenbichler@proxmox.com>
References: <20220401152424.3811621-1-a.lauterer@proxmox.com>
 <20220401152424.3811621-2-a.lauterer@proxmox.com>
 <1649084728.7liqfd2nmz.astroid@nora.none>
From: Aaron Lauterer <a.lauterer@proxmox.com>
In-Reply-To: <1649084728.7liqfd2nmz.astroid@nora.none>
Subject: Re: [pve-devel] [RFC qemu-server] clone disk: fix #3970 catch same
 source and destination

On 4/4/22 17:26, Fabian Grünbichler wrote:
> On April 1, 2022 5:24 pm, Aaron Lauterer wrote:
>> In rare situations, it can happen that the source and target paths are
>> the same. For example, if the disk image is to be copied from one RBD
>> storage to another that is backed by a different Ceph cluster but uses
>> the same pool name.
>>
>> In this situation, the clone operation will clone the image onto itself,
>> and one ends up with an empty destination volume.
>>
>> This patch does not solve the underlying issue, but is a first step to
>> avoid potential data loss, for example when the 'delete source' option
>> is enabled as well.
>>
>> We also need to delete the newly created image right away, because the
>> regular cleanup gets confused and tries to remove the source image
>> instead. That removal fails and leaves an orphaned image which cannot be
>> removed easily, because the same underlying root cause (identical paths)
>> falsely triggers the "Drive::is_volume_in_use" check.
> 
> isn't this technically - just like for the container case - a problem in
> general, not just for cloning a disk? I haven't tested this in practice,
> but since you already have the reproducing setup, maybe you can check ;)
> 
> e.g., given the following:
> - storage A, krbd, cluster A, pool foo
> - storage B, krbd, cluster B, pool foo
> - VM 123, with scsi0: A:vm-123-disk-0 and no volumes on B
> - qm set 123 -scsi1 B:1
> 
> the next free slot on B is 'vm-123-disk-0', which will be allocated. the
> mapping step will then be skipped, since the RBD blockdev path already
> exists (provided scsi0's volume is already activated). the returned path
> will point to the mapped blockdev corresponding to A:vm-123-disk-0, not
> B:..
> 
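
(Exactly what I am seeing here. To spell out the mechanism: with krbd
the blockdev path is derived from pool and image name only, the cluster
does not factor into it at all. Purely illustrative sketch, not the
actual RBDPlugin.pm code:)

# the krbd blockdev path only encodes pool and image name - which
# cluster the image lives on is not part of it
sub krbd_path {
    my ($pool, $volname) = @_;
    return "/dev/rbd/$pool/$volname";
}

my $path_a = krbd_path('foo', 'vm-123-disk-0'); # storage A, cluster A
my $path_b = krbd_path('foo', 'vm-123-disk-0'); # storage B, cluster B

# identical paths: activating B's volume finds an already existing
# blockdev (A's mapping), skips 'rbd map' and returns A's device
print "collision\n" if $path_a eq $path_b;
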
> the guest then writes to scsi1, likely corrupting whatever is on scsi0,
> since most things that end up on guest disks are not multi-writer-safe
> (or does something along the way notice it?)
> 
> if the above is the case, it might actually be prudent to just put the
> check from your other patch into RBDPlugin.pm's alloc method (and
> clone and rename?) since we'd want to block any allocations on affected
> systems?

Tested it and yep... unfortunately the wrong disk is attached. I am
going to implement the check in RBDPlugin.pm.
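
Probably something along these lines - just a sketch of the idea, the
helper name is made up and the final patch may well look different:

# in PVE::Storage::RBDPlugin - called from alloc_image() once the free
# volume name is known, before the image gets created (and similarly
# from clone_image/rename_volume):
sub assert_blockdev_free {
    my ($scfg, $volname) = @_;
    # only with krbd do the mapped blockdev paths collide
    return if !$scfg->{krbd};
    my $path = "/dev/rbd/$scfg->{pool}/$volname";
    die "refusing to allocate '$volname': '$path' already exists - most"
        . " likely an image from another cluster with the same pool name"
        . " is mapped\n" if -e $path;
}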