From: Aaron Lauterer <a.lauterer@proxmox.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>,
"Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [RFC qemu-server] clone disk: fix #3970 catch same source and destination
Date: Tue, 5 Apr 2022 09:28:30 +0200 [thread overview]
Message-ID: <0b5a5b19-df4c-0289-4b13-8443f7f7635d@proxmox.com> (raw)
In-Reply-To: <1649084728.7liqfd2nmz.astroid@nora.none>
On 4/4/22 17:26, Fabian Grünbichler wrote:
> On April 1, 2022 5:24 pm, Aaron Lauterer wrote:
>> In rare situations, it could happen that the source and target paths are
>> the same. For example, if the disk image is to be copied from one RBD
>> storage to another one that sits on a different Ceph cluster, but both
>> pools have the same name.
>>
>> In this situation, the clone operation will copy the image onto itself,
>> and one ends up with an empty destination volume, because the newly
>> allocated image on the target storage is never written to.
>>
>> This patch does not solve the underlying issue, but is a first step to
>> avoid potential data loss, for example when the 'delete source' option
>> is enabled as well.
>>
>> We also need to delete the newly created image right away, because the
>> regular cleanup gets confused and tries to remove the source image. That
>> removal will fail, and we are left with an orphaned image which cannot be
>> removed easily, because the same underlying root cause (same path) falsely
>> triggers the "Drive::is_volume_in_use" check.
>
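For reference, the guard added in this patch boils down to something like
the following (simplified sketch, not the exact hunk; $storecfg, $drive and
$newvolid are placeholder names):

    my $src_path = PVE::Storage::path($storecfg, $drive->{file});
    my $dst_path = PVE::Storage::path($storecfg, $newvolid);

    if ($src_path eq $dst_path) {
        # free the freshly allocated image right away; the regular cleanup
        # would trip over is_volume_in_use because of the identical path
        eval { PVE::Storage::vdisk_free($storecfg, $newvolid) };
        warn $@ if $@;
        die "source and destination image path '$src_path' are identical,"
            ." refusing to clone to avoid data loss\n";
    }
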
> isn't this technically - just like for the container case - a problem in
> general, not just for cloning a disk? I haven't tested this in practice,
> but since you already have the reproducing setup ;)
>
> e.g., given the following:
> - storage A, krbd, cluster A, pool foo
> - storage B, krbd, cluster B, pool foo
> - VM 123, with scsi0: A:vm-123-disk-0 and no volumes on B
> - qm set 123 -scsi1: B:1
>
> next free slot on B is 'vm-123-disk-0', which will be allocated. mapping
> will skip the map part, since the RBD path already exists (provided
> scsi0's volume is already activated). the returned path will point to
> the mapped blockdev corresponding to A:vm-123-disk-0, not B:..
>
> guest writes to scsi1, likely corrupting whatever is on scsi0, since
> most things that tend to get put on guest disks are not
> multi-writer-safe (or something along the way notices it?)
>
> if the above is the case, it might actually be prudent to just put the
> check from your other patch into RBDPlugin.pm's alloc method (and
> clone and rename?), since we'd want to block any allocations on affected
> systems?
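
For reference, the krbd activation behaves roughly like this (simplified
sketch from memory, not the exact code in RBDPlugin.pm):

    my $kerneldev = "/dev/rbd/$scfg->{pool}/$name";
    if (! -b $kerneldev) {
        # only in this branch does 'rbd map' actually get run
    }
    # if cluster A's identically named image is already mapped, the
    # blockdev exists, the map is skipped, and the path returned here
    # points at cluster A's disk instead of the newly allocated one
    return $kerneldev;
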
Tested it and yep... unfortunately the wrong disk is attached. I am going to
implement the check in RBDPlugin.pm.
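
Roughly what I have in mind for alloc_image (only a sketch; the device path
layout and the exact place for the check may still change, and clone/rename
probably need the same treatment):

    sub alloc_image {
        my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;

        # ... pick $name via the usual find_free_diskname logic if unset ...

        # if a blockdev with this pool/image name already exists, it most
        # likely belongs to another cluster using the same pool name, so
        # refuse the allocation instead of handing out the wrong disk later
        my $kerneldev = "/dev/rbd/$scfg->{pool}/$name";
        die "krbd device path '$kerneldev' already exists, image name is"
            ." ambiguous between clusters - refusing to allocate\n"
            if -e $kerneldev;

        # ... existing 'rbd create' call continues here ...
    }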
Thread overview:
2022-04-01 15:24 [pve-devel] [RFC container] alloc disk: fix #3970 avoid ambiguous rbd image path Aaron Lauterer
2022-04-01 15:24 ` [pve-devel] [RFC qemu-server] clone disk: fix #3970 catch same source and destination Aaron Lauterer
2022-04-04 15:26 ` Fabian Grünbichler
2022-04-05 7:28 ` Aaron Lauterer [this message]
2022-04-04 15:44 ` [pve-devel] [RFC container] alloc disk: fix #3970 avoid ambiguous rbd image path Fabian Grünbichler