public inbox for pve-devel@lists.proxmox.com
From: Aaron Lauterer <a.lauterer@proxmox.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>,
	"Thomas Lamprecht" <t.lamprecht@proxmox.com>,
	"Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path
Date: Mon, 11 Apr 2022 11:08:38 +0200	[thread overview]
Message-ID: <ad6c679b-fba7-8c2c-cb13-61199b143863@proxmox.com> (raw)
In-Reply-To: <2a67ca76-f10f-5c2f-44a7-9d9da0c36c78@proxmox.com>



On 4/11/22 09:39, Thomas Lamprecht wrote:
> On 08.04.22 10:04, Fabian Grünbichler wrote:
>> On April 6, 2022 1:46 pm, Aaron Lauterer wrote:
>>> If two RBD storages use the same pool but connect to different
>>> clusters, we cannot tell to which cluster a mapped RBD image belongs
>>> if krbd is used. To avoid potential data loss, we need to verify that no
>>> other storage is configured that could have a volume mapped under the
>>> same path before we create the image.
>>>
>>> The ambiguous mapping is in
>>> /dev/rbd/<pool>/<ns>/<image> where the namespace <ns> is optional.
>>>
>>> Once we can tell the clusters apart in the mapping, we can remove these
>>> checks again.
>>>
>>> See bug #3969 for more information on the root cause.
>>>
>>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>>
>> Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>> Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>>
>> (small nit below, and given the rather heavy-handed approach a 2nd ack
>> might not hurt.. IMHO, a few easily fixable false-positives beat more
>> users actually running into this with move disk/volume and losing
>> data..)
> 
> The obvious question to me is: why bother with this workaround when we can
> make udev create the symlink now already?
> 
> Patching the rules file and/or binary shipped by ceph-common, or shipping our
> own such script + rule, would seem relatively simple.

The thinking was to implement a stopgap so we have more time to work out a solution that we can upstream.
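
For illustration (made-up storage names and addresses), the setup the check has to catch looks roughly like this: two krbd-backed RBD storages that use the same pool name but talk to different clusters, so both would map images under the same /dev/rbd/<pool>/<image> path:

    rbd: ceph-site-a
        pool rbd
        content images
        krbd 1
        monhost 192.0.2.10

    rbd: ceph-site-b
        pool rbd
        content images
        krbd 1
        monhost 198.51.100.10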

Fabian might have some more thoughts on it, but yes, right now we could patch the udev rule and the ceph-rbdnamer script it calls so that they create the current paths and, additionally, cluster-specific ones. Unfortunately, the unwieldy cluster fsid seems to be the only identifier we have for the cluster.
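
Very rough sketch of what that could look like -- from memory, not the exact rule ceph-common ships, and both the sysfs attribute name and the namespace handling are assumptions/glossed over:

    # 50-rbd.rules (sketch): keep the existing /dev/rbd/<pool>/[<ns>/]<image>
    # link and add /dev/rbd/<fsid>/<pool>/[<ns>/]<image> from a third output field
    # (the partition variant would need the same treatment)
    KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", PROGRAM="/usr/bin/ceph-rbdnamer %k", SYMLINK+="rbd/%c{1}/%c{2}", SYMLINK+="rbd/%c{3}/%c{1}/%c{2}"

    # ceph-rbdnamer (sketch): additionally print the fsid of the owning cluster,
    # assuming the rbd device exposes it via sysfs as cluster_fsid
    FSID="$(cat "/sys/devices/rbd/$NUM/cluster_fsid" 2>/dev/null)"
    echo -n "$POOL $IMAGE $FSID"

That would give an unambiguous /dev/rbd/<fsid>/<pool>/<ns>/<image> path per cluster, which is what the stopgap check could later be relaxed against.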

Some more (smaller) changes might be necessary if the implementation we manage to upstream ends up being a bit different, but that should not be much of an issue AFAICT.





Thread overview: 7+ messages
2022-04-06 11:46 Aaron Lauterer
2022-04-08  8:04 ` Fabian Grünbichler
2022-04-11  7:39   ` Thomas Lamprecht
2022-04-11  9:08     ` Aaron Lauterer [this message]
2022-04-11 12:17       ` Thomas Lamprecht
2022-04-11 14:49         ` Aaron Lauterer
2022-04-12  8:35           ` Thomas Lamprecht
