Message-ID: <2a67ca76-f10f-5c2f-44a7-9d9da0c36c78@proxmox.com>
Date: Mon, 11 Apr 2022 09:39:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:100.0) Gecko/20100101
 Thunderbird/100.0
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Fabian Grünbichler <f.gruenbichler@proxmox.com>
References: <20220406114657.452190-1-a.lauterer@proxmox.com>
 <1649404843.ds1yioa8qv.astroid@nora.none>
Content-Language: en-US
In-Reply-To: <1649404843.ds1yioa8qv.astroid@nora.none>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: Re: [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970
 avoid ambiguous rbd path

On 08.04.22 10:04, Fabian Grünbichler wrote:
> On April 6, 2022 1:46 pm, Aaron Lauterer wrote:
>> If two RBD storages use the same pool, but connect to different
>> clusters, we cannot say to which cluster the mapped RBD image belongs to
>> if krbd is used. To avoid potential data loss, we need to verify that no
>> other storage is configured that could have a volume mapped under the
>> same path before we create the image.
>>
>> The ambiguous mapping is in
>> /dev/rbd/<pool>/<ns>/<image> where the namespace <ns> is optional.
>>
>> Once we can tell the clusters apart in the mapping, we can remove these
>> checks again.
>>
>> See bug #3969 for more information on the root cause.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>
> Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>
> (small nit below, and given the rather heavy-handed approach a 2nd ack
> might not hurt.. IMHO, a few easily fixable false-positives beat more
> users actually running into this with move disk/volume and losing
> data..)
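
For context, the check discussed above boils down to scanning all other
configured RBD storages for one whose krbd mapping would resolve to the same
/dev/rbd/<pool>/<ns>/<image> path before creating the image. A rough Python
sketch of that idea follows; the config shape and names are made up for
illustration, the actual code is the Perl RBD plugin in pve-storage:

def rbd_path(pool, namespace, image):
    # Path under which a mapped krbd image shows up, namespace optional.
    parts = ["/dev/rbd", pool] + ([namespace] if namespace else []) + [image]
    return "/".join(parts)

def assert_unambiguous(storages, storeid, image):
    # Refuse to allocate if another RBD storage could map the same path.
    own = storages[storeid]
    own_path = rbd_path(own["pool"], own.get("namespace", ""), image)
    for sid, cfg in storages.items():
        if sid == storeid or cfg.get("type") != "rbd":
            continue
        if rbd_path(cfg["pool"], cfg.get("namespace", ""), image) == own_path:
            raise RuntimeError(
                f"refusing to create '{image}': storage '{sid}' could map "
                f"the same device path '{own_path}'")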

The obvious question to me is: why bother with this workaround when we can
make udev create the symlink now already?

Patching the rules file and/or binary shipped by ceph-common, or shipping our
own such script + rule, would seem relatively simple.
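
For illustration, a rough sketch of what such a shipped helper could look
like, with a matching rule invoking it. The sysfs attribute names, the helper
name (pve-rbdnamer) and the symlink prefix (rbd-pve) are assumptions here,
not the actual ceph-common files:

#!/usr/bin/env python3
# Hypothetical udev helper: print a cluster-qualified path component for a
# mapped rbd block device, so a rule can create an unambiguous symlink, e.g.
#   /dev/rbd-pve/<cluster fsid>/<pool>/<namespace>/<image>
import sys
from pathlib import Path

def read_attr(dev_id, attr):
    p = Path("/sys/bus/rbd/devices") / dev_id / attr
    return p.read_text().strip() if p.exists() else ""

def main():
    if len(sys.argv) != 2:
        return 1
    dev_id = sys.argv[1].removeprefix("rbd")   # udev passes %k, e.g. "rbd0"
    fsid = read_attr(dev_id, "cluster_fsid")   # assumed attribute name
    pool = read_attr(dev_id, "pool")
    ns = read_attr(dev_id, "namespace")        # optional
    image = read_attr(dev_id, "name")
    if not (fsid and pool and image):
        return 1
    # A rule shipped next to it could then look roughly like:
    #   KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", \
    #     PROGRAM="/usr/bin/pve-rbdnamer %k", SYMLINK+="rbd-pve/%c"
    print("/".join([fsid, pool] + ([ns] if ns else []) + [image]))
    return 0

if __name__ == "__main__":
    sys.exit(main())

Once the symlink encodes the cluster fsid like that, the path is unambiguous
and the extra allocation-time checks could be removed again, as the commit
message itself notes.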