From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: "Aaron Lauterer" <a.lauterer@proxmox.com>,
	"Proxmox VE development discussion" <pve-devel@lists.proxmox.com>,
	"Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path
Date: Mon, 11 Apr 2022 14:17:17 +0200	[thread overview]
Message-ID: <60e67e63-5229-47ce-1bc2-bca0c87ce3d3@proxmox.com> (raw)
In-Reply-To: <ad6c679b-fba7-8c2c-cb13-61199b143863@proxmox.com>

On 11.04.22 11:08, Aaron Lauterer wrote:
> On 4/11/22 09:39, Thomas Lamprecht wrote:
>> The obvious question to me is: why bother with this workaround when we can
>> make udev create the symlink now already?
>>
>> Patching the rules file and/or binary shipped by ceph-common, or shipping our
>> own such script + rule, would seem relatively simple.
>
> The thinking was to implement a stop-gap to have more time to consider a
> solution that we can upstream.

If the stop-gap were trivial and non-intrusive (compared to the Correct
Solution™), or if it were really pressing, sure, but IMO the situation here
isn't exactly /that/ clear-cut.

> 
> Fabian might have some more thoughts on it but yeah, right now we could patch
> the udev rules and the ceph-rbdnamer which is called by the rule to create
> the current paths and then additionally the cluster specific ones.
> Unfortunately, it seems like the unwieldy cluster fsid is the only identifier
> we have for the cluster.

The unwieldiness doesn't really matter for us here, though; this is just an
internal check.
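For reference, the cluster fsid can be queried with `ceph fsid` on a running
cluster, or parsed out of the config file. A minimal sketch of the latter,
using a made-up fsid and a temporary file instead of the real
/etc/ceph/ceph.conf:

```shell
# Sketch: extract the fsid from a ceph.conf-style file.
# The fsid value and file below are illustrative only.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[global]
fsid = 12345678-1234-1234-1234-123456789abc
EOF

# Split on '=' with optional surrounding spaces, print the value field.
FSID=$(awk -F' *= *' '/^[[:space:]]*fsid[[:space:]]*=/ {print $2}' "$CONF")
echo "$FSID"
rm -f "$CONF"
```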

> 
> Some more (smaller) changes might be necessary, if the implementation we
> manage to upstream will be a bit different. But that should not be much of an
> issue AFAICT.

We can always ship whatever downstream solution we want and sync up again on
the next major release, so that's not a real problem.

FWIW, with the storage repo getting the following patch the symlinks get created
(this may need a trigger for reloading udev, or manually running `udevadm control -R`).

We'd then only need to prioritize the /dev/rbd-pve/$fsid/... paths first;
do you have time to give that a go?
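The intended precedence could look roughly like the following sketch (fsid,
pool and image names are all made up for illustration, not taken from a real
setup):

```shell
# Sketch: prefer the unambiguous per-cluster symlink if udev created it,
# otherwise fall back to the legacy /dev/rbd/... path.
fsid="1a2b3c4d-0000-1111-2222-333344445555"   # illustrative fsid
pool="rbd"
image="vm-100-disk-0"

preferred="/dev/rbd-pve/$fsid/$pool/$image"
legacy="/dev/rbd/$pool/$image"

if [ -e "$preferred" ]; then
    dev="$preferred"
else
    dev="$legacy"
fi
echo "$dev"
```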

diff --git a/Makefile b/Makefile
index 431db16..029b586 100644
--- a/Makefile
+++ b/Makefile
@@ -41,6 +41,7 @@ install: PVE pvesm.1 pvesm.bash-completion pvesm.zsh-completion
        install -d ${DESTDIR}${SBINDIR}
        install -m 0755 pvesm ${DESTDIR}${SBINDIR}
        make -C PVE install
+       make -C udev-rbd install
        install -d ${DESTDIR}/usr/share/man/man1
        install -m 0644 pvesm.1 ${DESTDIR}/usr/share/man/man1/
        gzip -9 -n ${DESTDIR}/usr/share/man/man1/pvesm.1
diff --git a/udev-rbd/50-rbd-pve.rules b/udev-rbd/50-rbd-pve.rules
new file mode 100644
index 0000000..79432df
--- /dev/null
+++ b/udev-rbd/50-rbd-pve.rules
@@ -0,0 +1,2 @@
+KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", PROGRAM="/usr/libexec/ceph-rbdnamer-pve %k", SYMLINK+="rbd-pve/%c"
+KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="partition", PROGRAM="/usr/libexec/ceph-rbdnamer-pve %k", SYMLINK+="rbd-pve/%c-part%n"
diff --git a/udev-rbd/Makefile b/udev-rbd/Makefile
new file mode 100644
index 0000000..065933b
--- /dev/null
+++ b/udev-rbd/Makefile
@@ -0,0 +1,21 @@
+PACKAGE=libpve-storage-perl
+
+DESTDIR=
+PREFIX=/usr
+LIBEXECDIR=${PREFIX}/libexec
+LIBDIR=${PREFIX}/lib
+
+all:
+
+.PHONY: install
+install: 50-rbd-pve.rules ceph-rbdnamer-pve
+       install -d ${DESTDIR}${LIBEXECDIR}
+       install -m 0755 ceph-rbdnamer-pve ${DESTDIR}${LIBEXECDIR}
+       install -d ${DESTDIR}${LIBDIR}/udev/rules.d
+       install -m 0644 50-rbd-pve.rules ${DESTDIR}${LIBDIR}/udev/rules.d
+
+.PHONY: clean
+clean:
+
+.PHONY: distclean
+distclean: clean
diff --git a/udev-rbd/ceph-rbdnamer-pve b/udev-rbd/ceph-rbdnamer-pve
new file mode 100755
index 0000000..23dd626
--- /dev/null
+++ b/udev-rbd/ceph-rbdnamer-pve
@@ -0,0 +1,24 @@
+#!/bin/sh
+
+DEV=$1
+NUM=`echo $DEV | sed 's#p.*##g; s#[a-z]##g'`
+POOL=`cat /sys/devices/rbd/$NUM/pool`
+CLUSTER_FSID=`cat /sys/devices/rbd/$NUM/cluster_fsid`
+
+if [ -f /sys/devices/rbd/$NUM/pool_ns ]; then
+    NAMESPACE=`cat /sys/devices/rbd/$NUM/pool_ns`
+else
+    NAMESPACE=""
+fi
+IMAGE=`cat /sys/devices/rbd/$NUM/name`
+SNAP=`cat /sys/devices/rbd/$NUM/current_snap`
+
+echo -n "/$CLUSTER_FSID/$POOL"
+
+if [ -n "$NAMESPACE" ]; then
+    echo -n "/$NAMESPACE"
+fi
+echo -n "/$IMAGE"
+if [ "$SNAP" != "-" ]; then
+    echo -n "@$SNAP"
+fi

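To sanity-check the namer's output format without a mapped rbd device, the
same logic can be exercised against a mocked sysfs tree (all values below are
made up; the real script reads from /sys/devices/rbd):

```shell
# Sketch: replicate the ceph-rbdnamer-pve path logic against a mock tree.
MOCK=$(mktemp -d)
mkdir -p "$MOCK/rbd/0"
echo "rbd" > "$MOCK/rbd/0/pool"
echo "1a2b3c4d-0000-1111-2222-333344445555" > "$MOCK/rbd/0/cluster_fsid"
echo "vm-100-disk-0" > "$MOCK/rbd/0/name"
echo "-" > "$MOCK/rbd/0/current_snap"   # "-" means: not a snapshot mapping

NUM=0
POOL=$(cat "$MOCK/rbd/$NUM/pool")
CLUSTER_FSID=$(cat "$MOCK/rbd/$NUM/cluster_fsid")
IMAGE=$(cat "$MOCK/rbd/$NUM/name")
SNAP=$(cat "$MOCK/rbd/$NUM/current_snap")

PATH_OUT="/$CLUSTER_FSID/$POOL/$IMAGE"
if [ "$SNAP" != "-" ]; then
    PATH_OUT="$PATH_OUT@$SNAP"
fi
echo "$PATH_OUT"
rm -rf "$MOCK"
```

With the udev rule above, this output becomes the `%c` substitution, so the
resulting symlink would live at /dev/rbd-pve/&lt;fsid&gt;/&lt;pool&gt;/&lt;image&gt;.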
Thread overview: 7+ messages
2022-04-06 11:46 Aaron Lauterer
2022-04-08  8:04 ` Fabian Grünbichler
2022-04-11  7:39   ` Thomas Lamprecht
2022-04-11  9:08     ` Aaron Lauterer
2022-04-11 12:17       ` Thomas Lamprecht [this message]
2022-04-11 14:49         ` Aaron Lauterer
2022-04-12  8:35           ` Thomas Lamprecht
