public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path
@ 2022-04-06 11:46 Aaron Lauterer
  2022-04-08  8:04 ` Fabian Grünbichler
  0 siblings, 1 reply; 7+ messages in thread
From: Aaron Lauterer @ 2022-04-06 11:46 UTC (permalink / raw)
  To: pve-devel

If two RBD storages use the same pool, but connect to different
clusters, we cannot tell to which cluster the mapped RBD image belongs
if krbd is used. To avoid potential data loss, we need to verify that no
other storage is configured that could have a volume mapped under the
same path before we create the image.

The ambiguous mapping is in
/dev/rbd/<pool>/<ns>/<image> where the namespace <ns> is optional.
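
For example (hypothetical storage names and monitor address), two
definitions in storage.cfg such as

    rbd: ceph-internal
        pool rbd
        krbd 1

    rbd: ceph-external
        pool rbd
        monhost 192.0.2.1
        krbd 1

would both map 'vm-100-disk-0' to /dev/rbd/rbd/vm-100-disk-0, with
nothing in the path telling the two clusters apart.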

Once we can tell the clusters apart in the mapping, we can remove these
checks again.

See bug #3969 for more information on the root cause.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v1:
* fixed code style issues
* moved check to a helper function and call it from
  - alloc_image
  - clone_image
  - rename_volume
* rephrased error message with a link to the bugzilla issue

RFC:
* moved check to pve-storage since containers and VMs both have issues
  not just on a move or clone of the image, but also when creating a new
  volume
* reworked the checks: instead of large if conditions, we use
  PVE::Tools::safe_compare with comparison functions (see the sketch
  after this list)
* normalize monhost list to match correctly if the list is in different
  order
* add storage name to error message that triggered the checks
* ignore disabled storages
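
To illustrate the undef-aware comparison these checks rely on, here is
a minimal sketch (the exact return values of PVE::Tools::safe_compare
may differ; only "equal vs. not equal" matters for the checks):

    # 0 iff both values are undef, or both are defined and $cmp
    # considers them equal; non-zero otherwise
    sub safe_compare_sketch {
        my ($left, $right, $cmp) = @_;
        return 0 if !defined($left) && !defined($right);
        return 1 if !defined($left) || !defined($right);
        return $cmp->($left, $right);
    }

    my $cmp = sub { $_[0] cmp $_[1] };
    safe_compare_sketch(undef, undef, $cmp);  # 0 -> same (no namespace on either)
    safe_compare_sketch('ns1', undef, $cmp);  # 1 -> different
    safe_compare_sketch('ns1', 'ns1', $cmp);  # 0 -> same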

 PVE/Storage/RBDPlugin.pm | 45 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index e287e28..2a4e1a8 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -127,6 +127,45 @@ my $krbd_feature_update = sub {
     }
 };
 
+# check if another rbd storage with the same pool name but a different
+# cluster exists. If so, allocating a new volume can potentially be
+# dangerous because the RBD mapping exposes it in an ambiguous way under
+# /dev/rbd/<pool>/<ns>/<image>. Without any information about which
+# cluster it belongs to, we cannot clearly determine which image we
+# access and could potentially use the wrong one. See
+# https://bugzilla.proxmox.com/show_bug.cgi?id=3969 and
+# https://bugzilla.proxmox.com/show_bug.cgi?id=3970
+# TODO: remove these checks once #3969 is fixed and we can clearly tell
+# to which cluster an image belongs
+my $check_blockdev_collision = sub {
+    my ($storeid, $scfg) = @_;
+
+    my $storecfg = PVE::Storage::config();
+    foreach my $store  (keys %{$storecfg->{ids}}) {
+	next if $store eq $storeid;
+
+	my $checked_scfg = $storecfg->{ids}->{$store};
+
+	next if $checked_scfg->{type} ne 'rbd';
+	next if $checked_scfg->{disable};
+	next if $scfg->{pool} ne $checked_scfg->{pool};
+
+	my $normalize_mons = sub { return join(';', sort( PVE::Tools::split_list(shift))) };
+	my $cmp_mons = sub { $normalize_mons->($_[0]) cmp $normalize_mons->($_[1]) };
+	my $cmp = sub { $_[0] cmp $_[1] };
+
+	# internal and internal, or external and external with identical monitors
+	# => same cluster
+	next if PVE::Tools::safe_compare($scfg->{monhost}, $checked_scfg->{monhost}, $cmp_mons) == 0;
+
+	# different namespaces => no clash possible
+	next if PVE::Tools::safe_compare($scfg->{namespace}, $checked_scfg->{namespace}, $cmp) != 0;
+
+	die "Cannot create volume on '$storeid' - RBD blockdev paths shared with storage '$store'. ".
+	    "See https://bugzilla.proxmox.com/show_bug.cgi?id=3969 for more details.\n";
+    }
+};
+
 sub run_rbd_command {
     my ($cmd, %args) = @_;
 
@@ -475,6 +514,8 @@ sub clone_image {
     my $snap = '__base__';
     $snap = $snapname if length $snapname;
 
+    $check_blockdev_collision->($storeid, $scfg);
+
     my ($vtype, $basename, $basevmid, undef, undef, $isBase) =
         $class->parse_volname($volname);
 
@@ -516,6 +557,8 @@ sub alloc_image {
     die "illegal name '$name' - should be 'vm-$vmid-*'\n"
 	if  $name && $name !~ m/^vm-$vmid-/;
 
+    $check_blockdev_collision->($storeid, $scfg);
+
     $name = $class->find_free_diskname($storeid, $scfg, $vmid) if !$name;
 
     my @options = (
@@ -769,6 +812,8 @@ sub volume_has_feature {
 sub rename_volume {
     my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
 
+    $check_blockdev_collision->($storeid, $scfg);
+
     my (
 	undef,
 	$source_image,
-- 
2.30.2


* Re: [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path
  2022-04-06 11:46 [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path Aaron Lauterer
@ 2022-04-08  8:04 ` Fabian Grünbichler
  2022-04-11  7:39   ` Thomas Lamprecht
  0 siblings, 1 reply; 7+ messages in thread
From: Fabian Grünbichler @ 2022-04-08  8:04 UTC (permalink / raw)
  To: Proxmox VE development discussion

On April 6, 2022 1:46 pm, Aaron Lauterer wrote:
> If two RBD storages use the same pool, but connect to different
> clusters, we cannot tell to which cluster the mapped RBD image belongs
> if krbd is used. To avoid potential data loss, we need to verify that no
> other storage is configured that could have a volume mapped under the
> same path before we create the image.
> 
> The ambiguous mapping is in
> /dev/rbd/<pool>/<ns>/<image> where the namespace <ns> is optional.
> 
> Once we can tell the clusters apart in the mapping, we can remove these
> checks again.
> 
> See bug #3969 for more information on the root cause.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>

Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>

(small nit below, and given the rather heavy-handed approach a 2nd ack 
might not hurt.. IMHO, a few easily fixable false-positives beat more 
users actually running into this with move disk/volume and losing 
data..)

> ---
> changes since v1:
> * fixed code style issues
> * moved check to a helper function and call it from
>   - alloc_image
>   - clone_image
>   - rename_volume
> * rephrased error message with a link to the bugzilla issue
> 
> RFC:
> * moved check to pve-storage since containers and VMs both have issues
>   not just on a move or clone of the image, but also when creating a new
>   volume
> * reworked the checks: instead of large if conditions, we use
>   PVE::Tools::safe_compare with comparison functions
> * normalize monhost list to match correctly if the list is in different
>   order
> * add storage name to error message that triggered the checks
> * ignore disabled storages
> 
>  PVE/Storage/RBDPlugin.pm | 45 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 45 insertions(+)
> 
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index e287e28..2a4e1a8 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -127,6 +127,45 @@ my $krbd_feature_update = sub {
>      }
>  };
>  
> +# check if another rbd storage with the same pool name but a different
> +# cluster exists. If so, allocating a new volume can potentially be
> +# dangerous because the RBD mapping exposes it in an ambiguous way under
> +# /dev/rbd/<pool>/<ns>/<image>. Without any information about which
> +# cluster it belongs to, we cannot clearly determine which image we
> +# access and could potentially use the wrong one. See
> +# https://bugzilla.proxmox.com/show_bug.cgi?id=3969 and
> +# https://bugzilla.proxmox.com/show_bug.cgi?id=3970
> +# TODO: remove these checks once #3969 is fixed and we can clearly tell
> +# to which cluster an image belongs
> +my $check_blockdev_collision = sub {
> +    my ($storeid, $scfg) = @_;

parameter order is reversed compared to our pve-storage convention, 
might be worthy of a fixup on application to match the rest:

my ($scfg, $storeid) = @_;

> +
> +    my $storecfg = PVE::Storage::config();
> +    foreach my $store  (keys %{$storecfg->{ids}}) {
> +	next if $store eq $storeid;
> +
> +	my $checked_scfg = $storecfg->{ids}->{$store};
> +
> +	next if $checked_scfg->{type} ne 'rbd';
> +	next if $checked_scfg->{disable};
> +	next if $scfg->{pool} ne $checked_scfg->{pool};
> +
> +	my $normalize_mons = sub { return join(';', sort( PVE::Tools::split_list(shift))) };
> +	my $cmp_mons = sub { $normalize_mons->($_[0]) cmp $normalize_mons->($_[1]) };
> +	my $cmp = sub { $_[0] cmp $_[1] };
> +
> +	# internal and internal, or external and external with identical monitors
> +	# => same cluster
> +	next if PVE::Tools::safe_compare($scfg->{monhost}, $checked_scfg->{monhost}, $cmp_mons) == 0;
> +
> +	# different namespaces => no clash possible
> +	next if PVE::Tools::safe_compare($scfg->{namespace}, $checked_scfg->{namespace}, $cmp) != 0;
> +
> +	die "Cannot create volume on '$storeid' - RBD blockdev paths shared with storage '$store'. ".
> +	    "See https://bugzilla.proxmox.com/show_bug.cgi?id=3969 for more details.\n";
> +    }
> +};
> +
>  sub run_rbd_command {
>      my ($cmd, %args) = @_;
>  
> @@ -475,6 +514,8 @@ sub clone_image {
>      my $snap = '__base__';
>      $snap = $snapname if length $snapname;
>  
> +    $check_blockdev_collision->($storeid, $scfg);
> +
>      my ($vtype, $basename, $basevmid, undef, undef, $isBase) =
>          $class->parse_volname($volname);
>  
> @@ -516,6 +557,8 @@ sub alloc_image {
>      die "illegal name '$name' - should be 'vm-$vmid-*'\n"
>  	if  $name && $name !~ m/^vm-$vmid-/;
>  
> +    $check_blockdev_collision->($storeid, $scfg);
> +
>      $name = $class->find_free_diskname($storeid, $scfg, $vmid) if !$name;
>  
>      my @options = (
> @@ -769,6 +812,8 @@ sub volume_has_feature {
>  sub rename_volume {
>      my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
>  
> +    $check_blockdev_collision->($storeid, $scfg);
> +
>      my (
>  	undef,
>  	$source_image,
> -- 
> 2.30.2
> 

* Re: [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path
  2022-04-08  8:04 ` Fabian Grünbichler
@ 2022-04-11  7:39   ` Thomas Lamprecht
  2022-04-11  9:08     ` Aaron Lauterer
  0 siblings, 1 reply; 7+ messages in thread
From: Thomas Lamprecht @ 2022-04-11  7:39 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 08.04.22 10:04, Fabian Grünbichler wrote:
> On April 6, 2022 1:46 pm, Aaron Lauterer wrote:
>> If two RBD storages use the same pool, but connect to different
>> clusters, we cannot tell to which cluster the mapped RBD image belongs
>> if krbd is used. To avoid potential data loss, we need to verify that no
>> other storage is configured that could have a volume mapped under the
>> same path before we create the image.
>>
>> The ambiguous mapping is in
>> /dev/rbd/<pool>/<ns>/<image> where the namespace <ns> is optional.
>>
>> Once we can tell the clusters apart in the mapping, we can remove these
>> checks again.
>>
>> See bug #3969 for more information on the root cause.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> 
> Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> 
> (small nit below, and given the rather heavy-handed approach a 2nd ack 
> might not hurt.. IMHO, a few easily fixable false-positives beat more 
> users actually running into this with move disk/volume and losing 
> data..)

The obvious question to me is: why bother with this workaround when we can
make udev create the symlink now already?

Patching the rules file and/or binary shipped by ceph-common, or shipping our
own such script + rule, would seem relatively simple.


* Re: [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path
  2022-04-11  7:39   ` Thomas Lamprecht
@ 2022-04-11  9:08     ` Aaron Lauterer
  2022-04-11 12:17       ` Thomas Lamprecht
  0 siblings, 1 reply; 7+ messages in thread
From: Aaron Lauterer @ 2022-04-11  9:08 UTC (permalink / raw)
  To: Proxmox VE development discussion, Thomas Lamprecht,
	Fabian Grünbichler



On 4/11/22 09:39, Thomas Lamprecht wrote:
> On 08.04.22 10:04, Fabian Grünbichler wrote:
>> On April 6, 2022 1:46 pm, Aaron Lauterer wrote:
>>> If two RBD storages use the same pool, but connect to different
>>> clusters, we cannot tell to which cluster the mapped RBD image belongs
>>> if krbd is used. To avoid potential data loss, we need to verify that no
>>> other storage is configured that could have a volume mapped under the
>>> same path before we create the image.
>>>
>>> The ambiguous mapping is in
>>> /dev/rbd/<pool>/<ns>/<image> where the namespace <ns> is optional.
>>>
>>> Once we can tell the clusters apart in the mapping, we can remove these
>>> checks again.
>>>
>>> See bug #3969 for more information on the root cause.
>>>
>>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>>
>> Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>> Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>>
>> (small nit below, and given the rather heavy-handed approach a 2nd ack
>> might not hurt.. IMHO, a few easily fixable false-positives beat more
>> users actually running into this with move disk/volume and losing
>> data..)
> 
> The obvious question to me is: why bother with this workaround when we can
> make udev create the symlink now already?
> 
> Patching the rules file and/or binary shipped by ceph-common, or shipping our
> own such script + rule, would seem relatively simple.
The thinking was to implement a stop-gap to have more time to consider a solution that we can upstream.

Fabian might have some more thoughts on it, but yeah, right now we could patch the udev rules and the ceph-rbdnamer (which is called by the rule) to create the current paths and, additionally, the cluster-specific ones. Unfortunately, it seems like the unwieldy cluster fsid is the only identifier we have for the cluster.
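
For an already mapped device, the kernel exposes that fsid via sysfs,
e.g. (illustrative device number and value):

    $ cat /sys/devices/rbd/0/cluster_fsid
    ce99d398-91ab-4667-b4f2-307ba0bec358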

Some more (smaller) changes might be necessary if the implementation we manage to upstream ends up being a bit different. But that should not be much of an issue AFAICT.


* Re: [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path
  2022-04-11  9:08     ` Aaron Lauterer
@ 2022-04-11 12:17       ` Thomas Lamprecht
  2022-04-11 14:49         ` Aaron Lauterer
  0 siblings, 1 reply; 7+ messages in thread
From: Thomas Lamprecht @ 2022-04-11 12:17 UTC (permalink / raw)
  To: Aaron Lauterer, Proxmox VE development discussion,
	Fabian Grünbichler

On 11.04.22 11:08, Aaron Lauterer wrote:
> On 4/11/22 09:39, Thomas Lamprecht wrote:
>> The obvious question to me is: why bother with this workaround when we can
>> make udev create the symlink now already?
>>
>> Patching the rules file and/or binary shipped by ceph-common, or shipping our
>> own such script + rule, would seem relatively simple.
>
> The thinking was to implement a stop-gap to have more time to consider a
> solution that we can upstream.

If the stop-gap were trivial and non-intrusive (compared to the Correct
Solution™), or if it were really pressing, sure, but IMO the situation here
isn't exactly /that/ clear-cut.

> 
> Fabian might have some more thoughts on it, but yeah, right now we could patch
> the udev rules and the ceph-rbdnamer (which is called by the rule) to create
> the current paths and, additionally, the cluster-specific ones.
> Unfortunately, it seems like the unwieldy cluster fsid is the only identifier
> we have for the cluster.

The unwieldiness doesn't really matter for us here, though; this is just an
internal check.

> 
> Some more (smaller) changes might be necessary if the implementation we
> manage to upstream ends up being a bit different. But that should not be much
> of an issue AFAICT.

We can always ship whatever downstream solution we want and sync up on a
major release, so not a real problem.

FWIW, with storage getting the following patch, the symlinks get created (may
need a trigger for reloading udev, or manually running `udevadm control -R`).
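
Something along these lines should pick up the new rule for already
mapped devices (standard udev tooling, untested here):

    udevadm control --reload
    udevadm trigger --subsystem-match=block --action=change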

We'd only need to adapt the checks to prioritize /dev/rbd-pve/$fsid/... paths
first; do you have time to give that a go?

diff --git a/Makefile b/Makefile
index 431db16..029b586 100644
--- a/Makefile
+++ b/Makefile
@@ -41,6 +41,7 @@ install: PVE pvesm.1 pvesm.bash-completion pvesm.zsh-completion
        install -d ${DESTDIR}${SBINDIR}
        install -m 0755 pvesm ${DESTDIR}${SBINDIR}
        make -C PVE install
+       make -C udev-rbd install
        install -d ${DESTDIR}/usr/share/man/man1
        install -m 0644 pvesm.1 ${DESTDIR}/usr/share/man/man1/
        gzip -9 -n ${DESTDIR}/usr/share/man/man1/pvesm.1
diff --git a/udev-rbd/50-rbd-pve.rules b/udev-rbd/50-rbd-pve.rules
new file mode 100644
index 0000000..79432df
--- /dev/null
+++ b/udev-rbd/50-rbd-pve.rules
@@ -0,0 +1,2 @@
+KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", PROGRAM="/usr/libexec/ceph-rbdnamer-pve %k", SYMLINK+="rbd-pve/%c"
+KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="partition", PROGRAM="/usr/libexec/ceph-rbdnamer-pve %k", SYMLINK+="rbd-pve/%c-part%n"
diff --git a/udev-rbd/Makefile b/udev-rbd/Makefile
new file mode 100644
index 0000000..065933b
--- /dev/null
+++ b/udev-rbd/Makefile
@@ -0,0 +1,21 @@
+PACKAGE=libpve-storage-perl
+
+DESTDIR=
+PREFIX=/usr
+LIBEXECDIR=${PREFIX}/libexec
+LIBDIR=${PREFIX}/lib
+
+all:
+
+.PHONY: install
+install: 50-rbd-pve.rules ceph-rbdnamer-pve
+       install -d ${DESTDIR}${LIBEXECDIR}
+       install -m 0755 ceph-rbdnamer-pve ${DESTDIR}${LIBEXECDIR}
+       install -d ${DESTDIR}${LIBDIR}/udev/rules.d
+       install -m 0644 50-rbd-pve.rules ${DESTDIR}${LIBDIR}/udev/rules.d
+
+.PHONY: clean
+clean:
+
+.PHONY: distclean
+distclean: clean
diff --git a/udev-rbd/ceph-rbdnamer-pve b/udev-rbd/ceph-rbdnamer-pve
new file mode 100755
index 0000000..23dd626
--- /dev/null
+++ b/udev-rbd/ceph-rbdnamer-pve
@@ -0,0 +1,24 @@
+#!/bin/sh
+
+DEV=$1
+NUM=`echo $DEV | sed 's#p.*##g; s#[a-z]##g'`
+POOL=`cat /sys/devices/rbd/$NUM/pool`
+CLUSTER_FSID=`cat /sys/devices/rbd/$NUM/cluster_fsid`
+
+if [ -f /sys/devices/rbd/$NUM/pool_ns ]; then
+    NAMESPACE=`cat /sys/devices/rbd/$NUM/pool_ns`
+else
+    NAMESPACE=""
+fi
+IMAGE=`cat /sys/devices/rbd/$NUM/name`
+SNAP=`cat /sys/devices/rbd/$NUM/current_snap`
+
+echo -n "/$CLUSTER_FSID/$POOL"
+
+if [ -n "$NAMESPACE" ]; then
+    echo -n "/$NAMESPACE"
+fi
+echo -n "/$IMAGE"
+if [ "$SNAP" != "-" ]; then
+    echo -n "@$SNAP"



* Re: [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path
  2022-04-11 12:17       ` Thomas Lamprecht
@ 2022-04-11 14:49         ` Aaron Lauterer
  2022-04-12  8:35           ` Thomas Lamprecht
  0 siblings, 1 reply; 7+ messages in thread
From: Aaron Lauterer @ 2022-04-11 14:49 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion,
	Fabian Grünbichler



On 4/11/22 14:17, Thomas Lamprecht wrote:
[...]
>>
>> Some more (smaller) changes might be necessary, if the implementation we
>> manage to upstream will be a bit different. But that should not be much of an
>> issue AFAICT.
> 
> We can always ship whatever downstream solution we want and sync up on a
> major release, so not a real problem.
> 
> FWIW, with storage getting the following patch, the symlinks get created (may
> need a trigger for reloading udev, or manually running `udevadm control -R`).
> 
> We'd only need to adapt the checks to prioritize /dev/rbd-pve/$fsid/... paths
> first; do you have time to give that a go?


The final `fi` in the renamer script was missing from the diff. Once I fixed that, it seems to work fine. Situation with a local hyperconverged cluster and an external one:
---------------------------------
root@cephtest1:/dev/rbd-pve# tree
.
├── ce99d398-91ab-4667-b4f2-307ba0bec358
│   └── ecpool-metadata
│       ├── vm-103-disk-0 -> ../../../rbd0
│       ├── vm-103-disk-0-part1 -> ../../../rbd0p1
│       ├── vm-103-disk-0-part2 -> ../../../rbd0p2
│       ├── vm-103-disk-0-part5 -> ../../../rbd0p5
│       ├── vm-103-disk-1 -> ../../../rbd1
│       └── vm-103-disk-2 -> ../../../rbd2
└── e78d9b15-d5a1-4660-a4a5-d2c1208119e9
     └── rbd
         └── vm-103-disk-0 -> ../../../rbd3


* Re: [pve-devel] [PATCH v2 storage] rbd: alloc image: fix #3970 avoid ambiguous rbd path
  2022-04-11 14:49         ` Aaron Lauterer
@ 2022-04-12  8:35           ` Thomas Lamprecht
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2022-04-12  8:35 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer,
	Fabian Grünbichler

On 11.04.22 16:49, Aaron Lauterer wrote:
> On 4/11/22 14:17, Thomas Lamprecht wrote:
>> FWIW, with storage getting the following patch, the symlinks get created (may
>> need a trigger for reloading udev, or manually running `udevadm control -R`).
>>
>> We'd only need to adapt the checks to prioritize /dev/rbd-pve/$fsid/... paths
>> first; do you have time to give that a go?
> 
> 
> The final `fi` in the renamer script was missing from the diff. Once I fixed that, it seems to work fine. Situation with a local hyperconverged cluster and an external one:

Oh sorry, seems my copy-paste has gone slightly wrong.

> ---------------------------------
> root@cephtest1:/dev/rbd-pve# tree
> .
> ├── ce99d398-91ab-4667-b4f2-307ba0bec358
> │   └── ecpool-metadata
> │       ├── vm-103-disk-0 -> ../../../rbd0
> │       ├── vm-103-disk-0-part1 -> ../../../rbd0p1
> │       ├── vm-103-disk-0-part2 -> ../../../rbd0p2
> │       ├── vm-103-disk-0-part5 -> ../../../rbd0p5
> │       ├── vm-103-disk-1 -> ../../../rbd1
> │       └── vm-103-disk-2 -> ../../../rbd2
> └── e78d9b15-d5a1-4660-a4a5-d2c1208119e9
>     └── rbd
>         └── vm-103-disk-0 -> ../../../rbd3

Glad that it works for you too, but what I actually meant was asking whether
you had time to give the remaining Perl side of the fix a go (sorry if I did
not make that clear enough).
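
For reference, roughly what I had in mind there (untested sketch; the
fsid lookup and helper name are made up):

    # hypothetical helper in RBDPlugin.pm: prefer the unambiguous
    # /dev/rbd-pve path, fall back to the old one for hosts that do
    # not have the new udev rule yet
    my $get_rbd_dev_path = sub {
        my ($scfg, $volume) = @_;

        my $cluster_id = get_cluster_fsid($scfg); # hypothetical lookup
        my $pool = $scfg->{pool} // 'rbd';

        my $pve_path = "/dev/rbd-pve/${cluster_id}/${pool}";
        my $old_path = "/dev/rbd/${pool}";
        if (my $ns = $scfg->{namespace}) {
            $pve_path .= "/${ns}";
            $old_path .= "/${ns}";
        }
        return -e "${pve_path}/${volume}"
            ? "${pve_path}/${volume}"
            : "${old_path}/${volume}";
    };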
