public inbox for pve-user@lists.proxmox.com
From: Uwe Sauter <uwe.sauter.de@gmail.com>
To: PVE User List <pve-user@pve.proxmox.com>
Subject: Re: [PVE-User] DeviceMapper devices get filtered by Proxmox
Date: Tue, 25 Jul 2023 09:24:22 +0200	[thread overview]
Message-ID: <d94e484f-7504-fb07-80a1-13fd52518594@gmail.com> (raw)
In-Reply-To: <2b5b83bb-c90e-b6dd-4b15-a57414b42542@gmail.com>

So, I've been looking further into this and indeed, there seem to be very strict filters regarding
the block device names that Proxmox allows to be used.

/usr/share/perl5/PVE/Diskmanage.pm

512         # whitelisting following devices
513         # - hdX         ide block device
514         # - sdX         scsi/sata block device
515         # - vdX         virtIO block device
516         # - xvdX:       xen virtual block device
517         # - nvmeXnY:    nvme devices
518         # - cciss!cXnY  cciss devices
519         print Dumper($dev);
520         return if $dev !~ m/^(h|s|x?v)d[a-z]+$/ &&
521                   $dev !~ m/^nvme\d+n\d+$/ &&
522                   $dev !~ m/^cciss\!c\d+d\d+$/;

I don't understand all the consequences of allowing ALL ^dm-\d+$ devices, but with proper filtering
it should be possible to allow multipath devices. (And given that there might be udev rules that
create additional symlinks below /dev, each device's name should be resolved to its canonical name
before checking.)
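
To make the symlink-resolution part concrete, something along these lines could normalise a name
before the whitelist check (just a rough sketch of mine, not tested against Diskmanage.pm; the
helper name canonical_dev_name is made up):

use Cwd qw(abs_path);
use File::Basename qw(basename);

# Resolve e.g. /dev/mapper/35000cca26a7402e4 -> /dev/dm-2 and return "dm-2";
# abs_path() follows the whole symlink chain that udev rules may have created.
sub canonical_dev_name {
    my ($path) = @_;
    my $real = abs_path($path) // $path;
    return basename($real);
}

With the name normalised to dm-N, the remaining question is how to tell a multipath map apart from
other device-mapper targets, which the examples below try to show.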

To give an example, sdc is an internal disk that contains a BlueStore OSD:

# ls -l /dev/mapper/ | grep dm-0
lrwxrwxrwx 1 root root       7 Jul 22 23:23
ceph--aa6b72f7--f185--44b4--9922--6ae4e6278d10-osd--block--b0a60266--90c0--444f--ae12--328ecfebd87d
-> ../dm-0

Devices sde, sdaw, sdco and sdeg are four paths to the same SAS disk and form the multipath device dm-2:

# ls -la /dev/mapper/ | grep dm-2$
lrwxrwxrwx  1 root root       7 Jul 22 23:23 35000cca26a7402e4 -> ../dm-2
# multipath -ll | grep -A6 'dm-2 '
35000cca26a7402e4 dm-2 HGST,HUH721010AL5200
size=9.1T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 1:0:0:0  sde  8:64    active ready running
  |- 1:0:45:0 sdaw 67:0    active ready running
  |- 5:0:0:0  sdco 69:192  active ready running
  `- 5:0:45:0 sdeg 128:128 active ready running

Both dm-0 and dm-2 are device-mapper devices, but it is possible to tell their target types apart:

# dmsetup status /dev/dm-0
0 3750739968 linear

# dmsetup status /dev/dm-2
0 19532873728 multipath 2 0 0 0 1 1 A 0 4 2 8:64 A 0 0 1 67:0 A 0 0 1 69:192 A 0 0 1 128:128 A 0 0 1
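
Parsing that output should be enough for a filter: the third whitespace-separated field of each
line printed by dmsetup status is the target type. A minimal check (again only my own sketch,
assuming dmsetup is installed on the node; the helper name is made up) could be:

# True only if every table line of the device reports the "multipath" target.
sub is_multipath {
    my ($dev) = @_;                               # canonical name, e.g. "dm-2"
    open(my $fh, '-|', 'dmsetup', 'status', "/dev/$dev")
        or return 0;
    my $seen = 0;
    while (my $line = <$fh>) {
        my (undef, undef, $target) = split ' ', $line;
        return 0 if !defined($target) || $target ne 'multipath';
        $seen = 1;
    }
    close($fh);
    return $seen;
}

The whitelist in Diskmanage.pm could then, hypothetically, also accept names matching ^dm-\d+$
whenever is_multipath() is true for them.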

Another, possibly less exact, way to filter is to check whether /sys/block/dm-X/slaves/ contains more than one entry (a small sketch follows the listings below).

# ls -l /sys/block/dm-0/slaves
total 0
lrwxrwxrwx 1 root root 0 Jul 25 09:01 sdc ->
../../../../pci0000:00/0000:00:11.4/ata3/host3/target3:0:0/3:0:0:0/block/sdc

# ls -l /sys/block/dm-2/slaves
total 0
lrwxrwxrwx 1 root root 0 Jul 25 09:03 sdaw ->
../../../../pci0000:80/0000:80:02.0/0000:82:00.0/host1/port-1:1/expander-1:1/port-1:1:0/end_device-1:1:0/target1:0:45/1:0:45:0/block/sdaw
lrwxrwxrwx 1 root root 0 Jul 25 09:03 sdco ->
../../../../pci0000:80/0000:80:03.0/0000:83:00.0/host5/port-5:0/expander-5:0/port-5:0:0/end_device-5:0:0/target5:0:0/5:0:0:0/block/sdco
lrwxrwxrwx 1 root root 0 Jul 25 09:03 sde ->
../../../../pci0000:80/0000:80:02.0/0000:82:00.0/host1/port-1:0/expander-1:0/port-1:0:0/end_device-1:0:0/target1:0:0/1:0:0:0/block/sde
lrwxrwxrwx 1 root root 0 Jul 25 09:03 sdeg ->
../../../../pci0000:80/0000:80:03.0/0000:83:00.0/host5/port-5:1/expander-5:1/port-5:1:0/end_device-5:1:0/target5:0:45/5:0:45:0/block/sdeg
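
The sysfs variant of the same idea, again just a sketch:

# Number of entries in /sys/block/<dev>/slaves; a multipath map has one entry
# per path, while dm-0 above has only its single backing disk sdc.
sub slave_count {
    my ($dev) = @_;                               # e.g. "dm-2"
    opendir(my $dh, "/sys/block/$dev/slaves") or return 0;
    my @slaves = grep { $_ ne '.' && $_ ne '..' } readdir($dh);
    closedir($dh);
    return scalar @slaves;
}

This is why I called it less exact: a device-mapper volume built from several physical devices
(e.g. an LVM LV spanning multiple PVs) would also show more than one slave, so the target type
check above seems more robust.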


Is there any chance that multipath devices will be supported by PVE in the near future?
I'd be willing to test on my non-production system…


Regards,

	Uwe


On 20.07.23 at 14:21, Uwe Sauter wrote:
> Dear all,
> 
> I'd like to use some existing hardware to create a Ceph cluster. Unfortunately, all my external
> disks are filtered by the WebUI and by the pveceph command, which prevents using them as OSD disks.
> 
> My servers are connected to a SAS-JBOD containing 60 SAS-HDDs. The connection is done via 2
> dual-port SAS-HBAs, each port connecting to both of the JBOD controllers.
> This means that all my external disks can be reached via four SAS paths and thus I see 240 disks in
> /dev. (Actually, I see 244 /dev/sd* devices as there are also 4 internal disks for the OS…)
> Using multipathd, each disk, besides its four entries in /dev, is available a fifth time as
> /dev/mapper/${WWN}, which is a symlink to /dev/dm-${NUMBER}.
> 
> Both /dev/dm-${NUMBER} and /dev/mapper/${WWN} entries are filtered by Proxmox.
> 
> Is there a way to remove this filter or to define my own set of filters?
> 
> Using the /dev/sd* entries would be possible but I would lose the redundancy provided by multipathd.
> 
> Regards,
> 
> 	Uwe





Thread overview: 4+ messages
2023-07-20 12:21 Uwe Sauter
2023-07-25  7:24 ` Uwe Sauter [this message]
     [not found] ` <dc743429b8e92c12ec74c8844605f4b1@antreich.com>
2023-07-25 11:48   ` Uwe Sauter
     [not found]   ` <b75e2575243c3366cdeb52e073947566@antreich.com>
2023-07-26  8:37     ` Uwe Sauter
