public inbox for pve-user@lists.proxmox.com
From: Uwe Sauter <uwe.sauter.de@gmail.com>
To: Alwin Antreich <alwin@antreich.com>,
	Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] DeviceMapper devices get filtered by Proxmox
Date: Tue, 25 Jul 2023 13:48:16 +0200
Message-ID: <337ffcc2-2893-93ad-d7b7-8c665b565a89@gmail.com>
In-Reply-To: <dc743429b8e92c12ec74c8844605f4b1@antreich.com>

Hi Alwin,

Am 25.07.23 um 12:40 schrieb Alwin Antreich:
> Hi Uwe,
> 
> July 25, 2023 9:24 AM, "Uwe Sauter" <uwe.sauter.de@gmail.com> wrote:
> 
>> So, I've been looking further into this and indeed, there seem to be very strict filters regarding
>> the block device names that Proxmox allows to be used.
>>
>> /usr/share/perl5/PVE/Diskmanage.pm
>>
>> 512 # whitelisting following devices
>> 513 # - hdX ide block device
>> 514 # - sdX scsi/sata block device
>> 515 # - vdX virtIO block device
>> 516 # - xvdX: xen virtual block device
>> 517 # - nvmeXnY: nvme devices
>> 518 # - cciss!cXnY cciss devices
>> 519 print Dumper($dev);
>> 520 return if $dev !~ m/^(h|s|x?v)d[a-z]+$/ &&
>> 521 $dev !~ m/^nvme\d+n\d+$/ &&
>> 522 $dev !~ m/^cciss\!c\d+d\d+$/;
>>
>> I don't understand all the consequences of allowing ALL ^dm-\d+$ devices, but with proper filtering
>> it should be possible to allow multipath devices (and given that there might be udev rules that
>> create additional symlinks below /dev, each device's name should be resolved to its canonical name
>> before checking).
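
To make this proposal a bit more concrete, here is a rough, untested sketch of what such a relaxed
check could look like (the helper names are mine and do not exist in Diskmanage.pm):

  use Cwd qw(realpath);
  use File::Basename qw(basename);

  # Resolve udev-created symlinks (e.g. /dev/mapper/mpatha) to the
  # canonical kernel name (e.g. dm-3) before applying the whitelist.
  sub canonical_dev_name {
      my ($dev) = @_;
      my $path = $dev =~ m|^/dev/| ? $dev : "/dev/$dev";
      my $real = realpath($path) // $path;
      return basename($real);
  }

  # Hypothetical relaxed whitelist that additionally accepts
  # device-mapper nodes (multipath maps appear as /dev/dm-N).
  sub is_allowed_blockdev {
      my ($dev) = @_;
      $dev = canonical_dev_name($dev);
      return $dev =~ m/^(h|s|x?v)d[a-z]+$/
          || $dev =~ m/^nvme\d+n\d+$/
          || $dev =~ m/^cciss\!c\d+d\d+$/
          || $dev =~ m/^dm-\d+$/;
  }

The important bit is resolving whatever name udev hands out back to the kernel's dm-N node before
matching; a real patch would of course also have to distinguish multipath maps from LVM volumes and
other device-mapper users.
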
> It is also a matter of Ceph support [0]. Aside from the extra complexity, using that amount of HDDs is not a good use case for virtualization. And HDDs definitely need the DB/WAL on a separate device (60x disks -> 5x NVMe).

Well, if the documentation is to be trusted, there has been multipath support since Octopus.
My use case is not hyper-converged virtualization; I am simply using Proxmox for its good UI and
Ceph integration (and because it does not rely on containers to deploy Ceph).

I am aware that HDDs will need some amount of flash, but I do have a couple of SAS SSDs at hand that
I can put into the JBODs. And currently all of this is just a proof of concept.

> Best to set it up with ceph-volume directly. See the forum post [1] for the experience of other users.

Thanks for the link, though I have to agree with the forum members' argument that multipath is an
enterprise feature that should be supported by an enterprise-class virtualization solution.


Best,

	Uwe

> Cheers,
> Alwin
> 
> [0] https://docs.ceph.com/en/latest/ceph-volume/lvm/prepare/#multipath-support
> [1] https://forum.proxmox.com/threads/ceph-with-multipath.70813/


Thread overview: 4+ messages
2023-07-20 12:21 Uwe Sauter
2023-07-25  7:24 ` Uwe Sauter
     [not found] ` <dc743429b8e92c12ec74c8844605f4b1@antreich.com>
2023-07-25 11:48   ` Uwe Sauter [this message]
     [not found]   ` <b75e2575243c3366cdeb52e073947566@antreich.com>
2023-07-26  8:37     ` Uwe Sauter
