From: Uwe Sauter <uwe.sauter.de@gmail.com>
To: Alwin Antreich <alwin@antreich.com>,
	Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] DeviceMapper devices get filtered by Proxmox
Date: Wed, 26 Jul 2023 10:37:21 +0200
Message-ID: <d6122c2a-ad8e-4c41-a169-f69061403660@gmail.com>
In-Reply-To: <b75e2575243c3366cdeb52e073947566@antreich.com>

Good morning Alwin,

>> Well, if the documentation is to be trusted, there is multipath support since Octopus.
>> My use-case is not hyper-converged virtualization; I am simply using Proxmox for its good UI and
>> Ceph integration (and because it does not rely on containers to deploy Ceph).
> I understand, though cephadm isn't that horrible and there are still other Ceph solutions out there. ;)

If you have air-gapped systems and no containers in your environment, using cephadm would require a
whole lot of effort just to make the container images available. Proxmox, on the other hand, provides
a very nice tool for mirroring repositories… just saying.
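
For anyone reading this in the archive: the tool in question is proxmox-offline-mirror. A rough
sketch of the workflow, with placeholder IDs; the snapshot and sync subcommands below are quoted
from memory, so double-check them against the manual:

    # Interactive wizard that configures which Proxmox/Debian repositories
    # to mirror and where to store them (documented entry point):
    proxmox-offline-mirror setup

    # Then, per mirror, create a snapshot and sync it onto a removable
    # medium for the air-gapped side (exact subcommand syntax may differ):
    proxmox-offline-mirror mirror snapshot create <mirror-id>
    proxmox-offline-mirror medium sync <medium-id>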

>> I am aware that HDDs will need some amount of flash but I do have a couple of SAS-SSDs at hand that
>> I can put into the JBODs. And currently all this is just a proof of concept.
> Yeah, but the ratio is 4 DB/WAL to 1 SSD, as opposed to 12:1 for an NVMe.
> In addition, you will need 512 GB of RAM for the OSDs (60 x 8 GB) alone and at least a 32C/64T CPU. Probably 2x 25 GbE NICs too, depending on the use-case.

For a PoC, SSDs and a smaller amount of RAM should be OK. The servers do have 40 cores and 2x 25 GbE,
so that isn't the problem. And once we see whether Ceph fits the rest of our environment, we would
invest in properly sized hardware.
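(For reference, the quoted numbers work out to 60/4 = 15 SAS SSDs versus 60/12 = 5 NVMe drives for
DB/WAL, and 60 OSDs x 8 GB = 480 GB of RAM, i.e. the next common module size is 512 GB.)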

> Just saying, there are certain expectations with that many disks in one node.
>  
>> Thanks for the link, though I have to support the forum members' argument that multipath is an
>> enterprise feature that should be supported by an enterprise-class virtualization solution.
> Well, multipath is supported. Just not in combination with Ceph. And PVE is not a storage product (yet).

I need to disagree on that one. The WebUI disk overview does not show the multipath devices, only
the member disks. Yes, there is a note stating that a device is a multipath member, but there is no
way to select the multipath device itself. So, from a user's point of view, PVE does not support
multipath, it merely recognizes multipath members.
And the Create ZFS and Create LVM Volume Group pop-ups show neither multipath members nor
multipath devices.
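
Until the GUI exposes them, the multipath devices can at least be used from the CLI. A minimal
sketch with placeholder device and storage names; the LVM and pvesm commands themselves are
standard, I just cannot promise the GUI will be any happier with the result:

    # Put an LVM volume group directly on the multipath device,
    # bypassing the GUI's disk picker:
    pvcreate /dev/mapper/mpatha
    vgcreate vg_mpath /dev/mapper/mpatha

    # Register the volume group as an LVM storage in PVE:
    pvesm add lvm mpath-lvm --vgname vg_mpath --content images,rootdir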

Thanks

	Uwe




Thread overview: 4+ messages
2023-07-20 12:21 Uwe Sauter
2023-07-25  7:24 ` Uwe Sauter
     [not found] ` <dc743429b8e92c12ec74c8844605f4b1@antreich.com>
2023-07-25 11:48   ` Uwe Sauter
     [not found]   ` <b75e2575243c3366cdeb52e073947566@antreich.com>
2023-07-26  8:37     ` Uwe Sauter [this message]
