public inbox for pve-user@lists.proxmox.com
From: ic <lists@benappy.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Cc: PVE User List <pve-user@pve.proxmox.com>
Subject: Re: [PVE-User] ceph
Date: Thu, 9 Sep 2021 00:11:05 +0200	[thread overview]
Message-ID: <4D290E10-15B5-4816-8613-38438FC1F9CC@benappy.com> (raw)
In-Reply-To: <CALt2oz6WNOUjSoWHbmL1+E536YhZzc-FrAjYrsVQtEryWGMXjw@mail.gmail.com>

Hi there,

> On 8 Sep 2021, at 14:46, Leandro Roggerone <leandro@tecnetmza.com.ar> wrote:
> 
> I would like to know the benefits that a Ceph storage can bring to my existing
> cluster.
> What is an easy / recommended way to implement it?
> Which hardware should I consider using?

First, HW.

Get two Cisco Nexus 3064PQ switches (they typically go for $600-700 and give you 48 10G ports each) and two dual-port Intel X520-DA2 NICs per server.

Hook up each Intel card to both Nexuses (one port to each switch), giving you full redundancy across network cards and switches.

Add 4x 40G DAC cables between the switches: set up two as the vPC peer-link and two as a simple L2 trunk (I can provide more details on why if needed), roughly as sketched below.
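
The switch side looks roughly like this (a sketch from memory, not a
tested config; I'm assuming the four QSFP+ ports sit on Ethernet1/49-52,
adjust port numbers and IPs to your setup):

  feature lacp
  feature vpc

  vpc domain 1
    peer-keepalive destination <peer-mgmt-ip>

  ! two 40G ports carry the vPC peer-link
  interface port-channel1
    switchport mode trunk
    vpc peer-link
  interface Ethernet1/49-50
    channel-group 1 mode active

  ! the other two 40G ports are a plain L2 trunk between the switches
  interface port-channel2
    switchport mode trunk
  interface Ethernet1/51-52
    channel-group 2 mode active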

Use port 0 of both NICs for Ceph and port 1 for VM traffic. This way you get 2x10 Gbps dedicated to Ceph and 2x10 Gbps for everything else, and if you lose one card or one switch, you still have 10 Gbps for each.
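
On the Proxmox side that becomes two LACP bonds per node, something like
this in /etc/network/interfaces (a sketch only; NIC names and addresses
are placeholders, check yours with ip link):

  auto bond0
  iface bond0 inet static
      bond-slaves enp65s0f0 enp66s0f0    # port 0 of each X520 -> Ceph
      bond-mode 802.3ad
      bond-xmit-hash-policy layer3+4
      mtu 9000
      address 10.10.10.11/24             # example Ceph network

  auto bond1
  iface bond1 inet manual
      bond-slaves enp65s0f1 enp66s0f1    # port 1 of each X520 -> VM traffic
      bond-mode 802.3ad
      bond-xmit-hash-policy layer3+4

  auto vmbr0
  iface vmbr0 inet static
      address 192.0.2.11/24              # example management address
      gateway 192.0.2.1
      bridge-ports bond1
      bridge-stp off
      bridge-fd 0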

The benefits? With the default configuration your data lives in three places (Ceph keeps three replicas of every object). You also get scale-out. You know the expensive hyperconverged stuff (Nutanix and such)? You get that with this.
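
That "three places" is just Ceph's default replicated pool size; you can
check it per pool (the pool name is a placeholder):

  ceph osd pool get <poolname> size       # 3 by default
  ceph osd pool get <poolname> min_size   # 2 by default, I/O pauses below this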

The performance is wild. I just moved my customers from a Proxmox cluster backed by a TrueNAS server (all-flash, 4x10 Gbps) to a 3-node cluster of AMD EPYC machines with Ceph on local SATA SSDs, and the VMs started flying.

Keep your old storage infrastructure, whatever it is, for backups with PBS (Proxmox Backup Server).
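
Hooking PBS into PVE is one command on a node, roughly like this (server,
datastore and user are placeholders; you also need to supply the password
or an API token, omitted here):

  pvesm add pbs pbs-backup \
      --server pbs.example.com \
      --datastore store1 \
      --username backup@pbs \
      --fingerprint <sha256 fingerprint of the PBS certificate>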

YMMV

Regards, ic



