public inbox for pve-user@lists.proxmox.com
From: Leandro Roggerone <leandro@tecnetmza.com.ar>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Cc: PVE User List <pve-user@pve.proxmox.com>
Subject: Re: [PVE-User] ceph
Date: Mon, 13 Sep 2021 08:32:21 -0300	[thread overview]
Message-ID: <CALt2oz7Ui2iSdwtNcU=JRF-+UxDk8Ev66w1Jk9WK7p=GbqoG4w@mail.gmail.com> (raw)
In-Reply-To: <4D290E10-15B5-4816-8613-38438FC1F9CC@benappy.com>

Hi guys, your responses were very useful.
Let's suppose I have my 3 nodes running and forming a cluster.
Please confirm:
a - Can I add the Ceph storage at any time?
b - Should all nodes be running the same PVE version?
c - Should all nodes have 1 or more unused disks, with no hardware RAID,
to be included in Ceph?
Should those disks (c) be exactly the same in capacity, speed, and so on?
What can go wrong if I have 1 Gbps instead of 10 Gbps ports?
Regards.
Leandro
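
For context, a minimal sketch of how Ceph is typically added to an
existing PVE cluster from the CLI (the device /dev/sdb and the
10.10.10.0/24 network are assumptions; the web GUI wizard covers the
same steps):

  # on every node
  pveceph install

  # once, on one node: pick the dedicated Ceph network
  pveceph init --network 10.10.10.0/24

  # on each node
  pveceph mon create
  pveceph osd create /dev/sdb   # raw, unused disk -- no hardware RAID

  # once: a pool for VM disks (3 replicas by default)
  pveceph pool create vm-pool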



On Wed, 8 Sep 2021 at 19:21, ic (<lists@benappy.com>) wrote:

> Hi there,
>
> > On 8 Sep 2021, at 14:46, Leandro Roggerone <leandro@tecnetmza.com.ar>
> wrote:
> >
> > I would like to know the benefits that a Ceph storage can bring to my
> > existing cluster.
> > What is an easy / recommended way to implement it?
> > Which hardware should I consider using?
>
> First, HW.
>
> Get two Cisco Nexus 3064PQ (they typically go for $600-700 for 48 10G
> ports) and two Intel X520-DA2 per server.
>
> Hook up each port of the Intel cards to a different Nexus, getting full
> redundancy across network cards and switches.
>
> Add 4x40G DAC cables between the switches: set up 2 as a vPC peer-link
> and 2 as a simple L2 trunk (I can provide more details as to why if
> needed).
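>
> A rough NX-OS sketch of that split (vPC domain id, keepalive addresses,
> and port numbers are assumptions; on a 3064PQ the 40G ports are
> Eth1/49-52):
>
>   feature vpc
>   feature lacp
>   vpc domain 10
>     peer-keepalive destination 192.168.0.2 source 192.168.0.1
>   interface Ethernet1/49-50
>     channel-group 10 mode active
>   interface port-channel10
>     switchport mode trunk
>     vpc peer-link
>   ! Eth1/51-52 remain a plain L2 trunk between the switches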
>
> Use port 0 of both NICs for Ceph and port 1 for VM traffic. This way you
> get 2x10 Gbps for Ceph only and 2x10 Gbps for everything else, and if you
> lose one card or one switch, you still have 10 Gbps for each.
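>
> On the PVE side that maps to something like this in
> /etc/network/interfaces (a minimal sketch; interface names and
> addresses are assumptions -- adjust to your hardware):
>
>   auto bond0
>   iface bond0 inet static
>       address 10.10.10.11/24
>       bond-slaves enp65s0f0 enp66s0f0
>       bond-mode 802.3ad
>       bond-xmit-hash-policy layer3+4
>       # port 0 of each card: dedicated Ceph network
>
>   auto bond1
>   iface bond1 inet manual
>       bond-slaves enp65s0f1 enp66s0f1
>       bond-mode 802.3ad
>
>   auto vmbr0
>   iface vmbr0 inet static
>       address 192.168.1.11/24
>       gateway 192.168.1.1
>       bridge-ports bond1
>       bridge-stp off
>       bridge-fd 0
>       # port 1 of each card: VM traffic bridge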
>
> The benefits? With the default configuration, your data lives in 3
> places. Also, scale-out: you know the expensive stuff, hyperconverged
> servers (Nutanix and such)? You get that with this.
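>
> A quick way to verify the replica counts (the pool name "vm-pool" is an
> assumption):
>
>   ceph osd pool get vm-pool size      # replicas per object, 3 by default
>   ceph osd pool get vm-pool min_size  # writes need at least this many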
>
> The performance is wild. I just moved my customers from a Proxmox
> cluster backed by a TrueNAS server (all-flash, 4x10 Gbps) to a 3-node
> cluster of AMD EPYC machines with Ceph on local SATA SSDs, and the VMs
> started flying.
>
> Keep your old storage infrastructure, whatever that is, for backups with
> PBS.
>
> YMMV
>
> Regards, ic
>



Thread overview: 11+ messages
2021-09-08 12:46 Leandro Roggerone
2021-09-08 12:55 ` Gilberto Ferreira
2021-09-08 20:07 ` Alex K
2021-09-08 22:11 ` ic
2021-09-13 11:32   ` Leandro Roggerone [this message]
2023-01-09  9:14 Piviul
     [not found] ` <mailman.191.1673258105.458.pve-user@lists.proxmox.com>
2023-01-09 11:47   ` Piviul
     [not found]     ` <mailman.203.1673265308.458.pve-user@lists.proxmox.com>
2023-01-10  7:23       ` Piviul
     [not found]         ` <mailman.215.1673337884.458.pve-user@lists.proxmox.com>
2023-01-10 13:29           ` Piviul
     [not found]             ` <mailman.232.1673430028.458.pve-user@lists.proxmox.com>
2023-01-11 11:19               ` Piviul
     [not found]                 ` <mailman.235.1673444838.458.pve-user@lists.proxmox.com>
2023-01-11 15:51                   ` Piviul
