From: ic <lists@benappy.com>
In-Reply-To: <CALt2oz6WNOUjSoWHbmL1+E536YhZzc-FrAjYrsVQtEryWGMXjw@mail.gmail.com>
Date: Thu, 9 Sep 2021 00:11:05 +0200
Cc: PVE User List <pve-user@pve.proxmox.com>
Message-Id: <4D290E10-15B5-4816-8613-38438FC1F9CC@benappy.com>
References: <CALt2oz6WNOUjSoWHbmL1+E536YhZzc-FrAjYrsVQtEryWGMXjw@mail.gmail.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] ceph

Hi there,

> On 8 Sep 2021, at 14:46, Leandro Roggerone <leandro@tecnetmza.com.ar> wrote:
>
> I would like to know the benefits that a Ceph storage can bring to my
> existing cluster.
> What is an easy / recommended way to implement it?
> Which hardware should I consider to use?

First, HW.

Get two Cisco Nexus 3064PQ switches (they typically go for $600-700 with
48 10G ports each) and two Intel X520-DA2 dual-port NICs per server.

Connect one port of each Intel card to each of the Nexuses, giving you
full redundancy across both network cards and switches.

Add 4x40G DAC cables between the switches: set up two as the vPC
peer-link and two as a plain L2 trunk (I can provide more details on why,
if needed).
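As a rough sketch, the switch side of that could look like the following on each Nexus (NX-OS syntax; the port numbers, keepalive addresses and vPC domain ID are placeholders, not from the original post, so adapt them to your own cabling):

```
! Hypothetical NX-OS sketch - adjust ports/addresses to your setup
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 192.168.0.2 source 192.168.0.1

! two of the 40G ports as the vPC peer-link
interface port-channel1
  switchport mode trunk
  vpc peer-link
interface Ethernet1/49-50
  channel-group 1 mode active

! the other two 40G ports as a plain L2 trunk between the switches
interface port-channel2
  switchport mode trunk
interface Ethernet1/51-52
  channel-group 2 mode active
```

The mirror-image config goes on the second switch, with the keepalive source/destination swapped.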

Use port 0 of both NICs for Ceph and port 1 for VM traffic. That way you
get 2x10 Gbps dedicated to Ceph and 2x10 Gbps for everything else, and if
you lose one card or one switch, you still have 10 Gbps for each.
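On the Proxmox side, a minimal /etc/network/interfaces sketch of that layout might look like this (NIC names, bond mode and addresses are assumptions for illustration; with the vPC setup above you could also run LACP bonds instead of active-backup):

```
# Hypothetical sketch - interface names and addresses are placeholders.
# Port 0 of each X520 carries Ceph, port 1 carries VM traffic.
auto bond0
iface bond0 inet static
    bond-slaves enp1s0f0 enp2s0f0
    bond-mode active-backup
    address 10.10.10.1/24          # Ceph network

auto bond1
iface bond1 inet manual
    bond-slaves enp1s0f1 enp2s0f1
    bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond1             # VM bridge rides on the second bond
    bridge-stp off
    bridge-fd 0
```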

The benefits? With the default configuration, your data lives in three
places (Ceph keeps three replicas of every object). Also, scale-out: you
know the expensive hyperconverged stuff (Nutanix and such)? You get that
with this.
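The "three places" comes from Ceph's default replicated pool size of 3 (with min_size 2, so the pool keeps serving I/O with one replica down). On a live cluster you can check it with something like (pool name "rbd" is a placeholder):

```
# replication settings of a pool - run on any Ceph node
ceph osd pool get rbd size       # typically "size: 3"
ceph osd pool get rbd min_size   # typically "min_size: 2"
```

Note that with 3 replicas your usable capacity is roughly one third of raw, so size the SSDs accordingly.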

The performance is wild. I just moved my customers from a Proxmox cluster
backed by a TrueNAS server (all-flash, 4x10 Gbps) to a 3-node cluster of
AMD EPYC machines with Ceph on local SATA SSDs, and the VMs started
flying.

Keep your old storage infrastructure, whatever it is, for backups with
PBS (Proxmox Backup Server).
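Hooking a PBS instance into the cluster is one storage entry in /etc/pve/storage.cfg (hostname, datastore name and fingerprint below are placeholders; the password is stored separately when you add the storage via GUI or pvesm):

```
# Hypothetical /etc/pve/storage.cfg entry - values are placeholders
pbs: backup
        server pbs.example.com
        datastore tank
        username backup@pbs
        fingerprint <pbs-cert-fingerprint>
        content backup
```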

YMMV

Regards, ic