public inbox for pve-user@lists.proxmox.com
From: Mikhail <m@plus-plus.su>
To: Proxmox VE user list <pve-user@lists.proxmox.com>,
	Gregor Burck <gregor@aeppelbroe.de>,
	pve-user@pve.proxmox.com
Subject: Re: [PVE-User] ZFZ questions - performance and caching
Date: Tue, 30 Mar 2021 10:34:05 +0300
Message-ID: <c90f6435-b822-6a8c-618d-9623430272f1@plus-plus.su>
In-Reply-To: <20210329152820.EGroupware._t1Sneg3e4BsuWUv_4EVNL7@heim.aeppelbroe.de>

Hi Gregor,

How do you plan to set up backups on this server? Are you going to use
Proxmox Backup Server for this, or just plain vzdump files?

If you're considering PBS, which you should, and you have at least a
pair of NVME devices available, then I would set up ZFS to use those
NVMEs as a special device. We recently deployed a similar setup - 12 x
3.5" HGST 14TB SAS disks + 2 x Intel 1.6TB NVME (using rear bays,
Supermicro 2U platform). This server is used purely as PBS. ZFS is
built on top of 11 x 14TB SAS drives in a RAIDZ-3 config (not optimal
for speed, I know, but we want all the space under a single pool), and
the NVME disks are used as a mirrored ZFS special device. The remaining
12th SAS disk is set up as a hot spare in the ZFS pool. This gives us
~100TB of space and we are satisfied with the RAIDZ-3 speeds we get -
we can read and write at 1 Gigabyte/s, which is enough for our case
since we're limited by 10gbit networking. Using the NVMEs as SLOG/L2ARC
devices does not seem optimal when used with PBS. Also, if you're
thinking about Supermicro, I would add SATADOM devices for the OS -
this lets you use the entire NVME disks for ZFS without partitioning
them for OS/boot.
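
For reference, creating a pool with that layout looks roughly like the
following (the pool name "tank" and the device names are just
placeholders - in practice you'd use /dev/disk/by-id paths):

  # 11-disk RAIDZ-3 data vdev, mirrored NVME special vdev, one hot spare
  zpool create -o ashift=12 tank \
    raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
    special mirror nvme0n1 nvme1n1 \
    spare sdl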

regards,
Mikhail.

On 3/29/21 4:28 PM, Gregor Burck wrote:
> Hi,
> 
> For planning a backup server, which in the worst case should also act
> as a virtualisation server, we thought about a setup with HDDs and
> NVMes for ZFS caching.
> 
> We plan everything with enterprise products,...
> 
> I'm considering the following setup:
> 
> A 12-bay 3.5'' server, 6 PVEe sockets, maybe with a 2.5'' rear
> extension kit for the OS
> 
> Backup environment:
> 
> 4 (or 6) HDDs, 6 (or 8) TB, without caching
> 
> For the virtualisation environment:
> 
> 4 (or 6) HDDs, 6 (or 8) TB, with NVMe caching
> 
> 
> I think caching shouldn't be necessary for the backup environment,
> because there the network is the limitation.
> 
> The VE part should be a bit of a performer, but how big should the
> NVMe cache be? As I read, it is used for the ZFS log.
> And how big is the performance increase?
> 
> Bye
> 
> Gregor




Thread overview: 4+ messages

2021-03-29 13:28 Gregor Burck
2021-03-29 14:26 ` Gregor Burck
2021-03-30  7:34 ` Mikhail [this message]
2021-03-30  9:34 ` Gregor Burck
