From: Wolf Noble <wolf@wolfspyre.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] DELL PowerEdge T440/PERC H750 and embarrassing HDD performance...
Date: Wed, 19 Apr 2023 18:14:23 -0500
Message-ID: <C8002500-3A9E-4DAB-A607-5DF62CFB7CE4@wolfspyre.com>
In-Reply-To: <6nj5hj-3a81.ln1@hermione.lilliput.linux.it>
Hai Marco!
The voodoo of ZFS and how it talks to the disks is a vast and confusing subject… there are lots of moving pieces, and it's a huge rabbit hole…
The broadest applicable advice: (fakeraid / passthrough RAID0 / hardware RAID) + ZFS is a recipe for pain and poor performance.
ZFS expects to talk directly to the disk and have an accurate understanding of the disk geometry… abstraction is problematic here… at best it introduces unnecessary complexity. At worst it exposes you to data loss.
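a quick way to sanity-check what the controller is telling the OS (a sketch; assuming /dev/sdc is one of the affected disks, adjust to taste):

  # what the block layer sees through the PERC's virtual disk
  lsblk -o NAME,PHY-SEC,LOG-SEC,ROTA /dev/sdc

  # what the drive itself reports (sector sizes, model, rotation rate);
  # behind a PERC you may need -d megaraid,N to address the physical
  # drive, where N is its device id on the controller
  smartctl -i /dev/sdc

if the two disagree on sector sizes, the controller is misrepresenting the geometry to ZFS.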
I've not seen a single reputable source report good experiences combining ZFS atop RAID. Even virtualized, the benefit of ZFS gets questionable IMO, but that's tangential…
Granted, I'm open to being proven wrong; but if possible I'd swap out the controller for one that isn't playing games with the storage presented to ZFS and see if performance improves…
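it's also worth checking what cache policy those auto-RAID0 virtual disks ended up with… a sketch, assuming perccli64 is installed and the H750 shows up as controller /c0 (names and paths may differ on your box):

  # show cache settings for every virtual disk: write-back vs
  # write-through, read-ahead, and whether the drive's own cache is on
  perccli64 /c0/vall show all | grep -i cache

a write-through VD with the disk's cache forced off will happily deliver double-digit random IOPS on spinning rust, which looks a lot like the fio numbers below.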
Physical sector size, logical sector size, queue depth, and the speed of the spinning rust are also important…
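a couple of those are easy to eyeball (sketch; assuming the pool is named rpool and the disk is /dev/sdc… substitute your own):

  # queue depth the driver negotiated for the device
  cat /sys/block/sdc/device/queue_depth

  # the ashift ZFS actually chose for the pool's vdevs
  zdb -C rpool | grep ashift

512e drives hidden behind a RAID0 wrapper sometimes get reported as plain 512-byte, and ZFS then picks ashift=9 where 12 would be right…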
Just my $.03, tho.
[= The contents of this message have been written, read, processed, erased, sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost, found, and most importantly delivered entirely with recycled electrons =]
> On Apr 19, 2023, at 12:40, Marco Gaiarin <gaio@lilliput.linux.it> wrote:
>
>
> Situation: some small PVE clusters with ZFS, still on PVE 6.
>
> We have a set of PowerEdge T340s with the PERC H330 adapter in JBOD mode,
> which perform decently with HDDs and ZFS.
>
> We also have a set of DELL PowerEdge T440s with the PERC H750, which does
> NOT have a JBOD mode, only a 'Non-RAID' auto-RAID0 mode, and these perform
> 'indecently' on HDDs, for example:
>
> root@pppve1:~# fio --filename=/dev/sdc --direct=1 --rw=randrw --bs=128k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=hdd-rw-128
> hdd-rw-128: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=256
> ...
> fio-3.12
> Starting 4 processes
> Jobs: 4 (f=4): [m(4)][0.0%][eta 08d:17h:11m:29s]
> hdd-rw-128: (groupid=0, jobs=4): err= 0: pid=26198: Wed May 18 19:11:04 2022
> read: IOPS=84, BW=10.5MiB/s (11.0MB/s)(1279MiB/121557msec)
> slat (usec): min=4, max=303887, avg=23029.19, stdev=61832.29
> clat (msec): min=1329, max=6673, avg=4737.71, stdev=415.84
> lat (msec): min=1543, max=6673, avg=4760.74, stdev=420.10
> clat percentiles (msec):
> | 1.00th=[ 2802], 5.00th=[ 4329], 10.00th=[ 4463], 20.00th=[ 4530],
> | 30.00th=[ 4597], 40.00th=[ 4665], 50.00th=[ 4732], 60.00th=[ 4799],
> | 70.00th=[ 4866], 80.00th=[ 4933], 90.00th=[ 5134], 95.00th=[ 5336],
> | 99.00th=[ 5805], 99.50th=[ 6007], 99.90th=[ 6342], 99.95th=[ 6409],
> | 99.99th=[ 6611]
> bw ( KiB/s): min= 256, max= 5120, per=25.18%, avg=2713.08, stdev=780.45, samples=929
> iops : min= 2, max= 40, avg=21.13, stdev= 6.10, samples=929
> write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(1328MiB/121557msec); 0 zone resets
> slat (usec): min=9, max=309914, avg=23025.13, stdev=61676.77
> clat (msec): min=1444, max=13086, avg=6943.12, stdev=2068.26
> lat (msec): min=1543, max=13086, avg=6966.15, stdev=2069.28
> clat percentiles (msec):
> | 1.00th=[ 2769], 5.00th=[ 4597], 10.00th=[ 4799], 20.00th=[ 5067],
> | 30.00th=[ 5403], 40.00th=[ 5873], 50.00th=[ 6409], 60.00th=[ 7148],
> | 70.00th=[ 8020], 80.00th=[ 9060], 90.00th=[10134], 95.00th=[10671],
> | 99.00th=[11610], 99.50th=[11879], 99.90th=[12550], 99.95th=[12550],
> | 99.99th=[12684]
> bw ( KiB/s): min= 256, max= 5376, per=24.68%, avg=2762.20, stdev=841.30, samples=926
> iops : min= 2, max= 42, avg=21.52, stdev= 6.56, samples=926
> cpu : usr=0.05%, sys=0.09%, ctx=2847, majf=0, minf=49
> IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
> submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
> issued rwts: total=10233,10627,0,0 short=0,0,0,0 dropped=0,0,0,0
> latency : target=0, window=0, percentile=100.00%, depth=256
>
> Run status group 0 (all jobs):
> READ: bw=10.5MiB/s (11.0MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=1279MiB (1341MB), run=121557-121557msec
> WRITE: bw=10.9MiB/s (11.5MB/s), 10.9MiB/s-10.9MiB/s (11.5MB/s-11.5MB/s), io=1328MiB (1393MB), run=121557-121557msec
>
> Disk stats (read/write):
> sdc: ios=10282/10601, merge=0/0, ticks=3041312/27373721, in_queue=30373472, util=99.99%
>
> Note in particular the IOPS: slow, very slow...
>
>
> Does anyone have a hint to share?! Thanks.
>
> --
> The boss's car is driven by Emilio Fede
> The boss's car is washed by Emilio Fede
> The boss's car is parked by Emilio Fede
> but the gas, we're the ones who pay for it [Dado]
>