public inbox for pve-user@lists.proxmox.com
 help / color / mirror / Atom feed
* [PVE-User] DELL PowerEdge T440/PERC H750 and embarrassing HDD performance...
@ 2023-04-19 17:07 Marco Gaiarin
  2023-04-19 23:14 ` Wolf Noble
       [not found] ` <mailman.86.1681974711.319.pve-user@lists.proxmox.com>
  0 siblings, 2 replies; 4+ messages in thread
From: Marco Gaiarin @ 2023-04-19 17:07 UTC (permalink / raw)
  To: pve-user


Situation: a few small PVE clusters with ZFS, still on PVE 6.

We have a set of PowerEdge T340 machines with a PERC H330 adapter in JBOD mode,
which perform decently with HDD disks and ZFS.

We also have a set of DELL PowerEdge T440 machines with the PERC H750, which
does NOT have a JBOD mode but only a 'Non-RAID' auto-RAID0 mode, and these
perform 'indecently' on HDD disks, for example:

root@pppve1:~# fio --filename=/dev/sdc --direct=1 --rw=randrw --bs=128k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=hdd-rw-128
hdd-rw-128: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=256
...
fio-3.12
Starting 4 processes
Jobs: 4 (f=4): [m(4)][0.0%][eta 08d:17h:11m:29s]                                         
hdd-rw-128: (groupid=0, jobs=4): err= 0: pid=26198: Wed May 18 19:11:04 2022
  read: IOPS=84, BW=10.5MiB/s (11.0MB/s)(1279MiB/121557msec)
    slat (usec): min=4, max=303887, avg=23029.19, stdev=61832.29
    clat (msec): min=1329, max=6673, avg=4737.71, stdev=415.84
     lat (msec): min=1543, max=6673, avg=4760.74, stdev=420.10
    clat percentiles (msec):
     |  1.00th=[ 2802],  5.00th=[ 4329], 10.00th=[ 4463], 20.00th=[ 4530],
     | 30.00th=[ 4597], 40.00th=[ 4665], 50.00th=[ 4732], 60.00th=[ 4799],
     | 70.00th=[ 4866], 80.00th=[ 4933], 90.00th=[ 5134], 95.00th=[ 5336],
     | 99.00th=[ 5805], 99.50th=[ 6007], 99.90th=[ 6342], 99.95th=[ 6409],
     | 99.99th=[ 6611]
   bw (  KiB/s): min=  256, max= 5120, per=25.18%, avg=2713.08, stdev=780.45, samples=929
   iops        : min=    2, max=   40, avg=21.13, stdev= 6.10, samples=929
  write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(1328MiB/121557msec); 0 zone resets
    slat (usec): min=9, max=309914, avg=23025.13, stdev=61676.77
    clat (msec): min=1444, max=13086, avg=6943.12, stdev=2068.26
     lat (msec): min=1543, max=13086, avg=6966.15, stdev=2069.28
    clat percentiles (msec):
     |  1.00th=[ 2769],  5.00th=[ 4597], 10.00th=[ 4799], 20.00th=[ 5067],
     | 30.00th=[ 5403], 40.00th=[ 5873], 50.00th=[ 6409], 60.00th=[ 7148],
     | 70.00th=[ 8020], 80.00th=[ 9060], 90.00th=[10134], 95.00th=[10671],
     | 99.00th=[11610], 99.50th=[11879], 99.90th=[12550], 99.95th=[12550],
     | 99.99th=[12684]
   bw (  KiB/s): min=  256, max= 5376, per=24.68%, avg=2762.20, stdev=841.30, samples=926
   iops        : min=    2, max=   42, avg=21.52, stdev= 6.56, samples=926
  cpu          : usr=0.05%, sys=0.09%, ctx=2847, majf=0, minf=49
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=10233,10627,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=10.5MiB/s (11.0MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=1279MiB (1341MB), run=121557-121557msec
  WRITE: bw=10.9MiB/s (11.5MB/s), 10.9MiB/s-10.9MiB/s (11.5MB/s-11.5MB/s), io=1328MiB (1393MB), run=121557-121557msec

Disk stats (read/write):
  sdc: ios=10282/10601, merge=0/0, ticks=3041312/27373721, in_queue=30373472, util=99.99%

Note in particular the IOPS: very slow...
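As a quick sanity check (an editor's sketch with assumed drive characteristics, not measured ones: the ~8.5 ms seek and 7200 RPM figures are typical guesses for a nearline HDD), the numbers above are close to what a single spinning disk plus queueing theory predict:

```python
# Back-of-the-envelope: is ~171 combined IOPS (84 read + 87 write) plausible
# for one 7200 RPM HDD under random I/O? Seek time and RPM below are
# assumed typical values for a 7.2k nearline drive, not measurements.
AVG_SEEK_MS = 8.5
RPM = 7200
rotational_latency_ms = 60_000 / RPM / 2      # half a revolution on average
service_time_ms = AVG_SEEK_MS + rotational_latency_ms

print(f"one-at-a-time IOPS ceiling: {1000 / service_time_ms:.0f}")
# -> one-at-a-time IOPS ceiling: 79
# (command queueing at high depth can roughly double this by shortening seeks)

# Little's law: ~1024 requests in flight (4 jobs x iodepth 256) at ~171 IOPS
# implies multi-second completion latencies, matching the 4.7-6.9 s clat above.
in_flight = 4 * 256
print(f"expected completion latency: {in_flight / (84 + 87):.1f} s")
# -> expected completion latency: 6.0 s
```

So the huge clat values are largely a consequence of the enormous queue depth, not extra evidence of a broken controller; the low IOPS figure itself is the interesting part.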


Does anyone have a hint to share?! Thanks.

-- 
  The boss's car is driven by Emilio Fede
  The boss's car is washed by Emilio Fede
  The boss's car is parked by Emilio Fede
  but the petrol, we pay for it				[Dado]





^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PVE-User] DELL PowerEdge T440/PERC H750 and embarrassing HDD performance...
  2023-04-19 17:07 [PVE-User] DELL PowerEdge T440/PERC H750 and embarrassing HDD performance Marco Gaiarin
@ 2023-04-19 23:14 ` Wolf Noble
  2023-04-20 10:34   ` Marco Gaiarin
       [not found] ` <mailman.86.1681974711.319.pve-user@lists.proxmox.com>
  1 sibling, 1 reply; 4+ messages in thread
From: Wolf Noble @ 2023-04-19 23:14 UTC (permalink / raw)
  To: Proxmox VE user list

Hai Marco!

The voodoo of ZFS and how it talks to the disks is a vast and confusing subject… there are lots of moving pieces, and it's a huge rabbit hole.

The broadest applicable advice is: (fakeraid / passthru RAID0 / hardware RAID) + ZFS is a recipe for pain and poor performance.

ZFS expects to talk directly to the disk and to have an accurate understanding of the disk geometry… abstraction is problematic here. At best it introduces unnecessary complexity; at worst it exposes you to data loss.
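A concrete consequence of the geometry point (an editor's sketch, not from the original mail): ZFS normally derives its ashift from the sector size the device reports, so a controller shim that presents a 4096 B-sector drive as 512 B can push ZFS into ashift=9 and read-modify-write amplification:

```python
import math

# ZFS stores blocks aligned to 2**ashift bytes, where ashift is normally
# derived from the reported sector size. If the controller reports 512 B
# for a drive whose real physical sectors are 4096 B, ZFS may pick
# ashift=9, and writes smaller than 4 KiB can each turn into a
# read-modify-write cycle inside the drive.
for reported in (512, 4096):
    ashift = int(math.log2(reported))
    print(f"reported sector {reported:>4} B -> ashift={ashift}")
```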

I've not seen a single reputable source report good experiences combining ZFS atop RAID. Even virtualized, the benefit of ZFS gets questionable IMO, but that's tangential.

Granted, I'm open to being proven wrong; but if possible I'd swap the controller out for one that isn't playing games with the storage presented to ZFS, and see if performance improves.

Physical sector size, logical sector size, queue depth and the speed of the spinning rust are also important.
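Those can be checked from the OS side; here is a small sketch (an editor's illustration, assuming a Linux host — it reads the standard sysfs block-queue attributes) that reports what the kernel believes about each SCSI disk:

```python
from pathlib import Path

# What does the kernel report for each SCSI disk's queue? A 512e/4Kn HDD
# hidden behind the controller's per-disk RAID0 shim may be presented with
# the wrong physical sector size, which in turn misleads ZFS.
ATTRS = ("physical_block_size", "logical_block_size",
         "rotational", "nr_requests")

def disk_geometry(sysfs="/sys/block"):
    info = {}
    for queue in Path(sysfs).glob("sd*/queue"):
        info[queue.parent.name] = {
            a: (queue / a).read_text().strip()
            for a in ATTRS if (queue / a).exists()
        }
    return info

if __name__ == "__main__":
    geom = disk_geometry()
    if not geom:
        print("no sd* disks visible (e.g. inside a container)")
    for dev, attrs in sorted(geom.items()):
        print(dev, attrs)
```

Comparing this output between the H330 (JBOD) and H750 ('Non-RAID') boxes would show whether the shim changes what the drive advertises.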

just my $.03, tho.

[= The contents of this message have been written, read, processed, erased, sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost, found, and most importantly delivered entirely with recycled electrons =]

> On Apr 19, 2023, at 12:40, Marco Gaiarin <gaio@lilliput.linux.it> wrote:
> 
> 
> Situation: some little PVE clusters with ZFS, still on PVE6.
> 
> We have a set of PowerEdge T340 with PERC H330 Adapter in JBOD mode, that
> perform decently with HDD disks and ZFS.
> 
> We have also a set of DELL PowerEdge T440 with PERC H750, that does NOT have
> a JBOD mode, but a 'Non-RAID' auto-RAID0 mode, and perform 'indecently' on
> HDD disks, for example:
> 
> [... fio output, signature and list footer trimmed; see the original message above ...]


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PVE-User] DELL PowerEdge T440/PERC H750 and embarrassing HDD performance...
  2023-04-19 23:14 ` Wolf Noble
@ 2023-04-20 10:34   ` Marco Gaiarin
  0 siblings, 0 replies; 4+ messages in thread
From: Marco Gaiarin @ 2023-04-20 10:34 UTC (permalink / raw)
  To: Wolf Noble; +Cc: pve-user

Hello, Wolf Noble!
  On that day you wrote...

> The voodoo of ZFS and how it talks to the disks is a vast and confusing subject… there are lots of moving pieces, and it's a huge rabbit hole.

Right. But this seems a 'good passthru RAID0', more like a JBOD mode than a
RAID0... anyway, SSD disks connected to the same controller behave
correctly, so it is still strange...

-- 
  If you can't find anybody, it means we've escaped to the six shells (bash,
  tcsh, csh...)						(Possi)





^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PVE-User] DELL PowerEdge T440/PERC H750 and embarrassing HDD performance...
       [not found] ` <mailman.86.1681974711.319.pve-user@lists.proxmox.com>
@ 2023-04-26 10:20   ` Marco Gaiarin
  0 siblings, 0 replies; 4+ messages in thread
From: Marco Gaiarin @ 2023-04-26 10:20 UTC (permalink / raw)
  To: Eneko Lacunza via pve-user; +Cc: pve-user

Hello, Eneko Lacunza via pve-user!
  On that day you wrote...

> What disk model?

	HGST HUS726T4TAL


> sdc has only one backing HDD right?

Yes; it is called 'Non-RAID' in newer PERC lingo. It seems to be a RAID0
configuration, but with some more 'insight'...


> 84 IOPS is not much, but I don't think you can get much more from an HDD 
> with random RW...
> PERC controller is quite new, maybe driver in PVE 7 is more optimized...

Uh, oh... I never thought about that.

Is there some way to at least test the PVE 7 kernel on PVE 6?

-- 
  whoever converted in 'ninety
  was exempted from it in 'ninety-one			(F. De Andre`)





^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2023-04-26 12:40 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-19 17:07 [PVE-User] DELL PowerEdge T440/PERC H750 and embarrassing HDD performance Marco Gaiarin
2023-04-19 23:14 ` Wolf Noble
2023-04-20 10:34   ` Marco Gaiarin
     [not found] ` <mailman.86.1681974711.319.pve-user@lists.proxmox.com>
2023-04-26 10:20   ` Marco Gaiarin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal