* [PVE-User] bad ZFS performance with proxmox 6.2
From: Maxime AUGER @ 2020-10-06 11:12 UTC
  To: 'pve-user@lists.proxmox.com'

Hello,

We have noticed a significant difference in ZFS performance between Proxmox 5.3-5 and Proxmox 6.2.
Last year we tested Proxmox 5.3 on this HPE DL360 Gen9 hardware.
Each server has 8 SSDs (2x 200 GB dedicated to the OS on mdadm RAID0, and 6x 1 TB in a ZFS pool for VM storage).
On the ZFS pool we measured a peak write rate of 2.8 GB/s.
Now, on Proxmox 6.2, we measure a peak write rate of 1.5 GB/s.

One server, ITXPVE03, was initially running Proxmox 5.3-5:
peak performance 2.8 GB/s.
It was recently reinstalled with Proxmox 6.2 (from the ISO):
peak performance 1.5 GB/s.

To confirm this observation we extended the checks to all 4 servers (identical hardware and low-level software, BIOS and firmware versions).
The measurements confirm it.
All tests were done with the servers idle, zero active workload, and all VMs/containers shut down.
The ZFS configurations are identical: no compression, no deduplication.
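
These settings can be double-checked per pool with something like the following (assuming the pool is simply named zfsraid10, matching the mount point used below):

zfs get compression,dedup,recordsize,sync zfsraid10
zpool status zfsraid10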

root@CLIPVE03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.851028 s, 1.2 GB/s

root@ITXPV03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.722055 s, 1.5 GB/s

root@CLIB05PVE02(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.397212 s, 2.6 GB/s

root@CLIB05PVE01(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.39394 s, 2.7 GB/s


At the ZFS level we notice a difference in the zfsutils-linux version:
0.7.x on PVE 5.3-5 (0.7.12)
0.8.x on PVE 6.2 (same measurements on 0.8.3 and 0.8.4)
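
The exact versions in use can be confirmed with, for example:

dpkg -l | grep zfsutils-linux
cat /sys/module/zfs/version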

Has anyone experienced this problem?

Maxime AUGER
Network Team Leader
AURANEXT



* Re: [PVE-User] bad ZFS performance with proxmox 6.2
From: Roland @ 2020-10-06 14:54 UTC
  To: Proxmox VE user list, Maxime AUGER

hi,

i remember having seen performance degradation reports on the zfsonlinux
issue tracker, for example:

https://github.com/openzfs/zfs/issues/8836


before blaming zfs for this, you should first make sure it is not related to
driver issues or other kernel/disk-related stuff, so i would recommend
comparing raw disk write performance first, to see whether that alone already
makes a difference.
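
a rough sketch of such a raw-device comparison (device names are placeholders; the write test destroys data, so only run it against a disk that holds nothing you need):

# non-destructive: raw sequential read, bypassing the page cache
dd if=/dev/sdX of=/dev/null bs=1M count=20000 iflag=direct
# DESTRUCTIVE: raw sequential write, only on a spare/empty disk
dd if=/dev/zero of=/dev/sdY bs=1M count=20000 oflag=direct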

furthermore, i would recommend using more data when doing write
performance comparisons at such high throughput rates
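
with only 1 GB written, the run finishes in under a second and largely measures in-memory buffering rather than the disks; a longer run that forces a final flush, for example (reusing the test path from your mail), is more telling:

dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=20000 conv=fdatasync
# or, if fio is available, a longer sequential write job:
fio --name=seqwrite --filename=/zfsraid10/fiotest --rw=write --bs=1M --size=20G --end_fsync=1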

regards
roland

On 06.10.20 at 13:12, Maxime AUGER wrote:
> Hello,
>
> We have noticed a significant difference in ZFS performance between Proxmox 5.3-5 and Proxmox 6.2.
> Last year we tested Proxmox 5.3 on this HPE DL360 Gen9 hardware.
> Each server has 8 SSDs (2x 200 GB dedicated to the OS on mdadm RAID0, and 6x 1 TB in a ZFS pool for VM storage).
> On the ZFS pool we measured a peak write rate of 2.8 GB/s.
> Now, on Proxmox 6.2, we measure a peak write rate of 1.5 GB/s.
>
> One server, ITXPVE03, was initially running Proxmox 5.3-5:
> peak performance 2.8 GB/s.
> It was recently reinstalled with Proxmox 6.2 (from the ISO):
> peak performance 1.5 GB/s.
>
> To confirm this observation we extended the checks to all 4 servers (identical hardware and low-level software, BIOS and firmware versions).
> The measurements confirm it.
> All tests were done with the servers idle, zero active workload, and all VMs/containers shut down.
> The ZFS configurations are identical: no compression, no deduplication.
>
> root@CLIPVE03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.851028 s, 1.2 GB/s
>
> root@ITXPV03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.722055 s, 1.5 GB/s
>
> root@CLIB05PVE02(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.397212 s, 2.6 GB/s
>
> root@CLIB05PVE01(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.39394 s, 2.7 GB/s
>
>
> At the ZFS level we notice a difference in the zfsutils-linux version:
> 0.7.x on PVE 5.3-5 (0.7.12)
> 0.8.x on PVE 6.2 (same measurements on 0.8.3 and 0.8.4)
>
> Has anyone experienced this problem?
>
> Maxime AUGER
> Network Team Leader
> AURANEXT
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>




* Re: [PVE-User] bad ZFS performance with proxmox 6.2
From: Uwe Sauter @ 2020-10-06 15:59 UTC
  To: pve-user

What kind of checksum/redundancy are you using?

Some time ago the Linux kernel removed the symbols for various SIMD (AVX, etc.) functions for non-GPL modules, which led 
to performance degradation. While the ZFS team found ways to work around this, it might be that performance hasn't 
fully caught up to where it was.
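
On 0.8.x you can see which checksum implementation ZFS benchmarked and selected; roughly (paths as shipped by OpenZFS, adjust if your build differs):

cat /proc/spl/kstat/zfs/fletcher_4_bench
cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl

If the selected implementation is "scalar" rather than an AVX variant, that would point in this direction.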


Regards,

	Uwe

On 06.10.20 at 16:54, Roland wrote:
> hi,
> 
> i remember having seen performance degradation reports on the zfsonlinux
> issue tracker, for example:
> 
> https://github.com/openzfs/zfs/issues/8836
> 
> 
> before blaming zfs for this, you should first make sure it is not related to
> driver issues or other kernel/disk-related stuff, so i would recommend
> comparing raw disk write performance first, to see whether that alone already
> makes a difference.
> 
> furthermore, i would recommend using more data when doing write
> performance comparisons at such high throughput rates
> 
> regards
> roland
> 
On 06.10.20 at 13:12, Maxime AUGER wrote:
>> Hello,
>>
>> We have noticed a significant difference in ZFS performance between Proxmox 5.3-5 and Proxmox 6.2.
>> Last year we tested Proxmox 5.3 on this HPE DL360 Gen9 hardware.
>> Each server has 8 SSDs (2x 200 GB dedicated to the OS on mdadm RAID0, and 6x 1 TB in a ZFS pool for VM storage).
>> On the ZFS pool we measured a peak write rate of 2.8 GB/s.
>> Now, on Proxmox 6.2, we measure a peak write rate of 1.5 GB/s.
>>
>> One server, ITXPVE03, was initially running Proxmox 5.3-5:
>> peak performance 2.8 GB/s.
>> It was recently reinstalled with Proxmox 6.2 (from the ISO):
>> peak performance 1.5 GB/s.
>>
>> To confirm this observation we extended the checks to all 4 servers (identical hardware and low-level software, 
>> BIOS and firmware versions).
>> The measurements confirm it.
>> All tests were done with the servers idle, zero active workload, and all VMs/containers shut down.
>> The ZFS configurations are identical: no compression, no deduplication.
>>
>> root@CLIPVE03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.851028 s, 1.2 GB/s
>>
>> root@ITXPV03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.722055 s, 1.5 GB/s
>>
>> root@CLIB05PVE02(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.397212 s, 2.6 GB/s
>>
>> root@CLIB05PVE01(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.39394 s, 2.7 GB/s
>>
>>
>> At the ZFS level we notice a difference in the zfsutils-linux version:
>> 0.7.x on PVE 5.3-5 (0.7.12)
>> 0.8.x on PVE 6.2 (same measurements on 0.8.3 and 0.8.4)
>>
>> Has anyone experienced this problem?
>>
>> Maxime AUGER
>> Network Team Leader
>> AURANEXT
>> _______________________________________________
>> pve-user mailing list
>> pve-user@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
> 
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 


