From: Maxime AUGER <m.auger@auranext.com>
To: "'pve-user@lists.proxmox.com'" <pve-user@lists.proxmox.com>
Subject: [PVE-User] bad ZFS performance with proxmox 6.2
Date: Tue, 6 Oct 2020 11:12:13 +0000
Message-ID: <20201006111213.9D6441764081@mailhost.auranext.com>
Hello,
We have noticed a significant difference in ZFS performance between Proxmox 5.3-5 and Proxmox 6.2.
Last year we tested Proxmox 5.3 on HPE DL360 Gen9 hardware.
This hardware comes with 8 SSDs (2x 200 GB in a dedicated mdadm RAID0 for the OS and 6x 1 TB in a ZFS pool for VM storage).
On the ZFS pool we measured a peak write throughput of 2.8 GB/s.
Currently, on Proxmox 6, we measure a peak write throughput of 1.5 GB/s.
One server, ITXPVE03, initially ran Proxmox 5.3-5:
peak performance 2.8 GB/s.
It was recently reinstalled with Proxmox 6.2 (from the ISO):
peak performance 1.5 GB/s.
To confirm this observation we extended the checks to all 4 servers (identical hardware and low-level software, BIOS and firmware versions).
The measurements confirm the observation.
All tests were done with the servers idle, zero active workload, and all VMs/containers shut down.
The ZFS configurations are identical: no compression, no deduplication.
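For reference, the pool layout and dataset properties can be verified on each node with the commands below (the pool name zfsraid10 is taken from the mountpoints used in the tests; output omitted here):
zpool status zfsraid10
zfs get compression,dedup,atime,recordsize,sync zfsraid10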
root@CLIPVE03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.851028 s, 1.2 GB/s
root@ITXPV03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.722055 s, 1.5 GB/s
root@CLIB05PVE02(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.397212 s, 2.6 GB/s
root@CLIB05PVE01(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.39394 s, 2.7 GB/s
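Note that dd without a sync flag mostly measures how fast ZFS absorbs the writes into memory; a run such as
dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000 conv=fsync
forces the data to disk before dd reports and might give numbers that are easier to compare across versions. The figures above were taken with the plain command shown.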
At the ZFS level we can see a difference in the zfsutils-linux version:
0.7.x on PVE 5.3-5 (0.7.12)
0.8.x on PVE 6.2 (same measurement on 0.8.3 and 0.8.4)
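For reference, both the userland package version and the loaded kernel module version can be checked with:
dpkg -l zfsutils-linux
cat /sys/module/zfs/version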
Has anyone experienced this problem?
Maxime AUGER
Network Team Leader
AURANEXT