From: Dominik Csapak <d.csapak@proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>,
Gregor Burck <gregor@aeppelbroe.de>
Subject: Re: [PVE-User] proxmox-restore - performance issues
Date: Fri, 1 Oct 2021 09:00:17 +0200
Message-ID: <e12bd5f6-2304-5674-e5cb-394695c9f924@proxmox.com>
In-Reply-To: <20211001085213.EGroupware.sb0JmHulYuMBOtEh9bqxti9@heim.aeppelbroe.de>
On 10/1/21 08:52, Gregor Burck wrote:
> Hi,
hi,
>
> thank you for the reply. I ran a lot of different tests and setups, but
> this is the setup I want to use:
>
> Original setup:
>
> HP DL380 Gen9 with
>
> E5-2640 v3 @ 2.60GHz
> 256 GB RAM
>
> 2x SSDs for host OS
>
> For a ZFS RAID 10:
>
> 2x 1 TB Samsung NVMe PM983 for the special devices
> 12x 8 TB HP SAS HDDs
I guess that's the server?
What about the restore client? Encryption, SHA checksums, etc. are done by the
client.
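A quick way to check whether the client CPU is the limit: watch pbs-restore
while a single restore is running (just a rough check; pgrep/top are assumed
to be available on the node):

  top -p $(pgrep -o -f pbs-restore)

If the process hovers around one full core (~100%) for the whole restore, the
client is CPU-bound rather than waiting on the storage.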
>
> root@ph-pbs:~# zpool status
>   pool: ZFSPOOL
>  state: ONLINE
> config:
>
>         NAME         STATE     READ WRITE CKSUM
>         ZFSPOOL      ONLINE       0     0     0
>           mirror-0   ONLINE       0     0     0
>             sdc      ONLINE       0     0     0
>             sdd      ONLINE       0     0     0
>           mirror-1   ONLINE       0     0     0
>             sde      ONLINE       0     0     0
>             sdf      ONLINE       0     0     0
>           mirror-2   ONLINE       0     0     0
>             sdg      ONLINE       0     0     0
>             sdh      ONLINE       0     0     0
>           mirror-3   ONLINE       0     0     0
>             sdi      ONLINE       0     0     0
>             sdj      ONLINE       0     0     0
>           mirror-4   ONLINE       0     0     0
>             sdk      ONLINE       0     0     0
>             sdl      ONLINE       0     0     0
>           mirror-5   ONLINE       0     0     0
>             sdm      ONLINE       0     0     0
>             sdn      ONLINE       0     0     0
>         special
>           mirror-6   ONLINE       0     0     0
>             nvme0n1  ONLINE       0     0     0
>             nvme1n1  ONLINE       0     0     0
>
> errors: No known data errors
>
>   pool: rpool
>  state: ONLINE
>   scan: scrub repaired 0B in 00:02:40 with 0 errors on Sun Aug 8 00:26:43 2021
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         rpool       ONLINE       0     0     0
>           mirror-0  ONLINE       0     0     0
>             sda3    ONLINE       0     0     0
>             sdb3    ONLINE       0     0     0
>
> errors: No known data errors
>
> The VMSTORE and the BACKUPSTORE are on the ZFS pool as datasets:
>
> root@ph-pbs:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> ZFSPOOL                 10.1T  32.1T    96K  /ZFSPOOL
> ZFSPOOL/BACKUPSTORE001  5.63T  32.1T  5.63T  /ZFSPOOL/BACKUPSTORE001
> ZFSPOOL/VMSTORE001      4.52T  32.1T  4.52T  /ZFSPOOL/VMSTORE001
> rpool                   27.3G  80.2G    96K  /rpool
> rpool/ROOT              27.3G  80.2G    96K  /rpool/ROOT
> rpool/ROOT/pbs-1        27.3G  80.2G  27.3G  /
>
> The VM I tested with is our Exchange server: raw image size 500 GB,
> ~400 GB of actual content.
>
> First Test with one restore job:
>
> new volume ID is 'VMSTORE:vm-101-disk-0'
> restore proxmox backup image: /usr/bin/pbs-restore --repository
> root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP
> vm/121/2021-07-23T19:00:03Z drive-virtio0.img.fidx
> /dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0 --verbose --format raw --skip-zero
> connecting to repository 'root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP'
> open block backend for target '/dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0'
> starting to restore snapshot 'vm/121/2021-07-23T19:00:03Z'
> download and verify backup index
> progress 1% (read 5368709120 bytes, zeroes = 2% (125829120 bytes), duration 86 sec)
> progress 2% (read 10737418240 bytes, zeroes = 1% (159383552 bytes), duration 181 sec)
> progress 3% (read 16106127360 bytes, zeroes = 0% (159383552 bytes), duration 270 sec)
> .
> .
> progress 98% (read 526133493760 bytes, zeroes = 0% (3628072960 bytes), duration 9492 sec)
> progress 99% (read 531502202880 bytes, zeroes = 0% (3628072960 bytes), duration 9583 sec)
> progress 100% (read 536870912000 bytes, zeroes = 0% (3628072960 bytes), duration 9676 sec)
> restore image complete (bytes=536870912000, duration=9676.97s, speed=52.91MB/s)
> rescan volumes...
> TASK OK
>
> When I look at iotop I see about the same rate.
>
> But when I start multiple restore jobs in parallel, I see that each single
> job is still at 40-50 MB/s of I/O, while the total I/O is a multiple of that
> rate; in iotop I see rates up to 200-250 MB/s.
> So I guess it isn't the store. In some tests with a setup where I used the
> NVMes as both source and target I could reach a single restore rate of about
> 70 MB/s.
Some disks/storages do not scale with single-threaded workloads
(and AFAIR pbs-restore has to restore a disk single-threaded because of
qemu limitations?), but scale just fine with multiple threads.
A 'fio' benchmark of both the source and the target storage would be good
to get a baseline for the storage performance;
see for example: https://pve.proxmox.com/wiki/Benchmarking_Storage
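For example something along these lines (an untested sketch, not taken from the
wiki page; adjust paths and sizes to your setup, and note that writes into a
dataset go through the ARC, so short runs look optimistic):

  # sequential writes with 1 vs. 4 parallel jobs into a scratch directory
  mkdir -p /ZFSPOOL/VMSTORE001/fio-test
  fio --name=seqwrite --directory=/ZFSPOOL/VMSTORE001/fio-test \
      --rw=write --bs=4M --size=10G --numjobs=1 --end_fsync=1 --group_reporting
  fio --name=seqwrite --directory=/ZFSPOOL/VMSTORE001/fio-test \
      --rw=write --bs=4M --size=10G --numjobs=4 --end_fsync=1 --group_reporting

If the 4-job run is several times faster than the 1-job run, that would match
what you see with parallel restores.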
>
> Now I am testing another CPU in this machine, because on other test machines
> with a different CPU (AMD Ryzen or others) I get a higher rate.
> Unfortunately the rate on the current machine doesn't rise with the other
> CPU.
Can you run a 'proxmox-backup-client benchmark' on all machines and report
their respective restore speeds (especially on the clients; also specify a
repository to see the TLS speed)?
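I.e. something like this on each machine (the repository string is just the one
from your restore log, adjust if needed):

  proxmox-backup-client benchmark --repository root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP

That prints the client-side hashing/compression/encryption speeds and, with a
repository given, the TLS speed to the server, so the machines can be compared
directly.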
kind regards