* [PVE-User] proxmox-restore - performance issues
From: Gregor Burck @ 2021-09-17  9:29 UTC
  To: pve-user

Hi,

I've set up PVE and PBS on the same machine:

HP DL380 Gen9

E5-2640 v3 @ 2.60GHz (2 x 8 core)
256 GB RAM
2x 1TB SAMSUNG NVME PM983
12x 8 TB HP SAS HDDs

From the HDDs and NVMes I created a ZFS RAID10.

I only get restore rates of about 50 MB/s for a single restore job.
If I start multiple jobs in parallel, each job stays at this rate,
but iotop shows that the aggregate rate is correspondingly higher
(up to around 200 MB/s).

Looking at CPU utilisation in htop, it seems that a single job runs
on only one core, even when there are multiple tasks.

So I'm searching for the bottleneck; it really doesn't seem to be the HDDs.
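
For reference, per-vdev throughput can be watched while a restore runs,
to see whether the disks saturate (standard zpool command; ZFSPOOL is
the name of the pool here):

  zpool iostat -v ZFSPOOL 5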

Any ideas so far?

Thanks for everything,

Bye

Gregor



* Re: [PVE-User] proxmox-restore - performance issues
From: Gregor Burck @ 2021-10-01  6:52 UTC
  To: Proxmox VE user list

Hi,

thank you for the reply. I ran a lot of different tests and setups, but
this is the setup I want to use:

Original setup:

HP DL380 Gen9 with

E5-2640 v3 @ 2.60GHz
256 GB RAM

2x SSDs for host OS

For a ZFS RAID10:

2x 1TB SAMSUNG NVME PM983 as special devices
12x 8 TB HP SAS HDDs
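
A pool with this layout could have been created roughly like this (a
sketch reconstructed from the zpool status below, not necessarily the
exact command that was used):

  zpool create ZFSPOOL \
    mirror sdc sdd mirror sde sdf mirror sdg sdh \
    mirror sdi sdj mirror sdk sdl mirror sdm sdn \
    special mirror nvme0n1 nvme1n1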

root@ph-pbs:~# zpool status
    pool: ZFSPOOL
   state: ONLINE
config:

          NAME         STATE     READ WRITE CKSUM
          ZFSPOOL      ONLINE       0     0     0
            mirror-0   ONLINE       0     0     0
              sdc      ONLINE       0     0     0
              sdd      ONLINE       0     0     0
            mirror-1   ONLINE       0     0     0
              sde      ONLINE       0     0     0
              sdf      ONLINE       0     0     0
            mirror-2   ONLINE       0     0     0
              sdg      ONLINE       0     0     0
              sdh      ONLINE       0     0     0
            mirror-3   ONLINE       0     0     0
              sdi      ONLINE       0     0     0
              sdj      ONLINE       0     0     0
            mirror-4   ONLINE       0     0     0
              sdk      ONLINE       0     0     0
              sdl      ONLINE       0     0     0
            mirror-5   ONLINE       0     0     0
              sdm      ONLINE       0     0     0
              sdn      ONLINE       0     0     0
          special
            mirror-6   ONLINE       0     0     0
              nvme0n1  ONLINE       0     0     0
              nvme1n1  ONLINE       0     0     0

errors: No known data errors

    pool: rpool
   state: ONLINE
    scan: scrub repaired 0B in 00:02:40 with 0 errors on Sun Aug  8 00:26:43 2021
config:

          NAME        STATE     READ WRITE CKSUM
          rpool       ONLINE       0     0     0
            mirror-0  ONLINE       0     0     0
              sda3    ONLINE       0     0     0
              sdb3    ONLINE       0     0     0

errors: No known data errors
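
Whether the special mirror actually takes allocations can be checked
via the per-vdev allocation listing (standard command):

  zpool list -v ZFSPOOL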

VMSTORE and BACKUPSTORE are datasets on the ZFS pool:

root@ph-pbs:~# zfs list
NAME                     USED  AVAIL     REFER  MOUNTPOINT
ZFSPOOL                 10.1T  32.1T       96K  /ZFSPOOL
ZFSPOOL/BACKUPSTORE001  5.63T  32.1T     5.63T  /ZFSPOOL/BACKUPSTORE001
ZFSPOOL/VMSTORE001      4.52T  32.1T     4.52T  /ZFSPOOL/VMSTORE001
rpool                   27.3G  80.2G       96K  /rpool
rpool/ROOT              27.3G  80.2G       96K  /rpool/ROOT
rpool/ROOT/pbs-1        27.3G  80.2G     27.3G  /
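
Since PBS stores backup data as chunk files of up to 4 MiB, the
datastore dataset's recordsize and its special_small_blocks threshold
(which decides which small blocks, in addition to metadata, go to the
NVMe special vdev) are worth checking; these are standard ZFS
properties, the 4K value below is only an illustration:

  zfs get recordsize,compression,special_small_blocks ZFSPOOL/BACKUPSTORE001
  zfs set special_small_blocks=4K ZFSPOOL/BACKUPSTORE001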

The VM I tested with is our Exchange server: raw image size 500 GB,
~400 GB net content.

First test with one restore job:

new volume ID is 'VMSTORE:vm-101-disk-0'
restore proxmox backup image: /usr/bin/pbs-restore --repository root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP vm/121/2021-07-23T19:00:03Z drive-virtio0.img.fidx /dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0 --verbose --format raw --skip-zero
connecting to repository 'root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP'
open block backend for target '/dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0'
starting to restore snapshot 'vm/121/2021-07-23T19:00:03Z'
download and verify backup index
progress 1% (read 5368709120 bytes, zeroes = 2% (125829120 bytes), duration 86 sec)
progress 2% (read 10737418240 bytes, zeroes = 1% (159383552 bytes), duration 181 sec)
progress 3% (read 16106127360 bytes, zeroes = 0% (159383552 bytes), duration 270 sec)
.
.
progress 98% (read 526133493760 bytes, zeroes = 0% (3628072960 bytes), duration 9492 sec)
progress 99% (read 531502202880 bytes, zeroes = 0% (3628072960 bytes), duration 9583 sec)
progress 100% (read 536870912000 bytes, zeroes = 0% (3628072960 bytes), duration 9676 sec)
restore image complete (bytes=536870912000, duration=9676.97s, speed=52.91MB/s)
rescan volumes...
TASK OK

Watching iotop I see about the same rate.

But when I start multiple restore jobs in parallel, each single job is
still at 40-50 MB/s of IO while the total IO is a multiple of that
rate; iotop shows up to 200-250 MB/s. So I guess the bottleneck isn't
the store. In a test setup where I used the NVMes as both source and
target I could reach a single restore rate of about 70 MB/s.
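
To correlate per-job IO with per-job CPU load in such a parallel test,
per-process statistics can be sampled, e.g. with pidstat from the
sysstat package (filtering on the pbs-restore command name; an
illustration, not taken from my logs):

  pidstat -d -u -C pbs-restore 5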

I'm now testing another CPU in this machine, because on other test
machines with different CPUs (AMD Ryzen and others) I get a higher
rate. Unfortunately the rate on this machine doesn't improve with the
other CPU.

Now I'm unsure whether there is any way to improve the restore rate at all.

Bye

Gregor





* Re: [PVE-User] proxmox-restore - performance issues
From: Gregor Burck @ 2021-10-01  7:18 UTC
  To: Proxmox VE user list

Hi,

it is PVE and PBS on the same machine; I wrote that in my first mail ;-)

Some additional information:

I just ran a new test after a reboot, to rule out that too much of the
data was still in RAM in the ZFS cache (ARC).

With 6 jobs the total IO I see via iotop alternates between 120 MB/s
and 240 MB/s, with peaks of 300 MB/s.
When I start htop I see work on all cores, but no core is at 100%.
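
To verify how much of these reads still come from the ARC rather than
the disks after the reboot, the global hit/miss counters can be read
from the kernel stats (standard OpenZFS kstats):

  awk '$1 == "hits" || $1 == "misses" {print $1, $3}' /proc/spl/kstat/zfs/arcstats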

and the benchmark:


root@ph-pbs:~# proxmox-backup-client benchmark --repository root@pam@localhost:BACKUPSTORE001
Password for "root@pam": ************
Uploaded 373 chunks in 5 seconds.
Time per request: 13456 microseconds.
TLS speed: 311.70 MB/s
SHA256 speed: 418.54 MB/s
Compression speed: 572.10 MB/s
Decompress speed: 818.43 MB/s
AES256/GCM speed: 1838.87 MB/s
Verify speed: 298.19 MB/s
┌───────────────────────────────────┬────────────────────┐
│ Name                              │ Value              │
╞═══════════════════════════════════╪════════════════════╡
│ TLS (maximal backup upload speed) │ 311.70 MB/s (25%)  │
├───────────────────────────────────┼────────────────────┤
│ SHA256 checksum computation speed │ 418.54 MB/s (21%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 compression speed    │ 572.10 MB/s (76%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 decompression speed  │ 818.43 MB/s (68%)  │
├───────────────────────────────────┼────────────────────┤
│ Chunk verification speed          │ 298.19 MB/s (39%)  │
├───────────────────────────────────┼────────────────────┤
│ AES256 GCM encryption speed       │ 1838.87 MB/s (50%) │
└───────────────────────────────────┴────────────────────┘
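
As a comparison, the pure local crypto and compression numbers (without
the TLS upload test) can be obtained by running the benchmark without a
repository, per the proxmox-backup-client documentation:

  proxmox-backup-client benchmark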





